We leverage open-source Python tools to extract historical land cover information (1890–1950) from the United States Geological Survey (USGS) Historical Topographic Map Collection (HTMC). Using Python packages for image processing, machine learning, and geospatial analysis, we extract historical road networks, urban areas, and forest extents to enhance our knowledge of historical landscape evolution in the United States.
In March 2020 the world went into lockdown, and people were queuing at supermarkets and pharmacies to buy basic necessities. Many initiatives emerged, and among them I created a worldwide map that allows anyone to check the estimated waiting times at supermarkets, pharmacies, and other points of interest. This is a recap talk on how I accomplished the project, in the hope of inspiring others.
Visual localization is a key technology for applications such as augmented, mixed and virtual reality, as well as robotics and autonomous driving. It addresses the problem of estimating the 6-degree-of-freedom (DoF) camera pose from which a given image or sequence of images was captured relative to a reference scene representation, often in the form of images with known poses. Although much research has been done in this area in recent years, large variations in appearance caused by season, weather, illumination, and man-made changes, as well as large-scale environments, are still challenging for visual localization solutions. To overcome the limitations caused by appearance changes, traditional hand-crafted local image feature descriptors such as SIFT (Lowe, 2004) or SURF (Bay et al., 2008) have been replaced by learned feature descriptors such as SuperPoint (DeTone et al., 2018), R2D2 (Revaud et al., 2019), ASLFeat (Luo et al., 2020), DISK (Tyszkiewicz et al., 2020) or ALIKE (Zhao et al., 2022). Hierarchical approaches combining image retrieval and structure-based localization (Sarlin et al., 2019) have been developed to deal with large environments, both to keep the required computational resources low and to ensure the uniqueness of the local features.
Using Python to manage RockEval® data, interpret it, and finally build a kerogen kinetics model.
In this talk, I will walk you through Python-backed geo-planetary data projects developed in recent years in the realm of the European Open Science Cloud, where the ultimate goal is to bring analysis-ready data to the general public.
In this talk, we present DL4DS, a Python package that implements a wide variety of state-of-the-art and novel algorithms for downscaling gridded Earth science data with deep neural networks. DL4DS has been designed to provide a general framework for convolutional neural networks with configurable architectures and training procedures, enabling benchmarking, comparative, and ablation studies.
You often work in a notebook using GeoPandas and other libraries. But it is always nice to be able to display your map to customers on a website. We will learn how to do so without additional costs.
EOReader is an open-source remote-sensing Python library that reads optical and SAR sensor data, loading and stacking bands, cloud masks, DEMs, and indices in a sensor-agnostic way.
EOmaps is a Python library to simplify the creation of static and interactive maps. It provides an easy-to-use framework to visualize, analyse and compare (potentially large) geographical datasets.
hvPlot adapts and extends the .plot() API made popular by Pandas and Xarray to easily create interactive and static maps.
How can the information content of large and complex remote sensing data sets be easily grasped and evaluated? And how can the potential of such data sets be identified with respect to concrete objectives? Methods from the field of manifold learning, for which implementations are available as ready-to-use Python packages, are a good remedy. This talk focuses on the application of the dimension reduction algorithm Uniform Manifold Approximation and Projection (UMAP) for the visualization of high-dimensional remote sensing data.
This paper prepared for the Inter-American Development Bank analyzes the travel patterns of different socioeconomic groups with data from the public transport electronic payment system in the Metropolitan Buenos Aires Region.
Data storytelling has never been more popular. Immanuel Kant stated the following in 1802: "The history of occurrences at different times, which is true history, is nothing other than a consecutive geography, and thus it is a great limitation on history if one does not know where something happened, or what it was like."
To truly bring history to light we need to bring the right data into the conversation, use the right tools, and be able to hold a tension between what we would like the solutions to be and what limits the actual realization of change. The story I would like to tell, engaging spatial and non-spatial data, centers on the role disinformation and politics have played in the profound deforestation of the Amazon since 2018. What can we measure? What should we be measuring?
ITS_LIVE is a NASA MEaSUREs project that produces low latency, global glacier flow and elevation change datasets. The size and complexity of this data makes its distribution and use a challenge. To address these problems, ITS_LIVE was built for modern cloud-optimized data formats and includes easy-to-use Jupyter notebooks for data access and visualization.
Starting from raw GNSS positions with heavy noise and scattered patterns, machine learning algorithms such as random forests help improve the classification of GNSS positions into "good" and "bad" ones.
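A minimal sketch of that idea using scikit-learn's RandomForestClassifier. The features (reported accuracy, satellite count, jump from the previous fix) and the labeling rule are synthetic illustrations, not the speaker's real data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Hypothetical per-fix features: reported accuracy (m), satellites used,
# and distance jump from the previous position (m).
accuracy = rng.uniform(1, 50, n)
satellites = rng.integers(4, 20, n)
jump = rng.uniform(0, 100, n)
X = np.column_stack([accuracy, satellites, jump])
# Synthetic label: a "good" fix is precise, sees many satellites,
# and does not jump wildly between epochs.
y = (accuracy < 20) & (satellites > 8) & (jump < 60)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
score = clf.score(X_test, y_test)
print(f"held-out accuracy: {score:.2f}")
```

On real GNSS logs the labels would come from a reference trajectory or manual inspection rather than a hand-written rule.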
This talk breaks down the simple point-in-polygon problem, briefly discussing the different algorithms that tackle it and what happens behind the scenes in the tools and libraries that let you run it with a single click of a button or one line of code.
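One of the classic algorithms behind that one line of code is ray casting: cast a horizontal ray from the point and count how many polygon edges it crosses; an odd count means the point is inside. A pure-Python sketch (the function name and coordinate representation are illustrative, not any library's API):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count crossings of a horizontal ray from (x, y)
    with the polygon's edges; an odd count means the point is inside.
    `polygon` is a list of (x, y) vertices in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True: inside
print(point_in_polygon(5, 2, square))  # False: outside
```

Production libraries such as Shapely add robustness handling (points exactly on edges, degenerate polygons) and spatial indexing on top of this basic idea.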
Spatial and temporal data is in high demand by data scientists and crop domain experts who want to quickly develop models that help farmers optimize crop production in a climate-friendly way. An efficient way to create, save, and load the data is necessary. Our solution is to store the data in one large multi-dimensional geospatial and temporal dataset.
While academic research heavily depends on open-source software, the relationship is often one-way. We believe that designing research in close relation to open-source development is beneficial for all parties and present one way of doing that, by turning a research project into a component of the open-source ecosystem.
Promoting community resilience requires population data that captures human dynamics with high spatial, temporal, and demographic fidelity. Likeness is a Python toolkit that supports these aims by creating agents informed by hundreds of individual-level attributes from census microdata and producing realistic simulations of their activity spaces.
In this talk, we will show you how to use geospatial Python libraries within a Jupyter Notebook to map VIIRS active fires in South America. We will present a straightforward workflow to interactively visualize VIIRS active fires using the GeoPandas and Folium libraries. This workflow can easily be customized to map active fires in any country around the world.
COVID-19 testing was inequitably distributed at the start of the pandemic, especially for at-risk populations. An index was created and mapped to help operational and population-health team members decide where to place additional COVID-19 testing centers.
This talk presents MovingPandas, a project that aims to provide general purpose tools for analyzing and visualizing movement data.
In this talk we will implement a location-based service that points drivers to the nearest parking slot by analyzing data obtained from urban cameras. To analyze the camera images, a new convolutional neural network is developed.
A simulation and an online dashboard tool to analyze population changes over time and predict population under speculative development scenarios within the built environment.
Leveraging geospatial Python libraries to understand and predict Land Surface Temperature in urban areas, using openly available historical satellite imagery and urban morphological data.
Add another layer of safety to your codebase with static typing.
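As an illustrative sketch (the function and types here are hypothetical, not from the talk), type hints let a checker such as mypy catch mismatches before the code ever runs:

```python
from typing import Optional

def parse_port(value: str) -> Optional[int]:
    """Return the port number, or None if the string is not numeric."""
    return int(value) if value.isdigit() else None

port = parse_port("8080")
# mypy would flag the next line if uncommented: `port` may be None
# doubled = port * 2
if port is not None:  # narrowing: inside this block, port is int
    doubled = port * 2
    print(doubled)  # 16160
```

The hints add no runtime cost; the checker simply refuses the `Optional[int] * int` arithmetic until the None case is handled.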
QGreenland is a free and open-source Greenland-focused QGIS environment for data analysis and visualization. Built using Python and open source geospatial tools like GDAL, QGreenland's software offers automated, reproducible builds to ensure consistent outputs with metadata and provenance for all included datasets.
We have developed a Python-based modeling and spatial prediction framework to accurately estimate soil carbon content at large geographic scales. These methods can provide cost-effective carbon accounting for regenerative farming operations.
Based on real work done for an oil & gas company: a modern data pipeline for vehicle tracking and monitoring data, built using Google Cloud serverless tools for ETL, cleaning, storage, and data visualization.
Python web frameworks like FastAPI, Flask, Quart, Tornado, and Twisted are important for writing high-performance web applications and for their contributions to the web ecosystem. Even so, they face bottlenecks, either due to their synchronous nature or due to the overhead of the Python runtime. Most of them cannot speed themselves up because of their dependence on *SGIs. This is where Robyn comes in: Robyn tries to achieve near-native Rust throughput along with the benefit of writing code in Python. In this talk, we will learn more about Robyn, from what Robyn is to how development in Robyn works.
In this talk we present vegetation index time series similarity metrics for crop type classification. The use of such metrics instead of the raw satellite observations not only reduces inter-class confusion, but also helps to reduce the dimensionality and thus, ensure model transferability.
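A hedged sketch of the general idea (the metric and reference curves below are illustrative, not the authors'): instead of feeding raw satellite observations to a model, describe a pixel's vegetation-index time series by its similarity to per-crop reference curves, so a season of observations collapses to one score per crop:

```python
import numpy as np

def correlation_similarity(series, reference):
    """Pearson correlation between two vegetation-index time series."""
    return float(np.corrcoef(series, reference)[0, 1])

t = np.linspace(0, 1, 12)  # one growing season, 12 observations
# Hypothetical per-crop reference NDVI curves (Gaussian-shaped peaks)
references = {
    "winter_wheat": np.exp(-((t - 0.35) ** 2) / 0.02),  # early-season peak
    "maize":        np.exp(-((t - 0.65) ** 2) / 0.02),  # late-season peak
}

pixel = np.exp(-((t - 0.6) ** 2) / 0.03)  # unlabeled pixel, late-peaking
scores = {crop: correlation_similarity(pixel, ref)
          for crop, ref in references.items()}
best = max(scores, key=scores.get)
print(best)  # maize
```

Because the similarity scores are low-dimensional and normalized, a classifier trained on them transfers more readily across regions and years than one trained on raw reflectances.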
GeoPandas is one of the core packages in the Python ecosystem to work with geospatial vector data. This talk will give an overview of recent developments in GeoPandas and the broader ecosystem.
At ITC-University Twente we have been educating geo-professionals for more than 70 years. Nowadays, we try to create problem solvers, not button-pushers, so we teach them GeoComputing using Python. In this talk we explain how.
The onset of the COVID-19 pandemic in early 2020 brought an unexpected "anthropause". Border closures, travel restrictions, and economic slowdown meant a hiatus in commercial shipping, offshore energy exploration, and ocean tourism. It provided a rare research opportunity to investigate the time-series relationship between anthropogenic activities and ambient noise levels in oceans using Python and open data.
With recent advancements in the fields of AI and high-performance computing, more organizations are investing heavily in ML/AI research using advanced processors and enormous amounts of data. Enormous amounts of energy are consumed during the training process, leading to the emission of harmful greenhouse gases such as carbon dioxide.
Python being one of the most widely used programming languages for ML/AI development, this talk focuses on educating the Python community on how to track and reduce the CO2 emissions of Python code using CodeCarbon.
In this talk, I’ll briefly introduce the various modes in which geospatial data comes. I’ll also focus on the most efficient ways to condense large amounts of geospatial data into analyzable chunks, to speed up data processing and analysis.
A Python package to analyze and visualize 3D point cloud time series.
pygeofilter helps integrate geospatial filters into any Python application. Batteries included.
Hands-on: build and deploy a geospatial web application using Greppo, an open-source Python framework, without any frontend, backend, or web-dev experience.
How to use GeoDjango to create a location-based service.
Pangeo Forge is a new open-source platform that aims to make it easy to extract data from traditional data repositories and deposit it in cloud storage in analysis-ready, cloud-optimized (ARCO) formats. This workshop will teach users how to use Pangeo Forge and contribute to the growing, community-driven data library.
This workshop introduces the Dask-GeoPandas library and walks you through its key components, allowing you to take a GeoPandas workflow and run it in parallel, out-of-core and even distributed on a remote cluster.
Do you need high-resolution data for your machine learning, but you only have areal aggregates? Would you like to present continuous maps instead of choropleth maps? We can transform county-level data into smaller blocks with Pyinterpolate. During the workshop, we will learn how to perform Poisson Kriging on an areal dataset.