
Satellite image deep learning

Author: Robin Cole


Description

Dive into the world of deep learning for satellite images with your host, Robin Cole. Robin meets with experts in the field to discuss their research, products, and careers in the space of satellite image deep learning. Stay up to date on the latest trends and advancements in the industry - whether you're an expert in the field or just starting out, this is a podcast for you. Head to https://www.satellite-image-deep-learning.com/ to learn more about this fascinating domain.

www.satellite-image-deep-learning.com
38 Episodes
In this episode I caught up with Roberto del Prete to learn about his work on AutoML for in-orbit model deployment, and how it enables satellites to run highly efficient AI models under severe power and hardware constraints. Roberto explains why traditional computer-vision architectures, optimised for ImageNet or COCO, are a poor fit for narrow, mission-specific tasks like wildfire or vessel detection, and why models must be co-designed with the actual edge devices flying in space. He describes PyNAS, his neural architecture search framework, in which a genetic algorithm drives the optimisation process, evolving compact, hardware-aware neural networks and profiling them directly on representative onboard processors such as the Intel Myriad and NVIDIA Jetson. We discuss the multi-objective challenge of balancing accuracy and latency, the domain gap between training data and new sensor imagery, and how lightweight models make post-launch fine-tuning and updates far more practical. Roberto also outlines the rapidly changing ecosystem of spaceborne AI hardware and why efficient optimisation will remain central to future AI-enabled satellite constellations.
* 🖥️ PyNAS on Github
* 📖 Nature paper
* 📺 Video of this conversation on YouTube
* 👤 Roberto on LinkedIn
Bio: Roberto is an Internal Research Fellow at ESA Φ-lab specialising in deep learning and edge computing for remote sensing. He focuses on improving time-critical decision-making through advanced AI solutions for space missions and Earth monitoring. He holds a Ph.D. from the University of Naples Federico II, where he also earned his Master's and Bachelor's degrees in Aerospace Engineering. His notable work includes the development of "FederNet," a terrain relative navigation system. Del Prete's professional experience includes roles as a Visiting Researcher at the European Space Agency's Φ-Lab and SmartSat CRC in Australia.
He has contributed to key projects such as the Kanyini Mission, and has developed AI algorithms for real-time maritime monitoring and thermal anomaly detection. He co-developed the award-winning P³ANDA project, a compact AI-powered imaging system, earning the 2024 Telespazio Technology Contest prototype prize. Co-author of more than 30 scientific publications, Del Prete is dedicated to leveraging advanced technologies to address global challenges in remote sensing and AI. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.satellite-image-deep-learning.com
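The evolutionary search described above can be sketched in miniature. The toy below is illustrative only, not PyNAS: the genome encoding, the stand-in fitness function, and all constants are invented for the example, whereas PyNAS scores real candidate networks using validation metrics and latency profiled on the target edge device.

```python
import random

random.seed(0)

# Toy search space: each genome encodes a channel width per network stage.
WIDTHS = [8, 16, 32, 64]
N_STAGES = 4

def random_genome():
    return [random.choice(WIDTHS) for _ in range(N_STAGES)]

def fitness(genome):
    # Stand-in objectives: wider stages score higher "accuracy" but cost
    # more "latency". In a real NAS these numbers would come from training
    # runs and on-device profiling (e.g. on a Jetson).
    accuracy = sum(w ** 0.5 for w in genome)
    latency = sum(genome)
    return accuracy - 0.05 * latency  # scalarised multi-objective trade-off

def mutate(genome):
    g = genome[:]
    g[random.randrange(N_STAGES)] = random.choice(WIDTHS)
    return g

def crossover(a, b):
    cut = random.randrange(1, N_STAGES)
    return a[:cut] + b[cut:]

def evolve(pop_size=12, generations=15):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

The scalarised accuracy-minus-latency fitness is one simple way to handle the multi-objective trade-off discussed in the episode; Pareto-based selection is a common alternative.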
In this episode I caught up with Julia Wąsala to learn about methane plume detection using AutoML, and how her research bridges atmospheric science and machine learning. Julia explains the unique challenges of working with TROPOMI data: extremely coarse spatial resolution, single-channel methane measurements, and complex auxiliary fields that sometimes create plume-like artefacts leading to false detections. She walks through how her approach generalises a traditional two-stage detection pipeline to multiple gases using AutoMergeNet, a neural architecture search framework that automatically designs multimodal CNNs tailored to different atmospheric gases. We discuss why methane matters, how model performance shifts dramatically between curated test sets and real-world global data, and the ongoing effort to understand sampling bias and improve operational precision.
* 📖 AutoMergeNet paper
* 🖥️ Code on Github
* 🖥️ Julia's homepage
* 📺 Recording of this conversation on YouTube
Bio: Julia Wąsala is working toward a Ph.D. in automated machine learning for Earth observation at the Leiden Institute of Advanced Computer Science, Leiden University, and the Space Research Organisation Netherlands. Her research focuses on designing new automated machine learning methods for Earth observation and validating them in real-world applications, such as atmospheric plume detection.
In this episode, I caught up with Ibrahim Salihu Yusuf from InstaDeep's AI for Social Good team to hear the story behind InstaGeo, an open-source geospatial machine learning framework built to make multispectral satellite data easy to use for real-world applications. Ibrahim explains how the 2019–2020 locust outbreak exposed a gap between freely available satellite imagery, existing machine learning models, and the lack of tools to turn raw data into model-ready inputs. He walks through how InstaGeo bridges this gap - fetching, processing, and preparing multispectral data; fine-tuning models such as NASA-IBM's Prithvi; and delivering end-to-end inference and visualisation in a unified app. The conversation also covers practical use cases, from locust breeding ground detection to damage assessment, air quality, and biomass estimation, as well as the team's efforts to partner with field organisations to drive on-the-ground impact.
* 👤 Ibrahim on LinkedIn
* 🖥️ InstaGeo on Github
* 📖 Paper on InstaGeo
* 📺 Video of this conversation on YouTube
* 📺 Demo of InstaGeo on YouTube
Bio: Ibrahim is a Senior Research Engineer and Technical Lead of the AI for Social Good team at InstaDeep's Kigali office, where he applies artificial intelligence to address real-world challenges and drive social impact across Africa and beyond. With expertise spanning geospatial machine learning, computer vision, and computational biology, he has led high-impact projects in food security, disaster response, and immunology research. He also leads the development of InstaGeo, a platform designed to democratise access to AI-powered insights from open-source satellite imagery, reflecting his commitment to using cutting-edge AI for meaningful societal benefit.
In this episode, Roberto from ESA's Φ-lab in Frascati introduces PhiDown, a community-driven open-source tool designed to simplify data access from the Copernicus Data Space Ecosystem (CDSE). He explains why PhiDown was created, how it uses the high-speed S5 protocol for efficient downloads, and how it differs from other platforms like Google Earth Engine. The discussion highlights real-world use cases, from automating Sentinel data pipelines to building large-scale datasets for AI models. Head to YouTube on the link below to view the recording of this conversation, along with an extended demo of using PhiDown.
* 🖥️ PhiDown on Github
* 📺 Video with demo on YouTube
* 👤 Roberto on LinkedIn
🚀 Timeline
* 0:38 Motivation: PhiDown was created to simplify access to Copernicus data
* 1:55 Key tech: built on the S5 protocol, derived from S3, roughly 5–10× faster
* 2:44 Comparison: unlike Google Earth Engine, PhiDown gives direct access to raw products such as Level-0 Sentinel imagery
* 5:01 Use cases: automating pipelines (auto-downloading the latest Sentinel products), accessing low-level products for algorithm testing, building large datasets for ML and foundation models, and research applications such as wildfire detection, vessel monitoring, and timeliness studies with Level-0 data
* 6:55 Development context: Roberto notes the rise of LLMs and coding agents; tools can help, but domain expertise is still required
* 8:01 Open source: PhiDown is on GitHub, with documentation and example notebooks; it is a community-driven project, and Roberto encourages contributions, feature requests, and collaboration
Bio: Roberto is an Internal Research Fellow at ESA Φ-lab specialising in deep learning and edge computing for remote sensing. He focuses on improving time-critical decision-making through advanced AI solutions for space missions and Earth monitoring. He holds a Ph.D. from the University of Naples Federico II, where he also earned his Master's and Bachelor's degrees in Aerospace Engineering.
His notable work includes the development of "FederNet," a terrain relative navigation system. Del Prete's professional experience includes roles as a Visiting Researcher at the European Space Agency's Φ-Lab and SmartSat CRC in Australia. He has contributed to key projects like Kanyini Mission, and developed AI algorithms for real-time maritime monitoring and thermal anomaly detection. He co-developed the award-winning P³ANDA project, a compact AI-powered imaging system, earning the 2024 Telespazio Technology Contest prototype prize. Co-author of more than 30 scientific publications, Del Prete is dedicated to leveraging advanced technologies to address global challenges in remote sensing and AI.
In this episode, I caught up with Jonathan Lwowski, Conor Wallace, and Isaac Corley to explore how Zeitview built an AI-powered system to monitor solar farms at continental scale. We dive into the North American Solar Scan, which surveyed every site of 1 MW or more using high-resolution aerial RGB and thermal-infrared imagery, then processed the data through a chained ML pipeline that detects panel-level defects and fire risks. The team discusses the challenges of normalising data across regions, why a modular cascaded model design outperforms monolithic end-to-end approaches, and how human-in-the-loop review ensures high precision. They also share insights from building a generalised ML library on top of timm, Segmentation Models PyTorch, and TorchVision to accelerate model training and deployment, their philosophy of prioritising data quality over chasing SOTA, and how the same framework extends to wind, telecom, real estate, and other renewable assets.
* 🖥️ Zeitview website
* 📺 Video of this conversation on YouTube
* 👤 Jonathan on LinkedIn
* 👤 Conor on LinkedIn
* 👤 Isaac on LinkedIn
Jonathan bio: Jonathan Lwowski is an accomplished AI leader and Director of AI/ML at Zeitview, where he guides high-performing machine learning teams to deliver scalable, real-world solutions. With deep experience spanning start-ups and enterprise environments, Jonathan bridges cutting-edge innovation with business strategy, ensuring AI efforts are aligned, impactful, and clearly communicated. He's passionate about unlocking AI's potential while fostering a culture of technical excellence, collaboration, and growth.
Conor bio: Conor Wallace is a Machine Learning Scientist at Zeitview, where he develops computer vision systems, including vision-language models, for geospatial AI applications in aerial inspection and infrastructure monitoring. His work integrates visual, thermal, and spatial data to build scalable systems for analysing assets such as solar farms, wind turbines, and commercial rooftops. He is also completing a Ph.D. in Electrical Engineering, where his research focuses on agent modelling in multi-agent systems, emphasising behaviour prediction in dynamic, non-stationary environments. Conor is passionate about applying state-of-the-art machine learning to real-world challenges in remote sensing and intelligent decision-making.
Isaac bio: Isaac Corley is a Senior Machine Learning Engineer at Wherobots, where he builds scalable geospatial AI systems. He holds a Ph.D. in Electrical Engineering with a focus on computer vision for remote sensing. Isaac previously worked as a Senior ML Scientist at Zeitview and a Research Intern at Microsoft's AI for Good Lab. He is a core maintainer of TorchGeo and is passionate about advancing open-source tools that make geospatial AI more accessible and production-ready.
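The modular cascaded design discussed above, where one model's detections feed the next stage, can be sketched as plain composable stages. Everything below is a hypothetical stand-in (the box format, thresholds, and labels are invented for illustration); Zeitview's actual pipeline uses trained CNNs built on libraries such as timm and Segmentation Models PyTorch.

```python
# Each stage is a plain callable, so stages can be retrained or swapped
# independently - the modularity advantage over a monolithic model.

def detect_panels(image):
    # Stand-in detector: keep only high-confidence candidate panel boxes.
    return [box for box in image["candidate_boxes"] if box["score"] > 0.5]

def classify_defect(crop):
    # Stand-in classifier: label a detected panel crop (thermal rule here
    # is purely illustrative).
    return "hotspot" if crop["max_temp_c"] > 60 else "ok"

def cascade(image):
    results = []
    for box in detect_panels(image):          # stage 1: localise panels
        label = classify_defect(box["crop"])  # stage 2: classify detections only
        results.append({"box": box["xyxy"], "label": label})
    return results

image = {
    "candidate_boxes": [
        {"score": 0.9, "xyxy": (0, 0, 50, 50), "crop": {"max_temp_c": 72}},
        {"score": 0.2, "xyxy": (60, 0, 110, 50), "crop": {"max_temp_c": 40}},
    ]
}
print(cascade(image))  # only the high-confidence box reaches stage 2
```

Because each stage is an independent callable, a detector or classifier can be retrained and replaced without touching the rest of the chain.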
In this episode I caught up with Adam Stewart, creator of TorchGeo, to hear the latest updates on this pivotal piece of geospatial AI software. We discuss TorchGeo's strong adoption in the geospatial ML community and the upcoming 1.0 release, which will introduce long-awaited time series support. Adam shares insights from a recent software literature review covering available geospatial data handling tools, sampling strategies, and the broader machine learning ecosystem. He also talks about the newly formed Technical Steering Committee, outlining its role in guiding the project's direction. Other topics include upcoming breaking changes to geospatial datasets and samplers, how TorchGeo integrates with other libraries and tools, the project's growing community, the role of foundation models in handling diverse geospatial products, the promise of zero-shot learning for effortless data labelling, and why no single model can dominate across all domains.
* 👤 Adam on LinkedIn
* 🖥️ TorchGeo
* 📺 Video of this conversation on YouTube
Bio: Adam J. Stewart's research interests lie at the intersection of machine learning and Earth science, especially remote sensing. He is the creator and lead developer of the popular TorchGeo library, a PyTorch domain library for working with geospatial data and satellite imagery. His current research focuses on building foundation models for multispectral imagery. He received his B.S. from the Department of Earth and Atmospheric Sciences at Cornell University and his Ph.D. from the Department of Computer Science at the University of Illinois Urbana-Champaign. He currently works as a postdoctoral researcher at the Technical University of Munich under the guidance of Prof. Xiaoxiang Zhu.
In this episode I caught up with Tobias Augspurger to explore the Map Your Grid initiative at Open Energy Transition, an ambitious project funded by Breakthrough Energy to build a digital twin of the global electrical grid. While AI and machine learning are being used to detect substations, pylons, and transmission lines in satellite imagery, Toby explains why these approaches alone can't deliver a complete, accurate map. We discuss the false positives, missing connections, and contextual details that challenge automated models, and how human validation and open-source mapping remain essential to producing reliable, global-scale infrastructure data.
* 👤 Toby on LinkedIn
* 🖥️ mapyourgrid.org
* 📺 MapYourGrid YouTube Channel
* 📺 Video of this conversation on YouTube
Bio: Tobias Augspurger is a climate technology innovator and open-source advocate. At Open Energy Transition, he is accelerating the global energy transition by standardising electrical grid data within OpenStreetMap as part of the MapYourGrid initiative. With a PhD in atmospheric sciences and a background in aerospace engineering, Tobias combines technical expertise in remote sensing with inclusive collaboration. In his spare time, he works on OpenSustain.tech and ClimateTriage.com, connecting and promoting open projects to combat climate change and biodiversity loss.
In this episode, I caught up with Federico Bessi to dive into a fascinating end-to-end project on the automatic detection of photovoltaic (PV) solar plants using satellite imagery and deep learning. Federico walks us through how he built a complete pipeline: sourcing and preprocessing data from the Brazil Data Cube, annotating solar farms in QGIS, training models in PyTorch, and finally deploying a web app on AWS to visualise the predictions. This is interesting because solar energy infrastructure is expanding rapidly, yet tracking it globally remains a major challenge. The project demonstrates how open data and modern ML tools can be combined to monitor solar installations at scale, automatically and remotely, and is a compelling example of applied geospatial AI in action. This conversation is ideal for remote sensing practitioners, machine learning engineers, and anyone interested in environmental monitoring, Earth observation, or building practical AI systems for real-world deployment.
* 🖥️ Project code on Github
* 👤 Federico on LinkedIn
* 📺 Video of this conversation on YouTube
* 📺 Project demo on YouTube
Bio: Federico Bessi is a Software Engineer specialising in Machine Learning, with an international background in the software, computer vision, and biometrics industries. He spent over a decade working in biometric identification for global tech companies, contributing to national ID systems across more than seven countries. In these roles, he developed software, led engineering teams, and oversaw large-scale system operations. Building on this foundation, Federico has deepened his work in machine learning and deep learning, applying it to business intelligence, user satisfaction modelling, and geospatial analysis using satellite imagery. He is also a contributor to the open-source TorchGeo project.
In this conversation, I caught up with Shahab Jozdani to learn about Chat2Geo, a web-based application designed to simplify remote-sensing-based geospatial analysis through an intuitive, chatbot-style interface. Large language models, such as ChatGPT, are reshaping the way users interact with complex datasets, and it's inspiring to see innovators like Shahab leverage this technology to democratise geospatial analytics. Note that we also recorded a demonstration video of Chat2Geo, which is linked below.
* 🖥️ Chat2Geo on Github
* 👤 Shahab on LinkedIn
* 📺 Video of this conversation on YouTube
* 📺 Demo of Chat2Geo on YouTube
Bio: Shahab is a Data Scientist and Geomatics Engineer with over 10 years of experience in academia and industry, specialising in AI, computer vision, data science, software development, and building new solutions. He is the founder of GeoRetina, a Canadian company that developed and open-sourced Chat2Geo, an AI-powered platform providing real-time geospatial insights via conversational interfaces.
OmniCloudMask


2025-06-27 · 18:58

In this episode, I caught up with Nick Wright to discuss OmniCloudMask, a Python library for state-of-the-art cloud and cloud shadow masking in satellite imagery. Accurate cloud masking is crucial for reliable downstream analytics, yet creating models that generalise well across different sensors, resolutions, and atmospheric conditions remains a significant challenge. OmniCloudMask addresses this through a novel image preprocessing pipeline and clever augmentation strategies that vary the image resolution presented to the model. Model generalisation is a key concern for practitioners in our field, and I found this conversation both insightful and practical. I hope you do too.
* 📃 Paper
* 🖥️ Code
* 📺 Video of this conversation on YouTube
* 👤 Nick on LinkedIn
Bio: Nick Wright is a Senior Research Scientist at the Western Australian Department of Primary Industries and Regional Development. He is also pursuing a PhD at the University of Western Australia, focusing on deep learning applications for environmental remote sensing, specifically cloud and water detection and sensor-agnostic models.
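The resolution-varying augmentation idea can be illustrated in a few lines of NumPy. This is a rough sketch of the general technique, not OmniCloudMask's actual preprocessing: downsample a training chip by a random factor and resample it back, so the model sees the same scene at varying effective resolutions.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_resolution(chip, factors=(1, 2, 4)):
    """Simulate coarser sensors by downsampling then resampling back.

    Illustrative only: the factors and the naive nearest-neighbour
    resampling here are assumptions for the example.
    """
    f = rng.choice(factors)
    coarse = chip[::f, ::f, :]                             # naive downsample
    restored = coarse.repeat(f, axis=0).repeat(f, axis=1)  # nearest-neighbour upsample
    return restored[: chip.shape[0], : chip.shape[1], :]

chip = rng.random((64, 64, 4))  # e.g. a 4-band image chip
aug = random_resolution(chip)
print(aug.shape)  # same grid size, possibly coarser effective resolution
```

Training on such chips encourages the model to tolerate the resolution differences between sensors that the episode discusses.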
In this episode, I caught up with James Doherty and Donal Hill, co-founders of Planetixx (formerly Plastic-i), a company using satellite imagery and AI to monitor ocean debris. Their platform not only detects plastic and other debris, but also predicts its origins and trajectory, enabling more effective interventions. Beyond plastic, they've expanded into monitoring algal blooms, a growing environmental concern. The conversation covers the technical and practical challenges of building AI models that work at a global scale, as well as their newly launched platform. A live demo of the platform is available as a separate video, linked below.
* 🖥️ Planetixx website
* 👤 James on LinkedIn
* 👤 Donal on LinkedIn
* 📺 Video of this conversation on YouTube
* 📺 Platform demo on YouTube
Bio: Dr. James Doherty is CEO and Co-Founder of the Earthshot-nominated enterprise Planetixx, where he drives environmental innovation in tackling marine plastic pollution and promoting ocean health. His unique expertise spans astronomy, data science, and law, combining scientific rigour with legal acumen. James holds a PhD in Astronomy, law degrees from the Universities of Cambridge and Oxford, and is a Science to Data Science (S2DS) Fellow. His professional background includes practising as a commercial lawyer at Eversheds Sutherland before applying his diverse skill set to environmental entrepreneurship.
Bio: Dr. Donal Hill is Chief Technical Officer and Co-Founder of Planetixx, where he leads technology development initiatives in satellite data and artificial intelligence applications. His expertise spans particle physics, data science, and AI implementation across research and industry. Donal holds a PhD in Particle Physics from the University of Oxford and spent ten years at CERN's Large Hadron Collider. His distinguished career includes serving as a Marie Curie Fellow at École Polytechnique Fédérale de Lausanne (EPFL) and holding senior data scientist positions at UEFA and the Swiss Data Science Center, where he facilitated AI adoption for industry partners.
In this episode, I caught up with Kai Jeggle to discuss his experience pursuing a PhD at the intersection of machine learning and remote sensing. The conversation covers Kai's work on IceCloudNet, a deep learning model that reconstructs 3D cloud structures from 2D imagery with sparse depth measurements. Data fusion and sparse machine learning are fascinating topics. I learned a lot from this conversation, and I hope you do too.
* 👤 Kai on LinkedIn
* 📃 IceCloudNet paper
* 🖥️ Code
* 💾 Dataset
* 📺 Video of this conversation on YouTube
Bio: Kai is passionate about leveraging machine learning to tackle climate change. His research lies at the intersection of ML, remote sensing, and climate science. He studied industrial engineering and computer science before completing his PhD in Atmospheric Physics at ETH Zurich under Prof. Ulrike Lohmann, with visiting research stays at the University of Valencia and the ESA Φ-lab. He also worked as a software engineer at the Stockholm-based MLOps startup LogicalClocks. Kai is a core team member and former vice-chair of Climate Change AI, a global non-profit that catalyses impactful work at the intersection of climate change and machine learning. In his next role, he will join the meteo data team at Dexter Energy in Amsterdam, working to improve renewable energy yield forecasts.
In this episode, I caught up with Daniele Rege Cambrin, the organiser of the SMAC earthquake detection challenge, and Giorgio Morales, its winner. The challenge invited participants to leverage Sentinel-1 satellite imagery to identify earthquake-affected areas and measure the strength of events, while promoting scalable and resource-efficient solutions. Giorgio shared the innovative approach that secured him first place, and we explored the effort behind designing and solving such a meaningful challenge. This conversation provides valuable insights into developing effective solutions and showcases the potential of satellite data in earthquake monitoring.
* 📺 Video of this conversation on YouTube
* 🖥️ SMAC website
* 🖥️ Giorgio's website
Bio: Giorgio is a PhD candidate (ABD) in computer science at Montana State University and a current member of the Numerical Intelligent Systems Laboratory (NISL). He holds a BS in mechatronic engineering from the National University of Engineering, Peru, and an MS in computer science from Montana State University, USA. His research interests are Deep Learning, Explainable Machine Learning, Computer Vision, and Precision Agriculture.
In this episode, I caught up with Caleb Robinson to learn about the building damage assessment toolkit from the Microsoft AI for Good Lab. This toolkit enables first responders to carry out an end-to-end workflow for assessing damage to buildings after natural disasters using post-disaster satellite imagery. It includes tools for annotating imagery, fine-tuning deep learning models, and visualising model predictions on a map. Caleb shared an example where an organisation was able to train a useful model with just 100 annotations and complete the entire workflow in half a day. I believe this represents a significant new capability, enabling more rapid response in times of crisis.
* 📺 Video of this conversation on YouTube
* 👤 Caleb on LinkedIn
* 🖥️ The toolkit on Github
Bio: Caleb is a Research Scientist in the Microsoft AI for Good Research Lab. His work focuses on tackling large-scale problems at the intersection of remote sensing and machine learning/computer vision. Projects he works on include estimating land cover, poultry barns, solar panels, and cows from high-resolution satellite imagery. Caleb is interested in research topics that facilitate using remotely sensed imagery more effectively.
Deepness QGIS plugin


2024-12-19 · 12:57

In this episode, I caught up with Marek Kraft to learn about the Deepness QGIS plugin. QGIS is a widely used open-source tool for working with geospatial data. It's written in Python, and its functionality can be expanded with plugins. One plugin that recently caught my attention is Deepness, developed by Marek and his team. Deepness makes it straightforward to use deep learning models in QGIS: you don't need specialised hardware like GPUs, and it offers a range of pre-trained models through a model zoo. As a long-time QGIS user, I was thrilled to discover Deepness, and I believe it has the potential to make deep learning much more accessible to geospatial practitioners without deep learning expertise. Marek shared some fascinating examples of how the plugin is being used, and discussed the growing community around it.
* 📺 Demo video showcasing Deepness in action
* 📺 Video of this conversation on YouTube
* 👤 Marek on LinkedIn
* 🖥️ PUT Vision Lab
* 📖 Deepness documentation
* 🖥️ Deepness Github page
Bio: Marek Kraft is an assistant professor at the Poznań University of Technology (PUT), where he leads the PUT Computer Vision Lab. The lab focuses on developing intelligent algorithms for extracting meaningful information from images, videos, and signals. This work has applications across diverse fields, including Earth observation, agriculture, and robotics (including space robotics). Kraft's current research involves close-range remote sensing image analysis, specialising in small object detection for environmental monitoring. He has also collaborated on European Space Agency projects aimed at extraterrestrial rover navigation and autonomy, making use of his knowledge of embedded systems. His research has led to over 80 publications, several patents, and a history of securing competitive research grants. Kraft is a member of IEEE and ACM.
In this episode, I caught up with Nicolas Gonthier to learn about the FLAIR land cover mapping challenge. In this challenge, 20 cm resolution aerial imagery was used to create high-quality annotations. This data was paired with a time series of medium-resolution Sentinel-2 images to create a rich, multidimensional dataset. Participants in the challenge were able to surpass the baseline solution by 10 points on the target metric, representing a significant step forward in land cover classification capabilities. The dataset is now being expanded to cover a larger area and incorporate additional imaging modalities, which have been shown to improve performance on this task. Nicolas also provided important context about the objectives of the organisation running the challenge, such as the need to balance model performance with processing costs.
* 🖥️ FLAIR website
* 🖥️ Page on the objectives of FLAIR
* 📖 The NeurIPS paper about FLAIR
* 🤗 IGN on HuggingFace
* 🖥️ IGN datahub
* 👤 Nicolas on LinkedIn
* 📺 Video of this conversation on YouTube
Bio: Nicolas Gonthier is an R&D project manager in the innovation team at IGN, the French National Institute of Geographical and Forest Information. He received an MSc in data science from ISAE-Supaero in 2017 and a Ph.D. in computer vision from Université Paris-Saclay - Télécom Paris in 2021. His work focuses on deep learning for Earth observation (land cover segmentation, change detection, etc.) and computer vision for geospatial data. He participates in various research and innovation projects.
In this episode, I caught up with Marc Rußwurm to learn about meta-learning with Meteor. Our conversation starts with a discussion of meta-learning and the training of Meteor, and how this approach differs from the typical approaches taken to train foundational models. We cover the advantages and challenges of this technique, discuss the fine-tuning of Meteor with minimal examples, as few as five, for tasks like deforestation monitoring and change detection, and consider what the future could hold for this approach. Meteor showcases the significant potential of few-shot learning for processing remote sensing imagery and proves it's possible to tackle tasks even when very few training examples are available.
* 👤 Marc on LinkedIn
* 📖 Meteor Nature paper
* 💻 Meteor code on Github
* 📺 Video of this conversation on YouTube
Bio: Marc Rußwurm is Assistant Professor of Machine Learning and Remote Sensing at Wageningen University. His background is in Geodesy and Geoinformation, and he obtained a Ph.D. in Remote Sensing Technology at TU Munich. During his Ph.D., he visited the European Space Agency and the University of Oxford as a participant in the Frontier Development Lab in 2018, as well as the Obelix Laboratory in Vannes and the Lobell Lab at Stanford. As a postdoctoral researcher, he joined the Environmental Computational Science and Earth Observation Laboratory at EPFL, Switzerland. His research interests are developing modern machine learning methods for real-world remote sensing problems, such as classifying vegetation from satellite time series and detecting marine debris in the oceans. He is interested in domain shifts and transfer learning problems naturally arising from geographic data.
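To make the few-shot setting concrete, here is a minimal nearest-centroid (prototypical) classifier operating on five labelled embeddings per class. This illustrates few-shot classification in general, not Meteor's method: Meteor adapts via gradient-based meta-learning, and the embeddings and class names below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_prototypes(support_embeddings, support_labels):
    # One prototype per class: the mean of its few support embeddings.
    classes = sorted(set(support_labels))
    return {c: np.mean([e for e, l in zip(support_embeddings, support_labels) if l == c], axis=0)
            for c in classes}

def predict(prototypes, query):
    # Assign the query to the class with the nearest prototype.
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

# Two synthetic classes with 5 "shots" each, e.g. forest vs deforested pixels.
forest = rng.normal(0.0, 0.1, size=(5, 8))
cleared = rng.normal(1.0, 0.1, size=(5, 8))
support = np.vstack([forest, cleared])
labels = ["forest"] * 5 + ["deforested"] * 5

protos = fit_prototypes(support, labels)
print(predict(protos, rng.normal(1.0, 0.1, size=8)))
```

With only five examples per class the prototypes are noisy, which is part of why learning-to-adapt approaches like Meteor are an active research topic.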
In this episode, I caught up with Nils Lehmann to learn about uncertainty quantification for neural networks. The conversation begins with a discussion of Bayesian neural networks and their ability to quantify the uncertainty of their predictions. Unlike regular deterministic neural networks, Bayesian neural networks offer a more principled method for providing predictions with a measure of confidence. Nils then introduces the PyTorch Lightning UQ Box project on GitHub, a tool that enables experimentation with a variety of uncertainty quantification (UQ) techniques for neural networks. Quantifying uncertainty is a crucial topic, essential for providing transparency to end users of machine learning models. The video of this conversation is also available on YouTube here* Nils's website* Lightning UQ Box on Github* Further reading: A survey of uncertainty in deep neural networksBio: Nils Lehmann is a PhD student at the Technical University of Munich (TUM), supervised by Jonathan Bamber and Xiaoxiang Zhu, working on uncertainty quantification for sea-level rise. More broadly, his interests lie in Bayesian deep learning, uncertainty quantification, and generative modelling for Earth observation data. He is also passionate about open-source software contributions and is a maintainer of the TorchGeo package. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.satellite-image-deep-learning.com
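As a flavour of the simplest UQ techniques covered by toolboxes like the one discussed here, below is a toy deep-ensemble sketch. This is not the Lightning UQ Box API: the `ensemble_predict` helper is hypothetical, and the one-parameter "models" are stand-ins for networks trained from different random initialisations.

```python
import statistics

def ensemble_predict(models, x):
    """Deep-ensemble-style prediction: run each independently trained model,
    then report the mean prediction and the spread across members as a
    simple (non-Bayesian) uncertainty estimate."""
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.stdev(preds)

# Toy "models": linear predictors whose weights differ slightly, as networks
# trained from different random seeds would
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]
mean, std = ensemble_predict(models, 2.0)
```

A large spread across ensemble members flags inputs where the prediction should not be trusted blindly, which is exactly the kind of transparency discussed in the episode.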
In this episode I caught up with Samuel Bancroft to learn about segmenting field boundaries using Segment Anything, aka SAM. SAM is a foundation model for vision released by Meta, which is capable of zero-shot segmentation. However, there are many open questions about how to make use of SAM with remote sensing imagery. In this conversation, Samuel describes how he used SAM to perform segmentation of field boundaries using Sentinel 2 imagery over the UK. His best results were obtained not by fine-tuning SAM, but by carefully pre-processing a time series of images into HSV colour space and using SAM without any modifications. This is a surprising result, and this kind of approach significantly reduces the amount of work needed to develop useful remote sensing applications with SAM. You can view the recording of this conversation on YouTube here- Samuel on LinkedIn - https://github.com/Spiruel/UKFields Bio: Sam Bancroft is a final year PhD student at the University of Leeds. He is assessing future food production using satellite data and machine learning. This involves exploring new self- and semi-supervised deep learning approaches that help in producing more reliable and scalable crop type maps for major crops worldwide. He is a keen supporter of democratising access to models and datasets in Earth observation and machine learning. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.satellite-image-deep-learning.com
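The pre-processing idea described in this episode can be illustrated with a toy sketch. This is not Samuel's exact recipe: the `timeseries_to_hsv_pixel` helper and the particular channel mapping are hypothetical. It only shows the general idea of compressing a per-pixel time series into the three HSV channels of a false-colour composite, which could then be handed to an off-the-shelf segmenter such as SAM without any fine-tuning.

```python
import colorsys

def timeseries_to_hsv_pixel(values):
    """Map one pixel's time series of band values to an RGB triple via HSV.
    Illustrative mapping: hue encodes when the signal peaks in the season,
    saturation the seasonal amplitude, value the mean level."""
    n = len(values)
    vmin, vmax = min(values), max(values)
    hue = values.index(vmax) / max(n - 1, 1)   # hue: timing of the peak
    sat = min(vmax - vmin, 1.0)                # saturation: seasonal range
    val = min(sum(values) / n, 1.0)            # value: average brightness
    return colorsys.hsv_to_rgb(hue, sat, val)

# Example: a pixel whose NDVI-like signal peaks mid-season.
# Applied per pixel, this yields an RGB composite that an unmodified
# segmenter could consume in place of a single-date image.
rgb = timeseries_to_hsv_pixel([0.2, 0.5, 0.9, 0.6, 0.3])
```

The attraction of this approach is that all the temporal information is baked into an ordinary three-channel image, so no changes to the segmentation model are required.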
In this episode I caught up with Yotam Azriel to learn about interpretable deep learning. Deep learning models are often criticised for being "black box" due to their complex architectures and large number of parameters. Model interpretability is crucial as it enables stakeholders to make informed decisions based on insights into how predictions were made. I think this is an important topic, and I learned a lot about the sophisticated techniques and engineering required to develop a platform for model interpretability. You can also view the video of this recording on YouTube.* tensorleap.ai* Yotam on LinkedInBio: Yotam is an expert in machine and deep learning, with ten years of experience in these fields. He has been involved in massive military and government development projects, as well as with startups. Yotam developed and led AI projects from research to production, and he also acts as a professional consultant to companies developing AI. His expertise includes image and video recognition, NLP, algo-trading, and signal analysis. Yotam is an autodidact with strong leadership qualities and great communication skills. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.satellite-image-deep-learning.com