Agenda

Posters presented during the two poster sessions are listed on the Posters page.
Details about the session formats can be found on the Formats page.

For both oral talks and posters, click on an entry to view the abstract.

  • Tuesday 17 Sep 2024
  • Wednesday 18 Sep 2024
  • Thursday 19 Sep 2024

Tuesday 17 Sep 2024

09:00

Registration opens

Welcome with tea, coffee, pastries, and fresh fruits

10:30

Chairs: Tomas Navarro & Dario Izzo

Welcome by the organisers

Earth 1 & 2

Tue 10:30 – 11:00

11:00

Chair: Dominik Dold

Opening keynote

Jürgen Schmidhuber (IDSIA & KAUST)

Earth 1 & 2

Tue 11:00 – 12:00

12:00

Lunch break

13:30

Chairs: Massimo Casasco (Navigation & Control); Ioana Ciucă (Astronomy & Astrophysics)

Fast and Robust rover Navigation with Convolutional Neural Networks: FARNAV, an AI feasibility study

Stephen King (Airbus)

Earth 2

With upcoming space missions exploring Mars and returning to the Moon, the need for autonomy in rover navigation is clear. In this regard, Airbus Defence and Space has been developing Guidance, Navigation and Control (GNC) algorithms for missions such as ESA’s ExoMars and the Sample Fetch Rover. While these classical, rigid algorithms work for their target missions, challenges arise regarding their flexibility for future applications. Aspects such as run time and robustness to camera shifts and other miscalibration effects need to be considered; such effects are very common due to, e.g., harsh launcher, lander, and rover operations.

Thus, Airbus has taken a different approach with FARNAV (Fast And Robust NAVigation): an image-based Deep Learning solution for creating navigation maps (NavMaps). FARNAV’s final goal is to outperform the computer vision algorithm of the classical GNC stack. This paper is our first step toward proving the feasibility of creating NavMaps with Convolutional Neural Networks. We analyse the tested architectures and the challenges encountered when applying Deep Learning to the problem, and discuss the results, concluding that the problem is solvable by AI. The results open the path to further analysis of integration and algorithm development towards the final goal.

Tue 13:30 – 13:50
Navigation & Control

Using ensemble learning to improve radiation tolerance of CNNs in space applications

Flavio Ponzina (UCSD)

Earth 2

With the increasing complexity of space missions, improving the resilience of AI systems against radiation-induced errors is critical. This paper proposes application-level optimization of Convolutional Neural Network (CNN)-based systems to improve robustness in space applications without relying on expensive shielded electronics. In particular, our work demonstrates how a recently proposed methodology for zero-overhead ensembles of CNNs, named E2CNN, can be applied to two case studies involving 6D pose estimation for satellite navigation. Our results show that E2CNN achieves up to 5.48% higher accuracy compared to single-instance models under different error conditions, suggesting its effectiveness in improving resilience and output quality in low-Earth-orbit applications. The experimental evaluations indicate that even without memory protection, E2CNN offers superior performance, making it a promising solution for AI-based space systems.
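
To illustrate why ensembling helps under radiation-induced faults, here is a plain output-averaging ensemble in PyTorch; this is a generic sketch, not the zero-overhead E2CNN construction itself, and the model handles are placeholders.

```python
import torch

def ensemble_predict(models, images):
    """Average the logits of several independently trained CNNs.

    A fault that corrupts the weights or activations of one member
    shifts only one term of the mean, so the ensemble output degrades
    gracefully instead of failing outright.
    """
    with torch.no_grad():
        logits = torch.stack([m(images) for m in models])  # (n_models, batch, classes)
        return logits.mean(dim=0).argmax(dim=-1)
```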

Tue 13:50 – 14:10
Navigation & Control

Camera-Pose Robust Crater Detection from Chang’e 5

Matthew Rodda (Uni Adelaide)

Earth 2

As space missions aim to explore increasingly hazardous terrain, crater-based pose estimation (CBPE) may provide the accurate and timely position estimates required to ensure safe navigation. Crater-detection algorithms (CDAs) are a crucial first step in CBPE, using images collected from a spacecraft’s onboard cameras to detect impact craters in the environment. Existing literature has proposed many algorithms for crater detection; however, performance has only been evaluated within a narrow operating scenario. In this work, we demonstrate the challenge of detecting craters in images with off-nadir view angles, using Mask R-CNN, a state-of-the-art detection algorithm. We show that training on real lunar images is superior to training on a simulated dataset, despite the real data lacking off-nadir view angles, achieving a detection F1-score of 63.1% and an ellipse-regression intersection over union of 0.701. This work provides the first quantitative analysis of CDA performance on images containing off-nadir view angles. Towards the development of robust CDAs, we additionally provide the first annotated dataset with off-nadir view angles from the Chang’e 5 Landing Camera, available here: https://zenodo.org/doi/10.5281/zenodo.11326449.

Tue 14:10 – 14:30
Navigation & Control

XAMI – A Benchmark Dataset for Artefact Detection in XMM-Newton Optical Images

Elisabeta-Iulia Dima (UPT Timișoara)

Earth 3

Reflected or scattered light produces artefacts in astronomical observations that can negatively impact scientific studies. Automated detection of these artefacts is therefore highly beneficial, especially with the increasing amounts of data being gathered. Machine learning methods are well suited to this problem, but there is currently a lack of annotated data with which to train such approaches to detect artefacts in astronomical observations.

In this work, we present a dataset of images from the XMM-Newton space telescope Optical Monitor camera showing different types of artefacts. We hand-annotated a sample of 1000 images with artefacts, which we use to train automated ML methods. We further demonstrate techniques tailored for accurate detection and masking of artefacts using instance segmentation, adopting a hybrid approach that combines convolutional neural networks (CNNs) and transformer-based models to exploit the advantages of each for segmentation.

The presented method and dataset will advance artefact detection in astronomical observations by providing a reproducible baseline. All code and data are made available at https://github.com/ESA-Datalabs/XAMI-model and https://github.com/ESA-Datalabs/XAMI-dataset.

Tue 13:30 – 13:50
Astronomy & Astrophysics

Enhancing Solar Driver Forecasting with Multivariate Transformers

Sergio Sánchez Hurtado (UPM Madrid)

Earth 3

In this work, we develop a comprehensive framework for forecasting the F10.7, S10.7, M10.7, and Y10.7 solar drivers with a time-series Transformer (PatchTST). To ensure equal representation of high and low levels of solar activity, we construct a custom loss function that weights samples based on the distance between the solar driver’s historical distribution and the training set. The framework uses an 18-day look-back window and forecasts 6 days into the future. When benchmarked against the Space Environment Technologies (SET) dataset, our model consistently produces forecasts with a lower standard mean error in nearly all cases, with improved prediction accuracy during periods of high solar activity. All the code is available on GitHub: https://github.com/ARCLab-MIT/sw-driver-forecaster.
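
The exact weighting scheme is not given in the abstract; a minimal sketch of one way to realize distribution-aware sample weighting, assuming weights inversely proportional to how often a target activity level occurs in the training set:

```python
import numpy as np
import torch

def density_weights(targets, n_bins=50):
    """Weight each sample inversely to the frequency of its target level,
    so rare high-activity periods contribute to the loss as much as the
    abundant low-activity ones."""
    hist, edges = np.histogram(targets, bins=n_bins, density=True)
    idx = np.clip(np.digitize(targets, edges[1:-1]), 0, n_bins - 1)
    w = 1.0 / (hist[idx] + 1e-6)
    return torch.tensor(w / w.mean(), dtype=torch.float32)

def weighted_mse(pred, target, weights):
    """Custom loss: per-sample weights rebalance solar activity levels."""
    return (weights * (pred - target) ** 2).mean()
```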

Tue 13:50 – 14:10
Astronomy & Astrophysics

Neural Surrogate HMC: Accelerated Hamiltonian Monte Carlo with a Neural Network Surrogate Likelihood

Linnea Wolniewicz (Uni of Hawaii at Manoa)

Earth 3

Bayesian Inference with Markov Chain Monte Carlo requires efficient computation of the likelihood function. In some scientific applications, the likelihood must be computed by numerically solving a partial differential equation, which can be prohibitively expensive. We demonstrate that some such problems can be made tractable by amortizing the computation with a surrogate likelihood function implemented by a neural network. We show that this has two additional benefits: reducing noise in the likelihood evaluations and providing fast gradient calculations. In experiments, the approach is applied to a model of heliospheric transport of galactic cosmic rays, where it enables efficient sampling from the posterior of latent parameters in the Parker equation.
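
As a schematic of the core idea (not the authors’ code), the sketch below replaces the expensive PDE-based log-likelihood with a differentiable neural surrogate whose autograd gradients drive the leapfrog integrator at the heart of HMC; the network shape and step sizes are placeholders.

```python
import torch

# Small MLP trained offline to mimic the PDE-based log-likelihood.
surrogate = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def log_post(theta):
    # surrogate log-likelihood + standard normal log-prior (illustrative)
    return surrogate(theta).squeeze() - 0.5 * (theta ** 2).sum()

def leapfrog(theta, p, step=0.05, n_steps=20):
    """One HMC trajectory; gradients come cheaply from the surrogate
    (the Metropolis accept/reject step is omitted for brevity)."""
    theta = theta.clone().requires_grad_(True)
    grad = torch.autograd.grad(log_post(theta), theta)[0]
    p = p + 0.5 * step * grad                      # half-step momentum
    for _ in range(n_steps):
        theta = (theta + step * p).detach().requires_grad_(True)
        grad = torch.autograd.grad(log_post(theta), theta)[0]
        p = p + step * grad                        # full-step momentum
    p = p - 0.5 * step * grad                      # trim final half-step
    return theta.detach(), p
```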

Tue 14:10 – 14:30
Astronomy & Astrophysics

14:30

Coffee break

14:50

Chairs: Massimo Casasco (Navigation & Control); Ioana Ciucă (Astronomy & Astrophysics)

Explaining AI Decisions in Autonomous Satellite Scheduling via Computational Argumentation

Cheyenne Powell (Uni Strathclyde)

Earth 2

The task of scheduling satellite operations is inherently complex and highly sensitive to alterations, a challenge compounded by the increasing number of satellites in orbit. The escalating risks and complexities have prompted organizations to explore automated solutions to replace traditional manual processes. However, concerns about the trustworthiness and transparency of automated systems prevent their widespread adoption.

eXplainable Artificial Intelligence (XAI) is an emerging field that aims to address these reservations by enabling Artificial Intelligence (AI) systems to provide explanations for their decisions, thereby making their reasoning less opaque. Within XAI, computational argumentation frameworks are seeing increasing use. This approach quantifies the supportability of decisions, offering system operators enhanced understanding and justification for utilizing automated services.

This paper expands on previous research by detailing a tripolar argumentation approach for assessing actions in an Earth Observation (EO) satellite schedule. The method involves calculating and presenting the weights of arguments that support or attack the scheduled actions. The results illustrate the effectiveness of the approach in producing meaningful insights into scheduling decisions, highlighting its potential for practical applications in real-world satellite operations.

Tue 14:50 – 15:10
Navigation & Control

Optimizing Multi-Task Learning for Accurate Spacecraft Pose Estimation

Francesco Evangelisti (AIKO S.r.l.)

Earth 2

Accurate satellite pose estimation is crucial for autonomous guidance, navigation, and control (GNC) systems in in-orbit servicing (IOS) missions. This paper explores the impact of different tasks within a multi-task learning (MTL) framework for satellite pose estimation using monocular images. By integrating tasks such as direct pose estimation, keypoint prediction, object localization, and segmentation into a single network, the study evaluates the reciprocal influence between tasks by testing different multi-task configurations, exploiting the modularity of the convolutional neural network (CNN) used in this work. Trends in mutual bias between the analyzed tasks are identified by employing different weighting strategies to further test the robustness of the findings. A synthetic dataset was developed to train and test the MTL network.
Results indicate that direct pose estimation and heatmap-based pose estimation generally influence each other positively, while the bounding-box and segmentation tasks do not provide significant contributions and tend to degrade the overall estimation accuracy.

Tue 15:10 – 15:30
Navigation & Control

End-to-End AI-based IP-GNC architecture for spacecraft proximity operations

Andrea Brandonisio (Politecnico di Milano)

Earth 2

Autonomy is increasingly crucial in space missions due to several factors driving the exploration and utilization of space. Meanwhile, Artificial Intelligence (AI) methods are beginning to play a crucial role in addressing the associated challenges and enhancing autonomy in space missions.
The proposed work develops a closed-loop simulator for proximity operations scenarios, particularly for the inspection of an unknown and uncooperative target object, with a fully AI-based closed-loop image processing (IP) and Guidance, Navigation & Control (GNC) chain. The tool is based on four main blocks: image generation, CNN-based image processing, a navigation filter, and DRL-based guidance and control.
The proposed AI-based architecture is first trained and tuned to investigate the interface problems between the GNC blocks. Afterwards, the architecture is deployed in a Monte Carlo testing campaign to verify and validate the performance of the proposed IP-GNC loop.

Tue 15:30 – 15:50
Navigation & Control

Analysis and Predictive Modeling of Solar Coronal Holes Using Computer Vision and ARIMA-LSTM Networks

Juyoung Yun (Stony Brook Uni)

Earth 3

In the era of space exploration, coronal holes on the Sun play a significant role due to their impact on satellites and aircraft through their open magnetic fields and increased solar wind emissions. This study employs computer vision techniques to detect coronal hole regions and estimate their sizes using imagery from the Solar Dynamics Observatory (SDO). Additionally, we utilize a hybrid time-series prediction model, specifically a combination of Long Short-Term Memory (LSTM) networks and ARIMA, to analyze trends in the area of coronal holes and predict their areas across various solar regions over a span of seven days. By examining time-series data, we aim to identify patterns in coronal hole behavior and understand their potential effects on space weather.
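
The abstract does not specify how the two models are combined; one common hybrid, sketched below under that assumption, lets ARIMA capture the linear trend while a pre-trained LSTM (here `lstm`, assumed to map a residual window to a 7-day correction) models the nonlinear residuals:

```python
import numpy as np
import torch
from statsmodels.tsa.arima.model import ARIMA

def hybrid_forecast(area_series, lstm, window=30, horizon=7):
    """ARIMA forecasts the linear trend of coronal hole area; the LSTM
    adds a nonlinear correction learned from the ARIMA residuals
    (order, window, and horizon are illustrative)."""
    fit = ARIMA(area_series, order=(2, 1, 2)).fit()
    linear = np.asarray(fit.forecast(steps=horizon))       # trend part
    resid = np.asarray(fit.resid, dtype=np.float32)
    x = torch.tensor(resid[-window:]).view(1, window, 1)
    with torch.no_grad():
        correction = lstm(x).numpy().ravel()[:horizon]     # nonlinear part
    return linear + correction
```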

Tue 14:50 – 15:10
Astronomy & Astrophysics

Solar filament detection, classification, and tracking with deep learning

Antonio Reche (Uni Alcalá)

Earth 3

This study introduces a comprehensive deep learning framework for the detection, classification, segmentation, and tracking of solar filaments using H-alpha images from the Global Oscillation Network Group (GONG) data archive. Solar filaments, phenomena in the solar corona, are of significant scientific interest due to their link with violent eruptive events such as coronal mass ejections. Combining a DETR-based model for detection and classification, a U-Net for instance segmentation, and a custom tracking algorithm, we achieved state-of-the-art performance across all tasks, overcoming typical challenges. The proposed methodology significantly advances solar filament analysis, offering improved capabilities for automated studies and potential applications in space weather prediction.

Tue 15:10 – 15:30
Astronomy & Astrophysics

Magnetogram-to-Magnetogram: Generative Forecasting of Solar Evolution

Francesco Pio Ramunno (FHNW)

Earth 3

Investigating the solar magnetic field is crucial to understanding the physical processes in the solar interior as well as their effects on the interplanetary environment. We introduce a novel method to predict the evolution of the solar line-of-sight (LoS) magnetogram using image-to-image translation with Denoising Diffusion Probabilistic Models (DDPMs). Our approach combines “computer science metrics” for image quality and “physics metrics” for physical accuracy to evaluate model performance. The results indicate that DDPMs are effective in maintaining the structural integrity, the dynamic range of solar magnetic fields, the magnetic flux, and other physical features such as the size of the active regions, surpassing traditional persistence models, also in flaring situations. We aim to use deep learning not only for visualisation but as an integrative and interactive tool for telescopes, enhancing our understanding of unexpected physical events like solar flares. Future studies will aim to integrate more diverse solar data to refine the accuracy and applicability of our generative model. Visit the GitHub page at https://github.com/fpramunno/MAG2MAG for the code and https://huggingface.co/spaces/fpramunno/mag2mag for the interactive tool.

Tue 15:30 – 15:50
Astronomy & Astrophysics

15:50

Coffee break

16:20

Chairs: Audrey Berquand (Frameworks & Integration); Gianluca Furano (Onboard AI)

The Onboard Artificial Intelligence Research (OnAIR) Platform

Evana Gizzi (NASA)

Earth 2

In this paper, we present the NASA On-Board Artificial Intelligence Research (OnAIR) Platform, a dual-use tool for rapid prototyping of autonomous capabilities for Earth and space missions, serving as both a cognitive architecture and a software pipeline. OnAIR has been used for autonomous reasoning in applications spanning various domains and implementation environments, supporting the use of raw data files, simulators, and embodied agents, and recently in an onboard experimental flight payload. We review the OnAIR architecture and recent applications of OnAIR for autonomous reasoning in various projects at NASA, concluding with a discussion of the intended use in the public domain and directions for future work.

Tue 16:20 – 16:40
Frameworks & Integration

Artificial Intelligence at ArianeGroup activities and stakes

Romain Bourrier (ArianeGroup SAS)

Earth 2

The integration of Artificial Intelligence (AI) technologies within ArianeGroup processes is revolutionizing various aspects of our operations. This paper presents an overview of how we are implementing AI solutions to enhance performance in space launcher development, manufacturing, and operations, as well as in developing new services such as Space Situational Awareness (SSA). We also present the challenges of developing internal artificial intelligence for process optimization, in terms of technology bricks, company organization, and conformity with regulatory bodies.
The stakes for the future are significant, with potential improvements in cost efficiency, reliability, and innovation. By leveraging cutting-edge AI technologies, ArianeGroup aims to maintain its position as a leading player in the global space industry while contributing to advancements in sustainable space exploration. As we continue to explore new applications of AI within our organization, we remain committed to fostering collaboration with academic institutions and space agencies to drive innovation and ensure responsible use of these technologies.

Tue 16:40 – 17:00
Frameworks & Integration

Integrating Machine Learning and Data Science in Planetary Missions for Science Autonomy

Victoria Da Poian (TYTO / JHU)

Earth 2

Future planetary exploration missions investigating habitability and potential life on distant bodies (e.g., Titan, Enceladus) will face communication constraints with limited transfer rates and short communication windows. Operations of existing missions, such as those to Mars, rely heavily on ground-in-the-loop interactions that may not be suitable for such remote targets [1]. To address this, new missions will require greater autonomy to achieve their desired science return [2].

Our research leverages machine learning (ML) and data science techniques to enable science autonomy onboard space missions [3]. Science autonomy would enable a spacecraft to make closed-loop scientific decisions in situ without the need for constant communication with Earth’s science operations teams. It has the potential to greatly enhance decision-making speed, the level of science return, and the overall efficiency of the mission. This paper discusses the current progress and future roadmap for integrating ML and data science in planetary missions to achieve science autonomy.

Tue 17:00 – 17:20
Frameworks & Integration

AstroSpy: On detecting Fake Images in Astronomy via Joint Image-Spectral Representations

Mohammed Talha Alam (MBZUAI)

Earth 2

The prevalence of AI-generated imagery has raised concerns about the authenticity of astronomical images, especially with advanced text-to-image models like Stable Diffusion producing highly realistic synthetic samples. Existing detection methods, primarily based on convolutional neural networks (CNNs) or spectral analysis, have limitations when used independently. We present AstroSpy, a hybrid model that integrates both spectral and image features to distinguish real from synthetic astronomical images. Trained on a unique dataset of real NASA images and AI-generated fakes (approximately 18k samples), AstroSpy utilizes a dual-pathway architecture to fuse spatial and spectral information. This approach enables AstroSpy to achieve superior performance in identifying authentic astronomical images. Extensive evaluations demonstrate AstroSpy’s effectiveness and robustness, significantly outperforming baseline models in both in-domain and cross-domain tasks, highlighting its potential to combat misinformation in astronomy.

Tue 17:20 – 17:40
Frameworks & Integration

Shaping Rewards, Shaping Routes: On Multi-Agent Deep Q-Networks for Routing in Satellite Constellation Networks

Manuel M. H. Roth (DLR)

Earth 3

Effective routing in satellite mega-constellations has become crucial to facilitate the handling of increasing traffic loads, more complex network architectures, as well as the integration into 6G networks. To enhance adaptability as well as robustness to unpredictable traffic demands, and to solve dynamic routing environments efficiently, machine learning-based solutions are being considered. For network control problems, such as optimizing packet forwarding decisions according to Quality of Service requirements and maintaining network stability, deep reinforcement learning techniques have demonstrated promising results. For this reason, we investigate the viability of multi-agent deep Q-networks for routing in satellite constellation networks. We focus specifically on reward shaping and quantifying training convergence for joint optimization of latency and load balancing in static and dynamic scenarios. To address identified drawbacks, we propose a novel hybrid solution based on centralized learning and decentralized control.
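
As an illustration of the reward-shaping design space mentioned above, a per-hop reward for a routing agent might combine latency and load-balancing terms as below; the weights and terms are invented for illustration, not the paper’s actual formulation.

```python
def shaped_reward(hop_latency_ms, queue_occupancy, delivered,
                  w_lat=0.01, w_load=0.5, bonus=1.0):
    """Per-hop reward for a DQN routing agent.

    hop_latency_ms : propagation + queueing delay of the chosen link
    queue_occupancy: fill level in [0, 1] of the next node's buffer
    delivered      : True once the packet reaches its destination
    """
    r = -w_lat * hop_latency_ms           # favour low-latency paths
    r -= w_load * queue_occupancy ** 2    # penalise congested links
    if delivered:
        r += bonus                        # terminal bonus for delivery
    return r
```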

Tue 16:20 – 16:40
Onboard AI

Graph Neural Networks for Anomaly Detection in Spacecraft

Gamze Naz Kiprit (Airbus)

Earth 3

Satellites play a crucial role in the global communications system while being subjected to the harsh space environment, which often causes component degradation. Early failure detection is important to ensure that the critical services they provide are not interrupted. This work proposes a forecasting-based anomaly detection method in which a Graph Convolutional Network (GCN) is leveraged to extract relevant information from time series. The proposed anomaly detection model reaches an F-score of 89% on the open-source SMAP & MSL dataset, which, to the best of the authors’ knowledge, is the best score achieved on this dataset. Furthermore, the proposed model is deployed on the space-grade AMD-Xilinx Versal AI Core series, and its performance as well as occupied hardware resources are reported.

Tue 16:40 – 17:00
Onboard AI

Neural-based Control for CubeSat Docking Maneuvers

Matteo Stoisa (AIKO S.r.l.)

Earth 3

Autonomous Rendezvous and Docking (RVD) has been extensively studied in recent years, addressing the stringent requirements posed by variations in spacecraft dynamics and the limitations of GNC systems. This paper presents an innovative approach employing Artificial Neural Networks (ANNs) trained through Reinforcement Learning (RL) for autonomous spacecraft guidance and control during the final phase of the rendezvous maneuver. The proposed strategy is easily implementable onboard and offers fast adaptability and robustness to disturbances by learning control policies from experience rather than relying on predefined models. Extensive Monte Carlo simulations within a relevant environment are conducted in 6DoF settings to validate our approach, along with hardware tests that demonstrate deployment feasibility. Our findings highlight the efficacy of RL in ensuring the adaptability and efficiency of spacecraft RVD, offering insights for future missions.

Tue 17:00 – 17:20
Onboard AI

Onboard AI for Enhanced FDIR: Revolutionizing Spacecraft Operations with Anomaly Detection

Livia Manovi (University of Bologna)

Earth 3

Spacecraft operate in harsh environments with significant computational constraints as well as limited communication to Earth. Traditional Fault Detection, Isolation, and Recovery (FDIR) systems rely on pre-programmed thresholds, which can miss unforeseen anomalies, thus requiring frequent human intervention from the ground. This paper explores the potential of onboard Artificial Intelligence (AI) for enhanced FDIR. By continuously analyzing spacecraft telemetry data, onboard AI can detect anomalies in real time, enabling faster and more precise responses. This approach promises to revolutionize spacecraft operations by improving autonomy, reducing reliance on ground intervention, and ensuring mission success. The paper discusses how Machine Learning-based methods can enhance spacecraft fault detection capabilities. In particular, the performance of methods such as Principal Component Analysis, autoregressive models, autoencoders, and Long Short-Term Memory networks is explored. The considered use case is based on telemetry data from reaction wheels, i.e., the AOC subsystem. This work also addresses challenges associated with implementing onboard AI, such as computational resource constraints and radiation hardening. Finally, a performance comparison between the selected methods in different conditions closes the discussion.

Tue 17:20 – 17:40
Onboard AI

17:40

Cocktail reception at the conference centre

18:30

Shuttle bus to hotels

Wednesday 18 Sep 2024

08:30

Shuttle bus arrives at venue

Welcome with tea, coffee, pastries, and fresh fruits

09:00

Chair: Tomas Navarro

Keynote

Ioana Ciucă (Australian National University & Stanford University)

Earth 1 & 2

Wed 09:00 – 09:45

Keynote

Ryan McClelland (NASA)

Earth 1 & 2

Wed 09:45 – 10:30

10:30

Poster session & coffee

Earth 3

Wed 10:30 – 11:30

11:30

Lunch

13:00

Chairs: Evana Gizzi (LLM-based Systems); Caleb Adams (Hardware Acceleration)

LLM-powered Assistants for Advanced Geospatial Dataset Recommendations based on Geolocated Queries

Arnaud Le Carvennec (CS Group)

Earth 1

Integrating Earth observation, meteorological, and sensor data into domain-specific research often requires substantial pre-existing knowledge. Leveraging Large Language Models (LLMs) to interpret natural language prompts offers a solution, enabling scientists in fields such as climate science, biology, humanities, and economics to access relevant geospatial datasets intuitively. This paper presents an innovative system using LLMs, specifically through Retrieval Augmented Generation (RAG), to recommend advanced geospatial datasets based on geolocated queries. Developed using the LangChain framework, the system incorporates data from sources like Destination Earth, Eurostat, and ECMWF, dynamically expanding its knowledge base with online data. This approach provides precise dataset recommendations and bibliographic references, enhancing research integration and application. The architecture, workflow, and technical implementation are detailed, emphasizing the system’s traceability, multi-LLM integration, and effectiveness, as demonstrated through improved answer relevance and relevant data collection recommendations.
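
The authors’ LangChain pipeline is not reproduced in the abstract; as a minimal, framework-free sketch of the retrieval step behind RAG (the catalogue entries and query are invented for illustration), one can rank dataset descriptions against the user’s question and paste the best hits into the LLM prompt:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for a geospatial dataset catalogue.
catalogue = [
    "ERA5 hourly reanalysis of 2 m temperature and precipitation, global",
    "Sentinel-2 L2A surface reflectance, 10 m, multispectral",
    "Eurostat NUTS-2 regional GDP and population statistics",
]

def retrieve(query, k=2):
    """Rank catalogue entries by similarity to the query; the top-k hits
    become grounding context for the LLM (the 'R' in RAG)."""
    vec = TfidfVectorizer().fit(catalogue + [query])
    docs, q = vec.transform(catalogue), vec.transform([query])
    scores = cosine_similarity(q, docs).ravel()
    return [catalogue[i] for i in scores.argsort()[::-1][:k]]

context = retrieve("vegetation stress around Seville this summer")
prompt = "Recommend datasets for the user.\nContext:\n" + "\n".join(context)
```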

Wed 13:00 – 13:20
LLM-based Systems

Decision Making for Planetary Landing Applications using AI Agents and Reinforcement Learning

Tomas Navarro (ESA)

Earth 1

This study explores the decision-making capabilities of Large Language Model (LLM) AI agents to automate learning in planetary landing missions. In particular, the work investigates the use of AI agents to minimise human intervention in training a lunar lander by providing high-level strategic guidance to a Reinforcement Learning (RL) agent within the complex environment of Kerbal Space Program. To that end, LLM AI agents are utilised to interpret a lander manual and extract information for designing the reward function of the RL algorithm, as well as to assess the training process and refine the rewards. A case study is conducted, comparing the performance of three types of LLMs, GPT-3.5-Turbo, GPT-4, and Meta-Llama-3-70B, in lunar landing tasks. The findings highlight the potential of interactive LLM agents for automated reward function generation and refinement, thus advancing AI capabilities in space exploration and autonomous navigation tasks.
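
A schematic of the refinement loop described above; `call_llm` and `train_rl_agent` are hypothetical placeholders standing in for the chat-completion API and the Kerbal Space Program training run, not the paper’s implementation.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to GPT-4 or Llama-3."""
    raise NotImplementedError

def train_rl_agent(reward_code: str) -> dict:
    """Placeholder: train the lander with this reward, return metrics."""
    raise NotImplementedError

def design_reward(manual_text: str, n_rounds: int = 3) -> str:
    """Draft a reward function from the lander manual, then iteratively
    refine it based on training feedback."""
    code = call_llm("Read this lander manual and write a Python function "
                    "reward(state) for an RL agent:\n" + manual_text)
    for _ in range(n_rounds):
        stats = train_rl_agent(code)
        code = call_llm(f"Training metrics: {stats}\nCurrent reward:\n{code}\n"
                        "Improve the reward function; return only code.")
    return code
```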

Wed 13:20 – 13:40
LLM-based Systems

Enhancing object-type searches in ESA Astronomy Science Archives extending ESASky AI capabilities with LLM and Retrieval Augmented Generation

Miguel Doctor Yuste & Marcos Lopez-Caniego (Telespazio UK for ESA)

Earth 1

Due to the potential of Large Language Models (LLMs) to disrupt the way people interact with information systems across numerous industries, we have investigated options to extend functionality in the context of astronomy science archives. A frequent request from archive users is the ability to search for specific types of objects in archival data, but this information is typically not available given the difficulty of classifying the billions of objects present in astronomical catalogues. To address this request, we present a proof-of-concept implementation aiming to enhance searches in the ESA Astronomy Science Archives by extending ESASky AI capabilities through interaction with LLMs and Retrieval Augmented Generation, using the information contained in the CDS SIMBAD astronomical object database. The proof of concept already shows that our implementation could offer new capabilities for astronomers leveraging ESASky in their research.

Wed 13:40 – 14:00
LLM-based Systems

CosmoCLIP: Generalising Large Vision-Language Models for Astronomical Imaging

Raza Imam (MBZUAI)

Earth 1

Existing vision-text contrastive learning models enhance representation transferability and support zero-shot prediction by matching paired image and caption embeddings while pushing unrelated pairs apart. However, astronomical image-label datasets are significantly smaller than the general image and label datasets available from the internet.
We introduce CosmoCLIP, an astronomical image-text contrastive learning framework fine-tuned on the pre-trained CLIP model using SpaceNet and BLIP-based descriptive captions. SpaceNet, obtained via FLARE, constitutes ~13k optimally distributed images, while BLIP acts as a rich knowledge extractor. The rich semantics derived from these SpaceNet images and BLIP descriptions, when learned contrastively, enable CosmoCLIP to achieve superior generalization across various in-domain and out-of-domain tasks. Our results demonstrate that CosmoCLIP is a straightforward yet powerful framework, significantly outperforming CLIP in zero-shot classification and image-text retrieval tasks.

Wed 14:00 – 14:20
LLM-based Systems

Lost in space but not in data: Tracking Technology Trends in the Space Field

Audrey Berquand (ESA)

Earth 1

Monitoring technology trends is essential for the European Space Agency (ESA) to fulfill its mission of advancing Europe’s space capabilities. Yet, in a rapidly evolving ecosystem where information is scattered across various databases and formats, this is no easy task. In this study, we present a novel approach to predicting and tracking the technologies related to ESA R&D studies, based on the ESA Technology Tree. Leveraging pre-trained Large Language Models (LLMs) from OpenAI and Mistral AI, we demonstrate how to enrich a database with technology-related information, providing fresh insight into technology trends.

Wed 14:20 – 14:40
LLM-based Systems

Low-power ship detection in satellite images using Neuromorphic hardware

Gregor Lenz (Neurobus)

Earth 2

Transmitting Earth observation image data from satellites to ground stations incurs significant costs in terms of power and bandwidth. For maritime ship detection, onboard data processing can identify ships and reduce the amount of data sent to the ground. However, most captured images contain only bodies of water; in the Airbus Ship Detection dataset, only 22.1% of images contain ships.
We designed a low-power, two-stage system to optimize performance instead of relying on a single complex model. The first stage is a lightweight binary classifier that acts as a gating mechanism to detect the presence of ships. This stage runs on Brainchip’s Akida 1.0, which leverages activation sparsity to minimize dynamic power consumption. The second stage employs a YOLOv5 object detection model to identify the location and size of ships. This approach achieves a mean Average Precision (mAP) of 76.9%, which increases to 79.3% when evaluated solely on images containing ships, by reducing false positives.
Additionally, we calculated that evaluating the full validation set on an NVIDIA Jetson Nano requires 111.4 kJ of energy. Our two-stage system reduces this energy consumption to 27.3 kJ, less than a fourth, demonstrating the efficiency of a heterogeneous computing system.
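
A minimal sketch of the two-stage gating logic; `gate` and `detector` stand in for the Akida binary classifier and the YOLOv5 model, and the threshold is illustrative.

```python
def detect_ships(tiles, gate, detector, threshold=0.5):
    """Run the cheap binary gate on every tile; only tiles likely to
    contain ships reach the expensive detector, so the ~78% of
    water-only tiles cost almost no energy."""
    results = []
    for tile in tiles:
        if gate(tile) >= threshold:            # stage 1: ship-presence score
            results.append(detector(tile))     # stage 2: boxes and sizes
        else:
            results.append([])                 # skipped: water only
    return results
```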

Wed 13:00 – 13:20
Hardware Acceleration

FPGA-based Hardware Acceleration for Real-Time Maritime Surveillance and Monitoring Onboard Spacecraft

Giovanni Maria Capuano (UNINA)

Earth 2

Accurate vessel detection and timely information extraction from optical remote sensing imagery are essential for a wide range of maritime surveillance operations, both civilian and defense-related, including vessel tracking, monitoring of unauthorized fishing and illegal migration, and search and rescue missions. Although artificial intelligence (AI) is a key component for achieving reliable and accurate detection in satellite imagery, traditional AI-based remote sensing methodologies rely on ground-based image processing. This dependence leads to significant delays between data acquisition and the generation of actionable insights, which may hinder rapid decision-making during critical maritime situations such as sea disasters. To address this challenge, we propose a novel hardware design based on the Microchip PolarFire System-on-Chip for low-power, real-time vessel detection onboard spacecraft. Our design leverages Microchip’s CoreVectorBlox, implemented on the programmable logic, to accelerate the inference of SR-YOLOv5s, an enhanced YOLOv5s-based object detection framework. This detector incorporates a super-resolution backbone that allows the extraction of fine details and features of small targets of interest, thus improving detection accuracy. The results confirm the effectiveness of our approach, showcasing its potential for enabling real-time alerts in the maritime surveillance domain through Earth Observation (EO) image processing at the edge.

Wed 13:20 – 13:40
Hardware Acceleration

Guidance and Control Neural Network Acceleration using Memristors

Zacharia Rudge (TU Delft, ESA)

Earth 2

In recent years, the space community has been exploring the possibilities of Artificial Intelligence (AI), specifically Artificial Neural Networks (ANNs), for a variety of onboard applications. However, this development is limited by the restricted energy budgets of smallsats and cubesats as well as the radiation concerns plaguing modern chips. This necessitates research into neural network accelerators capable of meeting these requirements whilst satisfying the compute and performance needs of the application. This paper explores the use of Phase-Change Memory (PCM) and Resistive Random-Access Memory (RRAM) memristors for onboard in-memory-computing AI acceleration in space applications. A guidance and control neural network (G&CNET) accelerated using memristors is simulated in a variety of scenarios and with both device types to evaluate the performance of memristor-based accelerators, considering device non-idealities such as noise and conductance drift. We show that the memristive accelerator is able to learn the expert actions, though challenges remain with the impact of noise on accuracy. We also show that re-training after degradation is able to restore performance to nominal levels. This study provides a foundation for future research into memristor-based AI accelerators for space, highlighting their potential and the need for further investigation.
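
The device non-idealities mentioned above can be mimicked in a weight-level simulation; below is a rough sketch assuming multiplicative Gaussian programming noise and the power-law conductance drift commonly reported for PCM (the drift exponent is a typical literature value, not taken from the paper).

```python
import torch

def apply_memristor_nonidealities(weights, t_seconds, t0=1.0,
                                  noise_std=0.03, drift_nu=0.05):
    """Corrupt ideal network weights the way an analog PCM crossbar would.

    noise_std : relative std of per-device programming noise
    drift_nu  : power-law drift exponent, G(t) = G0 * (t / t0) ** (-nu)
    """
    noisy = weights * (1 + noise_std * torch.randn_like(weights))
    drift = torch.tensor(t_seconds / t0) ** (-drift_nu)
    return noisy * drift

# Example: degrade a trained G&CNET layer after one day of deployment.
# layer.weight.data = apply_memristor_nonidealities(layer.weight.data, 86400.0)
```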

Wed 13:40 – 14:00
Hardware Acceleration

Artificial Intelligence Satellite Telecommunication Testbed using Commercial Off-The-Shelf Chipsets

Luis M. Garcés-Socarrás (Uni Luxembourg)

Earth 2

The Artificial Intelligence Satellite Telecommunications Testbed (AISTT), part of the ESA SPAICE project, is focused on the transformation of the satellite payload through artificial intelligence (AI) and machine learning (ML) methodologies running on available commercial off-the-shelf (COTS) AI-capable chips for onboard processing. The objectives include validating AI-driven SATCOM scenarios such as interference detection, spectrum sharing, radio resource management, decoding, and beamforming. The study highlights hardware selection and payload architecture. Preliminary results show that ML models significantly improve signal quality, spectral efficiency, and throughput compared to conventional payloads. Moreover, the testbed aims to evaluate the performance and use of AI-capable COTS chips in onboard SATCOM contexts.

Wed 14:00 – 14:20
Hardware Acceleration

Zero-Shot Embedded Neural Architecture Search for On-board Satellite Tasks & Hardware Accelerators

Abhishek Roy Choudhury (TCS)

Earth 2

Embedding Artificial Intelligence (AI) onboard satellites is becoming increasingly important in SpaceTech, where radiation-hardened Commercial Off-The-Shelf (COTS) hardware accelerators are becoming popular for running machine learning and deep learning workloads, with benefits such as optimized transmission bandwidth utilization and real-time insights. In current practice, designing such ML/DL models requires experts in neural network design, geospatial imaging, and embedded systems. Additionally, migrating such models to a different hardware accelerator platform typically requires significant effort to redesign and retrain the model. This paper shows how a zero-shot heuristic can be used to speed up reinforcement learning-based Neural Architecture Search (NAS) to generate tiny but accurate multi-objective models for cloud cover detection. We achieve a 15x reduction in search time, together with improved latency and energy consumption, in finding task- and hardware-specific models compared to a handcrafted model.

Wed 14:20 – 14:40
Hardware Acceleration

14:40

Coffee break

15:15

Unconference: Session 1A

Earth 1

Wed 15:15 – 15:45

Unconference: Session 2A

Earth 1

Wed 15:45 – 16:15

Unconference: Session 1B

Earth 2

Wed 15:15 – 15:45

Unconference: Session 2B

Earth 2

Wed 15:45 – 16:15

16:15

Break

16:45

Unconference: Session 3A

Earth 1

Wed 16:45 – 17:15

Unconference: Session 4A

Earth 1

Wed 17:15 – 17:45

Unconference: Session 3B

Earth 2

Wed 16:45 – 17:15

Unconference: Session 4B

Earth 2

Wed 17:15 – 17:45

18:00

Shuttle bus to dinner and hotels

19:00

Conference dinner at Oxford

22:15

Shuttle bus from dinner to hotels

Thursday 19 Sep 2024

08:30

Shuttle bus arrives at venue

Welcome with tea, coffee, pastries, and fresh fruits

09:00

Chair: Alexander Hadjiivanov

Keynote

Katja Hofmann (Microsoft Research Cambridge)

Earth 1 & 2

Thu 09:00 – 09:45

Keynote

Angela Schoellig (TUM)

Earth 1 & 2

Thu 09:45 – 10:30

10:30

Poster session & coffee

Earth 3

Thu 10:30 – 11:30

11:30

Lunch

13:00

Chairs: Gabriele Meoni (Earth Observation & Data Analysis); Emmanuel Blazquez (Interplanetary Trajectories & Descents)

A Novel Framework for Multi-Path Data Fusion in Earth Observation and New Observing Strategies: Applications to Predicting Forest Canopy Height

Evana Gizzi (NASA)

Earth 1

Exponential growth of data from Earth Observation (EO) assets has necessitated the development of sophisticated methods for data interpretation and management. NASA’s New Observing Strategy (NOS) approach aims to coordinate operations among complex heterogeneous systems of constellations, requiring advanced Artificial Intelligence and Machine Learning (AI/ML) techniques. Despite significant advancements in AI/ML across various domains, the EO and machine learning for satellites (SatML) fields remain fragmented, often relying on adapted techniques rather than domain-specific solutions.
We present a novel end-to-end data fusion framework tailored specifically for EO and SatML, addressing this gap by facilitating rapid development of AI/ML applications. This framework, called the Multimodal Earth Observation Workflow for Machine Learning (MEOW-ML), supports the entire AI/ML lifecycle, from dataset manipulation to model training, evaluation, and logging, and is designed to expedite the development of next-generation NOS deployments and the state of the art in EO.
We apply the framework to predict the canopy height model (CHM) derived from lidar data, integrating multiple data modalities through a hierarchical, multi-path model architecture that identifies and leverages the unique strengths of each data source to enhance predictive accuracy.
Our experiments demonstrate that the multi-path architecture outperforms traditional single-path models and provides significant advantages in both accuracy and computational efficiency.
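
The exact MEOW-ML architecture is not given in the abstract; as a generic sketch of multi-path fusion, assuming one encoder per modality whose features are concatenated before a shared regression head:

```python
import torch
import torch.nn as nn

class MultiPathFusion(nn.Module):
    """One encoder path per EO modality; fused features feed a shared
    head predicting per-pixel canopy height (dimensions illustrative)."""
    def __init__(self, optical_ch=4, sar_ch=2, hidden=64):
        super().__init__()
        def path(in_ch):  # lightweight per-modality encoder
            return nn.Sequential(
                nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            )
        self.optical, self.sar = path(optical_ch), path(sar_ch)
        self.head = nn.Conv2d(2 * hidden, 1, 1)

    def forward(self, optical, sar):
        fused = torch.cat([self.optical(optical), self.sar(sar)], dim=1)
        return self.head(fused)
```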

Thu 13:00 – 13:20
Earth Observation & Data Analysis

Operational range bounding of spectroscopy models with anomaly detection

Luís Simões (ML Analytics)

Earth 1

Safe operation of machine learning models requires architectures that explicitly delimit their operational ranges. We evaluate the ability of anomaly detection algorithms to provide indicators correlated with degraded model performance. By placing acceptance thresholds over such indicators, hard boundaries are formed that define the model’s coverage. As a use case, we consider the extraction of exoplanetary spectra from transit light curves, specifically within the context of ESA’s upcoming Ariel mission. Isolation Forests are shown to effectively identify contexts where prediction models are likely to fail. Coverage/error trade-offs are evaluated under conditions of data and concept drift. The best performance is seen when Isolation Forests model projections of the prediction model’s explainability SHAP values.
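
A condensed sketch of the best-performing configuration described above, assuming the SHAP values of the prediction model have already been computed; the projection dimensionality and contamination rate are placeholders.

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

def fit_coverage_gate(shap_train, n_dims=10, contamination=0.05):
    """Fit an Isolation Forest on projections of SHAP values; inputs that
    score as anomalous fall outside the model's operational range."""
    proj = PCA(n_components=n_dims).fit(shap_train)
    forest = IsolationForest(contamination=contamination, random_state=0)
    forest.fit(proj.transform(shap_train))
    return proj, forest

def in_coverage(proj, forest, shap_new):
    """True where the prediction should be accepted, False where rejected."""
    return forest.predict(proj.transform(shap_new)) == 1

# shap_train / shap_new: (n_samples, n_features) SHAP matrices.
```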

Thu 13:20 – 13:40
Earth Observation & Data Analysis

Automatic shadow detection in high-resolution optical satellite imagery

Nathan Sobetsky (Magellium)

Earth 1

This paper presents a principled approach for detecting shadows in satellite imagery at large scale. We propose a two-stage process, starting with the generation of a training set based on an airborne LiDAR point cloud and a target satellite image. The point cloud is first rasterised onto the spatial grid of the image (0.5 m). Shadows are then geometrically projected using a standard hillshade algorithm. Finally, we improve the quality of the training set by handling small objects and removing “bright” shadows that appear due to temporal misalignment between image and point cloud. The second stage leverages Deep Learning (DL) to automatically detect shadows in areas where airborne LiDAR is unavailable or out of date. Our experiments on manually labeled shadows demonstrate that introducing variance in the solar azimuth helps reduce confusion with water and dark surfaces (asphalt). We further show that the learned model is able to replicate the quality of the LiDAR-based shadow maps on small and medium-sized shadows in a geographical area unseen during training. We believe increasing the size of the training dataset to more diverse cities would improve generalization on large shadows and on vegetation.
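
For reference, a common form of the hillshade computation used in the first stage is sketched below on the rasterised LiDAR grid; sign conventions for azimuth and aspect vary between implementations, and the shadow threshold is illustrative.

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=0.5):
    """Illumination of each cell of a height raster for a given sun
    position; weakly lit cells are candidate shadow pixels."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # to math convention
    zen = np.radians(90.0 - altitude_deg)
    dy, dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.cos(zen) * np.cos(slope)
              + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# shadow_mask = hillshade(lidar_raster) < 0.1
```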

Thu 13:40 – 14:00
Earth Observation & Data Analysis

EclipseNETs: a differentiable description of irregular eclipse conditions

Giacomo Acciarini (University of Surrey, ESA)

Earth 2

In the field of spaceflight mechanics and astrodynamics, determining eclipse regions is a frequent and critical challenge. This determination impacts various factors, including the acceleration induced by solar radiation pressure, the spacecraft power input, and its thermal state, all of which must be accounted for in various phases of mission design. This study leverages recent advances in neural image processing to develop fully differentiable models of eclipse regions for highly irregular celestial bodies. Using test cases involving Solar System bodies previously visited by spacecraft, such as 433 Eros, 25143 Itokawa, 67P/Churyumov–Gerasimenko, and 101955 Bennu, we propose and study an implicit neural architecture defining the shape of the eclipse cone based on the Sun’s direction. Employing periodic activation functions, we achieve high precision in modeling eclipse conditions. Furthermore, we discuss the potential applications of these differentiable models in spaceflight mechanics computations.

Thu 13:00 – 13:20
Interplanetary Trajectories & Descents

Real-Time Fuel-Optimal Guidance Using Deep Neural Networks and Differential Algebra

Adam Evans (University of Auckland)

Earth 2

This work presents a method to compute continuous low-thrust fuel-optimal guidance updates on board a spacecraft using a combination of deep learning and differential algebraic techniques. To create the large datasets necessary to train neural networks, we also propose a new method using polynomial maps, which provide fuel-optimal guidance updates for any state deviation from a nominal trajectory. Constructing the map at the initial time allows the generation of an arbitrarily high number of optimal trajectories for the database via the simple evaluation of polynomials. A trained deep neural network is then capable of providing continuous guidance updates; an interplanetary transfer scenario from Earth to Psyche is chosen to evaluate the guidance scheme’s performance. As a neural network cannot generalise the highly complex mapping between state and optimal control policy perfectly, some error in the policy is expected. For sensitive dynamical systems and long flight times, this error may be unsuitable for the mission. We therefore further utilise differential algebraic techniques to refine the output of the neural network into highly accurate fuel-optimal guidance updates via a lightweight, iteration-free mapping procedure suitable for onboard implementation.

Thu 13:20 – 13:40
Interplanetary Trajectories & Descents

Deep Visual Odometry and Pose Reconstruction through Single Image Depth Map and Triangulation

Stefano Silvestrini (Politecnico di Milano)

Earth 2

Relative navigation solutions for planetary landing in close proximity to the surface have exploited geometry-based monocular Visual Odometry (VO) due to its robustness and accuracy. However, such methods encounter challenges in dynamic and low-texture environments, as well as scale drift, where errors accumulate over time. Recent research indicates that deep neural networks can autonomously learn scene depths and relative camera positions without relying on ground-truth labels. Despite this, their accuracy still falls short of traditional methods, primarily due to the absence of geometric information. Hybrid solutions have shown promising results; thus, this paper proposes DepthGlue, a VO pipeline that seamlessly integrates multi-view geometry and deep learning, leveraging single-image depth estimation (SIDE) for scale consistency and a CNN feature tracker and matcher network based on the LightGlue architecture.

Thu 13:40 – 14:00
Interplanetary Trajectories & Descents

14:00

Coffee break

14:30

Chairs: Gabriele Meoni (Earth Observation & Data Analysis); Emmanuel Blazquez (Interplanetary Trajectories & Descents)

Super-resolution of Sentinel-1 Imagery Using an Enhanced Attention Network and Real Ground Truth Data

Juan Francisco Amieva (Tracasa Instrumental S.L.)

Earth 1

Active imaging systems, particularly Synthetic Aperture Radar (SAR), offer notable advantages such as the ability to operate in diverse weather conditions and to provide day-and-night observations of Earth’s surface. These attributes are especially valuable when monitoring regions consistently obscured by clouds, as in Northern Europe. Sentinel-1 (S1) is a widely known SAR constellation offering free imagery. However, its spatial resolution limitations and speckle noise complicate data interpretation. Commercial SAR satellites provide high-resolution data but are costly for remote sensing experts. Motivated by these advantages and limitations, this paper introduces a novel deep learning-based methodology aimed at simultaneously reducing speckle noise and enhancing the spatial resolution of S1 data. Contrary to previous works that rely on a high-resolution satellite as ground truth (typically TerraSAR-X), we propose using the same satellite in another operational mode as ground truth. Accordingly, the proposed method enhances the spatial resolution of S1 Interferometric Wide Swath mode products from 10 to 5 m GSD by leveraging S1 Stripmap mode as the ground truth for training the model. As a result, the super-resolved images double the input spatial resolution, closing the gap between S1 and commercial SAR satellites.

Thu 14:30 – 14:50
Earth Observation & Data Analysis

Initialization Methods Lower Energy Needs of Spiking Neural Networks for Land Cover Classification

Magda Zajaczkowska (Loughborough University)

Earth 1

Spiking Neural Networks (SNNs) have received a lot of attention as an energy-efficient alternative to Artificial Neural Networks (ANNs). They are particularly useful for machine learning applications in space. Many learning algorithms have been developed to build energy-efficient SNNs and applied to real-world problems. However, the impact of initialization choices on the energy efficiency of SNNs has not been investigated thoroughly. In this paper, we study the impact of the initial values of neuronal time constants and weights, and of the reset mechanism for the membrane potential of neurons, on the energy requirements of trained SNNs. For this purpose, we trained several SNNs with different initialization choices on the land cover classification problem using the EuroSAT dataset. We clearly observed that lower initial weights and higher initial neuronal time constants result in SNNs that generate fewer spikes without any loss in performance. Different reset mechanisms did not have a significant impact on the number of spikes generated by trained SNNs.
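
A toy single-neuron illustration of the quantities studied above (initial weight scale, membrane time constant, reset rule); this is a didactic sketch, not the trained EuroSAT networks, and spike-count trends in full networks need not match the single-neuron toy.

```python
import numpy as np

def lif_spike_count(inputs, w_scale=0.5, tau=20.0, v_th=1.0, dt=1.0):
    """Count the output spikes of a single leaky integrate-and-fire neuron.

    inputs : (T, n_in) binary spike raster, e.g. rate-coded image pixels
    Sweeping w_scale and tau shows how initialization choices change the
    number of spikes, the quantity that dominates SNN energy use.
    """
    rng = np.random.default_rng(0)
    w = w_scale * rng.standard_normal(inputs.shape[1])
    v, spikes = 0.0, 0
    for x_t in inputs:
        v = v + dt / tau * (-v + w @ x_t)   # leaky integration
        if v >= v_th:
            spikes += 1
            v = 0.0                         # hard reset (one of the variants)
    return spikes
```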

Thu 14:50 – 15:10
Earth Observation & Data Analysis

AI-assisted Hazard Detection for safe lunar landing

Luca Ostrogovich (UNINA)

Earth 2

Identifying hazards that could compromise a safe landing is a critical step in the final phases of a lander’s descent trajectory. Landing on harsh terrain can cause the lander to tip over or prevent a deployed payload from fulfilling its mission. Many attempts to develop Hazard Detection systems have failed to provide a comprehensive and robust assessment of the hazards present in the landing area, particularly under adverse lighting conditions. The proposed algorithm utilizes deep learning techniques combined with standard processing methods to deliver a secure landing site to the onboard Guidance, Navigation, and Control system. By integrating data from optical images and LiDAR point clouds, the algorithm evaluates slope, extent, local roughness, presence of obstacles, and permanently shadowed regions, while also considering the fuel expense required to reach the designated site. The deep learning algorithm’s training and validation, as well as the hazard detection pipeline’s testing, were conducted using synthetic images and point clouds generated in a virtual simulation environment created in Unreal Engine, which enabled varying lighting conditions and lander orientation and altitude. The algorithm has yielded satisfactory outcomes, effectively pinpointing a secure landing location in all conducted tests, reaffirming the validity of the approach.

Thu 14:30 – 14:50
Interplanetary Trajectories & Descents

Guidance and Control Networks with Periodic Activation Functions

Sebastien Origer (ESA)

Earth 2

Inspired by the versatility of sinusoidal representation networks (SIRENs), we present a modified Guidance & Control Network (G&CNET) variant using periodic activation functions in the hidden layers. We demonstrate that the resulting G&CNETs train faster and achieve a lower overall training error on three different control scenarios on which G&CNETs have been tested previously. Prior work has already shown the impressive approximation power of SIREN networks, most notably for image and video reconstruction. We argue that since learning optimal control policies via behavioural cloning is essentially a regression task over a highly discontinuous function, akin to image reconstruction, it is unsurprising that SIRENs also excel in the context of G&CNETs.
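
For readers unfamiliar with SIRENs, a minimal periodic-activation layer with the initialisation from Sitzmann et al. (2020) is sketched below; the surrounding network sizes are illustrative, not the paper’s G&CNET architecture.

```python
import torch
import torch.nn as nn

class SirenLayer(nn.Module):
    """Linear layer followed by a sine activation, initialised as in the
    SIREN paper; omega_0 sets the frequency of the representation."""
    def __init__(self, in_f, out_f, omega_0=30.0, first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_f, out_f)
        bound = 1.0 / in_f if first else (6.0 / in_f) ** 0.5 / omega_0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A G&CNET-style policy mapping spacecraft state to a thrust command
# (layer sizes illustrative).
gcnet = nn.Sequential(
    SirenLayer(7, 128, first=True),   # e.g. position, velocity, mass
    SirenLayer(128, 128),
    nn.Linear(128, 3),                # thrust vector
)
```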

Thu 14:50 – 15:10
Interplanetary Trajectories & Descents

15:30

Break & farewell

16:00

Conference closes