Posters

Wednesday 18 Sep 2024

Delving into the Utilisation of ChatGPT in Scientific Publications in Astronomy
Simone Astarita (ESA)

Rapid progress in machine learning approaches to natural language processing has culminated in the rise of large language models over the last two years. Recent works have shown their unprecedented adoption in academic writing, but their pervasiveness in astronomy has not been studied sufficiently. To remedy this, we extract words that ChatGPT uses more often than humans when generating academic text and search a total of 1 million articles for them. This way, we assess the frequency of word occurrence in published works in astronomy tracked by the NASA Astrophysics Data System since 2000. We then perform a statistical analysis of the occurrences. We identify a list of words favoured by ChatGPT and find a statistically significant increase for these words against a control group in 2024, which matches the trend in other disciplines. These results suggest a widespread adoption of these models in the writing of astronomy papers. We encourage organisations, publishers, and researchers to work together to establish ethical and pragmatic guidelines to maximise the benefits of these systems while maintaining scientific rigour.
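
As a sketch of the kind of frequency analysis described (not the authors' code), the per-year rate of candidate "ChatGPT-favoured" words can be compared against control words; the word lists and corpus structure below are hypothetical.

```python
# Illustrative sketch: per-year frequency of candidate "ChatGPT-favoured" words
# versus control words in a corpus of abstracts. Word lists are hypothetical.
import re
from collections import Counter

FAVOURED = {"delve", "intricate", "pivotal", "showcasing"}  # assumed examples
CONTROLS = {"measure", "result", "model", "data"}           # assumed baseline

def yearly_rate(abstracts: dict[int, list[str]], vocab: set[str]) -> dict[int, float]:
    """Occurrences of `vocab` words per 10,000 tokens, per year."""
    rates = {}
    for year, texts in abstracts.items():
        tokens = [w for t in texts for w in re.findall(r"[a-z']+", t.lower())]
        counts = Counter(tokens)
        hits = sum(counts[w] for w in vocab)
        rates[year] = 1e4 * hits / max(len(tokens), 1)
    return rates
```

A statistically significant 2024 jump in the favoured-word rates that is absent in the control rates would match the trend the abstract reports.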

Fine-tuning LLMs for Autonomous Spacecraft Control: A case study using Kerbal Space Program
Alejandro Carrasco (MIT and Universidad Politécnica de Madrid)

Recent trends are emerging in the use of Large Language Models (LLMs) as autonomous agents that take actions based on the content of the user text prompt. This study explores the use of fine-tuned LLMs for autonomous spacecraft control, using the Kerbal Space Program Differential Games suite (KSPDG) as a testing environment. Traditional Reinforcement Learning (RL) approaches face limitations in this domain due to insufficient simulation capabilities and data. By leveraging LLMs, specifically fine-tuned models like GPT-3.5 and LLaMA, we demonstrate how these models can effectively control spacecraft using language-based inputs and outputs. Our approach integrates real-time mission telemetry into textual prompts processed by the LLM, which then generates control actions via an agent. The results open a discussion about the potential of LLMs for space operations beyond their nominal use for text-related tasks. Future work aims to expand this methodology to other space control tasks and evaluate the performance of different LLM families. The code is available at https://github.com/ARCLab-MIT/kspdg.
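
The control loop described lends itself to a compact sketch; the prompt format, the `query_llm` stand-in for the fine-tuned model, and the JSON action schema below are assumptions for illustration, not the KSPDG agent's actual interface.

```python
# Hypothetical telemetry -> prompt -> action loop; `query_llm` stands in for a
# call to a fine-tuned GPT-3.5/LLaMA model.
import json

def telemetry_to_prompt(tm: dict) -> str:
    return (
        "You control a pursuit spacecraft. Current relative state:\n"
        f"position (m): {tm['rel_pos']}\n"
        f"velocity (m/s): {tm['rel_vel']}\n"
        'Reply with JSON only: {"throttle": [fx, fy, fz]}, each value in [-1, 1].'
    )

def act(tm: dict, query_llm) -> list[float]:
    reply = query_llm(telemetry_to_prompt(tm))         # language-based I/O
    throttle = json.loads(reply)["throttle"]           # parse the model's output
    return [max(-1.0, min(1.0, f)) for f in throttle]  # clamp before actuation
```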

Spacecraft Inertial parameters estimation using time series clustering and reinforcement learning
Konstantinos Platanitis (Cranfield University)

This paper presents a machine learning approach to estimate the inertial parameters of a spacecraft in cases where those change during operations, e.g. multiple deployments of payloads, unfolding of appendages and booms, and propellant consumption, as well as during in-orbit servicing and active debris removal operations. The machine learning approach uses time series clustering together with an optimised actuation sequence generated by reinforcement learning to facilitate distinguishing among different inertial parameter sets. The performance of the proposed strategy is assessed against the case of a multi-satellite deployment system, showing that the algorithm is resilient to common disturbances in such operations.
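
A minimal sketch of the clustering half of this idea, using the tslearn library as an assumed tool (the abstract names no implementation): telemetry responses to the optimised actuation sequence are clustered, with each cluster corresponding to one candidate inertial-parameter set.

```python
# Sketch with stand-in data: cluster recorded attitude responses with DTW k-means.
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

responses = np.random.rand(40, 200, 3)   # 40 responses, 200 time steps, 3 axes
km = TimeSeriesKMeans(n_clusters=4, metric="dtw")
labels = km.fit_predict(responses)       # cluster id ~ inferred inertial parameter set
```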

Measuring AI model performance and domain shift in unlabelled image data from real scenarios
Aubrey Dunne (Ubotica Technologies)

When an Artificial Intelligence model runs in a real scenario, two situations are possible: 1) the data analysed follows the same distribution as the data used for model training and therefore the model performance is similar; or 2) the distribution of the new data is different, resulting in lower model performance. This is called “data/domain shift” and its measurement is desirable in order to reduce it. For example, for a model trained using images captured with high brightness, a change in the sensor may produce darker samples and make the model fail. To mitigate this problem, the sensor can be configured to obtain brighter images and thus reduce data shift. The simplest way to measure the shift is to compare metrics for the two data distributions. However, data captured in the real scenario is not labelled and an alternative is needed. In this work we propose using the Jensen-Shannon divergence score to measure the data shift. Results, obtained by using 5-fold cross-validation, show high correlation between the proposed metric and the accuracy (-0.81, -0.87 and -0.91) when test samples are modified for different brightness, sharpness and blur. The approach has applicability to autonomously measuring domain shift in Earth Observation data.
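
A minimal sketch of the proposed metric, assuming the compared distributions are brightness histograms (the bin count and the 8-bit pixel range are illustrative choices, not details from the paper):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon  # returns the JS *distance*

def brightness_hist(images: np.ndarray, bins: int = 64) -> np.ndarray:
    h, _ = np.histogram(images.ravel(), bins=bins, range=(0, 255))
    return h / h.sum()

def data_shift_score(train_imgs: np.ndarray, field_imgs: np.ndarray) -> float:
    p, q = brightness_hist(train_imgs), brightness_hist(field_imgs)
    return jensenshannon(p, q) ** 2  # square the distance to get the divergence
```

Per the abstract, this score correlates strongly and negatively with accuracy under brightness, sharpness, and blur perturbations, so it can flag domain shift without labels.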

Synthetic Dataset of Maneuvering Low Earth Orbit Satellite Trajectories for AI Analysis
Stéfan Baudier (Mines Paris, PSL University, Thales LAS France)

The characterization of satellite behavior is of paramount importance in Space Surveillance Awareness. It involves modeling complex patterns from large operational databases, making AI tools well-suited to handle this use case. Despite existing contributions, no database is dedicated to Pattern-of-Life study in the Low Earth Orbit regime. In this context, we provide a dataset of satellite trajectories, focusing on station-keeping issues. The proposed database contains generated trajectories based on real data. Our experiments on the provided dataset and real trajectories tend to verify the representativity of the data and highlight the complexity of the Pattern-of-Life related tasks.

From a TOMS to a Digital Twin
Lionel Brayeur (Spacebel)

Training Operations and Maintenance Simulators (TOMS) are very precise and complete satellite simulators on which the On-Board Software (OBSW) runs. These simulators model most avionic units as well as the physical environment.
However, TOMS are not entirely representative, especially due to the degradation and wear of satellite components over time. Increasing the representativeness of a TOMS can already improve the operational work performed with it. Maintaining such high representativeness over time would go a step further by turning the TOMS into a powerful tool for predictive maintenance.
With this goal in mind, we propose to turn a TOMS into a Digital Twin following two approaches that incorporate AI techniques: reconfiguration and surrogate modelling. Herein, we demonstrate a Proof of Concept (PoC) by applying these two approaches to different subsystems of a satellite and corroborate their viability. Additionally, we compare the performance of different AI techniques within these two distinct approaches for the PoC.

Learning from few labeled time series with segment-based self-supervised learning: application to remote-sensing
Antoine Saget (University of Strasbourg – ICube)

Taking advantage of unlabeled data remains a major challenge with current classification methods in most domains, including remote sensing. To address this issue, we introduce a novel method for selecting positive pairs in contrastive self-supervised learning (SSL) and apply it to remote sensing time series classification. Using preexisting groups (i.e., segments) within the data, our approach eliminates the need for strong data augmentations required in contrastive SSL. The learned representations can be used on downstream classification tasks with simple linear classifiers. We show that it achieves comparable performance to state-of-the-art models while requiring nearly half as much labeled data. We achieve 80% accuracy on a 20-class classification task with 50 labeled samples per class, while the best compared method requires 100. We experimentally validate our method on a new large-scale Sentinel-2 satellite image time series dataset for cropland classification.
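
The positive-pair idea pairs two series drawn from the same pre-existing segment, so no strong augmentations are needed. A minimal InfoNCE-style sketch of the resulting contrastive objective (the encoder, batch shapes, and temperature are assumptions, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1[i], z2[i]: embeddings of two time series drawn from the same segment i."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau               # (B, B) cosine-similarity matrix
    labels = torch.arange(z1.size(0))      # diagonal entries are the positives
    return F.cross_entropy(logits, labels)
```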

Modular Testing Framework for AI Integration in Spacecraft GNC Systems: A Case Study from the Raptor Project
Aurélien Bobey (IRT Saint-Exupery)

Developing and integrating an AI component within critical onboard software developed by a separate entity presents significant challenges, especially when Intellectual Property (IP) constraints limit code sharing.
Additionally, the differing technological backgrounds, frameworks, and test requirements of the collaborating entities can make it challenging to share developed components and test scenarios.

At Institute of Research and Technology Saint-Exupery (IRT), the RAPTOR project, in partnership with Thales Alenia Space, encountered these challenges while developing an AI-based image-processing component intended for integration into a Guidance, Navigation, and Control (GNC) closed loop system for space rendezvous.

This paper outlines the solutions implemented in the project. These include the utilization of a software simulator, the development of a Processor In the Loop (PIL) bench, and processes that facilitate component exchanges between the two test environments while safeguarding IP. By employing a modular test bench, high-fidelity experiments have been conducted and software versions of the developed AI-based image processing have been integrated into the software simulator. This approach demonstrates a viable way of working for software projects aiming to integrate AI components developed by separate entities, ensuring efficient integration and testing processes while guaranteeing IP integrity.

Area-energy-time tradeoff with a low-power accelerator for reliable Edge AI efficiency under real-world radiation
Philippe Reiter (IDLab, University of Antwerp – imec)

With the advent of Edge AI, machine learning algorithms such as neural networks (NNs) can process data closer to the data sources, including sensors, using low-power, commercially available off-the-shelf (COTS) compute devices, thanks to their real-time operability, reduced cost, and state-of-the-art performance. Edge AI thus finds itself used in space and remote terrestrial applications where, due to application constraints, it may not be ideally protected from environmental radiation such as neutrons and protons. This exposes these Edge AI systems to single-event effects (SEEs) that can lead to soft errors. These soft errors manifest as bit-flips and affect Edge AI reliability. However, resource limitations at the Edge require executing AI inferences more efficiently while maintaining reliability. Hence, we hypothesize that trading off additional runtime chip area for reduced execution time and energy with a hardware AI accelerator can improve Edge AI efficiency while maintaining similar levels of reliability. We tested our hypothesis by executing NN inferences on a low-power, ARM platform-based COTS device with a floating-point unit under neutron radiation, with proton-like SEE effects, at ChipIr in the UK. The tests showed that efficiency improved by 1.9 times while maintaining reliability against soft-error-induced bit-flips.

A Self-Supervised Task for Fault Detection in Satellite Multivariate Time Series
Carlo Cena (Politecnico di Torino)

In the space sector, due to environmental conditions and restricted accessibility, robust fault detection methods are imperative for ensuring mission success and safeguarding valuable assets. This work proposes a novel approach leveraging Physics-Informed Real NVP neural networks, renowned for their ability to model complex and high-dimensional distributions, augmented with a self-supervised task based on sensors’ data permutation. It focuses on enhancing fault detection within the satellite multivariate time series. The experiments involve various configurations, including pre-training with self-supervision, multi-task learning, and standalone self-supervised training. Results indicate significant performance improvements across all settings. In particular, employing only the self-supervised loss yields the best overall results, suggesting its efficacy in guiding the network to extract relevant features for fault detection. This study presents a promising direction for improving fault detection in space systems and warrants further exploration in other datasets and applications.
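
One plausible reading of the permutation-based pretext task (a guess at the mechanics for illustration, not the authors' code) is that sensor channels are shuffled and the network must recognise which permutation was applied:

```python
import torch

def permute_channels(x: torch.Tensor, n_perms: int = 8):
    """x: (batch, channels, time). Returns permuted windows and permutation ids."""
    perms = [torch.randperm(x.size(1)) for _ in range(n_perms)]
    ids = torch.randint(n_perms, (x.size(0),))
    xp = torch.stack([x[i, perms[int(k)], :] for i, k in enumerate(ids)])
    return xp, ids  # a classifier head is trained to predict `ids` from `xp`
```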

Demonstrating CLUE – AI applications for on-board FDIR and prognostics for constellations
Chiara Brighenti (S.A.T.E. Systems and Advanced Technologies Engineering S.r.l.)

Constellations require complex operations and must ensure high service reliability. In addition, the large number of satellites in orbit requires the use of strategies to reduce the risk of space debris towards the zero-debris goal promoted by ESA. These aspects make autonomy and improvement of the on-board FDIR system one of the most urgent developments to reduce operational costs and response time to unexpected events. This paper illustrates the results of some applications of CLUE to enhance on-board FDIR and prognostics for constellations. CLUE is a customizable software solution for predictive diagnostics, troubleshooting support and on-board prognostics developed by SATE, based on the selected use of artificial intelligence, with an approach that aims at rapid system configuration and validation for the entire constellation. The workflow for using CLUE includes hybrid on-board and on-ground deployment, system reconfigurability, and the evolving role of the “human in the loop.”

Assessment of DRL Testing Methodology for Autonomous Guidance in Space Applications
Andrea Brandonisio (Politecnico di Milano)

Autonomous guidance is increasingly crucial in space missions due to several factors driving the exploration and utilization of space. At the same time, among Artificial Intelligence methods, Deep Reinforcement Learning (DRL) is beginning to play an important role in addressing the generalization challenges associated with different mission scenarios. The proposed work develops a process based on model selection, reward modelling, and initial condition randomness to train and test DRL for autonomous guidance in space applications. Three different scenarios are used as case studies and for the methodology development: debris collision avoidance strategy planning for a LEO spacecraft, guidance optimization of a launcher's first-stage landing, and robotic arm trajectory optimization for capturing uncooperative targets.

Recent development in Artificial Intelligence for 5G RAN
Sanna Sandberg (ESA)

The implementation of Artificial Intelligence (AI) in Radio Access Networks (RAN) is becoming a significant tool to meet the demands of future networks, including non-terrestrial networks (NTNs). This paper summarizes significant advancements and solutions from recent years, focusing on AI's role in enhancing specific use cases within the 5G NR RAN, particularly in the domains of network energy saving, load balancing, and mobility optimization. The discussion includes recent scientific studies and technological advancements supporting these applications, as detailed in the latest 3GPP technical reports. Various AI applications are already being investigated in satellite communication spacecraft and systems design, such as the detection of faulty equipment, methods for efficient spectrum utilization and interference mitigation, adaptive beamforming, and optimization of satellite resources. However, these applications are not directly linked to the 5G standards. This paper explores the implications of AI developments in the 5G NTN standard, specifically for 5G NR non-terrestrial networks, highlighting how AI can optimize and enhance NTN operations, also in view of the upcoming 6G network.

AI-enabled 5G base-station with Hardware Acceleration for Non-Terrestrial Networks on a Space-Grade System-on-Chip
Michael Petry & Andreas Koch (Airbus)

Although 5G Non-Terrestrial Networks envision regenerative satellites with on-board 5G modems to offer the highest degree of user experience, current base-station technology prohibits practical deployment in space. We bridge this gap by presenting a shared platform that unifies gNodeB and Artificial Intelligence (AI) functionality on a hardware-accelerated, space-grade System-on-Chip. By porting OpenAirInterface’s gNodeB onto this architecture and offloading DSP-intensive processing tasks of the Physical layer to its Field Programmable Gate Array, benchmarks indicate a reduction in resource requirements by over one order of magnitude, enabling in-space processing and parallel execution of future AI workloads. Moreover, hardware-accelerated support for the Linux Industrial I/O Subsystem interface enables efficient connectivity to space-ready RF frontends. A discussion on further steps towards core network integration concludes this paper.

Earth Observation Satellite Scheduling with Graph Neural Networks
Guillaume Infantes (Jolibrain)

The Earth Observation Satellite Planning (EOSP) problem involves scheduling requested observations on an agile satellite while adhering to visibility windows and operational constraints, and selecting a subset of observations to maximize their cumulative benefit. Traditional approaches rely on heuristic and iterative search algorithms.
This paper introduces a new technique using Graph Neural Networks and Deep Reinforcement Learning to select and schedule observations efficiently. The proposed method extracts relevant information from EOSP graphs and leverages reinforcement learning for scheduling. Experiments demonstrate that the technique learns effectively on small instances and generalizes well to larger, real-world instances, outperforming traditional learning-based methods.

EASTERN project: Earth observation models for weather event mitigation
Selene Bianco (aizoOn Technology Consulting)

Flooding is one of the most frequent natural disasters worldwide, resulting in substantial socioeconomic losses and public health threats.
The EASTERN project proposes an innovative approach exploiting Artificial Intelligence (AI) techniques to combine data collected from Synthetic Aperture Radar (SAR) imaging and ground measurements for real-time flood risk assessment. Specifically, the goal is to focus on both immediate dangers associated with landslides and secondary risks resulting from the increased prevalence of disease vectors in affected regions.

For the first use case, data from GNSS (Global Navigation Satellite System) technologies and interferometric analysis of synthetic aperture radar images (InSAR) will be combined to offer complementary insights into Earth’s surface deformation and identify susceptible locations prone to landslides. For the second use case, high resolution SAR data will be exploited to predict whether a flooded area may become an ecological niche for arbovirosis vectors.

The application of advanced AI technologies for these Earth Observation tasks will allow for a prompt response to flooding events and will become a valuable support in the decision-making process of preventing and mitigating the consequences of extreme weather events.

Application of Artificial Intelligence for Predicting Stellar Flares: Analysis and Implementation of Transformers
Michaela Veselá (Mendel University)

Flares are energetic eruptions of stars that can negatively affect orbiting planets and the atmospheres of exoplanets. Researching these flares is important for understanding their effects on the habitability of distant worlds. With the growth of exoplanet research, the need to predict flares has increased. This encourages the deployment of advanced machine learning and artificial intelligence algorithms for the prediction of flares, and the comparison of these approaches with classical ones. New methods make it possible to identify flares in astronomical data with high accuracy. This article analyzes the available data suitable for flare prediction, evaluates current approaches from the field of artificial intelligence, and selects Transformer-based models for flare prediction. The advantage of Transformer-based models is the possibility of training on broader data contexts. The article provides interesting results that can be followed up with further research.

Deep-Neural-Network-based Anomaly Detector
Miguel Fernandez Costales (University of Oviedo)

The Electrical Power Subsystem (EPS) of a spacecraft is paramount to its operation, since it guarantees that every piece of equipment receives its required power. Therefore, the reliability of the power subsystem is one of the cornerstones of full spacecraft reliability. DC/DC converters are among the main constituents of power subsystems. A method able to estimate the degradation of a DC/DC converter would enhance power system reliability: it would make it possible to detect DC/DC converters prone to failure and to take corrective actions to extend their remaining lifespan.

Failure Management via Deep Reinforcement Learning for Robotic In-Orbit Servicing
Matteo D’Ambrosio (Politecnico di Milano)

Recent applications of AI and reinforcement learning to enhance spacecraft autonomy, reactivity, and adaptability in future In-Orbit Servicing and Active Debris Removal activities have shown potential for overcoming some of the main difficulties encountered in orbital robotics missions, and could eventually provide practical solutions to strengthen current capabilities. In highly critical missions, such as the capture of a spacecraft or debris through a robotic arm, a system with heightened fault tolerance to conditions such as single-joint manipulator failures is of particular interest and could help avoid disastrous collision events. To this end, this work evaluates the ability of an autonomous agent, trained through meta-reinforcement learning to guide a space manipulator in real time so that its end-effector synchronizes with a desired state fixed to the mission target, to adapt to unexpected failures in the joints. These preliminary results show that the agent indeed improves the ability of the system to autonomously overcome manipulator failure events, as long as the goal end-effector position remains within the manipulator's workspace after the joint failure.

AI-driven Risk-Aware scheduling for active debris removal missions
Hugo de Rohan Willner (CentraleSupélec – University of Paris-Saclay)

The proliferation of debris in Low Earth Orbit (LEO) represents a significant threat to space sustainability and spacecraft safety. Active Debris Removal (ADR) has emerged as a promising approach to address this issue, utilizing Orbital Transfer Vehicles (OTVs) to facilitate debris deorbiting, thereby reducing future collision risks. However, ADR missions are substantially complex, necessitating accurate planning to make the missions economically viable and technically effective. Moreover, these servicing missions require a high level of autonomous capability to plan under evolving orbital conditions and changing mission requirements. In this paper, an autonomous decision-planning model based on Deep Reinforcement Learning (DRL) is developed to train an OTV to plan optimal debris removal sequencing. It is shown that using the proposed framework, the agent can find optimal mission plans and learn to update the planning autonomously to include risk handling of debris with high collision risk.

Understanding the Limitations of Single-Event Upset Simulation in Deep Learning Libraries
Toon Vinck (KU Leuven / Magics Technologies)

Fault Injection (FI) for Single-Event Upset (SEU) simulation is essential to evaluate the reliability of Deep Neural Networks (DNNs) intended for deployment in radiation-intensive environments, such as space. Although accurate FI methods based on Register-Transfer Level (RTL) exist, their computational demands limit scalability. On the other hand, software-based FI simulations are faster but notoriously inaccurate. Moreover, the sources of these inaccuracies are still poorly understood.
This paper contributes to this understanding by performing a comparative analysis of RTL-level FI and DNN-level FI during inference. While the former injects bit-faults at register level in the RTL design of a DNN accelerator, the latter will inject bit-faults in intermediate tensors used by a deep learning library.
Our findings reveal that DNN-level FI can underestimate critical errors by a factor of 3 when compared to RTL-level FI. The sources of this inaccuracy are identified and opportunities for improving DNN-level FI are presented.
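
For context, DNN-level FI of the kind compared here is typically implemented as random bit-flips in intermediate tensors; a minimal sketch (the injection policy shown is illustrative):

```python
import numpy as np

def flip_random_bit(t: np.ndarray, rng=np.random.default_rng()) -> np.ndarray:
    """Flip one random bit of one element of a float32 tensor."""
    flat = t.astype(np.float32).ravel().view(np.uint32).copy()
    idx = rng.integers(flat.size)          # which element
    bit = rng.integers(32)                 # which bit; exponent bits are critical
    flat[idx] ^= np.uint32(1) << np.uint32(bit)
    return flat.view(np.float32).reshape(t.shape)
```

Injecting only at the tensor level misses faults in the accelerator's registers and control logic, which is one plausible source of the underestimation reported above.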

Reinforced Model Predictive Guidance and Control for Spacecraft Proximity Operations
Lorenzo Capra (Politecnico di Milano)

An increased level of autonomy is attractive above all in the framework of proximity operations, and researchers are focusing more and more on artificial intelligence techniques to improve spacecraft capabilities in these scenarios. This work presents an autonomous AI-based guidance algorithm that plans the sub-optimal path of a chaser spacecraft for the map reconstruction of an artificial uncooperative target, coupled with Model Predictive Control for tracking the generated trajectory. Deep Reinforcement Learning is particularly interesting for enabling autonomous spacecraft guidance, since the problem can be formulated as a Partially Observable Markov Decision Process and since, thanks to the generalization capabilities of neural networks, it copes well with model uncertainty through domain randomization. The main drawback of this method is that its optimality is difficult to verify mathematically, and constraints can only be added as part of the objective function, so the solution is not guaranteed to satisfy them. To this aim, a convex Model Predictive Control formulation is employed to track the RL-based trajectory while simultaneously enforcing compliance with the constraints. The algorithm is extensively tested in an end-to-end AI-based pipeline with image generation in the loop, and the results are presented.

Inter-Satellite Link Prediction with Supervised Learning: An Application in Sun-synchronous Orbits
Estel Ferrer Torres (i2CAT Foundation)

In the space industry, Distributed Space Systems are gaining importance for improving mission performance through collaboration and resource sharing among multiple satellites. When this collaboration requires communication between heterogeneous satellites, achieving satellite-autonomous cooperation is crucial to avoid complex centralized computations, ground dependencies, and extensive coordination efforts between stakeholders. Considering nano-satellites with limited resources, autonomy involves a cost-efficient method for predicting contact opportunities or Inter-satellite links (ISLs) to reduce energy wastage from unsuccessful communication attempts.

In this sense, our approach relies on a pre-trained Supervised Learning model to anticipate ISL opportunities between different pairs of Sun-synchronous orbit satellites. The contact opportunities are modeled as “close approach” considering the perturbations caused by Earth’s shape and atmospheric drag.
Results show that the SL model can anticipate two days of encounters with a Balanced Accuracy higher than 90% when compared with realistic data from an available database.

Exploring Machine Learning for Cloud Segmentation in Thermal Satellite Images of the FOREST-2 Mission
Lukas Kondmann (OroraTech GmbH)

Thermal remote sensing is critical for climate monitoring but is often hindered by clouds. Accurate cloud detection is essential for users of this data; however, research on cloud detection within the thermal domain is limited so far. In this paper, we explore cloud detection with long-wave infrared images of the FOREST-2 mission. We create a global, manually labeled dataset of 528 images. A UNet with a MobileNet encoder balances model size and performance well, with a macro F1 score of 0.819 and an accuracy of 0.873. Gradient Boosting performs best among more traditional techniques, with 0.762 macro F1 and 0.877 accuracy. This provides a starting point for thermal-only research to inform upcoming constellations.
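
A sketch of the segmentation setup described, using the common segmentation_models_pytorch pairing of a UNet with a MobileNet encoder (the library choice and hyperparameters are assumptions, not the authors' code):

```python
import torch
import segmentation_models_pytorch as smp

# Single-channel long-wave infrared input, two classes (cloud vs. clear).
model = smp.Unet(encoder_name="mobilenet_v2", encoder_weights=None,
                 in_channels=1, classes=2)
x = torch.randn(4, 1, 256, 256)   # a batch of thermal tiles
logits = model(x)                 # (4, 2, 256, 256) per-pixel class scores
```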

Insight4EO: Disaster Management in Real Time
Anya Forestell (Deimos Space UK)

Integrating Earth Observation (EO) products onto spacecraft opens new application possibilities, with data prioritization techniques conserving bandwidth and power, and emergency management solutions offering timely alerts. However, challenges arise in accessing L1/L2 data and creating an efficient execution environment. Insight4EO addresses these with optimized L1/L2 products designed for onboard processing of both Synthetic Aperture Radar (SAR) and optical payloads, and a versatile multi-application environment using Multi-Processor and FPGA technology. Demonstrated by an extreme weather application, Insight4EO facilitates efficient EO processing onboard spacecraft for disaster management in real time.

Heterogeneity: An Open Challenge for Federated On-Board Machine Learning
Maria Hartmann (University of Luxembourg)

The design of satellite missions is currently undergoing a paradigm shift from the historical approach of individualised monolithic satellites towards distributed mission configurations, consisting of multiple small satellites. With a rapidly growing number of such satellites now deployed in orbit, each collecting large amounts of data, interest in on-board orbital edge computing is rising. Federated Learning is a promising distributed computing approach in this context, allowing multiple satellites to collaborate efficiently in training on-board machine learning models. Though recent works on the use of Federated Learning in orbital edge computing have focused largely on homogeneous satellite constellations, Federated Learning could also be employed to allow heterogeneous satellites to form ad-hoc collaborations, e.g. in the case of communications satellites operated by different providers. Such an application presents additional challenges to the Federated Learning paradigm, arising largely from the heterogeneity of such a system. In this position paper, we offer a systematic review of these challenges in the context of the cross-provider use case, giving a brief overview of the state of the art for each, and providing an entry point for deeper exploration of each issue.
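
For context, the homogeneous baseline that the surveyed challenges depart from is FedAvg-style aggregation, a sketch of which follows; cross-provider heterogeneity is precisely where this uniform treatment of clients breaks down.

```python
import numpy as np

def fedavg(client_weights: list[list[np.ndarray]], n_samples: list[int]) -> list[np.ndarray]:
    """Average per-layer model weights, weighted by each client's data volume."""
    total = sum(n_samples)
    return [sum(w[k] * (n / total) for w, n in zip(client_weights, n_samples))
            for k in range(len(client_weights[0]))]
```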

Autonomous Payload Thermal Control
Alejandro Mousist (Thales Alenia Space)

In small satellites there is less room for heat control equipment, scientific instruments, and electronic components. Furthermore, the near proximity of electronic components makes power dissipation difficult, with the risk of not being able to control the temperature appropriately, reducing component lifetime and mission performance.
To address this challenge, taking advantage of the advent of increasing intelligence on board satellites, an autonomous thermal control tool that uses deep reinforcement learning is proposed for learning the thermal control policy onboard.
The tool was evaluated in a real space edge processing computer that will be used in a demonstration payload hosted in the International Space Station (ISS).
The experiment results show that the proposed framework is able to learn to control the payload processing power to maintain the temperature under operational ranges, complementing traditional thermal control systems.

Transfer learning for Wall Temperature Prediction in Regenerative Cooled 22N Thrust Chambers
Till Hörger (DLR)

A neural network originally trained for hot gas wall temperature prediction in regeneratively cooled LOX/Methane rocket engines is adapted via transfer learning to predict hot gas wall temperatures in much smaller engines operated with nitrous oxide/ethane, which show different thermal behavior. The source net was trained with synthetic data generated by CFD simulations, whereas the transfer learning was done with a limited number of experimental data. The adapted network shows good prediction of the hot gas wall temperature in the test data set.

Machine Learning-based vs Deep Learning-based Anomaly Detection in Multivariate Time Series for Spacecraft Attitude Sensors
Riccardo Gallon (TU Delft, Airbus)

In the framework of Failure Detection, Isolation and Recovery (FDIR) on spacecraft, new AI-based approaches are emerging in the state of the art to overcome the limitations commonly imposed by traditional threshold checking.

The present research aims at characterizing two different approaches to the problem of stuck values detection in multivariate time series coming from spacecraft attitude sensors. The analysis reveals the performance differences in the two approaches, while commenting on their interpretability and generalization to different scenarios.


Thursday 19 Sep 2024

Spiking monocular event based 6D pose estimation for space application
Jonathan Courtois (Université Cote d’Azur, CNRS, LEAT)

With the growing interest in On-Orbit Servicing (OOS) and Active Debris Removal (ADR) missions, spacecraft pose estimation algorithms are being developed using deep learning to improve the precision of this complex task and find the most efficient solution. Given the advances in bio-inspired low-power solutions, such as spiking neural networks and event-based processing and cameras, and their recent application to space, we investigate the feasibility of a fully event-based solution to spacecraft pose estimation. In this paper, we work with SEENIC, the first event-based dataset for this use case, with real event frames captured by an event-based camera on a testbed. We present the methods and results of the first event-based solution for this use case, where our small spiking end-to-end network (S2E2) achieves 21 cm position error and 14° rotation error, a first step towards fully event-based processing for embedded spacecraft pose estimation.

Training Datasets Generation for Machine Learning: Application to Vision Based Navigation
Massimo Casasco (ESA) & Jérémy Lebreton (Airbus)

Vision Based Navigation consists in utilizing cameras as precision sensors for GNC after extracting information from images. To enable the adoption of machine learning for space applications, one of the obstacles is demonstrating that available training datasets are adequate to validate the algorithms. The objective of the study is to generate datasets of images and metadata suitable for training machine learning algorithms.
Two use cases were selected and a robust methodology was developed to validate the datasets, including the ground truth. The first use case is in-orbit rendezvous with a man-made object: a mockup of the satellite ENVISAT. The second use case is a Lunar landing scenario. Datasets were produced from archival data (Chang'e 3), from the laboratory at the DLR TRON facility and at the Airbus Robotic laboratory, from the SurRender high-fidelity image simulator using Model Capture, and from Generative Adversarial Networks. The use case definition included the selection of benchmark algorithms: an AI-based pose estimation algorithm and a COTS dense optical flow algorithm were selected. Eventually it is demonstrated that datasets produced with SurRender and the selected laboratory facilities are adequate to train machine learning algorithms.

Maximizing Celestial Awareness: 6-Synchronized-Star-Trackers for Attitude Determination
Aziz Amari (Ubotica Technologies)

Attitude determination is a crucial task for space missions and relies on multiple onboard sensors such as sun sensors, magnetometers, and Earth horizon sensors. Star trackers, which identify stars in a scene and match them against an existing star catalog to determine the attitude, provide superior performance compared to traditional sensors but were previously reserved for high-end missions. With the increasing popularity of small satellites, a trade-off between cost, efficiency, and precision is often encountered. Nowadays, star sensors have undergone significant advancements, becoming more efficient and accessible due to notable enhancements in hardware and software, particularly through the integration of neural networks. This leveraging of artificial intelligence (AI) has enabled the development of a compact and reliable star sensor, potentially eliminating the need for other sensor types. In this work, 6-Synchronized-Star-Trackers (6SST), a sensor with multiple imaging channels, is proposed to obtain wider celestial coverage and hence greater reliability. To justify this configuration, a more efficient and optimised software pipeline, along with an enhanced hardware implementation, is required.

Gaia BP/RP spectra classification through Self-Organizing Maps
Lara Pallas-Quintela (CITIC-University of A Coruna)

The spectra gathered by the Gaia mission were published for the first time in its third data release. A huge effort was made to extract astrophysical information from the low-resolution spectra (BP/RP), such as effective temperature, metallicity, or the type of the sources being analysed. However, some of the sources could not be correctly classified, and the Outlier Analysis working package grouped them in a Self-Organizing Map by taking into account only their spectra. Given these promising results, we want to go one step further by classifying the BP/RP spectra of the sources following a similar unsupervised approach. In this work, we introduce a spectral classification approach where we build our models in an unsupervised way through Self-Organizing Maps, which can be scaled to any number of sources to be processed.
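
As an illustration of the approach (not the study's code), the MiniSom library trains such a map directly on sampled spectra; the grid size and input length below are placeholders:

```python
import numpy as np
from minisom import MiniSom

spectra = np.random.rand(5000, 110)   # stand-in for sampled BP/RP spectra
som = MiniSom(20, 20, spectra.shape[1], sigma=1.5, learning_rate=0.5)
som.train_random(spectra, num_iteration=10_000)
# Each source maps to its best-matching unit; nearby units group similar spectra.
units = [som.winner(s) for s in spectra]
```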

Development of a robust onboard data processing unit using commercial off-the-shelf system-on-chips and Neuromorphic Co-processors
Wouter Benoot (EDGX)

A paradigm shift for onboard processing in space is currently underway. In the coming years, we will demand more autonomy and flexibility from our satellites. Companies and institutions are exploring novel technologies, such as neuromorphic computing architectures and commercial-off-the-shelf (COTS) options, to cope with that demand. The following paper discusses the ongoing development of a data processing unit (DPU) that combines high-performance System-on-Chips (SoCs), such as the NVIDIA Jetson Orin, with neuromorphic Network-on-Chip (NoC) co-processors, such as the BrainChip AKD1000 and AKD1500 or the SpiNNcloud Systems SpiNNaker2. The heterogeneous DPU will withstand harsh space environments through system-level hardening. It will function as a robust, reliable and user-friendly payload processing unit. Its successful integration onboard satellites will unlock a whole new level of potential for onboard algorithms.

An open source Multi-Agent Deep Reinforcement Learning Routing Simulator for satellite networks
Federico Lozano-Cuadra (University of Malaga)

This paper introduces an open source simulator for packet routing in Low Earth Orbit Satellite Constellations (LSatCs). The simulator, implemented in Python, supports traditional Dijkstra-based routing as well as more advanced learning solutions based on Q-Routing and Multi-Agent Deep Reinforcement Learning (MA-DRL) from our previous work. It uses an event-based approach with the SimPy module to accurately simulate packet creation, routing and queuing, providing real-time tracking of queues and latency. The simulator is highly configurable, allowing adjustments in routing policies, traffic, ground and space segment topologies, communication parameters, and learning hyperparameters. Key features include the ability to visualize system motion and track packet paths while considering the inherent uncertainties of such a dynamic system. Results highlight significant improvements in end-to-end (E2E) latency using Reinforcement Learning (RL)-based routing policies compared to traditional methods. The source code, the documentation and a Jupyter notebook with post-processing results and analysis are available on GitHub.
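
The event-based core of such a simulator is compact; a minimal SimPy sketch of packet creation, queueing, and per-hop latency tracking (the parameters are illustrative, not the simulator's defaults):

```python
import simpy

def source(env, queue, interval=1.0):
    i = 0
    while True:
        yield env.timeout(interval)
        queue.put((i, env.now))            # packet id and creation time
        i += 1

def link(env, queue, service_time=1.5):
    while True:
        pkt, t0 = yield queue.get()
        yield env.timeout(service_time)    # transmission over one hop
        print(f"packet {pkt}: latency so far {env.now - t0:.1f}s")

env = simpy.Environment()
q = simpy.Store(env)                       # one satellite's transmit queue
env.process(source(env, q))
env.process(link(env, q))
env.run(until=10)
```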

AI-augmented Anomaly Detection Pipeline for Real-Time Onboard Processing of Spacecraft Telemetry
Andreas Koch (Airbus)

As the number of sensors and the processing capabilities of satellites keep improving, the opportunity arises to implement a real-time onboard monitoring system with the goal of detecting anomalies. In this paper, we identify sets of sensors suitable for monitoring, outline their real-time processing requirements and present an AI-augmented processing pipeline that matches or exceeds these requirements. The pipeline incorporates examples of every component of streaming anomaly detection algorithms, including the capability to detect concept drift and autonomously adapt to it.

Generative Design of Periodic Orbits in the Restricted Three-Body Problem
Alvaro Francisco Gil (Universidad Politécnica de Madrid)

The Three-Body Problem has fascinated scientists for centuries and it has been crucial in the design of modern space missions. Recent developments in Generative Artificial Intelligence hold transformative promise for addressing this longstanding problem. This work investigates the use of Variational Autoencoder (VAE) and its internal representation to generate periodic orbits. We utilize a comprehensive dataset of periodic orbits in the Circular Restricted Three-Body Problem (CR3BP) to train deep-learning architectures that capture key orbital characteristics, and we set up physical evaluation metrics for the generated trajectories. Through this investigation, we seek to enhance the understanding of how Generative AI can improve space mission planning and astrodynamics research, leading to novel, data-driven approaches in the field.

AI Assistants for Spaceflight Procedures: Combining Generative Pre-Trained Transformer and Retrieval-Augmented Generation on Knowledge Graphs With Augmented Reality Cues
Oliver Bensch (DLR)

This paper describes the capabilities and potential of the intelligent personal assistant (IPA) CORE (Checklist Organizer for Research and Exploration), designed to support astronauts during procedures onboard the International Space Station (ISS), the Lunar Gateway station, and beyond. We reflect on the importance of a reliable and flexible assistant capable of offline operation and highlight the usefulness of audiovisual interaction using augmented reality elements to intuitively display checklist information. We argue that current approaches to the design of IPAs in space operations fall short of meeting these criteria. Therefore, we propose CORE as an assistant that combines Knowledge Graphs (KGs), Retrieval-Augmented Generation (RAG) for a Generative Pre-Trained Transformer (GPT), and Augmented Reality (AR) elements to ensure an intuitive understanding of procedure steps, reliability, offline availability, and flexibility in terms of response style and procedure updates.

Satellite Remote Sensing and AI: The Detection and Mapping of West African Seagrass
Richard Teeuw (University of Portsmouth)

We use machine learning with satellite remote sensing for the detection of seagrass meadows. We examine areas of seagrass on the coast of West Africa, picking sites in Mauritania, Senegal and Guinea-Bissau with differing seagrass and water turbidity environments. Verification of seagrass presence was supported by boat-based seafloor grab samples, scuba observations and sonar surveys. Various types of machine learning were applied to satellite imagery for detecting seagrass and other coastal habitat types, as well as for mapping the extent of the inter-tidal zone. Large areas can be mapped by satellite imagery at low cost relative to other survey types. That provides a useful diagnostic of regional seagrass extent, from which changes over time can be monitored.

APP4AD, the Advanced Payload data Processing for Autonomy & Decision agent for future EO and planetary exploration missions
Vito Fortunato (Planetek Italia)

APP4AD is a project funded by the Italian Space Agency aimed at fostering the development of innovative ideas on on-board space technologies. The main challenge of APP4AD is to enable on-board systems to process data from sensors and scientific instruments, and to perform evaluations that are traditionally handled by human operators. This is achieved through the use of autonomous systems and artificial intelligence. The outcome of APP4AD will be a prototype of on-board software that can be integrated into typical operational missions. Two reference scenarios have been defined for demonstration purposes: one in Earth Observation and the other in planetary exploration. The Earth Observation scenario aims to detect oil spills on-board in a timely manner using SAR data. The planetary exploration scenario aims to enable a rover to explore Mars or any other rocky celestial body almost entirely autonomously. The development team is composed of a consortium of three Italian SMEs: Planetek Italia, Geophysical Applications Processing, and S.A.T.E. Systems and Advanced Technologies Engineering.

Ising Machines With Feasibility Guarantee
Max Bannach (ESA)

Ising machines are promising new hardware accelerators whose use is studied in many space applications, such as satellite scheduling and trajectory optimization. These accelerators can only solve unconstrained problems, which makes programming them challenging and unnatural. Previous studies have revealed that common tricks to encode constraints into these unconstrained problems lead to numeric instabilities and even infeasible solutions.

We propose a solution to both issues by relying on the well-established paradigm from symbolic artificial intelligence of decoupling reasoning and optimization. To that end, we use an Ising machine (for optimization) in conjunction with a SAT solver (for reasoning) and develop an algorithm for the maximum satisfiability problem based on the implicit hitting set approach. We argue that it is more natural to use maximum satisfiability as a general-purpose language, prove that our algorithm is guaranteed to output a feasible solution, and provide a prototype implementation that experimentally shows the advantages and disadvantages of the approach. In this sense, the proposed algorithm can be seen as a new interface to Ising machines that avoids the direct use of the quadratic unconstrained binary optimization problem.

Power converter parameter prediction based on Extended Kalman Filter
Miguel Fernandez Costales (University of Oviedo)

In space power systems, high levels of reliability are required so as not to jeopardize the objective of the mission. With the purpose of increasing their lifetimes, a non-invasive health monitoring method is presented that estimates the parasitic resistance of the converters that make up the power subsystem. This parasitic resistance increases as the system degrades. The method is based on the use of an extended Kalman filter. By taking measurements already required either by the control stage or for telemetry purposes, it is demonstrated that it is possible to detect an increase in parasitic resistance in the converter. The implementation of this method has been validated both through simulation and experimentally.
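
A generic EKF step of the kind described, with the parasitic resistance appended to the state vector so the filter estimates it alongside the converter states (the converter-specific models f, h and their Jacobians F, H are assumed given):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict: propagate the augmented state [converter states, R_parasitic].
    x_pred = f(x, u)
    Fx = F(x, u)
    P_pred = Fx @ P @ Fx.T + Q
    # Update using measurements already taken for control or telemetry.
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R
    K = P_pred @ Hx.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new   # an upward drift in R_parasitic signals degradation
```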

Enabling X-ray Flux Integration for Improved Storm-Time Electron Density Predictions with Recurrent Neural Networks
Liam Smith (Georgia Institute of Technology)

Current ionosphere models struggle during storm-time activity. We have developed a Machine Learning (ML) model of the electron density of the ionosphere that aims to improve upon existing approaches by incorporating Recurrent Neural Networks (RNNs), which also allow us to use X-ray flux data. We have trained three models on GPS Radio Occultation (GPSRO) data, which serve as stepping-stone comparisons to highlight the advantages of RNNs and X-ray data incorporation. We compare these models with each other as well as with IRI and show improved performance, particularly during storm-time (Kp×10 ≥ 50). Pearson correlation coefficients for both hmF2 and NmF2 across the range of Kp values from 50 to 90 are higher across the board for the X-ray RNN approach, with a less pronounced difference during quiet-day conditions. This highlights the importance of using X-ray data to predict the electron density of the ionosphere during storm-time. During quiet-day conditions, we see improved predictions when using RNNs.

AI-based Sensor Fusion for Spacecraft Relative Position Estimation around Asteroids
Iain Hall (The University of Strathclyde)

Asteroid missions depend on autonomous navigation to carry out operations. The estimation of the relative position of the asteroid is a key step but can be challenging in poor illumination conditions. We explore how data fusion of optical and thermal sensor data using machine learning can potentially allow for more robust estimation of position. Source-level fusion of visible and thermal images using Convolutional Neural Networks is developed and tested using synthetic images based on ESA's Hera mission scenario. It is shown that the use of thermal images allows for improved feature extraction, and that source-level sensor fusion achieves better results than using thermal images alone. This results in better identification of the asteroid's centroid but has a much smaller effect on range estimation.

YOLOv8-Based Architecture for Pose Estimation of Uncooperative Spacecraft
Matteo Palescandolo (Università degli Studi di Napoli Federico II)

Achieving autonomous spacecraft operations is crucial for missions such as On-Orbit Servicing (OOS) and Active Debris Removal (ADR). In this frame, this work focuses on visual-based relative navigation by introducing a pose estimation architecture relying on the combination of Convolutional Neural Networks (CNNs) with state-of-the-art Perspective-n-Point (PnP) solvers. Specifically, a two-stage deep learning approach is proposed: in the first stage, a YOLOv8-based network effectively localizes the target, allowing the selection of a region of interest; the second stage employs a specialized YOLOv8-pose network to detect and identify a set of 2D points on the image plane corresponding to natural features of the target's surface by solving a regression problem. The resulting set of 2D-3D point correspondences is input to an analytical PnP solver, followed by a first pose refinement step relying on the numerical solution of a least-squares problem through the Gauss-Newton method, and a subsequent additional refinement based on the Levenberg-Marquardt algorithm. Performance assessment is carried out by training and testing the CNNs and the entire processing pipeline on the SPEED+ dataset, obtaining a mean rotation error of 1.37° and a mean translation error of 4.7 cm.
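
The final stage of such a pipeline maps onto standard OpenCV calls; a sketch assuming the 2D keypoints from the YOLOv8-pose network and the matching 3D model points are already available (OpenCV exposes a Levenberg-Marquardt refiner; the intermediate Gauss-Newton step is specific to the authors' pipeline and omitted here):

```python
import cv2
import numpy as np

def estimate_pose(pts3d: np.ndarray, pts2d: np.ndarray, K: np.ndarray):
    """pts3d: (N, 3) model points; pts2d: (N, 2) detected keypoints; K: camera matrix."""
    dist = np.zeros(5)                                       # assume undistorted images
    ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, dist,
                                  flags=cv2.SOLVEPNP_EPNP)   # analytical solve
    rvec, tvec = cv2.solvePnPRefineLM(pts3d, pts2d, K, dist,
                                      rvec, tvec)            # iterative refinement
    return rvec, tvec
```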

A Curriculum Learning Approach for Improving Constraint Handling in Reinforcement Learning Applications to Spacecraft Guidance and Control
Alessandro Zavoli (Sapienza University of Rome)

This paper presents a general reward function formulation for efficiently addressing a fuel-optimal spacecraft guidance and control problem via reinforcement learning, which integrates the ε-constraint method as a form of curriculum learning to incrementally enforce constraints during training. Results from a benchmark problem demonstrate that this curriculum learning approach significantly enhances both constraint satisfaction and control optimality, providing a promising solution to the limitations of traditional RL techniques in space guidance applications.
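
A minimal sketch of an ε-constraint curriculum reward of the kind described (the objective, penalty weight, and decay schedule are illustrative, not the paper's formulation):

```python
def reward(fuel_used: float, violation: float, episode: int,
           eps0: float = 1.0, decay: float = 0.999, penalty: float = 100.0) -> float:
    eps = eps0 * decay ** episode   # curriculum: tolerance shrinks over training
    r = -fuel_used                  # fuel-optimality objective
    if violation > eps:             # constraints enforced only beyond tolerance
        r -= penalty * (violation - eps)
    return r
```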

On-orbit Servicing for Spacecraft Collision Avoidance With Autonomous Decision Making
Susmitha Patnala (CentraleSupélec – University of Paris-Saclay)

This study develops an AI-based implementation of an autonomous On-Orbit Servicing (OOS) mission to assist with spacecraft collision avoidance maneuvers (CAMs). We propose an autonomous 'servicer' trained with Reinforcement Learning (RL) to autonomously detect potential collisions between a target satellite and space debris, rendezvous and dock with endangered satellites, and execute an optimal CAM. The RL model integrates collision risk estimates, satellite specifications, and debris data to generate an optimal maneuver matrix for OOS rendezvous and collision prevention. We employ the Cross-Entropy algorithm to find optimal decision policies efficiently. Initial results demonstrate the feasibility of autonomous robotic OOS for collision avoidance services, focusing on a one-servicer-to-one-endangered-satellite scenario. However, merging spacecraft rendezvous and optimal CAM presents significant complexities. We discuss design challenges and critical parameters for the successful implementation of the presented framework through a case study.
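
For reference, the Cross-Entropy method mentioned above searches policy parameters by repeatedly refitting a sampling distribution to the best-scoring samples; a generic sketch (the objective and dimensions are placeholders):

```python
import numpy as np

def cem(score, dim, iters=50, pop=64, elite_frac=0.125,
        rng=np.random.default_rng()):
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]      # keep the best
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu
```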

Using DNN-based architecture in two stages for Scene Classification in multi-spectral EO
Satish Madhogaria (Telespazio Germany GmbH)

Artificial Intelligence (AI) and Earth Observation (EO) applications are both advancing at a remarkable pace. However, the use of AI in EO has its challenges, especially for DNN-based techniques. Convolutional Neural Networks (CNNs), which form the basis of advanced DNN-based algorithms for automatic scene understanding in imagery data, are conventionally meant to work on RGB images and therefore pose a significant challenge when applied to Multispectral (MS) data. In this work, we present a two-stage approach to using a Deep Neural Network (DNN) architecture for Scene Classification in Copernicus Sentinel-2 data. Results shown here focus on a specific scene, i.e., water. However, the approach is generic and can easily be extended to other scenes.

Adaptive SAR Signal Compression Through Artificial Intelligence
Charlotte Crawshaw (Craft Prospect Ltd.)

The volume of data captured by modern SAR instruments and constellations is immense, and satellite downlink capacities will not keep pace with the data explosion caused by the growth in the commercial SAR sector. This has resulted in a bottleneck, hindering high-value data from reaching end-users efficiently. In response, this work took a novel approach to focus on enhancing raw data compression using machine learning (ML) models. These models are designed to select the most effective compression algorithms based on the content inferred from the raw data. Processing SAR data on-board satellites is traditionally challenging due to the high computational complexity. The ability to infer information from data without creating SAR images, as demonstrated in this work, is a significant breakthrough. We present the results from the ML based compression selector over different scenes with a range of configurable parameters weighting different factors combined with a decision tree error prediction for the final data compression. Finally, future hardware feasibility and capability is assessed, targeting a Smallsat SAR mission, with a high level roadmap developed to progress the concept toward this goal.

SAR data processing on the edge: harnessing embedded GPUs and FPGAs to unlock onboard Earth monitoring
Luca Manca (Aiko S.r.l.)

To date, onboard payload data processing has been limited to passive payloads (i.e., optical images) only. However, active payloads, such as Synthetic Aperture Radar (SAR) systems, can provide high-quality images at sub-meter resolution regardless of lighting and weather. The current paradigm for the exploitation of SAR data foresees the complete downlink of the collected raw signals and their processing in dedicated ground facilities. As a consequence, real-time Earth monitoring applications are today unfeasible with a SAR payload. This work aims to establish the groundwork for a transformation in the SAR data processing domain. Our objective is to demonstrate the feasibility of running SAR processing algorithms on embedded hardware devices with limited computational power, such as NVIDIA Jetson GPUs and Xilinx FPGAs. Our work represents a first step toward the next generation of SAR payloads, opening up the opportunity for new concepts of operations.

E-mamba: Using state-space-models for direct event processing in space situational awareness
Alejandro Hernandez Diaz (University of Surrey)

In this paper, we propose a novel perception framework to enhance in-orbit autonomy and address the shortcomings of traditional SSA methods. We leverage the advances of neuromorphic cameras for vastly superior sensing performance under space conditions. Additionally, we maximize the advantageous characteristics of the sensor by harnessing the modelling power and efficient design of selective State Space Models. Specifically, we introduce two novel event-based backbones, E-Mamba and E-Vim, for real-time on-board inference with linear scaling in complexity w.r.t. input length. Extensive evaluation across multiple neuromorphic datasets demonstrates the superior parameter efficiency of our approaches (<1.3M params), while yielding comparable performance to the state of the art in both detection and dense-prediction tasks. This opens the door to a new era of highly efficient intelligent solutions to improve the capabilities and safety of future space missions.

Downscaling ESA CCI soil moisture products: A comparative analysis of Machine Learning algorithms over West Africa
Odunayo David Adeniyi (ESA)

This study bridges the fields of AI and space by demonstrating how AI can enhance Earth observation products, focusing specifically on soil moisture (SM) monitoring in West Africa. The study aims to improve the coarse spatial resolution of ESA CCI SM products through downscaling with machine learning (ML) approaches. Three ML algorithms, Random Forest (RF), LightGBM, and an Artificial Neural Network (ANN), were compared for downscaling ESA CCI SM products, utilizing MODIS LST, NDVI, and TVDI data at 1 km resolution as auxiliary variables. Our findings reveal that while all three ML models enhance the downscaling process, the ANN achieves the lowest root mean square error (RMSE), making it particularly effective at reducing overall prediction error. However, RF and LightGBM exhibit lower bias, suggesting they provide more consistent predictions across varying SM levels. This comparison highlights that the choice of model depends on the specific application requirements: minimizing overall error or reducing systematic bias. The enhanced SM monitoring achieved through these methods contributes to better water resource management and climate resilience in the region, supporting sustainable development efforts.
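
A minimal sketch of the downscaling recipe, with synthetic stand-ins for the data: fit a regressor at the coarse ESA CCI resolution using the auxiliary predictors aggregated to that grid, then apply it on the 1 km grid:

```python
# Sketch of the downscaling recipe. The arrays are synthetic stand-ins;
# in the study the predictors are MODIS LST, NDVI, and TVDI at 1 km.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_coarse = 2_000
X_coarse = rng.normal(size=(n_coarse, 3))        # [LST, NDVI, TVDI], coarse
sm_coarse = (0.3 - 0.05 * X_coarse[:, 0] + 0.08 * X_coarse[:, 1]
             + rng.normal(scale=0.02, size=n_coarse))

# Fit at the coarse ESA CCI resolution...
model = RandomForestRegressor(n_estimators=200).fit(X_coarse, sm_coarse)

# ...then predict on the fine 1 km grid of auxiliary variables.
X_fine = rng.normal(size=(50_000, 3))
sm_fine = model.predict(X_fine)                  # downscaled soil moisture
rmse = np.sqrt(np.mean((model.predict(X_coarse) - sm_coarse) ** 2))
```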

A machine learning approach for computing solar flare locations in X-rays on-board Solar Orbiter/STIX
Paolo Massa (University of Applied Sciences and Arts Northwestern Switzerland)

The Spectrometer/Telescope for Imaging X-rays (STIX) on-board the ESA Solar Orbiter mission retrieves the coordinates of solar flare locations by means of a specific sub-collimator, named the Coarse Flare Locator (CFL). When a solar flare occurs on the Sun, the emitted X-ray radiation casts the shadow of a peculiar “H-shaped” tungsten grid over the CFL X-ray detector. From measurements of the areas of the detector that are illuminated by the X-ray radiation, it is possible to retrieve the (x,y) coordinates of the flare location on the solar disk.
In this paper, we train a neural network on a dataset of real CFL observations to estimate the coordinates of solar flare locations. Further, we apply a post-training quantization technique specifically tailored to the adopted model architecture. This technique allows all computations to be performed in integer arithmetic at inference time, making the model compatible with the STIX computational requirements. We show that our model outperforms the currently adopted algorithm for estimating flare locations from CFL data in terms of prediction accuracy, while requiring fewer parameters. We finally discuss possible future applications of the proposed model on board STIX.
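
A toy illustration of the two ingredients, with invented input dimension and layer sizes: a small regression network from CFL illumination measurements to (x, y), followed by naive post-training int8 weight quantization (the paper's scheme also quantizes activations so that inference is integer-only):

```python
# Sketch: a small regression net from CFL illumination measurements to
# flare (x, y), plus naive post-training int8 weight quantisation.
# Input dimension and layer sizes are assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(12, 32), nn.ReLU(),
                    nn.Linear(32, 2))            # -> (x, y) on the solar disk

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0                # symmetric per-tensor scale
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

for layer in net:
    if isinstance(layer, nn.Linear):
        q, s = quantize_int8(layer.weight.data)
        layer.weight.data = q.float() * s        # simulate quantised weights

xy = net(torch.randn(1, 12))                     # fake illumination vector
```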

Predicting crop evapotranspiration and irrigation requirements with Machine Learning and satellite imagery
Sergio Ramírez Gallego (Thales Alenia Space)

Adopting advisory systems that provide accurate forecasts of crop water requirements, specifically crop evapotranspiration, can enhance water use efficiency in agriculture. The successful development of these advisory systems relies on access to timely updates on crop canopy parameters (usually from satellite imagery) and weather forecasts several days in advance, at a low operational cost.

Common agronomy methods are relied upon for accurate estimations of reference evapotranspiration. Nevertheless, they require substantial and typically expensive data, as well as complex configurations, thus hampering their integration into water management systems. Furthermore, these methods offer approximate estimations that are often not aligned with the real evapotranspiration measurements recorded at automatic weather stations. To address these issues, a Machine Learning (ML) model has been trained on in-field evapotranspiration measurements from real crops, driven by a minimal set of weather forecast inputs.

An extensive experimental framework has been applied to our proposal and to alternative methods. The target data comprise real evapotranspiration measurements and weather forecasts from 100 real crops in Southern Spain spanning several years (2018-2021). The experiments demonstrate the advantage of ML for evapotranspiration prediction over state-of-the-art estimators, while at the same time reducing input complexity.
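
As a hedged sketch of the setup (the feature list, model choice, and synthetic data below are assumptions; in the study the targets come from in-field station measurements):

```python
# Sketch: learning daily evapotranspiration from a minimal set of
# forecast variables. Features, model, and data are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000
# assumed features: [t_max, t_min, rel_humidity, wind_speed, sw_radiation]
X = rng.normal(size=(n, 5))
et = (2.0 + 0.8 * X[:, 0] - 0.4 * X[:, 2] + 0.3 * X[:, 4]
      + rng.normal(scale=0.3, size=n))           # synthetic ET (mm/day)

X_tr, X_te, y_tr, y_te = train_test_split(X, et, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
mae = np.abs(model.predict(X_te) - y_te).mean()  # held-out error
```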

Benchmarking Large Language Models for Earth Observation
Simon Gruening (askEarth AG)

We explore the potential of large language models to perform complex Earth observation tasks. To this end, we create a first testing set for the Beyond the Imitation Game benchmark (BIG-Bench). With more than 20 subtasks, we aim for a large variety of questions that are currently beyond the capabilities of existing models. We also share a method for testing proprietary models without access to token distributions. This paper serves as a first step towards measuring the general capabilities of large language models in an Earth observation setting, establishing an evaluation framework, and examining first results.
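
One way to evaluate a proprietary model without token distributions, possibly along the lines used here, is to request the option letter directly and exact-match it; `ask_model` below is a hypothetical stand-in for any chat-completion API, and the sample question is invented:

```python
# Sketch: scoring a chat model on a multiple-choice EO task without
# access to token log-probabilities. `ask_model` is a hypothetical
# placeholder; wrap your provider's chat-completion API there.
import re

def ask_model(prompt: str) -> str:
    return "B"                                   # canned reply for the demo

def score_mc(question: str, options: dict[str, str], answer: str) -> bool:
    opts = "\n".join(f"{k}) {v}" for k, v in sorted(options.items()))
    prompt = (f"{question}\n{opts}\n"
              "Reply with the single letter of the correct option.")
    reply = ask_model(prompt)
    m = re.search(r"\b([A-D])\b", reply.upper()) # parse text, not logits
    return bool(m) and m.group(1) == answer

correct = score_mc(
    "Which Sentinel-2 band combination highlights open water?",
    {"A": "B4/B3/B2", "B": "B8/B11/B4", "C": "B1 only", "D": "B12 only"},
    answer="B",
)
```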

Trajectory Optimization with Reinforcement Learning-driven Control of Multi-Arm Robots in On-Orbit Servicing Operations
Celia Redondo-Verdú & Jorge Pomares (University of Alicante)

Intelligent motion planning and control systems are crucial in managing the inherently unpredictable and non-deterministic conditions of space. Trajectory optimization (TO) has been a key method in space robotics for precision control and guidance. However, the effectiveness of TO is limited by its reliance on accurate robotic and environmental models. Reinforcement Learning (RL) has emerged as a promising alternative, enhancing robustness and adaptability in control systems. Unlike TO, RL does not depend on predefined models but learns effective control policies through simulated interactions. This model-free approach has shown potential in handling complex control tasks and adapting to environmental uncertainties and perturbations. This paper proposes an integrated TO and RL approach for enhanced path planning and control of a four-arm robot designed for on-orbit servicing tasks. By combining the predictive power of TO with the adaptive capabilities of RL, this approach aims to optimize the robot’s operational effectiveness. The integration seeks to leverage the detailed pre-planned motion profiles provided by TO, while incorporating the flexibility and resilience of RL-based control strategies to accommodate real-time operational dynamics and uncertainties. The results presented demonstrate how this hybrid approach can significantly improve the robot’s trajectory tracking performance and operational adaptability.
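
A minimal sketch of one way to couple the two, assuming the RL reward simply penalizes deviation from the TO reference while rewarding task progress (dimensions and weights are illustrative, not the authors' formulation):

```python
# Sketch: an RL reward that tracks a precomputed TO reference trajectory
# while penalising control effort. Sizes and weights are assumptions.
import numpy as np

def hybrid_reward(q: np.ndarray, q_ref: np.ndarray, effort: np.ndarray,
                  task_bonus: float = 0.0,
                  w_track: float = 1.0, w_effort: float = 0.01) -> float:
    """q, q_ref: joint states of the multi-arm robot at this timestep."""
    tracking_err = np.linalg.norm(q - q_ref)
    return task_bonus - w_track * tracking_err - w_effort * np.sum(effort**2)

# Per-step usage inside any RL loop (policy and dynamics omitted):
q_ref_traj = np.zeros((1000, 28))                # TO plan, 4 arms x 7 joints
q = 0.01 * np.random.randn(28)
r = hybrid_reward(q, q_ref_traj[0], effort=np.zeros(28))
```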

Gamma-Ray Burst Light Curve Reconstruction using Bidirectional-LSTM
Shashwat Sourav (Indian Institute of Science Education and Research)

Gamma-ray bursts (GRBs) are intense phenomena that release copious amounts of gamma rays within seconds. Investigating these transient events is paramount in cosmological research because they originate from sources located at considerable redshifts. To conduct such research effectively, it is imperative to establish connections among the observable characteristics of GRBs while minimizing uncertainty. Hence, it becomes essential to comprehensively characterize the general GRB light curve (LC) to facilitate these studies. Nevertheless, the irregular spacing of observations and significant gaps in the LCs, often stemming from various unavoidable factors, pose considerable challenges in characterizing GRBs, and the task of categorizing GRB LCs remains formidable. This study introduces an approach employing bidirectional Long Short-Term Memory (BiLSTM) networks to reconstruct GRB light curves. Our experimental results demonstrate that the BiLSTM approach surpasses traditional methods in performance, yielding smoother and more realistic reconstructions of GRBs.
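
A minimal BiLSTM reconstructor sketch in PyTorch, with assumed input features (time, flux, observed-mask) and assumed sizes; the authors' architecture and training details may differ:

```python
# Sketch: a bidirectional LSTM that maps an irregularly sampled light
# curve (time, flux, observed-mask) to a dense flux reconstruction.
import torch
import torch.nn as nn

class BiLSTMReconstructor(nn.Module):
    def __init__(self, d_in: int = 3, d_hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(d_in, d_hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * d_hidden, 1)   # flux at each time bin

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)                       # (batch, length, 2*hidden)
        return self.head(h).squeeze(-1)          # (batch, length)

model = BiLSTMReconstructor()
x = torch.randn(8, 256, 3)                       # 8 LCs, 256 bins, 3 features
flux_hat = model(x)
# Train with MSE on the observed bins only, using the mask channel:
loss = ((flux_hat - x[..., 1])**2 * x[..., 2]).mean()
```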