Adaptive Hybrid System Framework for Unified Admittance and Impedance Control of Manipulators
Journal of Intelligent & Robotic Systems — Springer, 2018
Synthetic Satellite Imagery Pipeline
Configurable photorealistic datasets for on-board satellite detection — from orbit parameters to labeled frames.
From mission parameters to training-ready datasets
SatScenes generates photorealistic labeled imagery of satellites and space debris in Earth orbit (and is easily adaptable to orbits around other celestial bodies), purpose-built for training deep learning object detection models on-board spacecraft. Where existing benchmarks such as SPEED, SPARK 2022, and DLVS³ are anchored to a handful of target spacecraft, SatScenes scales without bound: adding a new target requires only importing a 3D model, with zero pipeline changes.
SatScenes is designed to support a virtually unlimited catalogue of satellite and space-debris models. Six satellite models are currently available: Starlink V1, Sentinel-3B, ION Mk2, Nimbus, RadarSat-1, and a generic Stylized Satellite.
The rendering engine is a physically based path tracer that produces frames with fully configurable optics and resolution. Field of view and sensor dimensions are set via a JSON config. Support for configurable lens-distortion coefficients (radial k1/k2/k3 and tangential p1/p2) is being integrated, which will enable faithful replication of any real camera geometry.
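A camera block in the JSON config might look like the following. This is an illustrative sketch: the key names (`camera`, `resolution_px`, `fov_deg`, `sensor_size_mm`, `distortion`) are assumptions, not the pipeline's actual schema.

```json
{
  "camera": {
    "resolution_px": [1920, 1080],
    "fov_deg": 40.0,
    "sensor_size_mm": [36.0, 24.0],
    "distortion": {
      "radial": [0.0, 0.0, 0.0],
      "tangential": [0.0, 0.0]
    }
  }
}
```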
Orbital altitude is sampled from a configurable range, defaulting to the 400–800 km LEO band. Each frame draws independently from a full scene parameter space: target scale (extreme far to close-up), in-frame position (centred to partially visible), attitude (frontal/lateral/oblique/rear/edge-on), lighting condition (fully lit/partially lit/grazing/eclipse), background type (Earth dominant/Earth partial/deep space/Moon visible), and focus (sharp/slight blur/motion blur). This combinatorial diversity means no two frames are alike.
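The per-frame sampling described above can be sketched as an independent draw over the scene parameter space. The parameter names and category labels below mirror the text, but the actual identifiers and ranges in the pipeline may differ.

```python
import random

# Illustrative scene-parameter space; names and ranges are assumptions
# based on the categories described in the text.
ALTITUDE_KM = (400.0, 800.0)  # default LEO band
ATTITUDES = ["frontal", "lateral", "oblique", "rear", "edge-on"]
LIGHTING = ["fully_lit", "partially_lit", "grazing", "eclipse"]
BACKGROUNDS = ["earth_dominant", "earth_partial", "deep_space", "moon_visible"]
FOCUS = ["sharp", "slight_blur", "motion_blur"]

def sample_scene(rng: random.Random) -> dict:
    """Draw one independent point from the scene parameter space."""
    return {
        "altitude_km": rng.uniform(*ALTITUDE_KM),
        "attitude": rng.choice(ATTITUDES),
        "lighting": rng.choice(LIGHTING),
        "background": rng.choice(BACKGROUNDS),
        "focus": rng.choice(FOCUS),
    }

scene = sample_scene(random.Random(42))
```

Because every frame is drawn independently, dataset diversity grows directly with the number of rendered frames.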
Class composition is user-controlled via probability weights: assign any distribution across your satellite catalogue — for example 60% Sentinel-3B, 20% ION Mk2, 10% Nimbus, 10% RadarSat-1 — and regenerate a balanced or intentionally skewed dataset with a single command. Output is RGB frames paired with a rich set of per-frame JSON metadata, e.g. bounding-box annotations, satellite model, position, and attitude.
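Weighted class selection of this kind can be sketched with a standard weighted draw; the model names follow the example distribution above, while the dictionary format is an assumption for illustration.

```python
import random

# Example class weights mirroring the 60/20/10/10 split in the text;
# the config format itself is an assumption.
CLASS_WEIGHTS = {
    "Sentinel-3B": 0.6,
    "ION Mk2": 0.2,
    "Nimbus": 0.1,
    "RadarSat-1": 0.1,
}

def sample_target(rng: random.Random) -> str:
    """Pick one target model according to the configured weights."""
    models = list(CLASS_WEIGHTS)
    weights = [CLASS_WEIGHTS[m] for m in models]
    return rng.choices(models, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {m: 0 for m in CLASS_WEIGHTS}
for _ in range(10_000):
    counts[sample_target(rng)] += 1
# counts approximates the configured 60/20/10/10 split
```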
The flexibility of the dataset configuration is meant to support a range of use cases, from basic satellite detection/determination for SSA/SDA applications across a wide range of orbits to highly specialized models for narrower applications.
The next version of the pipeline will target dataset generation for position, velocity, and pose estimation, where the pipeline's configurability will remain an advantage over existing datasets.
Define target models, class weights, orbital altitude range, sensor profile, and scene-diversity parameters via a JSON config.
Run the render engine to execute the scene graph — Earth, target satellite, lighting, camera — producing one frame per parameter sample, fully automated.
RGB frames with per-frame JSON metadata, including bounding box coordinates, camera intrinsics, and full scene parameters — ready to convert into YOLO, COCO, or any custom annotation format.
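Converting the per-frame metadata into a YOLO label is a small transformation. The sketch below assumes a hypothetical metadata schema (`class_id` plus a `bbox` given as pixel corners); the pipeline's actual keys and bounding-box convention may differ.

```python
import json

def to_yolo_line(meta: dict, img_w: int, img_h: int) -> str:
    """Convert corner-format pixel bbox metadata to a YOLO label line.

    Assumes meta["bbox"] = [x_min, y_min, x_max, y_max] in pixels;
    YOLO expects class id plus normalized center-x, center-y, width, height.
    """
    x_min, y_min, x_max, y_max = meta["bbox"]
    cx = (x_min + x_max) / 2 / img_w
    cy = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f'{meta["class_id"]} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}'

meta = json.loads('{"class_id": 2, "bbox": [480, 270, 1440, 810]}')
line = to_yolo_line(meta, img_w=1920, img_h=1080)
# → "2 0.500000 0.500000 0.500000 0.500000"
```

A COCO export would instead aggregate all frames into a single JSON with `images`, `annotations`, and `categories` arrays, but follows the same pattern of reading the per-frame metadata.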
Rendered frames from the pipeline
I am a space engineer and AI researcher with 10+ years of software experience, in particular in spacecraft avionics and spacecraft HIL framework development, currently serving as Institutional & Defense BU Tech Strategist at D-Orbit S.p.A.
My work spans the full stack of spacecraft software: from RTOS embedded systems and CAN bus protocols on ION spacecraft, to leading avionics SW development for a SAR satellite of the Iride constellation — including payload data handling, FDIR, and symmetric encryption subsystems.
I hold an MSc in Space Engineering from Politecnico di Milano, where my thesis research on adaptive hybrid control for robotic manipulators was published in the Journal of Intelligent & Robotic Systems (Springer, 2018).
SatScenes is a personal research initiative born from the need for flexible, physics-accurate training data for on-board AI — a gap I encountered firsthand in operational satellite programs.
Interested in a custom dataset, a collaboration, or just curious about what I'm building? I'd love to hear from you.