M3ED
Updates #
- 2025/03/01: The M3ED SLAM Challenge is now open. You have until June 9, 2025 to submit your solutions. See you all in Nashville 🎸!
- 2025/02/28: M3ED Rev. 1.2 was released! This version includes evo ground-truth pose files, and compressed local scans from FasterLIO.
- 2023/08/06: The code used to process M3ED has been released in the Github repo. All the h5 data files have been reprocessed to include the `version` attribute with the corresponding commit hash. We also improved the Data Overview section with a better description of the folder structure and the data files.
- 2023/07/27: M3ED Rev. 1.1 was released! This version includes several updates and fixes: fixed GT odometry relative to the local map for long sequences, improved density of GT depth, improved semantic segmentation reprojection, and added visualizations of the data. Additionally, a few sequences have been added.
- 2023/06/19: M3ED Rev. 1.0 was released at the CVPR 2023 Workshop on Event-based Vision 🎉!
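As noted in the 2023/08/06 update, every reprocessed h5 file carries a `version` attribute holding the commit hash of the processing code. A minimal sketch of inspecting that attribute with h5py (the file name and hash below are placeholders, not real M3ED data):

```python
import h5py

# Create a small stand-in file so this sketch is self-contained;
# a real M3ED h5 file would be downloaded from the dataset page.
with h5py.File("demo_data.h5", "w") as f:
    f.attrs["version"] = "0123abc"  # placeholder commit hash

# Read the version attribute back, as the reprocessed files expose it.
with h5py.File("demo_data.h5", "r") as f:
    print(f"processed with commit: {f.attrs['version']}")
```

Checking this attribute against the Github repo lets you reproduce exactly the processing pipeline that generated a given file.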
Overview #
M3ED provides high-quality synchronized and labeled data from multiple platforms, including wheeled ground vehicles (car), legged robots (spot), and aerial robots (falcon), operating in challenging conditions such as driving along off-road trails, navigating through dense forests, and executing aggressive flight maneuvers.
M3ED processed data, raw data, and code are available to download. Check out our Github repo for an overview of how the data is processed.
What researchers say about M3ED #
Congrats to @KostasPenn & team at @GRASPlab @PENN! They created a multi-camera dataset for high-speed robotics with our Metavision® EVK4 HD. The M3ED dataset tackles challenges like vibrations, segmentation & demanding scenarios for event cameras👉https://t.co/wu65XKxfT9 @IEEEorg
— Prophesee (@Prophesee_ai) July 26, 2023
M3ED overcame these shortcomings by providing comprehensive ground-truth depth and poses with HD stereo event data recorded in diverse scenarios.
High definition (HD) data are not available in these datasets until the appearance of M3ED, which utilizes a Prophesee EVK4 event camera with a spatial resolution of 1280 × 720 pixels.
The largest event camera dataset containing multi-sensor data.
M3ED acts as an informal successor to the MVSEC dataset. It comprises 110 minutes of outdoor driving sequences, with high-resolution stereo events (1280×720) and images (1280×800), and point clouds from a 64-channel LiDAR at 10 Hz with a maximum range of 120 m.
The M3ED dataset provides data collected from event cameras on various platforms, such as quadruped robots, vehicles, and drones. Notably, it has the highest resolution among all datasets, with event cameras at 1280×720.
We verify that timestamps are accurately synchronized in M3ED, but we observe a noticeable misalignment in the timestamps of the RGB camera and the event camera in DSEC.
The M3ED dataset by Chaney et al. is the first multi-sensor event camera dataset specifically designed for high-speed dynamic motions in robotics.
Derived datasets #
Several research groups have built new datasets and benchmarks on top of M3ED:
3EED: 3D bounding box annotations for M3ED’s drone and quadruped sequences, enabling 3D object detection across multiple embodied platforms. Li et al. Paper, Project page, GitHub, HuggingFace.
Pi3DET: The first cross-platform 3D detection benchmark, built upon M3ED with annotated LiDAR sequences across vehicle, drone, and quadruped platforms. Liang et al. Paper, Project page, GitHub, HuggingFace.
T²CEF: A dense time-to-collision dataset built from M3ED with refined camera poses at 7 ms resolution, enabling high-speed collision prediction research. Bisulco et al. Paper, GitHub.
M3ED-Semantic: A semantic segmentation subset of M3ED with per-frame segmentation masks across drone and quadruped sequences, supporting 11 semantic classes. Li et al. Paper, GitHub.
EXPo: A large-scale event-based cross-platform semantic segmentation benchmark with 89k frames spanning vehicle, drone, and quadruped platforms from M3ED. Kong et al. Paper, Project page.
M3ED-active: A curated split of M3ED’s indoor quadruped sequences that expose an active stereo pattern, enabling research on active stereo depth estimation with event cameras. Bartolomei et al. Paper, GitHub.
RoboSense Track#5: A cross-platform 3D object detection challenge built on M3ED, where participants adapt vehicle-trained detectors to drone and quadruped platforms. Kong et al. Challenge page, GitHub, HuggingFace.
Contact us to add your dataset to this list!
License #
M3ED is released under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. You may share and adapt the data, provided that you give appropriate credit, indicate if changes were made, and distribute your contributions under the same license.
Read the paper #
You can access the paper from the CVPRW Proceedings.
```bibtex
@InProceedings{Chaney_2023_CVPR,
    author    = {Chaney, Kenneth and Cladera, Fernando and Wang, Ziyun and Bisulco, Anthony and Hsieh, M. Ani and Korpela, Christopher and Kumar, Vijay and Taylor, Camillo J. and Daniilidis, Kostas},
    title     = {M3ED: Multi-Robot, Multi-Sensor, Multi-Environment Event Dataset},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {4015-4022}
}
```