Disparity Benchmark
The task is to estimate dense disparity in the event camera frame from stereo event cameras and stereo global shutter cameras.
Details
- The predictions will be evaluated in the frame of the left event camera at a subset of image timestamps of the test set.
- If the global shutter cameras are used, their usage must be specified within the submitted report or paper.
- The evaluation will be performed on all pixels for which ground truth disparity is available.
Metrics
The main metric used for the ranking is the mean absolute error (MAE) of the disparity on the test set of DSEC.
We also provide the following error metrics; a sketch of how they can be computed follows the list:
- 1PE: 1-pixel error, the percentage of ground truth pixels with a disparity error > 1 pixel
- 2PE: 2-pixel error, the percentage of ground truth pixels with a disparity error > 2 pixels
- RMSE: Root mean square error of the disparity
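For reference, a minimal sketch of how these metrics can be computed is given below. This is not the official evaluation code; the array layout and the convention that invalid ground truth pixels are marked as NaN are assumptions made for illustration.

import numpy as np

def disparity_metrics(pred, gt):
    """Compute MAE, 1PE, 2PE, and RMSE over valid ground truth pixels.

    pred, gt: float arrays of shape (H, W). Invalid ground truth pixels
    are assumed to be NaN (an assumption, not the official convention).
    """
    valid = np.isfinite(gt)                  # evaluate only where GT exists
    err = np.abs(pred[valid] - gt[valid])    # absolute disparity error
    return {
        'MAE':  err.mean(),
        '1PE':  100.0 * (err > 1.0).mean(),  # % of valid pixels with error > 1 px
        '2PE':  100.0 * (err > 2.0).mean(),  # % of valid pixels with error > 2 px
        'RMSE': np.sqrt((err ** 2).mean()),
    }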
Submission Format
The submission will be evaluated on the predicted disparity in the left event camera frame at a subset of image timestamps of the test set. Visit the submission format page for more details.
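As an illustration only (the submission format page is authoritative): a common encoding, also used for the DSEC ground truth, stores disparity scaled by 256 in 16-bit PNGs. A plausible way to write predictions in this encoding is sketched below; the scale factor, file handling via imageio, and function name are assumptions to verify against the submission format page before uploading.

import numpy as np
import imageio.v2 as imageio

def save_disparity(path, disp):
    """Save a dense disparity map (float, in pixels) as a 16-bit PNG.

    Assumes the encoding disparity * 256 rounded and stored as uint16;
    this mirrors the DSEC ground truth convention but is an assumption,
    not the official submission specification.
    """
    disp_u16 = np.clip(np.round(disp * 256.0), 0, 65535).astype(np.uint16)
    imageio.imwrite(path, disp_u16)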
Report (mandatory)
The submission must eventually be accompanied by one of the following:
- a short technical report (max 4 pages, 10 MB, pdf) that summarizes the approach, if the project was part of a previous competition (e.g. the CVPR 2021 competition). Visit the technical report page for more details.
- a workshop paper (max 8 pages, 50 MB, pdf), if the project is published at a workshop that is part of a conference (e.g. ICRA or CVPR workshops).
- a paper (max 100 MB, pdf), if the project is published at a conference or in a journal.
In all cases, specify the venue (competition, workshop, conference, or journal, plus the year) or write “under review” in the “Additional information” field of the submission.
Note that you can still add a report/paper after your submission.
We regularly delete submissions that are older than 6 months and lack a report, a paper, or the details needed to identify the venue. If your work has been under review for more than 6 months, you must resubmit it.
Note on Double-Blind Review
If your submitted work is under review in a double-blind process, consider adding “anonymous” to the author and affiliation fields of the submission (as well as “under review” in the “Additional information” field). Once the work is accepted, you can delete the submission and resubmit with the full information mentioned above.
Submission Policy
- We require that the submission uses the same parameter set for the whole test set; that is, it is not allowed to use different hyperparameters for different sequences.
- It is not allowed to use the test data in any way to optimize your model for your submission. For example, fine-tuning your model with a photometric loss on the test set is prohibited.
- The test data should be used strictly for reporting the final results. The tuned algorithm should be run only once on the test set. Hence, the evaluation server may not be used for parameter tuning.
- It is not allowed to register multiple times with different email addresses. Use your institutional email address (e.g., .edu).
Questions
Contact mgehrig@ifi.uzh.ch for inquiries regarding the benchmark. Issues and questions regarding the dataset should be discussed on GitHub instead.
Citation
Cite the following work when using this benchmark or DSEC in general:
@Article{Gehrig21ral,
  author  = {Mathias Gehrig and Willem Aarents and Daniel Gehrig and Davide Scaramuzza},
  title   = {DSEC: A Stereo Event Camera Dataset for Driving Scenarios},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2021},
  doi     = {10.1109/LRA.2021.3068942}
}