The full potential of the benchmark will be realized when it is populated with additional algorithms, tasks, and results. We invite contributions and hope to inspire the community to participate in this endeavour.

To facilitate this process, we provide sbibm, an extensible framework for benchmarking (see Code & Reproducibility). We have put together some notes for contributing to sbibm:

Please do not hesitate to get in touch via email or by opening an issue on the repository.