
README.md




sotabencheval is a framework-agnostic library that contains a collection of deep learning benchmarks you can use to benchmark your models. It can be used in conjunction with the sotabench service to record results for models, so the community can compare model performance on different tasks, and as a continuous-integration-style service that benchmarks the models in your repository on each commit.

Benchmarks Supported

The library ships evaluators for several benchmarks, including ImageNet image classification (used in the example below). PRs welcome for further benchmarks!

Installation

Requires Python 3.6+.

pip install sotabench-eval

Get Benching! 🏋️

You should read the full documentation, which contains guidance on getting started and connecting to sotabench.

Integration is lightweight. For example, if you are evaluating an ImageNet model, you initialize an Evaluator object and (optionally) link it to the paper the model comes from:

from sotabencheval.image_classification import ImageNetEvaluator
evaluator = ImageNetEvaluator(
             model_name='FixResNeXt-101 32x48d',
             paper_arxiv_id='1906.06423')

Then for each batch of predictions your model makes on ImageNet, pass a dictionary whose keys are image IDs and whose values are np.ndarrays of logits to the evaluator.add method:

evaluator.add(output_dict=dict(zip(image_ids, batch_output)))
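
For example, with a PyTorch model a single batch could be handled along these lines (a sketch only; model, images and image_ids are placeholders for your own inference pipeline):

import torch

with torch.no_grad():
    logits = model(images)                       # raw class scores for one batch
batch_output = logits.cpu().numpy()              # evaluator expects np.ndarray values per image ID
evaluator.add(output_dict=dict(zip(image_ids, batch_output)))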

The evaluation logic just needs to be written in a sotabench.py file; sotabench will then run it on each commit and record the results:

(Screenshot: results.png, showing recorded benchmark results on sotabench.)
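
A minimal sotabench.py might therefore look like the sketch below. Apart from the ImageNetEvaluator calls shown above, everything here is hypothetical: run_model is a placeholder for your own inference code, and evaluator.save() is assumed to be the call that finalizes the run (check the documentation for the exact API):

from sotabencheval.image_classification import ImageNetEvaluator

def run_model():
    # Placeholder: load your model, iterate over the ImageNet validation set,
    # and yield (image_ids, batch_output) pairs, where batch_output is an
    # np.ndarray of logits with one row per image.
    raise NotImplementedError

evaluator = ImageNetEvaluator(
    model_name='FixResNeXt-101 32x48d',
    paper_arxiv_id='1906.06423')

for image_ids, batch_output in run_model():
    evaluator.add(output_dict=dict(zip(image_ids, batch_output)))

evaluator.save()  # assumed to record the results for sotabench to display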

Contributing

All contributions welcome!
