benchm-ml

A minimal benchmark for scalability, speed, and accuracy of commonly used open-source implementations (R packages, Python scikit-learn, H2O, xgboost, Spark MLlib, etc.) of the top machine learning algorithms for binary classification (random forests, gradient boosted trees, deep neural networks, etc.).
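The kind of measurement such a benchmark makes, wall-clock training time plus accuracy on held-out data, can be sketched in miniature. Everything below (the synthetic data generator, the decision-stump learner, and the `benchmark` helper) is a hypothetical stand-in for illustration, not code from the repository:

```python
import random
import time

def make_data(n, seed=0):
    """Synthetic binary-classification data with one informative feature:
    class 0 is drawn from N(0, 1), class 1 from N(1, 1)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        y = rng.randint(0, 1)
        xs.append(rng.gauss(float(y), 1.0))
        ys.append(y)
    return xs, ys

def train_stump(xs, ys):
    """Fit a decision stump: pick the threshold with the best training accuracy."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(set(xs)):
        acc = sum((x > t) == bool(y) for x, y in zip(xs, ys)) / len(ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def benchmark(n_train=1000, n_test=1000):
    """Time model fitting and score accuracy on a held-out test set."""
    xs, ys = make_data(n_train, seed=1)
    xt, yt = make_data(n_test, seed=2)
    t0 = time.perf_counter()
    threshold = train_stump(xs, ys)
    train_time = time.perf_counter() - t0
    acc = sum((x > threshold) == bool(y) for x, y in zip(xt, yt)) / len(yt)
    return train_time, acc

if __name__ == "__main__":
    train_time, acc = benchmark()
    print(f"train time: {train_time:.3f}s  test accuracy: {acc:.3f}")
```

The real benchmark swaps the stump for production implementations (random forests, GBMs, deep nets) and scales the training set up to measure how each implementation's runtime and accuracy hold up.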

GitHub stats:

- 2k stars
- 148 watching
- 334 forks
- Language: R
- Last commit: about 2 years ago
- Linked from 2 awesome lists

Topics: data-science, deep-learning, gradient-boosting-machine, h2o, machine-learning, python, r, random-forest, spark, xgboost

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| nikolaydubina/go-ml-benchmarks | Benchmarks comparing the performance of different machine learning inference frameworks and models in Go | 30 |
| mlcommons/inference | Measures the performance of deep learning models in various deployment scenarios | 1,236 |
| catboost/benchmarks | Comparative benchmarks of various machine learning algorithms | 169 |
| dask/dask-ml | A Python library for scalable machine learning using Dask alongside popular ML libraries | 902 |
| talwalkarlab/leaf | A benchmarking framework for federated machine learning tasks across various domains and datasets | 851 |
| zk-ml/research | Research on integrating machine learning with emergent runtimes to improve performance and security | 22 |
| aifeg/benchlmm | An open-source benchmarking framework for evaluating the cross-style visual capability of large multimodal models | 83 |
| valdanylchuk/swiftlearner | A collection of machine learning algorithms implemented in Scala for prototyping and experimentation | 39 |
| mazhar-ansari-ardeh/benchmarkfcns | Benchmark functions for mathematical optimization algorithms | 66 |
| alonsovidales/go_ml | A collection of machine learning algorithms and implementations for data analysis in Go | 203 |
| mlr-org/mlr | An infrastructure for machine learning in R that lets users focus on experiments without writing lengthy wrappers and boilerplate code | 1,643 |
| damo-nlp-sg/m3exam | A benchmark for evaluating large language models in multiple languages and formats | 92 |
| microsoft/private-benchmarking | A platform for private benchmarking of machine learning models with different trust levels | 6 |
| ailab-cvc/seed-bench | A benchmark for evaluating large language models' ability to process multimodal input | 315 |
| brml/climin | A framework for optimizing machine learning functions using gradient-based optimization methods | 180 |