benchm-ml
ML benchmark
A benchmark for evaluating machine learning algorithms' performance on large datasets
A minimal benchmark for scalability, speed and accuracy of commonly used open source implementations (R packages, Python scikit-learn, H2O, xgboost, Spark MLlib etc.) of the top machine learning algorithms for binary classification (random forests, gradient boosted trees, deep neural networks etc.).
2k stars
148 watching
334 forks
Language: R
Last commit: over 2 years ago
Linked from 2 awesome lists
data-science, deep-learning, gradient-boosting-machine, h2o, machine-learning, python, r, random-forest, spark, xgboost
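For each implementation, the benchmark's headline numbers are essentially speed and accuracy: how long training takes and how well the resulting binary classifier scores on held-out data (test AUC). The sketch below illustrates that kind of measurement in R. It is not code from this repository; the synthetic data, the randomForest and ROCR package choices, and the sizes are assumptions standing in for the repository's actual datasets and scripts.

```r
# Minimal sketch (not from benchm-ml): time the training of one random forest
# implementation and score test AUC, the kind of speed/accuracy numbers such a
# benchmark reports. Synthetic data and the randomForest/ROCR packages are
# assumptions, not the repository's real setup.
library(randomForest)
library(ROCR)

set.seed(42)
n <- 100000
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = runif(n))
d$y <- factor(ifelse(d$x1 + d$x2 + rnorm(n) > 0, "Y", "N"))

idx     <- sample(n, 0.8 * n)
d_train <- d[idx, ]
d_test  <- d[-idx, ]

# Wall-clock training time
t0 <- proc.time()
md <- randomForest(y ~ ., data = d_train, ntree = 100)
train_time <- (proc.time() - t0)["elapsed"]

# Test AUC via ROCR
phat <- predict(md, d_test, type = "prob")[, "Y"]
auc  <- performance(prediction(phat, d_test$y), "auc")@y.values[[1]]

cat(sprintf("train time: %.1f s   test AUC: %.3f\n", train_time, auc))
```

Keeping the same timing and AUC scoring while swapping in another implementation (e.g. xgboost, H2O, or Spark MLlib) is what makes comparisons like this apples-to-apples.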
Related projects:
Repository | Description | Stars |
---|---|---|
nikolaydubina/go-ml-benchmarks | A benchmarking project comparing the performance of machine learning inference frameworks and models in Go | 30 |
mlcommons/inference | Measures the performance of deep learning models in various deployment scenarios | 1,256 |
catboost/benchmarks | Comparative benchmarks of various machine learning algorithms | 169 |
dask/dask-ml | A Python library for scalable machine learning using Dask alongside popular ML libraries | 907 |
talwalkarlab/leaf | A benchmarking framework for federated machine learning tasks across various domains and datasets | 856 |
zk-ml/research | Research on integrating machine learning with emergent runtimes to improve performance and security | 22 |
aifeg/benchlmm | An open-source benchmarking framework for evaluating the cross-style visual capability of large multimodal models | 84 |
valdanylchuk/swiftlearner | A collection of machine learning algorithms implemented in Scala for prototyping and experimentation | 39 |
mazhar-ansari-ardeh/benchmarkfcns | Provides benchmarking functions for mathematical optimization algorithms | 67 |
alonsovidales/go_ml | Provides pre-built implementations of machine learning algorithms in Go | 202 |
mlr-org/mlr | Provides an infrastructure for machine learning in R, enabling users to focus on experiments without writing lengthy wrappers and boilerplate code | 1,648 |
damo-nlp-sg/m3exam | A benchmark for evaluating large language models in multiple languages and formats | 93 |
microsoft/private-benchmarking | A platform for private benchmarking of machine learning models with different trust levels | 7 |
ailab-cvc/seed-bench | A benchmark for evaluating large language models' ability to process multimodal input | 322 |
brml/climin | A Python package for gradient-based function optimization in machine learning | 181 |