prim-benchmarks

Memory-centric computing benchmarks

A benchmarking suite for evaluating the performance of memory-centric computing architectures

PrIM (Processing-In-Memory benchmarks) is the first benchmark suite for a real-world processing-in-memory (PIM) architecture. It was developed to evaluate, analyze, and characterize the first publicly available real-world PIM architecture, the UPMEM PIM architecture, and is described by Gómez-Luna et al. (https://arxiv.org/abs/2105.03814).

GitHub

137 stars
6 watching
50 forks
Language: C
Last commit: 7 months ago
Linked from 1 awesome list


Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| cmmmu-benchmark/cmmmu | An evaluation benchmark and dataset for multimodal question answering models | 46 |
| szilard/benchm-ml | A benchmark for evaluating machine learning algorithms' performance on large datasets | 1,869 |
| damo-nlp-sg/m3exam | A benchmark for evaluating large language models in multiple languages and formats | 92 |
| aifeg/benchlmm | An open-source benchmarking framework for evaluating the cross-style visual capability of large multimodal models | 83 |
| hi-primus/optimus | A Python library that provides a simple API for data preparation and analysis on various big-data engines | 1,481 |
| lukka/cppopenglwebassemblycmake | A C++/OpenGL/OpenAL application and benchmarking project demonstrating the performance difference between native and WebAssembly compilation | 73 |
| microsoft/private-benchmarking | A platform for private benchmarking of machine learning models with different trust levels | 6 |
| mazhar-ansari-ardeh/benchmarkfcns | Provides benchmarking functions for mathematical optimization algorithms | 66 |
| ailab-cvc/seed-bench | A benchmark for evaluating large language models' ability to process multimodal input | 315 |
| eembc/coremark | A benchmarking tool used to evaluate the performance of embedded systems and microcontrollers | 971 |
| omimo/xrbm | An implementation of Restricted Boltzmann Machines and their variants using TensorFlow | 55 |
| cbcrg/benchfam | Generates a benchmark dataset for evaluating protein alignment programs | 3 |
| mlcommons/inference | Measures the performance of deep learning models in various deployment scenarios | 1,236 |
| tlk00/bitmagic | A C++ library for compact data structures and algorithms optimized for memory efficiency and high performance | 412 |
| p-ranav/criterion | A microbenchmarking library for measuring the performance of C++ code | 211 |