INGInious

Assessment tool

An automated exercise assessment platform with code grading and a pluggable interface to existing LMSs

INGInious is a secure, automated exercise assessment platform that grades submissions with your own tests and provides a pluggable interface to your existing LMS.
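To illustrate the "grade with your own tests" idea, here is a minimal sketch of what a task grading script could look like. The package and function names (inginious_container_api, input.get_input, feedback.set_global_result, feedback.set_grade, feedback.set_global_feedback) and the problem id "q1" are assumptions recalled from the INGInious task format and have not been checked against the current documentation; treat this purely as an illustrative sketch.

```python
# Hypothetical "run" script for an INGInious task.
# The helper module and function names below are assumptions and may differ
# from the real INGInious container API.
from inginious_container_api import input, feedback  # assumed package name

# Fetch the code the student submitted for the (hypothetical) problem id "q1".
student_code = input.get_input("q1")

# Grade it with a teacher-defined test: the submission must define add(a, b).
namespace = {}
try:
    exec(student_code, namespace)          # run the student's code
    passed = namespace["add"](2, 3) == 5   # a single example test case
except Exception:
    passed = False

# Report the outcome back to the platform.
if passed:
    feedback.set_global_result("success")
    feedback.set_grade(100)
    feedback.set_global_feedback("add(2, 3) returned 5 as expected.")
else:
    feedback.set_global_result("failed")
    feedback.set_grade(0)
    feedback.set_global_feedback("add(2, 3) did not return 5.")
```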

GitHub

210 stars
22 watching
139 forks
Language: Python
Last commit: 4 months ago
Topics: assessment, autograding, coding-interviews, e-assessment, education, evaluation, exercise, grading, inginious, interview, learn-to-code, learning, lti, mooc, programming-exercise, technical-coding-interview, training

Related projects:

| Repository | Description | Stars |
|---|---|---|
| ruixiangcui/agieval | Evaluates foundation models on human-centric tasks with diverse exams and question types | 714 |
| olical/conjure | An interactive environment for evaluating code within a running program | 1,806 |
| internlm/internlm-techreport | An evaluation of a multilingual large language model's capabilities on comprehensive exams, with comparisons to other models | 901 |
| silurianyang/uni-app-tools | A collection of utility libraries for uni-app development | 377 |
| inkryption/comath | A comptime math evaluation library for the Zig programming language | 57 |
| uknowsec/sharptoolsaggressor | Tools for internal network penetration testing and vulnerability assessment | 498 |
| quick/spry | A Swift Playground unit testing library based on Nimble for iOS and Mac development | 326 |
| igalia/piglit | Automated tests for various graphics APIs to ensure driver quality and compatibility | 9 |
| 1024pix/pix-editor | An online platform offering innovative evaluation and certification of digital skills | 6 |
| ethicalml/xai | An eXplainability toolbox for machine learning that enables data analysis and model evaluation to mitigate biases and improve performance | 1,135 |
| megvii-research/tlc | Improves image restoration performance by converting global operations to local ones during inference | 231 |
| maxsokolov/cribble | A tool for visual testing of iPhone and iPad app UIs by simulating device movements | 266 |
| milankinen/cuic | A Clojure library for automating UI tests with Chrome | 36 |
| howiehwong/trustllm | A toolkit for assessing trustworthiness in large language models | 491 |
| psycoy/mixeval | An evaluation suite and dynamic data release platform for large language models | 230 |