fact-checker
LLM fact checker
A tool for fact-checking LLM outputs with self-ask prompting, implemented via prompt chaining
Fact-checking LLM outputs with self-ask
289 stars
5 watching
40 forks
Language: Jupyter Notebook
Last commit: over 1 year ago
Linked from 1 awesome list
Topics: llm, python
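
The description above refers to a self-ask, prompt-chaining approach to fact-checking. As an illustration only (not this repository's actual code), a minimal sketch of such a flow might look like the following, assuming an OpenAI-style chat-completions client; the model name, prompts, and helper functions are hypothetical:

```python
# Minimal sketch of a self-ask, prompt-chaining fact-check flow.
# Assumptions: OpenAI Python client (>=1.0); model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any chat-completion model would do


def ask(prompt: str) -> str:
    """One chat-completion call; every step of the chain goes through here."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()


def fact_check(claim: str) -> str:
    # Step 1 (self-ask): decompose the claim into follow-up questions.
    raw = ask(
        "List the follow-up questions needed to verify this claim, one per line:\n\n"
        f"{claim}"
    )
    questions = [q.strip() for q in raw.splitlines() if q.strip()]

    # Step 2 (chaining): answer each follow-up question independently.
    answers = [ask(f"Answer concisely and factually:\n\n{q}") for q in questions]

    # Step 3 (chaining): feed questions and answers back in for a final verdict.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(questions, answers))
    return ask(
        "Given the claim and the evidence below, reply with SUPPORTED, REFUTED, "
        "or NOT ENOUGH INFO, plus a one-sentence justification.\n\n"
        f"Claim: {claim}\n\n{evidence}"
    )


if __name__ == "__main__":
    print(fact_check("The Eiffel Tower is taller than the Empire State Building."))
```

Each step's output feeds the next prompt, which is the prompt-chaining pattern the description names: decomposition into follow-up questions, per-question answering, then a final verdict over the collected evidence.
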
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Detects whether a given text sequence is part of the training data used to train a large language model | 23 |
| | A benchmarking framework for large language models | 81 |
| | A Python package for measuring memorization in Large Language Models | 126 |
| | An open-source toolkit for building and evaluating large language models | 267 |
| | Enforces consistent use of optional type annotations in Python code | 17 |
| | Tool to identify broken links in web pages and websites | 900 |
| | A tool to check Python code quality in Jupyter notebooks | 28 |
| | Converts JupyterHub into an LTI Tool Provider to enable integration with various Learning Management Systems (LMS) and online platforms | 69 |
| | A Coq library providing tactics to manipulate hypotheses in formal proofs | 20 |
| | Evaluates and compares the performance of multimodal large language models on various tasks | 56 |
| | An open-source framework for detecting factual errors in AI-generated text | 839 |
| | An extension of Vitest for testing LLM applications using local language models | 31 |
| | A Python library to augment large language models by enabling them to think and reason more effectively | 1,550 |
| | Identifies and reports on the slowest tests in a Django application | 181 |
| | Tools for developing and implementing spell-checking and grammar-checking capabilities in low-resource languages | 3 |