SciTail
Textual entailment system
Reproducible code and a pre-trained model for an ACL 2018 paper on textual entailment via deep explorations of inter-sentence interactions.
This repository releases the code and pretrained model for our ACL 2018 paper "End-Task Oriented Textual Entailment via Deep Explorations of Inter-Sentence Interactions". The model achieves state-of-the-art performance on a textual entailment benchmark: 82.1% accuracy on the SciTail dataset.
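For reference, each SciTail example pairs a premise sentence with a hypothesis sentence, labeled `entails` or `neutral`; accuracy on the test split is the metric behind the 82.1% figure. The sketch below is not part of this repository: it assumes the `allenai/scitail` dataset on the Hugging Face Hub and the `datasets` library, and shows the task format plus a trivial majority-class baseline for comparison.

```python
# Minimal sketch (an assumption, not this repo's code): inspect the SciTail
# entailment task via the Hugging Face `datasets` library.
# Requires: pip install datasets
from collections import Counter

from datasets import load_dataset

# The "tsv_format" config exposes plain premise/hypothesis/label triples.
scitail = load_dataset("allenai/scitail", "tsv_format")

example = scitail["train"][0]
print(example["premise"])     # source sentence
print(example["hypothesis"])  # candidate entailed sentence
print(example["label"])       # "entails" or "neutral"

# The benchmark metric is accuracy on the test split; a majority-class
# baseline gives a floor against which the paper's 82.1% can be read.
labels = scitail["test"]["label"]
majority, count = Counter(labels).most_common(1)[0]
print(f"majority-class baseline: {count / len(labels):.1%} ({majority})")
```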
16 stars
4 watching
6 forks
Language: Python
Last commit: almost 7 years ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| | Improves pre-trained language models by encouraging an isotropic and discriminative distribution of token representations. | 92 |
| | A codebase for training and using models of sentence embeddings. | 33 |
| | Provides training and testing code for a CNN-based sentence embedding model. | 2 |
| | Code for training universal paraphrastic sentence embeddings and models on semantic similarity tasks. | 193 |
| | An implementation of an adversarial reward learning algorithm for generating human-like visual stories from image sequences. | 136 |
| | Reproduces the results of an ACL 2018 paper on simple word-embedding-based models for natural language processing tasks. | 284 |
| | Pre-trained models and tools for Chinese natural language understanding (NLU) and generation (NLG) tasks. | 439 |
| | Trains and evaluates a universal multimodal retrieval model for various information retrieval tasks. | 114 |
| | Enables multimodal language models to understand and generate text about visual content via referential comprehension. | 79 |
| | Demonstrates the effectiveness of reinforcement learning from human feedback (RLHF) in improving small language models such as GPT-2. | 214 |
| | Trains an autoencoder to learn generic sentence representations using convolutional neural networks. | 34 |
| | A pre-trained BERT-based Chinese text encoder with enhanced N-gram representations. | 645 |
| | A framework for training multimodal language models with a focus on visual inputs and interpretable thoughts. | 162 |
| | A subword ELMo model pre-trained on a large text corpus. | 12 |
| | Pre-trained models for Chinese natural language processing tasks using the XLNet architecture. | 1,652 |