SciTail
Textual entailment system
Reproducible code and a pre-trained model for an ACL 2018 paper on textual entailment via deep explorations of inter-sentence interactions.
This repository accompanies our ACL 2018 paper "End-Task Oriented Textual Entailment via Deep Explorations of Inter-Sentence Interactions". The model achieves state-of-the-art performance on a textual entailment benchmark: 82.1% accuracy on the SciTail dataset. We release both the code and the pretrained model.
16 stars
4 watching
6 forks
Language: Python
last commit: over 6 years ago

Related projects:
| Repository | Description | Stars |
|---|---|---|
| yxuansu/tacl | Improves pre-trained language models by encouraging an isotropic and discriminative distribution of token representations. | 92 |
| jwieting/acl2017 | A codebase for training and using models of sentence embeddings. | 33 |
| xiaoqijiao/coling2018 | Provides training and testing code for a CNN-based sentence embedding model. | 2 |
| jwieting/iclr2016 | Code for training universal paraphrastic sentence embeddings and models on semantic similarity tasks. | 193 |
| eric-xw/arel | An implementation of a novel adversarial reward learning algorithm for generating human-like visual stories from image sequences. | 136 |
| dinghanshen/swem | Reproduces the results of an ACL 2018 paper on simple word-embedding-based models for natural language processing tasks. | 284 |
| yunwentechnology/unilm | Provides pre-trained models and tools for natural language understanding (NLU) and generation (NLG) tasks in Chinese. | 439 |
| tiger-ai-lab/uniir | Trains and evaluates a universal multimodal retrieval model to perform various information retrieval tasks. | 114 |
| sy-xuan/pink | Enables multi-modal language models to understand and generate text about visual content using referential comprehension. | 79 |
| ethanyanjiali/minchatgpt | Demonstrates the effectiveness of reinforcement learning from human feedback (RLHF) in improving small language models like GPT-2. | 214 |
| zhegan27/convsent | Trains an autoencoder to learn generic sentence representations using convolutional neural networks. | 34 |
| sinovation/zen | A pre-trained BERT-based Chinese text encoder with enhanced N-gram representations. | 645 |
| deepcs233/visual-cot | A framework for training multi-modal language models with a focus on visual inputs and providing interpretable thoughts. | 162 |
| jiangtong-li/subword-elmo | A repository for a subword ELMo model pre-trained on a large corpus of text. | 12 |
| ymcui/chinese-xlnet | Provides pre-trained models for Chinese natural language processing tasks using the XLNet architecture. | 1,652 |