Code-LMs

Guide to using pre-trained large language models of source code for analysis and generation.

Stars: 2k
Watching: 44
Forks: 248
Language: Python
Last commit: 5 months ago
Topics: deep-learning, gpt-2, source-code
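
As a quick orientation, the sketch below shows a typical Hugging Face `transformers` workflow for sampling a completion from a pre-trained code LM. The checkpoint ID `NinedayWang/PolyCoder-160M` is an assumption about the name under which a converted PolyCoder model is published; substitute any causal code model you have access to.

```python
# Minimal sketch: generating code with a pre-trained code LM via the
# Hugging Face transformers library. The checkpoint name below is an
# assumption; swap in any causal code model you can load.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NinedayWang/PolyCoder-160M"  # assumed converted PolyCoder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt with the start of a function and sample a short completion.
prompt = "def binary_search(arr, target):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.2,  # low temperature keeps completions close to greedy
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```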

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| vpgtrans/vpgtrans | Transfers visual prompt generators across large language models to reduce training costs and enable customization of multimodal LLMs | 269 |
| flagai-open/aquila2 | Pre-trained language models with tools for fine-tuning and evaluation | 437 |
| nttcslab-nlp/doc_lm | Source files and training scripts for language models | 12 |
| openai/finetune-transformer-lm | Code and model for improving language understanding through generative pre-training with a transformer architecture | 2,160 |
| xverse-ai/xverse-7b | A multilingual large language model developed by XVERSE Technology Inc. | 50 |
| samholt/l2mac | Automates large code-generation and writing tasks using a large language model framework | 70 |
| luogen1996/lavin | An open-source implementation of a vision-language-instructed large language model | 508 |
| csuhan/onellm | A framework for training and fine-tuning multimodal language models on various data types | 588 |
| evolvinglmms-lab/longva | A model for long-context transfer from language to vision | 334 |
| elanmart/psmm | An implementation of a neural network for character-level language modeling | 50 |
| rdspring1/pytorch_gbw_lm | Trains a large-scale PyTorch language model on the One Billion Word dataset | 123 |
| melih-unsal/demogpt | A comprehensive toolset for building LLM-based applications | 1,710 |
| aiplanethub/beyondllm | An open-source toolkit for building and evaluating large language models | 261 |
| matthias-wright/flaxmodels | Pre-trained deep learning models for the JAX/Flax ecosystem | 238 |
| lxtgh/omg-seg | An end-to-end model for multiple visual perception and reasoning tasks using a single encoder, decoder, and large language model | 1,300 |