Code-LMs

Language model toolkit

A guide to using pre-trained large language models in source code analysis and generation

Stars: 2k · Watching: 44 · Forks: 252
Language: Python
Last commit: 7 months ago
Topics: deep-learning, gpt-2, source-code
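As background on what a "language model of source code" does, here is a toy sketch: a bigram model over Python tokens, built with only the standard library. This is purely illustrative — the function names `train_bigram_lm` and `predict_next` are hypothetical and not part of the Code-LMs repository, whose models are large neural networks rather than count tables.

```python
import collections
import io
import tokenize

def train_bigram_lm(code: str):
    """Count how often each token follows each other token in Python source."""
    toks = [
        t.string
        for t in tokenize.generate_tokens(io.StringIO(code).readline)
        if t.string.strip()  # drop NEWLINE/ENDMARKER artifacts
    ]
    counts = collections.defaultdict(collections.Counter)
    for prev, cur in zip(toks, toks[1:]):
        counts[prev][cur] += 1
    return counts

def predict_next(counts, prev: str) -> str:
    """Return the most frequently observed token after `prev`."""
    return counts[prev].most_common(1)[0][0]

corpus = "x = 1\ny = 2\nz = 3\n"
lm = train_bigram_lm(corpus)
print(predict_next(lm, "x"))  # token most often seen after 'x'
```

A pre-trained code LM replaces the count table with a neural network trained on billions of tokens, but the interface is the same: given the preceding context, predict the next token.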

Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| vpgtrans/vpgtrans | Transfers visual prompt generators across large language models to reduce training costs and enable customization of multimodal LLMs | 270 |
| flagai-open/aquila2 | Pre-trained language models with tools for fine-tuning and evaluation | 439 |
| nttcslab-nlp/doc_lm | Source files and training scripts for language models | 12 |
| openai/finetune-transformer-lm | Code and model for improving language understanding by generative pre-training with a Transformer architecture | 2,167 |
| xverse-ai/xverse-7b | A multilingual large language model developed by XVERSE Technology Inc. | 50 |
| samholt/l2mac | Automates large code-generation and writing tasks using a large language model framework | 79 |
| luogen1996/lavin | An open-source implementation of a vision-language instructed large language model | 513 |
| csuhan/onellm | A framework for training and fine-tuning multimodal language models on various data types | 601 |
| evolvinglmms-lab/longva | Transfers language understanding to vision capabilities through long-context processing | 347 |
| elanmart/psmm | An implementation of a neural network model for character-level language modeling | 50 |
| rdspring1/pytorch_gbw_lm | Trains a large-scale PyTorch language model on the 1-Billion Word dataset | 123 |
| melih-unsal/demogpt | A comprehensive toolset for building Large Language Model (LLM) based applications | 1,733 |
| aiplanethub/beyondllm | An open-source toolkit for building and evaluating large language models | 267 |
| matthias-wright/flaxmodels | Pre-trained deep learning models for the Jax/Flax ecosystem | 240 |
| lxtgh/omg-seg | An end-to-end model for multiple visual perception and reasoning tasks using a single encoder, decoder, and large language model | 1,336 |