web2code
Webpage-to-code model trainer: a dataset and framework for training large multimodal language models on webpage-to-code generation tasks.
Full title: Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs
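To make the task concrete, the sketch below shows what webpage-to-code inference can look like with an off-the-shelf vision-language model through Hugging Face `transformers`. The checkpoint name, screenshot path, and chat prompt template are illustrative assumptions, not the Web2Code repository's actual API.

```python
# Hedged sketch: webpage screenshot -> HTML with a LLaVA-style model.
# The checkpoint, file path, and prompt template below are placeholders,
# not code from the Web2Code repository itself.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # assumption: any LLaVA-style VLM checkpoint

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(MODEL_ID)

screenshot = Image.open("page.png")  # hypothetical rendered-webpage screenshot
prompt = "USER: <image>\nGenerate HTML/CSS that reproduces this webpage. ASSISTANT:"

inputs = processor(images=screenshot, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=512)
html = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
print(html)
```

Evaluation in this setting typically renders the generated HTML and compares the result against the source screenshot, which is why a paired screenshot/code dataset is needed for both training and benchmarking.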
Stars: 67
Watching: 0
Forks: 6
Language: Python
Last commit: 4 months ago

Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| | A collection of multilingual language models trained on a dataset of instructions and responses in various languages. | 94 |
| | Automates large-scale code generation and writing tasks using a large language model framework. | 79 |
| | Provides code and a model for improving language understanding through generative pre-training with a transformer-based architecture. | 2,167 |
| | Develops a pre-training method for language understanding models that combines masked and permuted language modeling, with code for implementation and fine-tuning. | 288 |
| | A guide to using pre-trained large language models for source code analysis and generation. | 1,789 |
| | Provides pre-trained language models and tools for fine-tuning and evaluation. | 439 |
| | A small language model designed to run efficiently on edge devices with minimal resource requirements. | 607 |
| | An end-to-end trained model that generates natural language responses together with object segmentation masks for interactive visual conversations. | 797 |
| | A tool for training and fine-tuning large language models using advanced techniques. | 387 |
| | A PyTorch-based framework for training large language models in parallel across multiple devices. | 679 |
| | Develops large language models for text understanding and generation tasks. | 85 |
| | Trains a large Chinese language model on massive data and provides a pre-trained model for downstream tasks. | 230 |
| | A library that provides a unified API for interacting with various large language models (LLMs). | 367 |
| | Trains and evaluates a universal multimodal retrieval model on various information retrieval tasks. | 114 |
| | A framework for training and fine-tuning multimodal language models on various data types. | 601 |