browsertrix-crawler

A containerized, browser-based crawler system for capturing web content in a high-fidelity and customizable manner. It runs a complete browser-based web archiving crawl in a single Docker container.
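As a rough sketch of how such a single-container crawl is typically launched (the image name and the `--url`, `--generateWACZ`, and `--collection` flags follow the project's README at the time of writing; the target URL and collection name here are placeholders):

```shell
# Run a crawl of one site inside the browsertrix-crawler container,
# mounting ./crawls on the host so the WARC/WACZ output survives the run.
docker pull webrecorder/browsertrix-crawler

docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --generateWACZ --collection my-crawl
```

If the flags above match the installed version, the finished archive lands under `./crawls/collections/my-crawl/` on the host, with the `.wacz` file ready for replay in tools such as ReplayWeb.page.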
677 stars · 24 watching · 86 forks
Language: TypeScript
Last commit: 4 days ago
Linked from 1 awesome list
Tags: crawler, crawling, wacz, warc, web-archiving, web-crawler, webrecorder
Related projects:
| Repository | Description | Stars |
| --- | --- | --- |
| archiveteam/grab-site | A web crawler designed to back up websites by recursively crawling and writing WARC files. | 1,406 |
| helgeho/web2warc | A web crawler that creates custom archives in WARC/CDX format. | 25 |
| internetarchive/brozzler | A distributed web crawler that fetches and extracts links from websites using a real browser. | 678 |
| webrecorder/archiveweb.page | A high-fidelity web archiving system for storing and replaying interactive web pages in browsers. | 903 |
| cocrawler/cocrawler | A versatile web crawler built with modern tools and concurrency to handle various crawl tasks. | 188 |
| stewartmckee/cobweb | A flexible web crawler for extracting data from websites in a scalable and efficient manner. | 226 |
| spider-rs/spider | A tool for web data extraction and processing using Rust. | 1,234 |
| fredwu/crawler | A high-performance web crawling and scraping solution with customizable settings and worker pooling. | 945 |
| brendonboshell/supercrawler | A web crawler that obeys robots.txt rules, rate limits, and concurrency limits, with customizable content handlers for parsing and processing crawled pages. | 380 |
| amoilanen/js-crawler | A Node.js module for crawling websites and scraping their content. | 254 |
| apiel/test-crawler | A tool for end-to-end testing of web applications by crawling and comparing screenshots. | 33 |
| apache/incubator-stormcrawler | A scalable and versatile web crawling framework based on Apache Storm. | 895 |
| rivermont/spidy | A simple command-line web crawler that automatically extracts links from web pages and can run in parallel for efficient crawling. | 340 |
| joenorton/rubyretriever | A Ruby-based tool for web crawling and data extraction, aiming to replace paid software in the SEO space. | 143 |
| vida-nyu/ache | A web crawler designed to efficiently collect and prioritize relevant content from the web. | 459 |