browsertrix-crawler
A containerized, browser-based crawler system for capturing web content in a high-fidelity and customizable manner. The crawler runs as a single Docker container.
677 stars · 24 watching · 86 forks
Language: TypeScript
Last commit: 11 months ago
Linked from 1 awesome list
Topics: crawler, crawling, wacz, warc, web-archiving, web-crawler, webrecorder
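A minimal sketch of the single-container usage described above, based on the project's published image and CLI; the collection name and mounted output directory are illustrative:

```shell
# Pull the published crawler image.
docker pull webrecorder/browsertrix-crawler

# Run one crawl; output (WARC/WACZ data) is written to the mounted
# ./crawls directory on the host.
docker run -v "$PWD/crawls:/crawls/" -it webrecorder/browsertrix-crawler crawl \
  --url https://example.com/ \
  --generateWACZ \
  --collection my-collection
```

The `--generateWACZ` flag packages the crawl into a single WACZ file suitable for replay in tools such as ReplayWeb.page.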
 Related projects:
| Repository | Description | Stars | 
|---|---|---|
|  | A web crawler designed to back up websites by recursively crawling and writing WARC files. | 1,406 |
|  | A web crawler that creates custom archives in WARC/CDX format. | 25 |
|  | A distributed web crawler that fetches and extracts links from websites using a real browser. | 678 |
|  | A high-fidelity web archiving system for storing and replaying interactive web pages in browsers. | 903 |
|  | A versatile web crawler built with modern tools and concurrency to handle various crawl tasks. | 188 |
|  | A flexible web crawler that can be used to extract data from websites in a scalable and efficient manner. | 226 |
|  | A tool for web data extraction and processing using Rust. | 1,234 |
|  | A high-performance web crawling and scraping solution with customizable settings and worker pooling. | 945 |
|  | A web crawler designed to crawl websites while obeying robots.txt rules, rate limits, and concurrency limits, with customizable content handlers for parsing and processing crawled pages. | 380 |
|  | A Node.js module for crawling web sites and scraping their content. | 254 |
|  | A tool for end-to-end testing of web applications by crawling and comparing screenshots. | 33 |
|  | A scalable and versatile web crawling framework based on Apache Storm. | 895 |
|  | A simple command-line web crawler that automatically extracts links from web pages and can be run in parallel for efficient crawling. | 340 |
|  | A Ruby-based tool for web crawling and data extraction, aiming to be a replacement for paid software in the SEO space. | 143 |
|  | A web crawler designed to efficiently collect and prioritize relevant content from the web. | 459 |