brozzler
Web Crawler
A distributed, browser-based web crawler that fetches pages and extracts links from websites using a real browser.
673 stars
40 watching
97 forks
Language: Python
last commit: about 18 hours ago
Linked from 1 awesome list
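The core task a browser-based crawler like brozzler automates is fetching a page and extracting its outbound links for further crawling. The sketch below illustrates that concept only, using the Python standard library and a hardcoded page; it is not brozzler's actual API.

```python
# Conceptual sketch of link extraction (NOT brozzler's API): collect href
# targets from <a> tags and resolve them against the page's base URL.
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # urljoin resolves relative paths and passes
                    # absolute URLs through unchanged.
                    self.links.append(urljoin(self.base_url, value))


page = '<a href="/about">About</a> <a href="https://example.org/">Ext</a>'
extractor = LinkExtractor("https://example.com/index.html")
extractor.feed(page)
print(extractor.links)
# ['https://example.com/about', 'https://example.org/']
```

A real browser-based crawler additionally executes JavaScript before extraction, which is why brozzler drives an actual browser rather than parsing raw HTML.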
Related projects:
| Repository | Description | Stars |
|---|---|---|
| webrecorder/browsertrix-crawler | A containerized browser-based crawler system for capturing web content in a high-fidelity and customizable manner. | 663 |
| archiveteam/grab-site | A web crawler designed to back up websites by recursively crawling and writing WARC files. | 1,400 |
| brendonboshell/supercrawler | A web crawler that obeys robots.txt rules, rate limits, and concurrency limits, with customizable content handlers for parsing and processing crawled pages. | 380 |
| cocrawler/cocrawler | A versatile web crawler built with modern tools and concurrency to handle various crawl tasks. | 187 |
| stewartmckee/cobweb | A flexible web crawler that can be used to extract data from websites in a scalable and efficient manner. | 226 |
| hominee/dyer | A fast and flexible web crawling tool with features like asynchronous I/O and event-driven design. | 134 |
| apache/incubator-stormcrawler | A scalable and versatile web crawling framework based on Apache Storm. | 893 |
| bplawler/crawler | A Scala-based DSL for programmatically accessing and interacting with web pages. | 148 |
| spider-rs/spider | A web crawler and scraper written in Rust, designed to extract data from the web in a flexible and configurable manner. | 1,185 |
| jmg/crawley | A Pythonic framework for building high-speed web crawlers with flexible data extraction and storage options. | 187 |
| helgeho/web2warc | A web crawler that creates custom archives in WARC/CDX format. | 24 |
| puerkitobio/gocrawl | A concurrent web crawler written in Go that allows flexible and polite crawling of websites. | 2,037 |
| uscdatascience/sparkler | A high-performance web crawler built on Apache Spark that fetches and analyzes web resources in real time. | 411 |
| postmodern/spidr | A Ruby web crawling library that provides flexible and customizable methods to crawl websites. | 808 |
| vida-nyu/ache | A web crawler designed to efficiently collect and prioritize relevant content from the web. | 456 |
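Several of the crawlers listed above (e.g. supercrawler) advertise robots.txt compliance as a politeness guarantee. The sketch below shows how that check works in principle, using only Python's standard-library `urllib.robotparser` with a hardcoded robots.txt; the rules and user agent are illustrative assumptions, not taken from any project above.

```python
# Sketch of robots.txt-based politeness: parse the rules once, then consult
# them before fetching each URL. The robots.txt body here is hardcoded.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 2
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# "mybot" is a hypothetical user-agent string; it matches the wildcard entry.
print(parser.can_fetch("mybot", "https://example.com/public/page"))   # True
print(parser.can_fetch("mybot", "https://example.com/private/page"))  # False
print(parser.crawl_delay("mybot"))                                    # 2
```

A polite crawler combines the `can_fetch` check with the crawl delay, sleeping between requests to the same host so it respects the site's stated rate limit.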