cocrawler

Web Crawler

CoCrawler is a versatile web crawler built with modern tools and concurrency to handle a variety of crawl tasks.
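CoCrawler's topics point to asyncio and aiohttp as its concurrency foundation. The sketch below illustrates that general style of bounded-concurrency crawling (a queue of URLs drained by a fixed pool of worker coroutines); it is not CoCrawler's actual API, and the `fetch` stub stands in for what would be an `aiohttp.ClientSession.get()` call in a real crawler.

```python
# Hypothetical sketch of queue-based concurrent fetching, the pattern an
# asyncio/aiohttp crawler like CoCrawler is built around. Not CoCrawler's
# real internals.
import asyncio


async def fetch(url: str) -> str:
    # Stand-in for an aiohttp request; a real crawler would do
    # `async with session.get(url) as resp: return await resp.text()`.
    await asyncio.sleep(0)
    return f"<html>{url}</html>"


async def worker(queue: asyncio.Queue, results: dict) -> None:
    # Each worker pulls URLs until the crawl is done and it gets cancelled.
    while True:
        url = await queue.get()
        try:
            results[url] = await fetch(url)
        finally:
            queue.task_done()


async def crawl(urls, concurrency: int = 3) -> dict:
    queue: asyncio.Queue = asyncio.Queue()
    for u in urls:
        queue.put_nowait(u)
    results: dict = {}
    workers = [asyncio.create_task(worker(queue, results))
               for _ in range(concurrency)]
    await queue.join()  # wait until every queued URL has been processed
    for w in workers:   # workers loop forever; stop them explicitly
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)
    return results
```

The `concurrency` parameter caps in-flight requests, which is the usual way an async crawler stays polite without blocking threads. Run it with `asyncio.run(crawl(["http://a.example", "http://b.example"]))`.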

GitHub

188 stars
20 watching
24 forks
Language: Python
Last commit: over 3 years ago
Linked from 1 awesome list

Topics: aiohttp, aiohttp-client, async-python, concurrency, crawler, pluggable-modules, python3, screenshot, warc

Related projects:

- brendonboshell/supercrawler (380 stars): A web crawler that obeys robots.txt rules, rate limits, and concurrency limits, with customizable content handlers for parsing and processing crawled pages.
- apache/incubator-stormcrawler (895 stars): A scalable and versatile web crawling framework based on Apache Storm.
- stewartmckee/cobweb (226 stars): A flexible web crawler for extracting data from websites in a scalable and efficient manner.
- puerkitobio/gocrawl (2,036 stars): A concurrent web crawler written in Go that allows flexible and polite crawling of websites.
- webrecorder/browsertrix-crawler (677 stars): A containerized, browser-based crawler system for capturing web content with high fidelity and customizability.
- turnersoftware/infinitycrawler (248 stars): A web crawling library for .NET with customizable crawling and throttling.
- internetarchive/brozzler (678 stars): A distributed web crawler that fetches and extracts links from websites using a real browser.
- fmpwizard/owlcrawler (55 stars): A distributed web crawler that coordinates crawling tasks across multiple worker processes using a message bus.
- archiveteam/grab-site (1,406 stars): A web crawler designed to back up websites by recursively crawling and writing WARC files.
- jmg/crawley (188 stars): A Pythonic framework for building high-speed web crawlers with flexible data extraction and storage options.
- hu17889/go_spider (1,827 stars): A modular, concurrent web crawler framework written in Go.
- rivermont/spidy (340 stars): A simple command-line web crawler that automatically extracts links from web pages and can run in parallel for efficient crawling.
- elliotgao2/gain (2,037 stars): A Python web crawling framework that uses asyncio and aiohttp for efficient data extraction.
- spider-rs/spider (1,234 stars): A tool for web data extraction and processing, written in Rust.
- helgeho/web2warc (25 stars): A web crawler that creates custom archives in WARC/CDX format.