cobweb

Web Crawler

A flexible web crawler for extracting data from websites in a scalable, efficient manner. It offers highly configurable crawl options and can run either standalone or with Resque to perform clustered crawls.
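At its core, a standalone crawl is a breadth-first loop over a URL frontier: dequeue a URL, fetch it, extract links, and enqueue the ones not yet visited. The sketch below illustrates that pattern in plain Ruby; the `PAGES` hash and `crawl` helper are illustrative stand-ins for real HTTP fetches, not cobweb's actual API.

```ruby
require 'set'

# Stand-in for the web: each URL maps to the links found on that page.
# A real crawler would fetch the page over HTTP and extract links instead.
PAGES = {
  "http://example.com/"  => ["http://example.com/a", "http://example.com/b"],
  "http://example.com/a" => ["http://example.com/b"],
  "http://example.com/b" => []
}

# Breadth-first crawl starting from start_url; returns the visited URLs.
def crawl(start_url)
  visited = Set.new
  queue   = [start_url]
  until queue.empty?
    url = queue.shift
    next if visited.include?(url)   # skip URLs we have already crawled
    visited << url
    links = PAGES.fetch(url, [])    # in a real crawl: fetch + parse links
    queue.concat(links)
  end
  visited.to_a
end

puts crawl("http://example.com/").inspect
```

A clustered crawl follows the same loop, but instead of a local queue the discovered URLs are pushed onto a shared Resque queue, so multiple workers can fetch pages in parallel.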

GitHub

226 stars
9 watching
45 forks
Language: Ruby
Last commit: almost 2 years ago
Linked from 1 awesome list


Related projects:

| Repository | Description | Stars |
| --- | --- | --- |
| rivermont/spidy | A simple command-line web crawler that automatically extracts links from web pages and can be run in parallel for efficient crawling | 340 |
| brendonboshell/supercrawler | A web crawler designed to obey robots.txt rules, rate limits, and concurrency limits, with customizable content handlers for parsing and processing crawled pages | 378 |
| cocrawler/cocrawler | A versatile web crawler built with modern tools and concurrency to handle various crawl tasks | 187 |
| apache/incubator-stormcrawler | A collection of resources for building web crawlers on Apache Storm using Java | 891 |
| jmg/crawley | A Pythonic framework for building high-speed web crawlers with flexible data extraction and storage options | 186 |
| webrecorder/browsertrix-crawler | A containerized browser-based crawler system for capturing web content in a high-fidelity and customizable manner | 652 |
| hominee/dyer | A fast and flexible web crawling tool with features like asynchronous I/O and an event-driven design | 133 |
| uscdatascience/sparkler | A high-performance web crawler built on Apache Spark that fetches and analyzes web resources in real time | 410 |
| internetarchive/brozzler | A distributed web crawler that fetches and extracts links from websites using a real browser | 671 |
| amoilanen/js-crawler | A Node.js module for crawling websites and scraping their content | 253 |
| vida-nyu/ache | A web crawler designed to efficiently collect and prioritize relevant content from the web | 454 |
| archiveteam/grab-site | A web crawler designed to back up websites by recursively crawling them and writing WARC files | 1,398 |
| dyweb/scrala | A web crawling framework written in Scala that lets users define the start URL and parse the response | 113 |
| twiny/spidy | Tools to crawl websites and collect domain names with their availability status | 149 |
| c-sto/recursebuster | A tool for recursively querying web servers by sending HTTP requests and analyzing responses to discover hidden content | 242 |