urlgrab
Link crawler
A Go utility to spider through a website, recursively exploring links to discover additional URLs, with support for JavaScript rendering.
Stars: 331
Watchers: 10
Forks: 60
Language: Go
Last commit: over 4 years ago
Linked from 1 awesome list
Topic: spider
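The core idea behind a tool like this is a recursive link crawl: fetch a page, pull the `href` attributes out of its anchors, resolve them against the page's base URL, and recurse into links that have not been visited yet. The sketch below is a minimal illustration of that loop in Go, not urlgrab's actual implementation; the function names, depth limit, and same-host restriction are assumptions made for the example, and it omits JavaScript rendering, which in practice requires driving a headless browser. It relies on the `golang.org/x/net/html` parser (`go get golang.org/x/net/html`).

```go
// Minimal recursive link-crawler sketch (illustrative only, not urlgrab's code).
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"os"

	"golang.org/x/net/html"
)

// crawl fetches a page, prints its URL, and recursively follows same-host
// links up to the given depth, skipping URLs that were already visited.
func crawl(rawURL string, depth int, visited map[string]bool) {
	if depth <= 0 || visited[rawURL] {
		return
	}
	visited[rawURL] = true
	fmt.Println(rawURL)

	resp, err := http.Get(rawURL)
	if err != nil {
		return
	}
	defer resp.Body.Close()

	base, err := url.Parse(rawURL)
	if err != nil {
		return
	}

	doc, err := html.Parse(resp.Body)
	if err != nil {
		return
	}

	// Walk the parsed HTML tree and collect href attributes from <a> tags.
	var walk func(*html.Node)
	walk = func(n *html.Node) {
		if n.Type == html.ElementNode && n.Data == "a" {
			for _, a := range n.Attr {
				if a.Key != "href" {
					continue
				}
				link, err := base.Parse(a.Val) // resolve relative URLs
				if err != nil {
					continue
				}
				if link.Host == base.Host { // stay on the starting host
					crawl(link.String(), depth-1, visited)
				}
			}
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			walk(c)
		}
	}
	walk(doc)
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: crawler <start-url>")
		os.Exit(1)
	}
	crawl(os.Args[1], 3, make(map[string]bool))
}
```

Run it against a start URL (for example, `go run main.go https://example.com`). A production crawler such as urlgrab layers concurrency, scope and politeness controls, and JavaScript rendering on top of this basic skeleton.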
Related projects:
| Repository | Description | Stars |
|---|---|---|
| | A modular, concurrent web crawler framework written in Go. | 1,827 |
| | A Pythonic framework for building high-speed web crawlers with flexible data extraction and storage options. | 188 |
| | A framework for building fast and efficient web crawlers and scrapers in Go. | 261 |
| | A utility for systematically extracting URLs from web pages and printing them to the console. | 268 |
| | A concurrent web crawler written in Go that allows flexible and polite crawling of websites. | 2,036 |
| | A Node.js module for crawling websites and scraping their content. | 254 |
| | A tool to extract URLs from HTML attributes by crawling pages and evaluating JavaScript. | 255 |
| | A distributed web crawler that fetches and extracts links from websites using a real browser. | 678 |
| | A web crawler that obeys robots.txt rules, rate limits, and concurrency limits, with customizable content handlers for parsing and processing crawled pages. | 380 |
| | A framework for building cross-platform web crawlers in Go. | 780 |
| | A flexible web crawler for extracting data from websites in a scalable and efficient manner. | 226 |
| | A simple command-line web crawler that automatically extracts links from web pages and can run in parallel for efficient crawling. | 340 |
| | A web crawler designed to efficiently collect and prioritize relevant content from the web. | 459 |
| | Tools to crawl websites and collect domain names with availability status. | 151 |
| | A distributed web crawler that coordinates crawling tasks across multiple worker processes using a message bus. | 55 |