spidy

Domain crawler

Domain names collector: crawl websites and collect domain names along with their availability status.

GitHub

151 stars
6 watching
27 forks
Language: Go
Last commit: over 1 year ago
Topics: backlinks, crawler, domain, expired-domain, golang, scraper, seo, tools, spider
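
For illustration, here is a minimal Go sketch of the idea described above: fetch a page, pull hostnames out of its links, and use a DNS lookup failure as a rough availability signal. This is not spidy's actual implementation; the seed URL, the `hrefPattern` regex, and the DNS-failure heuristic are all simplifying assumptions.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
	"regexp"
)

// hrefPattern is a deliberately crude way to find absolute links; a real
// crawler would use an HTML tokenizer such as golang.org/x/net/html.
var hrefPattern = regexp.MustCompile(`href="(https?://[^"]+)"`)

func main() {
	// Hypothetical seed URL; a real crawler would take seeds as arguments.
	resp, err := http.Get("https://example.com")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	seen := map[string]bool{}
	for _, m := range hrefPattern.FindAllStringSubmatch(string(body), -1) {
		u, err := url.Parse(m[1])
		if err != nil || u.Hostname() == "" || seen[u.Hostname()] {
			continue
		}
		seen[u.Hostname()] = true

		// A failed DNS lookup is only a rough hint that the domain may be
		// unregistered; a WHOIS/RDAP query would be needed to confirm.
		_, lookupErr := net.LookupHost(u.Hostname())
		fmt.Printf("%-30s maybe-available=%v\n", u.Hostname(), lookupErr != nil)
	}
}
```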

Related projects:

| Repository | Description | Stars |
|---|---|---|
| postmodern/spidr | A Ruby web crawling library that provides flexible and customizable methods to crawl websites | 809 |
| rivermont/spidy | A simple command-line web crawler that automatically extracts links from web pages and can be run in parallel for efficient crawling | 340 |
| rndinfosecguy/scavenger | An OSINT bot that crawls pastebin sites to search for sensitive data leaks | 634 |
| stewartmckee/cobweb | A flexible web crawler that can be used to extract data from websites in a scalable and efficient manner | 226 |
| hu17889/go_spider | A modular, concurrent web crawler framework written in Go | 1,827 |
| spatie/laravel-site-search | A package to create a private search index by crawling and indexing a website | 275 |
| glebarez/cero | A tool that extracts domain names from the SSL certificates of arbitrary hosts during TLS handshakes (see the sketch after this table) | 623 |
| elixir-crawly/crawly | A framework for extracting structured data from websites | 994 |
| diogo-fernan/domfind | A tool to find identical domain names with SOA DNS records under different TLDs | 24 |
| edoardottt/cariddi | A tool for crawling and scanning websites for sensitive information such as endpoints, secrets, and tokens | 1,551 |
| iamstoxe/urlgrab | A tool to crawl websites by exploring links recursively, with support for JavaScript rendering | 331 |
| amoilanen/js-crawler | A Node.js module for crawling websites and scraping their content | 254 |
| s0rg/crawley | A utility for systematically extracting URLs from web pages and printing them to the console | 268 |
| dyweb/scrala | A web crawling framework written in Scala that lets users define a start URL and parse the responses from it | 113 |
| italolelis/reachable | A CLI tool to check the availability of web domains | 33 |
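
The certificate-based approach mentioned in cero's description can be illustrated with Go's standard library: complete a TLS handshake and read the DNS names the peer's certificate claims, without sending any HTTP request. This is a generic sketch of that idea, not cero's code; the target host is a placeholder.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Placeholder target; cero scans arbitrary hosts supplied by the user.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
		InsecureSkipVerify: true, // we only want the certificate, not a trusted session
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Print every DNS name (SANs) found in the presented certificate chain.
	for _, cert := range conn.ConnectionState().PeerCertificates {
		for _, name := range cert.DNSNames {
			fmt.Println(name)
		}
	}
}
```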