Surreal Crawler
Crawls sites, saving every link it finds to a SurrealDB database. It then takes batches of 100 uncrawled links until the crawl budget is reached, storing the content of each page in a MinIO object store.
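The batch loop might look roughly like the following. This is a minimal sketch, assuming a `page` table with `url` and `crawled` fields and the `surrealdb`, `reqwest`, `serde`, and `tokio` crates; the schema, the budget constant, and the `store_page` helper are illustrative, not the project's actual code.

```rust
// A minimal sketch of the batch loop, assuming a `page` table with `url`
// and `crawled` fields; the schema, budget, and helper are illustrative.
use serde::Deserialize;
use surrealdb::engine::remote::ws::Ws;
use surrealdb::Surreal;

#[derive(Deserialize)]
struct Page {
    url: String,
}

const CRAWL_BUDGET: usize = 1_000; // hypothetical budget

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.use_ns("crawler").use_db("crawler").await?;

    let mut crawled = 0;
    while crawled < CRAWL_BUDGET {
        // Pull the next batch of 100 uncrawled links.
        let mut res = db
            .query("SELECT url FROM page WHERE crawled = false LIMIT 100")
            .await?;
        let batch: Vec<Page> = res.take(0)?;
        if batch.is_empty() {
            break; // frontier exhausted before the budget was spent
        }
        for page in batch {
            // Fetch the page; a real crawler would also extract and
            // insert newly discovered links here.
            let body = reqwest::get(page.url.as_str()).await?.text().await?;
            store_page(&page.url, &body).await?;
            db.query("UPDATE page SET crawled = true WHERE url = $url")
                .bind(("url", page.url))
                .await?;
            crawled += 1;
        }
    }
    Ok(())
}

// Stub for the MinIO upload; in practice this would go through an
// S3-compatible client pointed at the MinIO endpoint.
async fn store_page(_url: &str, _body: &str) -> Result<(), Box<dyn std::error::Error>> {
    Ok(())
}
```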
TODO
- Domain filtering - prevent the crawler from wandering onto alternate versions of Wikipedia (mirrors and other language editions).
- Conditionally save content - based on filename or file contents
- GUI / TUI ?
- Better asynchronous fetching of sites; currently everything happens serially (see the sketch after this list). 3/19/25: it took 20 min to crawl 100 pages, which meant we stored 100 pages, 142,997 URLs, and 1,425,798 links between them.
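On the last item: one common way to parallelize the fetches is `futures::StreamExt::buffer_unordered`, which keeps a bounded number of requests in flight. A minimal sketch, assuming the batch of URLs has already been pulled from SurrealDB; the concurrency limit of 16 and the function name are arbitrary choices, not the project's code.

```rust
use futures::stream::{self, StreamExt};

// Fetch a batch of URLs with up to 16 requests in flight, instead of
// awaiting each page one at a time. The limit of 16 is arbitrary.
async fn fetch_batch(urls: Vec<String>) -> Vec<(String, Option<String>)> {
    let client = reqwest::Client::new();
    stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                // A failed request yields None rather than aborting the batch.
                let body = match client.get(url.as_str()).send().await {
                    Ok(resp) => resp.text().await.ok(),
                    Err(_) => None,
                };
                (url, body)
            }
        })
        .buffer_unordered(16)
        .collect()
        .await
}
```

Responses complete out of order, so the budget check and the SurrealDB updates would move to wherever the results are consumed.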