Surreal Crawler

Crawls sites, saving all discovered links to a SurrealDB database. It then takes batches of 100 uncrawled links until the crawl budget is reached, storing the content of each page in a MinIO object store.
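
A minimal sketch of that loop, assuming a SurrealDB `website` table with `url` and `crawled` fields reachable over WebSocket. The `store_page` helper is a hypothetical stand-in for the MinIO upload, and authentication, link extraction, and error handling are simplified.

```rust
use surrealdb::engine::remote::ws::Ws;
use surrealdb::Surreal;

// Hypothetical stand-in for the MinIO upload of a page body.
async fn store_page(url: &str, body: &str) {
    println!("stored {url} ({} bytes)", body.len());
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.use_ns("crawler").use_db("crawler").await?;

    let client = reqwest::Client::new();
    let budget = 1_000; // total number of pages to crawl
    let mut crawled = 0;

    while crawled < budget {
        // Grab the next batch of up to 100 uncrawled links.
        let mut res = db
            .query("SELECT url FROM website WHERE crawled = false LIMIT 100")
            .await?;
        let urls: Vec<String> = res.take((0, "url"))?;
        if urls.is_empty() {
            break; // nothing left to crawl
        }

        for url in urls {
            // Fetch the page; failures are simply skipped in this sketch.
            if let Ok(resp) = client.get(url.as_str()).send().await {
                if let Ok(body) = resp.text().await {
                    store_page(&url, &body).await;
                }
            }
            // Mark the link as crawled; newly discovered links would be
            // inserted as further `website` records at this point.
            db.query("UPDATE website SET crawled = true WHERE url = $url")
                .bind(("url", url))
                .await?;
            crawled += 1;
        }
    }
    Ok(())
}
```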

TODO

  • Domain filtering - prevent the crawler from wandering onto alternate versions of Wikipedia.
  • Conditionally save content - based on filename or file contents
  • GUI / TUI ?
  • Better asynchronous fetching of sites - currently everything happens serially (see the sketch after this list). 3/19/25: it took 20 min to crawl 100 pages, which meant we stored 100 pages, 142,997 URLs, and 1,425,798 links between them.
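
One possible approach to the last item is to fetch each batch with bounded concurrency rather than one request at a time. This is a sketch only; the concurrency limit and the example URLs are placeholders, and wiring results back into SurrealDB/MinIO is left out.

```rust
use futures::stream::{self, StreamExt};

// Fetch a batch of URLs with at most `concurrency` requests in flight at once,
// returning each URL paired with its body (or None if the request failed).
async fn fetch_batch(urls: Vec<String>, concurrency: usize) -> Vec<(String, Option<String>)> {
    let client = reqwest::Client::new();
    stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                let body = match client.get(url.as_str()).send().await {
                    Ok(resp) => resp.text().await.ok(),
                    Err(_) => None,
                };
                (url, body)
            }
        })
        .buffer_unordered(concurrency)
        .collect()
        .await
}

#[tokio::main]
async fn main() {
    let urls = vec![
        "https://example.com/".to_string(),
        "https://example.org/".to_string(),
    ];
    for (url, body) in fetch_batch(urls, 16).await {
        println!("{url}: {} bytes", body.map(|b| b.len()).unwrap_or(0));
    }
}
```

With a batch of 100 links and a limit of, say, 16, requests overlap instead of each waiting on the previous response, which is likely where most of the 20-minute crawl time went.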
Description
Web crawler + storage + visualization (soon)