Surreal Crawler

Crawls sites, saving every discovered link to a SurrealDB database. It then repeatedly takes batches of 100 uncrawled links and crawls them until the crawl budget is reached. The content of each page is stored in a MinIO (S3-compatible) object store. Visualization of the crawled data is planned.
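A minimal sketch of what pulling a batch of 100 uncrawled links might look like with the SurrealDB Rust client; the table and field names (website, url, crawled), namespace, and connection details are assumptions for illustration, not necessarily what this crate actually uses:

```rust
use serde::Deserialize;
use surrealdb::engine::remote::ws::Ws;
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

// Shape of a stored link; the field names here are assumptions for this sketch.
#[derive(Debug, Deserialize)]
struct Website {
    url: String,
    crawled: bool,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a local SurrealDB instance over WebSocket.
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root {
        username: "root",
        password: "root",
    })
    .await?;
    db.use_ns("crawler").use_db("crawler").await?;

    // Pull the next batch of up to 100 links that have not been crawled yet.
    let mut response = db
        .query("SELECT * FROM website WHERE crawled = false LIMIT 100")
        .await?;
    let batch: Vec<Website> = response.take(0)?;

    for site in &batch {
        println!("would crawl {}", site.url);
    }
    Ok(())
}
```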

TODO

  • Domain filtering - prevent the crawler from wandering onto alternate-language versions of Wikipedia.
  • Conditionally save content, based on filename or file contents.
  • GUI / TUI ?
  • Better asynchronous fetching of sites - currently every request happens serially (see the sketch after this list).
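A minimal sketch of how the serial fetching could be made concurrent, using reqwest together with a bounded buffer_unordered stream from the futures crate; the fetch_batch helper and the limit of 16 in-flight requests are illustrative assumptions, not existing code in this repo:

```rust
use futures::stream::{self, StreamExt};

// Fetch a batch of URLs concurrently instead of one at a time.
async fn fetch_batch(urls: Vec<String>) -> Vec<(String, Option<String>)> {
    let client = reqwest::Client::new();

    stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                // Download the page body; errors become None for brevity.
                let body = match client.get(&url).send().await {
                    Ok(resp) => resp.text().await.ok(),
                    Err(_) => None,
                };
                (url, body)
            }
        })
        .buffer_unordered(16) // at most 16 requests in flight at a time
        .collect()
        .await
}
```

Each batch of uncrawled links pulled from SurrealDB could be passed straight into a function like this before the fetched content is written to MinIO.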