Surreal Crawler

Crawls sites, saving all discovered links to a SurrealDB database. It then takes batches of 100 uncrawled links until the crawl budget is reached. The data of each page is saved to a MinIO bucket.
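
Below is a minimal sketch of that loop, not the project's actual code: the SurrealDB and MinIO calls are stubbed out as hypothetical helpers, and only the fetching uses real crates (reqwest + tokio).

// Sketch only. take_uncrawled_batch, save_links, store_body and extract_links
// are hypothetical stand-ins for the real SurrealDB/MinIO logic.
use reqwest::Client;

struct Link {
    url: String,
}

async fn take_uncrawled_batch(_limit: usize) -> Vec<Link> { Vec::new() } // query SurrealDB
async fn save_links(_found: &[String]) {}                                // insert into SurrealDB
async fn store_body(_url: &str, _body: &str) {}                          // put object into MinIO
fn extract_links(_body: &str) -> Vec<String> { Vec::new() }              // parse hrefs from HTML

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let budget = 1000; // crawl budget: stop after this many pages
    let mut crawled = 0;

    while crawled < budget {
        // Take the next batch of (up to) 100 uncrawled links.
        let batch = take_uncrawled_batch(100).await;
        if batch.is_empty() {
            break;
        }
        for link in batch {
            if crawled >= budget {
                break;
            }
            let body = client.get(link.url.as_str()).send().await?.text().await?;
            store_body(&link.url, &body).await;      // page data -> MinIO
            save_links(&extract_links(&body)).await; // newly found links -> SurrealDB
            crawled += 1;
        }
    }
    Ok(())
}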

How to use

  1. Clone the repo and cd into it.
  2. Build the repo with cargo build -r
  3. Start the Docker containers
    1. cd into the docker folder: cd docker
    2. Bring up the containers: docker compose up -d
  4. From the project's root, edit the Crawler.toml file to your liking (a hypothetical sketch of the kind of settings involved follows this list).
  5. Run with ./target/release/internet_mapper
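
The real keys in Crawler.toml are defined by the repo and not reproduced here; purely as a hypothetical illustration of the kind of settings the description above implies (start URL, crawl budget, batch size, database endpoints), the config could be deserialized with serde + toml like this:

// Hypothetical sketch: field names are illustrative and will not match the
// real Crawler.toml. Requires the serde (with "derive") and toml crates.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Config {
    start_url: String,   // where the crawl begins (hypothetical key)
    crawl_budget: usize, // stop after this many pages (hypothetical key)
    batch_size: usize,   // e.g. 100 uncrawled links per batch (hypothetical key)
    surreal_url: String, // SurrealDB endpoint (hypothetical key)
    minio_url: String,   // MinIO endpoint (hypothetical key)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string("Crawler.toml")?;
    let cfg: Config = toml::from_str(&raw)?;
    println!("{cfg:?}");
    Ok(())
}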

You can view stats of the project at http://<your-ip>:3000/dashboards

# Untested script but probably works
git clone https://git.oliveratkinson.net/Oliver/internet_mapper.git
cd internet_mapper

cargo build -r

cd docker
docker compose up -d
cd ..

$EDITOR Crawler.toml

./target/release/internet_mapper

TODO

  • Domain filtering - prevent the crawler from wandering onto alternate versions of Wikipedia.
  • Conditionally save content - based on filename or file contents.
  • GUI / TUI? - Grafana.
  • Better asynchronous fetching of sites - currently it all happens serially (see the sketch after this list).
  • Allow for storing asynchronously - dropping the "links to" logic removes this need.
  • Control the crawler via the config file (no recompilation needed).
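
The sketch below is only an illustration of the asynchronous-fetching idea (bounded concurrency over a batch of URLs with tokio and the futures crate), not the project's planned implementation.

// Illustration of concurrent fetching for a batch of URLs; not project code.
use futures::stream::{self, StreamExt};
use reqwest::Client;

async fn fetch_batch(client: &Client, urls: Vec<String>) -> Vec<(String, Option<String>)> {
    stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                // Fold errors into None so one bad page doesn't abort the batch.
                let body = match client.get(url.as_str()).send().await {
                    Ok(resp) => resp.text().await.ok(),
                    Err(_) => None,
                };
                (url, body)
            }
        })
        // At most 16 requests in flight at once; tune to taste.
        .buffer_unordered(16)
        .collect::<Vec<_>>()
        .await
}

#[tokio::main]
async fn main() {
    let client = Client::new();
    let pages = fetch_batch(&client, vec!["https://example.com".to_string()]).await;
    println!("fetched {} pages", pages.len());
}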

3/17/25: Took >1hr to crawl 100 pages

3/19/25: Took 20min to crawl 1000 pages. This meant we stored 1000 pages, 142,997 URLs, and 1,425,798 links between the two.

3/20/25: Took 5min to crawl 1000 pages

3/21/25: Took 3min to crawl 1000 pages
