2025-03-20 15:11:01 -06:00
parent b9c1f0b492
commit 7df19a480f
9 changed files with 63 additions and 105 deletions


@@ -2,7 +2,6 @@
Crawls sites, saving all found links to a SurrealDB database. It then takes batches of 100 uncrawled links until the crawl budget is reached. The data for each site is saved in MinIO.
### TODO
- [ ] Domain filtering - prevent the crawler from wandering onto alternate versions of Wikipedia.
@@ -11,5 +10,14 @@ Crawls sites saving all the found links to a surrealdb database. It then proceed
- [x] Better asynchronous fetching of sites. Previously this all happened serially.
- [ ] Allow for storing asynchronously
3/17/25: Took >1hr to crawl 100 pages
3/19/25: Took 20min to crawl 1000 pages
This meant we stored 1000 pages, 142,997 URLs, and 1,425,798 links between the two.
3/20/25: Took 5min to crawl 1000 pages
# About
![Screenshot](/pngs/graphana.png)
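
Below is a minimal sketch (in Rust, assuming the `tokio`, `reqwest`, and `futures` crates) of the batch-crawl loop described above. It is not the project's actual code: SurrealDB and MinIO are replaced by in-memory stand-ins, link extraction is a naive string scan, and the names `BATCH_SIZE`, `CRAWL_BUDGET`, and `extract_links` are illustrative only.

```rust
use std::collections::{HashSet, VecDeque};

const BATCH_SIZE: usize = 100;    // take 100 uncrawled links per batch
const CRAWL_BUDGET: usize = 1000; // stop after this many pages

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Frontier of uncrawled URLs; the real crawler keeps this in SurrealDB.
    let mut frontier: VecDeque<String> = VecDeque::from(vec![
        "https://en.wikipedia.org/wiki/Web_crawler".to_string(), // example seed
    ]);
    let mut seen: HashSet<String> = frontier.iter().cloned().collect();
    let mut crawled = 0usize;

    while crawled < CRAWL_BUDGET && !frontier.is_empty() {
        // Take a batch of up to BATCH_SIZE uncrawled links.
        let batch: Vec<String> = (0..BATCH_SIZE)
            .filter_map(|_| frontier.pop_front())
            .collect();

        // Fetch the whole batch concurrently instead of one page at a time.
        let bodies = futures::future::join_all(
            batch.iter().map(|url| async move {
                reqwest::get(url.as_str()).await?.text().await
            }),
        )
        .await;

        for body in bodies {
            let Ok(body) = body else { continue }; // skip failed fetches
            crawled += 1;

            // The real crawler stores the page body in MinIO and records every
            // discovered link in SurrealDB; here we only queue newly seen
            // links for later batches.
            for link in extract_links(&body) {
                if seen.insert(link.clone()) {
                    frontier.push_back(link);
                }
            }
            if crawled >= CRAWL_BUDGET {
                break;
            }
        }
    }

    println!("crawled {crawled} pages, discovered {} urls", seen.len());
    Ok(())
}

/// Very naive href extraction, just enough for the sketch.
fn extract_links(html: &str) -> Vec<String> {
    html.split("href=\"")
        .skip(1)
        .filter_map(|chunk| chunk.split('"').next())
        .filter(|href| href.starts_with("http"))
        .map(str::to_string)
        .collect()
}
```

Fetching a whole batch with `join_all` rather than awaiting each page in turn is roughly the kind of change the checked-off asynchronous-fetching item above describes.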