diff --git a/README.md b/README.md
index 94913df..67e2356 100644
--- a/README.md
+++ b/README.md
@@ -8,4 +8,6 @@ Crawls sites saving all the found links to a surrealdb database. It then proceed
 - [ ] Domain filtering - prevent the crawler from going on alternate versions of wikipedia.
 - [ ] Conditionally save content - based on filename or file contents
 - [ ] GUI / TUI ?
-- [ ] Better asynchronous getting of the sites. Currently it all happens serially.
\ No newline at end of file
+- [ ] Better asynchronous getting of the sites. Currently it all happens serially.
+3/19/25: Took 20 min to crawl 100 pages.