readme updates

This commit is contained in:
Rushmore75 2025-03-19 15:05:32 -06:00
parent 71b7b2d7bc
commit b9c1f0b492


@@ -7,6 +7,9 @@ Crawls sites saving all the found links to a surrealdb database. It then proceed
- [ ] Domain filtering - prevent the crawler from going on alternate versions of wikipedia.
- [ ] Conditionally save content - based on filename or file contents
- [ ] GUI / TUI ?
- [ ] Better asynchronous getting of the sites. Currently it all happens serially.
- [x] GUI / TUI ? - Grafana
- [x] Better asynchronous getting of the sites. Currently it all happens serially.
- [ ] Allow for storing asynchronously
3/19/25: Took 20min to crawl 100 pages
This meant we stored 100 pages, 142,997 URLs, and 1,425,798 links between them.
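
As a minimal sketch of what the "better asynchronous getting" item could look like in practice (not this repository's actual code), the snippet below fetches a batch of pages concurrently instead of one after another. It assumes the `tokio`, `reqwest`, and `futures` crates; the seed URLs and the concurrency limit of 16 are illustrative placeholders, and the real crawler would presumably pull its URL queue from the SurrealDB database instead.

```rust
use futures::stream::{self, StreamExt};

#[tokio::main] // assumes tokio with the "macros" and "rt-multi-thread" features
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();

    // Hypothetical seed URLs; the actual crawler would read pending URLs
    // from its database rather than hard-coding them.
    let urls = vec![
        "https://en.wikipedia.org/wiki/Web_crawler".to_string(),
        "https://en.wikipedia.org/wiki/Main_Page".to_string(),
    ];

    // Turn the URL list into a stream of in-flight requests and keep up to
    // 16 of them running at once instead of awaiting each one serially.
    let pages = stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                let body = client.get(url.as_str()).send().await?.text().await?;
                Ok::<_, reqwest::Error>((url, body))
            }
        })
        .buffer_unordered(16);

    // Consume results as they complete, in whatever order they finish.
    pages
        .for_each(|result| async move {
            match result {
                Ok((url, body)) => println!("{url}: {} bytes", body.len()),
                Err(e) => eprintln!("request failed: {e}"),
            }
        })
        .await;

    Ok(())
}
```

The same `buffer_unordered` pattern would also apply to the unchecked "Allow for storing asynchronously" item: database writes could be issued as futures and buffered the same way, rather than blocking the crawl loop on each insert.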