From b9c1f0b49287d4de62741f17b4c554821fda6b2a Mon Sep 17 00:00:00 2001
From: Rushmore75
Date: Wed, 19 Mar 2025 15:05:32 -0600
Subject: [PATCH] readme updates

---
 README.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 67e2356..4832c6b 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,9 @@ Crawls sites saving all the found links to a surrealdb database. It then proceed
 - [ ] Domain filtering - prevent the crawler from going on alternate versions of wikipedia.
 - [ ] Conditionally save content - based on filename or file contents
-- [ ] GUI / TUI ?
-- [ ] Better asynchronous getting of the sites. Currently it all happens serially.3/19/25: Took 20min to crawl 100 pages
+- [x] GUI / TUI ? - Grafana
+- [x] Better asynchronous getting of the sites. Previously this all happened serially.
+- [ ] Allow for storing asynchronously
+
+3/19/25: Took 20 min to crawl 100 pages. This meant we stored 100 pages, 142,997 urls, and 1,425,798 links between the two.