readme updates
@@ -7,6 +7,9 @@ Crawls sites saving all the found links to a surrealdb database. It then proceed
 - [ ] Domain filtering - prevent the crawler from going on alternate versions of Wikipedia.
 - [ ] Conditionally save content - based on filename or file contents
-- [ ] GUI / TUI ?
-- [ ] Better asynchronous getting of the sites. Currently it all happens serially.
+- [x] GUI / TUI ? - Grafana
+- [x] Better asynchronous getting of the sites. Currently it all happens serially.
+- [ ] Allow for storing asynchronously
+
+3/19/25: Took 20 min to crawl 100 pages.
+This meant we stored 100 pages, 142,997 urls, and 1,425,798 links between the pages and the urls.
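
The two asynchronous items above ("better asynchronous getting of the sites" and "allow for storing asynchronously") boil down to replacing a serial fetch-then-store loop with bounded concurrency. Below is a minimal sketch of that pattern, assuming a Rust crawler built on tokio, reqwest, futures, and the surrealdb client; the crate choices, the `crawler` namespace, the `page` table, the server address, and the credentials are illustrative assumptions, not details taken from this repository.

```rust
use futures::{stream, StreamExt};
use surrealdb::engine::remote::ws::{Client, Ws};
use surrealdb::opt::auth::Root;
use surrealdb::Surreal;

/// Fetch one page and store it; reqwest and surrealdb errors share a boxed error type.
async fn fetch_and_store(
    db: Surreal<Client>,
    http: reqwest::Client,
    url: String,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let body = http.get(url.as_str()).send().await?.text().await?;
    // Store the raw page; link extraction would also happen here.
    db.query("CREATE page SET url = $url, body = $body, fetched_at = time::now()")
        .bind(("url", url))
        .bind(("body", body))
        .await?;
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Assumed local SurrealDB instance and root credentials.
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.signin(Root { username: "root", password: "root" }).await?;
    db.use_ns("crawler").use_db("crawler").await?;

    let http = reqwest::Client::new();

    // Stand-in for the crawler's real frontier of discovered URLs.
    let frontier = vec![
        "https://en.wikipedia.org/wiki/Web_crawler".to_string(),
        "https://en.wikipedia.org/wiki/Graph_database".to_string(),
    ];

    // Keep up to 16 fetch-and-store tasks in flight instead of awaiting
    // each page one after another.
    let mut results = stream::iter(frontier)
        .map(|url| fetch_and_store(db.clone(), http.clone(), url))
        .buffer_unordered(16);

    while let Some(result) = results.next().await {
        if let Err(err) = result {
            eprintln!("crawl task failed: {err}");
        }
    }
    Ok(())
}
```

`buffer_unordered(16)` is the piece aimed at the "currently it all happens serially" note: the concurrency limit caps open connections while still overlapping network waits, which is usually the first fix when a 100-page crawl takes 20 minutes. A real crawler would also parse links out of `body` and feed newly discovered URLs back into the frontier.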
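The note's counts (100 pages, 142,997 urls, 1,425,798 links) imply that pages and urls are stored as separate records with link rows connecting them. One way to model such a link in SurrealDB is a graph edge created with RELATE; the sketch below is again only an illustration, with made-up record ids and an assumed `links_to` edge table rather than this repository's actual schema.

```rust
use surrealdb::engine::remote::ws::Client;
use surrealdb::Surreal;

/// Record one directed "page -> url" link as a SurrealDB graph edge.
/// The edge row lives in the links_to table and can carry its own fields.
async fn record_link(db: &Surreal<Client>) -> Result<(), surrealdb::Error> {
    db.query("RELATE page:home->links_to->url:about SET seen_at = time::now()")
        .await?;
    Ok(())
}
```

Edges stored this way can later be traversed with queries such as `SELECT ->links_to->url FROM page`, which is one way to ask which urls a crawled page points at.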