add most recent long run
This commit is contained in:
parent 1f6a0acce3
commit bac3cd9d1d
@@ -8,4 +8,5 @@ Crawls sites saving all the found links to a surrealdb database. It then proceed
 - [ ] Domain filtering - prevent the crawler from going on alternate versions of wikipedia.
 - [ ] Conditionally save content - based on filename or file contents
 - [ ] GUI / TUI ?
-- [ ] Better asynchronous getting of the sites. Currently it all happens serially.
+- [ ] Better asynchronous getting of the sites. Currently it all happens serially. 3/19/25: Took 20 min to crawl 100 pages.
+This meant we stored 100 pages, 142,997 urls, and 1,425,798 links between the two.
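
The "better asynchronous getting" TODO item above points at why the 3/19/25 run took 20 minutes for 100 pages: requests are issued one at a time. Below is a minimal sketch of what bounded-concurrency fetching could look like, assuming a Rust crawler using tokio, reqwest, and futures; the crates, URL list, and concurrency limit are illustrative assumptions, not the project's actual code.

```rust
// Sketch: fetch pages with bounded concurrency instead of serially.
// Assumes tokio (with the "macros" and "rt-multi-thread" features),
// reqwest, and futures as dependencies.
use futures::stream::{self, StreamExt};

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();

    // Hypothetical URL frontier; in the real crawler these would come
    // from the surrealdb link table.
    let urls = vec![
        "https://example.com/".to_string(),
        "https://example.org/".to_string(),
    ];

    // Turn the frontier into a stream of in-flight requests and poll
    // up to 16 of them at a time.
    let results: Vec<_> = stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                let resp = client.get(&url).send().await?;
                let body = resp.text().await?;
                Ok::<_, reqwest::Error>((url, body))
            }
        })
        .buffer_unordered(16)
        .collect()
        .await;

    for result in results {
        match result {
            Ok((url, body)) => println!("fetched {url}: {} bytes", body.len()),
            Err(e) => eprintln!("request failed: {e}"),
        }
    }
    Ok(())
}
```

With a concurrency limit like 16, slow hosts no longer block the whole crawl, which is the usual first step toward bringing a 20-minute/100-page run down; the exact limit would need tuning against politeness and rate limits.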