no recomp needed

2025-03-31 14:53:10 -06:00
parent 4a433a1a77
commit add6f00ed6
4 changed files with 28 additions and 7 deletions

@@ -4,11 +4,12 @@ Crawls sites saving all the found links to a surrealdb database. It then proceed
 ### TODO
-- [ ] Domain filtering - prevent the crawler from going on alternate versions of Wikipedia.
+- [x] Domain filtering - prevent the crawler from going on alternate versions of Wikipedia.
 - [ ] Conditionally save content - based on filename or file contents
 - [x] GUI / TUI ? - Grafana
 - [x] Better asynchronous getting of the sites. Currently it all happens serially.
-- [ ] Allow for storing asynchronously
+- [x] Allow for storing asynchronously - dropping the "links to" logic fixes this need
+- [x] Control crawler via config file (no recompilation needed)
 3/17/25: Took >1hr to crawl 100 pages
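
The newly checked-off item moves crawler settings out of compiled-in constants and into a config file, which is what the commit title ("no recomp needed") refers to. A minimal sketch of that pattern in Rust is below, assuming a serde + toml setup; the struct fields (seed_url, allowed_domains, max_pages, concurrent_fetches) are illustrative guesses, not names taken from this repository.

```rust
// Hypothetical config loader: editing crawler.toml changes crawl behavior
// without recompiling the binary. Field names are assumptions for illustration.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct CrawlerConfig {
    seed_url: String,             // where the crawl starts
    allowed_domains: Vec<String>, // domain filter, e.g. ["en.wikipedia.org"]
    max_pages: usize,             // stop after this many pages
    concurrent_fetches: usize,    // how many GET requests run in parallel
}

fn load_config(path: &str) -> Result<CrawlerConfig, Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string(path)?;
    Ok(toml::from_str(&raw)?)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read settings at startup instead of baking them into the build.
    let cfg = load_config("crawler.toml")?;
    println!("loaded: {cfg:?}");
    Ok(())
}
```

An allowed_domains list like this would also cover the domain-filtering item above: each discovered link is kept only if its host matches an entry, which stops the crawler from wandering into mirror or alternate-language versions of a site.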