Surreal Crawler
Crawls sites, saving every discovered link to a SurrealDB database. It then takes batches of 100 uncrawled links until the crawl budget is reached, storing the content of each fetched site in MinIO object storage.
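
The snippet below is a minimal sketch of that batch-crawl loop, assuming the `surrealdb` (WebSocket engine), `reqwest`, and `tokio` crates. The table and field names (`link`, `crawled`), the namespace/database names, and `save_to_minio` are hypothetical placeholders, not the project's actual schema or storage client.

```rust
use serde::Deserialize;
use surrealdb::engine::remote::ws::Ws;
use surrealdb::Surreal;

#[derive(Deserialize)]
struct Link {
    url: String,
}

const CRAWL_BUDGET: usize = 10_000; // stop after this many pages (arbitrary value)
const BATCH_SIZE: usize = 100;      // uncrawled links taken per round

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to SurrealDB and select a namespace/database (names are assumptions).
    let db = Surreal::new::<Ws>("127.0.0.1:8000").await?;
    db.use_ns("crawler").use_db("crawler").await?;

    let mut crawled = 0usize;
    while crawled < CRAWL_BUDGET {
        // Take the next batch of links that have not been crawled yet.
        let mut res = db
            .query("SELECT url FROM link WHERE crawled = false LIMIT $limit")
            .bind(("limit", BATCH_SIZE))
            .await?;
        let batch: Vec<Link> = res.take(0)?;
        if batch.is_empty() {
            break; // nothing left to crawl
        }

        for link in batch {
            // Fetch the page body (serially here; the TODO list covers making this concurrent).
            let body = reqwest::get(link.url.as_str()).await?.text().await?;

            // Hypothetical: push the raw page content to MinIO, then mark the link as crawled.
            save_to_minio(&link.url, &body).await?;
            db.query("UPDATE link SET crawled = true WHERE url = $url")
                .bind(("url", link.url))
                .await?;

            crawled += 1;
            if crawled >= CRAWL_BUDGET {
                break;
            }
        }
    }
    Ok(())
}

// Placeholder for whatever object-storage client the project actually uses.
async fn save_to_minio(_url: &str, _body: &str) -> Result<(), Box<dyn std::error::Error>> {
    Ok(())
}
```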
TODO
- Domain filtering - prevent the crawler from wandering onto alternate versions of Wikipedia.
- Conditionally save content - based on filename or file contents.
- GUI / TUI?
- Better asynchronous fetching of sites - currently everything happens serially (see the sketch after this list).
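
One way the last item could be addressed is to run a bounded number of requests concurrently with `futures::stream::buffer_unordered` over a shared `reqwest::Client`. This is only a sketch under those assumptions; the concurrency limit and the `fetch_batch` helper are illustrative, not part of the project.

```rust
use futures::{stream, StreamExt};

/// Fetch a batch of URLs concurrently, returning each URL with its result
/// so the caller can decide whether to retry or mark the link as failed.
async fn fetch_batch(urls: Vec<String>) -> Vec<(String, Result<String, reqwest::Error>)> {
    let client = reqwest::Client::new();
    stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                let body = async { client.get(url.as_str()).send().await?.text().await }.await;
                (url, body)
            }
        })
        // Keep up to 16 requests in flight at once (an arbitrary limit).
        .buffer_unordered(16)
        .collect()
        .await
}
```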
 