
Zeta Web Spider

A web spider library written in C# to grab websites and store them locally.



Today, while looking through some older code, I came across a set of classes I wrote at the beginning of this year for a customer project.

The classes implement a basic web spider (also called a "web robot" or "web crawler") that grabs web pages, including resources such as images and CSS files, downloads them locally and adjusts any resource hyperlinks to point to the locally downloaded copies.

While this is not the kind of full-featured article with detailed explanations that I usually like to write, I still want to put the code online with this short article. Maybe some readers can take ideas from the code and use it as a starting point for their own projects.

The classes support both synchronous and asynchronous downloading of web pages, and several options can be specified, such as the hyperlink depth to follow and proxy settings.


Each downloaded resource gets a new file name, derived from the hash code of its original URL. I did this to simplify the implementation (for me as the programmer).
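
As a rough illustration of the idea (a sketch only, not the library's actual code; the helper name and the use of String.GetHashCode are my own assumptions):

    using System;
    using System.IO;

    internal static class LocalNames
    {
        // Sketch: derive a local file name from the hash of the original
        // URL, keeping the extension so the file type stays recognizable.
        public static string BuildLocalFileName(Uri url)
        {
            // String.GetHashCode() is used here for illustration only; it
            // is not guaranteed to be stable across .NET versions.
            uint hash = (uint)url.AbsoluteUri.GetHashCode();
            return hash.ToString("x8") + Path.GetExtension(url.AbsolutePath);
        }
    }

A URL like "http://example.com/images/logo.gif" would then map to something like "1a2b3c4d.gif" on disk.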

To parse a document, I am using the SGMLReader DLL.
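
For reference, here is a minimal sketch of how SgmlReader can turn (possibly malformed) HTML into an XmlDocument; the spider's internal code may differ in detail:

    using System.IO;
    using System.Xml;

    internal static class HtmlParser
    {
        // Let SgmlReader clean the HTML up into well-formed XML, then
        // load the result into a regular XmlDocument.
        public static XmlDocument ParseHtml(string html)
        {
            Sgml.SgmlReader reader = new Sgml.SgmlReader();
            reader.DocType = "HTML";
            reader.CaseFolding = Sgml.CaseFolding.ToLower;
            reader.InputStream = new StringReader(html);

            XmlDocument document = new XmlDocument();
            document.Load(reader);
            return document;
        }
    }

Once the page is an XmlDocument, resource references can be found with plain XPath, for example document.SelectNodes("//img[@src]"), and rewritten to the local file names.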

Also, since I did not need them for the customer project, the library does not handle "robots.txt", throttling or similar features.

Using the code

The download for this article contains the library ("WebSpider") and a testing console application ("WebSpiderTest"). The test application is short and should be easy to understand.

Basically, you create an instance of the WebSiteDownloaderOptions class, configure several parameters, create an instance of the WebSiteDownloader class, optionally connect event handlers, and then tell the instance to start processing the given URL, either synchronously or asynchronously.
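
In code, that flow looks roughly like the following sketch. Only the two class names come from the library itself; the property, event and method names are placeholders of mine, so check the test application for the exact API:

    using System;

    internal class Program
    {
        private static void Main()
        {
            // Configure the download; member names are hypothetical.
            WebSiteDownloaderOptions options = new WebSiteDownloaderOptions();
            options.DownloadUri = new Uri("http://www.example.com"); // start URL (hypothetical name).
            options.MaximumLinkDepth = 2;                            // hyperlink depth to follow (hypothetical name).

            WebSiteDownloader downloader = new WebSiteDownloader(options);

            // Optionally observe progress (hypothetical event name).
            downloader.ProcessingUrl += delegate
            {
                Console.WriteLine("Processing a URL...");
            };

            // Start synchronously; an asynchronous start would be the alternative.
            downloader.Process();
        }
    }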


  • 2006-09-10 - First release as an article.
  • 2010-02-09 - First release (with updates and fixes) to CodePlex.
