Search engines use software and algorithms to deliver rankings. They also use spiders (sometimes called robots) to crawl and read your site. These robots (plural, because search engines like Google run many of them) are software applications that travel the Internet by following links, finding websites to crawl and add to their databases. A robot will reach your website once it finds a link pointing to it.
Once a robot finds your site, it indexes as many pages as it can discover. Having a sitemap helps the search engine find as many pages as possible. The robot then moves along to other sites via your outbound links.
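The crawl-by-following-links behavior described above can be sketched as a simple breadth-first traversal. This is a minimal illustration, not how any real search engine is implemented: the site graph here is a hypothetical in-memory dictionary standing in for links that a real crawler would extract from fetched HTML.

```python
from collections import deque

# Hypothetical "web": each URL maps to the links found on that page.
# A real crawler would fetch each page and parse these links from its HTML.
SITE_GRAPH = {
    "https://example.com/": ["https://example.com/about", "https://example.com/blog"],
    "https://example.com/about": ["https://example.com/"],
    "https://example.com/blog": ["https://example.com/blog/post-1", "https://other.example/"],
    "https://example.com/blog/post-1": [],
    "https://other.example/": [],
}

def crawl(start_url):
    """Breadth-first crawl: follow links page to page, indexing each page once."""
    indexed = []
    seen = {start_url}
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        indexed.append(url)                   # "index" the page
        for link in SITE_GRAPH.get(url, []):  # follow outbound links
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return indexed

pages = crawl("https://example.com/")
print(pages)
```

Note how the crawl only discovers pages that some already-visited page links to; a page with no inbound links is never found, which is exactly why inbound links and sitemaps matter for getting indexed.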
Once your website is indexed, another process takes place: determining your ranking for specific keyword or key-phrase searches performed on the search engine. All of the major search engines that use robots to index websites also rely on complex algorithms to determine how your pages rank.
The ranking algorithm looks at factors such as the website's title and description, along with keyword density and prominence, to determine which keyword or phrase (if any) your site is optimized for. In-bound links are also considered when determining rankings. Each of these components carries a certain weight, and the page is ranked accordingly. Remember, Google ranks pages, not websites. That said, pages on authority sites usually rank better than pages on non-authority sites.
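To make the keyword-density signal mentioned above concrete, here is a minimal sketch of how such a metric could be computed: the fraction of words on a page that match a given keyword. This is an illustrative simplification, not any search engine's actual formula, and the sample page text is invented.

```python
import re

def keyword_density(text, keyword):
    """Share of words on the page that match the keyword (simplified metric)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    matches = sum(1 for word in words if word == keyword.lower())
    return matches / len(words)

# Hypothetical page copy: 3 of its 11 words are "coffee".
page = "Fresh coffee beans. We roast coffee daily and ship coffee worldwide."
print(f"{keyword_density(page, 'coffee'):.0%}")  # prints "27%"
```

In practice, density would be only one weighted input among many (title, description, prominence, inbound links), combined into an overall score per page.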
The advantage of spider-driven search engines is that the robots will continue to re-index your site on a regular basis. In addition, the volume (and popularity) of people using search engines far outweighs that of other web destinations such as directories.