How do search engines work?

Search engines typically perform four basic operations:

  1. Crawling - navigating through pages using automated programs, also called "spiders" or "bots". Try out the utility at seochat.com to see a web page through the eyes of a spider.
  2. Index creation - as the spider crawls through websites, it stores the information in a database. If a page appears in the search results, it must have been recorded in that database; in other words, the page has been indexed.
  3. Search processing - when a user enters a search word or phrase, the program "pulls" from the database a list of sites containing it. This leads to the most important part: sorting the results.
  4. Sorting search results - the significance of a particular web page (its ranking) is calculated by a complex algorithm that takes many factors into account. The basic factors include page content, the number of backlinks, semantics, and even page loading speed. The ranking algorithm is not disclosed to web designers, so no one can fully guarantee top positions in the search results.
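The four operations above can be sketched in a few lines of Python. This is a toy illustration only: the page texts are invented, and the term-frequency ranking is a deliberately naive stand-in for the complex, undisclosed algorithms real search engines use.

```python
# Toy search engine: index creation, search processing, result sorting.
# The pages dict stands in for content a crawler (step 1) already fetched.
from collections import defaultdict

pages = {
    "a.html": "web design tips",
    "b.html": "search engines index web pages",
    "c.html": "cooking recipes",
}

# Step 2: index creation - map each word to the set of pages containing it
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    """Step 3: pull matching pages; step 4: sort them by relevance."""
    words = query.lower().split()
    hits = set().union(*(index.get(w, set()) for w in words))
    # Naive ranking: count how many query words each page contains
    return sorted(hits, key=lambda u: -sum(u in index.get(w, set()) for w in words))

print(search("web pages"))  # b.html matches both words, a.html only one
```

Real engines store far richer data per page (backlinks, semantics, load speed, as noted above), but the pull-then-sort shape of the query path is the same.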

What are the basic rules for creating successful web content?

People say that content rules, and it is true. Well-written, appealing text makes others link to your page, providing a natural way of gathering backlinks.

Accessibility for search robots is of utmost importance for high search rankings. Many pages still ignore this fact.
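One concrete piece of robot accessibility is the robots.txt file, which tells spiders which paths they may crawl. The sketch below uses Python's standard `urllib.robotparser` to evaluate a made-up rule set; the rules and URLs are illustrative assumptions, not taken from any real site.

```python
# Check whether a crawler may fetch a URL, per robots.txt rules.
from urllib.robotparser import RobotFileParser

# Invented example rules: block everyone from /private/
robots_txt = """\
User-agent: *
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)  # parse the rules directly (no network fetch needed)

print(rp.can_fetch("MyBot", "https://example.com/index.html"))         # True
print(rp.can_fetch("MyBot", "https://example.com/private/data.html"))  # False
```

A page that spiders are forbidden (or technically unable) to reach cannot be indexed, and an unindexed page cannot rank at all.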

Keep proper semantics - always complete the important tags (title, Hx headings, meta tags) and use them in the right order and in the right places. Experts at seomoz.org agree that the title and H1 tags are the most important on your site, and search engines consider them the most relevant.
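To see why those tags matter, here is a sketch of what a spider actually extracts from a page, using only Python's standard `html.parser`. The HTML snippet is invented for illustration; real crawlers parse far more, but title and H1 text are among the first things pulled out.

```python
# Extract the <title> and <h1> text a spider would weigh heavily.
from html.parser import HTMLParser

class TagExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current = None   # tag we are currently inside, if any
        self.found = {}       # tag name -> extracted text

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1"):
            self.current = tag

    def handle_data(self, data):
        if self.current:
            self.found[self.current] = data.strip()

    def handle_endtag(self, tag):
        if tag == self.current:
            self.current = None

html = ("<html><head><title>Fast Web Design</title></head>"
        "<body><h1>Design Tips</h1><p>Some text.</p></body></html>")
parser = TagExtractor()
parser.feed(html)
print(parser.found)  # {'title': 'Fast Web Design', 'h1': 'Design Tips'}
```

If these tags are empty, missing, or stuffed with irrelevant text, the spider records that instead, and the page's relevance signals suffer accordingly.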
