Wednesday, August 18, 2010

SEO Introduction To Crawler Based Search Engines

One of the main rules in a ranking algorithm involves the location and frequency of keywords on a web page. Call it the location/frequency method, for short. Remember the librarian mentioned above? They need to find books to match your request for "travel", so it makes sense that they first look at books with travel in the title.

Search engines operate the same way. Pages with the search terms appearing in the HTML title tag are often assumed to be more relevant than others to the topic. Search engines will also check to see if the search keywords appear near the top of a web page, such as in the headline or in the first few paragraphs of text.

They assume that any page relevant to the topic will mention those words right from the beginning. Frequency is the other major factor in how search engines determine relevancy. A search engine will analyze how often keywords appear in relation to other words on a web page.

Those with a higher frequency are often deemed more relevant than other web pages.
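The location/frequency idea can be sketched in a few lines of code. This is a toy illustration only — the function name, the weights, and the "first 50 words" cutoff are all assumptions for the sake of the example, not how any real engine scores pages:

```python
import re

def location_frequency_score(html_title, body_text, keyword):
    """Toy relevance score in the spirit of the location/frequency
    method: a keyword in the title and near the top of the page
    counts most; overall frequency contributes the rest.
    Illustrative only -- real engines use many more signals."""
    keyword = keyword.lower()
    score = 0.0
    if keyword in html_title.lower():
        score += 10.0  # title match: the strongest location signal
    words = re.findall(r"[a-z']+", body_text.lower())
    if keyword in words[:50]:
        score += 5.0   # keyword appears near the top of the page
    if words:
        # frequency term: share of all words that are the keyword
        score += 100.0 * words.count(keyword) / len(words)
    return score
```

A page titled "Travel Guide" that mentions "travel" early and often would score far higher for that query than a page that never uses the word at all.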

Spice in the Recipe

Now it's time to qualify the location/frequency method described above. All the major search engines follow it to some degree, in the same way cooks may follow a standard chili recipe. But cooks like to add their own secret ingredients. In the same way, search engines add spice to the location/frequency method.

Nobody does it exactly the same way, which is one reason why the same search on different search engines produces different results.

To begin with, some search engines index more web pages than others. Some search engines also index web pages more often than others. The result is that no search engine has the exact same collection of web pages to search through, which naturally produces differences when comparing their results.

Search engines may also penalize pages, or exclude them from the index entirely, if they detect search engine "spamming". An example is when a word is repeated hundreds of times on a page to increase its frequency and propel the page higher in the listings.

Search engines watch for common spamming methods in a variety of ways, including following up on complaints from their users.
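One crude automated check for the kind of stuffing described above is to flag any page where a single word makes up an outsized share of the text. The function name and the 25% density threshold below are assumptions for illustration, not a published detection rule:

```python
from collections import Counter

def looks_stuffed(text, max_density=0.25):
    """Flag a page as likely keyword-stuffed if any single word
    accounts for more than max_density of all words on the page.
    A crude sketch -- real spam detection is far more nuanced."""
    words = text.lower().split()
    if len(words) < 20:
        return False  # too little text to judge fairly
    top_count = Counter(words).most_common(1)[0][1]
    return top_count / len(words) > max_density
```

A page that repeats "stamps" hundreds of times trips the check immediately, while ordinary prose stays well under the threshold.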

Off the page factors

Crawler-based search engines have plenty of experience now with webmasters who constantly rewrite their web pages in an attempt to gain better rankings. Some sophisticated webmasters may even go to great lengths to "reverse engineer" the location/frequency systems used by a particular search engine. Because of this, all major search engines now also make use of "off the page" ranking criteria.

Off the page factors are those that webmasters cannot easily influence. Chief among these is link analysis. By analyzing how pages link to each other, a search engine can both determine what a page is about and whether that page is deemed to be "important" and thus deserving of a ranking boost.

In addition, sophisticated techniques are used to screen out attempts by webmasters to build "artificial" links designed to boost their rankings.
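The best-known form of link analysis is a PageRank-style calculation, which the sketch below approximates with a simple power iteration. This is a minimal illustration of the idea that pages linked to by many other pages gain importance; the function name and parameters are assumptions, and real systems handle dangling pages, manipulation, and scale very differently:

```python
def pagerank(links, damping=0.85, iterations=20):
    """Tiny power-iteration PageRank sketch. `links` maps each
    page to the list of pages it links to. Pages that many other
    pages link to end up with higher scores."""
    pages = set(links) | {p for out in links.values() for p in out}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # every page starts each round with a small baseline share
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # a page passes its rank evenly to the pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        rank = new
    return rank
```

In a tiny web where two pages both link to a third, that third page comes out with the highest score — it is "important" because others vouch for it with links.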

Another off the page factor is click-through measurement. In short, this means that a search engine may watch which results people select for a particular search, then eventually drop high-ranking pages that aren't attracting clicks, while promoting lower-ranking pages that do pull in visitors.

As with link analysis, systems are used to compensate for artificial links generated by eager webmasters.
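The click-through idea above can be sketched as a simple re-ranking step that blends a page's original score with its observed click-through rate. Every name and the blending weight here are illustrative assumptions; real systems are heavily guarded against click manipulation:

```python
def adjust_by_clickthrough(results, impressions, clicks, weight=0.5):
    """Re-order (page, base_score) results by blending the original
    score with each page's click-through rate (clicks/impressions).
    A sketch of the concept, not any engine's actual formula."""
    def blended(item):
        page, base_score = item
        ctr = clicks.get(page, 0) / max(impressions.get(page, 1), 1)
        return (1 - weight) * base_score + weight * ctr
    return sorted(results, key=blended, reverse=True)
```

With this blend, a lower-ranked page that users consistently click can overtake a higher-ranked page that nobody selects — exactly the behavior the paragraph above describes.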

Search Engine Ranking Tips

A query on a crawler-based search engine often turns up thousands or even millions of matching web pages. In many cases, only the 10 most "relevant" matches are displayed on the first page.

Naturally, anyone who runs a website wants to be in the "top ten" results. This is because most users will find a result they like in the top ten. Being listed 11th or beyond means that many people may miss your web site.

The tips below will help you come closer to this goal, both for the keywords you think are important and for phrases you may not even be anticipating.

For example, say you have a page devoted to stamp collecting. Anytime someone types "stamp collecting", you want your page to be in the top ten results. Those are your target keywords for that page.

Each page in your web site will have different target keywords that reflect the page's content. For example, say you have another page about the history of stamps. Then "stamp history" might be your keywords for that page.

Your target keywords should always be at least two words long. Usually, too many sites will be relevant for a single word, such as "stamps".

This competition means your odds of success are lower. Don't waste your time fighting the odds. Pick phrases of two or more words, and you will have a better shot at success.