Search engines started out with simple algorithms for calculating the relevance of search strings and determining the importance (ranking) of websites. These early algorithms sought information in the form of index files or keyword meta tags supplied by webmasters. The aim was to use the meta tag keywords, which were supposed to represent the website's content, to index pages accurately. However, relying on meta tag keywords proved unreliable, because the keywords did not always reflect the actual content of a website. It was common for webmasters to submit meta tag keywords simply to get their website indexed, without considering whether they were relevant to the site's actual content and purpose. This led to incorrect indexing and inaccurate ranking of web pages in search results.
In the early days, too much importance was attached to keyword density, which led webmasters to adopt bad practices purely to improve their search engine ranking. While many webmasters genuinely tried different methods to earn a higher rank, some tried to manipulate rankings simply by stuffing their web pages with excessive keywords.
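To make the keyword-density signal concrete, here is a minimal sketch of how it can be measured. This is purely illustrative: the tokenization and the percentage formula are assumptions for the example, not how any real search engine computed the metric.

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Return the keyword's share of all words in the text, as a percentage."""
    # Crude tokenizer: lowercase words made of letters, digits, or apostrophes.
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return 100.0 * hits / len(words)

# A keyword-stuffed page scores an implausibly high density:
page = "cheap shoes cheap shoes buy cheap shoes online today"
print(round(keyword_density(page, "cheap"), 1))  # 33.3
```

A signal this easy to compute is equally easy to game, which is exactly why ranking on keyword density alone invited the over-stuffing described above.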
Another bad practice was the manipulation of certain attributes within the HTML source of a web page to make it rank higher. Such malpractices led early search engines such as AltaVista to make several modifications to their algorithms. These adjustments were made to prevent rank manipulation by webmasters.
Since the aim of search engines was to provide relevant results to users, they began considering many additional factors and devised more complex ranking algorithms. This made it far more difficult for webmasters to manipulate their website's ranking, and they were gradually forced to adopt genuine SEO techniques instead. Moreover, the popular search engines never disclosed the algorithms they used to rank web pages.
Let’s discuss how Google tackled the problem of rank manipulation. Google observed that manipulation targeted on-page factors such as keyword density, meta tags, links, and headings. So it started considering off-page factors such as PageRank and hyperlink analysis. This made things extremely difficult for webmasters who survived only by manipulating search rankings for their web pages.
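The core idea behind PageRank is that a page is important if important pages link to it. The sketch below shows the basic power-iteration form of the algorithm; the example graph, damping factor, and iteration count are illustrative assumptions, not Google's actual implementation.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank scores by power iteration.

    links maps each page to the list of pages it links out to.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        # Every page gets a small baseline share (the "random surfer" jump).
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page passes its rank evenly to the pages it links to.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # Dangling page: spread its rank evenly across all pages.
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

# Hypothetical three-page web: A links to B and C, B links to C, C links to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
# C, which receives links from both A and B, ends up with the highest rank.
```

Because the score depends on the link structure of the whole web rather than on anything a single page author controls, it was much harder to manipulate than on-page signals like keyword density.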
Google accounts for more than half of all searches carried out by the global Internet population, making it easily the leading search engine, ahead of other popular ones such as MSN, AOL, and Yahoo.
As mentioned earlier, Google uses on-page as well as off-page factors to determine the relevance of search results, and its complex algorithms are extremely difficult for anyone to reverse-engineer. This makes people trust Google's results more than those of any other search engine, since Google consistently returns the most relevant results. To maintain this consistency, Google keeps updating its algorithms and introduces penalties for malpractices that webmasters use to cheat its relevance calculations.
Given the high degree of relevance attached to Google's search results, webmasters and website owners can expect tremendous traffic to their websites if they appear on the top search result pages of Google for queries specific to their niche.