What is Cloaking for Search Engines?

By definition, cloaking can be described as an HTML technique from the area of search engine optimization. When optimizing with cloaking, the focus is on improving a website's ranking within the results lists of search engines such as Google. For this purpose, the search engine's web crawlers are presented with different content than normal page visitors at the same URL. The goal is to present the human visitor with content that is attractively designed and does not revolve around SEO criteria such as keywords, while the crawler is usually served text-based page content that implements SEO methods as thoroughly as possible but sacrifices design attributes.

While designing your website, you sometimes have to serve slightly different content to different user agents. This is not the same as cloaking. Google calls different content for different devices a “mismatch”. For now, Google's algorithms tolerate mismatch errors in small portions, but you should still serve the same content from the same URL to every user agent.

In the area of search engine optimization (SEO), cloaking occurs whenever a server delivers a different version of the website to a search engine crawler than to human visitors under the same URL. This black hat method is used as a deliberate deception to improve the ranking of a website. Google and other search engine providers therefore regard this type of search engine optimization as a rule violation.

Origin of the cloaking method

Websites whose content consists primarily of graphics, videos, or Flash animations generally do poorly in the search results. Multimedia content, which may be well received by many users, generally cannot be read out by text-based search engines. This deficit can be compensated for by cloaking: instead of the original page, the search engine crawler is shown a description of the image and video content, i.e. an HTML website in pure text form. Search engines can easily process such content and index the website accordingly. However, the selective output of different website versions holds a high potential for abuse.

Since Googlebot could not render JavaScript in the old days, web pages heavy on multimedia and design suffered in the rankings. In recent years, Google has also improved its ability to perceive images and visual content, both for indexing and for evaluating search intent. So cloaking was born out of those old-time conditions, and it pushed search engines to implement manual actions against cloaking while improving their crawlers.

Cloaking as a deception

Website operators repeatedly use cloaking methods to present content to the search engine that is not available to site visitors. Such an attempt at manipulation can be illustrated using the example of a fictitious casino website:

In order to increase its visibility on the World Wide Web, the provider of an online casino delivers targeted content about board games to the search engine crawler, even though only paid games of chance are available to visitors of the website. As a result, the search engine lists the online offer under board games in its index due to the incorrect information and wrongly returns the website as a search result for the relevant keywords. This annoys misdirected visitors and reduces the user-friendliness of the search engine.

To prevent such tricks, search engine operators take a strict stance on cloaking. Market leader Google maintains a specialized webspam team precisely for such cases. Website operators who use such methods can expect their project to be completely removed from the search index (Google penalty). In its guidelines for webmasters (Google Webmaster Guidelines), Google explicitly lists cloaking among the methods that should be avoided.

Cloaking techniques

Since cloaking is implemented on the server side, the server can use the IP address or the user agent identifier to determine whether a regular browser or a robot is requesting the data, and then deliver either the designed version or the reduced, optimized version of the website.

Website operators who follow this black hat method usually rely on two different techniques to deceive search engines.

User Agent Cloaking (Agent Name Delivery)

A website is usually accessed via a so-called user agent. Examples of user agents are client applications such as web browsers or automated programs such as crawlers. These can be identified using a specific identifier, the agent name. This enables a web server to tailor requested content to the respective browser using optimized style sheets, thereby increasing the usability of a website. Agent name delivery thus forms the basis for a device-optimized display. However, such a procedure becomes cloaking when website operators integrate mechanisms that react specifically to the agent names of well-known web crawlers such as Googlebot and provide them with separate content. To expose agent name delivery, search engines sometimes disguise their web crawlers as ordinary browsers.
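To make the mechanism concrete, the following minimal Flask sketch shows how a server could branch on the agent name. It is only an illustration of the technique described above, not a recommendation: the route, the crawler token list, and the two HTML snippets are assumptions, and serving pages this way is exactly the kind of rule violation search engines penalize.

```python
# Minimal sketch of user agent cloaking (agent name delivery), shown only
# to illustrate the black hat technique described above. The crawler
# tokens and the HTML snippets are hypothetical.
from flask import Flask, request

app = Flask(__name__)

CRAWLER_TOKENS = ("googlebot", "bingbot", "duckduckbot")  # assumed identifiers

@app.route("/")
def homepage():
    user_agent = request.headers.get("User-Agent", "").lower()
    if any(token in user_agent for token in CRAWLER_TOKENS):
        # Crawlers receive a keyword-heavy, text-only version of the page.
        return "<h1>Board games and card games</h1><p>Text-only SEO copy...</p>"
    # Human visitors receive the visually designed version instead.
    return "<h1>Welcome</h1><img src='/static/banner.jpg' alt='casino'>"
```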

IP cloaking (IP delivery)

In addition to the agent name, the IP address also offers a way to categorize website visitors and to provide them with special content. The method is used, among other things, in the context of geotargeting in order to display different language versions or regional offers to website visitors. IP delivery becomes cloaking when a website operator delivers tailor-made content to a crawler's known IP address. Such black hat SEO is only successful if the bot in question always uses the same IP address. To prevent this manipulation, most search engines now rely on changing IP addresses.
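By contrast, the next sketch shows IP delivery used for its legitimate purpose, geotargeting: the client address only selects a language version, and any visitor, crawler or human, coming from the same network would get the same page. The network ranges and the language mapping are illustrative assumptions; this would only become cloaking if known crawler IP addresses were singled out for special content.

```python
# Minimal sketch of IP delivery used for legitimate geotargeting.
# The network ranges (IETF documentation ranges) and the language
# mapping are assumptions for illustration.
import ipaddress

GEO_RANGES = {
    ipaddress.ip_network("203.0.113.0/24"): "fr",   # hypothetical French visitors
    ipaddress.ip_network("198.51.100.0/24"): "de",  # hypothetical German visitors
}

def pick_language(client_ip: str, default: str = "en") -> str:
    """Return the language version to serve for a given client IP."""
    address = ipaddress.ip_address(client_ip)
    for network, language in GEO_RANGES.items():
        if address in network:
            return language
    return default

print(pick_language("203.0.113.42"))  # -> "fr"
```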

In an official video on the subject, Matt Cutts, the former head of the Google webspam team, emphasized, however, that IP-based geotargeting and the adaptation of website content to mobile user agents definitely do not represent cloaking, but rather desired measures in the interest of user-friendliness. Accordingly, it does not matter to Google whether a website visitor from France is shown different content than a German visitor due to separate language versions, provided that the content matches what a web crawler finds on the website.

Note: Serving different content to different user agents from the same URL is not recommended by search engines. You should always serve the same content from the same URL to all user agents. Matt Cutts is talking about language-based differences; you should use the “hreflang” attribute to clarify which version of the web page is intended for which geography and language, as in the sketch below.
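A minimal sketch of that approach, assuming a hypothetical example.com domain with three language versions: instead of varying the content per user agent, each version links to its alternates with hreflang, and the generated tags go into the page head.

```python
# Minimal sketch: declaring language/region alternates with hreflang
# instead of cloaking. The domain and the language list are hypothetical.
ALTERNATES = {
    "en": "https://www.example.com/en/",
    "fr": "https://www.example.com/fr/",
    "de": "https://www.example.com/de/",
}

def hreflang_tags(alternates: dict) -> str:
    """Build <link rel="alternate" hreflang="..."> tags for the page head."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in alternates.items()
    )

print(hreflang_tags(ALTERNATES))
```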

Cloaking in 2020: What Are Its Effects and How Does Google Punish It?

Google and other search engines are even stricter against webspam in 2020. They also decrease the trust score of web entities that violate the guidelines, and winning this trust back is not easy. Along with this, some hacker methodologies, such as the Japanese keyword hack, are based on cloaking. With this technique, a hacker gains access to the website's server and changes the content of the web pages only for search engine crawlers, while leaving the content unchanged for users. So while the search engine sees Japanese content on the website, users see the old website as it is, and the website owner does not realize that his or her site has been hacked. But search engines change the rankings of the website, and its queries start taking impressions and clicks. This technique is also being replicated to manipulate PageRank flows.
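A simple way to check your own site for this kind of cloaking is to request the same URL with a browser-like and a Googlebot-like User-Agent and compare the responses. The sketch below does exactly that; the URL and the user agent strings are assumptions, and a thorough check would also render JavaScript and verify crawler IP addresses.

```python
# Minimal sketch for spotting cloaking (e.g. a Japanese keyword hack) on
# your own site: fetch the same URL with two different User-Agent headers
# and compare the HTML. The URL and the user agent strings are assumptions.
import requests

URL = "https://www.example.com/"  # hypothetical page to check

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

def fetch(user_agent: str) -> str:
    response = requests.get(URL, headers={"User-Agent": user_agent}, timeout=10)
    return response.text

if fetch(BROWSER_UA) != fetch(GOOGLEBOT_UA):
    print("Warning: Googlebot receives different HTML than a normal browser.")
else:
    print("Both user agents received identical HTML.")
```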

To learn more about hacker methodologies and how you can protect your site against them, you can continue to follow the Holistic SEO Guidelines.

Koray Tuğberk GÜBÜR
