Google Algorithm Updates: Panda and Penguin – Part 1

Search engines like Google work by crawling the web to build a massive index of pages. Once the pages are collected, a complex algorithm determines which of them provide the ‘best’ information for each particular search query. The algorithm weighs many components to rank a web page on the value of its content, examining on-page factors like keyword usage, HTML and site structure, as well as off-page signals like links, social reputation and authority.
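As a purely hypothetical illustration of combining weighted on-page and off-page signals into one ranking score, here is a minimal sketch. The signal names and weights below are invented for the example; Google's real algorithm uses hundreds of factors and is not public.

```python
# Toy ranking sketch: signal names and weights are invented for
# illustration only; the real algorithm is proprietary.
WEIGHTS = {
    "keyword_relevance": 0.30,   # on-page: query terms in the content
    "html_structure": 0.15,      # on-page: titles, headings, markup
    "inbound_links": 0.35,       # off-page: links from other sites
    "social_reputation": 0.20,   # off-page: shares, mentions, authority
}

def rank_score(signals: dict) -> float:
    """Combine normalized (0-1) signal values into a single score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# A page strong on-page but weak off-page still earns only a middling score.
page = {"keyword_relevance": 0.8, "html_structure": 0.9,
        "inbound_links": 0.4, "social_reputation": 0.3}
score = rank_score(page)  # weighted sum of the four signals
```

Pages competing for the same query would simply be sorted by this score, which is why a page can rank well on content alone yet still lose to one with stronger off-page signals.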

Google algorithm updates are an ongoing effort to provide the best search results. Google can do this by modifying how it indexes, or crawls, the web, or by changing the factors that determine how a page ranks. Updates can also apply filters to demote websites that try to game the system by violating Google’s guidelines. The recent Panda and Penguin updates are simply Google’s latest attempts to deliver the best search results to their users.

Google Algorithm Updates: A Brief History

These recent Google algorithm updates have created quite the online uproar, but this is nothing new. The ‘Vince’ update in 2009 added more weight to factors that convey the importance of a page, like trust, quality, and PageRank. Then in 2010, Google rolled out the ‘Mayday’ update, which included between 400 and 500 individual changes to its search algorithm and specifically targeted long-tail search referrals. The main casualties were web pages that targeted very specific search terms but belonged to sites with little-to-no domain value.

Google Algorithm Updates: Panda

In February 2011, webmasters were introduced to a new form of machine learning in Google algorithm updates: Panda. The Panda update targeted content farms that had gained rankings using low-quality or shallow content. Scraper websites that borrowed or stole their content were also negatively affected. Rather than a one-time change, Panda is a new layer added to Google’s ranking algorithm. The Panda filter has been effective at ‘flagging’ what it believes to be low-quality web pages. If your site contains too many of these pages, Panda penalizes the entire site. This does not mean your entire site is removed from Google’s rankings, but it carries a penalty designed to ensure better-quality sites make it to Google’s front page.
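To make the site-wide nature of the penalty concrete, here is a deliberately simplified sketch. The 30% threshold, the demotion factor, and the function names are all invented for illustration; Panda's actual criteria and penalty mechanics are not public.

```python
# Toy sketch of a site-wide quality penalty: if too large a share of a
# site's pages are flagged as low quality, every page is demoted, not
# just the flagged ones. Threshold and demotion factor are invented.
LOW_QUALITY_SHARE = 0.30
DEMOTION_FACTOR = 0.5

def apply_site_penalty(page_scores: dict, flagged: set) -> dict:
    """page_scores maps url -> ranking score; flagged holds low-quality urls."""
    share = len(flagged) / len(page_scores)
    if share > LOW_QUALITY_SHARE:
        # The penalty is site-wide: even high-quality pages are demoted.
        return {url: s * DEMOTION_FACTOR for url, s in page_scores.items()}
    return dict(page_scores)

scores = {"/a": 0.9, "/b": 0.8, "/c": 0.7, "/d": 0.6}
penalized = apply_site_penalty(scores, flagged={"/c", "/d"})  # 50% flagged
```

The key point the sketch captures is that `/a` and `/b` are demoted even though they were never flagged, which is why removing or improving thin pages can lift a whole site's rankings.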

The Panda filter doesn’t run constantly; instead, it is periodically re-run to catch new, poor-quality content. Each time Google runs the filter, it is accompanied by improvements intended to better identify and punish poor-quality content sites. The January 2012 improvement to Panda, known as the “page layout algorithm,” penalized pages with too many ads above the fold. Google publishes no exact schedule, but a new version of the Panda filter seems to appear every 4 to 6 weeks.

The most recent version (Panda 3.5) went live on April 24, only 5 days after the new Penguin update. This timing created a lot of confusion among webmasters because very few realized that a Panda update had occurred, and most sites that lost rankings automatically blamed Penguin. While Panda was often the actual culprit, many websites did see lower rankings due to the Penguin update. In part 2 of our examination of Google algorithm updates, we will take a closer look at the Penguin update and what it means for websites today.

 

