3 Technical SEO Fundamentals You Shouldn’t Neglect

March 24, 2015

According to one of my favorite motivational speakers, Jim Rohn, “Success is neither magical nor mysterious. Success is the natural consequence of consistently applying the basic fundamentals.”


This is especially true in the context of SEO.


If you don’t know the difference between a canonical tag and an hreflang tag; how responsive web design compares to adaptive web design; or how best practices in information architecture can bolster your SEO efforts, this article is probably for you.
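
Before diving in, here’s the first of those distinctions in miniature, as a rough sketch with hypothetical URLs: a canonical tag declares the preferred version of a page that lives at more than one URL, while an hreflang tag declares a language- or region-specific alternate of that page.


    <!-- Canonical: consolidate duplicate URLs into one preferred version -->
    <link rel="canonical" href="http://www.example.com/page.html">

    <!-- Hreflang: point search engines to a Spanish-language alternate -->
    <link rel="alternate" hreflang="es" href="http://www.example.com/es/page.html">


Both tags live in the &lt;head&gt; of a page; the canonical tag comes up again in the duplicate content section below.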


In the age of Penguin, Panda, and Hummingbird, more business owners than ever are moving towards content marketing, defined by the Content Marketing Institute as “creating and distributing valuable, relevant, and consistent content in an effort to attract and retain a clearly-defined audience.”


A positive by-product of creating and distributing high-quality, audience-aligned content is that it earns links on its own, freeing business owners from active link building while strengthening their search presence.


Unfortunately, all the high-quality content and earned links in the world won’t do much to move your site up the search engine results pages (SERPs) if you’re neglecting the fundamentals of technical SEO.


1. Duplicate Content

According to Google, duplicate content is content “that either completely match[es] other content or [is] appreciably similar.” What’s interesting is that Google makes it a point to also state that duplicate content is mostly not deceptive in origin.




Non-deceptive duplicate content is among the most frequent technical SEO issues we see here at Vertical Measures, rearing its ugly head in everything from printer-only versions of web pages to sites that never express a preference between the “www” and “non-www” versions of their URLs.
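
For illustration, suppose both the “www” and “non-www” hostnames serve the same page (example.com here is a hypothetical domain). A canonical tag in the head of each version is one minimal fix, a sketch rather than the only option: it tells search engines which URL should receive the indexing and ranking credit.


    <head>
      <!-- Served on both http://example.com/page.html and
           http://www.example.com/page.html; consolidates signals
           into the (hypothetically preferred) www version -->
      <link rel="canonical" href="http://www.example.com/page.html">
    </head>


The same approach handles printer-only pages: point the canonical tag on the print version back at the main page. If you control your server configuration, a site-wide 301 redirect from the non-preferred hostname accomplishes the same consolidation even more cleanly.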


Why is this such an issue?


The goal of Google’s web crawler – Googlebot – is to crawl the web to discover new and updated pages to be added to Google’s index. It also happens to run on a huge set of computers.


Like any set of computers, it can only do so much at once. To maximize efficiency, Googlebot uses an algorithmic process to determine which sites to crawl, how often, and how many pages to fetch from each site.


Duplicate content hampers Googlebot in achieving that goal. Remember: Googlebot’s goal is to discover new and updated pages. Duplicate content, by definition, is neither, and after a while Googlebot will start to crawl your site less often and less deeply.


As a result, any new content you create, Pulitzer Prize material or not, has less of a chance of being found by Googlebot, let alone indexed and ranked in the SERPs for its target keyword.


So, what can you do? A lot.


Luckily, Art, our Director of SEO Services, held a webinar on this topic entitled Prepare for Panda: How to Destroy ALL Your Duplicate Content. In it, you can find a boatload of actionable tips for cleaning up your site and setting systems in place to prevent any unintentional duplicate content in the future.


2. Mobile-Friendliness

On February 26, 2015, Google published a post on their Webmaster Central blog entitled Finding more mobile-friendly search results.


In it, they state:



“Starting April 21, we will be expanding our use of mobile-friendliness as a ranking signal. This change will affect mobile searches in all languages worldwide and will have a significant impact in our search results.”


Let me repeat that…


“Starting April 21 … [Google’s] use of mobile-friendliness as a ranking signal … will have a significant impact in [their] search results.”


If the importance of mobile-friendliness is a surprise to you, here are some mobile statistics that’ll help you understand why Google is suddenly announcing the date of an algorithm change in advance (something they rarely do):



  • “Among U.S. adults, 22.9% of all media time in 2014 was spent on mobile” ~ Kapost
  • “Mobile searches (roughly 85.9 billion) will surpass desktop searches in 2015” ~ Business2Community
  • “57% of the United States owns a smartphone” ~ Business2Community
  • “81% of conversions from mobile search happen within five hours of the search” ~ Business2Community



To remain competitive, and true to its belief that all else will follow if you simply focus on the user, Google must adapt its algorithm to the significant role mobile-friendliness plays in the overall experience of using a search engine.


If Google kept serving up sites that aren’t friendly to mobile devices, it would likely suffer an immediate loss of search engine market share.


That said, how do you know if Google considers your site mobile-friendly?


Luckily for us, they’ve provided two tools that tell you whether they consider your site mobile-friendly and, if not, why not: the Mobile-Friendly Test and the Mobile Usability Report.


The first is a public-facing tool; the other is accessible only via Google Webmaster Tools.


For those without Webmaster Tools, plug your website’s URL, or multiple URLs from subdirectories of your site, into the Mobile-Friendly Test, and you’ll know within a matter of seconds whether Google considers those URLs mobile-friendly.


For those with Webmaster Tools, simply click the second link above.


Regardless of which tool you use, if your site doesn’t pass as mobile-friendly, you’ll be given reasons why along with links to resources on how to fix the issues.
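
One of the most commonly flagged issues is a missing viewport declaration, which keeps phones from scaling the page to their screens. As a rough sketch of a responsive starting point (the class name and 768px breakpoint are illustrative choices, not standards):


    <!-- In the <head>: size the page to the device's screen width -->
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <style>
      /* Fluid by default, so small screens get the full width */
      .content { width: 100%; padding: 1em; }

      /* Constrain line length once the screen is large enough */
      @media (min-width: 768px) {
        .content { max-width: 960px; margin: 0 auto; }
      }
    </style>


Other common flags, like text that’s too small to read or tap targets placed too close together, are fixed the same way: in your stylesheet, not your content.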


Identify the issues, plug them into a spreadsheet, and begin addressing them. If you have a web developer, even better! Simply shoot the spreadsheet over to them and let them do what they do best.


3. Site Structure

There is no question that a good site structure makes for a great user experience, and that a great user experience often results in search engine benefits, but why?


Consider for a second that your site is suddenly void of its colors and fonts, its kerning and images – everything that makes it pretty.


What’s left? Text in the raw.


Now consider someone visits your site at this exact moment. How confident are you that they’d be able to find what they’re looking for without any visual cues? My guess: not very.


In the twentieth century, Swiss psychologist Jean Piaget developed the concept of cognitive equilibrium. Cognitive equilibrium, Piaget stated, is a state of balance between an individual’s expectations of how something should be and those expectations being met by whatever they’re interacting with.


Admittedly, you can’t know an individual’s exact expectations for a website, but you can be fairly confident that at the core of those expectations is the ability to find and navigate to the information they’re looking for in a way that makes logical sense to them.


What’s interesting is that Googlebot functions in much the same way!


When Googlebot hits your site, it gets your site’s text in the raw. Its cognitive equilibrium can only be achieved if it can find and navigate to the information it’s looking for, with the added requirement that it must also be able to understand what that content is about.


How can you accomplish this?


There are many ways, but in the context of site structure, here are two steps you can take to start the process:



  1. Indexable content. Ensure your content is indexable in the first place by putting your most important content in HTML text format. A common issue business owners run into is that they fall in love with aesthetics at the expense of crawlability. Instead of keeping their most important content in HTML text, they trap it in images, Flash files, and Java applets, making it pretty difficult for crawlers to figure out what the content is about, let alone index it.
  2. Crawlable link structures. Crawlable links, by definition, are those that enable crawlers to browse the pathways of a website. A common issue with link structures is that content sometimes sits too deep within a site’s architecture. Googlebot can’t navigate to it as quickly as it would like, so after an appreciable amount of time it halts all attempts at finding it. Generally, you should make all of your content reachable in as few clicks as possible, ideally within three, e.g. http://www.example.com/category-keyword/subcategory-keyword/primary-keyword.html (see the sketch after this list).
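
To make both steps concrete, here’s a minimal sketch of a page that keeps its important content in HTML text and its navigation in plain, crawlable anchor tags (the heading and URLs are hypothetical, mirroring the structure above):


    <!-- Important content lives in HTML text, not trapped in an
         image, Flash file, or Java applet -->
    <h1>Primary Keyword</h1>
    <p>Copy that crawlers can read, understand, and index directly.</p>

    <!-- Plain anchor tags give Googlebot pathways it can follow;
         each page sits within three clicks of the homepage -->
    <ul>
      <li><a href="/category-keyword/">Category</a></li>
      <li><a href="/category-keyword/subcategory-keyword/">Subcategory</a></li>
      <li><a href="/category-keyword/subcategory-keyword/primary-keyword.html">Primary page</a></li>
    </ul>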

To wrap things up, here’s a quote from another one of my favorite motivational speakers, John C. Maxwell:



“Small disciplines repeated with consistency every day lead to great achievement gained slowly over time.”


If you want to achieve great success for your business in search, you must stick to the small disciplines — the seemingly minor and inconsequential things that make up the fundamentals of technical SEO — and you must do them often.


There’s no better time to start than now. Review the three things above and audit your site for them now. Set things in motion to address them now. Schedule regular audits to ensure everything is sound now, because if you don’t, your competition probably is, and if they’re not, don’t you want to get ahead?


Remember: Success is neither magical nor mysterious. Go back to basics today and stick to them. Before you know it, you’ll achieve more than you’ve ever imagined possible.


See you at the top!

Digital & Social Articles on Business 2 Community

(255)

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.