“You’re at the top of the search results in Google,” muttered my teenage son this week. “Not bad,” he added. “Oh, really?” I replied nonchalantly. I had asked him to view a post on this blog as I thought he would enjoy seeing the Linux-based cartoon which I had used as a featured image. I was secretly pleased: about the grudging admiration from my son, but also about the search ranking position, although I wasn’t quite sure why or what it meant….
Now, I realise that the word “compostingqueen” is obscure and forms part of the “long tail” of less-used search terms, consequently returning only 555 results. The more reasonable search term “composting queen” returns 1,060,000 results. So it was fairly inevitable that my blog would come up quite high in the search results. (I am not planning to disappoint my son with this information, though.) However, it did set me thinking about three key issues, namely:
- how people search
- the relationship between UX design and Search Engine Optimisation (SEO)
- effects of semantic web technology on search engines and SEO.
These issues have been written about extensively and I will therefore limit myself to giving a brief overview with some examples from my personal experience during this project.
1. SEARCHING USING (META)TAGS
As a student of Information Studies, I am interested in the role of folksonomies (user-assigned search terms). I used Delicious to publicly tag my online searches until about 2011. I then needed a multiplatform tool and, coupled with a lack of access to my data during the company’s transition to new ownership, this led me to switch to Evernote. It’s a user-friendly, stable tool with a facility to share notebooks publicly, although this is not encouraged quite as actively as it was via Delicious. I was therefore pleased to see that WordPress actively encourages the use of tags and I was keen to see how these affected my posts. I assigned tags which I thought would be of interest to other people learning to design a website. WordPress supports this folksonomic approach by using the tags to compile lists of blogs which readers can access according to subject matter.
As a result, I have noted a steady increase in the number of views and comments I have received for this blog. Although some have been spam, I have also received some genuine, useful feedback and ideas. In return, I hope that by publicly tagging this blog I may have pointed people to some useful resources. My experience with WordPress is an illustration of how keywords can be used to support effective searching within a controlled environment. Owners of websites face the task of tailoring their product to increase the likelihood of their site being retrieved from the entire Internet. In turn, this has led to the development of the Search Engine Optimisation (SEO) industry.
2. COMMON GROUND BETWEEN UX AND SEO
According to Wikipedia (2012, para. 1), SEO is:
“the process of affecting the visibility of a website or a web page in a search engine’s “natural” or un-paid (“organic”) search results. In general, the earlier (or higher ranked on the search results page), and more frequently a site appears in the search results list, the more visitors it will receive from the search engine’s users.”
Bowles and Box (2011) and Ward (2012) discuss how there is a common purpose between SEO specialists and UX designers. Fishkin (2012) views the matter from the point of view of an SEO specialist. He recommends including high-quality content and targeting well-researched keywords. He argues that once this base is constructed, additional aspects such as link-building with other sites and greater social interaction, e.g. via social media, can be used to optimise results.
The chart below demonstrates the range of practices which SEO practitioners can undertake (including the good, the bad and the ugly!).
For me, this chart reflects how there is less need for “black hat” practices where good UX design (such as high-quality IA, graphic design and content) has been implemented. For example, whilst building our website it became apparent that images were loading very slowly. According to a study by Kissmetrics, 40% of people abandon a website that takes more than 3 seconds to load (although this outcome may depend upon a number of factors, for example the user type, the platform used and the type of website). Google has included site speed in its search rankings since 2010. However, by using good iterative UX methodology we were able to address both the SEO and usability issues by changing the properties of the affected images.
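As a back-of-the-envelope illustration of why one oversized image can blow the 3-second budget, here is a short Python sketch. The asset names, sizes and bandwidth figure are invented for illustration, not measurements from our site:

```python
# Hypothetical sketch: estimate page load time from asset sizes and flag
# pages that exceed the ~3-second threshold cited by Kissmetrics.
# All asset names and sizes below are invented for illustration.

def estimated_load_seconds(asset_bytes, bandwidth_bps=1_000_000):
    """Rough download-time estimate: total bits divided by available bandwidth."""
    return sum(asset_bytes.values()) * 8 / bandwidth_bps

assets = {
    "index.html": 30_000,
    "styles.css": 20_000,
    "hero-image.jpg": 2_500_000,  # an unoptimised image, like the ones that slowed us down
}

seconds = estimated_load_seconds(assets)
print(f"Estimated load: {seconds:.1f}s -> {'too slow' if seconds > 3 else 'ok'}")
```

Shrinking the hypothetical hero image to a web-appropriate size brings the same page comfortably under the threshold, which is essentially what changing the image properties achieved for us.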
Tools For SEO
UX design is a cyclical activity and SEO could be viewed as a set of tools and techniques which are part of the cycle. Numerous web analytic tools are available to provide information. However, tools vary in their focus and the choice of software will depend greatly on the website, organisation and user types. For example, the information requirements of an e-commerce provider would be different to those of a local community website.
Our group website is password-protected and we are therefore unable to gain any meaningful data to test the analysis tools. This is disappointing but inevitable due to the academic nature of the task. Instead, I have glanced briefly at the data generated by Webalizer and Awstats, two open-source tools which are part of the web hosting package for the website I recently created to test CRMs. I am, frankly, mystified. It is evident that I require different software to analyse the data. The market leader in this software is Google Analytics. Although it is free, Melaugh (2012) discusses some of its drawbacks, including inaccuracies in recording bounce rates, the lack of real-time recording and the harvesting of data by Google. For this reason, I am now using the open-source Piwik. As a website owner, I need to be clear about my organisational and user requirements in order to measure relevant factors; SEO can only be achieved if the data is pertinent to these objectives. This is reflected by the experimental nature of my website; with no measurable objectives, it is unsurprising that it is predominantly visited by bots!
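To illustrate the kind of first-pass question I wanted the analytics to answer (is it bots or people?), here is a hypothetical sketch. The user-agent strings and bot markers are made up for illustration, not taken from my actual logs:

```python
# Hypothetical sketch: separating crawler hits from human visits by
# inspecting user-agent strings, the sort of breakdown Webalizer/Awstats
# report. The sample data below is invented.

BOT_MARKERS = ("bot", "crawler", "spider")

def classify_hits(user_agents):
    """Count hits whose user-agent string looks like a crawler vs the rest."""
    counts = {"bots": 0, "humans": 0}
    for ua in user_agents:
        if any(marker in ua.lower() for marker in BOT_MARKERS):
            counts["bots"] += 1
        else:
            counts["humans"] += 1
    return counts

sample = [
    "Mozilla/5.0 (Windows NT 6.1) Firefox/17.0",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "Mozilla/5.0 (compatible; bingbot/2.0)",
]
print(classify_hits(sample))  # mostly bots, much like my experimental site
```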
So what about Google?
SEO activities are largely tailored to fit with the market leader, Google. It is sometimes accused of abusing this role (Leibowitz, 2013) and the collation and onward use of user data is one of the reasons that some people use alternatives such as DuckDuckGo. Google has attempted to make the decision-making process about searching more transparent.
Any changes and updates to its algorithms are analysed by SEO specialists in order to ensure continued high rankings. However, it is likely that the SEO industry’s search standards and measurements will be affected by the Semantic Web.
3. SEARCHING THE SEMANTIC WEB
I have broadly described the Semantic Web in previous posts related to the future of the Internet and website design. The changes on the web, in particular the customisation features, will directly affect searching. Several semantic search engines are already in existence, including the above-mentioned DuckDuckGo and Hakia. These have included features such as disambiguation pages/call-out boxes. However, the industry leader, Google, is due to launch its own semantic search engine, Knowledge Graph, in the UK:
Although the promotional video does not explain it, the product appears to rely upon Linked Open Data (LOD), which is part of the Semantic Web project run by the World Wide Web Consortium (W3C). I am unsure, however, about the long-term implications of Google fusing the data it collects about its users across all its products with LOD. I prefer not to “put all my eggs in one basket” and, given that I already subscribe to several Google products, I have switched to using DuckDuckGo as my preferred search engine.
How will Knowledge Graph and the Semantic Web Affect SEO?
One clear benefit to users will be the disambiguation of terms. For example, a current search using Google with the keywords “river island” predominantly retrieves links to a site for a retailer, whereas the user may be searching for information related to islands situated in rivers. Knowledge Graph would use semantic technology to clarify the search terms and would then retain the information to build content that may be of interest to the user in the future.
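As a toy illustration of the idea (the entity names and context vocabularies below are invented, and this is certainly not how Knowledge Graph works internally), disambiguation can be sketched as picking the entity whose vocabulary best overlaps the user’s query:

```python
# Toy sketch of keyword disambiguation: each candidate entity has a set of
# context terms, and the query is resolved to the entity with the largest
# overlap. Entity names and vocabularies are invented for illustration.

ENTITIES = {
    "River Island (retailer)": {"fashion", "clothes", "shop", "store"},
    "river island (landform)": {"geography", "water", "delta", "ait"},
}

def disambiguate(query):
    """Return the entity whose context vocabulary best overlaps the query words."""
    words = set(query.lower().split())
    return max(ENTITIES, key=lambda entity: len(ENTITIES[entity] & words))

print(disambiguate("river island geography water"))
```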
With Google as the market leader, Penson (2013) discusses how SEO practitioners will need to adapt their techniques to ensure maximum benefit for their websites. This includes:
- developing a semantic strategy by mapping the relationships between the website and its semantic environment. He advises using tools to obtain semantic relationships. By inputting content from the project website into this software, I was able to quickly retrieve potential code, metatags and a meta description which the website owners could use to create a semantic map of their business.
- planning for semantic outreach by broadening activities to generate content for the website which meets semantic strategy. Thus, we could advise the owners of our project website to write reviews about related folk events in nearby counties.
- reducing reliance on links, as the relevance of results will be based on how many times a site/product is mentioned in close proximity to key phrases, rather than on the number of inbound links. The improved accessibility of social media on the website and the inclusion of a revised homepage for post-festival commentary by users should support semantic SEO.
- including high quality content in the form of text, keywords and well-marked audio visual content. Thus, although the text within the website is outside the scope of our academic project, we have added significantly to the audiovisual content and marked it to enhance retrievability.
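Semantic marking of content along these lines often takes the form of schema.org-style structured data. The sketch below generates a JSON-LD description for a hypothetical folk-festival event; all of the details are placeholders rather than our project website’s actual markup:

```python
import json

# Hedged sketch: emitting schema.org-style JSON-LD for an event, the sort
# of machine-readable marking that supports semantic search. The festival
# name, date and location are placeholders invented for illustration.

def event_jsonld(name, start_date, locality):
    """Build a JSON-LD string describing an event in schema.org vocabulary."""
    return json.dumps({
        "@context": "http://schema.org",
        "@type": "Event",
        "name": name,
        "startDate": start_date,
        "location": {
            "@type": "Place",
            "address": {"@type": "PostalAddress", "addressLocality": locality},
        },
    }, indent=2)

print(event_jsonld("Example Folk Festival", "2013-07-20", "Exampleshire"))
```

Embedding a block like this in a page gives a semantic search engine an unambiguous statement of what the page is about, rather than leaving it to infer this from keywords alone.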
In the near future, the Semantic Web may therefore lead to a decline in “black hat” SEO practices, whilst enhancing the role of UX design. UX design’s iterative methodology may be the key to enabling searchers not just to feel, but to be, “ducky”.
REFERENCES
Brinck, T. et al. 2002. Usability for the Web: designing web sites that work. San Francisco, CA: Morgan Kaufmann.
Fishkin, Rand. 2012. Fundamentals of great SEO [online]. San Francisco: slideshare. Available at: http://www.slideshare.net/randfish/fundamentals-of-great-seo [Accessed 15 January 2013].
Kissmetrics. 2011. Speed is a killer: why decreasing page load time can drastically increase conversions [online]. San Francisco: Kissmetrics. Available at: http://blog.kissmetrics.com/speed-is-a-killer/ [Accessed 13 January 2013].
Leibowitz, J. 2013. FTC chairman defends Google search bias settlement as legally sound. The Guardian, 11 January 2013 [online]. Available at: http://www.guardian.co.uk/technology/2013/jan/11/ftc-chairman-defends-google-settlement-search [Accessed 13 January 2013].
Melaugh, S. 2012. Web stats: alternatives to Google Analytics [online]. Available at: http://imimpact.com/web-stats-alternatives-to-google-analytics/ [Accessed 13 January 2013].
Penson, S. 2013. Semantic web and link building without links: the future for SEO? [online]. Seattle: SEOmoz. Available at: http://www.seomoz.org/blog/semantic-web-and-link-building-without-links-the-future-for-seo [Accessed: 13 January 2013].
Ward, T. 2012. Cage match: UX vs SEO [online]. Washington: Viget. Available at: http://viget.com/inspire/cage-match-ux-vs-seo [Accessed 13 January 2013].
Wikipedia. 2012. Search engine optimization [online]. Available at: http://en.wikipedia.org/wiki/Search_engine_optimization [Accessed: 13 January 2013].