Back in October of last year, Google Apps underwent a fairly extensive redesign. The overall intent of this realignment was to improve the user interface design, an area that has traditionally been a stumbling point for Google.
Personally, I think the new designs are excellent, and I really appreciate that they address my need to access my Google Apps from a variety of devices. I also like that they let me set my own preference for the visual density of content within the page.
In fact, this is such a killer feature that I reverse-engineered it to better understand it, and I’ll be looking to use this knowledge in some of my future work. Whilst I wouldn’t really refer to this deconstruction as such, plagiarism is the sincerest form of flattery. Unfortunately, due to extensive work commitments, I’ve only just got around to writing up my work…
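Although the full deconstruction will have to wait for the write-up, the general shape of such a feature is easy to sketch: a small set of density levels, cycled by the user and mapped onto a class name for the stylesheet to hook. The class-name scheme below is my own assumption, not Google’s actual implementation.

```javascript
// A minimal sketch of a user-controlled content density preference.
// The "density-*" class scheme is an invented convention for illustration.
const DENSITIES = ['comfortable', 'cozy', 'compact'];

function nextDensity(current) {
  // Cycle to the next density level; unknown values restart the cycle.
  const i = DENSITIES.indexOf(current);
  return DENSITIES[(i + 1) % DENSITIES.length];
}

function densityClass(density) {
  // Map the preference onto a class name a stylesheet can target,
  // e.g. body.density-compact td { padding: 2px 6px; }
  return 'density-' + (DENSITIES.includes(density) ? density : 'comfortable');
}
```

In a browser you would apply `densityClass()` to the `body` element and persist the choice (in a cookie or local storage) so it follows the user between visits.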
Continue reading “User-controlled Content Density”…

Back in April 2007, I wrote a short article on HTML element identifiers for the .net magazine “Expert Advice” section. I have never republished it online—despite it being 400 words, nicely to the point and suitably succinct—and have long been meaning to reexamine the subject in more detail.
By total coincidence, a recent front-end code review and a discussion over the past couple of days on Twitter both touched on the use of HTML element identifiers. The former concerned semantic value and the principle that HTML should not presuppose styling; the latter, the validity of using IDs to target CSS. With both of these considerations in mind, I’m going to examine the fundamentals of writing good element identifiers in this article.
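To illustrate the kind of question at stake (the names here are invented examples, not taken from the original article): an identifier that describes presentation couples the markup to one visual design, whereas one that describes the content’s role survives a redesign unchanged.

```html
<!-- Avoid: the identifier assumes a particular layout -->
<div id="left-column">…</div>

<!-- Prefer: the identifier describes the content's role -->
<div id="site-navigation">…</div>
```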
Continue reading “On HTML Element Identifiers”…

I’m currently involved in a project to write a fairly extensive set of best practices for front-end development. Alongside my own work, this project includes input from a fair cross-section of my peers in the front-end development community. These best practices, together with a coding standard, will govern development within the organisation I work for, and hopefully within many other organisations once they are published.
Of all the standards that a front-end team might want to implement, those that concern the identification and graceful degradation of cross-browser feature sets can be the hardest to define.
With that in mind, I’ve been poking around the front-end community looking for possible solutions. By far the most common approach—and one that gains an astounding level of attention in the community—is to implement Modernizr, a JavaScript feature sniffer created by Faruk Ateş, Paul Irish and Alex Sexton.
Unfortunately, despite my respect for the developers involved, I just can’t advocate Modernizr as a solution. Let me explain why; but first, let’s revisit some concepts that are going to be quite relevant…
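For context, the core technique a feature sniffer relies on is simple enough to sketch in a few lines: probe for a property (and its vendor-prefixed variants) on an element’s style object. This is an illustrative reduction of the general approach, not Modernizr’s actual code; the element is passed in as a parameter purely so the idea can be demonstrated outside a browser.

```javascript
// A minimal sketch of style-object feature detection: if a CSS property
// (or a vendor-prefixed variant) exists on an element's style object,
// the browser understands that property.
function supportsCssProperty(property, element) {
  const prefixes = ['', 'Webkit', 'Moz', 'ms', 'O'];
  const capitalised = property.charAt(0).toUpperCase() + property.slice(1);
  return prefixes.some(function (prefix) {
    const name = prefix ? prefix + capitalised : property;
    return name in element.style;
  });
}
```

In a browser you would call it as `supportsCssProperty('transform', document.createElement('div'))` and branch, or add a class to the root element, accordingly.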
Continue reading “Sniff My Browser: The Modernizr Inadequacy”…

Following on from “SEO for Web Developers: Keywords and Links”, this next article in my SEO series focuses on page construction. Whilst I’ve previously stated that in-links (i.e. external incoming links) are the fundamental workhorse of good SEO, it is also important to make sure you are constructing your pages in a way that easily exposes your content, and that clearly links it to your identified keywords.
Further to that, it’s important to know what the search bots are looking for when they spider your pages. From URL structure through to semantic markup and page-specific metadata, there are a multitude of features you can build in from the start that will improve the search engines’ insight into your content.
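By way of illustration, the kind of page-level construction in question looks something like this. The titles, description and headings here are invented examples of the signals involved, not prescriptions:

```html
<head>
  <title>Primary Keyword Phrase | Site Name</title>
  <meta name="description" content="A concise summary a search engine can display on the results page.">
</head>
<body>
  <h1>The page's primary keyword phrase</h1>
  <h2>A supporting subtopic, marked up semantically</h2>
</body>
```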
Continue reading “SEO for Web Developers: Page Construction”…

Since the dawn of the web, developers and content editors have sought insight and enlightenment into the arcane art of search engine optimisation (SEO). This relentless battle to rise above the competition and feature at the top of the search engine results page (SERP) for a chosen keyword or search term has forced a constant, clandestine evolution upon the search engines themselves whilst they adapt to out-trick the trickster.
What’s more, the proliferation of false knowledge, the equally charlatan-populated worlds of SEO consultancy and web development, and the sheer abundance of SEO false positives have meant that snippets of true SEO wisdom are drowning in a sea of irrelevant balderdash.
It is no wonder then that your average web developer is not armed with the erudition required to fight the good fight; knowledge that will allow them to articulate the message of their content to the search engines and therefore be exposed to a wider audience.
Fear ye not, weary web devs, for here is your codex of power. I hereby commit to blog my personal experiences and knowledge of Getting SEO To Actually Work™. Everything I discuss in this series of articles has been learned through experimentation, careful analysis, and search-bot honey-pot test sites. Oh, and mummyfrakkin’ science, bitches.
Continue reading “SEO for Web Developers: Keywords and Links”…