With a background in marketing, design, and technology, Dave Burke focuses on bridging disciplinary gaps to create web products that delight both users and business owners.




Happy to be speaking at the IA Summit (not that happiness is everything)

I was truly thrilled and honored to be selected to speak at the 2012 IA Summit. I've been going to the Summit for years (first time was in Vegas), and have always found the content to be top notch and the community to be even better.

So here's the proposal for the talk. I'm happy to hear feedback or suggestions as I put it together.

Happiness is overrated.

It's almost a mantra in the user experience world: you should aim to “delight” your users. You should stud their experience with moments of “wow”. You should exceed their expectations.

And when you do it well, you’ll create loyal customers, and even some social evangelists who will happily spread the word of your awesomeness across the web.

It’s conventional wisdom. But is it true?

Research conducted by the Corporate Executive Board (my old company) delved into the relationship between customers' levels of satisfaction and their loyalty to a brand. The study, which focused primarily on customer service websites and call centers, casts doubt on the dollar value of delight.

Delighting users doesn’t necessarily make them more loyal or profitable. In fact, your highest-value opportunities for boosting user loyalty lie in helping the ones who aren’t happy by eliminating their most common irritants.

During this talk we’ll get into:

  • How bad customer experiences affect the bottom line compared to good ones
  • Our individual and organizational bias toward designing for delight
  • The top two drivers of user dissatisfaction, and how to avoid them
  • Real-life examples of websites that have been updated specifically to build loyalty through better online experience
  • The importance of service design to customer loyalty
  • A new metric to measure (and present to senior management) the success of your service design

User Research Methods: What vs. Why × Surface vs. Strategy

A recent conversation about user testing new features and designs prompted me to get more organized in my thinking about matching research methods to particular testing goals. Two key dimensions to consider are:

  • User Dimension (X Axis): Is our research goal to better understand how users are behaving on the site (aka, what users are doing), or is it more about understanding their underlying goals, tasks, and assumptions (aka, why users are doing what they do)?
  • Product Dimension (Y Axis): Are we more interested in how well our product is aligning with user goals and tasks in terms of features, content, and high-level structure (aka, our product Strategy*), or are we measuring how well the design is presenting the product through visual presentation, IA, and labeling (aka, our product Surface*)?

* Surface and Strategy are designations borrowed here from Jesse James Garrett's Elements of User Experience. Surface is represented by the eyeball icon; Strategy by the lightbulb.  

Given those dimensions, I tried to map the relative usefulness of various research and testing methods. What are your thoughts? How would you arrange these research methods?
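As a rough sketch of what such a mapping might look like in code, the two dimensions can be treated as x/y coordinates and each method placed in a quadrant. The method names and their positions below are hypothetical examples for illustration, not my actual mapping:

```python
# Illustrative sketch only: placing research methods on the two dimensions.
# Coordinates and placements are hypothetical, not a definitive mapping.

# x axis: -1.0 = "what users do" ... +1.0 = "why they do it"
# y axis: -1.0 = Surface (presentation) ... +1.0 = Strategy (goals, features)
methods = {
    "web analytics":       (-0.9,  0.2),   # behavioral, somewhat strategic
    "A/B testing":         (-0.8, -0.6),   # behavioral, surface-level
    "moderated usability": ( 0.3, -0.4),   # some "why", mostly surface
    "contextual inquiry":  ( 0.9,  0.8),   # deep "why", strategic
}

def quadrant(x: float, y: float) -> str:
    """Name the quadrant a method falls into on the two axes."""
    user = "why" if x > 0 else "what"
    product = "strategy" if y > 0 else "surface"
    return f"{user}/{product}"

for name, (x, y) in sorted(methods.items()):
    print(f"{name}: {quadrant(x, y)}")
```

The point of the sketch is only that each method gets a position on both axes at once, so methods can be compared by quadrant rather than debated one at a time.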


The UX Hierarchy of Needs To Be Fixed

This is a poster presentation for the 2011 IA Summit. Download the poster PDF here.

Here's a summary of the proposal:

The UX Hierarchy of Needs To Be Fixed is composed of a graded set of user consequences for UX defects. The idea is that, when facing a backlog of defects in a product nearing launch, you should fix defects in the more severe categories — the ones at the bottom of the pyramid — before tackling ones that are higher up. The categories are, from most to least severe:

Goal/Task Loss. The defect consistently blocks the user from completing a critical task or goal, either due to system failure or severe usability issues. Such issues include unexplained data loss, unrecoverable errors, and critical browser incompatibility.

Confusion. The defect leaves the user unable to find the path to task completion without considerable effort or multiple missteps, causing a high risk of task abandonment. Examples include multiple unclear paths, requests for information that the user does not understand, and unrecognizable calls to action.

Frustration. While the defect does not cause the user to lose the path to task completion, it adds considerable effort and cognitive load, ultimately creating a moderate risk of abandonment. Examples include missing desired features, difficult language or jargon, and requests for information that the user cannot obtain without considerable effort.

Inefficiency. The user can complete all tasks and goals, but completion takes more effort or inconvenience than is necessary. Examples include multi-step processes that could be completed as one, system slowness, and annoyingly strict security requirements.

Love loss. The defect has no impact on task completion or efficiency, but it reduces the brand affinity or overall delight that the user feels. Examples include typos, misaligned graphics, and misuse of color.

The hierarchy is intended to qualify the impact of a defect on a single user. But to prioritize defects, product managers must also consider other factors including:


  • the business criticality of the affected features
  • how many users will be affected
  • the business importance of the affected user segments
  • the effort required to implement a fix


These factors are represented on the poster as well.
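One way to picture how the hierarchy and the business factors might combine is a simple sort: severity tier first, then business impact per unit of fix effort within a tier. The weights, scores, and example defects below are hypothetical, and this is only a sketch of the idea, not a formula from the poster:

```python
# Illustrative sketch: ranking a defect backlog by hierarchy tier, then by
# business impact per unit of fix effort. All scores here are hypothetical.
from dataclasses import dataclass

# Most to least severe, per the hierarchy (lower index = fix first).
SEVERITY = ["goal/task loss", "confusion", "frustration",
            "inefficiency", "love loss"]

@dataclass
class Defect:
    name: str
    severity: str        # one of SEVERITY
    criticality: int     # business criticality of affected feature, 1-5
    users_affected: int  # estimated number of affected users
    segment_value: int   # business importance of affected segments, 1-5
    fix_effort: int      # effort required to implement a fix, 1-5

def priority_key(d: Defect):
    """Sort by severity tier first, then by impact per unit effort."""
    impact = d.criticality * d.users_affected * d.segment_value
    return (SEVERITY.index(d.severity), -impact / d.fix_effort)

backlog = [
    Defect("typo on home page", "love loss", 2, 10000, 2, 1),
    Defect("checkout data loss", "goal/task loss", 5, 500, 5, 4),
    Defect("jargon in signup form", "frustration", 3, 8000, 3, 2),
]

for d in sorted(backlog, key=priority_key):
    print(d.name)
```

Sorting on a tuple keeps the hierarchy absolute: a goal/task-loss defect always outranks a love-loss one, and the business factors only break ties within a tier.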

The hierarchy is informed by my own work as a UX designer and product owner on various web projects, as well as by collaboration with other UX professionals, but it is still a work in progress. I hope that presenting this poster at the IA Summit will spark some conversation and help me refine and clarify the hierarchy further.