— August 23, 2017
Whether you have created a landing page for idea validation or have built a stable release for your flagship product, few results are more unsettling than launching to silence or getting a great deal of traffic with a low conversion rate.
Your first reaction may be to fix this problem in-house, or you may hire an outside consultant to give you an audit, evaluation, or teardown. These are essentially the same service under different names.
The output of a teardown depends on a consultant’s specialty and goals. A wide range of consultants offer teardowns: developers, designers, marketers, and user experience (UX) practitioners.
Teardowns may merely show people how a product works to satisfy their curiosity; they can provide more competitive intelligence; or they can identify ways to increase user retention. Fixing problems identified in a teardown can improve business metrics, and UX teardowns in particular can show businesses where they can improve their customer service.
Three common traps in website or app teardowns
However, not all teardowns are created equal – even when high-end consultants do them. Several kinds of teardowns provide misguided advice that does not benefit the organizations that order them.
Self-centered teardowns: Providing the wrong audience’s perspective
In 97 Things Every Programmer Should Know, Kevlin Henney tells developers that they do not ultimately build products for themselves. About Face expands on this point; developers and their users rarely have the same goals, pain points, technical skill level, or domain expertise.
The same is true of a consultant who does a teardown of your product. But there is a subtler trap than a gap in expertise: the consultant's personal taste.
Especially in a screencast or a meeting, an evaluator can easily tell you to change your product to suit his preferences for visual design, content, and features. I call this a self-centered teardown.
But an evaluator is not your audience either. Following advice that makes him like your product may go against your audience’s goals or aggravate their pain points. That can ultimately cost you users.
Paint-by-numbers teardowns: Misapplying best practices
A development agency owner in my network had someone else do a teardown of his agency’s website. The evaluator told him to remove animations from his homepage because they made it load slowly.
Decreasing load times on e-commerce sites is definitely a best practice. KissMetrics reported that 40% of users abandon websites that do not load within 3 seconds, which leads to a measurable loss in revenue.
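As a rough illustration of why abandonment translates into measurable revenue loss, here is a back-of-the-envelope estimate. All of the traffic, conversion, and order-value figures below are hypothetical, not taken from the KissMetrics report:

```python
# Illustrative estimate of revenue lost to slow page loads.
# Every number here is a made-up assumption for the sketch.

monthly_visitors = 50_000
abandonment_rate = 0.40      # share of visitors who leave if load time exceeds 3 s
conversion_rate = 0.02       # share of remaining visitors who would have bought
average_order_value = 60.00  # dollars

lost_visitors = monthly_visitors * abandonment_rate
lost_revenue = lost_visitors * conversion_rate * average_order_value
print(f"Estimated monthly revenue lost: ${lost_revenue:,.2f}")
```

With these assumptions, even a modest store loses tens of thousands of dollars a month, which is why the best practice is so widely repeated for stores whose visitors want to buy quickly.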
However, potential clients for agencies have different goals. Rather than buying a product quickly, they want to choose an agency that does impressive work. A quarter of this agency owner’s inbound leads were asking him to create similar animations. Removing an animation to follow a best practice would have cost him revenue.
Advice to follow best practices can also lead to significant redesign efforts with little potential return. For example, removing a carousel from a homepage can lead to a full redesign of that homepage to account for the carousel’s content. Pursuing this might not make sense if a potential redesign would not make business outcomes noticeably better.
Wish list teardowns: Failing to prioritize findings
Lastly, usability evaluations have a reputation as lengthy reports that add hundreds or thousands of work items to your backlog. Triage – a time-consuming and costly activity – falls on the client who ordered it.
When usability evaluations are outsourced to low-cost providers, the client’s real cost comes in deciding which items to implement. According to Daniel Cook, users rarely have enough say in these meetings. Instead, internal preferences or a desire to merely finish the meeting can carry too much weight.
Unless an item is required by law or contract, fixing it might not provide the best return on effort. Teardowns should weigh each item’s impact on users and business metrics instead of treating every issue as equal.
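One way for an evaluator to avoid delivering a wish list is to score each finding before handing it over. The sketch below uses a hypothetical impact-over-effort model; the field names, scales, and example findings are assumptions for illustration, not a standard scoring scheme:

```python
# Rank teardown findings by estimated return instead of listing them flat.
# The scoring model and example data are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    user_impact: int      # 1 (cosmetic) .. 5 (blocks a key user goal)
    business_impact: int  # 1 (negligible) .. 5 (directly affects revenue)
    effort: int           # 1 (quick fix) .. 5 (major redesign)

    @property
    def priority(self) -> float:
        # Higher impact and lower effort float to the top.
        return (self.user_impact * self.business_impact) / self.effort

findings = [
    Finding("Carousel hides pricing information", 4, 5, 3),
    Finding("Footer link color is off-brand", 1, 1, 1),
    Finding("Checkout form rejects valid emails", 5, 5, 2),
]

for f in sorted(findings, key=lambda f: f.priority, reverse=True):
    print(f"{f.priority:5.2f}  {f.description}")
```

Even a crude model like this forces the conversation toward user and business impact, so triage meetings start from an ordered list rather than a flat backlog dump.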
A twist to make teardowns more effective
Alan Cooper introduced user personas in The Inmates Are Running the Asylum in 1998. Personas represent realistic archetypes of different kinds of users of a product, and they help guide a team so that their design decisions match their users’ goals.
For full design projects, personas should be generated from user research data. User research can take weeks, which would swell the scope and cost of a teardown project considerably.
Cooper and his coauthors also discuss a lighter-weight method in About Face. Provisional personas involve briefly documenting key distinctions between several types of users of a product. By checking the goals and pain points of each pair of personas against each other, you can determine ideal primary and secondary audiences for your product as a whole or specific sections of it.
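The pairwise check described above can be sketched in code. This is only an illustration of the comparison step, assuming a minimal persona record of goals and pain points; the personas, fields, and data are hypothetical, not from About Face:

```python
# Sketch of comparing provisional personas pairwise to surface
# shared and conflicting goals. All persona data is hypothetical.
from itertools import combinations

personas = {
    "Agency buyer": {
        "goals": {"impressive visuals", "proof of expertise"},
        "pain_points": {"generic portfolios"},
    },
    "Procurement lead": {
        "goals": {"fast page loads", "clear pricing"},
        "pain_points": {"slow, flashy sites"},
    },
    "Developer hire": {
        "goals": {"clear pricing", "proof of expertise"},
        "pain_points": {"vague job postings"},
    },
}

for (name_a, a), (name_b, b) in combinations(personas.items(), 2):
    shared = a["goals"] & b["goals"]
    print(f"{name_a} vs {name_b}: shared goals = {shared or 'none'}")
```

Pairs with little goal overlap (or outright conflicts, like "impressive visuals" versus "fast page loads") signal that one persona should be primary for a given page or section and the others secondary.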
Personas form the basis of the “twist” that helps a teardown avoid the three traps above. But how can evaluators truly examine a product from perspectives other than their own?
Enter a classic theatrical technique. Method acting teaches actors to embody their characters, so that spectators see the character more than the actor during the performance.
In a teardown, then, evaluators should use what they know about their provisional personas to see the product from the standpoint of the personas’ goals and pain points. All the while, they would shift between the perspectives of each persona and let relevant best practices be their guide.
Individual evaluators should limit themselves to 3 or 4 personas per session. A larger set would make the teardown slower and less accurate.
Benefits of teardowns from your users’ perspectives
A teardown from even one persona’s perspective can lead to significant gains over a self-centered teardown. By showing which advice matters to a product’s target audience, it yields more targeted feedback that is more likely to improve key metrics and, if applicable, revenue from people who match that audience.
Teardowns from several personas’ perspectives show the implications of design decisions across a product’s user base. Since they show how accommodating one group of users better can help or hurt another group, they improve a team’s critical thinking as they discuss design improvements, new features, new designs, and redesigns.
Learn more about user research and designing from data
This article is adapted from content in UX for Development Shops: Declutter Your Interface, Design from Data, & Increase User Adoption & Retention. You may download the first 2 chapters of this ebook for free at https://davidp.io/ebook.