
KPI-centered design

TL;DR: Despite evidence that employing a human-centered design (HCD) approach to product design yields a strong competitive edge, many companies neglect HCD or don’t even consider it in the first place. One reason is probably that many who think they’re working user-centered are practicing something entirely different: KPI-Centered Design. This article looks into different forms of design processes, into the differences between proper HCD and KPI-Centered Design, and into what a reasonable compromise can look like: formulating hypotheses based on evidence rather than just KPIs or gut feeling, and complementing quantitative experimentation with more qualitative insights.

Originally published in the Interactive Pioneers InnovationLAB Blog and on 2008 ‒ Tales of Design & User Experience.

Human-Centered Design (HCD) is a term that has been around since the 1980s (cf. Mike Cooley: Architect or Bee?), but it still counts as a hot topic in the digital and e-commerce industries. A McKinsey Quarterly report from 2018 ‒ The Business Value of Design ‒ identifies “user experience” (UX) and “continuous iteration” as two of the four core competencies that set overperformers apart from the industry benchmark. Nevertheless, the report shows that many companies either don’t consider a proper HCD process in the first place or don’t implement it effectively. In this article, I want to take a closer look at a specific form of the latter ‒ thinking one is working in a user-centered manner, but missing the point.

I’ve been involved in e-commerce since 2012, and also worked for a startup and did research at a large American university in the meantime. That means for almost ten years now, I have attended conferences, seminars, roundtables, meetups, and (mostly unspeakable) “networking events” and discussed HCD and UX, but also topics like conversion rate optimization (CRO) and growth marketing with a large number of other participants. In the process, two things struck me again and again:

  1. UX, CRO, and growth marketing are often mixed up, a topic I address in my article “Growth Marketing Considered Harmful” in the upcoming issue of i-com.
  2. Many who talk about how “human-centered design is already being done” at their company are often doing something fundamentally different, which is what I want to devote the rest of this article to: KPI-Centered Design.

In Theory: Human-Centered Design

The double-diamond process of Human-Centered Design. Left diamond: discover & define; right diamond: design & deliver.
The traditional double diamond of Human-Centered Design (by Johanna Jagow based on this source).

When I talk about a “proper HCD process,” what exactly do I mean? I’d like to refer to Don Norman’s book The Design of Everyday Things, in which he illustrates the HCD process in the form of a double diamond, which additionally came to fame through the work of the British Design Council. The idea behind these two diamonds is to illustrate that in a two-step process, one first diverges and then converges again to identify a design problem and then solve it with the greatest possible benefit to the user. During this process, a number of qualitative as well as quantitative research methods can (or should) be applied to investigate both the personal attitudes of users (“attitudinal”) and their actual behavior (“behavioral”).

In the left-hand part of the process (“Discover & Define”), usually through rather open-ended methods, the goal is to uncover a large number of pain points in the everyday lives of users or during the use of a product (divergence), in order to subsequently agree on a single problem X that is particularly promising to solve (convergence). Probably the most effective and valid method for discovering real user problems is ethnographic field research, or high-tech anthropology, as Rich Sheridan calls it in the context of digital products in his book Joy, Inc. Because this method is costly and resource-intensive (in particular, it requires a lot of time and patience), it is comparatively rarely used in industry. Instead, researchers often rely on methods like card sorting, surveys, and user interviews, which are much less costly, but also less effective because users cannot be observed in their “natural environment.”

The subsequent, right-hand part of the process involves first designing a number of possible solutions ‒ often in the form of simple prototypes ‒ for the identified problem X (divergence) and then settling on a solution validated by user tests (convergence). In this phase, the methods used are less open-ended and more quantitative, as the focus is more on measurable user behavior and the comparability of potential solutions. Finally, the best solution is implemented and further validated, often resulting in further iterations (“Measure & Learn”). Thus, an HCD process is never really finished but culminates in the continuous iteration identified as a core competency by McKinsey.

In Practice: KPI-Centered Design

The double-diamond process of KPI-Centered Design. Left diamond: KPI-based hypothesis; right diamond: A/B test.
The double diamond of KPI-Centered Design.

After the theory, let’s now take a look at how things often play out in practice. However, I’d like to clarify two things beforehand.

On the one hand: When I speak of the “practice,” I don’t mean to imply that no one is following a proper HCD process. I’m simply referring to my observations from a large variety of conversations, as described in the introduction. Hopefully, these observations do not apply to the majority of companies but have accumulated to such an extent that I just couldn’t not write an article about it.

On the other hand: There is absolutely nothing wrong with a certain discrepancy between theory and practice. Anything else would be a denial of reality. At this point, I would therefore like to quote Al Roth: “In theory, there is no difference between theory and practice”. It’s only really problematic when the discrepancy becomes too large.

Such a large discrepancy exists when one speaks of Human-Centered Design, but in fact practices what I’d like to call KPI-Centered Design. This means a process that superficially appears to focus heavily on the user because you are testing and iterating ‒ and thus at least scratching the surface of HCD ‒ but at the core, there is only a focus on one (or more) business-critical target metrics (key performance indicators or KPIs). This strongly (but of course not exclusively) favors the implementation of dark patterns, especially in digital products. A classic example here is an aggressively designed scarcity effect, which creates artificial scarcity through clever communication and can be found on many, if not most, e-commerce websites today. However, this merely exploits a cognitive bias that can cause users to buy products they don’t actually need or want. That’s definitely good for business KPIs, but human-centered? I doubt it.

The figure above illustrates by far the most common form of KPI-Centered Design. It’s based almost exclusively on A/B testing (or “experimentation,” as some call it). The left diamond of the process is greatly reduced in size; the “discovery” part is completely gone. Instead, it now mainly involves hypotheses about how a (digital) product could be changed to influence a specific business KPI. In this phase, user needs can certainly still play a role, but they usually take a back seat compared to classic business metrics such as conversion rates, which measure, for example, the number of purchases or newsletter subscriptions.

Subsequently, in the right-hand part of the process, different variations are designed based on a selected hypothesis and compared against each other in an online experiment with random (and unsuspecting) groups of users. If one of the variations performs (statistically) significantly better than the control group on the previously determined primary target metric, it can be implemented; otherwise, another iteration is necessary. Complementary analytics data can also be used to keep an eye on secondary metrics that are not directly measured in the experiment.
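To make that significance check a bit more concrete, here is a minimal sketch (in Python, with made-up numbers) of how a primary conversion-rate metric could be compared between control and variation using a standard two-proportion z-test. This is an illustration of the general procedure, not a description of any particular testing tool; a real experiment additionally needs a pre-calculated sample size and a fixed runtime.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Compare the conversion rates of control (A) and variation (B).

    Returns the z statistic and the two-sided p-value. Illustrative only:
    a real experiment also needs a pre-calculated sample size and a fixed
    runtime, so the test is not stopped as soon as it "looks" significant.
    """
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis "no difference"
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: 10,000 visitors per group, 4.0% vs. 4.6% conversion
z, p = two_proportion_z_test(400, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # "winner" only if p < alpha (e.g., 0.05)
```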

A/B tests have one big advantage: they can be carried out quickly and ‒ in contrast to other methods of user research ‒ relatively easily with the live product. Yet, one of their biggest disadvantages is that they only show how users behave, but not why they do it that way. Therefore, an A/B test might massively increase a certain KPI, but at the same time worsen the user experience without anyone noticing.

What can be a reasonable compromise?

The double-diamond process of Evidence-Centered Design. Left diamond: evidence-based hypotheses; right diamond: A/B tests + qualitative research.
The double diamond of Evidence-Centered Design.

Now, of course, A/B tests absolutely have their raison d’être and you have to face reality: It’s practically impossible to build every design process exactly along the double diamond, especially in industry. Fast and straightforward methods are necessary to provide answers to urgent questions at short notice. Of course, this requires taking shortcuts and a little design process is still better than no design process at all, isn’t it‽ Because, of course, one could also practice what can be called “design by chance,” where the left part of the double diamond disappears completely ‒ all “discovery” decisions are based solely on gut feeling.

But still, what can be a reasonable compromise? How can you make sure that, for all the KPI (and especially conversion) centricity that is common in industry, you don’t slowly lose sight of the user and still think you’re doing human-centered design?

To achieve this, we adapt the KPI-Centered Design process in two places. First, any hypothesis about how a product could be improved should be backed up with hard, user-centered facts, or evidence. This prevents too much focus on ideas that are merely based on gut feelings (in the worst case, a HiPPO’s, i.e., the Highest Paid Person’s Opinion). Examples of valid evidence include results from already completed user studies or usability tests, user feedback, competitive analyses, heat maps, analytics data, etc. Hypotheses with evidence should generally be prioritized higher than those without (and you can further refine by quality and amount of evidence ‒ cf. the “What Works” institutes in David Halpern’s Inside The Nudge Unit). This makes the left “Define” diamond a little bigger again.
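Purely as an illustration of what such an evidence-based prioritization could look like, here is a small sketch. The evidence types follow the examples above, but the weights, field names, and sample hypotheses are my own assumptions, not part of any established framework.

```python
from dataclasses import dataclass, field

# Hypothetical weights per evidence type; the values are an assumption
# for illustration, not a prescription.
EVIDENCE_WEIGHTS = {
    "usability_test": 3,
    "user_feedback": 2,
    "analytics": 2,
    "competitive_analysis": 1,
    "heat_map": 1,
    "gut_feeling": 0,
}

@dataclass
class Hypothesis:
    statement: str
    evidence: list[str] = field(default_factory=list)

    def score(self) -> int:
        return sum(EVIDENCE_WEIGHTS.get(kind, 0) for kind in self.evidence)

backlog = [
    Hypothesis("Fewer checkout steps increase completed purchases",
               ["usability_test", "analytics"]),
    Hypothesis("A red CTA button converts better",
               ["gut_feeling"]),
    Hypothesis("Showing shipping costs earlier reduces cart abandonment",
               ["user_feedback", "heat_map"]),
]

# Hypotheses backed by (more and better) evidence rise to the top
for h in sorted(backlog, key=Hypothesis.score, reverse=True):
    print(f"{h.score():>2}  {h.statement}")
```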

Second, we extend the right-hand part of the process to include usability studies and other more qualitative user research methods. Those can already be used to validate designs for A/B test variations before they’re implemented, but should be used, at the latest, after an A/B test has been completed, to uncover the “why?” after learning about the “how?”. This additional qualitative research should not be seen as an accessory, but as a necessary, equal “partner” alongside the A/B test, in order to get a more complete picture from both a KPI and (as best as possible) a user perspective. Of course, this again increases the effort compared to pure “experimentation,” but there are efficient, relatively resource-saving methods the impatient can make use of ‒ such as think-aloud studies with 5 to 10 participants. Those can take place asynchronously and completely online and at least ensure that no elementary usability or UX problems are caused by the winning variation of an A/B test (cf. Jakob Nielsen: “Heuristic Evaluation of User Interfaces“). It doesn’t always have to be ethnographic field research. All in all, let’s call this adapted process Evidence-Centered Design, due to its greater focus on user-centered facts.
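The usual rationale behind such small studies is Nielsen and Landauer’s problem-discovery model: if each participant uncovers a given usability problem with probability p (commonly estimated at around 31%), then n participants are expected to uncover a share of 1 − (1 − p)^n of all problems. A quick sketch of that curve; the 31% is the commonly cited average, not a universal constant.

```python
# Nielsen & Landauer's problem-discovery model: with n participants, the
# expected share of usability problems found is 1 - (1 - p)^n, where p is
# the average probability that a single participant hits a given problem.
def problems_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} participants -> ~{problems_found(n):.0%} of problems found")
# With p = 0.31, five participants already surface roughly 85% of the problems.
```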

Another approach to evolving away from KPI-Centered Design, and thus growing the left diamond of the design process, can be an (additional) focus on product KPIs instead of the purely business-centered classics ‒ number of orders, shopping cart size, new registrations, etc. Frameworks that lend themselves to this include, e.g., AARRR (also called “Pirate Metrics” for obvious reasons 🏴‍☠️) and North Star. Strictly speaking, such a process is of course still a form of KPI-Centered Design, but with target metrics that are much closer to the user (and yet do not ignore business needs).
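To illustrate, here is a minimal sketch of what product KPIs along the AARRR stages could look like for one hypothetical cohort. The event definitions and numbers are invented for illustration; the stage names follow the usual pirate-metrics reading.

```python
# The five AARRR ("pirate metrics") stages with hypothetical counts for one
# cohort; the step-to-step conversion rates are product KPIs that sit much
# closer to the user than, say, raw order numbers.
funnel = {
    "Acquisition": 10_000,  # e.g., visitors who land on the product
    "Activation":   3_200,  # e.g., completed onboarding / first "aha" moment
    "Retention":    1_400,  # e.g., came back within 30 days
    "Referral":       300,  # e.g., invited someone else
    "Revenue":        250,  # e.g., converted to a paid plan
}

stages = list(funnel.items())
for (prev_name, prev_count), (name, count) in zip(stages, stages[1:]):
    print(f"{prev_name} -> {name}: {count / prev_count:.1%}")
```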

Conclusion

Five types of design processes, sorted by degree of user-centricity, from left to right: Design by chance, KPI-centered design, product-KPI-centered design, evidence-centered design, human-centered design.
The types of design processes discussed in this article, from Design by Chance all the way to Human-Centered Design.

Although it may sound a bit like it, A/B testing is not the core of the problem described in this article. That core is more abstract and has more to do with the general mindset and priorities in a company. A/B testing has its rightful place in the set of design and research methods, but it pops up so often in the context of purely metrics-driven design because the method is exclusively quantitative and thus extremely data-driven. This is further fueled by approaches like Lean Startup, which ‒ interpreted the wrong way ‒ can give the impression that A/B testing, or “experimentation,” is enough to be user-centric. However, only a combination of methods that answer different questions can achieve a truly deep understanding of users.

There is absolutely nothing wrong with a strong focus on business-critical KPIs; after all, business viability is one of the pillars of good design, along with user-centeredness, and, of course, technical feasibility (cf. Tim Brown: “Design Thinking“). My only wish is: If you decide to follow that approach, then please just call it what it actually is ‒ not user-centered, but KPI-Centered Design.

Calls to Action

  1. Read the British Design Council’s article about their double diamond framework.
  2. Review your own or your company’s design or experimentation processes. Which of the five forms of design processes above do they resemble most?
  3. Ask yourself: When was the last time you formulated a design or research hypothesis? Which evidence did you use to back it? If none, which could/should you have used?

Acknowledgment

Many thanks to Daniel who, while proofreading, had the idea to include “Design by Chance”, AARRR and North Star in the article. 🙂
