Participatory research: it’s not everything, it’s the only thing

Campo Volantin footbridge by dalylab on flickr

One of my favorite structures in Bilbao is the Campo Volantin footbridge, designed by Santiago Calatrava. I went out of my way to walk over it many times while I was visiting that beautiful city. Approaching it was a visual treat and there were always musicians playing on it, an aural treat.

But once you start walking on the bridge, you can’t help but notice the ugly plastic rug that’s been laid down on top of the glass tiles (over a layer of worn-out black strips).

Floor of the Campo Volantin footbridge in Bilbao

It seems that the designer didn’t think about the fact that glass is slick when wet and Bilbao gets an average of 200 days of rain each year. When the bridge opened, people slipped, fell, and injured themselves (not to mention how frightening it would be to slide toward the edge). Yikes!

In my talk at the Salud 2.0 conference in Bilbao, I didn’t mention the bridge, but I did tell, once again, the story of Diana Forsythe, who uncovered a similar example of design failure.

In one chapter of her book, Studying Those Who Study Us, she describes how, in the 1990s, she did some fieldwork in an artificial intelligence lab that was asked to create an information kiosk for newly diagnosed migraine patients. The idea was that patients could walk up to the kiosk, punch in questions, and get some answers before or after they saw their doctor. It was a nice idea, ahead of its time in some ways. But when it launched, it was a failure. Patients were polite, as they often are, but it wasn’t useful. Why? Because the kiosk’s designers had not asked patients what they wanted to know. They relied on an interview with a single doctor, who told them what he thought patients should want to know.

As Forsythe wrote, “The research team simply assumed that what patients wanted to know about migraine was what neurologists want to explain.”

The mismatch was complete. The kiosk failed to answer the number one question among people newly diagnosed with migraine: Am I going to die from this pain? It’s an irrelevant, even silly question from the viewpoint of a neurologist, but it is a secret fear that people might feel comfortable expressing to a kiosk, but not to their clinician.

I asked the audience to consider how they can design health interventions that take into account the reality of people’s lives. The reality that people may be accessing information on their mobile phones, not on large screens. The reality that more people track their health “in their heads” or on paper than track using technology. The reality that patients and caregivers know things and want to share their knowledge.

If you’re interested, I found a student’s critical analysis of the Campo Volantin bridge explaining the problem. You won’t find mention of the bridge on the designer’s website. He’d probably rather forget the mistake. I bet you won’t find mention of that migraine kiosk on the CV of anyone who worked on it, either.

Don’t be that guy.

Design for what could be, as Forsythe wrote. Let patients help, as E-patient Dave deBronkart has said. Listen, more than ask. Build participation by your target audience into your work. Win by inclusion.

(For anyone who doesn’t recognize the paraphrase in the title of this post: “Winning isn’t everything, it’s the only thing.” — attributed to American football coaches Red Sanders and Vince Lombardi.)

5 thoughts on “Participatory research: it’s not everything, it’s the only thing”

  1. So well said, Susannah. Where I live, we have some very high-end waterfront condos with exterior covered corridors leading to individual front doors (instead of the more commonly-designed interior hallways). A swell idea – except in winter, when those exterior corridors turned into icy skating rinks. Like your observation about the Campo Volantin bridge, I don’t think you’ll see that condo design goof bragged about in the architect’s portfolio either.

    I’m increasingly distressed by the number of health apps that seem more focused during the design phase on investors and other techies than on the needs of Real Live Patients – until much much later, as Jessie Gruman complained about in her now-famous Open Letter to Mobile Health App Developers: “These developers weren’t interested in my needs, but rather were seeking endorsement of the beta version of their new apps.” – http://www.cfah.org/blog/2013/an-open-letter-to-mobile-health-app-developers-and-their-funders#.UeRVZFNQ2PY

    • Thanks, Carolyn! I love Jessie’s blog posts, but had missed that one. Your comment on it is classic — too true how people often rush in with their own “fixes” (although I am intrigued by that 2-minutes-for-brushing app).

      Let’s celebrate learning from mistakes. I like it when I see a publication claim its errors, as Slate does when it tweets a link to their Corrections page (for example). And there’s a tradition in medicine to dissect and learn from mistakes — the “M&M conference” (morbidity and mortality). If you see other examples, especially related to technology and communications, please let me know.

  2. First, WOW. What a great addition to my own experience of Bilbao a couple of years ago.

    Second, WOW on that undergraduate’s paper. Abstract includes “The practicalities of the Footbridge are also discussed, posing the question of whether the structure has been intelligently designed.” (And near the end he establishes that the answer is pretty much “no,” with specific reason.)

    Third, re Forsythe: YES, YES, YES. So often when I see shortfall in medicine, at least half of it can be traced back to not asking the ultimate stakeholders (patients) what THEY want. How ironic that the industry says it wants patients to be more engaged, when so often they don’t engage us. Physician, heal thyself.

    In my case, when I was diagnosed, you BET the first thing I wanted to know was “How likely is it that I’m dying soon?” You BET. But even today, try to find a website about kidney cancer where that’s easy to find.

    At least as important is that the answer you get from other patients is far more nuanced than the dry, useless numbers on a medical website.

    Hey hospitals – want to know how to improve your patients’ experience of value, without blindly driving costs up? Really get to know some patients, and find out what they really care about. (I know some hospitals are doing this, but most don’t get it yet.)

  3. Susannah – great post!
    I love the many different terms and ways to express this idea: let patients help, participatory research & medicine, co-design…

    I’ve been on a kick lately of talking to other hospital admin types about their challenges. I’ll ask, “have you asked any patients who have been through that area?” And then I wait for the jaw drop and head tilt followed by: “oh wow!..”

    So much of what we do is grounded in assumptions. ‘I think patients would want this’…‘if I were a patient, I’d want that’. Administrators do it, designers do it, doctors do it. Patients do it about providers and their respective organizations.

    I’ve been thinking about the root cause too. I think we falsely reward having an answer, particularly a clever one, too often. Someone speaks up at a meeting and sounds on track: “Let’s do that!” I can imagine the same thing happening in the case of research. Usually those ideas come from a well-meaning place. But they are assumptive; they are at best guesses of what someone else would want. Perhaps taking even 15 minutes to engage an ‘end user’ is seen as a sign of weakness?

    I’m not sure how we can bring about a cultural shift towards participatory _everything_, but I think posts like this help!

    • Thank you! And yes, this happens in research, too. Too often there is a meeting, just as you describe, and someone makes a clever observation that sounds right to everyone around the table, who are all under pressure to get a survey into the field that night, and they go with it. Often it works out fine. Sometimes it doesn’t.

      A good organization (and Pew Research is one) examines why the question didn’t work. A really good organization (and I can say that my section of Pew Research – Pew Internet – does this) doesn’t field a survey question on that topic again before going to school on it, interviewing experts in that area, listening to the public conversation around the topic, etc.

      That’s what happened when we started looking at self-tracking in 2010. We fielded two exploratory questions, listened to feedback on how they fell short, crowdsourced a new set of questions, and the result was a much, much better piece of research. We couldn’t have done it without employing participatory research methods.

      And now for a commercial break: I’m leading a class on participatory research at the Stanford Medicine X conference in September. This blog is one place I’m collecting resources (click on the participatory research tag). I’m learning as I go, so I need everyone’s help to create a useful class. Please comment or tweet or email resources, questions, examples: @SusannahFox on Twitter or sfox at pewinternet dot org for email.
