A new interview with Stephen Wolfram on “why he thinks your life should be measured, analyzed, and improved” popped up on the same day that American Medical News ran a story advising clinicians to look for “red flags” like unfilled prescriptions and delayed screenings, since these may mean a patient has lost their job or is having transportation trouble.
What if the big data vision of ubiquitous surveillance comes true and people are monitored — and helped — to a much greater degree than they are now?
No answers, just questions, but I thought I’d share. What do you think?
An aside: I love how MIT Technology Review lists “upcoming articles” at the bottom of their Big Data Gets Personal feature. I may steal that for upcoming research reports and blog posts (no dates attached, just ideas I know I’ll write about).
Ian Eslick says
Ubiquitous surveillance of our online lives has already come true, and large datasets of information about our preferences, desires, and status are bought and sold daily (including our medical data). The unfortunate truth is that they are mostly used to enrich corporations through influence over our spending habits, not to improve the quality of our health or lives. As with any technological capability, it is the lack of transparency that breeds corruption and abuse. If we have the means to ensure that the flow of all this information about us is used for the purposes we desire, then Wolfram’s call to action is the start of something potentially exciting. For example, it would be interesting to be able to say “Please have my RN call me when I show signs of depression and haven’t refilled my prescription recently” or, better, “Please contact my BFF when I show signs of depression and my credit card bills indicate I’m eating junk food again!” The data necessary to automate these kinds of behaviors is already collected by advertisers, and more accurate instruments relevant to health are being explored by emerging mHealth providers like Ginger.IO.
Susannah Fox says
Yes! Where do I sign up? With caveats and protections, of course.
My first reaction to reading about Stephen Wolfram, I confess, is usually, “Oh, please. You’re so wealthy and powerful that I can’t even see how you are part of the reality I live in.” That’s what I meant when I wrote about how “men are out on the patio enjoying a cigar and contemplating their personal time-use philosophy while women clear the table, sweep the floor, get the kids to bed, and frantically send emails about the next day’s meetings.”
But I’m realizing that this was a failure of my imagination. Sure, we can’t all have Wolfram’s system, but maybe we can build something that is as useful, on a different scale, for everyone to use and benefit from. That’s the lesson I got today, reading those articles in tandem.
Ian Eslick says
I mentioned Ginger.IO because they measure interesting properties of our lives using a smartphone app, without any explicit ongoing interaction. Most of Wolfram’s data is like that: measures that emerge as we interact with the growing number of internet-connected things. They are researching how to turn these activity signals into validated estimates of quantities like emotional state, metabolic activity, and pain.
More generally, we are discovering that traces of behavior and activity can be surrogate measures for the variables we elicit from patients in doctor’s visits or clinical research. Making this available to inform your interaction with a professional, or to generate the kind of automated prompts I describe above, is accessible to almost everyone today.
There is, of course, a ridiculous amount of work in front of us to characterize all these surrogate measures, evolve an ecosystem for managing them, and investigate the impact of using this data to influence shared decisions, treatment, and behavior.
Brett Alder says
“There is, of course, a ridiculous amount of work in front of us…”
Amen! Which makes me wonder: How to bridge the chasm? You are right that big data is being generated and put to use for corporations, but how to bring the big data value to individuals?
I know there have been many movements to go from big data directly to relevant insights (remember the semantic web?), but I’m not aware of any big successes. Most of the ones I’m familiar with go from big data, to human curation, and then to relevant insights. Google relied on human curation (links between pages) to build its early search algorithms, and there are many other examples (Pinterest, Quora, StackOverflow, Yelp, Amazon) that rely on human curation.
So I guess what I struggle to understand about the self-tracking/quantified-self movement is, “How do we take all of this generated data and convert it into useful insights? Is human curation possible or necessary in this case?” I’d love to hear your ideas…
Ian Eslick says
I think what people are calling Big Data is very different from the semantic web; instead of deep representation, we apply statistics at scale. We look for patterns in the data that are predictive of variables that people can act on.
At Compass Labs, for example, we help advertisers identify and engage with their audience on Facebook and other social media platforms. We do this by analyzing public text, identifying groups of people who talk about specific topics, clustering them with similar people to identify a large population (100k–1M users), and mapping those clusters back to the parameters you can use to target people on the advertising platforms. We curate the algorithms by hand, and we’re using human-produced content, but it’s leveraged work; we don’t need to curate the raw content every day, which is good since we have billions of messages to work with.
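A toy sketch of that pipeline, with made-up users and keyword lists standing in for learned topic models (the real system is obviously far more sophisticated; this just shows the text → topic cluster → targeting-segment step):

```python
from collections import Counter

# Hypothetical topic keyword sets -- stand-ins for learned topic models.
TOPICS = {
    "running": {"marathon", "sneakers", "5k", "pace"},
    "cooking": {"recipe", "bake", "saute", "skillet"},
}

def dominant_topic(messages):
    """Score each topic by keyword hits across a user's public messages."""
    words = Counter(w.lower().strip(".,!") for m in messages for w in m.split())
    scores = {t: sum(words[k] for k in kws) for t, kws in TOPICS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def cluster_users(user_messages):
    """Group users into topic clusters that could map to ad-targeting parameters."""
    clusters = {}
    for user, messages in user_messages.items():
        topic = dominant_topic(messages)
        if topic:
            clusters.setdefault(topic, []).append(user)
    return clusters

users = {
    "alice": ["Training for a marathon, new sneakers arrived!"],
    "bob":   ["Found a great recipe, going to bake bread."],
    "carol": ["My 5k pace is improving."],
}
print(cluster_users(users))  # {'running': ['alice', 'carol'], 'cooking': ['bob']}
```

The leverage Ian describes lives in the curated pieces (here, the `TOPICS` dictionary): humans tune the topic definitions once, and the clustering then runs over billions of messages unattended.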
Perhaps a hypothetical would help. A recent UCSF study found patterns in heart rate monitoring data that were predictive of which patients had cardiac events. What if this signal turns out to be useful predictively? Suppose we had biometrics from 100k older patients and could identify a pattern of heart rate variability that predicted a likely cardiac event. What if we found that those people, and only those people, actually received any benefit from statins or some other therapy to reduce cardiac events? We may not know why, but we could still act on it to dramatically reduce how much money we spend on statins (and human suffering from side effects or drug-drug interactions). For example, perhaps we would then treat 3 patients per heart attack prevented instead of the current 100 treated patients per prevented heart attack. Of course, to make this standard practice, there would need to be a controlled clinical trial on the intervention to justify removing people from the statin standard of care.
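The arithmetic behind those figures is the number needed to treat: NNT = 1 / absolute risk reduction. A quick sketch with illustrative numbers (all of them hypothetical, chosen only to match the 100-vs-3 comparison above):

```python
def nnt(absolute_risk_reduction):
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    return 1 / absolute_risk_reduction

# Hypothetical: treating everyone yields an ARR of ~1%, so roughly
# 100 patients are treated per heart attack prevented.
print(round(nnt(0.01)))  # 100

# If a biometric signature isolated the subgroup that actually benefits,
# the ARR within that subgroup might be ~33%: treat ~3 per event prevented.
print(round(nnt(0.33)))  # 3
```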
You can also check out my work at Personal Experiments for another view of how QS with modest human curation can generate “big data” about health to better guide behavior. It’s early days for that site and model, but very promising. I have a post coming out at lybba.org/blog in two weeks introducing the site plus some research studies.
Ian Eslick says
PS – For example, our e-mail or social media activity is a treasure trove of information that has implications for tracking and health: http://ianeslick.com/2012/02/12/what-can-we-learn-from-our-e-mail-logs/
Brett Alder says
Thanks Ian! Very interesting. I can’t reply to your response above (too many nested replies…), but I look forward to learning more about Personal Experiments. We definitely need to solve this problem of scale.
Susannah Fox says
And to continue the theme of “let’s learn together” (with humility and open minds):
Andrew McAfee’s essay is a must-read for doubters: Pundits: Stop Sounding Ignorant About Data
Christina says
Personally, working in the healthcare field (albeit with relatively disenfranchised youth), I LOVE the idea of some larger monitoring system, because in our current mode we expend substantial manpower trying to track down “missing persons” — patients who do not come in for appointments and then are not reachable via telephone (and sometimes mail). Under our current state law, we cannot link a parental email to a child’s account, as there is concern that it cannot be unlinked later, when the child no longer wants a parent to access their healthcare information.
Sure, sign me up too. It’s hard not to imagine this spurring considerable backlash and/or constitutional amendments regarding privacy protection.
Susannah Fox says
Chris,
I can’t remember if I’ve pointed this out before — if so, forgive me, it’s one of my favorite examples of how data exhaust (or in Ian’s lovely phrasing, activity signals) can be used to help kids living with chronic conditions.
Project HealthDesign funded a pilot program at Stanford called Living Profiles which, with teens’ permission, tracked the words they used in text messages with friends & family and their song choices. If the data showed a downward, depressive trend, the system sent a simple reminder for the teen to take their meds. Research had shown that when teens get sad, they don’t take their meds, they get sick, they get heat from their parents, and it all cycles down. The teens in the pilot study loved being treated as responsible people — and didn’t mind the surveillance because it wasn’t their parents watching them, but an automated system that kept them on track.
In small world news, one of the investigators was Peter Chira.