There has been a steady drip-drip-drip of articles documenting how health apps are sharing data with third parties:
- Data sharing practices of medicines related apps and the mobile ecosystem: traffic, content, and network analysis, by Grundy et al. (British Medical Journal, Feb. 25, 2019)
- You Give Apps Sensitive Personal Information. Then They Tell Facebook. Wall Street Journal testing reveals how the social-media giant collects a wide range of private data from developers; ‘This is a big mess’, by Sam Schechner and Mark Secada (Wall Street Journal, Feb. 22, 2019)
- Is your pregnancy app sharing your intimate data with your boss? As apps to help moms monitor their health proliferate, employers and insurers pay to keep tabs on the vast and valuable data, by Drew Harwell (Washington Post, April 10, 2019)
- Assessment of the Data Sharing and Privacy Practices of Smartphone Apps for Depression and Smoking Cessation, by Huckvale, Torous, and Larsen (JAMA Network Open, 2019)
This post is my chance to share some relevant data, add my perspective, and ask for your input.
First, the data from a 2018 Hopelab/Well Being Trust study I helped write:
- 71% of female teens and young adults say they have tried mobile apps related to health, compared to 57% of males. Three in ten (30%) females say they currently use a health app, compared to two in ten (20%) males.
- Fully 48% of females ages 18 to 22 and 25% of teen girls say they have used a period tracking app, compared with 2% of males.
- Sixteen percent of females use a meditation app, compared with 5% of males.
From a 2015 Pew Research Center study:
- Eight in ten smartphone owners said they have downloaded apps and 60% of app downloaders surveyed said they have “chosen not to install an app when they discovered how much personal information it required in order to use it, while 43% had uninstalled an app after downloading it for the same reason.”
- People appear to use popularity as a proxy for trustworthiness: 57% of app downloaders say it is important to know how many times an app has been downloaded when they are making their choice about which app to use.
This is a fraction of the data available about people’s use of apps. Bottom line: This is a big and growing market.
From my perspective, here are the two sentences that are the crux of the JAMA Network Open article:
Our data highlight that, without sustained and technical efforts to audit actual data transmissions, relying solely on either self-certification or policy audit may fail to detect important privacy risks. The emergence of a services landscape in which a small number of commercial entities broker data for large numbers of health apps underlines both the dynamic nature of app privacy issues and the need for continuing technical surveillance for novel privacy risks if users and health care professionals are to be offered timely and reliable guidance.
What does this mean in practice? Who is responsible for auditing these apps’ security practices?
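In practice, "auditing actual data transmissions" means routing an app's network traffic through an interception proxy and logging where it goes, which is broadly how the researchers behind these studies tested apps. Here is a minimal sketch of the idea, written as an addon for the mitmproxy tool; the tracker-domain list is illustrative only, not an authoritative blocklist:

```python
# flag_trackers.py: a minimal mitmproxy addon sketch.
# Run with:  mitmproxy -s flag_trackers.py
# then route the phone's traffic through the proxy and exercise the app.
from mitmproxy import http

# Illustrative examples only; a real audit would use a maintained list.
TRACKER_DOMAINS = {"graph.facebook.com", "app-measurement.com"}

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
        # Log the third-party call so an auditor can inspect what was sent.
        print(f"[third-party] {host} {flow.request.path}")
```

Even a simple setup like this can surface transmissions that no privacy policy mentions, which is the researchers' point: self-certification cannot be the only check.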
In England, the NHS features an apps library on its front page, and NHS England takes responsibility for evaluating the apps featured (although it appears they rely heavily on self-certification). The existence of the library is emblematic of how the UK approaches health and health care: leaning in a bit more paternalistically to digital health than we in the U.S. have done. I was the CTO at HHS when the NHS was launching its library and warned them to be cautious (which of course they already knew they needed to be), and we had some spirited discussions about how to approach the creation of their recommendations list. I'm happy to see the library featured so prominently, since I suspect that means it is a popular feature.
HHS, by contrast, limits itself to recommending fact sheets about prevention and wellness. Nothing dynamic or personalized: just the basics of immunization schedules, physical activity guidelines, etc. Very much the “better safe than sorry” approach to digital health. HHS’s main regulatory arm, the Food and Drug Administration (FDA), has chosen to focus its oversight of digital health on medical apps, not wellness apps, but it has also created a cybersecurity oversight structure that provides guidance for developers. HHS’s Office for Civil Rights (OCR) maintains fact sheets about what entities are covered by HIPAA. (OCR also created a developer portal, but I’m not linking to it since a warning keeps popping up in my browser that it’s an insecure site. Oops.) Meanwhile, the Federal Trade Commission (FTC) has created a handy checklist for app developers who want to stay on the right side of all the laws that could apply to them.
The basic regulatory structures for consumer protection are being locked into place but it’s still just scaffolding.
Here are my other take-aways:
- We know people are hungry for guidance and are eagerly using health apps, with varying degrees of success and satisfaction.
- We also know people are not fully aware of the data sharing that health apps are engaging in.
- Public shaming by reporters and researchers is currently the main check on companies’ use of people’s personal data.
- There is a big trust gap that could be filled by government agencies or by companies and organizations willing to do the work of continually testing and auditing health apps’ effectiveness and security practices.
Now: Your turn.
What are you seeing in the health apps marketplace? Which apps do you trust? Which apps have you stopped using or deleted because of data sharing concerns? If you are in a leadership role, either in the government or at an organization that could hold sway, what are you doing to build toward a vibrant ecosystem? Or are you in a protective crouch? (Note: You can comment with a pseudonym.)
Featured image: “Privacy” by Matteo Papagna on Flickr.
Garry G says
This is a solid framing. Great capture of recent work. Busy day ahead but a quick question/note:
Do we need base layer protection for our personal health-wellness data?
Ocean Protocol —
If it's not on your radar: Ocean Protocol is an early-days effort to develop a low-level blockchain protocol for data protection, sharing, and exchange. Hard to wrap one’s head around, but it’s a solid team and the project will need time for the present to catch up to the future. https://blog.oceanprotocol.com/building-a-better-world-with-safe-sharing-of-data-d448ec2472b0
They have a few pilot relationships in healthcare-wellness — https://medium.com/datadriveninvestor/connectedlifes-healthcare-ai-journey-with-ocean-protocol-7f93309f7cd1
I’ve been following OP since the beginning, and am still trying to imagine the implications. My trust for now is in the team and their ability to see future problems. Not seeing this as a near-term solution.
I also keep Diigo tags here:
https://www.diigo.com/user/garrygolden/?query=%23data+health
https://www.diigo.com/user/garrygolden/?query=%23data+wellness
Susannah Fox says
Garry, thanks so much for these resources. I clicked through and read this article by the Ocean Protocol team: Improving diagnosis and treatment for Parkinson’s sufferers with data and AI. It marries up your comment & Gary Wolf’s comment below with its focus on sensors and non-app data solutions for secure sharing.
And the vision is indeed beautiful:
“Using a wearable with motion sensors that Parkinson’s Disease patients can wear on their wrists, ConnectedLife measures their movement and provide more complete, objective data to doctors about their condition and the effectiveness of the levodopa dosage. The device captures motion via sensors 50 times a second. The data is run through algorithms designed to recognise bradykinesia or dyskinesia. With the ConnectedLife device, patients can be monitored continuously, over long periods, wherever they are…
“Biomedical algorithms could even be predictive, [Dr. Franz Pfister, Chief Medical Officer of ConnectedLife] says, by recognising patterns that could greatly add value to support doctors’ assessments. He likens it to the revolution in insulin treatment in the past few decades, where the development of self-monitoring of blood glucose has allowed patients to adjust insulin doses themselves. What is needed is an objective parameter for Parkinson’s similar to that of blood glucose levels for diabetes sufferers. Mining all this data makes that goal achievable, he says.”
Gary Wolf says
OK, let’s go. Thanks Susannah. I track these stories too, and, while the details matter, they all consistently point to the development of a health surveillance system that is already being operated abusively, with a promise of things getting worse. Garry Golden’s link list on Diigo is well worth looking at in the context of this post. Thank you, Garry, for adding it. Typical links on Garry’s list:
Walmart Patent Wants To Monitor Your Health & Stress Levels While You Shop: https://www.cbinsights.com/research/walmart-patent-biometric-shopping-cart/
Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates:
https://www.propublica.org/article/health-insurers-are-vacuuming-up-details-about-you-and-it-could-raise-your-rates
These links are small examples of the ideas that are driving the development of the data sharing systems in which ALL apps operate. For instance, the existence of the IMEI, a unique identifier implemented in hardware on mobile phones, allows tracking at a layer beneath any protections offered at the account level. See the post last week by Joel Reardon: “Why do you even need the IMEI?” https://blog.appcensus.mobi/2019/04/26/why-do-you-even-need-the-imei/
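To make that layering concrete: because the IMEI survives app reinstalls, logouts, and even factory resets, any two parties that receive it (even in hashed form) can join their records about the same person. Here is a conceptual sketch in Python, with a made-up example IMEI and invented record fields:

```python
import hashlib

def pseudo_id(hardware_id: str) -> str:
    # Hashing a stable hardware identifier does NOT anonymize it:
    # the same input always yields the same digest, so every
    # recipient of the digest can still link records together.
    return hashlib.sha256(hardware_id.encode()).hexdigest()

IMEI = "356938035643809"  # example value, not a real device

# Two unrelated apps each report the "hashed" ID to the same broker...
period_app_event = {"device": pseudo_id(IMEI), "event": "logged_symptom"}
shopping_app_event = {"device": pseudo_id(IMEI), "event": "viewed_item"}

# ...and the broker can join them, beneath any account-level protection.
assert period_app_event["device"] == shopping_app_event["device"]
```

No account, login, or cookie is involved; the linkage rides on the hardware itself, which is why Reardon's question ("why do you even need the IMEI?") matters.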
Although I agree that health agencies have a responsibility to offer more considered guidance about apps, taking into account the reality of abuse of privacy, I’d like to offer a seemingly extreme but, in my view, realistic assessment of the app ecosystem: it’s the wrong place to develop health tracking tools. (What?? Heresy!!) Apps are for commerce. The app marketplace and the technologies that support it relentlessly drive toward exposing the status of individual users for the purpose of integrating personal data into large-scale sales and service functions. These large-scale functions are important, but they aren’t the only important functions, and when personal data is treated merely as raw material (even if it is raw material for “customization”), all the other interests we have in our own observations are neglected, and the harms we suffer from being surveilled are presented as inevitable.
Why should all health tracking begin with apps? With the commoditization of sensors, which allows for inexpensive local tracking, and with already well-developed technical resources for personal control of personal data, such as virtual private networks, our health data can live “closer” to us; the focus of development can then be on systems of targeted permissioning that allow sharing for specific purposes. I realize this vision of a different kind of health data ecosystem poisons the dreams of a thousand unicorns. And... that’s a good thing!
Susannah Fox says
I was today years old when I found out that my phone has an immutable identifier — wow. For those who don’t know what an IMEI is, I highly recommend clicking through on the article Gary shared above.
And YES I agree with your question: Why should all health tracking begin with apps? You are familiar with the findings of the first national survey measuring American adults’ tracking activities, which I conducted, but for those just tuning in: Seven in ten U.S. adults track a health indicator for themselves or for a loved one. People living with chronic conditions are significantly more likely to track a health indicator or symptom. And when we asked respondents to think about the health indicator they pay the most attention to, either for themselves or someone else, and to tell us how they track it:
– 49% of trackers say they keep track of progress “in their heads.”
– 34% say they track the data on paper, like in a notebook or journal.
– 21% say they use some form of technology to track their health data.
(Total exceeds 100% due to multiple responses.)
Now, of course, it’s six years later and I bet we’d get a much higher reading of tech use. But let’s not forget the paper & pencil people, nor those who want to count steps without having anyone count along with them in the background.
To bring the conversation back to apps, though: just as millions of people continue to use Facebook despite its continual failure to put data protections in place (to name one issue of many), millions of people continue to download and use apps despite these headlines. What can be done to guide people to alternatives?
Lorraine Johnson says
This is a great discussion! Yes, consumers don’t understand what apps are doing: collecting, selling, and reconnecting data. I think that data aggregators like Apple, Facebook, Amazon, and Google should NOT be in the healthcare industry b/c their business model depends, to one extent or another, on complete user information (some would argue re Apple, but I just know that things change). All of these companies are driven by shareholder value, not patient values.
We run the MyLymeData patient registry and operate as a trusted third-party intermediary for data use, reuse, privacy, and researcher vetting. As a nonprofit advocacy organization, our interests align with the patients we represent, who a) opt in, b) have selected data stewardship with an org they trust, and c) can leave at any time--that is the ultimate in accountability.
We PAY our vendor for the privilege of having the data we collect remain ours to steward on behalf of the community. If we did not pay, it would be sold to pharma, industry etc b/c that is the revenue sustainability model of the platform vendors who house patient registries. They are not bad, but their incentives do not align with patient concerns, like privacy, data use, reuse, re-id, etc. I do not get in a snit about this (though I do think that gov resources might be better allocated to help patient registries become sustainable) because I understand it is a business model.
Perhaps patients should pay for service and privacy? Align commercial incentives with patient values by making patients pay for that right at least with respect to Apps. Otherwise, why develop them, what sustains them? It also nudges toward conscious choice (I want privacy, but that costs $$ or I don’t care so much about privacy and it is cheaper). But we have no transparency. And also I know that I cannot ask patients to pay for privacy. The motivation to participate would tilt away from enrollment.
Just a last point about sustainability and patient registries–no one wants patient groups taking pharma $$ or industry $$, and that means patient orgs have to fundraise for $$ to keep data private and make sure it is used appropriately. It’s a really good cause, but where is the $$ support for that mission? We do it and I think it is very, very hard.
Lorraine
Susannah Fox says
Lorraine, thanks so much for this comment. We are all circling this intersection of business models and health data rights. Your perspective is very helpful.
I was listening yesterday to Kara Swisher’s interview of Julia Angwin, a pioneer in data-powered journalism. Here’s an excerpt from the transcript, when she is talking about her book on MySpace:
“Julia: One of the many things I was shocked about while writing that book was sort of the dawning of the realization that there was a market for personal data. That’s really what the social networks were doing is monetizing your data. And so, I thought, I want to start an investigative project on that topic…
…What I found was that that type of reporting, it could lead to more concrete results, because the fact that you diagnosed the problem so clearly and you released your data set meant that people could really clearly identify the problem, and there was a way to solve it. I mean, obviously, we haven’t solved any of those problems, but …
Kara: When you were writing those things, there was no people being upset about it that much. There was some. They were doing this wholesale taking of data, not stealing, you gave it up. There wasn’t that much anger over it. It was celebrated. It has been celebrated for a long time.
Julia: I was too early for the outrage. People were like, “Why are you writing about this? It’s just creepy ads.” I think it had to get to the point where … The election was where people realized, “Oh, this is affecting our common discourse,” in the elections. That’s why I feel like people woke up in 2016. When I was writing in 2010, Jeff Jarvis blogged, like, “this is so dumb, you’re taking down the innovation economy, like, what a stupid series of articles.” That was kind of the common tech view of it.”
– end of transcript excerpt –
Are we still “too early for the outrage”? My worry is that the drip-drip-drip will make some people who could benefit from a great app decide NOT to try it because they have read or heard about these stories and studies. Who or what can help guide people to trustworthy apps? There is an absence of leadership right now. And that is a huge opportunity in the marketplace.
Susannah Fox says
One reason I maintain this blog is to provide an open forum for people to share resources, debate ideas, and meet each other where we can type more than will fit into a tweet. Thanks to all who are making the jump from Twitter or another platform!
Meanwhile, on Twitter: Theresa Defino shared a link to HHS OCR’s HIPAA FAQ (typing that many acronyms in a row gives me government-service flashbacks).
Lucia Savage shared a link to a definitive document that I also highly recommend: “Examining Oversight of the Privacy & Security of Health Data Collected by Entities Not Regulated by HIPAA” (PDF)
Gary Wolf says
Lorraine wrote: “Just a last point about sustainability and patient registries–no one wants patient groups taking pharma $$ or industry $$, and that means patient orgs have to fundraise for $$ to keep data private and make sure it is used appropriately. It’s a really good cause, but where is the $$ support for that mission? We do it and I think it is very, very hard.”
I can’t let this pass without
The limited gov’t and foundation support is a key barrier; when I present on this topic I find a lot of easy agreement and then a return to trendy data aggregation and science communication approaches, with almost no commitment to community development and non-commercial tools. (Would love to be proven wrong. Pointers gladly accepted. It’s pretty rough out there for the “apps won’t solve everything” crowd.)
Gary Wolf says
(That was an emoji that didn’t post correctly.)
Susannah Fox says
Thanks, Gary! Don’t know why your no-link comments are not posting automatically. What was the emoji? (just out of curiosity)
Gary Wolf says
/applause! (I like that Lorraine talked money)
Mighty Casey says
First, answering the original question, I do use a few health apps – tracking physical activity, mostly – and find that the one that’s native to my phone, and the semi-smart (they’re all pretty dumb, really) watch made by my phone’s manufacturer, are my most used.
I’ve always been tech-toy-curious, since my days as an engineer (broadcasting), and am always interested in trying new stuff. That said, as I’ve become part of the e-patient rabble, rousing for change and landscape-flattening to benefit citizens vs. Mega Corps (aka the US healthcare industrial complex), I’ve noticed what Lorraine points out as the need “to fundraise for $$ to keep data private and make sure it is used appropriately” – I call it the warm handshakes/cold bagels compensation model offered to us ground-level experts in healthcare system issues, while over our heads $4T/year gets exchanged by Mega Corps and all who sail in them, as we attempt to pay our bills with handshakes and bagels, with varying levels of (spoiler: no) success.
Given the amount of treasure minted by Big Tech off our every move – hello, back to the tracking apps! including the ones we don’t know are tracking us! – I started pushing the idea of people/citizens getting cut in on the treasure: what CA Gov. Gavin Newsom tagged as the “data dividend.” I’m not really advocating for individual direct payments to individual humans – that would amount to pennies, really. What I’m looking for is some kind of public trust fund established to benefit the citizens who have created this bonanza – Big Tech’s never-ending shareholder ROI and ever-growing market cap(s) – since Big Tech apparently pays little in the way of corporate tax to … well, any nation, much less the corporate-tax-cut-of-all-the-world US of A. (2018 taxes: https://www.cbsnews.com/news/2018-taxes-some-of-americas-biggest-companies-paid-little-to-no-federal-income-tax-last-year/)
We really are at a crisis point – health of the nation, health of humanity, individual health, all of the above – with what Big Tech has wrought in the “move fast, break things” gold rush that’s been the last 25-30 years of the digitization of (hu)man. What could have been a Gutenberg moment has turned into solidification of a global oligarchy. Now what?
I’m reading Mike Monteiro’s “Ruined By Design: How Designers Destroyed the World, and What We Can Do to Fix It” (https://www.amazon.com/Ruined-Design-Designers-Destroyed-World-ebook/dp/B07PS16XY9). After reading most of Roger McNamee’s “Zucked” (still working through it) and starting Shoshana Zuboff’s “Surveillance Capitalism,” I may have a more dystopian view of what tracking is doing for (really, to) us.
Like so much of America’s promise … it’s still a promise undelivered, ’cause ever’body got distracted by ALL THE MONEY. STREETS ARE PAVED WITH GOLD! We’re still swallowing that fairy tale. Works out for a few, but for the rest of us? YMMV, and most of us are still in ox carts that have been tricked out with a Ford chassis.
Camille Nebeker says
The Connected and Open Research Ethics (CORE) platform is a learning community to support the ethical design of digital health research. You can access IRB-approved research protocols and consent language in the Resource Library, pose questions on the Q&A Forum, and connect with our global Network of experts in the digital health research sector. We hope you find this free resource useful; please contribute to the conversation: https://thecore-platform.ucsd.edu/
The CORE platform was developed with support from the Robert Wood Johnson Foundation.
Susannah Fox says
Thanks, Camille, and double-thanks for making the jump from Twitter.
As a researcher, I see CORE as a goldmine of tools and resources. But as an entrepreneur and advisor to both startups and big legacy health care companies, I’m not sure I see the value. Can you help me understand how someone who is NOT a researcher might benefit from aligning their work with the practices you & your colleagues endorse? Please know I’m asking this as a friendly fan, hoping you’ll take the opportunity to brag.
For example, I can think of a recent situation with a startup health company that made the strategic decision to NOT work with an IRB: uBiome. Now, this is extreme, but the FBI searched their offices for evidence related to alleged billing improprieties. Apples and oranges? Or maybe these threads are related? Back in 2013, Janet Stemwedel blogged about uBiome, writing (presciently): “Ethics takes more than simply meeting legal requirements.”
Thanks in advance for your help!
Camille Nebeker says
Hi Susannah,
We began the CORE initiative in 2015 to bridge a gap within the traditional academic digital health research space. Researchers needed help designing studies and selecting digital tools and IRBs needed help with evaluating these studies. One way of doing that was to bring together a diverse community of experts (the CORE Network) who could foster a learning “ethics” system and help with resource development and informal peer review. The CORE platform features were developed with input from the researchers and IRBs.
Since that time, citizen science (in many forms) and health tech startups have increased. When involving self (e.g., DIY, Lead Innovators, N=1) or others in health research, the CORE can be a resource, especially when an IRB is not required or utilized. Those operating in under- or unregulated space may have questions about how to do a self-study safely or want guidance on a protocol they are developing.
So, the value to those outside of the traditional academic research setting may not be readily apparent. However, our infrastructure is in place and can be expanded to support the citizen science community and health tech startups.
Specific to startups, how can we incentivize a company to seek out peer review, not because of a regulatory or legal requirement but, because they don’t want to be the next Theranos? Some of these companies are exploiting the lack of regulatory infrastructure and some are clearly unaware or simply don’t know what they don’t know. What will motivate them to do the right thing?
We are presently working with a group of patients (Project Apollo) interested in designing self-experiments as a cohort, but they don’t have training in research design and ethical practices. Building from what we’ve learned during the Blood Tester participant-led research (PLR) study, we are co-designing a training program to increase research literacy by adapting our Building Research Integrity and Capacity (BRIC) course. We’ve discussed creating space within the CORE platform for groups like the PLR so there is a safe space to convene and learn together. We are also creating an IRB that is registered with the Office for Human Research Protections and familiar with citizen science forms of research. Let me know if you’d like to be involved.
Here’s a link to papers we’ve published on the CORE initiative: https://thecore.ucsd.edu/publications/ and more on the BRIC program is here: https://bric.ucsd.edu/
Susannah Fox says
Thank you! Not being the next Theranos is one heck of a motivating principle — and you lay out a compelling argument for everyone to dig into these issues as we go forward into this exciting (to me, scary to others) time.
Camille Nebeker says
It is exciting and I’d like to, with input from the greater community, begin to define what makes for an ethical unicorn. Let me know if you’d like to explore this with me.
Sad Nerd says
I’ve spent part of the last year or so getting paid to read applications’ Terms of Service and Privacy Policies. My general conclusion from that is… users are fucked.
I’m not a lawyer, but these agreements all seem very one-sided: written to protect the companies’ interests. Generally speaking, they do the bare minimum to inform users, couched in legalese, and written at a reading level far beyond the average American.
Our legal framework for this is broken. Full stop. Worse, it feels like a cruel joke, at our expense.
The bare minimum acceptable threshold here would be to:
1. Force companies to adopt standardized Terms that can be presented to users graphically.
2. Ban forced arbitration and favorable jurisdiction clauses.
3. Force companies to provide users with a full accounting of all the places the data will flow to and all the specific uses it will be subject to.
4. Force companies to take responsibility for follow-on data sharing, similar to how HIPAA requires Business Associate Agreements.
5. Force companies to get affirmative consent initially and for any changes to the Terms.
And if we were really serious about empowering and protecting users, we’d have a central data registry they could use to monitor and cancel their accounts, at any time, from one interface.
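Taking item 3 as an example, the "full accounting" could be a standardized, machine-readable disclosure that app stores render graphically. A hypothetical sketch (every name and field here is invented; no such standard exists today):

```python
from dataclasses import dataclass, field

@dataclass
class DataFlow:
    recipient: str         # who receives the data
    purpose: str           # the specific permitted use
    categories: list[str]  # which data elements flow

@dataclass
class Disclosure:
    app: str
    flows: list[DataFlow] = field(default_factory=list)

disclosure = Disclosure(
    app="ExampleHealthApp",
    flows=[
        DataFlow("analytics-broker.example", "crash reporting",
                 ["device model", "OS version"]),
        DataFlow("ad-network.example", "ad targeting",
                 ["advertising ID", "in-app events"]),
    ],
)
```

A store or regulator could then diff each filed disclosure against observed traffic and flag any recipient that appears on the wire but not in the filing.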
But what do I know? I’m just a software nerd.
David Harlow says
Assuming that the basic structure of the online data economy is not about to undergo any tectonic shifts, it seems to me that one key approach to be promoted is a mix of proper labeling and informed consent.
Labeling à la cigarette packages (succinct notices, changing over time so they don’t become invisible), but including images (as in other countries, like Australia).
Informed consent through better privacy policy explanations, as ONC has kinda sorta tried to promote through its Model Privacy Notice program (with a model notice released in 2011 and updated in 2016, and a challenge that yielded three winners in 2017; see https://www.healthit.gov/topic/privacy-security-and-hipaa/model-privacy-notice-mpn). I haven’t seen any of these in the wild. A readable layered notice (simple, icon-based on top; detailed underneath) to replace the usual 40 pages of unread mumbo-jumbo could be a boon to end-user understanding and could lead to more informed decision-making about which “free” apps to use.
Garry Golden says
Following up on my post: this is a great interview w/ the BetterPath CEO (the company was just acquired by Humanity.co). A great framing of data in the healthcare sector w/ societal and business context.
https://soundcloud.com/healthunchained/ep-23-comprehensive-health-data-matt-sinderbrand-ceo-betterpath
Susannah Fox says
Thanks to John Irvine, this post has a new venue: The Deductible. 13 comments and counting — definitely worth checking out that other thread if you want to keep digging into this topic.