In the world of mental health apps, privacy scandals have become almost routine. Every few months, reporting or research uncovers unscrupulous-seeming data sharing practices at apps like the Crisis Text Line, Talkspace, BetterHelp, and others: people gave information to those apps in hopes of feeling better, only to find out their data was being used in ways that helped companies make money (and didn’t help them).
It seems to me like a twisted game of whack-a-mole. When under scrutiny, the apps often change or adjust their policies — and then new apps or problems pop up. It isn’t just me: Mozilla researchers said this week that mental health apps have some of the worst privacy protections of any app category.
Watching the cycle over the past few years got me interested in how, exactly, that keeps happening. The terms of service and privacy policies on the apps are supposed to govern what companies are allowed to do with user data. But most people barely read them before accepting, and even those who do often find them so complex that their implications are hard to grasp at a glance.
“That makes it completely unknown to the consumer about what it means to even say yes,” says David Grande, an associate professor of medicine at the University of Pennsylvania School of Medicine who studies digital health privacy.
So what does it mean to say yes? I took a look at the fine print on a few apps to get an idea of what’s happening under the hood. “Mental health app” is a broad category, and it can cover anything from peer-to-peer counseling hotlines to AI chatbots to one-on-one connections with actual therapists. The policies, protections, and regulations vary across those categories. But I found two features common to many of the privacy policies that made me wonder what the point even was of having a policy in the first place.
We can change this policy at any time
Jessica Roberts, director of the Health Law and Policy Institute at the University of Houston, and Jim Hawkins, law professor at the University of Houston, pointed out the problems with this type of language in a 2020 op-ed in the journal Science. Someone might sign up with the expectation that a mental health app will protect their data in a certain way and then have the policy rearranged to leave their data open to a broader use than they’re comfortable with. Unless they go back to check the policy, they wouldn’t know.
Having this type of flexibility in privacy policies is by design. The type of data these apps collect is valuable, and companies likely want to be able to take advantage of any opportunities that might come up for new ways to use that data in the future. “There’s a lot of benefit in keeping these things very open-ended from the company’s perspective,” Grande says. “It’s hard to predict a year or two years, five years in the future, about what other novel uses you might think of for this data.”
If we sell the company, we also sell your data
Feeling comfortable with all the ways a company is using your data at the moment you sign up to use a service also doesn’t guarantee someone else won’t be in charge of that company in the future. All the privacy policies I looked at included specific language saying that, if the app is acquired, sold, merged with another group, or another business-y thing, the data goes with it.
The policy, then, only applies right now. It might not apply in the future, after you’ve already been using the service and giving it information about your mental health. “So, you could argue they’re completely useless,” says John Torous, a digital health researcher in the department of psychiatry at Beth Israel Deaconess Medical Center.
And data could be specifically why one company buys another in the first place. The information people give to mental health apps is highly personal and therefore highly valuable — arguably more so than other types of health data. Advertisers might want to target people with specific mental health needs for other types of products or treatments. Chat transcripts from a therapy session can be mined for information about how people feel and how they respond to different situations, which could be useful for groups building artificial intelligence programs.
“I think that’s why we’ve seen more and more cases in the behavioral health space — that’s where the data is most valuable and most easy to harvest,” Torous says.
I asked Happify, Cerebral, BetterHelp, and 7 Cups about these specific bits of language in their policies. Only Happify and Cerebral responded. Spokespeople from both described the language as “standard” in the industry. “In either circumstance, the individual user will have to review the changes and opt-in,” Happify spokesperson Erin Bocherer said in an email to The Verge.
The Cerebral policy around the sale of data is beneficial because it lets customers keep treatment going if there’s a change in ownership, said a statement emailed to The Verge by spokesperson Anne Elorriaga. The language allowing the company to change the privacy terms at any time “enables us to keep our clients informed of how we process their personal information,” the statement said.
Now, those are just two small sections of privacy policies in mental health apps. They jumped out at me as specific bits of language that give broad leeway for companies to make sweeping decisions about user data — but the rest of the policies often do the same thing. Many of these digital health tools aren’t staffed by medical professionals talking directly with patients, so they aren’t subject to HIPAA guidelines around the protection and disclosure of health information. Even if they do decide to follow HIPAA guidelines, they still have broad freedoms with user data: the rule allows groups to share personal health information as long as it’s anonymized and stripped of identifying information.
And these broad policies aren’t just a factor in mental health apps. They’re common across other types of health apps (and apps in general), as well, and digital health companies often have tremendous power over the information that people give them. But mental health data gets additional scrutiny because most people feel differently about this data than they do about other types of health information. One survey of US adults published in JAMA Network Open in January, for example, found that most people were less likely to want to share digital information about depression than about cancer. The data can be incredibly sensitive — it includes details about people’s personal experiences and vulnerable conversations they may want to be held in confidence.
Bringing healthcare (or any personal activities) online usually means that some amount of data is sucked up by the internet, Torous says. That’s the usual tradeoff, and expectations of total privacy in online spaces are probably unrealistic. But, he says, it should be possible to limit how much of that happens. “Nothing online is 100 percent private,” he says. “But we know we can make things much more private than they are right now.”
Still, making changes that would truly improve data protections for people’s mental health information is hard. Demand for mental health apps is high: their use skyrocketed during the COVID-19 pandemic, when more people were looking for treatment but there still wasn’t enough accessible mental health care. The data is valuable, and there aren’t real external pressures for the companies to change.
So the policies, which leave openings for people to lose control of their data, keep having the same structures. And until the next big media report draws attention to a specific case of a specific app, users might not know the ways that they’re vulnerable. Unchecked, Torous says, that cycle could erode trust in digital mental health overall. “Healthcare and mental health care is based on trust,” he says. “I think if we continue down this road, we do eventually begin to lose trust of patients and clinicians.”