Advice

Why you shouldn’t tell ChatGPT your secrets


Search bots have answers - but can you trust them with your questions?

Since OpenAI, Microsoft and Google introduced AI chatbots, millions of people have experimented with a new way to search the internet: Engaging in a conversational back-and-forth with a model that regurgitates learnings from across the web.

Given our tendency to turn to Google or WebMD with questions about our health, it’s inevitable we’ll ask ChatGPT, Bing and Bard, too. But these tools repeat some familiar privacy mistakes, experts say, and create new ones.

“Consumers should view these tools with suspicion at least, since - like so many other popular technologies - they are all influenced by the forces of advertising and marketing,” said Jeffrey Chester, executive director of the digital rights advocacy group Center for Digital Democracy.

Here’s what to know before you tell an AI chatbot your sensitive health information, or any other secrets.

Are AI bots saving my chats?

Yes. ChatGPT, Bing and Bard all save what you type in. Google’s Bard, which is being tested with a limited group of users, has a setting that lets you tell the company to stop saving your queries and associating them with your Google account. Go to the menu bar at the top left and turn off “Bard Activity.”


OpenAI also lets you limit how long chat histories are saved. Go to settings and turn off “Chat History & Training,” and the company says it will save your chats for only 30 days and avoid using them for AI training.

What are these companies using my chats for?

These companies use your questions and responses to train their AI models to provide better answers. But their use of your chats doesn’t always stop there. Microsoft, which launched an AI chatbot version of its Bing search engine in February, and Google both leave room in their privacy policies to use your chat logs for advertising. That means if you type in a question about orthopedic shoes, there’s a chance you’ll see ads for them later.

That may not bother you. But whenever health concerns and digital advertising cross paths, there’s potential for harm. The Washington Post’s reporting has shown that some symptom-checkers, including WebMD and Drugs.com, shared potentially sensitive health concerns such as depression or HIV along with user identifiers with outside ad companies. Data brokers, meanwhile, sell huge lists of people and their health concerns to buyers that could include governments or insurers. And some chronically ill people report disturbing targeted ads following them around the internet.

So, how much health information you share with Google or Microsoft should depend on how much you trust the company to guard your data and avoid predatory advertising.

OpenAI, which makes ChatGPT, says it saves your searches only to train and improve its models. It doesn’t use chatbot interactions to build profiles of users or to advertise, said an OpenAI spokeswoman, who added Thursday that the company has no plans to do so.

Some people may not want their data used for AI training regardless of a company’s stance on advertising, said Rory Mir, associate director of community organizing at the Electronic Frontier Foundation, a privacy rights nonprofit group.

“At some point that data they’re holding onto may change hands to another company you don’t trust that much or end up in the hands of a government you don’t trust that much,” he said.

Do any humans look at my chats?

In some cases, human reviewers step in to audit the chatbot’s responses. That means they’d see your questions, as well. Google, for instance, saves some conversations for review and annotation, storing them for up to four years. Reviewers don’t see your Google account, but the company warns Bard users to avoid sharing any personally identifiable information in the chats. That includes your name and address, but also details that could identify you or other people you mention.

How long are my chats stored?

Companies that collect our data and store it for long periods create privacy and security risks - the companies could be hacked or could share the data with untrustworthy business partners, Mir said.

OpenAI’s privacy policy says the company retains your data for “only as long as we need in order to provide our service to you, or for other legitimate business purposes.” That could be indefinitely, and a spokeswoman declined to specify. Google and Microsoft can store your data until you ask to delete it.

Can I trust the health information the bots provide?

The internet is a grab bag of health information - some helpful, some not so much - and large language models like ChatGPT may do a better job than regular search engines at avoiding the junk, said Tinglong Dai, a professor of operations management and business analytics at Johns Hopkins University who studies AI’s effects on health care.

For example, Dai said ChatGPT would probably do a better job than Google Scholar at helping someone find research related to their specific symptoms or situation. And in his research, Dai is examining rare instances in which chatbots correctly diagnosed an illness that doctors failed to spot.

But that doesn’t mean we should rely on chatbots for accurate health guidance, he noted. These models have been shown to make up information and present it as fact - and their wrong answers can be eerily plausible, Dai said. They also pull from disreputable sources or fail to cite sources at all. (When I asked Bard why I’ve been feeling fatigued, it provided a list of possible answers and cited a website about the temperaments of tiny Shih Tzu dogs. Ouch.) Pair all that with the human tendency to place too much trust in recommendations from a confident-sounding chatbot, and you’ve got trouble.


“The technology is already very impressive, but right now it’s like a baby, or maybe like a teenager,” Dai said. “Right now people are just testing it, but when they start relying on it, that’s when it becomes really dangerous.”

What’s a safe way to search for health information?

Because of spotty access to health care or prohibitive costs, not everyone can pop by the doctor when they’re under the weather. If you don’t want your health concerns sitting on a company’s servers or becoming fodder for advertising, use a privacy-protective search engine or browser, such as DuckDuckGo or Brave.

Before you sign up for any AI chat-based health service - such as a therapy bot - learn the limitations of the technology and check the company’s privacy policy to see if it uses data to “improve its services” or shares data with unnamed “vendors” or “business partners.” Both are often euphemisms for advertising.

Tatum Hunter writes about personal technology and its impact on our wallets, brains and environment. She joined The Washington Post from Built In, where she covered software and the tech workforce.
