21 August, 2024
Operators of artificial intelligence (AI) chatbot tools have made it clear that users' requests can be saved and used to further develop the AI systems.
But what if a user does not want that? Some tools permit users to request that personal information used in chatbot requests not be saved or used to develop or train AI systems.
Technology experts say it could be too late for users who have already provided information to these tools to have the data removed. But the Associated Press is offering the following advice for users who want to increase their privacy protections.
Google Gemini
Google saves chatbot interactions, known as conversations, with its Gemini tool. The company says it uses the data to train its machine learning systems. But the company does give users a way to limit the information captured and to remove past conversations.
For users 18 or older, requests are kept for 18 months, although this can be changed in user settings. Google sometimes has human workers examine some user conversations as part of efforts to improve Gemini's systems. In general, Google warns Gemini users not to enter any sensitive information they do not want human workers to see.
Gemini users can change or “opt out” of these default settings. From the main Gemini website page, users should find and click on the “Activity” button toward the bottom left of the page. From there, they can click the “Turn off” button next to the heading “Gemini Apps Activity.” Users then have the chance to block future conversations from being saved. They can also choose to have all previous conversations removed.
Whether a user chooses to turn their activity off or leave it on, Google notes that all conversations with Gemini are saved for 72 hours to “provide the service and process any feedback.”
Meta AI
Meta has an AI chatbot used across its social media services Facebook, WhatsApp and Instagram. The company says its AI models are trained on information shared by users including social media posts and photos. Meta says it does not train its AI systems on private messages sent by users to friends or family.
Not everyone can opt out of this policy. People in the 27-nation European Union and Britain – both of which have strong privacy rules – can. This process can be completed from Meta's main Privacy Center. Click “Other Policies and Articles” from the list near the bottom on the left side, then click the part related to AI. Users can then find a link to a form to opt out.
People in the United States and other countries without national data privacy laws do not have this ability.
Meta's Privacy Center does link to a form where users can request that their data captured by third parties not be used to “develop and improve AI at Meta.” But the company says these requests are examined before being acted upon and might be rejected based on local laws.
Microsoft Copilot
With Microsoft's Copilot chatbot, personal users cannot opt out of having their data used to develop the company's AI models. The best a user can do is to remove conversations with the chatbot by going to the Microsoft account settings and privacy page. Find the drop-down choice called “Copilot interaction history” or “Copilot activity history” to find the button to remove the history.
OpenAI's ChatGPT
Users of OpenAI's ChatGPT service can make privacy changes from the tool's settings page. Find the “data controls” setting and turn off the choice called “Improve the model for everyone.” If a user does not have an account, they can click on the small question mark at the bottom right of the page. Then click “Settings” to see the same choice to opt out of AI training.
OpenAI explains on its data controls “help page” that when users opt out, their conversations will still appear in the history but will not be used for training. The company says these temporary conversations will be kept for 30 days.
Anthropic's Claude AI
Anthropic is an AI research company based in San Francisco. The company says its Claude AI tool is not trained on personal data. However, users can give or withhold permission for specific conversations to be used in training. Users can do this by giving a conversation a “thumbs up” or “thumbs down” or by emailing the company.
I'm Bryan Lynn.
Bryan Lynn wrote this story for VOA Learning English, based on reports from The Associated Press, Google, Meta and other online sources.
___________________________________________
Words in This Story
chatbot – n. a computer program designed to interact with humans
default – n. what exists or usually happens if no changes are made
button – n. an image, or icon, that appears on a computer screen which the user can click to cause software to perform some kind of action
feedback – n. information or statements of opinion about something