Experts have warned users of AI-powered “relationship” chatbots that their data and privacy are at risk, after all 11 apps they tested failed their privacy and security review.
Non-profit Mozilla chose Valentine’s Day to release new research into the chatbots as part of its long-running *Privacy Not Included series of reports.
Since generative AI (GenAI) burst onto the scene, there has been an explosion in romantic or relationship chatbots marketed as providing companionship to lonely hearts.
However, in reality, they either deliberately or negligently ignore privacy and security best practices, Mozilla argued.
“To be perfectly blunt, AI girlfriends are not your friends,” said Mozilla researcher Misha Rykov.
“Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”
All 11 chatbots assessed in these tests – including titles such as Romantic AI, Talkie Soulful AI, and EVA AI Chat Bot & Soulmate – were slapped with a *Privacy Not Included warning label. That singles the category out as one of the worst the non-profit has ever reviewed for privacy.
The chatbots have collectively been downloaded by over 100 million users from Google Play, but most come with a string of privacy and security issues including:
- No public information on how they manage security vulnerabilities (73%)
- No clear information about encryption and whether they use it (64%)
- Permission to use weak passwords, including “11111” (45%)
- Selling user data, sharing it for targeted advertising, or not providing enough information in their privacy policy to confirm they don’t (90%)
- Forbidding deletion of personal data (54%)
“Today we’re in the Wild West of AI relationship chatbots. Their growth is exploding and the amount of personal information they need to pull from you to build romances, friendships, and sexy interactions is enormous. And yet, we have little insight into how these AI relationship models work,” warned Privacy Not Included director, Jen Caltrider.
“One of the scariest things about the AI relationship chatbots is the potential for manipulation of their users. What is to stop bad actors from creating chatbots designed to get to know their soulmates and then using that relationship to manipulate those people to do terrible things, embrace frightening ideologies, or harm themselves or others? This is why we desperately need more transparency and user control in these AI apps.”
SOME NOT-SO-FUN FACTS ABOUT THESE BOTS:
- Thousands of trackers
Trackers are little bits of code that gather information about your device, your use of the app, or even your personal information, and share it with third parties, often for advertising purposes. We found that these apps had an average of 2,663 trackers per minute. To be fair, Romantic AI brought that average way, way up, with 24,354 trackers detected in one minute of use. The runner-up was EVA AI Chat Bot & Soulmate, with 955 trackers detected in the first minute of use.
- Anything you say to your AI lover can and will be used against you
There’s no such thing as “spousal privilege” — where your husband or wife doesn’t have to testify against you in court — with AI partners. Most companies say they can share your information with the government or law enforcement without requiring a court order. Romantic AI chatbots are no exception.
- NSFL content just clicks away
Of course, we expected to find not-safe-for-work content when reviewing romantic AI chatbots! We’re not here to judge — except about privacy practices. What we didn’t expect was so much content that was just plain disturbing — like themes of violence or underage abuse — featured in the chatbots’ character descriptions. CrushOn.AI, Chai, and Talkie Soulful AI come with a content warning from us.
- *Kindness not included!
If your AI companion has nothing nice to say, that won’t stop them from chatting with you. Though this is true of all romantic AI chatbots since we didn’t find any personality guarantees, Replika AI, iGirl: AI Girlfriend, Anima: Friend & Companion, and Anima: My Virtual Boyfriend specifically put warnings on their websites that the chatbots might be offensive, unsafe, or hostile.
So what can you do about it?
We recommend you read the reviews to understand your risk level and choose a chatbot that seems worth it. But, at least for all the romantic AI chatbots we’ve reviewed so far, none get our stamp of approval and all come with a warning: *Privacy Not Included.
Now, if you do decide to dip your toe into the world of AI companionship, here’s what we suggest you do (and don’t do) to stay a little bit safer:
Most importantly: DON’T say anything to your AI friend that you wouldn’t want your cousin or colleagues to read. But also:
DO
- Practice good cyber hygiene by using a strong password and keeping the app updated.
- Delete your personal data or request the company delete it when you’re done with the service.
- Opt out of having the contents of your personal chats used to train the AI models, if possible.
- Limit access to your location, photos, camera, and microphone from your device’s settings.
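Mozilla’s finding that 45% of the apps permit weak passwords like “11111” is the kind of gap a few lines of validation would close. As a hedged illustration only (not any app’s actual code, and a simplification of real password-policy standards), a minimal strength check might look like:

```python
def is_weak_password(password: str) -> bool:
    """Rudimentary check that would reject passwords like '11111'.

    Illustrative sketch only: real policies (e.g. length, breach lists,
    rate limiting) are more involved than these three rules.
    """
    if len(password) < 8:
        return True  # too short
    if len(set(password)) <= 2:
        return True  # e.g. "11111" or "ababab": almost no variety
    # Require at least two character classes (lower, upper, digit).
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
    ]
    return sum(classes) < 2

print(is_weak_password("11111"))             # True: the password Mozilla flagged
print(is_weak_password("correct-Horse-42"))  # False: long, varied, mixed classes
```

Even this toy filter would block the exact password Mozilla flagged, which underlines how little effort the failing apps put into account security.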
Something else you can do? Dare to dream of a higher privacy standard and more ethical AI!
You shouldn’t have to pay for cool new technologies with your safety or your privacy. It’s time to bring some rights and freedoms to the dangerous web-based wild west. With your help, we can raise the bar on privacy and ethical AI worldwide.
Source: Mozilla