
Big Tech is already warning us about AI privacy problems

Source: https://www.theverge.com/2023/5/19/23730037/openai-ban-apple-banks-privacy

That is, if you’re paying attention.

By Elizabeth Lopatto, a reporter who writes about tech, money, and human behavior. She joined The Verge in 2014 as science editor. Previously, she was a reporter at Bloomberg.

May 19, 2023, 6:20 PM UTC

Illustration by Alex Castro / The Verge

So Apple has restricted the use of OpenAI’s ChatGPT and Microsoft-owned GitHub’s Copilot, The Wall Street Journal reports. ChatGPT has been on the ban list for months, Bloomberg’s Mark Gurman adds.

It’s not just Apple, but also Samsung and Verizon in the tech world, and a who’s who of banks (Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, and JPMorgan). The concern is confidential data escaping: ChatGPT’s privacy policy explicitly says your prompts can be used to train its models unless you opt out. And the fear of leaks isn’t unfounded: in March, a bug in ChatGPT exposed data from other users’ chats.

I am inclined to think of these bans as a very loud warning shot.

One of the obvious uses for this technology is customer service, an area where companies are always trying to minimize costs. But for customer service to work, customers have to give up their details, which are sometimes private and sometimes sensitive. How do companies plan to secure their customer service bots?
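One partial answer, purely as a sketch: scrub anything that looks like an identifier out of a customer’s message before it ever reaches a third-party model. The Python below is a minimal illustration of that idea, assuming a few toy regex patterns of my own invention; real PII detection is a much harder problem than this.

```python
import re

# Toy patterns for illustration only; production PII detection is far
# more involved than a handful of regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD]"),      # likely payment card numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def redact(text: str) -> str:
    """Swap recognizable identifiers for placeholder tokens before the
    text is handed to any remote chatbot service."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

print(redact("My card is 4111 1111 1111 1111, reach me at jane@example.com."))
# prints: My card is [CARD], reach me at [EMAIL].
```

The point isn’t the patterns; it’s the architecture: whatever leaves for the model’s servers should already be the least sensitive version of the conversation.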

This isn’t just a problem for customer service. Let’s say Disney has decided to let AI, rather than its VFX departments, work on its Marvel movies. Is there a world where Disney would want to let Marvel spoilers leak?

One of the things that is generally true of the tech industry is that early-stage companies, like a younger iteration of Facebook, don’t pay a lot of attention to data security. Given that, it makes sense to limit your exposure of sensitive materials, as OpenAI itself suggests you do. (“Please don’t share any sensitive information in your conversations.”) This isn’t an AI-specific problem.

But I am curious about whether there are intrinsic problems with AI chatbots. One of the expenses that comes with doing AI is compute. Building out your own data center is expensive, but using cloud compute means your queries are getting processed on a remote server, where you are essentially relying on someone else to secure your data. You can see why the banks might be fearful here — financial data is incredibly sensitive.

On top of accidental public leaks, there’s also the possibility of deliberate corporate espionage. At first blush, that looks like more of a tech industry issue, since trade secret theft is one of the risks here. But Big Tech companies have moved into streaming, so I wonder if that isn’t also a problem for the creative end of things.

There’s always a push-pull between privacy and usefulness when it comes to tech products. In many cases — for instance, that of Google and Facebook — users have exchanged their privacy for free products. Google’s Bard is explicit that queries will be used to “improve and develop Google products, services, and machine-learning technologies.”

It’s possible these large, savvy, secrecy-focused companies are just being paranoid and there’s nothing to worry about. But let’s say they’re right. If so, I can think of a few possibilities for the future of AI chatbots. The first is that the AI wave turns out to be exactly like the metaverse: a nonstarter. The second is that AI companies are pressured into overhauling and clearly outlining security practices. The third is that every company that wants to use AI has to build its own proprietary model or, at minimum, run its own processing, which sounds hilariously expensive and hard to scale. And the fourth is an online privacy nightmare, where your airline (or debt collectors, pharmacy, or whoever) leaks your data on a regular basis.
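For a sense of what that third option looks like at its very smallest, here’s a rough sketch of running an open model on your own hardware with Hugging Face’s transformers library, so prompts never leave the building. The model name is just an illustrative stand-in; an actual deployment would involve vetted models, GPUs, and serving infrastructure, which is where the “hilariously expensive” part comes in.

```python
# A minimal local-inference sketch: the prompt is processed entirely on
# this machine, with no third-party API involved. "gpt2" is a small
# stand-in for whatever open model a company actually vets and deploys.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Draft a polite reply to a customer asking about a late order:"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```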

I don’t know how this shakes out. But if the companies that are the most security-obsessed are locking down their AI use, there might be good reason for the rest of us to do it, too.

