
AI Digital Assistants Require Human Oversight to Avoid Cybersecurity Threats

source link: https://hackernoon.com/ai-digital-assistants-require-human-oversight-to-avoid-cybersecurity-threats
April 25th 2022


The world is already full of AI-powered digital services. Recently, the security industry has seen an explosion of applications that integrate AI assistants into institutions, helping businesses adapt or even automate their security processes. Intelligent assistants have become an option for today's organizations facing data encryption, threat detection, and attack prevention challenges. In the fast-paced 21st century, where cyber attacks are out of control, it is more important than ever to build policies and strategies specific to each cyber event.

Cyber incidents cannot be classified as a single type of event, which makes it difficult to deploy an appropriate response plan quickly without integrating traditional legislative and operational toolchains. In addition, it is hard to rely on objective detection tools when events are affected by human behavior (e.g., deliberate actions) and human error (e.g., false positives).
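To make this concrete, here is a minimal triage sketch in Python. The event categories, score threshold, and field names are all invented for illustration; the point is simply that incidents touched by human factors (deliberate actions, likely false positives) get routed to a person rather than to an automated response.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    kind: str              # hypothetical category, e.g. "phishing", "anomaly"
    detector_score: float  # 0.0-1.0 confidence from an automated tool
    human_reported: bool   # reported by a person rather than a sensor

def triage(incident: Incident) -> str:
    """Return a response track for the incident."""
    if incident.human_reported:
        # Deliberate human actions are hard to score automatically.
        return "manual-review"
    if incident.detector_score < 0.5:
        # Low-confidence detections are likely false positives.
        return "manual-review"
    return "automated-response"

print(triage(Incident("phishing", 0.9, False)))  # automated-response
print(triage(Incident("anomaly", 0.3, False)))   # manual-review
```

The threshold of 0.5 is arbitrary; in practice each organization would tune such rules to its own event history.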

Creating an AI Digital Assistant

We started our journey by designing tools that help organizations adopt these digital AI assistants and apply them in their own environments. The main objective is to let people set up these solutions quickly without having to worry about technical details.

To achieve this, we designed a secured environment based on the latest in AI technology so that people can build and deploy their own digital assistants to react to the most frequent cyber events.

With this method, organizations can create a digital agent that responds on its own and interacts with the world around it. We also refer to this system as an "AI Assistant" because these tools are not a substitute for human interaction but a complement to it.
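One way to picture such an agent is as a registry that maps frequent cyber events to response handlers. This is a hypothetical sketch, not the article's actual system: the event names (`brute-force-login`, `malware-detected`) and handlers are invented, and unknown events fall through to a human.

```python
from typing import Callable, Dict

handlers: Dict[str, Callable[[dict], str]] = {}

def on_event(name: str):
    """Register a response handler for a named cyber event."""
    def register(fn: Callable[[dict], str]):
        handlers[name] = fn
        return fn
    return register

@on_event("brute-force-login")
def lock_account(ctx: dict) -> str:
    return f"locked account {ctx['user']} after repeated failures"

@on_event("malware-detected")
def quarantine_host(ctx: dict) -> str:
    return f"quarantined host {ctx['host']}"

def respond(event: str, ctx: dict) -> str:
    handler = handlers.get(event)
    if handler is None:
        # No automated playbook for this event: escalate to a person.
        return "no automated response; escalating to a human"
    return handler(ctx)

print(respond("brute-force-login", {"user": "alice"}))
print(respond("zero-day-exploit", {}))
```

The fallback branch reflects the article's framing: the agent acts on its own only for events it recognizes, and everything else stays with people.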

What Are the Consequences of a Cyberattack Against Your AI Digital Assistant?

AI assistants are not a substitute for human interaction, and they require human oversight at all times. If a digital assistant is compromised, it can disrupt people's lives in many ways: anti-social behavior, stealing secrets, pilfering confidential information, and more. This damage can be far more catastrophic than physical damage because it can render people's data unusable, and is therefore irreversible.

If the digital assistant is compromised or misused, the damage could be even worse – perhaps to the point of destroying people's identity and freedom. If you believe that intelligent agents should have only limited power, you might need to rethink that.

AI Assistants Require Human Oversight

AI assistants are not a substitute for human interaction and oversight but a vital complement to them, and they should only be used under human supervision.
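One common way to enforce "used only with human supervision" is a human-in-the-loop approval gate, sketched below. The action names and the set of destructive actions are assumptions for illustration, not anything the article specifies: the assistant may propose any action, but destructive ones run only after a named human approves them.

```python
from typing import Optional

# Hypothetical list of actions considered too destructive to automate.
DESTRUCTIVE = {"wipe-host", "revoke-all-credentials"}

def execute(action: str, approved_by: Optional[str] = None) -> str:
    """Run an assistant-proposed action, gating destructive ones on approval."""
    if action in DESTRUCTIVE and approved_by is None:
        return f"blocked: '{action}' requires human approval"
    who = approved_by or "assistant"
    return f"executed '{action}' (authorized by {who})"

print(execute("wipe-host"))                            # blocked
print(execute("wipe-host", approved_by="SOC analyst")) # runs with approval
print(execute("rotate-logs"))                          # safe, runs unattended
```

Recording who approved each destructive action also gives the "greater accountability for human overseers" that the next paragraph calls for.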

Ángel Rama, an Assistant Professor in the Department of Computer Science at the University of California at Santa Barbara, has another approach: "We have to keep people in mind when designing technology," he says.

The United Nations' Universal Declaration of Human Rights requires that all people have "the right to the enjoyment of the highest attainable level of physical and mental health".

That will require greater accountability for human overseers. Innovation is needed, but regulation is not enough. Rama believes that it would be wrong to take AI power away from people, but he says that technology should have less access to powerful people's data because that can be abused.

"If you are a CEO, you should not be able to access data of other CEOs," he says. "If you want to regulate AI, the best approach is to understand its limits."

While that might be good advice for AI developers and researchers, it is a nonstarter for governments that can't afford that kind of restraint, or that would prefer not to exercise it.
