
Trustworthy AI versus ethical AI - what's the difference, and why does it matter?

By Neil Raden

May 11, 2021

6 min read



(Human and robot shaking hands over stacked coins © Andrey_Popov - Shutterstock)

I've written before about semantic ambiguity in natural languages and how difficult, if not impossible, it would be to render natural language into a digital object.

The reason: the gestalt of a silicon processor versus that of the human brain and brain-to-brain communication:

Something is ambiguous when it can be understood in two or more possible senses or ways. If the ambiguity is in a single word, it is called lexical ambiguity. ... In everyday speech, ambiguity can sometimes be understood as something witty or deceitful.

Today, the terms trustworthy AI and ethical AI are used interchangeably. The problem is that trustworthy AI is not necessarily ethical, and ethical AI is not necessarily trustworthy. The casual commingling of the terms has unfortunate consequences.

Let's break down trust and trustworthiness. There is a difference between 'trust,' which can be described in pretty straightforward factual terms, and 'trustworthiness,' which is a very different matter with a value component: it is about what or who should be trusted. Unfortunately, we can both trust those who are not trustworthy and fail to trust those who are. Trust and transparency go together: we can only trust when we are clear about what is being asked of us.

Ethics determines whether a strategy should be chosen because, in simple utilitarian terms, it secures the best overall aggregate balance of harms and costs, or whether it rests on a belief that there are fundamental human rights that should never be sacrificed. Values inform a judgment of what counts as a proper or proportionate trade of individual liberty and privacy for the gain of certain public goods, or whether it is fair to expect some social and age groups to suffer disproportionately in a public health initiative.

Here is one attempt to define ethical AI:

Organizations ready to embrace AI and thrive in the Age of With must start by putting trust at the center. First, they must thoroughly assess whether their organization meets the criteria for trustworthy and ethical AI; it's a necessary step in increasing the returns and managing the risks that constitute the transformational promise of AI.

This is messed up. Trust is something given based on transparency, reputation, and sometimes, unfortunately, blind faith in wholly untrustworthy characters. It is not a consummate good. Putting trust at the center implies ethics are of secondary concern.

Here are a few examples of trustworthy but potentially unethical models:

Predictive policing: City governments are in an endless cycle of allocating resources across all of the things they have to do, and policing is one of the areas that gets cut. In an attempt to introduce some element of fairness (or at least science) into how police are deployed and redeployed, cities have turned to AI solutions that predict where police need to be. Implementing these systems shows a measure of trust in their operation and outcomes, but experience has shown they lead to unethical results.

The models themselves are, for the most part, free from bias, but they calculate occurrences of crimes in segments of the city and assign more policing accordingly. The problem is that, though organized to fight Class 1 crimes - homicides, arson and assaults - more boots on the ground begin to pick up more Class 2 crimes, such as vagrancy, trespassing, curfew violations or possession of small quantities of drugs. As this data flows back into the system in an endless feedback loop, more police are assigned, and recorded crime rates soar. Moreover, since these phenomena occur mostly in neighborhoods of color, the result is entirely unethical.
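The dynamic is easy to reproduce in a few lines of code. Below is a minimal, hypothetical sketch (district names, seed counts and rates are all invented, not drawn from any real system): two districts with identical true crime rates, where incidents are only recorded where the patrol is sent, and the patrol is sent wherever recorded counts are highest.

```python
import random

# Hypothetical sketch of a runaway predictive-policing feedback loop.
# Both districts have the SAME underlying incident rate; the only
# difference is where crime gets *recorded*.
TRUE_RATE = 0.5                                  # chance a patrol observes an incident
recorded = {"district_a": 1, "district_b": 1}    # seed counts in the training data

random.seed(0)
for day in range(1000):
    # Feedback step: dispatch today's patrol to the district with the
    # most recorded crime (ties break toward district_a).
    target = max(recorded, key=recorded.get)
    # The patrol observes an incident with the same probability in
    # either district, because the true rates are identical.
    if random.random() < TRUE_RATE:
        recorded[target] += 1

# district_a ends up with nearly all recorded incidents while
# district_b stays at its seed count, despite identical true rates.
print(recorded)
```

Because allocation follows recorded counts and recording follows allocation, the loop never observes the district it has written off; the data confirms the model, not the world.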

Intrusive personalization: People give their most intimate information to ad servers and marketers - clicking through websites, ordering online, talking to Siri - and tend to trust these applications, even though subsequent use of the data can be highly unethical: persuasion, digital phenotyping and the disruption of civil society.

Life insurance: Life insurance is the paradigm of trust, with commercials promising "good hands," "the future is safe" and "a secure life." The assumption when purchasing life insurance is that the face value will be distributed promptly to your beneficiaries. There are circumstances, clearly elucidated in the contract, where that would not happen, such as suicide in the first two years, or acts of war. But the two-year exclusion doesn't offer complete protection. Another exclusion is the matter of "material misrepresentations on the application." Effectively, it gives the insurance company the right to deny a death claim over misrepresentations, which can be minor:

  • Lying about income
  • Not disclosing another life insurance policy
  • Incorrect or incomplete answers put on an application by an insurance agent
  • Failing to mention treatment for minor ailments
  • Lying about weight
  • Misrepresenting immigration status
  • Not mentioning smoking one cigarette a day

The cynical part of this is that insurers typically do not investigate these situations at the outset. But when a claim is large, or comes just beyond the two years, they will dig through thousands of sources to invalidate the claim and return the premiums plus interest, but not the death benefit. This is an example of the perception of a trustworthy instrument - "If I die, my family will be taken care of" - resting on what is, in fact, an unethical process.

Ethical, but not trustworthy: One prominent example of ethical but not trustworthy AI is the use of machine learning in radiology. After some early gaffes, when Stanford Medical's radiation oncology model produced noticeably different results across ethnic groups, they went back to the drawing board. They developed a system that identified tumors that most of a panel of radiologists missed, and the false positives and negatives were evenly distributed across groups. They had developed an ethical system, cleansed of bias, but trust was a different issue. First of all, doctors are a conservative bunch; many refused to accept the results. Then there was an unanticipated problem: as the software was licensed to other hospitals, its accuracy dropped dramatically. The reason was that Stanford had state-of-the-art imaging technology that the other hospitals did not. Trust in the system plummeted and took some time to regain.
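The parity check described above is simple to express in code. Here is a minimal sketch, with synthetic stand-in data rather than anything from the Stanford system: compute false-positive and false-negative rates per patient group and compare them.

```python
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Compare false-positive and false-negative rates across groups."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        fpr = float(np.mean(y_pred[m][y_true[m] == 0]))      # predicted 1 when truth is 0
        fnr = float(np.mean(1 - y_pred[m][y_true[m] == 1]))  # predicted 0 when truth is 1
        rates[g] = {"fpr": round(fpr, 3), "fnr": round(fnr, 3)}
    return rates

# Synthetic stand-in data: a 90%-accurate classifier whose errors are
# independent of group membership.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
groups = rng.choice(["group_a", "group_b"], 1000)
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)

# Roughly equal fpr/fnr across groups indicates error-rate parity;
# a large gap is the kind of disparity that forces a redesign.
print(error_rates_by_group(y_true, y_pred, groups))
```

Error-rate parity is only one fairness criterion, and, as the anecdote shows, it says nothing about whether the model will hold up on data from a different imaging pipeline.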

My take

Let's drop "trustworthy" as a criterion for ethical AI. Ethics is about knowing what to do and doing it. Trust is about what or who is trusted, or how to create trust, whether or not it's ethical. Though the two are commingled in specific ways, pursuing trust to the exclusion of ethics is dangerous.

