
AI can see clearly now: Why transparency leads to ethical and fair AI systems

source link: https://siliconangle.com/2024/03/31/ai-can-see-clearly-now-transparency-leads-ethical-fair-ai-systems/


Although artificial intelligence has proved its ability to reshape industries, redefine customer experiences and reimagine business operations, it also carries inherent risks. And though robots haven’t taken over the world as science fiction has foreshadowed time and time again, the threat of AI going awry is very real for businesses.

One of the key components in ensuring AI behaves the way it’s intended to is transparency. AI cannot operate in a black box in which no one understands how it’s making decisions; that’s how you run into issues such as unintentional discrimination and bias.

At its core, transparency in AI refers to the ability to understand and trace how AI systems make decisions. It’s about making the inner workings of AI algorithms clear to humans, particularly those who use, regulate or are affected by them.

These systems learn from vast amounts of data, often making decisions in ways that are not inherently clear, even to their creators. If an AI algorithm operates as the aforementioned black box, we call this opaque AI: we can’t see into it or understand it. AI systems can also inadvertently perpetuate and amplify biases present in their training data. Transparency allows us to examine and understand how these biases occur, leading to more ethical and fair AI systems.

Law and ethics: Why transparency is critical

Transparency builds trust with consumers, employees and stakeholders. When users understand how and why an AI system makes decisions, they are more likely to trust and accept it. But the acceptable level of AI opacity varies by industry. In highly regulated industries, for example, transparency is paramount for legal and regulatory compliance; failing to comply can bring serious legal consequences and costly fines that could upend a business.

The regulatory environment often moves much more slowly than innovation, and there’s a chasm between the governing strategies of different regions. In the U.S., for example, 50-plus different privacy laws could end up governing AI, depending on the legislative appetite in each state, whereas in Europe there is a consensus approach among EU member states. This makes things very complicated depending on where a business and its customers are located, and operating transparently means better compliance with local regulations.

If regulatory compliance doesn’t compel businesses to be transparent, what will? The answer should be ethics. If transparency is part of an organization’s core values and is incorporated into its AI strategies, the business demonstrates empathy for customers and stakeholders by prioritizing fairness, respect and privacy, which is in the best interest of us all.

Challenges with achieving transparency in AI

Developing more explainable AI models is the core tactic for achieving transparency, but that’s typically easier said than done. Many view AI models and algorithms as a “secret sauce” that, if exposed, would be tantamount to ceding competitive advantage; some classify algorithms as intellectual property.

There’s also a relationship between opacity and predictive power: opaque models are often more powerful. For marketers, this is similar to the relationship between audience reach and accuracy in data-driven campaigns. The wider the audience, the less relevant the messaging might be, whereas if the audience is more granular, the messaging may resonate more despite reaching fewer people. It’s a tradeoff we must analyze against our goals and budgets.

Statistical and machine learning models range from simple and transparent to complex and opaque. Some AI models are incredibly complex, such as deep neural networks. Examples of technology that uses DNNs include voice assistants such as Siri and Alexa, recommendation algorithms like those used by Netflix and YouTube, language translation services, and self-driving cars.

Simpler models include linear regression and decision trees. A decision tree can be sketched on a piece of paper by someone who isn’t a data scientist, since the decision path toward an outcome is easy to follow, as the example below illustrates. Decision trees can be used for loan approval processes, while linear regression is used in credit scoring and real estate pricing.
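To make that interpretability concrete, here is a minimal sketch of a shallow decision tree trained on a tiny, entirely synthetic loan-approval dataset. The feature names, labels, data and use of scikit-learn are illustrative assumptions rather than a description of any real lending system; the point is simply that the learned rules can be printed as plain text and read by a non-specialist.

```python
# A minimal sketch of an interpretable model: a shallow decision tree on a
# tiny synthetic "loan approval" dataset. Features, labels and data are
# hypothetical -- the point is that the full decision path can be printed.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicants: [annual_income_k, debt_to_income_ratio, years_employed]
X = [
    [85, 0.20, 6],
    [42, 0.55, 1],
    [67, 0.30, 4],
    [30, 0.60, 0],
    [95, 0.15, 10],
    [51, 0.45, 2],
    [73, 0.25, 5],
    [38, 0.50, 1],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = approved, 0 = denied (synthetic labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned rules as plain text -- every branch is visible and auditable.
print(export_text(tree, feature_names=["income_k", "dti", "years_employed"]))
```

Because the entire rule set fits on a page, a loan officer or auditor can check every branch by hand, which is exactly the property that deep neural networks lack.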

There’s a tradeoff between accuracy and opacity. Netflix’s recommendations are going to be far more accurate than a human working through a decision tree to determine a loan approval. And though there is an algorithm that’s widely used for real-estate appraisals, the process varies based on factors outside the model, including who’s performing the evaluation. All of this makes it challenging to strike the right balance between true transparency and accuracy.

Strategies for enhancing transparency

Despite these challenges, there are strategies that can help enhance organizational AI transparency. One is to integrate transparency considerations into your AI systems from the beginning of the development process.

This goes hand-in-hand with creating an organizational culture that strives for transparency. Accountability should be shared, not just taken on by technologists but also by functional areas such as marketing, operations, sales, customer service and beyond, to reinforce its importance and make it part of company culture.

Additionally, continuous monitoring, with a human overseeing AI decisions and performance, is essential to maintaining transparency. If a problem or bias emerges, a human auditor can catch it before it is reinforced over and over; the sketch below shows one form such a check might take.
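As a hedged illustration, the following minimal sketch compares approval rates across two hypothetical groups in a batch of logged decisions and flags the batch for human review when the gap exceeds an arbitrary tolerance. The group names, data and 10-point threshold are assumptions made purely for illustration.

```python
# A minimal sketch of one human-in-the-loop monitoring check: compare approval
# rates across groups and flag large gaps for a reviewer. Group labels, data
# and the 10-point tolerance are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical batch of model decisions logged over a review period.
batch = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", True), ("group_b", False), ("group_b", False)]

rates = approval_rates(batch)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.0%}")

if gap > 0.10:  # hypothetical tolerance; flagged batches go to a human auditor
    print("Flag for human review: approval-rate gap exceeds tolerance")
```

A check like this doesn’t explain why the model behaves differently across groups; its job is to surface the disparity early so that a person, not the system, decides what happens next.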

Businesses should also clearly state and publicize how data is collected, used, processed and handled, since AI systems are only as fair and accurate as the data fed into them. Not only does this improve transparency, it also builds consumer trust. Most organizations that handle consumer data post their privacy policies online; if we do the same for AI governance policies, we can further build trust and foster adoption.

Setting industry standards is also important and achievable. This requires organizations to come together and develop a framework for responsible AI best practices, or establish agnostic organizations that develop and maintain standards, offer benchmarking and conduct research to measure adherence to such frameworks.

As AI becomes more and more integrated into enterprise operations and the everyday lives of consumers, transparency will be critical to unlocking its full potential. It is central to building consumer trust, ensuring fairness for marginalized groups, and meeting regulatory standards across industries. While technologists are still solving challenges that contribute to the opacity of AI algorithms, we can simultaneously come together to create accountable cultures, best practices and agreed-upon frameworks in pursuit of a more transparent and ethical future.

Tara DeZao is director of product marketing, adtech and martech, at Pegasystems Inc., which develops software for customer relationship management and business process management. She wrote this article for SiliconANGLE.

Image: geralt/Pixabay
