
Explainable BI: eliminating the black box of human decision-making

source link: https://www.computerweekly.com/blog/Data-Matters/Explainable-BI-eliminating-the-black-box-of-human-decision-making


Business Applications Editor
Published: 15 October 2021 11:33

This is a guest blog post by James Fisher, Chief Product Officer at Qlik.

No, it’s not a typo: Explainable BI. Not to be confused with its close relative, Explainable AI.

It is rapidly becoming recognised as an essential element of organisations’ data and analytics strategy. Why?

First, we need to consider the drivers behind Explainable AI. As Computer Weekly readers will no doubt be aware, Explainable AI focuses on lifting the veil on the automated decision-making of intelligent machines. It is a key component of the EU guidelines on ethics in artificial intelligence, which state that “an explanation should be available on how AI systems influence and shape the decision-making process, on how they are designed, and on what is the rationale for deploying them”. And with good reason. The decisions made by intelligent machines will only be trusted if we can understand how they arrived at them. Without that understanding, we are left putting all our faith in the output of a black box.

But it is not just machines that are capable of black box decision-making. When we make decisions based on information without an understanding of the validity of the source or context, we too are making black box decisions. You could argue the misinformation epidemic is one symptom of this.

Ultimately, whether it’s humans or machines, explainability is critical to trust in decision-making.

Why now? Given the maturity of the data and analytics industry and the significant investments in data technologies and training over the past decade, why is the concept of Explainable BI just now coming to the fore?

In recent years, there has been a significant shift in the role of data in the enterprise. The iron gates of the IT office, which until recently gave it total control of the company’s data – who collected it, cleaned it, transformed it, and used it – were prised open. Data was democratised across the organisation with the rise of self-service analytics. Every business department became an active participant in collecting, consuming and analysing data to help it make better, more informed decisions.

And this was absolutely the right move. Qlik’s own research revealed that when data is democratised across an organisation to a data literate employee-base, global businesses can enjoy an additional $500m in enterprise value.

But in doing so, far from creating a single version of the truth, we instead proliferated multiple versions of the truth. What was missing from that model was an understanding behind the insight or visualisation. When the IT office retained control of an organisation’s data and analytics, the same team would be responsible for identifying the valuable information and managing its governance before the report – with the business-ready insight – was pulled. It understood and could communicate where the insight came from.

In a self-service world, that understanding of the data lineage was lost. When an employee uses a SaaS analytics platform to create dashboards with drag-and-drop data from the enterprise’s data catalogue, they can’t necessarily tell you where that insight has come from. How do you define the gold-standard calculation, metric or KPI? So, a simple question in a meeting – “where did that number come from?” – cannot be answered. And with that, trust in the insight is lost. And few are willing to bet their fortune on a data point they can’t back up.

That’s where Explainable BI comes in. It offers visibility into the data’s lineage – the source and, ideally, field-level detail – including the governance and business logic applied to the data that has informed the insight. It empowers individuals to trust the analysis, so they can take action in the business moment with confidence and seize new opportunities.
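To make the idea concrete, here is a minimal sketch (not Qlik’s implementation – the class names, fields and sample data are all illustrative assumptions) of how field-level lineage and business logic might be attached to a metric, so that the “where did that number come from?” question always has an answer:

```python
from dataclasses import dataclass, field

@dataclass
class FieldLineage:
    # One source field feeding the metric, and the steps applied to it.
    source_system: str
    source_field: str
    transformations: list  # e.g. ["deduplicated", "converted to EUR"]

@dataclass
class Metric:
    # A governed KPI carrying the lineage needed to explain itself.
    name: str
    value: float
    business_logic: str  # the agreed gold-standard calculation rule
    lineage: list = field(default_factory=list)  # FieldLineage entries

    def explain(self) -> str:
        # Render the metric plus every contributing field and its steps.
        lines = [f"{self.name} = {self.value} ({self.business_logic})"]
        for fl in self.lineage:
            steps = ", ".join(fl.transformations) or "no transformations"
            lines.append(f"  <- {fl.source_system}.{fl.source_field}: {steps}")
        return "\n".join(lines)

# Hypothetical example: a revenue KPI traced back to its source fields.
revenue = Metric(
    name="Q3 Revenue",
    value=1_250_000.0,
    business_logic="sum of invoiced amounts, net of refunds",
    lineage=[
        FieldLineage("ERP", "invoices.amount", ["deduplicated", "converted to EUR"]),
        FieldLineage("CRM", "refunds.amount", ["filtered to Q3"]),
    ],
)
print(revenue.explain())
```

In a real platform this metadata would be captured automatically by the data catalogue rather than hand-written, but the principle is the same: the explanation travels with the number.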

And as we move towards a model of Active Intelligence, where data is no longer passively consumed when we need to make decisions, but is proactively served to compel us to take actions and trigger automated responses, this will only grow in importance.

Explainability – whether of the data itself or the intelligent systems generating the insights – will be key to trust in the insight, confidence in the decision and, ultimately, better business outcomes.
