
The metaverse is too important to get wrong, so it needs to be open

source link: https://venturebeat.com/2021/11/18/the-metaverse-is-too-important-to-get-wrong-so-it-needs-to-be-open/

What is explainable AI? Building trust in AI models

Image Credit: VeniThePooh via Getty



As AI-powered technologies proliferate in the enterprise, the term “explainable AI” (XAI) has entered mainstream vernacular. XAI is a set of tools, techniques, and frameworks intended to help users and designers of AI systems understand their predictions, including how and why the systems arrived at them.

A June 2020 IDC report found that business decision-makers believe explainability is a “critical requirement” in AI. To this end, explainability has been referenced as a guiding principle for AI development at DARPA, the European Commission’s High-Level Expert Group on AI, and the National Institute of Standards and Technology. Startups like Truera are emerging to deliver “explainability as a service,” and tech giants such as IBM, Google, and Microsoft have open-sourced XAI toolkits and methods.


But while XAI is almost always more desirable than black-box AI, where a system’s operations aren’t exposed, the mathematics of the underlying algorithms can make it difficult to attain. Technical hurdles aside, companies sometimes struggle to define “explainability” for a given application. A FICO report found that 65% of employees can’t explain how AI model decisions or predictions are made, which exacerbates the challenge.

What is explainable AI (XAI)?

Generally speaking, there are three types of explanations in XAI: global, local, and social influence.

  • Global explanations shed light on what a system is doing as a whole as opposed to the processes that lead to a prediction or decision. They often include summaries of how a system uses a feature to make a prediction and “metainformation,” like the type of data used to train the system.
  • Local explanations provide a detailed description of how the model came up with a specific prediction. These might include information about how a model uses features to generate an output or how flaws in input data will influence the output.
  • Social influence explanations relate to the way that “socially relevant” others — i.e., users — behave in response to a system’s predictions. A system using this sort of explanation may show a report on model adoption statistics, or the ranking of the system by users with similar characteristics (e.g., people above a certain age).

As the coauthors of a recent Intuit and Holon Institute of Technology research paper note, global explanations are often less costly and less difficult to implement in real-world systems, making them appealing in practice. Local explanations, while more granular, tend to be expensive because they have to be computed case by case.
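To make the distinction concrete, here is a minimal sketch of one common global explanation technique, permutation feature importance. The paper above doesn’t prescribe a specific method, and the scikit-learn dataset and model here are stand-ins; the point is that the output summarizes how much the model relies on each feature overall, rather than explaining any single prediction:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data and model; any fitted estimator works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: shuffle each feature in turn and measure how much
# held-out accuracy drops. A large drop means the model leans on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.4f}")
```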

Presentation matters in XAI

Explanations, regardless of type, can be framed in different ways. Presentation matters: the amount of information provided, as well as the wording, phrasing, and visualizations (e.g., charts and tables), can all affect what people perceive about a system. Studies have shown that the power of AI explanations lies as much in the eye of the beholder as in the mind of the designer; the viewer’s goals and heuristics matter as much as the designer’s intended goal.

As the Brookings Institution writes: “Consider, for example, the different needs of developers and users in making an AI system explainable. A developer might use Google’s What-If Tool to review complex dashboards that provide visualizations of a model’s performance in different hypothetical situations, analyze the importance of different data features, and test different conceptions of fairness. Users, on the other hand, may prefer something more targeted. In a credit scoring system, it might be as simple as informing a user which factors, such as a late payment, led to a deduction of points. Different users and scenarios will call for different outputs.”
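As a hedged illustration of the kind of targeted, local output the credit scoring example describes, the sketch below uses the open-source SHAP library to break a single prediction into per-feature contributions. SHAP isn’t named in the article, and the feature names, data, and model here are entirely made up:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical credit features; names and values are illustrative only.
X = pd.DataFrame({
    "late_payments": rng.poisson(1, 500),
    "utilization": rng.random(500),
    "account_age_years": rng.integers(1, 30, 500),
})
y = ((X["late_payments"] > 1) | (X["utilization"] > 0.8)).astype(int)  # toy label

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Local explanation: how much each feature pushed this one applicant's
# score up or down, relative to the average prediction.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])
for name, value in zip(X.columns, np.ravel(contributions)):
    print(f"{name}: {value:+.3f}")
```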

A study published in the 2020 Proceedings of the ACM on Human-Computer Interaction found that explanations, worded a certain way, could create a false sense of security and over-trust in AI. In several related papers, researchers found that data scientists and analysts perceive a system’s accuracy differently, with analysts inaccurately viewing certain metrics as a measure of performance even when they don’t understand how the metrics were calculated.

The choice of explanation type — and presentation — isn’t universal. The coauthors of the Intuit and Holon Institute of Technology paper lay out factors to consider in making XAI design decisions, including the following:

  • Transparency: the level of detail provided
  • Scrutability: the extent to which users can give feedback to alter the AI system when it’s wrong
  • Trust: the level of confidence in the system
  • Persuasiveness: the degree to which the system convinces users to buy or try the recommendations it makes
  • Satisfaction: the level to which the system is enjoyable to use
  • User understanding: the extent a user understands the nature of the AI service offered

Model cards, data labels, and fact sheets

Model cards provide information on the contents and behavior of a system. First described in a paper coauthored by AI ethicist Timnit Gebru, model cards enable developers to quickly understand aspects like training data, identified biases, benchmark and testing results, and gaps in ethical considerations.

Model cards vary by organization and developer, but they typically include technical details and data charts that show the breakdown of class imbalance or data skew for sensitive fields like gender. Several card-generating toolkits exist; one of the most recent, from Google, reports on model provenance, usage, and “ethics-informed” evaluations.
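As a rough sketch of what generating a card programmatically can look like, the snippet below uses Google’s open-source model-card-toolkit package. The exact calls and field names should be treated as assumptions against the version you install, and every value shown is a placeholder:

```python
import model_card_toolkit as mct

# Scaffold a card; assets are written to the given output directory.
toolkit = mct.ModelCardToolkit("model_card_output")
model_card = toolkit.scaffold_assets()

# All values below are hypothetical placeholders.
model_card.model_details.name = "toy-image-classifier"
model_card.model_details.overview = (
    "Demo classifier used to illustrate model card generation."
)

# Persist the edits and render the card as shareable HTML.
toolkit.update_model_card(model_card)
html = toolkit.export_format()
```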

Data labels and factsheets

Proposed by the Assembly Fellowship, data labels take inspiration from nutritional labels on food, aiming to highlight the key ingredients in a dataset such as metadata, populations, and anomalous features regarding distributions. Data labels also provide targeted information about a dataset based on its intended use case, including alerts and flags pertinent to that particular use.

In the same vein, IBM created “factsheets” for systems that provide information about the systems’ key characteristics. Factsheets answer questions ranging from system operation and training data to underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining. For natural language systems specifically, like OpenAI’s GPT-3, factsheets include data statements that show how an algorithm might be generalized, how it might be deployed, and what biases it might contain.

Technical approaches and toolkits

There’s a growing number of methods, libraries, and tools for XAI. For example, “layerwise relevance propagation” helps to determine which features contribute most strongly to a model’s predictions. Other techniques produce saliency maps, in which each feature of the input data is scored based on its contribution to the final output. In an image classifier, for instance, a saliency map rates each pixel by the contribution it makes to the machine learning model’s output.
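The simplest saliency maps use plain input gradients. The sketch below shows that vanilla-gradient variant in PyTorch (not layerwise relevance propagation, which requires per-layer propagation rules), with a tiny stand-in model and a random tensor in place of a real classifier and image:

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier; any differentiable vision model works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "image"
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate the winning class score to every input pixel.
logits[0, top_class].backward()

# Score each pixel by its strongest absolute gradient across color channels:
# pixels with large gradients influenced the prediction most.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 32, 32)
print(saliency.shape)
```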

So-called glassbox systems, or simplified versions of systems, make it easier to track how different pieces of data affect a system. While they do not perform well across all domains, simple glassbox systems work well on certain types of structured data, like statistical tables. They can also be used as a debugging step to uncover potential errors in more complex, black-box systems.
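A shallow decision tree is one familiar example of a glassbox model on tabular data: its entire decision process can be printed as nested rules. A minimal scikit-learn sketch, with a dataset chosen only for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole model is readable as if/else rules, so every prediction
# can be traced by hand.
print(export_text(tree, feature_names=[
    "sepal length", "sepal width", "petal length", "petal width"]))
```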

Introduced in 2019, Facebook’s Captum uses visualizations to elucidate feature importance and to perform deep dives on models, showing how their components contribute to predictions.
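A minimal sketch of Captum in use, here with its integrated gradients attribution method on a stand-in model; the model, input, and target class are all placeholders, and Captum’s documentation covers the full API:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Stand-in model; Captum wraps any PyTorch module.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.rand(1, 4)
ig = IntegratedGradients(model)

# Attribute the class-0 score to each input feature, relative to an
# all-zeros baseline; delta estimates the approximation error.
attributions, delta = ig.attribute(
    inputs,
    baselines=torch.zeros_like(inputs),
    target=0,
    return_convergence_delta=True,
)
print(attributions, delta)
```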

In March 2019, OpenAI and Google released the activation atlases technique for visualizing decisions made by machine learning algorithms. In a blog post, OpenAI demonstrated how activation atlases can be used to audit why a computer vision model classifies objects a certain way — for example, mistakenly associating the label “steam locomotive” with scuba divers’ air tanks.

IBM’s explainable AI toolkit, which launched in August 2019, draws on a number of different ways to explain outcomes, such as an algorithm that attempts to spotlight important missing information in datasets.

In addition, Red Hat recently open-sourced a package, TrustyAI, for auditing AI decision systems. TrustyAI can introspect models to describe predictions and outcomes by looking at a “feature importance” chart that orders a model’s inputs by the most important ones for the decision-making process.

Transparency and XAI shortcomings

A policy briefing on XAI by the Royal Society provides an example of the goals it should achieve. Among other things, XAI should give users confidence that a system is an effective tool for its purpose and meet society’s expectations about how people are afforded agency in the decision-making process. But in reality, XAI often falls short, increasing the power differentials between those creating systems and those impacted by them.

A 2020 survey by researchers at The Alan Turing Institute, the Partnership on AI, and others revealed that the majority of XAI deployments are used internally to support engineering efforts rather than reinforcing trust or transparency with users. Study participants said that it was difficult to provide explanations to users because of privacy risks and technological challenges and that they struggled to implement explainability because they lacked clarity about its objectives.

Another 2020 study, focusing on user interface and design practitioners at IBM working on XAI, described current XAI techniques as “fail[ing] to live up to expectations” and being at odds with organizational goals like protecting proprietary data.

Brookings writes: “[W]hile there are numerous different explainability methods currently in operation, they primarily map onto a small subset of the objectives outlined above. Two of the engineering objectives — ensuring efficacy and improving performance — appear to be the best represented. Other objectives, including supporting user understanding and insight about broader societal impacts, are currently neglected.”

Forthcoming legislation like the European Union’s AI Act, which focuses on ethics, could prompt companies to implement XAI more comprehensively. So, too, could shifting public opinion on AI transparency. In a 2021 report by CognitiveScale, 34% of C-level decision-makers said that the most important AI capability is “explainable and trusted.” And 87% of executives told Juniper in a recent survey that they believe organizations have a responsibility to adopt policies that minimize the negative impacts of AI.

Beyond ethics, there’s a business motivation to invest in XAI technologies. A study by Capgemini found that customers will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and punish those that don’t.


The metaverse is too important to get wrong, so it needs to be open

November 18, 2021 05:40 AM

Presented by Crucible


Open versus closed has always been a defining battle for the next chapter of our lives with the internet. At its core, this is about our digital sovereignty. Science fiction has long illustrated dark visions of what a closed future could look like, but those were always warnings, not prophecies. We are now living through it in real time, and we, the people, have decisions to make. If the metaverse is going to have an open future, it requires action from all of us in the community.

On June 24, 2021, Crucible, our company developing software blueprints for the Open Metaverse, took that action and established the Open Meta Association in Zug, Switzerland, next door to Ethereum, Cardano, Polkadot, Cosmos, Solana, and several other billion-dollar web3 networks. Open Meta exists to be an organizing principle for decentralized governance, driving us toward an open metaverse on web3 rails, owned and controlled by the community.

I believe the metaverse will touch nearly every person on Earth over the next decade and be worth trillions. It will reshape the way people make their livelihood, and it will be a powerful tool for social mobility. If done correctly, it can be life-changing for billions of people around the world, but it must be built open to achieve this.

This is all far too much for the closed corporate world to own as it does web2. Full stop.

Facebook reframed its future under the new corporate name Meta and the new ticker symbol MVRS. This move signals a drive toward owning our lives and our work in the metaverse, one that will be repeated in nearly every boardroom of every company on Earth that touches technology. I know this because many of them get in touch with me to understand the shift that is taking place and to shape strategies for the role they will play.

This shift is not just technical; we are undergoing the next step in our evolution as a networked species. The internet has been around for nearly 30 years, about the same age as I am. I come from the generation intimately familiar with the internet but not native to it. When I was born, billboards and advertisements didn’t watch you back. Online, they now do, tracking your every move and reporting back to Big Tech.

Big Tech was born from the internet: companies that started as small, disruptive startups that moved fast and broke things, and then swallowed up the world economy. Their business model is to accumulate attention and then optimize it for buying behavior, be that purchasing, supporting, or even believing in what their customers are selling. The health and well-being of those customers does not seem to matter to many of them, and that is why distrust is so prevalent right now. Just look at the memes.

The truth of the matter is that by putting corporate profit before community, the policies implemented by these companies have made them complicit in the deteriorating mental health and well-being of the billions who use, and are addicted to, their products. This is no accident; it was methodically engineered, and the deck is stacked against the average citizen. Such an immense imbalance calls for far more accountability from the ones benefiting most in this dynamic: Big Tech.

By farming and controlling individuals’ data, these tech companies make trillions of dollars, the brands make billions, and the communities of people using the products get little more than content.

The gaming industry has a unique opportunity to learn these lessons now and embrace what is happening. If the metaverse is the internet becoming game-like, then game developers and publishers hold the keys to this future once they learn and implement web3 values. This is where we can offer an alternate future that is open source and driven by the community. It’s where Decentralized Autonomous Organizations (DAOs) replace traditional corporate gatekeepers and their captive ecosystems by handing control back to the community and allowing individual players to participate directly in the upside from their own efforts. The metaverse becomes a public utility for everyone to use and benefit from. It is the one thing Big Tech doesn’t stand a chance of competing with: garnering passion, trust, and excitement from the people.

In pursuit of our vision to build a truly open metaverse, the Open Meta Association will mint a DAO in January 2022, the “Open Meta DAO,” distributing a fungible governance token to the 50+ founding members from our June presale round. The current established valuation is $100M; this is the ground floor of what will be needed.

Founding participants in the Open Meta DAO include Animoca Brands, Outlier Ventures, Enjin, Polygon executives Sandeep Nailwal and Shreyansh Singh in India, Dapper Labs’ venture fund, Wilder World founder David Waslen, Spartan Group, South East Asia-based KardiaChain, Yield Guild Games founder Gabby Dizon, Asia-based LD Capital, Boson Protocol, and NFT collector and investor Sillytuna, among others.

Ryan Gill, Managing Director of Open Meta DAO, said of the launch, “We believe the metaverse will touch nearly every person on Earth over the next decade and will be worth trillions. It will reshape the way people make their livelihood and it will be a powerful tool for social mobility. If done correctly, it can be life-changing for billions of people around the world, but it must be built openly to achieve its true vision. In response to the Facebook announcement, there are a growing number of disjointed efforts around this shared open vision in a scramble to make a stand. It’s creating syncopation in the effort. We built the Open Meta DAO and chose founding membership on a vector for long-term success, and as a movement to align and unify different communities and technologies — ultimately creating a path to necessary collaborative action. We exist to be a rising tide for the entire ecosystem”.

Yat Siu, executive chairman and co-founder of Animoca Brands, added, “At Animoca Brands we have made it our mission to support an open metaverse, ranging from our in-house projects to the many companies that we acquire or invest in. We are strong believers in the incredible potential of Web3 and games that are based on open economies and real digital property rights. Joining the Open Meta DAO is another key step in our strategy to support and contribute to open and community-driven initiatives that will make the metaverse a reality”.

Roham Gharegozlou, CEO of Dapper Labs, stated, “With the launch of Dapper Collectives, we’re focused on building and releasing open source tools to help mainstream communities engage in decentralized ownership and governance on Flow. Joining the Open Meta DAO as a founding member is another amazing step towards adoption across the ecosystem and we are amped to be creating the future together”.

Gabby Dizon, CEO of Yield Guild Games, said, “The success of play-to-earn relies heavily on an open economy for gaming. We have a strong case study in the Philippines for the benefits this can have for players, and we’re excited to be joining Ryan in establishing the Open Meta DAO to apply these new models all over the world. This is an exciting time in history, but it is extremely important that we get these next few years right. A DAO is the perfect way to hand more decision-making power to the communities of people who will participate in play-to-earn games, guilds, and what comes next in the Open Metaverse”.

Jamie Burke, CEO of Outlier Ventures, added, “Since Ryan and Crucible went through our accelerator in early 2020, we have been collaborating on a shared vision for the open metaverse, based on web3 tech and principles, as a shared operating system. That starts with Crucible’s mission for universal cross-gaming-engine SSI (Self Sovereign Identity) but is going to involve a huge amount of interoperability and collaboration across ecosystems, so it’s great to have Flow, Polygon and Enjin out of the gate to represent multiple blockchains”.

The Open Meta DAO will launch with a DAO portal, an on-chain platform that facilitates DAO activity, processes, and proposals for members to vote on. Governance for the Open Meta DAO will be community-driven and will explore new practices — both tactical and philosophical — as well as the challenges and opportunities around digital sovereignty, design, new business models, mental health, emotional intelligence, and the wider social, economic, and technological implications of web3 adoption.

The DAO portal will facilitate community-driven proposals and will use a voting system with mechanisms for reputation and rewards, taking into account the sentiment of the DAO as a whole as well as each individual vote, to ensure consensus in the collective throughout the voting process.
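The article gives no technical detail on how that voting system works, so the following Python sketch is purely hypothetical; it only illustrates the general shape of token-weighted voting with a reputation multiplier, one way such a mechanism could combine individual votes:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    member: str
    tokens: float      # governance tokens held (hypothetical weighting)
    reputation: float  # earned multiplier, e.g., 1.0 = neutral
    in_favor: bool

def weight(v: Vote) -> float:
    # Each member's voting power scales with both holdings and reputation.
    return v.tokens * v.reputation

def tally(votes: list[Vote], quorum_weight: float) -> str:
    """Token-weighted, reputation-scaled majority vote with a quorum check."""
    total = sum(weight(v) for v in votes)
    if total < quorum_weight:
        return "no quorum"
    yes = sum(weight(v) for v in votes if v.in_favor)
    return "passed" if yes > total / 2 else "rejected"

print(tally([Vote("alice", 100, 1.2, True), Vote("bob", 80, 1.0, False)], 50))
```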

The funds raised from the DAO token’s public listing later in 2022 will create a treasury that provides transparent grants for community projects from developers that facilitate an open metaverse. These grants will support the many in the gaming industry who are interested in finding on-ramps for play-to-earn and on-chain game development.

This is a rallying call for immediate action and collaboration. In response to the Facebook announcement, there are a growing number of disjointed efforts around this shared open vision — in a scramble to make a stand. It’s creating syncopation in the effort. We built the Open Meta DAO on a vector for long-term success as a movement to align and unify different communities and create a path to collaborative action. We exist to be a rising tide for the entire ecosystem.

We have an opportunity to claim back our individual sovereignty and remain neutral toward any single corporate interest, or even any single blockchain. The only way we will be able to do this is with a unified effort. Together with the codified community and our core members, we will build the products that deliver a metaverse that is open and available for all to participate in and benefit from. Many Davids vs. a few Goliaths.

Get more context on Crucible here.

And get all the details on Open Meta DAO here.

Ryan Gill is Co-founder & CEO of Crucible. 


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].

