Esri boosts digital twin tech for its GIS mapping tools

source link: https://venturebeat.com/2021/07/18/esri-boosts-digital-twin-tech-for-its-gis-mapping-tools/

Geographic information system (GIS) mainstay Esri is looking to expand its stake in digital twin technologies through significant updates across its product portfolio. As it announced at its recent user conference, the company is updating its offerings for complex data conversion, integration, and workflows to further its digital twin mission.

GIS software is, in fact, foundational to many digital twin technologies, though its role is sometimes overlooked as the still-nebulous digital twin concept searches for clarity in the market.

Esri’s updates to its ArcGIS Velocity software promise to make diverse big data types more readily useful to digital twin applications. At Esri User Conference 2021, these enhancements were also joined by improvements in reality capture, indoor mapping, and user experience design for digital twin applications.

Reality capture is a key to enabling digital twins, according to Chris Andrews, who leads Esri product development in geo-enabled systems, intelligent cities, and 3D. Andrews gave VentureBeat an update on crucial advances in Esri's digital twin capabilities.

“Reality capture is a beginning — an intermittent snapshot of the real world in high accuracy 3D, so it’s an integral part of hydrating the digital twin with data,” he said. “One area we will be looking at in the future is indoor reality capture, which is something for which we’re hearing significant demand.”

What is reality capture? One of the most important steps in building a digital twin is to automate the process of capturing and converting raw data into digital data.

There are many types of raw data, and organizing them generally involves manual work. Esri is rapidly expanding workflows for creating, visualizing, and analyzing reality capture content from different sources. This includes point clouds (lidar), oriented and spherical imagery (georeferenced photos and 360-degree panoramas), reality meshes, and data derived from 2D and 3D raster and vector content such as CAD drawings.
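To make that raw-to-usable step concrete, here is a minimal, illustrative sketch in Python that bounds and thins a point cloud before it would be published to a scene layer or digital twin viewer. The data is synthetic (a stand-in for real lidar returns), and the thinning approach is a generic voxel filter rather than anything specific to Esri's I3S pipeline.

```python
# Illustrative only: a synthetic point cloud stands in for raw lidar returns.
# Real reality-capture pipelines (including Esri's I3S scene layers) carry far
# more structure; this only shows the raw-to-usable step the article describes:
# bounding the data and thinning it so a viewer is not asked to draw every point.
import numpy as np

rng = np.random.default_rng(seed=42)
points = rng.uniform(low=[0, 0, 0], high=[500, 500, 60], size=(1_000_000, 3))

# Spatial extent: the minimum metadata a downstream GIS layer needs.
xyz_min, xyz_max = points.min(axis=0), points.max(axis=0)

# Naive voxel thinning: keep one point per 1-meter cell.
voxel = np.floor(points / 1.0).astype(np.int64)
_, keep_idx = np.unique(voxel, axis=0, return_index=True)
thinned = points[np.sort(keep_idx)]

print(f"extent min={xyz_min.round(1)}, max={xyz_max.round(1)}")
print(f"kept {len(thinned):,} of {len(points):,} points after thinning")
```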

For example, Esri has combined elements it gained from acquiring SiteScan and nFrames over the last two years with its in-house developed Drone2Map. Esri also created and is growing the community around I3S, an open specification for fusing data captured by drones, airplanes, and satellites, Andrews told VentureBeat.

ArcGIS Velocity handles big data

Esri recently disclosed updates to ArcGIS Velocity, its cloud integration service for streaming analytics and big data.

ArcGIS Velocity is a cloud-native, no-code framework for connecting to IoT data platforms and asset tracking systems, and making their data available to geospatial digital twins for visualization, analysis, and situational awareness. Esri released the first version of ArcGIS Velocity in February 2020.

“Offerings like ArcGIS Velocity are integral in bringing data into the ArcGIS platform and detecting incidents of interest,” said Suzanne Foss, Esri product manager.

Updates include stateful real-time processing introduced in December 2020, machine learning tools in April and June 2021, and dynamic real-time geofencing analysis in June 2021. The new stateful capabilities allow users to detect critical incidents in a sensor’s behavior over time, such as change thresholds and gap detection. Dynamic geofencing filters improve the analysis between constantly changing data streams.
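As a rough illustration of what "stateful" stream processing means in practice, the sketch below keeps per-sensor state between readings so it can flag threshold crossings and reporting gaps. It is plain Python with invented sensor names and thresholds, not the ArcGIS Velocity service itself, which exposes these capabilities as a no-code cloud offering.

```python
# Conceptual sketch of stateful stream analysis: the detector remembers each
# sensor's previous reading, so it can flag threshold crossings and reporting
# gaps over time. Sensor IDs, threshold, and gap window are invented.
from dataclasses import dataclass

@dataclass
class SensorState:
    last_value: float | None = None
    last_time: float | None = None  # seconds since epoch

THRESHOLD = 80.0       # e.g., degrees C
MAX_GAP_SECONDS = 300  # flag a sensor that goes silent for more than 5 minutes

states: dict[str, SensorState] = {}

def process_reading(sensor_id: str, value: float, timestamp: float) -> list[str]:
    """Return the list of incidents raised by this reading."""
    incidents = []
    state = states.setdefault(sensor_id, SensorState())

    # Gap detection: compare against the previously seen timestamp.
    if state.last_time is not None and timestamp - state.last_time > MAX_GAP_SECONDS:
        incidents.append(f"{sensor_id}: gap of {timestamp - state.last_time:.0f}s")

    # Threshold crossing: fires on the transition, not on every high value.
    if state.last_value is not None and state.last_value <= THRESHOLD < value:
        incidents.append(f"{sensor_id}: crossed {THRESHOLD} (now {value})")

    state.last_value, state.last_time = value, timestamp
    return incidents

# Example: the second reading crosses the threshold, the third reveals a gap.
for args in [("pump-1", 78.0, 0.0), ("pump-1", 85.0, 60.0), ("pump-1", 82.0, 1000.0)]:
    print(process_reading(*args))
```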

Velocity is intended to lower the bar for bringing in data from many different sources, according to Foss. For example, a government agency could quickly analyze data from traffic services, geotagged event data, and weather reports to make sense of a new problem. While this data may have existed before, it took considerable work to bring it all together. Velocity lets users mash the data up into new analytics or situational reports with a few clicks and appropriate governance. Emerging digital twins are expected to tap into such capabilities.

Building information modeling tie-ins

One big challenge with digital twins is that vendors adopt file formats optimized for their particular discipline, such as engineering, operations, supply chain management, or GIS. When data is shared across tools, some of the fidelity may be lost. Esri has made several advances to bridge this gap, such as adding support for Autodesk Revit and open IFC formats. It has also improved the fidelity of reading CAD data from Autodesk Civil 3D and Bentley MicroStation in a way that preserves semantics, attribution, and graphics, and it has enhanced integration into ArcGIS Indoors.
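The fidelity problem is easiest to see at the attribute level: when a BIM element becomes a GIS feature, its identity and properties need to survive the hand-off. The sketch below shows the idea with a hypothetical wall element converted to a GeoJSON feature; the field names and coordinates are invented for illustration and are not a formal IFC-to-GIS mapping or Esri's actual conversion logic.

```python
# A minimal sketch of attribute preservation across the BIM-to-GIS hand-off.
# The element record and its fields (GlobalId, IfcType, FireRating, footprint)
# are illustrative only.
import json

bim_element = {
    "GlobalId": "2O2Fr$t4X7Zf8NOew3FLKI",   # IFC-style GUID
    "IfcType": "IfcWallStandardCase",
    "Name": "Exterior Wall - Level 2",
    "FireRating": "2HR",
    # Footprint vertices already projected to WGS84 lon/lat for brevity.
    "footprint": [[-117.19, 34.05], [-117.19, 34.06], [-117.18, 34.06],
                  [-117.18, 34.05], [-117.19, 34.05]],
}

# Build a GeoJSON feature: geometry for the map, properties for semantics,
# so nothing is lost when the element leaves the BIM tool.
feature = {
    "type": "Feature",
    "geometry": {"type": "Polygon", "coordinates": [bim_element["footprint"]]},
    "properties": {k: v for k, v in bim_element.items() if k != "footprint"},
}

print(json.dumps(feature, indent=2))
```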

Workflows are another area of focus for digital twin technology. The value of a digital twin comes from creating digital threads that span multiple applications and processes, Andrews said. It is not easy to embed these digital threads in actual workflows.

“Digital twins tend to be problem-focused,” he said. “The more that we can do to tailor specific product experiences to include geospatial services and content that our users need to solve specific problems, the better the end user experience will be.”

Esri has recently added new tools to help implement workflows for different use cases.

  • ArcGIS Urban helps bring together available data with zoning information, plans, and projects to enable a digital twin for planning applications.
  • ArcGIS Indoors simplifies the process of organizing workflows that take data from CAD tools for engineering facilities, building information modeling (BIM) data for managing operations, and location data from tracking assets and people. These are potentially useful in, for example, ensuring social distancing.
  • ArcGIS GeoBIM is a new service slated for launch later this year that will provide a web experience for connecting ArcGIS and Autodesk Construction Cloud workflows.

Also expected to underlie digital twins are AR/VR technologies, AI, and analytics. To handle that, Esri has been working to enable the processing of content as diverse as full-motion imagery, reality meshes, and real-time sensor feeds. New AI, machine learning, and analytics tools can ingest and process such content in the cloud or on-premises.

AI models for digital twins

The company has also released several enhancements for organizing map imagery, vector data, and streaming data feeds into features for AI and machine learning models. These can work in conjunction with ArcGIS Velocity, either for training new AI models or for pushing them into production to provide insight or improve decision-making.

For example, a farmer or agriculture service may train an AI model on digital twins of farms, informed by satellite feeds, detailed records of equipment movement, and weather predictions, to suggest steps to improve crop yield.
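A toy version of that farm scenario is sketched below: a regression model trained on tabular features a farm digital twin might supply (rainfall, heat days, a vegetation index derived from imagery, equipment passes) to predict yield. The data is synthetic and the feature set is invented for illustration; it does not reflect Esri's tooling or any production agronomy model.

```python
# Toy farm-yield model on synthetic features; nothing here is Esri-specific.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=7)
n_fields = 500
X = np.column_stack([
    rng.uniform(200, 800, n_fields),   # seasonal rainfall (mm)
    rng.integers(5, 40, n_fields),     # days above 35 C
    rng.uniform(0.2, 0.9, n_fields),   # mean NDVI from satellite imagery
    rng.integers(1, 6, n_fields),      # tillage/equipment passes
])
# Synthetic "ground truth" yield with noise (tons per hectare).
y = 2.0 + 0.004 * X[:, 0] - 0.05 * X[:, 1] + 6.0 * X[:, 2] + rng.normal(0, 0.5, n_fields)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("R^2 on held-out fields:", round(model.score(X_test, y_test), 3))
print("Predicted yield for a wet, cool, green field:",
      round(model.predict([[700, 8, 0.85, 2]])[0], 2), "t/ha")
```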

Taken as a whole, Esri’s efforts seek to tie very different kinds of data together into a comprehensive digital twin. Andrews said the company has made strides to improve how these might be scaled for climate change challenges. Esri can potentially power digital twins at “the scale of the whole planet” and address pressing issues of sustainability, Andrews said.

Like so many events, Esri UC 2021 was virtual. The company pledged to resume in-person events next year.

Sponsored

Government and business can develop an ethical AI future together, KPMG study finds

VB Staff | June 08, 2021 05:20 AM


Presented by KPMG 


The pandemic turned the world upside down and businesses stepped up to the challenge, accelerating their digital transformation and harnessing the power of artificial intelligence to help overcome new challenges in a new world.

A new study by KPMG, “Thriving in an AI World: Unlocking the Value of AI across 7 Industries,” found that while some executives are experiencing a bit of COVID-19-induced whiplash as they reckon with AI challenges, industry leaders are optimistic about the new administration’s role in helping to achieve an AI-forward future.

“We reached out to decision-makers, many of whom said AI is moving too fast, but many also felt that the U.S. is being left behind when it comes to AI adoption,” says Swami Chandrasekaran, managing director at the KPMG Digital Lighthouse and Head of Digital Solutions Architecture.

Yet overwhelmingly, industry leaders believe not only that the Biden administration will help advance the adoption of AI but also that the government has an essential role to play in regulating AI technology as adoption grows.

This confidence comes from a confluence of major events across the globe, Chandrasekaran says, including how the pandemic accelerated activity in the AI landscape among both consumers and enterprises. Major companies and technology vendors are investing more rapidly in the technology, a growing number of AI startups are springing up every week, and the way ordinary people interact in their daily lives has changed fundamentally.

“The huge uptick in mainstream AI technologies coming to the market, data being made available, and AI becoming increasingly ubiquitous in daily life because of the pandemic all come parallel to this change in our administration,” he says. “This intersection point is causing these expectations to rise.”

What industry leaders want from the Biden administration

Business leaders firmly believe the government has an essential role to play in regulating AI technology. And industry execs from industrial manufacturing (90%), technology (88%), and retail (85%) are most optimistic that the Biden administration will help advance the adoption of AI in the enterprise.

Younger respondents were more optimistic, Chandrasekaran says, with 90% of Gen X leaders positive about the current administration versus 79% of baby boomers. But expectations around how and where the administration will play a role in adoption differ, with government execs focused on health care and vaccine rollouts as well as defense and national security.

The industrial and manufacturing industry wants to ramp up AI adoption as a solution for things like predictive maintenance of equipment, including schedule optimization, as well as product design and engineering and supply chain optimization. Meanwhile, health care execs believe the administration will help adoption in use cases like telemedicine and patient care, as well as vaccine administration.

Advancing AI: Where business fits in

Going forward, while leaders across industries recognize how essential the government’s role is in regulating AI, navigating the evolving AI landscape will have to be a collaborative effort. Trust in government as an authority on AI has been growing, but 33% of respondents identified business as the most trusted authority.

The bipartisan National Security Commission on Artificial Intelligence also recently warned that the U.S. isn’t yet prepared to defend or compete in the AI era. The technologists, national security professionals, business executives, and academic leaders of the committee have spelled out an AI strategy – a comprehensive roadmap for government to defend against AI threats, employ the technology responsibly for national security, and secure the country’s prosperity, security, and welfare by winning the global technology race.

However, to execute that strategy, and to continue driving the AI narrative in the U.S., the committee said government will need to partner with business leaders, academia, and civil society. In part, that comes from the need for responsible, effective AI, Chandrasekaran says.

“Security, privacy, and ethics are posing the biggest risks for AI, and in our study, both business and government decision-makers unanimously agreed that there needs to be an AI ethics policy,” he explains.

However, in the rush to adopt and implement AI strategies, tools, and solutions, particularly over the past year, many organizations don’t yet have an ethics policy in place — or it’s just not being enforced.

Only 53% of government leaders said their department has an ethics policy, while 70% said AI is moving so fast that it’s hard to keep up, meaning a policy that works today may be obsolete next week.

Many study respondents were ready to accept the government defining those regulations — including 86% of leaders in financial services.

“Across the board, having a baseline set of governing policies and ethics is not a bad thing for the government to define — but at the same time, make sure you don’t stifle innovation,” Chandrasekaran says. “The government can help define baseline regulation, but after that, the business role is creating the executable version of an AI ethics policy.”

Businesses need to implement conscious, continuous monitoring for bias and drift right from the start as they develop their AI models. Imbalances will and do occur in data and models, and in worst-case scenarios they can land businesses in the headlines. This monitoring needs to happen alongside greater transparency and explainability of AI models. For instance, if a consumer loan application is rejected by an AI algorithm, it should be clear from the model’s results why that conclusion was reached, including the counterfactual: what would have had to be different for the loan to be approved.
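To show what such an explanation could look like in the simplest possible case, here is a sketch using a hand-weighted linear scoring model: each feature's contribution to the decision is reported, and a single-feature counterfactual is computed directly. The weights, threshold, and applicant values are all invented; real credit models and their regulatory requirements are considerably more involved.

```python
# Toy linear loan-scoring model: per-feature contributions explain a rejection,
# and a one-feature counterfactual shows what would have flipped the outcome.
# All numbers are invented for illustration.
weights = {"income_k": 0.04, "debt_ratio": -3.0, "late_payments": -0.8}
bias = -1.0
APPROVE_AT = 0.0  # approve when the score is non-negative

applicant = {"income_k": 45, "debt_ratio": 0.55, "late_payments": 2}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())
decision = "approved" if score >= APPROVE_AT else "rejected"

print(f"decision: {decision} (score={score:.2f})")
for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature:>14}: {c:+.2f}")

# Counterfactual on a single feature: income needed, all else unchanged.
needed_income = (APPROVE_AT - bias
                 - weights["debt_ratio"] * applicant["debt_ratio"]
                 - weights["late_payments"] * applicant["late_payments"]) / weights["income_k"]
print(f"income needed for approval, all else equal: ~{needed_income:.0f}k")
```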

Businesses also need to plan for continuous evaluation, Chandrasekaran adds, because building a model and checking it for bias isn’t a one-and-done operation. As models learn and develop, and as new data is added, they must be continuously evaluated for inherited bias and drift. And from a security and privacy perspective, businesses need to continually check the model’s resilience with security penetration tests.
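One common way to make "continuously evaluated for drift" concrete is the population stability index (PSI), which compares a feature's distribution at training time with what the model sees in production. The sketch below uses synthetic income data; the 0.1 and 0.25 thresholds are conventional rules of thumb, not a KPMG or regulatory standard.

```python
# Population stability index (PSI) on a single feature, with synthetic data.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample (expected) and a new sample (actual)."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]  # interior cut points
    e_frac = np.bincount(np.searchsorted(cuts, expected), minlength=bins) / len(expected)
    a_frac = np.bincount(np.searchsorted(cuts, actual), minlength=bins) / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)   # avoid log(0) and division by zero
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(seed=1)
training_income = rng.normal(60, 15, 10_000)    # distribution the model was trained on
production_income = rng.normal(52, 15, 2_000)   # incoming data has shifted downward

score = psi(training_income, production_income)
status = "stable" if score < 0.1 else "moderate drift" if score < 0.25 else "significant drift"
print(f"PSI = {score:.3f} -> {status}")
```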

Many clients Chandrasekaran works with acknowledge that they need to bring bias detection, imbalance detection, and drift detection into their software development lifecycle, including DevOps, because at the end of the day an AI model is, at its core, software, he says. But that’s just the first step.

“If you acknowledge that you need to run these tests, use these tools, then businesses need to ask themselves, which are the metrics to measure and how do you quantify them? What is the threshold based on which you pass or fail? What are the tools and technologies that I need to bring into this process? What should my DevOps for AI look like?” he explains. “Now you’re getting to an executable version of an AI ethics policy.”

Moving forward into an ethical AI future

Business leaders are clear in their belief that AI will yield tangible results for their business and their industry. And they are optimistic about the impact the Biden administration will have on AI adoption and regulations, but achieving those goals requires businesses to make significant investments up front, Chandrasekaran says.

That includes prioritizing, refactoring, or transforming large applications and systems into reusable microservices that would allow for embedding or integrating AI into them. It also includes complying with the data security and privacy regulations that are already in existence.

“Everybody is very conscious of the fact that you don’t want to create an AI model that cannot be measured or quantified for things like bias,” he says. “But care has to be taken to ensure you’re using only the data you’re supposed to use, and respecting the privacy of the individuals from whom the data may have been collected.”

Companies also need to invest in their people, skilling up existing employees, and making them data and AI literate. They must put a solid data infrastructure in place to train the AI models. And, always, they must evaluate AI use cases in terms of their impact to the business.

“There’s a vital balancing act in nailing down the budget and resources needed to implement these AI investments — how do you compete and make tradeoffs with investment in other areas of your business?” Chandrasekaran says. “With clients, we challenge them and ask, why this use case? What is the business value? What’s the return on investment? What metrics can we quantify? There’s always a business value.”

Dig Deeper: Read the entire 2021 KPMG study, “Thriving in an AI World.”


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].

