
Duke Energy used computer vision and robots to cut costs by $74M

source link: https://venturebeat.com/2021/07/18/duke-energy-used-computer-vision-and-robots-to-cut-costs-by-74m/




Duke Energy’s AI journey began because the utility had a business problem to solve, chief information officer Bonnie Titone told VentureBeat’s head of AI content strategy Hari Sivaraman at the Transform 2021 virtual conference on Thursday.

Duke Energy was facing significant challenges, such as the growing issue of climate change and the need to transition to clean energy to reach net-zero emissions by 2050. Duke Energy is considered an essential service, supplying 25 million people with electricity daily, and everything the utility does revolves around a culture of safety and reliability. Together, these factors were a catalyst for exploring AI technologies, Titone said: whatever the company chose to do had to support the clean energy transition, deliver value to customers, and give employees a safer way to work.


“We look to emerging data science tools and AI solutions, which in turn brought us to computer vision, and ultimately, drones in order to inspect our solar farms,” Titone said.

The shift to clean energy involves a significant number of solar farms (Florida alone has 3 million solar panels, Titone said), and inspecting them is a labor-intensive, time-consuming, and risky endeavor. Inspecting one unit can take about 40 hours, and a typical solar site may have somewhere between 20 and 25 units. The task is dangerous: technicians walk 500-acre solar sites with heat guns to inspect the panels and may need to touch live wires. The company began experimenting with advanced drones carrying infrared cameras to streamline the work. Technicians were able to use the images taken by the drones to determine where faults and issues were occurring. Thousands of images were stitched together with computer vision, giving technicians a much safer way to look for issues, Titone said.
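The article doesn’t detail Duke Energy’s stitching pipeline, but the general technique is well established. Below is a minimal sketch using OpenCV’s generic Stitcher in scans mode (suited to roughly planar, top-down aerial imagery); the input directory and output filename are hypothetical.

```python
# Minimal sketch: stitch a folder of drone photos into one composite image.
# Duke Energy's actual pipeline is not public; OpenCV's generic Stitcher is
# used here as a stand-in, and the file paths are hypothetical.
import glob
import cv2

images = [cv2.imread(p) for p in sorted(glob.glob("drone_flight/*.jpg"))]

# SCANS mode assumes top-down shots, as from a survey drone
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, composite = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("solar_site_composite.jpg", composite)
else:
    print(f"Stitching failed with status code {status}")
```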

After proving out computer vision, Duke Energy began to consider automating the process. The company developed a model called MOVES (Mobile Observation Vehicle and Equipment Solutions) that collects and processes the data and images from the drones and identifies faults within minutes. By applying AI and machine learning technologies, the program has significantly reduced the company’s labor and time costs. Accuracy also continued to improve over time; the latest model used in inspections reached 91% accuracy.
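The article gives no detail on how MOVES flags faults, but thermal anomaly detection is a common approach for solar inspection, since failing panels run hotter than their neighbors. The sketch below flags pixels far above the site average; the z-score threshold and input file are illustrative assumptions, not the actual model.

```python
# Minimal sketch: flag thermal "hot spots" in a radiometric infrared capture.
# The real MOVES model reportedly uses machine learning and reaches 91%
# accuracy; this simple statistical threshold is only illustrative.
import numpy as np

def find_hot_spots(temps: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Return a boolean mask of pixels far hotter than the site average."""
    return temps > temps.mean() + sigma * temps.std()

temps = np.load("flight_042_temperatures.npy")  # hypothetical temperature map
mask = find_hot_spots(temps)
print(f"{mask.mean():.2%} of pixels flagged for technician review")
```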

“We compiled that information for the technicians and gave them the ability to navigate pretty easily to where we can schedule maintenance for customers, and we did this all without a technician ever having to go out to the site,” Titone said. The program has led to more than $74 million in cost reductions and saved 385,000 man-hours.

Cloud and edge processing

Duke Energy had to consider how to process the data the drones were collecting. A typical drone flight can produce thousands of photos, sometimes with no precise location data attached to the images. Analyzing everything in the cloud just to determine whether a drone image showed a solar site would be impossible given the sheer amount of data involved, so Duke Energy had to process the images at the edge, where it could make real-time decisions. The images then had to be stitched together into a precise picture of the solar farm, without requiring somebody to actually walk the site.
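As a rough illustration of that edge-first design, the sketch below scores each frame on-device and uploads only the ones that look anomalous, rather than shipping every photo to the cloud. The scoring heuristic, threshold, and ingest endpoint are all hypothetical, not Duke Energy’s implementation.

```python
# Minimal sketch of edge-side triage: score frames locally, upload only the
# interesting ones. Endpoint, heuristic, and threshold are hypothetical.
import numpy as np
import requests

UPLOAD_URL = "https://example.com/ingest"  # hypothetical cloud ingest endpoint

def anomaly_score(temps: np.ndarray) -> float:
    # Stand-in heuristic: fraction of unusually hot pixels in the frame
    return float((temps > temps.mean() + 3 * temps.std()).mean())

def triage(frame_path: str, temps: np.ndarray, threshold: float = 0.001) -> None:
    # Upload the raw frame only when the on-device score crosses the threshold
    if anomaly_score(temps) > threshold:
        with open(frame_path, "rb") as f:
            requests.post(UPLOAD_URL, files={"image": f})
```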

Instead of trying to do everything at once, Duke Energy worked on the project in small increments. Once one piece worked, the team moved on to the next step. Because Duke Energy had its own software engineering team, it was able to build its own models with its own methodologies as a one-stop shop. This process eventually led to more than 40 products.

Titone said, “Had we not had that footprint in the cloud journey, we wouldn’t have been able to develop these models and be able to process that data as quickly as we could.”

Working with data

Titone also discussed best practices for storing and cleaning data. As the team has moved toward a cloud-based data strategy, it relies heavily on data lakes, which are accessible both to other systems and to the data analysis and data science components that must process the information quickly.

“I would say we’re using a lot of the traditional methods around data lakes in order to process all of that,” Titone said, and the team models the data with “what we call our MATLAB, which stands for machine learning, AI and deep learning.”
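Titone doesn’t name specific technologies, but a common version of this data-lake pattern lands inspection results as Parquet files in object storage, readable directly by analysis code. A minimal sketch with a hypothetical bucket and schema (assumes pandas with pyarrow and s3fs installed):

```python
# Minimal sketch of reading inspection results straight from a data lake.
# Bucket path and column names are hypothetical; requires pyarrow + s3fs.
import pandas as pd

faults = pd.read_parquet(
    "s3://utility-inspections/solar/faults/",  # hypothetical lake location
    columns=["site_id", "panel_id", "fault_type", "detected_at"],
)
# Rank sites by number of detected faults for maintenance scheduling
print(faults.groupby("site_id").size().sort_values(ascending=False).head())
```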

Reflecting on the high accuracy the product reached, Titone said it was important to be OK with failing at the beginning. “I think at the beginning of the journey, we didn’t have an expectation that we would get [it] right out of the gate,” she said. As time went on, the team learned and continued to modify the model based on the results. For example, over successive iterations the team realized it should not only extract images but also piece different processing techniques together, and it adjusted the angle and height of the drone flights.

AI as a career opportunity

The fact that AI is more efficient and cost-effective does result in reduced labor hours, which raises the concern that AI is taking jobs away from people. Titone said the better perspective was to view this as an opportunity. She said that upskilling employees to be able to work with AI was an investment in the workforce. If the employees understand AI, she said, they become more valuable as workers because they qualify for more advanced roles.

“I never approach AI as taking somebody’s job or role; the way I’ve always approached AI is that it should complement our workforce, that it should give us a set of skills and career paths that our teammates can take,” Titone said.

Sponsored

Government and business can develop an ethical AI future together, KPMG study finds

VB Staff | June 8, 2021 5:20 AM


Presented by KPMG 


The pandemic turned the world upside down, and businesses stepped up, accelerating their digital transformation and harnessing the power of artificial intelligence to overcome new challenges in a new world.

A new study by KPMG, “Thriving in an AI World: Unlocking the Value of AI across 7 Industries,” found that while some executives are experiencing a bit of COVID-19-induced whiplash as they reckon with AI challenges, industry leaders are optimistic about the new administration’s role in helping to achieve an AI-forward future.

“We reached out to decision-makers, many of whom said AI is moving too fast, but many also felt that the U.S. is being left behind when it comes to AI adoption,” says Swami Chandrasekaran, managing director at the KPMG Digital Lighthouse and Head of Digital Solutions Architecture.

Yet overwhelmingly, industry leaders believe the Biden administration will not only help advance the adoption of AI, they also believe the government has an essential role to play in regulating AI technology as adoption grows.

This confidence comes from a confluence of major events across the globe, Chandrasekaran says, including how the pandemic accelerated activity in the AI landscape among both consumers and enterprises. Major companies and technology vendors are investing more rapidly in the technology, a growing number of AI startups are springing up every week, and the way ordinary people interact in their daily lives has changed fundamentally.

“The huge uptick in mainstream AI technologies coming to the market, data being made available, and AI becoming increasingly ubiquitous in daily life because of the pandemic all come parallel to this change in our administration,” he says. “This intersection point is causing these expectations to rise.”

What industry leaders want from the Biden administration

Business leaders firmly believe the government has an essential role to play in regulating AI technology. And industry execs from industrial manufacturing (90%), technology (88%), and retail (85%) are most optimistic that the Biden administration will help advance the adoption of AI in the enterprise.

Younger respondents were more optimistic, Chandrasekaran says, with 90% of Gen X leaders positive about the current administration versus 79% of baby boomers. But expectations around how and where the administration would play a role in adoption differ, with government execs focused on health care and vaccine rollouts as well as defense and national security.

The industrial and manufacturing industry wants to ramp up AI adoption as a solution for things like predictive maintenance of equipment, schedule optimization, product design and engineering, and supply chain optimization. Meanwhile, health care execs believe the administration will help adoption in use cases like telemedicine and patient care, as well as vaccine administration.

Advancing AI: Where business fits in

Going forward, while leaders across industries recognize how essential the government’s role is in regulating AI, navigating the evolving AI landscape will have to be a collaborative effort. Trust in government as an authority on AI has been growing, but 33% of respondents identified business as the most trusted authority.

The bipartisan National Security Commission on Artificial Intelligence also recently warned that the U.S. isn’t yet prepared to defend or compete in the AI era. The technologists, national security professionals, business executives, and academic leaders of the committee have spelled out an AI strategy: a comprehensive roadmap for government to defend against AI threats, employ the technology responsibly for national security, and secure the country’s prosperity, security, and welfare by winning the global technology race.

However, to execute that strategy, and to continue driving the AI narrative in the U.S., the committee said government will need to partner with business leaders, academia, and civil society. In part, that comes from the need for responsible, effective AI, Chandrasekaran says.

“Security, privacy, and ethics are posing the biggest risks for AI, and in our study, both business and government decision-makers unanimously agreed that there needs to be an AI ethics policy,” he explains.

However, in the rush to adopt and implement AI strategies, tools, and solutions, particularly over the past year, many organizations don’t yet have an ethics policy in place — or it’s just not being enforced.

Only 53% of government leaders said their department has an ethics policy, while 70% said AI is moving so fast that it’s hard to keep up; a policy that works today may be obsolete next week.

Many study respondents were ready to accept the government defining those regulations — including 86% of leaders in financial services.

“Across the board, having a baseline set of governing policies and ethics is not a bad thing for the government to define — but at the same time, make sure you don’t stifle innovation,” Chandrasekaran says. “The government can help define baseline regulation, but after that, the business role is creating the executable version of an AI ethics policy.”

Businesses need to implement conscious, continuous monitoring for bias and drift right from the start as they develop their AI models. Imbalances will and do occur in data and models, and in worst-case scenarios they can land businesses in the headlines. This monitoring needs to happen alongside greater transparency and explainability of AI models. For instance, if a consumer loan application is rejected by an AI algorithm, it should be clear from the model’s results why that conclusion was reached, including the counterfactual: what would have had to change for the loan to be approved.
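As a concrete, deliberately toy illustration of that counterfactual idea, the sketch below trains a small loan-approval model and, for a rejected applicant, searches for the smallest income increase that would flip the decision. The model, features, and data are all hypothetical stand-ins, not anything described in the KPMG study.

```python
# Minimal sketch of a counterfactual explanation for a rejected loan.
# Toy model and data; features are income (in $k) and debt ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[80, 0.2], [30, 0.6], [60, 0.3], [25, 0.7], [90, 0.1], [40, 0.5]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = rejected
model = LogisticRegression().fit(X, y)

applicant = np.array([[35, 0.55]])
if model.predict(applicant)[0] == 0:
    # Search for the smallest income increase that flips the decision
    for bump in range(1, 100):
        if model.predict(applicant + [[bump, 0]])[0] == 1:
            print(f"Rejected; ~${bump}k more income would have led to approval")
            break
```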

Businesses also need to plan for continuous evaluation, Chandrasekaran adds, because building a model and checking it for bias isn’t a one-and-done operation. As models learn and develop and new data is added, they must be continuously evaluated for inherited bias and drift. And from a security and privacy perspective, businesses need to continually check the model’s resilience with security penetration tests.
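One standard way to operationalize that kind of drift check is a two-sample statistical test comparing live feature data against the training-time baseline. A minimal sketch, using a Kolmogorov-Smirnov test; the threshold and data are illustrative, not KPMG’s methodology:

```python
# Minimal sketch of a drift check: compare incoming feature values against
# the training-time distribution with a Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    _, p_value = ks_2samp(baseline, live)
    return p_value < alpha  # small p-value: the distributions differ

rng = np.random.default_rng(0)
baseline = rng.normal(50, 10, 5000)  # feature distribution at training time
incoming = rng.normal(55, 10, 5000)  # shifted production data
print("retrain recommended" if has_drifted(baseline, incoming) else "no drift detected")
```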

Many clients Chandrasekaran is working with acknowledge that they need to bring bias detection, imbalance detection, and drift detection into their software development lifecycle, including DevOps, because at the end of the day an AI model is, at its core, software, he says. But that’s just the first step.

“If you acknowledge that you need to run these tests, use these tools, then businesses need to ask themselves, which are the metrics to measure and how do you quantify them? What is the threshold based on which you pass or fail? What are the tools and technologies that I need to bring into this process? What should my DevOps for AI look like?” he explains. “Now you’re getting to an executable version of an AI ethics policy.”
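One way to read that “executable version” is as a gate in the model’s CI pipeline: an evaluation job computes the metrics, and explicit thresholds decide whether the model ships. A minimal sketch, with hypothetical metric names and thresholds:

```python
# Minimal sketch of a CI gate encoding an AI ethics/quality policy.
# Metric names and thresholds are hypothetical and chosen per use case.
import sys

THRESHOLDS = {"accuracy_floor": 0.85, "bias_gap_ceiling": 0.05}

def gate(metrics: dict) -> bool:
    ok = True
    if metrics["accuracy"] < THRESHOLDS["accuracy_floor"]:
        print("FAIL: accuracy below floor")
        ok = False
    if metrics["demographic_parity_gap"] > THRESHOLDS["bias_gap_ceiling"]:
        print("FAIL: bias gap above ceiling")
        ok = False
    return ok

if __name__ == "__main__":
    # In CI, these numbers would come from the evaluation job's output
    metrics = {"accuracy": 0.91, "demographic_parity_gap": 0.03}
    sys.exit(0 if gate(metrics) else 1)  # nonzero exit blocks deployment
```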

Moving forward into an ethical AI future

Business leaders are clear in their belief that AI will yield tangible results for their business and their industry. And they are optimistic about the impact the Biden administration will have on AI adoption and regulations, but achieving those goals requires businesses to make significant investments up front, Chandrasekaran says.

That includes prioritizing, refactoring, or transforming large applications and systems into reusable microservices that would allow for embedding or integrating AI into them. It also includes complying with the data security and privacy regulations that are already in existence.

“Everybody is very conscious of the fact that you don’t want to create an AI model that cannot be measured or quantified for things like bias,” he says. “But care has to be taken to ensure you’re using only the data you’re supposed to use, and respecting the privacy of the individuals from whom the data may have been collected.”

Companies also need to invest in their people, skilling up existing employees and making them data and AI literate. They must put a solid data infrastructure in place to train the AI models. And, always, they must evaluate AI use cases in terms of their impact on the business.

“There’s a vital balancing act in nailing down the budget and resources needed to implement these AI investments — how do you compete and make tradeoffs with investment in other areas of your business?” Chandrasekaran says. “With clients, we challenge them and ask, why this use case? What is the business value? What’s the return on investment? What metrics can we quantify? There’s always a business value.”

Dig Deeper: Read the entire 2021 KPMG study, “Thriving in an AI World.”


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].

