Source: https://venturebeat.com/ai/do-more-with-less-why-public-cloud-services-are-key-for-ai-and-hpc-in-an-uncertain-2023/

VB Lab Insights

‘Do more with less’: Why public cloud services are key for AI and HPC in an uncertain 2023

Image Credit: Getty Images

This article is part of a VB Lab Insights series on AI sponsored by Microsoft and Nvidia.

Don’t miss additional articles in this series providing new industry insights, trends and analysis on how AI is transforming organizations. Find them all here.  


Amid widespread uncertainty, enterprises in 2023 face new pressure to innovate profitably and to improve sustainability and resilience while spending less. C-suites, concerned with recession, inflation, valuations, fiscal policy, energy costs, the pandemic, supply chains, war and other geopolitical issues, have made “do more with less” the order of the day across industries and organizations of all sizes.

After two years of heavy investment, many businesses are reducing capital spending on technology and taking a closer look at IT outlays and ROI. Yet unlike many past periods of belt-tightening, the current uneasiness has not yet led to widespread, across-the-board cuts to technology budgets.

Public cloud and AI infrastructure services top budget items

On the contrary, recent industry surveys and forecasts clearly indicate a strong willingness among enterprise leaders to continue and even accelerate funding for optimization and transformation. That’s especially true for strategic AI, sustainability, resiliency and innovation initiatives that use public clouds and services to support critical workloads like drug discovery and real-time fraud detection.

Gartner predicts worldwide spending on public cloud services will reach nearly $600 billion in 2023, up more than 20% year over year. Infrastructure as a Service (IaaS) is expected to be the fastest-growing segment, with spending up nearly 30% to $150 billion, followed by Platform as a Service (PaaS), up 23% to $136 billion.

“Current inflationary pressures and macroeconomic conditions are having a push-and-pull effect on cloud spending,” writes Sid Nag, Vice President Analyst at Gartner. “Cloud computing will continue to be a bastion of safety and innovation, supporting growth during uncertain times due to its agile, elastic and scalable nature.” The firm forecasts a continued decline in spending growth for traditional (on-premises) technology through 2025, when it will be eclipsed by cloud (Figure 1). Other researchers see similar growth in related areas, including AI infrastructure (Figure 2).

Figure 1: Global spending on cloud technology is expected to surpass traditional on-premises investments in 2025.

Figure 2: Forecast growth in related areas, including AI infrastructure.

Omar Khan, General Manager of Microsoft Azure, says savvy enterprise budgeters continue to show a strong strategic belief in public cloud economics and benefits in volatile market conditions. Elasticity and reduced costs for IT overhead and management are especially attractive to the senior IT and business leaders he speaks with, Khan says, as are newer “multi-dimensional” capabilities, such as accelerated AI processing.

Why public cloud makes business sense now

Leveraging public clouds to cost-effectively advance strategic business and technology initiatives makes good historical, present and future sense, says Khan. Today’s cloud services build on proven economics, deliver new capabilities for current corporate imperatives, and provide a flexible and reusable foundation for tomorrow. That’s especially true for cloud infrastructure and for scaling AI and HPC into production, and here’s why:

1. Public cloud infrastructure and services deliver superior economics

In the decade or so since cloud began to gain traction, one thing has become clear: cloud offers far more favorable economics than on-premises infrastructure.

An in-depth 2022 analysis by IDC, sponsored by Microsoft, found a wide range of dramatic financial and business benefits from modernizing and migrating with public cloud. Most notable: a 37% drop in operations costs, 391% ROI in three years, and $139 million higher revenue per year, per organization.

While not AI-specific, such dramatic results should impress even the most tight-fisted CFOs and technology committees. Compare that with a recent survey in which only 17% of respondents reported high utilization of hardware, software and cloud resources worth millions, much of it earmarked for AI.

Khan says that when making the case, teams should avoid simplistic A-to-B cost comparisons of individual workloads. Instead, he advises focusing on the number that matters: total cost of ownership (TCO). Dave Salvator, Director of Product Marketing at Nvidia’s Accelerated Computing Group, notes that running AI models on powerful time-metered systems completes jobs sooner, which typically means a lower total bill even though such systems cost more per hour. Low utilization of IT resources, he adds, means organizations are sitting on unused capacity; by right-sizing in the cloud and using only what they need, they can show far better ROI and TCO.
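
To see how the TCO framing differs from a simple rate comparison, here is a minimal sketch in Python. Every figure is a hypothetical placeholder (none comes from Microsoft, Nvidia or the IDC study); the point is only that an owned cluster is paid for around the clock regardless of utilization, while metered capacity is paid for only during the hours it does useful work.

```python
# Hypothetical TCO comparison: under-utilized on-premises cluster vs. metered cloud.
# All figures are illustrative assumptions, not vendor or analyst numbers.

HOURS_PER_YEAR = 8_760

onprem_capex_per_year = 900_000   # assumed hardware cost, amortized per year
onprem_opex_per_year = 350_000    # assumed power, cooling, space, admin staff
onprem_utilization = 0.17         # mirrors the low-utilization survey figure above

cloud_rate_per_hour = 250         # assumed metered rate for comparable capacity
useful_hours = HOURS_PER_YEAR * onprem_utilization  # hours of real work needed

onprem_tco = onprem_capex_per_year + onprem_opex_per_year
cloud_tco = cloud_rate_per_hour * useful_hours

print(f"On-premises TCO per year: ${onprem_tco:,.0f} (paid whether busy or idle)")
print(f"Cloud TCO per year:       ${cloud_tco:,.0f} (paid for {useful_hours:,.0f} busy hours)")
print(f"Cost per useful hour, on-premises: ${onprem_tco / useful_hours:,.0f}")
print(f"Cost per useful hour, cloud:       ${cloud_rate_per_hour:,.0f}")
```

At the assumed 17% utilization, idle hours dominate the on-premises cost per useful hour; as utilization rises the gap narrows, which is exactly why Khan and Salvator frame the decision in terms of TCO and right-sizing rather than list prices.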

2. Purpose-built cloud infrastructure and supercomputers meet the demanding requirements of AI

Infrastructure is increasingly understood as a fatal choke point for AI initiatives. “[Our] research consistently shows that inadequate or lack of purpose-built infrastructure capabilities are often the cause of AI projects failing,” says Peter Rutten, IDC research vice president and global research lead on Performance Intensive Computing Solutions. Yet, he concludes, “AI infrastructure remains one of the most consequential but the least mature of infrastructure decisions that organizations make as part of their future enterprise.”

The reasons, while complex, boil down to this: Performance requirements for AI and HPC are radically different from other enterprise applications. Unlike many conventional cloud workloads, increasingly sophisticated and huge AI models with billions of parameters need massive amounts of processing power. They also demand lightning-fast networking and storage at every stage for real-time applications, including natural language processing (NLP), robotic process automation (RPA), machine learning and deep learning, computer vision and many others.

“Acceleration is really the only way to handle a lot of these cutting-edge workloads. It’s table stakes,” explains Nvidia’s Salvator. “Especially for training, because the networks continue to grow massively in terms of size and architectural complexity. The only way to keep up is to train in a reasonable time that’s measured in hours or perhaps days, as opposed to weeks, months, or possibly years.”
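
A rough back-of-the-envelope calculation shows why. The sketch below uses the common approximation that training a transformer takes roughly 6 FLOPs per parameter per token; every figure in it (model size, token count, per-accelerator throughput, utilization) is an assumption chosen for illustration, not a benchmark of any Azure or Nvidia product.

```python
# Back-of-the-envelope training-time estimate. All numbers are illustrative
# assumptions, not measured benchmarks of any particular system.

params = 70e9           # assumed model size: 70 billion parameters
tokens = 1e12           # assumed training set: 1 trillion tokens
train_flops = 6 * params * tokens   # common ~6 * N * D approximation

per_gpu_flops = 300e12  # assumed sustained throughput per accelerator (FLOP/s)
utilization = 0.40      # assumed end-to-end efficiency after communication overhead

def days_to_train(num_gpus: int) -> float:
    """Wall-clock days at the assumed aggregate throughput."""
    effective_flops = num_gpus * per_gpu_flops * utilization
    return train_flops / effective_flops / 86_400  # seconds per day

for n in (8, 256, 2048):
    print(f"{n:>5} accelerators -> ~{days_to_train(n):,.0f} days")
```

Under these assumptions, a handful of GPUs would grind away for years; only a large accelerated cluster brings the run into the days-to-weeks range Salvator describes.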

AI’s stringent demands have sparked development of innovative new ways to deliver specialized scale-up and scale-out infrastructures that can handle enormous large language models (LLMs), transformer models and other fast-evolving approaches in a public cloud environment. Purpose-built architectures integrate advanced tensor-core GPUs and accelerators with software, high-bandwidth, low-latency interconnects and advanced parallel communications methods, interleaving computation and communications across a vast number of compute nodes.
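
“Interleaving computation and communications” refers to a standard pattern in distributed training: launch the collective operation that synchronizes gradients asynchronously, keep computing while the interconnect moves data, and block only when the result is needed. Below is a minimal, self-contained PyTorch sketch of that pattern; it runs as a single CPU process with the gloo backend purely for illustration, whereas production systems launch one rank per GPU across many nodes with GPU-aware backends.

```python
# Minimal sketch of overlapping computation with gradient communication via an
# asynchronous all-reduce. Single process (world_size=1) on CPU for illustration.
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(
        backend="gloo",
        init_method="tcp://127.0.0.1:29500",
        rank=0,
        world_size=1,
    )

    grad_bucket = torch.randn(1024)   # stand-in for gradients of layer k
    activations = torch.randn(1024)   # stand-in for work on layer k+1

    # Launch the collective without blocking...
    work = dist.all_reduce(grad_bucket, op=dist.ReduceOp.SUM, async_op=True)

    # ...and keep computing while the interconnect moves data.
    activations = torch.relu(activations * 2.0)

    # Block only when the synchronized gradients are actually needed.
    work.wait()
    print("reduced gradient norm:", grad_bucket.norm().item())

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```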

A hopeful sign: A recent IDC survey of more than 2,000 business leaders revealed a growing realization that purpose-built architecture will be crucial for AI success.

3. Public cloud optimization meets a wide range of pressing enterprise needs

In the early days, Microsoft’s Khan notes, much of the benefit from cloud came from optimizing technology spending to meet elasticity needs (“pay only for what you use”). Today, he says, benefits are still rooted in moving from a fixed to a variable cost model. But, he adds, “more enterprises are realizing the benefits go beyond that” in advancing corporate goals. Consider these examples:

Everseen, a solution builder in Cork, Ireland, has developed a proprietary visual AI solution that can video-monitor, analyze and correct major problems in business processes in real time. Rafael Alegre, Chief Operating Officer, says the capability helps reduce “shrinkage” (the retail industry term for unaccounted inventory), increase mobile sales and optimize operations in distribution centers.

Mass General Brigham, the Boston-based healthcare partnership, recently deployed a medical imaging service running on an open cloud platform.  The system puts AI-based diagnostic tools into the hands of radiologists and other clinicians at scale for the first time, delivering patient insights from diagnostic imaging into clinical and administrative workflows. For example, a breast density AI model reduced the results waiting period from several days to just 15 minutes. Now, rather than enduring the stress and anxiety of waiting for the outcome, women can talk to a clinician about the results of their scan and discuss next steps before they leave the facility.

4. Energy is a three-pronged concern for enterprises worldwide

Energy worries enterprises on three fronts: cost, reliability and sustainability. Energy prices have skyrocketed, especially in Europe. Power grids in some places have become unstable due to severe weather, natural disasters, overcapacity, terrorist attacks and poor maintenance, among other factors. An influential 2018 Microsoft study found that using a cloud platform can be nearly twice as energy- and carbon-efficient as on-premises solutions. New best practices for optimizing energy efficiency on public clouds promise to help enterprises achieve sustainability goals even (and especially) in a power environment in flux.
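
For intuition about why hyperscale facilities tend to come out ahead, the toy comparison below combines two levers: power usage effectiveness (PUE, total facility energy divided by IT energy) and server utilization. The PUE and utilization values are assumptions chosen for illustration, not figures from the Microsoft study, and scaling energy inversely with utilization is a deliberate simplification.

```python
# Toy energy comparison under stated assumptions. Idle servers draw less power
# than busy ones (but far from zero), so this inverse-utilization model is a
# simplification meant only to convey the shape of the argument.

it_work_kwh = 100_000   # useful compute work, normalized to kWh of IT load

onprem_pue, onprem_util = 1.6, 0.35    # assumed typical corporate data center
cloud_pue, cloud_util = 1.15, 0.50     # assumed hyperscale cloud region

def facility_energy(pue: float, utilization: float) -> float:
    """Total facility energy needed to deliver the same useful IT work."""
    return it_work_kwh / utilization * pue

onprem_kwh = facility_energy(onprem_pue, onprem_util)
cloud_kwh = facility_energy(cloud_pue, cloud_util)

print(f"On-premises facility energy: {onprem_kwh:,.0f} kWh")
print(f"Cloud facility energy:       {cloud_kwh:,.0f} kWh")
print(f"Ratio: ~{onprem_kwh / cloud_kwh:.1f}x")
```

With these assumed numbers the cloud facility uses roughly half the energy for the same useful work, consistent with the “nearly twice as efficient” finding cited above.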

What’s next: Cloud-based AI supercomputing

Industry forecasters expect the shift of AI to the cloud to continue racing ahead. IDC forecasts that by 2025, nearly 50% of all accelerated infrastructure for performance-intensive computing (including AI and HPC) will be cloud-based.

To that end, Microsoft and Nvidia announced a multi-year collaboration to build one of the world’s most powerful AI supercomputers. The cloud-based system will help enterprises train, deploy and scale AI, including large, state-of-the-art models, on virtual machines optimized for distributed AI training and inference.

“We’re working together to bring supercomputing and AI to customers who otherwise have a barrier to entry,” explains Khan. “We’re also working to do things like making fractions of GPUs available through the cloud, so customers have access to what was previously very difficult to acquire on their own, so they can leverage the latest innovations in AI. We’re pushing the boundaries of what is possible.”

In the best of times, public cloud services make clear economic sense for enterprise optimization, transformation, sustainability, innovation and AI. In uncertain times, they are an even smarter move.

Learn more at Make AI Your Reality.

#MakeAIYourReality #AzureHPCAI #NVIDIAonAzure

VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].

