

Driving into the future from autonomous to AI
source link: https://venturebeat.com/2020/12/07/driving-into-the-future-from-autonomous-to-ai/

Nvidia leverages AWS Marketplace to advance GPU deployment

Nvidia today announced it is making 21 tools for building applications on its graphics processing units (GPUs) available on the AWS Marketplace. The move is part of a larger effort to streamline the process of embedding AI capabilities into apps.
The relationship between Nvidia and AWS is becoming more complicated. In addition to agreeing to make rival GPUs from Intel available as a cloud service earlier this month, AWS also signaled its intention to build its own GPUs.
Already available on the Nvidia GPU Cloud (NGC) catalog, these tools are packaged as Docker containers that can be deployed anywhere, including on the GPU cloud service based on Nvidia processors that AWS makes available.
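As a rough sketch of what that portability means in practice, the snippet below pulls one of these containers from the NGC registry and runs it with GPU access. It assumes a host with Docker and the NVIDIA Container Toolkit installed, and the PyTorch image tag shown is only an example; current tags are listed in the NGC catalog.

```python
# Minimal sketch: launch an NGC container on any Docker-equipped GPU host.
# Assumes Docker and the NVIDIA Container Toolkit are installed; the image
# tag below is an example -- check the NGC catalog for current tags.
import subprocess

IMAGE = "nvcr.io/nvidia/pytorch:20.12-py3"  # example NGC image tag

# Pull the container image from the NGC registry.
subprocess.run(["docker", "pull", IMAGE], check=True)

# Run a one-off command inside the container with GPU access
# ("--gpus all" requires the NVIDIA Container Toolkit).
subprocess.run(
    [
        "docker", "run", "--rm", "--gpus", "all", IMAGE,
        "python", "-c", "import torch; print(torch.cuda.is_available())",
    ],
    check=True,
)
```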
Nvidia claims these components have already been downloaded more than a million times by 250,000 developers and data scientists.
The goal is to make it as simple as possible to build, train, and deploy AI applications on Nvidia processors, which will soon include Arm processors it is gaining via a previously announced acquisition expected to close next year.
This alliance marks the first time the entire Nvidia portfolio is available on the AWS Marketplace, said Adel El Hallak, director for Nvidia NGC, in an interview with VentureBeat.
Previously, individual Nvidia components had been made available on the AWS Marketplace. By making all the components available at once, Nvidia is looking to eliminate the steps developers would otherwise take to download them from a separate platform, El Hallak said.
That’s critical, because AI is no longer confined to research projects and proofs of concept, he added, noting that enterprise IT organizations now routinely include AI capabilities in the applications they deploy in production. “We’ve reached an inflection point,” said El Hallak.
Nvidia has already committed to making its portfolio available on other cloud marketplaces. AWS, however, was the initial priority given the available resources, said El Hallak.
Nvidia GPUs are mainly employed to train AI models more cost-effectively, with those GPUs usually accessed via the cloud. The inference engines those models run on are most commonly deployed on x86 processors, but Nvidia has been making a case for running AI inference on lower-end GPUs or Arm processors as well.
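That split, train on GPUs, serve inference on cheaper processors, comes down to device placement in a framework such as PyTorch. A minimal sketch, with a toy model and random data standing in for a real workload:

```python
# Sketch of the train-on-GPU, infer-on-CPU split described above.
# The model and data here are toy placeholders for a real workload.
import torch
import torch.nn as nn

train_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(16, 2).to(train_device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training happens on the GPU, where throughput matters most.
for _ in range(100):
    x = torch.randn(32, 16, device=train_device)
    y = torch.randint(0, 2, (32,), device=train_device)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# Inference is then served from the CPU (or an Arm host), the common
# production pattern the article describes.
cpu_model = model.to("cpu").eval()
with torch.no_grad():
    print(cpu_model(torch.randn(1, 16)))
```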
Regardless of the processor type, it should be feasible to deploy Nvidia software that is encapsulated in containers. Those tools span everything from MXNet, TensorFlow, PyTorch, and the Nvidia Triton Inference Server to video analytics frameworks and software development kits made up of multiple compilers and libraries.
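As one concrete example, once a model is being served by the Triton Inference Server container, a client can query it over HTTP using Nvidia's tritonclient library. In the sketch below, the server address, model name, and tensor names are assumptions that must match the deployed model's configuration:

```python
# Sketch of querying a model served by Nvidia Triton Inference Server.
# The URL, model name, and input/output tensor names are assumptions;
# they must match the model's config.pbtxt on the server.
import numpy as np
import tritonclient.http as httpclient  # pip install tritonclient[http]

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a request matching the (hypothetical) model signature.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

result = client.infer(
    model_name="resnet50",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("output__0")],
)
print(result.as_numpy("output__0").shape)
```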
Naturally, competition is fierce as cloud service providers battle for the hearts and minds of the developers building these applications, given the amount of compute resources they consume. As competition drives down the cost of accessing those resources, the rate at which AI applications are being built and deployed should accelerate.
But the real challenge is not so much accessing the tools and compute resources needed to build these applications as it is finding and retaining the data scientists and developers required to build, deploy, and maintain them.
Updated 12/18/20, 1:45pm PT with comments from an interview
Driving into the future from autonomous to AI
This article is part of a Technology and Innovation Insights series paid for by Samsung.
With new connective technology, autonomous systems, and innovative business models, the transportation industry is on the cusp of a transformation that could expand the market by more than a trillion dollars over the next decade and drastically reduce road injuries, one of the top 10 causes of death worldwide.
Mobileye is the global leader in the development of Advanced Driver Assistance Systems (ADAS) and the artificial intelligence (AI) that is critical in developing autonomous driving. This technology is deployed by more than 25 global automakers across 60 million vehicles worldwide and counting. The Co-Founder, CEO, and President of Mobileye, Professor Amnon Shashua, believes that new transportation technology is going to profoundly transform our society, an idea he explored with Young Sohn, President and Chief Strategy Officer of Samsung Electronics, in the latest episode of The Next Wave with Young Sohn.
Sophistication and accuracy
Up until now, Shashua explains, the products we rely on have fallen into two distinct categories. The first is complex, highly sophisticated products in which occasional flaws cannot be avoided and are therefore tolerated: smartphones or computers, for example, can process and produce incredible amounts of information, but are also vulnerable to glitches, hacking, and viruses.
The second category includes products that are less complex but that must perform tasks precisely and reliably. Airplanes, for example, do one thing very well with almost no room for error.
Self-driving cars represent an unprecedented combination of both categories. Autonomous driving is based on cutting-edge software, data analytics, AI, and hardware. But, like airplanes, these vehicles must function without fail. Bringing these two characteristics together is a major challenge the automotive industry must tackle.
Autonomous vehicles must make decisions fast and reliably
Against this background, Shashua explains the various obstacles engineers must overcome when developing self-driving cars. First and foremost, the criteria for the decision-making process of robotic engines need to be standardized, and regulators need to agree on clear definitions of recklessness and caution. After all, a robotic engine can only understand caution through clear rules that it can follow consistently.
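Mobileye's published Responsibility-Sensitive Safety (RSS) model is one concrete proposal for such rules: it defines, in closed form, the minimum following distance a cautious car must keep so that it cannot cause a rear-end collision even if the car ahead brakes at full force. Below is a minimal sketch of that longitudinal-distance check; the parameter values are illustrative placeholders, not values prescribed by RSS.

```python
# Sketch of the safe longitudinal-distance rule from Mobileye's published
# Responsibility-Sensitive Safety (RSS) model. Parameter defaults are
# illustrative placeholders, not values prescribed by RSS.

def rss_min_safe_distance(
    v_rear: float,              # rear car speed (m/s)
    v_front: float,             # front car speed (m/s)
    rho: float = 0.5,           # rear car response time (s)
    a_max_accel: float = 3.0,   # max acceleration during response time (m/s^2)
    a_min_brake: float = 4.0,   # minimum braking the rear car will apply (m/s^2)
    a_max_brake: float = 8.0,   # maximum braking the front car might apply (m/s^2)
) -> float:
    """Minimum safe gap; a negative result clamps to zero."""
    v_rear_worst = v_rear + rho * a_max_accel  # worst-case speed after response time
    d = (
        v_rear * rho
        + 0.5 * a_max_accel * rho**2
        + v_rear_worst**2 / (2 * a_min_brake)
        - v_front**2 / (2 * a_max_brake)
    )
    return max(d, 0.0)

# A car at 25 m/s (90 km/h) following another at the same speed:
print(f"{rss_min_safe_distance(25.0, 25.0):.1f} m")
```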
Another point that needs to be clarified is how robotic engines will detect the environment around them and how they will process that data quickly enough. To do this, Mobileye uses two separate, fully self-driving, redundant systems: one based on cameras alone and one based only on radar and LiDAR (Light Detection and Ranging) sensors. These two subsystems will eventually be combined into an AV that essentially has two fully self-driving systems within it, ensuring a very low chance of failure at any given moment.
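The benefit of that redundancy is multiplicative: if the two subsystems fail independently, both must fail in the same moment for perception to be lost entirely. A toy calculation with hypothetical failure rates:

```python
# Toy illustration of why two independent, fully self-driving subsystems
# yield a very low combined failure probability. The per-hour failure
# probabilities below are hypothetical, chosen only for illustration.
p_cameras = 1e-4      # hypothetical: camera-only subsystem failure per hour
p_radar_lidar = 1e-4  # hypothetical: radar/LiDAR subsystem failure per hour

# Assuming independent failures, both subsystems must fail simultaneously
# for the combined AV stack to lose perception entirely.
p_combined = p_cameras * p_radar_lidar
print(f"Combined failure probability per hour: {p_combined:.0e}")  # 1e-08
```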
The next step: Robotaxis
While the development of autonomous vehicles has made great progress so far, important steps remain to move from a niche market toward a mass market. Shashua believes robotaxis are an attractive next step toward a mass consumer product, for three good reasons:
First, the tolerance for cost is high with robotaxis. Adding a fully self-driving system to consumer vehicles would add considerable cost, but if self-driving systems are first introduced through ride-hailing or public transit networks, that cost becomes more feasible: a transit network company can recoup the investment in self-driving technology in the long run, since it won't need to employ as many drivers and can use data-driven insights to optimize fleet use based on demand (a rough break-even sketch follows these three points).
Second, the service is geographically scalable. A robotaxi does not have to operate everywhere; the business can work even if the service is available only in a specific location.
Finally, from a regulatory point of view, it is easier to regulate a single fleet than a consumer product available everywhere, paving the way toward regulation that is ready for consumer AVs.
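As a rough illustration of the first point, the toy model below amortizes a hypothetical self-driving-system cost against saved driver wages; every figure is a made-up placeholder, not Mobileye data.

```python
# Toy break-even sketch for the robotaxi economics described above.
# Every figure here is a hypothetical placeholder, not Mobileye data.
sds_cost = 50_000.0               # hypothetical: added self-driving hardware/software per vehicle ($)
driver_cost_per_year = 40_000.0   # hypothetical: annual cost of a human driver ($)
extra_opex_per_year = 10_000.0    # hypothetical: added maintenance, compute, remote ops ($)

annual_saving = driver_cost_per_year - extra_opex_per_year
years_to_break_even = sds_cost / annual_saving
print(f"Break-even after ~{years_to_break_even:.1f} years")  # ~1.7 years
```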
Computer vision adds value to other branches
Mobileye also develops the computer vision that forms the technological basis for autonomous driving. These advances have other uses as well, such as supporting people who are blind or visually impaired. Shashua realized this very early on and founded OrCam alongside Mobileye ten years ago. OrCam develops smart, portable mini cameras that can read printed and digital text from any surface in real time, as well as recognize faces, products, and banknotes.
These technologies fall at the intersection of business value, consumer interest, and public good. As Shashua discusses with Sohn, there is incredible potential to improve lives with these innovations, as long as we have the persistence to pursue them and the wisdom to use them in the right way.
Catch up on all the episodes of The Next Wave, including conversations with VMware CEO Pat Gelsinger, the CRO & CMO of Factory Berlin, the CEO of Solarisbank, the CEO of Axel Springer, the CEO of wefox, and Rafaèle Tordjman, President and Founder of Jeito Capital.
VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].