Introducing AVS, an Open Standard for Autonomous Vehicle Visualization from Uber


Understanding what autonomous vehicles perceive as they navigate urban environments is essential to developing the systems that will make them operate safely. And, just as we have standards for street signs and traffic infrastructure to help human drivers, autonomous vehicle developers would be well-served by a standard visualization platform to represent input from sensors, image classification, motion inference, and all other techniques used to build an accurate image of the immediate environment.

As we’ve written before, the Advanced Technologies Group (ATG) and Visualization teams at Uber leverage web-based visualization technologies to interpret these sensor- and algorithm-derived worlds in order to support an ever-growing pool of autonomous use cases.

Today, we’re excited to open source the redesigned and expanded Autonomous Visualization System (AVS), a new way for the industry to understand and share its data.   

AVS can display an autonomous vehicle’s performance in the real world.

AVS is a new standard for describing and visualizing autonomous vehicle perception, motion, and planning data. It offers a powerful web-based toolkit for exploring and interacting with that data and, most critically, for making important development decisions based on it.

As a stand-alone, standardized visualization layer, AVS frees developers from having to build custom visualization software for their autonomous vehicles. With AVS abstracting visualization, developers can focus on core autonomy capabilities for drive systems, remote assistance, mapping, and simulation.

The need for unified visualization

A wide variety of organizations, including technology companies, foundations, research institutions, original equipment manufacturers (OEMs), and start-ups, are tackling the challenges of autonomous driving. Visualization tools, which display what autonomous vehicles perceive in their environments, are crucial for developing safe driving systems. The requirements for these tools generally originate close to the hardware and sensor stack and revolve around online or offline playback of autonomy system log data. As platforms mature, new use cases emerge around triage, simulation, mapping, safety and image collection, and labeling. The path to production requires a whole new layer of tooling and infrastructure around monitoring, remote assistance, and support.  

In addition to rapidly evolving requirements, autonomy engineers are often forced to learn complex computer graphics and data visualization techniques in order to deliver effective tooling solutions. The lack of a visualization standard has resulted in engineers assembling custom tools around ready-made technologies and frameworks in order to deliver solutions quickly. However, in our experience, these attempts at developing tools around disparate, off-the-shelf components lead to systems that are challenging to maintain, inflexible, and generally not cohesive enough to form a solid foundation for a platform.

We are sharing AVS with the broader autonomous community in the hope that collaboration across the industry will unlock more advancement, define a new standard, and lead to safer, more efficient transportation solutions for everyone.

Visualizing a world in motion

In this example of AVS in Uber ATG’s web-based before-and-after comparison application, we can view improved vehicle detection.

Autonomous vehicle development is a rapidly evolving area with new services, data sets (especially via LiDAR), and many use cases that require new solutions. At Uber, there were multiple engineering teams with unique requirements that our solution needed to address. Leveraging a web-based visualization application was an obvious choice as it created opportunities for fast iteration across teams, use-case specific applications, simplified information sharing, customization, and integration with existing services.

While the benefits were clear, there were challenges in managing the data efficiently while retaining performance comparable to desktop-based systems. Solving these challenges required a new abstraction to manage and describe the generated data consumed by web applications.

Given the above requirements, we built our system around two key pieces: XVIZ provides the data layer (including a specification and management tools), while streetscape.gl is the component toolkit that powers web applications.

XVIZ

We needed a formal and flexible specification for the data generated from autonomous systems, such that the data format could integrate with evolving infrastructure, conform across multiple clients, and be close enough to the source to define the necessary controls and bindings to efficiently manage it.   

The high-level data flow for XVIZ incorporates an encoder and builder on the server side, with a decoder, data buffer, and synchronizer on the client side.
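
On the server side, the open source @xviz/builder package exposes a chainable API for assembling these updates. The sketch below is adapted from its documented usage; exact method names and options may have shifted between versions, so treat them as assumptions to verify against the current API.

    import {XVIZBuilder} from '@xviz/builder';

    // Assemble a single update server-side.
    const builder = new XVIZBuilder();

    // The reference pose that other streams are positioned against.
    builder
      .pose('/vehicle_pose')
      .timestamp(1548872000.12)
      .position(108.5, 42.1, 0)
      .orientation(0, 0, 1.57);

    // A LiDAR point cloud stream, encoded as a flat [x, y, z, ...] array.
    builder
      .primitive('/lidar/points')
      .points(new Float32Array([1.2, 0.4, 0.0, 1.5, 0.6, 0.1]));

    // Serialize the update for transport to the client-side decoder
    // (older releases exposed getFrame() rather than getMessage()).
    const message = builder.getMessage();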

XVIZ provides a stream-oriented view of a scene changing over time, along with a declarative user interface display system. Like a video recording, you can seek to any point and understand the state of the world at that moment. Like an HTML document, its presentation is focused and structured according to a schema that allows for introspection. However, XVIZ also allows for easy exploration and interrogation of the data by tying separate stream updates together into a single object.

An XVIZ stream is a series of discrete updates that occur at a specific time with specific primitive types. Primitives are objects that enable descriptions of information such as LiDAR point clouds, camera images, object bounds, trajectories, vehicle speed over time, and predicted plans. To simplify presentation for users, these objects can be individually styled (including at the stream level) or assigned a style class.
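
To make this concrete, here is a simplified sketch of what a single snapshot-style update carrying two such streams might look like on the wire. The field names approximate the XVIZ v2 JSON encoding rather than the normative schema, so consult the specification for authoritative details.

    // Illustrative only: an XVIZ-style snapshot update with two streams.
    const update = {
      update_type: 'SNAPSHOT',
      updates: [
        {
          timestamp: 1548872000.12, // the instant playback seeks against
          primitives: {
            // a LiDAR point cloud, as a flat [x, y, z, ...] array
            '/lidar/points': {
              points: [{points: [1.2, 0.4, 0.0, 1.5, 0.6, 0.1]}]
            },
            // a tracked object's footprint, tied to an id and a style class
            '/tracked_objects/bounds': {
              polygons: [
                {
                  vertices: [[4, 2, 0], [6, 2, 0], [6, 4, 0], [4, 4, 0]],
                  base: {object_id: 'veh-27', classes: ['car']}
                }
              ]
            }
          }
        }
      ]
    };

The object_id carried in the base field is what lets separate stream updates be tied back together into a single object, as described above.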

XVIZ organizes streams through hierarchical naming, with a separate metadata section listing the streams, their types, relative transforms, declarative UI panels, and style classes. The declarative user interface bundles graphical panels with the data, letting users configure a set of layout and display components through YAML.
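
As an illustration, a panel that plots vehicle velocity might be declared roughly as follows. This is the object form that such a YAML configuration resolves to; the component and field names loosely follow the declarative UI specification and should be treated as assumptions.

    // Illustrative declarative-UI metadata: one panel containing a metric
    // component bound to a data stream by name.
    const declarativeUI = {
      metrics: {
        type: 'PANEL',
        name: 'Vehicle Metrics',
        children: [
          {
            type: 'METRIC',
            title: 'Velocity',
            streams: ['/vehicle/velocity'] // the stream this chart renders
          }
        ]
      }
    };

Because the panel definition ships with the data rather than with any one application, an XVIZ-conformant client can render it without custom code.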

XVIZ’s data structure lets us toggle streams from the dataset.

streetscape.gl

streetscape.gl offers a variety of UI components, including camera, playback control, object label, and plot features.

streetscape.gl is a toolkit for building web applications that consume data in the XVIZ protocol. It offers drop-in-ready components for visualizing XVIZ streams in 3D viewports, charts, tables, videos, and more. It addresses common visualization pain points such as time synchronization across data streams, coordinate systems, cameras, dynamic styling, and interaction with 3D objects across components, so that users can invest more time in building the autonomous vehicle software itself.

Rendering performance is the top goal of streetscape.gl. Built on top of React and Uber’s mature WebGL-powered visualization platform, it supports real-time playback and smooth interaction with scenes containing hundreds of thousands of geometries.
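
A minimal viewer assembled from those drop-in components might look like the following sketch, adapted from the toolkit’s published examples. The loader options and file paths are hypothetical, and exact component props may differ across versions.

    import React from 'react';
    import {XVIZFileLoader, LogViewer, PlaybackControl} from 'streetscape.gl';

    // Load a pre-converted XVIZ log from static files (paths are hypothetical).
    const log = new XVIZFileLoader({
      timingsFilePath: 'data/0-frame.json',
      getFilePath: index => `data/${index + 1}-frame.glb`,
      maxConcurrency: 4
    });
    log.connect();

    // A 3D viewport bound to the log, plus synchronized playback controls.
    export const App = () => (
      <div>
        <LogViewer log={log} />
        <PlaybackControl log={log} />
      </div>
    );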

Composability is also front and center in the design of streetscape.gl. Learning from our work on our internal visualization platform, which powers a dozen diverse use cases such as triaging, labeling, debugging, remote assistance, and scene editing, we designed the components to be highly styleable and extensible, so that any team can build an experience tailored to its unique workflow.

How AVS is different

AVS was designed to be open and modular, welcoming contributions from internal teams since the beginning of its development. Architecturally, it takes a layered approach that minimizes coupling between components of the autonomous stack and offers clear definitions for the exchange of data. Each layer can evolve as needed without requiring system-wide changes, and layers can be tailored for a specific context or use case.

This guiding principle helps set AVS apart from current solutions. Specifically, AVS’ architecture distinguishes itself because:  

  • It was designed with an intentional separation of data from any underlying platform
  • Its limited, small specifications make tools easier to develop
  • Its data format requirements result in fast transfer and processing

Additionally, we created AVS to meet the needs of everyone in the autonomy ecosystem, including engineers, vehicle operators, analysts, and specialist developers. Autonomy engineers can easily describe their systems with XVIZ, then test and visualize their expectations with limited overhead. Specialist developers can use streetscape.gl to quickly build data source-agnostic applications with strong performance characteristics and simplified integration. Lastly, operators can view the data in standard visual formats, including videos, across multiple applications, leading to easier collaboration, knowledge sharing, deeper analysis, and overall trust in the data quality.

By open sourcing AVS to the industry, we encourage more developers to contribute and build upon this initial set of ideas.

Application in the industry and beyond

For companies building or supporting autonomous vehicles, such as Voyage, Applied Intuition, and Uber ATG, going from simulated or on-the-road tests to finding the root cause of issues can be an extremely time-consuming process.

According to Drew Gray, Voyage’s CTO, being able to visually explore autonomous sensor data, predicted paths, tracked objects, and state information like acceleration and velocity is invaluable to the triage process and can positively impact developer efficiency. The information can then be used to set data-driven engineering priorities.

Voyage co-founder Warren Ouyang echoes Gray’s sentiment about the possibilities of AVS.

“We’re excited to use Uber’s autonomous visualization system and collaborate on building better tools for the community going forward,” says Ouyang.

AVS provides rich context inside other applications, such as this example where it enhances Uber ATG’s Spectacle event review application.

Beyond root cause analysis, teams at Uber have also leveraged AVS for other use cases, such as web-based log viewing, developer environments, and map maintenance. We also expect that, by open sourcing the technology, developers in other nascent and adjacent industries, such as drones, robotics, trucking, fleet management, augmented and virtual reality, and retail, will find applications for this toolkit.

What’s next

Bringing AVS to the broader industry is just the start. We envision it democratizing access for more developers and operators looking to contribute to the autonomous space.

In collaboration with partners like Voyage, Applied Intuition, contributors, and open source foundations, we plan to enhance the product with more data sources and specifications (especially ROS support), performance optimizations, and richer features such as side-by-side comparisons.

“At Applied Intuition, we’re working with the most sophisticated AV teams in the world, and they require the most sophisticated tools,” says Peter Ludwig, CTO of Applied Intuition. “AVS falls in line with this, and what’s notably great is that it’s web-based and fills a need in the community to not rebuild the same visualization tools again and again. This is an awesome move from Uber for the rest of the AV community.”

Uber ATG’s AVS-powered AV Log Viewer application lets us analyze a vehicle’s approach to an intersection.

Uber is pursuing a long-term vision for autonomous vehicles: a safer, cleaner, and more efficient transportation solution for everyone. Unfortunately, early developer tools in any industry tend to be primitive, adapted after the fact to new use cases that stretch their capabilities. Given how rapidly technology is transforming transportation and the cities we live in, the need for better tools to expedite this change is as urgent as ever.

Whether it’s products that improve urban planning investments, richer geospatial analysis, advanced mapping, or new mobility trends, we find that an open data and tools strategy can help governments, developers, researchers, and the overall industry accelerate towards a smarter transportation ecosystem for the future.

If you’re interested in developing technologies to advance the future of transportation, consider applying for a role at Uber. If you’re interested in contributing to AVS or other Uber open source projects, check out our open source program homepage.

Special thanks to: Ib Green, Nicolas Belmonte, Xintong Xia, Travis Gorkin, Anthony Emberley, Jon Thomason, Carnaven Chiu, Jai Ranganathan, Jennifer Anderson, Neil Stegall, Charlie Waite, Ziyi Li and Erik Klimczak.
