
A computer’s understanding of space for Augmented Reality


The goal of Augmented Reality is to superimpose the computer’s perception of space onto a human’s understanding of it. In computer science, space is simply a metaphor for commonly agreed upon and scientifically validated concepts of space, time and matter. Space is defined and simulated purely by mathematical equations. A virtual space is nothing but a computer’s understanding of the real world as provided by humans.

Humans are spatial beings. We interact with and understand a large portion of our realities in three dimensions. As Augmented Reality blends simulated visual worlds into human reality, it is important to understand the basic aspects of 3D spaces.

A computer’s understanding of space is nothing more than a mathematically defined 3D representation of objects, location and matter. It can be understood simply by means of coordinate systems, without the need for confusing jargon like hyper-realities or alternate universes, although these are definitely interesting thought experiments.

Visual space and object space

What we perceive as the location of objects in the environment is a reconstruction of light patterns on the retina. In computer graphics, a visual space can be defined as the perceived space, the visual scene of a virtual space as experienced by a participant.

The virtual space in which the object exists is called the object space. It is a direct counterpart of the visual space.


Each eye sees the visual space differently, which is a critical challenge in computer graphics for binocular virtual devices or smart glasses. In order to design for virtual worlds, it is important to have a common understanding of the position and orientation of virtual objects in the real world.

Position and coordinates

Three types of coordinate systems are used for layout and programming of virtual and augmented reality applications:

Cartesian Coordinates

The Cartesian coordinate system is used mainly for its simplicity and familiarity, and most virtual spaces are defined by it. The x-y-z based coordinate system is a precise way of specifying the location of 3D objects in virtual space. The three coordinate planes are perpendicular to each other. Distances and locations are specified from the point of origin, which is the point where the three planes intersect. This system is mainly used for defining the visual coordinates of 3D objects.

Cartesian Coordinates
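As a rough illustration (a hypothetical Python sketch, not tied to any particular engine), a Cartesian position is just an (x, y, z) triple measured from the origin, and an object’s distance from that origin follows from the Pythagorean theorem:

```python
import math

# A toy sketch: a position in a Cartesian virtual space is an (x, y, z) triple
# measured from the origin where the three coordinate planes intersect.
def distance_from_origin(x, y, z):
    """Straight-line distance of a point from the origin (0, 0, 0)."""
    return math.sqrt(x * x + y * y + z * z)

# Hypothetical example: an object 1 m right, 2 m up and 3 m forward of the origin.
print(distance_from_origin(1.0, 2.0, 3.0))  # ~3.74
```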

Spherical Polar Coordinates

The Cartesian system typically defines the positions of 3D objects with respect to a fixed origin point. A system of spherical polar coordinates is used when locating objects and features with respect to the user’s position. This system is used mainly for mapping a virtual sound source, or for mapping spherical video in first-person immersive VR. The spherical coordinate system is based on perpendicular planes bisecting a sphere and consists of three elements: azimuth, elevation and distance. Azimuth is the angle from the origin point in the horizontal/ground plane, while elevation is the angle in the vertical plane. Distance is the magnitude or range from the origin.

Spherical Polar Coordinates
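A minimal sketch of how spherical polar coordinates might be converted back to Cartesian ones, assuming azimuth is measured in the ground plane from the +x axis and elevation upward from that plane (conventions differ between engines, so treat the axis choices here as assumptions):

```python
import math

# Sketch: spherical polar coordinates (azimuth, elevation, distance) to Cartesian
# (x, y, z). Azimuth is measured in the horizontal (ground) plane from the +x axis,
# elevation is measured upward from that plane, and +z is the vertical axis.
def spherical_to_cartesian(azimuth_deg, elevation_deg, distance):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return x, y, z

# Hypothetical example: a virtual sound source 2 m away, 45° around, 30° above the horizon.
print(spherical_to_cartesian(45.0, 30.0, 2.0))
```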

Cylindrical Coordinates

This system is mainly used in VR applications for viewing 360-degree panoramas. The cylindrical system allows still images to be precisely mapped, aligned and overlapped for edge stitching in panoramas. The system consists of a central reference axis (L) with an origin point (O). The radial distance (ρ) is measured from the origin (O). The angular coordinate (φ) is defined for the radial distance (ρ), along with a height (z). Although this system is good for applications that require rotational symmetry, it is limited in terms of its vertical view.

Cylindrical Coordinates
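A similar sketch for cylindrical coordinates, assuming the central reference axis L is taken as the z axis through the origin O (the function name and the panorama example are illustrative assumptions):

```python
import math

# Sketch: cylindrical coordinates (rho, phi, z) to Cartesian (x, y, z),
# with the central reference axis L taken as the z axis through the origin O.
def cylindrical_to_cartesian(rho, phi_deg, z):
    phi = math.radians(phi_deg)
    return rho * math.cos(phi), rho * math.sin(phi), z

# Hypothetical example: a panorama pixel mapped 1 m out, 90° around, 0.2 m up.
print(cylindrical_to_cartesian(1.0, 90.0, 0.2))
```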

Defining orientation and rotation

It is necessary to define the orientation and rotation of user viewpoints and objects along with their position in the virtual space. Knowing this information is especially important when tracking where the user is looking or determining the orientation of virtual objects with respect to the visual space.

Six degrees of freedom (6 DOF)

In virtual and augmented reality, it is common to define orientation and rotation with three independent values. These are referred to as roll (x), pitch (y) and yaw (z) and are known as Tait-Bryan angles. A combination of position (x-y-z) and orientation (roll-pitch-yaw) is referred to as six degrees of freedom (6 DOF).

Orientation and Rotation
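A sketch of what a 6 DOF pose might look like in code, with the orientation built from Tait-Bryan angles. The axis assignments and multiplication order shown are one common convention and are assumptions here, not the only valid choice:

```python
import math

# Sketch of a 6 DOF pose: position (x, y, z) plus orientation from Tait-Bryan
# angles (roll about x, pitch about y, yaw about z), composed as
# R = Rz(yaw) @ Ry(pitch) @ Rx(roll). Frameworks differ on axes and order.
def rotation_matrix(roll_deg, pitch_deg, yaw_deg):
    r, p, y = (math.radians(a) for a in (roll_deg, pitch_deg, yaw_deg))
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# Hypothetical example: a head-mounted viewpoint 1.6 m above the origin,
# pitched 10° and yawed 90°.
pose = {
    "position": (0.0, 1.6, 0.0),
    "orientation": rotation_matrix(0.0, 10.0, 90.0),
}
```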

Navigation

Navigation and wayfinding are two of the most complex concepts in virtual space, especially for VR and AR. They can be handled either by physical movement of the user in real space or by the use of controllers for traversing larger distances. For example, physical movement might refer to moving your hands and legs to shoot in a game like Call of Duty, while virtual movement would refer to traveling to an enemy base. There are a large number of devices that enable virtual movement, from keyboards and game controllers to multi-directional treadmills. A single universal interface to navigate both virtual and physical space could be the holy grail for navigation controller design.

