What Is Virtual Reality? – The Complete Tutorial
16 min read · 21/07/2018
Virtual reality is often associated with games and amusement attractions. However, this immersive technology has also become an efficient employee training and marketing tool. Coined in the late 1930s, the term “virtual reality” has expanded from meaning something generated by software and not physically existing to referring to an artificial world where users can navigate and interact with virtual objects.
What is VR? – Virtual reality, also known as VR, is an immersive technology that combines visual and auditory content to let users experience a computer-generated environment. Virtual reality may also provide users with interactive capabilities through sensory feedback, haptic feedback in particular. The simulated environment can either replicate certain real-world surroundings or be unreal, with fantastical 3D creatures, objects, and/or conditions.
Users experiencing an artificial environment can virtually explore it, move through it, and, in some cases, interact with its objects using special hardware. Depending on the realism level, VR can be divided into non-immersive, semi-immersive, and fully-immersive.
Non-immersive virtual reality allows users to keep an eye on what’s happening around them in the real-world environment using their peripheral vision and to interact with physical objects. The least immersive VR uses standard-resolution displays, personal computers, or video game consoles. Most people are familiar with non-immersive VR thanks to typical first-person video games for consoles like Xbox and PlayStation.
Semi-immersive virtual reality provides users with a more realistic artificial environment compared to non-immersive VR. As in a non-immersive simulated environment, users experiencing semi-immersive virtual reality can still see what’s happening around them, but they can hardly interact with physical objects other than the VR hardware they are currently using.
Semi-immersive VR typically relies on a set of high-resolution displays, high-end computers, and hard simulators, also known as cockpits, that replicate the main simulated object, such as an aircraft or a racing car, within the virtual environment. Apt examples of semi-immersive VR are flight and Formula 1 simulators.
Fully-immersive virtual reality places users in an artificial environment where they nearly lose the connection to the physical world and experience the computer-generated surroundings as real. The most realistic type of VR provides a high-resolution image, detailed 3D graphics, audio and sensory feedback, and full immersion in a virtual environment.
Fully-immersive VR relies on head-mounted displays and input devices that give users the ability to interact with the artificial world. To ensure a higher level of realism, fully-immersive VR systems can involve hard simulators similar to the ones used in semi-immersive virtual reality.
True immersive virtual reality is an advanced version of fully-immersive VR. It refers to technology that enables users to move through the virtual environment while actually moving in the real world. Infinadeck, a manufacturer of an omnidirectional treadmill, and TPCAST, a vendor of wireless VR headsets, partnered to show their new true immersive virtual reality experience at the 2018 AWE Conference. Their system relies on a wireless HMD and the Infinadeck treadmill. With this technology stack, users can walk in different directions on the treadmill while wearing the VR headset, thus exploring the VE.
How does VR work?
The main purpose of virtual reality is to provide users with an artificial 3D environment without the boundaries usually associated with video on TV or computer displays. To ensure full immersion, VR systems rely on special head-mounted devices that minimize the auditory and visual connection to the physical world. Unlike augmented reality, VR replaces the real world with a virtual one within a specific device instead of overlaying the physical environment with computer-generated content.
The fully-immersive virtual reality technology stack involves a computer or video game console and a head-mounted display (HMD). The computer sends a video signal to a headset like the Oculus Rift or HTC Vive via an HDMI cable. For head-mounted displays like Samsung Gear VR, Google Cardboard, and Google Daydream, a smartphone slotted into the HMD stores the content and displays it right in front of the user’s eyes.
The specific design of VR headset lenses makes the immersion possible. Instead of transmitting a single video feed to both eyes simultaneously, which would make the content look like a flat image on a TV screen placed in front of the user’s eyes, headsets send two separate video feeds, one to each eye’s LCD display.
How virtual reality works – the complete algorithm:
- Receiving a command. A VR system receives a command from a user to start transmitting certain content.
- Retrieving the necessary content. The system retrieves the requested content from storage.
- Forming video signal feeds. The VR system generates two separate video feeds for each LCD display in the headset.
- Transmitting the video content. The computer or smartphone transmits two video signal feeds to the HMD.
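The four steps above can be sketched as a minimal Python function; all names here are hypothetical placeholders, not a real VR API:

```python
# Minimal sketch of the VR content pipeline described above.
# All function and variable names are illustrative placeholders.

def render_view(content, eye):
    # Placeholder renderer: tag the frame with the eye it targets.
    return f"{content}:{eye}"

def run_vr_pipeline(command, storage):
    # 1. Receive a command naming the requested content.
    content_id = command["content_id"]
    # 2. Retrieve the requested content from storage.
    content = storage[content_id]
    # 3. Form two separate video feeds, one per eye display.
    left_feed = render_view(content, eye="left")
    right_feed = render_view(content, eye="right")
    # 4. Transmit both feeds to the head-mounted display.
    return {"left": left_feed, "right": right_feed}
```

A real system repeats these steps every frame, but the division of labor is the same.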
Head-mounted displays also contain lenses placed between the user’s eyes and the displays, which is why HMDs are sometimes called goggles. These lenses can adjust the image to the unique distance between the user’s eyes. The lenses focus and reshape the image for each eye to produce a stereoscopic three-dimensional picture by overlaying one image with the other.
This is exactly how human vision works. In fact, each of our eyes sees the world in 2D. However, together they see the environment in 3D since their fields of view intersect. By contrast, rabbits and deer see the world largely in 2D since their eyes are positioned on the sides of their heads and their fields of view barely intersect.
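The stereoscopic effect can be sketched as two virtual cameras offset by the interpupillary distance (IPD). The 63 mm value and the simple camera model below are illustrative assumptions, not a headset specification:

```python
# Sketch: the two per-eye views are rendered from positions that
# differ by the interpupillary distance (IPD). A typical adult IPD
# is roughly 63 mm; this number is an illustrative assumption.

IPD_M = 0.063  # interpupillary distance in meters

def eye_positions(head_position):
    """Offset the head position by half the IPD for each eye."""
    x, y, z = head_position
    left = (x - IPD_M / 2, y, z)
    right = (x + IPD_M / 2, y, z)
    return left, right

# A head at 1.7 m height yields two cameras 63 mm apart horizontally.
left, right = eye_positions((0.0, 1.7, 0.0))
```

Rendering the scene from these two slightly different viewpoints is what gives the overlaid images their sense of depth.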
Most virtual reality systems contain three basic components: immersion, environment, and interactivity.
Experiencing virtual reality means experiencing immersion. In other words, users feel like part of the artificial world. Furthermore, users experience telepresence because they are immersed and able to interact with the virtual world they are in. Immersion means that users become unaware of their real surroundings and only feel present inside the virtual environment (VE). Jonathan Steuer, a computer scientist, described two basic elements of immersion: depth and breadth.
The depth of immersion refers to the realism of the virtual environment: how detailed the graphics are, how high the image resolution is, etc. To be “deep”, immersion has to give users the ability to explore any part of the VE and particular 3D objects from any perspective. Moreover, the image should adjust to the user’s viewing angle as well as change according to what the user is looking at.
The breadth of immersion refers to the number of human senses simultaneously stimulated within a virtual environment. To be “broad”, immersion has to stimulate as many human senses as possible: for example, hearing, vision, and touch. Head-mounted displays provide audio and video content, while input devices like controllers, joysticks, and smart gloves can provide haptic feedback.
A virtual environment doesn’t have to replicate the real one, but it definitely should replicate how humans are used to perceiving their surroundings. That means if a loud virtual object approaches the user within the artificial environment, its sound should grow louder accordingly. The environment may have its own physical laws, such as inverted gravity, but it should be perceived as an environment rather than a video game.
In addition, a VE should be responsive and have low latency.
Latency is the delay between a particular user action and the response of the artificial environment to that action. For example, when a user turns his or her head, the VR system should immediately generate a corresponding 3D image. This lag time is a crucial characteristic of how users perceive the VE. A study by the Massachusetts Institute of Technology shows that the human eye can detect an image seen for as little as 13 ms. That’s why the VE should reflect user actions in under 13 ms to make users feel they are in another environment rather than a “fake” one with visible bugs and errors. Otherwise, the sense of immersion will disappear.
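One way to reason about the 13 ms figure is as a budget that all pipeline stages must fit inside. The stage timings below are made-up numbers for illustration only:

```python
# Sketch: checking a motion-to-photon latency budget against the
# 13 ms perception threshold cited above. The individual stage
# timings are hypothetical numbers, not measurements.

PERCEPTION_THRESHOLD_MS = 13.0

def within_budget(stage_times_ms):
    """True if tracking + render + display time beats the threshold."""
    return sum(stage_times_ms) < PERCEPTION_THRESHOLD_MS

# Hypothetical pipeline: 2 ms tracking, 8 ms rendering, 2 ms display.
ok = within_budget([2.0, 8.0, 2.0])  # 12 ms total, inside the budget
```

If any stage grows so the total exceeds 13 ms, the lag becomes perceptible and immersion breaks.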
To be completely involved in the artificial environment, users should be able to interact with the VE. Early VR experiences only let users passively explore a virtual environment. The VE led them in predefined directions and offered no interactivity. Users couldn’t interact with virtual 3D objects or choose a direction to go on their own. They just consumed what the VE offered them, similar to watching a movie in VR with the ability to look around.
You may be familiar with VR roller coasters that entertain people in amusement parks. These attractions rely on the same type of technology. They provide a high level of immersion but offer no interactivity.
Interactivity depends on a wide range of factors. Jonathan Steuer lists the main three of them: mapping, range, and speed.
Mapping refers to the ability of the VR system to naturally generate the results of user actions within a VE. For example, on a virtual basketball court, a 3D ball should accurately respond to user actions. If a user virtually throws the ball, it must move in exactly the direction the user throws it.
Range refers to the maximum number of results from a particular user action. In other words, range means what users can do within the VE and how they can do it. In the case of the same virtual basketball court, range refers to how users can throw the ball, in which direction, and where this ball can hit depending on a particular user throw.
Speed refers to how fast virtual objects respond to user actions. Each response should look natural with minimal delay. Furthermore, all responses should appear in a way users can perceive.
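Mapping in the basketball example can be sketched as the ball inheriting the direction and speed of the user’s throw. This is a toy ballistic model, not a real physics engine:

```python
# Toy model of "mapping": the virtual ball's motion naturally follows
# the user's throw. Gravity and units are standard SI assumptions.

GRAVITY = (0.0, -9.81, 0.0)  # m/s^2, pulling down along y

def step_ball(position, velocity, dt):
    """Advance the ball one time step under simple ballistic motion."""
    x, y, z = position
    vx, vy, vz = velocity
    # Position moves along the current (thrown) velocity...
    new_position = (x + vx * dt, y + vy * dt, z + vz * dt)
    # ...while gravity bends the velocity downward over time.
    new_velocity = (vx + GRAVITY[0] * dt,
                    vy + GRAVITY[1] * dt,
                    vz + GRAVITY[2] * dt)
    return new_position, new_velocity

# A throw straight forward at 5 m/s from shoulder height (1.5 m):
pos, vel = step_ball((0.0, 1.5, 0.0), (0.0, 0.0, 5.0), dt=0.1)
```

The ball moves in the thrown direction while gravity gradually pulls it down, which is exactly the “natural result of a user action” that mapping demands.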
The way users explore the VE can influence their perception of artificial surroundings. Navigation is an important component of interactivity. An open-world VE provides an additional level of immersion since it replicates the way we explore our physical world. We can move in any direction and interact with any object.
In addition, we can modify our real-world surroundings by moving or replacing objects. Therefore, a virtual environment should provide users with the same capabilities. It should change in a predictable way and in accordance with user actions. The more freedom of movement and interaction a virtual environment provides, the more immersed users feel.
Apart from the level of immersion, VR can be divided into types depending on the method of virtual reality delivery. They include:
- Simulation-based
- Avatar-based
- Projection-based
- Desktop-based
- HMD-based
VR simulators replicate particular real-world activities and provide digital environments where users can safely perform specific actions without risking damage to physical property or human lives. That’s why simulators play a crucial role in employee training. VR simulators are often used in the healthcare, aviation, and automotive industries. Simulation-based virtual reality can be non-immersive, semi-immersive, or fully-immersive. For example, with flight simulators, aviators learn to pilot particular aircraft models, while with surgery simulators, medical students learn to properly conduct operations.
Avatar-based virtual reality mostly refers to video games where users get immersed in the digital environment and play the role of a predefined character. This character can have super abilities or replicate typical human behavior with its own personality. In the avatar-based VE, users explore the environment in the digital body of another person, an animal, or an unreal creature rather than as themselves. Avatar-based virtual reality can be non-immersive, semi-immersive, fully-immersive, or true immersive. Steven Spielberg’s well-known movie “Ready Player One” contains episodes demonstrating true immersive avatar-based virtual reality.
Projection-based VR is also known as the CAVE Automatic Virtual Environment, or CAVE. A group of scientists from the University of Illinois invented this system in 1992. A CAVE looks like a small theater without chairs for a few visitors. It typically has a cubic form and consists of three to six walls constructed from rear-projection displays. The floor and ceiling can also be projection displays. These displays have a high resolution in order to provide users with the maximum level of immersion.
To experience three-dimensional graphics inside the CAVE, users wear 3D glasses fitted with specific sensors. Thanks to infrared cameras inside the CAVE that track these sensors, users can walk around virtual 3D objects and see them floating in the air. Computers control both this aspect of the CAVE and the audio aspect. With a set of speakers inside, CAVEs provide 3D sound in addition to video.
Virtual reality based on desktops is also known as non-immersive VR. As mentioned above, this type of the immersive technology mostly relies on ordinary displays to provide users with a 3D digital environment. Read the “Non-immersive” section to learn more details.
Virtual reality based on head-mounted displays is also known as fully-immersive VR. As mentioned above, HMD-based VR mostly relies on headsets to immerse users in the virtual environment. Read the “Fully-immersive” section to learn more details.
Virtual reality requires hardware powerful enough to run VR apps. When it comes to desktops, they should have at least an Intel Core i5 processor and a high-end video card like the Nvidia GTX 970 or AMD Radeon R9 290. Virtual reality applications require more computing power than typical video games. If a particular PC can run 1080p games at 60 frames per second, it doesn’t mean it will easily run modern VR apps, since they need to render at 90 FPS.
However, according to virtual reality pioneer Dr. Frederick Brooks, displays should be able to project at least 20-30 FPS to provide a comfortable user experience.
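The frame rates above translate directly into per-frame rendering budgets, which is why VR is so much more demanding than desktop gaming:

```python
# Sketch: frame-time budgets for the frame rates discussed above.
# At 90 FPS a VR app has roughly 11.1 ms to render each frame,
# versus 16.7 ms for a 60 FPS desktop game.

def frame_budget_ms(fps):
    """Milliseconds available to render one frame at a given rate."""
    return 1000.0 / fps

for fps in (30, 60, 90):
    print(f"{fps} FPS -> {frame_budget_ms(fps):.1f} ms per frame")
```

And unlike a desktop game, a VR app must hit that tighter budget twice per frame, once for each eye’s view.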
When it comes to mobile devices, smartphones based on Android 4.5 or higher and iOS 9.1 or higher can run VR applications.
Computers are what make VR apps, and the digital environments they provide, possible. These VR applications can run on personal computers, smartphones, or video game consoles. These devices transmit 3D content retrieved from a pre-installed app to output devices like displays and speakers, and they process commands and data from both input and output devices.
VR apps can rely on different input devices to enable users to navigate and interact with a virtual environment.
VR input devices include:
- Mouse inputs
- Touch controllers
- 3 degrees-of-freedom (3DoF) controllers
- 6DoF wand controllers
- Smart gloves
- Motion trackers
Particular VR apps can rely on specific input devices. Some input devices, like controllers and smart gloves, can provide haptic feedback to better immerse users in a virtual environment.
Output devices are the hardware that lets users experience a digital environment. They include displays, VR bodysuits, and head-mounted displays. While displays provide only visual content, headsets can provide users with both visual and audio information. VR bodysuits, on the other hand, can ensure haptic feedback, motion capture, climate control, and even biometric feedback.
VR software manages the overall cooperation between input and output devices by generating commands for output devices and receiving requests from input devices. Moreover, virtual reality apps contain 3D content to be displayed. To create VR content, developers can use one of the suitable programming languages.
VR programming languages:
- Java (Android)
- C++ (Unreal)
- C# (Unity)
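The coordination between input and output devices described above can be sketched as a simple per-frame loop. All class and method names here are hypothetical placeholders, not an actual engine API:

```python
# Sketch of the input -> update -> output loop a VR runtime performs
# each frame. All classes and methods are illustrative placeholders.

class VRApp:
    def __init__(self):
        self.state = {"frame": 0, "events": []}

    def poll_input(self):
        # Read controller, tracker, and glove events (stubbed here).
        return ["head_moved"]

    def update(self, events):
        # Apply input events to the virtual environment state.
        self.state["frame"] += 1
        self.state["events"] = events

    def render_output(self):
        # Produce commands for displays, speakers, haptics (stubbed).
        return f"frame {self.state['frame']}: {self.state['events']}"

    def run_frame(self):
        events = self.poll_input()
        self.update(events)
        return self.render_output()
```

Engines like Unity and Unreal run a loop of this shape dozens of times per second; the developer mostly fills in the `update` and rendering logic.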
VR developers can build apps either from scratch or using a software development kit (SDK). SDKs are digital tools that provide drivers, a user interface, and access to the necessary graphical rendering libraries, thus significantly reducing the workload for developers. The most popular SDKs are Unreal Engine, Unity 3D, and CryEngine.
Unreal Engine is widely used for video game development. Many VR games are based on the latest, fourth generation of this engine, including Batman: Arkham VR, Farpoint, and Robo Recall. The free SDK enables programmers to create games with anything from simple 2D to high-end cinematic graphics.
Unity 3D is an effective integrated development environment (IDE) for the VR app development. Since it’s cross-platform, the same code can be used for Android, iOS, PC, web, and consoles. The tool is based on the C# programming language. In addition, Unity 3D has a wide approachable community that makes using this IDE easier.
CryEngine is a C#-based IDE that enables developers to create virtual reality applications with high-level graphics, realistic physics, and advanced animation. This tool has an intuitive user interface and is especially well known among VR game developers. Virtual reality games such as Robinson: The Journey and The Climb are based on CryEngine.
Besides gaming, virtual reality is also widely used in other vital industries. This technology is no longer just a way of amusement. It’s especially valuable for employee training in healthcare, military, and aviation. However, VR is also used for a wide range of other use cases.
In healthcare, human lives often depend on the doctor’s skills. Learning by doing is the most efficient way to improve those skills, but a tiny mistake can be fatal in medicine. Medical students use the immersive technology to learn the crucial skills necessary to properly conduct surgery without putting patients at risk. With VR, future doctors can efficiently learn human anatomy in an interactive way instead of memorizing text and 2D schemes and images. In addition, while wearing a headset, medical students can virtually teleport to the operating room and monitor a surgery in real time from the experienced surgeon’s point of view by watching a 360-degree video.
In the military, fear is one of the most dangerous enemies. On the battlefield, this emotion can make soldiers forget what they’ve learned. The only way out is to learn to fight fear. This is where VR comes in useful. Soldiers can use the immersive technology to practice the most dangerous situations or learn to use different weapons. For example, the British army uses VR to train recruits and teach them how to drive a tank.
The aviation industry uses VR to train flight deck, ground crew, and cabin crew. Future pilots often use hard simulators equipped with a set of displays that replicate aircraft windshields to learn how to pilot a certain airplane model. Engineers can use virtual reality apps to improve their aircraft repair skills. Flight attendants can use the immersive technology to learn how to behave in dangerous situations that threaten human lives.
Virtual reality used to be a staple of fantasy movies like The Matrix, where Neo could teleport to a virtual world. This technology has already become a part of our reality, except modern virtual environments can’t influence our real lives, unlike the Matrix, where Neo could even die for real. However, we may eventually have technology able to completely disconnect us from the physical world. Perhaps, in a couple of decades, VR will become a way of living multiple lives in virtual environments.