
Why Do So Many AR/VR Companies Rely So Heavily on CG?


Have you noticed that a lot of AR/VR companies rely on CG (computer-generated graphics) to show their experience? Sometimes the rigors of product development require industry visionaries to dream up simple ways to share the experiences they intend to deliver.

The key with using CG is to accurately represent the features and capabilities of the product under development, so that the actual experience people will have is accurately (if not precisely) demonstrated. When used appropriately, CG enables innovators to show people what their products can do and the benefits they will soon enjoy.

At LAFORGE we use CG as a tool to simulate the experience we intend to deliver as realistically as possible. Our goal is to reproduce the actual size and feel of that experience as seen on a 13” or 15” laptop screen. We are showing the effects of using the central and near-peripheral regions of the field of vision (FOV). If you look at the video below, you will notice that we have placed our user interface (UI) outside of these two regions, as this is where the brain takes in and processes most of the information viewed by the eyes. Putting a permanent UI element such as a menu here would be distracting. However, placing something relevant, like a waypoint for your car in a parking lot, would be helpful, as it acts as a 'visual shortcut' for your brain.
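To make the placement idea concrete, here is a minimal sketch of how a renderer might decide which FOV region a UI element lands in relative to the wearer's gaze. The region boundaries, function names, and vectors below are illustrative assumptions, not LAFORGE's actual implementation.

```python
import math

# Approximate half-angles (degrees from the line of sight) for the FOV
# regions discussed above. These numbers are illustrative assumptions,
# not values published by LAFORGE.
FOV_REGIONS = [
    ("central", 5.0),
    ("near-peripheral", 30.0),
    ("mid-peripheral", 60.0),
    ("far-peripheral", 110.0),
]

def classify_fov_region(gaze_dir, element_dir):
    """Return the FOV region an element falls into, given unit vectors for
    the wearer's gaze direction and the direction to the element."""
    dot = sum(g * e for g, e in zip(gaze_dir, element_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    for region, half_angle in FOV_REGIONS:
        if angle <= half_angle:
            return region
    return "outside the field of vision"

# A persistent menu pinned about 40 degrees off-axis stays clear of the
# central and near-peripheral regions.
offset = math.radians(40)
print(classify_fov_region((0.0, 0.0, 1.0), (math.sin(offset), 0.0, math.cos(offset))))
# -> mid-peripheral
```

On this model, a menu anchored roughly 40 degrees off-axis is visible without competing with whatever the wearer is actually fixating on.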

To understand how this CG rendering is supposed to work, follow the instructions below.

- Press play on the video

- Look at the word 'central'

- Glance at the phrase 'near-peripheral'

- You should be able to read it

- Then, without moving your eyes, try to read 'mid-peripheral'. You'll find that it is blurred. That's because it is now in your far-peripheral vision, where motion, well-known structures, and form are detectable but details such as color and written text are compromised.

[Infographic: Human FOV — the regions of the human field of vision and their purpose]

Many companies in the AR/VR space are striving toward the widest FOV possible for a more immersive experience. This makes sense in VR, but when it comes to AR, many hardware developers are not looking at how the human eye works. Consider the infographic above: it shows that the farther content sits from the center of the FOV, the less information a person's brain is able to process from it.

If you don't believe this, then try it yourself. Place your hand in front of you and count your fingers. Easy, right? Now move your hand about 6 inches to the left or right. Though you can still see your hand, it is difficult to count your fingers while looking straight ahead; you have to make a slight glance to see them (this is about where our UI sits, by the way). Now move your hand another two feet in the same direction. Though you can still see your hand, it is now blurry and impossible to count your fingers while looking straight ahead. You can still glance over to count them, but your eyes will begin to hurt as their muscles strain, and most people's heads will automatically turn to see the hand.
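To put rough numbers on that experiment: assuming the hand is held about 24 inches from the eyes (an illustrative assumption, not a measured figure), the sideways offsets translate into visual angles roughly as follows.

```python
import math

def visual_angle_deg(offset_in, distance_in=24.0):
    """Visual angle, in degrees, of a point offset sideways from the line of
    sight, for a hand held distance_in inches from the eyes (assumed ~24 in)."""
    return math.degrees(math.atan2(offset_in, distance_in))

print(round(visual_angle_deg(6), 1))       # ~14.0 deg: readable with a slight glance
print(round(visual_angle_deg(6 + 24), 1))  # ~51.3 deg: deep into the periphery
```

On the region boundaries assumed earlier, roughly 14 degrees off-axis still falls in the near periphery, which matches the slight glance needed to count fingers, while around 50 degrees the hand is far enough out that detail is lost.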

That last scenario is exactly what you want to avoid in an AR headset: the eyestrain will eventually lead to neck strain or a headache, something no one wants.

In short, our rationale with the videos we produce is authenticity: we are using CG to help us demonstrate the true capabilities and intended purpose of our products, and in so doing, to extract useful data points on the user experience so that we can improve the way people will use and interact with our products when they receive them.

