LG has joined forces with a team of students at Sogang University to develop a new AI aimed at improving user experiences with VR technology.
Although VR technology can immerse a user in a virtual world like nothing else, that immersion comes with a serious drawback: any latency between a user’s head movements and what their Head-Mounted Display (HMD) shows disrupts the systems the brain uses to detect and register motion, and the resulting internal confusion leads to motion sickness. Because a VR headset has to render its environment in real time as the user moves their head, and even the slightest skip or stutter causes discomfort, the problem is extremely difficult for VR developers to solve.
A number of solutions have been attempted, all with limited success. Developers have tried boosting display resolution to reduce the motion blur that can also cause sickness, but the boost demands more processing power, increasing latency and creating new problems.
That’s where the AI comes in.
“The core of the newly developed technology is an algorithm that can generate ultra-high resolution images from low-resolution ones in real time,” says reporter Cho Jin-young in his article on Business Korea. “Deep learning technology makes this conversion possible without using external memory devices.”
The newly developed system is a smarter, more active take on developers’ earlier attempts to reduce motion blur: rather than adding raw processing power, the AI makes better use of existing resources, dynamically loading objects and images in a higher-resolution format based on what a user is looking at, with the deep-learning conversion requiring no external memory device. Jin-young says the technology’s optimized power usage and adaptive algorithm will allow for higher resolutions on mobile VR products. According to the researchers, the AI can reduce latency and motion blur to “one fifth or less” while running at peak performance.
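The article doesn’t publish the algorithm itself, but the core idea of upscaling only what the user is looking at can be sketched roughly as follows. Here `upscale_region` is a hypothetical stand-in (plain nearest-neighbor repetition) for the team’s deep-learning super-resolution model, and the gaze-window logic is an assumption for illustration only:

```python
import numpy as np

def upscale_region(region: np.ndarray, factor: int) -> np.ndarray:
    # Hypothetical stand-in for the deep-learning super-resolution
    # model: plain nearest-neighbor upscaling.
    return region.repeat(factor, axis=0).repeat(factor, axis=1)

def foveated_patch(frame: np.ndarray, gaze_xy: tuple[int, int],
                   radius: int, factor: int) -> np.ndarray:
    # Cut a window around the gaze point and upscale only that
    # region, leaving the rest of the low-resolution frame alone.
    x, y = gaze_xy
    h, w = frame.shape[:2]
    top, bottom = max(0, y - radius), min(h, y + radius)
    left, right = max(0, x - radius), min(w, x + radius)
    return upscale_region(frame[top:bottom, left:right], factor)
```

A real system would run the model on the headset every frame; the point here is only the control flow — enhance where the user looks rather than everywhere at once.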
The Sogang University team, led by Professor Kang Seok-ju of the Department of Electronic Engineering, has also agreed to develop a device to measure existing motion blur and latency in VR devices. The device combines precision motors meant to replicate the movements of the human neck with an optical recording system modeled on human vision. It could give VR engineers concrete feedback on where motion blur and latency have the greatest impact on their HMDs, and with it a better idea of where their devices need work.
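The article doesn’t describe how the device computes its measurements, but one common way to estimate latency from such a rig — cross-correlating the motor’s commanded motion against the optically recorded display response — can be sketched like this. The function and signal names are illustrative assumptions, not the team’s design:

```python
import numpy as np

def estimate_latency_ms(commanded: np.ndarray, observed: np.ndarray,
                        sample_rate_hz: float) -> float:
    # Latency estimate: the lag (in ms) that best aligns the motor's
    # commanded motion with the display response seen by the camera.
    commanded = commanded - commanded.mean()
    observed = observed - observed.mean()
    corr = np.correlate(observed, commanded, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(commanded) - 1)
    return 1000.0 * lag_samples / sample_rate_hz

# Example: a 5 Hz "neck" oscillation whose display response trails
# the command by 20 samples at 1 kHz, i.e. 20 ms of latency.
t = np.arange(1000) / 1000.0
command = np.sin(2 * np.pi * 5 * t)
response = np.roll(command, 20)  # synthetic 20-sample delay
print(estimate_latency_ms(command, response, 1000.0))  # 20.0
```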
While the team’s achievements are impressive, Seok-ju believes the project’s greatest stride is the developed system’s low resource consumption, which lets VR systems reach new heights without relying on a high-end graphics card.
This advancement is the latest in a line of software making VR technology more accessible and less expensive than ever, following a project by Google dubbed 6-DoF, or 6 Degrees of Freedom, which concluded last April and offers new resource-light controller tracking capabilities for VR systems. That system was trained on over 540,000 stereo-image pairs of users performing various movements in different lighting conditions, captured by just two small fisheye cameras. The researchers noted that tracking a user’s arm and hand movements helped the network determine the controller’s position more accurately. The proposed model achieves an average error margin of only 33.5 millimeters, or 1.3 inches – an impressive figure for a system so light on hardware.
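For context on the reported figure, the usual metric behind a number like this — the mean Euclidean distance between predicted and ground-truth controller positions — can be computed as below. The function name and sample positions are made up for illustration, not taken from Google’s code:

```python
import numpy as np

def mean_tracking_error_mm(predicted: np.ndarray, truth: np.ndarray) -> float:
    # Average Euclidean distance (in mm) between predicted and true
    # 3-D controller positions -- the kind of metric behind the
    # reported 33.5 mm average error.
    return float(np.linalg.norm(predicted - truth, axis=1).mean())

# Toy check with made-up positions offset by a known 5 mm error:
truth = np.zeros((4, 3))
predicted = truth + np.array([3.0, 4.0, 0.0])  # 3-4-5 triangle: 5 mm off
print(mean_tracking_error_mm(predicted, truth))  # 5.0
```

Dividing 33.5 mm by 25.4 mm per inch gives roughly 1.32, which is where the article’s 1.3-inch figure comes from.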
The continued development of software aimed at reducing the clumsiness and high entry cost of virtual reality is good news for any consumers previously hesitant to make the investment. As the VR market continues to grow, they can expect similar projects to keep chipping away at the physical limitations surrounding this still-young form of media interaction.