K-Glass 3 allows users to write text or keywords for Internet browsing, providing a virtual keyboard for text.
Seoul: Scientists have developed intelligent stereo-vision
glasses that allow users to type a message or enter keywords for browsing the
Internet, offering a virtual keyboard for text and even one for a piano.
K-Glass is a line of smart glasses augmented-reality (AR) capable,
first developed by the Korea Advanced Institute of Science and Technology
(KAIST) in 2014, with a second version released in 2015.
The latest version, which the researchers call KAIST
K-Glass 3, allows users to type a message or enter keywords for Internet
browsing, providing a virtual keyboard for text and even one for a piano.
Current portable head-mounted displays (HMDs) suffer
from poor user interfaces, short battery life, and heavy weight, the
researchers said. Some HMDs, such as Google Glass, use a touch panel and voice
commands as an interface, but these are considered extensions of smartphones and
are not optimized for portable smart glasses.
Recently, gaze recognition was proposed for HMDs, including K-Glass 2,
but gaze alone is not enough to realize a natural user interface (UI)
and user experience (UX), such as recognition of user gestures, because of its
limited interactivity and lengthy gaze calibration, which can take several minutes.
As a solution, Professor Hoi-Jun Yoo and his team at the KAIST
Department of Electrical Engineering developed K-Glass 3 with a low-power
natural UI and UX processor that allows convenient typing and pointing on
the HMD screen using bare hands alone.
This processor consists of a pre-processing core that implements
stereo vision, seven deep-learning cores that accelerate scene recognition
in real time within 33 milliseconds, and one rendering engine for the
display.
The stereo-vision camera, located on the front of K-Glass 3,
works in a way similar to three-dimensional (3D) sensing in human vision. The two
lenses of the camera, horizontally separated like the left and right eyes that
produce depth perception, take pictures of the same
objects or scenes; the two different images are then combined to extract
spatial depth information, from which the 3D environment can be reconstructed.
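The depth extraction described above rests on stereo triangulation: a point that appears shifted (the disparity) between the left and right images is closer to the camera when the shift is larger. A minimal sketch of the standard disparity-to-depth relation, Z = f · B / d, is shown below; the focal length and lens baseline used are hypothetical illustration values, not K-Glass 3 specifications.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from stereo disparity: Z = f * B / d.

    disparity_px: per-pixel horizontal shift between the left and right images (pixels)
    focal_px:     camera focal length expressed in pixels
    baseline_m:   horizontal distance between the two lenses (meters)
    """
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)  # zero disparity -> point at infinity
    np.divide(focal_px * baseline_m, d, out=depth, where=d > 0)
    return depth

# Hypothetical parameters: 700 px focal length, 6 cm baseline.
disparity = np.array([[70.0, 35.0, 0.0]])
print(depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.06))
# A 70 px disparity maps to 700 * 0.06 / 70 = 0.6 m; halving the
# disparity doubles the depth to 1.2 m.
```

In a real pipeline the disparity map itself comes from matching the two rectified images, which is the computationally heavy step the glasses' pre-processing core accelerates.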
The camera's vision algorithm has an energy efficiency of 20
milliwatts on average, allowing the glasses to operate for more than 24 hours
without interruption. The research team adopted multi-core deep-learning
technology dedicated to mobile devices to recognize user
gestures based on the depth information.
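K-Glass 3's actual gesture recognition runs on its dedicated deep-learning cores, but the basic idea of using depth to find the user's hand can be illustrated with a much simpler sketch: since the hand is typically the surface closest to the glasses, thresholding the depth map isolates it, and the topmost hand pixel gives a crude fingertip estimate. All thresholds and the toy depth map below are assumptions for illustration only.

```python
import numpy as np

def segment_hand(depth_m, max_hand_depth=0.5):
    """Naive hand segmentation: keep pixels nearer than a depth cutoff,
    assuming the hand is the closest surface in front of the glasses."""
    return (depth_m > 0) & (depth_m < max_hand_depth)

def fingertip(mask):
    """Return the topmost masked pixel as a crude fingertip estimate."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                    # no hand in view
    i = np.argmin(ys)                  # smallest row index = highest pixel
    return (int(ys[i]), int(xs[i]))

# Toy 3x3 depth map in meters: a "finger" in the middle column,
# background wall at 2 m.
depth = np.array([[2.0, 0.30, 2.0],
                  [2.0, 0.35, 2.0],
                  [2.0, 2.00, 2.0]])
print(fingertip(segment_hand(depth)))  # (0, 1)
```

Tracking such a fingertip point frame to frame is what lets a depth-sensing HMD turn bare-hand pointing into virtual keyboard presses.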