Talk in Expo Talk Panel: Cool Moves: Decoupling physical motions and self-avatar motions in Virtual Reality
Talk 1
Eyal Ofek
Moderators: Alice Zheng · Hsuan-Tien (Tien) Lin
Virtual Reality (VR) allows users to immerse themselves in imaginative worlds and take on appearances and behaviors unlike anything in the real world. Users simply put on a Head-Mounted Display (HMD), pick up hand-held controllers, and enter a virtual environment where they can see themselves, often represented by a virtual avatar, and interact with the content displayed around them using natural motion. This synchronized mapping between the user's body and the self-avatar's motions enhances embodiment (i.e., a sense of ownership of the virtual avatar), and the direct control of the motions develops a sense of agency (i.e., a feeling of control over the avatar's actions and their consequences). Both are important characteristics of immersive and captivating VR experiences.
However, while many authored media such as movies, games, and social applications have used character motion stylization to generate impressive character motions beyond the gamer's/user's/viewer's abilities, Virtual Reality is bereft of such motion accentuation for self-avatars. The user may find herself in a novel imaginative environment where the laws of physics behave differently, yet her avatar's motion follows her own body motions, which are constrained by physical laws, her experience, her environment, social limitations, physical limitations, and more.
I will present CoolMoves, a novel proof-of-concept system for expressive and accentuated full-body motion synthesis of a user's virtual avatar in real time, while maintaining high embodiment and agency. CoolMoves uses the limited sensing available on current consumer-grade VR systems, specifically headset and hand positions, and synthesizes full-body motions from it. Motion trajectories for each joint are synthesized by a processing pipeline that takes an existing motion capture database as input and, in real time, loosely matches segments of the user's motion against it, then accentuates and blends the matched segments to animate the user's avatar in VR. The system can extrapolate free-ranging motions that do not originally appear in the motion database, enabling the user to be fully expressive in her motion while maintaining the requested stylization.
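To make the matching-and-blending idea concrete, here is a minimal per-frame sketch of such a pipeline in Python. It is not the CoolMoves implementation: the function name `synthesize_pose`, the array shapes, the inverse-distance blend weights, and the centroid-based accentuation are all illustrative assumptions; the actual system works on motion segments with its own matching and stylization logic.

```python
import numpy as np

# Assumed layout of a hypothetical mocap database:
# db_inputs: (N, W, 3, 3) - for each of N database poses, the trailing
#            window of W frames of tracked inputs (head, left hand,
#            right hand), each a 3D position.
# db_poses:  (N, J, 3)    - the corresponding full-body pose (J joints).

def synthesize_pose(user_window, db_inputs, db_poses, k=5, accent=1.2):
    """Loosely match the user's recent head/hand trajectory against the
    database and blend the k nearest full-body poses into one output pose.

    user_window: (W, 3, 3) recent head/hand positions from HMD/controllers.
    accent: values > 1.0 push joints away from the pose centroid, a crude
            stand-in for motion accentuation.
    """
    # Distance between the user's input window and every database window.
    # The match is "loose": only head/hand trajectories are compared.
    diffs = db_inputs - user_window[None]               # (N, W, 3, 3)
    dists = np.sqrt((diffs ** 2).sum(axis=(1, 2, 3)))   # (N,)

    # Keep the k nearest matches; closer matches get larger blend weights.
    idx = np.argsort(dists)[:k]
    w = 1.0 / (dists[idx] + 1e-6)
    w /= w.sum()

    # Weighted blend of the matched full-body poses.
    pose = (w[:, None, None] * db_poses[idx]).sum(axis=0)  # (J, 3)

    # Accentuate: exaggerate the blended pose about its centroid.
    centroid = pose.mean(axis=0, keepdims=True)
    return centroid + accent * (pose - centroid)

# Toy usage: random stand-in data, queried once per rendered frame.
rng = np.random.default_rng(0)
db_inputs = rng.normal(size=(1000, 10, 3, 3))
db_poses = rng.normal(size=(1000, 17, 3))
frame = synthesize_pose(rng.normal(size=(10, 3, 3)), db_inputs, db_poses)
print(frame.shape)  # (17, 3)
```

Because the output is a weighted blend rather than a single database lookup, such a scheme can produce poses that never appear verbatim in the database, which is the intuition behind the extrapolation to free-ranging motions described above.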