mi360World
Scalable Dissemination and Navigation of Video 360 Content for Personalized Viewing
360 video is a form of virtual reality (VR) that
allows the viewer to experience media content in an immersive
fashion. In contrast to traditional video, 360 video is recorded
with a special camera that captures the complete surroundings
from almost all directions. Viewers can select the direction
they look in by using a pointing device on a regular display, or
through head movement with a head-mounted device. This freedom
to change viewing direction while the video plays lets a viewer,
for example, watch a sporting event from multiple perspectives
on the field. However, creating, storing, and disseminating 360
videos at large scale over the Internet poses significant
challenges. These challenges are the focus of this project,
which will develop a new system and framework, called
mi360World, to enable smooth delivery of and interaction with
360 video by any user on the Internet. If successful, this
project will significantly improve 360 video delivery and enable
new and much richer educational, training, and entertainment
experiences. It will also help train a new class of multimedia
systems researchers and practitioners.
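To make the viewing interaction concrete, here is a minimal sketch (not project code; the frame size and field-of-view values are illustrative assumptions) of how a viewing direction could be mapped to the region of an equirectangular frame that actually needs to be rendered:

```python
def viewport_region(yaw_deg, pitch_deg, frame_w=3840, frame_h=1920,
                    hfov_deg=100.0, vfov_deg=90.0):
    """Return (x, y, width, height) of the equirectangular pixel region
    roughly covered by the viewer's current field of view.

    Ignores the growing distortion near the poles; a real renderer
    reprojects per pixel rather than cropping a rectangle.
    """
    # Equirectangular projection maps longitude linearly to x, latitude to y.
    cx = ((yaw_deg + 180.0) % 360.0) / 360.0 * frame_w
    cy = (90.0 - pitch_deg) / 180.0 * frame_h
    w = hfov_deg / 360.0 * frame_w
    h = vfov_deg / 180.0 * frame_h
    x = (cx - w / 2.0) % frame_w        # may wrap around the frame edge
    y = min(max(cy - h / 2.0, 0.0), frame_h - h)
    return x, y, w, h

# Looking 45 degrees to the right and 10 degrees up:
print(viewport_region(45.0, 10.0))
```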
Delivering high-quality, personalized 360 video over the
Internet to a globally distributed set of users is an unsolved
scientific problem that entails the following challenges: 1)
ultra-high bandwidth; 2) ultra-low delay; 3) view adaptation (to
user head movement); 4) complex video metadata and delivery; and
5) video quality of experience (QoE). Traditional video QoE has
seen extensive research over the years; however, what
contributes to 360 video QoE is much less understood and will
require defining and measuring new metrics.
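A back-of-the-envelope calculation illustrates the bandwidth challenge; the bitrate and field-of-view numbers below are illustrative assumptions, not project measurements:

```python
# All numbers are illustrative assumptions.
full_frame_mbps = 50.0            # assumed bitrate of a full equirectangular stream
hfov_deg, vfov_deg = 100.0, 90.0  # assumed head-mounted display field of view

# Fraction of the full frame visible at any instant (equirectangular layout).
visible_fraction = (hfov_deg / 360.0) * (vfov_deg / 180.0)

print(f"visible fraction:      {visible_fraction:.2f}")  # ~0.14
print(f"viewport-only bitrate: {full_frame_mbps * visible_fraction:.1f} Mbps "
      f"(vs {full_frame_mbps:.0f} Mbps for the full frame)")
```

Streaming only the viewport (plus a safety margin) saves most of the bandwidth, but only if the sender can predict where the viewer will look next; that prediction is exactly what the navigation graphs introduced below are meant to support.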
The proposed mi360World system incorporates three major research
thrusts to address these challenges. The first, a video-creation
thrust, enables personalized viewing by generating navigation
graphs and cinematographic rules while maintaining high QoE and
reducing cybersickness. The construction of navigation graphs
and the inclusion of cinematographic rules are the main
innovations of this project; they are encapsulated in a
three-layered metadata representation of the 360 video,
comprising a transport layer, a semantic layer, and an
interactive storytelling layer.
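The abstract does not specify a concrete format for navigation graphs or the metadata layers, so the following is only a plausible sketch: nodes represent views of temporal segments, edges carry viewer-transition probabilities, and each node holds the three metadata layers. Every field name here is a hypothetical illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ViewNode:
    """Hypothetical navigation-graph node: one view of one temporal
    segment, annotated with the three metadata layers named above."""
    segment: int                                     # temporal segment index
    view_id: str                                     # viewport/tile identifier
    transport: dict = field(default_factory=dict)    # transport layer: URLs, bitrates
    semantic: dict = field(default_factory=dict)     # semantic layer: objects, events
    story: dict = field(default_factory=dict)        # storytelling layer: cinematographic rules
    transitions: dict = field(default_factory=dict)  # view_id -> probability of going there next

# One node of the sporting-event example from the introduction:
goal_end = ViewNode(
    segment=0, view_id="goal-end",
    transport={"url": "seg0/goal-end.mp4", "kbps": 8000},
    semantic={"objects": ["ball", "goalkeeper"]},
    story={"rule": "prefer cuts that keep the ball in frame"},
    transitions={"midfield": 0.7, "stands": 0.3},
)
```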
The second thrust focuses on scalable distribution of 360 videos
to a global set of diverse viewers, utilizing navigation graphs
and cinematographic rules for highly efficient
transition-predictive prefetching and caching.
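Again as an illustration only (the project's actual algorithms are the subject of the research), a minimal sketch of transition-predictive prefetching: rank candidate next views by transition probability and fetch the most likely ones that fit a bandwidth budget.

```python
def prefetch_plan(transitions, segment_cost_kbps, budget_kbps):
    """Greedy transition-predictive prefetch (illustrative only): fetch
    the most likely next views first, as long as they fit the budget.

    transitions:       {view_id: probability the viewer moves there next}
    segment_cost_kbps: {view_id: bitrate of that view's next segment}
    """
    plan, spent = [], 0.0
    ranked = sorted(transitions.items(), key=lambda kv: kv[1], reverse=True)
    for view_id, _prob in ranked:
        cost = segment_cost_kbps.get(view_id, 0.0)
        if spent + cost > budget_kbps:
            continue  # too expensive; a cheaper, less likely view may still fit
        plan.append(view_id)
        spent += cost
    return plan

# Prefetch within a 10 Mbps budget: the 70%-likely view wins, the rest is skipped.
print(prefetch_plan({"midfield": 0.7, "stands": 0.3},
                    {"midfield": 8000, "stands": 6000},
                    budget_kbps=10000))
# -> ['midfield']
```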
The third thrust focuses on QoE, with the goal of devising novel
QoE metrics and evaluation methods to assess cybersickness.
System architectures and algorithms will be evaluated
extensively through simulation, emulation, and benchmarking on
testbeds to gauge the success of the proposed research.
This material is based upon work supported by the National Science Foundation under Grant No. CNS-1901137. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.