3DoF+ 360 video location-based asymmetric down-sampling for view synthesis to immersive VR video streaming

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, with the increasing demand for virtual reality (VR), experiencing immersive content in VR has become easier. However, processing 360 videos requires a tremendous amount of computation and bandwidth. Moreover, additional information, such as the depth of the video, is required to enjoy stereoscopic 360 content. Therefore, this paper proposes an efficient method for streaming high-quality 360 videos. To reduce the bandwidth required when streaming and synthesizing 3DoF+ 360 videos, which support limited movements of the user, a suitable down-sampling ratio and quantization parameter are derived from an analysis of the bitrate versus peak signal-to-noise ratio (PSNR) curve. High-efficiency video coding (HEVC) is used to encode and decode the 360 videos, and the view synthesizer produces the intermediate-view video, providing the user with an immersive experience.
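The abstract's core idea, choosing a down-sampling ratio by checking reconstruction quality against a PSNR target, can be illustrated with a minimal sketch. This is not the paper's code: the function names, the simple block-average down-sampler, and the 35 dB threshold are assumptions for illustration only; the actual pipeline uses HEVC and a view synthesizer.

```python
# Illustrative sketch (assumed, not the paper's implementation): pick the
# largest down-sampling ratio whose down/up-sampled reconstruction still
# meets a PSNR target.
import numpy as np

def psnr(orig: np.ndarray, recon: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized frames."""
    mse = np.mean((orig.astype(np.float64) - recon.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def down_up(frame: np.ndarray, ratio: int) -> np.ndarray:
    """Down-sample by averaging ratio x ratio blocks, then up-sample by pixel repetition."""
    h, w = frame.shape
    h2, w2 = h // ratio * ratio, w // ratio * ratio  # crop to a multiple of ratio
    blocks = frame[:h2, :w2].reshape(h2 // ratio, ratio, w2 // ratio, ratio)
    small = blocks.mean(axis=(1, 3))
    return np.repeat(np.repeat(small, ratio, axis=0), ratio, axis=1)

def best_ratio(frame: np.ndarray, ratios=(1, 2, 4), min_psnr=35.0) -> int:
    """Return the largest candidate ratio whose reconstruction meets the PSNR target."""
    best = 1
    for r in sorted(ratios):
        recon = down_up(frame, r)
        if psnr(frame[:recon.shape[0], :recon.shape[1]], recon) >= min_psnr:
            best = r
    return best
```

In the paper's setting the same trade-off is read off the bitrate-versus-PSNR curve per view location rather than per frame, which is what makes the down-sampling asymmetric across views.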

Original language: English
Article number: 3148
Journal: Sensors
Volume: 18
Issue number: 9
DOIs
State: Published - 18 Sep 2018
Externally published: Yes

Keywords

  • 3DoF+
  • HEVC
  • Multi-view video coding
  • View synthesis
  • Virtual reality
  • VSRS
