
Learning-Based Early Transform Skip Mode Decision for VVC Screen Content Coding

  • Sungkyunkwan University

Research output: Contribution to journal › Article › peer-review

Abstract

One of the design goals of the recently published international video coding standard, Versatile Video Coding (VVC/H.266), is the efficient coding of computer-generated video content (commonly referred to as screen content), which exhibits signal characteristics different from those of the usual camera-captured video (commonly referred to as natural content). VVC can perform the transform in multiple ways, including skipping the transform itself, and selecting the best option among the many combinations demands considerable computation. In this paper, we investigate a machine-learning-based early transform skip mode decision (ML-TSM) that determines whether or not to skip the transform at an early stage through a simple classification employing key features designed to reflect the characteristics of TSM blocks well. Compared with the VVC reference software 14.0, the proposed scheme is verified to reduce computational complexity by 11% and 4% with Bjontegaard delta bitrate (BDBR) increases of 0.34% and 0.23% under the all-intra (AI) and random-access (RA) configurations, respectively.
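The early-decision idea in the abstract can be illustrated with a minimal sketch. The features below (number of distinct sample values, edge-sample ratio) and the decision-stump thresholds are hypothetical stand-ins chosen for illustration; they are not the paper's actual feature set or trained model. The sketch only shows the general pattern: compute cheap block statistics, then classify "skip transform" vs. "apply transform" before the full rate-distortion search.

```python
# Illustrative sketch of an early transform-skip-mode (TSM) decision.
# All features and thresholds here are hypothetical, not taken from the
# paper: screen-content blocks that favour transform skip typically have
# few distinct sample values and sharp, repeated edges, which simple
# statistics can capture.

from dataclasses import dataclass
from typing import List


@dataclass
class BlockFeatures:
    num_distinct_values: int  # count of unique sample values in the block
    edge_sample_ratio: float  # fraction of horizontal neighbour pairs on strong edges


def extract_features(block: List[List[int]]) -> BlockFeatures:
    """Compute the hypothetical features from a 2-D luma block."""
    flat = [s for row in block for s in row]
    distinct = len(set(flat))
    # Crude horizontal-gradient edge measure.
    edges = sum(
        1
        for row in block
        for a, b in zip(row, row[1:])
        if abs(a - b) > 32
    )
    total_pairs = max(1, sum(len(row) - 1 for row in block))
    return BlockFeatures(distinct, edges / total_pairs)


def early_skip_transform(f: BlockFeatures) -> bool:
    """Return True to skip the transform early (hypothetical decision stump)."""
    # Screen-content-like block: a small palette of values with hard edges.
    return f.num_distinct_values <= 8 and f.edge_sample_ratio > 0.2


# Usage: a flat-colour screen-content block vs. a noisy camera-like block.
screen_block = [[0, 0, 255, 255]] * 4
natural_block = [[10, 37, 52, 80], [91, 23, 64, 41],
                 [5, 77, 120, 33], [60, 18, 99, 7]]
print(early_skip_transform(extract_features(screen_block)))   # → True
print(early_skip_transform(extract_features(natural_block)))  # → False
```

In practice such a classifier would be trained offline (the paper reports results against VTM 14.0); the point of the sketch is only that the decision can be made from a handful of cheap features before exhaustive mode testing.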

Original language: English
Pages (from-to): 6041-6056
Number of pages: 16
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 33
Issue number: 10
DOIs
State: Published - 1 Oct 2023
Externally published: Yes

Keywords

  • H266
  • screen content coding
  • transform skip mode
  • video coding
  • VVC
