This project explores a novel approach to the compression of screen content data, taking into account different facets of the image properties and the specific objective of the compression. Unlike traditional photo or broadcast video data, the content of screen content images is highly diverse in its statistical properties. Certain regions within the images are often characterized by two typical properties: a limited number of colors and repeating patterns. Studies have shown that conventional compression methods cannot store screen content data efficiently, even when special tools are included. Much more successful is a method based on ideal data coding, whose success depends on optimal modeling of the probability distributions of the pixel symbols. A model approach for this new compression method has already been implemented as a prototype, but it is still limited to lossless compression and achieves high compression efficiency only for image content with certain properties. The project investigates new methods for better modeling of the probability distributions of the pixel symbols using machine learning. Overall, the project pursues several approaches:
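The role of probability modeling in ideal data coding can be made concrete via the Shannon code length that an arithmetic coder approaches: each pixel symbol s costs about -log2 p(s) bits under the model's distribution. A minimal sketch (the toy data and distributions are purely illustrative, not from the project):

```python
import math

def code_length_bits(symbols, probs):
    """Ideal (arithmetic-coding) cost of a symbol sequence in bits:
    each symbol s costs -log2 p(s) under the model's distribution."""
    return sum(-math.log2(probs[s]) for s in symbols)

# Toy example: a screen-content-like region with few colors.
# A model that concentrates probability on the frequent color
# compresses far better than a uniform model.
pixels = ["white"] * 90 + ["black"] * 10
uniform = {"white": 0.5, "black": 0.5}
adapted = {"white": 0.9, "black": 0.1}
print(code_length_bits(pixels, uniform))  # 100.0 bits
print(code_length_bits(pixels, adapted))  # ~46.9 bits
```

The better the estimated distribution matches the true pixel statistics, the shorter the code; this is why the project focuses on modeling these distributions.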
Och, Hannah; Uddehal, Shabhrish; Strutz, Tilo; Kaup, André (2024)
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'24), 14-19 April 2024, Seoul, South Korea, 2024, pp. 3670-3674.
Enhanced color palette modeling for lossless screen content
Soft context formation is a lossless image coding method for screen content. It encodes images pixel by pixel via arithmetic coding, collecting statistics for probability distribution estimation. Its pipeline comprises three stages: a context-model-based stage, a color palette stage, and a residual coding stage. Each stage is employed only if the previous stage cannot be used because the necessary statistics, e.g. colors or contexts, have not been learned yet. We propose the following enhancements: First, information from previous stages is used to remove redundant palette entries and prediction errors in subsequent stages. Additionally, implicitly known stage decision signals are no longer explicitly transmitted. These enhancements lead to an average bit rate decrease of 1.16% on the evaluated data. Compared to FLIF and HEVC, the proposed method needs roughly 0.28 and 0.17 bits per pixel less on average for 24-bit screen content images, respectively.
DOI: 10.1109/ICASSP48485.2024.10446445
Peer Reviewed
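The three-stage fallback cascade described in the abstract above can be sketched as follows; the function and data structures are hypothetical simplifications for illustration, not the actual SCF implementation (statistics updates after coding are omitted):

```python
def encode_pixel(color, context, context_model, palette):
    """Choose the coding stage for one pixel: each later stage is
    used only when the previous one lacks the needed statistics."""
    # Stage 1: context model, if this neighborhood pattern has
    # already been observed together with this color.
    counts = context_model.get(context)
    if counts and color in counts:
        return ("context", color)
    # Stage 2: color palette, if the color itself is already known.
    if color in palette:
        return ("palette", palette.index(color))
    # Stage 3: residual coding for entirely new colors; the color
    # becomes available to the palette stage afterwards.
    palette.append(color)
    return ("residual", color)

context_model = {("w", "w"): {"w": 5}}
palette = ["w"]
print(encode_pixel("w", ("w", "w"), context_model, palette))  # ('context', 'w')
print(encode_pixel("w", ("b", "b"), context_model, palette))  # ('palette', 0)
print(encode_pixel("b", ("w", "w"), context_model, palette))  # ('residual', 'b')
```

The paper's enhancements act on exactly these hand-offs: information learned in an earlier stage prunes redundant entries in later stages, and the stage decision is not signaled when it is implicit.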
Och, Hannah; Uddehal, Shabhrish; Strutz, Tilo; Kaup, André (2024)
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'24), 14-19 April 2024, Seoul, South Korea, 2024, pp. 3685-3689.
Improved screen content coding in VVC using soft context formation
Screen content images typically contain a mix of natural and synthetic image parts. Synthetic sections usually consist of uniformly colored areas and repeating colors and patterns. In the VVC standard, these properties are exploited using Intra Block Copy and Palette Mode. In this paper, we show that pixel-wise lossless coding can outperform lossy VVC coding in such areas. We propose an enhanced VVC coding approach for screen content images using the principle of soft context formation. First, the image is separated into two layers in a block-wise manner using a learning-based method with four block features. Synthetic image parts are coded losslessly using soft context formation, the rest with VVC. We modify the available soft context formation coder to incorporate information gained from the decoded VVC layer for improved coding efficiency. Using this approach, we achieve Bjontegaard-Delta-rate gains of 4.98% on the evaluated data sets compared to VVC.
DOI: 10.1109/ICASSP48485.2024.10447125
Peer Reviewed
Uddehal, Shabhrish; Strutz, Tilo; Och, Hannah; Kaup, André (2023)
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'23), 4-10 June 2023, Rhodes Island, Greece, 2023.
Image Segmentation for Improved Lossless Screen Content Compression
In recent years, it has been found that screen content images (SCI) can be effectively compressed based on appropriate probability modelling and suitable entropy coding methods such as arithmetic coding. The key objective is determining the best probability distribution for each pixel position. This strategy works particularly well for images with synthetic (textual) content. However, screen content images usually consist not only of synthetic but also of pictorial (natural) regions. These images require diverse models of probability distributions to be optimally compressed. One way to achieve this goal is to separate synthetic and natural regions. This paper proposes a segmentation method that identifies natural regions, enabling better adaptive treatment. It supplements a compression method known as Soft Context Formation (SCF) and operates as a pre-processing step. If at least one natural segment is found within the SCI, the image is split into two subimages (natural and synthetic parts) and the process of modelling and coding is performed separately for both. For SCIs with natural regions, the proposed method achieves a bit-rate reduction of up to 11.6% and 1.52% with respect to HEVC and the previous version of SCF, respectively.
Peer Reviewed
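The block-wise separation into synthetic and natural parts can be illustrated with a toy classifier; the distinct-color threshold below is an assumed stand-in for the paper's actual segmentation features, chosen only because few distinct colors are a typical property of synthetic regions:

```python
import numpy as np

def synthetic_mask(img, block=4, max_colors=4):
    """Mark blocks with few distinct values as 'synthetic'
    (few colors, typical for text/graphics); everything else
    is treated as 'natural'. Illustrative rule only."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = img[y:y + block, x:x + block]
            if len(np.unique(blk)) <= max_colors:
                mask[y:y + block, x:x + block] = True
    return mask

# Left half: flat "text-like" region; right half: many-valued "natural" region.
img = np.zeros((8, 8), dtype=int)
img[:, 4:] = np.arange(32).reshape(8, 4) + 1
mask = synthetic_mask(img)
print(mask[:, :4].all(), mask[:, 4:].any())  # True False
```

Given such a mask, the two subimages could then be handed to SCF and a pictorial coder separately, mirroring the pre-processing step described in the abstract.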
Och, Hannah; Strutz, Tilo; Kaup, André (2021)
VCIP 2021, Munich, 5-8 December 2021.
Optimization of Probability Distributions for Residual Coding of Screen Content
Probability distribution modeling is the basis for most competitive methods for lossless coding of screen content. One such state-of-the-art method is known as soft context formation (SCF). For each pixel to be encoded, a probability distribution is estimated based on the neighboring pattern and the occurrence of that pattern in the already encoded image. Using an arithmetic coder, the pixel color can thus be encoded very efficiently, provided that the current color has been observed before in association with a similar pattern. If this is not the case, the color is instead encoded using a color palette or, if it is still unknown, via residual coding. Both palette-based coding and residual coding have significantly worse compression efficiency than coding based on soft context formation. In this paper, the residual coding stage is improved by adaptively trimming the probability distributions for the residual error. Furthermore, an enhanced probability modeling for indicating a new color, depending on the occurrence of new colors in the neighborhood, is proposed. These modifications result in an average bitrate reduction of up to 2.9%. Compared to HEVC (HM-16.21 + SCM-8.8) and FLIF, the improved SCF method saves on average about 11% and 18% rate, respectively.
DOI: 10.1109/VCIP53242.2021.9675326
Peer Reviewed
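The idea of trimming the residual distribution can be sketched as restricting the residual alphabet to a plausible range and renormalizing; the specific range-based rule below is an illustrative assumption, not the paper's exact adaptive criterion:

```python
def trimmed_distribution(counts, max_residual):
    """Drop residual symbols outside [-max_residual, max_residual]
    and renormalize, so the arithmetic coder assigns no probability
    mass to errors that cannot occur in the current context."""
    support = {r: c for r, c in counts.items() if abs(r) <= max_residual}
    total = sum(support.values())
    return {r: c / total for r, c in support.items()}

# Toy residual histogram; trimming concentrates the probability mass.
counts = {-2: 1, -1: 3, 0: 8, 1: 3, 2: 1, 5: 1}
probs = trimmed_distribution(counts, max_residual=2)
print(probs[0])  # 0.5
```

Concentrating probability on the residuals that can actually occur shortens their arithmetic-coded representation, which is the effect the residual-coding improvement targets.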
Strutz, Tilo; Möller, Phillip (2020)
IEEE Transactions on Multimedia 22 (5), pp. 1126-1138.
Screen content compression based on enhanced soft context formation
The compression of screen content has attracted the interest of researchers in recent years as the market for transferring data from computer displays is growing. It has already been shown that especially those methods which are able to predict the probability distribution of the next pixel values can effectively compress screen content. This prediction is typically based on a kind of learning process: the predictor learns the relationship between probable pixel colours and the surrounding texture. Recently, an effective method called 'soft context formation' (SCF) was proposed which achieves much lower bitrates for images with fewer than 8,000 colours than other state-of-the-art compression schemes. This paper presents an enhanced version of SCF. The average lossless compression performance has increased by about 5% for images with fewer than 8,000 colours and by about 10% for images with up to 90,000 colours. In comparison to FLIF, FP8v3, and HEVC (HM-16.20 + SCM-8.8), it achieves savings of about 33%, 4%, and 11% on average. The improvements compared to the original version result from various modifications. The largest contribution comes from the local estimation of the probability distribution for unpredictable colours in stage II of the compression scheme.
DOI: 10.1109/TMM.2019.2941270
Peer Reviewed
ORCID iD: 0000-0001-5063-6515