In today’s digital world, all data, including video and audio, is recorded, stored, and transmitted in digital form.
A digital picture is composed of a two-dimensional array (W x H) of pixels (picture elements). The common descriptors SD, HD, Full HD, and 4K that we associate with content and display devices represent different screen resolutions, i.e., the pixel dimensions of the digital pictures they display. For example, SD represents a screen resolution of 720 x 576 pixels, while Full HD is 1920 x 1080 pixels (five times the number of pixels in an SD picture).
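The pixel-count arithmetic above is easy to verify. A minimal sketch (the HD and 4K UHD dimensions are common values assumed here, not stated in the text):

```python
# Pixel counts for common screen resolutions.
# SD and Full HD dimensions come from the text; HD (1280x720) and
# 4K UHD (3840x2160) are the commonly used values, assumed here.
RESOLUTIONS = {
    "SD": (720, 576),
    "HD": (1280, 720),
    "Full HD": (1920, 1080),
    "4K UHD": (3840, 2160),
}

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w} x {h} = {w * h:,} pixels")

sd_pixels = 720 * 576          # 414,720
full_hd_pixels = 1920 * 1080   # 2,073,600
ratio = full_hd_pixels / sd_pixels
print(f"Full HD has {ratio:.1f}x the pixels of SD")  # → 5.0x
```

The ratio works out to exactly 5.0, matching the comparison in the text.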
Movies and video are characterized by the number of frames (pictures) displayed every second (fps), which depends on the application. For example, typical standard definition (SD) or high definition (HD) television uses 30 or 60 fps, while 4K/2K content may use 60 or 120 fps.
Furthermore, each pixel in a digital picture is represented by a binary number indicating its color. For example, a “True Color” pixel uses 24 bits, comprising three 8-bit channels (Red, Green, Blue), to yield a palette of 16,777,216 colors.
Thus, a fundamental problem with digital video is the staggering number of bits required for storage, especially as screen resolution increases. For example, one frame of raw Full HD video requires about 50 megabits of storage. Transmitting the same video at 30 fps would require a bandwidth of approximately 1.5 Gbps!
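The storage and bandwidth figures follow directly from the numbers already given: 1920 x 1080 pixels, 24 bits per pixel, 30 frames per second. A quick check:

```python
# Raw (uncompressed) storage and bandwidth for Full HD "True Color" video,
# using the figures from the text: 24 bits per pixel, 30 frames per second.
width, height = 1920, 1080
bits_per_pixel = 24   # three 8-bit channels: Red, Green, Blue
fps = 30

# "True Color" palette size: 2^24 colors
assert 2 ** bits_per_pixel == 16_777_216

bits_per_frame = width * height * bits_per_pixel
print(f"One raw frame: {bits_per_frame / 1e6:.1f} megabits")  # → 49.8 megabits

bits_per_second = bits_per_frame * fps
print(f"At {fps} fps: {bits_per_second / 1e9:.2f} Gbps")      # → 1.49 Gbps
```

One frame comes to roughly 49.8 Mb (about 50 megabits) and the stream to about 1.49 Gbps, matching the approximately 1.5 Gbps quoted above.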
Fortunately, video sequences contain a great deal of spatially and temporally redundant information. Over the last 30 years, many compression and formatting algorithms have been developed and standardized to minimize the amount of data required to represent sequences of video images, including MPEG-2, MPEG-4/H.264 AVC, and H.265/HEVC.
Video Coding System
In a typical video coding system, source data is pre-processed and encoded prior to transmission. The received data is then decoded, post-processed, and finally displayed to the end user. The video sequence is output in digital format during the acquisition phase, which may be decoupled in time and place from the subsequent steps. Pre-processing operations include color format conversion, color correction, and noise filtering. Encoding transforms the video sequence into a coded bitstream suitable for storage or for transmission over various media according to the application, e.g., broadcast over satellite, or VOD over a cable network or the internet. Video encoders are very complex devices that achieve very high compression ratios by means of sophisticated algorithms.
Transmission includes packaging the bitstream into the appropriate format and delivering the video to the receiver, as well as methods for dealing with data security and data loss. Certain applications, such as streaming video over the internet, may require feedback from the receiver in order to adjust stream and transmission parameters to deal with channel impairments. Post-processing functions are performed on the reconstructed video to enhance or adapt it for display, e.g., trimming, re-sampling, and color correction. Finally, the video is transferred to a display for viewing using the appropriate color format and timing through a standard interface, e.g., HDMI.
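The end-to-end chain described above can be sketched as a simple pipeline. All function names here are illustrative placeholders, not a real codec API; each stage merely tags its input to make the flow of data visible:

```python
# Illustrative sketch of the video coding chain: acquisition ->
# pre-processing -> encoding -> transmission -> decoding ->
# post-processing -> display. Stage names are hypothetical.

def preprocess(frames):
    # color format conversion, color correction, noise filtering
    return [f"preprocessed({f})" for f in frames]

def encode(frames):
    # compress the sequence into a single coded bitstream
    return "bitstream(" + "|".join(frames) + ")"

def transmit(bitstream):
    # packetize and deliver (satellite, cable VOD, internet, ...)
    return bitstream

def decode(bitstream):
    # reconstruct the frame sequence from the bitstream
    return bitstream.removeprefix("bitstream(").removesuffix(")").split("|")

def postprocess(frames):
    # trimming, re-sampling, color correction for display
    return [f"display({f})" for f in frames]

source = ["frame0", "frame1"]
shown = postprocess(decode(transmit(encode(preprocess(source)))))
print(shown)
```

In a real system each stage is far more elaborate (and the encoder dominates the complexity), but the ordering of the stages is the point of the sketch.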
TechPats has extensive experience in video coding technologies. Our analysts have investigated a number of different video coding methods and are familiar with the industry standards. TechPats has extensive testing capabilities for video and image processing functional testing, as well as experience in the analysis of MPEG-4 content.