We have another guest post today. Interesting subject!
Transmission of uncompressed video data over any network is nearly impossible due to file size. A single frame of uncompressed 1080p HD video at 24-bit color is roughly 6 MB. At 30 frames per second, a minute of uncompressed HD video requires about 11 GB of storage, and a two-hour movie comes to roughly 1.3 TB. Streaming that in real time would require around 186 MB/s of bandwidth, significantly more than most non-commercial networks provide.
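The arithmetic behind these figures is simple multiplication. A minimal sketch, assuming 1080p resolution, 24-bit color, and 30 frames per second (the numbers vary with resolution, color depth, and frame rate):

```python
# Back-of-the-envelope sizes for uncompressed 1080p video.
# Assumptions: 1920x1080 pixels, 24-bit color (3 bytes/pixel), 30 fps.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 3   # 24-bit color
FPS = 30

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL   # one raw frame
per_second = frame_bytes * FPS                   # bandwidth for real-time playback
per_minute = per_second * 60
two_hours = per_second * 2 * 60 * 60

print(f"frame:     {frame_bytes / 1e6:.1f} MB")   # ~6.2 MB
print(f"bandwidth: {per_second / 1e6:.1f} MB/s")  # ~186.6 MB/s
print(f"minute:    {per_minute / 1e9:.1f} GB")    # ~11.2 GB
print(f"two hours: {two_hours / 1e12:.2f} TB")    # ~1.34 TB
```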
This is where video encoding and codecs come in. Video codecs compress video data and convert the compressed stream into a format that can be decoded and played back. This guide introduces the most popular web video codecs today and reviews some compression techniques these codecs use.
What is Video Encoding?
Video encoding is the process of compressing and changing the format of raw video data. Once compressed, the video file consumes less storage space. To view the video, you must decode it with a decoder that understands the codec used to compress it.
Sometimes compressed video content needs to be further encoded for compatibility with different devices and operating systems. Certain programs or services require specific encoding specifications. This process is referred to as video transcoding.
Common Web Video Codecs
Video codecs are hardware or software used to compress and format video. A codec consists of an encoder that compresses the video and a decoder that recreates the video for playback; the name codec is a contraction of coder and decoder. Example video codecs include AV1, H.264, VP9, and MPEG-2.
H.264 has become the most common codec for high quality video streaming. This codec provides excellent compression efficiency, high video quality, and fast encoding. In addition, H.264 supports 4K video streaming. This is impressive considering that the codec was created in 2003, before 4K video was in use.
More advanced video compression standards like HEVC are also available. HEVC provides greater compression efficiency than H.264, which lets people with slow network connections watch high quality video. However, HEVC is not supported on as wide a range of devices; iPhones, for example, did not support it until 2018. For this reason, H.264 is preferred when you want to reach a wider range of devices.
Factors Affecting Video Encoding
There are two basic factors affecting the quality and size of encoded video: source video characteristics and codec configuration. High quality video results in very large video files, so there is always a trade-off between the size and the quality of the video. To help balance this trade-off, you can use automatic compression tools that dynamically adapt video to the available bandwidth and playback device.
Effect of Video Characteristics
The potential effects of source video characteristics on the encoded video size and quality include:
Color depth is the number of bits used to indicate the color of each pixel. High color depth leads to high quality of color in the video. However, high color depths also result in large compressed video file sizes.
Frame rate is the number of frames per second. High frame rate results in smooth and more realistic motion. The downside is that high frame rates create larger file sizes.
Motion also matters. Video compression usually works by comparing consecutive frames: the compression algorithm encodes the difference between two successive frames to approximate the appearance of the following frames. When a video contains more motion, the differences between frames are larger, so compression is less effective and more noise and artifacts appear.
Resolution plays a similar role. Resolution is typically in the range of 360p to 1080p; the number indicates how many pixel lines a video has from top to bottom. Higher resolutions depict the original scene more accurately, even after compression, while smaller resolutions take up less space but will appear blurry on larger or high-definition screens.
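These characteristics combine multiplicatively: the raw data in a frame is pixels times bytes per pixel. A small sketch of how color depth alone changes frame size (the resolution and bit depths here are just illustrative values):

```python
def frame_bytes(width, height, bits_per_channel, channels=3):
    """Raw size of one uncompressed frame, in bytes."""
    return width * height * channels * bits_per_channel // 8

eight_bit = frame_bytes(1920, 1080, 8)    # standard 24-bit color
ten_bit = frame_bytes(1920, 1080, 10)     # 30-bit "deep" color

# 10-bit color costs 25% more raw data per frame than 8-bit,
# before the encoder compresses anything.
print(eight_bit, ten_bit)   # 6220800 7776000
```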
Effects of Codec Configuration
Video encoder configuration effects on quality and size include:
Lossless compression uses referencing to reduce duplicate information without data loss. It perfectly reconstructs the original video data from the compressed data, so the quality is high compared to lossy compression. However, the resulting video files are usually too large for general usage. Lossless compression is used when the original and the decompressed file must be identical, as in the ZIP file format.
Lossy compression uses approximations and partial information to encode video data. This compression method produces some artifacts and degradation of the video quality. The codec configuration controls the amount of compression in video encoding: the higher the deviation from the source, the higher the compression rate.
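The distinction can be shown with a toy example: zlib compression is lossless (the round trip recovers the input exactly), while quantizing sample values before compressing is lossy (detail is discarded, but the result compresses much smaller). This is only an analogy for what video codecs do, sketched on random bytes rather than real frames:

```python
import random
import zlib

random.seed(0)
data = bytes(random.randrange(256) for _ in range(10_000))  # toy "raw" data

# Lossless: decompressing recovers the original bytes exactly.
lossless = zlib.compress(data)
assert zlib.decompress(lossless) == data

# Lossy: quantize each byte to one of 16 levels, discarding detail.
quantized = bytes((b // 16) * 16 for b in data)
lossy = zlib.compress(quantized)
assert quantized != data            # information was lost
assert len(lossy) < len(lossless)   # but the result compresses far smaller
```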
The encoder can control the quality of the video, i.e. the video characteristics. Higher quality settings result in larger video files. The exact size of the file depends on the codec.
Bit rate is the number of bits processed per second. Higher bit rates let the codec allocate more data to each second of video, so the compressed file retains more quality. However, high bit rates also lead to larger file sizes.
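Bit rate also determines the encoded file size directly: size is roughly bit rate times duration. A quick sketch, using 5 Mbps as a plausible (illustrative) H.264 HD bit rate:

```python
def stream_size_bytes(bitrate_bps, duration_seconds):
    """Approximate encoded stream size: bit rate times duration, in bytes."""
    return bitrate_bps * duration_seconds // 8

# A two-hour movie encoded at 5 Mbps:
size = stream_size_bytes(5_000_000, 2 * 60 * 60)
print(f"{size / 1e9:.1f} GB")   # 4.5 GB
```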
Video Compression Techniques
Codecs utilize different compression techniques to reduce the video size. The ultimate goal of compression is to reduce size without impacting video quality. However, some techniques are more noticeable to the viewer than others. The following sections examine a few popular compression techniques.
Reducing the resolution
A common compression technique is reducing the resolution. High resolution video contains more information in each frame than low resolution video. For instance, a 640×360 frame has 230,400 pixels, whereas a 1280×720 frame has 921,600 pixels. The idea is to decrease the amount of information by reducing the number of pixels in each frame.
An artifact of resizing is pixelation, the appearance of visible “blocks” in an image. Pixelation arises from the combination of low resolution and interframes. An interframe is an approximated frame, produced during lossy compression, that represents one or more successive frames. When the encoder reuses areas of an interframe even though the details in those areas are actually changing, blocky artifacts appear.
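Resolution reduction can be sketched as simple downsampling: keep every other pixel in each dimension, quartering the data. Real encoders use smarter resampling filters; this nearest-neighbor sketch only shows the idea, on a tiny hypothetical grayscale frame:

```python
def downsample_2x(frame):
    """Nearest-neighbor 2x downsample: keep every other row and column."""
    return [row[::2] for row in frame[::2]]

# A tiny 4x4 grayscale "frame" becomes 2x2 -- a quarter of the data.
frame = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [90, 100, 110, 120],
    [130, 140, 150, 160],
]
small = downsample_2x(frame)
print(small)   # [[10, 30], [90, 110]]
```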
Interframe prediction
Interframe prediction removes redundant information from frames. The technique is based on the assumption that a group of successive frames usually contains similar information, which codecs can leverage to remove redundant data from the video.
Imagine a news broadcast: the video shows a news anchor who sits mostly still, essentially only moving their lips. This kind of video has a lot of redundant information, like the background. To save space, encoders can reuse the same background image across the entire broadcast. H.264 codecs use this technique to achieve high compression rates with little perceptible loss of quality.
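The core idea can be sketched as frame differencing: store the first frame in full, then record only the pixels that changed. Real codecs add motion compensation and much more; this is a minimal illustration on made-up frame data:

```python
def encode_delta(prev, curr):
    """Record only the pixels that differ from the previous frame."""
    return {i: v for i, (p, v) in enumerate(zip(prev, curr)) if p != v}

def decode_delta(prev, delta):
    """Rebuild the current frame from the previous frame plus the delta."""
    return [delta.get(i, p) for i, p in enumerate(prev)]

# A "news anchor" scene: only two pixels (the lips) change between frames.
frame1 = [7, 7, 7, 7, 7, 7, 7, 7]
frame2 = [7, 7, 7, 9, 8, 7, 7, 7]

delta = encode_delta(frame1, frame2)
assert delta == {3: 9, 4: 8}                  # tiny compared to the full frame
assert decode_delta(frame1, delta) == frame2  # exact reconstruction
```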
Changing frame rates
Another video compression technique is reducing the number of frames per second, which decreases the amount of data that must be processed each second. Be careful not to lower the frame rate too much, though, as this can result in unnatural movement. For this reason, frame rate reduction can be more destructive than other compression methods.
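In its simplest form, halving the frame rate just means dropping every other frame:

```python
def drop_frames(frames, keep_every=2):
    """Keep one frame out of every `keep_every`, reducing the frame rate."""
    return frames[::keep_every]

# One second of 60 fps footage reduced to 30 fps: half the frames, half the data.
frames = list(range(60))   # stand-ins for 60 individual frames
reduced = drop_frames(frames)
print(len(reduced))        # 30
```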
Video encoding is a useful, and often necessary process, for sharing video data. It helps you ensure that your videos can be viewed across a range of devices and environments. Provided, of course, that you choose a suitable codec and codec settings.
Additionally, the quality and size of your videos requires a delicate balance if you want to grant the greatest accessibility. Hopefully, this article helped you understand how you can adjust your encoding to achieve this balance. Keeping the aspects covered here in mind should help you ensure that users can view your videos readily and at high quality.
Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Samsung NEXT, NetApp and Imperva, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership.