Tech-X Flex Base Unit User Guide Issue 1
5: IP and Video Testing
Video compression involves multiple stages, beginning with the removal of spatial similarities from
individual frames using techniques similar to JPEG (Joint Photographic Experts Group) compression.
Then, similarities between adjacent frames are determined and removed from the stream, using complex
algorithms to reuse identical data that was already transmitted and to “predict” data where future
changes can be estimated. These processes serve to reduce the two primary forms of redundancy:
Spatial redundancy - Within any given video frame, certain data may be redundant, such as large
areas of a single color or a repeating geometric pattern. When many pixels are the same,
compression may be employed to represent those portions of the frame as compact mathematical
values, rather than expressing every single pixel individually.
Temporal redundancy - Adjacent video frames often have many similarities, especially with video of
still or slow-moving objects. In this case, sequential frames may have redundant information
expressed over time as the video is played.
Together, the encoder and decoder form a system in which redundant data can be reconstructed at
the receiving end rather than transmitted. This allows for more efficient use of network capacity
when transporting audio/video streams over communications networks.
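The two forms of redundancy reduction described above can be illustrated with a simple sketch. The following Python example is not the actual codec algorithm, only a toy analogy: run-length encoding stands in for spatial redundancy removal within one frame, and frame differencing stands in for temporal redundancy removal between frames. All function names and data are hypothetical.

```python
def run_length_encode(row):
    """Spatial redundancy: collapse runs of identical pixel values
    into (value, count) pairs instead of storing every pixel."""
    encoded = []
    for pixel in row:
        if encoded and encoded[-1][0] == pixel:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([pixel, 1])   # start a new run
    return [(value, count) for value, count in encoded]

def frame_delta(previous, current):
    """Temporal redundancy: transmit only the pixels that changed
    since the previous frame, keyed by pixel position."""
    return {i: p for i, (q, p) in enumerate(zip(previous, current)) if p != q}

# A mostly uniform scan line collapses to just two runs...
row = [255] * 8 + [0] * 4
print(run_length_encode(row))       # [(255, 8), (0, 4)]

# ...and a nearly static scene yields a tiny delta between frames.
frame1 = [10, 10, 10, 10]
frame2 = [10, 10, 99, 10]
print(frame_delta(frame1, frame2))  # {2: 99}
```

In both cases the receiver can reconstruct the full frame from far fewer transmitted values, which is the same principle the real encoder/decoder pair applies at much greater sophistication.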
Frame types
As part of the reduction in redundancy, the video is compressed and reorganized into three different
frame types, serving individual roles as follows:
I-frames (or “Intra pictures”) - I-frames are coded without reference to other pictures. That is, they
contain the full dataset required to render a video frame and do not interpolate based on references
to other frames. Therefore, they may employ compression to reduce spatial redundancy, but cannot
reduce temporal redundancy. I-frames are critically important for providing references to other frames
and serve as access points in the bitstream where decoding can begin. Because the other frame
types reduce temporal redundancy through their dependence on I-frames, the loss of an I-frame
has the most significant impact on a video stream.
P-frames (or “Predictive pictures”) - P-frames are interspersed between I-frames and reduce both
spatial and temporal redundancy. They can use internal spatial coding like I-frames,
but they can also derive data through references to previous I and P-frames. Through this
referencing, a P-frame can render the picture without a full pixel-by-pixel dataset, using redundant
information presented in preceding frames.
B-frames (or “Bi-directional predictive pictures”) - B-frames are a further extension of the P-
frame predictive methodology, except that they may reference preceding and/or following I and/or P-
frames. The use of B-frames allows the highest degree of picture quality with the most efficient
compression. When a B-frame references a frame that comes after itself, the decoder must have