Detailed Information About Video Parameters
When converting videos or ripping discs, users may be confused by the options in the settings window. What do they stand for? Keep reading.
A digital movie has a set of parameters that determine its playback properties. We can usually check these parameters in a player or in a dedicated tool such as MediaInfo. When we convert videos using Foxreal products, we are actually changing these parameters. This article explains the common parameters used in Foxreal Video Converter and Blu-ray/DVD Ripper: video format, video codec, frame size, frame rate, and total bitrate; audio codec, audio bitrate, sampling rate, and channels.
Video format: A digital video file is actually a package of video tracks, audio tracks, static images, and text subtitles. Therefore, a video format can be described as a container. The container file is used to identify and interleave different data types. Simpler container formats can contain different types of audio formats, while more advanced container formats can support multiple audio and video streams, subtitles, chapter information, and metadata (tags), along with the synchronization information needed to play back the various streams together. In most cases, the file header, most of the metadata, and the synchro chunks are specified by the container format. For example, container formats exist for optimized, low-quality internet video streaming, which differs from high-quality DVD streaming requirements.
Container format parts have various names: "chunks" as in RIFF and PNG, "atoms" in QuickTime/MP4, "packets" in MPEG-TS (from the communications term), and "segments" in JPEG. The main content of a chunk is called the "data" or "payload". Most container formats have chunks in sequence, each with a header, while TIFF instead stores offsets. Modular chunks make it easy to recover other chunks in case of file corruption or dropped frames or bit slip, while offsets result in framing errors in cases of bit slip.
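To make the chunk layout concrete, here is a minimal Python sketch, not a full parser: it builds a tiny RIFF/WAVE structure in memory and walks its chunks, reading each 4-byte ID and little-endian size, as described above. The payload values are illustrative only.

```python
import io
import struct

def walk_riff_chunks(data: bytes):
    """Yield (chunk_id, payload) pairs from a RIFF container.

    A RIFF file starts with a 12-byte header (b'RIFF', total size,
    form type such as b'WAVE'); each subsequent chunk is an 8-byte
    header (4-byte ID + 4-byte little-endian size) followed by the
    payload, padded to an even length.
    """
    stream = io.BytesIO(data)
    riff, _size, _form = struct.unpack("<4sI4s", stream.read(12))
    assert riff == b"RIFF"
    while True:
        header = stream.read(8)
        if len(header) < 8:
            break
        chunk_id, chunk_size = struct.unpack("<4sI", header)
        payload = stream.read(chunk_size)
        if chunk_size % 2:      # chunks are word-aligned
            stream.read(1)      # skip the pad byte
        yield chunk_id, payload

# Build a minimal RIFF/WAVE container in memory to demonstrate.
fmt_payload = struct.pack("<HHIIHH", 1, 2, 44100, 176400, 4, 16)
body = b"fmt " + struct.pack("<I", len(fmt_payload)) + fmt_payload
body += b"data" + struct.pack("<I", 4) + b"\x00\x00\x00\x00"
riff = b"RIFF" + struct.pack("<I", 4 + len(body)) + b"WAVE" + body

for cid, payload in walk_riff_chunks(riff):
    print(cid.decode(), len(payload))
```

Because the chunks are self-describing, a reader can skip any chunk it does not understand, which is exactly what makes these containers extensible.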
Some containers are exclusive to audio:
AIFF (IFF file format, widely used on Mac OS platform)
WAV (RIFF file format, widely used on Windows platform)
XMF (Extensible Music Format)
Other containers are exclusive to still images:
FITS (Flexible Image Transport System) still images, raw data, and associated metadata.
TIFF (Tagged Image File Format) still images and associated metadata.
Other flexible containers can hold many types of audio and video, as well as other media. The most popular multi-media containers are:
3GP (used by many mobile phones; based on the ISO base media file format)
ASF (container for Microsoft WMA and WMV, which today usually do not use a container)
AVI (the standard Microsoft Windows container, also based on RIFF)
DVR-MS ("Microsoft Digital Video Recording", proprietary video container format developed by Microsoft based on ASF)
Flash Video (FLV, F4V) (container for video and audio from Adobe Systems)
IFF (first platform-independent container format)
Matroska (MKV) (not limited to any codec or system, as it can hold virtually anything. It is an open standard and open source container format).
MJ2 - Motion JPEG 2000 file format, based on the ISO base media file format which is defined in MPEG-4 Part 12 and JPEG 2000 Part 12
QuickTime File Format (standard QuickTime video container from Apple Inc.)
MPEG program stream (standard container for MPEG-1 and MPEG-2 elementary streams on reasonably reliable media such as disks; used also on DVD-Video discs)
MPEG-2 transport stream (a.k.a. MPEG-TS) (standard container for digital broadcasting and for transportation over unreliable media; used also on Blu-ray Disc video; typically contains multiple video and audio streams, and an electronic program guide)
MP4 (standard audio and video container for the MPEG-4 multimedia portfolio, based on the ISO base media file format defined in MPEG-4 Part 12 and JPEG 2000 Part 12) which in turn was based on the QuickTime file format.
Ogg (standard container for the Xiph.org audio format Vorbis and video format Theora)
RM (RealMedia; standard container for RealVideo and RealAudio)
There are many other container formats, such as NUT, MXF, GXF, ratDVD, SVI, VOB, and DivX Media Format.
Frame size: Digital video comprises a series of orthogonal bitmap digital images displayed in rapid succession at a constant rate. In the context of video these images are called frames. Since every frame is an orthogonal bitmap digital image it comprises a raster of pixels. If it has a width of W pixels and a height of H pixels we say that the frame size is WxH.
Bitrate (BR): Bit rate is a measure of the rate of information content in a video stream. It is quantified in bits per second (bit/s or bps) or megabits per second (Mbit/s). A higher bit rate allows better video quality. For example, VideoCD, with a bit rate of about 1 Mbit/s, is lower quality than DVD, with a maximum bit rate of 10.08 Mbit/s for video. HD (high-definition digital video and TV) has still higher quality, with a bit rate of about 20 Mbit/s.
Bit rate is the most important property for video quality. Here are formulas relating BR to the other properties:
BR = W * H * CD * FPS (1)
BR = W * H * ( CD / CF ) * FPS (2)
W * H stands for the frame size. CD stands for color depth, the aspect of color representation that expresses how finely levels of color can be distinguished (the other aspect is how broad a range of colors can be expressed). FPS stands for frame rate, and CF is a compression factor that reflects how much a compression algorithm shrinks the input data. Formula (1) applies to uncompressed video, while (2) applies to common compressed videos.
The value (CD / CF) represents the average bits per pixel (BPP). For example, with a color depth of 12 bits/pixel and an algorithm that compresses at 40x, BPP equals 0.3 (12/40). So in the case of compressed video the formula for bit rate is:
BR = W * H * BPP * FPS
In fact the same formula is valid for uncompressed video because in that case one can assume that the "compression" factor is 1 and that the average bits per pixel equal the color depth.
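As a rough illustration of the formula above, here is a small Python sketch; the resolution, frame rate, and BPP values are example figures only:

```python
def bitrate(width, height, bpp, fps):
    """BR = W * H * BPP * FPS, in bits per second."""
    return width * height * bpp * fps

# Uncompressed 1080p true color at 30 fps: BPP equals the color depth.
uncompressed = bitrate(1920, 1080, 24, 30)   # ~1.49 Gbit/s
# The same stream compressed to 0.3 bits/pixel (12-bit color at 40x):
compressed = bitrate(1920, 1080, 0.3, 30)    # ~18.7 Mbit/s
print(f"{uncompressed / 1e9:.2f} Gbit/s vs {compressed / 1e6:.1f} Mbit/s")
```

The two results make the point of the next paragraph concrete: uncompressed bit rates are impractically high, which is why compression is essential.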
In the case of uncompressed video, bit rate corresponds directly to the quality of the video (remember that bit rate is proportional to every property that affects the video quality). Bit rate is an important property when transmitting video because the transmission link must be capable of supporting that bit rate. Bit rate is also important when dealing with the storage of video because, as shown above, the video size is proportional to the bit rate and the duration. Bit rate of uncompressed video is too high for most practical applications. Video compression is used to greatly reduce the bit rate.
BPP is a measure of the efficiency of compression. A true-color video with no compression at all may have a BPP of 24 bits/pixel. Chroma subsampling can reduce the BPP to 16 or 12 bits/pixel. Applying JPEG compression to every frame can reduce the BPP to 8 or even 1 bit/pixel. Applying video compression algorithms such as MPEG-1, MPEG-2, or MPEG-4 allows for fractional BPP values.
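The relationship can also be inverted to estimate the average BPP an encoded stream achieves. A quick sketch, with the bitrate and resolution chosen only as plausible DVD-like example values:

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average BPP achieved by an encoded stream: BR / (W * H * FPS)."""
    return bitrate_bps / (width * height * fps)

# A DVD-typical MPEG-2 stream: 6 Mbit/s at 720x480, 29.97 fps.
print(round(bits_per_pixel(6_000_000, 720, 480, 29.97), 2))  # -> 0.58
```

A value well below 1 bit/pixel like this is exactly the fractional BPP that modern video codecs make possible.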
Video codec: A video codec is a standard for a video compression algorithm. Each codec has its own compression method, typical BPP, and required decoder. Some popular codecs are listed below:
MPEG-4: good for online distribution of large videos and video recorded to flash memory
MPEG-2: used for DVDs, Super-VCDs, and many broadcast television formats
MPEG-1: used for video CDs
H.264: also known as MPEG-4 Part 10, or as AVC, used for Blu-ray Discs and some broadcast television formats
Frame rate (FPS): Frame rate (also known as frame frequency) is the frequency (rate) at which an imaging device produces unique consecutive images, called frames. For movies, it is the number of still pictures per unit of time of video, ranging from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) standards specify 25 frame/s, while NTSC (USA, Canada, Japan, etc.) specifies 29.97 frame/s. Film is shot at the slower frame rate of 24 frame/s, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve the illusion of a moving image is about fifteen frames per second.
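To get a feel for these rates, here is a small sketch computing how many frames a fixed duration yields under each standard; the 90-minute duration is just an example value:

```python
def frame_count(fps, duration_s):
    """Total frames produced over a duration at a given frame rate."""
    return round(fps * duration_s)

# A 90-minute feature under the three common standards:
for name, fps in [("Film", 24), ("PAL", 25), ("NTSC", 29.97)]:
    print(name, frame_count(fps, 90 * 60))
```

The gap between 24, 25, and 29.97 frame/s is why transferring film to PAL or NTSC video requires speed-up or pulldown techniques.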
Audio bitrate: Similar to video bitrate, the audio bitrate is proportional to audio quality and the file size.
Audio codec: The digital compression algorithm of the audio tracks; popular codecs include AAC, AC3, and MP3 (MPEG-1 Audio Layer III).
Sampling rate: The sampling rate, sample rate, or sampling frequency (fs) defines the number of samples per unit of time (usually seconds) taken from a continuous signal to make a discrete signal. For time-domain signals, the unit for sampling rate is hertz (inverse seconds, 1/s, s−1), sometimes noted as Sa/s (samples per second). The inverse of the sampling frequency is the sampling period or sampling interval, which is the time between samples.
In digital audio the most common sampling rates are 44.1 kHz, 48 kHz, and 96 kHz.
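A quick sketch relating sampling rate, sample count, and sampling period for those common rates; the one-second duration is just an example value:

```python
def sample_count(sample_rate_hz, duration_s):
    """Samples taken from a continuous signal over a duration."""
    return int(sample_rate_hz * duration_s)

# One second at the common audio rates; the sampling period is 1/fs.
for fs in (44_100, 48_000, 96_000):
    print(f"{fs} Hz -> {sample_count(fs, 1.0)} samples, "
          f"period {1 / fs * 1e6:.1f} us")
```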
Channels: Determines how many channels of sound are preserved. We usually choose stereo for most computers and mobile devices, and 5.1 or more channels (surround sound) for home theater.
When adjusting the settings of the output video, there are several factors to keep in mind:
1. Video bitrate is proportional to frame rate;
2. Video bitrate is proportional to frame size;
3. Audio bitrate is proportional to sampling rate;
4. Video size is proportional to bitrate.
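Since file size is proportional to total bitrate and duration (factor 4 above), a rough size estimate can be sketched as follows; the bitrates and duration are example values, and container overhead is ignored:

```python
def file_size_mb(video_kbps, audio_kbps, duration_s):
    """Estimated output size: (video + audio bitrate) * duration.

    Bitrates in kbit/s, duration in seconds, result in megabytes
    (1 MB = 1000 kB, 1 byte = 8 bits).
    """
    total_kbit = (video_kbps + audio_kbps) * duration_s
    return total_kbit / 8 / 1000   # kbit -> kB -> MB

# A 10-minute clip at 2500 kbit/s video plus 128 kbit/s audio:
print(round(file_size_mb(2500, 128, 600), 1))  # -> 197.1
```

This is a handy sanity check before converting: if the estimate is far larger than your device's storage or a site's upload limit, lower the bitrate first.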
Accordingly, we should not set the video bitrate too low with a high frame rate, nor the audio bitrate too high with a low sampling rate. Otherwise, the converted file may suffer problems such as audio/video synchronization issues.
If you only want a simple preset, try to install our extra free patch for profiles of all sorts of new devices. You may find the download link here. We have collected as many optimized presets as possible.
Hope this article, Detailed Information About Video Parameters, helps. Thanks for reading.