Video Formats

I.             What is a Video Format? Video formats involve two distinct and very different technology concepts: containers (sometimes called wrappers) and codecs (short for coder/decoder).

II.            Codecs are used inside of a container, and because of this, video formats can be confusing.

A.            What is a Container?

The container describes the structure of the file: where the various pieces are stored, how they are interleaved, and which codecs are used by which pieces. It may specify an audio codec as well as video. It is used to package the video & its components (audio/metadata) and is identified (usually) by a file extension such as .AVI, .MP4 or .MOV.

B.            What is a Codec?

A codec (short for "coder/decoder") is a way of encoding audio or video into a stream of bytes. It is the method used to encode the video and is the chief determiner of quality. 

Codecs

To speed up downloads, algorithms encode, or shrink, a signal for transmission and then decode it for viewing or editing. Without codecs, downloads of video and audio would take three to five times longer.

 

What is a codec, and what are some examples?

Codecs are compression technologies with two components: an encoder to compress the files and a decoder to decompress them. There are codecs for data (PKZIP), still images (JPEG, GIF, PNG), audio (MP3, AAC) and video (Cinepak, MPEG-2, H.264, VP8). There are two kinds of codecs: lossless and lossy.
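
To make the lossless idea concrete, here is a minimal sketch using Python's zlib module (DEFLATE, the same family of algorithm used by PKZIP-style tools): the encoder shrinks the bytes and the decoder restores them exactly. The sample data is made up purely for illustration.

```python
import zlib

# Made-up payload standing in for raw audio or image bytes.
data = b"example media bytes " * 200

compressed = zlib.compress(data)        # encoder: shrink for storage/transmission
restored = zlib.decompress(compressed)  # decoder: expand for viewing/editing

print(len(data), "->", len(compressed), "bytes")
assert restored == data                 # lossless: the original is recovered exactly
```

A lossy codec such as MP3 or H.264 would not pass that final check: it throws away detail the encoder judges least noticeable in exchange for much smaller files.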

 

How do they work together?

Think of the container as the file itself and the codec as its contents. The important thing to realize is that most good container formats can hold many codecs. For example, a .MOV container can hold almost any kind of codec data; the same goes for .MP4, and even .AVI files can hold a wide variety of codecs as their contents. The container in no way decides the quality or features of the video itself; that is up to the codec. The proper way to describe a video is to indicate both: "a .MOV file containing H.264 data," "an .AVI file containing DivX data," or "an H.264 QuickTime file (.mov)."
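
One practical way to see both pieces of information is to inspect a file with a media probing tool. The sketch below assumes the ffprobe utility from FFmpeg is installed and that "movie.mov" is a placeholder path; it simply reports the container format and the codec of each stream.

```python
import json
import subprocess

# Assumes ffprobe (part of FFmpeg) is on the PATH; "movie.mov" is a placeholder file name.
result = subprocess.run(
    ["ffprobe", "-v", "error",
     "-show_entries", "format=format_name:stream=codec_type,codec_name",
     "-of", "json", "movie.mov"],
    capture_output=True, text=True, check=True)

info = json.loads(result.stdout)
container = info["format"]["format_name"]
streams = [(s["codec_type"], s["codec_name"]) for s in info["streams"]]

# Describe the file the proper way: container plus codec(s),
# e.g. a MOV/MP4 container holding [('video', 'h264'), ('audio', 'aac')].
print(container, "container holding", streams)
```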

C.            Why is it so complicated?

At first this seems straightforward, but it's not. The problem lies in the confusion and imprecision in the common use of these terms. To make matters worse, software companies try to simplify their documentation and instructions by ignoring the difference altogether. The result is that people believe phrases like "a MOV file" or "an MP4 file will be fine" are legitimate ways to talk about video. To make life even more confusing, some names, such as "MPEG-4", describe both a codec and a container, so it is not always clear from context which is being used. You could have a movie encoded with an MPEG-4 codec inside an AVI container, for example, or a movie encoded with some other codec inside an MPEG-4 container.

What Video Format Should You Use?

A. File size & quality

1. Digital Storage Space - To estimate the amount of storage space you will need for a project, figure on approximately 200 MB per minute of footage, or roughly 12 GB per hour. Of course, this varies according to your recording device and the quality it is set to record at (see the sketch after this list).

2. Frames per Second – The standard frame rate is 29.97 FPS. Increasing the FPS allows for more images per second and thus smoother motion; decreasing the FPS will make the video a bit choppy and not nearly as smooth.

3. Video Bitrate - Bitrate is a measurement of the number of bits transmitted over a set length of time. Your overall bitrate is a combination of the video stream, audio stream and metadata in your file, with the majority coming from the video stream. The higher the bitrate, the better the quality and the bigger the file (see the sketch after this list).

4. Resolution – This is the number of pixels present in each image of the video, and it determines whether your video is standard definition or high definition. The higher the resolution, the clearer the image and the bigger the file.

5. Transmitting the Media – Deciding which codec to use depends on your means of transmitting the video so others can view it. This could be an external hard drive, a jump drive, email, or uploading to a social media website/blog.
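
A quick back-of-the-envelope sketch in Python shows how the storage rule of thumb and the bitrate figures above translate into file sizes. The 5 Mbit/s video and 128 kbit/s audio numbers are hypothetical, chosen only to illustrate the arithmetic.

```python
# Rough storage estimate using the ~200 MB-per-minute rule of thumb.
MB_PER_MINUTE = 200                        # varies with recording device and quality setting

def storage_estimate_gb(minutes):
    return minutes * MB_PER_MINUTE / 1024  # MB -> GB

print(storage_estimate_gb(60))             # ~11.7 GB, i.e. roughly 12 GB per hour

# File size also follows directly from the overall bitrate and the duration.
def file_size_mb(video_kbps, audio_kbps, seconds):
    total_bits = (video_kbps + audio_kbps) * 1000 * seconds
    return total_bits / 8 / 1_000_000      # bits -> megabytes

# Hypothetical 10-minute clip: 5 Mbit/s video plus 128 kbit/s audio.
print(file_size_mb(5000, 128, 10 * 60))    # ~384.6 MB
```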

So what codec should we use?

1.            Check the requirements of the Internet sites to which you wish to upload your video file. (Much of the internet supports an MP4 container with an H.264 codec.)

2.            Is your video meant for mobile applications? If so, select a codec that is supported by all mobile devices.

3.            Is your video going to be embedded in another application? What other applications will typically do is take the video and apply their own codec so that it will be playable within the application, but that does not mean the application will understand all types of codecs. Check the application's help menu to determine the best codec for importing (see the sketch below).
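
As a concrete example of the MP4/H.264 combination mentioned above, here is a minimal sketch that re-encodes a clip with FFmpeg. It assumes the ffmpeg binary is installed; the file names and the quality settings (CRF 23, 128 kbit/s audio) are placeholder choices, not recommendations from any particular site.

```python
import subprocess

# Re-encode a clip into an MP4 container with H.264 video and AAC audio.
# Assumes ffmpeg is on the PATH; "input.mov" and "output.mp4" are placeholder names.
subprocess.run(
    ["ffmpeg", "-i", "input.mov",
     "-c:v", "libx264", "-crf", "23",   # H.264 video codec, constant-quality mode
     "-c:a", "aac", "-b:a", "128k",     # AAC audio at 128 kbit/s
     "output.mp4"],                     # the .mp4 extension selects the container
    check=True)
```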

List of Most Common Codecs

1.            MPEG (Moving Picture Experts Group): three video formats, MPEG-1, 2, and 4.

2.            MPEG-1: Old, supported by everything (at least up to 352x240), reasonably efficient. A good format for the web.

3.            MPEG-2: A successor to MPEG-1 with better compression and higher resolution (720x480). Used in HDTV, DVD, and SVCD.

4.            MPEG-4: A family of codecs, some of which are open, others Microsoft proprietary.

List of Most Common Containers

1.            AVI (Audio Video Interleave): a standard Windows multimedia container.

2.            MPEG-4 Part 14 (known as .mp4): is the standardized container for MPEG-4.

3.            FLV (Flash Video): the format used to deliver MPEG video through Flash Player.

Sample Bit Rates

1.            16 kbit/s – videophone quality (minimum necessary for a consumer-acceptable “talking head” picture using various video compression schemes)

2.            128–384 kbit/s – business-oriented videoconferencing quality using video compression

VIDEO BROADCAST STANDARDS

NTSC is an abbreviation for National Television System Committee, named for the group that originally developed the black & white and subsequently color television system that is used in the United States, Japan and many other countries. An NTSC picture is made up of 525 interlaced lines and is displayed at a rate of 29.97 frames per second.

PAL is an abbreviation for Phase Alternating Line. This is the video format standard used in many European countries. A PAL picture is made up of 625 interlaced lines and is displayed at a rate of 25 frames per second.

SECAM is an abbreviation for Sequential Color with Memory (from the French "Séquentiel couleur à mémoire"). This video format was developed in France and is also used in the countries of the former USSR and parts of Africa and the Middle East. Like PAL, a SECAM picture is made up of 625 interlaced lines and is displayed at a rate of 25 frames per second. However, because of the way SECAM processes the color information, it is not compatible with the PAL video format standard.

Understand the top video file extensions.

             MP4. MP4 (MPEG-4 Part 14) is the most common type of video file format. ...

             MOV. MOV (QuickTime Movie) stores high-quality video, audio and effects, but these files tend to be quite large. ...

             WMV. ...

             AVI. ...

             AVCHD. ...

             FLV, F4V and SWF. ...

             MKV. ...

             WEBM, commonly used for HTML5 video.

What is the best audio format for sound quality? A lossless audio file format is the best for sound quality. These include FLAC, WAV, and AIFF. These types of files are considered “hi-res” because their quality is equal to or better than CD quality.

 

Lossy formats.

Lossy audio formats lose data during compression. They don’t decompress back to their original file size, so they end up smaller, and some of the original sound data is lost. Artists and engineers who send audio files back and forth prefer not to use lossy formats, because the files degrade every time they’re exported.

 

MP3

MP3 (MPEG-1 Audio Layer III) is the most popular of the lossy formats. MP3 files work on most devices and the files can be as small as one-tenth the size of lossless files. MP3 is fine for the consumer, since most of the sound it drops is inaudible, but that’s not the case when it comes to bit depth. “MP3 files can only be up to 16-bit, which is not what you want to be working in,” says producer, mixer and engineer Gus Berry. “You want to be working in at least 24-bit or higher when recording and mixing.”

 

AAC

Advanced Audio Coding or AAC files (also known as MPEG-4 AAC), take up very little space and are good for streaming, especially over mobile devices. Requiring less than 1 MB per minute of music and sounding better than MP3 at the same bitrate, the AAC format is used by iTunes/Apple Music, YouTube and Android.
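
As a rough, hedged illustration of encoding to these lossy formats, the sketch below converts a hypothetical WAV master to MP3 and AAC at the same bitrate using FFmpeg. It assumes ffmpeg is installed and "master.wav" exists; at a fixed bitrate the two files come out about the same size, and the quality difference lies in how efficiently each codec uses those bits.

```python
import os
import subprocess

# Encode the same placeholder source ("master.wav") to MP3 and AAC at 128 kbit/s.
# Assumes ffmpeg is on the PATH.
for codec, ext in [("libmp3lame", "mp3"), ("aac", "m4a")]:
    out = f"clip.{ext}"
    subprocess.run(["ffmpeg", "-y", "-i", "master.wav",
                    "-c:a", codec, "-b:a", "128k", out], check=True)
    print(out, round(os.path.getsize(out) / 1_000_000, 2), "MB")
```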

 

Ogg Vorbis

Ogg Vorbis is the free, open-source audio codec that Spotify uses. It’s great for streaming, but the compression results in some data loss. Experts consider it a more efficient format than MP3, with better sound at the same bitrate.

 

Lossless formats.

These files decompress back to their original size, keeping sound quality intact. Audio professionals want all of the original sound waves, so they prefer lossless. These files can be several times larger than MP3s. Lossless bitrates depend on the volume and density of the music, rather than the quality of the audio.

 

FLAC

Free Lossless Audio Codec offers lossless compression and it’s free and open-source.

 

ALAC

Apple’s Lossless Audio Codec allows for lossless compression, but it is supported mainly within the Apple ecosystem.

 

Uncompressed formats.

These files remain the same size from origin to destination.

 

WAV

WAV (Waveform Audio File) retains all the original data, which makes it the ideal format for sound engineers. “WAV has greater dynamic range and greater bit depth,” Berry says. “It’s the highest quality. It can be 24-bit, 32-bit, all the way up to a 192 kHz sample rate and even higher these days.” If you’re collaborating and sending files back and forth, WAV holds its time code. This can be especially useful for video projects in which exact synchronisation is important.

 

AIFF

Originally created by Apple, AIFF (Audio Interchange File Format) files are like WAV files in that they retain all of the original sound and take up more space than MP3s. They can play on Macs and PCs, but they don’t hold time codes, so they’re not as useful for editing and mixing.

 

DSD

Direct Stream Digital is an uncompressed, high-resolution audio format. These files encode sound using pulse-density modulation. They are very large, with a sample rate as much as 64 times that of a regular audio CD, so they require top-of-the-line audio systems.
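
A quick calculation makes that “64 times” figure concrete; CD audio’s 44.1 kHz sample rate is the standard reference point.

```python
# DSD64, the base DSD rate, samples at 64 times the CD-audio rate of 44.1 kHz.
CD_RATE_HZ = 44_100
print(CD_RATE_HZ * 64)   # 2,822,400 Hz, i.e. about 2.8 MHz
```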

 

PCM

Pulse-Code Modulation, used for CDs and DVDs, captures analogue waveforms and turns them into digital bits. Until DSD, this was thought to be the closest you could get to capturing complete analogue audio quality.
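
As a toy illustration of the pulse-code idea, the sketch below samples a 1 kHz sine wave (standing in for an analogue waveform) at the CD rate of 44.1 kHz and quantises each sample to a 16-bit integer; the specific frequency and duration are arbitrary choices for illustration.

```python
import math

# Sample a 1 kHz sine wave at 44.1 kHz and quantise to 16-bit signed integers,
# the same sample-and-quantise idea used for CD audio.
SAMPLE_RATE = 44_100
BIT_DEPTH = 16
FREQ_HZ = 1_000

max_amplitude = 2 ** (BIT_DEPTH - 1) - 1    # 32767 for 16-bit signed samples
pcm = [round(max_amplitude * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE))
       for n in range(SAMPLE_RATE // 100)]  # 10 ms worth of samples

print(pcm[:8])   # the first few quantised PCM values
```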

 

 Aparna

Assistant Professor 

K.U.K

 

