The word “codec” is a combination of the words “coder” and “decoder” (co-dec). There are many variations of codec, and within those variations many types. A codec can encode and compress data for storage, or decompress it for editing. Usually a codec is built into the computer as part of its software, but it can also come in hardware (physical) form, mainly the type that converts analog signals to digital or vice versa. Hardware codecs are used mostly by broadcast engineers; the most commonly used are software codecs, which compress audio and video into sizes that are easy to transfer, view and store.
While codecs might seem complicated, people sometimes confuse a codec with a container format. For example, AVI is a container format used to watch standard videos on PCs, but it is often mistaken for a codec. A container format is a format that holds all the components of a video (images, sound, effects, etc.) inside of it.
Codecs have two ways of compressing formats: lossy or lossless.
With lossy compression you achieve the highest compression rates. You permanently eliminate information that is either unimportant or detail so fine that most viewers will never notice its absence.
Why do we use this compression?
Full-size files are rarely shared on the internet, since storing and transferring them is impractical, expensive and slow. For example, Blu-ray films can be bigger than 50GB (on average); downloading or purchasing them at full size would be long and expensive. With lossy compression you can create a high-quality file at a fraction of the size; yes, it loses some quality, but the advantages outweigh the disadvantages.
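As a back-of-the-envelope illustration (the 100 Mbit/s connection speed here is an assumed example, not a figure from the text), even on a fast link a 50GB film takes over an hour to download:

```python
# Rough download-time estimate for an uncompressed Blu-ray-sized file.
size_bits = 50e9 * 8        # 50 GB expressed in bits
bandwidth = 100e6           # assumed 100 Mbit/s connection
seconds = size_bits / bandwidth
print(round(seconds / 60), "minutes")  # ≈ 67 minutes
```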
Lossless compression loses none of the information or quality of the video. It is what happens when you compress a folder on your desktop into a ZIP or RAR archive to send it by email to a colleague. It is not the best way to compress a video, though: even compressed losslessly, a video can still be bigger than the available bandwidth. This type of compression is not used on a daily basis, but mainly in the film/media industry.
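The lossless guarantee can be demonstrated with Python's standard zlib module, which implements the same DEFLATE compression used by ZIP archives; the sample data below is made up purely for illustration:

```python
import zlib

# Highly repetitive data compresses well losslessly.
data = b"frame frame frame " * 1000
packed = zlib.compress(data)

# Lossless: decompression recovers every byte exactly.
assert zlib.decompress(packed) == data
print(len(data), "->", len(packed), "bytes")
```

Lossy compression, by contrast, throws detail away for good: there is no decompress step that can bring it back.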
Types of Codec
There are various types of codec; here are three of the most popular.
H.264
This codec is the most important for compressing high-quality videos. It can be used for either lossy or lossless compression, and it lets you choose every detail of the compression, including quality, target file size and frame rate. H.264 video is typically produced with encoders such as x264 and paired with MP3 or AAC audio, depending on the size of compression you want.
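As a sketch of how those settings come together in practice, an H.264 encode is commonly driven through a tool such as ffmpeg with the x264 encoder. The filenames and values below are illustrative assumptions, not details from the text:

```python
# Build an ffmpeg command line for an H.264 (x264) + AAC encode.
# CRF controls the quality/size trade-off: lower = better quality, bigger file.
cmd = [
    "ffmpeg",
    "-i", "input.mov",        # hypothetical source file
    "-c:v", "libx264",        # H.264 video via the x264 encoder
    "-crf", "23",             # quality target (0-51; 23 is the default)
    "-r", "24",               # output frame rate
    "-c:a", "aac",            # AAC audio
    "-b:a", "128k",           # audio bitrate
    "output.mp4",
]
print(" ".join(cmd))
```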
MPEG-4
This codec consists of many parts, but only one of them is used for video encoding: MPEG-4 Part II. It is still considered one of the most common codecs for standard-definition streaming.
DivX/XviD
These codecs are built on the MPEG-4 standard. XviD is open source, while DivX is a commercial codec, but the two are interchangeable: either one can encode or decode the other's output. They are used mostly for video encoding (not much else) and can work in combination with any of the above.
Screen Ratios
Screen ratios refer to the width and height of a screen. They are represented by two numbers; for example, one of the oldest screen ratios is 4:3. The first number, 4, refers to the width and 3 to the height, meaning that for every 4 units of width there are 3 units of height. There is another way to represent the same aspect ratio, 1.33:1; nowadays this form is preferred, and the colon is usually left out, writing simply 1.33.
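The arithmetic above can be checked directly: reduce width:height by their greatest common divisor to get the two-number form, or divide width by height to get the decimal form:

```python
from math import gcd

def aspect(width, height):
    """Return the reduced ratio and its decimal form."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}", round(width / height, 2)

print(aspect(640, 480))    # ('4:3', 1.33)
print(aspect(1920, 1080))  # ('16:9', 1.78)
```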
When the first films were projected they used the 1.33:1 ratio, but after audio was incorporated onto the same film strip the ratio changed from 1.33 to 1.37. This ratio came to be called the Academy Ratio, since the Academy of Motion Picture Arts and Sciences approved it for films.
In 1952, Cinerama improved a technique already known as Polyvision. It consisted of using three cameras and three projectors to show the film on a curved screen. With this technique Cinerama achieved a 2.35:1 ratio, and with it a big audience who wanted to see films bigger. After this, all of Hollywood started to create wider projections, each studio trying to make its films the widest. That is how MGM launched MGM65, which used 70mm film strips and a ratio of 2.76:1. MGM65 was used on the film Ben-Hur (1959).
Ratios have kept changing along with technology. Nowadays two of the most used are 2.39:1 and 1.85:1. But that doesn't mean a film must stick to a single ratio throughout; a great example of multiple ratios in one film is Oz the Great and Powerful.
Resolution
Resolution is the amount of detail contained in a frame, and it determines the quality of a video. It is measured in dots per inch or pixels per inch. The resolution you choose depends on the format the image will be viewed in: whether you watch on a computer, a television or through a projector, the required resolution changes, especially if you stretch the frames.
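For screen video, detail is usually counted as total pixels per frame. The frame sizes below are common standards cited for illustration, not dimensions taken from the text:

```python
# Total pixel counts for some common frame sizes.
sizes = {"SD (PAL)": (720, 576),
         "HD": (1280, 720),
         "Full HD": (1920, 1080)}

for name, (w, h) in sizes.items():
    print(f"{name}: {w}x{h} = {w * h:,} pixels")
```

A Full HD frame carries roughly nine times the pixels of an SD frame, which is why higher resolutions demand so much more storage and bandwidth.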
Frame Rate
Videos consist of many images shown at incredible speed to create the illusion of movement. Each of these images is called a frame. How many frames are there in a second? That is what the frame rate measures. Frame rates differ depending on a country's TV standard. For example, NTSC (National Television System Committee), used in North and South America as well as Japan and other countries, uses 30fps as its frame rate. Europe and parts of Africa and Asia use PAL (Phase Alternating Line), which uses 25fps. And for film? The standard has been 24fps since 1927.
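Frame rate directly sets how many individual images a clip of a given length contains, which in turn drives its file size. A quick calculation (the 90-minute runtime is an assumed example):

```python
def total_frames(minutes, fps):
    """Number of frames in a clip of the given length at the given rate."""
    return minutes * 60 * fps

print(total_frames(90, 24))  # film: 129,600 frames
print(total_frames(90, 30))  # NTSC: 162,000 frames
print(total_frames(90, 25))  # PAL:  135,000 frames
```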