Resumen - mis apuntes

Title: Resumen - mis apuntes (Summary - my notes)
Author: Paula Dorado
Course: Digital Post-Production
Institution: Universidad Carlos III de Madrid
Pages: 16


A short summary of my notes + presentations.

Section I

1. Parameters to create a sequence/composition

What criteria do you consider when setting up your composition?

1. Video standard.
2. Scanning method: interlaced or progressive.
3. Frame size, number of lines and spatial resolution.
4. Pixel aspect ratio.
5. Frames per second (temporal resolution).

1. Video standard

Analog video standards:
+ PAL:
  • 625 scan lines per frame (576 active) and 50 fields per second (25 fps).
  • Used in most of Europe, China, Australia, and South America.
+ NTSC:
  • 60 fields per second (30 fps).
  • Used in the U.S., Japan, Canada and some areas of South America.
+ SECAM (differs only in the colour encoding):
  • Analog colour TV system for encoding colour, invented in France; it was the first European colour TV standard.
+ PAL+:
  • A development designed for the transmission of video with a 16:9 aspect ratio within a PAL-compatible video signal.
  • When a TV display receives a PAL+ signal, the frame is stretched (and we see a wider aspect ratio).

Digital video standards: more recently, digital video formats were introduced as digital High Definition video standards.
+ HD 1280x720 / Full HD 1920x1080:
  • Widescreen.
  • Different colorimetry; progressive and interlaced variants.
+ 4K UHD 3840x2160 and 8K UHDTV 7680x4320:
  • Proposed by NHK Science and Technology Research Laboratories and approved by the ITU (International Telecommunication Union).
  • Frame rates up to 120p, 10 or 12 bits per sample.
  • Rec. 709 (HDTV) color space.
+ Broadcast 3DTV standard:

• Frame compatible:
  - Side-by-Side: the left and right images are placed next to each other in one HD image.
    * 1080i @ 50 Hz Side-by-Side
    * 720p @ 50 Hz Side-by-Side
    * 720p @ 59.94/60 Hz Side-by-Side
    * 1080p @ 23.97/24 Hz Side-by-Side
    * 1080i @ 59.94/60 Hz Side-by-Side
  - Top-and-Bottom: the left and right images are placed one above the other in one HD image.
    * 1080p @ 23.97/24 Hz Top-and-Bottom
    * 720p @ 59.94/60 Hz Top-and-Bottom
• Not every spatial multiplex format is frame compatible with current systems. The following formats are non-frame compatible:
  * 720p @ 50/60 Hz
  * 1080p @ 24 Hz
• It uses the H.264/MPEG-4 AVC codec with MVC (Multi-View Coding):
  - Ready for other multi-view technologies.
  - Compatible with other devices that use H.264.
  - Used in broadcast transmission.
• Modalities:
  - Stereoscopic: active glasses or polarized glasses.
  - Autostereoscopic: without glasses.

2. Scanning method

Interlaced: the frame is transmitted in two fields.
Progressive: advantages:
- Full frames.
- More vertical resolution.
- Sync problems between fields are eliminated.
- It is the method adopted in digital environments.
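The field-based transmission above can be illustrated with a small sketch (a hypothetical example in Python/NumPy, not from the notes): "weaving" the two interlaced fields back together reconstructs one full progressive frame.

```python
import numpy as np

# Illustrative sketch: weave two interlaced fields back into one
# progressive frame. Each field array holds alternate scan lines.
def weave(top_field, bottom_field):
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2] = top_field      # top field -> even rows
    frame[1::2] = bottom_field   # bottom field -> odd rows
    return frame

# PAL-like example: two 288-line fields make one 576-line frame.
top = np.full((288, 720), 1, dtype=np.uint8)
bottom = np.full((288, 720), 2, dtype=np.uint8)
frame = weave(top, bottom)
print(frame.shape)  # (576, 720)
```

This is why interlacing halves the data per transmission instant while keeping full vertical resolution over two fields, at the cost of the inter-field sync artifacts the notes mention.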

3. Frame size, number of lines and spatial resolution

Frame size: the dimensions of the video frame. There are other video standards, such as HDV and High Definition, with different pixels per line and more vertical lines.

Movements:
- Panning: a sweeping movement of a camera across a scene or a frame.
- Zoom in or zoom out.
- Ken Burns effect.
- Camera mapping.

Spatial resolution: the number of pixels used in the construction of a digital image.

4. Pixel aspect ratio

The frame aspect ratio is the relation between the width and height of the frame: it expresses how much wider the image is than it is tall. Ratios are obtained by dividing the width of the frame by its height, and they are written as two numbers separated by a colon.
- 4:3 (1.33:1): universal video format.
- 16:9 (1.77:1): HD television format.
The pixel aspect ratio describes the shape of the individual pixels: pixels must be stretched or squeezed to reach the final frame aspect ratio.
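The arithmetic above can be sketched in a couple of lines (an illustrative helper, assuming square pixels, i.e. PAR = 1.0, for both examples):

```python
# Sketch: the displayed aspect ratio combines the frame's pixel grid
# with the pixel aspect ratio (PAR). PAR = 1.0 means square pixels.
def display_aspect_ratio(width, height, par=1.0):
    return (width * par) / height

dar_hd = display_aspect_ratio(1920, 1080)  # 16:9 frame
dar_sd = display_aspect_ratio(768, 576)    # 4:3 frame with square pixels
print(round(dar_hd, 2), round(dar_sd, 2))  # 1.78 1.33
```

With non-square pixels (PAR != 1.0), the same pixel grid yields a different displayed ratio, which is exactly why pixels must be stretched or squeezed.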

Conversion between aspect ratios
- Conversion without distortion of the original:
  • Letterboxing.
  • Pillarboxing.
- Conversion with distortion:
  • Scaling up: enlarge (from 4:3 to 16:9).
  • Scaling down: reduce (from 16:9 to 4:3).
- Cropping:
  • By width.
  • By height.
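The distortion-free conversions (letterboxing/pillarboxing) amount to a "fit inside" computation; a minimal sketch with hypothetical helper names:

```python
# Sketch of letterbox/pillarbox: scale the source uniformly so it fits
# inside the target frame without distortion, then pad the leftover
# area with black bars.
def fit_without_distortion(src_w, src_h, dst_w, dst_h):
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst_w - new_w) // 2   # pillarbox bars (left/right)
    pad_y = (dst_h - new_h) // 2   # letterbox bars (top/bottom)
    return new_w, new_h, pad_x, pad_y

# A 4:3 image (1440x1080) in a 16:9 frame gets pillarbox bars:
print(fit_without_distortion(1440, 1080, 1920, 1080))  # (1440, 1080, 240, 0)
```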

5. Frames per second (temporal resolution)

The frequency (rate) at which consecutive images, called frames, appear on a display.

2. Film formats

Motion picture film format: a film format is a technical definition of a set of standard characteristics regarding image capture on photographic film, for either still photography or filmmaking. It is given by the width, the aspect ratio and the number of perforations of the film.

- Aspect ratio 1.37:1 (Academy format, roughly 4:3).
  • Original silent movies used 1.33:1, designed by Thomas Edison.
  • This ratio changed to 1.37:1 with the arrival of sound.
  • The widescreen revolution of the 1950s made this format obsolete.
- Aspect ratio 1.85:1 (close to 16:9).
  • Conventional 35 mm film.
  • The most common ratio is 1.85:1, though the film contains more information outside these boundaries.
    * Soft matte: the full frame is exposed during filming. The projectionist is responsible for matting out the top and bottom in the theater (placing barriers in projection).
    * Hard matte: black borders are applied during shooting.

- Super 35 (2.35:1).
  • A motion picture film stock format used for widescreen movies to get a 2.35:1 aspect ratio.
  • A production format that uses the standard 35 mm size.
  • It does not reserve space for the soundtrack, because it is later converted to other distribution formats.
  • It does not use anamorphic lenses (which are more expensive), so this format can use a wider variety of lenses.
- 70 mm (2.20:1).
  • The vast majority of film theaters are unable to handle 70 mm film, so original 70 mm films are shown as 35 mm prints at these venues.
  • It is obsolete for conventional film, but it was recovered for IMAX using a nearly square aspect ratio.

3. Video digital formats

- Video tape formats.
- Digital video file formats:
  + Codecs & containers.
- Video encoding settings:
  - Luminance and color sampling.
  - Video signal & connectors.

Video tape formats

"Format" also refers to the width of the tape, not only to the recording/reproduction system.

The tape carries several bands:
- Audio track.
- Video track: contains the magnetic particles that hold the signal electromagnetically.
- Sync track: allows the video tape player to synchronize its scan speed and tape speed to the speed of the recording.

Digital video file formats

- Containers: a container (or wrapper) format is a metafile format whose specification describes how different data elements and metadata coexist in a computer file.
  • AVI (Audio Video Interleaved), ext. .avi: Windows proprietary format.
  • QuickTime Movie, ext. .mov, etc.: cross platform; newer versions can include interactivity and navigation of pseudo-three-dimensional spaces. Developed by Apple.
  • MPEG (Moving Picture Experts Group): a container that also compresses the video information, so it is a hybrid container that includes a codec.
    + MPEG-1: CDs.
    + MPEG-2: DVDs and digital TV.
    + MPEG-4: broadband video.

- Codecs (compressor/decompressor): eliminate redundant information to save space.
  • Intraframe.
  • Interframe.
  • Some codecs:
    + MPEG-1
    + MPEG-2
    + MPEG-4 (MPEG-4 Part 10, H.264, widely used in broadcasting)
    + MPEG-7

* Compression:
  • Spatial compression: eliminates pixel data that do not change. It is applied to a single frame of data, regardless of preceding or following frames. Frames with spatial compression are called intraframes.
  • Temporal compression: saves space by analyzing each frame, sampling, and storing only the difference with the previous frame. Frames using temporal compression are called interframes.

• Video encoding settings:
  + Frame size: some codecs require specific sizes.
  + Pixel aspect ratio.
  + Frame rate: increasing the frame rate can produce smoother movement (depending on the frame rates of the original source clips), but requires more disk space.
  + Quality: some codecs let you reduce the quality.
  + Bandwidth.
  + Scanning method.
  + Aspect ratio.
  + Color depth (how many colors a pixel can represent, in bits per sample): whether to keep the original color depth or reduce it.
  + Key frames: created automatically in the file at regular intervals; during compression they are stored as complete frames. Frames placed between the keyframes, called intermediate frames, are compared with the previous frame and store only the new data.
  + Kilobits per second (kbps): choose the highest quality that the target playback medium supports.
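The keyframe/interframe idea behind temporal compression can be sketched in a few lines (a toy illustration, not any real codec): store the first frame whole and, for each later frame, only the pixels that changed.

```python
import numpy as np

# Toy temporal (interframe) compression: one keyframe plus per-frame
# deltas holding only the positions and values of changed pixels.
def encode(frames):
    key = frames[0].copy()
    deltas = []
    for prev, cur in zip(frames, frames[1:]):
        changed = np.nonzero(prev != cur)
        deltas.append((changed, cur[changed]))
    return key, deltas

def decode(key, deltas):
    out = [key.copy()]
    for changed, values in deltas:
        frame = out[-1].copy()
        frame[changed] = values   # apply only the stored differences
        out.append(frame)
    return out

frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
frames[1][0, 0] = 255            # a single pixel changes in frame 1
key, deltas = encode(frames)
decoded = decode(key, deltas)
print(all((a == b).all() for a, b in zip(frames, decoded)))  # True
```

Real codecs add motion estimation and lossy quantization on top, but the keyframe-plus-difference structure is the same.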

Luminance and color sampling

* Additive synthesis: separation of light into red, green and blue.
* The human visual system is less sensitive to color than to luminance.

Main ways to carry the color information:
1. RGB: each color is treated separately, but a wider bandwidth is needed. This signal is used in displays and computers.
2. Y, Cb, Cr (component).

There are three main color sampling schemes:

- 4:4:4
  + Each horizontal line carries all the luminance and chrominance information.
  + Although this sampling doubles the color information, the human visual system perceives no color difference.
  + Very useful for applications such as chroma keys, masks, etc.

- 4:2:2
  + 4 samples of luminance.
  + 2 samples for each color-difference signal (R-Y and B-Y).
  + Used for studio work.

- 4:1:1 and 4:2:0
  + 4 samples of luminance; all pixels carry luminance.
  + 4:1:1: 1 sample for each color-difference signal. DVCAM, DVCPRO.
  + 4:2:0: one color component is taken per vertical line (R-Y on one, B-Y on the other). DV, DVCPro, DVCam. Used for documentaries, ENG, institutional videos.
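The storage impact of the sampling schemes above is easy to quantify (a back-of-the-envelope sketch, assuming 8 bits per sample; the per-pixel factors follow from samples per 4-pixel group: 12, 8 and 6 respectively):

```python
# Average samples per pixel for each chroma subsampling scheme:
# 4:4:4 -> 3.0, 4:2:2 -> 2.0, 4:1:1 and 4:2:0 -> 1.5.
SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0, "4:1:1": 1.5, "4:2:0": 1.5}

def frame_bytes(width, height, scheme, bits_per_sample=8):
    return int(width * height * SAMPLES_PER_PIXEL[scheme] * bits_per_sample / 8)

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, frame_bytes(1920, 1080, scheme))
# 4:4:4 6220800
# 4:2:2 4147200
# 4:2:0 3110400
```

So moving from 4:4:4 to 4:2:0 halves the data per frame, with little perceived color loss, which is why the lighter schemes dominate in acquisition and distribution formats.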

3. S-Video: luminance and chrominance are separated. Analog (non-digital) video signal. Higher quality than composite video, but lower than component video.
4. Composite: luminance and chrominance information travel through the same cable (signal). Analog (non-digital) video signal.

Video signal and connectors

When you capture and output, the type of video signal you use to connect your equipment is a critical factor in determining the quality of your video. The most common video signals on today's video devices:

- Component (YUV/RGB).
- Composite.
- S-Video (Y/C, "separate video" or "super video"): a higher-quality analog video signal used by high-end consumer video equipment. Higher quality than composite video, but lower than component video. It uses the S-video connector.

The video signal can be transferred via different connectors or digital interfaces:

- RCA: in professional editing environments, the composite video signal is mostly used for low-quality monitoring. (In consumer and home use, composite signals are often used to connect VCRs or DVD players to televisions.)
  + Do not confuse the RCA connectors that transmit component video with the RCA connector that transmits the composite video signal.
- BNC: once a popular computer network connector, now used for various types of video signals, most commonly component video. It is used with coaxial cables to transfer video over long distances.
- SDI and HD-SDI (Serial Digital Interface):
  + SDI: allows capturing formats with 4:2:2 and 4:4:4 sampling. Transmits digital video over long distances with high sampling and without compression (1:1), using coaxial cables and BNC connectors. Up to 360 Mbit/s.
  + HD-SDI: part of the SDI family of standards (based on coaxial cable) created for the transport of uncompressed digital video. It improves on the SMPTE 259 standard to obtain higher bitrates: up to 1.485/1.001 Gbit/s.
- IEEE 1394 or FireWire: designed by Apple, turned into a standard in 1995. Bandwidth of 100, 200, 400 and 800 Mb/s. A point-to-point architecture: a simple connection architecture in which devices are connected directly to each other without additional equipment.
- HDMI (High-Definition Multimedia Interface): an industry-supported standard for uncompressed digital audio and video. Supports both digital television and computer signals. It is generally considered the successor of the SCART interface.
- SCART: a connector with multiple pins that carries composite video, RGB component video, and stereo audio in one bundle.
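The 1.485 Gbit/s HD-SDI figure can be sanity-checked with simple arithmetic (a sketch; it assumes the standard 1080i total raster of 2200 x 1125 sample positions, blanking included, at 30 frames per second, each position carrying a 10-bit luma word and a 10-bit chroma word under 4:2:2):

```python
# Back-of-the-envelope check of the HD-SDI serial rate for 1080i/30.
total_samples = 2200 * 1125   # full raster, active lines + blanking
frames_per_second = 30
bits_per_sample = 20          # 10-bit Y + 10-bit Cb/Cr (4:2:2 multiplex)

bitrate = total_samples * frames_per_second * bits_per_sample
print(bitrate)  # 1485000000 -> 1.485 Gbit/s
```

The 1.485/1.001 variant in the list simply corresponds to the 59.94/29.97 (NTSC-derived) frame rates.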

4. 2D and 3D graphics

- Bitmap (raster graphic) or vector graphic.
- Motion graphic design for films.
- Motion graphic design for TV.

Bitmap

Bitmap images (also known as raster images) are made up of pixels in a grid, so they are resolution dependent. It is difficult to increase or decrease their size without sacrificing a degree of image quality. When you reduce the size of a bitmap image through your software's resample or resize command, you must throw away pixels.
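The pixel loss described above is visible in the simplest possible resize (an illustrative sketch in Python/NumPy: a nearest-neighbour 2x downscale just keeps every second pixel and discards the rest):

```python
import numpy as np

# Nearest-neighbour 2x downscale: keep every second row and column.
def downscale_2x(img):
    return img[::2, ::2]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)  # 16 distinct pixels
small = downscale_2x(img)
print(small.shape)  # (2, 2): 12 of the 16 original pixels were thrown away
```

Better resamplers (bilinear, Lanczos) blend neighbouring pixels instead of dropping them outright, but information is still lost, which is the core difference from vector graphics.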

Vector

Vector images are made up of many individual, scalable objects. These objects are defined by mathematical equations rather than pixels. Objects may consist of lines, curves, and shapes with editable attributes such as color, fill, and outline. They have many advantages, but their primary disadvantage is that they are unsuitable for producing photo-realistic imagery.

Motion graphic design for films

* Film titles: a film's opening credits are designed to create the context of the film and establish expectations about its atmosphere and tone.
* Network branding: identifies the station or network, adapting its aesthetics and logo to a time-based environment. Normally between 5 and 10 seconds.
* Interstitials: 30 to 60 second mini-programs that appear between movies or other events, each with a specific objective. Sometimes, if a program finishes earlier than expected, a short mini-program is inserted to fill the time. Others work from a commercial standpoint, promoting a network's brand by creating a link with an existing program.
* Bumpers: brief presentations that transition between a program and a commercial break. They are typically 2-5 seconds long; in most cases the name or logo of the show is displayed, accompanied by an announcement stating the title (if any) of the presentation, the name of the program, and the broadcast or cable network.
* Lower thirds: combinations of graphics and text on the bottom portion of the screen that identify the station, the presenter(s), and the content being aired. (Animated lower thirds.)
* Mortises: full-screen graphics used to frame live footage, sometimes combined with lower thirds.
* Lineups and upfronts: full-screen graphics that inform viewers about a network's upcoming program schedule by displaying the names of the shows, dates, and times.
* Network packages: similar to a show package, a complete "video information system" comprising promotional elements such as station identifiers, bumpers, lower thirds, and mortises.
* Commercials: today's commercials, selling everything from household items to political campaigns, range from 5-10 seconds to hour-long infomercials. 30-second commercials are often referred to as spots.
* Public service announcements (PSA): noncommercial spots that aim to raise public awareness about specific issues such as energy conservation, global warming, homelessness, and drunk driving. PSAs are also used to promote nonprofit organizations such as United Way, the Red Cross, and the American Cancer Society.
* Music videos: cinematic traditions carried over from film into music videos have been enhanced with the incorporation of special effects and animation.

Motion graphic design for TV

* The logo:
  - Logotype.
  - Isotype: a graphic element used to identify a brand.
  - Isologo (isologotype): a graphic identifier used to sign an entity's communications.

Section II

1. Chroma key, alpha channels, masks, and mattes

What is VFX? Processes in which imagery is created outside the context of the live-action shot.

- Chroma key: a technique for compositing (layering) two images or video streams together based on color hues (chroma range). The chosen color (the "chroma") is removed, i.e. that portion of the image is made transparent. As a result, we obtain a matte or a mask.

The alpha channel stores the matte information. Mattes are used in photography and special-effects filmmaking to combine two or more image elements into a single final image. Mattes can be static or travelling.

* Why a green color? Tips to get a good chroma key:

- Keep the illumination of the background as even as possible.
- Try to use a background whose color values differ from your subject's.
- It is better to use special fluorescent lights designed for VFX.
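The chroma key idea can be sketched in a few lines (a toy illustration with hypothetical thresholds, nothing like a production keyer): a pixel is classified as background when its green value clearly dominates red and blue, and the resulting matte becomes the alpha channel.

```python
import numpy as np

# Toy green-screen key: 0 in the matte = transparent (background),
# 255 = opaque (subject). The dominance threshold is an assumption.
def green_key_matte(rgb, dominance=40):
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    background = (g - np.maximum(r, b)) > dominance
    return np.where(background, 0, 255).astype(np.uint8)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (20, 200, 30)    # green-screen pixel
frame[1, 1] = (180, 170, 160)  # subject pixel
matte = green_key_matte(frame)
print(matte)
```

Real keyers work on chroma-difference signals, soften matte edges and suppress green spill, but the core classification step is this comparison against the key color.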

- Rotoscoping: the objective is also to create a matte or mask, but in this case we don't have a green/blue plate that lets the software analyze one color and make it transparent. In the rotoscoping process we mask the shapes of moving objects, such as human beings or spaceships. Example: matte painting.

- Tracking, motion tracking, match moving: you can track the movement of an object and then attach the tracking data for that movement to another object (such as another layer or an effect control point) to create compositions in which images and effects follow the motion. Uses:
  - To match different shots in one composition.
  - A visual effect follows a point in a camera movement.
  - A still or video image follows a point in a camera movement.
  - Linking text or video to a tracked point.
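The core of point tracking can be sketched as template matching (a toy brute-force illustration, not how production trackers are implemented): find where a small reference patch best matches in the next frame by minimizing the sum of squared differences.

```python
import numpy as np

# Toy point tracker: exhaustively search for the position in `frame`
# where `patch` matches best (minimum sum of squared differences).
def track(frame, patch):
    ph, pw = patch.shape
    fh, fw = frame.shape
    best, best_pos = None, (0, 0)
    for y in range(fh - ph + 1):
        for x in range(fw - pw + 1):
            window = frame[y:y + ph, x:x + pw].astype(int)
            ssd = np.sum((window - patch.astype(int)) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

frame = np.zeros((8, 8), dtype=np.uint8)
frame[5:7, 3:5] = 200                      # the feature moved to (5, 3)
patch = np.full((2, 2), 200, dtype=np.uint8)
print(track(frame, patch))  # (5, 3)
```

Running this per frame yields the track data that is then attached to a layer or effect control point, as described above; real trackers restrict the search window and use sub-pixel refinement.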

* Motion control system.
Components:
- A camera, a motorized head, a dolly or crane on a track to run on, and a computer to drive them.
- The data provided by the computer is used to "move" virtual cameras and add digitally created items, or, vice versa, a virtual camera movement is used to move a real camera.
When is it used?
- In twin shots, when an actor has to appear twice (or more) in the same scene while the camera is moving.
- For choreographed camera motion where exact camera movement and placement are critical.
- In difficult scenes with dangerous elements or locations.
- To match shots of live action with miniatures and models.
- For the removal of wires and special rigs.
* Spot CNP: transition between stations.

- Digital video assist with compositing capability:
  - The versatility of digital video assist systems allows new functions to be integrated into the filmmaking process.
  - Record separate camera inputs and display them separately to choose the best shot.

* Digital video assist:
- Not long ago, the "video assist" was limited to a VCR and a monitor. Today, a DVA can record every take on disk, obtain instant dailies, simplify the editing process, and perform other functions that were mere dreams a few years ago.
- It is almost a necessity when complex visual effects shots are being planned, to preview how the elements of the many visual effects shots will fit together while on the set. For example, the previsualization with the animation of the planned action is recorded and then brought to the set.
- 3D CG elements created prior to filming can be displayed on the video technician's workstation. Visual effects can be added and overlaid with the live action.

* ENCODACAM

A production tool that makes it possible to record actors in real time on a blue or green screen set and combine their live performance with a pre-existing background.

Objective:
- To help the director, VFX supervisor and actors judge how well the action and the digitally created background work together.
- The recorded camera data can be given to the visual effects facility so that its digital artists will know exactly what the camera move was.

The system employs the same standard production cranes, dollies, and camera heads that crews are used to working with. The image from the camera is composited with the virtual background during filming, and the combined signal can be seen on any display.

2. Motion capture and 3D technologies

- Lighting (how to light a 3D scene):
  - Standard and photometric lights: computer-based objects that simulate the different lights used in film and TV work. Unlike photometric lights, standard lights do not have physically based intensity values.
  - Spot and directional lights: a spotlight casts a focused beam of light like a flashlight; a follow sp...

