The human eye and its brain interface, the human visual system, can process 10 to 12 separate images per second, perceiving them individually,[1] but the threshold of perception is more complex, with different stimuli having different thresholds: the average shortest noticeable dark period, such as the flicker of a cathode ray tube monitor or fluorescent lamp, is 16 milliseconds,[2] while a single-millisecond visual stimulus may have a perceived duration of between 100 and 400 ms due to persistence of vision in the visual cortex. Images perceived within this duration may therefore appear as one stimulus: a 10 ms green flash of light immediately followed by a 10 ms red flash is perceived as a single yellow flash.[3]

Persistence of vision can also create an illusion of continuity, allowing a sequence of still images to give the impression of motion. Early silent films had frame rates from 14 to 24 FPS, which was enough to convey a sense of motion, but the motion appeared jerky. Projectors with dual- and triple-blade shutters multiplied the flash rate seen by the audience by two or three. Thomas Edison said that 46 frames per second was the minimum: "anything less will strain the eye."[4][5] In the mid- to late 1920s, the frame rate for silent films increased to about 20 to 26 FPS.[4]
When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive to changes in audio frequency. From 1927 to 1930, the rate of 24 FPS became standard for 35 mm sound film,[1] a film speed of 456 millimetres (18.0 in) per second. This allowed simple two-blade shutters to project a series of images at 48 per second. Many modern 35 mm film projectors use three-blade shutters to give 72 images per second, each frame being flashed on screen three times.[4]
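The figures above are simple arithmetic; a minimal sketch, assuming standard 4-perforation 35 mm film with a perforation pitch of 4.75 mm (both assumptions, not stated in the text above):

```python
# Arithmetic behind the figures above; the 4-perf geometry is an assumption.
FPS = 24                 # standard sound-film frame rate
PERFS_PER_FRAME = 4      # 4-perforation 35 mm pulldown
PERF_PITCH_MM = 4.75     # spacing between perforations

print(FPS * PERFS_PER_FRAME * PERF_PITCH_MM, "mm/s")  # 456.0 mm/s

# A multi-blade shutter flashes each frame more than once:
for blades in (2, 3):
    print(f"{blades}-blade shutter: {FPS * blades} flashes per second")
# 2-blade shutter: 48 flashes per second
# 3-blade shutter: 72 flashes per second
```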
Motion picture film
In the motion picture industry, where traditional film stock is used, the industry-standard filming and projection format is 24 frames per second. Historically, 25 FPS was used in some European countries. Shooting at a slower frame rate creates fast motion when projected, while shooting at a frame rate higher than 24 FPS creates slow motion when projected. Other examples of historical experiments in frame rates that were not widely accepted were Maxivision 48 and Showscan, the latter developed by 2001: A Space Odyssey special effects creator Douglas Trumbull.
Digital video and television
There are three main frame rate standards in the TV and digital cinema business: 24p, 25p, and 30p. However, there are many variations on these as well as newer emerging standards.
- 24p is a progressive format and is now widely adopted by those planning on transferring a video signal to film. Film and video makers use 24p even if their productions are not going to be transferred to film, simply because the on-screen "look" of the (low) frame rate matches that of native film. When transferred to NTSC television, the rate is effectively slowed to 23.976 FPS (24 × 1000/1001 to be exact), and when transferred to PAL or SECAM it is sped up to 25 FPS; both conversions are sketched after this list. 35 mm movie cameras use a standard exposure rate of 24 FPS, though many cameras offer rates of 23.976 FPS for NTSC television and 25 FPS for PAL/SECAM. The 24 FPS rate became the de facto standard for sound motion pictures in the mid-1920s.[4] Practically all hand-drawn animation is designed to be played at 24 FPS, but actually hand-drawing 24 unique frames per second ("1's") is costly. Even in big-budget films, animation is usually shot on "2's" (one hand-drawn frame is shown twice, so only 12 unique frames per second),[6][7] and some animation is even drawn on "4's" (one hand-drawn frame is shown four times, so only six unique frames per second).
- 25p is a progressive format and runs 25 progressive frames per second. This frame rate derives from the PAL television standard of 50i (50 interlaced fields per second). Film and television companies use this rate in 50 Hz regions for direct compatibility with television field and frame rates. Conversion for 60 Hz countries is achieved by slowing the media to 24p and then converting to 60 Hz systems using pulldown. While 25p captures half the temporal resolution of motion that normal 50i PAL registers, it yields a higher vertical spatial resolution per frame. Like 24p, 25p is often used to achieve a "cine" look, albeit with virtually the same motion artifacts. It is also better suited to progressive-scan output (e.g., on LCD displays, computer monitors and projectors) because no interlacing is present.
- 30p is a progressive format and produces video at 30 frames per second. Progressive (noninterlaced) scanning mimics a film camera's frame-by-frame image capture. The effects of inter-frame judder are less noticeable than with 24p, and the format retains a cinematic-like appearance. Shooting video in 30p mode gives no interlace artifacts but can introduce judder on image movement and on some camera pans. The widescreen film process Todd-AO used this frame rate in 1954–1956.[8]
- 48p is a progressive format and is currently being trialed in the film industry. At twice the traditional rate of 24p, this frame rate attempts to reduce the motion blur and flicker found in films. Director James Cameron stated his intention to shoot the two sequels to his film Avatar at a higher frame rate than 24 frames per second, in order to add a heightened sense of reality.[9] The first film shot at 48 FPS was The Hobbit, a decision made by its director, Peter Jackson.[10] At a preview screening at CinemaCon, the audience's reaction was mixed after being shown some of the film's footage at 48p, with some arguing that the feel of the footage was too lifelike (thus breaking the suspension of disbelief).[11]
- 50i is an interlaced format and is the standard video field rate per second for PAL and SECAM television.
- 60i is an interlaced format and is the standard video field rate per second for NTSC television (e.g., in the US), whether from a broadcast signal, DVD, or home camcorder. This interlaced field rate was developed separately by Farnsworth and Zworykin in 1934,[12] and was part of the NTSC television standards mandated by the FCC in 1941. When NTSC color was introduced in 1953, the older rate of 60 fields per second was reduced by a factor of 1000/1001 to avoid interference between the chroma subcarrier and the broadcast sound carrier, hence the usual designation "29.97 FPS" = 30 frames (60 fields)/1.001; the arithmetic is sketched after this list.
- 50p/60p is a progressive format and is used in high-end HDTV systems. While it is not yet technically part of the ATSC or DVB broadcast standards, reports suggest that higher progressive frame rates will be a feature of the next-generation high-definition television broadcast standards.[13] In Europe, the EBU considers 1080p50 the next-step, future-proof system for TV broadcasts and is encouraging broadcasters to upgrade their equipment accordingly.[14]
- 72p is a progressive format and is currently in experimental stages. Major institutions such as Snell have demonstrated 720p72 pictures as a result of earlier analogue experiments, in which 768-line television at 75 FPS looked subjectively better than 1150-line 50 FPS progressive pictures with the higher shutter speeds available (and a correspondingly lower data rate).[15] Modern cameras, such as the Red One, can use this frame rate to produce slow-motion replays at 24 FPS. Douglas Trumbull, whose experiments with different frame rates led to the Showscan film format, found that the emotional impact on viewers peaked at 72 FPS.[16] 72 FPS is the maximum rate available in the WMV video file format.
- 120p (120.00 Hz exactly) is a progressive format and is standardized for UHDTV by the ITU-R BT.2020 recommendation. It will be the single global "double-precision" frame rate for UHDTV (instead of using 100 Hz for PAL-based countries and 119.88 Hz for NTSC-based countries).
- 300 FPS, both native and interpolated, along with other high frame rates, has been tested by BBC Research for use in sports broadcasts.[17] 300 FPS can be converted to both 50 and 60 FPS transmission formats without major issues, since 300 is an integer multiple of both (300 = 6 × 50 = 5 × 60).
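The 24p transfers mentioned in the list above can be made concrete. Below is a minimal, illustrative sketch (not a broadcast-grade implementation) of the 2:3 pulldown cadence used for NTSC and the simple speed-up used for PAL/SECAM; the function names are assumptions for illustration:

```python
def two_three_pulldown(frames):
    """Map 24p film frames onto 60i fields with the 2:3 cadence:
    each frame is held for alternately 2 and 3 fields, so 4 frames
    become 10 fields (24 frames/s -> 60 fields/s). In real NTSC the
    film is first slowed by 1000/1001 to 23.976 FPS."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

def pal_speedup(fps=24):
    """PAL/SECAM transfer simply runs 24p material 25/24 faster
    (audio must be pitch-corrected)."""
    return fps * 25 / 24

print(two_three_pulldown(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']  -> 10 fields
print(pal_speedup())  # 25.0
```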
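The 1000/1001 adjustment described in the 60i entry is exact rational arithmetic; a short sketch using Python's fractions module:

```python
from fractions import Fraction

field_rate = Fraction(60) * Fraction(1000, 1001)  # 60000/1001 fields/s
frame_rate = field_rate / 2                       # 30000/1001 frames/s

print(field_rate, float(field_rate))   # 60000/1001, ~59.94
print(frame_rate, float(frame_rate))   # 30000/1001, ~29.97

# The same factor slows 24p film for NTSC transfer:
print(float(Fraction(24) * Fraction(1000, 1001)))  # ~23.976
```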
Video games
Frame rate in video games refers to the speed at which the image is refreshed (typically in frames per second, or FPS). Many underlying processes, such as collision detection and network processing, run at different or inconsistent frequencies or in different physical components of a computer. The frame rate affects the experience in two ways: a low frame rate does not give the illusion of motion effectively and impairs the user's capacity to interact with the game, while a frame rate that varies substantially from one second to the next, depending on computational load, produces uneven, "choppy" movement or animation. Many games lock their frame rate at lower but more sustainable levels to give consistently smooth motion.
The first 3D first-person shooter game for a personal computer, 3D Monster Maze, had a frame rate of approximately 6 FPS and was still a success. In modern action-oriented games where players must visually track animated objects and react quickly, frame rates of between 30 and 60 FPS are considered acceptable by most, though this can vary significantly from game to game. Some modern action games, including popular console shooters such as Halo 3, are locked at a maximum of 30 FPS, while others, such as Unreal Tournament 3, can run well in excess of 100 FPS on sufficient hardware. Additionally, some games, such as Quake 3 Arena, perform physics, AI, networking, and other calculations in sync with the rendered frame rate; this can result in inconsistencies with movement and network prediction code if players are unable to maintain the designed maximum frame rate of 125 FPS. The frame rate within games varies considerably depending on what is currently happening at a given moment, or on the hardware configuration (especially in PC games). When the computation of a frame consumes more time than is allowed between frames, the frame rate decreases.
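One common way to avoid the coupling described above, where simulation results depend on the rendered frame rate, is to run game logic at a fixed timestep while rendering as fast as the hardware allows. The following is a generic sketch of that pattern, not how any particular game implements it; the `update` and `render` placeholders and the 125 Hz tick rate are illustrative assumptions:

```python
import time

TICK_RATE = 125        # fixed simulation rate (Hz); illustrative choice
DT = 1.0 / TICK_RATE   # fixed timestep for physics/AI/networking

def update(dt):        # placeholder: advance the game simulation by dt
    pass

def render(alpha):     # placeholder: draw, interpolating by alpha in [0, 1)
    pass

def game_loop():
    previous = time.perf_counter()
    accumulator = 0.0
    while True:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Run as many fixed steps as real time demands; the simulation
        # no longer depends on how fast frames are rendered.
        while accumulator >= DT:
            update(DT)
            accumulator -= DT
        # Render once per loop iteration, at whatever rate the GPU sustains.
        render(accumulator / DT)
```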
A culture of competition has arisen among game enthusiasts with regard to frame rates, with players striving to obtain the highest FPS possible, due to its utility in demonstrating a system's power and efficiency. Indeed, many benchmarks (such as 3DMark) released by the marketing departments of hardware manufacturers and published in hardware reviews focus on the FPS measurement. Even though typical LCD monitors are locked at 60 Hz, making extremely high frame rates impossible to see in real time, playthroughs of game "timedemos" at hundreds or thousands of FPS for benchmarking purposes are still common.
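The FPS figure such benchmarks report is simple counting; a minimal sketch, in which `draw_frame` is an assumed placeholder for the actual rendering work:

```python
import time

def draw_frame():
    pass  # placeholder for the actual rendering work

# Count rendered frames over one-second windows, as a benchmark would.
frames, window_start = 0, time.perf_counter()
end = window_start + 3.0          # measure for three seconds, then stop
while time.perf_counter() < end:
    draw_frame()
    frames += 1
    now = time.perf_counter()
    if now - window_start >= 1.0:
        print(f"{frames / (now - window_start):.1f} FPS")
        frames, window_start = 0, now
```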
Beyond measurement and bragging rights, such exercises do have practical bearing in some cases. A certain amount of discarded "headroom" frames is beneficial for eliminating uneven ("choppy" or "jumpy") output and for preventing FPS from plummeting during the intense sequences, when players need smooth feedback most.
Aside from frame rate, a separate but related factor unique to interactive applications such as gaming is latency. Excessive preprocessing can result in a noticeable delay between player commands and computer feedback, even when a full frame rate is maintained; this delay is often referred to as input lag.
Without realistic motion blurring, video games and computer animations do not look as fluid as film, even at higher frame rates. When a fast-moving object is present on two consecutive frames, the gap between its positions on the two frames contributes to a noticeable separation of the object and its afterimage in the eye. Motion blurring mitigates this effect, since it tends to reduce the image gap when the two frames are strung together; the effect is essentially that of superimposing multiple images of the fast-moving object on a single frame. Motion blurring makes the motion appear more fluid to some people, even as the image of the object becomes blurry in each individual frame, though it can also induce headaches in games that require concentration.[18]
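The superimposition idea can be sketched directly: average several temporal sub-samples within one frame interval into the displayed frame. The `render_at` function below is a hypothetical stand-in for a real renderer, and the bar-shaped "object" is purely illustrative:

```python
import numpy as np

def render_at(t):
    """Hypothetical renderer: returns the scene at time t as an HxWx3 array."""
    frame = np.zeros((480, 640, 3), dtype=np.float32)
    x = int(640 * (t % 1.0))  # a fast-moving object: a vertical bar
    frame[:, max(0, x - 4):x + 4] = 1.0
    return frame

def motion_blurred_frame(t, frame_duration, samples=8):
    """Superimpose sub-frame samples by averaging, as described above."""
    times = np.linspace(t, t + frame_duration, samples, endpoint=False)
    return np.mean([render_at(s) for s in times], axis=0)

blurred = motion_blurred_frame(t=0.0, frame_duration=1 / 30, samples=8)
```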
A high frame rate still does not guarantee fluid motion, especially on hardware with more than one GPU; this effect is known as micro stuttering.