Computer Processors
Microprocessor
A microprocessor incorporates the functions of a central processing unit (CPU) on a single integrated circuit (IC). [1] The first microprocessors used a word size of only 4 bits, so that the transistors of their logic circuits would fit onto a single part. One or more microprocessors typically serve as the processing elements of a computer system, embedded system, or handheld device. Microprocessors made possible the advent of the microcomputer in the mid-1970s. Before this period, CPUs were typically made from small-scale integrated circuits, each containing the equivalent of only a few transistors. By integrating the processor onto one or a very few large-scale integrated circuit packages (containing the equivalent of thousands or millions of discrete transistors), the cost of processing capacity was greatly reduced. Since its advent in the mid-1970s, the microprocessor has become the most prevalent implementation of the CPU, almost completely replacing all other forms. See History of computing hardware for pre-electronic and early electronic computers.
Since the early 1970s, the increase in processing capacity of evolving microprocessors has generally followed Moore's Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every 18 months. In the early 1990s, microprocessors' heat generation (thermal design power, TDP), due in part to current leakage, emerged as a leading developmental constraint[2]. From their humble beginnings as the drivers for calculators, the continued increase in processing capacity has led to the dominance of microprocessors over every other form of computer; every system from the largest mainframes to the smallest handheld computers now uses a microprocessor at its core.
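As an illustration of the doubling arithmetic, the short Python sketch below extrapolates transistor counts from a 1971 baseline of about 2,300 transistors (the figure commonly quoted for the Intel 4004). The fixed 18-month doubling period is a simplifying assumption, so the later figures overshoot real products.

def transistor_estimate(year, base_year=1971, base_count=2300, doubling_months=18):
    """Estimate transistor count assuming a fixed doubling period (illustrative only)."""
    months_elapsed = (year - base_year) * 12
    return base_count * 2 ** (months_elapsed / doubling_months)

for year in (1971, 1980, 1990, 2000):
    print(year, f"{transistor_estimate(year):,.0f}")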
History
First types
The 4004 with cover removed (left) and as actually used (right).
Three projects arguably delivered a complete microprocessor at about the same time, namely Intel's 4004, the Texas Instruments (TI) TMS 1000, and Garrett AiResearch's Central Air Data Computer (CADC).
In 1968, Garrett AiResearch, with designers Ray Holt and Steve Geller, was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". However, the system was considered so advanced that the Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, remain fairly unknown even today (see First Microprocessor Chip Set). TI developed the 4-bit TMS 1000 and stressed pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971, which implemented a calculator on a chip. The Intel chip was the 4-bit 4004, released on November 15, 1971, developed by Federico Faggin and Marcian Hoff.
TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. It may never be known which company actually had the first working microprocessor running on the lab bench. In both 1971 and 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A detailed history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.
A third party, Gilbert Hyatt, was awarded a patent which might cover the "microprocessor"; his claim describes a "microcontroller" invention pre-dating both TI and Intel. According to a rebuttal and a commentary, the patent was later invalidated, but not before substantial royalties were paid out.
A computer-on-a-chip is a variation of a microprocessor which combines the microprocessor core (CPU), some memory, and I/O (input/output) lines, all on one chip. The computer-on-a-chip patent, called the "microcomputer patent" at the time, U.S. Patent 4,074,351 , was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is perhaps more akin to a microcontroller.
According to A History of Modern Computing (MIT Press), pp. 220–21, Intel entered into a contract with Computer Terminals Corporation, later called Datapoint, of San Antonio, Texas, for a chip for a terminal they were designing. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April 1972. This was the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8" computer kit advertised in the magazine Radio-Electronics in 1974. The 8008 and its successor, the world-famous 8080, opened up the microprocessor component marketplace.
Notable 8-bit designs
The 4004 was later followed in 1972 by the 8008, the world's first 8-bit microprocessor. These processors are the precursors to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released August 1974. Its architecture was cloned and improved in the MOS Technology 6502 in 1975, rivaling the Z80 in popularity during the 1980s.
Both the Z80 and 6502 concentrated on low overall cost, through a combination of small packaging, simple computer bus requirements, and the inclusion of circuitry that would normally have to be provided in a separate chip (for instance, the Z80 included built-in DRAM refresh circuitry). It was these features that allowed the home computer "revolution" to take off in the early 1980s, eventually delivering such inexpensive machines as the Sinclair ZX81, which sold for US$99.
The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several companies. It became the core of the Apple IIc and IIe personal computers, medical-grade implantable pacemakers and defibrillators, and automotive, industrial, and consumer devices. WDC pioneered the licensing of microprocessor technology, which was later followed by ARM and other microprocessor intellectual property (IP) providers in the 1990s.
Motorola trumped the entire 8-bit world by introducing the MC6809 in 1978, arguably one of the most powerful, orthogonal, and clean 8-bit microprocessor designs ever fielded, and also one of the most complex hard-wired logic designs that ever made it into production for any microprocessor. Microcoding replaced hardwired logic at about this time for all designs more powerful than the MC6809, specifically because the design requirements were getting too complex for hardwired logic.
Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief flurry of interest due to its innovative and powerful instruction set architecture.
A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC), introduced in 1976, which was used in NASA's Voyager and Viking space probes of the 1970s and on board the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was the first microprocessor to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because its production process (silicon on sapphire) ensured much better protection against cosmic radiation and electrostatic discharges than that of any other processor of the era. Thus, the 1802 is said to be the first radiation-hardened microprocessor.
16-bit designs
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. During the same year, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.
Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP-11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both of which were introduced in the 1975 to 1976 timeframe.
The first single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.
The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.
Intel followed a different path, having no minicomputers to emulate, and instead "upsized" their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC-type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up on the 8086 and 8088, Intel released the 80186, the 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family's backwards compatibility.
The integrated microprocessor memory management unit (MMU) was developed by Childs et al. of Intel, and awarded US patent number 4,442,484.
32-bit designs
Upper interconnect layers on an Intel 80486DX2 die.
16-bit designs had only been on the market briefly when full 32-bit implementations started to appear.
The most significant of the 32-bit designs was the MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external data bus to reduce pin count, and supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though it clearly had a 32-bit architecture. The combination of high speed, a large (16 megabyte, or 2^24 byte) memory space, and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.
The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982. After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop supermicrocomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized supermicrocomputer, featuring ROM-pack memory cartridges similar to today's game consoles. All these systems ran the UNIX System V operating system.
Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to other competing architectures such as the Motorola 68000.
Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. The MC68030 followed, integrating the MMU onto the chip, and the 68K family became the processor for everything that wasn't running DOS. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s.
Other large companies designed the 68020 and its follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs. The ColdFire processor cores are derivatives of the venerable 68020.
During this time (early to mid-1980s), National Semiconductor introduced a very similar microprocessor, the 16-bit-pinout, 32-bit-internal NS 16032 (later renamed 32016), followed by the full 32-bit version named the NS 32032, and a line of 32-bit industrial OEM microcomputers. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s.
The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others.
Other designs included the Zilog Z8000, which arrived too late to market to stand a chance and disappeared quickly.
In the late 1980s, "microprocessor wars" started killing off some of the microprocessors. With only one major design win, Sequent, the NS 32032 simply faded out of existence, and Sequent switched to Intel microprocessors.
From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least a factor of 1000. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.
64-bit designs in personal computers
While 64-bit microprocessor designs have been in use in several markets since the early 1990s, the early 2000s saw the introduction of 64-bit microchips targeted at the PC market.
With AMD's introduction of the first 64-bit IA-32 backwards-compatible architecture, AMD64, in September 2003, followed by Intel's own x86-64 chips, the 64-bit desktop era began. Both processors can run 32-bit legacy applications as well as new 64-bit software. With 64-bit versions of Windows XP and Windows Vista, Linux, and (to a certain extent) Mac OS X running natively in 64-bit mode, the software too is geared to utilize the full power of such processors. The move to 64 bits is more than just an increase in register size over IA-32, as it also doubles the number of general-purpose registers for the aging CISC design.
The move to 64 bits by PowerPC processors had been intended since the processors' design in the early 1990s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.
Multicore designs
AMD Athlon 64 X2 3600 Dual core processor
Main article: Multi-core (computing)
A different approach to improving a computer's performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore's Law is becoming increasingly challenging as chip-making processes approach their physical limits.
In response, microprocessor manufacturers look for other ways to improve performance in order to hold on to the momentum of constant upgrades in the market.
A multi-core processor is simply a single chip containing more than one microprocessor core, effectively multiplying the potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor). Some components, such as the bus interface and second-level cache, may be shared between cores. Because the cores are physically very close, they can communicate at much higher speeds than discrete multiprocessor systems, improving overall system performance.
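The dependence on software is the key caveat. The sketch below uses Amdahl's law (a standard model, not something named in this article) to show how the ideal speedup from extra cores collapses when only part of a program can run in parallel.

def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup when parallel_fraction of the work can be spread across the cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (1, 2, 4):
    # A 95%-parallel workload scales well; a 50%-parallel one barely benefits.
    print(cores, round(amdahl_speedup(0.95, cores), 2), round(amdahl_speedup(0.50, cores), 2))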
In 2005, the first mass-market dual-core processors were announced and as of 2007 dual-core processors are widely used in servers, workstations and PCs while quad-core processors are now available for high-end applications in both the home and professional environments.
Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.
RISC
In the mid-1980s to early-1990s, a crop of new high-performance RISC (reduced instruction set computer) microprocessors appeared, which were initially used in special purpose machines and Unix workstations, but then gained wide acceptance in other roles.
The first commercial design was released by MIPS Technologies, the 32-bit R2000 (the R1000 was not released). The R3000 made the design truly practical, and the R4000 introduced the world's first 64-bit design. Competing projects resulted in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, DEC Alpha, and the HP-PA.
Market forces have "weeded out" many of these designs: almost no desktop or laptop RISC processors remain, and SPARC is used in Sun designs only. MIPS is primarily used in embedded systems, notably in Cisco routers. The rest of the original crop of designs have disappeared. Other companies have attacked niches in the market, notably ARM, originally intended for home computer use but since focused on the embedded processor market. Today RISC designs based on the MIPS, ARM, or PowerPC core power the vast majority of computing devices.
As of 2007, two 64-bit RISC architectures are still produced in volume: SPARC and Power Architecture. The RISC-like Itanium is produced in smaller quantities. The vast majority of 64-bit microprocessors are now x86-64 CISC designs from AMD and Intel.
Special-purpose designs
A 4-bit, 2 register, six assembly language instruction computer made entirely of 74-series chips.
Though the term "microprocessor" has traditionally referred to a single- or multi-chip CPU or system-on-a-chip (SoC), several types of specialized processing devices have followed from the technology. The most common examples are microcontrollers, digital signal processors (DSP), and graphics processing units (GPU). Many examples of these are either not programmable or have limited programming facilities. For example, GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities such as programmable vertex shaders. There is no universal consensus on what defines a "microprocessor", but it is usually safe to assume that the term refers to a general-purpose CPU of some sort and not a special-purpose processor, unless specifically noted.
The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Voyager/Viking/Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers and/or sensors would awaken/speed up the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.
Market statistics
In 2003, about $44 billion (USD) worth of microprocessors were manufactured and sold. [1] Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 0.2% of all CPUs sold.
Silicon Valley has an old saying: "The first chip costs a million dollars; the second one costs a nickel." In other words, most of the cost is in the design and the manufacturing setup: once manufacturing is underway, it costs almost nothing.
About 55% of all CPUs sold in the world are 8-bit microcontrollers. Over 2 billion 8-bit microcontrollers were sold in 1997. [2]
Less than 10% of all the CPUs sold in the world are 32-bit or more. Of all the 32-bit CPUs sold, about 2% are used in desktop or laptop personal computers; the rest are sold in household appliances such as toasters, microwaves, vacuum cleaners, and televisions. "Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over $6." [3]
Architectures
65xx
MOS Technology 6502
Western Design Center 65xx
ARM family
Altera Nios, Nios II
Atmel AVR architecture (purely microcontrollers)
EISC
RCA 1802 (aka RCA COSMAC, CDP1802)
DEC Alpha
Intel
Intel 4004, 4040
Intel 8080, 8085, Zilog Z80
Intel Itanium
Intel i860
Intel i960
LatticeMico32
M32R architecture
MIPS architecture
Motorola
Motorola 6800
Motorola 6809
Motorola 68000 family, ColdFire
Motorola 88000 (parent of PowerPC family, with POWER)
IBM POWER, parent of PowerPC family, with 88000
PowerPC family, G3, G4, G5
NSC 320xx
OpenCores OpenRISC architecture
PA-RISC family
National Semiconductor SC/MP ("scamp")
Signetics 2650
SPARC
SuperH family
Transmeta Crusoe, Efficeon (VLIW architectures, IA-32 32-bit Intel x86 emulator)
INMOS Transputer
x86 architecture
Intel 8086, 8088, 80186, 80188 (16-bit real mode-only x86 architecture)
Intel 80286 (16-bit real mode and protected mode x86 architecture)
IA-32 32-bit x86 architecture
x86-64 64-bit x86 architecture
XAP processor from Cambridge Consultants
Xilinx
MicroBlaze soft processor
PowerPC405 embedded hard processor in Virtex FPGAs
3D Display
3D display
A 3D display prototype by Philips
A 3D display is any display device capable of conveying three-dimensional images to the viewer.
There are many types of 3D displays: stereoscopic 3D displays show a different image to each eye; autostereoscopic 3D displays do this without the need for any special glasses or other head gear; holographic 3D displays reproduce a light field which is identical to that which emanated from the original scene. In addition there are volumetric displays, where some physical mechanism is used to display points of light within a volume. Such displays use voxels instead of pixels. Volumetric displays include multiplanar displays, which have multiple display planes stacked up; and rotating panel displays, where a rotating panel sweeps out a volume.
A wide range of organisations have developed 3D displays, ranging from experimental displays in university departments to commercially available displays. Companies involved include Holografika, NewSight, Pavonine, Philips, Spatial View, 3DIcon Corporation, Sharp, SeeReal Technologies and Alioscopy.
Computer Monitor / Display
Computer display
A computer display monitor, usually called simply a monitor, is a piece of electrical equipment which displays viewable images generated by a computer without producing a permanent record. The word "monitor" is used in other contexts; in particular in television broadcasting, where a television picture is displayed to a high standard. A computer display device is usually either a cathode ray tube or some form of flat panel such as a TFT LCD. The monitor comprises the display device, circuitry to generate a picture from electronic signals sent by the computer, and an enclosure or case. Within the computer, either as an integral part or a plugged-in interface, there is circuitry to convert internal data to a format compatible with a monitor.
Screen Size
Diagonal size
The inch size quoted is the diagonal size of the picture tube or LCD panel. With CRTs the viewable picture is normally smaller by 1.5 to 2 inches, hence a 17" LCD gives about the same size picture as a 19" CRT.
This method of size measurement dates from the early days of CRT television, when round picture tubes were in common use and only one dimension described display size. When rectangular tubes came into use, the diagonal measurement of these was equivalent to the round tube's diameter, hence this was used (and, of course, it was the largest of the available numbers).
A better way to compare CRT and LCD displays is by viewable image size.
Widescreen area
A widescreen display always has less screen area for a given quoted inch size than a standard 4:3 display, due to basic geometry.
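The geometry is easy to check. The Python sketch below scales an aspect-ratio rectangle so that its diagonal matches the quoted size and compares the resulting areas; the 19-inch figure is just an example.

import math

def screen_area(diagonal_inches, aspect_w, aspect_h):
    """Area in square inches of a rectangle with the given diagonal and aspect ratio."""
    scale = diagonal_inches / math.hypot(aspect_w, aspect_h)
    return (aspect_w * scale) * (aspect_h * scale)

print(round(screen_area(19, 4, 3), 1))   # 4:3  -> about 173 square inches
print(round(screen_area(19, 16, 9), 1))  # 16:9 -> about 154 square inches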
Imaging technologies
19" inch (48.3 cm tube, 45.9 cm viewable) CRT computer monitor
As with television, several different hardware technologies exist for displaying computer-generated output:
Liquid crystal display (LCD). TFT LCDs are the most popular display device for new computers in the Western world.
Passive LCDs give poor contrast, slow response, and other image defects. These were used in some laptops until the mid-1990s.
TFT (thin-film transistor) LCDs give much better picture quality in several respects. All modern LCD monitors are TFT.
Cathode ray tube (CRT)
Standard raster scan computer monitors
Vector displays, as used on the Vectrex, in many scientific and radar applications, and in several early arcade machines (notably Asteroids). These were always implemented using CRT displays due to the requirement for a deflection system, though they can be emulated on any raster-based display.
Television receivers were used by most early personal and home computers, connecting composite video to the television set using a modulator. Image quality was reduced by the additional steps of composite video → modulator → TV tuner → composite video.
Plasma display
Surface-conduction electron-emitter display (SED)
Video projector - implemented using LCD, CRT, or other technologies. Recent consumer-level video projectors are almost exclusively LCD based.
Organic light-emitting diode (OLED) display
Penetron military aircraft displays
Cathode ray tube
CRT computer display pixel array (right)
The CRT, or cathode ray tube, is the picture tube of a monitor. The back of the tube has a negatively charged cathode. The electron gun shoots electrons down the tube and onto a charged screen. The screen is coated with a pattern of dots that glow when struck by the electron stream. Each cluster of three dots, one of each color, is one pixel.
The image on the monitor screen is usually made up from at least tens of thousands of such tiny dots glowing on command from the computer. The closer together the pixels are, the sharper the image on screen can be. The distance between pixels on a computer monitor screen is called its dot pitch and is measured in millimeters. Most monitors have a dot pitch of 0.28 mm or less.
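As a quick check on what a dot pitch figure implies, the snippet below converts a 0.28 mm pitch into phosphor triads per inch; it ignores the slot or aperture-grille geometry of real tubes.

dot_pitch_mm = 0.28
triads_per_inch = 25.4 / dot_pitch_mm   # 25.4 mm per inch
print(round(triads_per_inch, 1))        # roughly 90 triads per inch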
There are two electromagnets around the collar of the tube which deflect the electron beam. The beam scans across the top of the monitor from left to right, is then blanked and moved back to the left-hand side slightly below the previous trace (on the next scan line), scans across the second line and so on until the bottom right of the screen is reached. The beam is again blanked, and moved back to the top left to start again. This process draws a complete picture, typically 50 to 100 times a second. The number of times in one second that the electron gun redraws the entire image is called the refresh rate and is measured in hertz (cycles per second). It is common, particularly in lower-priced equipment, for all the odd-numbered lines of an image to be traced, and then all the even-numbered lines; the circuitry of such an interlaced display need be capable of only half the speed of a non-interlaced display. An interlaced display, particularly at a relatively low refresh rate, can appear to some observers to flicker, and may cause eyestrain and nausea.
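The relationship between refresh rate, line count, and interlacing can be put into numbers. The figures below (768 visible lines at 75 Hz) are illustrative and ignore the blanking intervals added in real video timings.

visible_lines = 768           # e.g. a 1024x768 display mode
refresh_hz = 75               # complete pictures drawn per second (progressive scan)

progressive_line_rate = visible_lines * refresh_hz   # lines traced per second
interlaced_line_rate = progressive_line_rate / 2     # each pass traces only odd or even lines

print(f"progressive: ~{progressive_line_rate / 1000:.1f} kHz horizontal rate")
print(f"interlaced:  ~{interlaced_line_rate / 1000:.1f} kHz horizontal rate")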
Performance measurements
The performance parameters of a monitor are:
Luminance, measured in candelas per square metre (cd/m²).
Viewable image size, measured diagonally. For CRTs the viewable size is one inch (25 mm) smaller than the tube itself.
Dot pitch. Describes the distance between pixels of the same color in millimetres. In general, the lower the dot pitch (e.g. 0.24 mm, which is also 240 micrometres), the sharper the picture will appear.
Response time. The amount of time a pixel in an LCD monitor takes to go from active (black) to inactive (white) and back to active (black) again. It is measured in milliseconds (ms). Lower numbers mean faster transitions and therefore fewer visible image artifacts.
Contrast ratio. The ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing (a short worked example follows this list).
Refresh rate. The number of times in a second that a display is illuminated.
Power consumption, measured in watts (W).
Aspect ratio, which is the horizontal size compared to the vertical size, e.g. 4:3 is the standard aspect ratio, so that a screen with a width of 1024 pixels will have a height of 768 pixels. A widescreen display can have an aspect ratio of 16:9, which means a display that is 1024 pixels wide will have a height of 576 pixels.
Display resolution. The number of distinct pixels in each dimension that can be displayed.
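As a worked example of the contrast ratio definition above, the snippet below divides an invented white luminance by an invented black luminance; the values are for illustration only.

white_luminance = 250.0   # cd/m^2, brightest white the monitor can produce
black_luminance = 0.5     # cd/m^2, darkest black the monitor can produce

contrast_ratio = white_luminance / black_luminance
print(f"contrast ratio: {contrast_ratio:.0f}:1")   # 500:1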
Comparison
CRT
High contrast ratio
High speed response
Full range light output level control
Large size
Large weight
Most produce geometric distortion
Greater power consumption than LCD.
Prone to moiré effect at highest resolution
Can display natively in almost any resolution
Intolerant of damp conditions
Small risk of explosion if the picture tube glass is broken
Passive LCD
Very poor contrast ratio (e.g. 20:1)
High visible noise if used in more than 8-colour mode (3-bit colour depth).
Very slow response (moving images barely viewable)
Some suffer horizontal & vertical ghosting
Very small size
Very low weight
Very low power consumption
Lower cost than TFT LCDs.
Zero geometric distortion
TFT LCD
Virtually all modern LCD monitors are of the TFT type.
Medium contrast ratio
Response rates vary from one model to another, slower screens will show smearing on moving images
Very small size
Very low weight
Very low power consumption
Higher cost than Passive LCD or CRT.
Zero geometric distortion
LCDs of both types only have one native resolution. Displaying other resolutions requires conversion & interpolation, which often degrades image quality.
Plasma
High operating temperature can be painful to touch
Prone to burn-in
No geometric distortion
Highest cost option
High power consumption
Penetron
Main article: Penetron
Only found in military aircraft
2 colour display
See through
Orders of magnitude more expensive than the other display technologies listed here
Problems
Dead pixels
A fraction of all LCD monitors are produced with "dead pixels". Due to the desire for affordable monitors, most manufacturers sell monitors with dead pixels. Almost all manufacturers have clauses in their warranties stating that a monitor with fewer than some number of dead pixels is not considered defective and will not be replaced. A dead pixel usually has one or more of its red, green, or blue subpixels permanently stuck on or off.
Like image persistence, this can sometimes be partially or fully reversed by the method described under "Stuck pixels" below; however, the chance of success is far lower than with a "stuck" pixel. It can also sometimes be repaired by physically flicking the pixel, but too much force can rupture the fragile screen internals.
Stuck pixels
LCD monitors, while lacking phosphor screens and thus immune to phosphor burn-in, have a similar condition known as image persistence, where the pixels of the LCD monitor can "remember" a particular color and become "stuck" and unable to change. Unlike phosphor burn-in, however, image persistence can sometimes be reversed partially or completely. This is accomplished by rapidly displaying varying colors to "wake up" the stuck pixels.
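A minimal sketch of the "rapidly display varying colors" idea, using Python's standard tkinter module. The colour list, the 50 ms interval, and the full-screen window are arbitrary choices, and, as noted above, success is not guaranteed.

import tkinter as tk

COLOURS = ["red", "green", "blue", "white", "black"]

def cycle(root, index=0):
    # Flash the next colour across the window, then reschedule.
    root.configure(bg=COLOURS[index % len(COLOURS)])
    root.after(50, cycle, root, index + 1)

root = tk.Tk()
root.attributes("-fullscreen", True)   # cover the area containing the stuck pixel
cycle(root)
root.mainloop()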
Phosphor burn-in
Phosphor burn-in is localised aging of the phosphor layer of a CRT screen where it has displayed a static bright image for many years. This results in a faint permanent image on the screen, even when powered off. In severe cases it can even be possible to read some of the text, though this only occurs where the displayed text remained the same for years.
This was once a relatively common phenomenon in single-purpose business computers. It can still be an issue with CRT displays used to show the same image for years at a time, but modern computers aren't normally used this way any more, so burn-in is not a significant problem on CRTs today.
The extent of the problem has been exaggerated in popular opinion. The only systems that suffered the defect were ones displaying the same image for years, and with these the burn-in was not noticeable in use, since it coincided with the displayed image perfectly. Such systems were also inevitably functional rather than decorative, so even slight visible damage that appeared when reusing a heavily used business monitor for another business application was a trivial cosmetic issue. It only became a significant issue in three situations:
when some heavily used monitors were reused at home,
or re-used for display purposes
in some high security applications (but only those where the high security data displayed did not change for years at a time).
Screen savers were developed as a means to avoid burn-in, but are redundant for CRTs today, despite their popularity. The problem does not occur with multitasking systems, and powering down the display after a period of non-use is as effective and has additional benefits, such as increasing monitor life and reducing power use.
Phosphor burn-in can be gradually removed from damaged CRT displays by displaying an all-white screen with the brightness and contrast turned up to full. This is a slow procedure, and is usually but not always effective.
Plasma burn-in
Burn-in has re-emerged as an issue with plasma displays, which are much more vulnerable to this than CRTs. Screen savers with moving images may be used with these to minimise localised burn. Periodic change of the colour scheme in use also helps reduce the issue.
Black level misadjustment
User misadjustment of the black level is common. This distorts the colour of most darker shades.
A test card image may be used to set the black level correctly. On CRT monitors:
Black level is set with the 'brightness' control
White level is set with the 'contrast' control
Black level sometimes needs readjustment after setting the white level
The naming of CRT controls is historic, and in some cases counterintuitive.
Glare
Glare is a problem caused by the relationship between lighting and screen, or by using monitors in bright sunlight. LCDs and flat-screen CRTs are less prone to this than conventional curved CRTs, and Trinitron CRTs, which are curved on one axis only, are less prone to it than other CRTs curved on both axes.
If the problem persists despite moving the monitor or adjusting lighting, a filter using a mesh of very fine black wires may be placed on the screen to reduce glare and improve contrast. These filters were popular in the late 1980s. They do also reduce light output, which can occasionally be an issue.
Color misregistration
With the exception of DLP, most display technologies, especially LCD, have an inherent misregistration of the color channels; that is, the centres of the red, green, and blue dots do not line up perfectly. Subpixel rendering depends on this misalignment; technologies making use of it include the Apple II from 1976 [1], and more recently Microsoft (ClearType, 1998) and XFree86 (X Rendering Extension).
Incomplete spectrum
RGB displays produce most of the visible colour spectrum, but not all. This can be a problem where good colour matching to non-RGB images is needed. This issue is common to all monitor technologies with 3 colour channels.
Display interfaces
Computer Terminals
Early CRT-based VDUs (Visual Display Units) such as the DEC VT05 without graphics capabilities gained the label glass teletypes, because of the functional similarity to their electromechanical predecessors.
Some historic computers had no modern display, using a printer instead.
Composite signal
Early home computers such as the Apple II and the Commodore 64 used a composite signal output to drive a CRT monitor or TV. This resulted in degraded resolution due to compromises in the broadcast TV standards used. This method is still used with video game consoles.
Digital monitors
Early digital monitors are sometimes known as TTLs because the voltages on the red, green, and blue inputs are compatible with TTL logic chips. Later digital monitors support LVDS, or TMDS protocols.
TTL monitors
IBM PC with green monochrome display
An amber monochrome computer monitor, manufactured in 2007, which uses a 15-pin SVGA connector just like a standard color monitor.
Monitors used with the MDA, Hercules, CGA, and EGA graphics adapters used in early IBM PCs (personal computers) and clones were controlled via TTL logic. Such monitors can usually be identified by a male DB-9 connector on the video cable. The disadvantage of TTL monitors was the limited number of colors available, due to the low number of digital bits used for video signaling.
Modern monochrome monitors, such as the one pictured to the right which was manufactured in 2007, use the same 15-pin SVGA connector that standard color monitors use. They're capable of displaying 32-bit grayscale at 1024x768 resolution, making them able to interface and be used with modern computers.
TTL monochrome monitors only made use of five of the nine pins. One pin was used as a ground, and two pins were used for horizontal/vertical synchronization. The electron gun was controlled by two separate digital signals, a video bit and an intensity bit, to control the brightness of the drawn pixels. Only four unique shades were possible: black, dim, medium, or bright.
CGA monitors used four digital signals to control the three electron guns used in color CRTs, in a signalling method known as RGBI, or Red Green and Blue, plus Intensity. Each of the three RGB colors can be switched on or off independently. The intensity bit increases the brightness of all guns that are switched on, or if no colors are switched on the intensity bit will switch on all guns at a very low brightness to produce a dark grey. A CGA monitor is only capable of rendering 16 unique colors. The CGA monitor was not exclusively used by PC based hardware. The Commodore 128 could also utilize CGA monitors. Many CGA monitors were capable of displaying composite video via a separate jack.
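A small sketch of the RGBI scheme as described above, mapping the four bits to an approximate RGB triple. The numeric levels are invented for illustration, the dark-grey case follows the paragraph's description, and real monitors differed in detail (many rendered the dark-yellow combination as brown, for instance).

def rgbi_to_rgb(r, g, b, i):
    """Map RGBI bits (each 0 or 1) to an approximate 8-bit RGB triple."""
    if not (r or g or b):
        # Intensity alone lights all three guns dimly, giving dark grey.
        level = 85 if i else 0
        return (level, level, level)
    on, bright = 170, 255
    # The intensity bit brightens only the guns that are switched on.
    return tuple((bright if i else on) if c else 0 for c in (r, g, b))

print(rgbi_to_rgb(1, 0, 0, 0))   # red        -> (170, 0, 0)
print(rgbi_to_rgb(1, 0, 0, 1))   # bright red -> (255, 0, 0)
print(rgbi_to_rgb(0, 0, 0, 1))   # dark grey  -> (85, 85, 85)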
EGA monitors used six digital signals to control the three electron guns in a signalling method known as RrGgBb. Unlike CGA, each gun is allocated its own intensity bit. This allowed each of the three primary colors to have four different states (off, soft, medium, and bright) resulting in 64 possible colors.
Although not supported in the original IBM specification, many vendors of clone graphics adapters implemented backwards monitor compatibility and auto-detection. For example, EGA cards produced by Paradise could operate as an MDA or CGA adapter if a monochrome or CGA monitor was used in place of an EGA monitor. Many CGA cards were also capable of operating as an MDA or Hercules card if a monochrome monitor was used.
Single colour screens
Display colours other than white were very popular on monochrome monitors in the 1980s. These colours were more comfortable on the eye, which mattered particularly because the lower refresh rates in use at the time caused flicker, and because the colour schemes then in use were less comfortable than those of most of today's software.
Green screens were the most popular colour, with orange displays also available. 'Paper white' was also in use, which was a warm white.
Modern technology
Analog RGB monitors
Most modern computer displays can show thousands or millions of different colors in the RGB color space by varying red, green, and blue signals in continuously variable intensities.
Digital and analog combination
Many monitors accept analog signals, but some more recent models (mostly LCD screens) support digital input signals. It is a common misconception that all computer monitors are digital. For several years, televisions, composite monitors, and computer displays were significantly different. However, as TVs have become more versatile, the distinction has blurred.
Configuration and usage
Multi-head
Main article: Multi-monitor
Some users use more than one monitor. The displays can operate in multiple modes. One of the most common modes spreads the entire desktop over all of the monitors, which then act as one large desktop. The X Window System refers to this as Xinerama.
Two Apple flat-screen monitors used as dual display
Terminology:
Dualhead - Using two monitors
Triplehead - using three monitors
Display assembly - multi-head configurations actively managed as a single unit
Virtual displays
The X Window System provides configuration mechanisms for using a single hardware monitor for rendering multiple virtual displays, as controlled (for example) with the DISPLAY environment variable or with the -display command-line option.
Additional features
Power saving
Virtually all modern monitors have a power-saving mode they switch to if they receive no video input signal. Modern operating systems can thus power down a monitor after a specified period of inactivity. Typical lifetime cost savings outweigh the cost of implementation, and powering down also extends the service life of the monitor.
Some monitors will also switch themselves completely off after a time period on standby.
Some laptops have a dimmed screen mode they can use to extend battery life.
Directional screen
Narrow viewing angle screens are used in some security-conscious applications.
Touch screen
These monitors use touching of the screen as an input method. Items can be selected or moved with a finger, and finger gestures may be used to convey commands. This does however mean the screen needs frequent cleaning due to image degradation from fingerprints.
Major manufacturers
Acer
Apple Inc.
BenQ
Dell, Inc.
Hewlett-Packard
Eizo
HannStar Display Corporation
Iiyama Corporation
LaCie
LG Electronics
NEC Display Solutions
Philips
Samsung
Sharp
Sony
ViewSonic
Westinghouse