USB / Universal Serial Bus
Microprocessor
A microprocessor incorporates the functions of a central processing unit (CPU) on a single integrated circuit (IC). [1] The first microprocessors used a word size of only 4 bits, so that the transistors of their logic circuits would fit onto a single part. One or more microprocessors typically serve as the processing elements of a computer system, embedded system, or handheld device. Microprocessors made possible the advent of the microcomputer in the mid-1970s. Before this period, CPUs were typically made from small-scale integrated circuits, each containing the equivalent of only a few transistors. By integrating the processor onto one or a very few large-scale integrated circuit packages (containing the equivalent of thousands or millions of discrete transistors), the cost of processing capacity was greatly reduced. Since its advent in the mid-1970s, the microprocessor has become the most prevalent implementation of the CPU, almost completely replacing all other forms. See History of computing hardware for pre-electronic and early electronic computers.
Since the early 1970s, the increase in processing capacity of evolving microprocessors has generally followed Moore's Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every 18 months. In the early 1990s, microprocessors' heat generation (TDP), due to current leakage, emerged as a leading design constraint[2]. From their humble beginnings as the drivers for calculators, the continued increase in processing capacity has led to the dominance of microprocessors over every other form of computer; every system from the largest mainframes to the smallest handheld computers now uses a microprocessor at its core.
History
First types
The 4004 with cover removed (left) and as actually used (right).
Three projects arguably delivered a complete microprocessor at about the same time, namely Intel's 4004, the Texas Instruments (TI) TMS 1000, and Garrett AiResearch's Central Air Data Computer (CADC).
In 1968, Garrett AiResearch, with designers Ray Holt and Steve Geller, was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". However, the system was considered so advanced that the Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly unknown even today. (See First Microprocessor Chip Set.) TI developed the 4-bit TMS 1000, and stressed pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971, which implemented a calculator on a chip. The Intel chip was the 4-bit 4004, released on November 15, 1971, developed by Federico Faggin and Marcian Hoff.
TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. It may never be known which company actually had the first working microprocessor running on the lab bench. In both 1971 and 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A nice history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.
Interestingly, a third party (Gilbert Hyatt) was awarded a patent which might cover the "microprocessor". See a webpage claiming an invention pre-dating both TI and Intel, describing a "microcontroller". According to a rebuttal and a commentary, the patent was later invalidated, but not before substantial royalties were paid out.
A computer-on-a-chip is a variation of a microprocessor which combines the microprocessor core (CPU), some memory, and I/O (input/output) lines, all on one chip. The computer-on-a-chip patent, called the "microcomputer patent" at the time, U.S. Patent 4,074,351, was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is perhaps more akin to a microcontroller.
According to A History of Modern Computing, (MIT Press), pp. 220–21, Intel entered into a contract with Computer Terminals Corporation, later called Datapoint, of San Antonio, Texas, for a chip for a terminal they were designing. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April 1972. This was the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8" computer kit advertised in the magazine Radio-Electronics in 1974. The 8008 and its successor, the world-famous 8080, opened up the microprocessor component marketplace.
Notable 8-bit designs
The 4004 was later followed in 1972 by the 8008, the world's first 8-bit microprocessor. These processors are the precursors to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released August 1974. Its architecture was cloned and improved in the MOS Technology 6502 in 1975, rivaling the Z80 in popularity during the 1980s.
Both the Z80 and 6502 concentrated on low overall cost, through a combination of small packaging, simple computer bus requirements, and the inclusion of circuitry that would normally have to be provided in a separate chip (for instance, the Z80 included a memory controller). It was these features that allowed the home computer "revolution" to take off in the early 1980s, eventually delivering such inexpensive machines as the Sinclair ZX-81, which sold for US$99.
The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several companies; it became the core of the Apple IIc and IIe personal computers, medical implantable-grade pacemakers and defibrillators, and automotive, industrial, and consumer devices. WDC pioneered the licensing of microprocessor technology, which was later followed by ARM and other microprocessor intellectual property (IP) providers in the 1990s.
Motorola trumped the entire 8-bit world by introducing the MC6809 in 1978, arguably one of the most powerful, orthogonal, and clean 8-bit microprocessor designs ever fielded – and also one of the most complex hard-wired logic designs that ever made it into production for any microprocessor. Microcoding replaced hardwired logic at about this point in time for all designs more powerful than the MC6809 – specifically because the design requirements were getting too complex for hardwired logic.
Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief flurry of interest due to its innovative and powerful instruction set architecture.
A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC) (introduced in 1976), which was used in NASA's Voyager and Viking space probes of the 1970s, and onboard the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because its production process (silicon on sapphire) ensured much better protection against cosmic radiation and electrostatic discharges than that of any other processor of the era. Thus, the 1802 is said to be the first radiation-hardened microprocessor.
16-bit designs
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. During the same year, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.
Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both of which were introduced in the 1975 to 1976 timeframe.
The first single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.
The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.
Intel followed a different path, having no minicomputers to emulate, and instead "upsized" their 8080 design into the 16-bit Intel 8086, the first member of the x86 family which powers most modern PC-type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up their 8086 and 8088, Intel released the 80186, 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family's backwards compatibility.
The integrated microprocessor memory management unit (MMU) was developed by Childs et al. of Intel, and awarded US patent number 4,442,484.
32-bit designs
Upper interconnect layers on an Intel 80486DX2 die.
16-bit designs had been on the market only briefly when full 32-bit implementations started to appear.
The most significant of the 32-bit designs is the MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external data bus to reduce pin count, and it supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though it clearly has a 32-bit architecture. The combination of high speed, large (16 megabyte, 2^24 byte) memory space, and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.
The world's first single-chip fully-32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980, and general production in 1982 (See this bibliographic reference and this general reference). After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop supermicrocomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized supermicrocomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V operating system.
Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to other competing architectures such as the Motorola 68000.
Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. Following this with the MC68030, which added the MMU to the chip, the 68K family became the processor for everything that wasn't running DOS. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s.
Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs (See this webpage for this embedded usage information). The ColdFire processor cores are derivatives of the venerable 68020.
During this time (early to mid-1980s), National Semiconductor introduced a very similar microprocessor with a 16-bit pinout and a 32-bit internal architecture, called the NS 16032 (later renamed 32016), a full 32-bit version named the NS 32032, and a line of 32-bit industrial OEM microcomputers. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s.
The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others.
Other designs included the interesting Zilog Z8000, which arrived too late to market to stand a chance and disappeared quickly.
In the late 1980s, "microprocessor wars" started killing off some of the microprocessors. Apparently, with only one major design win, Sequent, the NS 32032 just faded out of existence, and Sequent switched to Intel microprocessors.
From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least a factor of 1000. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.
64-bit designs in personal computers
While 64-bit microprocessor designs have been in use in several markets since the early 1990s, the early 2000s saw the introduction of 64-bit microchips targeted at the PC market.
With AMD's introduction of the first 64-bit IA-32 backwards-compatible architecture, AMD64, in September 2003, followed by Intel's own x86-64 chips, the 64-bit desktop era began. Both processors can run 32-bit legacy applications as well as the new 64-bit software. With 64-bit Windows XP, Windows Vista x64, Linux, and (to a certain extent) Mac OS X running natively in 64-bit mode, the software too is geared to utilize the full power of such processors. The move to 64 bits is more than just an increase in register size over IA-32, as it also doubles the number of general-purpose registers for the aging CISC designs.
The move to 64 bits by PowerPC processors had been intended since the processors' design in the early 1990s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.
Multicore designs
AMD Athlon 64 X2 3600 Dual core processor
Main article: Multi-core (computing)
A different approach to improving a computer's performance is to add extra processors, as in symmetric multiprocessing designs which have been popular in servers and workstations since the early 1990s. Keeping up with Moore's Law is becoming increasingly challenging as chip-making technologies approach the physical limits of the technology.
In response, the microprocessor manufacturers look for other ways to improve performance, in order to hold on to the momentum of constant upgrades in the market.
A multi-core processor is simply a single chip containing more than one microprocessor core, effectively multiplying the potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor). Some components, such as the bus interface and second-level cache, may be shared between cores. Because the cores are physically very close, they can interface at much faster clock speeds than discrete multiprocessor systems, improving overall system performance.
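The proviso about software matters in practice: a program only gains from extra cores if its work can be split into parallel tasks. Below is a minimal sketch in C, assuming POSIX threads are available; the array sum and the two-way split are purely illustrative, not a benchmark.

```c
/* Sketch: split one workload across two threads so a dual-core chip can
 * work on both halves at once (assumes POSIX threads are available). */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static long long data[N];

struct chunk { int begin, end; long long sum; };

static void *partial_sum(void *arg)
{
    struct chunk *c = arg;
    c->sum = 0;
    for (int i = c->begin; i < c->end; i++)
        c->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = i;

    /* One chunk per core; the operating system schedules each thread. */
    struct chunk halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };
    pthread_t tid[2];
    for (int t = 0; t < 2; t++)
        pthread_create(&tid[t], NULL, partial_sum, &halves[t]);
    for (int t = 0; t < 2; t++)
        pthread_join(tid[t], NULL);

    printf("sum = %lld\n", halves[0].sum + halves[1].sum);
    return 0;
}
```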
In 2005, the first mass-market dual-core processors were announced and as of 2007 dual-core processors are widely used in servers, workstations and PCs while quad-core processors are now available for high-end applications in both the home and professional environments.
Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.
RISC
In the mid-1980s to early-1990s, a crop of new high-performance RISC (reduced instruction set computer) microprocessors appeared, which were initially used in special purpose machines and Unix workstations, but then gained wide acceptance in other roles.
The first commercial design was released by MIPS Technologies, the 32-bit R2000 (the R1000 was not released). The R3000 made the design truly practical, and the R4000 introduced the world's first 64-bit design. Competing projects would result in the IBM POWER and Sun SPARC systems. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, DEC Alpha and the HP-PA.
Market forces have "weeded out" many of these designs, with almost no desktop or laptop RISC processors remaining and with the SPARC being used in Sun designs only. MIPS is primarily used in embedded systems, notably in Cisco routers. The rest of the original crop of designs have disappeared. Other companies have attacked niches in the market, notably ARM, originally intended for home computer use but since refocused on the embedded processor market. Today RISC designs based on the MIPS, ARM or PowerPC core power the vast majority of computing devices.
As of 2007, two 64-bit RISC architectures are still produced in volume: SPARC and Power Architecture. The RISC-like Itanium is produced in smaller quantities. The vast majority of 64-bit microprocessors are now x86-64 CISC designs from AMD and Intel.
Special-purpose designs
A 4-bit, 2 register, six assembly language instruction computer made entirely of 74-series chips.
Though the term "microprocessor" has traditionally referred to a single- or multi-chip CPU or system-on-a-chip (SoC), several types of specialized processing devices have followed from the technology. The most common examples are microcontrollers, digital signal processors (DSP) and graphics processing units (GPU). Many examples of these are either not programmable, or have limited programming facilities. For example, in general GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities like programmable vertex shaders. There is no universal consensus on what defines a "microprocessor", but it is usually safe to assume that the term refers to a general-purpose CPU of some sort and not a special-purpose processor unless specifically noted.
The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Voyager/Viking/Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers and/or sensors would awaken/speed up the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.
Market statistics
In 2003, about $44 billion (USD) worth of microprocessors were manufactured and sold. [1] Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 0.2% of all CPUs sold.
Silicon Valley has an old saying: "The first chip costs a million dollars; the second one costs a nickel." In other words, most of the cost is in the design and the manufacturing setup: once manufacturing is underway, it costs almost nothing.[citation needed]
About 55% of all CPUs sold in the world are 8-bit microcontrollers. Over 2 billion 8-bit microcontrollers were sold in 1997. [2]
Less than 10% of all the CPUs sold in the world are 32-bit or more. Of all the 32-bit CPUs sold, about 2% are used in desktop or laptop personal computers, the rest are sold in household appliances such as toasters, microwaves, vacuum cleaners and televisions. "Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over $6." [3]
Architectures
65xx
MOS Technology 6502
Western Design Center 65xx
ARM family
Altera Nios, Nios II
Atmel AVR architecture (purely microcontrollers)
EISC
RCA 1802 (aka RCA COSMAC, CDP1802)
DEC Alpha
Intel
Intel 4004, 4040
Intel 8080, 8085, Zilog Z80
Intel Itanium
Intel i860
Intel i960
LatticeMico32
M32R architecture
MIPS architecture
Motorola
Motorola 6800
Motorola 6809
Motorola 68000 family, ColdFire
Motorola 88000 (parent of PowerPC family, with POWER)
IBM POWER, parent of PowerPC family, with 88000
PowerPC family, G3, G4, G5
NSC 320xx
OpenCores OpenRISC architecture
PA-RISC family
National Semiconductor SC/MP ("scamp")
Signetics 2650
SPARC
SuperH family
Transmeta Crusoe, Efficeon (VLIW architectures, IA-32 32-bit Intel x86 emulator)
INMOS Transputer
x86 architecture
Intel 8086, 8088, 80186, 80188 (16-bit real mode-only x86 architecture)
Intel 80286 (16-bit real mode and protected mode x86 architecture)
IA-32 32-bit x86 architecture
x86-64 64-bit x86 architecture
XAP processor from Cambridge Consultants
Xilinx
MicroBlaze soft processor
PowerPC405 embedded hard processor in Virtex FPGAs
Universal Serial Bus
USB (Universal Serial Bus)
Original USB Logo
Year created: January 1996
Width: 1 bit
Number of devices: 127 per host controller
Capacity: 12 or 480 Mbit/s
Style: Serial
Hotplugging: Yes
External: Yes
A USB Series “A” plug, the most common USB plug
The USB "trident" Icon
Universal Serial Bus (USB) is a serial bus standard to interface devices. USB was designed to allow peripherals to be connected using a single standardized interface socket and to improve plug-and-play capabilities by allowing devices to be connected and disconnected without rebooting the computer (hot swapping). Other convenient features include providing power to low-consumption devices without the need for an external power supply and allowing many devices to be used without requiring manufacturer specific, individual device drivers to be installed.
USB is intended to help retire all legacy varieties of serial and parallel ports. USB can connect computer peripherals such as mouse devices, keyboards, PDAs, gamepads and joysticks, scanners, digital cameras, printers, personal media players, and flash drives. For many of those devices USB has become the standard connection method. USB is also used extensively to connect non-networked printers; USB simplifies connecting several printers to one computer. The large volume of USB memory devices and their ease of use have created a security concern that is often overlooked; USB lock software can lock out memory devices while still allowing other USB peripherals to function. USB was originally designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles. In 2004, there were about 1 billion USB devices in the world.[1]
The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries. Notable members have included Agere, Apple Inc., Hewlett-Packard, Intel, NEC, and Microsoft.
History
The USB 1.0 specification was introduced in January 1996. USB was promoted by Intel (UHCI and open software stack), Microsoft (Windows software stack), Philips (hub, USB-Audio), and US Robotics. Originally, USB was intended to replace the multitude of connectors at the back of PCs, as well as to simplify software configuration of communication devices. USB was also the primary connector on the original iMac, introduced 6 May 1998, including the connector for its new keyboard and mouse[2]. USB 1.1 came out in September 1998 to help rectify the adoption problems that occurred with earlier iterations of USB.[3]
As of 2008, the USB specification is at version 2.0 (with revisions). Hewlett-Packard, Intel, Lucent (now Alcatel-Lucent), Microsoft, NEC, and Philips jointly led the initiative to develop a higher data transfer rate than the 1.1 specification. The USB 2.0 specification was released in April 2000 and was standardized by the USB-IF at the end of 2001. Equipment conforming to any version of the standard will also work with devices designed to any previous specification (known as backward compatibility). Smaller USB plugs and receptacles for use in handheld and mobile devices, called Mini-B, were added to the USB specification in the first engineering change notice. A new variant of smaller USB plugs and receptacles, Micro-USB, was announced by the USB Implementers Forum on January 4, 2007.[4]
Overview
A conventional USB hub
A USB system has an asymmetric design, consisting of a host, a multitude of downstream USB ports, and multiple peripheral devices connected in a tiered-star topology. Additional USB hubs may be included in the tiers, allowing branching into a tree structure, subject to a limit of 5 levels of tiers. A USB host may have multiple host controllers, and each host controller may provide one or more USB ports. Up to 127 devices, including the hub devices, may be connected to a single host controller.
USB devices are linked in series through hubs. There always exists one hub known as the root hub, which is built into the host controller. So-called "sharing hubs" also exist, allowing multiple computers to access the same peripheral device(s), switching access between PCs either automatically or manually; they are popular in small-office environments. In network terms, they converge rather than diverge branches.
A single physical USB device may consist of several logical sub-devices that are referred to as device functions, because each individual device may provide several functions, such as a webcam (video device function) with a built-in microphone (audio device function).
USB endpoints actually reside on the connected device: the channels to the host are referred to as pipes
USB device communication is based on pipes (logical channels). Pipes are connections from the host controller to a logical entity on the device named an endpoint. The term endpoint is also occasionally used to refer to the pipe. A USB device can have up to 32 active pipes, 16 into the host controller and 16 out of the controller. Each endpoint can transfer data in one direction only, either into or out of the device, so each pipe is uni-directional. Endpoints are grouped into interfaces and each interface is associated with a single device function. An exception to this is endpoint zero, which is used for device configuration and which is not associated with any interface.
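To make interfaces, endpoints, and pipe directions concrete, here is a short sketch using the open-source libusb-1.0 library; libusb is an assumption for illustration (it is not part of the USB specification), and the vendor/product IDs are placeholders. The sketch opens a device, walks its active configuration, and prints each interface with its endpoints and their directions.

```c
/* Sketch: list interfaces and endpoints of one USB device via libusb-1.0.
 * The vendor/product IDs below are placeholders, not a real device. */
#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_context *ctx = NULL;
    if (libusb_init(&ctx) != 0)
        return 1;

    libusb_device_handle *h =
        libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);  /* placeholder IDs */
    if (!h) { libusb_exit(ctx); return 1; }

    struct libusb_config_descriptor *cfg;
    if (libusb_get_active_config_descriptor(libusb_get_device(h), &cfg) == 0) {
        for (int i = 0; i < cfg->bNumInterfaces; i++) {
            const struct libusb_interface_descriptor *ifd =
                &cfg->interface[i].altsetting[0];
            printf("interface %d, class 0x%02x\n",
                   ifd->bInterfaceNumber, ifd->bInterfaceClass);
            for (int e = 0; e < ifd->bNumEndpoints; e++) {
                const struct libusb_endpoint_descriptor *ep = &ifd->endpoint[e];
                /* Bit 7 of bEndpointAddress gives the pipe's direction. */
                printf("  endpoint 0x%02x (%s)\n", ep->bEndpointAddress,
                       (ep->bEndpointAddress & LIBUSB_ENDPOINT_IN) ? "IN" : "OUT");
            }
        }
        libusb_free_config_descriptor(cfg);
    }

    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}
```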
When a new USB device is connected to a USB host, the USB device enumeration process is started. The enumeration process first sends a reset signal to the USB device. The speed of the USB device is determined during the reset signaling. After reset, the USB device's setup information is read from the device by the host, and the device is assigned a unique, host-controller-specific 7-bit address. If the device is supported by the host, the device drivers needed for communicating with the device are loaded and the device is set to the configured state. If the USB host is restarted, the enumeration process is repeated for all connected devices.
The host controller polls the bus for traffic, usually in a round-robin fashion, so no USB device can transfer any data on the bus without an explicit request from the host controller.
Host controllers
The computer hardware that contains the host controller and the root hub has an interface geared toward the programmer which is called Host Controller Device (HCD) and is defined by the hardware implementer.
In the version 1.x age, there were two competing HCD implementations, Open Host Controller Interface (OHCI) and Universal Host Controller Interface (UHCI). OHCI was developed by Compaq, Microsoft and National Semiconductor; UHCI by Intel.
A typical USB connector.
VIA Technologies licensed the UHCI standard from Intel; all other chipset implementers use OHCI. UHCI is more software-driven, making UHCI slightly more processor-intensive than OHCI but cheaper to implement. The dueling implementations forced operating system vendors and hardware vendors to develop and test on both implementations which increased cost.
During the design phase of USB 2.0, the USB-IF insisted on only one implementation. The USB 2.0 HCD implementation is called the Enhanced Host Controller Interface (EHCI). Only EHCI can support hi-speed transfers. Most PCI-based EHCI controllers contain other HCD implementations, called 'companion host controllers', to support Full Speed and Low Speed devices. The virtual HCD on Intel and VIA EHCI controllers is UHCI. All other vendors use virtual OHCI controllers.
HCD standards are out of the USB specification's scope, and the USB specification does not specify any HCD interfaces.
Device classes
Devices that attach to the bus can be full-custom devices requiring a full-custom device driver to be used, or may belong to a device class. These classes define an expected behavior in terms of device and interface descriptors so that the same device driver may be used for any device that claims to be a member of a certain class. An operating system is supposed to implement all device classes so as to provide generic drivers for any USB device. Device classes are decided upon by the Device Working Group of the USB Implementers Forum.
Device classes include:[5]
Class | Usage     | Description                      | Examples
00h   | Device    | Unspecified                      | (Device class is unspecified. Interface descriptors are used for determining the required drivers.)
01h   | Interface | Audio                            | speaker, microphone, sound card
02h   | Both      | Communications and CDC Control   | Ethernet adapter, modem, serial port adapter
03h   | Interface | Human Interface Device (HID)     | keyboard, mouse
05h   | Interface | Physical Interface Device (PID)  | force feedback joystick
06h   | Interface | Image                            | digital camera
07h   | Interface | Printer                          | laser printer, inkjet printer
08h   | Interface | Mass Storage                     | USB flash drive, memory card reader, digital audio player
09h   | Device    | USB hub                          | full speed hub, hi-speed hub
0Ah   | Interface | CDC-Data                         | (This class is used together with class 02h, Communications and CDC Control.)
0Bh   | Interface | Smart Card                       | USB smart card reader
0Dh   | Interface | Content Security                 | -
0Eh   | Interface | Video                            | webcam
0Fh   | Interface | Personal Healthcare              | -
DCh   | Both      | Diagnostic Device                | USB compliance testing device
E0h   | Interface | Wireless Controller              | Wi-Fi adapter, Bluetooth adapter
EFh   | Both      | Miscellaneous                    | ActiveSync device
FEh   | Interface | Application Specific             | IrDA bridge
FFh   | Both      | Vendor Specific                  | (This class code indicates that the device needs vendor-specific drivers.)
Note class 0: Use class information in the Interface Descriptors. This base class is defined to be used in Device Descriptors to indicate that class information should be determined from the Interface Descriptors in the device.
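To make the table concrete, the following sketch shows how a host-side driver matcher might act on these class codes, including the class-0 case where the decision moves down to the interface descriptors. The struct and helper names are illustrative only and not taken from any particular USB stack.

```c
/* Sketch: choose a driver from USB class codes (illustrative names only). */
#include <stdio.h>

struct device_desc    { unsigned char bDeviceClass; };
struct interface_desc { unsigned char bInterfaceClass; };

static const char *class_name(unsigned char c)
{
    switch (c) {
    case 0x03: return "HID (keyboard, mouse)";
    case 0x08: return "Mass Storage";
    case 0x09: return "Hub";
    case 0xFF: return "Vendor Specific";
    default:   return "other";
    }
}

int main(void)
{
    struct device_desc dev = { 0x00 };                 /* class 0: defer to interfaces */
    struct interface_desc ifs[] = { { 0x03 }, { 0x08 } };

    if (dev.bDeviceClass == 0x00) {
        /* Per the class-0 note above, inspect each interface descriptor. */
        for (int i = 0; i < 2; i++)
            printf("interface %d: %s\n", i, class_name(ifs[i].bInterfaceClass));
    } else {
        printf("device class: %s\n", class_name(dev.bDeviceClass));
    }
    return 0;
}
```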
USB mass-storage
A flash drive, a typical USB mass-storage device.
USB implements connections to storage devices using a set of standards called the USB mass storage device class (referred to as MSC or UMS). This was initially intended for traditional magnetic and optical drives, but has been extended to support a wide variety of devices, particularly flash drives, which have replaced floppy disks for data transport. Though most computers are capable of booting off of USB Mass Storage devices, USB is not intended to be a primary bus for a computer's internal storage: buses such as ATA (IDE), Serial ATA (SATA), and SCSI fulfill that role.
However, USB has one important advantage in that it is possible to install and remove devices without opening the computer case, making it useful for external drives. Originally conceived for optical storage devices (CD-RW drives, DVD drives, etc.), and still used for them today, this approach has led a number of manufacturers to offer external portable USB hard drives, or empty enclosures for drives, that offer performance comparable to internal drives. These external drives usually contain a translating device that interfaces a drive of conventional technology (IDE, ATA, SATA, ATAPI, or even SCSI) to a USB port. Functionally, the drive appears to the user just like another internal drive. Other competing standards that allow for external connectivity are eSATA and FireWire.
Human-interface devices (HIDs)
Mice and keyboards are frequently fitted with USB connectors, but because most PC motherboards still retain PS/2 connectors for the keyboard and mouse as of 2007, they are often supplied with a small USB-to-PS/2 adaptor, allowing usage with either USB or PS/2 interface. There is no logic inside these adaptors: they make use of the fact that such HID interfaces are equipped with controllers that are capable of serving both the USB and the PS/2 protocol, and automatically detect which type of port they are plugged in to. Joysticks, keypads, tablets and other human-interface devices are also progressively migrating from MIDI, PC game port, and PS/2 connectors to USB.
Apple Macintosh computers have been using USB exclusively for all wired mice and keyboards since January 1999.
USB signalling
USB supports three data rates:
· A Low Speed (1.1, 2.0) rate of 1.5 Mbit/s (187 kB/s) that is mostly used for Human Interface Devices (HID) such as keyboards, mice, and joysticks.
· A Full Speed (1.1, 2.0) rate of 12 Mbit/s (1.5 MB/s). Full Speed was the fastest rate before the USB 2.0 specification, and many devices fall back to Full Speed. Full Speed devices divide the USB bandwidth between them on a first-come, first-served basis, and it is not uncommon to run out of bandwidth with several isochronous devices. All USB hubs support Full Speed.
· A Hi-Speed (2.0) rate of 480 Mbit/s (60 MB/s).
Experimental data rate:
· A Super-Speed (3.0) rate of 4.8 Gbit/s (600 MB/s). The USB 3.0 specification will be released by Intel and its partners in mid 2008 according to early reports from CNET news. According to Intel, bus speeds will be 10 times faster than USB 2.0 due to the inclusion of a fiber optic link that works with traditional copper connectors. Products using the 3.0 specification are likely to arrive in 2009 or 2010.
USB signals are transmitted on a twisted-pair data cable with 90 Ω ±15% impedance,[6] labeled D+ and D−. These collectively use half-duplex differential signaling to combat the effects of electromagnetic noise on longer lines. D+ and D− usually operate together; they are not separate simplex connections. Transmitted signal levels are 0.0–0.3 volts for low and 2.8–3.6 volts for high in Full Speed and Low Speed modes, and ±400 mV in High Speed (HS) mode. In FS mode the cable wires are not terminated, but the HS mode has termination of 45 Ω to ground, or 90 Ω differential, to match the data cable impedance.
USB uses a special protocol to negotiate the High Speed mode called "chirping". In simplified terms, a device that is HS capable always connects as an FS device first, but after receiving a USB RESET (both D+ and D- are driven LOW by host) it tries to pull the D- line high. If the host (or hub) is also HS capable, it returns alternating signals on D- and D+ lines letting the device know that the tier will operate at High Speed.
Clock tolerance is 480.00 Mbit/s ±500 ppm, 12.000 Mbit/s ±2500 ppm, 1.50 Mbit/s ±15000 ppm.
The USB standard uses the NRZI system to encode data, and uses "bit stuffing": one artificial "zero" bit is inserted after any six consecutive "one" bits in the data stream before the bit stream is converted to NRZI.
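A minimal sketch of that encoding step in C, under simplifying assumptions: only the data payload is handled (no SYNC pattern or packet framing), and the physical J/K line states are reduced to a single toggling level. Bits are taken least-significant first, a zero is stuffed after every run of six ones, and NRZI maps each zero to a transition and each one to no transition.

```c
/* Sketch: USB-style bit stuffing followed by NRZI encoding (payload only). */
#include <stdio.h>

/* Encodes nbytes of data into line levels; returns the number of output bits. */
static int usb_encode(const unsigned char *data, int nbytes, unsigned char *line)
{
    int ones = 0, out = 0;
    unsigned char level = 1;                 /* idle level, simplified */

    for (int i = 0; i < nbytes; i++) {
        for (int b = 0; b < 8; b++) {        /* USB sends LSB first */
            int bit = (data[i] >> b) & 1;
            if (bit) {
                ones++;                      /* NRZI: a one means no transition */
            } else {
                level ^= 1;                  /* NRZI: a zero means a transition */
                ones = 0;
            }
            line[out++] = level;
            if (ones == 6) {                 /* bit stuffing: insert a zero */
                level ^= 1;
                line[out++] = level;
                ones = 0;
            }
        }
    }
    return out;
}

int main(void)
{
    unsigned char payload[] = { 0xFF, 0x00 };   /* eight ones, then eight zeros */
    unsigned char line[64];
    int n = usb_encode(payload, 2, line);

    printf("%d line bits (one stuffed zero expected):", n);   /* 17 bits */
    for (int i = 0; i < n; i++)
        printf(" %d", line[i]);
    printf("\n");
    return 0;
}
```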
Though Hi-Speed devices are commonly referred to as "USB 2.0" and advertised as "up to 480 Mbit/s", not all USB 2.0 devices are Hi-Speed. The USB-IF certifies devices and provides licenses to use special marketing logos for either "Basic-Speed" (low and full) or Hi-Speed after passing a compliance test and paying a licensing fee. All devices are tested according to the latest spec, so recently-compliant Low-Speed devices are also 2.0 devices.
The actual throughput currently (2006) attained with real devices is about two thirds of the maximum theoretical bulk data transfer rate of 53.248 MB/s.[7] Typical hi-speed USB devices operate at lower speeds, often about 3 MB/s overall, sometimes up to 10-20 MB/s. The highest USB data transfer rate claimed by USB vendors is 40 MB/s.
USB connector properties
Series "A" plug and receptacle.
The connectors which the USB committee specified were designed to support a number of USB's underlying goals, and to reflect lessons learned from the varied menagerie of connectors then in service.
· The connectors are particularly cheap to manufacture.[citation needed]
Usability
· It is difficult to incorrectly attach a USB connector. Connectors cannot be plugged-in upside down, and it is clear from the appearance and kinesthetic sensation of making a connection when the plug and socket are correctly mated. However, it is not obvious at a glance to the inexperienced user (or to a user without sight of the installation) which way around the connector goes, so it is often necessary to try both ways.
· Only a moderate insertion/removal force is needed (by specification). USB cables and small USB devices are held in place by the gripping force from the receptacle (without the need for the screws, clips, or thumbturns that other connectors require). The force needed to make or break a connection is modest, allowing connections to be made in awkward circumstances or by those with motor disabilities.
· The connectors enforce the directed topology of a USB network. USB does not support cyclical networks, so the connectors from incompatible USB devices are themselves incompatible. Unlike other communications systems (e.g. RJ-45 cabling) gender-changers are almost never used, making it difficult to create a cyclic USB network.
USB extension cord
Safety
· The connectors are designed to be robust. Many previous connector designs were fragile, with pins or other delicate components prone to bending or breaking, even with the application of only very modest force. The electrical contacts in a USB connector are protected by an adjacent plastic tongue, and the entire connecting assembly is further protected by an enclosing metal sheath. As a result USB connectors can safely be handled, inserted, and removed, even by a small child. The encasing sheath and the tough molded plug body mean that a connector can be dropped, stepped upon, even crushed or struck, all without damage; a considerable degree of force is needed to significantly damage a USB connector.
· The connector construction always ensures that the external sheath on the plug contacts with its counterpart in the receptacle before the four connectors within are connected. This sheath is typically connected to the system ground, allowing otherwise damaging static charges to be safely discharged by this route (rather than via delicate electronic components). This enclosure also gives a (moderate) degree of protection from electromagnetic interference to the USB signal while it travels through the mated connector pair (the only location where the otherwise twisted data pair must travel a distance in parallel). In addition, the power and common connections are made after the system ground but before the data connections. This type of staged make-break timing allows for safe hot-swapping and has long been common practice in the design of connectors in the aerospace industry.
Compatibility
· The USB standard specifies relatively low tolerances for compliant USB connectors, intending to minimize incompatibilities in connectors produced by different vendors (a goal that has been very successfully achieved). Unlike most other connector standards, the USB specification also defines limits to the size of a connecting device in the area around its plug. This was done to avoid circumstances where a device complies with the connector specification but its large size blocks adjacent ports. Compliant devices must either fit within the size restrictions or support a compliant extension cable which does.
· Two-way communication is also possible. In general, cables have only plugs, and hosts and devices have only receptacles: hosts having type-A receptacles and devices type-B. Type-A plugs only mate with type-A receptacles, and type-B with type-B. However, an extension to USB called USB On-The-Go allows a single port to act as either a host or a device — chosen by which end of the cable plugs into the socket on the unit. Even after the cable is hooked up and the units are talking, the two units may "swap" ends under program control. This facility targets units such as PDAs where the USB link might connect to a PC's host port as a device in one instance, yet connect as a host itself to a keyboard and mouse device in another instance.
Types of USB connectors
Type A (left) and Type B USB connectors
Different types of USB connectors, from left to right: micro-USB plug, mini-USB plug, B-type plug, A-type receptacle, A-type plug
Pin configuration of the Standard-A/B USB connectors
There are several types of USB connectors, and some have been added as the specification has progressed. The original USB specification detailed Standard-A and Standard-B plugs and receptacles. The first engineering change notice to the USB 2.0 specification added Mini-B plugs and receptacles.
The Mini-B, Micro-A, Micro-B, and Micro-AB connectors are used for smaller devices such as PDAs, mobile phones or digital cameras. The Standard-A plug is approximately 4 by 12 mm, the Standard-B approximately 7 by 8 mm, and the Micro-A and Micro-B plugs approximately 2 by 7 mm.
Micro-USB is a further connector type, announced by the USB-IF on January 4, 2007.[8] It is intended to replace the Mini-USB plugs used in many new smartphones and personal digital assistants. The Micro-USB plug is rated for 10,000 connect-disconnect cycles. It is about half the height of the Mini-USB connector, but features a similar width. In the Universal Serial Bus Micro-USB Cables and Connectors Specification, details have been laid down for Micro-A plugs, Micro-AB receptacles, and Micro-B plugs and receptacles, along with a Standard-A receptacle to Micro-A plug adapter.
Proprietary connectors and formats
Microsoft's original Xbox game console uses standard USB 1.1 signaling in its controllers and memory cards, but features proprietary connectors and ports. Similarly, IBM UltraPort uses standard USB signaling, but via a proprietary connection format. American Power Conversion uses USB signaling and the HID device class on its uninterruptible power supplies, using 10P10C connectors. HTC, a company which makes Windows Mobile-based Communicators, has a proprietary connector called HTC ExtUSB, which combines mini-USB with audio input and output. Apple uses the standard connection format on its MacBook Air but allows it to deliver power beyond the 500 mA limit in order to make its MacBook Air SuperDrive work without an external power supply. Nokia includes a USB connection as part of the Pop-Port connector on their mobile phones.
Cables
The maximum length of a standard USB cable is 5.0 meters (16.4 ft). The primary reason for this limit is the maximum allowed round-trip delay of about 1500 ns. If a USB device does not answer host commands within the allowed time, the host considers the command to be lost. When the USB device's response time, the delays from using the maximum number of hubs, and the delays from the cables connecting the hubs, host, and device are summed, the maximum delay allowed for a single cable turns out to be 26 ns[9]. The USB 2.0 specification states that the cable delay must be less than 5.2 ns per meter, which means that a maximum-length USB cable is 5 meters long. However, this is also very close to the maximum possible length when using a standard copper cable.
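A back-of-the-envelope check of that figure, using only the numbers quoted above rather than independent measurements:

```c
/* Sketch: derive the maximum cable length from the delay figures above. */
#include <stdio.h>

int main(void)
{
    double per_cable_budget_ns = 26.0;   /* delay budget left for one cable */
    double prop_delay_ns_per_m = 5.2;    /* USB 2.0 maximum cable propagation delay */

    printf("maximum cable length ~ %.1f m\n",
           per_cable_budget_ns / prop_delay_ns_per_m);   /* prints ~5.0 m */
    return 0;
}
```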
Using USB devices over a greater length requires hubs or active extension cables. Active extension cables are bus-powered hubs equipped with two maximum-length standard USB cables. USB connections can be extended to 50 m (160 ft) over CAT5 or up to 10 km (6.2 mi) over fiber by using special USB extender products developed by various manufacturers.
In practice, some USB devices may work with longer cable runs than 5 meters, if the number of hubs between the host and the device is less than the maximum number allowed by the USB standard. However, using a longer cable lowers both the signal quality and the voltage provided by the USB bus below the specification tolerance limits. This may prevent USB devices from working properly or even from working at all.
Pin | Name | Cable colour | Description
1   | VCC  | Red          | +5 V
2   | D−   | White        | Data −
3   | D+   | Green        | Data +
4   | GND  | Black        | Ground
Power
The USB specification provides a 5 V (volts) supply on a single wire from which connected USB devices may draw power. The specification provides for no more than 5.25 V and no less than 4.75 V (5 V±5%) between the positive and negative bus power lines.[10] Initially, a device is only allowed to draw 100 mA. It may request more current from the upstream device in units of 2 mA up to a maximum of 500 mA.
If a bus-powered hub is used, the devices downstream may only use a total of four units — 400 mA (i.e. 2 watts) — of current. This limits compliant bus-powered hubs to 4 ports. The host operating system typically keeps track of the power requirements of the USB network and may warn the computer's operator when a given segment requires more power than is available.
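A quick sketch of this power arithmetic in C. It assumes, as stated above, that requests are made in 2 mA units; bMaxPower here stands for the configuration-descriptor field that carries the request, and the specific values are illustrative.

```c
/* Sketch: USB power-budget arithmetic using the figures quoted above. */
#include <stdio.h>

int main(void)
{
    unsigned bMaxPower = 250;                    /* illustrative: 250 * 2 mA = 500 mA */
    unsigned request_mA = bMaxPower * 2;
    printf("device requests %u mA (%.2f W at 5 V)\n",
           request_mA, request_mA * 5.0 / 1000.0);

    unsigned hub_budget_mA = 4 * 100;            /* bus-powered hub: four 100 mA units */
    printf("bus-powered hub can pass on %u mA total (%.1f W)\n",
           hub_budget_mA, hub_budget_mA * 5.0 / 1000.0);
    return 0;
}
```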
On-The-Go and Battery Charging Specification both add new powering modes to the USB specification. The latter specification allows USB devices to draw up to 1.5 A from hubs and hosts that follow the Battery Charging Specification.
As of June 14, 2007, all new mobile phones applying for license in China are required to adopt the USB port as a power port.[11]
In September 2007, the Open Mobile Terminal Platform, a forum dominated by operators but including manufacturers such as Nokia, Samsung, Motorola, Sony Ericsson and LG, announced that its members had agreed on Micro-USB as the future common connector for mobile devices. [12][13]
Non-standard Devices
A number of USB devices require more power than is permitted by the specifications for a single port. This is a common requirement of external hard and optical disc drives and other devices with motors or lamps. Such devices can be used with an external power supply of adequate rating, which is allowed by the standard, or by means of a dual-input USB cable, one plug of which is used for power and data transfer and the other solely for power; the latter makes the device a non-standard USB device. Some external hubs may, in practice, supply more power to USB devices than required by the specification, but a standard-compliant device may not depend on this.
Some non-standard USB devices use the 5 V power supply without participating in a proper USB network. These are usually referred to as USB decorations. The typical example is a USB-powered reading light; fans, mug heaters, battery chargers (particularly for mobile telephones) and even miniature vacuum cleaners are available. In most cases, these items contain no digitally based circuitry, and thus are not proper USB devices at all. This can cause problems with some computers — the USB specification requires that devices connect in a low-power mode (100 mA maximum) and state how much current they need, before switching, with the host's permission, into high-power mode.
In addition to limiting the total average power used by the device, the USB specification limits the inrush current (to charge decoupling and bulk capacitors) when the device is first connected; otherwise, connecting a device could cause glitches in the host's internal power. Also, USB devices are required to automatically enter ultra low-power suspend mode when the USB host is suspended; many USB hosts do not cut off the power supply to USB devices when they are suspended since resuming from the suspended state would become a lot more complicated if they did.
There are also devices at the host end that do not support negotiation, such as battery packs that can power USB-powered devices; some provide power, while others pass through the data lines to a host PC. USB power adapters convert utility power and/or power from a car's electrical system to run attached devices. Some of these devices can supply up to 1 A of current. Without negotiation, the powered USB device is unable to inquire if it is allowed to draw 100 mA, 500 mA, or 1 A.
PoweredUSB
Main article: PoweredUSB
PoweredUSB uses standard USB signaling with the addition of extra power lines. It uses 4 additional pins to supply up to 6 A at either 5 V, 12 V, or 24 V (depending on keying) to peripheral devices. The wires and contacts on the USB portion have been upgraded to support higher current on the 5 V line as well. This is commonly used in retail systems and provides enough power to operate stationary barcode scanners, printers, PIN pads, signature capture devices, etc. This standard was developed by IBM, NCR, and FCI/Berg. It is essentially two connectors stacked such that the bottom connector accepts a standard USB plug and the top connector takes a power connector.
USB compared with FireWire
USB was originally seen as a complement to FireWire (IEEE 1394), which was designed as a high-speed serial bus which could efficiently interconnect peripherals such as hard disks, audio interfaces, and video equipment. USB originally operated at a far lower data rate and used much simpler hardware, and was suitable for small peripherals such as keyboards and mice.
The most significant technical differences between FireWire and USB include the following:
· USB networks use a tiered-star topology, while FireWire networks use a repeater-based topology.
· USB uses a "speak-when-spoken-to" protocol; peripherals cannot communicate with the host unless the host specifically requests communication. A FireWire device can communicate with any other node at any time, subject to network conditions.
· A USB network relies on a single host at the top of the tree to control the network. In a FireWire network, any capable node can control the network.
These and other differences reflect the differing design goals of the two buses: USB was designed for simplicity and low cost, while FireWire was designed for high performance, particularly in time-sensitive applications such as audio and video. Although similar in theoretical maximum transfer rate, in real-world use, especially for high-bandwidth use such as external hard-drives, FireWire 400 generally has a significantly higher throughput than USB 2.0 Hi-Speed.[14][15][16][17] The newer FireWire 800 standard is twice as fast as FireWire 400 and outperforms USB 2.0 Hi-Speed both theoretically and practically.[18]
There are technical reasons why USB 2.0 devices cannot efficiently utilize all the available bandwidth. USB communication is based on polling the devices; there is no pipelining of commands. After sending a command to a device, the USB host must wait for a reply to the command before a new command can be sent to the same device. The bandwidth of a USB bus is divided by all devices connected to the bus. The USB host cannot send commands to one device while waiting for reply from another device. Since all communication is initiated by a USB host, the host must periodically poll all those USB devices that can provide data at unexpected intervals, such as network cards and keyboards. This consumes unnecessary resources when the devices are idle. These issues are being addressed by the forthcoming USB 3.0 specification, although it is not clear whether USB 3.0 is going to match FireWire in bandwidth efficiency.[19]
One reason USB supplanted FireWire, and became far more widespread, is cost: FireWire is considerably more expensive to implement, producing more expensive hardware.
Version history
Prereleases
Hi-Speed USB Logo
USB OTG Logo
· USB 0.7: Released in November 1994.
· USB 0.8: Released in December 1994.
· USB 0.9: Released in April 1995.
· USB 0.99: Released in August 1995.
· USB 1.0 Release Candidate: Released in November 1995.
USB 1.0
· USB 1.0: Released in January 1996. Specified data rates of 1.5 Mbit/s (Low-Speed) and 12 Mbit/s (Full-Speed). Did not anticipate or allow for extension cables or pass-through monitors. Few such devices actually made it to market.
· USB 1.1: Released in September 1998. Fixed problems identified in 1.0, mostly relating to hubs. Earliest revision to be widely adopted.
USB 2.0
· USB 2.0: Released in April 2000. Added higher maximum speed of 480 Mbit/s (now called Hi-Speed). Further modifications to the USB specification have been done via Engineering Change Notices (ECN). The most important of these ECNs are included into the USB 2.0 specification package available from USB.org:
o Mini-B Connector ECN: Released in October 2000. Specifications for Mini-B plug and receptacle. These should not be confused with Micro-B plug and receptacle.
o Errata as of December 2000: Released in December 2000.
o Pull-up/Pull-down Resistors ECN: Released in May 2002.
o Errata as of May 2002: Released in May 2002.
o Interface Associations ECN: Released in May 2003. A new standard descriptor was added that allows multiple interfaces to be associated with a single device function.
o Rounded Chamfer ECN: Released in October 2003. A recommended, compatible change to Mini-B plugs that results in longer lasting connectors.
o Unicode ECN: Released in February 2005. This ECN specifies that strings are encoded using UTF-16LE. USB 2.0 did specify that Unicode is to be used but it did not specify the encoding.
o Inter-Chip USB Supplement: Released in March 2006.
o On-The-Go Supplement 1.3: Released in December 2006. USB On-The-Go makes it possible for two USB devices to communicate with each other without requiring a separate USB host. In practice, one of the USB devices acts as a host for the other device.
o Battery Charging Specification 1.0: Released in March 2007. Adds support for dedicated chargers (power supplies with USB connectors), host chargers (USB hosts that can act as chargers) and the No Dead Battery Provision, which allows devices to temporarily draw 100 mA of current after they have been attached. If a USB device is connected to a dedicated charger or host charger, the maximum current drawn by the device may be as high as 1.5 A. (Note that this document is not distributed with the USB 2.0 specification package.)
o Micro-USB Cables and Connectors Specification 1.01: Released in April 2007.
o Link Power Management Addendum ECN: Released in July 2007. This adds a new power state between the enabled and suspended states. A device in this state is not required to reduce its power consumption. However, switching between the enabled and sleep states is much faster than switching between the enabled and suspended states, which allows devices to sleep while idle.
o High-Speed Inter-Chip USB Electrical Specification Revision 1.0: Released in September 2007.
USB 3.0
· USB 3.0 (Future version): On September 18, 2007, Pat Gelsinger demonstrated USB 3.0 at the fall Intel Developer Forum. USB 3.0 is targeted at ten times the current bandwidth, reaching roughly 5.0 Gbit/s by utilizing two additional high-speed differential pairs for "Superspeed" mode, and with the possibility for optical interconnect.[20] The USB 3.0 specification is planned to be released in the first half of 2008, commercial products are expected to arrive in 2009 or 2010.[21]
· Backwards-Compatibility and Efficiency: USB 3.0 is designed to be backwards-compatible with USB 2.0 and USB 1.1 and employs more efficient protocols to conserve power.[22]
Related technologies
The PictBridge standard allows for interconnecting consumer imaging devices. It typically uses USB as the underlying communication layer.
The USB Implementers Forum is working on a wireless networking standard based on the USB protocol. Wireless USB is intended as a cable-replacement technology, and will use ultra-wideband wireless technology for data rates of up to 480 Mbit/s. Wireless USB is well suited to wireless connection of PC centric devices, just as Bluetooth is now widely used for mobile phone centric personal networks (at much lower data rates).