SUPER COMPUTERS

A supercomputer is a computer that is considered, or was considered at the time of its introduction, to be at the frontline in terms of processing capacity, particularly speed of calculation. The term "Super Computing" was first used by the New York World newspaper in 1929[1] to refer to the large custom-built tabulators IBM made for Columbia University.
Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and CDC led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he only recognized the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, which purchased many of the 1980s companies to gain their experience.

The Cray-2 was the world's fastest computer from 1985 to 1989.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's normal computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard, with typical numbers of processors in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. (This is commonly and humorously referred to in the industry as the "attack of the killer micros".) Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Itanium, or x86-64, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.
Contents
1 Software tools
2 Common uses
3 Hardware and software design
3.1 Supercomputer challenges, technologies
3.2 Processing techniques
3.3 Operating systems
3.4 Programming
4 Modern supercomputer architecture
5 Special-purpose supercomputers
6 The fastest supercomputers today
6.1 Measuring supercomputer speed
6.2 The Top500 list
6.3 Current fastest supercomputer system
6.4 Quasi-supercomputing
7 Research and development
8 Timeline of supercomputers
9 See also
10 Notes
11 External links
11.1 Information resources
11.2 Supercomputing centers, organizations
11.3 Specific machines, general-purpose
11.4 Specific machines, special-purpose

Software tools
Software tools for distributed processing include standard APIs such as MPI and PVM, and open-source software solutions such as Beowulf, Warewulf and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community, which often creates disruptive technology in this arena.
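As a brief, hedged illustration of the style of programming these APIs support (assuming an installed MPI implementation and a C compiler; the workload below is only a placeholder, not drawn from any real application), every node runs the same program, works on its own share of the data, and the partial results are combined with a single collective call:

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI sketch: each process sums part of a range and
       MPI_Reduce combines the partial sums on rank 0. */
    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

        long long local = 0, total = 0;
        /* Placeholder workload: sum the integers 1..1000000,
           split round-robin across the processes. */
        for (long long i = rank + 1; i <= 1000000; i += size)
            local += i;

        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %lld\n", total);

        MPI_Finalize();
        return 0;
    }

Compiled with a wrapper such as mpicc and launched with mpirun, the same binary runs on every machine in the cluster, which is the model a Beowulf-style installation exposes.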

Common uses
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users.
A particular class of problems, known as Grand Challenge problems, consists of problems whose full solution requires computing resources far beyond what is currently practical.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.
Hardware and software design


Processor board of a CRAY YMP vector computer
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times—in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to accelerate the remaining bottlenecks.
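Amdahl's law can be stated concretely: if a fraction p of a program's work can be parallelized across n processors, the overall speedup is at most 1 / ((1 - p) + p / n). The short C sketch below, using illustrative values of p and n rather than figures from any particular machine, shows how quickly the remaining serial fraction comes to dominate:

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
       parallelizable fraction and n is the number of processors. */
    static double amdahl(double p, double n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        double fractions[] = { 0.90, 0.99 };        /* illustrative values */
        int    counts[]    = { 16, 1024, 65536 };   /* illustrative values */
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 3; j++)
                printf("p = %.2f, n = %6d -> speedup %.1f\n",
                       fractions[i], counts[j],
                       amdahl(fractions[i], counts[j]));
        return 0;
    }

Even with 65,536 processors, a program that is only 90% parallel can never run more than ten times faster than it does on a single processor.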
Supercomputer challenges, technologies
A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1-5 microseconds to send a message between CPUs are typical.
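The arithmetic behind this constraint is simple. The sketch below (with illustrative cable lengths, not measurements from any real machine) converts a signal path's length into the minimum one-way latency imposed by the speed of light:

    #include <stdio.h>

    int main(void)
    {
        const double c = 299792458.0;            /* speed of light, m/s   */
        double spans[] = { 1.0, 10.0, 100.0 };   /* illustrative paths, m */
        for (int i = 0; i < 3; i++)
            printf("%6.1f m  ->  at least %5.1f ns one way\n",
                   spans[i], spans[i] / c * 1e9);
        return 0;
    }

A 10 m path already costs roughly 33 ns at the speed of light, before any switching, serialization, or protocol overhead is added.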
Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
Technologies developed for supercomputers include:
Vector processing
Liquid cooling
Non-Uniform Memory Access (NUMA)
Striped disks (the first instance of what was later called RAID)
Parallel filesystems
Processing techniques
Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.
Modern video game consoles in particular use SIMD extensively and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several TFLOPS. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
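As a minimal sketch of the underlying idea (plain C, not tied to any particular GPU or console), the loop below applies the same multiply-add to every element of an array; this uniform, element-wise pattern is exactly what SIMD instruction sets, and the vector processors before them, execute several elements at a time:

    #include <stddef.h>

    /* SAXPY: y[i] = a * x[i] + y[i]. The same operation is applied to
       every element, which is the pattern SIMD units accelerate: with
       128-bit vectors of single-precision floats, four elements can be
       processed per vector instruction. */
    void saxpy(size_t n, float a, const float *x, float *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

An optimizing compiler targeting a SIMD instruction set can usually vectorize such a loop automatically, which is how the technique reached commodity processors.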
Operating systems


Supercomputers predominantly run some variant of Linux or UNIX. Linux has been the most popular operating system since 2004.
Supercomputer operating systems, today most often variants of Linux or UNIX, are every bit as complex as those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as the OS developers have limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). This stems from the fact that these computers, often priced at millions of dollars, are sold to a very small market, so their R&D budgets are often limited. (The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.)
Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as NVIDIA, which have been able to produce cheap, feature-rich, high-performance, and innovative products due to the vast number of consumers driving their R&D.
Historically, until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing Fortran compilers existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos and today's Linux).
For this reason, in the future, the highest performance systems are likely to have a UNIX flavor but with incompatible system-unique features (especially for the highest-end systems at secure facilities).
Programming
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Special-purpose Fortran compilers can often generate faster code than C or C++ compilers, so Fortran remains the language of choice for scientific programming, and hence for most programs run on supercomputers. To exploit the parallelism of supercomputers, programming environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared-memory machines are used.
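As a hedged sketch of the shared-memory side (written in C with OpenMP rather than Fortran, and with placeholder data), the fragment below shows how a single directive distributes a loop across the processors of a shared-memory machine and combines their partial results:

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N];
        double dot = 0.0;

        /* One OpenMP directive spreads the iterations across all available
           threads and combines the per-thread partial sums (a reduction). */
        #pragma omp parallel for reduction(+:dot)
        for (int i = 0; i < N; i++) {
            a[i] = i * 0.5;           /* placeholder data */
            b[i] = i * 0.25;
            dot += a[i] * b[i];
        }

        printf("dot = %f (up to %d threads available)\n",
               dot, omp_get_max_threads());
        return 0;
    }

PVM and MPI, by contrast, pass explicit messages between separate processes, so the same pattern on a loosely connected cluster requires explicitly partitioning the data and exchanging the partial sums.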


Modern supercomputer architecture


The Columbia Supercomputer at NASA's Advanced Supercomputing Facility at Ames Research Center


The CPU Architecture Share of Top500 Rankings between 1998 and 2007: x86 family includes x86-64.
As of November 2006, the top ten supercomputers on the Top500 list (and indeed the bulk of the remainder of the list) have the same top-level architecture. Each of them is a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:
A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, where the application-level software is indifferent to the number of processors. The processors share tasks using Symmetric Multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
An SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general-purpose commodity processor or a special-purpose vector processor. It could also be a high-performance processor or a low-power processor. A short sketch combining these three levels follows this list.
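The following sketch (C with MPI and OpenMP, placeholder data, and no tuning) combines the three levels just described: MPI spans the cluster, OpenMP spans the processors of each multiprocessor node, and the innermost element-wise arithmetic is the kind of uniform work a SIMD unit or vector processor accelerates:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        static float x[N];
        double local = 0.0, total = 0.0;

        #pragma omp parallel for reduction(+:local)   /* level 2: SMP node   */
        for (int i = rank; i < N; i += size) {        /* level 1: cluster    */
            x[i] = 2.0f * i;                          /* level 3: SIMD-able  */
            local += x[i];
        }

        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("total = %f across %d cluster nodes\n", total, size);

        MPI_Finalize();
        return 0;
    }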
As of November 2007 the fastest machine is Blue Gene/L. This machine is a cluster of 65,536 computers, each with two processors, each of which processes two data streams concurrently. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.
As of 2005, Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer, and the design concepts that allowed past supercomputers to out-perform contemporaneous desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; many workloads that required such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars.
Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer.
Special-purpose supercomputers
Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular set of problems.
Examples of special-purpose supercomputers:
Deep Blue, for playing chess
Reconfigurable computing machines or parts of machines
GRAPE, for astrophysics and molecular dynamics
Deep Crack, for breaking the DES cipher
The fastest supercomputers today
Measuring supercomputer speed
The speed of a supercomputer is generally measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). This measurement is based on a particular benchmark which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
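For a rough sense of what such a figure means (an illustrative sketch only, not the LINPACK benchmark itself), the fragment below times a naive dense matrix multiply, counts its 2*N^3 floating-point operations, and reports the achieved rate:

    #include <stdio.h>
    #include <time.h>

    #define N 512   /* illustrative size; the real benchmark uses far larger matrices */

    int main(void)
    {
        static double a[N][N], b[N][N], c[N][N];   /* c starts zeroed */

        /* Fill the operands with arbitrary data. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                a[i][j] = (double)(i + j);
                b[i][j] = (double)(i - j);
            }

        clock_t start = clock();
        /* Naive triple loop: 2 * N * N * N floating-point operations. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    c[i][j] += a[i][k] * b[k][j];
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("%.3f s, about %.2f MFLOPS\n",
               seconds, 2.0 * N * N * N / seconds / 1e6);
        return 0;
    }

The LINPACK benchmark instead factors a large dense matrix by LU decomposition, but the principle is the same: count the floating-point operations performed and divide by the elapsed time.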
The Top500 list
Main article: TOP500
Since 1993, the fastest supercomputers have been ranked on the Top500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is the best current definition of the "fastest" supercomputer available at any given time.
Current fastest supercomputer system


A BlueGene/P node card
As of November 2007, the IBM Blue Gene/L at Lawrence Livermore National Laboratory (LLNL) is the fastest operational supercomputer, with a sustained processing rate of 478.2 TFLOPS.[2]
On June 26, 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene supercomputer. These computers can sustain one PFLOPS. IBM has announced that several customers will install these systems later in 2007. One of these is likely to become the fastest deployed supercomputer at that time. [1]
The MDGRAPE-3 supercomputer, which was completed in June 2006, reportedly reached one PFLOPS calculation speed, though it may not qualify as a general-purpose supercomputer as its specialized hardware is optimized for molecular dynamics simulations.[3][4][5]
Quasi-supercomputing
Some types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme.
One such example is the BOINC platform, a host for a number of distributed computing projects. On January 28, 2008, BOINC recorded a processing power of over 846.5 TFLOPS through more than 549,554 active computers on the network.[6] The largest project, SETI@home, reported a processing power of 385.2 TFLOPS through more than 1,740,529 active computers.[7]
Another distributed computing project, Folding@home, reported nearly 1.3 PFLOPS of processing power in late September 2007. A little over 1 PFLOPS of this processing power is contributed by clients running on PlayStation 3 systems.[8]
GIMPS's distributed Mersenne prime search achieves 27 TFLOPS (as of March 2008).
Google's search engine system may be faster still, with an estimated total processing power of between 126 and 316 TFLOPS. The New York Times estimates that the Googleplex and its server farms contain 450,000 servers.[9]
Research and development
On September 9, 2006 the U.S. Department of Energy's National Nuclear Security Administration (NNSA) selected IBM to design and build the world's first supercomputer to use the Cell Broadband Engine™ (Cell B.E.) processor aiming to produce a machine capable of a sustained speed of up to 1,000 trillion calculations per second, or one PFLOPS. Another project in development by IBM is the Cyclops64 architecture, intended to create a "supercomputer on a chip".
In India, a project under the leadership of Dr. Karmarkar is also developing a supercomputer intended to reach one PFLOPS.[10]
CDAC is also building a supercomputer intended to reach one PFLOPS by 2010.[11]
The NSF is funding a $200 million effort to develop a one-PFLOPS supercomputer, called the Blue Waters Petascale Computing System. It is being built by the NCSA at the University of Illinois at Urbana-Champaign, and is slated to be completed by 2011.[12]
Timeline of supercomputers
This is a list of the record-holders for fastest general-purpose supercomputer in the world, and the year each one set the record. For entries prior to 1993, this list refers to various sources[citation needed]. From 1993 to present, the list reflects the Top500 listing.
Year | Supercomputer | Peak speed | Location
1942 | Atanasoff–Berry Computer (ABC) | 30 OPS | Iowa State University, Ames, Iowa, USA
1942 | TRE Heath Robinson | 200 OPS | Bletchley Park
1944 | Flowers Colossus | 5 kOPS | Post Office Research Station, Dollis Hill, UK
1946 | UPenn ENIAC (before 1948+ modifications) | 100 kOPS | Aberdeen Proving Ground, Maryland, USA
1954 | IBM NORC | 67 kOPS | U.S. Naval Proving Ground, Dahlgren, Virginia, USA
1956 | MIT TX-0 | 83 kOPS | Massachusetts Inst. of Technology, Lexington, Massachusetts, USA
1958 | IBM AN/FSQ-7 | 400 kOPS | 25 U.S. Air Force sites across the continental USA and 1 site in Canada (52 computers)
1960 | UNIVAC LARC | 250 kFLOPS | Lawrence Livermore National Laboratory, California, USA
1961 | IBM 7030 "Stretch" | 1.2 MFLOPS | Los Alamos National Laboratory, New Mexico, USA
1964 | CDC 6600 | 3 MFLOPS | Lawrence Livermore National Laboratory, California, USA
1969 | CDC 7600 | 36 MFLOPS | Lawrence Livermore National Laboratory, California, USA
1974 | CDC STAR-100 | 100 MFLOPS | Lawrence Livermore National Laboratory, California, USA
1975 | Burroughs ILLIAC IV | 150 MFLOPS | NASA Ames Research Center, California, USA
1976 | Cray-1 | 250 MFLOPS | Los Alamos National Laboratory, New Mexico, USA (80+ sold worldwide)
1981 | CDC Cyber 205 | 400 MFLOPS | (numerous sites worldwide)
1983 | Cray X-MP/4 | 941 MFLOPS | Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Battelle; Boeing
1984 | M-13 | 2.4 GFLOPS | Scientific Research Institute of Computer Complexes, Moscow, USSR
1985 | Cray-2/8 | 3.9 GFLOPS | Lawrence Livermore National Laboratory, California, USA
1989 | ETA10-G/8 | 10.3 GFLOPS | Florida State University, Florida, USA
1990 | NEC SX-3/44R | 23.2 GFLOPS | NEC Fuchu Plant, Fuchu, Japan
1993 | Thinking Machines CM-5/1024 | 65.5 GFLOPS | Los Alamos National Laboratory; National Security Agency
1993 | Fujitsu Numerical Wind Tunnel | 124.50 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1993 | Intel Paragon XP/S 140 | 143.40 GFLOPS | Sandia National Laboratories, New Mexico, USA
1994 | Fujitsu Numerical Wind Tunnel | 170.40 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1996 | Hitachi SR2201/1024 | 220.4 GFLOPS | University of Tokyo, Japan
1996 | Hitachi/Tsukuba CP-PACS/2048 | 368.2 GFLOPS | Center for Computational Physics, University of Tsukuba, Tsukuba, Japan
1997 | Intel ASCI Red/9152 | 1.338 TFLOPS | Sandia National Laboratories, New Mexico, USA
1999 | Intel ASCI Red/9632 | 2.3796 TFLOPS | Sandia National Laboratories, New Mexico, USA
2000 | IBM ASCI White | 7.226 TFLOPS | Lawrence Livermore National Laboratory, California, USA
2002 | NEC Earth Simulator | 35.86 TFLOPS | Earth Simulator Center, Yokohama, Japan
2004 | IBM Blue Gene/L | 70.72 TFLOPS | U.S. Department of Energy/IBM, USA
2005 | IBM Blue Gene/L | 136.8 TFLOPS | U.S. Department of Energy/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2005 | IBM Blue Gene/L | 280.6 TFLOPS | U.S. Department of Energy/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2007 | IBM Blue Gene/L | 478.2 TFLOPS | U.S. Department of Energy/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
