
10 Most Powerful SuperComputers Running Linux



By Rahul Bali February 13, 2014

Supercomputers were introduced in the 1960s, made initially and, for decades, primarily by Seymour Cray at Control Data Corporation. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of “off-the-shelf” processors were the norm.
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in fields including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulation (the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.
Supercomputer operating systems have undergone major transformations as sea changes have taken place in supercomputer architecture. While early operating systems were custom-tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems toward the adaptation of generic software such as Linux. 476 of the world's top 500 fastest supercomputers (95.2 percent) run Linux. Linux has ruled supercomputing for years. But it wasn't always that way.
In 1993, the first successful Linux distributions, Slackware and Debian, were only just getting off the ground. Before Linux made its move, Unix was supercomputing's dominant operating system. Since 2003, the top operating system by performance share on the TOP500 list has undergone a complete flip from 96 percent Unix to 96 percent Linux. By 2004, Linux had taken the lead for good.
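The market-share figures above are simple ratios over the 500 machines on the list; a quick sketch of the arithmetic (the count of 476 Linux systems is the one quoted in this article):

```python
# Share of the TOP500 list held by an operating system,
# expressed as a percentage of all 500 entries.
def share(count, total=500):
    return round(count / total * 100, 1)

print(share(476))  # Linux's 476 machines -> 95.2 percent
```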
According to The Linux Foundation, “Linux [became] the driving force behind the breakthroughs in computing power that have fueled research and technological innovation. In other words, Linux is dominant in supercomputing, at least in part, because it is what is helping researchers push the limits on computing power.”

10. SuperMUC (IBM)
SuperMUC, the successor of the Höchstleistungsrechner Bayern II, is a new supercomputer of the Leibniz Supercomputing Centre in Garching near Munich, available to European researchers to probe the frontiers of medicine, astrophysics and quantum chromodynamics, as well as other scientific disciplines such as computational fluid dynamics, computational chemistry, life sciences, genome analysis and earthquake simulations.
SuperMUC was named the second fastest supercomputer in Europe on November 12, 2012, at the International Supercomputing Conference in Hamburg, Germany. It is also the fastest Intel-compatible system in the world.
SuperMUC has 18,432 Intel Xeon Sandy Bridge-EP processors running in IBM System x iDataPlex servers, with a total of 147,456 cores and a peak performance of about 3 petaFLOPS. The main memory is 288 terabytes, together with 12 petabytes of hard disk space based on the IBM General Parallel File System. It also uses a new form of cooling that IBM developed, called Aquasar, which uses hot water to cool the processors, a design that should cut cooling electricity usage by 40 percent, IBM claims. SuperMUC runs SUSE Linux Enterprise Server.

9. Vulcan (IBM)

Located at the Lawrence Livermore National Laboratory in California, the Vulcan supercomputer is devoted to both government use and collaborative endeavours of industry and research universities. Its 393,216 cores, based on IBM's Blue Gene/Q supercomputing technology, deliver 4.3 petaflops.
The system consists of 24 racks and 24,576 compute nodes, and sits in LLNL's high-performance computing facility alongside Sequoia. Vulcan runs Red Hat Enterprise Linux.

8. JUQUEEN (IBM)

JUQUEEN at the Forschungszentrum Jülich is a 28-rack Blue Gene/Q system and, as of June 2013, the highest-ranked machine in Europe. JUQUEEN has 458,752 cores with a peak performance of about 5,872 teraflops. JUQUEEN runs Red Hat Enterprise Linux.

7. Stampede (Dell)

On January 7, 2013, TACC's new cluster, "Stampede", went into production. Stampede comprised 6,400 nodes, 102,400 CPU cores, 205 TB of total memory, and 14 PB of total plus 1.6 PB of local storage. The bulk of the cluster consisted of 160 racks of primary compute nodes, each with dual 8-core Xeon E5-2680 processors, a Xeon Phi coprocessor, and 32 GB of RAM. The cluster also contained 16 nodes with 32 cores and 1 TB of RAM each, 128 "standard" compute nodes with Nvidia Kepler K20 GPUs, and other nodes for I/O (to a Lustre filesystem), login, and cluster management.
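The headline core count follows directly from the node layout described above; a quick cross-check (this counts only the Xeon host cores, not the Xeon Phi coprocessor cores):

```python
# Stampede's primary compute nodes: dual 8-core Xeon E5-2680 per node.
nodes = 6400
cores_per_node = 2 * 8            # two 8-core sockets per node
print(nodes * cores_per_node)     # -> 102400, the quoted CPU core total
```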
A pre-production configuration of Stampede was listed as the 7th fastest supercomputer in November 2012 with a delivered performance of 2,660 TFlops. Because the system was still being assembled, the submitted benchmark was run using 1,875 nodes with Xeon Phi coprocessors and 3,900 nodes without Xeon Phi coprocessors. In June 2013, the benchmark was re-run using 6,006 nodes (all with Xeon Phi coprocessors), delivering 5,168 TFlops and moving the system up to 6th place. Stampede runs CentOS Linux.

6. Piz Daint (Cray)

The Piz Daint supercomputer is not only the sixth most powerful machine in the world but also the fourth most energy-efficient. It is also one of only two supercomputers in the top 10 of the efficiency ranking that are capable of maintaining petaflops performance. Located at the Swiss National Supercomputing Centre (CSCS) in Lugano, Piz Daint is being used to parse huge data sets and simulate processes for projects in geophysics, materials science, chemistry, and other areas, but especially in climate modeling.
To power that research, the computer uses a hybrid system that combines the advanced network architecture of the Cray XC30 with two cutting-edge processors, the Intel Xeon E5 CPU and the Nvidia Tesla K20X GPU. The computer's 5,272 compute nodes are bound together in a special low-latency network to form an unusually compact machine.
Piz Daint, in operation at CSCS since April 2013, went through a major upgrade in October 2013. During this phase the supercomputer, based on Intel Xeon E5 processors, more than doubled in size from 12 to 28 cabinets and was upgraded to a hybrid architecture featuring Nvidia Tesla K20X graphics processing units (GPUs). With a total of 5,272 hybrid compute nodes and a powerful high-bandwidth, low-latency network, it now delivers a peak performance of 7.787 petaflops for real simulations. Piz Daint runs the Cray Linux Environment.

5. Mira (IBM)

Mira, a petascale Blue Gene/Q supercomputer, was constructed by IBM for Argonne National Laboratory's Argonne Leadership Computing Facility with the support of the United States Department of Energy, and partially funded by the National Science Foundation. Mira will be used for scientific research, including studies in the fields of materials science, climatology, seismology, and computational chemistry. The supercomputer is being utilized initially for sixteen projects, selected by the Department of Energy. It has a performance of 8.16 petaflops and consumes 3.9 MW of power.
Mira is set to run a simulation of the entire universe over 13 billion years. The supercomputer will track the movement of countless particles as they collide and interact to form structures that could have become galaxies billions of years ago. Mira runs Red Hat Enterprise Linux.
The cost for building Mira has not been released by IBM. Early reports estimated that construction would cost US$50 million, and Argonne National Laboratory announced that Mira was bought using money from a grant of US$180 million. In a press release, IBM marketed the supercomputer’s speed, claiming that “if every man, woman and child in the United States performed one calculation each second, it would take them almost a year to do as many calculations as Mira will do in one second”.

4. K Computer (Fujitsu)
The K computer – named for the Japanese word “kei” meaning 10 quadrillion – is a supercomputer manufactured by Fujitsu, currently installed at the RIKEN Advanced Institute for Computational Science campus in Kobe, Japan. The K computer is based on a distributed memory architecture with over 80,000 computer nodes. It is used for a variety of applications, including climate research, disaster prevention and medical research.
K set a LINPACK record with a performance of 8.162 petaflops, making it the fastest supercomputer in the world at the time. The K computer comprises over 80,000 2.0 GHz 8-core SPARC64 VIIIfx processors contained in 864 cabinets, for a total of over 640,000 cores, manufactured by Fujitsu with 45 nm CMOS technology. Each cabinet contains 96 computing nodes, in addition to 6 I/O nodes. Each computing node contains a single processor and 16 GB of memory. The computer's water-cooling system is designed to minimize failure rate and power consumption. The K computer runs Red Hat Enterprise Linux.
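A rough peak-performance estimate can be reconstructed from those figures. The value of 8 double-precision flops per cycle per core is an assumption about the SPARC64 VIIIfx, and the real machine used somewhat more than 80,000 nodes, so this is only a ballpark sketch:

```python
# peak FLOPS = nodes x cores/node x clock x flops per cycle per core
nodes = 80_000
cores_per_node = 8
clock_hz = 2.0e9
flops_per_cycle = 8               # assumed for the SPARC64 VIIIfx
peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(peak / 1e15)                # ~10.24 petaflops peak vs. 8.162 on LINPACK
```

The gap between this peak estimate and the 8.162-petaflop LINPACK result is normal: delivered LINPACK performance is always a fraction of theoretical peak.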

3. Sequoia (IBM)

IBM Sequoia is a petascale Blue Gene/Q supercomputer constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). Record-breaking science applications have been run on Sequoia, the first machine to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6-trillion-particle benchmark run, while the Cardioid code, which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near-real-time simulation.
Sequoia will be used primarily for nuclear weapons simulation, replacing the current Blue Gene/L and ASC Purple supercomputers at Lawrence Livermore National Laboratory. Sequoia will also be available for scientific purposes such as astronomy, energy, lattice QCD, study of the human genome, and climate change. Sequoia is a Blue Gene/Q design, building off previous Blue Gene designs.
It consists of 96 racks containing 98,304 compute nodes. The compute nodes are 16-core PowerPC A2 processor chips with 16 GB of DDR3 memory each. Thus the system contains a total of 1,572,864 processor cores with 1.5 PiB of memory. It covers an area of about 3,000 square feet. The compute nodes are interconnected in a five-dimensional torus topology.
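In a 5-dimensional torus, each node links to two neighbors per dimension, with wraparound at the edges. A minimal sketch of the idea (the dimension sizes here are hypothetical, not Blue Gene/Q's actual partition shape):

```python
# Enumerate the neighbors of a node in an n-dimensional torus network.
def torus_neighbors(coord, dims):
    neighbors = []
    for d in range(len(dims)):
        for step in (-1, 1):                # one hop each way per dimension
            n = list(coord)
            n[d] = (n[d] + step) % dims[d]  # wrap around the torus
            neighbors.append(tuple(n))
    return neighbors

dims = (4, 4, 4, 8, 2)                      # hypothetical 5-D shape
links = torus_neighbors((0, 0, 0, 0, 0), dims)
print(len(links))                           # 10 links per node in 5-D
```

Note that along a dimension of length 2, the +1 and -1 hops land on the same node; real hardware collapses those into a single link.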
In January 2013, Sequoia set the record as the first supercomputer to use more than one million computing cores at a time for a single application. Stanford Engineering's Center for Turbulence Research used it to solve a complex fluid dynamics problem: the prediction of noise generated by a supersonic jet engine.

2. Titan (Cray)
Titan is a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects, dealing mostly with molecular-scale physics or climate models. Titan is an upgrade of Jaguar, a previous supercomputer at Oak Ridge, and uses graphics processing units in addition to conventional central processing units. It is the first such hybrid to perform over 10 petaflops.
The upgrade began in October 2011, stability testing commenced in October 2012, and the machine became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy. Titan will itself be replaced by an approximately 200-petaflops system in 2016 as part of ORNL's plan to operate an exascale (1,000 petaflops, or 1 exaflops) machine by 2020.
Titan employs AMD Opteron CPUs in conjunction with Nvidia Tesla GPUs to improve energy efficiency while providing an order-of-magnitude increase in computational power over Jaguar. It uses 18,688 CPUs paired with an equal number of GPUs to perform at a theoretical peak of 27 petaflops. This was enough to take first place on the November 2012 TOP500 list. Titan runs the Cray Linux Environment.
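The 27-petaflop peak can be roughly reconstructed from per-device figures. The per-device peaks below are assumptions (about 1.31 double-precision teraflops for a Tesla K20X and about 0.14 for a 16-core Opteron), not numbers from this article:

```python
# Peak ~= pairs x (GPU peak + CPU peak), summed over all CPU/GPU pairs.
pairs = 18_688                     # matched CPU/GPU pairs
gpu_tflops = 1.31                  # assumed Tesla K20X double-precision peak
cpu_tflops = 0.1408                # assumed 16-core Opteron peak
peak_pflops = pairs * (gpu_tflops + cpu_tflops) / 1000
print(round(peak_pflops, 1))       # ~27.1, close to the quoted 27 petaflops
```

The breakdown also shows why the GPUs matter: under these assumptions they contribute roughly 90 percent of the machine's peak.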

1. Tianhe-2 (NUDT)

Tianhe-2 is a 33.86-petaflops supercomputer located at Sun Yat-sen University in Guangzhou, China. It was developed by a team of 1,300 scientists and engineers, and is the world's fastest supercomputer as of November 2013. According to NUDT, Tianhe-2 will be used for simulation, analysis, and government security applications.
The development of Tianhe-2 was sponsored by the 863 High Technology Program, initiated by the Chinese government, the government of Guangdong province, and the government of Guangzhou city. It was built by China's National University of Defense Technology in collaboration with the Chinese IT firm Inspur.
With 16,000 compute nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi chips, it represents the world's largest installation of Ivy Bridge and Xeon Phi chips, counting a total of 3,120,000 cores. Each of the 16,000 nodes possesses 88 gigabytes of memory (64 used by the Ivy Bridge processors and 8 gigabytes for each of the three Xeon Phi processors). The total CPU-plus-coprocessor memory is 1,375 TiB. Tianhe-2 runs Kylin Linux.
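The per-node and total memory figures are mutually consistent, reading the per-node 88 gigabytes as GiB:

```python
# Per node: 64 GiB for the two Ivy Bridge CPUs + 3 x 8 GiB for the Xeon Phis.
per_node_gib = 64 + 3 * 8                  # = 88 GiB
total_tib = 16_000 * per_node_gib / 1024   # 1024 GiB per TiB
print(total_tib)                           # -> 1375.0 TiB, the quoted total
```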


Why don't they use other OS platforms? As we know, most of these super PCs use Linux, e.g. as the server, for hosting, etc.

To my knowledge, no other OS platform has ever been heard of as their OS; likely the only choice is LINUX.


AFAIK, they are not powerful enough to handle that much enormous data. Linux is not just a single OS; it is a whole family of distributions built around the kernel. I am an average Windows user, and if I push it to do a bit more work, it gets stuck. Same with an Apple desktop.


linux can almost forever without rebooting? correct?

And where's the verb?

That actually applies to all Unix OSes and Unix-alike OSes like Linux.

$ uptime
 13:47pm  up 1384 days  9:07,  1 user,  load average: 2.12, 2.16, 2.49
Edited by Scorpius

This may answer why they mostly use Linux:

[chart: Linux supercomputer stats]

As the chart above shows, Linux is still steadily increasing its supercomputing market share. Why? For starters, it's fast (the top Linux systems run at speeds as high as 17.59 petaFLOPS), and the fact that it's limitlessly customizable can't hurt, either.

Source: http://www.neowin.net/news/only-3-of-the-worlds-most-powerful-computers-run-windows


linux can almost forever without rebooting? correct?

And where's the verb?

That actually applies to all Unix OSes and Unix-alike OSes like Linux.

$ uptime
 13:47pm  up 1384 days  9:07,  1 user,  load average: 2.12, 2.16, 2.49

oops, RUN lol. Linux can run almost forever with no reboot?? wow, that's an amazing uptime there! :)

Edited by jackieo
