In today’s ever-more digitized world, we all have a tale or two to share about how personal computers have let us down: how they refused to run several programs at once, or how a file was so large that the device kept us waiting forever before performing even the most trivial operation.
Well, there is one machine in the world — and it’s in Japan — that is absolutely free of such concerns, being the fastest computer on Earth and capable of handling a mind-boggling number of tasks in far less than the blink of an eye.
In June and November last year, the K computer — jointly developed by the IT giant Fujitsu and housed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe — was ranked No. 1 in the TOP500 list of the world’s fastest computers. The ranking is announced twice a year at the SC conference of supercomputing experts — also known as the International Conference for High Performance Computing, Networking, Storage, and Analysis.
The K computer — which will be available for shared use by researchers in November — is named after the Japanese numerical unit 京 (kei), meaning 10 quadrillion, or 10,000 trillion. By achieving the targeted 10 petaflops, a measure of computer performance equaling 10 quadrillion calculations — or floating-point operations, to be precise — per second, K lived up to its name in November.
If humans were to perform the same number of calculations as K does in a second, it would take the world’s entire population of 7 billion people — each tackling one problem per second — 17 consecutive days.
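The arithmetic behind that comparison is easy to verify. A rough back-of-the-envelope check (using the article’s own figures of 10 petaflops and 7 billion people) might look like this:

```python
# Back-of-the-envelope check of the "17 days" comparison above.
# K performs roughly 10 petaflops = 1e16 floating-point operations per second.
k_ops_per_second = 10 ** 16
world_population = 7 * 10 ** 9          # 7 billion people
human_ops_per_second = world_population  # one calculation per person per second

seconds_needed = k_ops_per_second / human_ops_per_second  # ~1.43 million seconds
days_needed = seconds_needed / (60 * 60 * 24)

print(round(days_needed, 1))  # about 16.5 days, i.e. roughly 17 days
```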
In that November ranking, the K computer proved to be four times faster than the runner-up, a Chinese machine named Tianhe-1A. Developed by the National Supercomputing Center in Tianjin, Tianhe-1A achieved 2.566 petaflops. Third place went to the American supercomputer Jaguar, installed at the Oak Ridge National Laboratory in Tennessee and run by the U.S. Department of Energy, with 1.759 petaflops.
So what’s the secret to this overwhelming speed?
During a recent visit by this reporter, the supercomputer — its hardware complete, though its system software is still under development — looked like rows and rows of tall refrigerators neatly lined up in a huge warehouse. It is made up of 864 fridge-like units called “racks.” Each rack, 80 cm wide, 75 cm deep and 206 cm tall, contains 24 system boards stacked on top of one another and connected by cables. Each system board carries four CPUs (central processing units), the “brains” of a computer. While a normal PC is mounted with just one CPU, K has a total of 82,944 custom-made CPUs.
One of the greatest technological breakthroughs for K was to get these microprocessors — measuring 2.27 cm x 2.26 cm apiece — working efficiently together to achieve the maximum speed, explained Akihiko Fujino, a senior manager at Fujitsu’s Technical Computing Solutions Unit. To that end, the engineers came up with a “six-dimensional” network of CPUs that allows for rapid exchange of data across the system.
Dubbed “Tofu,” the “six-dimensional torus interconnect” creates a number of channels through which data can travel. This can be easily explained using train lines as an analogy: To get from JR Shinjuku Station to JR Tokyo Station, for example, you can take the Yamanote loop line in a clockwise or counterclockwise direction, passing through many stations along the way. But you can get to the destination much faster if you take the Chuo Line, which cuts the loop in half and connects the two stations directly across the loop. K uses a similar concept, connecting different elements of the computer network multidimensionally.
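The loop-line analogy can be sketched in a few lines of code. This is a deliberately simplified toy model — a one-dimensional ring with a single shortcut link, not Fujitsu’s actual Tofu routing — but it shows why extra cross-connections slash the number of hops a message needs:

```python
# Toy illustration of the loop-line analogy: on a plain ring, data must
# travel around the circle; a direct "Chuo Line" chord cuts the path short.
# This is NOT Fujitsu's Tofu routing algorithm, just a simplified sketch.

def ring_hops(a, b, n):
    """Shortest hop count between nodes a and b on a ring of n nodes."""
    d = abs(a - b) % n
    return min(d, n - d)

def hops_with_chord(a, b, n, chord=(0, 15)):
    """Shortest hops when one extra direct link (the chord) is available."""
    direct = ring_hops(a, b, n)
    via1 = ring_hops(a, chord[0], n) + 1 + ring_hops(chord[1], b, n)
    via2 = ring_hops(a, chord[1], n) + 1 + ring_hops(chord[0], b, n)
    return min(direct, via1, via2)

n = 30  # say, 30 stations on a loop line
print(ring_hops(0, 15, n))        # 15 hops: halfway around the ring
print(hops_with_chord(0, 15, n))  # 1 hop: straight across via the chord
```

Tofu applies the same idea in six dimensions at once, so that any two of K’s 82,944 CPUs are only a few hops apart.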
In addition, to save energy and stabilize the machine’s operation, each system board has extensive copper plumbing, with cooling water running through it to keep the CPUs from overheating.
“K’s CPU, which we ourselves developed, has succeeded in striking the right balance between speed and energy efficiency,” Fujino said. “Its electric power consumption is about 60 percent of that of Intel processors (used in many supercomputers).”
Tadashi Watanabe, the K project leader at AICS, also pointed to the building’s unique architectural features that were designed for the computer. In order to dissipate heat from the densely racked system, the computer room uses air cooling, through which cold air blows from under the floor, cools the racks, hits the ceiling and then gets sent downstairs via air passageways in the periphery of the room.
In addition, to connect the 864 racks, 200,000 cables with a total length of more than 1,000 km are used, Watanabe said. In order to efficiently configure the racks and cables, the third floor of the building — a space covering 50 meters by 60 meters — is free of structural pillars. That meant the building itself had to be designed to be extra sturdy. Also, dampers placed in the building’s basement are capable of absorbing shocks from earthquakes with an intensity of Shindo 6 (in the Japanese seismic scale of 0-7), and anti-liquefaction measures have been taken on the man-made Kobe Port Island plot where the institute is located.
But enough about the technicalities of the computer and the site: What will K be able to offer that other machines cannot when it goes fully operational in November?
Engineers, scientists and government officials who have pushed for the K-project all say that supercomputers on the scale of K can greatly advance simulations — the so-called “third science” after theory and experiment.
“While simulations and computational sciences have long been called the third science, we have not been able to accomplish much because of limitations in the computers’ ability,” said Kimihiko Hirao, AICS director and himself a researcher in the field of computational chemistry. “But with K, we can start simulating many things scientifically — like detailed simulations of damage from the Tonankai earthquakes (ones that hit areas along the Pacific Coast all the way from Shizuoka to Kochi every 100 to 150 years). It enables real predictions in many fields.”
The K-computer project, which started in 2006, is the first one backed by the Japanese government since the Earth Simulator project, which won the top spot in the TOP500 list back in 2002, achieving 35.86 teraflops in the same benchmarking program. (1,000 teraflops equal one petaflop.) But while the Earth Simulator, developed by NEC Corp. and installed at the Yokohama center of the Japan Agency for Marine-Earth Science and Technology, was designed specifically for simulating climate change models, the K computer is built to be universal — to accommodate all kinds of simulation needs. It is thus hoped to accelerate the R&D and design of a wide range of products and services, from jet engines and silicon semiconductors to new drugs and tsunami warning systems.
K’s reputation as the world’s No. 1 computer could be short-lived, however. The race for supercomputing supremacy between the United States, Japan and, most recently, China, is intense, and the U.S. “Sequoia” computer — a 20 petaflops machine being developed by IBM and set to be installed at the Lawrence Livermore National Laboratory in California — is poised to overtake K later this year. This comes as no shock to industry experts.
“It is a known fact that the first-place winner in the Top 500 list is normally twice as fast as the winner from the previous year, and even 1,000 times faster than the top machine 10 years ago,” said Naoki Shinjo, another Fujitsu official in charge of system development at the firm’s next-generation technical computing unit. “So it’s only natural that someone else will take the top spot next year.”
Experts even predict the arrival of “exa-scale” computers — which are 1,000 times faster than 1-petaflop-level computers, and 100 times faster than K — by 2018.
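The scaling figures quoted above are internally consistent, as a quick arithmetic check shows: doubling every year compounds to roughly a thousandfold gain per decade, and an exaflop is exactly 100 times K’s 10 petaflops.

```python
# Sanity-checking the growth figures quoted above.
# Doubling each year for 10 years gives the "1,000 times in a decade" rule:
decade_speedup = 2 ** 10
print(decade_speedup)  # 1024, i.e. about 1,000x

# An exa-scale machine runs at 1 exaflop = 1,000 petaflops.
exaflop_in_petaflops = 1000
k_petaflops = 10  # K's target performance of 10 petaflops
print(exaflop_in_petaflops / k_petaflops)  # 100.0 -> 100 times faster than K
```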
Though Japan has yet to decide whether, and how, to take on an exa-scale project, experts say that the greatest technical challenge awaiting any machine surpassing K will be its energy consumption. While it is among the most energy-efficient machines in the TOP500, K still needs 17 megawatts to run, meaning its annual power consumption is equivalent to the combined consumption of some 25,000 average households. And the environmental impact of computing is under increasing scrutiny these days, as seen in a recent controversy over a U.S. scientist’s estimate that two Google searches use enough energy to boil a kettle.
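The household comparison checks out roughly, too. Assuming (this figure is not from the article) that an average household uses on the order of 6,000 kilowatt-hours a year:

```python
# Rough check of the 25,000-household comparison above.
# Assumption (not stated in the article): an average household uses
# roughly 6,000 kWh of electricity per year.
k_power_mw = 17                # K's power draw in megawatts
hours_per_year = 24 * 365      # 8,760 hours

annual_mwh = k_power_mw * hours_per_year   # ~148,920 MWh per year
annual_kwh = annual_mwh * 1000             # ~148.9 million kWh

household_kwh_per_year = 6000  # assumed average household consumption
households = annual_kwh / household_kwh_per_year
print(round(households))  # on the order of 25,000 households
```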
“The biggest bottleneck to the future development of supercomputers is energy,” AICS’s Hirao admitted. “If we are to develop a machine 100 times faster than K, we would still have to keep the level of power consumption at the current level. That means that the new computer will have to be 100 times more energy efficient. That would definitely be the biggest challenge.”