Capping Carbon On Asia’s Supercomputers


Sep 04, 2023



Thoughtful processor design, optimized programming and strong government support are helping to make high performance computing ecosystems more sustainable.

AsianScientist (Apr. 11, 2023) — When Pixar and Disney first shared a glimpse of their 2013 animated film Monsters University, fans quickly noticed one striking detail: the fur covering Sulley, one of the movie's two protagonists, was extremely realistic.

Millions upon millions of tiny strands of fur swayed convincingly whenever the gentle giant moved his limbs, and even ruffled under a tight shirt, just as you’d expect them to. The animators’ secret? A supercomputer that would’ve ranked among the world's fastest at the time, automatically redrawing every single strand of fur and letting it catch and reflect light with each frame of movement.

Ordinary desktop machines don't have the processing prowess to carry out this type of animation—in fact, even the higher-end versions would have had trouble with it. But animators revealed that Sulley and other monsters in the film, along with every texture, shading and frame, owe their crisp, vivid existence to high-performance computing (HPC).

Able to handle billions of calculations with ease, HPC is the same type of technology being leveraged to predict tsunamis, supercharge healthcare innovation and study the origin of supermassive black holes. Combining powerful processors, sophisticated software and other cutting-edge computing technologies, HPC employs thousands of computing nodes working simultaneously to complete extremely complex computing tasks much more quickly than a regular computer can.

There's just one problem. With great computing power comes great energy liability. Even as HPC systems are helping solve some of the most pressing problems in society in the decade since Monsters University, they pose another problem: their massive carbon footprint.

To balance computing power and sustainability, supercomputers across Asia are increasingly being designed with more energy-efficient processors and programming. Meanwhile, governments are waking up to the need for more sustainable energy sources and policies as they shape their growing HPC ecosystems.

One of the major drivers of HPC carbon emissions is their steep energy demands. After all, a reliable and robust stream of energy is needed to support such intense computing power. The HPC system behind Monsters University, for example, comprised 2,000 computers totaling 24,000 cores. Despite this computing power, the movie still took over 100 million CPU hours to fully render. All the while, Pixar's power bill kept racking up.
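For a sense of scale, here is a rough back-of-the-envelope sketch in Python using only the figures above; the assumption that all 24,000 cores ran around the clock is an idealization, not a detail Pixar has disclosed.

```python
# Back-of-the-envelope sketch using the render-farm figures quoted above.
# Assumes all cores ran continuously, which is an idealization; real
# utilization would stretch the wall-clock time even further.

cores = 24_000              # total cores across the 2,000-machine farm
cpu_hours = 100_000_000     # total compute reported for the film

wall_clock_hours = cpu_hours / cores
wall_clock_days = wall_clock_hours / 24

print(f"~{wall_clock_hours:,.0f} hours, or roughly {wall_clock_days:.0f} days of non-stop rendering")
# -> ~4,167 hours, or roughly 174 days of non-stop rendering
```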

Moreover, the Frontier system, the world's most powerful supercomputer as of November 2022, draws more than 20 MW of power for its more than 8.7 million cores. Run continuously for a month, that is enough energy to supply some 52,600 households in Singapore.

All told, just powering the world's top 500 supercomputers emits around two million metric tons of carbon dioxide per year, roughly the annual emissions of 285,000 households.

Plus, any honest accounting of the environmental toll of HPC systems should take stock of the entire ecosystem of technologies that supports them. After all, the computing machines themselves form only one, albeit central, part of the equation.

The majority of the energy that flows into supercomputers is dissipated as heat. To manage temperatures and ensure that the machines continue to work properly, computing facilities employ elaborate cooling mechanisms, which themselves often consume a lot of power.

Another peripheral source of carbon emissions in HPC systems is data. The International Energy Agency estimated that in 2021, data centers worldwide used some 220 to 321 TWh of energy—enough to eclipse the consumption of some countries. Given the world's growing reliance on HPC systems, Professor Tan Tin Wee, chief executive of the National Supercomputing Centre (NSCC) Singapore, predicted that as much as 10 percent of the world's energy consumption will come from data center operations in the future. "Energy consumption will be a huge problem," Tan told Supercomputing Asia.

A major solution for the steep energy costs of HPC systems is to maximize computing energy efficiency, explained Professor Satoshi Matsuoka, director of the RIKEN Center for Computational Science, in an interview with Supercomputing Asia. The goal, he said, should be to keep power consumption at the lowest possible level while also finding ways to achieve better performance.

RIKEN is home to the Fugaku supercomputer, developed by Japanese company Fujitsu. From its debut in 2020, Fugaku led the TOP500 list of the world's fastest supercomputers until it was dethroned by Frontier in June 2022. Even so, it remains a solid contender for the world's most powerful, and most energy-efficient, supercomputer, particularly when looking at its real-use conditions.

According to Matsuoka, much of what underpins Fugaku's power is thoughtful, purposeful design. "First, we had to design it efficiently," he said, noting that because they knew that the supercomputer would be used for sustainability research, they specifically built its parts to achieve peak computing performance while doing away with other extraneous functions. "The machine was built with a mindset to save power."

The heart of Fugaku—and largely responsible for its extreme energy efficiency—is the A64FX processor, which was also developed by Fujitsu.

A single A64FX chip contains 48 computing cores divided across four core memory groups (CMGs), and each CMG can also contain one additional core that functions as an assistant. In processor parlance, a core is a small processing unit that can perform computing tasks independently of other cores. The vast majority of computer users are well served by machines that have two or four cores; the A64FX dials up its performance by having 48.

Each core of the A64FX has a clock speed of 1.8 to 2.2 GHz, which means that every single core can complete 1.8 billion to 2.2 billion cycles per second. Some simpler computing tasks can be completed within one cycle, while more complex instructions take multiple cycles. Though it is a simplification, higher clock speeds typically translate to better computing performance.
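As a toy illustration of what those numbers mean in practice, consider an instruction that takes four cycles to complete; the cycles-per-instruction figure below is purely illustrative, not an A64FX specification.

```python
# Toy illustration of how clock speed relates to throughput. The
# cycles-per-instruction value is a hypothetical example, not an A64FX spec.

clock_hz = 2.2e9            # 2.2 GHz: 2.2 billion cycles per second per core
cycles_per_instruction = 4  # assumed cost of a moderately complex instruction

instructions_per_second = clock_hz / cycles_per_instruction
print(f"{instructions_per_second / 1e6:,.0f} million such instructions per second, per core")
# -> 550 million such instructions per second, per core
```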

Matsuoka noted that aside from Fugaku's processor, the network itself is highly efficient. Where commercial network cards use up 25 to 30 W per node, Fugaku's Ethernet-over-copper networks use 10 to 20 W per node.

The design for Fugaku also includes precise power control features for users. Whereas most systems operate with all compute nodes powered on at the same time, Fugaku can be configured to run only the parts relevant to a certain task. “It contributes to significant savings in terms of power usage,” said Matsuoka.

These features, along with other engineering innovations, have allowed the Fugaku supercomputer to break performance and power-saving barriers. Compared with the K computer, an earlier Fujitsu supercomputer that was decommissioned in 2019, Matsuoka estimated that Fugaku is about 70 times more powerful in terms of real-use performance. "But power consumption only went up by maybe 20 to 30 percent," he explained. "Thus, compared to its predecessor, the power efficiency of Fugaku is nearly a factor of 50."
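Those figures check out arithmetically: dividing the gain in performance by the increase in power gives the improvement in efficiency. A quick sketch, using only the numbers quoted above:

```python
# Quick check of the comparison above: roughly 70x the real-use performance
# of the K computer for roughly 1.2x to 1.3x the power consumption.

performance_gain = 70
power_increase_low, power_increase_high = 1.2, 1.3

print(f"Efficiency gain: {performance_gain / power_increase_high:.0f}x to "
      f"{performance_gain / power_increase_low:.0f}x")
# -> Efficiency gain: 54x to 58x, broadly consistent with the factor of 50 Matsuoka cites
```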

Peak efficiency is also the objective of MN-3, a supercomputer developed by the Japanese company Preferred Networks, in collaboration with Kobe University.

In fact, despite Fugaku's incredible numbers, MN-3 comfortably eclipses it in terms of energy efficiency. According to Fujitsu's own numbers, for every watt of power, Fugaku can carry out around 15 billion calculations per second. With the same power draw, MN-3 can perform almost 41 billion, more than double the efficiency.
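These per-watt figures are essentially the metric behind the Green500 ranking described in the next paragraph: sustained performance divided by power draw. A minimal sketch of the comparison, using only the numbers quoted above:

```python
# Energy efficiency expressed as calculations per watt, using the figures above.

fugaku_calcs_per_watt = 15e9   # ~15 billion calculations per second, per watt
mn3_calcs_per_watt = 41e9      # ~41 billion calculations per second, per watt

ratio = mn3_calcs_per_watt / fugaku_calcs_per_watt
print(f"MN-3 is about {ratio:.1f}x as energy-efficient as Fugaku")
# -> about 2.7x
```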

This impressive statistic has consistently placed the MN-3 among the world's most efficient supercomputers, according to the Green500, a biannual ranking that lists machines in terms of energy efficiency. The MN-3 clinched the top spot in the November 2021, June 2021 and June 2020 lists.

"MN-3 is currently powered by 128 MN-Core processors and 1,536 Intel Xeon CPUs. It consists of 32 nodes with 4 MN-Core processors in each," explained Dr Yusuke Doi, vice president of computing infrastructure at Preferred Networks, in an interview with Supercomputing Asia.

However, "the key reason why MN-3 topped the Green500 list three times is precisely that it uses MN-Core, which is specialized for the matrix calculation required for deep learning, instead of GPUs," he added.

MN-Core is an accelerator designed with a hierarchical architecture and comes in a four-die package. Each die has four level-two blocks, which are further divided into eight level-one blocks. In turn, level-one blocks house 16 matrix arithmetic blocks, which themselves contain four processing elements each.

At each level, each block is connected with unique on-chip networks, which can broadcast, aggregate or collect data at every hierarchical level. Different parts of a large dataset can be distributed to different parts of the block, which allows highly efficient processing and computing.
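Multiplying through that hierarchy gives a sense of just how much parallelism sits in a single package; a short sketch based on the counts in the paragraphs above:

```python
# Tallying the MN-Core hierarchy described above: a four-die package, each die
# holding 4 level-two blocks, each of those 8 level-one blocks, each of those
# 16 matrix arithmetic blocks containing 4 processing elements apiece.

dies = 4
l2_per_die = 4
l1_per_l2 = 8
mabs_per_l1 = 16
pes_per_mab = 4

total_mabs = dies * l2_per_die * l1_per_l2 * mabs_per_l1
total_pes = total_mabs * pes_per_mab

print(f"{total_mabs:,} matrix arithmetic blocks and {total_pes:,} processing elements per package")
# -> 2,048 matrix arithmetic blocks and 8,192 processing elements per package
```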

Preferred Networks also employed software optimizations that unlocked the full potential of MN-Core's hardware and helped push MN-3's energy efficiency numbers even higher.

In particular, the company came up with the MN-Core Compiler, a program that translates high-level computer code into another, more machine-friendly language. It was designed with two main goals: to minimize the need for user-side modifications and to maximize MN-Core's features to achieve peak computing performance.

Specifically, the compiler had to figure out the optimal way of mapping out computations to each compute unit in the MN-Core's hierarchical structure. Since the accelerator uses only a single instruction stream, the program also had to ensure a steady flow of data to push performance as close as possible to its theoretical max.

The end result is software that has strong control over hardware and can dictate how calculations will be carried out to achieve maximum efficiency. "In MN-Core, what's conventionally decided and processed within the hardware automatically is exposed to the software side, and the software can manually control details of the computation in the hardware in a ‘manual mode’ to optimize energy consumption," Doi explained.

This reflects Preferred Networks’ core philosophy: realizing hardware's true promise through smart software design. "As long as they are properly controlled by the software, it can unleash silicon's true potential," Doi said.

Despite the industry-transforming sustainability efforts of companies like Fujitsu and Preferred Networks, some crucial factors remain beyond the power of private entities.

For instance, in evaluating the carbon emission toll of a supercomputer, looking at how much energy it uses or how efficiently it can carry out calculations isn't enough. It's also important to factor in the host country's energy mix: HPC systems in countries powered mostly by renewable energy will be more sustainable than those in territories still reliant on fossil fuels. That is why, Matsuoka shared, part of Fugaku's mission is to help Japan develop its offshore wind and solar energy generation. But not every country is able to keep pace.

According to Singapore's Energy Market Authority (EMA), some 95 percent of the country's electricity comes from natural gas, the cleanest-burning fossil fuel but nevertheless a carbon-intensive source. EMA expects Singapore to continue relying on natural gas for the foreseeable future, even as the country searches for and invests in more sustainable alternatives, like solar energy.

Aside from developing cleaner sources of energy, governments also have the power to shape their countries’ HPC ecosystems, coming up with policies that could help them meet consumer and industry demands while also keeping them in line with emission targets.

In Japan, for example, the government has announced substantial subsidies to help data centers make sustainable upgrades to their facilities. The country is also considering concentrating these power-hungry centers in the colder regions of the country, which could help cut back on electricity needs for cooling systems.

Meanwhile, the Singapore government suspended the approval and construction of new data centers in 2019, pointing to their 350 MW power footprint. The moratorium, lifted in 2022, gave officials time to draw up new guiding principles moving forward.

Under the new rules, only facilities that pass stringent international standards, employ best-in-class energy efficiency technologies and present clear plans to integrate renewables and other innovative energy pathways into operations will be certified. These measures will help Singapore balance the growing need for data centers with the need to respond to the urgent climate crisis.

However, technologies and circumstances are ever-evolving. What may be best-in-class today could be ineffective tomorrow; carbon targets this year could be insufficient the next. In the face of these uncertainties, Singapore has set a good precedent for itself, and a good example for the rest of Asia: push the pause button, take stock of what it has and what it lacks, and chart a better way forward.

As for NSCC, a government-funded supercomputing facility, its chief executive Professor Tan Tin Wee pointed out that their role is to lead by example. Over the past seven years, his team has pioneered cheaper and more efficient cooling techniques that have lowered the energy consumption of their HPC systems—a crucial endeavor for supercomputing in a tropical country. "We can keep trying out new things, which commercial data centers do not have the luxury of doing," Tan explained. "If we can show others that we can do it, then the rest of the community can follow."

These techniques have been applied to the NSCC's newest supercomputer, the ASPIRE 2A. Designed based on lessons from ASPIRE 1, the ASPIRE 2A has a PUE—or power usage effectiveness, a metric used for measuring a data center's energy efficiency—of close to 1.08. Typical data centers in the region have a PUE of 2.
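PUE is simply the total energy drawn by a facility divided by the energy delivered to the computing equipment itself, so a value near 1 means almost nothing is lost to cooling and other overhead. Here is a minimal sketch of what that difference means in practice; the 10 MW IT load is a hypothetical figure chosen for illustration, not an ASPIRE 2A specification.

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The 10 MW IT load below is hypothetical, chosen only to illustrate the gap.

it_load_mw = 10.0

for name, pue in [("PUE 1.08 facility", 1.08), ("PUE 2.0 facility", 2.0)]:
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw
    print(f"{name}: {total_mw:.1f} MW total, {overhead_mw:.1f} MW of cooling and other overhead")
# -> 10.8 MW vs 20.0 MW total for the same useful computing load
```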

Already, these innovations have been getting some much-deserved recognition. The NUS-NSCC i4.0 Data Centre, which houses the ASPIRE 2A, received the Building & Construction Authority (BCA) Platinum Green Mark Award for Data Centres in 2021 and the W.Media Southeast Asia Cloud & Datacenter (DC) Award for Energy Efficient Innovation in 2022.

To keep improving their systems’ energy efficiency, the NSCC also runs simulations of their own supercomputers. In this way, Tan said, "Supercomputers are not just a contributor, but a solution to the problem itself."

Whether HPC systems are used to make the most realistic animated monsters or push the bleeding edge of scientific knowledge, it is important to make sure that their emissions fall in line with the planet's sustainability targets. Asia's innovations in processors, programming and policies have shown that this is possible.

This article was first published in the print version of Supercomputing Asia, January 2023.

Copyright: Asian Scientist Magazine.

Disclaimer: This article does not necessarily reflect the views of AsianScientist or its staff.


Tristan Manalac is an independent science writer based in Metro Manila, with around seven years of experience writing about medicine, biotech and the environment. Formally trained in molecular biology, he once dreamed of collecting degrees and starting his own lab. But these days, he finds his greatest joy in a bottle of beer and a beautiful sentence.
