
Energy Efficiency Key European HPC Strength

October 18, 2016

Member Spotlight


Dr. David Brayford is a scientific HPC consultant at the Leibniz Supercomputing Centre (LRZ) with extensive experience in HPC, 3D computer graphics (including device driver development and physics-based photorealistic rendering), and scientific and medical software development. At LRZ, Dr. Brayford develops highly parallel software applications, including performance analysis and tuning, and has worked on automatic tuning of HPC applications for energy efficiency for many years.

Since power in Germany costs at least 3x what it does in the US, LRZ is always looking for efficiencies around power usage. The US national labs now pay close attention as well, but LRZ started early: energy costs have traditionally been very high in Germany, and LRZ realized the next generation of systems would only exacerbate the issue.

What is your experience or background in HPC?

I had a long background in HPC, computer graphics and software development before coming to the Leibniz-Rechenzentrum (LRZ) in 2012. I started in graphics in the 1990s in Bristol, UK, developing a fully programmable GPU based on Single Instruction Multiple Data (SIMD) technology. I received my PhD in computer science at The University of Manchester, did a post-doc at the University of Utah in the Scientific Computing and Imaging group, and continued working in the US as a software developer for several companies, including BioFire, where I developed gene-sequencing software for “bio surveillance”, and GE, where I developed software used in spinal surgery. I relocated to Munich in 2012 and have been working on various EU projects, including Mont-Blanc, which develops supercomputing technologies based on ARM cell phone chips, and the AutoTune project, which automatically tuned HPC applications for a sweet spot of energy efficiency and performance by running CPUs at lower clock frequencies.

So I’m a “software guy” through and through and that’s why I’m working on OpenHPC at LRZ!
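To make the AutoTune idea concrete, here is a minimal sketch of that kind of frequency tuning, not AutoTune itself: it steps a core through its available clock frequencies via the Linux cpufreq sysfs interface and times a toy workload at each setting. It assumes root access and a cpufreq driver that exposes the “userspace” governor and an available-frequencies list (intel_pstate, for example, does not); the workload is a stand-in for a real application.

```python
import time

CPU = 0  # tune only the first core in this toy example
CPUFREQ = f"/sys/devices/system/cpu/cpu{CPU}/cpufreq"

def read(path):
    with open(path) as f:
        return f.read().strip()

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

def workload():
    # Stand-in compute kernel; a real auto-tuner runs the actual application.
    return sum(1.0 / i for i in range(1, 2_000_000))

# Switch to the 'userspace' governor so frequencies can be pinned directly.
write(f"{CPUFREQ}/scaling_governor", "userspace")

freqs = sorted(int(f) for f in
               read(f"{CPUFREQ}/scaling_available_frequencies").split())

for freq in freqs:
    write(f"{CPUFREQ}/scaling_setspeed", str(freq))  # value is in kHz
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    # A real tuner would also read an energy counter (e.g. Intel RAPL)
    # and pick the frequency that minimizes energy-to-solution.
    print(f"{freq / 1000:.0f} MHz: {elapsed:.3f} s")
```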

Please tell us about LRZ’s mission

LRZ has long-standing expertise in security, network technologies, IT management, stable and highly energy-efficient IT operations, data archiving, and high-performance and grid computing, and is internationally known for its research in these areas.

LRZ is the IT service provider for Munich’s universities and a growing number of publicly funded scientific institutions in the greater Munich area and in the state of Bavaria. Furthermore, as a member of the Gauss Centre for Supercomputing (GCS), the alliance of the three national supercomputing centres in Germany (JSC Jülich, HLRS Stuttgart and LRZ Garching), LRZ is a national and European supercomputing centre.

The competencies of the more than 170 LRZ employees are grounded in many years of extensive operational experience, research on the latest IT topics, and close cooperation with manufacturers and scientific clients. With the help of partner initiatives, LRZ is opening new avenues of cooperation between computer scientists and IT service providers on one side, and scientists in key domains such as astrophysics, life sciences, geology and environmental sciences on the other.

Why is LRZ participating in OpenHPC?

There are at least three very important reasons LRZ is strongly supportive of OpenHPC.

To begin with, even though LRZ is a German supercomputing centre, we have finite resources for software development; LRZ ends up being more a consumer of software than a creator. Depending on the project, an HPC system can easily involve 100+ software packages, and whenever we install a new package or apply an update, we run into the problem of dependencies.

Also, LRZ is internationally known for its leading-edge research in energy-efficient HPC, which is usually closely linked to hardware. If there is more common ground through a standardized software stack such as the one OpenHPC is developing, new energy-efficiency technologies can be made available to the HPC community quickly, thanks to unified APIs.
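One concrete example of the kind of unified interface that helps here: on Linux, energy counters are commonly exposed through the powercap sysfs tree (Intel RAPL), which tooling can read in a uniform way. The sketch below is illustrative rather than part of any OpenHPC API; it samples package-level energy over one second, using the standard powercap paths, though whether they exist depends on the hardware and kernel.

```python
import time
from pathlib import Path

# Standard Linux powercap sysfs tree for Intel RAPL energy counters.
RAPL_ROOT = Path("/sys/class/powercap")

def read_energy_uj():
    """Return {domain_name: energy in microjoules} for top-level RAPL domains."""
    readings = {}
    for domain in RAPL_ROOT.glob("intel-rapl:[0-9]"):
        name = (domain / "name").read_text().strip()
        readings[name] = int((domain / "energy_uj").read_text())
    return readings

# Sample the counters over a short interval to estimate energy use.
before = read_energy_uj()
time.sleep(1.0)
after = read_energy_uj()

for name, start_uj in before.items():
    # Counters wrap around; a robust tool would account for max_energy_range_uj.
    joules = (after[name] - start_uj) / 1e6
    print(f"{name}: {joules:.2f} J over 1 s")
```

A site tool built on such an interface can stay the same across machines even as the hardware underneath changes, which is the practical benefit of a unified API.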

For HPC, it’s clearly never “one size fits all.” With the variety of software packages installed across multiple systems, each solving unique problems in stable computing environments with a wide range of sysadmins, application developers, hardware specialists and more, practically every system ends up being unique.

So we need some unifying mechanism or framework to significantly cut the time and effort of building new systems. If OpenHPC is successful, it would reduce the amount of work we have to do building unique systems and save a considerable amount of time setting up a new one. It probably wouldn’t be used on current systems, but it would be extremely helpful when building the next generation of large systems, three years out. We could focus on solving the real problems.

Also, collaboration from an early stage is important in the research we do. If a vendor shows up with a new, supposedly useful framework but we haven’t been involved in building it, our specific issues might not be well covered. Getting in at the beginning of the collaborative development process is extremely important.

And OpenHPC would also likely help broaden collaboration with vendors and other HPC centres. For a large supercomputing site like LRZ in Germany, physically distant from some major US vendors and HPC centres, this could be quite helpful. When we cooperate with vendors, HPC centres and important users, we don’t often have everyone in the same room. We attend Supercomputing and other events every year, but OpenHPC could potentially provide a mechanism for interacting with all the vendors, HPC centres and important users at once. That would be extremely useful.

What do you do on weekends?

I’m currently training for a half marathon! But I’m a little bit lazy, so maybe this isn’t the perfect hobby; I’m still looking around a little. [laughs] I used to do a lot of skiing. I lived in Utah in the US, with easy access to the Rocky Mountains, and could go most weekends in the winter. The conditions are so good there that I even skied on the Fourth of July once; not many people can say that!