Immersion cooling to offset data centers’ massive power demands gains a big booster in Microsoft

LiquidStack does it. So does Submer. They’re both dropping servers carrying sensitive data into goop in an effort to save the planet. Now they’re joined by one of the biggest tech companies in the world in their efforts to improve the energy efficiency of data centers, because Microsoft is getting into the liquid-immersion cooling market.

Microsoft is using a liquid it developed in-house that’s engineered to boil at 122 degrees Fahrenheit (50 degrees Celsius), well below the boiling point of water, to act as a heat sink, reducing the temperature inside the servers so they can operate at full power without any risk of overheating.

The vapor from the boiling fluid is converted back into a liquid through contact with a cooled condenser in the lid of the tank that stores the servers.
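
To get a rough feel for why a boiling liquid makes such an effective heat sink, here is a minimal sketch in Python. The latent-heat figure is an assumed, order-of-magnitude value typical of engineered dielectric fluids; Microsoft has not published its coolant’s properties, so treat the numbers as illustrative only.

```python
# Rough estimate of the heat a boiling dielectric fluid carries away.
# The latent heat below is an assumed order-of-magnitude value, not
# the actual specification of Microsoft's in-house coolant.

LATENT_HEAT_J_PER_KG = 1.0e5  # ~100 kJ/kg, assumed

def heat_removed_kw(boil_off_kg_per_s: float) -> float:
    """Heat absorbed by vaporizing fluid, in kilowatts."""
    return boil_off_kg_per_s * LATENT_HEAT_J_PER_KG / 1000.0

# Boiling off 1 kg of fluid per second would absorb on the order of 100 kW;
# the condenser in the tank lid then pulls that heat back out of the vapor.
print(f"{heat_removed_kw(1.0):.0f} kW per kg/s of boil-off")
```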

“We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development in Redmond, Washington, in a statement on the company’s internal blog. 

While that claim may be true, liquid cooling is a well-known approach to moving heat around to keep systems working. Cars use liquid cooling to keep their motors humming as they head out on the highway.

As technology companies confront the physical limits of Moore’s Law, the demand for faster, higher-performance processors means designing new architectures that can handle more power, the company wrote in a blog post. Power flowing through central processing units has increased from 150 watts to more than 300 watts per chip, and the GPUs responsible for much of the Bitcoin mining, artificial intelligence applications and high-end graphics each consume more than 700 watts per chip.

It’s worth noting that Microsoft isn’t the first tech company to apply liquid cooling to data centers and the distinction that the company uses of being the first “cloud provider” is doing a lot of work. That’s because bitcoin mining operations have been using the tech for years. Indeed, LiquidStack was spun out from a bitcoin miner to commercialize its liquid immersion cooling tech and bring it to the masses.

“Air cooling is not enough”

More power flowing through the processors means hotter chips, which means the need for better cooling or the chips will malfunction.

“Air cooling is not enough,” said Christian Belady, vice president of Microsoft’s datacenter advanced development group in Redmond, in an interview for the company’s internal blog. “That’s what’s driving us to immersion cooling, where we can directly boil off the surfaces of the chip.”

For Belady, the use of liquid cooling technology brings the density and compression of Moore’s Law up to the datacenter level.

The results, from an energy consumption perspective, are impressive. Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI and found that two-phase immersion cooling reduced power consumption for any given server by 5 percent to 15 percent (every little bit helps).
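
To put that range in perspective, here is a back-of-the-envelope sketch in Python. The 10-megawatt facility size is a hypothetical figure chosen for illustration, not one Microsoft has disclosed.

```python
# Back-of-the-envelope: what a 5-15% per-server saving could mean at scale.
# The 10 MW IT load is an assumed, illustrative figure, not Microsoft's.

FACILITY_IT_LOAD_MW = 10.0   # hypothetical datacenter IT load
HOURS_PER_YEAR = 8760

for saving in (0.05, 0.15):
    saved_mwh = FACILITY_IT_LOAD_MW * saving * HOURS_PER_YEAR
    print(f"{saving:.0%} saving -> roughly {saved_mwh:,.0f} MWh per year")
```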

Meanwhile, companies like Submer claim their technology reduces energy consumption by 50%, cuts water use by 99% and takes up 85% less space.

For cloud computing companies, the ability to keep these servers up and running during spikes in demand, when they’d consume even more power, adds flexibility and helps ensure uptime even when servers are overtaxed, according to Microsoft.

“[We] know that with Teams when you get to 1 o’clock or 2 o’clock, there is a huge spike because people are joining meetings at the same time,” Marcus Fontoura, a vice president on Microsoft’s Azure team, said on the company’s internal blog. “Immersion cooling gives us more flexibility to deal with these burst-y workloads.”

At this point, data centers are a critical component of the internet infrastructure that much of the world relies on for… well… pretty much every tech-enabled service. That reliance, however, has come at a significant environmental cost.

“Data centers power human advancement. Their role as a core infrastructure has become more apparent than ever and emerging technologies such as AI and IoT will continue to drive computing needs. However, the environmental footprint of the industry is growing at an alarming rate,” Alexander Danielsson, an investment manager at Norrsken VC noted last year when discussing that firm’s investment in Submer.

Solutions under the sea

If submerging servers in experimental liquids offers one potential solution to the problem, then sinking them in the ocean is another way that companies are trying to cool data centers without expending too much power.

Microsoft has already been operating an undersea data center for the past two years. The company actually trotted out the tech last year as part of a push to aid in the search for a COVID-19 vaccine.

These pre-packed, shipping container-sized data centers can be spun up on demand and run deep under the ocean’s surface for sustainable, high-efficiency and powerful compute operations, the company said.

The liquid cooling project most closely resembles Microsoft’s Project Natick, which is exploring the potential of underwater datacenters that are quick to deploy and can operate for years on the seabed, sealed inside submarine-like tubes, without any onsite maintenance by people.

In those data centers, a nitrogen atmosphere takes the place of an engineered fluid, and the servers are cooled with fans and a heat exchanger that pumps seawater through a sealed tube.

Startups are also staking claims to cool data centers out on the ocean (the seaweed is always greener in somebody else’s lake).

Nautilus Data Technologies, for instance, has raised over $100 million (according to Crunchbase) to develop data centers dotting the surface of Davy Jones’ locker. The company is currently developing a data center project co-located with a sustainable energy project in a tributary near Stockton, Calif.

With the two-phase immersion cooling tech, Microsoft is hoping to bring the benefits of ocean cooling onto the shore. “We brought the sea to the servers rather than put the datacenter under the sea,” Microsoft’s Alissa said in a company statement.

Ioannis Manousakis, a principal software engineer with Azure (left), and Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development (right), walk past a container at a Microsoft datacenter where computer servers in a two-phase immersion cooling tank are processing workloads. Photo by Gene Twedt for Microsoft.


By Jonathan Shieber

NUVIA raises $240M from Mithril to make climate-ready enterprise chips

Climate change is on everyone’s minds these days, what with the outer Bay Area on fire, orange skies above San Francisco, and a hurricane season that is bearing down on the East Coast with alacrity (and that’s just the United States in the past two weeks).

A major — and growing — source of those emissions is data centers, the cloud infrastructure that powers most of our devices and experiences. That’s led to some novel ideas, such as Microsoft’s underwater data center Project Natick, which just came back to the surface for testing a bit more than a week ago.

Yet, for all the fun experiments, there is a more obvious solution: just make the chips more energy efficient.

That’s the thesis of NUVIA, which was founded by three ex-Apple chip designers who led the design of the “A” series chip line for the company’s iPhones and iPads for years. Those chips are wicked fast within a very tight energy envelope, and NUVIA’s premise is essentially what happens when you take those sorts of energy constraints (and the experience of its chip design team) and apply them to the data center.

We did a deep profile of the company last year when it announced its $53 million Series A, so definitely read that to understand the founding story and the company’s mission. Now, about a year later, it’s back with news of a whole lot more funding.

NUVIA announced today that it has closed on a $240 million Series B round led by Mithril Capital, with a bunch of others involved listed below.

Since we last chatted with the company, we now have a bit more detail of what it’s working on. It has two products under development, a system-on-chip (SoC) unit dubbed “Orion” and a CPU core dubbed “Phoenix.” The company previewed a bit of Phoenix’s performance last month, although as with most chip companies, it is almost certainly too early to make any long-term predictions about how the technology will settle in with existing and future chips coming to the market.

NUVIA’s view is that chips are limited to about 250-300 watts of power given the cooling and power constraints of most data centers. As more cores become common per chip, each core is going to have to make do with less power while maintaining performance. NUVIA’s tech is trying to solve that problem, lowering total cost of ownership for data center operators while also improving overall energy efficiency.
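
A quick way to see the squeeze NUVIA is describing is to hold the socket’s power budget fixed and divide it across a growing core count, as in the minimal Python sketch below; the 300-watt budget reflects the range cited above, while the core counts are arbitrary, illustrative figures rather than NUVIA design points.

```python
# Fixed socket power budget split across an increasing number of cores.
# 300 W reflects the 250-300 W range cited above; the core counts are
# arbitrary examples, not NUVIA design points.

SOCKET_BUDGET_W = 300.0

for cores in (32, 64, 96, 128):
    per_core_w = SOCKET_BUDGET_W / cores
    print(f"{cores:3d} cores -> {per_core_w:4.1f} W per core")
```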

There’s a lot more work to be done, of course, so expect to see more product announcements and previews from the company as it gets its technology closer to final. With $240 million more in the bank, though, it certainly has the resources to make some progress.

Shortly after we chatted with the company last year, Apple sued company founder and CEO Gerard Williams III for breach of contract, arguing that its former chip designer was trying to poach employees for his nascent startup. Williams countersued earlier this year, and the two parties are now in the discovery phase of the lawsuit, which remains ongoing.

In addition to lead investor Mithril, the round was done “in partnership with” the founders of semiconductor giant Marvell (Sehat Sutardja and Weili Dai), funds managed by BlackRock, Fidelity and Temasek, plus Atlantic Bridge and Redline Capital, along with Series A investors Capricorn Investment Group, Dell Technologies Capital, Mayfield, Nepenthe LLC and WRVI Capital.


By Danny Crichton

Google will soon open a cloud region in Poland

Google today announced its plans to open a new cloud region in Warsaw, Poland to better serve its customers in Central and Eastern Europe.

This move is part of Google’s overall investment in expanding the physical footprint of its data centers. Only a few days ago, after all, the company announced that, in the next two years, it would spend $3.3 billion on its data center presence in Europe alone.

Google Cloud currently operates 20 different regions with 61 availability zones. Warsaw, like most of Google’s regions, will feature three availability zones and launch with all the standard core Google Cloud services, including Compute Engine, App Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner, and BigQuery.

To launch the new region in Poland, Google is partnering with Domestic Cloud Provider (a.k.a. Chmury Krajowej, which itself is a joint venture of the Polish Development Fund and PKO Bank Polski). Domestic Cloud Provider (DCP) will become a Google Cloud reseller in the country and build managed services on top of Google’s infrastructure.

“Poland is in a period of rapid growth, is accelerating its digital transformation, and has become an international software engineering hub,” writes Google Cloud CEO Thomas Kurian. “The strategic partnership with DCP and the new Google Cloud region in Warsaw align with our commitment to boost Poland’s digital economy and will make it easier for Polish companies to build highly available, meaningful applications for their customers.”

By Frederic Lardinois

Atlassian puts its Data Center products into containers

It’s KubeCon + CloudNativeCon this week and in the slew of announcements, one name stood out: Atlassian. The company is best known as the maker of tools that allow developers to work more efficiently, not as a cloud infrastructure provider. In this age of containerization, though, even Atlassian can bask in the glory that is Kubernetes, because the company today announced that it is launching Atlassian Software in Kubernetes (ASK), a new solution that allows enterprises to run and manage its on-premises applications like Jira Data Center as containers, with the help of Kubernetes.

To build this solution, Atlassian partnered with Praqma, a continuous delivery and DevOps consultancy. It’s also making ASK available as open source.

As the company admits in today’s announcement, running a Data Center application and ensuring high availability can be a lot of work using today’s methods. With ASK and by containerizing the applications, scaling and management should become easier, and downtime more avoidable.

“Availability is key with ASK. Automation keeps mission-critical applications running whatever happens,” the company explains. “If a Jira server fails, Data Center will automatically redirect traffic to healthy servers. If an application or server crashes Kubernetes automatically reconciles by bringing up a new application. There’s also zero downtime upgrades for Jira.”

ASK handles the scaling and most admin tasks, in addition to offering a monitoring solution based on the open-source Grafana and Prometheus projects.

Containers are slowly becoming the distribution medium of choice for a number of vendors. As enterprises move their existing applications to containers, it makes sense for them to also expect to manage their existing on-premises applications from third-party vendors in the same systems. For some vendors, that may mean a shift away from per-server licensing to per-seat licensing, so there are business implications, but in general, it’s a logical move for most.


By Frederic Lardinois

Microsoft launches 2 new Azure regions in Australia

Microsoft continues its steady pace of opening up new data centers and launching new regions for its Azure cloud. Today, the company announced the launch of two new regions in Australia. To deliver these new regions, Azure Australia Central and Central 2, Microsoft entered a strategic partnership with Canberra Data Centers and unsurprisingly, the regions are located in the country’s capital territory around Canberra. These new central regions complement Microsoft’s existing data center presence in Australia, which previously focused on the business centers of Sydney and Melbourne.

Given the location in Canberra, it’s also no surprise that Microsoft is putting an emphasis on its readiness for handling government workloads on its platform. Throughout its announcement, the company also emphasizes that all of its Australia data centers are also the right choice for its customers in New Zealand.

Julia White, Microsoft corporate VP for Azure, told me last month that the company’s strategy around its data center expansion has always been about offering a lot of small regions to allow it to be close to its customers (and, in return, to allow its customers to be close to their own customers, too). “The big distinction is the number of regions we have,” White said. “Microsoft started its infrastructure approach focused on enterprise organizations and built lots of regions because of that. We didn’t pick this regional approach because it’s easy or because it’s simple, but because we believe this is what our customers really want.”

Azure currently consists of 50 available or announced regions. Over time, more of these regions will also feature multiple availability zones, though for now, this recently announced feature is only present in two regions.

Google expands its Cloud Platform region in the Netherlands

Google today announced that it has expanded its recently launched Cloud Platform region in the Netherlands with an additional zone. The investment, which is reportedly worth 500 million euros, expands the existing Netherlands region from two to three zones. With this, all four of the central European Google Cloud Platform regions now feature three zones (which are akin to what AWS would call “availability zones”) that allow developers to build highly available services across multiple data centers.

Google typically aims to have at least three zones in every region, so today’s announcement to expand its region in the Dutch province of Groningen doesn’t come as a major surprise.

With this move, Google is also making Cloud Spanner, Cloud Bigtable, Managed Instance Groups and Cloud SQL available in the region.

Over the course of the last two years, Google has worked hard to expand its global data center footprint. While it still can’t compete with the likes of AWS and Azure, both of which offer more regions, the company now has enough of a presence to be competitive in most markets.

In the near future, Google also plans to open regions in Los Angeles, Finland, Osaka and Hong Kong. The major blank spots on its current map remain Africa, China (for rather obvious reasons) and Eastern Europe, including Russia.