Immersion cooling to offset data centers’ massive power demands gains a big booster in Microsoft

LiquidStack does it. So does Submer. They’re both dropping servers carrying sensitive data into goop in an effort to save the planet. Now they’re joined in their efforts to improve the energy efficiency of data centers by one of the biggest tech companies in the world: Microsoft is getting into the liquid-immersion cooling market.

Microsoft is using a liquid it developed in-house that’s engineered to boil at 122 degrees Fahrenheit (lower than the boiling point of water) to act as a heat sink, reducing the temperature inside the servers so they can operate at full power without any risks from overheating.

The vapor from the boiling fluid is converted back into a liquid through contact with a cooled condenser in the lid of the tank that stores the servers.
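For a rough sense of the physics, here’s a back-of-the-envelope sketch. The article doesn’t give the fluid’s latent heat of vaporization, so the value below is an assumed, typical figure for engineered fluorocarbon coolants, and the 700-watt number is the per-chip GPU power draw cited later in this story.

```python
# Back-of-the-envelope: how much fluid must boil off to absorb a chip's heat?
# ASSUMPTION: latent heat of vaporization ~100 kJ/kg, a typical value for
# engineered fluorocarbon coolants; the exact spec of Microsoft's in-house
# fluid has not been published.
LATENT_HEAT_J_PER_KG = 100_000   # assumed latent heat of vaporization (J/kg)
CHIP_POWER_W = 700               # per-chip GPU power draw cited in the article

# Boiling absorbs energy, so the mass boiled per second is power / latent heat.
boil_rate_kg_per_s = CHIP_POWER_W / LATENT_HEAT_J_PER_KG
print(f"{boil_rate_kg_per_s * 1000:.0f} g of fluid vaporized per second per chip")
# ~7 g/s; the vapor recondenses on the cooled lid, so no fluid is consumed.
```

The point of boiling the fluid rather than merely warming it is that vaporization absorbs far more energy per kilogram than a few degrees of temperature rise would.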

“We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development in Redmond, Washington, in a statement on the company’s internal blog. 

While that claim may be true, liquid cooling is a well-known approach to moving heat out of a system to keep it working. Cars use liquid cooling to keep their motors humming as they head out on the highway.

As technology companies confront the physical limits of Moore’s Law, the demand for faster, higher-performance processors means designing new architectures that can handle more power, the company wrote in a blog post. Power flowing through central processing units has increased from 150 watts to more than 300 watts per chip, and the GPUs responsible for much of Bitcoin mining, artificial intelligence applications and high-end graphics each consume more than 700 watts per chip.

It’s worth noting that Microsoft isn’t the first tech company to apply liquid cooling to data centers and the distinction that the company uses of being the first “cloud provider” is doing a lot of work. That’s because bitcoin mining operations have been using the tech for years. Indeed, LiquidStack was spun out from a bitcoin miner to commercialize its liquid immersion cooling tech and bring it to the masses.

“Air cooling is not enough”

More power flowing through the processors means hotter chips, which means the need for better cooling or the chips will malfunction.

“Air cooling is not enough,” said Christian Belady, vice president of Microsoft’s datacenter advanced development group in Redmond, in an interview for the company’s internal blog. “That’s what’s driving us to immersion cooling, where we can directly boil off the surfaces of the chip.”

For Belady, the use of liquid cooling technology brings the density and compression of Moore’s Law up to the datacenter level.

The results, from an energy consumption perspective, are impressive. Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI, and the investigation revealed that two-phase immersion cooling reduced power consumption for any given server by 5% to 15% (every little bit helps).

Meanwhile, companies like Submer claim their systems reduce energy consumption by 50%, cut water use by 99% and take up 85% less space.

For cloud computing companies, the ability to keep servers up and running even during spikes in demand, when they consume far more power, adds flexibility and ensures uptime when servers are overtaxed, according to Microsoft.

“[We] know that with Teams when you get to 1 o’clock or 2 o’clock, there is a huge spike because people are joining meetings at the same time,” Marcus Fontoura, a vice president on Microsoft’s Azure team, said on the company’s internal blog. “Immersion cooling gives us more flexibility to deal with these burst-y workloads.”

At this point, data centers are a critical component of the internet infrastructure that much of the world relies on for… well… pretty much every tech-enabled service. That reliance, however, has come at a significant environmental cost.

“Data centers power human advancement. Their role as a core infrastructure has become more apparent than ever and emerging technologies such as AI and IoT will continue to drive computing needs. However, the environmental footprint of the industry is growing at an alarming rate,” Alexander Danielsson, an investment manager at Norrsken VC noted last year when discussing that firm’s investment in Submer.

Solutions under the sea

If submerging servers in experimental liquids offers one potential solution to the problem, then sinking them in the ocean is another way that companies are trying to cool data centers without expending too much power.

Microsoft has already been operating an undersea data center for the past two years. The company trotted out the tech last year as part of a push to aid in the search for a COVID-19 vaccine.

These pre-packed, shipping container-sized data centers can be spun up on demand and run deep under the ocean’s surface for sustainable, high-efficiency and powerful compute operations, the company said.

The liquid cooling project is most similar to Microsoft’s Project Natick, which is exploring the potential of underwater datacenters that are quick to deploy and can operate for years on the seabed, sealed inside submarine-like tubes, without any onsite maintenance by people.

In those data centers, a nitrogen atmosphere takes the place of an engineered fluid, and the servers are cooled with fans and a heat exchanger that pumps seawater through a sealed tube.

Startups are also staking claims to cool data centers out on the ocean (the seaweed is always greener in somebody else’s lake).

Nautilus Data Technologies, for instance, has raised over $100 million (according to Crunchbase) to develop data centers dotting the surface of Davy Jones’ locker. The company is currently developing a data center project co-located with a sustainable energy project in a tributary near Stockton, Calif.

With its two-phase immersion cooling tech, Microsoft is hoping to bring the benefits of ocean cooling onto the shore. “We brought the sea to the servers rather than put the datacenter under the sea,” Microsoft’s Alissa said in a company statement.

Ioannis Manousakis, a principal software engineer with Azure (left), and Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development (right), walk past a container at a Microsoft datacenter where computer servers in a two-phase immersion cooling tank are processing workloads. Photo by Gene Twedt for Microsoft.


By Jonathan Shieber

Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the ‘cloud financial management’ space to establish best practices and standards. As the term implies, ‘cloud financial management’ is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that there are a number of successful startups that do nothing else but help businesses optimize their cloud spend (and ideally lower it).
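To make ‘cloud financial management’ concrete, the core loop is simple even if doing it across hundreds of projects isn’t: aggregate spend per project, compare it to a budget, and flag the overruns. A toy sketch with made-up numbers:

```python
# Toy illustration of the core FinOps loop: aggregate spend per project and
# flag projects that blow through their budgets. Billing data is made up;
# real tooling pulls from cloud billing exports across hundreds of projects.
from collections import defaultdict

billing_records = [  # hypothetical line items from a billing export
    {"project": "checkout", "service": "compute", "cost": 1800.0},
    {"project": "checkout", "service": "storage", "cost": 150.0},
    {"project": "analytics", "service": "compute", "cost": 4200.0},
]
budgets = {"checkout": 2000.0, "analytics": 3000.0}  # monthly budget per project

spend = defaultdict(float)
for record in billing_records:
    spend[record["project"]] += record["cost"]

for project, total in spend.items():
    status = "OVER BUDGET" if total > budgets[project] else "ok"
    print(f"{project}: ${total:,.2f} of ${budgets[project]:,.2f} ({status})")
```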

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze, and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, Vice President of Engineering and Product at Google Cloud. “More visibility, efficiency, and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, Executive Director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the 2nd of 3 dedicated Premier Member Technical Advisory Council seats.”


By Frederic Lardinois

Testing platform Tricentis acquires performance testing service Neotys

If you develop software for a large enterprise company, chances are you’ve heard of Tricentis. If you don’t develop software for a large enterprise company, chances are you haven’t. The software testing company with a focus on modern cloud and enterprise applications was founded in Austria in 2007 and grew from a small consulting firm to a major player in this field, with customers like Allianz, BMW, Starbucks, Deutsche Bank, Toyota and UBS. In 2017, the company raised a $165 million Series B round led by Insight Venture Partners.

Today, Tricentis announced that it has acquired Neotys, a popular performance testing service with a focus on modern enterprise applications and a tests-as-code philosophy. The two companies did not disclose the price of the acquisition. France-based Neotys launched in 2005 and raised about €3 million before the acquisition. Today, it has about 600 customers for its NeoLoad platform. These include BNP Paribas, Dell, Lufthansa, McKesson and TechCrunch’s own corporate parent, Verizon.

As Tricentis CEO Sandeep Johri noted, testing tools were traditionally script-based, which also meant they were very fragile whenever an application changed. Early on, Tricentis introduced a low-code tool that made the automation process both easier and more resilient. Now, as even traditional enterprises move to DevOps and release code at a faster speed than ever before, testing is becoming both more important and harder for these companies to implement.

“You have to have automation and you cannot have it be fragile, where it breaks, because then you spend as much time fixing the automation as you do testing the software,” Johri said. “Our core differentiator was the fact that we were a low-code, model-based automation engine. That’s what allowed us to go from $6 million in recurring revenue eight years ago to $200 million this year.”

Tricentis, he added, wants to be the testing platform of choice for large enterprises. “We want to make sure we do everything that a customer would need, from a testing perspective, end to end. Automation, test management, test data, test case design,” he said.

The acquisition of Neotys allows the company to expand this portfolio by adding load and performance testing as well. It’s one thing to do the standard kind of functional testing that Tricentis already did before launching an update, but once an application goes into production, load and performance testing becomes critical as well.

“Before you put it into production — or before you deploy it — you need to make sure that your application not only works as you expect it, you need to make sure that it can handle the workload and that it has acceptable performance,” Johri noted. “That’s where load and performance testing comes in and that’s why we acquired Neotys. We have some capability there, but that was primarily focused on the developers. But we needed something that would allow us to do end-to-end performance testing and load testing.”
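For readers unfamiliar with the category, a load test at its core just fires many concurrent requests at a system and studies the latency distribution. Here’s a minimal, hypothetical Python sketch of that core loop; tools like NeoLoad layer scenario scripting, ramp-up profiles and pass/fail SLAs on top of it (the endpoint below is made up):

```python
# Bare-bones sketch of what a load test measures: fire N concurrent requests
# at an endpoint and look at errors and the latency distribution.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://staging.example.com/api/checkout"  # hypothetical endpoint

def timed_request(_):
    start = time.monotonic()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
    except Exception:
        return None                               # count failures separately
    return (time.monotonic() - start) * 1000      # latency in milliseconds

with ThreadPoolExecutor(max_workers=50) as pool:  # 50 concurrent virtual users
    results = list(pool.map(timed_request, range(500)))

latencies = [r for r in results if r is not None]
print(f"errors: {results.count(None)}/{len(results)}")
if latencies:
    print(f"p50={statistics.median(latencies):.0f}ms "
          f"p95={statistics.quantiles(latencies, n=20)[18]:.0f}ms")
```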

The two companies already had an existing partnership and had integrated their tools before the acquisition — and many of their customers were already using both tools.

“We are looking forward to joining Tricentis, the industry leader in continuous testing,” said Thibaud Bussière, president and co-founder at Neotys. “Today’s Agile and DevOps teams are looking for ways to be more strategic and eliminate manual tasks and implement automated solutions to work more efficiently and effectively. As part of Tricentis, we’ll be able to eliminate laborious testing tasks to allow teams to focus on high-value analysis and performance engineering.”

NeoLoad will continue to exist as a stand-alone product, but users will likely see deeper integrations with Tricentis’ existing tools over time, including Tricentis Analytics.

Johri tells me that he considers Tricentis one of the “best kept secrets in Silicon Valley” because the company not only started out in Europe (even though its headquarters is now in Silicon Valley) but also because it hasn’t raised a lot of venture rounds over the years. But that’s very much in line with Johri’s philosophy of building a company.

“A lot of Silicon Valley tends to pay attention only when you raise money,” he told me. “I actually think every time you raise money, you’re diluting yourself and everybody else. So if you can succeed without raising too much money, that’s the best thing. We feel pretty good that we have been very capital efficient and now we’re recognized as a leader in the category — which is a huge category with $30 billion spend in the category. So we’re feeling pretty good about it.”


By Frederic Lardinois

Amazon will expand its Amazon Care on-demand healthcare offering U.S.-wide this summer

Amazon is apparently pleased with how its Amazon Care pilot in Seattle has gone, since it announced this morning that it will be expanding the offering across the U.S. this summer, and opening it up to companies of all sizes, in addition to its own employees. The Amazon Care model combines on-demand and in-person care, and is meant as a solution from the e-commerce giant to address shortfalls in current employer-sponsored healthcare offerings.

In a blog post announcing the expansion, Amazon touted the speed of access to care made possible for its employees and their families via the remote, chat and video-based features of Amazon Care. These are facilitated via a dedicated Amazon Care app, which provides direct, live chats with a nurse or doctor. Issues that require in-person care are then handled via a house call: a medical professional is sent to your home to take care of things like administering blood tests or doing a chest exam, and prescriptions are delivered to your door as well.

The expansion is being handled differently across the in-person and remote variants of care; remote services will be available starting this summer, both to Amazon’s own employees and to other companies that sign on as customers. The in-person side will be rolling out more slowly, starting with availability in Washington, D.C., Baltimore, and “other cities in the coming months,” according to the company.

As of today, Amazon Care is expanding in its home state of Washington to begin serving other companies. The idea is that other companies will sign on to make Amazon Care part of their overall benefits packages for employees. Amazon is touting the speed advantages of testing services, including results delivery, for things including COVID-19 as a major strength of the service.

The Amazon Care model has a surprisingly Amazon twist, too – when using the in-person care option, the app will provide an updating ETA for when to expect your physician or medical technician, which is eerily similar to how its primary app treats package delivery.

While the Amazon Care pilot in Washington only launched a year-and-a-half ago, the company has had its collective mind set on upending the corporate healthcare industry for some time now. It announced a partnership with Berkshire Hathaway and JPMorgan back at the very beginning of 2018 to form a joint venture specifically to address the gaps they saw in the private corporate healthcare provider market.

That deep-pocketed all-star team ended up officially disbanding at the outset of this year, after having done a whole lot of not very much in the three years in between. One of the stated reasons that Amazon and its partners gave for unpartnering was that each had made a lot of progress on its own in addressing the problems it had faced anyway. While Berkshire Hathaway and JPMorgan’s work in that regard might be less obvious, Amazon was clearly referring to Amazon Care.

It’s not unusual for large tech companies with lots of cash on the balance sheet and a need to attract and retain top-flight talent to spin up their own healthcare benefits for their workforces. Apple and Google both have their own on-campus wellness centers staffed by medical professionals, for instance. But Amazon’s ambitions have clearly exceeded those of its peers, and it looks intent on making a business line out of the work it did to improve its own employee care services — a strategy that isn’t too dissimilar from what happened with AWS, by the way.


By Darrell Etherington

Microsoft Azure expands its NoSQL portfolio with Managed Instances for Apache Cassandra

At its Ignite conference today, Microsoft announced the launch of Azure Managed Instance for Apache Cassandra, its latest NoSQL database offering and a competitor to Cassandra-centric companies like Datastax. Microsoft describes the new service as a ‘semi-managed’ offering that will help companies bring more of their Cassandra-based workloads into its cloud.

“Customers can easily take on-prem Cassandra workloads and add limitless cloud scale while maintaining full compatibility with the latest version of Apache Cassandra,” Microsoft explains in its press materials. “Their deployments gain improved performance and availability, while benefiting from Azure’s security and compliance capabilities.”

Like its counterpart, Azure SQL Managed Instance, the idea here is to give users access to a scalable, cloud-based database service. To use Cassandra in Azure before, businesses had to either move to Cosmos DB, Microsoft’s highly scalable database service that supports the Cassandra, MongoDB, SQL and Gremlin APIs, or manage their own fleet of virtual machines or on-premises infrastructure.
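The compatibility pitch matters because Cassandra applications talk to the database through open-source drivers. As a minimal sketch, code like the following, written with the standard open-source Python driver (pip install cassandra-driver), should work unchanged when pointed at a managed cluster; the endpoint is hypothetical, and a real Azure deployment would also need authentication and TLS settings:

```python
# Minimal sketch: the same open-source driver code works against self-managed
# Cassandra or a managed instance. Contact point and keyspace are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["my-managed-instance.example.com"])  # hypothetical endpoint
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute(
    "CREATE TABLE IF NOT EXISTS demo.users (id uuid PRIMARY KEY, name text)"
)

row = session.execute("SELECT release_version FROM system.local").one()
print(row.release_version)  # the Apache Cassandra version the service runs
```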

Cassandra was originally developed at Facebook and then open-sourced in 2008. A year later, it joined the Apache Foundation, and today it’s used widely across the industry, with companies like Apple and Netflix betting on it for some of their core services. AWS launched a managed Cassandra-compatible service at its re:Invent conference in 2019 (it’s called Amazon Keyspaces today), while Microsoft only launched the Cassandra API for Cosmos DB last November. With today’s announcement, though, the company can now offer a full range of Cassandra-based services for enterprises that want to move these workloads to its cloud.


By Frederic Lardinois

TigerGraph raises $105M Series C for its enterprise graph database

TigerGraph, a well-funded enterprise startup that provides a graph database and analytics platform, today announced that it has raised a $105 million Series C funding round. The round was led by Tiger Global and brings the company’s total funding to over $170 million.

“TigerGraph is leading the paradigm shift in connecting and analyzing data via scalable and native graph technology with pre-connected entities versus the traditional way of joining large tables with rows and columns,” said TigerGraph founder and CEO Yu Xu. “This funding will allow us to expand our offering and bring it to many more markets, enabling more customers to realize the benefits of graph analytics and AI.”

Current TigerGraph customers include the likes of Amgen, Citrix, Intuit, Jaguar Land Rover and UnitedHealth Group. Using a SQL-like query language (GSQL), these customers can use the company’s services to store and quickly query their graph databases. At the core of its offerings is the TigerGraphDB database and analytics platform, but the company also offers a hosted service, TigerGraph Cloud, with pay-as-you-go pricing, hosted either on AWS or Azure. With GraphStudio, the company also offers a graphical UI for creating data models and visually analyzing them.

The promise for the company’s database services is that they can scale to tens of terabytes of data with billions of edges. Its customers use the technology for a wide variety of use cases, including fraud detection, customer 360, IoT, AI, and machine learning.
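Xu’s “pre-connected entities” framing is easiest to see with a toy example. In a graph store, a multi-hop question such as “is this account within three hops of a flagged account?” is a traversal over stored edges, whereas a relational database would need one self-join per hop. A minimal Python sketch of the idea (hypothetical data; real deployments would express this in GSQL against TigerGraphDB):

```python
# Toy illustration of pre-connected entities: multi-hop questions become
# traversals rather than table joins. Data below is hypothetical.
from collections import deque

edges = {  # adjacency list: account -> accounts it transacted with
    "acct_a": ["acct_b", "acct_c"],
    "acct_b": ["acct_d"],
    "acct_c": ["acct_d"],
    "acct_d": ["acct_flagged"],
}

def within_hops(graph, start, max_hops):
    """Return every node reachable from `start` in at most `max_hops` edges."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, depth + 1))
    return seen - {start}

# Is acct_a within three hops of a flagged account? (A SQL version would need
# one self-join per hop; the graph version just walks edges.)
print("acct_flagged" in within_hops(edges, "acct_a", 3))  # True
```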

Like so many other companies in this space, TigerGraph is enjoying a tailwind thanks to the fact that many enterprises have accelerated their digital transformation projects during the pandemic.

“Over the last 12 months with the COVID-19 pandemic, companies have embraced digital transformation at a faster pace driving an urgent need to find new insights about their customers, products, services, and suppliers,” the company explains in today’s announcement. “Graph technology connects these domains from the relational databases, offering the opportunity to shrink development cycles for data preparation, improve data quality, identify new insights such as similarity patterns to deliver the next best action recommendation.”


By Frederic Lardinois

Nobl9 raises $21M Series B for its SLO management platform

SLAs, SLOs, SLIs. If there’s one thing everybody in the business of managing software development loves, it’s acronyms. And while everyone probably knows what a Service Level Agreement (SLA) is, Service Level Objectives (SLOs) and Service Level Indicators (SLIs) may not be quite as well known. The idea, though, is straightforward, with SLOs being the overall goals a team must hit to meet the promises of its SLAs, and SLIs being the actual measurements that back up those other two numbers. With the advent of DevOps, these ideas, which are typically part of a company’s overall Site Reliability Engineering (SRE) efforts, are becoming more mainstream, but putting them into practice isn’t always straightforward.
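To make the three acronyms concrete, here’s a minimal sketch of the arithmetic with made-up numbers (this is the general SRE formulation, not Nobl9’s implementation): the SLI is what you measure, the SLO is the target, and the error budget is how much failure the target leaves room for.

```python
# Minimal illustration of SLIs, an SLO, and an error budget.
# Hypothetical numbers; not Nobl9's implementation.

slo_target = 0.999          # SLO: 99.9% of requests succeed this month
total_requests = 10_000_000 # SLI denominator: all requests observed
failed_requests = 6_500     # SLI numerator: requests that missed the promise

sli = (total_requests - failed_requests) / total_requests  # measured reliability
error_budget = (1 - slo_target) * total_requests           # failures allowed: 10,000
budget_remaining = error_budget - failed_requests

print(f"SLI: {sli:.4%}, error budget remaining: {budget_remaining:,.0f} requests")
# If budget_remaining goes negative, the team trades feature work for
# reliability work until it is back in the black.
```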

Nobl9 aims to provide enterprises with the tools they need to build SLO-centric operations and the right feedback loops inside an organization to help it hit its SLOs without making too many trade-offs between the cost of engineering, feature development and reliability.

The company today announced that it has raised a $21 million Series B round led by its Series A investors Battery Ventures and CRV. In addition, Series A investors Bonfire Ventures and Resolute Ventures also participated, together with new investors Harmony Partners and Sorenson Ventures.

Before starting Nobl9, co-founders Marcin Kurc (CEO) and Brian Singer (CPO) spent time together at Orbitera, where Singer was the co-founder and COO and Kurc the CEO, and then at Google Cloud, after it acquired Orbitera in 2016. In the process, the team got to work with and appreciate Google’s site reliability engineering frameworks.

As they started looking into what to do next, that experience led them to look into productizing these ideas. “We came to this conclusion that if you’re going into Kubernetes, into service-based applications and modern architectures, there’s really no better way to run that than SRE,” Kurc told me. “And when we started looking at this, naturally SRE is a complete framework, there are processes. We started looking at elements of SRE and we agreed that SLO — service level objectives — is really the foundational part. You can’t do SRE without SLOs.”

As Singer noted, in order to adopt SLOs, businesses have to know how to turn the data they have about the reliability of their services, which could be measured in uptime or latency, for example, into the right objectives. That’s complicated by the fact that this data could live in a variety of databases and logs, but the real question is how to define the right SLOs for any given organization based on this data.

“When you go into the conversation with an organization about what their goals are with respect to reliability and how they start to think about understanding if there’s risks to that, they very quickly get bogged down in how are we going to get this data or that data and instrument this or instrument that,” Singer said. “What we’ve done is we’ve built a platform that essentially takes that as the problem that we’re solving. So no matter where the data lives and in what format it lives, we want to be able to reduce it to very simply an error budget and an objective that can be tracked and measured and reported on.”

The company’s platform launched into general availability last week, after a beta that started last year. Early customers include Brex and Adobe.

As Kurc told me, the team actually thinks of this new funding round as its Series A, but because its earlier $7.5 million round was pretty sizable, the founders called that one a Series A instead of a seed round, which makes this round the Series B. “It’s hard to define it. If you define it based on a revenue milestone, we’re pre-revenue, we just launched the GA product,” Singer told me. “But I think just in terms of the maturity of the product and the company, I would put us at the [Series] B.”

The team told me that it closed the round at the end of last November, and while it considered pitching new VCs, its existing investors were already interested in putting more money into the company and since its previous round had been oversubscribed, they decided to add to this new round some of the investors that didn’t make the cut for the Series A.

The company plans to use the new funding to advance its roadmap and expand its team, especially across sales, marketing and customer success.


By Frederic Lardinois

SAP launches ‘RISE with SAP,’ a concierge service for digital transformation

SAP today announced a new offering it calls ‘RISE with SAP,’ a solution that is meant to help the company’s customers go through their respective digital transformations and become what SAP calls ‘intelligent enterprises.’ RISE is a subscription service that combines a set of services and product offerings.

SAP’s head of product success Sven Denecken (and its COO for S/4Hana) described it as “the best concierge service you can get for your digital transformation” when I talked to him earlier this week. “We need to help our clients to embrace that change that they see currently,” he said. “Transformation is a journey. Every client wants to become that smarter, faster and that nimbler business, but they, of course, also see that they are faced with challenges today and in the future. This continuous transformation is what is happening to businesses. And we do know from working together with them, that actually they agree with those fundamentals. They want to be an intelligent enterprise. They want to adapt and change. But the key question is how to get there? And the key question they ask us is, please help us to get there.”

With RISE with SAP, businesses will get a single contact at SAP to help guide them through their journey, but also access to the SAP partner ecosystem.

The first step in this process, Denecken stressed, isn’t necessarily to bring in new technology, though that is also part of it, but to help businesses redesign and optimize their business processes and implement the best practices in their verticals — and then measure the outcome. “Business process redesign means that you analyze how your business processes perform. How can you get tailored recommendations? How can you benchmark against industry standards? And this helps you to set the tone and also to motivate your people — your IT, your business people — to adapt,” Denecken described. He also noted that in order for a digital transformation project to succeed, IT and business leaders and employees have to work together.

In part, that includes technology offerings and adopting robotic process automation (RPA), for example. As Denecken stressed, all of this builds on top of the work SAP has done with its customers over the years to define business processes and KPIs.

On the technical side, SAP is obviously offering its own services, including its Business Technology Platform, and cloud infrastructure, but it will also support customers on all of the large cloud providers. Also included in RISE is support for more than 2,200 APIs to integrate various on-premises, cloud and non-SAP systems, access to SAP’s low-code and no-code capabilities and, of course, its database and analytics offerings.

“Geopolitical tensions, environmental challenges and the ongoing pandemic are forcing businesses to deal with change faster than ever before,” said Christian Klein, SAP’s CEO, in today’s announcement. “Companies that can adapt their business processes quickly will thrive – and SAP can help them achieve this. This is what RISE with SAP is all about: It helps customers continuously unlock new ways of running businesses in the cloud to stay ahead of their industry.”

With this new offering, SAP is now providing its customers with a number of solutions that were previously available through its partner ecosystem. Denecken doesn’t see this as SAP competing with its own partners, though. Instead, he argues that this is very much a partner play and that this new solution will likely only bring more customers to its partners as well.

“Needless to say, this has been a negotiation with those partners,” he said. “Because yes, it’s sometimes topics that we now take over they [previously] did. But we are looking for scale here. The need in the market for digital transformation has just started. And this is where we see that this is definitely a big offering, together with partners.”


By Frederic Lardinois

Datastax acquires Kesque as it gets into data streaming

Datastax, the company best known for commercializing the open-source Apache Cassandra database, is moving beyond databases. As the company announced today, it has acquired Kesque, a cloud messaging service.

The Kesque team built its service on top of the Apache Pulsar messaging and streaming project. Datastax has now taken that team’s knowledge in this area and, combined with its own expertise, is launching its own Pulsar-based streaming platform by the name of Datastax Luna Streaming, which is now generally available.

This move comes right as Datastax is also announcing that it is cash-flow positive and profitable, as the company’s chief product officer, Ed Anuff, told me. “We are at over $150 million in [annual recurring revenue]. We are cash-flow positive and we are profitable,” he told me. This marks the first time the company is publicly announcing this data. In addition, the company also today revealed that about 20 percent of its annual contract value is now for DataStax Astra, its managed multi-cloud Cassandra service, and that the number of self-service Astra subscribers has more than doubled from Q3 to Q4.

The launch of Luna Streaming now gives the 10-year-old company a new area to expand into — and one that has some obvious adjacencies with its existing product portfolio.

“We looked at how a lot of developers are building on top of Cassandra,” Anuff, who joined Datastax after leaving Google Cloud last year, said. “What they’re doing is, they’re addressing what people call ‘data-in-motion’ use cases. They have huge amounts of data that are coming in, huge amounts of data that are going out — and they’re typically looking at doing something with streaming in conjunction with that. As we’ve gone in and asked, ‘What’s next for Datastax?,’ streaming is going to be a big part of that.”

Given Datastax’s open-source roots, it’s no surprise the team decided to build its service on another open-source project and acquire an open-source company to help it do so. Anuff noted that while there has been a lot of hype around streaming and Apache Kafka, a cloud-native solution like Pulsar seemed like the better solution for the company. Pulsar was originally developed at Yahoo! (which, full disclosure, belongs to the same Verizon Media Group family as TechCrunch) and even before acquiring Kesque, Datastax already used Pulsar to build its Astra platform. Other Pulsar users include Yahoo, Tencent, Nutanix and Splunk.
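For a sense of what building on Pulsar looks like, here’s a minimal producer/consumer sketch using the open-source Python client (pip install pulsar-client), assuming a broker running locally; Luna Streaming packages and supports this kind of Pulsar deployment rather than replacing the API:

```python
# Minimal open-source Apache Pulsar producer/consumer, showing the
# "data-in-motion" pattern the article describes. Assumes a broker on
# localhost; topic and subscription names are hypothetical.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

producer = client.create_producer("sensor-events")
consumer = client.subscribe("sensor-events", subscription_name="analytics")

producer.send(b"reading=42")   # data in...
msg = consumer.receive()       # ...and data out
print(msg.data())              # b'reading=42'
consumer.acknowledge(msg)      # mark processed so it isn't redelivered

client.close()
```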

“What we saw was that when you go and look at doing streaming in a scale-out way, that Kafka isn’t the only approach. We looked at it, and we liked the Pulsar architecture, we like what’s going on, we like the community — and remember, we’re a company that grew up in the Apache open-source community — we said, ‘okay, we think that it’s got all the right underpinnings, let’s go and get involved in that,’” Anuff said. And in the process of doing so, the team came across Kesque founder Chris Bartholomew and eventually decided to acquire his company.

The new Luna Streaming offering will be what Datastax calls a “subscription to success with Apache Pulsar.” It will include a free, production-ready distribution of Pulsar and an optional, SLA-backed subscription tier with enterprise support.

Unsurprisingly, Datastax also plans to remain active in the Pulsar community. The team is already making code contributions, but Anuff also stressed that Datastax is helping out with scalability testing. “This is one of the things that we learned in our participation in the Apache Cassandra project,” Anuff said. “A lot of what these projects need is folks coming in doing testing, helping with deployments, supporting users. Our goal is to be a great participant in the community.”


By Frederic Lardinois

Ivanti has acquired security firms MobileIron and Pulse Secure

IT security software company Ivanti has acquired two security companies: enterprise mobile security firm MobileIron, and corporate virtual network provider Pulse Secure.

In a statement on Tuesday, Ivanti said it bought MobileIron for $872 million in stock, with 91% of the shareholders voting in favor of the deal; and acquired Pulse Secure from its parent company Siris Capital Group, but did not disclose the buying price.

The deals have now closed.

Ivanti was founded in 2017 after Clearlake Capital, which owned Heat Software, bought Landesk from private equity firm Thoma Bravo, and merged the two companies to form Ivanti. The combined company, headquartered in Salt Lake City, focuses largely on enterprise IT security, including endpoint, asset, and supply chain management. Since its founding, Ivanti went on to acquire several other companies, including U.K.-based Concorde Solutions and RES Software.

If MobileIron and Pulse Secure seem familiar, it’s because both companies have faced their fair share of headlines this year after hackers began exploiting vulnerabilities found in their technologies.

Just last month, the U.K. government’s National Cyber Security Center published an alert that warned of a remotely executable bug in MobileIron, patched in June, allowing hackers to break into enterprise networks. U.S. Homeland Security’s cybersecurity advisory unit CISA said that the bug was being actively used by advanced persistent threat (APT) groups, typically associated with state-backed hackers.

Meanwhile, CISA also warned that Pulse Secure was one of several corporate VPN providers with vulnerabilities that have since become a favorite among hackers, particularly ransomware actors, who abuse the bugs to gain access to a network and deploy the file-encrypting ransomware.


By Zack Whittaker

AWS adds natural language search service for business intelligence from its data sets

When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016, the company wanted to provide product information and customer information for business users — not just developers.

At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.

Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.

“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”

That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.

“The way Q works. Type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X?’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”
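Under the hood, a service like this has to translate the question into a query against the right data. The real product uses trained natural language models; as a purely illustrative toy, here’s the shape of that translation for Jassy’s example question, against a hypothetical sales table:

```python
# Toy sketch of the idea behind QuickSight Q: map a natural-language question
# to a SQL query over a sales table. Purely illustrative -- the real service
# uses trained NLP models, and this table and schema are hypothetical.
import re

def question_to_sql(question: str) -> str:
    m = re.search(r"trailing twelve month sales of (.+?)\??$", question.lower())
    if not m:
        raise ValueError("unsupported question")
    product = m.group(1)
    return (
        "SELECT SUM(amount) FROM sales "
        f"WHERE product = '{product}' "
        "AND sale_date >= DATEADD(month, -12, CURRENT_DATE);"
    )

print(question_to_sql("Give me the trailing twelve month sales of product X?"))
```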

It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”


By Jonathan Shieber

With $29M in funding, Isovalent launches its cloud-native networking and security platform

Isovalent, a startup that aims to bring networking into the cloud-native era, today announced that it has raised a $29 million Series A round led by Andreessen Horowitz and Google. In addition, the company today officially launched its Cilium platform (which was in stealth until now) to help enterprises connect, observe and secure their applications.

The open-source Cilium project is already seeing growing adoption, with Google choosing it for its new GKE dataplane, for example. Other users include Adobe, Capital One, Datadog and GitLab. Isovalent is following what is now the standard model for commercializing open-source projects by launching an enterprise version.

Image Credits: Cilium

The founding team of CEO Dan Wendlandt and CTO Thomas Graf has deep experience in working on the Linux kernel and building networking products. Graf spent 15 years working on the Linux kernel and created the Cilium open-source project, while Wendlandt worked on Open vSwitch at Nicira (and then VMware).

Image Credits: Isovalent

“We saw that first wave of network intelligence be moved into software, but I think we both shared the view that the first wave was about replicating the traditional network devices in software,” Wendlandt told me. “You had IPs, you still had ports, you created virtual routers, and this and that. We both had that shared vision that the next step was to go beyond what the hardware did in software — and now, in software, you can do so much more. Thomas, with his deep insight in the Linux kernel, really saw this eBPF technology as something that was just obviously going to be groundbreaking technology, in terms of where we could take Linux networking and security.”

As Graf told me, when Docker, Kubernetes and containers in general became popular, what he saw was that networking companies at first were simply trying to reapply what they had already done for virtualization. “Let’s just treat containers as many miniature VMs. That was incredibly wrong,” he said. “So we looked around, and we saw eBPF and said: this is just out there and it is perfect, how can we shape it forward?”

And while Isovalent’s focus is on cloud-native networking, the added benefit of how it uses the eBPF Linux kernel technology is that it also gains deep insights into how data flows between services and hence allows it to add advanced security features as well.

As the team noted, though, users definitely don’t need to understand or program eBPF, which is essentially the next generation of Linux kernel modules, themselves.
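For the curious, here’s what programming eBPF directly looks like, using the open-source bcc toolkit’s canonical example (this is unrelated to Isovalent’s own code, and it requires root on a Linux machine with bcc installed): a tiny C program is compiled and loaded into the kernel, then attached to the clone() syscall so every new process produces a trace line.

```python
# Canonical bcc "hello world": trace every clone() syscall via an eBPF kprobe.
# Requires Linux with eBPF support and the bcc toolkit (github.com/iovisor/bcc);
# run as root. Illustrative of eBPF generally, not of Cilium's internals.
from bcc import BPF

prog = """
int hello(void *ctx) {
    bpf_trace_printk("new process cloned\\n");
    return 0;
}
"""

b = BPF(text=prog)  # compile the C snippet and load it into the kernel
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()     # stream the kernel trace output to stdout
```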

Image Credits: Isovalent

“I have spent my entire career in this space, and the North Star has always been to go beyond IPs + ports and build networking visibility and security at a layer that is aligned with how developers, operations and security think about their applications and data,” said Martin Casado, partner at Andreessen Horowitz (and the founder of Nicira). “Until just recently, the technology did not exist. All of that changed with Kubernetes and eBPF. Dan and Thomas have put together the best team in the industry and given the traction around Cilium, they are well on their way to upending the world of networking yet again.”

As more companies adopt Kubernetes, they are now reaching a stage where they have the basics down but are now facing the next set of problems that come with this transition. Those, almost by default, include figuring out how to isolate workloads and get visibility into their networks — all areas where Isovalent/Cilium can help.

The team tells me its focus, now that the product is out of stealth, is on building out its go-to-market efforts and, of course, continuing to build out its platform.


By Frederic Lardinois

Qualcomm Ventures invests in four 5G startups

Qualcomm Ventures, Qualcomm’s investment arm, today announced four new strategic investments in 5G-related startups. These companies are private mobile network specialist Celona, mobile network automation platform Cellwize, the edge computing platform Azion and Pensando, another edge computing platform that combines its software stack with custom hardware.

The overall goal here is obviously to help jumpstart 5G use cases in the enterprise and — by extension — for consumers by investing in a wide range of companies that can build the necessary infrastructure to enable these.

“We invest globally in the wireless mobile ecosystem, with a goal of expanding our base of customers and partners — and one of the areas we’re particularly excited about is the area of 5G,” Quinn Li, a Senior VP at Qualcomm and the global head of Qualcomm Ventures, told me. “Within 5G, there are three buckets of areas we look to invest in: one is in use cases, second is in network transformation, third is applying 5G technology in enterprises.”

So far, Qualcomm Ventures has invested over $170 million in the 5G ecosystem, including this new batch. The firm did not disclose how much it invested in these four new startups, though.

Overall, this new set of companies touches upon the core areas Qualcomm Ventures is looking at, Li explained. Celona, for example, aims to make it as easy for enterprises to deploy private cellular infrastructure as it is to deploy Wi-Fi today.

“They built this platform with a cloud-based controller that leverages the available spectrum — CBRS — to be able to take the cellular technology, whether it’s LTE or 5G, into enterprises,” Li explained. “And then these enterprise use cases could be in manufacturing settings, could be in schools, could be in hospitals, or it could be on campus for universities.”

Cellwize, meanwhile, helps automate wireless networks to make them more flexible and manageable, in part by using machine learning to tune the network based on the data it collects. One of the main investment theses for this fund, Li told me, is that wireless technology will become increasingly software-defined and Cellwize fits right into this trend. The potential customer here isn’t necessarily an individual enterprise, though, but wireless and mobile operators.

Edge computing, where Azion and Pensando play, is obviously also a hot category right now, and one where 5G has some clear advantages, so it’s maybe no surprise that Qualcomm Ventures is putting a bit of a focus on this area with these two investments.

“As we move forward, [you will] see a lot of the compute moving from the cloud into the edge of the network, which allows for processing happening at the edge of the network, which allows for low latency applications to run much faster and much more efficiently,” Li said.

In total, Qualcomm Ventures has deployed $1.5 billion and made 360 investments since its launch in 2000. Some of the more successful companies the firm has invested in include unicorns like Zoom, Cloudflare, Xiaomi, Cruise Automation and Fitbit.


By Frederic Lardinois

Contrast launches its security observability platform

Contrast, a developer-centric application security company with customers that include Liberty Mutual Insurance, NTT Data, AXA and Bandwidth, today announced the launch of its security observability platform. The idea here is to offer developers a single pane of glass to manage an application’s security across its lifecycle, combined with real-time analysis and reporting, as well as remediation tools.

“Every line of code that’s happening increases the risk to a business if it’s not secure,” said Contrast CEO and chairman Alan Nauman. “We’re focused on securing all that code that businesses are writing for both automation and digital transformation.”

Over the course of the last few years, the well-funded company, which raised a $65 million Series D round last year, launched numerous security tools that cover a wide range of use cases from automated penetration testing to cloud application security and now DevOps — and this new platform is meant to tie them all together.

DevOps, the company argues, is really what necessitates a platform like this, given that developers now push more code into production than ever — and the onus of ensuring that this code is secure now often falls on them as well.

Image Credits: Contrast

Traditionally, Nauman argues, security services focused on analyzing the code itself and on watching traffic.

“We think at the application layer, the same principles of observability apply that have been used in the IT infrastructure space,” he said. “Specifically, we do instrumentation of the code and we weave security sensors into the code as it’s being developed and are looking for vulnerabilities and observing running code. […] Our view is: the world’s most complex systems are best when instrumented, whether it’s an airplane, a spacecraft, an IT infrastructure. We think the same is true for code. So our breakthrough is applying instrumentation to code and observing for security vulnerabilities.”
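As a toy illustration of the instrumentation idea, consider wrapping a function with a “sensor” that inspects its arguments at runtime and flags suspicious input. Contrast’s agent does this weaving at a much deeper level and with real detection logic; this sketch, with a deliberately crude pattern, just shows the shape:

```python
# Toy security instrumentation: a decorator acts as a runtime "sensor" that
# inspects arguments and flags suspicious input. Contrast's agent works at
# the bytecode/runtime level with far more sophisticated detection; this is
# only an illustration of the instrumentation concept.
import functools
import re

SQLI_PATTERN = re.compile(r"('|--|;|\bOR\b\s+1=1)", re.IGNORECASE)

def security_sensor(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SQLI_PATTERN.search(value):
                print(f"[sensor] possible SQL injection into {fn.__name__}: {value!r}")
        return fn(*args, **kwargs)
    return wrapper

@security_sensor
def lookup_user(user_id: str):
    return f"SELECT * FROM users WHERE id = '{user_id}'"  # deliberately unsafe

lookup_user("42")             # clean input: no alert
lookup_user("42' OR 1=1 --")  # triggers the sensor
```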

With this new platform, Contrast is aggregating information from its existing systems into a single dashboard. And while Contrast observes the code throughout its lifecycle, it also scans for vulnerabilities whenever a developer checks code into the CI/CD pipeline, thanks to integrations with most of the standard tools like Jenkins. It’s worth noting that the service also scans for vulnerabilities in open-source libraries. Once deployed, Contrast’s new platform keeps an eye on the data that runs through the various APIs and systems the application connects to and scans for potential security issues there as well.

The platform currently supports all of the large cloud providers like AWS, Azure and Google Cloud, and languages and frameworks like Java, Python, .NET and Ruby.

Image Credits: Contrast


By Frederic Lardinois

Splunk acquires Plumbr and Rigor to build out its observability platform

Data platform Splunk today announced that it has acquired two startups, Plumbr and Rigor, to build out its new Observability Suite, which is also launching today. Plumbr is an application performance monitoring service, while Rigor focuses on digital experience monitoring, using synthetic monitoring and optimization tools to help businesses optimize their end-user experiences. Both of these acquisitions complement the technology and expertise Splunk acquired when it bought SignalFx for over $1 billion last year.
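Synthetic monitoring, at its simplest, means scripted probes that hit an endpoint on a schedule and record availability and latency, rather than waiting for real users to encounter a problem. Here’s a minimal, hypothetical sketch of such a probe; products like Rigor add browser-level scripting, geographically distributed probes and optimization recommendations on top:

```python
# Minimal synthetic monitor: probe an endpoint on a schedule and record
# availability and latency. The URL is hypothetical.
import time
import urllib.request

TARGET = "https://example.com/health"   # hypothetical endpoint
INTERVAL_S = 60                          # probe once a minute

def probe(url: str) -> dict:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False                       # timeouts and HTTP errors count as down
    return {"ok": ok, "latency_ms": (time.monotonic() - start) * 1000}

for _ in range(3):                       # a real monitor would loop forever
    r = probe(TARGET)
    print(f"up={r['ok']} latency={r['latency_ms']:.0f}ms")
    time.sleep(INTERVAL_S)
```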

Splunk did not disclose the price of these acquisitions, but Estonia-based Plumbr had raised about $1.8 million, while Atlanta-based Rigor raised a debt round earlier this year.

When Splunk acquired SignalFx, it said it did so in order to become a leader in observability and APM. As Splunk CTO Tim Tully told me, the idea here now is to accelerate this process.

Image Credits: Splunk

“Because a lot of our users and our customers are moving to the cloud really, really quickly, the way that they monitor [their] applications changed because they’ve gone to serverless and microservices a ton,” he said. “So we entered that space with those acquisitions, we quickly folded them together with these next two acquisitions. What Plumbr and Rigor do is really fill out more of the portfolio.”

He noted that Splunk was especially interested in Plumbr’s bytecode implementation and its real-user monitoring capabilities, and Rigor’s synthetics capabilities around digital experience monitoring (DEM). “By filling in those two pieces of the portfolio, it gives us a really amazing set of solutions because DEM was the missing piece for our APM strategy,” Tully explained.

Image Credits: Splunk

With the launch of its Observability Suite, Splunk is now pulling together a lot of these capabilities into a single product — which also features a new design that makes it stand apart from the rest of Splunk’s tools. It combines logs, metrics, traces, digital experience, user monitoring, synthetics and more.

“At Yelp, our engineers are responsible for hundreds of different microservices, all aimed at helping people find and connect with great local businesses,” said Chris Gordon, Technical Lead at Yelp, where his team has been testing the new suite. “Our Production Observability team collaborates with Engineering to improve visibility into the performance of key services and infrastructure. Splunk gives us the tools to empower engineers to monitor their own services as they rapidly ship code, while also providing the observability team centralized control and visibility over usage to ensure we’re using our monitoring resources as efficiently as possible.”


By Frederic Lardinois