Container security acquisitions increase as companies accelerate shift to cloud

Last week, another container security startup came off the board when Rapid7 bought Alcide for $50 million. The purchase is part of a broader trend in which larger companies are buying up cloud-native security startups at a rapid clip. But why is there so much M&A action in this space now?

Palo Alto Networks was first to the punch, grabbing Twistlock for $410 million in May 2019. VMware struck a year later, snaring Octarine. Cisco followed with PortShift in October and Red Hat snagged StackRox last month, before Rapid7’s purchase of Alcide last week.

Part of the answer is that many companies accelerated their shift to cloud-native computing during the pandemic, which has created a sharper focus on security. But it would be a mistake to attribute the acquisition wave strictly to COVID-19, as companies were already moving in this direction pre-pandemic.

It’s also important to note that security startups covering a niche like container security often reach market saturation faster than companies with broader coverage, because customers want to consolidate on a single platform rather than dealing with a fragmented set of vendors and figuring out how to make them all work together.

Containers provide a way to deliver software by breaking down a large application into discrete pieces known as microservices. These are packaged and delivered in containers. Kubernetes provides the orchestration layer, determining when to launch containers and when to shut them down.

This level of automation presents a security challenge: making sure the containers are configured correctly and are not vulnerable to hackers. With myriad switches to get right, this isn’t easy, and it’s made even more challenging by the ephemeral nature of the containers themselves.
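The kind of configuration checking these products automate can be sketched in a few lines. The snippet below is a purely illustrative Python sketch: the setting names mirror common container options (privileged mode, running as root, a writable root filesystem), but the rule set and the `audit` function are invented for this example, not any vendor’s actual logic.

```python
# Minimal sketch of a container-configuration audit, the kind of check
# a container security platform automates at scale. The rules below are
# illustrative, not an actual product's rule set.

RISKY_DEFAULTS = {
    "privileged": False,        # container should not run in privileged mode
    "run_as_root": False,       # container should not run as root
    "read_only_root_fs": True,  # root filesystem should be read-only
}

def audit(config: dict) -> list[str]:
    """Return a finding for each setting that deviates from the safe value."""
    findings = []
    for key, safe_value in RISKY_DEFAULTS.items():
        if config.get(key, safe_value) != safe_value:
            findings.append(f"{key} is set to an unsafe value")
    return findings

pod = {"privileged": True, "run_as_root": False}
print(audit(pod))  # flags only the privileged setting
```

A real platform runs checks like these continuously, across thousands of short-lived containers, which is exactly where the ephemerality becomes the hard part.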

Yoav Leitersdorf, managing partner at YL Ventures, an Israeli investment firm specializing in security startups, says these challenges are driving interest in container startups from large companies. “The acquisitions we are seeing now are filling gaps in the portfolio of security capabilities offered by the larger companies,” he said.


By Ron Miller

Slim.ai announces $6.6M seed to build container DevOps platform

We are more than seven years into the notion of modern containerization, and it still requires a complex set of tools and a high level of knowledge on how containers work. The DockerSlim open source project developed several years ago from a desire to remove some of that complexity for developers.

Slim.ai, a new startup that wants to build a commercial product on top of the open source project, announced a $6.6 million seed round today from Boldstart Ventures, Decibel Partners, FXP Ventures and TechAviv Founder Partners.

Company co-founder and CEO John Amaral says he and fellow co-founder and CTO Kyle Quest have worked together for years, but it was Quest who started and nurtured DockerSlim. “We started coming together around a project that Kyle built called DockerSlim. He’s the primary author, inventor and up until we started doing this company, the sole proprietor of that community,” Amaral explained.

At the time Quest built DockerSlim in 2015, he was working with Docker containers and wanted a way to automate some of the lower-level tasks involved in dealing with them. “I wanted to solve my own pain points and problems that I had to deal with, and that my team had to deal with, in working with containers. Containers were an exciting new technology, but there was a lot of domain knowledge you needed to build production-grade applications, and not everybody had that kind of domain expertise on the team, which is pretty common in almost every team,” he said.

He originally built the tool to optimize container images, but he began looking at other aspects of the DevOps lifecycle, including the author, build, deploy and run phases. As he did, he saw the possibility of building a commercial company on top of the open source project.
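The core optimization idea, observing what the application actually touches at runtime and discarding the rest, can be sketched as a simple set operation. This is a drastic simplification of DockerSlim’s dynamic-analysis approach, and the file paths here are made up for illustration:

```python
# Sketch of image minification by dynamic analysis: keep only the files
# the application was observed to access while being exercised.
# A heavy simplification of the DockerSlim idea; the paths are invented.

def minify(image_files: set[str], accessed_files: set[str]) -> set[str]:
    """Return the minimal file set: everything observed in use."""
    return image_files & accessed_files

full_image = {"/bin/sh", "/usr/bin/curl", "/app/server", "/app/config.yml",
              "/usr/share/doc/readme"}
observed = {"/app/server", "/app/config.yml"}

slim_image = minify(full_image, observed)
print(sorted(slim_image))  # ['/app/config.yml', '/app/server']
```

Shrinking the image this way reduces both size and attack surface, since packages that were never used can’t be exploited either.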

Quest says that while the open source project is a starting point, he and Amaral see a lot of areas to expand. “You need to integrate it into your developer workflow and then you have different systems you deal with, different container registries, different cloud environments and all of that. […] You need a solution that can address those needs and doing that through an open source tool is challenging, and that’s where there’s a lot of opportunity to provide premium value and have a commercial product offering,” Quest explained.

Ed Sim, founder and general partner at Boldstart Ventures, one of the seed investors, sees a company bringing innovation to an area of technology where it has been lacking, while putting more control in the hands of developers. “Slim can shift that all left and give developers the power through the Slim tools to answer all those questions, and then, boom, they can develop containers, push them into production and then DevOps can do their thing,” he said.

They are just 15 people right now including the founders, but Amaral says building a diverse and inclusive company is important to him, and that’s why one of his early hires was head of culture. “One of the first two or three people we brought into the company was our head of culture. We actually have that role in our company now, and she is a rock star and a highly competent and focused person on building a great culture. Culture and diversity to me are two sides of the same coin,” he said.

The company is still in the very early stages of developing that product. In the meantime, they continue to nurture the open source project and to build a community around that. They hope to use that as a springboard to build interest in the commercial product, which should be available some time later this year.


By Ron Miller

New Relic acquires Kubernetes observability platform Pixie Labs

Two months ago, Kubernetes observability platform Pixie Labs launched into general availability and announced a $9.15 million Series A funding round led by Benchmark, with participation from GV. Today, the company is announcing its acquisition by New Relic, the publicly traded monitoring and observability platform.

The Pixie Labs brand and product will remain in place and allow New Relic to extend its platform to the edge. From the outset, the Pixie Labs team designed the service to focus on providing observability for cloud-native workloads running on Kubernetes clusters. And while most similar tools focus on operators and IT teams, Pixie set out to build a tool that developers would want to use. Using eBPF, a relatively new way to extend the Linux kernel, the Pixie platform can collect data right at the source and without the need for an agent.

At the core of the Pixie developer experience are what the company calls “Pixie scripts.” These allow developers to write their debugging workflows, though the company also provides its own set of these and anybody in the community can contribute and share them as well. The idea here is to capture a lot of the informal knowledge around how to best debug a given service.

“We’re super excited to bring these companies together because we share a mission to make observability ubiquitous through simplicity,” Bill Staples, New Relic’s Chief Product Officer, told me. “[…] According to IDC, there are 28 million developers in the world. And yet only a fraction of them really practice observability today. We believe it should be easier for every developer to take a data-driven approach to building software and Kubernetes is really the heart of where developers are going to build software.”

It’s worth noting that New Relic already had a solution for monitoring Kubernetes clusters. Pixie, however, will allow it to go significantly deeper into this space. “Pixie goes much, much further in terms of offering on-the-edge, live debugging use cases, the ability to run those Pixie scripts. So it’s an extension on top of the cloud-based monitoring solution we offer today,” Staples said.

The plan is to build New Relic integrations into Pixie’s platform and to integrate Pixie use cases with New Relic One as well.

Currently, about 300 teams use the Pixie platform. These range from small startups to large enterprises and as Staples and Asgar noted, there was already a substantial overlap between the two customer bases.

As for why he decided to sell, Pixie co-founder and CEO Zain Asgar told me that it was all about accelerating Pixie’s vision.

“We started Pixie to create this magical developer experience that really allows us to redefine how application developers monitor, secure and manage their applications,” Asgar said. “One of the cool things is when we actually met the team at New Relic and we got together with Bill and [New Relic founder and CEO] Lew [Cirne], we realized that there was almost a complete alignment around this vision […], and by joining forces with New Relic, we can actually accelerate this entire process.”

New Relic has recently done a lot of work on open-sourcing various parts of its platform, including its agents, data exporters and some of its tooling. Pixie, too, will now open-source its core tools. Open-sourcing the service was always on the company’s roadmap, but the acquisition now allows it to push this timeline forward.

“We’ll be taking Pixie and making it available to the community through open source, as well as continuing to build out the commercial enterprise-grade offering for it that extends the New Relic One platform,” Staples explained. Asgar added that it’ll take the company a little while to release the code, though.

“The same fundamental quality that got us so excited about Lew as an EIR in 2007 got us excited about Zain and Ishan in 2017 — absolutely brilliant engineers, who know how to build products developers love,” Benchmark general partner Eric Vishria told me. “New Relic has always captured developer delight. For all its power, Kubernetes completely upends the monitoring paradigm we’ve lived with for decades. Pixie brings the same easy-to-use, quick-time-to-value, no-nonsense approach to the Kubernetes world as New Relic brought to APM. It is a match made in heaven.”


By Frederic Lardinois

Cisco acquires PortShift to raise its game in DevOps and Kubernetes security

Cisco is making another acquisition to expand its reach in security solutions, this time specifically targeting DevOps and the world of container management. It is acquiring PortShift, an Israeli startup that has built a Kubernetes-native security platform.

Terms of the deal are not being disclosed. PortShift had raised about $5.3 million from Team8, an incubator and backer of security startups in Israel founded by a group of cybersecurity vets. Cisco, along with Microsoft and Walmart, is among the large corporates that back Team8. (Indeed, their participation is in part a way of getting an early look and inside scoop on some of the more cutting-edge technologies being built, and in part a way to help founders understand what corporates’ security needs are these days.)

The deal underscores not just how containerization, and specifically Kubernetes, has taken hold of the enterprise world, but also how those working in this area, and building businesses around containerization and Kubernetes, are paying increasing attention to security around them.

Others are also sharpening their focus on containers and how they are secured. Earlier this year, Venafi acquired Jetstack, which runs a certificate controller for Kubernetes; and last month StackRox raised funding for its own approach to Kubernetes security.

Cisco has been a longtime partner of Google’s around cloud services, and it has made a number of acquisitions in the area of cybersecurity in recent years. They have included Duo for $2.35 billion, OpenDNS for $635 million, and most recently Babble Labs (which helps reduce background noise in video calls, something that both improves quality and helps users ensure unwanted or private chatter isn’t inadvertently heard by unintended listeners).

But as Liz Centoni, the SVP of the Emerging Technologies and Incubation (ET&I) Group, notes in a blog post, Cisco is now turning its attention also to how it can help customers better secure applications and workloads, alongside the investments that it has made to help secure people.

In the area of containers, security issues can arise in a number of ways: through misconfiguration; through how applications are monitored; through how developers use open-source libraries; and through how companies implement regulatory compliance. Other security vulnerabilities include the use of insecure container images; problems with how containers interact with each other; containers that have been infected with rogue processes; and containers that are not properly isolated from their hosts.

Centoni notes that PortShift interested them because it provides an all-in-one platform covering the many aspects of Kubernetes security:

“Today, the application security space is highly fragmented with many vendors addressing only part of the problem,” she writes. “The Portshift team is building capabilities that span a large portion of the lifecycle of the cloud-native application.”

PortShift provides tools for better container configuration visibility, vulnerability management, configuration management, segmentation, encryption, compliance and automation.

The acquisition is expected to close in the first half of Cisco’s 2021 fiscal year, when the team will join Cisco’s ET&I Group.


By Ingrid Lunden

StackRox nabs $26.5M for a platform that secures containers in Kubernetes

Containers have become a ubiquitous cornerstone in how companies manage their data, a trend that has only accelerated in the last eight months with the larger shift to cloud services and more frequent remote working due to the coronavirus pandemic. Alongside that, startups building services to enable containers to be used better are also getting a boost.

StackRox, which develops Kubernetes-native security solutions, says that its business grew by 240% in the first half of this year, and on the back of that, it is announcing today that it has raised $26.5 million to expand its business into international markets, and to continue investing in its R&D.

The funding, which appears to be a Series C, has an impressive list of backers. It is being led by Menlo Ventures, with Highland Capital Partners, Hewlett Packard Enterprise, Sequoia Capital and Redpoint Ventures all also participating. Sequoia and Redpoint are previous investors, and the company has raised around $60 million to date.

HPE is a strategic backer in this round:

“At HPE, we are working with our customers to help them accelerate their digital transformations,” said Paul Glaser, VP, Hewlett Packard Enterprise, and Head of Pathfinder. “Security is a critical priority as they look to modernize their applications with containers. We’re excited to invest in StackRox and see it as a great fit with our new software HPE Ezmeral to help HPE customers secure their Kubernetes environments across their full application life cycle. By directly integrating with Kubernetes, StackRox enables a level of simplicity and unification for DevOps and Security teams to apply the needed controls effectively.”

Kamal Shah, the CEO, said that StackRox is not disclosing its valuation, but he confirmed it has definitely gone up. For some context, according to PitchBook data, the company was valued at $145 million in its last funding round, a Series B in 2018. Its customers today include the likes of Priceline, Brex, Reddit, Zendesk and Splunk, as well as government and other enterprise customers, in a container security market that analysts project will be worth some $2.2 billion by 2024, up from $568 million last year.

StackRox first got its start in 2014, when containers were starting to pick up momentum in the market. At the time, its focus was a little more fragmented, not unlike the container market itself: it provided solutions that could be used with Docker containers as well as others. Over time, Shah said, the company chose to hone its focus on Kubernetes, originally developed by Google and then open-sourced, and now essentially the de facto standard in containerization.

“We made a bet on Kubernetes at a time when there were multiple orchestrators, including Mesosphere, Docker and others,” he said. “Over the last two years Kubernetes has won the war and become the default choice, the Linux of the cloud and the biggest open source cloud application. We are all Kubernetes all the time because what we see in the market is that a majority of our customers are moving to it. It has over 35,000 contributors to the open source project alone, it’s not just Red Hat (IBM) and Google.” Research from CNCF estimates that nearly 80% of organizations that it surveyed are running Kubernetes in production.

That is not all good news, however, with the interest underscoring a bigger need for Kubernetes-focused security solutions for enterprises that opt to use it.

Shah says that some of the typical pitfalls in container architecture arise when they are misconfigured, leading to breaches; as well as around how applications are monitored; how developers use open-source libraries; and how companies implement regulatory compliance. Other security vulnerabilities that have been highlighted by others include the use of insecure container images; how containers interact with each other; the use of containers that have been infected with rogue processes; and having containers not isolated properly from their hosts.

But Shah noted, “Containers in Kubernetes are inherently more secure if you deploy them correctly.” And that is where StackRox’s solutions attempt to help: the company has built a multi-purpose toolkit that provides developers and security engineers with risk visibility, threat detection, compliance tools, segmentation tools and more. “Kubernetes was built for scale and flexibility, but it has lots of controls, so if you misconfigure it, it can lead to breaches. So you need a security solution to make sure you configure it all correctly,” said Shah.
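Shah’s point about “lots of controls” can be made concrete with a toy example. The sketch below lints a pod spec represented as a plain dictionary; the field names mirror the Kubernetes pod spec (`hostNetwork`, `securityContext`, `resources`), but the rule set is invented for illustration and is not StackRox’s:

```python
# Toy Kubernetes pod-spec linter illustrating the "lots of controls" problem.
# Field names follow the Kubernetes pod spec; the rules are invented.

def lint_pod(spec: dict) -> list[str]:
    """Return human-readable findings for a few risky settings."""
    issues = []
    if spec.get("hostNetwork", False):
        issues.append("pod shares the host network namespace")
    for c in spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("allowPrivilegeEscalation", True):
            issues.append(f"{c['name']}: privilege escalation not disabled")
        if "limits" not in c.get("resources", {}):
            issues.append(f"{c['name']}: no resource limits set")
    return issues

spec = {"hostNetwork": True,
        "containers": [{"name": "web",
                        "resources": {"limits": {"cpu": "1"}},
                        "securityContext": {"allowPrivilegeEscalation": False}}]}
print(lint_pod(spec))  # ['pod shares the host network namespace']
```

Multiply a handful of rules like these by the hundreds of knobs Kubernetes actually exposes, and the case for an automated security layer becomes clear.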

He added that there has been a definite shift over the years: companies that once considered security solutions optional now treat them as a core part of the IT budget — another reason why StackRox and competitors like Twistlock (acquired by Palo Alto Networks) and Aqua Security have all seen their businesses really grow.

“We’ve seen the innovation companies are enabling by building applications in containers and Kubernetes. The need to protect those applications, at the scale and pace of DevOps, is crucial to realizing the business benefits of that innovation,” said Venky Ganesan, partner, Menlo Ventures, in a statement. “While lots of companies have focused on securing the container, only StackRox saw the need to focus on Kubernetes as the control plane for security as well as infrastructure. We’re thrilled to help fuel the company’s growth as it dominates this dynamic market.”

“Kubernetes represents one of the most important paradigm shifts in the world of enterprise software in years,” said Corey Mulloy, General Partner, Highland Capital Partners, in a statement. “StackRox sits at the forefront of Kubernetes security, and as enterprises continue their shift to the cloud, Kubernetes is the ubiquitous platform that Linux was for the Internet era. In enabling Kubernetes-native security, StackRox has become the security platform of choice for these cloud-native app dev environments.”


By Ingrid Lunden

Mirantis acquires Docker Enterprise

Mirantis today announced that it has acquired Docker’s Enterprise business and team. Docker Enterprise was very much the heart of Docker’s product lineup, so this sale leaves Docker as a shell of its former, high-flying unicorn self. Docker itself, which installed a new CEO earlier this year, says it will continue to focus on tools that will advance developers’ workflows. Mirantis will keep the Docker Enterprise brand alive, though, which will surely not create any confusion.

With this deal, Mirantis is acquiring Docker Enterprise Technology Platform and all associated IP: Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane and Docker CLI. It will also inherit all Docker Enterprise customers and contracts, as well as its strategic technology alliances and partner programs. Docker and Mirantis say they will both continue to work on the Docker platform’s open-source pieces.

The companies did not disclose the price of the acquisition, but it’s surely nowhere near Docker’s valuation during any of its last funding rounds. Indeed, it’s no secret that Docker’s fortunes changed quite a bit over the years, from leading the container revolution to becoming somewhat of an afterthought after Google open-sourced Kubernetes and the rest of the industry coalesced around it. It still had a healthy enterprise business, though, with plenty of customers among large enterprises. The company says about a third of Fortune 100 and a fifth of Global 500 companies use Docker Enterprise, which is a statistic most companies would love to be able to highlight — and which makes this sale a bit puzzling from Docker’s side, unless the company assumed that few of these customers were going to continue to bet on its technology.

Update: for reasons only known to Docker’s communications team, we weren’t told about this beforehand, but the company also today announced that it has raised a $35 million funding round from Benchmark. This doesn’t change the overall gist of the story, but it does highlight the company’s new direction.

Here is what Docker itself had to say. “Docker is ushering in a new era with a return to our roots by focusing on advancing developers’ workflows when building, sharing and running modern applications. As part of this refocus, Mirantis announced it has acquired the Docker Enterprise platform business,” Docker said in a statement when asked about this change. “Moving forward, we will expand Docker Desktop and Docker Hub’s roles in the developer workflow for modern apps. Specifically, we are investing in expanding our cloud services to enable developers to quickly discover technologies for use when building applications, to easily share these apps with teammates and the community, and to run apps frictionlessly on any Kubernetes endpoint, whether locally or in the cloud.”

Mirantis itself, too, went through its ups and downs. While it started as a well-funded OpenStack distribution, today’s Mirantis focuses on offering a Kubernetes-centric on-premises cloud platform and application delivery. As the company’s CEO Adrian Ionel told me ahead of today’s announcement, today is possibly the most important day for the company.

So what will Mirantis do with Docker Enterprise? “Docker Enterprise is absolutely aligned and an accelerator of the direction that we were already on,” Ionel told me. “We were very much moving towards Kubernetes and containers aimed at multi-cloud and hybrid and edge use cases, with these goals to deliver a consistent experience to developers on any infrastructure anywhere — public clouds, hybrid clouds, multi-cloud and edge use cases — and make it very easy, on-demand, and remove any operational concerns or burdens for developers or infrastructure owners.”

Mirantis previously had about 450 employees. With this acquisition, it gains another 300 former Docker employees that it needs to integrate into its organization. Docker’s field marketing and sales teams will remain separate for some time, though, Ionel said, before they will be integrated. “Our most important goal is to create no disruptions for customers,” he noted. “So we’ll maintain an excellent customer experience, while at the same time bringing the teams together.”

This also means that for current Docker Enterprise customers, nothing will change in the near future. Mirantis says that it will accelerate the development of the product and merge its Kubernetes and lifecycle management technology into it. Over time, it will also offer a managed services solution for Docker Enterprise.

While there is already some overlap between Mirantis’ and Docker Enterprise’s customer base, Mirantis will pick up about 700 new enterprise customers with this acquisition.

With this, Ionel argues, Mirantis is positioned to go up against large players like VMware and IBM/Red Hat. “We are the one real cloud-native player with meaningful scale to provide an alternative to them without lock-in into a legacy or existing technology stack.”

While this is clearly a day the Mirantis team is celebrating, it’s hard not to look at this as the end of an era for Docker, too. The company says it will share more about its future plans today, but didn’t make any spokespeople available ahead of this announcement.


By Frederic Lardinois

Snyk grabs $70M more to detect security vulnerabilities in open source code and containers

A growing number of IT breaches has led to security becoming a critical and central aspect of how computing systems are run and maintained. Today, a startup that focuses on one specific area — developing security tools aimed at developers and the work they do — has closed a major funding round that underscores the growth of that area.

Snyk — a London and Boston-based company that got its start identifying and developing security solutions for developers working on open source code — is today announcing that it has raised $70 million, funding that it will be using to continue expanding its capabilities and overall business. For example, the company has more recently expanded to building security solutions to help developers identify and fix vulnerabilities around containers, an increasingly standard unit of software used to package up and run code across different computing environments.

Open source — Snyk works as an integration into existing developer workflows, compatible with the likes of GitHub, Bitbucket and GitLab, as well as CI/CD pipelines — was an easy target to hit. It’s used in 95% of all enterprises, with up to 77% of open source components liable to have vulnerabilities, by Snyk’s estimates. Containers are a different issue.
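Mechanically, open source scanning boils down to matching a project’s declared dependencies against a vulnerability database. Here is a toy version of that matching step; the package names, versions and advisory entries are all invented, and real tools like Snyk resolve semver ranges and transitive dependencies rather than exact pins:

```python
# Toy dependency scanner: match declared dependencies against a
# vulnerability database. Advisories and packages are invented; real
# scanners handle version ranges, transitive deps, and fix suggestions.

ADVISORIES = {
    ("leftpad", "1.0.0"): "ADVISORY-0001: prototype pollution",
    ("httplib", "2.3.1"): "ADVISORY-0002: request smuggling",
}

def scan(dependencies: dict[str, str]) -> dict[str, str]:
    """Return {package: advisory} for every vulnerable pinned version."""
    return {pkg: ADVISORIES[(pkg, ver)]
            for pkg, ver in dependencies.items()
            if (pkg, ver) in ADVISORIES}

manifest = {"leftpad": "1.0.0", "httplib": "2.4.0"}
print(scan(manifest))  # only leftpad 1.0.0 matches an advisory
```

The developer-first twist is where this check runs: inside the pull request and CI/CD pipeline, so a vulnerable dependency is flagged before it ships rather than in a quarterly audit.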

“The security concerns around containers are almost more about ownership than technology,” Guy Podjarny, the president who co-founded the company with Assaf Hefetz and Danny Grander, explained in an interview. “They are in a twilight zone between infrastructure and code. They look like virtual machines and suffer many of the same concerns such as being unpatched or having permissions that are too permissive.”

While containers are present in fewer than 30% of computing environments today, their growth is on the rise, according to Gartner, which forecasts that by 2022, over 75% of global organizations will run containerized applications. Snyk estimates that a full 44% of Docker image scans (Docker being one of the major container vendors) have known vulnerabilities.

This latest round is being led by Accel with participation from existing investors GV and Boldstart Ventures. These three, along with a fourth investor (Heavybit) also put $22 million into the company as recently as September 2018. That round was made at a valuation of $100 million, and from what we understand from a source close to the startup, it’s now in the “range” of $500 million.

“Accel has a long history in the security market and we believe Snyk is bringing a truly unique, developer-first approach to security in the enterprise,” Matt Weigand of Accel said in a statement. “The strength of Snyk’s customer base, rapidly growing free user community, leadership team and innovative product development prove the company is ready for this next exciting phase of growth and execution.”

Indeed, the company has hit some big milestones in the last year that could explain that hike. It now has some 300,000 developers using it around the globe, with its customer base growing some 200% this year and including the likes of Google, Microsoft, Salesforce and ASOS. (Side note: you know that if developers at developer-centric companies working at the vanguard of computing, like Google and Microsoft, are using your product, that is a good sign.) Notably, that growth has largely come by word of mouth — inbound interest.

In July of this year, the company took on a new CEO, Peter McKay, who replaced Podjarny. McKay was the company’s first investor and has a track record in helping to grow large enterprise security businesses, a sign of the trajectory that Snyk is hoping to follow.

“Today, every business, from manufacturing to retail and finance, is becoming a software business,” said McKay. “There is an immediate and fast growing need for software security solutions that scale at the same pace as software development. This investment helps us continue to bring Snyk’s product-led and developer-focused solutions to more companies across the globe, helping them stay secure as they embrace digital innovation – without slowing down.”



By Ingrid Lunden

VMware is bringing VMs and containers together, taking advantage of Heptio acquisition

At VMworld today in San Francisco, VMware introduced a new set of services for managing virtual machines and containers in a single view called Tanzu. The product takes advantage of the knowledge the company gained when it acquired Heptio last year.

As companies face an increasingly fragmented landscape of maintaining traditional virtual machines alongside a more modern containerized Kubernetes environment, managing the two together has created its own set of management challenges for IT. This is further complicated by trying to manage resources across multiple clouds, as well as in-house data centers. Finally, companies need to manage legacy applications while looking to build newer containerized applications.

VMware’s Craig McLuckie and fellow Heptio co-founder Joe Beda were part of the original Kubernetes development team; they came to VMware via last year’s acquisition. McLuckie believes that Tanzu can help with all of this by applying the power of Kubernetes across this complex management landscape.

“The intent is to construct a portfolio that has a set of assets that cover every one of these areas, a robust set of capabilities that bring the Kubernetes substrate everywhere — a control plane that enables organizations to start to think about [and view] these highly fragmented deployments with Kubernetes [as the] common lens, and then the technologies you need to be able to bring existing applications forward and to build new application and to support third party vendors bringing their applications into [this],” McLuckie explained.

It’s an ambitious vision that involves bringing together not only VMware’s traditional VM management tooling and Kubernetes, but also open source pieces and other recent acquisitions, including Bitnami and CloudHealth, along with Wavefront, which it acquired in 2017. Although the vision was defined long before the acquisition of Pivotal last week, it will also play a role in this. Originally that was as a partner, but now it will be as part of VMware.

The idea is to eventually cover the entire gamut of building, running and managing applications in the enterprise. Among the key pieces introduced today as technology previews are Tanzu Mission Control, a tool for managing Kubernetes clusters wherever they live, and Project Pacific, which embeds Kubernetes natively into vSphere, the company’s virtualization platform, bringing together virtual machines and containers.


VMware Tanzu. Slide: VMware

McLuckie says bringing virtual machines and Kubernetes together in this fashion provides a couple of key advantages. “One is being able to bring a robust, modern API-driven way of thinking about accessing resources. And it turns out that there is this really good technology for that. It’s called Kubernetes. So being able to bring a Kubernetes control plane to vSphere is creating a new set of experiences for traditional VMware customers that is moving much closer to a kind of cloud-like agile infrastructure type of experience. At the same time, vSphere is bringing a whole bunch of capabilities to Kubernetes that’s creating more efficient isolation capabilities,” he said.

When you think about the cloud native vision, it has always been about enabling companies to manage resources wherever they live through a single lens, and that is what the set of capabilities VMware has brought together under Tanzu is intended to do. “Kubernetes is a way of bringing a control metaphor to modern IT processes. You provide an expression of what you want to have happen, and then Kubernetes takes that and interprets it and drives the world into that desired state,” McLuckie explained.
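That “desired state” description maps directly to how Kubernetes manifests work. As a sketch (the image and names here are hypothetical, not from VMware), a minimal Deployment declares what you want, and the control plane continuously drives the cluster to match it:

```yaml
# Declare the desired state: three replicas of a web service.
# Kubernetes continuously reconciles the cluster toward this spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Apply it with `kubectl apply -f web.yaml`; if a pod dies, Kubernetes restarts it to restore the declared state, which is the control metaphor McLuckie describes.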

If VMware can take all of the pieces in the Tanzu vision and make this happen, it will be as powerful as McLuckie believes it to be. It’s certainly an interesting attempt to bring all of a company’s application and infrastructure creation and management under one roof using Kubernetes as the glue, and with Heptio co-founders McLuckie and Beda involved, it certainly has the expertise in place to drive the vision.


By Ron Miller

Serverless and containers: Two great technologies that work better together

Cloud native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions the exact amount of resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, it may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on prem or in any public cloud, or at least that is the ideal.
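As an illustration of the packaging step, a minimal Dockerfile (a sketch; the service file is hypothetical) turns one microservice into a portable container image that runs the same on prem or in any public cloud:

```dockerfile
# Package a single Python microservice as a portable container image.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY service.py .                  # hypothetical service code
EXPOSE 8080
CMD ["python", "service.py"]
```

Built once with `docker build`, the resulting image is the unit that Kubernetes then schedules and scales.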

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right amount of resources to run a particular workload for the right amount of time, and no more.

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and while vendors have recognized the beauty of the approach, as one engineer pointed out, there is no free lunch in processes this complex, and it won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition: “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time or delivery to a specific location. He says in reality it won’t be fully automated, at least not while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.
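The kind of tuning Whittington describes shows up concretely in Kubernetes as per-container resource requests and limits. A sketch (the numbers are illustrative, not recommendations):

```yaml
# Per-container tuning: request a floor of resources and cap the ceiling,
# so the scheduler neither starves the workload nor over-provisions it.
apiVersion: v1
kind: Pod
metadata:
  name: tuned-worker
spec:
  terminationGracePeriodSeconds: 30   # roughly the "kill time" knob
  containers:
  - name: worker
    image: example.com/worker:1.0     # hypothetical image
    resources:
      requests:
        cpu: "250m"       # guaranteed floor
        memory: "128Mi"
      limits:
        cpu: "500m"       # hard ceiling
        memory: "256Mi"
```

Settings like these are exactly what developers still fiddle with, even on platforms that otherwise automate the infrastructure away.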

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that bring this ideal together. For instance, Google announced a service called Cloud Run at Google Cloud Next last month. It’s based on the open source Knative project and, in essence, combines the serverless developer experience with running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring the two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud native technologies, whatever the approach may be. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.


By Ron Miller

Steve Singh stepping down as Docker CEO

In a surprising turn of events, TechCrunch has learned that Docker CEO Steve Singh will be stepping down after two years at the helm, and former Hortonworks CEO Rob Bearden will be taking over. An email announcement went out this morning to Docker employees.

People close to the company confirmed that Singh will be leaving the CEO position, staying on the job for several months to help Bearden with the transition. He will then remain with the organization in his role as Chairman of the Board. They indicated that Bearden has been working closely with Singh over the last several months as a candidate to join the board and as a consultant to the executive team.

Singh clicked with him and viewed him as a possible successor, especially given his background in leadership positions at several open source companies, including taking Hortonworks public before selling to Cloudera last year. Singh apparently saw someone who could take the company to the next level as he moved on. As one person put it, he was tired of working 75 hours a week, but he wanted to leave the company in the hands of a capable steward.

Last week, in an interview at DockerCon, the company’s annual customer conference in San Francisco, Singh appeared tired, but also like a leader who was confident in his position and who saw a bright future for his company. He spoke openly about his leadership philosophy and his efforts to lift the company from the doldrums it was in when he took over two years prior, helping transform it from a mostly free open source offering into a revenue-generating company with 750 paying enterprise customers.

In fact, he told me that under his leadership the company was on track to become free cash flow positive by the end of this fiscal year, a step he said would mean that Docker would no longer need to seek outside capital. He even talked of the company eventually going public.

Apparently, he felt it was time to pass the torch before the company took those steps, saw a suitable successor in Bearden and offered him the position. While it might have made more sense to announce this at DockerCon with the spotlight focused on the company, it was not a done deal yet by the time the conference was underway in San Francisco, people close to the company explained.

Docker took a $92 million investment last year, which some saw as a sign of continuing struggles for the company, but Singh said he took the money to continue to invest in building revenue-generating enterprise products, some of which were announced at DockerCon last week. He indicated that the company would likely not require any additional investment moving forward.

As for Bearden, he is an experienced executive with a history of successful exits. In addition to his experience at Hortonworks, he was COO at SpringSource, a developer tool suite that was sold to VMware for $420 million in 2009 (and is now part of Pivotal). He was also COO at JBoss, an open source middleware company acquired by Red Hat in 2006.

Whether he will do the same with Docker remains to be seen, but as the new CEO, it will be up to him to guide the company moving forward to the next steps in its evolution, whether that eventually results in a sale or the IPO that Singh alluded to.



By Ron Miller

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition last year that is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then, as you begin to shift to the cloud, move to Azure Red Hat OpenShift — such a catchy name — without any fuss, as you have the same management tools you are used to using.

As Red Hat becomes part of IBM, it sees it as more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers as it holds its final customer conference as an independent company. Red Hat’s executive vice president and president of products and technologies certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft, understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.
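In practice, KEDA expresses those event triggers declaratively. A ScaledObject of roughly this shape (a sketch based on KEDA’s published API; the deployment and queue names are hypothetical) scales a workload with the length of a queue:

```yaml
# Scale the "orders" deployment based on the length of an Azure Storage queue.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler
spec:
  scaleTargetRef:
    name: orders              # hypothetical deployment to scale
  minReplicaCount: 0          # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
  - type: azure-queue
    metadata:
      queueName: incoming-orders
      queueLength: "5"        # target messages per replica
```

The controller watches the event source and adjusts replicas, which is the “event-driven autoscaling” the name promises.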

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations too including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.


By Ron Miller

Docker updates focus on simplifying containerization for developers

Over the last five years, Docker has become synonymous with software containers, but that doesn’t mean every developer understands the technical details of building, managing and deploying them. At DockerCon this week, the company’s customer conference taking place in San Francisco, it announced new tools that have been designed to make it easier for developers, who might not be Docker experts, to work with containers.

As the technology has matured, the company has seen the market broaden, but in order to take advantage of that, it needs to provide a set of tools that make it easier to work with. “We’ve found that customers typically have a small cadre of Docker experts, but there are hundreds, if not thousands, of developers who also want to use Docker. And we reasoned, how can we help them get productive very, very quickly, without them having to become Docker experts,” Scott Johnston, chief product officer at Docker, told TechCrunch.

To that end, it announced a beta of Docker Enterprise 3.0, which includes several key components. For starters, Docker Desktop Enterprise lets IT set up a Docker environment with the kind of security and deployment templates that make sense for each customer. Developers can then pick the templates that make sense for their implementations, while conforming with the company’s compliance and governance rules.

“These templates already have IT-approved container images, and have IT-approved configuration settings. And what that means is that IT can provide these templates through these visual tools that allow developers to move fast and choose the ones they want without having to go back for approval,” Johnston explained.

The idea is to let the developers concentrate on building applications, and the templates provide all the Docker tooling pre-built and ready to go, so they don’t have to worry about all of that.

Another piece of this is Docker Applications, which allows developers to build complex containerized applications as a single package and deploy them to any infrastructure they wish, on-prem or in the cloud. Five years ago, when Docker really got started with containers, they were a simpler idea, often involving just a single container, but as developers broke those larger applications down into microservices, it created a new level of difficulty, especially for operations teams that had to deploy these increasingly large sets of application containers.

“Operations can now programmatically change the parameters for the containers, depending on the environments without having to go in and change the application. So you can imagine that ability lowers the friction of having to manage all these files in the first place,” he said.

The final piece of that is the orchestration layer and the popular way to handle that today is with Kubernetes. Docker has created its own flavor of Kubernetes, based on the open source tool. Johnston says, as with the other two pieces, the goal here is to take a powerful tool like Kubernetes and reduce the overall complexity associated with running it, while making it fully compatible with a Docker environment.

For that, Docker announced Docker Kubernetes Service (DKS), which has been designed with Docker users in mind, including support for Docker Compose, a scripting tool that has been popular with Docker users. While you are free to use any flavor of Kubernetes you wish, Docker is offering DKS as a Docker-friendly version for developers.
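For readers unfamiliar with it, Docker Compose describes a multi-container application in a single file. A minimal example (a sketch; the service and image names are hypothetical):

```yaml
# docker-compose.yml: two cooperating services, started with `docker-compose up`.
version: "3.7"
services:
  web:
    image: example.com/web:1.0    # hypothetical application image
    ports:
      - "8080:8080"
    depends_on:
      - redis                     # start the cache before the web tier
  redis:
    image: redis:5-alpine         # off-the-shelf cache
```

A file like this is the kind of existing developer artifact DKS aims to keep working on top of Kubernetes.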

All of these components have one thing in common besides being part of Docker Enterprise 3.0: they are trying to reduce the complexity associated with deploying and managing containers and to abstract away the most difficult parts, so that developers can concentrate on developing without having to worry about the technical underpinnings of building and deploying containers. At the same time, Docker is trying to make it easier for the operations team to manage it all. That is the goal, at least. In the end, DevOps teams will be the final judges of how well Docker has done, once these tools become generally available later this year.

The Docker Enterprise 3.0 Beta will be available later this quarter.


By Ron Miller

Docker looks to partners and packages to ease container implementation

Docker appears to be searching for ways to simplify the core value proposition of the company — creating, deploying and managing containers. While most would agree it has revolutionized software development, like many technology solutions, it takes a certain level of expertise and staffing to pull off. At DockerCon, the company’s customer conference taking place this week in San Francisco, Docker announced several ways it could help customers with the tough parts of implementing a containerized solution.

For starters, the company announced a beta of Docker Enterprise 3.0 this morning. That update is all about making life simpler for developers. As companies move to containerized environments, it’s a challenge for all but the largest organizations like Google, Amazon and Facebook, all of which have massive resource requirements and correspondingly large engineering teams.

Most companies don’t have that luxury, though, and Docker recognizes that if it wants to bring containerization to a larger number of customers, it has to create packages and programs that make it easier to implement.

Docker Enterprise 3.0 is a step toward providing a solution that lets developers concentrate on the development aspects, while working with templates and other tools to simplify the deployment and management side of things.

The company sees customers struggling with implementation and with how to configure and build a containerized workflow, so it is working with systems integrators to help smooth out the difficult parts. Today, the company announced Docker Enterprise as a Service, with the goal of helping companies through the process of setting up and managing a containerized environment, using the Docker stack and adjacent tooling like Kubernetes.

The service provider will take care of operational details like managing upgrades, rolling out patches, doing backups and undertaking capacity planning — all of those operational tasks, which require a high level of knowledge around enterprise container stacks.

Capgemini will be the first go-to-market partner. “Capgemini has a combination of automation, technology tools, as well as services on the back end that can manage the installation, provisioning and management of the enterprise platform itself in cases where customers don’t want to do that, and they want to pay someone to do that for them,” Scott Johnston, chief product officer at Docker told TechCrunch.

The company has released tools in the past to help customers move legacy applications into containers without a lot of fuss. Today, the company announced a solution bundle called Accelerate Greenfield, a set of tools designed to help customers get up and running as container-first development companies.

“This is for those organizations that may be a little further along. They’ve gone all in on containers, committing to taking a container-first approach to new application development,” Johnston explained. He says this could be cloud native microservices or even a LAMP stack application, but the point is that they want to put everything in containers on a container platform.

Accelerate Greenfield is designed to do that. “They get the benefits where they know that from the developer to the production endpoint, it’s secure. They have a single way to define it all the way through the lifecycle. They can make sure that it’s moving quickly, and they have that portability built into the container format, so they can deploy [wherever they wish],” he said.

These programs and products are all about providing a level of hand-holding, either by playing a direct consultative role, working with a systems integrator or providing a set of tools and technologies to walk the customer through the containerization lifecycle. Whether they provide a sufficient level of help that customers require is something we will learn over time as these programs mature.


By Ron Miller

Google Cloud Run brings serverless and containers together

Two of the biggest trends in application development in recent years have been the rise of serverless and containerization. Today at Google Cloud Next, the company announced a new product called Cloud Run that is designed to bring the two together. At the same time, the company also announced Cloud Run for GKE, which is specifically designed to run on Google’s version of Kubernetes.

Oren Teich, director of product management for serverless, says these products came out of discussions with customers. As he points out, developers like the flexibility and agility they get using serverless architecture, but have been looking for more than just compute resources. They want to get access to the full stack, and to that end the company is announcing Cloud Run.

“Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We’re taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it’s end-to-end managed,” Teich explained.

As for the GKE tool, it provides the same kinds of benefits, except for developers running their containers on Google’s GKE version of Kubernetes. Keep in mind, developers could be using any version of Kubernetes their organizations happen to have chosen, so it’s not a given that they will be using Google’s flavor of Kubernetes.

“What this means is that a developer can take the exact same experience, the exact same code they’ve written — and they have G Cloud command line, the same UI and our console and they can just with one-click target the destination they want,” he said.

All of this is made possible through yet another open source project the company introduced last year called Knative. “Cloud Run is based on Knative, an open API and runtime environment that lets you run your serverless workloads anywhere you choose — fully managed on Google Cloud Platform, on your GKE cluster or on your own self-managed Kubernetes cluster,” Teich and Eyal Manor, VP of engineering, wrote in a blog post introducing Cloud Run.
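Because Cloud Run consumes the Knative serving API, the deployable unit is a Knative Service manifest. A minimal sketch (the project and image names are hypothetical) is all that is needed to get back a routed, autoscaled URL:

```yaml
# A Knative Service: supply a container image, get a scaled, routed URL back.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/hello:latest   # hypothetical image
        ports:
        - containerPort: 8080
```

The same manifest, in principle, runs fully managed on Cloud Run, on a GKE cluster or on any Kubernetes cluster with Knative installed, which is the portability Teich is describing.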

Serverless, as you probably know by now, is a bit of a misnomer. It’s not really taking away servers, but it is eliminating the need for developers to worry about them. Instead of loading their application on a particular virtual machine, the cloud provider, in this case Google, provisions the exact level of resources required to run an operation. Once that’s done, those resources go away, so you only pay for what you use at any given moment.
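The developer-facing result is that the unit of deployment shrinks to a small handler. A sketch in the style of a Google Cloud Functions HTTP function (the platform normally passes a Flask request object; this stand-alone version only assumes an object with an `args` mapping):

```python
# A minimal HTTP handler in the serverless style: the platform provisions
# resources per request, runs this function, then scales back down.
def hello(request):
    """Return a greeting; `request` may be None or carry query args."""
    name = "world"
    if request is not None and getattr(request, "args", None) and "name" in request.args:
        name = request.args["name"]
    return f"Hello, {name}!"
```

Everything else in the stack, from routing to scaling to billing, is the provider’s problem, which is the appeal Sinha and Teich describe.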


By Ron Miller

Google’s hybrid cloud platform is coming to AWS and Azure

Google’s Cloud Services Platform, for managing hybrid clouds that span on-premise data centers and the Google cloud, is coming out of beta today. The company is also changing the product’s name to Anthos, a name that either refers to a lost Greek tragedy, the name of an obscure god in the Marvel universe, or rosemary. That by itself would be interesting but minor news. What makes this interesting is that Google also today announced that Anthos will run on third-party clouds as well, including AWS and Azure.

“We will support Anthos on AWS and Azure as well, so people get one way to manage their application and that one way works across their on-premise environments and all other clouds,” Google’s senior VP for its technical infrastructure, Urs Hölzle, explained in a press conference ahead of today’s announcement.

So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure.

“You can use one consistent approach — one open-source based approach — across all environments,” Hölzle said. “I can’t really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it’s not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds — and actually on-premise environments, too — look the same.”

Anthos/Google Cloud Services Platform is based on the Google Kubernetes Engine, as well as other open source projects like the Istio service mesh. It’s also hardware agnostic, meaning that users can take their current hardware and run the service on top of that without having to immediately invest in new servers.

Why is Google doing this? “We hear from our customers that multi-cloud and hybrid is really an acute pain point,” Hölzle said. He noted that containers are the enabling technology for this but that few enterprises have developed a unifying strategy to manage these deployments and that it takes expertise in all major clouds to get the most out of them.

Enterprises already have major investments in their infrastructure and have created relationships with their vendors, though, so it’s no surprise that Google is launching Anthos with over 30 major hardware and software partners ranging from Cisco to Dell EMC, HPE and VMware, as well as application vendors like Confluent, Datastax, Elastic, Portworx, Tigera, Splunk, GitLab, MongoDB and others.

Anthos is a subscription-based service, with list prices starting at $10,000 per month per 100 vCPU block. Enterprise prices tend to be up for negotiation, though, so many customers will likely pay less.

It’s one thing to use a service like this for new applications, but many enterprises already have plenty of line-of-business tools that they would like to bring to the cloud as well. For them, Google is launching the first beta of Anthos Migrate today. This service will auto-migrate VMs from on-premises or other clouds into containers in the Google Kubernetes Engine. The promise here is that this is essentially an automatic process and once the container is on Google’s platform, you’ll be able to use all of the other features that come with the Anthos platform, too.

Google’s Hölzle noted that the emphasis here was on making this migration as easy as possible. “There’s no manual effort there,” he said.


By Frederic Lardinois