We’ll talk even more Kubernetes at TC Sessions: Enterprise with Microsoft’s Brendan Burns and Google’s Tim Hockin

You can’t go to an enterprise conference these days without talking containers — and specifically the Kubernetes container management system. It’s no surprise, then, that we’ll do the same at our inaugural TC Sessions: Enterprise event on September 5 in San Francisco. As we announced last week, Kubernetes co-founder Craig McLuckie and Aparna Sinha, Google’s director of product management for Kubernetes, will join us to talk about the past, present and future of containers in the enterprise.

In addition, we can now announce that two other Kubernetes co-founders will join us: Google principal software engineer Tim Hockin, who currently works on Kubernetes and the Google Kubernetes Engine, and Microsoft distinguished engineer Brendan Burns, who was the lead engineer for Kubernetes during his time at Google.

With this, we’ll have three of the four Kubernetes co-founders onstage to talk about the five-year-old project.

Before joining the Kubernetes effort, Hockin worked on internal Google projects like Borg and Omega, as well as the Linux kernel. On the Kubernetes project, he worked on core features and early design decisions involving networking, storage, nodes, multi-cluster support, resource isolation and cluster sharing.

While his colleagues Craig McLuckie and Joe Beda decided to parlay their work on Kubernetes into a startup, Heptio, which they then successfully sold to VMware for about $550 million, Burns took a different route and joined the Microsoft Azure team three years ago.

I can’t think of a better group of experts to talk about the role Kubernetes is playing in reshaping how enterprises build software.

If you want a bit of a preview, here is my conversation with McLuckie, Hockin and Microsoft’s Gabe Monroy about the history of the Kubernetes project.

Early-bird tickets are now on sale for $249; students can grab a ticket for just $75. Book your tickets here before prices go up.


By Frederic Lardinois

We’re talking Kubernetes at TC Sessions: Enterprise with Google’s Aparna Sinha and VMware’s Craig McLuckie

Over the past five years, Kubernetes has grown from a project inside of Google to an open source powerhouse with an ecosystem of products and services, attracting billions of dollars in venture investment. In fact, we’ve already seen some successful exits, including one from one of our panelists.

On September 5 at TC Sessions: Enterprise, we’re going to be discussing the rise of Kubernetes with two industry veterans. For starters, we have Aparna Sinha, director of product management for Kubernetes and the newly announced Anthos product. Sinha was in charge of several early Kubernetes releases and has worked on the Kubernetes team at Google since 2016. Prior to joining Google, she had 15 years of experience in enterprise software.

Craig McLuckie will also be joining the conversation. He’s one of the original developers of Kubernetes at Google. He went on to co-found his own Kubernetes startup, Heptio, with Joe Beda, another Google Kubernetes alum. They sold the company to VMware last year for about $550 million after raising $33.5 million, according to Crunchbase data.

The two bring a vast reservoir of knowledge and will be discussing the history of Kubernetes, why Google decided to open source it and how it came to grow so quickly. Two other Kubernetes luminaries will be joining them. We’ll have more about them in another post soon.

Kubernetes is a container orchestration engine. Instead of developing large monolithic applications that sit on virtual machines, each container runs a small part of the application. As the components get smaller, an orchestration layer is required to deliver the containers when they are needed and make them go away when they are no longer required. Kubernetes acts as the orchestra leader.
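To make that concrete, here is a minimal sketch of what handing a containerized component to the orchestrator looks like, using the official Kubernetes Python client; the deployment name, labels and image are illustrative, and it assumes a cluster and kubeconfig are already set up:

```python
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig (e.g. ~/.kube/config)

# Ask Kubernetes to keep three copies of a small containerized component
# running; the orchestrator decides where they run and restarts failures.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),  # illustrative name
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.17")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Once the Deployment exists, Kubernetes itself keeps the requested three replicas running, replacing containers that die and scheduling them onto healthy nodes, which is the "orchestra leader" role described above.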

As Kubernetes, containerization and the cloud-native ethos they encompass have grown, they have helped drive the enterprise shift to the cloud in general. If you can write your code once and use it in the cloud or on-prem, you don’t have to manage applications with different tool sets, and that has had broad appeal for enterprises making the shift to the cloud.

TC Sessions: Enterprise (September 5 at San Francisco’s Yerba Buena Center) will take on the big challenges and promise facing enterprise companies today. TechCrunch’s editors will bring to the stage founders and leaders from established and emerging companies to address rising questions, like the promised revolution from machine learning and AI, intelligent marketing automation and the inevitability of the cloud, as well as the outer reaches of technology, like quantum computing and blockchain.

Tickets are now available for purchase on our website at the early-bird rate of $395; student tickets are just $245. Grab them here.

We have a limited number of Startup Demo Packages available for $2,000, which includes four tickets to attend the event.

For each ticket purchased for TC Sessions: Enterprise, you will also be registered for a complimentary Expo Only pass to TechCrunch Disrupt SF on October 2-4.


By Ron Miller

How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure that runs everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of that new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open sourced five years ago.

Today, Kubernetes is the fastest-growing open-source project, and earlier this month the twice-yearly KubeCon + CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making it the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who went on to found his own startup, Heptio, which he sold to VMware); Tim Hockin, another Googler who was an early member of the project and also worked on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, and then sold it to Microsoft, where he is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.


By Frederic Lardinois

Takeaways from KubeCon; the latest on Kubernetes and cloud native development

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller discuss the major announcements that came out of the Linux Foundation’s European KubeCon/CloudNativeCon conference and consider the future of Kubernetes and cloud-native technologies.

Nearly doubling in size year-over-year, this year’s KubeCon conference brought big news and big players, with major announcements coming from some of the world’s largest software vendors including Google, AWS, Microsoft, Red Hat, and more. Frederic and Ron discuss how the Kubernetes project grew to such significant scale and which new initiatives in cloud-native development show the most promise from both a developer and enterprise perspective.

“This ecosystem starts sprawling, and we’ve got everything from security companies to service mesh companies to storage companies. Everybody is here. The whole hall is full of them. Sometimes it’s hard to distinguish between them because there are so many competing start-ups at this point.

“I’m pretty sure we’re going to see a consolidation in the next six months or so where some of the bigger players, maybe Oracle, maybe VMware, will start buying some of these smaller companies. And I’m sure the show floor will look quite different about a year from now. All the big guys are here because they’re all trying to figure out what’s next.”

Frederic and Ron also dive deeper into the startup ecosystem rapidly developing around Kubernetes and other cloud-native technologies and offer their take on what areas of opportunity may prove to be most promising for new startups and founders down the road.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


By Arman Tabatabai

Serverless and containers: Two great technologies that work better together

Cloud-native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions exactly the resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, it may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud-native world, you should be able to develop code once and run it anywhere, on-prem or on any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right amount of resources to run a particular workload for the right amount of time, and no more.
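As a sketch of what this model looks like from the developer's side, here is a function in the shape of AWS Lambda's Python runtime; the handler name and response body are illustrative, and the point is that nothing in the code declares a server, VM or container:

```python
import json

# A sketch of the serverless model: the developer supplies only this
# handler, and the vendor provisions compute for exactly the duration
# of each invocation, then tears it down again.
def handler(event, context):
    # `event` carries the trigger payload; `context` carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```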

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of the approach. But as one engineer pointed out, there is no free lunch in processes this complex, and it won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition: “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time or delivery to a specific location. He says in reality it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.
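That tuning already has a concrete shape in Kubernetes: developers declare per-container floors and ceilings rather than leaving everything to the platform. Here is a minimal sketch with the official Kubernetes Python client, with the name, image and numbers purely illustrative:

```python
from kubernetes import client

# A sketch of the per-container tuning Whittington describes: requests
# set the guaranteed floor the scheduler uses for placement, and limits
# set the hard ceiling the runtime enforces.
container = client.V1Container(
    name="worker",
    image="example/worker:latest",  # illustrative image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "128Mi"},  # guaranteed floor
        limits={"cpu": "500m", "memory": "256Mi"},    # hard ceiling
    ),
)
```

This container spec would then slot into a pod template like the one in the earlier deployment sketch.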

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that bring this ideal together. For instance, Google announced a service called Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project and, in essence, brings the benefits of serverless to developers running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud-native technologies, whatever the approach may be. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open-source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.


By Ron Miller

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud-native computing world. At KubeCon, the twice-yearly festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo – to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate, and that developers can always work directly with their APIs when needed. He noted, too, that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features; it only defines a common set of APIs.
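To illustrate what a common set of APIs means in practice, here is a sketch of an SMI-style TrafficSplit resource applied with the official Kubernetes Python client. The service names are illustrative, and the group/version and field names follow the early SMI spec, so treat them as assumptions rather than a definitive schema:

```python
from kubernetes import client, config

config.load_kube_config()

# A declarative traffic policy: send 90% of traffic for the "checkout"
# service to v1 and 10% to a v2 canary. SMI only defines the API; a
# conforming mesh (Istio, Linkerd, etc.) does the actual traffic shifting.
traffic_split = {
    "apiVersion": "split.smi-spec.io/v1alpha1",  # early SMI group/version
    "kind": "TrafficSplit",
    "metadata": {"name": "checkout-rollout"},    # illustrative name
    "spec": {
        "service": "checkout",  # the root service clients address
        "backends": [
            {"service": "checkout-v1", "weight": 90},
            {"service": "checkout-v2", "weight": 10},
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="split.smi-spec.io",
    version="v1alpha1",
    namespace="default",
    plural="trafficsplits",
    body=traffic_split,
)
```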

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.


By Frederic Lardinois

Atlassian puts its Data Center products into containers

It’s KubeCon + CloudNativeCon this week, and in the slew of announcements, one name stood out: Atlassian. The company is best known as the maker of tools that allow developers to work more efficiently, not as a cloud infrastructure provider. In this age of containerization, though, even Atlassian can bask in the glory that is Kubernetes, because the company today announced that it is launching Atlassian Software in Kubernetes (ASK), a new solution that allows enterprises to run and manage its on-premises applications like Jira Data Center as containers with the help of Kubernetes.

To build this solution, Atlassian partnered with Praqma, a continuous delivery and DevOps consultancy. It’s also making ASK available as open source.

As the company admits in today’s announcement, running a Data Center application and ensuring high availability can be a lot of work using today’s methods. With ASK and by containerizing the applications, scaling and management should become easier — and downtime more avoidable.

“Availability is key with ASK. Automation keeps mission-critical applications running whatever happens,” the company explains. “If a Jira server fails, Data Center will automatically redirect traffic to healthy servers. If an application or server crashes Kubernetes automatically reconciles by bringing up a new application. There’s also zero downtime upgrades for Jira.”

ASK handles the scaling and most admin tasks, in addition to offering a monitoring solution based on the open-source Grafana and Prometheus projects.

Containers are slowly becoming the distribution medium of choice for a number of vendors. As enterprises move their existing applications to containers, it makes sense for them to also expect to manage their existing on-premises applications from third-party vendors in the same systems. For some vendors, that may mean a shift away from per-server licensing to per-seat licensing, so there are business implications, but in general, it’s a logical move for most.


By Frederic Lardinois

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition announced last year that is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then, as you begin to shift to the cloud, move to Azure Red Hat OpenShift — such a catchy name — without any fuss, because you keep the same management tools you’re used to.

As Red Hat becomes part of IBM, it sees that it’s more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers, as it holds its final customer conference as an independent company. Red Hat’s executive vice president and president of products and technologies certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft, understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.
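As a sketch of the pattern KEDA enables, here is roughly what an event-driven autoscaling rule looks like as a custom resource, applied with the official Kubernetes Python client. The deployment and queue names are illustrative, and the API group/version and field names follow KEDA's early alpha spec, so treat them as assumptions:

```python
from kubernetes import client, config

config.load_kube_config()

# Scale the "order-processor" deployment based on queue depth: zero
# replicas when the queue is empty, up to 20 under load. KEDA watches
# the event source and adjusts the deployment accordingly.
scaled_object = {
    "apiVersion": "keda.k8s.io/v1alpha1",  # early KEDA group/version
    "kind": "ScaledObject",
    "metadata": {"name": "order-processor"},  # illustrative name
    "spec": {
        "scaleTargetRef": {"deploymentName": "order-processor"},
        "minReplicaCount": 0,   # scale to zero between events
        "maxReplicaCount": 20,
        "triggers": [
            {
                "type": "azure-queue",
                "metadata": {"queueName": "orders", "queueLength": "5"},
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.k8s.io",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```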

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations, too, including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.


By Ron Miller

Google expands its container service with GKE Advanced

With its Kubernetes Engine (GKE), Google Cloud has long offered a managed service for running containers on its platform. Kubernetes users tend to have a variety of needs, but so far, Google only offered a single tier of GKE that wasn’t necessarily geared toward the high-end enterprise users the company is trying to woo. Today, however, the company announced a new advanced edition of GKE that introduces a financially backed SLA, additional security tools and new automation features. You can think of GKE Advanced as the enterprise version of GKE.

The new service will launch in the second quarter of the year; Google hasn’t yet announced pricing. The regular version of GKE is now called GKE Standard.

Google says the service builds upon the company’s own learnings from running a complex container infrastructure internally for years.

For enterprise customers, the financially backed SLA is surely a nice bonus. The promise here is 99.95% guaranteed availability for regional clusters.

Most users who opt for a managed Kubernetes environment do so because they don’t want to deal with the hassle of managing these clusters themselves. With GKE Standard, there’s still some work to be done with regard to scaling the clusters. Because of this, GKE Advanced includes a Vertical Pod Autoscaler that keeps an eye on resource utilization and adjusts it as necessary, as well as Node Auto Provisioning, an enhanced version of the cluster autoscaling in GKE Standard.
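For a sense of what vertical pod autoscaling looks like as an API object, here is a sketch that uses the open-source VerticalPodAutoscaler custom resource as a stand-in for GKE Advanced's managed feature, applied with the official Kubernetes Python client; the target name is illustrative, and the field names follow the upstream autoscaler project, so treat them as assumptions:

```python
from kubernetes import client, config

config.load_kube_config()

# Watch the "web" deployment's actual CPU/memory usage and rewrite its
# resource requests automatically as utilization changes.
vpa = {
    "apiVersion": "autoscaling.k8s.io/v1",
    "kind": "VerticalPodAutoscaler",
    "metadata": {"name": "web-vpa"},  # illustrative name
    "spec": {
        "targetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "web",
        },
        "updatePolicy": {"updateMode": "Auto"},  # apply recommendations live
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="autoscaling.k8s.io",
    version="v1",
    namespace="default",
    plural="verticalpodautoscalers",
    body=vpa,
)
```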

In addition to these new GKE Advanced features, Google is also adding existing GKE security features like the GKE Sandbox and the ability to enforce that only signed and verified images are used in the container environment.

The Sandbox uses Google’s gVisor container sandbox runtime. With this, every sandbox gets its own user-space kernel, adding an additional layer of security. With Binary Authorization, GKE Advanced users can also ensure that all container images are signed by a trusted authority before they are put into production. Somebody could theoretically still smuggle malicious code into the containers, but this process, which enforces standard container release practices, for example, should ensure that only authorized containers can run in the environment.

GKE Advanced also includes support for GKE usage metering, which allows companies to keep tabs on who is using a GKE cluster and charge them accordingly.


By Frederic Lardinois

OpenStack Stein launches with improved Kubernetes support

The OpenStack project, which powers more than 75 public clouds and thousands of private clouds, launched the 19th version of its software this week. You’d think that after 19 updates to the open-source infrastructure platform, there really isn’t all that much new the various project teams could add, given that we’re talking about a rather stable code base here. There are actually a few new features in this release, though, as well as all the usual tweaks and feature improvements you’d expect.

While the hype around OpenStack has died down, we’re still talking about a very active open-source project. On average, there were 155 commits per day during the Stein development cycle. As far as development activity goes, that keeps OpenStack on the same level as the Linux kernel and Chromium.

Unsurprisingly, a lot of that development activity focused on Kubernetes and the tools to manage these container clusters. With this release, the team behind the OpenStack Kubernetes installer brought the launch time for a cluster down from about 10 minutes to five, regardless of the number of nodes. To further enhance Kubernetes support, OpenStack Stein also includes updates to Neutron, the project’s networking service, which now makes it easier to create virtual networking ports in bulk as containers are spun up, and Ironic, the bare-metal provisioning service.

All of that is no surprise, given that according to the project’s latest survey, 61 percent of OpenStack deployments now use both Kubernetes and OpenStack in tandem.

The update also includes a number of new networking features that are mostly targeted at the many telecom users. Indeed, over the course of the last few years, telcos have emerged as some of the most active OpenStack users as these companies are looking to modernize their infrastructure as part of their 5G rollouts.

Besides the expected updates, though, there are also a few new and improved projects here that are worth noting.

“The trend from the last couple of releases has been on scale and stability, which is really focused on operations,” OpenStack Foundation executive director Jonathan Bryce told me. “The new projects — and really most of the new projects from the last year — have all been pretty oriented around real-world use cases.”

The first of these is Placement. “As people build a cloud and start to grow it and it becomes more broadly adopted within the organization, a lot of times, there are other requirements that come into play,” Bryce explained. “One of these things that was pretty simplistic at the beginning was how a request for a resource was actually placed on the underlying infrastructure in the data center.” But as users get more sophisticated, they often want to run specific workloads on machines with certain hardware requirements. These days, that’s often a specific GPU for a machine learning workload, for example. With Placement, that’s a bit easier now.

It’s worth noting that OpenStack had some of this functionality before. The team, however, decided to uncouple it from the existing compute service and turn it into a more generic service that could then also be used more easily beyond the compute stack, turning it more into a kind of resource inventory and tracking tool.

Then there is also Blazar, a reservation service that offers OpenStack users something akin to AWS Reserved Instances. In a private cloud, the use case for such a feature is a bit different, though. As some private clouds got bigger, some users found they needed to be able to guarantee resources to run their regular overnight batch jobs or data analytics workloads, for example.

As far as resource management goes, it’s also worth highlighting Sahara, which now makes it easier to provision Hadoop clusters on OpenStack.

In previous releases, one of the focus areas for the project was to improve the update experience. OpenStack is obviously a very complex system, so bringing it up to the latest version is also a bit of a complex undertaking. These improvements are now paying off. “Nobody even knows we are running Stein right now,” Vexxhost CEO Mohammed Naser, who made an early bet on OpenStack for his service, told me. “And I think that’s a good thing. You want to be least impactful, especially when you’re in such a core infrastructure level. […] That’s something the projects are starting to become more and more aware of, but it’s also part of the OpenStack software in general becoming much more stable.”

As usual, this release launched only a few weeks before the OpenStack Foundation hosts its bi-annual Summit in Denver. Since the OpenStack Foundation has expanded its scope beyond the OpenStack project, though, this event also focuses on a broader range of topics around open-source infrastructure. It’ll be interesting to see how this will change the dynamics at the event.


By Frederic Lardinois

Google Cloud Run brings serverless and containers together

Two of the biggest trends in application development in recent years have been the rise of serverless and containerization. Today at Google Cloud Next, the company announced a new product called Cloud Run that is designed to bring the two together. At the same time, the company also announced Cloud Run for GKE, which is specifically designed to run on Google’s version of Kubernetes.

Oren Teich, director of product management for serverless, says these products came out of discussions with customers. As he points out, developers like the flexibility and agility they get from a serverless architecture but have been looking for more than just compute resources. They want access to the full stack, and to that end the company is announcing Cloud Run.

“Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We’re taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it’s end-to-end managed,” Teich explained.

As for the GKE tool, it provides the same kinds of benefits, except for developers running their containers on Google’s GKE version of Kubernetes. Keep in mind, developers could be using any version of Kubernetes their organizations happen to have chosen, so it’s not a given that they will be using Google’s flavor of Kubernetes.

“What this means is that a developer can take the exact same experience, the exact same code they’ve written — and they have G Cloud command line, the same UI and our console and they can just with one-click target the destination they want,” he said.

All of this is made possible through yet another open-source project the company introduced last year, called Knative. “Cloud Run is based on Knative, an open API and runtime environment that lets you run your serverless workloads anywhere you choose — fully managed on Google Cloud Platform, on your GKE cluster or on your own self-managed Kubernetes cluster,” Teich and Eyal Manor, VP of engineering, wrote in a blog post introducing Cloud Run.
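For a sense of the primitive Cloud Run builds on, here is a sketch of a Knative Service: you point it at a container image and the platform handles routing, revisions and scale-to-zero. It is applied with the official Kubernetes Python client; the name and image are illustrative, and the field names follow Knative's stable serving schema as an assumption:

```python
from kubernetes import client, config

config.load_kube_config()

# A Knative Service: declare only the container image, and the platform
# assigns the URL, routes traffic and scales instances (down to zero).
knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello"},  # illustrative name
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"image": "gcr.io/example/hello:latest"}  # illustrative image
                ]
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=knative_service,
)
```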

Serverless, as you probably know by now, is a bit of a misnomer. It doesn’t really take away servers, but it does eliminate the need for developers to worry about them. Instead of loading their application on a particular virtual machine, the cloud provider, in this case Google, provisions the exact level of resources required to run an operation. Once that’s done, these resources go away, so you only pay for what you use at any given moment.


By Ron Miller

Google’s hybrid cloud platform is coming to AWS and Azure

Google’s Cloud Services Platform, its service for managing hybrid clouds that span on-premises data centers and the Google cloud, is coming out of beta today. The company is also changing the product’s name to Anthos, a name that could refer to a lost Greek tragedy, an obscure god in the Marvel universe or rosemary. That by itself would be interesting but minor news. What makes this interesting is that Google also today announced that Anthos will run on third-party clouds as well, including AWS and Azure.

“We will support Anthos on AWS and Azure as well, so people get one way to manage their application and that one way works across their on-premise environments and all other clouds,” Google’s senior VP for technical infrastructure, Urs Hölzle, explained in a press conference ahead of today’s announcement.

So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure.

“You can use one consistent approach — one open-source based approach — across all environments,” Hölzle said. “I can’t really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it’s not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds — and actually on-premise environments, too — look the same.”

Anthos/Google Cloud Services Platform is based on the Google Kubernetes Engine, as well as other open-source projects like the Istio service mesh. It’s also hardware-agnostic, meaning that users can take their current hardware and run the service on top of it without having to immediately invest in new servers.

Why is Google doing this? “We hear from our customers that multi-cloud and hybrid is really an acute pain point,” Hölzle said. He noted that containers are the enabling technology for this but that few enterprises have developed a unifying strategy to manage these deployments and that it takes expertise in all major clouds to get the most out of them.

Enterprises already have major investments in their infrastructure and existing relationships with their vendors, though, so it’s no surprise that Google is launching Anthos with more than 30 major hardware and software partners, ranging from Cisco to Dell EMC, HPE and VMware, as well as application vendors like Confluent, DataStax, Elastic, Portworx, Tigera, Splunk, GitLab, MongoDB and others.

Anthos is a subscription-based service, with list prices starting at $10,000 per month per 100-vCPU block. Enterprise prices tend to be up for negotiation, though, so many customers will likely pay less.

It’s one thing to use a service like this for new applications, but many enterprises already have plenty of line-of-business tools that they would like to bring to the cloud as well. For them, Google is launching the first beta of Anthos Migrate today. This service will auto-migrate VMs from on-premises or other clouds into containers in the Google Kubernetes Engine. The promise here is that this is essentially an automatic process and once the container is on Google’s platform, you’ll be able to use all of the other features that come with the Anthos platform, too.

Google’s Hölzle noted that the emphasis here was on making this migration as easy as possible. “There’s no manual effort there,” he said.


By Frederic Lardinois

Container security startup Aqua lands $62M Series C

Aqua Security, a startup that helps customers launch containers securely, announced a $62 million Series C investment today led by Insight Partners.

Existing investors Lightspeed Venture Partners, M12 (Microsoft’s venture fund), TLV Partners and Shlomo Kramer also participated. With today’s investment, the startup has raised more than $100 million since inception, according to the company.

Early investors took a chance on the company when it was founded in 2015. Containers were barely a thing back then, but the founders had a vision of what was coming down the pike and their bet has paid off in a big way as the company now has first-mover advantage. As more companies turn to Kubernetes and containers, the need for a security product built from the ground up to secure this kind of environment is essential.

While co-founder and CEO Dror Davidoff says the company has 60 Fortune 500 customers, he’s unable to share names, but he can provide some clues: five of the world’s top banks are among them. As companies like that turn to new technology like containers, they aren’t going to go whole hog without a solid security option. Aqua gives them that.

“Our customers are all taking very dramatic steps towards adoption of those new technologies, and they know that existing security tools that they have in place will not solve the problems,” Davidoff told TechCrunch. He said that most customers have started small, but then have expanded as container adoption increases.

You may think that an ephemeral concept like a container would be less of a security threat, but Davidoff says that the open nature of containerization actually leaves containers vulnerable to tampering. “Container lives long enough to be dangerous,” he said, adding, “They are structured in an open way, making it simple to hack, and once in, to do lateral movement. If the container holds sensitive info, it’s easy to have access to that information.”

Aqua scans container images for malware and makes sure only certified images can run, making it difficult for a bad actor to insert an insecure image, but the ephemeral nature of containers also helps if something slips through. DevOps teams can simply take down the faulty container and quickly replace it with a newly certified, clean one.

The company has 150 employees, with offices in the Boston area and R&D in Tel Aviv, Israel. With the new influx of cash, the company plans to expand quickly, growing sales and marketing and customer support, and extending the platform to cover emerging areas like serverless computing. Davidoff says the company could double in size in the next 12-18 months, and he’s expecting 3x to 4x customer growth.

All of that money should provide fuel to grow the company as containerization spreads and companies look for a security solution to keep containers in production safe.


By Ron Miller

Densify announces new tool to optimize container management in the cloud

Densify, a Toronto company that helps customers optimize their cloud resources to control usage and spending, announced a new tool today specifically designed to optimize container usage in the cloud.

Company CEO Gerry Smith says that as containerization proliferates, it’s getting more difficult to track and control cloud infrastructure resource usage as software development and deployment happen with increasing speed.

“The whole basis upon which people buy and use cloud and container resources has become wildly expensive because of the lack of a resource management system,” Smith said.

The Densify solution looks at consumption and finds ways to cut costs and usage. “We have analytics in the cloud, any of various common cloud services that you can connect to, and then we use machine learning to analyze the resources and your cloud and container consumption,” he said.

Densify continuously makes recommendations on how to make better use of resources and find the cheapest computing, whether that’s reserved instances, spot instances or other discounted cloud resources.

What’s more, it can help you identify whether you are providing too few resources to accommodate the number of containers you are deploying, as well as too many.

This may sound a bit like what Spotinst and Cloudyn, the company Microsoft bought a couple of years ago, do in terms of helping control costs in the cloud, but Smith says, for his company it’s more about understanding the resources than pure cost.

“We look at ourselves as a resource management platform. So what we do is characterize the applications’ demands of CPU and all the other resources, and use machine learning to predict what it’s going to need at any given minute, at any given day of a week of the year, so that we can then better predictively match the right supply,” Smith explained.

It’s providing information about each container at a highly detailed level including “what’s running, what resources are being allocated, and the true utilization of an organization’s Kubernetes environment at a cluster, namespace and container level,” according to the company. All of this information should help DevOps teams better understand the resources required by their container deployments.

The company has actually been around since 2006 under the name Cirba. In its early guise it helped companies manage VMware installations. In 2016, it pivoted to cloud resource management and changed the company name to Densify. It has raised around $60 million since inception, with about half of that coming after the company changed to Densify in 2016.

The company is based in Toronto but has offices in London and Melbourne as well.


By Ron Miller

Cloud Foundry ❤ Kubernetes

Cloud Foundry, the open-source platform-as-a-service project that more than half of the Fortune 500 use to help them build, test and deploy their applications, launched well before Kubernetes existed. Because of this, the team ended up building Diego, its own container management service. Unsurprisingly, given the popularity of Kubernetes, which has become somewhat of a de facto standard for container orchestration, a number of companies in the Cloud Foundry ecosystem started looking into how they could use Kubernetes to replace Diego.

The result of this is Project Eirini, which was first proposed by IBM. As the Cloud Foundry Foundation announced today, Project Eirini now passes the core functional tests the team runs to validate the software releases of its application runtime, the core Cloud Foundry service that deploys and manages applications. (If that’s a bit confusing, don’t even think about the fact that there’s also a Cloud Foundry Container Runtime, which already uses Kubernetes but is mostly meant to give enterprises a single platform for running their own applications and pre-built containers from third-party vendors.)

“That’s a pretty big milestone,” Cloud Foundry Foundation CTO Chip Childers told me. “The project team now gets to shift to a mode where they’re focused on hardening the solution and making it a bit more production-ready. But at this point, early adopters are also starting to deploy that [new] architecture.”

Childers stressed that while the project was incubated by IBM, which has been a long-time backer of the overall Cloud Foundry project, Google, Pivotal and others are now also contributing and have dedicated full-time engineers working on the project. In addition, SUSE, SAP and IBM are also active in developing Eirini.

Eirini started out as an incubation project, and while few doubted that this would be a successful project, there was a bit of confusion around how Cloud Foundry would move forward now that it essentially had two container engines for running its core service. At the time, there was even some concern that the project could fork. “I pushed back at the time and said: no, this is the natural exploration process that open source communities need to go through,” Childers said. “What we’re seeing now is that with Pivotal and Google stepping in, that’s a very clear sign that this is going to be the go-forward architecture for the future of the Cloud Foundry Application Runtime.”

A few months ago, by the way, Kubernetes was still missing a few crucial pieces the Cloud Foundry ecosystem needed to make this move. Childers specifically noted that Windows support — something the project’s enterprise users really need — was still problematic and lacked some important features. In recent releases, though, the Kubernetes team fixed most of these issues and improved its Windows support, rendering those issues moot.

What does all of this mean for Diego? Childers noted that the community isn’t at a point where it’ll halt development of that tool. At some point, though, it seems likely that the community will decide that it’s time to start the transition period and make the move to Kubernetes official.

It’s worth noting that IBM today announced its own preview of Eirini in its Cloud Foundry Enterprise Environment and that the latest version of SUSE’s Cloud Foundry-based Application Platform includes a similar preview as well.

In addition, the Cloud Foundry Foundation, which is hosting its semi-annual developer conference in Philadelphia this week, also announced that it has certified its first two systems integrators, Accenture and HCL, as part of its recently launched certification program for companies that work in the Cloud Foundry ecosystem and have at least ten certified developers on their teams.


By Frederic Lardinois