Mirantis acquires Lens, an IDE for Kubernetes

Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the company that originally developed Lens.

Lens itself, though, was most recently owned by Lakend Labs, which describes itself as “a collective of cloud native compute geeks and technologists” that is “committed to preserving and making available the open-source software and products of Kontena.” Lakend open-sourced Lens a few months ago.


“The mission of Mirantis is very simple: we want to be — for the enterprise — the fastest way to [build] modern apps at scale,” Mirantis CEO Adrian Ionel told me. “We believe that enterprises are constantly undergoing this cycle of modernizing the way they build applications from one wave to the next — and we want to provide products to the enterprise that help them make that happen.”

Right now, that means a focus on helping enterprises build cloud-native applications at scale and, almost by default, that means providing these companies with all kinds of container infrastructure services.

“But there is another piece of this story that’s always been going through our minds, which is: how do we become more developer-centric and developer-focused? Because, as we’ve all seen in the past 10 years, developers have become more and more in charge of what services and infrastructure they’re actually using,” Ionel explained. And that’s where the Kontena and Lens acquisitions fit in. Managing Kubernetes clusters, after all, isn’t trivial — yet developers are now often tasked with managing and monitoring how their applications interact with their company’s infrastructure.

“[Lens] makes it dramatically easier for developers to work with Kubernetes, to build and deploy their applications on Kubernetes, and it’s just a huge obstacle-remover for people who are turned off by the complexity of Kubernetes to get more value,” he added.

“I’m very excited to see that we found a common vision with Adrian for how to incorporate Lens and how to make life for developers more enjoyable in this cloud-native technology landscape,” Miska Kaipiainen, the former CEO of Kontena and now Mirantis’ Director of Engineering, told me.

He describes Lens as an IDE for Kubernetes. While you could obviously replicate Lens’ functionality with existing tools, Kaipiainen argues that it would take 20 different tools to do this. “One of them could be for monitoring, another could be for logs. A third one is for command-line configuration, and so forth and so forth,” he said. “What we have been trying to do with Lens is that we are bringing all these technologies [together] and provide one single, unified, easy to use interface for developers, so they can keep working on their workloads and on their clusters, without ever losing focus and the context on what they are working on.”

Among other things, Lens includes a context-aware terminal, multi-cluster management capabilities that work across clouds, and support for the open-source Prometheus monitoring service.

For Mirantis, Lens is a very strategic investment and the company will continue to develop the service. Indeed, Ionel said that the Lens team now basically has unlimited resources.

Looking ahead, Kaipiainen said that the team is looking at adding extensions to Lens through an API within the next couple of months. “Through this extension API, we are actually able to collaborate and work more closely with other technology vendors within the cloud technology landscape so they can start plugging directly into the Lens UI and visualize the data coming from their components, so that will make it very powerful.”

Ionel added that the company is working on bringing more features for larger software teams to Lens, which is currently a single-user product. A lot of users are already working with Lens in the context of very large development teams, after all.

While the core Lens tools will remain free and open-source, Mirantis will likely charge for some new features that require a centralized service for managing them. What exactly that will look like remains to be seen, though.

If you want to give Lens a try, you can download the Windows, macOS and Linux binaries here.


By Frederic Lardinois

Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies have simplified this process to make switching between running containers locally and on ECS far easier.
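
Based on the integration as the companies describe it, the local-to-cloud switch is driven by Docker contexts; here is a minimal sketch (the context name is hypothetical, and exact flags may differ across releases):

    # Create a Docker context backed by Amazon ECS (reads your AWS credentials)
    docker context create ecs myecs

    # Run the application locally against the default Docker engine
    docker compose up

    # Switch to the ECS context and deploy the same Compose file to Fargate
    docker context use myecs
    docker compose up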

[Image: Docker/AWS architecture overview]

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for Compute Services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to solely focus on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.


By Frederic Lardinois

Mirantis releases its first major update to Docker Enterprise

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new is Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, as well as support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.
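
The Nvidia integration presumably surfaces GPUs the way Kubernetes device plugins generally do, as a schedulable resource; here is a minimal sketch of a pod requesting one GPU (the pod name and image are hypothetical):

    # Request a single Nvidia GPU through the standard device-plugin resource
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-smoke-test            # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: cuda
        image: nvidia/cuda:10.2-base  # any CUDA-enabled image works here
        command: ["nvidia-smi"]       # print the GPU the pod was granted
        resources:
          limits:
            nvidia.com/gpu: 1         # the device plugin schedules this onto a GPU node
    EOF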

In addition to the product updates, Mirantis is also launching three new support options for its customers, which give them the option of 24×7 support for all support cases, for example, as well as enhanced SLAs for remote managed operations, designated customer success managers and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from its days as a high-flying OpenStack platform to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90 percent of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters and be profitable in the last quarter, despite the COVID-19 pandemic.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and that they actually love the Docker Enterprise platform for things like its infrastructure independence, developer focus, security features and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one salesforce, with record results. So things have been extremely busy, but good and exciting.”


By Frederic Lardinois

Mirantis acquires Docker Enterprise

Mirantis today announced that it has acquired Docker’s Enterprise business and team. Docker Enterprise was very much the heart of Docker’s product lineup, so this sale leaves Docker as a shell of its former, high-flying unicorn self. Docker itself, which installed a new CEO earlier this year, says it will continue to focus on tools that will advance developers’ workflows. Mirantis will keep the Docker Enterprise brand alive, though, which will surely not create any confusion.

With this deal, Mirantis is acquiring Docker Enterprise Technology Platform and all associated IP: Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane and Docker CLI. It will also inherit all Docker Enterprise customers and contracts, as well as its strategic technology alliances and partner programs. Docker and Mirantis say they will both continue to work on the Docker platform’s open-source pieces.

The companies did not disclose the price of the acquisition, but it’s surely nowhere near Docker’s valuation during any of its last funding rounds. Indeed, it’s no secret that Docker’s fortunes changed quite a bit over the years, from leading the container revolution to becoming somewhat of an afterthought after Google open-sourced Kubernetes and the rest of the industry coalesced around it. It still had a healthy enterprise business, though, with plenty of customers among the largest enterprises. The company says about a third of Fortune 100 and a fifth of Global 500 companies use Docker Enterprise, which is a statistic most companies would love to be able to highlight — and which makes this sale a bit puzzling from Docker’s side, unless the company assumed that few of these customers were going to continue to bet on its technology.

Update: for reasons only known to Docker’s communications team, we weren’t told about this beforehand, but the company also today announced that it has raised a $35 million funding round from Benchmark. This doesn’t change the overall gist of the story below, but it does highlight the company’s new direction.

Here is what Docker itself had to say. “Docker is ushering in a new era with a return to our roots by focusing on advancing developers’ workflows when building, sharing and running modern applications. As part of this refocus, Mirantis announced it has acquired the Docker Enterprise platform business,” Docker said in a statement when asked about this change. “Moving forward, we will expand Docker Desktop and Docker Hub’s roles in the developer workflow for modern apps. Specifically, we are investing in expanding our cloud services to enable developers to quickly discover technologies for use when building applications, to easily share these apps with teammates and the community, and to run apps frictionlessly on any Kubernetes endpoint, whether locally or in the cloud.”

Mirantis itself, too, went through its ups and downs. While it started as a well-funded OpenStack distribution, today’s Mirantis focuses on offering a Kubernetes-centric on-premises cloud platform and application delivery. As the company’s CEO Adrian Ionel told me ahead of today’s announcement, today is possibly the most important day for the company.

So what will Mirantis do with Docker Enterprise? “Docker Enterprise is absolutely aligned and an accelerator of the direction that we were already on,” Ionel told me. “We were very much moving towards Kubernetes and containers aimed at multi-cloud and hybrid and edge use cases, with these goals to deliver a consistent experience to developers on any infrastructure anywhere — public clouds, hybrid clouds, multi-cloud and edge use cases — and make it very easy, on-demand, and remove any operational concerns or burdens for developers or infrastructure owners.”

Mirantis previously had about 450 employees. With this acquisition, it gains another 300 former Docker employees that it needs to integrate into its organization. Docker’s field marketing and sales teams will remain separate for some time, though, Ionel said, before they will be integrated. “Our most important goal is to create no disruptions for customers,” he noted. “So we’ll maintain an excellent customer experience, while at the same time bringing the teams together.”

This also means that for current Docker Enterprise customers, nothing will change in the near future. Mirantis says that it will accelerate the development of the product and merge its Kubernetes and lifecycle management technology into it. Over time, it will also offer a managed services solution for Docker Enterprise.

While there is already some overlap between Mirantis’ and Docker Enterprise’s customer base, Mirantis will pick up about 700 new enterprise customers with this acquisition.

With this, Ionel argues, Mirantis is positioned to go up against large players like VMware and IBM/Red Hat. “We are the one real cloud-native player with meaningful scale to provide an alternative to them without lock-in into a legacy or existing technology stack.”

While this is clearly a day the Mirantis team is celebrating, it’s hard not to look at this as the end of an era for Docker, too. The company says it will share more about its future plans today, but didn’t make any spokespeople available ahead of this announcement.


By Frederic Lardinois

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud-native computing world. At KubeCon, the biannual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo – to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
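
As an illustration of the traffic-management side, the early SMI spec defined a TrafficSplit resource; here is a minimal sketch (the service names are hypothetical, and the API version reflects the initial v1alpha1 spec, which may have evolved since):

    # Shift 10% of traffic to a new version of a service, on any SMI-compliant mesh
    cat <<'EOF' | kubectl apply -f -
    apiVersion: split.smi-spec.io/v1alpha1
    kind: TrafficSplit
    metadata:
      name: checkout-canary        # hypothetical name
    spec:
      service: checkout            # the root service clients talk to
      backends:
      - service: checkout-v1
        weight: 900m               # v1alpha1 expressed weights as quantities
      - service: checkout-v2
        weight: 100m
    EOF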

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.



By Frederic Lardinois

Steve Singh stepping down as Docker CEO

In a surprising turn of events, TechCrunch has learned that Docker CEO Steve Singh will be stepping down after two years at the helm, and former Hortonworks CEO Rob Bearden will be taking over. An email announcement went out this morning to Docker employees.

People close to the company confirmed that Singh will be leaving the CEO position, staying on the job for several months to help Bearden with the transition. He will then remain with the organization in his role as Chairman of the Board. They indicated that Bearden has been working closely with Singh over the last several months as a candidate to join the board and as a consultant to the executive team.

Singh clicked with him and viewed him as a possible successor, especially given his background with leadership positions at several open-source companies, including taking Hortonworks public before selling it to Cloudera last year. Singh apparently saw someone who could take the company to the next level as he moved on. As one person put it, he was tired of working 75 hours a week, but he wanted to leave the company in the hands of a capable steward.

Last week, in an interview at DockerCon, the company’s annual customer conference in San Francisco, Singh appeared tired, but also like a leader who was confident in his position and who saw a bright future for his company. He spoke openly about his leadership philosophy and his efforts to lift the company from the doldrums it was in when he took over two years prior, helping transform it from a mostly free open-source offering into a revenue-generating company with 750 paying enterprise customers.

In fact, he told me that under his leadership the company was on track to become free cash flow positive by the end of this fiscal year, a step he said would mean that Docker would no longer need to seek outside capital. He even talked of the company eventually going public.

Apparently, he felt it was time to pass the torch before the company took those steps, saw a suitable successor in Bearden and offered him the position. While it might have made more sense to announce this at DockerCon with the spotlight focused on the company, it was not a done deal yet by the time the conference was underway in San Francisco, people close to the company explained.

Docker took a $92 million investment last year, which some saw as a sign of continuing struggles for the company, but Singh said he took the money to continue to invest in building revenue-generating enterprise products, some of which were announced at DockerCon last week. He indicated that the company would likely not require any additional investment moving forward.

As for Bearden, he is an experienced executive with a history of successful exits. In addition to his experience at Hortonworks, he was COO at SpringSource, a developer tool suite that was sold to VMware for $420 million in 2009 (and is now part of Pivotal). He was also COO at JBoss, an open source middleware company acquired by Red Hat in 2006.

Whether he will do the same with Docker remains to be seen, but as the new CEO, it will be up to him to guide the company moving forward to the next steps in its evolution, whether that eventually results in a sale or the IPO that Singh alluded to.



By Ron Miller

Docker updates focus on simplifying containerization for developers

Over the last five years, Docker has become synonymous with software containers, but that doesn’t mean every developer understands the technical details of building, managing and deploying them. At DockerCon this week, the company’s customer conference taking place in San Francisco, it announced new tools that have been designed to make it easier for developers, who might not be Docker experts, to work with containers.

As the technology has matured, the company has seen the market broaden, but in order to take advantage of that, it needs to provide a set of tools that make it easier to work with. “We’ve found that customers typically have a small cadre of Docker experts, but there are hundreds, if not thousands, of developers who also want to use Docker. And we reasoned, how can we help them get productive very, very quickly, without them having to become Docker experts,” Scott Johnston, chief product officer at Docker told TechCrunch.

To that end, it announced a Beta of Docker Enterprise 3.0, which includes several key components. For starters, Docker Desktop Enterprise lets IT set up a Docker environment with the kind of security and deployment templates that make sense for each customer. The developers can then pick the templates that make sense for their implementations, while conforming with compliance and governance rules in the company.

“These templates already have IT-approved container images, and have IT-approved configuration settings. And what that means is that IT can provide these templates through these visual tools that allow developers to move fast and choose the ones they want without having [to] go back for approval,” Johnston explained.

The idea is to let the developers concentrate on building applications, and the templates provide all the Docker tooling pre-built and ready to go, so they don’t have to worry about all of that.

Another piece of this is Docker Applications, which allows developers to build complex containerized applications as a single package and deploy them to any infrastructure they wish — on-prem or in the cloud. Five years ago when Docker really got started with containers, they were a simpler idea, often involving just a single one, but as developers broke down those larger applications into microservices, it created a new level of difficulty, especially for operations who had to deploy these increasingly large sets of application containers.

“Operations can now programmatically change the parameters for the containers, depending on the environments without having to go in and change the application. So you can imagine that ability lowers the friction of having to manage all these files in the first place,” he said.
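
Docker Applications grew out of Docker’s docker-app tooling, where a packaged application exposes parameters that can be overridden at render time; here is a rough sketch of the workflow Johnston describes (the application and parameter names are hypothetical, and flags follow the docker-app CLI of that era, which may have changed):

    # Render the packaged app with production parameters instead of editing files
    docker-app render myapp.dockerapp --set replicas=5 --set port=443

    # Pipe the rendered Compose output straight into a deployment
    docker-app render myapp.dockerapp --set replicas=5 --set port=443 | \
      docker stack deploy --compose-file - myapp-prod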

The final piece of that is the orchestration layer and the popular way to handle that today is with Kubernetes. Docker has created its own flavor of Kubernetes, based on the open source tool. Johnston says, as with the other two pieces, the goal here is to take a powerful tool like Kubernetes and reduce the overall complexity associated with running it, while making it fully compatible with a Docker environment.

For that, Docker announced the Docker Kubernetes Service (DKS), which has been designed with Docker users in mind, including support for Docker Compose, a scripting tool that has been popular with Docker users. While you are free to use any flavor of Kubernetes you wish, Docker is offering DKS as a Docker-friendly version for developers.
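
Compose support on Kubernetes already exists in Docker’s tooling via stacks; here is a minimal sketch of what deploying a Compose file to a Kubernetes-backed engine looks like (the stack name is hypothetical):

    # Deploy an existing Compose file onto Kubernetes through Docker's CLI
    docker stack deploy \
      --orchestrator kubernetes \
      --compose-file docker-compose.yml \
      mystack

    # The stack shows up as ordinary Kubernetes objects
    kubectl get pods,services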

All of these components have one thing in common besides being part of Docker Enterprise 3.0. They are trying to reduce the complexity associated with deploying and managing containers and to abstract away the most difficult parts, so that developers can concentrate on developing without having to worry about connecting to the technical underpinnings of building and deploying containers. At the same time, Docker is trying to make it easier for the operations team to manage it all. That is the goal, at least. In the end, DevOps teams will be the final judges on how well Docker has done, once these tools become generally available later this year.

The Docker Enterprise 3.0 Beta will be available later this quarter.


By Ron Miller

Docker looks to partners and packages to ease container implementation

Docker appears to be searching for ways to simplify the core value proposition of the company — creating, deploying and managing containers. While most would agree it has revolutionized software development, like many technology solutions, it takes a certain level of expertise and staffing to pull off. At DockerCon, the company’s customer conference taking place this week in San Francisco, Docker announced several ways it could help customers with the tough parts of implementing a containerized solution.

For starters, the company announced a Beta of Docker Enterprise 3.0 this morning. That update is all about making life simpler for developers. As companies move to containerized environments, it’s a challenge for all but the largest organizations like Google, Amazon and Facebook, all of whom have massive resource requirements and correspondingly large engineering teams.

Most companies don’t have that luxury, though, and Docker recognizes that if it wants to bring containerization to a larger number of customers, it has to create packages and programs that make it easier to implement.

Docker Enterprise 3.0 is a step toward providing a solution that lets developers concentrate on the development aspects, while working with templates and other tools to simplify the deployment and management side of things.

The company sees customers struggling with implementation and how to configure and build a containerized workflow, so it is working with systems integrators to help smooth out the difficult parts. Today, the company announced Docker Enterprise as a Service, with the goal of helping companies through the process of setting up and managing a containerized environment, using the Docker stack and adjacent tooling like Kubernetes.

The service provider will take care of operational details like managing upgrades, rolling out patches, doing backups and undertaking capacity planning — all of those operational tasks that require a high level of knowledge around enterprise container stacks.

Capgemini will be the first go-to-market partner. “Capgemini has a combination of automation, technology tools, as well as services on the back end that can manage the installation, provisioning and management of the enterprise platform itself in cases where customers don’t want to do that, and they want to pay someone to do that for them,” Scott Johnston, chief product officer at Docker told TechCrunch.

The company has released tools in the past to help customers move legacy applications into containers without a lot of fuss. Today, the company announced a solution bundle called Accelerate Greenfield, a set of tools designed to help customers get up and running as container-first development companies.

“This is for those organizations that may be a little further along. They’ve gone all-in on containers, committing to taking a container-first approach to new application development,” Johnston explained. He says this could be cloud-native microservices or even a LAMP stack application, but the point is that they want to put everything in containers on a container platform.

Accelerate Greenfield is designed to do that. “They get the benefits where they know that from the developer to the production endpoint, it’s secure. They have a single way to define it all the way through the lifecycle. They can make sure that it’s moving quickly, and they have that portability built into the container format, so they can deploy [wherever they wish],” he said.

These programs and products are all about providing a level of hand-holding, either by playing a direct consultative role, working with a systems integrator or providing a set of tools and technologies to walk the customer through the containerization lifecycle. Whether they provide a sufficient level of help that customers require is something we will learn over time as these programs mature.


By Ron Miller

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET Core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.
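
Docker hasn’t detailed the tooling here, but the workflow it describes maps onto the buildx/QEMU path that shipped around this time; here is a minimal sketch (the image names are hypothetical):

    # Build an Arm64 image on an x86 desktop; QEMU emulates any Arm
    # binaries executed during the build (e.g. in RUN steps)
    docker buildx build --platform linux/arm64 -t myapp:arm64 .

    # Or build for several architectures at once and push a single
    # multi-arch manifest to a registry
    docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
      -t myregistry/myapp:latest --push .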

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Business Development David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to decouple applications from the infrastructure they run on. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.



By Frederic Lardinois

Microsoft and Docker team up to make packaging and running cloud-native applications easier

Microsoft and Docker today announced a new joint open-source project, the Cloud Native Application Bundle (CNAB), that aims to make the lifecycle management of cloud-native applications easier. At its core, the CNAB is nothing but a specification that allows developers to declare how an application should be packaged and run. With this, developers can define their resources and then deploy the application to anything from their local workstation to public clouds.

The specification was born inside Microsoft, but as the team talked to Docker, it turned out that the engineers there were working on a similar project. The two decided to combine forces and launch the result as a single open-source project. “About a year ago, we realized we’re both working on the same thing,” Microsoft’s Gabe Monroy told me. “We decided to combine forces and bring it together as an industry standard.”

As part of this launch, Microsoft is launching its own reference implementation of a CNAB client today. Duffle, as it’s called, allows users to perform all the usual lifecycle steps (install, upgrade, uninstall), create new CNAB bundles and sign them cryptographically. Docker is working on integrating CNAB into its own tools, too.
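
Going by the Duffle project’s early documentation, the lifecycle steps the article lists map directly onto CLI verbs; here is a rough sketch (the bundle and installation names are hypothetical, and the flags may have changed since):

    # Build a CNAB bundle from the duffle.json in the current directory
    duffle build

    # Install the bundle under an installation name, then manage its lifecycle
    duffle install my-release helloworld    # 'helloworld' is a hypothetical bundle
    duffle upgrade my-release
    duffle uninstall my-release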

Microsoft also today launched a Visual Studio extension for building and hosting these bundles, as well as an example implementation of a bundle repository server and an Electron installer that lets you install a bundle with the help of a GUI.

Now it’s worth noting that we’re talking about a specification and reference implementations here. There is obviously a huge ecosystem of lifecycle management tools on the market today that all have their own strengths and weaknesses. “We’re not going to be able to unify that tooling,” said Monroy. “I don’t think that’s a feasible goal. But what we can do is we can unify the model around it, specifically the lifecycle management experience as well as the packaging and distribution experience. That’s effectively what Docker has been able to do with the single-workload case.”

Over time, Microsoft and Docker would like for the specification to end up in a vendor-neutral foundation. Which one remains to be seen, though the Open Container Initiative seems like the natural home for a project like this.


By Frederic Lardinois

Docker inks partnership with Mulesoft as Salesforce takes a strategic stake

Docker and Mulesoft have announced a broad deal to sell products together and integrate their platforms. As part of it, Docker is getting an investment from Salesforce, the CRM giant that acquired Mulesoft for $6.5 billion last spring.

Salesforce is not disclosing the size of the stake it’s taking in Docker, but it is strategic: it will see its newly acquired Mulesoft working with Docker to connect containerized applications to multiple data sources across an organization. Putting the two companies together, customers can connect these containerized applications to multiple data sources in a modern way, even with legacy applications.

The partnership is happening on multiple levels and includes technical integration to help customers use the two toolsets together more easily. It also includes a sales agreement to cross-sell one another’s products and services and to work with systems integrators and ISVs, who help companies put these kinds of complex solutions to work inside large organizations.

Docker chief product officer Scott Johnston said it was really about bringing together two companies whose missions were aligned with what they were hearing from customers. That involves tapping into some broad trends around getting more out of legacy applications and a growing desire to take an API-driven approach to developer productivity, while getting additional value out of existing data sources. “Both companies have been working separately on these challenges for the last several years, and it just made sense as we listen to the market and listen to customers that we joined forces,” Johnston told TechCrunch.

Uri Sarid, Mulesoft’s CTO, agrees that customers have been using both products and that it called for a more formal arrangement. “We have joint customers and the partnership will be fortifying that. So that’s a great motion, but we believe in acceleration. And so if there are things that we can do, and we now have plans for what we will do to make that even faster, to make that even more natural and built-in, we can accelerate the motion to this. Before, you had to think about these two concerns separately, and we are working on interoperability that makes you not have to think about them separately,” he explained.

This announcement comes at a time of massive consolidation in the enterprise. In the last couple of weeks, we have seen IBM buying Red Hat for $34 billion, SAP acquiring Qualtrics for $8 billion and Vista Equity Partners scooping up Apptio for $1.94 billion. Salesforce acquired Mulesoft earlier this year in its own mega deal in an effort to bridge the gap between data in the cloud and on-prem.

The final piece of today’s announcement is that investment from Salesforce Ventures. Johnston would not say how much the investment was for, but did say it was about aligning the two partners.

Docker has raised almost $273 million before today’s announcement. It’s possible it could be looking for a way to exit, and with the trend toward enterprise consolidation, Salesforce’s investment may be a way to test the waters for just that. If it seems like an odd match, remember that Salesforce bought Heroku in 2010 for $212 million.


By Ron Miller

Anaxi brings more visibility to the development process

Anaxi’s mission is to bring more transparency to the software development process. The tool, which is now live for iOS, with web and Android versions planned for the near future, connects to GitHub to give you actionable insights into the state of your projects and to help you manage your projects and issues. Support for Atlassian’s Jira is also in the works.

The new company was founded by former Apple engineering manager and Docker EVP of product development Marc Verstaen and former CodinGame CEO John Lafleur. Unsurprisingly, this new tool is all about fixing the issues these two have seen in their daily lives as developers.

“I’ve been doing software for 40 years,” Verstaen told me. “And every time it’s the same. You start with a small team and it’s fine. Then you grow and you don’t know what’s going on. It’s a black box.” While the rest of the business world now focuses on data and analytics, software development never quite reached that point. Verstaen argues that this was acceptable until 10 or 15 years ago, because only software companies were doing software. But now that every company is becoming a software company, that’s not acceptable anymore.

Using Anaxi, you can easily see all issue reports and pull requests from your GitHub repositories, both public and private. But you also get visual status indicators that tell you when a project has too many blockers, for example, as well as the ability to define your own labels. You can also define due dates for issues.

One interesting aspect of Anaxi is that it doesn’t store all of this information on your phone or on a proprietary server. Instead, it only caches as little information as necessary (including your handles) and then pulls the rest of the information from GitHub as needed. That cache is encrypted on the phone, but for the most part, Anaxi simply relies on the GitHub API to pull in data when needed. There’s a bit of a trade-off here in terms of speed, but Verstaen noted that this also means you always get the most recent data and that GitHub’s API is quite fast and easy to work with.
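
The pull-on-demand pattern it describes boils down to querying the GitHub REST API directly rather than mirroring a repository’s state; for example (this is not Anaxi’s actual code, and OWNER/REPO and the token are placeholders):

    # Fetch open issues for a repository on demand
    curl -s -H "Authorization: token $GITHUB_TOKEN" \
      "https://api.github.com/repos/OWNER/REPO/issues?state=open&per_page=50"

    # Fetch open pull requests the same way
    curl -s -H "Authorization: token $GITHUB_TOKEN" \
      "https://api.github.com/repos/OWNER/REPO/pulls?state=open"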

The service is currently available for free. The company plans to introduce pricing plans in the future, with prices based on the number of developers that use the product inside a company.


By Frederic Lardinois

Sumo Logic brings data analysis to containers

Sumo Logic has long held the goal of helping customers understand their data wherever it lives. As we move into the era of containers, that goal becomes more challenging, because containers are by their nature ephemeral. The company announced a product enhancement today designed to instrument containerized applications in spite of that.

They are debuting these new features at DockerCon, Docker’s customer conference taking place this week in San Francisco.

Sumo’s CEO Ramin Sayer says containers have begun to take hold over the last 12-18 months with Docker and Kubernetes emerging as tools of choice. Given their popularity, Sumo wants to be able to work with them. “[Docker and Kubernetes] are by far the most standard things that have developed in any new shop, or any existing shop that wants to build a brand new modern app or wants to lift and shift an app from on prem [to the cloud], or have the ability to migrate workloads from Vendor A platform to Vendor B,” he said.

He’s not wrong of course. Containers and Kubernetes have been taking off in a big way over the last 18 months and developers and operations alike have struggled to instrument these apps to understand how they behave.

“But as that standardization of adoption of that technology has come about, it makes it easier for us to understand how to instrument, collect, analyze, and more importantly, start to provide industry benchmarks,” Sayer explained.

They do this by avoiding the use of agents. Regardless of how you run your application, whether in a VM or a container, Sumo is able to capture the data and give you feedback you might otherwise have trouble retrieving.


The company has built in native support for Kubernetes and Amazon Elastic Container Service for Kubernetes (Amazon EKS). It also supports the open source tool Prometheus favored by Kubernetes users to extract metrics and metadata. The goal of the Sumo tool is to help customers fix issues faster and reduce downtime.

As they work with this technology, they can begin to understand norms and pass that information on to customers. “We can guide them and give them best practices and tips, not just on what they’ve done, but how they compare to other users on Sumo,” he said.

Sumo Logic was founded in 2010 and has raised $230 million, according to data on Crunchbase. Its most recent round was a $70 million Series F led by Sapphire Ventures last June.


By Ron Miller