Microsoft launches Open Service Mesh

Microsoft today announced the launch of a new open-source service mesh based on the Envoy proxy. The Open Service Mesh is meant to be a reference implementation of the Service Mesh Interface (SMI) spec, a standard interface for service meshes on Kubernetes that has the backing of most of the players in this ecosystem.

The company plans to donate Open Service Mesh to the Cloud Native Computing Foundation (CNCF) to ensure that it is community-led and has open governance.

“SMI is really resonating with folks and so we really thought that there was room in the ecosystem for a reference implementation of SMI where the mesh technology was first and foremost implementing those SMI APIs and making it the best possible SMI experience for customers,” Microsoft partner program manager (and CNCF board member) Gabe Monroy told me.

He also added that, because SMI provides the lowest common denominator API design, Open Service Mesh gives users the ability to “bail out” to raw Envoy if they need some more advanced features. This “no cliffs” design, Monroy noted, is core to the philosophy behind Open Service Mesh.

As for its feature set, Open Service Mesh handles all of the standard service mesh features you’d expect, including securing communication between services with mTLS, managing access control policies, monitoring services and more.
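To get a sense of what the mesh automates, consider what mutual TLS looks like when a service wires it up by hand. The sketch below is purely illustrative and is not Open Service Mesh code: the certificate paths are made up, and in practice the mesh’s sidecar proxies issue, rotate and present these certificates so application teams never write this plumbing themselves.

```go
// A minimal, hypothetical sketch of the mutual-TLS setup a service mesh
// normally handles for you. Certificate paths are made up.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Load this service's own certificate and private key.
	serverCert, err := tls.LoadX509KeyPair("certs/server.crt", "certs/server.key")
	if err != nil {
		log.Fatalf("loading server key pair: %v", err)
	}

	// Trust only clients whose certificates chain to the mesh's CA.
	caPEM, err := os.ReadFile("certs/mesh-ca.crt")
	if err != nil {
		log.Fatalf("reading CA bundle: %v", err)
	}
	caPool := x509.NewCertPool()
	if !caPool.AppendCertsFromPEM(caPEM) {
		log.Fatal("no valid CA certificates found")
	}

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			Certificates: []tls.Certificate{serverCert},
			ClientCAs:    caPool,
			// Require every caller to present a certificate signed by the CA.
			ClientAuth: tls.RequireAndVerifyClientCert,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from an mTLS-protected service\n"))
		}),
	}
	// Empty arguments: the certificates are already supplied via TLSConfig.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```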

There are plenty of other service mesh technologies in the market today, though. So why would Microsoft launch this?

“What our customers have been telling us is that solutions that are out there today, Istio being a good example, are extremely complex,” he said. “It’s not just me saying this. We see the data in the AKS support queue of customers who are trying to use this stuff — and they’re struggling right here. This is just hard technology to use, hard technology to build at scale. And so the solutions that were out there all had something that wasn’t quite right and we really felt like something lighter weight and something with more of an SMI focus was what was going to hit the sweet spot for the customers that are dabbling in this technology today.”

Monroy also noted that Open Service Mesh can sit alongside other solutions like Linkerd, for example.

A lot of pundits expected Google to also donate its Istio service mesh to the CNCF. That move didn’t materialize. “It’s funny. A lot of people are very focused on the governance aspect of this,” he said. “I think when people over-focus on that, you lose sight of how are customers doing with this technology. And the truth is that customers are not having a great time with Istio in the wild today. I think even folks who are deep in that community will acknowledge that and that’s really the reason why we’re not interested in contributing to that ecosystem at the moment.”


By Frederic Lardinois

Kong donates its Kuma control plane to the Cloud Native Computing Foundation

API management platform Kong today announced that it is donating its open-source Kuma control plane technology to the Cloud Native Computing Foundation (CNCF). Since Kong built Kuma on top of the Envoy proxy — and Envoy is part of the CNCF’s stable of open-source projects — donating it to this specific foundation was likely an obvious move.

The company first open-sourced Kuma in September 2019. In addition to donating it to the CNCF, the company also today launched version 0.6 of the codebase, which introduces a new hybrid mode that enables Kuma-based service meshes to support applications that run on complex heterogeneous environments, including VMs, Kubernetes clusters and multiple data centers.

Kong co-founder and CTO Marco Palladino says that the goal was always to donate Kuma to the CNCF.

“The industry needs and deserves to have a cloud native, Envoy-based control plane that is open and not governed by a single commercial entity,” he writes in today’s announcement. “From a technology standpoint, it makes no sense for individual companies to create their own control plane but rather build their own unique applications on proven technologies like Envoy and Kuma. We welcome the broader community to join Kuma on Slack and on our bi-weekly community calls to contribute to the project and continue the incredible momentum we have achieved so far.”

Kuma will become a CNCF Sandbox project. The sandbox is the first stage projects go through on their way to becoming fully graduated CNCF projects. Currently, the foundation is home to 31 sandbox projects, and Kong argues that Kuma is now production-ready and at the right stage to benefit from the overall CNCF ecosystem.

“It’s truly remarkable to see the ecosystem around Envoy continue to develop, and as a vendor-neutral organization, CNCF is the ideal home for Kuma,” said Matt Klein, the creator of the Envoy proxy. “Now developers have access to the service mesh data plane they love with Envoy as well as a CNCF-hosted Envoy-based control plane with Kuma, offering a powerful combination to make it easier to create and manage cloud native applications.”


By Frederic Lardinois

HPE acquires cloud native security startup Scytale

HPE announced today that it has acquired Scytale, a cloud native security startup that is built on the open source Secure Production Identity Framework for Everyone (SPIFFE) protocol. The companies did not share the acquisition price.

Specifically, Scytale looks at application-to-application identity and access management, something that is increasingly important as more transactions take place between applications without any human intervention. It’s imperative that the application knows it’s OK to share information with the other application.

This is an area HPE wants to expand into, Dave Husak, HPE fellow and general manager of its cloudless initiative, wrote in a blog post announcing the acquisition. “As HPE progresses into this next chapter, delivering on our differentiated, edge to cloud platform as-a-service strategy, security will continue to play a fundamental role. We recognize that every organization that operates in a hybrid, multi-cloud environment requires 100% secure, zero trust systems, that can dynamically identify and authenticate data and applications in real-time,” Husak wrote.

He was also careful to stress that HPE would continue to be good stewards of the SPIFFE and SPIRE (the SPIFFE Runtime Environment) projects, both of which are under the auspices of the Cloud Native Computing Foundation.

Scytale co-founder Sunil James, writing in a blog post about the deal, indicated that it was important to the founders that HPE respect the startup’s open-source roots. “Scytale’s DNA is security, distributed systems, and open-source. Under HPE, Scytale will continue to help steward SPIFFE. Our ever-growing and vocal community will lead us. We’ll toil to maintain this transparent and vendor-neutral project, which will be fundamental in HPE’s plans to deliver a dynamic, open, and secure edge-to-cloud platform,” he wrote.

Scytale was founded in 2017 and has raised $8 million to date, according to PitchBook data. The bulk of that came in a $5 million Series A last March led by Bessemer.


By Ron Miller

Mesosphere changes name to D2IQ, shifts focus to Kubernetes, cloud native

Mesosphere was born as the commercial face of the open source Mesos project. Mesos was a clever solution for running workloads far more efficiently across clusters of machines, but times change and companies change. Today the company announced it was changing its name to Day2IQ, or D2IQ for short, and setting its sights on Kubernetes and cloud native, which have grown quickly in the years since Mesos appeared on the scene.

D2IQ CEO Mike Fey says that the name reflects the company’s new approach. Instead of focusing entirely on the Mesos project, it wants to concentrate on helping more mature organizations adopt cloud native technologies.

“We felt like the Mesosphere name was somewhat of constrictive. It made statements about the company that really allocated us to a given technology, instead of to our core mission, which is supporting successful Day Two operations, making cloud native a viable approach not just for the early adopters, but for everybody,” Fey explained.

Fey is careful to point out that the company will continue to support the Mesos-driven DC/OS solution, but the general focus of the company has shifted, and the new name is meant to illustrate that. “The Mesos product line is still doing well, and there are things that it does that nothing else can deliver on yet. So we’re not abandoning that totally, but we do see that Kubernetes is very powerful, and the community behind it is amazing, and we want to be a value added member of that community,” he said.

He adds that this is not about jumping on the cloud native bandwagon all of a sudden. He points out his company has had a Kubernetes product for more than a year running on top of DC/OS, and it has been a contributing member to the cloud native community.

It’s not just a name change and a rebranding, though; the shift also involves several new cloud native products that the company has built to serve the more mature organizations that inspired the new name.

For starters, it’s introducing its own flavor of Kubernetes called Konvoy, which, it says, provides an “enterprise-grade Kubernetes experience.” The company will also provide a support and training layer, which it believes is a key missing piece, and one that is required by larger organizations looking to move to cloud native.

In addition, it is offering a data integration layer designed to help integrate large amounts of data in a cloud native fashion. To that end, it is introducing a beta of KUDO, an open source, cloud native tool for building stateful operators in Kubernetes. The company has already donated this tool to the Cloud Native Computing Foundation, the open source organization that houses Kubernetes and other cloud native projects.

The company faces stiff competition in this space from some heavy hitters like the newly combined IBM and Red Hat, but it believes by adhering to a strong open source ethos, it can move beyond its Mesos roots to become a player in the cloud native space. Time will tell if it made a good bet.


By Ron Miller

The challenges of truly embracing cloud native

There is a tendency at any conference to get lost in the message. Spending several days immersed in any subject tends to do that. The purpose of such gatherings is, after all, to sell the company or technologies being featured.

Against the beautiful backdrop of the city of Barcelona last week, we got the full cloud native message at KubeCon and CloudNativeCon. The Cloud Native Computing Foundation (CNCF), which houses Kubernetes and related cloud native projects, had certainly honed the message, along with the community that came to celebrate its fifth anniversary. The large crowds that wandered the long hallways of the Fira Gran Via conference center proved it was getting through, at least to a specific group.

Cloud native computing involves a combination of software containerization along with Kubernetes and a growing set of adjacent technologies to manage and understand those containers. It also involves the idea of breaking down applications into discrete parts known as microservices, which in turn leads to a continuous delivery model, where developers can create and deliver software more quickly and efficiently. At the center of all this is the notion of writing code once and being able to deliver it on any public cloud, or even on-prem. These approaches were front and center last week.

Five years in, many developers have embraced these concepts, but cloud native projects have reached a size and scale where they need to move beyond the early adopters and true believers and make their way deep into the enterprise. It turns out that it might be a bit harder for larger companies with hardened systems to make wholesale changes in the way they develop applications, just as it is difficult for large organizations to take on any type of substantive change.


By Ron Miller

Serverless and containers: Two great technologies that work better together

Cloud native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions the exact amount of resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, the approach may not work for every situation, in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on-prem or in any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right number of resources to run a particular workload for the right amount of time and no more.
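For developers, the appeal is how little of that provisioning they have to think about. The sketch below is a hypothetical illustration of the kind of handler a developer ships to a functions platform; the local main() is only there so the snippet runs on its own, since on a real platform such as AWS Lambda, Google Cloud Functions or Azure Functions the provider supplies the server, the scaling and the billing.

```go
// A minimal sketch of the developer-facing side of serverless: you write a
// stateless handler, and the platform decides how many instances to run and
// for how long. The function name and route here are made up.
package main

import (
	"fmt"
	"log"
	"net/http"
)

// HandleGreeting is the unit the platform scales, and the only thing it bills
// for while a request is actually being handled.
func HandleGreeting(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "world"
	}
	fmt.Fprintf(w, "hello, %s\n", name)
}

func main() {
	// Stand-in for the hosting a functions platform would provide.
	http.HandleFunc("/greet", HandleGreeting)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```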

Nothing is perfect

Such an arrangement would seem to be perfectly suited to a continuous delivery model, and while vendors have recognized the beauty of such an approach, as one engineer pointed out, there is never a free lunch in processes that are this complex, and it won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition. “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time, or the need to deliver it in a specific location. He says that in reality it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that deliver on this ideal. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open source Knative project and, in essence, gives developers running containers the benefits of serverless. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud native technologies. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.


By Ron Miller

Scytale grabs $5M Series A for application-to-application identity management

Scytale, a startup that wants to bring identity and access management to application-to-application activities, announced a $5 million Series A round today.

The round was led by Bessemer Venture Partners, a return investor which led the company’s previous $3 million round in 2018. Bain Capital Ventures, TechOperators and Work-Bench are also participating in this round.

The company wants to bring the same kind of authentication that individuals are used to having with a tool like Okta to applications and services in a cloud native environment. “What we’re focusing on is trying to bring to market a capability for large enterprises going through this transition to cloud native computing to evolve the existing methods of application-to-application authentication, so that it’s much more flexible and scalable,” Sunil James, the company’s CEO, told TechCrunch.

To help with this, the company has developed SPIFFE, an open source, cloud native project that is managed by the Cloud Native Computing Foundation (CNCF). The project is designed to provide identity and access management for application-to-application communication in an open source framework.
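At the heart of SPIFFE is a simple naming scheme: every workload gets an identity of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;, which it can then prove cryptographically. The sketch below only illustrates that format with a made-up identity; it is not the official go-spiffe tooling, and real deployments verify the ID against an X.509 or JWT SVID rather than just parsing the string.

```go
// An illustrative check of a SPIFFE ID's shape. This is not production
// verification; it only validates the URI format of a made-up identity.
package main

import (
	"errors"
	"fmt"
	"net/url"
	"strings"
)

// parseSPIFFEID splits an ID such as "spiffe://prod.example.com/payments/api"
// into its trust domain and workload path.
func parseSPIFFEID(id string) (trustDomain, workloadPath string, err error) {
	u, err := url.Parse(id)
	if err != nil {
		return "", "", err
	}
	if u.Scheme != "spiffe" {
		return "", "", errors.New("scheme must be spiffe://")
	}
	if u.Host == "" {
		return "", "", errors.New("missing trust domain")
	}
	if u.User != nil || u.RawQuery != "" || u.Fragment != "" {
		return "", "", errors.New("SPIFFE IDs carry no userinfo, query or fragment")
	}
	return u.Host, strings.TrimPrefix(u.Path, "/"), nil
}

func main() {
	td, path, err := parseSPIFFEID("spiffe://prod.example.com/payments/api")
	if err != nil {
		fmt.Println("invalid SPIFFE ID:", err)
		return
	}
	fmt.Printf("trust domain: %s, workload: %s\n", td, path)
}
```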

The idea is that as companies transition to a containerized, cloud native approach to application delivery, there needs to be a smooth, automated way for applications and services to prove they are legitimate very quickly, in much the same way individuals provide a username and password to access a website. This could happen, for example, as applications pass through API gateways, or as automation drives the use of multiple applications in a workflow.

Webscale companies like Google and Netflix have developed mechanisms to make this work in-house, but it’s been out of reach of most large enterprise companies. Scytale wants to bring this capability to authenticate services and applications to any company.

In addition to the funding announcement, the company also announced Scytale Enterprise, a tool that provides a commercial layer on top of the open source tools the company has developed. The enterprise version helps companies that might not have the personnel to deal with the open source version on their own by providing training, consulting and support services.

Bain Capital Ventures’ Enrique Salem sees a startup solving a big problem for companies that are moving to cloud native environments and need this kind of authentication. “In an increasingly complex and fragmented enterprise IT environment, Scytale has not only built Spiffe’s amazing open-source community but has also delivered a commercial offering to address hybrid cloud authentication challenges faced by Fortune 500 identity and access management engineering teams,” Salem said in a statement.

The company, which is based in the Bay Area, launched in 2017 and currently has 24 employees.


By Ron Miller

Open-source communities fight over telco market

When you think of MWC Barcelona, chances are you’re thinking about the newest smartphones and other mobile gadgets, but that’s only half the story. Actually, it’s probably far less than half the story because the majority of the business that’s done at MWC is enterprise telco business. Not too long ago, that business was all about selling expensive proprietary hardware. Today, it’s about moving all of that into software — and a lot of that software is open source.

It’s maybe no surprise, then, that this year the Linux Foundation (LF) has its own booth at MWC. It’s not massive, but it’s big enough to have its own meeting space. The booth is shared by three LF projects: the Cloud Native Computing Foundation (CNCF), Hyperledger and Linux Foundation Networking, the home of foundational projects like ONAP and the Open Platform for NFV (OPNFV) that power many a modern network. And with the advent of 5G, there’s a lot of new market share to grab here.

To discuss the CNCF’s role at the event, I sat down with Dan Kohn, the executive director of the CNCF.

At MWC, the CNCF launched its testbed for comparing the performance of virtual network functions on OpenStack and what the CNCF calls cloud-native network functions, using Kubernetes (with the help of bare-metal host Packet). The project’s results — at least so far — show that the cloud-native container-based stack can handle far more network functions per second than the competing OpenStack code.

“The message that we are sending is that Kubernetes as a universal platform that runs on top of bare metal or any cloud, most of your virtual network functions can be ported over to cloud-native network functions,” Kohn said. “All of your operating support system, all of your business support system software can also run on Kubernetes on the same cluster.”

OpenStack, in case you are not familiar with it, is another massive open-source project that helps enterprises manage their own data center software infrastructure. One of OpenStack’s biggest markets has long been the telco industry. There has always been a bit of friction between the two foundations, especially now that the OpenStack Foundation has opened up its organization to projects that aren’t directly related to the core OpenStack projects.

I asked Kohn if he is explicitly positioning the CNCF/Kubernetes stack as an OpenStack competitor. “Yes, our view is that people should be running Kubernetes on bare metal and that there’s no need for a middle layer,” he said — and that’s something the CNCF has never stated quite as explicitly before but that was always playing in the background. He also acknowledged that some of this friction stems from the fact that the CNCF and the OpenStack foundation now compete for projects.

The OpenStack Foundation, unsurprisingly, doesn’t agree. “Pitting Kubernetes against OpenStack is extremely counterproductive and ignores the fact that OpenStack is already powering 5G networks, in many cases in combination with Kubernetes,” OpenStack COO Mark Collier told me. “It also reflects a lack of understanding about what OpenStack actually does, by suggesting that it’s simply a virtual machine orchestrator. That description is several years out of date. Moving away from VMs, which makes sense for many workloads, does not mean moving away from OpenStack, which manages bare metal, networking and authentication in these environments through the Ironic, Neutron and Keystone services.”

Similarly, ex-OpenStack Foundation board member (and Mirantis co-founder) Boris Renski told me that “just because containers can replace VMs, this doesn’t mean that Kubernetes replaces OpenStack. Kubernetes’ fundamental design assumes that something else is there that abstracts away low-level infrastructure, and is meant to be an application-aware container scheduler. OpenStack, on the other hand, is specifically designed to abstract away low-level infrastructure constructs like bare metal, storage, etc.”

This overall theme continued with Kohn and the CNCF taking a swipe at Kata Containers, the first project the OpenStack Foundation took on after it opened itself up to other projects. Kata Containers promises to offer a combination of the flexibility of containers with the additional security of traditional virtual machines.

“We’ve got this FUD out there around Kata and saying: telco’s will need to use Kata, a) because of the noisy neighbor problem and b) because of the security,” said Kohn. “First of all, that’s FUD and second, micro-VMs are a really interesting space.”

He believes it’s an interesting space for situations where you are running third-party code (think AWS Lambda running Firecracker) — but telcos don’t typically run that kind of code. He also argues that Kubernetes handles noisy neighbors just fine because you can constrain how many resources each container gets.
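The constraint Kohn is referring to is Kubernetes’ per-container resource requests and limits. The sketch below shows roughly what that looks like using the Kubernetes Go API types; the container name, image and values are made up for illustration.

```go
// A minimal sketch of the resource constraints that keep one container from
// starving its neighbors. The container name, image and values are made up.
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name:  "packet-processor",
		Image: "example.com/telco/packet-processor:1.0",
		Resources: corev1.ResourceRequirements{
			// Requests are what the scheduler reserves for the container.
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("256Mi"),
			},
			// Limits are the hard ceiling enforced at runtime.
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("1"),
				corev1.ResourceMemory: resource.MustParse("512Mi"),
			},
		},
	}

	// Print the container spec so the sketch does something when run.
	out, err := json.MarshalIndent(container, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```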

It seems both organizations have a fair argument here. On the one hand, Kubernetes may be able to handle some use cases better and provide higher throughput than OpenStack. On the other hand, OpenStack handles plenty of other use cases, too, and this is a very specific use case. What’s clear, though, is that there’s quite a bit of friction here, which is a shame.


By Frederic Lardinois

The Cloud Native Computing Foundation adds etcd to its open-source stable

The Cloud Native Computing Foundation (CNCF), the open-source home of projects like Kubernetes and Vitess, today announced that its technical committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.

Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open-source projects that use etcd include Cloud Foundry, and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.
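Part of etcd’s appeal is that its API is deliberately small: clients mostly put, get and watch keys. The sketch below shows the basic pattern with the etcd v3 Go client; it assumes an etcd endpoint is reachable at localhost:2379, and the key and value are made up. Kubernetes talks to etcd in much the same way to store the cluster’s desired state.

```go
// A minimal sketch of writing and reading a key with the etcd v3 Go client.
// Assumes a local etcd endpoint at localhost:2379; key and value are made up.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connecting to etcd: %v", err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Write a key, then read it back.
	if _, err := cli.Put(ctx, "/config/feature-flag", "enabled"); err != nil {
		log.Fatalf("put failed: %v", err)
	}
	resp, err := cli.Get(ctx, "/config/feature-flag")
	if err != nil {
		log.Fatalf("get failed: %v", err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```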

“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”

Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 projects that fall under its “incubated technologies” umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.

That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).


By Frederic Lardinois

Google steps back from running the Kubernetes infrastructure

Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.

Why does this matter? Until now, Google hosted virtually all of the cloud resources that support the project, such as its CI/CD testing infrastructure, container downloads and DNS services, on its own cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.

Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs more than 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.

It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure either.

“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Google’s engineers will still be involved in running the Kubernetes infrastructure.

“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands – making the project not just open source, but openly managed, by an open community.”

It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.


By Frederic Lardinois

Upbound grabs $9M Series A to automate multi-cloud management

Kubernetes, the open source container orchestration tool, does a great job of managing a single cluster, but Upbound, a new Seattle-based startup, wants to extend this ability to manage multiple Kubernetes clusters across multi-cloud environments. It’s a growing requirement as companies deploy ever-larger numbers of clusters and choose a multi-vendor approach to cloud infrastructure services.

Today, the company announced a $9 million Series A investment led by GV (formerly Google Ventures) along with numerous unnamed angel investors from the cloud-native community. As part of the deal, GV’s Dave Munichiello will be joining the company board of directors.

It’s important to note that the company is currently working on the product and could be a year away from a release, but the vision is certainly compelling. As Upbound CEO and founder Bassam Tabbara says, his company’s solution could allow customers to run, scale and optimize their workloads across clusters, regions and clouds as a single entity.

That level of control could enable them to set rules and policies across those clusters and clouds. For example, a customer might control costs by creating a rule to find the cloud with the lowest cost for processing a given job, or provide failover control across regions and clouds — all automatically. It would provide the general ability to have highly granular control across multiple environments that isn’t really possible now, Tabbara explained.

That vision of enterprise portability is certainly something that caught the eye of GV’s Munichiello. “Upbound presents a credible approach to multi-cloud computing built on the success of Kubernetes, and as a response to the growing enterprise demand for hybrid and multi-cloud environments,” he said in a statement.

Companies are working with multiple Kubernetes clusters today. As an example, CERN, the European physics organization, is running 210 clusters. JD.com, the Chinese shopping site, has over 20,000 servers running Kubernetes; its largest cluster is made up of 5,000 servers. As these projects scale, they require a tool to help manage workloads across these larger environments.

The company’s founder isn’t new to cloud native computing or open source. Tabbara was part of the team responsible for producing Rook, an open source storage orchestrator for Kubernetes and a Cloud Native Computing Foundation Sandbox project. Rook helps orchestrate distributed storage systems running in cloud native environments in a similar way that Kubernetes does for containerized environments. That project provided some of the groundwork for what Upbound is trying to do on a broader scale beyond pure storage.

The computing world is suddenly all about abstraction. We started with virtual machines, which allowed you to take an individual server and turn it into multiple virtual machines. That led to containers, which let you launch hundreds of containers on that same machine. Kubernetes is an open source container orchestration tool that has rapidly gained acceptance by allowing operations teams to treat a cluster of Kubernetes nodes as a single entity, making it much easier to launch and manage containers.

Upbound launched last fall and currently has eight employees, but Tabbara says the company is actively seeking new engineers. The nature of its business is distributed workloads, and he says the workforce will be similarly distributed; employees won’t have to work in Seattle. He says the plan is to use and contribute to open source whenever possible and to open source parts of the product when it’s available.


By Ron Miller