The challenges of truly embracing cloud native

There is a tendency at any conference to get lost in the message. Spending several days immersed in any subject tends to do that. The purpose of such gatherings is, after all, to sell the company or technologies being featured.

Against the beautiful backdrop of the city of Barcelona last week, we got the full cloud native message at KubeCon and CloudNativeCon. The Cloud Native Computing Foundation (CNCF), which houses Kubernetes and related cloud native projects, had certainly honed the message, along with the community that came to celebrate Kubernetes' fifth anniversary. The large crowds that wandered the long hallways of the Fira Gran Via conference center proved the message was getting through, at least to a specific group.

Cloud native computing involves a combination of software containerization along with Kubernetes and a growing set of adjacent technologies to manage and understand those containers. It also involves the idea of breaking down applications into discrete parts known as microservices, which in turn leads to a continuous delivery model, where developers can create and deliver software more quickly and efficiently. At the center of all this is the notion of writing code once and being able to deliver it on any public cloud, or even on-prem. These approaches were front and center last week.
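
As a rough illustration of one of those discrete parts, here is a minimal microservice sketch using only Python's standard library; the service name, endpoint paths and port are illustrative assumptions, and in practice the code would be packaged in a container image so the same service runs unchanged on any public cloud or on-prem.

```python
# Minimal sketch of a single microservice: a small HTTP service that does one
# job (report inventory counts) and exposes a /health endpoint an orchestrator
# such as Kubernetes can check. All names and values here are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY = {"widgets": 42}  # stand-in for the service's own data


class InventoryService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            status, body = 200, b'"ok"'
        elif self.path == "/inventory":
            status, body = 200, json.dumps(INVENTORY).encode()
        else:
            status, body = 404, b'"not found"'
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Bind to 0.0.0.0 so the service is reachable when run inside a container.
    HTTPServer(("0.0.0.0", 8080), InventoryService).serve_forever()
```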

Five years in, many developers have embraced these concepts, but cloud native projects have reached a size and scale where they need to move beyond the early adopters and true believers and make their way deep into the enterprise. It turns out that it might be a bit harder for larger companies with hardened systems to make wholesale changes in the way they develop applications, just as it is difficult for large organizations to take on any type of substantive change.

Putting up stop signs


By Ron Miller

Serverless and containers: Two great technologies that work better together

Cloud native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions exactly the resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, it may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on-prem or on any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the resources needed to run a particular workload, for exactly as long as it needs them and no more.
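
As a minimal illustration of how little code the developer owns in the functions flavor of serverless discussed below, the following AWS Lambda-style handler in Python is the entire workload; the event field used here is an assumption for the example, and the vendor provisions compute per invocation and bills only for the execution time consumed.

```python
# Sketch of a function-as-a-service workload using the AWS Lambda handler
# signature. The vendor runs this only when an event arrives, allocates
# resources for the duration of the call, and charges per invocation rather
# than for an always-on virtual machine. The "name" field is illustrative,
# not part of any vendor's required event schema.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}
```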

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of the approach. But as one engineer pointed out, there is never a free lunch in processes this complex, and serverless won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she argues it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition: “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time or the need to deliver it to a specific location. In reality, he says, it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they use.
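
Whittington's point maps onto tuning knobs Kubernetes already exposes per container. The sketch below uses the kubernetes Python client to spell out that kind of requirement; the image name, resource figures, region label and grace period are illustrative assumptions, not settings he or Atlassian prescribed.

```python
# Sketch of per-container tuning expressed with the Kubernetes Python client
# (pip install kubernetes). Image, resource figures, node label and grace
# period are illustrative values only.
from kubernetes import client

container = client.V1Container(
    name="payments",
    image="registry.example.com/payments:1.4.2",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # minimum the scheduler reserves
        limits={"cpu": "500m", "memory": "512Mi"},    # ceiling that prevents over-provisioning
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="payments"),
    spec=client.V1PodSpec(
        containers=[container],
        node_selector={"topology.kubernetes.io/region": "eu-west-1"},  # pin to a location
        termination_grace_period_seconds=30,  # how long the container gets to shut down
    ),
)

# Submitting the pod requires cluster credentials, e.g.:
#   from kubernetes import config
#   config.load_kube_config()
#   client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("built pod spec for", pod.metadata.name)
```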

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open source Knative project, and in essence brings the benefits of serverless to developers running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud native technologies. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, while others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.


By Ron Miller

Atlassian puts its Data Center products into containers

It’s KubeCon + CloudNativeCon this week and, in the slew of announcements, one name stood out: Atlassian. The company is best known as the maker of tools that allow developers to work more efficiently, not as a cloud infrastructure provider. In this age of containerization, though, even Atlassian can bask in the glory that is Kubernetes, because the company today announced that it is launching Atlassian Software in Kubernetes (ASK), a new solution that allows enterprises to run and manage its on-premises applications like Jira Data Center as containers, with the help of Kubernetes.

To build this solution, Atlassian partnered with Praqma, a continuous delivery and DevOps consultancy. It’s also making ASK available as open source.

As the company admits in today’s announcement, running a Data Center application and ensuring high availability can be a lot of work using today’s methods. With ASK and by containerizing the applications, scaling and management should become easier — and downtime more avoidable.

“Availability is key with ASK. Automation keeps mission-critical applications running whatever happens,” the company explains. “If a Jira server fails, Data Center will automatically redirect traffic to healthy servers. If an application or server crashes Kubernetes automatically reconciles by bringing up a new application. There’s also zero downtime upgrades for Jira.”

ASK handles the scaling and most admin tasks, in addition to offering a monitoring solution based on the open-source Grafana and Prometheus projects.
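
The failover behavior described in the announcement corresponds to standard Kubernetes building blocks: liveness and readiness probes plus multiple replicas. The sketch below, again with the kubernetes Python client, is a general illustration under assumed names, paths and ports, not a reproduction of the ASK templates.

```python
# General sketch of the failover behavior described above, not ASK's actual
# configuration: a liveness probe restarts a crashed container, a readiness
# probe keeps traffic away from unhealthy replicas, and several replicas give
# Kubernetes healthy servers to redirect to. Names, paths and ports are
# illustrative.
from kubernetes import client

probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/status", port=8080),
    initial_delay_seconds=60,
    period_seconds=10,
)

container = client.V1Container(
    name="jira",
    image="registry.example.com/jira-datacenter:8.1",
    liveness_probe=probe,   # failing checks -> Kubernetes replaces the container
    readiness_probe=probe,  # failing checks -> traffic only goes to healthy pods
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="jira"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # several servers, so one failure does not mean downtime
        selector=client.V1LabelSelector(match_labels={"app": "jira"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "jira"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

print("built deployment object for", deployment.metadata.name)
```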

Containers are slowly becoming the distribution medium of choice for a number of vendors. As enterprises move their existing applications to containers, it makes sense for them to expect that they can also manage their existing on-premises applications from third-party vendors in the same systems. For some vendors, that may mean a shift away from per-server licensing to per-seat licensing, so there are business implications, but in general, it’s a logical move for most.


By Frederic Lardinois