How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure running everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of new software has been deployed on top of Kubernetes, the tool for managing large clusters of servers running containers that Google open sourced five years ago.

Today, Kubernetes is the fastest-growing open-source project, and earlier this month the twice-yearly KubeCon+CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making it the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who later went on to found his own startup, Heptio, which he sold to VMware); Tim Hockin, another Googler who was an early member of the project and also worked on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, and then sold it to Microsoft, where he is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.


By Frederic Lardinois

AWS expands cloud infrastructure offerings with new AMD EPYC-powered T3a instances

Amazon is always looking for ways to increase the options it offers developers in AWS, and to that end, today it announced a bunch of new AMD EPYC-powered T3a instances. These were originally announced at the end of last year at re:Invent, AWS’s annual customer conference.

Today’s announcement makes these instances generally available. They have been designed for a specific type of burstable workload, where you might not always need a sustained amount of compute power.

“These instances deliver burstable, cost-effective performance and are a great fit for workloads that do not need high sustained compute power but experience temporary spikes in usage. You get a generous and assured baseline amount of processing power and the ability to transparently scale up to full core performance when you need more processing power, for as long as necessary,” AWS’s Jeff Barr wrote in a blog post.
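To make that concrete, here is a minimal sketch of launching one of these instances with boto3. The AMI ID and key pair name are placeholders, and the CreditSpecification parameter is the general EC2 option for choosing how a burstable instance behaves once it exhausts its CPU credits; it is not specific to today’s announcement.

```python
# Minimal sketch: launching a burstable T3a instance with boto3.
# The AMI ID and key pair name below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3a.micro",          # one of the seven new T3a sizes
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
    # "standard" sticks to the accrued-credit model described above;
    # "unlimited" lets the instance keep bursting for an extra charge.
    CreditSpecification={"CpuCredits": "standard"},
)

print(response["Instances"][0]["InstanceId"])
```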

These instances are built on the AWS Nitro System, the custom hardware and hypervisor stack that the company has been developing for the last several years. The primary components of this system include the Nitro Card I/O Acceleration, the Nitro Security Chip and the Nitro Hypervisor.

Today’s release comes on top of last year’s announcement that the company would be releasing EC2 instances powered by Arm-based AWS Graviton processors, another option for developers who are looking for a solution for scale-out workloads.

It also comes on the heels of last month’s announcement that it was releasing EC2 M5 and R5 instances, which use lower-cost AMD chips. These are also built on top of the Nitro System.

The T3a instances are available starting today in seven sizes, as On-Demand, Spot or Reserved Instances. They are available in US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio) and Asia Pacific (Singapore).


By Ron Miller

Much to Oracle’s chagrin, Pentagon names Microsoft and Amazon as $10B JEDI cloud contract finalists

Yesterday, the Pentagon announced two finalists in the $10 billion, decade-long JEDI cloud contract process — and Oracle was not one of them. In spite of lawsuits, official protests and even back-channel complaining to the president, the two finalists are Microsoft and Amazon.

“After evaluating all of the proposals received, the Department of Defense has made a competitive range determination for the Joint Enterprise Defense Infrastructure Cloud request for proposals, in accordance with all applicable laws and regulations. The two companies within the competitive range will participate further in the procurement process,” Elissa Smith, DoD spokesperson for Public Affairs Operations, told TechCrunch. She added that those two finalists were in fact Microsoft and Amazon Web Services (AWS, the cloud computing arm of Amazon).

This contract procurement process has caught the attention of the cloud computing market for a number of reasons. For starters, it’s a large amount of money, but perhaps the biggest reason it had cloud companies going nuts was that it is a winner-take-all proposition.

It is important to keep in mind that whichever of Microsoft or Amazon is ultimately chosen, the winner may never see the full $10 billion, and the deal may not last 10 years; there are a number of points at which the DoD could back out. Still, the idea of a single winner has been irksome to participants in the process from the start.

Over the course of the last year, Google dropped out of the running, while IBM and Oracle have been complaining to anyone who will listen that the contract unfairly favored Amazon. Others have questioned the wisdom of even going with a single-vendor approach. And even at $10 billion, an astronomical sum to be sure, we have pointed out that in the scheme of the cloud business it’s not all that much money. But there is more at stake here than money.

There is a belief here that the winner could have an upper hand in other government contracts, that this is an entree into a much bigger pot of money. After all, if you are building the cloud for the Department of Defense and preparing it for a modern approach to computing in a highly secure way, you would be in a pretty good position to argue for other contracts with similar requirements.

In the end, in spite of the protests of the other companies involved, the Pentagon probably got this right. The two finalists are the most qualified to carry out the contract’s requirements. They are the top two cloud infrastructure vendors on the market, although Microsoft is far behind with around 13 or 14 percent market share. Amazon is far ahead with around 33 percent, according to several firms that track such things.

Microsoft in particular has tools and resources that would appeal to the military, especially Azure Stack, a mini private version of Azure that customers can stand up anywhere. But both companies have experience with government contracts, and both bring strengths and weaknesses to the table. It will undoubtedly be a tough decision.

In February, the contract drama took yet another turn when the department reported it was investigating new evidence of a conflict of interest involving a former Amazon employee, who was involved in the RFP process for a time before returning to the company. Smith reports that the department found no such conflict, but that it is still looking into potential ethical violations.

“The department’s investigation has determined that there is no adverse impact on the integrity of the acquisition process. However, the investigation also uncovered potential ethical violations, which have been further referred to DOD IG,” Smith explained.

The DoD is supposed to announce the winner this month, but the drama has continued non-stop.


By Ron Miller

Google’s hybrid cloud platform is coming to AWS and Azure

Google’s Cloud Services Platform, its system for managing hybrid clouds that span on-premises data centers and the Google cloud, is coming out of beta today. The company is also changing the product’s name to Anthos, a name that either refers to a lost Greek tragedy, the name of an obscure god in the Marvel universe, or rosemary. That by itself would be interesting but minor news. What makes this interesting is that Google also today announced that Anthos will run on third-party clouds as well, including AWS and Azure.

“We will support Anthos and AWS and Azure as well, so people get one way to manage their application and that one way works across their on-premise environments and all other clouds,” Google’s senior VP for its technical infrastructure, Urs Hölzle, explained in a press conference ahead of today’s announcement.

So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure.

“You can use one consistent approach — one open-source based approach — across all environments,” Hölzle said. “I can’t really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it’s not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds — and actually on-premise environments, too — look the same.”

Anthos/Google Cloud Services Platform is based on the Google Kubernetes Engine, as well as other open source projects like the Istio service mesh. It’s also hardware agnostic, meaning that users can take their current hardware and run the service on top of that without having to immediately invest in new servers.

Why is Google doing this? “We hear from our customers that multi-cloud and hybrid is really an acute pain point,” Hölzle said. He noted that containers are the enabling technology for this but that few enterprises have developed a unifying strategy to manage these deployments and that it takes expertise in all major clouds to get the most out of them.

Enterprises already have major investments in their infrastructure and have established relationships with their vendors, though, so it’s no surprise that Google is launching Anthos with over 30 major hardware and software partners ranging from Cisco to Dell EMC, HPE and VMware, as well as application vendors like Confluent, DataStax, Elastic, Portworx, Tigera, Splunk, GitLab, MongoDB and others.

Anthos is a subscription-based service, with list prices starting at $10,000/month per 100 vCPU block. Enterprise prices tend to be up for negotiation, though, so many customers will likely pay less.

It’s one thing to use a service like this for new applications, but many enterprises already have plenty of line-of-business tools that they would like to bring to the cloud as well. For them, Google is launching the first beta of Anthos Migrate today. This service will auto-migrate VMs from on-premises or other clouds into containers in the Google Kubernetes Engine. The promise here is that this is essentially an automatic process and once the container is on Google’s platform, you’ll be able to use all of the other features that come with the Anthos platform, too.

Google’s Hölzle noted that the emphasis here was on making this migration as easy as possible. “There’s no manual effort there,” he said.


By Frederic Lardinois

Google Cloud challenges AWS with new open-source integrations

Google today announced that it has partnered with a number of top open-source data management and analytics companies to integrate their products into its Google Cloud Platform and offer them as managed services operated by its partners. The partners here are Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j and Redis Labs.

The idea here, Google says, is to provide users with a seamless user experience and the ability to easily leverage these open-source technologies in Google’s cloud. But there is a lot more at play here, even though Google never quite says so. That’s because Google’s move here is clearly meant to contrast its approach to open-source ecosystems with Amazon’s. It’s no secret that Amazon’s AWS cloud computing platform has a reputation for taking some of the best open-source projects and then forking those and packaging them up under its own brand, often without giving back to the original project. There are some signs that this is changing, but a number of companies have recently taken action and changed their open-source licenses to explicitly prevent this from happening.

That’s where things get interesting, because those companies include Confluent, Elastic, MongoDB, Neo4j and Redis Labs — and those are all partnering with Google on this new project, though it’s worth noting that InfluxData is not taking this new licensing approach and that while DataStax uses lots of open-source technologies, its focus is very much on its enterprise edition.

“As you are aware, there has been a lot of debate in the industry about the best way of delivering these open-source technologies as services in the cloud,” Manvinder Singh, the head of infrastructure partnerships at Google Cloud, said in a press briefing. “Given Google’s DNA and the belief that we have in the open-source model, which is demonstrated by projects like Kubernetes, TensorFlow, Go and so forth, we believe the right way to solve this is to work closely together with companies that have invested their resources in developing these open-source technologies.”

So while AWS takes these projects and then makes them its own, Google has decided to partner with these companies. While Google and its partners declined to comment on the financial arrangements behind these deals, chances are we’re talking about some degree of profit-sharing here.

“Each of the major cloud players is trying to differentiate what it brings to the table for customers, and while we have a strong partnership with Microsoft and Amazon, it’s nice to see that Google has chosen to deepen its partnership with Atlas instead of launching an imitation service,” Sahir Azam, the senior VP of Cloud Products at MongoDB told me. “MongoDB and GCP have been working closely together for years, dating back to the development of Atlas on GCP in early 2017. Over the past two years running Atlas on GCP, our joint teams have developed a strong working relationship and support model for supporting our customers’ mission critical applications.”

As for the actual functionality, the core principle here is that Google will deeply integrate these services into its Cloud Console; for example, similar to what Microsoft did with Databricks on Azure. These will be managed services and Google Cloud will handle the invoicing and the billings will count toward a user’s Google Cloud spending commitments. Support will also run through Google, so users can use a single service to manage and log tickets across all of these services.

Redis Labs CEO and co-founder Ofer Bengal echoed this. “Through this partnership, Redis Labs and Google Cloud are bringing these innovations to enterprise customers, while giving them the choice of where to run their workloads in the cloud,” he said. “Customers now have the flexibility to develop applications with Redis Enterprise using the fully integrated managed services on GCP. This will include the ability to manage Redis Enterprise from the GCP console, provisioning, billing, support, and other deep integrations with GCP.”


By Frederic Lardinois

On balance, the cloud has been a huge boon to startups

Today’s startups have a distinct advantage when it comes to launching a company because of the public cloud. You don’t have to build infrastructure or worry about what happens when you scale too quickly. The cloud vendors take care of all that for you.

But last month, when Pinterest announced its IPO, the company’s cloud spend raised eyebrows. You see, the company is spending $750 million a year on cloud services, specifically with AWS. When your business is primarily focused on photos and video, and needs to scale on a regular basis, that bill is going to be high.

That price tag prompted Erica Joy, a Microsoft engineer, to publish this tweet and start a little internal debate here at TechCrunch. Startups, after all, have a dog in this fight, and it’s worth exploring whether the cloud is helping feed the startup ecosystem or sending bills soaring, as it has with Pinterest.

For starters, it’s worth pointing out that Ms. Joy works for Microsoft, which just happens to be a primary competitor of Amazon’s in the cloud business. Regardless of her personal feelings on the matter, I’m sure Microsoft would be more than happy to take over that $750 million bill from Amazon. It’s a nice chunk of business, but all that aside, do startups benefit from having access to cloud vendors?


By Ron Miller

Pentagon stands by finding of no conflict of interest in JEDI RFP process

A line in a new court filing by the Department of Defense suggests that it might reopen its investigation into a possible conflict of interest in the JEDI contract RFP process involving a former AWS employee. The story has attracted a great deal of attention in major news publications including the Washington Post and Wall Street Journal, but a Pentagon spokesperson has told TechCrunch that nothing has changed.

In the document, filed with the court on Wednesday, the government’s legal representatives sought to outline its legal arguments in the case. The line that attracted so much attention stated, “Now that Amazon has submitted a proposal, the contracting officer is considering whether Amazon’s re-hiring Mr. Ubhi creates an OCI that cannot be avoided, mitigated, or neutralized.” OCI stands for Organizational Conflict of Interest in DoD lingo.

When asked about this specific passage, Pentagon spokesperson Heather Babb made clear the conflict had been investigated earlier and that Ubhi had recused himself from the process. “During his employment with DDS, Mr. Deap Ubhi recused himself from work related to the JEDI contract. DOD has investigated this issue, and we have determined that Mr. Ubhi complied with all necessary laws and regulations,” Babb told TechCrunch.

She repeated that statement when asked specifically about the language in the DoD’s filing. Ubhi did work at Amazon prior to joining the DoD and returned to work for them after he left.

The Department of Defense’s decade-long, $10 billion JEDI cloud contract process has attracted a lot of attention, and not just for the size of the deal. The Pentagon has said this will be a winner-take-all affair. Oracle and IBM have filed formal complaints, and Oracle filed a lawsuit in December alleging, among other things, a conflict of interest involving Ubhi and arguing that the single-vendor approach was designed to favor AWS. The Pentagon has denied these allegations.

The DoD completed the RFP process at the end of October and is expected to choose the winning vendor in April.


By Ron Miller

Former Facebook engineer picks up $15M for AI platform Spell

In 2016, Serkan Piantino packed up his desk at Facebook with hopes of moving on to something new. The former Director of Engineering for Facebook AI Research had every intention of continuing to work on AI, but quickly ran into a huge issue.

Unless you’re under the umbrella of one of these big tech companies like Facebook, it can be very difficult and incredibly expensive to get your hands on the hardware necessary to run machine learning experiments.

So he built Spell, which today received $15 million in Series A funding led by Eclipse Ventures and Two Sigma Ventures.

Spell is a collaborative platform that lets anyone run machine learning experiments. The company connects clients with the best, newest hardware hosted by Google, AWS and Microsoft Azure and gives them the software interface they need to run, collaborate, and build with AI.

“We spent decades getting to a laptop powerful enough to develop a mobile app or a website, but we’re struggling with things we develop in AI that we haven’t struggled with since the ’70s,” said Piantino. “Before PCs existed, the computers filled the whole room at a university or NASA and people used terminals to log into a single mainframe. It’s why Unix was invented, and that’s kind of what AI needs right now.”

In a meeting with Piantino this week, TechCrunch got a peek at the product. First, Piantino pulled out his MacBook and opened up Terminal. He began to run his own code against MNIST, which is a database of handwritten digits commonly used to train image detection algorithms.

He started the program and then moved over to the Spell platform. While the original program was just getting started, Spell’s cloud computing platform had completed the test in under a minute.
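To give a sense of what that demo looked like, here is a minimal MNIST training script of the kind Piantino ran. This is a generic Keras sketch, not Spell’s own code; the same script could be executed locally on a laptop or handed off to a cloud GPU through Spell’s tooling.

```python
# Minimal MNIST training sketch (generic Keras code, not Spell's demo).
import tensorflow as tf

# Load the handwritten-digit dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier -- enough to show the GPU speed-up.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))
```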

The advantage here is obvious. Engineers who want to work on AI, either on their own or for a company, have a huge task in front of them. They essentially have to build their own computer, complete with the high-powered GPUs necessary to run their tests.

With Spell, the newest GPUs from NVIDIA and Google are available virtually, so anyone can run their tests on them.

Individual users can get on for free, specify the type of GPU they need to compute their experiment, and simply let it run. Corporate users, on the other hand, are able to view the runs taking place on Spell and compare experiments, allowing users to collaborate on their projects from within the platform.

Enterprise clients can set up their own cluster, and keep all of their programs private on the Spell platform, rather than running tests on the public cluster.

Spell also offers enterprise customers a ‘spell hyper’ command that offers built-in support for hyperparameter optimization. Folks can track their models and results and deploy them to Kubernetes/Kubeflow in a single click.

But, perhaps most importantly, Spell allows an organization to instantly transform a model into an API that can be used more broadly throughout the organization, or used directly within an app or website.

The implications here are huge. Small companies and startups looking to get into AI now have a much lower barrier to entry, whereas large traditional companies can build out their own proprietary machine learning algorithms for use within the organization without an outrageous upfront investment.

Individual users can get on the platform for free, whereas enterprise clients can get started at $99/month per host used over the course of a month. Piantino explains that Spell charges based on concurrent usage, so if a customer has 10 concurrent things running, the company considers that the ‘size’ of the Spell cluster and charges based on that.

Piantino sees Spell’s model as the key to defensibility. Whereas many cloud platforms try to lock customers in to their entire suite of products, Spell works with any language framework and lets users plug and play on the platforms of their choice by simply commodifying the hardware. In fact, Spell doesn’t even share with clients which cloud cluster (Microsoft Azure, Google, or AWS) they’re on.

So, on the one hand the speed of the tests themselves goes up based on access to new hardware, but, because Spell is an agnostic platform, there is also a huge advantage in how quickly one can get set up and start working.

The company plans to use the funding to further grow the team and the product, and Piantino says he has his eye out for top-tier engineering talent as well as a designer.


By Jordan Crook

AWS gives open source the middle finger

AWS launched DocumentDB today, a new database offering that is compatible with the MongoDB API. The company describes DocumentDB as a “fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools.” In effect, it’s a hosted drop-in replacement for MongoDB that doesn’t use any MongoDB code.

AWS argues that while MongoDB is great at what it does, its customers have found it hard to build fast and highly available applications on the open-source platform that can scale to multiple terabytes and hundreds of thousands of reads and writes per second. So what the company did was build its own document database, but made it compatible with the Apache 2.0 open source MongoDB 3.6 API.
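Because the compatibility is at the API level, existing MongoDB drivers should in principle work unchanged against DocumentDB. The sketch below is a rough illustration using pymongo with a placeholder cluster endpoint and credentials, and the TLS and retryWrites settings AWS documents for the service (exact option names vary with driver version).

```python
# Minimal sketch: talking to an Amazon DocumentDB cluster with a standard
# MongoDB driver (pymongo). Hostname, credentials and the CA bundle path
# are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://myuser:mypassword@my-cluster.cluster-abc123.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=rds-combined-ca-bundle.pem&retryWrites=false"
)

# Same driver calls an application would make against MongoDB itself.
db = client["appdb"]
db.orders.insert_one({"order_id": 1, "status": "shipped"})
print(db.orders.find_one({"order_id": 1}))
```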

If you’ve been following the politics of open source over the last few months, you’ll understand that the optics of this aren’t great. It’s also no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

The wrinkle here is that MongoDB was one of the first companies that aimed to put a stop to this by re-licensing its open-source tools under a new license that explicitly stated that companies that wanted to do this had to buy a commercial license. Since then, others have followed.

“Imitation is the sincerest form of flattery, so it’s not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB’s document model,” MongoDB CEO and president Dev Ittycheria told us. “However, developers are technically savvy enough to distinguish between the real thing and a poor imitation. MongoDB will continue to outperform any impersonations in the market.”

That’s a pretty feisty comment. Last November, Ittycheria told my colleague Ron Miller that he believed that AWS loved MongoDB because it drives a lot of consumption. In that interview, he also noted that “customers have spent the last five years trying to extricate themselves from another large vendor. The last thing they want to do is replay the same movie.”

MongoDB co-founder and CTO Eliot Horowitz echoed this. “In order to give developers what they want, AWS has been pushed to offer an imitation MongoDB service that is based on the MongoDB code from two years ago,” he said. “Our entire company is focused on one thing — giving developers the best way to work with data with the freedom to run anywhere. Our commitment to that single mission will continue to differentiate the real MongoDB from any imitation products that come along.”

A company spokesperson for MongoDB also highlighted that the 3.6 API that DocumentDB is compatible with is now two years old and misses most of the newest features, including ACID transactions, global clusters and mobile sync.

To be fair, AWS has become more active in open source lately and, in a way, it’s giving developers what they want (and not all developers are happy with MongoDB’s own hosted service). But bypassing MongoDB’s licensing by going for API compatibility, given that AWS knows exactly why MongoDB did that, was always going to be a controversial move and won’t endear the company to the open-source community.


By Frederic Lardinois

New Synergy Research report finds enterprise data center market is strong for now

Conventional wisdom would suggest that in 2019, the public cloud dominates and enterprise data centers are becoming an anachronism of a bygone era, but new data from Synergy Research finds that the enterprise data center market had a growth spurt last year.

In fact, Synergy reported that overall spending on enterprise infrastructure, which includes elements like servers, switches and routers, and network security, grew 13 percent last year and represents a $125 billion business — not too shabby for a market that is supposedly on its deathbed.

Overall, these numbers showed that the market is still growing, although certainly not nearly as fast as the public cloud. Synergy was kind enough to provide a separate report on the cloud market, which grew 32 percent last year to $250 billion annually.

As Synergy analyst John Dinsdale pointed out, private data centers are not the only buyers here. A good percentage of sales is likely going to public cloud providers, which are building data centers at a rapid rate these days. “In terms of applications and levels of usage, I’d characterize it more like there being a ton of growth in the overall market, but cloud is sucking up most of the growth, while enterprise or on-prem is relatively flat,” Dinsdale told TechCrunch.


Perhaps the most surprising nugget in the report is that Cisco remains the dominant vendor in this market, with 23 percent share over the last four quarters. This, even as it tries to pivot to being more of a software and services vendor, spending billions on companies such as AppDynamics, Jasper Technologies and Duo Security in recent years. The data shows it still dominates the traditional hardware sector.

Cisco remains the top vendor in the category in spite of losing a couple of percentage points of market share over the last year, primarily because it doesn’t do particularly well in the server part of the market, which happens to be the biggest overall slice. The next vendor, HPE, is far back at just 11 percent across the six segments.

While these numbers show that companies are continuing to invest in new hardware, the growth is probably not sustainable long term. At AWS re:Invent in November, AWS CEO Andy Jassy pointed out that a vast majority of data remains in private data centers, but that we can expect it to begin moving more briskly to the public cloud over the next five years. And web-scale companies like Amazon often don’t buy hardware off the shelf, opting instead to develop custom tools they can understand and configure at a highly granular level.

Jassy said that outside the US, companies are one to three years behind this trend, depending on the market, so the shift is still going on, as the much bigger growth in the public cloud numbers indicates.


By Ron Miller

Amazon reportedly acquired Israeli disaster recovery service CloudEndure for around $200M

Amazon has reportedly acquired Israeli disaster recovery startup CloudEndure. Neither company has responded to our request for confirmation, but we have heard from multiple sources that the deal has happened. While some outlets have been reporting that the deal was worth $250 million, we are hearing that it’s closer to $200 million.

The company provides disaster recovery for cloud customers. You may be thinking that disaster recovery is precisely why we put our trust in cloud vendors: if something goes wrong, it’s the vendor’s problem. You would be right to make that assumption, but nothing is simple. If you have a hybrid or multi-cloud scenario, you need ways to recover your data in the event of a disaster such as severe weather, a cyberattack or a political issue.

That’s where a company like CloudEndure comes into play. It can help you recover and get back up and running in another place, no matter where your data lives, by providing continuous backup and migration between clouds and private data centers. While CloudEndure currently works with AWS, Azure and Google Cloud Platform, it’s not clear whether Amazon would continue to support these other vendors.

The company was backed by Dell Technologies Partners, Infosys and Magma Venture Partners, among others. Ray Wang, founder and principal analyst at Constellation Research, says Infosys recently divested its part of the deal and that might have precipitated the sale. “So much information is sitting in the cloud that you need backups and regions to make sure you have seamless recovery in the event of a disaster,” Wang told TechCrunch.

While he isn’t clear what Amazon will do with the company, he says it will test just how open it is. “If you have multi-cloud and want your on-prem data backed up, or if you have backup on one cloud like AWS and want it on Google or Azure, you could do this today with CloudEndure,” he said. “That’s why I’m curious if they’ll keep supporting Azure or GCP,” he added.

CloudEndure was founded in 2012 and has raised just over $18 million. Its most recent investment came in 2016, when it raised $6 million led by Infosys and Magma.


By Ron Miller

The Cloud Native Computing Foundation adds etcd to its open-source stable

The Cloud Native Computing Foundation (CNCF), the open-source home of projects like Kubernetes and Vitess, today announced that its technical committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.

Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open-source projects that use etcd include Cloud Foundry, and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.
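As a key-value store, etcd’s interface is deliberately simple: components write small pieces of state and watch for changes. The sketch below is a rough illustration using the third-party Python etcd3 package (an assumption for illustration; the official client libraries are in Go) against a local etcd listening on its default port.

```python
# Minimal sketch of etcd as a key-value store, using the third-party
# Python "etcd3" package against a local etcd on the default port 2379.
import etcd3

client = etcd3.client(host="localhost", port=2379)

# Store a piece of cluster state and read it back.
client.put("/config/replicas", "3")
value, metadata = client.get("/config/replicas")
print(value.decode())  # -> "3"

# Components can also watch a key prefix to be notified when shared
# state changes -- the pattern Kubernetes-style controllers rely on.
# events_iterator, cancel = client.watch_prefix("/config/")
```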

“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”

Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 projects that fall under its “incubated technologies” umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.

That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).


By Frederic Lardinois

AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code (event triggers) and AWS deals with whatever compute, memory and storage you need to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make the service more developer-friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving whatever resources you need to run your event and no more. It means you no longer have to worry about coding for all your infrastructure and you only pay for the computing you need at any given moment to make the application work.
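In practice, an event trigger is just a function with a fixed signature that the platform invokes when something happens. The minimal Python sketch below shows the shape of a Lambda handler; the event fields are illustrative, assuming an S3 upload notification as the trigger.

```python
# Minimal sketch of a Python Lambda handler. AWS invokes this function
# whenever the configured trigger fires; the event fields shown reflect
# the shape of an S3 upload notification (illustrative example).
import json

def handler(event, context):
    # Pull the bucket and object key out of the S3 event records.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")

    # Whatever is returned here goes back to the caller (or is discarded
    # for asynchronous triggers).
    return {"statusCode": 200, "body": json.dumps({"processed": True})}
```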

The way AWS works is that it tends to release something, then builds more functionality on top of a base service as it sees increasing requirements as customers use it. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate about tools and everyone has their own idea of what tools they bring to the task every day.

For starters, AWS decided to please the language folks by introducing support for new languages. Developers who use Ruby can now use Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced C++ Lambda Runtime. If neither of those match your programming language tastes, AWS opened it up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”

AWS didn’t want to stop with languages though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean that all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.
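Concretely, a layer is a zip archive whose contents Lambda unpacks under /opt at runtime; for Python functions, anything placed in a python/ directory inside the layer ends up on the import path, so several functions can share one helper module. The sketch below assumes a hypothetical pricing_rules module shipped in such a layer.

```python
# Layer archive layout (shared across many functions):
#   layer.zip
#   └── python/
#       └── pricing_rules.py    # hypothetical shared business logic
#
# At runtime Lambda unpacks attached layers under /opt, and /opt/python
# is on the Python import path, so function code can simply import it.
import pricing_rules  # provided by the attached layer, not bundled here

def handler(event, context):
    # Reuse the shared logic instead of copying it into every function.
    price = pricing_rules.quote(event["sku"], event["quantity"])
    return {"sku": event["sku"], "price": price}
```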

As Lambda matures, developer requirements grow and these announcements and others are part of trying to meet those needs.



By Ron Miller

AWS is bringing the cloud on prem with Outposts

AWS has always been the pure cloud vendor, and even though it has given a nod to hybrid, it is now fully embracing it. Today, in conjunction with VMware, it announced a pair of options for bringing AWS into the data center.

Yes, you read that correctly. You can now put AWS into your data center with AWS hardware, the same design the company uses in its own data centers. The two new products are part of AWS Outposts.

There are two Outposts variations — VMware Cloud on AWS Outposts and AWS Outposts. The first uses the VMware control plane. The second allows customers to run compute and storage on-premises using the same AWS APIs that are used in the AWS cloud.

In fact, VMware CEO Pat Gelsinger joined AWS CEO Andy Jassy on stage for a joint announcement. The two companies have been working together for some time to bring VMware to the AWS cloud. Part of this announcement flips that on its head, bringing the AWS cloud on prem to work with VMware. In both cases, AWS sells you its hardware, installs it if you wish, and will even maintain it for you.

This is an area in which AWS has lagged, preferring the vision of a pure cloud rather than moving back into the data center, but it’s a tacit acknowledgment that customers want to operate in both places for the foreseeable future.

The announcement also extends the company’s cloud-native vision. On Monday, the company announced Transit Gateway, which is designed to provide a single way to manage network resources, whether they live in the cloud or on-prem.

Now AWS is bringing its cloud on prem, something that Microsoft, Canonical, Oracle and others have had for some time. It’s worth noting that today’s announcement is a public preview. The actual release is expected in the second half of next year.



By Ron Miller

AWS Textract brings intelligence to OCR

One of the challenges just about every business faces is converting forms into a useful digital format. This has typically involved using human data entry clerks to type the data into a computer. The state of the art has been using OCR to read forms automatically, but AWS CEO Andy Jassy explained that OCR is basically just a dumb text reader: it doesn’t recognize text types. Amazon wanted to change that, and today it announced Textract, an intelligent OCR tool to move data from forms into a more usable digital format.

In an example, he showed a form with tables. Regular OCR didn’t recognize the table and interpreted it as a string of text. Textract is designed to recognize common page elements like a table and pull the data in a sensible way.

Jassy said that forms also often change and if you are using a template as a work-around for OCR’s lack of intelligence, the template breaks if you move anything. To fix that, Textract is smart enough to understand common data types like social security numbers, dates of birth and addresses and it interprets them correctly no matter where they fall on the page.

“We have taught Textract to recognize this set of characters is a date of birth and this is a social security number. If forms change, Textract won’t miss it,” Jassy explained.
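Textract is exposed through the standard AWS SDKs. As a rough sketch of how the forms-and-tables demo could be reproduced programmatically, the boto3 call below asks for table and form analysis of a placeholder document stored in S3.

```python
# Minimal sketch: running Textract's form/table analysis on a scanned
# document stored in S3. Bucket name and object key are placeholders.
from collections import Counter
import boto3

textract = boto3.client("textract", region_name="us-east-1")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-forms-bucket", "Name": "scanned-form.png"}},
    FeatureTypes=["TABLES", "FORMS"],  # ask for structure, not just raw text
)

# The response is a list of typed blocks (pages, lines, words, key-value
# pairs, table cells) rather than one undifferentiated string of text.
print(Counter(block["BlockType"] for block in response["Blocks"]))
```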



By Ron Miller