Takeaways from KubeCon; the latest on Kubernetes and cloud native development

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller discuss major announcements that came out of the Linux Foundation’s European KubeCon/CloudNativeCon conference and discuss the future of Kubernetes and cloud-native technologies.

Nearly doubling in size year-over-year, this year’s KubeCon conference brought big news and big players, with major announcements coming from some of the world’s largest software vendors including Google, AWS, Microsoft, Red Hat, and more. Frederic and Ron discuss how the Kubernetes project grew to such significant scale and which new initiatives in cloud-native development show the most promise from both a developer and enterprise perspective.

“This ecosystem starts sprawling, and we’ve got everything from security companies to service mesh companies to storage companies. Everybody is here. The whole hall is full of them. Sometimes it’s hard to distinguish between them because there are so many competing start-ups at this point.

I’m pretty sure we’re going to see a consolidation in the next six months or so where some of the bigger players, maybe Oracle, maybe VMware, will start buying some of these smaller companies. And I’m sure the show floor will look quite different about a year from now. All the big guys are here because they’re all trying to figure out what’s next.”

Frederic and Ron also dive deeper into the startup ecosystem rapidly developing around Kubernetes and other cloud-native technologies and offer their take on what areas of opportunity may prove to be most promising for new startups and founders down the road.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


By Arman Tabatabai

Andreessen pours $22M into PlanetScale’s database-as-a-service

PlanetScale’s founders invented the technology called Vitess that scaled YouTube and Dropbox. Now they’re selling it to any enterprise that wants their data both secure and consistently accessible. And thanks to its ability to re-shard databases while they’re operating, it can solve businesses’ troubles with GDPR, which demands they store some data in the same locality as the user it belongs to.

The potential to be a computing backbone that both competes with and complements Amazon’s AWS has now attracted a mammoth $22 million Series A for PlanetScale. Led by Andreessen Horowitz and joined by the firm’s Cultural Leadership Fund, head of the US Digital Service Matt Cutts plus existing investor SignalFire, the round is a tall step up from the startup’s $3 million seed it raised a year ago.

“What we’re discovering is that people we thought were at one point competitors, like AWS and hosted relational databases — we’re discovering they may be our partners instead, since we’re seeing a reasonable demand for our services in front of AWS’ hosted databases,” says CEO Jitendra Vaidya.

PlanetScale co-founders (from left): Jiten Vaidya and Sugu Sougoumarane

Vitess, a predecessor to Kubernetes, is horizontally scalable sharding middleware built for MySQL. It lets businesses segment their database to boost memory efficiency without sacrificing reliable access speeds. PlanetScale sells Vitess in four ways: hosting on its database-as-a-service, licensing of the tech that can be run on-premises for clients or through another cloud provider, professional training for using Vitess, and on-demand support for users of the open-source version of Vitess.
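The core idea behind sharding middleware like Vitess is that a deterministic function maps each row's sharding key to one of many database instances. The sketch below illustrates that general technique only; the function name and shard layout are hypothetical and do not reflect Vitess's actual vindex API.

```python
import hashlib

def shard_for(sharding_key: str, num_shards: int) -> int:
    """Map a row's sharding key to a shard deterministically.

    Hashing the key spreads rows evenly across shards, and the same
    key always routes to the same shard, so lookups stay fast even
    as the data set is split across many MySQL instances.
    """
    digest = hashlib.md5(sharding_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# The middleware sits in front of the shards and routes each query.
shard = shard_for("user-42", num_shards=4)
assert 0 <= shard < 4
```

Re-sharding, the operation PlanetScale highlights for GDPR compliance, amounts to changing this mapping and migrating rows while queries keep flowing — the hard part Vitess automates.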

“We don’t have any concerns about the engineering side of things, but we need to figure out a go-to-market strategy for enterprises,” Vaidya explains. “As we’re both technical co-founders, about half of our funding is going towards hiring those functions [outside of engineering], and making that part of our organization work well and get results.”


By Josh Constine

Serverless and containers: Two great technologies that work better together

Cloud native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions exactly the resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, the approach may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on-prem or in any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right number of resources to run a particular workload for the right amount of time and no more.
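The billing difference is the crux of the definition above: a dedicated VM is paid for whether or not it is busy, while a serverless workload is billed only for the resources consumed while code runs. The back-of-the-envelope comparison below illustrates this; the rates are made-up placeholders, not any vendor's actual pricing.

```python
VM_HOURLY_RATE = 0.10          # hypothetical always-on VM, $/hour
SERVERLESS_RATE = 0.000017     # hypothetical $ per GB-second of execution

def vm_monthly_cost(hours: float = 730) -> float:
    """A dedicated VM bills for every hour, busy or idle."""
    return VM_HOURLY_RATE * hours

def serverless_monthly_cost(invocations: int, seconds_each: float,
                            memory_gb: float) -> float:
    """Serverless bills only for resources consumed while code runs."""
    return invocations * seconds_each * memory_gb * SERVERLESS_RATE

# A bursty workload: one million short invocations a month cost far
# less than an idle-most-of-the-time VM.
burst = serverless_monthly_cost(1_000_000, seconds_each=0.2, memory_gb=0.5)
always_on = vm_monthly_cost()
assert burst < always_on
```

The flip side, which the article's sources return to below, is that a workload that runs constantly can end up cheaper on dedicated capacity — hence the tuning that never fully goes away.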

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of the approach. But as one engineer pointed out, there is never a free lunch in processes this complex, and it won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition: “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as instructing it that you need a minimum container load time, a certain container kill time, or delivery to a specific location. He says in reality it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.
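The kind of per-container requirements Whittington describes could be expressed as a small declarative spec that the developer hands to the vendor. The sketch below is purely hypothetical — these field names belong to no real platform's API — but it shows the shape of the tuning knobs he has in mind.

```python
from dataclasses import dataclass

@dataclass
class ContainerRequirements:
    """Hypothetical per-container spec a developer might hand a vendor."""
    max_startup_seconds: float   # acceptable container load time
    grace_kill_seconds: float    # how long to wait before killing it
    region: str                  # where the container must run
    max_replicas: int            # cap to avoid over-provisioning

def within_budget(req: ContainerRequirements, replicas: int) -> bool:
    """Guard against paying for more capacity than the workload needs."""
    return replicas <= req.max_replicas

req = ContainerRequirements(max_startup_seconds=2.0,
                            grace_kill_seconds=30.0,
                            region="eu-west-1",
                            max_replicas=10)
assert within_budget(req, 8)
assert not within_budget(req, 12)
```

The point is that even "fully automated" infrastructure still needs these few numbers from a human, which is exactly the residual tuning Whittington predicts.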

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open source Knative project and, in essence, brings the benefits of serverless to developers running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud native technologies, whatever the approach may be. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.


By Ron Miller

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes are making the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo – to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
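The "common APIs, no implementation" idea behind SMI is the classic adapter pattern: tooling is written once against an abstract interface, and each mesh supplies its own implementation underneath. The Python sketch below illustrates that pattern only — SMI itself defines Kubernetes resource APIs, and these class and method names are hypothetical.

```python
from abc import ABC, abstractmethod

class ServiceMesh(ABC):
    """Common interface; each mesh vendor supplies its own adapter."""

    @abstractmethod
    def apply_traffic_split(self, service: str,
                            weights: dict[str, int]) -> str:
        ...

class IstioAdapter(ServiceMesh):
    def apply_traffic_split(self, service, weights):
        return f"istio: split {service} -> {weights}"

class LinkerdAdapter(ServiceMesh):
    def apply_traffic_split(self, service, weights):
        return f"linkerd: split {service} -> {weights}"

def canary_rollout(mesh: ServiceMesh, service: str) -> str:
    # The caller never needs to know which mesh is underneath,
    # which is the portability SMI is after.
    return mesh.apply_traffic_split(service, {"v1": 90, "v2": 10})

assert "v2" in canary_rollout(IstioAdapter(), "checkout")
```

As Monroy notes, nothing stops a developer from dropping down to a specific mesh's own API when a feature isn't covered by the common interface.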

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

 


By Frederic Lardinois

OpenFin raises $17 million for its OS for finance

OpenFin, the company looking to provide the operating system for the financial services industry, has raised $17 million in funding through a Series C round led by Wells Fargo, with participation from Barclays and existing investors including Bain Capital Ventures, J.P. Morgan and Pivot Investment Partners. Previous investors in OpenFin also include DRW Venture Capital, Euclid Opportunities and NYCA Partners.

Likening itself to “the OS of finance”, OpenFin seeks to be the operating layer on which applications used by financial services companies are built and launched, akin to iOS or Android for your smartphone.

OpenFin’s operating system provides three key solutions which, while present on your mobile phone, have previously been absent in the financial services industry: easier deployment of apps to end users, fast security assurances for applications, and interoperability.

Traders, analysts and other financial service employees often find themselves using several separate platforms simultaneously, as they try to source information and quickly execute multiple transactions. Yet historically, the desktop applications used by financial services firms — like trading platforms, data solutions, or risk analytics — haven’t communicated with one another, with functions performed in one application not recognized or reflected in external applications.

“On my phone, I can be in my calendar app and tap an address, which opens up Google Maps. From Google Maps, maybe I book an Uber. From Uber, I’ll share my real-time location on messages with my friends. That’s four different apps working together on my phone,” OpenFin CEO and co-founder Mazy Dar explained to TechCrunch. That cross-functionality has long been missing in financial services.

As a result, employees can find themselves losing precious time — which in the world of financial services can often mean losing money — as they juggle multiple screens and perform repetitive processes across different applications.

Additionally, major banks, institutional investors and other financial firms have traditionally deployed natively installed applications in lengthy processes that can often take months, going through long vendor packaging and security reviews that ultimately don’t prevent the software from actually accessing the local system.

OpenFin CEO and co-founder Mazy Dar. Image via OpenFin

As former analysts and traders at major financial institutions, Dar and his co-founder Chuck Doerr (now President & COO of OpenFin) recognized these major pain points and decided to build a common platform that would enable cross-functionality and instant deployment. And since apps on OpenFin are unable to access local file systems, banks can better ensure security and avoid prolonged yet ineffective security review processes.

And the value proposition offered by OpenFin seems to be quite compelling. OpenFin boasts an impressive roster of customers using its platform, including over 1,500 major financial firms, almost 40 leading vendors, and 15 of the world’s 20 largest banks.

Over 1,000 applications have been built on the OS, with OpenFin now deployed on more than 200,000 desktops — a noteworthy milestone given that the ever popular Bloomberg Terminal, which is ubiquitously used across financial institutions and investment firms, is deployed on roughly 300,000 desktops.

Since raising its Series B in February 2017, OpenFin’s deployments have more than doubled. The company’s headcount has also doubled and its European presence has tripled. Earlier this year, OpenFin also launched its OpenFin Cloud Services platform, which allows financial firms to launch their own private local app stores for employees and customers without writing a single line of code.

To date, OpenFin has raised a total of $40 million in venture funding and plans to use the capital from its latest round for additional hiring and to expand its footprint onto more desktops around the world. In the long run, OpenFin hopes to become the vital operating infrastructure upon which all developers of financial applications are innovating.

“Apple and Google’s mobile operating systems and app stores have enabled more than a million apps that have fundamentally changed how we live,” said Dar. “OpenFin OS and our new app store services enable the next generation of desktop apps that are transforming how we work in financial services.”


By Arman Tabatabai

VMware acquires Bitnami to deliver packaged applications anywhere

VMware announced today that it’s acquiring Bitnami, the packaged application company that was a member of the Y Combinator Winter 2013 class. The companies didn’t share the purchase price.

With Bitnami, the company can now deliver more than 130 popular software packages in a variety of formats, such as Docker containers or virtual machines, an approach that should be attractive to VMware as it transforms into more of a cloud services company.

“Upon close, Bitnami will enable our customers to easily deploy application packages on any cloud — public or hybrid — and in the most optimal format — virtual machine (VM), containers and Kubernetes helm charts. Further, Bitnami will be able to augment our existing efforts to deliver a curated marketplace to VMware customers that offers a rich set of applications and development environments in addition to infrastructure software,” the company wrote in a blog post announcing the deal.

Per usual, Bitnami’s founders see the exit through the prism of being able to build out the platform faster with the help of a much larger company. “Joining forces with VMware means that we will be able to both double-down on the breadth and depth of our current offering and bring Bitnami to even more clouds as well as accelerating our push into the enterprise,” the founders wrote in a blog post on the company website.

The company has raised a modest $1.1 million since its founding in 2011 and says that it has been profitable since the early days, when it took that funding. In the blog post, the company states that nothing will change for customers from their perspective.

“In a way, nothing is changing. We will continue to develop and maintain our application catalog across all the platforms we support and even expand to additional ones. Additionally, if you are a company using Bitnami in production, a lot of new opportunities just opened up.”

Time will tell whether that is the case, but it is likely that Bitnami will be able to expand its offerings as part of a larger organization like VMware.

VMware is a member of the Dell federation of products and came over as part of the massive $67 billion EMC deal in 2016. The company operates independently, trades as a separate company on the stock market and makes its own acquisitions.


By Ron Miller

Solo.io wants to bring order to service meshes with centralized management hub

As containers and microservices have proliferated, a new kind of tool called the service mesh has developed to help manage and understand interactions between services. While Kubernetes has emerged as the clear container orchestration tool of choice, there is much less certainty in the service mesh market. Solo.io announced a new open source tool called Service Mesh Hub today, designed to help companies manage multiple service meshes in a single interface.

It is early days for the service mesh concept, but there are already multiple offerings, including Istio, Linkerd (pronounced Linker-Dee) and Convoy. While the market sorts itself out, it requires a new set of tools, a management layer, so that developers and operations can monitor and understand what’s happening inside the various service meshes they are running.

Idit Levine, founder and CEO at Solo, says she formed the company because she saw an opportunity to develop a set of tooling for a nascent market. Since founding the company in 2017, it has developed several open source tools to fill that service mesh tool vacuum.

Levine says that she recognized that companies would be using multiple service meshes for multiple situations and that not every company would have the technical capabilities to manage this. That is where the idea for the Service Mesh Hub was born.

It’s a centralized place for companies to add the different service mesh tools they are using, understand the interactions happening within the mesh and add extensions to each one from a kind of extension app store. Solo wants to make adding these tools a simple matter of pointing and clicking. While it obviously still requires a certain level of knowledge about how these tools work, it removes some of the complexity around managing them.


Solo.io Service Mesh Hub. Screenshot: Solo.io

“The reason we created this is because we believe service mesh is something big, and we want people to use it, and we feel it’s hard to adopt right now. We believe by creating that kind of framework or platform, it will make it easier for people to actually use it,” Levine told TechCrunch.

The vision is that eventually companies will be able to add extensions to the store for free, or even at some point for a fee, and it is through these paid extensions that the company will be able to make money. She recognized that some companies will be creating extensions for internal use only, and in those cases, they can add them to the hub and mark them as private and only that company can see them.

For every abstraction, it seems, there is a new set of problems to solve. The service mesh is a response to the problem of managing multiple services. It solves three key issues, according to Levine: it lets a company route requests between microservices, gives visibility into them through the mesh’s logs and metrics, and provides security by managing which services can talk to one another.

Levine’s company is a response to the issues that have developed around understanding and managing the service meshes themselves. She says she doesn’t worry about a big company coming in and undermining her mission, because those players are too focused on their own tools to create an uber-management layer like this (but that doesn’t mean the company wouldn’t be an attractive acquisition target).

So far, the company has raised over $13 million in funding, according to Crunchbase data.


By Ron Miller

Egnyte brings native G Suite file support to its platform

Egnyte announced today that customers can now store G Suite files inside its storage, security and governance platform. This builds on the support the company previously had for Office 365 documents.

Egnyte CEO and co-founder Vineet Jain says that while many enterprise customers have seen the value of a collaborative office suite like G Suite, they might have stayed away because of compliance concerns (whether that was warranted or not).

He said that Google has been working on an API for some time that allows companies like Egnyte to decouple G Suite documents from Google Drive. Previously, if you wanted to use G Suite, you had no choice but to store the documents in Google Drive.

Jain acknowledges that the actual integration is pretty much the same as his competitors because Google determined the features. In fact, Box and Dropbox announced similar capabilities over the last year, but he believes his company has some differentiating features on its platform.

“I honestly would be hard pressed to tell you this is different than what Box or Dropbox is doing, but when you look at the overall context of what we’re doing…I think our advanced governance features are a game changer,” Jain told TechCrunch.

What that means is that G Suite customers can open a document and get the same editing experience as they would get were they inside Google Drive, while getting all the compliance capabilities built into Egnyte via Egnyte Protect. What’s more, they can store the files wherever they like, whether that’s in Egnyte itself, an on-premises file store or any cloud storage option that Egnyte supports, for that matter.


G Suite documents stored on the Egnyte platform.

Long before it was commonplace, Egnyte tried to differentiate itself from a crowded market by being a hybrid play where files can live on-premises or in the cloud. It’s a common way of looking at cloud strategy now, but it wasn’t always the case.

Jain has always emphasized a disciplined approach to growing the company, and it has grown to 15,000 customers and 600 employees over 11 years in business. He won’t share exact revenue, but says the company is generating “multi-millions in revenue” each month.

He has been talking about an IPO for some time, and that remains a goal for the company. In a recent letter to employees that Egnyte shared with TechCrunch, Jain put it this way. “Our leadership team, including our board members, have always looked forward to an IPO as an interim milestone — and that has not changed. However, we now believe this company has the ability to not only be a unicorn but to be a multi-billion dollar company in the long-term. This is a mindset that we all need to have moving forward,” he wrote.

Egnyte was founded in 2007 and has raised over $137 million, according to Crunchbase data.


By Ron Miller

New Relic takes a measured approach to platform overhaul

New Relic, the SaaS applications performance management platform, announced a major update to that platform today. Instead of ripping off the band-aid all at once, the company has decided to take a more measured approach to change, giving customers a chance to ease into it.

The new platform, called New Relic One, has been designed to replace the original platform, which was developed over the previous decade. The company says that by moving slowly to the new platform, customers will be able to take advantage of new features that it couldn’t have built on the old platform, without having to learn a new way of working.

Jim Gochee, chief product officer at New Relic, says that all of the existing tooling and functionality will eventually be ported over or reimagined on top of New Relic One. “What it is under the covers for us is a new technology stack and a new platform for our offering. We are still running our existing technology stack with our existing products. So we’re [essentially] running two platforms in two stacks in parallel, but all of the new stuff is going to be built on New Relic One over time,” he explained.

By redesigning the existing platform from scratch, New Relic created a new, modern, more extensible model that will allow it to plug in new functionality more easily over time, and eventually even allow customers to do the same thing. For now, it’s about changing what’s happening under the hood and providing a new user experience in a redesigned user interface.

“New Relic One is very pluggable and extensible, which makes it easier for our own teams to build on, and to extend and expand, and also down the road we will eventually get to the point where partners and customers will be able to extend our UI themselves, which is something that we’re very excited about,” he said.

Among the new features is support for AWS Lambda, Amazon’s serverless offering. The platform also enables users to search across multiple accounts. It’s not unusual for customers to be monitoring multiple accounts and sub-accounts. With New Relic One, customers can now search across these accounts and more easily find whether issues have cascaded.

In a blog post introducing the new platform, CEO Lew Cirne acknowledged the growing complexity of the monitoring landscape, something the new platform has been specifically designed to address.

“Unlike today’s fragmented tools that can deliver a bag of charts and metrics with a bunch of seemingly unrelated numbers, New Relic One is designed to cut through complexity, provide context, and let you see across artificial organizational boundaries so you can quickly find and fix problems,” Cirne wrote.

Nancy Gohring, a senior analyst at 451 Research, says this flexibility is a key strength of the new approach. “One of the most important updates here is the reworked data model which allows New Relic to offer customers more flexibility in how they can search the operations data they’re collecting and build dashboards. This kind of flexibility is more important in modern app environments that are more complex and dynamic than they used to be. Everyone’s environment is different and digging for the cause of a problem is more complicated than it used to be,” Gohring told TechCrunch. The new ability to search across accounts should help with that.

She concedes that having parallel platforms is not ideal, but sees why the company chose to go this route. “Having two UIs is never great. But the approach New Relic is taking lets them get something totally new out all at once, rather than spending time gradually introducing it. It will let customers try out the new stuff at their own pace,” she said.

New Relic One goes live tomorrow, and will be available at no additional cost to New Relic subscribers.


By Ron Miller

Sisense acquires Periscope Data to build integrated data science and analytics solution

Sisense announced today that it has acquired Periscope Data to create what it is calling a complete data science and analytics platform for customers. The companies did not disclose the purchase price.

The two companies’ CEOs met about 18 months ago at a conference and, running similar kinds of companies, hit it off. They began talking and, after a time, realized it might make sense to combine the two startups, because each one was attacking the data problem from a different angle.

Sisense, which has raised $174 million, tends to serve business intelligence requirements either for internal use or externally with customers. Periscope, which has raised over $34 million, looks at the data science end of the business.

Both company CEOs say that they could have eventually built these capabilities into their respective platforms, but after meeting they decided to bring the two companies together instead, and they made a deal.

Harry Glasser from Periscope Data and Amir Orad of Sisense.


“I realized over the last 18 months [as we spoke] that we’re actually building leadership positions into two unique areas of the market that will slowly become one as industries and technologies evolve,” Sisense CEO Amir Orad told TechCrunch.

Periscope CEO Harry Glasser says that as his company built its business around advanced analytics and predictive modeling, he saw a growing opportunity around operationalizing these insights across an organization, something he could do much more quickly in combination with Sisense.

“[We have been] pulled into this broader business intelligence conversation, and it has put us in a place where as we do this merger, we are able to instantly leapfrog the three years it would have taken us to deliver that to our customers, and deliver operationalized insights on integration day on day one,” Glasser explained.

The two executives say this is part of a larger trend about companies becoming more data-driven, a phrase that seems trite by now, but as a recent Harvard Business School study found, it’s still a big challenge for companies to achieve.

Orad says that you can debate the pace of change, but that overall, companies are going to operate better when they use data to drive decisions. “I think it’s an interesting intellectual debate, but the direction is one direction. People who deploy this technology will provide better care, better service, hire better, promote employees and grow them better, have better marketing, better sales and be more cost effective,” he said.

Orad and Glasser recognize that many acquisitions don’t succeed, but they believe they are bringing together two like-minded companies that will have a combined ARR of $100 million and 700 employees.

“That’s the icing on the cake, knowing that the cultures are so compatible, knowing that they work so well together, but it starts from a conviction that this advanced analytics can be operationalized throughout enterprises and [with] their customers. This is going to drive transformation inside our customers that’s really great for them and turns them into data-driven companies,” Glasser said.


By Ron Miller

Algorithmia raises $25M Series B for its AI automation platform

Algorithmia, a Seattle-based startup that offers a cloud-agnostic AI automation platform for enterprises, today announced a $25 million Series B funding round led by Norwest Partners. Madrona, Gradient Ventures, Work-Bench, Osage University Partners and Rakuten Ventures also participated in this round.

While the company started out five years ago as a marketplace for algorithms, it now mostly focuses on machine learning and helping enterprises take their models into production.

“It’s actually really hard to productionize machine learning models,” Algorithmia CEO Diego Oppenheimer told me. “It’s hard to help data scientists to not deal with data infrastructure but really being able to build out their machine learning and AI muscle.”

To help them, Algorithmia essentially built out a machine learning DevOps platform that allows data scientists to train their models on the platform and with the framework of their choice, bring it to Algorithmia — a platform that has already been blessed by their IT departments — and take it into production.

“Every Fortune 500 CIO has an AI initiative but they are bogged down by the difficulty of managing and deploying ML models,” said Rama Sekhar, a partner at Norwest Venture Partners, who has now joined the company’s board. “Algorithmia is the clear leader in building the tools to manage the complete machine learning lifecycle and helping customers unlock value from their R&D investments.”

With the new funding, the company will double down on this focus by investing in product development to solve these issues, but also by building out its team, with a plan to double its headcount over the next year. A year from now, Oppenheimer told me, he hopes that Algorithmia will be a household name for data scientists and, maybe more importantly, their platform of choice for putting their models into production.

“How does Algorithmia succeed? Algorithmia succeeds when our customers are able to deploy AI and ML applications,” Oppenheimer said. “And although there is a ton of excitement around doing this, the fact is that it’s really difficult for companies to do so.”

The company previously raised a $10.5 million Series A round led by Google’s AI fund. Its customers now include the United Nations, a number of U.S. intelligence agencies and Fortune 500 companies. In total, over 90,000 engineers and data scientists are now on the platform.


By Frederic Lardinois

AWS remains in firm control of the cloud infrastructure market

It has to be a bit depressing to be in the cloud infrastructure business if your name isn’t Amazon. Sure, there’s a huge, growing market, and the companies chasing Amazon are growing even faster. Yet it seems no matter how fast they grow, Amazon remains a dot on the horizon.

It seems inconceivable that AWS can continue to hold sway over such a large market for so long, but as we’ve pointed out before, it has been able to maintain its position through a true first-mover advantage. The other players didn’t even show up until several years after Amazon launched its first service in 2006, and they are paying the price for failing to see, as Amazon did, how computing was about to change.

They certainly see it now, whether it’s IBM, Microsoft or Google, or Tencent and Alibaba, both of which are growing fast in the China/Asia markets. All of these companies are trying to find the formula to help differentiate themselves from AWS and give them some additional market traction.

Cloud market growth

Interestingly, even though companies have begun to move with increasing urgency to the cloud, the pace of growth slowed a bit in the first quarter to a 42 percent rate, according to data from Synergy Research, but that doesn’t mean the end of this growth cycle is anywhere close.


By Ron Miller

Steve Singh stepping down as Docker CEO

In a surprising turn of events, TechCrunch has learned that Docker CEO Steve Singh will be stepping down after two years at the helm, and former Hortonworks CEO Rob Bearden will be taking over. An email announcement went out this morning to Docker employees.

People close to the company confirmed that Singh will be leaving the CEO position, staying on the job for several months to help Bearden with the transition. He will then remain with the organization in his role as Chairman of the Board. They indicated that Bearden has been working closely with Singh over the last several months as a candidate to join the board and as a consultant to the executive team.

Singh clicked with him and viewed him as a possible successor, especially given his background with leadership positions at several open source companies, including taking Hortonworks public before selling to Cloudera last year. Singh apparently saw someone who could take the company to the next level as he moved on. As one person put it, he was tired of working 75 hours a week, but he wanted to leave the company in the hands of a capable steward.

Last week in an interview at DockerCon, the company’s annual customer conference in San Francisco, Singh appeared tired, but also like a leader confident in his position, one who saw a bright future for his company. He spoke openly about his leadership philosophy and his efforts to lift the company from the doldrums it was in when he took over two years prior, helping transform it from a mostly free open source offering into a revenue-generating company with 750 paying enterprise customers.

In fact, he told me that under his leadership the company was on track to become free cash flow positive by the end of this fiscal year, a step he said would mean that Docker would no longer need to seek outside capital. He even talked of the company eventually going public.

Apparently, he felt it was time to pass the torch before the company took those steps, saw a suitable successor in Bearden and offered him the position. While it might have made more sense to announce this at DockerCon with the spotlight focused on the company, it was not a done deal yet by the time the conference was underway in San Francisco, people close to the company explained.

Docker took a $92 million investment last year, which some saw as a sign of continuing struggles for the company, but Singh said he took the money to continue to invest in building revenue-generating enterprise products, some of which were announced at DockerCon last week. He indicated that the company would likely not require any additional investment moving forward.

As for Bearden, he is an experienced executive with a history of successful exits. In addition to his experience at Hortonworks, he was COO at SpringSource, a developer tool suite that was sold to VMware for $420 million in 2009 (and is now part of Pivotal). He was also COO at JBoss, an open source middleware company acquired by Red Hat in 2006.

Whether he will do the same with Docker remains to be seen, but as the new CEO, it will be up to him to guide the company moving forward to the next steps in its evolution, whether that eventually results in a sale or the IPO that Singh alluded to.

Email to staff from Steve Singh:


By Ron Miller

Sumo Logic announces $110M Series G investment on valuation over $1B

Sumo Logic, a cloud data analytics and log analysis company, announced a $110 million Series G investment today. The company indicated that its valuation was “north of a billion dollars,” but wouldn’t give an exact figure.

Today’s round was led by Battery Ventures with participation from new investors Tiger Global Management and Franklin Templeton. Other unnamed existing investors also participated according to the company. Today’s investment brings the total raised to $340 million.

When we spoke to Sumo Logic CEO Ramin Sayer at the time of its $75 million Series F in 2017, he indicated the company was on its way to becoming a public company. While that hasn’t happened yet, he says it is still the goal for the company, and investors wanted in on that before it happened.

“We don’t need the capital. We had plenty of capital already, but when you bring on crossover investors and others at this stage of a company, they have minimum check sizes and they have a lot of appetite to help you as you get ready to address a lot of the challenges and opportunities as you become a public company,” he said.

He says the company will be investing the money in continuing to develop the platform, whether that’s through acquisitions, which of course the money would help with, or through the company’s own engineering efforts.

The IPO idea remains a goal, but Sayer was not willing or able to commit to when that might happen. The company clearly has plenty of runway now to last for quite some time.

“We could go out now if we wanted to, but we made a decision that that’s not what we’re going to do, and we’re going to continue to double down and invest, and therefore bring some more capital in to give us more optionality for strategic tuck-ins and product IP expansion, international expansion — and then look to the public markets [after] we do that,” he said.

Dharmesh Thakker, general partner at investor Battery Ventures, says his firm likes Sumo Logic’s approach and sees a big opportunity ahead with this investment. “We have been tracking the Sumo Logic team for some time, and admire the company’s early understanding of the massive cloud-native opportunity and the rise of new, modern application architectures,” he said in a statement.

The company crossed the $100 million revenue mark last year and has 2,000 customers, including Airbnb, Anheuser-Busch and Samsung. It competes with companies like Splunk, Scalyr and Loggly.


By Ron Miller

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition last year that is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then as you begin to shift to the cloud, you can move to Azure Red Hat OpenShift — such a catchy name — without any fuss, as you have the same management tools you have been used to using.

As Red Hat becomes part of IBM, it sees that it’s more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers as it holds its final customer conference as an independent company. Red Hat’s executive vice president and president of products and technologies certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations too including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.


By Ron Miller