Confluent introduces scale on demand for Apache Kafka cloud customers

We find ourselves in a time when certain businesses are being asked to scale to levels they never imagined. Sometimes that increased usage comes in bursts, which means you don’t want to pay for permanent extra capacity you might not always need. Today, Confluent introduced a new scale on demand feature for its Apache Kafka cloud service that will scale up and down as needed automatically.

Confluent CEO Jay Kreps says that elasticity is arguably one of the most important features of cloud computing, and this ability to scale up and down is one of the primary factors that has attracted organizations to the cloud. By automating that capability, the company is giving DevOps teams one less major thing to worry about.

“This new functionality allows users to dynamically scale Kafka and the other key ecosystem components like KSQL and Kafka Connect. This is a key missing capability that no other service provides,” Kreps explained.

He points out that this is particularly relevant right now with people working at home. Systems are being taxed more than perhaps ever before, and this automated elasticity is going to come in handy, making scaling more cost-effective and efficient than was previously possible.

“These capabilities let customers add capacity as they need it, or scale down to save money, all without having to plan in advance,” he said.
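
For context on what that automation replaces, scaling open source Kafka by hand typically means working against its admin APIs. The sketch below is a minimal illustration using the standard Kafka Java AdminClient; the broker address, topic name and partition count are hypothetical, and this shows the kind of manual step an elastic service would handle for you rather than Confluent’s actual mechanism.

```java
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class ManualScaleUp {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker address; a managed cloud service hands you this
        // endpoint and handles capacity changes behind it.
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1.example.com:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Manually raise the "orders" topic to 12 partitions so more
            // consumers can read in parallel -- the sort of step an elastic,
            // auto-scaling service performs for you as load grows.
            admin.createPartitions(Map.of("orders", NewPartitions.increaseTo(12)))
                 .all()
                 .get();
        }
    }
}
```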

The new elasticity feature is part of Project Metamorphosis, a series of updates to the platform that Confluent plans to roll out at a regular cadence throughout the year.

“Through the rest of the year we’ll be doing a sequence of releases that bring the capabilities of modern cloud data systems to the Kafka ecosystem in Confluent Cloud. We’ll be announcing one major capability each month, starting with elasticity,” he said.

Kreps first announced Metamorphosis last month when the company also announced a massive $250 million funding round on a $4.5 billion valuation. In spite of the current economic situation, driven by the ongoing pandemic, Confluent plans to continue to build out the product, as today’s announcement attests.


By Ron Miller

Confluent lands another big round with $250M Series E on $4.5B valuation

The pandemic may feel all-encompassing at the moment, but Confluent announced a $250 million Series E today, showing that major investment continues in spite of the dire economic situation.

Today’s round follows last year’s $125 million Series D. At that point the company was valued at a mere $2.5 billion. Investors obviously see a lot of potential here.

Coatue Management led the round with help from Altimeter Capital and Franklin Templeton. Existing investors Index Ventures and Sequoia Capital also participated. Today’s investment brings the total raised to $456 million.

The company is based on Apache Kafka, the open source streaming data project that emerged from LinkedIn in 2011. Confluent launched in 2014 and has gained steam, funding and gaudy valuations along the way.

CEO and co-founder Jay Kreps reports that growth continued last year when sales grew 100% over the previous year. A big part of that is the cloud product the company launched in 2017. It added a free tier last September, which feels pretty prescient right about now.

But the company isn’t making money by giving stuff away so much as attracting users, who can become paying customers as they make their way through the sales funnel. The beauty of the cloud product is that you can buy by the sip.

The company has big plans for the product this year. Although Kreps was loath to go into detail, he says that there will be a series of changes coming up this year that will add significantly to the product’s capabilities.

“As part of this we’re going to have a major new set of capabilities for our cloud service, and for open source Kafka, and for our product that we’re going to announce every month for the rest of the year,” Kreps told TechCrunch. These will start rolling out the first week in May.

While he wouldn’t get specific, he says that it relates to the changing nature of cloud infrastructure deployment. “This whole infrastructure area is really evolving as it moves to the cloud. And so it has to become much, much more elastic and scalable as it really changes how it works. And we’re going to have announcements around what we think are the core capabilities of event streaming in the cloud,” he said.

While a round this big, with a valuation this high and an institutional investor like Franklin Templeton involved, typically means an IPO could be the next step, Kreps was not ready to talk about that, except to say the company does plan to begin operating with the cadence of a public company, producing quarterly results, just not for public consumption yet.

The company was founded in 2014. It has 1,000 employees and plans to continue to hire and expand the product. Kreps sees plenty of opportunity here in spite of the current economic climate.

“I don’t think you want to just turtle up and hang on to your existing customers and not expand if you’re in a market that’s really growing. What really got this round of investors excited is the fact that we’re onto something that has a huge market, and we want to continue to advance, even in these really weird uncertain times,” he said.


By Ron Miller

Open-source leader Confluent raises $125M on $2.5B valuation

Confluent, the commercial company built on top of the open source Apache Kafka project, announced a $125 million Series D round this morning on an enormous $2.5 billion valuation.

The round was led by existing investor Sequoia Capital with participation from Index Ventures and Benchmark, both of which also participated in previous rounds. Today’s investment brings the total raised to $206 million, according to the company.

The valuation soared from the previous round when the company was valued at $500 million. What’s more, the company’s bookings have scaled along with the valuation.

Graph: Confluent

While CEO Jay Kreps wouldn’t comment directly on a future IPO, he hinted that it is something the company is looking to do at some point. “With our growth and momentum so far, and with the latest funding, we are in a very good position to and have a desire to build a strong, independent company…” Kreps told TechCrunch.

Confluent and Kafka have developed a streaming data technology that processes massive amounts of information in real time, something that comes in handy in today’s data-intensive environment. The underlying streaming technology was developed at LinkedIn as a means of moving massive volumes of messages. LinkedIn open-sourced it in 2011, and Confluent launched as the commercial arm in 2014.

Kreps, writing in a company blog post announcing the funding, said that the events concept encompasses the basic building blocks of businesses. “These events are the orders, sales and customer experiences, that constitute the operation of the business. Databases have long helped to store the current state of the world, but we think this is only half of the story. What is missing are the continually flowing stream of events that represents everything happening in a company, and that can act as the lifeblood of its operation,” he wrote.
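
To make the events idea concrete, here is a minimal sketch of recording one such business event with the standard Kafka Java producer; the broker address, topic name and payload are hypothetical and not taken from Confluent’s product.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1.example.com:9092"); // hypothetical
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each record is an immutable event appended to the "orders" stream:
            // what happened and to whom, not just the latest state of a row.
            producer.send(new ProducerRecord<>("orders", "order-1001",
                    "{\"customer\":\"acme\",\"item\":\"widget\",\"quantity\":3}"));
            producer.flush();
        }
    }
}
```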

Kreps pointed out that as an open source project, Confluent depends on the community. “This is not something we’re doing alone. Apache Kafka has a massive community of contributors of which we’re just one part,” he wrote.

While the base open source component remains available for free download, it doesn’t include the additional tooling the company has built to make it easier for enterprises to use Kafka. Recent additions include a managed cloud version of the product and a marketplace, Confluent Hub, for sharing extensions to the platform.

As we watch the company’s valuation soar, it does so against a backdrop of other companies based on open source selling for big bucks in 2018, including IBM buying Red Hat for $34 billion in October and Salesforce acquiring MuleSoft in June for $6.5 billion.

The company’s previous round was $50 million in March 2017.


By Ron Miller

AWS launches a managed Kafka service

Kafka is an open source tool for handling incoming streams of data. Like virtually all powerful tools, it’s somewhat hard to set up and manage. Today, Amazon’s AWS is making this all a bit easier for its users with the launch of Amazon Managed Streaming for Kafka. That’s a mouthful, but it’s essentially Kafka as a fully managed, highly available service on AWS, now available in public preview.

As AWS CTO Werner Vogels noted in his keynote, Kafka users traditionally had to do a lot of heavy lifting to set up a cluster on AWS and to ensure that it could scale and handle failures. “It’s a nightmare having to restart all the cluster and the main nodes,” he said. “This is what I would call the traditional heavy lifting that AWS is really good at solving for you.”

It’s interesting to see AWS launch this service, given that it already offers Kinesis, a very similar service that also focuses on ingesting streaming data. There are plenty of applications on the market today that already use Kafka, and AWS is clearly interested in giving those users a pathway to either move to a managed Kafka service or to AWS in general.
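
Part of the practical appeal is that a managed cluster still speaks the ordinary Kafka protocol, so existing client code mostly just needs to be pointed at the brokers the service exposes. Below is a minimal consumer sketch with a hypothetical bootstrap endpoint and topic; a real managed cluster would also need the service’s authentication and TLS settings.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManagedClusterConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical endpoint: the managed service provides the broker
        // addresses; the rest of the configuration is plain Kafka.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "b-1.example-cluster.example.com:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-reader");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            // Read one batch of events from the managed cluster and print them.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }
}
```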

As with all things AWS, the pricing is a bit complicated, but a basic Kafka broker instance will start at $0.21 per hour. You’re not likely to use just one instance, so for a somewhat useful setup with three brokers, a good amount of storage and some other fees, you’ll quickly pay well over $500 per month.
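
As a rough sanity check on that figure (assuming the brokers run around the clock): three brokers at $0.21 per hour over a roughly 730-hour month works out to about 3 × $0.21 × 730 ≈ $460 before storage and data transfer charges, which is how the bill climbs past $500.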



By Frederic Lardinois