Confluent adds free tier to Kafka real-time streaming data cloud service

When Confluent launched a cloud service in 2017, it was trying to reduce some of the complexity related to running a Kafka streaming data application. Today, it introduced a free tier to that cloud service. The company hopes to expand its market beyond large technology company customers, and the free tier should make it easier for smaller companies to get started.

The new tier provides up to $50 of service a month for up to three months. Company CEO Jay Kreps says that while $50 might not sound like much, it’s actually hundreds of gigabytes of throughput and makes it easy to get started with the tool.

“We felt like we can make this technology really accessible. We can make it as easy as we can. We want to make it something where you can just get going in seconds, and not have to pay anything to start building an application that uses real time streams of data,” Kreps said.

Kafka has been available as an open source product since 2011, so it’s been free to download, install and build applications with, but doing so still required a ton of compute and engineering resources to pull off. The cloud service was designed to simplify that, and the free tier lets developers get comfortable building a small application without making a large financial investment.
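
To make "getting started" concrete, here is a minimal sketch of a small application producing a real-time stream of events to a Kafka topic with the confluent-kafka Python client. The bootstrap address, API key/secret and topic name are placeholders for illustration, not real Confluent Cloud values.

```python
# A minimal sketch of producing events to a Kafka topic with the
# confluent-kafka Python client. Bootstrap server, credentials and
# topic name below are placeholders, not real Confluent Cloud values.
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",       # placeholder credentials
    "sasl.password": "<API_SECRET>",
}

producer = Producer(conf)

def on_delivery(err, msg):
    # Report per-message delivery success or failure.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}] @ {msg.offset()}")

for i in range(10):
    producer.produce("clicks", key=str(i), value=f"event-{i}", callback=on_delivery)

producer.flush()  # block until all queued messages are delivered
```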

Once they get comfortable working with Kafka on the free version, users can then buy in whatever increments make sense for them, and only pay for what they use. It can be pennies worth of Kafka or hundreds of dollars depending on a customer’s individual requirements. “After free, you can buy 11 cents worth of Kafka or you can buy it $10 worth, all the way up to these massive users like Lyft that use Kafka Cloud at huge scale as part of their ride sharing service,” he said.

While a free SaaS trial might feel like a common kind of marketing approach, Kreps says for a service like Kafka, it’s actually much more difficult to pull off. “With something like a distributed system where you get a whole chunk of infrastructure, it’s actually technically an extraordinarily difficult thing to provide zero to elastic scale up capabilities. And a huge amount of engineering goes into making that possible,” Kreps explained.

Kafka processes massive streams of data in real time. It was originally developed inside LinkedIn and open sourced in 2011. Confluent launched as a commercial entity on top of the open source project in 2014. In January the company raised $125 million on a $2.5 billion valuation. It has raised over $205 million, according to Crunchbase data.


By Ron Miller

Netdata, a monitoring startup with a 50-year-old founder, announces $17M Series A

Nearly everything about Netdata, maker of an open source monitoring tool, defies standard thinking about startups. Consider that the founder is a polished, experienced 50-year-old executive who started his company several years ago when he became frustrated by what he was seeing in the monitoring tools space. Like any good founder, he decided to build his own, and today the company announced a $17 million Series A led by Bain Capital.

Marathon Ventures also participated in the round. The company received a $3.7 million seed round earlier this year, which was led by Marathon.

Costa Tsaousis, the company’s founder and CEO, was working as an executive for a company in Greece in 2014 when he decided he had had enough of the monitoring tools he was seeing. “At that time, I decided to do something about it myself — actually, I was pissed off by the industry. So I started writing a tool at night and on weekends to simplify monitoring significantly, and also provide a lot more insights,” Tsaousis told TechCrunch.

Mind you, he was a 45-year-old executive who hadn’t done much coding in years, but he was as determined as any startup founder tends to be, and he took two years to create his monitoring tool. As he tells it, he released it as open source in 2016 and it just took off. “In 2016, I released this project to the public, and it went viral. I wrote a single Reddit post and immediately started building a huge community. It grew to about 10,000 GitHub stars in a matter of a week,” he said. Even today, he says, it gets a half million downloads every single day, and hundreds of people contribute to the open source version of the product, relieving him of the burden of supporting it himself.

Panos Papadopoulos, who led the investment at Marathon, says Tsaousis is not your typical early stage startup founder. “He is not following many norms. He is 50 years old, and he was a C-level executive. His presentation and the depth of his thinking, and even his core materials, are unlike anything else I have seen in an early stage startup,” he said.

What he created was an open source monitoring tool, one that he says simplifies monitoring significantly and provides a lot more insight, offering hundreds of metrics as soon as you install it. He says it is also much faster, providing those insights every second, and it’s distributed, meaning Netdata doesn’t collect the data centrally; it provides insights on the data wherever it lives.
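
For a sense of how those per-second insights are exposed, here is a rough sketch of querying a locally installed Netdata agent from Python. The host, chart name and exact response shape are assumptions for illustration, based on the agent’s default dashboard port (19999).

```python
# A small sketch of pulling per-second metrics from a Netdata agent's
# local REST API. Host, chart name and response shape are assumptions.
import requests

resp = requests.get(
    "http://localhost:19999/api/v1/data",
    params={"chart": "system.cpu", "after": -10, "format": "json"},
    timeout=5,
)
resp.raise_for_status()
payload = resp.json()

# Depending on agent version the rows may be nested under "result";
# either way the payload carries column labels plus one row per second.
result = payload.get("result", payload)
print(result["labels"])
for row in result["data"]:
    print(row)
```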


Live dashboard on the Netdata website.

Today, the company has 24 employees and Tsaousis has set up shop in San Francisco. In addition to the open source version of the product, there is a SaaS version, which has what he calls a “massively free plan.” He says the open source monitoring agent is “a gift to the world.” The SaaS tool is about democratizing monitoring, and finally, the paid version differs from most monitoring tools by charging by the seat instead of by the amount of infrastructure you are monitoring.

Tsaousis wants nothing less than to lead the monitoring space eventually, and believes that the free tiers will lead the way. “I think Netdata can change the way people perceive and understand monitoring, but in order to do this, I think that offering free services in a massive way is essential. Otherwise, it will not work. So my aim is to lead monitoring. This may sound arrogant, and Netdata is not there yet, but I think it can be,” he said.


By Ron Miller

How founder and CTO Dries Buytaert sold Acquia for $1B

Acquia announced yesterday that Vista Equity Partners was going to buy a majority stake in the company worth $1 billion. That would seem to be reason enough to sell the company. That’s a good amount of dough, but as co-founder and CTO Dries Buytaert told Extra Crunch, he’s also happy to be taking care of his early investors and his long-time, loyal employees who stuck by him all these years.

Vista is actually buying out early investors as part of the deal, while providing some liquidity for employee equity holders. “I feel proud that we are able to reward our employees, especially those that have been so loyal to the company and worked so hard for so many years. It makes me feel good that we can do that for our employees,” he said.

Image via TechCrunch


By Ron Miller

Chef CEO does an about face, says company will not renew ICE contract

After stating clearly on Friday that he would honor a $95,000 contract with ICE, Chef CEO Barry Crist must have had a change of heart over the weekend. In a blog post this morning, he wrote that the company would not be renewing the contract with ICE after all.

“After deep introspection and dialog within Chef, we will not renew our current contracts with ICE and CBP when they expire over the next year. Chef will fulfill our full obligations under the current contracts,” Crist wrote in the blog post.

He also backed off the seemingly firm position he took on Friday, when he told TechCrunch: “It’s something that we spent a lot of time on, and I want to represent that there are portions of [our company] that do not agree with this, but I as a leader of the company, along with the executive team, made a decision that we would honor the contracts and those relationships that were formed and work with them over time.”

Today, he acknowledged that intense feelings inside the company against the contract led to his decision. The contract began in 2015 under the Obama administration and was aimed at modernizing programming approaches at DHS, but over time, as ICE family separation and deportation policies came under fire, there were calls internally (and later externally) to end the contract. “Policies such as family separation and detention did not yet exist [when we started this contract]. While I and others privately opposed this and various other related policies, we did not take a position despite the recommendation of many of our employees. I apologize for this,” he wrote.

Crist also indicated that the company would be donating the revenue from the contracts to organizations that work with people who have been affected by these policies. It’s similar to the approach Salesforce took when 618 of its employees protested a contract the company has with Customs and Border Protection (CBP). In response to the protests, Salesforce pledged $1 million to organizations helping affected families.

After a tweet last week exposed the contract, protests began on social media and culminated in programmer Seth Vargo removing pieces of his open source code from the repository in protest. At the time, the company sounded firmly committed to fulfilling the contract in spite of the widespread backlash it was facing both inside and outside the company.

Vargo told TechCrunch in an interview that he saw this issue in moral terms, “Contrary to Chef’s CEO’s publicly posted response, I do think it is the responsibility of businesses to evaluate how and for what purposes their software is being used, and to follow their moral compass,” he said. Apparently Crist has come around to this point of view. Vargo chose not to comment on the latest development.


By Ron Miller

Programmer who took down open source pieces over Chef ICE contract responds

On Friday afternoon Chef CEO Barry Crist and CTO Corey Scobie sat down with TechCrunch to defend their contract with ICE after a firestorm on social media called for them to cut ties with the controversial agency. On Sunday, programmer Seth Vargo, the man who removed his open source components, which contributed to a partial shutdown of Chef’s commercial business for a time last week, responded.

While the Chef executives stated that the company was in fact the owner, Vargo made it clear he owned those pieces and he had every right to remove them from the repository. “Chef (the company) was including a third party software package that I owned. It was on my personal repository on GitHub and personal namespace on RubyGems,” he said. He believes that gave him the right to remove it.

Chef CTO Corey Scobie did not agree. “Part of the challenge was that [Vargo] actually didn’t have authorization to remove those assets. And the assets were not his to begin with. They were actually created under a time when that particular individual [Vargo] was an employee of Chef. And so therefore, the assets were Chef’s assets, and not his assets to remove,” he said.

Vargo says that simply isn’t true and Chef misunderstands the licensing terms. “No OSI license or employment agreement requires me to continue to maintain code of my personal account(s). They are conflating code ownership (which they can argue they have) over code stewardship,” Vargo told TechCrunch.

As further proof, Vargo added that he has even included detailed instructions in his will on how to deal with the code he owns when he dies. “I want to make it absolutely clear that I didn’t “hack” into Chef or perform any kind of privilege escalation. The code lived in my personal accounts. Had I died on Thursday, the exact same thing would have happened. My will requests all my social media and code accounts be deleted. If I had deleted my GitHub account, the same thing would have happened,” he explained.

Vargo said that Chef was actually in violation of the open source license when it restored those open source pieces without putting his name on them. “Chef actually violated the Apache license by removing my name, which they later restored in response to public pressure,” he said.

Scobie admitted that the company did forget to include Vargo’s name on the code, but added it back as soon as they heard about the problem. “In our haste to restore one of the objects, we inadvertently removed a piece of metadata that identified him as the author. We didn’t do that knowingly. It was absolutely a mistake in the process of trying to restore service to our customers and our global customer base. And as soon as we were notified of it, we reverted that change on this specific object in question,” he said.

Vargo says, as for why he took the open source components down, he was taking a moral stand against the contract, which dates back to the Obama administration. He also explained that he attempted to contact Chef via multiple channels before taking action. “First, I didn’t know about the history of the contract. I found out via a tweet from @shanley and subsequently verified via the USA spending website. I sent a letter and asked Chef publicly via Twitter to respond multiple times, and I was met with silence. I wanted to know how and why code in my personal repositories was being used with ICE. After no reply for 72 hours, I decided to take action,” he said.

Since then, Chef’s CEO Barry Crist has made it clear he was honoring the contract, which Vargo felt further justified his actions. “Contrary to Chef’s CEO’s publicly posted response, I do think it is the responsibility of businesses to evaluate how and for what purposes their software is being used, and to follow their moral compass,” he said.

Vargo has a long career helping build development tools and contributing to open source. He currently works for Google Cloud. Previous positions include HashiCorp and Chef.


By Ron Miller

Chef CEO says he’ll continue to work with ICE in spite of protests

Yesterday, software development tool maker Chef found itself in the middle of a firestorm after a tweet called it out for doing business with DHS/ICE. Eventually it led to an influential open source developer removing a couple of key pieces of software from the project, bringing down some parts of Chef’s commercial business.

Chef intends to fulfill its contract with ICE, in spite of calls to cancel it. In a blog post published this morning, Chef CEO Barry Crist defended the decision. “I do not believe that it is appropriate, practical, or within our mission to examine specific government projects with the purpose of selecting which U.S. agencies we should or should not do business.”

He stood by the company’s decision this afternoon in an interview with TechCrunch, while acknowledging that it was a difficult and emotional decision for everyone involved. “For some portion of the community, and some portion of our company, this is a super, super-charged lightning rod, and this has been very difficult. It’s something that we spent a lot of time on, and I want to represent that there are portions of [our company] that do not agree with this, but I as a leader of the company, along with the executive team, made a decision that we would honor the contracts and those relationships that were formed and work with them over time,” he said.

He added, “I think our challenge as leadership right now is how do we collectively navigate through times like this, and through emotionally-charged issues like the ICE contract.”

The deal with ICE, which is a $95,000-a-year contract for software development tools, dates back to the Obama administration, when the then-DHS CIO wanted to move the department toward more modern agile/DevOps development workflows, according to Crist.

He said for people who might think it’s a purely economic decision, the money represents a fraction of the company’s more than $50 million annual revenue (according to Crunchbase data), but he says it’s about a long-term business arrangement with the government that transcends individual administration policies. “It’s not about the $100,000, it’s about decisions we’ve made to engage the government. And I appreciate that not everyone in our world feels the same way or would make that same decision, but that’s the decision that we made as a leadership team,” Crist said.

Shortly after word of Chef’s ICE contract appeared on Twitter, according to a report in The Register, former Chef employee Seth Vargo removed a couple of key pieces of open source software from the repository, telling The Register that “software engineers have to operate by some kind of moral compass.” This move brought down part of Chef’s commercial software and it took them 24 hours to get those services fully restored, according to Chef CTO Corey Scobie.

Crist says he wants to be clear that his decision does not mean he supports current ICE policies. “I certainly don’t want to be viewed as I’m taking a strong stand in support of ICE. What we’re taking a strong stand on is our consistency with working with our customers, and again, our work with DHS started in the previous administration on things that we feel very good about,” he said.


By Ron Miller

FOSSA scores $8.5 million Series A to help enterprise manage open source licenses

As more enterprise developers make use of open source, it becomes increasingly important for companies to make sure that they are complying with licensing requirements. They also need to ensure the open source bits are being updated over time for security purposes. That’s where FOSSA comes in, and today the company announced an $8.5 million Series A.

The round was led by Bain Capital Ventures with help from Costanoa Ventures and Norwest Venture Partners. Today’s round brings the total raised to $11 million, according to the company.

Company founder and CEO Kevin Wang says that over the last 18 months, the startup has concentrated on building tools to help enterprises comply with their growing use of open source in a safe and legal way. He says that overall, this increasing use of open source is great news for developers and for these bigger companies in general. While it enables them to take advantage of all the innovation going on in the open source community, they need to make sure they are in compliance.

“The enterprise is really early on this journey, and that’s where we come in. We provide a platform to help the enterprise manage open source usage at scale,” Wang explained. That involves three main pieces. First it tracks all of the open source and third-party code being used inside a company. Next, it enforces licensing and security policy, and finally, it has a reporting component. “We automate the mass reporting and compliance for all of the housekeeping that comes from using open source at scale,” he said.
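
As a toy illustration of the policy-enforcement piece, the core idea is to map each discovered dependency’s license against an allow/deny policy and flag anything unrecognized for review. This is a hypothetical sketch of the concept, not FOSSA’s actual API or policy format; every name below is made up.

```python
# A toy, hypothetical sketch of license policy enforcement: scan a flat
# list of dependencies and flag licenses a company policy disallows.
# This is illustrative only and is not FOSSA's API or data model.
DENYLIST = {"GPL-3.0", "AGPL-3.0"}                    # hypothetical policy
ALLOWLIST = {"MIT", "Apache-2.0", "BSD-3-Clause"}

dependencies = [
    {"name": "left-pad", "license": "MIT"},
    {"name": "somecopyleftlib", "license": "AGPL-3.0"},
    {"name": "unknownlib", "license": "Custom"},
]

def evaluate(dep):
    lic = dep["license"]
    if lic in DENYLIST:
        return "violation"
    if lic in ALLOWLIST:
        return "ok"
    return "needs review"   # anything unrecognized goes to compliance/legal

for dep in dependencies:
    print(f"{dep['name']:20} {dep['license']:12} -> {evaluate(dep)}")
```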

The enterprise focus is relatively new for the company. It originally launched in 2017 as a tool for developers to track individual use of open source inside their programs. Wang saw a huge opportunity inside the enterprise to apply this same kind of capability inside larger organizations, who were hungry for tools to help them comply with the myriad open source licenses out there.

“We found that there was no tooling out there that can manage the scale and breadth across all the different enterprise use cases and all the really complex mission-critical code bases,” he said. What’s more, he found that where there were existing tools, they were vastly underutilized or didn’t provide broad enough coverage.

The company announced a $2.2 million seed round in 2017, and since then has grown from 10 to 40 employees. With today’s funding, that should increase as the company is expanding quickly. Wang reports that the startup has been tripling its revenue numbers and customer accounts year over year. The new money should help accelerate that growth and expand the product and markets it can sell into.


By Ron Miller

Kubernetes co-founder Craig McLuckie is as tired of talking about Kubernetes as you are

“I’m so tired of talking about Kubernetes. I want to talk about something else,” joked Kubernetes co-founder and VP of R&D at VMware Craig McLuckie during a keynote interview at this week’s Cloud Foundry Summit in The Hague. “I feel like that 80s band that had like one hit song — Cherry Pie.”

He doesn’t quite mean it that way, of course (though it makes for a good headline, see above), but the underlying theme of the conversation he had with Cloud Foundry executive director Abby Kearns was that infrastructure should be boring and fade into the background, while enabling developers to do their best work. “We still have a lot of work to do as an industry to make the infrastructure technology fade into the background and bring forwards the technologies that developers interface with, that enable them to develop the code that drives the business, etc. […] Let’s make that infrastructure technology really, really boring.”


What McLuckie wants to talk about is developer experience, and with VMware’s intent to acquire Pivotal, the company is placing a strong bet on Cloud Foundry as one of the premier development platforms for cloud-native applications. For the longest time, the Cloud Foundry and Kubernetes ecosystems, which both share an organizational parent in the Linux Foundation, have been getting closer, but that move has accelerated in recent months as the Cloud Foundry ecosystem has finished work on some of its Kubernetes integrations.

McLuckie argues that the Cloud Native Computing Foundation, the home of Kubernetes and other cloud-native open-source projects, was always meant to be a kind of open-ended organization that focuses on driving innovation. And that created a large set of technologies that vendors can choose from. “But when you start to assemble that, I tend to think about you building up this cake which is your development stack, you discover that some of those layers of the cake, like Kubernetes, have a really good bake. They are done to perfection,” said McLuckie, who is clearly a fan of the Great British Baking show. “And other layers, you look at it and you think, wow, that could use a little more bake, it’s not quite ready yet. […] And we haven’t done a great job of pulling it all together and providing a recipe that delivers an entirely consumable experience for everyday developers.”


He argues that Cloud Foundry, on the other hand, has always focused on building that highly opinionated, consistent developer experience. “Bringing those two communities together, I think, is going to have incredibly powerful results for both communities as we start to bring these technologies together,” he said.

With the Pivotal acquisition still in the works, McLuckie didn’t really comment on what exactly this means for the path forward for Cloud Foundry and Kubernetes (which he still talked about with a lot of energy, despite being tired of it), but it’s clear that he’s looking to Cloud Foundry to enable that developer experience on top of Kubernetes that abstracts all of the infrastructure away for developers and makes deploying an application a matter of a single CLI command.

Bonus: Cherry Pie.


By Frederic Lardinois

ScyllaDB takes on Amazon with new DynamoDB migration tool

There are a lot of open source databases out there, and ScyllaDB, a NoSQL variety, is looking to differentiate itself by attracting none other than Amazon users. Today, it announced a DynamoDB migration tool to help Amazon customers move to its product.

It’s a bold move, but Scylla, which has a free open source product along with paid versions, has always had a penchant for going after bigger players. It has had a tool to help move Cassandra users to ScyllaDB for some time.

CEO Dor Laor says DynamoDB customers can now also migrate existing code with little modification. “If you’re using DynamoDB today, you will still be using the same drivers and the same client code. In fact, you don’t need to modify your client code one bit. You just need to redirect access to a different IP address running Scylla,” Laor told TechCrunch.
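
A minimal sketch of that redirect with the standard boto3 DynamoDB client: the only change is the endpoint_url, which points at a Scylla node instead of AWS. The host, port and table (assumed to already exist with user_id as its partition key) are placeholders for illustration.

```python
# A minimal sketch of the "just redirect the endpoint" idea: the same
# boto3 DynamoDB code, pointed at a Scylla cluster instead of AWS.
# Host, port, credentials and table name below are placeholders.
import boto3

dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://scylla-node.example.com:8000",  # Scylla instead of AWS
    region_name="us-east-1",
    aws_access_key_id="none",        # placeholder; nothing is sent to AWS
    aws_secret_access_key="none",
)

# Assumes a table "user_events" already exists with user_id as its key.
table = dynamodb.Table("user_events")
table.put_item(Item={"user_id": "42", "event": "login"})
item = table.get_item(Key={"user_id": "42"})["Item"]
print(item)
```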

He says that the reason customers would want to switch to Scylla is because it offers a faster and cheaper experience by utilizing the hardware more efficiently. That means companies can run the same workloads on fewer machines, and do it faster, which ultimately should translate to lower costs.

The company also announced a $25 million Series C extension led by Eight Roads Ventures. Existing investors Bessemer Venture Partners, Magma Venture Partners, Qualcomm Ventures and TLV Partners also participated. Scylla has raised a total of $60 million, according to the company.

The startup has been around for 6 years and customers include Comcast, GE, IBM and Samsung. Laor says that Comcast went from running Cassandra on 400 machines to running the same workloads with Scylla on just 60.

Laor is playing the long game in the database market, and it’s not about taking on Cassandra, DynamoDB or any other individual product. “Our main goal is to be the default NoSQL database where if someone has big data, real-time workloads, they’ll think about us first, and we will become the default.”


By Ron Miller

HashiCorp announces fully managed service mesh on Azure

Service mesh is just beginning to take hold in the cloud native world, and as it does, vendors are looking for ways to help customers understand it. One way to simplify the complexity of dealing with the growing number of service mesh products out there is to package it as a service. Today, HashiCorp announced a new service on Azure to address that need, building it into the Consul product.

HashiCorp co-founder and CTO Armon Dadgar says it’s a fully managed service. “We’ve partnered closely with Microsoft to offer a native Consul [service mesh] service. At the highest level, the goal here is, how do we make it basically push button,” Dadgar told TechCrunch.

He adds that there is extremely tight integration in terms of billing and permissions, as well as other management functions, as you would expect with a managed service in the public cloud. Brendan Burns, one of the original Kubernetes developers, who is now a distinguished engineer at Microsoft, says the HashiCorp solution really strips away a lot of the complexity associated with running a service mesh.

“In this case, HashiCorp is using some integration into the Azure control plane to run Consul for you. So you just consume the service mesh. You don’t have to worry about the operations of the service mesh,” Burns said. He added, “This is really turning it into a service instead of a do-it-yourself exercise.”

Service meshes are tools used in conjunction with containers and Kubernetes in a dynamic cloud-native environment to help microservices communicate and interoperate with one another. A growing number of them, including Istio, Envoy and Linkerd, are jockeying for position right now.
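
For a concrete sense of what that looks like in Consul’s case, here is a rough sketch of registering a service with a Connect sidecar proxy through the local agent’s HTTP API, so traffic to an upstream service flows through the mesh. The service names, ports and agent address are assumptions for illustration.

```python
# A rough sketch of a Consul Connect registration: register a "web"
# service with a sidecar proxy and one upstream dependency via the
# local agent's HTTP API. Names, ports and the agent address are
# assumptions for illustration.
import requests

registration = {
    "Name": "web",
    "Port": 8080,
    "Connect": {
        "SidecarService": {
            "Proxy": {
                "Upstreams": [
                    # "web" reaches "billing" through its local sidecar on 9191
                    {"DestinationName": "billing", "LocalBindPort": 9191}
                ]
            }
        }
    },
}

resp = requests.put(
    "http://127.0.0.1:8500/v1/agent/service/register",
    json=registration,
    timeout=5,
)
resp.raise_for_status()
print("registered; 'web' now reaches 'billing' via localhost:9191")
```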

Burns makes it clear that while Microsoft is working closely with HashiCorp on this project, it’s also working with other vendors, as well. “Our goal with the service mesh interface specification was really to let a lot of partners be successful on the platform. You know, there’s a bunch of different service meshes. It’s a place where we feel like there’s a lot of evolution and experimentation happening, so we want to make sure that our customers can find the right solution for them,” Burns explained.

The HashiCorp Consul service is currently in private beta.


By Ron Miller

Snyk grabs $70M more to detect security vulnerabilities in open source code and containers

A growing number of IT breaches has led to security becoming a critical and central aspect of how computing systems are run and maintained. Today, a startup that focuses on one specific area — developing security tools aimed at developers and the work they do — has closed a major funding round that underscores the growth of that area.

Snyk — a London and Boston-based company that got its start identifying and developing security solutions for developers working on open source code — is today announcing that it has raised $70 million, funding that it will be using to continue expanding its capabilities and overall business. For example, the company has more recently expanded to building security solutions to help developers identify and fix vulnerabilities around containers, an increasingly standard unit of software used to package up and run code across different computing environments.

Open source — Snyk works as an integration into existing developer workflows, compatible with the likes of GitHub, Bitbucket and GitLab, as well as CI/CD pipelines — was an easy target to hit. It’s used in 95% of all enterprises, with up to 77% of open source components liable to have vulnerabilities, by Snyk’s estimates. Containers are a different issue.

“The security concerns around containers are almost more about ownership than technology,” Guy Podjarny, the president who co-founded the company with Assaf Hefetz and Danny Grander, explained in an interview. “They are in a twilight zone between infrastructure and code. They look like virtual machines and suffer many of same concerns such as being unpatched or having permissions that are too permissive.”

While containers are present in fewer than 30% of computing environments today, their growth is on the rise, according to Gartner, which forecasts that by 2022, over 75% of global organizations will run containerized applications. Snyk estimates that a full 44% of Docker image scans (Docker being one of the major container vendors) have known vulnerabilities.

This latest round is being led by Accel with participation from existing investors GV and Boldstart Ventures. These three, along with a fourth investor (Heavybit), also put $22 million into the company as recently as September 2018. That round was made at a valuation of $100 million, and from what we understand from a source close to the startup, it’s now in the “range” of $500 million.

“Accel has a long history in the security market and we believe Snyk is bringing a truly unique, developer-first approach to security in the enterprise,” Matt Weigand of Accel said in a statement. “The strength of Snyk’s customer base, rapidly growing free user community, leadership team and innovative product development prove the company is ready for this next exciting phase of growth and execution.”

Indeed, the company has hit some big milestones in the last year that could explain that hike. It now has some 300,000 developers using it around the globe, with its customer base growing some 200 percent this year and including the likes of Google, Microsoft, Salesforce and ASOS (sidenote: when developers at developer-centric companies working at the vanguard of computing, like Google and Microsoft, are using your product, that is a good sign). Notably, that has largely come by word of mouth — inbound interest.

The company in July of this year took on a new CEO, Peter McKay, who replaced Podjarny. McKay was the company’s first investor and has a track record in helping to grow large enterprise security businesses, a sign of the trajectory that Snyk is hoping to follow.

“Today, every business, from manufacturing to retail and finance, is becoming a software business,” said McKay. “There is an immediate and fast growing need for software security solutions that scale at the same pace as software development. This investment helps us continue to bring Snyk’s product-led and developer-focused solutions to more companies across the globe, helping them stay secure as they embrace digital innovation – without slowing down.”

 


By Ingrid Lunden

APIs are the next big SaaS wave

While the software revolution started out slowly, over the past few years it’s exploded, and the fastest-growing segment to date has been the shift toward software as a service, or SaaS.

SaaS has dramatically lowered the intrinsic total cost of ownership for adopting software, solved scaling challenges and taken away the burden of issues with local hardware. In short, it has allowed a business to focus primarily on just that — its business — while simultaneously reducing the burden of IT operations.

Today, SaaS adoption is increasingly ubiquitous. According to IDG’s 2018 Cloud Computing Survey, 73% of organizations have at least one application or a portion of their computing infrastructure already in the cloud. While this software explosion has created a whole range of downstream impacts, it has also caused software developers to become more and more valuable.

The increasing value of developers has meant that, like traditional SaaS buyers before them, they also better intuit the value of their time and increasingly prefer businesses that can help alleviate the hassles of procurement, integration, management, and operations. Developers’ needs around those hassles, however, are specialized.

They are looking to deeply integrate products into their own applications and to do so, they need access to an Application Programming Interface, or API. Best practices for API onboarding include technical documentation, examples, and sandbox environments to test.

APIs tend to also offer metered billing upfront. For these and other reasons, APIs are a distinct subset of SaaS.

For fast-moving developers building at global scale, APIs are no longer a stop-gap to the future—they’re a critical part of their strategy. Why would you dedicate precious resources to recreating something in-house that’s done better elsewhere when you can instead focus your efforts on creating a differentiated product?

Thanks to this mindset shift, APIs are on track to create another SaaS-sized impact across all industries and at a much faster pace. By exposing often complex services as simplified code, API-first products are far more extensible, easier for customers to integrate into, and have the ability to foster a greater community around potential use cases.


Graphics courtesy of Accel

Billion-dollar businesses building APIs

Whether you realize it or not, chances are that your favorite consumer and enterprise apps—Uber, Airbnb, PayPal, and countless more—have a number of third-party APIs and developer services running in the background. Just like most modern enterprises have invested in SaaS technologies for all the above reasons, many of today’s multi-billion dollar companies have built their businesses on the backs of these scalable developer services that let them abstract everything from SMS and email to payments, location-based data, search and more.
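
To make the “abstract everything” point concrete, here is a hedged sketch of what sending an SMS through one such developer service looks like, using Twilio’s Python helper library; the credentials and phone numbers are placeholders, not real account values.

```python
# A minimal sketch of sending an SMS through a third-party API instead
# of building carrier integrations in-house. Credentials and numbers
# below are placeholders, not real account values.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXX", "your_auth_token")  # placeholder credentials

message = client.messages.create(
    body="Your ride is arriving now.",
    from_="+15005550006",   # placeholder sending number
    to="+15551234567",      # placeholder recipient
)

print(message.sid)  # provider-side identifier for the sent message
```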

Simultaneously, the entrepreneurs behind these API-first companies like Twilio, Segment, Scale and many others are building sustainable, independent—and big—businesses.

Valued today at over $22 billion, Stripe is the biggest independent API-first company. Stripe took off because of its initial laser focus on the developer experience of setting up and taking payments. It was even initially known as /dev/payments!

Stripe spent extra time building the right, idiomatic SDKs for each language platform and beautiful documentation. But it wasn’t just those things; the company rebuilt an entire business process around being API-first.

Companies using Stripe didn’t need to fill out a PDF and set up a separate merchant account before getting started. Once sign-up was complete, users could immediately test the API with a sandbox and integrate it directly into their application. Even pricing was different.
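
That sandbox flow is roughly this simple. The sketch below uses Stripe’s Python library in test mode, with a placeholder test-mode key and Stripe’s built-in test card token; the amount and description are arbitrary.

```python
# A quick sketch of the test-mode sandbox flow: with a test API key, a
# developer can exercise the payments API immediately after sign-up.
# The key below is a placeholder; tok_visa is Stripe's test card token.
import stripe

stripe.api_key = "sk_test_your_key_here"   # placeholder test-mode key

charge = stripe.Charge.create(
    amount=2000,            # $20.00, expressed in cents
    currency="usd",
    source="tok_visa",      # built-in test card token
    description="sandbox test charge",
)

print(charge.id, charge.status)   # e.g. ch_..., "succeeded"
```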

Stripe chose to simplify pricing dramatically by starting with a single, simple price for all cards and not breaking out cards by type even though the costs for AmEx cards versus Visa can differ. Stripe also did away with a monthly minimum fee that competitors had.

Many competitors used the monthly minimum to offset the high cost of support for new customers who weren’t necessarily processing payments yet. Stripe flipped that on its head. Developers integrate Stripe earlier than they integrated payments before, and while it costs Stripe a lot in setup and support costs, it pays off in brand and loyalty.

Checkr is another excellent example of an API-first company vastly simplifying a massive yet slow-moving industry. Very little had changed over the last few decades in how businesses ran background checks on their employees and contractors, a process involving manual paperwork and the help of third-party services that spent days verifying an individual.

Checkr’s API gives companies immediate access to a variety of disparate verification sources and allows these companies to plug Checkr into their existing on-boarding and HR workflows. It’s used today by more than 10,000 businesses including Uber, Instacart, Zenefits and more.

Like Checkr and Stripe, Plaid provides a similar value prop to applications in need of banking data and connections, abstracting away banking relationships and complexities brought upon by a lack of tech in a category dominated by hundred-year-old banks. Plaid has shown an incredible ramp these past three years, from closing a $12 million Series A in 2015 to reaching a valuation over $2.5 billion this year.

Today the company is fueling an entire generation of financial applications, all on the back of their well-built API.


Graphics courtesy of Accel

Then and now

Accel’s first API investment was in Braintree, a mobile and web payments system for e-commerce companies, in 2011. Braintree eventually sold to, and became an integral part of, PayPal as it spun out from eBay and grew to be worth more than $100 billion. Unsurprisingly, it was shortly thereafter that our team decided it was time to go big on the category. By the end of 2014, we had led the Series As in Segment and Checkr and followed those investments with our first APX conference in 2015.

At the time, Plaid, Segment, Auth0, and Checkr had only raised seed or Series A financings! Today we are even more excited and bullish on the space. To convey just how much API-first businesses have grown in such a short period of time, we thought it would be useful to share some metrics from the past five years, which we’ve broken out in the two visuals included above in this article.

While SaaS may have pioneered the idea that the best way to do business isn’t to actually build everything in-house, today we’re seeing APIs amplify this theme. At Accel, we firmly believe that APIs are the next big SaaS wave — having as much impact as their predecessor, if not more, thanks to developers at today’s fastest-growing startups and their preference for API-first products. We’ve actively continued to invest in the space (in companies like Scale, mentioned above).

And much like how a robust ecosystem developed around SaaS, we believe that one will continue to develop around APIs. Given the amount of progress that has happened in just a few short years, Accel is hosting our second APX conference to once again bring together this remarkable community and continue to facilitate discussion and innovation.


Graphics courtesy of Accel


By Arman Tabatabai

How Pivotal got bailed out by fellow Dell family member, VMware

When Dell acquired EMC in 2016 for $67 billion, it created a complicated consortium of interconnected organizations. Some, like VMware and Pivotal, operate as completely separate companies. They have their own boards of directors, can acquire companies and are publicly traded on the stock market. Yet they work closely together within the Dell family, partnering where it makes sense. When Pivotal’s stock price plunged recently, VMware saved the day when it bought the faltering company for $2.7 billion yesterday.

Pivotal went public last year, and sometimes struggled, but in June the wheels started to come off after a poor quarterly earnings report. The company had what MarketWatch aptly called “a train wreck of a quarter.”

How bad was it? So bad that its stock price was down 42% the day after it reported its earnings. While the quarter itself wasn’t so bad, with revenue up year over year, the guidance was another story. The company cut its 2020 revenue guidance by $40-$50 million and the guidance it gave for the upcoming 2Q19 was also considerably lower than consensus Wall Street estimates.

The stock price plunged from a high of $21.44 on May 30th to a low of $8.30 on Aug 14th. The company’s market cap fell over that same period from $5.828 billion to $2.257 billion. That’s when VMware admitted it was thinking about buying the struggling company.


By Ron Miller

IBM is moving OpenPower Foundation to The Linux Foundation

IBM makes the Power Series chips, and as part of that has open sourced some of the underlying technologies to encourage wider use of these chips. The open source pieces have been part of the OpenPower Foundation. Today, the company announced it was moving the foundation under The Linux Foundation, and while it was at it, announced it was open sourcing several other important bits.

Ken King, general manager for OpenPower at IBM, says that at this point in his organization’s evolution, they wanted to move it under the auspices of the Linux Foundation. “We are taking the OpenPower Foundation, and we are putting it as an entity or project underneath The Linux Foundation with the mindset that we are now bringing more of an open governance approach and open governance principles to the foundation,” King told TechCrunch.

But IBM didn’t stop there. It also announced that it was open sourcing some of the technical underpinnings of the Power Series chip to make it easier for developers and engineers to build on top of the technology. Perhaps most importantly, the company is open sourcing the Power Instruction Set Architecture (ISA). These are “the definitions developers use for ensuring hardware and software work together on Power,” the company explained.

King sees open sourcing this technology as an important step for a number of reasons around licensing and governance. “The first thing is that we are taking the ability to be able to implement what we’re licensing, the ISA instruction set architecture, for others to be able to implement on top of that instruction set royalty free with patent rights,” he explained.

The company is also putting this under an open governance workgroup at the OpenPower Foundation. This matters to open source community members because it provides a layer of transparency that might otherwise be lacking. What that means in practice is that any changes will be subject to a majority vote, so long as the changes meet compatibility requirements, King said.

Jim Zemlin, executive director at the Linux Foundation, says that making all of this part of the Linux Foundation open source community could drive more innovation. “Instead of a very, very long cycle of building an application and working separately with hardware and chip designers, because all of this is open, you’re able to quickly build your application, prototype it with hardware folks, and then work with a service provider or a company like IBM to take it to market. So there’s not tons of layers in between the actual innovation and value captured by industry in that cycle,” Zemlin explained.

In addition, IBM made several other announcements around open sourcing other Power Chip technologies designed to help developers and engineers customize and control their implementations of Power chip technology. “IBM will also contribute multiple other technologies including a softcore implementation of the Power ISA, as well as reference designs for the architecture-agnostic Open Coherent Accelerator Processor Interface (OpenCAPI) and the Open Memory Interface (OMI). The OpenCAPI and OMI technologies help maximize memory bandwidth between processors and attached devices, critical to overcoming performance bottlenecks for emerging workloads like AI,” the company said in a statement.

The softcore implementation of the Power ISA, in particular, should give developers more control and even enable them to build their own instruction sets, Hugh Blemings, executive director of the OpenPower Foundation explained. “They can now actually try crafting their own instruction sets, and try out new ways of the accelerated data processes and so forth at a lower level than previously possible,” he said.

The company is announcing all of this today at The Linux Foundation Open Source Summit and OpenPower Summit in San Diego.


By Ron Miller

H2O.ai announces $72.5M Series D led by Goldman Sachs

H2O.ai’s mission is to democratize AI by providing a set of tools that frees companies from relying on teams of data scientists. Today it got a bushel of money to help. The company announced a $72.5 million Series D round led by Goldman Sachs and Ping An Global Voyager Fund.

Previous investors Wells Fargo, NVIDIA and Nexus Venture Partners also participated. Under the terms of the deal, Jade Mandel from Goldman Sachs will be joining the H2O.ai Board. Today’s investment brings the total raised to $147 million.

It’s worth noting that Goldman Sachs isn’t just an investor. It’s also a customer. Company CEO and co-founder Sri Ambati says the fact that customers, Wells Fargo and Goldman Sachs, have led the last two rounds is a validation for him and his company. “Customers have risen up from the ranks for two consecutive rounds for us. Last time the Series C was led by Wells Fargo where we were their platform of choice. Today’s round was led by Goldman Sachs, which has been a strong customer for us and strong supporters of our technology,” Ambati told TechCrunch.

The company’s main product, H2O Driverless AI, introduced in 2017, gets its name from the fact that it provides a way for people who aren’t AI experts to take advantage of AI without a team of data scientists. “Driverless AI is automatic machine learning, which brings the power of a world-class data scientist into the hands of everyone. It builds models automatically using machine learning algorithms of every kind,” Ambati explained.
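
Driverless AI itself is a commercial product, but the automatic model-building idea is easy to see in H2O’s open source Python library. The sketch below uses the open source H2O-3 AutoML API as an illustration of the concept, not Driverless AI; the dataset path and target column are placeholders.

```python
# A small sketch of automatic machine learning with H2O's open source
# H2O-3 AutoML API. The CSV path and target column are placeholders.
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # start (or connect to) a local H2O cluster

train = h2o.import_file("train.csv")          # placeholder dataset
y = "churned"                                 # placeholder target column
x = [col for col in train.columns if col != y]
train[y] = train[y].asfactor()                # treat the target as categorical

# Train a bounded number of models and rank them on a leaderboard.
aml = H2OAutoML(max_models=10, seed=1)
aml.train(x=x, y=y, training_frame=train)

print(aml.leaderboard.head())
```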

The company introduced a new recipe concept today that provides all of the AI ingredients and instructions for building models for different business requirements. H2O.ai’s team of data scientists has created and open sourced 100 recipes for things like credit risk scoring, anomaly detection and property valuation.

The company has been growing since its Series C round in 2017, when it had 70 employees. Today it has 175 employees and has tripled its number of customers since the prior round, although Ambati didn’t discuss an exact number. The company has its roots in open source and has 20,000 users of its open source products, according to Ambati.

He didn’t want to discuss valuation and wouldn’t say when the company might go public, saying it’s early days for AI and they are working hard to build a company for the long haul.


By Ron Miller