Cribl raises $200M to help enterprises do more with their data

At a time when remote work, cybersecurity attacks and increased privacy and compliance requirements put pressure on a company’s data, more companies are collecting and storing their observability data, but many find themselves locked in with a vendor or struggling to access that data.

Enter Cribl. The San Francisco-based company is developing an “open ecosystem of data” for enterprises that utilizes unified data pipelines, called “observability pipelines,” to parse and route any type of data that flows through a corporate IT system. Users can then choose their own analytics tools and storage destinations like Splunk, Datadog and Exabeam, but without becoming dependent on a vendor.

The company announced Wednesday a $200 million round of Series C funding to value Cribl at $1.5 billion, according to a source close to the company. Greylock and Redpoint Ventures co-led the round and were joined by new investor IVP, existing investors Sequoia and CRV and strategic investment from Citi Ventures and CrowdStrike. The new capital infusion gives Cribl a total of $254 million in funding since the company was started in 2017, Cribl co-founder and CEO Clint Sharp told TechCrunch.

Sharp did not discuss the valuation; however, he believes that the round is “validation that the observability pipeline category is legit.” Data is growing at a compound annual growth rate of 25%, and organizations are collecting five times more data today than they did 10 years ago, he explained.

“Ultimately, they want to ask and answer questions, especially for IT and security people,” Sharp added. “When Zoom sends data on who started a phone call, that might be data I need to know so I know who is on the call from a security perspective and who they are communicating with. Also, who is sending files to whom and what machines are communicating together in case there is a malicious actor. We can also find out who is having a bad experience with the system and what resources they can access to try and troubleshoot the problem.”

Cribl also enables users to choose how they want to store their data, which is different from competitors that often lock companies into using only their products. Instead, customers can buy the best products from different categories and they will all talk to each other through Cribl, Sharp said.

Though Cribl is developing a pipeline for data, Sharp sees it more as an “observability lake,” as more companies have differing data storage needs. He explains that the lake is where all of the data that doesn’t need to go into an existing storage solution will end up. The pipelines send the data to specific tools and then collect it, and whatever doesn’t fit goes back into the lake so companies have it to go back to later. That way, companies can keep their data longer and more cost-effectively.

Cribl said it is seven times more efficient at processing event data and boasts a customer list that includes Whole Foods, Vodafone, FINRA, Fannie Mae and Cox Automotive.

Sharp went after additional funding after seeing huge traction in Cribl’s existing customer base, saying that “when you see that kind of traction, you want to keep doubling down.” His aim is to have a presence in every North American city and in Europe, and to continue launching new products and growing the engineering team.

Up next, the company is focusing on go-to-market and engineering growth. Its headcount is 150 currently, and Sharp expects to grow that to 250 by the end of the year.

Over the last fiscal year, Cribl grew its revenue 293%, and Sharp expects that same trajectory for this year. The company is now at a growth stage, and with the new investment, he believes Cribl is the “future leader in observability.”

“This is a great investment for us, and every dollar, we believe, is going to create an outsized return as we are the only commercial company in this space,” he added.

Scott Raney, managing director at Redpoint Ventures, said his firm is a big enterprise investor in software, particularly in companies that help organizations leverage data to protect themselves, a sweet spot that Cribl falls into.

He feels that Sharp, who came from Splunk, is leading a team that has accomplished a lot, has a vision and a handle on the business, and knows the market well. Where Splunk captures machine data and uses its systems to extract it, Cribl is doing something similar in directing the data where it needs to go, while also enabling companies to utilize multiple vendors and build apps that sit on top of its infrastructure.

“Cribl is adding opportunity by enriching the data flowing through, and the benefits are going to be meaningful in cost reduction,” Raney said. “The attitude out there is to put data in cheaper places, and afford more flexibility to extract data. Step one is to make that transition, and step two is how to drive the data sitting there. Cribl is doing something that will go from being a big business to a legacy company 30 years from now.”


By Christine Hall

YL Ventures sells its stake in cybersecurity unicorn Axonius for $270M

YL Ventures, the Israel-focused cybersecurity seed fund, today announced that it has sold its stake in cybersecurity asset management startup Axonius, which only a week ago announced a $100 million Series D funding round that now values it at around $1.2 billion.

ICONIQ Growth, Alkeon Capital Management, DTCP and Harmony Partners acquired YL Ventures’ stake for $270 million. This marks YL’s first return from its third $75 million fund, which it raised in 2017, and the largest return in the firm’s history.

With this sale, the firm’s third fund still has six portfolio companies remaining. YL Ventures closed its fourth fund with $120 million in committed capital in the middle of 2019.

Unlike YL, which focuses on early-stage companies — though it also tends to participate in some later-stage rounds — the investors that are buying its stake specialize in later-stage companies that are often on an IPO path. ICONIQ Growth has invested in the likes of Adyen, CrowdStrike, Datadog and Zoom, for example, and has also regularly partnered with YL Ventures on its later-stage investments.

“The transition from early-stage to late-stage investors just makes sense as we drive toward IPO, and it allows each investor to focus on what they do best,” said Dean Sysman, co-founder and CEO of Axonius. “We appreciate the guidance and support the YL Ventures team has provided during the early stages of our company and we congratulate them on this successful journey.”

To put this sale into perspective for the Silicon Valley- and Tel Aviv-based YL Ventures, it’s worth noting that it currently manages about $300 million. Its current portfolio includes the likes of Orca Security, Hunters and Cycode. This sale is a huge win for the firm.

Its most headline-grabbing exit so far was Twistlock, which was acquired by Palo Alto Networks for $410 million in 2019, but it has also seen exits of its portfolio companies to Microsoft, Proofpoint, CA Technologies and Walmart, among others. The fund participated in Axonius’ $4 million seed round in 2017 up to its $58 million Series C round a year ago.

It seems like YL Ventures is taking a very pragmatic approach here. It doesn’t specialize in late-stage firms — and until recently, Israeli startups always tended to sell long before they got to a late-stage round anyway. And it can generate a nice — and guaranteed — return for its own investors, too.

“This exit netted $270 million in cash directly to our third fund, which had $75 million total in capital commitments, and this fund still has 6 outstanding portfolio companies remaining,” Yoav Leitersdorf, YL Ventures’ founder and managing partner, told me. “Returning multiple times that fund now with a single exit, with the rest of the portfolio companies still there for the upside is the most responsible — yet highly profitable path — we could have taken for our fund at this time. And all this while diverting our energies and means more towards our seed-stage companies (where our help is more impactful), and at the same time supporting Axonius by enabling it to bring aboard such excellent late-stage investors as ICONIQ and Alkeon – a true win-win-win situation for everyone involved!”

He also noted that this sale achieved a top-decile return for the firm’s limited partners and allows it to focus its resources and attention toward the younger companies in its portfolio.


By Frederic Lardinois

With $29M in funding, Isovalent launches its cloud-native networking and security platform

Isovalent, a startup that aims to bring networking into the cloud-native era, today announced that it has raised a $29 million Series A round led by Andreessen Horowitz and Google. In addition, the company today officially launched its Cilium platform (which was in stealth until now) to help enterprises connect, observe and secure their applications.

The open-source Cilium project is already seeing growing adoption, with Google choosing it for its new GKE dataplane, for example. Other users include Adobe, Capital One, Datadog and GitLab. Isovalent is following what is now the standard model for commercializing open-source projects by launching an enterprise version.

Image Credits: Cilium

The founding team of CEO Dan Wendlandt and CTO Thomas Graf has deep experience in working on the Linux kernel and building networking products. Graf spent 15 years working on the Linux kernel and created the Cilium open-source project, while Wendlandt worked on Open vSwitch at Nicira (and then VMware).

Image Credits: Isovalent

“We saw that first wave of network intelligence be moved into software, but I think we both shared the view that the first wave was about replicating the traditional network devices in software,” Wendlandt told me. “You had IPs, you still had ports, you created virtual routers, and this and that. We both had that shared vision that the next step was to go beyond what the hardware did in software — and now, in software, you can do so much more. Thomas, with his deep insight in the Linux kernel, really saw this eBPF technology as something that was just obviously going to be groundbreaking technology, in terms of where we could take Linux networking and security.”

As Graf told me, when Docker, Kubernetes and containers in general became popular, what he saw was that networking companies at first were simply trying to reapply what they had already done for virtualization. “Let’s just treat containers as many miniature VMs. That was incredibly wrong,” he said. “So we looked around, and we saw eBPF and said: this is just out there and it is perfect, how can we shape it forward?”

And while Isovalent’s focus is on cloud-native networking, the added benefit of how it uses the eBPF Linux kernel technology is that it also gains deep insights into how data flows between services and hence allows it to add advanced security features as well.

As the team noted, though, users definitely don’t need to understand or program eBPF — essentially the next generation of Linux kernel modules — themselves.

Image Credits: Isovalent

“I have spent my entire career in this space, and the North Star has always been to go beyond IPs + ports and build networking visibility and security at a layer that is aligned with how developers, operations and security think about their applications and data,” said Martin Casado, partner at Andreessen Horowitz (and the founder of Nicira). “Until just recently, the technology did not exist. All of that changed with Kubernetes and eBPF. Dan and Thomas have put together the best team in the industry and given the traction around Cilium, they are well on their way to upending the world of networking yet again.”

As more companies adopt Kubernetes, they are now reaching a stage where they have the basics down but are now facing the next set of problems that come with this transition. Those, almost by default, include figuring out how to isolate workloads and get visibility into their networks — all areas where Isovalent/Cilium can help.

The team tells me its focus, now that the product is out of stealth, is on building out its go-to-market efforts and, of course, continuing to build out its platform.


By Frederic Lardinois

WhyLabs brings more transparency to ML ops

WhyLabs, a new machine learning startup that was spun out of the Allen Institute, is coming out of stealth today. Founded by a group of former Amazon machine learning engineers, Alessya Visnjic, Sam Gracie and Andy Dang, together with Madrona Venture Group principal Maria Karaivanova, WhyLabs’ focus is on ML operations after models have been trained — not on building those models from the ground up.

The team also today announced that it has raised a $4 million seed funding round from Madrona Venture Group, Bezos Expeditions, Defy Partners and Ascend VC.

Visnjic, the company’s CEO, used to work on Amazon’s demand forecasting model.

“The team was all research scientists, and I was the only engineer who had kind of tier-one operating experience,” she told me. “So it was like, ‘Okay, how bad could it be? I carried the pager for the retail website before — how bad can it be?’ But it was one of the first AI deployments that we’d done at Amazon at scale. The pager duty was extra fun because there were no real tools. So when things would go wrong — like we’d order way too many black socks out of the blue — it was a lot of manual effort to figure out why this was happening.”

Image Credits: WhyLabs

But while large companies like Amazon have built their own internal tools to help their data scientists and AI practitioners operate their AI systems, most enterprises continue to struggle with this — and a lot of AI projects simply fail and never make it into production. “We believe that one of the big reasons that happens is because of the operating process that remains super manual,” Visnjic said. “So at WhyLabs, we’re building the tools to address that — specifically to monitor and track data quality and alert — you can think of it as Datadog for AI applications.”

The team has broad ambitions, but to get started, it is focusing on observability. The team is building — and open-sourcing — a new tool for continuously logging what’s happening in the AI system, using a low-overhead agent. That platform-agnostic system, dubbed WhyLogs, is meant to help practitioners understand the data that moves through the AI/ML pipeline.

For a lot of businesses, Visnjic noted, the amount of data that flows through these systems is so large that it doesn’t make sense for them to keep “lots of big haystacks with possibly some needles in there for some investigation to come in the future.” So what they do instead is just discard all of this. With its data logging solution, WhyLabs aims to give these companies the tools to investigate their data and find issues right at the start of the pipeline.
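The idea of logging lightweight statistical profiles instead of retaining the raw haystack can be made concrete with a rough sketch. This is purely illustrative — the class and field names below are invented, not WhyLogs’ actual API — but it shows how per-field running statistics (counts, nulls, Welford-style means) let data-quality drift surface without storing any raw records:

```python
from collections import defaultdict
import math

class FieldProfile:
    """Running statistics for one field, stored instead of the raw values."""
    def __init__(self):
        self.count = 0
        self.nulls = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford's online algorithm)

    def update(self, value):
        if value is None:
            self.nulls += 1
            return
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)

    @property
    def stddev(self):
        # Population standard deviation of the values seen so far
        return math.sqrt(self.m2 / self.count) if self.count > 1 else 0.0

def profile_batch(records, profiles=None):
    """Fold a batch of records into per-field profiles."""
    profiles = profiles if profiles is not None else defaultdict(FieldProfile)
    for record in records:
        for field, value in record.items():
            profiles[field].update(value)
    return profiles

profiles = profile_batch([
    {"price": 9.99, "qty": 2},
    {"price": 12.50, "qty": None},
    {"price": 11.00, "qty": 3},
])
print(profiles["qty"].nulls)  # → 1: null counts surface data-quality drift
```

The profiles are tiny regardless of how much data flows through, which is what makes keeping them around for later investigation cheap.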

Image Credits: WhyLabs

According to Karaivanova, the company doesn’t have paying customers yet, but it is working on a number of proofs of concepts. Among those users is Zulily, which is also a design partner for the company. The company is going after mid-size enterprises for the time being, but as Karaivanova noted, to hit the sweet spot for the company, a customer needs to have an established data science team with 10 to 15 ML practitioners. While the team is still figuring out its pricing model, it’ll likely be a volume-based approach, Karaivanova said.

“We love to invest in great founding teams who have built solutions at scale inside cutting-edge companies, who can then bring products to the broader market at the right time. The WhyLabs team are practitioners building for practitioners. They have intimate, first-hand knowledge of the challenges facing AI builders from their years at Amazon and are putting that experience and insight to work for their customers,” said Tim Porter, managing director at Madrona. “We couldn’t be more excited to invest in WhyLabs and partner with them to bring cross-platform model reliability and observability to this exploding category of MLOps.”


By Frederic Lardinois

Amid shift to remote work, application performance monitoring is IT’s big moment

In recent weeks, millions have started working from home, putting unheard-of pressure on services like video conferencing, online learning, food delivery and e-commerce platforms. While some verticals have seen a marked reduction in traffic, others are being asked to scale to new heights.

Services that were previously nice to have are now necessities, but how do organizations track pressure points that can add up to a critical failure? There is actually a whole class of software to help in this regard.

Monitoring tools like Datadog, New Relic and Elastic are designed to help companies understand what’s happening inside their key systems and warn them when things may be going sideways. That’s absolutely essential as these services are being asked to handle unprecedented levels of activity.

At a time when performance is critical, application performance monitoring (APM) tools are helping companies stay up and running. They also help track root causes should the worst case happen and they go down, with the goal of getting going again as quickly as possible.

We spoke to a few monitoring vendor CEOs to understand better how they are helping customers navigate this demand and keep systems up and running when we need them most.

IT’s big moment


By Ron Miller

Datadog acquires app testing company Madumbo

Datadog, the popular monitoring and analytics platform, today announced that it has acquired Madumbo, an AI-based application testing platform.

“We’re excited to have the Madumbo team join Datadog,” said Olivier Pomel, Datadog’s CEO. “They’ve built a sophisticated AI platform that can quickly determine if a web application is behaving correctly. We see their core technology strengthening our platform and extending into many new digital experience monitoring capabilities for our customers.”

Paris-based Madumbo, which was incubated at Station F and launched in 2017, offers its users a way to test their web apps without having to write any additional code. It promises to let developers build tests by simply interacting with the site, using the Madumbo test recorder, and to help them generate test emails, passwords and testing data on the fly. The Madumbo system then watches your site and adapts its checks to whatever changes you make. The bot also watches for JavaScript errors and other warnings and can be integrated into a deployment script.

The team will join Datadog’s existing Paris office and will work on new products, which Datadog says will be announced later this year. Datadog will phase out the Madumbo platform over the course of the next few months.

“Joining Datadog and bringing Madumbo’s AI-powered testing technology to its platform is an amazing opportunity,” said Gabriel-James Safar, CEO of Madumbo. “We’ve long admired Datadog and its leadership, and are excited to expand the scope of our existing technology by integrating tightly with Datadog’s other offerings.”


By Frederic Lardinois

Datadog launches Watchdog to help you monitor your cloud apps

Your typical cloud monitoring service integrates with dozens of services and provides you a pretty dashboard and some automation to help you keep tabs on how your applications are doing. Datadog has long done that, but today it is adding a new service called Watchdog, which uses machine learning to automatically detect anomalies for you.

The company notes that a traditional monitoring setup involves defining your parameters based on how you expect the application to behave and then setting up dashboards and alerts to monitor them. Given the complexity of modern cloud applications, that approach has its limits, so an additional layer of automation becomes necessary.

That’s where Watchdog comes in. The service observes all of the performance data it can get its paws on, learns what’s normal, and then provides alerts when something unusual happens and — ideally — provides insights into where exactly the issue started.
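The learn-what’s-normal-then-flag-deviations approach can be sketched generically. This is a simple rolling z-score check — a stand-in for the idea, not Datadog’s actual algorithm, and the metric name is made up:

```python
from collections import deque
import statistics

def detect_anomalies(series, window=20, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the rolling mean of the previous `window` points."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(series):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # guard against flat series
            if abs(value - mean) / stdev > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Steady ~100ms latency, then a sudden spike at the end
latency_ms = [100, 102, 99, 101, 100] * 5 + [400]
print(detect_anomalies(latency_ms, window=20))  # → [25]
```

The appeal of this style of detection is exactly what the article describes: nobody has to predefine a threshold for every metric, because the baseline is learned from the data itself.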

“Watchdog builds upon our years of research and training of algorithms on our customers’ data sets. This technology is unique in that it not only identifies an issue programmatically, but also points users to probable root causes to kick off an investigation,” Datadog’s head of data science Homin Lee notes in today’s announcement.

The service is now available to all Datadog customers in its Enterprise APM plan.


By Frederic Lardinois

Intermix.io looks to help data engineers find their worst bottlenecks

For any company built on top of machine learning operations, the more data they have, the better off they are — as long as they can keep it all under control. But as more and more information pours in from disparate sources, gets logged in obscure databases and is generally hard (or slow) to query, the process of getting it all into one neat place where a data scientist can actually start running the statistics is quickly becoming one of machine learning’s biggest bottlenecks.

That’s a problem Intermix.io and its founders, Paul Lappas and Lars Kamp, hope to solve. With Intermix.io, engineers get a granular look at all of the nuances behind what’s happening with a specific function, from the query all the way through all of the paths it takes to get to its end result. The end product helps data engineers monitor the flow of information going through their systems, regardless of the source, to isolate bottlenecks early and see where processes are breaking down. The company also said it has raised seed funding from Uncork Capital, S28 Capital and PAUA Ventures, along with Bastian Lehmann, CEO of Postmates, and Hasso Plattner, founder of SAP.

“Companies realize being data driven is a key to success,” Kamp said. “The cloud makes it cheap and easy to store your data forever, machine learning libraries are making things easy to digest. But a company that wants to be data driven wants to hire a data scientist. This is the wrong first hire. To do that they need access to all the relevant data, and have it be complete and clean. That falls to data engineers who need to build data assembly lines where they are creating meaningful types to get data usable to the data scientist. That’s who we serve.”

Intermix.io works in a couple of ways: first, it tags all of that data, giving the service a meta-layer of understanding what does what, and where it goes; second, it taps every input in order to gather metrics on performance and help identify those potential bottlenecks; and lastly, it’s able to track that performance all the way from the query to the thing that ends up on a dashboard somewhere. The idea here is that if, say, some server is about to run out of space somewhere or is showing some performance degradation, that’s going to start showing up in the performance of the actual operations pretty quickly — and needs to be addressed.
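The tag-then-measure idea can be illustrated with a hypothetical sketch — the log schema and stage names here are invented, not Intermix.io’s actual data model — aggregating tagged query timings to surface the slowest pipeline stage:

```python
from collections import defaultdict

# Hypothetical query-log entries; field names are illustrative only.
query_log = [
    {"tags": {"app": "etl", "stage": "extract"},   "duration_s": 12.0},
    {"tags": {"app": "etl", "stage": "transform"}, "duration_s": 95.0},
    {"tags": {"app": "etl", "stage": "transform"}, "duration_s": 88.0},
    {"tags": {"app": "etl", "stage": "load"},      "duration_s": 7.5},
]

def slowest_stages(log):
    """Aggregate total runtime per tagged stage to surface bottlenecks."""
    totals = defaultdict(float)
    for entry in log:
        totals[entry["tags"]["stage"]] += entry["duration_s"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(slowest_stages(query_log)[0])  # → ('transform', 183.0)
```

Once every input is tagged this way, the same aggregation works across sources, which is what lets a degrading server show up in the performance numbers before it takes the pipeline down.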

All of this is an efficiency play that might not seem to make sense at a smaller scale. But with the waterfall of new devices that come online every day, as well as more and more ways of understanding how people use tools online, even the smallest companies can quickly start building massive data sets. And if a company’s business depends on some machine learning happening in the background, it’s dependent on all that training and tracking happening as quickly and smoothly as possible, with any hiccups leading to real repercussions for its own business.

Intermix.io isn’t the first company to try to create application performance management software. There are others like Datadog and New Relic, though Lappas says the primary competition from them comes in the form of traditional APM software with some additional scripts tacked on. Data flows, however, are a different layer altogether, which means they require a more tailored approach.


By Matthew Lynley

Datadog provides visibility into Kubernetes apps with new container map

As companies turn increasingly to containerization, it creates challenges in terms of monitoring each individual container and the impact on the underlying application. This is particularly difficult because of the ephemeral nature of containers, which can exist for a very short time. Datadog introduced a container map product today that could help by bringing visualization to bear on the problem.

“With this announcement, what we are doing is introducing a container map to show you all of the containers across your system,” Ilan Rabinovitch, VP of Product Management at Datadog, told TechCrunch. This could enable customers to see every container at any given time, organize them into groups based on tags, then drill down to see what’s happening within each one.

The company makes use of tags and metadata to identify the different parts of the containers and their relationship to one another and the underlying infrastructure. The tool monitors containers much like any other entity in Datadog.

“Just as the host map does with individual instances, the container map enables you to easily group, filter, and inspect your containers using metadata such as services, availability zones, roles, partitions, or any other dimension you like,” the company wrote in a blog post introducing the new feature.
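That grouping can be sketched generically: given container metadata, bucket containers by any tag dimension. The data below is an illustration of the concept, not Datadog’s actual data model:

```python
from collections import defaultdict

# Illustrative container metadata; ids and tags are invented.
containers = [
    {"id": "c1", "tags": {"service": "web", "az": "us-east-1a"}},
    {"id": "c2", "tags": {"service": "web", "az": "us-east-1b"}},
    {"id": "c3", "tags": {"service": "db",  "az": "us-east-1a"}},
]

def group_by(containers, dimension):
    """Group containers by an arbitrary tag dimension, as a host/container map does."""
    groups = defaultdict(list)
    for c in containers:
        groups[c["tags"].get(dimension, "untagged")].append(c["id"])
    return dict(groups)

print(group_by(containers, "service"))  # → {'web': ['c1', 'c2'], 'db': ['c3']}
```

Because the grouping key is just a tag, the same map can be re-sliced by availability zone, role, or any other dimension without changing how the data is collected.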

While Datadog won’t help a company directly remediate a problem — it avoids having write access to a company’s systems — the customer can use webhooks or a serverless trigger like an AWS Lambda function to invoke some sort of action should certain conditions be met that could compromise or break the application.
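That alert-to-action pattern amounts to a condition check that hands off to a customer-supplied handler. In this minimal sketch, the handler stands in for a webhook endpoint or Lambda function; both function names are hypothetical:

```python
def evaluate_alert(metric_value, threshold, remediate):
    """If the monitored condition is breached, invoke an external action.
    The monitor itself has no write access to the customer's systems —
    remediation happens on the customer's side of the handoff."""
    if metric_value > threshold:
        return remediate({"metric": metric_value, "threshold": threshold})
    return None  # condition not breached, nothing to do

def restart_container(payload):
    # Hypothetical remediation a customer might expose behind a webhook.
    return f"restarting: breached {payload['threshold']} with {payload['metric']}"

print(evaluate_alert(0.97, 0.9, restart_container))
```

The separation is the point: the monitoring side only decides *whether* to fire, and what happens next stays entirely under the customer’s control.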

The company is simply acting as a third party watching to make sure the containers all behave properly. “We trust Kubernetes to do what it should do. But when something breaks, you need to be able to understand what happened, and Kubernetes is not designed to do this,” Rabinovitch said. The new map feature provides that missing visibility into the container system and lets users drill down inside individual containers to pinpoint the source of a problem.


By Ron Miller

Through luck and grit, Datadog is fusing the culture of developers and operations

There used to be two cultures in the enterprise around technology. On one side were software engineers, who built out the applications needed by employees to conduct the business of their companies. On the other side were sysadmins, who were territorially protective of their hardware domain — the servers, switches, and storage boxes needed to power all of that software. Many a great comedy routine has been made at the interface of those two cultures, but they remained divergent.

That is, until the cloud changed everything. Suddenly, there was increasing overlap in the skills required for software engineering and operations, as well as a greater need for collaboration between the two sides to effectively deploy applications. Yet, while these two halves eventually became one whole, the software monitoring tools used by them were often entirely separate.

New York City-based Datadog was designed to bring these two cultures together to create a more nimble and collaborative software and operations culture. Founded in 2010 by Olivier Pomel and Alexis Lê-Quôc, the product offers monitoring and analytics for cloud-based workflows, allowing ops teams to track and analyze deployments and developers to instrument their applications. Pomel said that “the root of all of this collaboration is to make sure that everyone has the same understanding of the problem.”

The company has had dizzying success. Pomel declined to disclose precise numbers, but says the company had “north of $100 million” of recurring revenue in the past twelve months, and “we have been doubling that every year so far.” The company, headquartered in the New York Times Building in Times Square, employs more than 600 people across its various worldwide offices. The company has raised nearly $150 million of venture capital according to Crunchbase, and is perennially on bankers’ short lists for strong IPO prospects.

The real story though is just how much luck and happenstance can help put wind in the sails of a company.

Pomel first met Lê-Quôc while an undergraduate in France. He was working on running the campus network, and helped to discover that Lê-Quôc had hacked the network. Lê-Quôc was eventually disconnected, and Pomel would migrate to IBM’s upstate New York offices after graduation. After IBM, he led technology at Wireless Generation, a K-12 startup, where he ran into Lê-Quôc again, who was heading up ops for the company. The two cultures of developers and ops were glaring at the startup, where “we had developers who hated operations” and there was much “finger-pointing.”

Putting aside any lingering grievances from their undergrad days, the two began to explore how they could ameliorate the cultural differences they witnessed between their respective teams. “Bringing dev and ops together is not a feature, it is core,” Pomel explained. At the same time, they noticed that companies were increasingly talking about building on Amazon Web Services, which in 2009, was still a relatively new concept. They incorporated Datadog in 2010 as a cloud-first monitoring solution, and launched general availability for the product in 2012.

Luck didn’t just bring the founders together twice, it also defined the currents of their market. Datadog was among the first cloud-native monitoring solutions, and the superlative success of cloud infrastructure in penetrating the enterprise the past few years has benefitted the company enormously. We had “exactly the right product at the right time,” Pomel said, and “a lot of it was luck.” He continued, “It’s healthy to recognize that not everything comes from your genius, because what works once doesn’t always work a second time.”

While startups have been a feature in New York for decades, enterprise infrastructure was in many ways in a dark age when the company launched, which made early fundraising difficult. “None of the West Coast investors were listening,” Pomel said, and “East Coast investors didn’t understand the infrastructure space well enough to take risks.” Even when he could get a West Coast VC to chat with him, they “thought it was a form of mental impairment to start an infrastructure startup in New York.”

Those fundraising difficulties ended up proving a boon for Datadog, because it forced the company to connect with customers much earlier and more often than it might have otherwise. Pomel said, “it forced us to spend all of our time with customers and people who were related to the problem” and ultimately, “it grounded us in the customer problem.” Pomel believes that the company’s early DNA of deeply listening to customers has allowed it to continue to outcompete its rivals on the West Coast.

More success is likely to come as companies continue to move their infrastructure onto the cloud. Datadog used to have a roughly even mix of private and public cloud business, and now the balance is moving increasingly toward the public side. Even large financial institutions, which have been reticent in transitioning their infrastructures, have now started to aggressively embrace cloud as the future of computing in the industry, according to Pomel.

Datadog intends to continue to add new modules to its core monitoring toolkit and expand its team. As the company has grown, so has the need to put in place more processes as parts of the company break. Quoting his co-founder, Pomel said the message to employees is “don’t mind the rattling sound — it is a space heater, not an airliner” and “things are going to break and change, and it is normal.”

Much as Datadog has bridged the gap between developers and ops, Pomel hopes to continue to give back to the New York startup ecosystem by bridging the gap between technical startups and venture capital. He has made a series of angel investments into local emerging enterprise and data startups, including Generable, Seva, and Windmill. Hard work and a lot of luck is propelling Datadog into the top echelon of enterprise startups, pulling New York along with it.