Stonehenge Technology Labs bags $2M, gives CPG companies one-touch access to metrics

Stonehenge Technology Labs wants consumer packaged goods companies to gain meaningful use from all of the data they collect. It announced $2 million in seed funding for its STOPWATCH commerce enhancement software.

The round was led by Irish Angels, with participation from Bread and Butter Ventures, Gaingels, Angeles Investors, Bonfire Ventures and Red Tail Venture Capital.

CEO Meagan Kinmonth Bowman founded the Arkansas-based company in 2019 after working at Hallmark, where she was tasked with the digital transformation of the company.

“This was not a consequence of them not being good marketers or connected to mom, but they didn’t have the technology to connect their back end with retailers like Amazon, Walmart or Hobby Lobby,” she told TechCrunch. “There are so many smart people building products to connect with consumers. The challenge is the big guys are doing things the same way and not thinking like the 13-year-olds on social media that are actually winning the space.”

Kinmonth Bowman and her team recognized that there was a missing middle layer connecting the world of dotcom with brick and mortar. If that middle layer could be applied to enterprise resource planning systems and integrate public and private data feeds, a company could be just as profitable online as it is in traditional retail, she said.

Stonehenge’s answer to that is STOPWATCH, which takes in over 100 million rows of data per workspace per day, analyzes the data points, adds real-time alerts and provides the right data to the right people at the right time.
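
As a rough illustration of what a real-time alert on top of that feed might look like, here is a toy Python sketch; the metric, threshold and field names are invented for the example and are not STOPWATCH's actual schema.

```python
# Toy sketch: scan incoming rows and flag SKUs whose out-of-stock rate
# crosses a threshold. All field names here are hypothetical.
def check_alerts(rows, threshold=0.2):
    for row in rows:
        rate = row["oos_events"] / max(row["page_views"], 1)
        if rate > threshold:
            yield f"ALERT {row['sku']}: out-of-stock rate {rate:.0%}"

stream = [
    {"sku": "A-1", "oos_events": 30, "page_views": 100},
    {"sku": "B-7", "oos_events": 2, "page_views": 500},
]
print(list(check_alerts(stream)))  # flags A-1 at 30%
```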

Dan Rossignol, a B2B SaaS investor, said the CPG world is also about the consumerization of everyday life, and that the global pandemic showed people could run a productive day and business even from home. Rossignol likes to invest in underestimated founders and saw in Stonehenge a company that is getting CPGs out from underneath antiquated technologies.

“What Meagan and her team are doing is really interesting,” he added. “At this stage, it is all about the people, and the ability to bet on doing something larger.”

Kinmonth Bowman said she had the opportunity to base the company in Silicon Valley, but chose Bentonville, Arkansas, instead to be closer to the more than 1,000 CPG companies based there that she felt were the prime customer base for STOPWATCH.

Stonehenge was originally created as a subsidiary of a consulting company, but in 2018, one of its clients said it wanted just the software rather than also paying for the consulting piece. The business was split, and Stonehenge went underground for eight months to build a software product specifically for that client.

Kinmonth Bowman admits the technology itself is not that sexy — it uses extract, transform and load (ETL) jobs to pull data from hundreds of systems into a “lake house,” silos it by retailer and other factors, and then presents the data in different ways. For example, the CEO will want different metrics than product teams.
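
A minimal Python sketch of that three-step pattern (extract into a lake house, silo by retailer, present per audience), with invented table and field names rather than STOPWATCH's actual schema:

```python
import pandas as pd

# Extract: rows pulled from many source systems (a stand-in for real feeds).
raw = pd.DataFrame([
    {"retailer": "Amazon", "sku": "A-1", "units": 120, "revenue": 1440.0},
    {"retailer": "Walmart", "sku": "A-1", "units": 200, "revenue": 2300.0},
    {"retailer": "Amazon", "sku": "B-7", "units": 80, "revenue": 960.0},
])

# Silo: partition the lake house by retailer (and, in practice, other factors).
silos = {retailer: frame for retailer, frame in raw.groupby("retailer")}

# Present: different audiences get different cuts of the same data.
ceo_view = raw.groupby("retailer")["revenue"].sum()            # top-line numbers
product_view = raw.groupby("sku")[["units", "revenue"]].sum()  # per-SKU detail
print(ceo_view, product_view, sep="\n")
```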

Over the past year, the company has doubled its revenue and doubled its number of contracts. It already counts multiple Fortune 100 companies and emerging brands among its early users and plans to use the new funding to hire a sales team and go after some strategic relationships.

Stonehenge is also working on putting together a diverse workforce that mimics the users of the software, Kinmonth Bowman said. One of the challenges has been to get unique talent to move to Arkansas, but she said it is one she is eager to take on.

Meanwhile, Brett Brohl, managing partner at Bread and Butter Ventures, said the Stonehenge team “is just crazy enough, smart and driven” to build something great.

“All of the biggest companies have been around for a long time, but not a lot of large organizations have done a good job digitizing their businesses,” he said. “Even pre-COVID, they were building fill-in-the-blank digital transformations, but COVID accelerated technology and hit a lot of companies in the face. That was made more obvious to end consumers, which puts more pressure on companies to understand the need, which is good for STOPWATCH. It went from paper to Excel spreadsheets to the next cloud modification. The time is right for the next leap and how to use data.”


By Christine Hall

Cribl raises $200M to help enterprises do more with their data

At a time when remote work, cybersecurity attacks and increased privacy and compliance requirements threaten a company’s data, more companies are collecting and storing their observability data, but many find themselves locked in with vendors or struggling to access that data.

Enter Cribl. The San Francisco-based company is developing an “open ecosystem of data” for enterprises that utilizes unified data pipelines, called “observability pipelines,” to parse and route any type of data that flows through a corporate IT system. Users can then choose their own analytics tools and storage destinations like Splunk, Datadog and Exabeam, but without becoming dependent on a vendor.
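
The core idea is easy to see in miniature: parse each event once, then route it to every destination whose rule matches, so no single vendor owns the stream. The rules and sink names in this Python sketch are invented for illustration and are not Cribl's actual API.

```python
import json

def route(event: dict, sinks: dict) -> None:
    """Send one parsed event to every sink whose predicate matches."""
    for predicate, sink in sinks.values():
        if predicate(event):
            sink(event)

sinks = {
    # Security-relevant events might go to a SIEM; everything to cheap storage.
    "siem": (lambda e: e.get("level") == "error",
             lambda e: print("SIEM    <-", json.dumps(e))),
    "archive": (lambda e: True,
                lambda e: print("archive <-", json.dumps(e))),
}

for line in ['{"level": "info", "msg": "login"}',
             '{"level": "error", "msg": "denied"}']:
    route(json.loads(line), sinks)
```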

The company announced Wednesday a $200 million Series C round that values Cribl at $1.5 billion, according to a source close to the company. Greylock and Redpoint Ventures co-led the round and were joined by new investor IVP, existing investors Sequoia and CRV, and strategic investments from Citi Ventures and CrowdStrike. The new capital infusion brings Cribl’s total funding to $254 million since the company was started in 2017, Cribl co-founder and CEO Clint Sharp told TechCrunch.

Sharp did not discuss the valuation; however, he believes that the round is “validation that the observability pipeline category is legit.” Data is growing at a compound annual growth rate of 25%, and organizations are collecting five times more data today than they did 10 years ago, he explained.

“Ultimately, they want to ask and answer questions, especially for IT and security people,” Sharp added. “When Zoom sends data on who started a phone call, that might be data I need to know so I know who is on the call from a security perspective and who they are communicating with. Also, who is sending files to whom and what machines are communicating together in case there is a malicious actor. We can also find out who is having a bad experience with the system and what resources they can access to try and troubleshoot the problem.”

Cribl also enables users to choose how they want to store their data, which is different from competitors that often lock companies into using only their products. Instead, customers can buy the best products from different categories and they will all talk to each other through Cribl, Sharp said.

Though Cribl is developing a pipeline for data, Sharp sees it more as an “observability lake,” as more companies have differing data storage needs. The lake, he explains, is where all of the data that doesn’t need to go into an existing storage solution ends up. The pipelines send data to specific tools and collect it back, and whatever doesn’t fit flows into the lake, where companies can keep it longer and more cost-effectively for later use.

Cribl said it is seven times more efficient at processing event data and boasts a customer list that includes Whole Foods, Vodafone, FINRA, Fannie Mae and Cox Automotive.

Sharp went after additional funding after seeing huge traction in Cribl’s existing customer base, saying that “when you see that kind of traction, you want to keep doubling down.” His aim is to establish a presence in every North American city and in Europe, continue launching new products and grow the engineering team.

Up next, the company is focusing on go-to-market and engineering growth. Its headcount is 150 currently, and Sharp expects to grow that to 250 by the end of the year.

Over the last fiscal year, Cribl grew its revenue 293%, and Sharp expects that same trajectory for this year. The company is now at a growth stage, and with the new investment, he believes Cribl is the “future leader in observability.”

“This is a great investment for us, and every dollar, we believe, is going to create an outsized return as we are the only commercial company in this space,” he added.

Scott Raney, managing director at Redpoint Ventures, said his firm is a big investor in enterprise software, particularly in companies that help organizations leverage data to protect themselves, a sweet spot that Cribl falls into.

He feels Sharp, who came from Splunk, is leading a team that has accomplished a lot, has a vision and a handle on the business, and knows the market well. Where Splunk captures machine data and uses its systems to extract that data, Cribl is doing something similar in directing the data where it needs to go, while also enabling companies to utilize multiple vendors and build apps to sit on top of its infrastructure.

“Cribl is adding opportunity by enriching the data flowing through, and the benefits are going to be meaningful in cost reduction,” Raney said. “The attitude out there is to put data in cheaper places, and afford more flexibility to extract data. Step one is to make that transition, and step two is how to drive the data sitting there. Cribl is doing something that will go from being a big business to a legacy company 30 years from now.”


By Christine Hall

Salesforce’s Kathy Baxter is coming to TC Sessions: SaaS to talk AI

As the use of AI has grown and developed over the last several years, companies like Salesforce have tried to tap into it to improve their software and help customers operate faster and more efficiently. Kathy Baxter, principal architect for the ethical AI practice at Salesforce, will be joining us at TechCrunch Sessions: SaaS on October 27th to talk about the impact of AI on SaaS.

Baxter, who has more than 20 years of experience as a software architect, joined Salesforce in 2017 after more than a decade at Google in a similar role. We’re going to tap into her expertise on a panel discussing AI’s growing role in software.

Salesforce was one of the earlier SaaS companies to adopt AI, announcing its artificial intelligence tooling, which the company dubbed Einstein, in 2016. While the positioning makes it sound like a product, it’s actually much more than a single entity. It’s a platform component, which the various pieces of the Salesforce platform can tap into to take advantage of various types of AI to help improve the user experience.

That could involve feeding information to customer service reps on Service Cloud to make the call move along more efficiently, helping salespeople find the customers most likely to close a deal soon in the Sales Cloud or helping marketing understand the optimal time to send an email in the Marketing Cloud.

The company began building out its AI tooling early on with the help of 175 data scientists and has been expanding on that initial idea since. Other companies, both startups and established players like SAP, Oracle and Microsoft, have continued to build AI into their platforms as Salesforce has. Today, many SaaS companies have some underlying AI built into their service.

Baxter will join us to discuss the role of AI in software today, how it helps improve the operations of the service itself and what the implications are of using AI in your software service as it becomes a mainstream part of the SaaS development process.

In addition to our discussion with Baxter, the conference will also include Databricks’ Ali Ghodsi, UiPath’s Daniel Dines, Puppet’s Abby Kearns, and investors Casey Aylward and Sarah Guo, among others. We hope you’ll join us. It’s going to be a stimulating day.

Buy your pass now to save up to $100, and use CrunchMatch to make expanding your empire quick, easy and efficient. We can’t wait to see you in October!

Is your company interested in sponsoring or exhibiting at TC Sessions: SaaS 2021? Contact our sponsorship sales team by filling out this form.



By Ron Miller

Vista Equity takes minority stake in Canada’s Vena with $242M investment

Vena, a Canadian company focused on the Corporate Performance Management (CPM) software space, has raised $242 million in Series C funding from Vista Equity Partners.

As part of the financing, Vista Equity is taking a minority stake in the company. The round follows $25 million in financing from CIBC Innovation Banking last September, and brings Vena’s total raised since its 2011 inception to over $363 million.

Vena declined to provide any financial metrics or the valuation at which the new capital was raised, saying only that its “consistent growth and…strong customer retention and satisfaction metrics created real demand” as it considered raising its C round.

The company was originally founded as a B2B provider of planning, budgeting and forecasting software. Over time, it’s evolved into what it describes as a “fully cloud-native, corporate performance management platform” that aims to empower finance, operations and business leaders to “Plan to Grow” their businesses. Its customers hail from a variety of industries, including banking, SaaS, manufacturing, healthcare, insurance and higher education. Among its over 900 customers are the Kansas City Chiefs, Coca-Cola Consolidated, World Vision International and ELF Cosmetics.

Vena CEO Hunter Madeley told TechCrunch the latest raise is “mostly an acceleration story for Vena, rather than charting new paths.”

The company plans to use its new funds to build out and enable its go-to-market efforts as well as invest in its product development roadmap. It’s not really looking to enter new markets, considering it’s seeing what it describes as “tremendous demand” in the markets it currently serves directly and through its partner network.

“While we support customers across the globe, we’ll stay focused on growing our North American, U.K. and European business in the near term,” Madeley said.

Vena says it leverages the “flexibility and familiarity” of an Excel interface within its “secure” Complete Planning platform. That platform, it adds, brings people, processes and systems into a single source solution to help organizations automate and streamline finance-led processes, accelerate complex business processes and “connect the dots between departments and plan with the power of unified data.”            

Early backers JMI Equity and Centana Growth Partners will remain active, partnering with Vista “to help support Vena’s continued momentum,” the company said. As part of the raise, Vista Equity Managing Director Kim Eaton and Marc Teillon, senior managing director and co-head of Vista’s Foundation Fund, will join the company’s board.

“The pandemic has emphasized the need for agile financial planning processes as companies respond to quickly-changing market conditions, and Vena is uniquely positioned to help businesses address the challenges required to scale their processes through this pandemic and beyond,” said Eaton in a written statement. 

Vena currently has more than 450 employees across the U.S., Canada and the U.K., up from 393 last year at this time.


By Mary Ann Azevedo

YL Ventures sells its stake in cybersecurity unicorn Axonius for $270M

YL Ventures, the Israel-focused cybersecurity seed fund, today announced that it has sold its stake in cybersecurity asset management startup Axonius, which only a week ago announced a $100 million Series D funding round that now values it at around $1.2 billion.

ICONIQ Growth, Alkeon Capital Management, DTCP and Harmony Partners acquired YL Ventures’ stake for $270 million. This marks YL’s first return from its third fund, a $75 million vehicle it raised in 2017, and the largest return in the firm’s history.

With this sale, the firm’s third fund still has six portfolio companies remaining. YL closed its fourth fund with $120 million in committed capital in mid-2019.

Unlike YL, which focuses on early-stage companies — though it also tends to participate in some later-stage rounds — the investors that are buying its stake specialize in later-stage companies that are often on an IPO path. ICONIQ Growth has invested in the likes of Adyen, CrowdStrike, Datadog and Zoom, for example, and has also regularly partnered with YL Ventures on its later-stage investments.

“The transition from early-stage to late-stage investors just makes sense as we drive toward IPO, and it allows each investor to focus on what they do best,” said Dean Sysman, co-founder and CEO of Axonius. “We appreciate the guidance and support the YL Ventures team has provided during the early stages of our company and we congratulate them on this successful journey.”

To put this sale into perspective for the Silicon Valley- and Tel Aviv-based YL Ventures, it’s worth noting that it currently manages about $300 million. Its current portfolio includes the likes of Orca Security, Hunters and Cycode. This sale is a huge win for the firm.

Its most headline-grabbing exit so far was Twistlock, which was acquired by Palo Alto Networks for $410 million in 2019, but it has also seen exits of its portfolio companies to Microsoft, Proofpoint, CA Technologies and Walmart, among others. The fund participated in Axonius’ rounds from its $4 million seed in 2017 up to its $58 million Series C a year ago.

It seems like YL Ventures is taking a very pragmatic approach here. It doesn’t specialize in late-stage firms — and until recently, Israeli startups always tended to sell long before they got to a late-stage round anyway. And it can generate a nice — and guaranteed — return for its own investors, too.

“This exit netted $270 million in cash directly to our third fund, which had $75 million total in capital commitments, and this fund still has 6 outstanding portfolio companies remaining,” Yoav Leitersdorf, YL Ventures’ founder and managing partner, told me. “Returning multiple times that fund now with a single exit, with the rest of the portfolio companies still there for the upside is the most responsible — yet highly profitable path — we could have taken for our fund at this time. And all this while diverting our energies and means more towards our seed-stage companies (where our help is more impactful), and at the same time supporting Axonius by enabling it to bring aboard such excellent late-stage investors as ICONIQ and Alkeon – a true win-win-win situation for everyone involved!”

He also noted that this sale achieved a top-decile return for the firm’s limited partners and allows it to focus its resources and attention toward the younger companies in its portfolio.


By Frederic Lardinois

Contrast launches its security observability platform

Contrast, a developer-centric application security company with customers that include Liberty Mutual Insurance, NTT Data, AXA and Bandwidth, today announced the launch of its security observability platform. The idea here is to offer developers a single pane of glass to manage an application’s security across its lifecycle, combined with real-time analysis and reporting, as well as remediation tools.

“Every line of code that’s happening increases the risk to a business if it’s not secure,” said Contrast CEO and chairman Alan Naumann. “We’re focused on securing all that code that businesses are writing for both automation and digital transformation.”

Over the course of the last few years, the well-funded company, which raised a $65 million Series D round last year, launched numerous security tools that cover a wide range of use cases from automated penetration testing to cloud application security and now DevOps — and this new platform is meant to tie them all together.

DevOps, the company argues, is really what necessitates a platform like this, given that developers now push more code into production than ever — and the onus of ensuring that this code is secure now often falls on them as well.

Traditionally, Naumann argues, security services have focused on analyzing the code itself and looking at traffic.

“We think at the application layer, the same principles of observability apply that have been used in the IT infrastructure space,” he said. “Specifically, we do instrumentation of the code and we weave security sensors into the code as it’s being developed and are looking for vulnerabilities and observing running code. […] Our view is: the world’s most complex systems are best when instrumented, whether it’s an airplane, a spacecraft, an IT infrastructure. We think the same is true for code. So our breakthrough is applying instrumentation to code and observing for security vulnerabilities.”
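
A drastically simplified sketch of that idea: a “sensor” woven around a function observes its inputs as the code runs. Contrast does this at the agent level inside running applications; the Python decorator and the naive injection check below are purely illustrative.

```python
import functools

def security_sensor(func):
    """Wrap a function with a runtime 'sensor' that watches its inputs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and ("'" in value or "--" in value):
                print(f"[sensor] suspicious input to {func.__name__}: {value!r}")
        return func(*args, **kwargs)
    return wrapper

@security_sensor
def find_user(name: str) -> str:
    # Deliberately unsafe string interpolation, so the sensor has work to do.
    return f"SELECT * FROM users WHERE name = '{name}'"

find_user("alice")         # passes quietly
find_user("x' OR '1'='1")  # flagged at runtime
```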

With this new platform, Contrast is aggregating information from its existing systems into a single dashboard. And while Contrast observes the code throughout its lifecycle, it also scans for vulnerabilities whenever a developer checks code into the CI/CD pipeline, thanks to integrations with most of the standard tools like Jenkins. It’s worth noting that the service also scans for vulnerabilities in open-source libraries. Once deployed, Contrast’s new platform keeps an eye on the data that runs through the various APIs and systems the application connects to and scans for potential security issues there as well.

The platform currently supports all of the large cloud providers like AWS, Azure and Google Cloud, and languages and frameworks like Java, Python, .NET and Ruby.



By Frederic Lardinois

Splunk acquires Plumbr and Rigor to build out its observability platform

Data platform Splunk today announced that it has acquired two startups, Plumbr and Rigor, to build out its new Observability Suite, which is also launching today. Plumbr is an application performance monitoring service, while Rigor focuses on digital experience monitoring, using synthetic monitoring and optimization tools to help businesses optimize their end-user experiences. Both of these acquisitions complement the technology and expertise Splunk acquired when it bought SignalFx for over $1 billion last year.

Splunk did not disclose the price of these acquisitions, but Estonia-based Plumbr had raised about $1.8 million, while Atlanta-based Rigor raised a debt round earlier this year.

When Splunk acquired SignalFx, it said it did so in order to become a leader in observability and APM. As Splunk CTO Tim Tully told me, the idea here now is to accelerate this process.

“Because a lot of our users and our customers are moving to the cloud really, really quickly, the way that they monitor [their] applications changed because they’ve gone to serverless and microservices a ton,” he said. “So we entered that space with those acquisitions, we quickly folded them together with these next two acquisitions. What Plumbr and Rigor do is really fill out more of the portfolio.”

He noted that Splunk was especially interested in Plumbr’s bytecode instrumentation and its real-user monitoring capabilities, and Rigor’s synthetics capabilities around digital experience monitoring (DEM). “By filling in those two pieces of the portfolio, it gives us a really amazing set of solutions because DEM was the missing piece for our APM strategy,” Tully explained.

With the launch of its Observability Suite, Splunk is now pulling together a lot of these capabilities into a single product — which also features a new design that makes it stand apart from the rest of Splunk’s tools. It combines logs, metrics, traces, digital experience, user monitoring, synthetics and more.

“At Yelp, our engineers are responsible for hundreds of different microservices, all aimed at helping people find and connect with great local businesses,” said Chris Gordon, Technical Lead at Yelp, where his team has been testing the new suite. “Our Production Observability team collaborates with Engineering to improve visibility into the performance of key services and infrastructure. Splunk gives us the tools to empower engineers to monitor their own services as they rapidly ship code, while also providing the observability team centralized control and visibility over usage to ensure we’re using our monitoring resources as efficiently as possible.”


By Frederic Lardinois

IBM Cloud suffers prolonged outage

The IBM Cloud is currently suffering a major outage, and with that, multiple services that are hosted on the platform are also down, including everybody’s favorite tech news aggregator, Techmeme.

It looks like the problems started around 2:30pm PT and spread from there. Best we can tell, this is a worldwide problem and involves a networking issue, but IBM’s own status page isn’t actually loading anymore and returns an internal server error, so we don’t quite know the extent of the outage or what triggered it. IBM Cloud’s Twitter account has also remained silent, though we found a status page for IBM Aspera hosted on a third-party server, which seems to confirm that this is likely a worldwide networking issue.

IBM Cloud, which published a paper about ensuring zero downtime in April, also suffered a minor outage in its Dallas data center in March.

We’ve reached out to IBM’s PR team and will update this post once we get more information.


By Frederic Lardinois

Puppet names former Cloud Foundry Foundation executive director Abby Kearns as CTO

Puppet, the Portland-based infrastructure automation company, today announced that it has named former Cloud Foundry Foundation executive director Abby Kearns as its new CTO. She’s replacing Deepak Giridharagopal, who became CTO in 2016.

Kearns stepped down from her role at the Cloud Foundry Foundation earlier this month after holding that position since 2016. At the time, she wasn’t quite ready to reveal her next move, though, and her taking the CTO job at Puppet comes as a bit of a surprise. Despite a lot of usage and hype in its early days, Puppet isn’t often seen as an up-and-coming company anymore, after all. But Kearns argues that a lot of this is due to perception.

“Puppet had great technology and really drove the early DevOps movement, but they kind of fell off the face of the map,” she said. “Nobody thought of them as anything other than config management, and so I was like, well, you know, problem number one: fix that perception problem if that’s no longer the reality or otherwise, everyone thinks you’re dead.”

Kearns had already been talking to Puppet CEO Yvonne Wassenaar, who took the job in January 2019; she joined the product advisory board about a year ago, and the discussion about her joining the company became serious a few months later.

“We started talking earlier this year,” said Kearns. “She said: ‘You know, wouldn’t it be great if you could come help us? I’m building out a brand new executive team. We’re really trying to reshape the company.’ And I got really excited about the team that she built. She’s got a really fantastic new leadership team; all of them have been there for less than a year. They have a new CRO, new CMO. She’s really assembled a fantastic team of people that are super smart, but also really thoughtful people.”

Kearns argues that Puppet’s product has really changed, but that the company didn’t really talk about it enough, despite the fact that 80% of the Global 5,000 are customers.

Given the COVID-19 pandemic, Kearns has obviously not been able to meet the Puppet team yet, but she told me that she’s starting to dig deeper into the company’s product portfolio and put together a strategy. “There’s just such an immensely talented team here. And I realize every startup tells you that, but really, there’s actually a lot of talented people here that are really nice. And I guess maybe it’s the Portland in them, but everyone’s nice,” she said.

“Abby is keenly aware of Puppet’s mission, having served on our Product Advisory Board for the last year, and is a technologist at heart,” said Wassenaar. “She brings a great balance to this position for us – she has deep experience in the enterprise and understands how to solve problems at massive scale.”

In addition to Kearns, former Cloud Foundry Foundation VP of marketing Devin Davis also joined Puppet as the company’s VP of corporate marketing and communications.


By Frederic Lardinois

Fishtown Analytics raises $12.9M Series A for its open-source analytics engineering tool

Philadelphia-based Fishtown Analytics, the company behind the popular open-source data engineering tool dbt, today announced that it has raised a $12.9 million Series A round led by Andreessen Horowitz, with the firm’s general partner Martin Casado joining the company’s board.

“I wrote this blog post in early 2016, essentially saying that analysts needed to work in a fundamentally different way,” Fishtown founder and CEO Tristan Handy told me, when I asked him about how the product came to be. “They needed to work in a way that much more closely mirrored the way the software engineers work and software engineers have been figuring this shit out for years and data analysts are still like sending each other Microsoft Excel docs over email.”

The dbt open-source project forms the basis of this. It allows anyone who can write SQL queries to transform data and then load it into their preferred analytics tools. As such, it sits in-between data warehouses and the tools that load data into them on one end, and specialized analytics tools on the other.

As Casado noted when I talked to him about the investment, data warehouses have now made it affordable for businesses to store all of their data before it is transformed. So what was traditionally “extract, transform, load” (ETL) has now become “extract, load, transform” (ELT). Andreessen Horowitz is already invested in Fivetran, which helps businesses move their data into their warehouses, so it makes sense for the firm to also tackle the other side of this business.
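
The shift is easy to demonstrate with Python's built-in sqlite3 standing in for a warehouse: raw data is loaded untransformed first, and the transformation is then just SQL run inside the warehouse, which is the layer dbt manages. Table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a real data warehouse

# Extract + Load: raw records land in the warehouse untransformed.
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                 [(1, 10.0, "paid"), (2, 0.0, "void"), (3, 25.5, "paid")])

# Transform: the "T" now happens last, as a SQL model inside the warehouse.
conn.execute("""
    CREATE TABLE orders AS
    SELECT id, amount FROM raw_orders WHERE status = 'paid'
""")
print(conn.execute("SELECT * FROM orders").fetchall())  # [(1, 10.0), (3, 25.5)]
```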

“Dbt is, as far as we can tell, the leading community for transformation and it’s a company we’ve been tracking for at least a year,” Casado said. He also argued that data analysts — unlike data scientists — are not really catered to as a group.

Even though it has been around for a few years now, Fishtown hadn’t raised a lot of money before this round, save for a small SAFE round from Amplify.

But Handy argued that the company needed this time to prove that it was on to something and build a community. That community now consists of more than 1,700 companies that use the dbt project in some form and over 5,000 people in the dbt Slack community. Fishtown also now has over 250 dbt Cloud customers and the company signed up a number of big enterprise clients earlier this year. With that, the company needed to raise money to expand and also better service its current list of customers.

“We live in Philadelphia. The cost of living is low here and none of us really care to make a quadro-billion dollars, but we do want to answer the question of how do we best serve the community,” Handy said. “And for the first time, in the early part of the year, we were like, holy shit, we can’t keep up with all of the stuff that people need from us.”

The company plans to expand the team from 25 to 50 employees in 2020, and with those hires, the team plans to improve and expand the product, especially its IDE for data analysts, which Handy admitted could use a bit more polish.


By Frederic Lardinois

Databricks makes bringing data into its ‘lakehouse’ easier

Databricks today announced the launch of its new Data Ingestion Network of partners and of its Databricks Ingest service. The idea here is to make it easier for businesses to combine the best of data warehouses and data lakes into a single platform — a concept Databricks likes to call ‘lakehouse.’

At the core of the company’s lakehouse is Delta Lake, Databricks’ Linux Foundation-managed open-source project that brings a new storage layer to data lakes that helps users manage the lifecycle of their data and ensures data quality through schema enforcement, log records and more. Databricks users can now work with the first five partners in the Ingestion Network — Fivetran, Qlik, Infoworks, StreamSets, Syncsort — to automatically load their data into Delta Lake. To ingest data from these partners, Databricks customers don’t have to set up any triggers or schedules — instead, data automatically flows into Delta Lake.
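
For a sense of what that storage layer does, here is a minimal sketch of Delta Lake's schema enforcement, assuming the pyspark and delta-spark packages are installed; the path and column names are placeholders.

```python
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (SparkSession.builder
           .config("spark.sql.extensions",
                   "io.delta.sql.DeltaSparkSessionExtension")
           .config("spark.sql.catalog.spark_catalog",
                   "org.apache.spark.sql.delta.catalog.DeltaCatalog"))
spark = configure_spark_with_delta_pip(builder).getOrCreate()

events = spark.createDataFrame([(1, "click"), (2, "view")], ["id", "action"])
events.write.format("delta").mode("append").save("/tmp/delta/events")

# Rows with a mismatched schema are rejected on write: that is the schema
# enforcement the Delta Lake storage layer adds on top of a plain data lake.
bad = spark.createDataFrame([(3, "click", "extra")], ["id", "action", "oops"])
try:
    bad.write.format("delta").mode("append").save("/tmp/delta/events")
except Exception as exc:
    print("rejected by schema enforcement:", type(exc).__name__)
```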

“Until now, companies have been forced to split up their data into traditional structured data and big data, and use them separately for BI and ML use cases. This results in siloed data in data lakes and data warehouses, slow processing and partial results that are too delayed or too incomplete to be effectively utilized,” says Ali Ghodsi, co-founder and CEO of Databricks. “This is one of the many drivers behind the shift to a Lakehouse paradigm, which aspires to combine the reliability of data warehouses with the scale of data lakes to support every kind of use case. In order for this architecture to work well, it needs to be easy for every type of data to be pulled in. Databricks Ingest is an important step in making that possible.”

Databricks VP of Product Marketing Bharath Gowda also tells me that this will make it easier for businesses to perform analytics on their most recent data and hence be more responsive when new information comes in. He also noted that users will be able to better leverage their structured and unstructured data for building better machine learning models, as well as to perform more traditional analytics on all of their data instead of just a small slice that’s available in their data warehouse.



By Frederic Lardinois

Microsoft acquires data privacy and governance service BlueTalon

Microsoft today announced that it has acquired BlueTalon, a data privacy and governance service that helps enterprises set policies for how their employees can access their data. The service then enforces those policies across most popular data environments and provides tools for auditing policies and access, too.

Neither Microsoft nor BlueTalon disclosed the financial details of the transaction. Ahead of today’s acquisition, BlueTalon had raised about $27.4 million, according to Crunchbase. Investors include Bloomberg Beta, Maverick Ventures, Signia Venture Partners and Stanford’s StartX fund.

“The IP and talent acquired through BlueTalon brings a unique expertise at the apex of big data, security and governance,” writes Rohan Kumar, Microsoft’s corporate VP for Azure Data. “This acquisition will enhance our ability to empower enterprises across industries to digitally transform while ensuring right use of data with centralized data governance at scale through Azure.”

Unsurprisingly, the BlueTalon team will become part of the Azure Data Governance group, where the team will work on enhancing Microsoft’s capabilities around data privacy and governance. Microsoft already offers access and governance control tools for Azure, of course. As virtually all businesses become more data-centric, though, the need for centralized access controls that work across systems is only going to increase and new data privacy laws aren’t making this process easier.
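
A bare-bones illustration of the centralized-policy idea, with roles, columns and rules invented for the example; BlueTalon's real engine enforces far richer policies, consistently, across many data environments.

```python
# One policy store, consulted by every data environment before serving data.
POLICIES = {
    ("analyst", "customers.email"): "deny",
    ("analyst", "customers.region"): "allow",
    ("admin", "customers.email"): "allow",
}

def authorize(role: str, column: str) -> bool:
    """Default-deny lookup against the central policy table."""
    return POLICIES.get((role, column), "deny") == "allow"

assert authorize("admin", "customers.email")
assert authorize("analyst", "customers.region")
assert not authorize("analyst", "customers.email")  # masked or blocked
print("policy checks passed")
```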

“As we began exploring partnership opportunities with various hyperscale cloud providers to better serve our customers, Microsoft deeply impressed us,” BlueTalon CEO Eric Tilenius, who has clearly read his share of “our incredible journey” blog posts, explains in today’s announcement. “The Azure Data team was uniquely thoughtful and visionary when it came to data governance. We found them to be the perfect fit for us in both mission and culture. So when Microsoft asked us to join forces, we jumped at the opportunity.”


By Frederic Lardinois

Why Daimler moved its big data platform to the cloud

Like virtually every big enterprise company, a few years ago, the German auto giant Daimler decided to invest in its own on-premises data centers. And while those aren’t going away anytime soon, the company today announced that it has successfully moved its on-premises big data platform to Microsoft’s Azure cloud. This new platform, which the company calls eXtollo, is Daimler’s first major service to run outside of its own data centers, though it’ll probably not be the last.

As Guido Vetter, the head of Daimler’s corporate center of excellence for advanced analytics and big data, told me, the company started getting interested in big data about five years ago. “We invested in technology — the classical way, on-premise — and got a couple of people on it. And we were investigating what we could do with data because data is transforming our whole business as well,” he said.

By 2016, the size of the organization had grown to the point where a more formal structure was needed to enable the company to handle its data at a global scale. At the time, the buzzword was ‘data lakes’ and the company started building its own in order to build out its analytics capacities.

“Sooner or later, we hit the limits as it’s not our core business to run these big environments,” Vetter said. “Flexibility and scalability are what you need for AI and advanced analytics and our whole operations are not set up for that. Our backend operations are set up for keeping a plant running and keeping everything safe and secure.” But in this new world of enterprise IT, companies need to be able to be flexible and experiment — and, if necessary, throw out failed experiments quickly.

So about a year and a half ago, Vetter’s team started the eXtollo project to bring all the company’s activities around advanced analytics, big data and artificial intelligence into the Azure Cloud and just over two weeks ago, the team shut down its last on-premises servers after slowly turning on its solutions in Microsoft’s data centers in Europe, the U.S. and Asia. All in all, the actual transition between the on-premises data centers and the Azure cloud took about nine months. That may not seem fast, but for an enterprise project like this, that’s about as fast as it gets (and for a while, it fed all new data into both its on-premises data lake and Azure).

If you work for a startup, then all of this probably doesn’t seem like a big deal, but for a more traditional enterprise like Daimler, even just giving up control over the physical hardware where your data resides was a major culture change and something that took quite a bit of convincing. In the end, the solution came down to encryption.

“We needed the means to secure the data in the Microsoft data center with our own means that ensure that only we have access to the raw data and work with the data,” explained Vetter. In the end, the company decided to use the Azure Key Vault to manage and rotate its encryption keys. Indeed, Vetter noted that knowing that the company had full control over its own data was what allowed this project to move forward.
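
A hedged sketch of that setup using Microsoft's azure-identity and azure-keyvault-keys Python packages; the vault URL and key name are placeholders. Creating a key under an existing name adds a new version, which is one simple way to rotate.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

# Placeholder vault URL; credentials come from the environment.
client = KeyClient(vault_url="https://<your-vault>.vault.azure.net",
                   credential=DefaultAzureCredential())

# Creating a key with an existing name produces a new key version, so
# clients that always fetch the latest version pick up the rotation.
key = client.create_rsa_key("data-lake-encryption-key", size=2048)
print(key.name, key.properties.version)
```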

Vetter tells me that the company obviously looked at Microsoft’s competitors as well, but he noted that his team didn’t find a compelling offer from other vendors in terms of functionality and the security features that it needed.

Today, Daimler’s big data unit uses tools like HDInsight and Azure Databricks, which cover more than 90 percent of the company’s current use cases. In the future, Vetter also wants to make it easier for less experienced users to use self-service tools to launch AI and analytics services.

While cost is often a factor that counts against the cloud, since renting server capacity isn’t cheap, Vetter argues that this move will actually save the company money, and that storage costs, especially, are going to be cheaper in the cloud than in its on-premises data center (and chances are that Daimler, given its size and prestige as a customer, isn’t exactly paying the same rack rate that others are paying for the Azure services).

As with so many big data AI projects, predictions are the focus of much of what Daimler is doing. That may mean looking at a car’s data and error code and helping the technician diagnose an issue or doing predictive maintenance on a commercial vehicle. Interestingly, the company isn’t currently bringing any of its own IoT data from its plants to the cloud. That’s all managed in the company’s on-premises data centers because it wants to avoid the risk of having to shut down a plant because its tools lost the connection to a data center, for example.


By Frederic Lardinois

Humio raises $9M Series A for its real-time log analysis service

Humio, a startup that provides a real-time log analysis service for on-premises and cloud infrastructures, today announced that it has raised a $9 million Series A round led by Accel. It previously raised its seed round from WestHill and Trifork.

The company, which has offices in San Francisco, the U.K. and Denmark, tells me that it saw a 13x increase in its annual revenue in 2018. Current customers include Bloomberg, Microsoft and Netlify.

“We are experiencing a fundamental shift in how companies build, manage and run their systems,” said Humio CEO Geeta Schmidt. “This shift is driven by the urgency to adopt cloud-based and microservice-driven application architectures for faster development cycles, and dealing with sophisticated security threats. These customer requirements demand a next-generation logging solution that can provide live system observability and efficiently store the massive amounts of log data they are generating.”

To offer them this solution, Humio raised this round with an eye toward fulfilling the demand for its service, expanding its research and development teams and moving into more markets across the globe.

As Schmidt also noted, many organizations are rather frustrated by the log management and analytics solutions they currently have in place. “Common frustrations we hear are that legacy tools are too slow — on ingestion, searches and visualizations — with complex and costly licensing models,” she said. “Ops teams want to focus on operations — not building, running and maintaining their log management platform.”

To build this next-generation analysis tool, Humio built its own time series database engine to ingest the data, using open-source technologies like Scala, Elm and Kafka in the backend. As data enters the pipeline, it’s pushed through live searches and then stored for later queries. As Humio VP of Engineering Christian Hvitved tells me, though, running ad-hoc queries is the exception; most users only do so when they encounter bugs or a DDoS attack.

The query language used for the live filters is also pretty straightforward. That was a conscious decision, Hvitved said. “If it’s too hard, then users don’t ask the question,” he said. “We’re inspired by the Unix philosophy of using pipes, so in Humio, larger searches are built by combining smaller searches with pipes. This is very familiar to developers and operations people since it is how they are used to using their terminal.”
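
This is not Humio's actual query syntax, but the composition idea translates directly: small filters are chained so each stage narrows the stream produced by the one before it. A Python sketch with invented log fields:

```python
def grep(field, value):
    """Build one pipeline stage that keeps events where field == value."""
    def stage(events):
        return (e for e in events if e.get(field) == value)
    return stage

def pipe(events, *stages):
    """Chain stages left to right, like Unix pipes."""
    for stage in stages:
        events = stage(events)
    return events

logs = [
    {"service": "auth", "level": "error", "msg": "bad token"},
    {"service": "auth", "level": "info", "msg": "login ok"},
    {"service": "web", "level": "error", "msg": "timeout"},
]

# Roughly analogous to a query like: service=auth | level=error
for event in pipe(logs, grep("service", "auth"), grep("level", "error")):
    print(event)
```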

Humio charges its customers based on how much data they want to ingest and for how long they want to store it. Pricing starts at $200 per month for 30 days of data retention and 2 GB of ingested data.


By Frederic Lardinois

Forget Watson, the Red Hat acquisition may be the thing that saves IBM

With its latest $34 billion acquisition of Red Hat, IBM may have found something more elementary than “Watson” to save its flagging business.

Though the acquisition of Red Hat is by no means a guaranteed victory for the Armonk, N.Y.-based computing company, which has had more downs than ups over the past five years, it seems to be a better bet for “Big Blue” than an artificial intelligence program that was always more hype than reality.

Indeed, commentators are already noting that this may be a case where IBM finally hangs up the Watson hat and returns to the enterprise software and services business that has always been its core competency (albeit one that has been weighted far more heavily on consulting services — to the detriment of the company’s business).

Watson, the business division focused on artificial intelligence whose public claims were always more marketing than actually market-driven, has not performed as well as IBM had hoped and investors were losing their patience.

Critics — including analysts at the investment bank Jefferies (as early as one year ago) — were skeptical of Watson’s ability to deliver IBM from its business woes.

As we wrote at the time:

Jefferies pulls from an audit of a partnership between IBM Watson and MD Anderson as a case study for IBM’s broader problems scaling Watson. MD Anderson cut its ties with IBM after wasting $60 million on a Watson project that was ultimately deemed, “not ready for human investigational or clinical use.”

The MD Anderson nightmare doesn’t stand on its own. I regularly hear from startup founders in the AI space that their own financial services and biotech clients have had similar experiences working with IBM.

The narrative isn’t the product of any single malfunction, but rather the result of overhyped marketing, deficiencies in operating with deep learning and GPUs and intensive data preparation demands.

That’s not the only trouble IBM has had with Watson’s healthcare results. Earlier this year, the online medical journal Stat reported that Watson was giving clinicians recommendations for cancer treatments that were “unsafe and incorrect” — based on the training data it had received from the company’s own engineers and doctors at Sloan-Kettering who were working with the technology.

All of these woes were reflected in the company’s latest earnings call, where it reported falling revenues primarily from the Cognitive Solutions business, which includes Watson’s artificial intelligence and supercomputing services. Though IBM’s chief financial officer pointed to “mid-to-high” single digit growth from Watson’s health business in the quarter, the transaction processing software business fell by 8% and the company’s suite of hosted software services is basically an afterthought for businesses gravitating to Microsoft, Alphabet and Amazon for cloud services.

To be sure, Watson is only one of the segments that IBM had been hoping to tap for its future growth; and while it was a huge investment area for the company, the company always had its eyes partly fixed on the cloud computing environment as it looked for areas of growth.

It’s this area of cloud computing where IBM hopes that Red Hat can help it gain ground.

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” said Ginni Rometty, IBM Chairman, President and Chief Executive Officer, in a statement announcing the acquisition. “IBM will become the world’s number-one hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.”

The acquisition also puts an incredible amount of marketing power behind Red Hat’s various open source services business — giving all of those IBM project managers and consultants new projects to pitch and maybe juicing open source software adoption a bit more aggressively in the enterprise.

As Red Hat chief executive Jim Whitehurst told TheStreet in September, “The big secular driver of Linux is that big data workloads run on Linux. AI workloads run on Linux. DevOps and those platforms, almost exclusively Linux,” he said. “So much of the net new workloads that are being built have an affinity for Linux.”


By Jonathan Shieber