The OpenStack Foundation becomes the Open Infrastructure Foundation

This has been a long time coming, but the OpenStack Foundation today announced that it is changing its name to the “Open Infrastructure Foundation,” starting in 2021.

The announcement, which the foundation made at its virtual developer conference, doesn’t exactly come as a surprise. Over the course of the last few years, the organization started adding new projects that went well beyond the core OpenStack project and renamed its conference to the “Open Infrastructure Summit.” The organization actually filed for the “Open Infrastructure Foundation” trademark back in April.

Image Credits: OpenStack Foundation

After years of hype, the open-source OpenStack project hit a bit of a wall in 2016, as the market started to consolidate. The project itself, which helps enterprises run their private cloud, found its niche in the telecom space, though, and continues to thrive as one of the world’s most active open-source projects. Indeed, I regularly hear from OpenStack vendors that they are now seeing record sales numbers — despite the lack of hype. With the project being stable, though, the Foundation started casting a wider net and added additional projects like the popular Kata Containers runtime and CI/CD platform Zuul.

“We are officially transitioning and becoming the Open Infrastructure Foundation,” longtime OpenStack Foundation executive director Jonathan Bryce told me. “That is something that I think is an awesome step that’s built on the success that our community has spawned both within projects like OpenStack, but also as a movement […], which is [about] how do you give people choice and control as they build out digital infrastructure? And that is, I think, an awesome mission to have. And that’s what we are recognizing and acknowledging and setting up for another decade of doing that together with our great community.”

In many ways, it’s been more of a surprise that the organization waited as long as it did. As the foundation’s COO Mark Collier told me, the team waited because it wanted to make sure that it did this right.

“We really just wanted to make sure that all the stuff we learned when we were building the OpenStack community and with the community — that started with a simple idea of ‘open source should be part of cloud, for infrastructure.’ That idea has just spawned so much more open source than we could have imagined. Of course, OpenStack itself has gotten bigger and more diverse than we could have imagined,” Collier said.

As part of today’s announcement, the group is also adding four new members at its Platinum tier, its highest membership level: Ant Group, the Alibaba affiliate behind Alipay; embedded systems specialist Wind River; China’s Fiberhome (which was previously a Gold member); and Facebook Connectivity. To become a Platinum member, companies have to contribute $350,000 per year to the foundation and must have at least two full-time employees contributing to its projects.

“If you look at those companies that we have as Platinum members, it’s a pretty broad set of organizations,” Bryce noted. “AT&T, the largest carrier in the world. And then you also have a company like Ant, who’s the largest payment processor in the world and a massive financial services company overall — over to Ericsson, that does telco, Wind River, that does defense and manufacturing. And I think that speaks to that everybody needs infrastructure. If we build a community — and we successfully structure these communities to write software with a goal of getting all of that software out into production, I think that creates so much value for so many people: for an ecosystem of vendors and for a great group of users and a lot of developers love working in open source because we work with smart people from all over the world.”

The OpenStack Foundation’s existing members are also on board and Bryce and Collier hinted at several new members who will join soon but didn’t quite get everything in place for today’s announcement.

We can probably expect the new foundation to start adding new projects next year, but it’s worth noting that the OpenStack project continues apace. The latest of the project’s biannual releases, dubbed “Victoria,” launched last week, with additional Kubernetes integrations, improved support for various accelerators and more. Nothing will really change for the project now that the foundation is changing its name — though it may end up benefiting from a reenergized and more diverse community that will build out projects at its periphery.


By Frederic Lardinois

Temporal raises $18.75M for its microservices orchestration platform

Temporal, a Seattle-based startup that is building an open-source, stateful microservices orchestration platform, today announced that it has raised an $18.75 million Series A round led by Sequoia Capital. Existing investors Addition Ventures and Amplify Partners also joined, together with new investor Madrona Venture Group. With this, the company has now raised a total of $25.5 million.

Founded by Maxim Fateev (CEO) and Samar Abbas (CTO), who created the open-source Cadence orchestration engine during their time at Uber, Temporal aims to make it easier for developers and operators to run microservices in production. Current users include the likes of Box and Snap.

“Before microservices, coding applications was much simpler,” Temporal’s Fateev told me. “Resources were always located in the same place — the monolith server with a single DB — which meant developers didn’t have to codify a bunch of guessing about where things were. Microservices, on the other hand, are highly distributed, which means developers need to coordinate changes across a number of servers in different physical locations.”

Those servers could go down at any time, so engineers often spend a lot of time building custom reliability code to make calls to these services. As Fateev argues, that’s table stakes and doesn’t help these developers create something that builds real business value. Temporal gives these developers access to a set of what the team calls “reliability primitives” that handle these use cases. “This means developers spend far more time writing differentiated code for their business and end up with a more reliable application than they could have built themselves,” said Fateev.
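
To make the problem concrete, here is a rough Python sketch (all names are hypothetical, and this is not Temporal’s API) of the kind of hand-rolled retry logic developers end up writing around every flaky service call, the boilerplate that Temporal’s reliability primitives are meant to replace:

```python
import time

def with_retries(fn, attempts=5, base_delay=0.01):
    """Call fn, retrying with exponential backoff on failure.

    This is the custom reliability code Fateev describes developers
    writing by hand around every downstream service call.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # back off: 10ms, 20ms, 40ms...

# A flaky downstream service that fails twice before succeeding.
calls = {"n": 0}

def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

result = with_retries(flaky_service)  # succeeds on the third attempt
```

Multiply this by every downstream call and every failure mode, and the appeal of having the platform handle it becomes clear.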

Temporal’s target user is virtually any developer who works with microservices — and wants them to be reliable. Because of this, the company’s web-based user interface — a read-only tool for administering and monitoring the system — isn’t the main focus here. The company also doesn’t have any plans to create a no-code/low-code workflow builder, Fateev tells me. However, since the project is open source, quite a few Temporal users build their own solutions on top of it.

The company itself plans to offer a cloud-based Temporal-as-a-Service offering soon. Interestingly, Fateev tells me that the team isn’t looking at offering enterprise support or licensing in the near future, though. “After spending a lot of time thinking it over, we decided a hosted offering was best for the open-source community and long term growth of the business,” he said.

Unsurprisingly, the company plans to use the new funding to improve its existing tool and build out this cloud service, with plans to launch it into general availability next year. At the same time, the team plans to stay true to its open-source roots and host events and provide more resources to its community.

“Temporal enables Snapchat to focus on building the business logic of a robust asynchronous API system without requiring a complex state management infrastructure,” said Steven Sun, Snap Tech Lead, Staff Software Engineer. “This has improved the efficiency of launching our services for the Snapchat community.”


By Frederic Lardinois

Atlassian Smarts adds machine learning layer across the company’s platform of services

Atlassian has long offered collaboration tools, often favored by developers and IT, with such stalwarts as Jira for help desk tickets, Confluence to organize your work and Bitbucket to organize your development deliverables. What it lacked was a machine learning layer across the platform to help users work smarter within and across the applications in the Atlassian family.

That changed today, when Atlassian announced it has been building that machine learning layer, called Atlassian Smarts, and is releasing several tools that take advantage of it. It’s worth noting that unlike Salesforce, which calls its intelligence layer Einstein, or Adobe, which calls its Sensei, Atlassian chose to forgo the cutesy marketing terms and just let the technology stand on its own.

Shihab Hamid, the founder of the Smarts and Machine Learning Team at Atlassian, who has been with the company for 14 years, says that they avoided a marketing name by design. “I think one of the things that we’re trying to focus on is actually the user experience and so rather than packaging or branding the technology, we’re really about optimizing teamwork,” Hamid told TechCrunch.

Hamid says that the goal of the machine learning layer is to remove the complexity involved with organizing people and information across the platform.

“Simple tasks like finding the right person or the right document become a challenge, or at least they slow down productivity and take time away from the creative high-value work that everyone wants to be doing. Teamwork itself is super messy and collaboration is complicated. These are human challenges that don’t really have one right solution,” he said.

He says that Atlassian has decided to solve these problems using machine learning with the goal of speeding up repetitive, time-intensive tasks. Much like Adobe or Salesforce, Atlassian has built this underlying layer of machine smarts, for lack of a better term, that can be distributed across their platform to deliver this kind of machine learning-based functionality wherever it makes sense for the particular product or service.

“We’ve invested in building this functionality directly into the Atlassian platform to bring together IT and development teams to unify work, so the Atlassian flagship products like JIRA and Confluence sit on top of this common platform and benefit from that common functionality across products. And so the idea is if we can build that common predictive capability at the platform layer we can actually proliferate smarts and benefit from the data that we gather across our products,” Hamid said.

The first pieces fit into this vision. For starters, Atlassian is offering a smart search tool that helps users find content across Atlassian tools faster by understanding who you are and how you work. “So by knowing where users work and what they work on, we’re able to proactively provide access to the right documents and accelerate work,” he said.

The second piece is more about collaboration and building teams with the best personnel for a given task. A new tool called predictive user mentions helps Jira and Confluence users find the right people for the job.

“What we’ve done with the Atlassian platform is actually baked in that intelligence, because we know what you work on and who you collaborate with, so we can predict who should be involved and brought into the conversation,” Hamid explained.

Finally, the company announced a tool specifically for Jira users that bundles together similar help requests, which should lead to faster resolution than handling them manually one at a time.

“We’re soon launching a feature in JIRA Service Desk that allows users to cluster similar tickets together, and operate on them to accelerate IT workflows, and this is done in the background using ML techniques to calculate the similarity of tickets, based on the summary and description, and so on.”
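
Atlassian hasn’t detailed which ML techniques it uses, but a toy version of that similarity-based bundling can be sketched in a few lines of Python, here using simple word-overlap (Jaccard) similarity on ticket summaries; the function names and threshold are illustrative only:

```python
def jaccard(a: str, b: str) -> float:
    """Similarity of two texts, measured as the overlap of their word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_tickets(tickets, threshold=0.4):
    """Greedily bundle tickets whose summaries are similar enough."""
    clusters = []
    for ticket in tickets:
        for cluster in clusters:
            # Compare against the first ticket in each existing cluster.
            if jaccard(ticket, cluster[0]) >= threshold:
                cluster.append(ticket)
                break
        else:
            clusters.append([ticket])  # no match: start a new cluster
    return clusters

tickets = [
    "cannot log in to VPN",
    "VPN log in not working",
    "printer out of toner",
]
clusters = cluster_tickets(tickets)
# The two VPN tickets land in one cluster; the printer ticket stands alone.
```

A production system would work from richer features than raw word overlap, but the shape of the workflow, scoring pairwise similarity and operating on the resulting bundles, is the same.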

All of this was made possible by the company’s previous shift from mostly on-premises software to the cloud, and the flexibility that move gave it to build new tooling that crosses the entire platform.

Today’s announcements are just the start of what Atlassian hopes will be a slew of new machine learning-fueled features being added to the platform in the coming months and years.


By Ron Miller

Armory nabs $40M Series C as commercial biz on top of open source Spinnaker project takes off

As companies continue to shift more quickly to the cloud, pushed by the pandemic, startups like Armory that work in the cloud native space are seeing an uptick in interest. Armory is a company built to be a commercial layer on top of the open source continuous delivery project Spinnaker. Today, it announced a $40 million Series C.

B Capital led the round with help from new investors Lead Edge Capital and Marc Benioff along with previous investors Insight Partners, Crosslink Capital, Bain Capital Ventures, Mango Capital, Y Combinator and Javelin Venture Partners. Today’s investment brings the total raised to more than $82 million.

“Spinnaker is an open source project that came out of Netflix and Google, and it is a very sophisticated multi-cloud and software delivery platform,” company co-founder and CEO Daniel R. Odio told TechCrunch.

Odio points out that this project has the backing of industry leaders including the three leading public cloud infrastructure vendors Amazon, Microsoft and Google, as well as other cloud players like Cloud Foundry and HashiCorp. “The fact that there is a lot of open source community support for this project means that it is becoming the new standard for cloud native software delivery,” he said.

In the days before the notion of continuous delivery, companies moved forward slowly, releasing large updates over months or years. As software moved to the cloud, this approach no longer made sense, and companies began delivering updates more incrementally, adding features as they were ready. Adding a continuous delivery layer helped facilitate this move.

As Odio describes it, Armory extends the Spinnaker project to help implement complex use cases at large organizations including around compliance and governance and security. It is also in the early stages of implementing a SaaS version of the solution, which should be available next year.

While he didn’t want to discuss customer numbers, he mentioned JPMorgan Chase and Autodesk as customers, along with less specific allusions to “a Fortune Five technology company, a Fortune 20 bank, a Fortune 50 retailer and a Fortune 100 technology company.”

The company currently has 75 employees, but Odio says business has been booming and he plans to double the team in the next year. As he does, he says that he is deeply committed to diversity and inclusion.

“There’s actually a really big difference between diversity and inclusion, and there’s a great Vernā Myers quote that diversity is being asked to the party and inclusion is being asked to dance, and so it’s actually important for us not only to focus on diversity, but also focus on inclusion because that’s how we win. By having a heterogeneous company, we will outperform a homogeneous company,” he said.

While the company has moved to remote work during COVID, Odio says they intend to remain that way, even after the current crisis is over. “Now obviously COVID has been a real challenge for the world, including us. We’ve gone to a fully remote-first model, and we are going to stay remote-first even after COVID. And it’s really important for us to be taking care of our people, so there’s a lot of human empathy here,” he said.

But at the same time, he sees COVID opening up businesses to move to the cloud and that represents an opportunity for his business, one that he will focus on with new capital at his disposal. “In terms of the business opportunity, we exist to help power the transformation that these enterprises are undergoing right now, and there’s a lot of urgency for us to execute on our vision and mission because there is a lot of demand for this right now,” he said.


By Ron Miller

Twilio is buying customer data startup Segment for between $3B and $4B

Sources have told TechCrunch that Twilio intends to acquire customer data startup Segment for between $3 billion and $4 billion. Forbes broke the story on Friday night, reporting a price tag of $3.2 billion.

We have heard from a couple of industry sources that the deal is in the works and could be announced as early as Monday.

Twilio and Segment are both API companies. That means they create an easy way for developers to tap into a specific type of functionality without writing a lot of code. As I wrote in a 2017 article on Segment, it provides a set of APIs to pull together customer data from a variety of sources:

Segment has made a name for itself by providing a set of APIs that enable it to gather data about a customer from a variety of sources like your CRM tool, customer service application and website and pull that all together into a single view of the customer, something that is the goal of every company in the customer information business.
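
As an illustration of that “single view of the customer,” here is a minimal Python sketch (the source and field names are hypothetical, and this is not Segment’s API) that merges per-source customer records on a shared identifier:

```python
def unified_view(sources):
    """Merge per-source customer records into one profile per customer.

    `sources` maps a source name (CRM, support desk, website) to a list
    of records keyed by email; later sources fill in fields that earlier
    ones lack, producing the single merged view.
    """
    profiles = {}
    for source, records in sources.items():
        for record in records:
            profile = profiles.setdefault(record["email"], {"sources": []})
            profile["sources"].append(source)
            for key, value in record.items():
                profile.setdefault(key, value)  # first source to supply a field wins
    return profiles

sources = {
    "crm": [{"email": "ada@example.com", "name": "Ada"}],
    "support": [{"email": "ada@example.com", "open_tickets": 2}],
    "website": [{"email": "ada@example.com", "last_visit": "2020-10-09"}],
}
profiles = unified_view(sources)
# profiles["ada@example.com"] now combines CRM, support and website data.
```

The real engineering challenge Segment solves is doing this reliably at scale, across messy identifiers and dozens of integrations, but the end product is essentially this merged profile.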

While Twilio’s main focus since it launched in 2008 has been on making it easy to embed communications functionality into any app, it signaled a switch in direction when it released the Flex customer service API in March 2018. Later that same year, it bought SendGrid, an email API company, for $2 billion.

Twilio’s market cap as of Friday was an impressive $45 billion. You could see how it can afford to flex its financial muscles to combine Twilio’s core API mission, especially Flex, with the ability to pull customer data with Segment and create customized email or ads with SendGrid.

This could enable Twilio to expand beyond pure core communications capabilities. At a combined cost of around $5 billion for the two companies, it could prove a good deal for what may turn out to be a substantial business, as more and more companies look for ways to understand and communicate with their customers in more relevant ways across multiple channels.

As Semil Shah from early stage VC firm Haystack wrote in the company blog yesterday, Segment saw a different way to gather customer data, and Twilio was wise to swoop in and buy it.

Segment’s belief was that a traditional CRM wasn’t robust enough for the enterprise to properly manage its pipe. Segment entered to provide customer data infrastructure to offer a more unified experience. Now under the Twilio umbrella, Segment can continue to build key integrations (like they have for Twilio data), which is being used globally inside Fortune 500 companies already.

Segment was founded in 2011 and raised over $283 million, according to Crunchbase data. Its most recent raise was $175 million in April on a $1.5 billion valuation.

Twilio stock closed at $306.24 per share on Friday, up 2.39%.

Segment declined to comment on this story. We also sent a request for comment to Twilio, but hadn’t heard back by the time we published. If that changes, we will update the story.


By Ron Miller

How Roblox completely transformed its tech stack

Picture yourself in the role of CIO at Roblox in 2017.

At that point, the gaming platform and publishing system that launched in 2005 was growing fast, but its underlying technology was aging, consisting of a single data center in Chicago and a bunch of third-party partners, including AWS, all running bare-metal (nonvirtualized) servers. At a time when users have precious little patience for outages, your uptime was just two nines, or around 99% (five nines, or 99.999%, is considered optimal).

Unbelievably, Roblox was popular in spite of this, but the company’s leadership knew it couldn’t continue with performance like that, especially as it was rapidly gaining in popularity. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.

Williams has a history of solving these kinds of intractable infrastructure issues, with a background that includes a gig at Facebook between 2007 and 2011, where he worked on the technology to help the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network, leading the company’s move away from AWS, a major undertaking involving moving more than 500 petabytes of data.

When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. While they are still in the midst of the transition to a new modern tech stack today, we sat down with Williams to learn how he put the company on the road to a cloud-native, microservices-focused system with its own network of worldwide edge data centers.

Scoping the problem


By Ron Miller

Grid AI raises $18.6M Series A to help AI researchers and engineers bring their models to production

Grid AI, a startup founded by the inventor of the popular open-source PyTorch Lightning project, William Falcon, that aims to help machine learning engineers work more efficiently, today announced that it has raised an $18.6 million Series A funding round, which closed earlier this summer. The round was led by Index Ventures, with participation from Bain Capital Ventures and firstminute.

Falcon co-founded the company with Luis Capelo, who was previously the head of machine learning at Glossier. Unsurprisingly, the idea here is to take PyTorch Lightning, which launched about a year ago, and turn that into the core of Grid’s service. The main idea behind Lightning is to decouple the data science from the engineering.
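
That decoupling idea can be sketched in plain Python. This is not Lightning’s actual API, just an illustration of the split it champions: the researcher writes only the per-batch “science,” while a generic runner owns the engineering of the loop:

```python
class Runner:
    """Owns the engineering: the loop, batching, bookkeeping.

    A stand-in for what a framework like Lightning provides; the real
    thing also handles hardware placement, checkpointing and
    distributed training, none of which touches the science code.
    """
    def fit(self, module, batches, epochs=1):
        history = []
        for _ in range(epochs):
            for batch in batches:
                history.append(module.training_step(batch))
        return history

class MeanSquaredError:
    """The science: the researcher writes only the per-batch step."""
    def training_step(self, batch):
        preds, targets = batch
        return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

batches = [([1.0, 2.0], [1.0, 1.0]), ([0.0, 0.0], [0.0, 2.0])]
losses = Runner().fit(MeanSquaredError(), batches)  # [0.5, 2.0]
```

Because the runner never inspects what `training_step` does, the same engineering layer can drive any model, which is exactly the separation Grid wants to scale up.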

The team argues that a few years ago, when data scientists tried to get started with deep learning, they didn’t always have the right expertise, and it was hard for them to get everything right.

“Now the industry has an unhealthy aversion to deep learning because of this,” Falcon noted. “Lightning and Grid embed all those tricks into the workflow so you no longer need to be a PhD in AI nor [have] the resources of the major AI companies to get these things to work. This makes the opportunity cost of putting a simple model against a sophisticated neural network a few hours’ worth of effort instead of the months it used to take. When you use Lightning and Grid it’s hard to make mistakes. It’s like if you take a bad photo with your phone but we are the phone and make that photo look super professional AND teach you how to get there on your own.”

As Falcon noted, Grid is meant to help data scientists and other ML professionals “scale to match the workloads required for enterprise use cases.” Lightning itself can get them partially there, but Grid is meant to provide all of the services its users need to scale up their models to solve real-world problems.

What exactly that looks like isn’t quite clear yet, though. “Imagine you can find any GitHub repository out there. You get a local copy on your laptop and without making any code changes you spin up 400 GPUs on AWS — all from your laptop using either a web app or command-line-interface. That’s the Lightning “magic” applied to training and building models at scale,” Falcon said. “It is what we are already known for and has proven to be such a successful paradigm shift that all the other frameworks like Keras or TensorFlow, and companies have taken notice and have started to modify what they do to try to match what we do.”

The service is now in private beta.

With this new funding, Grid, which currently has 25 employees, plans to expand its team and strengthen its corporate offering via both Grid AI and through the open-source project. Falcon tells me that he aims to build a diverse team, not in the least because he himself is an immigrant, born in Venezuela, and a U.S. military veteran.

“I have first-hand knowledge of the extent that unethical AI can have,” he said. “As a result, we have approached hiring our current 25 employees across many backgrounds and experiences. We might be the first AI company that is not all the same Silicon Valley prototype tech-bro.”

“Lightning’s open-source traction piqued my interest when I first learned about it a year ago,” Index Ventures’ Sarah Cannon told me. “So intrigued in fact I remember rushing into a closet in Helsinki while at a conference to have the privacy needed to hear exactly what Will and Luis had built. I promptly called my colleague Bryan Offutt, who met Will and Luis in SF and was impressed by the ‘elegance’ of their code. We swiftly decided to participate in their seed round, days later. We feel very privileged to be part of Grid’s journey. After investing in the seed, we spent a significant amount of time with the team, and the more time we spent with them the more conviction we developed. Less than a year later and pre-launch, we knew we wanted to lead their Series A.”


By Frederic Lardinois

Kong launches Kong Konnect, its cloud-native connectivity platform

At its (virtual) Kong Summit 2020, API platform Kong today announced the launch of Kong Konnect, its managed end-to-end cloud-native connectivity platform. The idea here is to give businesses a single service that allows them to manage the connectivity between their APIs and microservices, and to help developers and operators manage their workflows across Kong’s API Gateway, Kubernetes Ingress and Kuma Service Mesh runtimes.

“It’s a universal control plane delivery cloud that’s consumption-based, where you can manage and orchestrate API gateway runtime, service mesh runtime, and Kubernetes Ingress controller runtime — and even Insomnia for design — all from one platform,” Kong CEO and co-founder Augusto ‘Aghi’ Marietti told me.

The new service is now in private beta and will become generally available in early 2021.

Image Credits: Kong

At the core of the platform is Kong’s new ServiceHub, which provides that single pane of glass for managing a company’s services across the organization (and makes them accessible across teams, too).

As Marietti noted, organizations can choose which runtime they want to use and purchase only those capabilities of the service that they currently need. The platform also includes built-in monitoring tools and supports any cloud, Kubernetes provider or on-premises environment, as long as they are Kubernetes-based.

The idea here, too, is to make all these tools accessible to developers and not just architects and operators. “I think that’s a key advantage, too,” Marietti said. “We are lowering the barrier by making a connectivity technology easier to be used by the 50 million developers — not just by the architects that were doing big grand plans at a large company.”

To do this, Konnect will be available as a self-service platform, reducing the friction of adopting the service.

Image Credits: Kong

This is also part of the company’s grander plan to go beyond its core API management services. Those services aren’t going away, but they are now part of the larger Kong platform. With its open-source Kong API Gateway, the company built the pathway to get to this point, but that’s a stable product now and it’s now clearly expanding beyond that with this cloud connectivity play that takes the company’s existing runtimes and combines them to provide a more comprehensive service.

“We have upgraded the vision of really becoming an end-to-end cloud connectivity company,” Marietti said. “Whether that’s API management or Kubernetes Ingress, […] or Kuma Service Mesh. It’s about connectivity problems. And so the company uplifted that solution to the enterprise.”


By Frederic Lardinois

Adobe beefs up developer tools to make it easier to build apps on Experience Cloud

Adobe has had a developer program for years called Adobe.io, but today at the Adobe Developers Live virtual conference, the company announced some new tools with a fresh emphasis on helping developers build custom apps on the Adobe Experience Cloud.

Jason Woosley, VP of developer experience and commerce at Adobe, says that the pandemic has forced companies to build enhanced digital experiences much more quickly than they might have otherwise, and the new tools being announced today are at least partly related to helping speed up the development of better online experiences.

“Our focus is very specifically on making the experience generation business something that’s very attractive to developers and very accessible to developers so we’re announcing a number of tools,” Woosley told TechCrunch.

The idea is to build a more complete framework over time to make it easier to build applications and connect to data sources that take advantage of the Experience Cloud tooling. For starters, Project Firefly is designed to help developers build applications more quickly by providing a higher level of automation than was previously available.

“Project Firefly creates an extensibility framework that reduces the boilerplate that a developer would need to get started working with the Experience Cloud, and extends that into the customizations that we know every implementation eventually needs to differentiate the storefront experience, the website experience or whatever customer touch point as these things become increasingly digital,” he said.

In order to make those new experiences open to all, the company is also announcing React Spectrum, an open source set of libraries and tools designed to help members of the Adobe developer community build more accessible applications and websites.

“It comes with all of the accessibility features that often get forgotten when you’re in a race to market, so it’s nice to make sure that you will be very inclusive with your design, making sure that you’re bringing on all aspects of your audiences,” Woosley said.

Finally, a big part of interacting with Experience Cloud is taking advantage of all of the data that’s available to help build those more customized interactions with customers that having that data enables. To that end, the company is announcing some new web and mobile software development kits (SDKs) designed to help make it simpler to link to Experience Cloud data sources as you build your applications.

Project Firefly is generally available starting today as are several React Spectrum components and some data connection SDKs. The company intends to keep adding to these various pieces in the coming months.


By Ron Miller

Five years after creating Traefik application proxy, open source project hits 2B downloads

Five years ago, Traefik Labs founder and CEO Emile Vauge was working on a project deploying thousands of microservices and he was lacking a cloud native application proxy that could handle this kind of scale. So like any good developer, he created one himself and Traefik was born.

If you go back five years, the notion of cloud native was still in its infancy. Docker had been doing containers for just a couple of years, and Kubernetes would only see its 1.0 release that year. There wasn’t much cloud native tooling around, so Vauge decided to build a cloud native reverse proxy out of pure necessity.

“At that time, five years ago, there was no reverse proxy that was good at managing the complexity of microservices at cloud scale. So that was really the origin of Traefik. And one of the big innovations was its automation and its simplicity,” he said.

As he explained it, a reverse proxy needs to have several features like traffic management, load balancing, observability and security, but much of this had to be done manually with the tools available at the time. As it turns out, Vauge had stumbled onto a major pain point.
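
To see what two of those features amount to, here is a toy Python sketch of host-based routing with round-robin load balancing; real Traefik is declarative and discovers its routes automatically from the orchestrator, so treat this purely as an illustration of the manual work it automates:

```python
import itertools

class ReverseProxy:
    """Route requests by host to a pool of backends, round-robin.

    A toy illustration of traffic management (host-based routing) and
    load balancing, two of the features Vauge mentions; observability
    and security are left out for brevity.
    """
    def __init__(self):
        self.pools = {}

    def register(self, host, backends):
        # cycle() yields backends in order, forever: round-robin.
        self.pools[host] = itertools.cycle(backends)

    def route(self, host):
        if host not in self.pools:
            return "502 no backend"
        return next(self.pools[host])

proxy = ReverseProxy()
proxy.register("api.example.com", ["10.0.0.1", "10.0.0.2"])
picks = [proxy.route("api.example.com") for _ in range(3)]
# Alternates between the two backends: .1, .2, .1
```

The pain point Vauge hit was keeping tables like `pools` in sync by hand across thousands of constantly changing microservices, which is exactly what Traefik’s automation removed.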

“Initially I created Traefik for myself. It was a side project but it turned out that there was a huge interest and very quickly a community gathered around the project,” he said. After a few months, he realized he could build a company around this and left his job to start a company called Containous.

Today, he changed the name of that company to Traefik Labs and the open source project he developed has become wildly popular. “Five years later we are at 2 billion downloads. It’s in the top 10 most downloaded projects on Docker. We have 30,000 stars on GitHub. So basically it’s one of the largest open source projects in the world,” he said. In addition, he said that there are over 550 individuals contributing to the project today.

When he formed Containous, he developed an open core-based commercial project designed for enterprise needs around scaling, high availability and more security features. Today, that includes the Traefik Proxy and an open source service mesh called Traefik Mesh.

Among the companies using the open source project today are Conde Nast, eBay Classifieds and Mailchimp.

Vauge certainly was in the right place at the right time five years ago, which he modestly attributes to luck because he was working at one of the few companies at the time who were dealing with microservices at scale. “We had to build a lot of things and Traefik was one of those things. So I was basically lucky because I created Traefik at the right time,” he said.

Not surprisingly, a company with that kind of open source traction has attracted the interest of venture capitalists, and Vauge has raised $16 million since he launched his company in 2015, including a $10 million round led by Balderton Capital in January.


By Ron Miller

Narrator raises $6.2M for a new approach to data modelling that replaces star schema

Snowflake went public this week, and in a mark of the wider ecosystem that is evolving around data warehousing, a startup that has built a completely new concept for modelling warehoused data is announcing funding. Narrator — which uses an 11-column ordering model rather than standard star schema to organise data for modelling and analysis — has picked up a Series A round of $6.2 million, money that it plans to use to help it launch and build up users for a self-serve version of its product.

The funding is being led by Initialized Capital along with continued investment from Flybridge Capital Partners and Y Combinator — where the startup was in a 2019 cohort — as well as new investors including Paul Buchheit.

Narrator has been around for three years, but its first phase was based around providing modelling and analytics directly to companies as a consultancy, helping them bring together disparate, structured data sources from marketing, CRM, support desks and internal databases to work as a unified whole. As consultants, using an earlier build of the tool that it’s now launching, the company’s CEO Ahmed Elsamadisi said he and others each juggled queries “for eight big companies singlehandedly,” while deep-dive analyses were done by another single person.

Having validated that the approach works, the new self-serve version aims to give data scientists and analysts a simplified way of ordering data so that queries — actionable analyses delivered in a story-like format, or “Narratives,” as the company calls them — can be run across that data quickly (hours rather than weeks) and consistently. (You can see a demo of how it works below, provided by the company’s head of data, Brittany Davis.)

(And the new data-as-a-service is also priced in SaaS tiers, with a free tier for the first 5 million rows of data, and a sliding scale of pricing after that based on data rows, user numbers, and Narratives in use.)

Elsamadisi, who co-founded the startup with Matt Star, Cedric Dussud and Michael Nason, said that data analysts have long lived with the problems of star schema modelling (and, by extension, the related snowflake schema format), which he summed up as “layers of dependencies, lack of source of truth, numbers not matching, and endless maintenance.”

“At its core, when you have lots of tables built from lots of complex SQL, you end up with a growing house of cards requiring the need to constantly hire more people to help make sure it doesn’t collapse.”

(We)Work Experience

It was while he was working as lead data scientist at WeWork — yes, he told me, maybe it wasn’t actually a tech company but it had “tech at its core” — that he had a breakthrough moment of realising how to restructure data to get around these issues.

Before that, things were tough on the data front. WeWork had 700 tables that his team was managing using a star schema approach, covering 85 systems and 13,000 objects. The data ranged from information on acquiring buildings to the flows of customers through those buildings, how things would change and customers might churn, along with marketing and activity on social networks, all growing in line with the company’s own rapidly scaling empire. All of that meant a mess at the data end.

“Data analysts wouldn’t be able to do their jobs,” he said. “It turns out we could barely even answer basic questions about sales numbers. Nothing matched up, and everything took too long.”

The team had 45 people on it, but even so it ended up having to implement a hierarchy for answering questions, as there were so many and not enough time to dig through and answer them all. “And we had every data tool there was,” he added. “My team hated everything they did.”

The single-table column model that Narrator uses, he said, “had been theorised” in the past but hadn’t been figured out.

The spark, he said, was to think of data structured in the same way we ask questions, where — as he described it — each piece of data can be bridged together and then also used to answer multiple questions.

“The main difference is we’re using a time-series table to replace all your data modelling,” Elsamadisi explained. “This is not a new idea, but it was always considered impossible. In short, we tackle the same problem as most data companies to make it easier to get the data you want but we are the only company that solves it by innovating on the lowest-level data modelling approach. Honestly, that is why our solution works so well. We rebuilt the foundation of data instead of trying to make a faulty foundation better.”

Narrator calls the composite table, which includes all of your data reformatted to fit in its 11-column structure, the Activity Stream.
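To make the idea concrete, here is a deliberately simplified sketch — not Narrator’s actual 11-column schema, just an illustration of the underlying time-series approach — showing how a single activity table can answer multiple questions without any joins across fact and dimension tables:

```python
from datetime import date

# Illustrative stand-in for an "activity stream": one row per customer
# activity, in time order, replacing many star-schema tables.
stream = [
    # (customer, activity, timestamp) -- all values are made up
    ("alice", "visited_site",  date(2020, 9, 1)),
    ("alice", "started_trial", date(2020, 9, 2)),
    ("alice", "paid_invoice",  date(2020, 9, 10)),
    ("bob",   "visited_site",  date(2020, 9, 3)),
    ("bob",   "started_trial", date(2020, 9, 5)),
]

def customers_who_did(activity):
    """All customers with at least one row for the given activity."""
    return {customer for customer, act, _ in stream if act == activity}

# Question 1: how many customers started a trial?
trial_customers = customers_who_did("started_trial")

# Question 2: trial-to-paid conversion -- answered from the same table,
# by relating activities per customer, with no extra joins or tables.
conversion = len(customers_who_did("paid_invoice") & trial_customers) / len(trial_customers)
```

The point of the sketch is that both questions are answered against the one stream; in a star schema each would typically require its own chain of derived tables.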

Elsamadisi said getting started with Narrator takes about 30 minutes, and learning to use it thoroughly takes about a month. “But you’re not going back to SQL after that, it’s so much faster,” he added.

Narrator’s initial market has been providing services to other tech companies, and specifically startups, but the plan is to open it up to a much wider set of verticals. And in a move that might help with that longer term, it also plans to open source some of its core components so that third parties can build data products on top of the framework more quickly.

As for competitors, he says that it’s essentially the tools that he and other data scientists have always used, although “we’re going against a ‘best practice’ approach (star schema), not a company.” Airflow, DBT, Looker’s LookML, Chartio’s Visual SQL, Tableau Prep are all ways to create and enable the use of a traditional star schema, he added. “We’re similar to these companies — trying to make it as easy and efficient as possible to generate the tables you need for BI, reporting, and analysis — but those companies are limited by the traditional star schema approach.”

So far the proof has been in the data. Narrator says that companies average around 20 transformations (the unit used to answer questions) compared to hundreds in a star schema, and that those transformations average 22 lines compared to 1000+ lines in traditional modelling. For those that learn how to use it, the average time for generating a report or running some analysis is four minutes, compared to weeks in traditional data modelling. 

“Narrator has the potential to set a new standard in data,” said Jen Wolf, Initialized Capital COO and partner and new Narrator board member, in a statement. “We were amazed to see the quality and speed with which Narrator delivered analyses using their product. We’re confident once the world experiences Narrator this will be how data analysis is taught moving forward.”


By Ingrid Lunden

Data virtualization service Varada raises $12M

Varada, a Tel Aviv-based startup that focuses on making it easier for businesses to query data across services, today announced that it has raised a $12 million Series A round led by Israeli early-stage fund MizMaa Ventures, with participation by Gefen Capital.

“If you look at the storage aspect for big data, there’s always innovation, but we can put a lot of data in one place,” Varada CEO and co-founder Eran Vanounou told me. “But translating data into insight? It’s so hard. It’s costly. It’s slow. It’s complicated.”

That’s a lesson he learned during his time as CTO of LivePerson, which he described as a classic big data company. And just like at LivePerson, where the team had to reinvent the wheel again and again to solve its data problems, every company — and not just the large enterprises — now struggles with managing its data and getting insights out of it, Vanounou argued.

Image Credits: Varada

The rest of the founding team, David Krakov, Roman Vainbrand and Tal Ben-Moshe, already had a lot of experience in dealing with these problems, too, with Ben-Moshe having served as the Chief Software Architect of Dell EMC’s XtremIO flash array unit, for example. They built the system for indexing big data that’s at the core of Varada’s platform (with the open-source Presto SQL query engine being one of the other cornerstones).


Essentially, Varada embraces the idea of data lakes and enriches that with its indexing capabilities. And those indexing capabilities are where Varada’s smarts can be found. As Vanounou explained, the company uses a machine learning system to understand when users tend to run certain workloads and then caches the data ahead of time, making the system far faster than its competitors.

“If you think about big organizations and think about the workloads and the queries, what happens during the morning time is different from evening time. What happened yesterday is not what happened today. What happened on a rainy day is not what happened on a shiny day. […] We listen to what’s going on and we optimize. We leverage the indexing technology. We index what is needed when it is needed.”

That helps speed up queries, but it also means less data has to be replicated, which brings down the cost. As MizMaa’s Aaron Applebaum noted, since Varada is not a SaaS solution, the buyers still get all of the discounts from their cloud providers, too.

In addition, the system can allocate resources intelligently so that different users can tap into different amounts of bandwidth. You can tell it to give customers more bandwidth than your financial analysts, for example.

“Data is growing like crazy: in volume, in scale, in complexity, in who requires it and what the business intelligence uses are, what the API uses are,” Applebaum said when I asked him why he decided to invest. “And compute is getting slightly cheaper, but not really, and storage is getting cheaper. So if you can make the trade-off to store more stuff, and access things more intelligently, more quickly, more agile — that was the basis of our thesis, as long as you can do it without compromising performance.”

Varada, with its team of experienced executives, architects and engineers, ticked a lot of the company’s boxes in this regard, but he also noted that unlike some other Israeli startups, the team understood that it had to listen to customers and understand their needs, too.

“In Israel, you have a history — and it’s become less and less the case — but historically, there’s a joke that it’s ‘ready, fire, aim.’ You build a technology, you’ve got this beautiful thing and you’re like, ‘alright, we did it,’ but without listening to the needs of the customer,” he explained.

The Varada team is not afraid to compare itself to Snowflake, which at least at first glance seems to make similar promises. Vanounou praised the company for opening up the data warehousing market and proving that people are willing to pay for good analytics. But he argues that Varada’s approach is fundamentally different.

“We embrace the data lake. So if you are Mr. Customer, your data is your data. We’re not going to take it, move it, copy it. This is your single source of truth,” he said. And in addition, the data can stay in the company’s virtual private cloud. He also argues that Varada isn’t so much focused on the business users but the technologists inside a company.

 


By Frederic Lardinois

Snyk bags another $200M at $2.6B valuation 9 months after last raise

When we last reported on Snyk in January, eons ago in COVID time, the company announced a $150 million investment at a valuation of over $1 billion. Today, barely nine months later, it announced another $200 million, and its valuation has expanded to $2.6 billion.

The company is obviously drawing some serious investor attention and even a pandemic is not diminishing that interest. Addition led today’s round, bringing the total raised to $450 million with $350 million coming this year alone.

Snyk has a unique approach to security, building it into the development process instead of offloading it to a separate security team. If you want to build a secure product, you need to think about it as you’re developing the product and that’s what Snyk’s product set is designed to do — check for security as you’re committing your build to your git repository.
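In practice, “checking security as you commit” usually means wiring Snyk’s CLI into the same pipeline that builds the code. A hedged sketch of one common pattern, shown here as a GitHub Actions job (the workflow layout and secret name are illustrative, not taken from the article):

```yaml
# Example CI job (illustrative): run Snyk's open source CLI on every
# push, failing the build if known vulnerabilities are found in the
# project's dependencies.
name: security
on: [push]
jobs:
  snyk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install -g snyk
      - run: snyk test
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

Because the check runs alongside the build rather than in a separate security team’s queue, vulnerable dependencies surface while the developer is still in context.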

With an open source product at the top of the funnel to drive interest in the platform, CEO Peter McKay says the pandemic has only accelerated the appeal of the company. In fact, the startup’s annual recurring revenue (ARR) is growing at a remarkable 275% year over year.

McKay says that, even with the pandemic, his company has been accelerating, adding 100 employees in the last 12 months to take advantage of the increasing revenue. “When others were kind of scaling back we invested and it worked out well because our business never slowed down. In fact, in a lot of the industries it really picked up,” he said.

That’s because, as many other founders have pointed out, COVID is speeding up the rate at which many companies are moving to the cloud, and that’s working in Snyk’s favor. “We’ve just capitalized on this accelerated shift to the cloud and modern cloud native applications,” he said.

The company currently has 375 employees with plans to add 100 more in the next year. As it grows, McKay says that he is looking to build a diverse and inclusive culture, something he learned about as he moved through his career at VMware and Veeam.

He says one of the keys at Snyk is putting every employee through unconscious bias training to help limit bias in the hiring process, and the executive team has taken a pledge to make the company’s hiring practices more diverse. Still, he recognizes it takes work to achieve these goals, and it’s always easy for an experienced team to go back to the network instead of digging deeper for a more diverse candidate pool.

“I think we’ve put all the pieces in place to get there, but I think like a lot of companies, there’s still a long way to go,” he said. But he recognizes the sooner you embed diversity into the company culture, the better because it’s hard to go back after the fact and do it.

Addition founder Lee Fixel says he sees a company that’s accelerating rapidly and that’s why he was willing to pour in so big an investment. “Snyk’s impressive growth is a signal that the market is ready to embrace a change from traditional security and empower developers to tackle the new security risk that comes with a software-driven digital world,” he said in a statement.

Snyk was founded in 2015. The founders brought McKay on board for some experienced leadership in 2018 to help lead the company through its rapid growth. Prior to the $350 million in new money this year, the company raised $70 million in 2019.


By Ron Miller

Hasura raises $25 million Series B and adds MySQL support to its GraphQL service

Hasura, a service that gives developers an open-source engine providing a GraphQL API to access their databases, today announced that it has raised a $25 million Series B round led by Lightspeed Venture Partners. Previous investors Vertex Ventures US, Nexus Venture Partners, Strive VC and SAP.iO Fund also participated in this round.
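The core idea is that Hasura derives the GraphQL API from the database tables themselves, so developers write queries rather than resolvers. A brief sketch, assuming a hypothetical `users` table tracked by Hasura (the table, fields and endpoint are illustrative):

```python
import json

# With a `users` table tracked by Hasura, the engine exposes it as a
# GraphQL root field with generated arguments like order_by and limit --
# no hand-written resolver code required.
query = """
query RecentUsers($limit: Int!) {
  users(order_by: {created_at: desc}, limit: $limit) {
    id
    name
  }
}
"""

# The JSON payload a client would POST to Hasura's /v1/graphql endpoint.
payload = json.dumps({"query": query, "variables": {"limit": 10}})

decoded = json.loads(payload)
```

The same pattern now applies to MySQL-backed tables as well as PostgreSQL ones, which is what today’s database-support announcement adds.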

The new round, which the team raised after the COVID-19 pandemic had already started, comes only six months after the company announced its $9.9 million Series A round. In total, Hasura has now raised $36.5 million.

“We’ve been seeing rapid enterprise traction in 2020. We’ve wanted to accelerate our efforts investing in the Hasura community and our cloud product that we recently launched and to ensure the success of our enterprise customers. Given the VC inbound interest, a fundraise made sense to help us step on the gas pedal and give us room to grow comfortably,” Hasura co-founder and CEO Tanmai Gopal told me.

In addition to the new funding, Hasura also today announced that it has added support for MySQL databases to its service. Until now, the company’s service only worked with PostgreSQL databases.

Rajoshi Ghosh, co-founder and COO (left) and Tanmai Gopal, co-founder and CEO (right).


As Gopal told me, MySQL support has long been at the top of the list of features requested by the service’s users. Many of these users — who are often in the health care and financial services industries — are also working with legacy systems that they are trying to connect to modern applications, and MySQL plays an important role there, given how long it has been around.

In addition to adding MySQL support, Hasura is also adding support for SQL Server to its line-up, but for now, that’s in early access.

“For MySQL and SQL Server, we’ve seen a lot of demand from our healthcare and financial services / fintech users,” Gopal said. “They have a lot of existing online data, especially in these two databases, that they want to activate to build new capabilities and use while modernizing their applications.”

Today’s announcement also comes only a few months after the company launched a fully managed cloud service, which complements its existing paid Pro offering for enterprises.

“We’re very impressed by how developers have taken to Hasura and embraced the GraphQL approach to building applications,” said Gaurav Gupta, partner at Lightspeed Venture Partners and Hasura board member. “Particularly for front-end developers using technologies like React, Hasura makes it easy to connect applications to existing databases where all the data is without compromising on security and performance. Hasura provides a lovely bridge for re-platforming applications to cloud-native approaches, so we see this approach being embraced by enterprise developers as well as front-end developers more and more.”

The company plans to use the new funding to add support for more databases and to tackle some of the harder technical challenges around cross-database joins and the company’s application-level data caching system. “We’re also investing deeply in company building so that we can grow our GTM and engineering in tandem and making some senior hires across these functions,” said Gopal.


By Frederic Lardinois

Progress snags software automation platform Chef for $220M

Progress, a Boston area developer tool company, boosted its offerings in a big way today when it announced it was acquiring software automation platform Chef for $220 million.

Chef, which went 100% open source last year, had annual recurring revenue (ARR) of $70 million from the commercial side of the house. Needless to say, Progress CEO Yogesh Gupta was happy to bring the company into the fold and gain not only that revenue, but a set of highly skilled employees, a strong developer community and an impressive customer list.

Gupta said that Chef fits with his company’s acquisition philosophy. “This acquisition perfectly aligns with our growth strategy and meets the requirements that we’ve previously laid out: a strong recurring revenue model, technology that complements our business, a loyal customer base and the ability to leverage our operating model and infrastructure to run the business more efficiently,” he said in a statement.

Chef CEO Barry Crist offered a typical argument for an acquired company — that Progress provides a better path to future growth — while sending a message to the open source community and customers that Progress would be a good steward of the startup’s vision.

“For Chef, this acquisition is our next chapter, and Progress will help enhance our growth potential, support our Open Source vision, and provide broader opportunities for our customers, partners, employees and community,” Crist said in a statement.

Chef’s customer list is certainly impressive, including tech industry stalwarts like Facebook, IBM and SAP, as well as non-tech companies like Nordstrom, Alaska Airlines and Capital One.

The company was founded in 2008 and had raised $105 million, according to Crunchbase data. It hadn’t raised any funds since 2015, when it raised a $40 million Series E led by DFJ Growth. Other investors along the way included Battery Ventures, Ignition Partners and Scale Venture Partners.

The transaction is expected to close next month pending normal regulatory approvals.


By Ron Miller