Zoho launches Catalyst, a new developer platform with a focus on microservices

Zoho may be one of the most underrated tech companies. The 23-year-old company, which at this point offers more than 45 products, has never taken outside funding and has no ambition to go public, yet it’s highly profitable and runs its own data centers around the world. And today, it’s launching Catalyst, a cloud-based developer platform with a focus on microservices that it hopes can challenge the platforms of many of its larger competitors.

The company already offered a low-code tool for building business apps. But Catalyst is different. Zoho isn’t following in the footsteps of Google or Amazon here by offering a relatively unopinionated platform for running virtual machines and containers. Instead, the company is betting fully on serverless as the next major technology for building enterprise apps, and the whole platform has been tuned for that purpose.

“Historically, when you look at cloud computing, when you look at any public clouds, they pretty much range from virtualizing your servers and renting out virtual servers all the way up the stack,” Raju Vegesna, Zoho’s chief evangelist, said when I asked him about the decision to bet on serverless. “But when you look at it from a developer’s point of view, you still have to deal with a lot of baggage. You still have to figure out the operating system, you still have to figure out the database. And then you have to scale and manage the updates. All of that has to be done at the application infrastructure level.” In recent years, though, said Vegesna, the focus has shifted to the app logic side, with databases and file servers being abstracted away. And that’s the trend Zoho is hoping to capitalize on with Catalyst.

What Catalyst does do is give advanced developers a platform to build, run and manage event-driven, microservice-based applications that can, among other things, tap into many of the tools Zoho built for running its own applications, like the grammar checker from Zoho Writer, document previews from Zoho Drive, or access to its Zia AI tools for OCR, sentiment analysis and predictions. The platform gives developers tools to orchestrate the various microservices, which also makes it easy to scale applications as needed. It integrates with existing CI/CD pipelines and IDEs.
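To make the serverless model concrete, here is a hypothetical sketch of the kind of event-driven function such a platform hosts. The handler signature and the `zia` and `events` helpers on the context object are invented for illustration; they are not Catalyst’s documented SDK.

```python
import json

def handle_ticket(event, context):
    """Hypothetical event-driven function: runs whenever a new support
    ticket event arrives; the platform supplies servers and scaling."""
    ticket = json.loads(event["body"])

    # Hypothetical call into a hosted AI service, in the spirit of Zia's
    # sentiment analysis; the real API surface may differ.
    sentiment = context.zia.analyze_sentiment(ticket["message"])

    # Hand the ticket to another microservice by publishing an event.
    queue = "priority" if sentiment == "negative" else "standard"
    context.events.publish(topic=f"tickets.{queue}", payload=ticket)

    return {"statusCode": 200, "routedTo": queue}
```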

Catalyst is SOC 2 Type II and ISO 27001 certified, and it complies with GDPR. It also offers developers the ability to access data from Zoho’s own applications, as well as from third-party tools, all backed by Zoho’s Unified Data Model, a relational datastore for server-side and client deployments.

“The infrastructure that we built over the last several years is now being exposed,” said Vegesna. He also stressed that Zoho is launching the complete platform in one go (though it will obviously add to it over time). “We are bringing everything together so that you can develop a mobile or web app from a single interface,” he said. “We are not just throwing 50 different disparate services out there.” At the same time, though, the company is also opting for a very deliberate approach here with its focus on serverless. That, Vegesna believes, will allow Zoho Catalyst to compete with its larger competitors.

It’s also worth noting that Zoho knows it’s playing the long game here, something it is familiar with, given that it launched its first product, Zoho Writer, back in 2005, before Google had launched its productivity suite.


By Frederic Lardinois

Autify raises $2.5M seed round for its no-code software testing platform

Autify, a platform that makes testing web applications as easy as clicking a few buttons, has raised a $2.5 million seed round from Global Brain, Salesforce Ventures, Archetype Ventures and several angels. The company, which recently graduated from the Alchemist accelerator program for enterprise startups, splits its base between the U.S., where it keeps an office, and Japan, where co-founders Ryo Chikizawa (CEO) and Sam Yamashita got their start as software engineers.

The main idea here is that Autify, which was founded in 2016, lets teams write tests by simply recording their interactions with the app with the help of a Chrome extension; Autify can then run these tests automatically on a variety of other browsers and mobile devices. Typically, these kinds of tests are very brittle and quickly start to fail whenever a developer makes changes to the design of the application.

Autify gets around this by using some machine learning smarts that give it the ability to recognize that a given button or form is still the same, no matter where it sits on the page. Users can currently test their applications using IE, Edge, Chrome and Firefox on macOS and Windows, as well as a range of iOS and Android devices.
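To see what that brittleness looks like in a hand-written test, consider this small Selenium sketch; the page and selectors are invented for illustration and have nothing to do with Autify’s own tooling.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/signup")  # illustrative URL

# Brittle: an absolute XPath breaks the moment a designer wraps the
# button in a new <div> or moves it elsewhere on the page.
button = driver.find_element(
    By.XPATH, "/html/body/div[2]/form/div[3]/button")

# More resilient: keying off stable attributes of the element itself is
# closer in spirit to how ML-based matching keeps recognizing "the same
# button" after a redesign.
button = driver.find_element(
    By.CSS_SELECTOR, "[data-testid='signup-submit']")
button.click()

driver.quit()
```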

Chikizawa tells me that the main idea of Autify is based on his own experience as a developer. He also noted that many enterprises are struggling to hire automation engineers who can write tests for them using Selenium and similar frameworks. With Autify, any developer (and even non-developers) can create a test without having to know the specifics of the underlying testing framework. “You don’t really need technical knowledge,” explained Chikizawa. “You can just out of the box use Autify.”

There are obviously other startups tackling this space as well, including SpotQA, for example. Chikizawa, however, argues that Autify is different, given its focus on enterprises. “The audience is really different. We have competitors that are targeting engineers, but because we are saying that no coding [is required], we are selling to the companies that have been struggling with hiring automation engineers,” he told me. He also stressed that Autify is able to do cross-browser testing, something that’s not a given among its competitors.

The company introduced its closed beta version in March and is currently testing the service with about a hundred companies. It integrates with development platforms like TestRail, Jenkins and CircleCI, as well as Slack.


By Frederic Lardinois

Databricks brings its Delta Lake project to the Linux Foundation

Databricks, the big data analytics service founded by the original developers of Apache Spark, today announced that it is bringing its Delta Lake open-source project for building data lakes to the Linux Foundation, under an open governance model. The company announced the launch of Delta Lake earlier this year, and even though it’s still a relatively new project, it has already been adopted by many organizations and has found backing from companies like Intel, Alibaba and Booz Allen Hamilton.

“In 2013, we had a small project where we added SQL to Spark at Databricks […] and donated it to the Apache Foundation,” Databricks CEO and co-founder Ali Ghodsi told me. “Over the years, slowly people have changed how they actually leverage Spark and only in the last year or so it really started to dawn upon us that there’s a new pattern that’s emerging and Spark is being used in a completely different way than maybe we had planned initially.”

This pattern, he said, is that companies are taking all of their data, putting it into data lakes and then doing a couple of things with it, machine learning and data science being the obvious ones. But they are also doing things that are more traditionally associated with data warehouses, like business intelligence and reporting. The term Ghodsi uses for this kind of usage is ‘Lake House.’ More and more, Databricks is seeing Spark used for this purpose, not just to replace Hadoop and do ETL (extract, transform, load). “This kind of Lake House patterns we’ve seen emerge more and more and we wanted to double down on it.”

Spark 3.0, which is launching today, enables more of these use cases and speeds them up significantly; it also launches a new feature that lets you add a pluggable data catalog to Spark.

Delta Lake, Ghodsi said, is essentially the data layer of the Lake House pattern. It brings support for ACID transactions to data lakes, along with scalable metadata handling and data versioning. All the data is stored in the Apache Parquet format, and users can enforce schemas (and change them with relative ease if necessary).
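In practice, working with a Delta Lake table looks roughly like the following PySpark sketch; the paths and sample data are illustrative.

```python
from pyspark.sql import SparkSession

# Assumes the Delta Lake package is on the classpath, e.g.
#   spark-submit --packages io.delta:delta-core_2.12:<version> app.py
spark = SparkSession.builder.appName("delta-demo").getOrCreate()

events = spark.createDataFrame(
    [(1, "click"), (2, "purchase")], ["user_id", "action"])

# ACID write: concurrent readers see either the whole commit or none of
# it, and appending data with a mismatched schema raises an error
# instead of silently corrupting the table.
events.write.format("delta").mode("append").save("/tmp/events")

# Read the current state of the table.
spark.read.format("delta").load("/tmp/events").show()

# Data versioning ("time travel"): query the table as of an earlier commit.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events")
```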

It’s interesting to see Databricks choose the Linux Foundation for this project, given that its roots are in the Apache Foundation. “We’re super excited to partner with them,” Ghodsi said about why the company chose the Linux Foundation. “They run the biggest projects on the planet, including the Linux project but also a lot of cloud projects. The cloud-native stuff is all in the Linux Foundation.”

“Bringing Delta Lake under the neutral home of the Linux Foundation will help the open source community dependent on the project develop the technology addressing how big data is stored and processed, both on-prem and in the cloud,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation. “The Linux Foundation helps open source communities leverage an open governance model to enable broad industry contribution and consensus building, which will improve the state of the art for data storage and reliability.”


By Frederic Lardinois

Suse’s OpenStack Cloud dissipates

Suse, the newly independent open-source company behind the eponymous Linux distribution and an increasingly large set of managed enterprise services, today announced a bit of a new strategy as it looks to stay on top of the changing trends in the enterprise developer space. Over the course of the last few years, Suse put a strong emphasis on the OpenStack platform, an open-source project that essentially allows big enterprises to build something in their own data centers akin to the core services of a public cloud like AWS or Azure. With this new strategy, Suse is transitioning away from OpenStack. It’s ceasing both production of new versions of its OpenStack Cloud and sales of its existing OpenStack product.

“As Suse embarks on the next stage of our growth and evolution as the world’s largest independent open source company, we will grow the business by aligning our strategy to meet the current and future needs of our enterprise customers as they move to increasingly dynamic hybrid and multi-cloud application landscapes and DevOps processes,” the company said in a statement. “We are ideally positioned to execute on this strategy and help our customers embrace the full spectrum of computing environments, from edge to core to cloud.”

What Suse will focus on going forward are its Cloud Application Platform (which is based on the open-source Cloud Foundry platform) and its Kubernetes-based container platform.

Chances are, Suse wouldn’t be shutting down its OpenStack services if it saw growing sales in this segment. But while the hype around OpenStack has died down in recent years, it’s still among the world’s most active open-source projects and runs the production environments of some of the world’s largest companies, including some very large telcos. For the last few years, though, the mindshare has gone to containers, and especially Kubernetes, and it took the project a while to position itself in that landscape. At the same time, containers are also opening up new opportunities for OpenStack, because you still need some way to manage those containers and the rest of your infrastructure.

The OpenStack Foundation, the umbrella organization that helps guide the project, remains upbeat.

“The market for OpenStack distributions is settling on a core group of highly supported, well-adopted players, just as has happened with Linux and other large-scale, open-source projects,” said OpenStack Foundation COO Mark Collier in a statement. “All companies adjust strategic priorities from time to time, and for those distro providers that continue to focus on providing open-source infrastructure products for containers, VMs and bare metal in private cloud, OpenStack is the market’s leading choice.”

He also notes that analyst firm 451 Research believes there is a combined Kubernetes and OpenStack market of about $11 billion, with $7.7 billion of that focused on OpenStack. “As the overall open-source cloud market continues its march toward eight figures in revenue and beyond — most of it concentrated in OpenStack products and services — it’s clear that the natural consolidation of distros is having no impact on adoption,” Collier argues.

For Suse, though, this marks the end of its OpenStack products. For now, the company remains a top-level Platinum sponsor of the OpenStack Foundation, and Suse’s Alan Clark remains on the Foundation’s board. Suse is involved in some of the other projects under the OpenStack brand, so the company will likely remain a sponsor, but it’s probably a fair guess that it won’t continue to do so at the highest level.


By Frederic Lardinois

Arm brings custom instructions to its embedded CPUs

At its annual TechCon event in San Jose, Arm today announced Custom Instructions, a new feature of its Armv8-M architecture for embedded CPUs that, as the name implies, enables its customers to write their own custom instructions to accelerate their specific use cases for embedded and IoT applications.

“We already have ways to add acceleration, but not as deep and down to the heart of the CPU. What we’re giving [our customers] here is the flexibility to program your own instructions, to define your own instructions — and have them executed by the CPU,” Thomas Ensergueix, Arm’s senior director for its automotive and IoT business, told me ahead of today’s announcement.

He noted that Arm has always offered a continuum of options for acceleration, starting with its memory-mapped architecture for connecting GPUs and today’s neural processing units over a bus. This allows the CPU and the accelerator to run in parallel, but with the bus as the bottleneck. Customers can also opt for a co-processor that’s directly connected to the CPU, but today’s news essentially allows Arm customers to create their own accelerated algorithms that then run directly on the CPU. That means latency is low, but the work doesn’t run in parallel, as it would with the memory-mapped solution.

As Arm argues, this setup offers the lowest-cost (and lowest-risk) path for integrating customer workload acceleration, because it doesn’t disrupt existing CPU features and still lets customers use the standard tools they are already familiar with.

For now, Custom Instructions will only be available in the Arm Cortex-M33 CPUs, starting in the first half of 2020. By default, it’ll also be available in all future Cortex-M processors. There are no additional costs and no new licenses for Arm’s customers to buy.

Ensergueix noted that as we move to a world with more and more connected devices, more of Arm’s customers will want to optimize their processors for their often very specific use cases, and often they’ll want to do so because, by creating custom instructions, they can squeeze a bit more battery life out of these devices, for example.

Arm has already lined up a number of partners to support Custom Instructions, including IAR Systems, NXP, Silicon Labs and STMicroelectronics.

“Arm’s new Custom Instructions capabilities allow silicon suppliers like NXP to offer their customers a new degree of application-specific instruction optimizations to improve performance, power dissipation and static code size for new and emerging embedded applications,” writes NXP’s Geoff Lees, SVP and GM of Microcontrollers. “Additionally, all these improvements are enabled within the extensive Cortex-M ecosystem, so customers’ existing software investments are maximized.”

In related embedded news, Arm also today announced that it is setting up a governance model for Mbed OS, its open-source operating system for embedded devices that run an Arm Cortex-M chip. Mbed OS has always been open source, but the Mbed OS Partner Governance model will allow Arm’s Mbed silicon partners to have more of a say in how the OS is developed through tools like a monthly Product Working Group meeting. Partners like Analog Devices, Cypress, Nuvoton, NXP, Renesas, Realtek, Samsung and u-blox are already participating in this group.


By Frederic Lardinois

Satya Nadella looks to the future with edge computing

Speaking today at the Microsoft Government Leaders Summit in Washington DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

While Amazon, Google and other competitors may have something to say about that, and marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge, where computing is done locally before data is transferred to the cloud for AI and machine learning purposes. What goes around comes around.

But as Nadella sees it, this is not going to be about either edge or cloud. It’s going to be the two technologies working in tandem. “Now, all this is being driven by this new tech paradigm that we describe as the intelligent cloud and the intelligent edge,” he said today.

He said that to truly understand the impact the edge is going to have on computing, you have to look at research, which predicts there will be 50 billion connected devices in the world by 2030, a number even he finds astonishing. “I mean this is pretty stunning. We think about a billion Windows machines or a couple of billion smartphones. This is 50 billion [devices], and that’s the scope,” he said.

The key here is that these 50 billion devices, whether you call them edge devices or the Internet of Things, will be generating tons of data. That means you will have to develop entirely new ways of thinking about how all this flows together. “The capacity at the edge, that ubiquity is going to be transformative in how we think about computation in any business process of ours,” he said. As we generate ever-increasing amounts of data, whether we are talking about public-sector use cases or any business need, that data is going to be the fuel for artificial intelligence, and he sees the sheer amount of it driving new AI use cases.

“Of course when you have that rich computational fabric, one of the things that you can do is create this new asset, which is data and AI. There is not going to be a single application, a single experience that you are going to build, that is not going to be driven by AI, and that means you have to really have the ability to reason over large amounts of data to create that AI,” he said.

Nadella would be more than happy to have his audience take care of all that using Microsoft products, whether Azure compute, database, AI tools or edge computers like the Data Box Edge it introduced in 2018. While Nadella is probably right about the future of computing, all of this could apply to any cloud, not just Microsoft’s.

As computing shifts to the edge, it’s going to have a profound impact on the way we think about technology in general, but it’s probably not going to involve being tied to a single vendor, regardless of how comprehensive their offerings may be.


By Ron Miller

Annual Extra Crunch members can receive $1,000 in AWS credits

We’re excited to announce a new partnership with Amazon Web Services for annual members of Extra Crunch. Starting today, qualified annual members can receive $1,000 in AWS credits. You must also be a startup founder to claim this Extra Crunch community perk.

AWS is the premier service for your application hosting needs, and we want to make sure our community is well-resourced to build. We understand that hosting and infrastructure costs can be a major hurdle for tech startups, and we’re hoping that this offer will help better support your team.

What’s included in the perk:

  • $1,000 in AWS Promotional Credit valid for 1 year
  • 2 months of AWS Business Support
  • 80 credits for self-paced labs

Applications are processed within 7-10 days of being received. Companies may not be eligible for AWS Promotional Credits if they previously received a similar or greater amount of credit. Companies that previously received a lower credit may be eligible to be “topped up” to a higher credit amount.

In addition to the AWS community perk, Extra Crunch members also get access to how-tos and guides on company building, intelligence on what’s happening in the startup ecosystem, stories about founders and exits, transcripts from panels at TechCrunch events, discounts on TechCrunch events, no banner ads on TechCrunch.com and more. To see a full list of the types of articles you get with Extra Crunch, head here.

You can sign up for annual Extra Crunch membership here.

Once you are signed up, you’ll receive a welcome email with a link to the AWS offer. If you are already an annual Extra Crunch member, you will receive an email with the offer at some point today. If you are currently a monthly Extra Crunch subscriber and want to upgrade to annual in order to claim this deal, head over to the “my account” section on TechCrunch.com and click the “upgrade” button.

This is one of several new community perks we’ve been working on for Extra Crunch members. Extra Crunch members also get 20% off all TechCrunch event tickets (email [email protected] with the event name to receive a discount code for event tickets). You can learn more about our events lineup here. You also can read about our Brex community perk here.


By Travis Bernard

Harness launches Continuous Insights to measure software team performance

Jyoti Bansal, CEO and co-founder at Harness, has always been frustrated by the lack of tools to measure software development team performance. Harness is a tool that provides Continuous Delivery as a Service, and its latest offering, Continuous Insights, lets managers know exactly how their teams are performing.

Bansal cites the traditional management maxim that if you can’t measure a process, you can’t fix it; Continuous Insights is designed to provide a way to measure engineering effectiveness. “People want to understand how good their software delivery processes are, and where they are tracking right now, and that’s what this product, Continuous Insights, is about,” Bansal explained.

He says it is the first product on the market to provide this view of performance without requiring weeks or months of data-gathering. “How do you get data around what your current performance is like, and how fast you deliver software, or where the bottlenecks are? That’s where there are currently a lot of visibility gaps,” he said. He adds, “Continuous Insights makes it extremely easy for engineering teams to clearly measure and track software delivery performance with customizable dashboards.”

Harness measures four key metrics, as defined by DevOps Research and Assessment (DORA) in its book Accelerate: deployment frequency, lead time, mean time to recovery and change failure rate. “Any organization that can do a better job with these would really out-innovate their peers and competitors,” he said. Conversely, companies doing badly on these four metrics are more likely to fall behind in the market.
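As a back-of-the-envelope sketch of what those four metrics mean (an illustration, not Harness’s implementation), all of them can be derived from a simple log of deployment records:

```python
from datetime import datetime, timedelta

# Each record: (commit_time, deploy_time, caused_failure, recovered_time)
deployments = [
    (datetime(2019, 9, 1, 9), datetime(2019, 9, 1, 15), False, None),
    (datetime(2019, 9, 2, 10), datetime(2019, 9, 3, 11), True,
     datetime(2019, 9, 3, 13)),
    (datetime(2019, 9, 5, 8), datetime(2019, 9, 5, 12), False, None),
]
window_days = 7  # length of the observation window

# Deployment frequency: deploys per day over the window.
frequency = len(deployments) / window_days

# Lead time: average time from commit to running in production.
lead_time = sum((d - c for c, d, _, _ in deployments),
                timedelta()) / len(deployments)

# Change failure rate: share of deployments that caused a failure.
failure_rate = sum(f for _, _, f, _ in deployments) / len(deployments)

# Mean time to recovery: average time from a failed deploy to recovery.
incidents = [r - d for _, d, f, r in deployments if f]
mttr = sum(incidents, timedelta()) / len(incidents)

print(frequency, lead_time, failure_rate, mttr)
```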

By measuring these four areas, the product not only provides a way to track performance; Bansal also sees it as a way to gamify the metrics, with each team trying to outdo the others on efficiency. While you would think that engineering would be the most data-driven of organizations, he says that up until now it has lacked the tooling. He hopes that Harness users will be able to bring that kind of rigor to engineering.


By Ron Miller

Render challenges the cloud’s biggest vendors with cheaper, managed infrastructure

Render, a participant in the TechCrunch Disrupt SF Startup Battlefield, has a big idea. It wants to take on the world’s biggest cloud vendors by offering developers a cheaper alternative that also removes a lot of the complexity around managing cloud infrastructure.

Render’s goal is to help developers, especially those at smaller companies without large DevOps teams, take advantage of modern development approaches in the cloud. “We are focused on being the easiest and most flexible provider for teams to run any application in the cloud,” CEO and founder Anurag Goel explained.

He says that one of the biggest pain points for developers and startups, even fairly large startups, is that they have to build up a lot of DevOps expertise when they run applications in the cloud. “That means they are going to hire extremely expensive DevOps engineers or consultants to build out the infrastructure on AWS,” he said. Even after they set up the cloud infrastructure, and move applications there, he points out that there is ongoing maintenance around patching, security and identity access management. “Render abstracts all of that away, and automates all of it,” Goel said.

It’s not easy competing with the big players on scale, but he says so far they have been doing pretty well, and plan to move much of their operations to bare metal servers, which he believes will help stabilize costs further.

“Longer term, we have a lot of ideas [about how to reduce our costs], and the simplest thing we can do is to switch to bare metal to reduce our costs pretty much instantly.” He says the way they have built Render will make that easier to do. The plan now is to start moving its services to bare metal in the fourth quarter of this year.

Even though the company only launched in April, it is already seeing great traction. “The response has been great. We’re now doing over 100 million HTTP requests every week. And we have thousands of developers and startups and everyone from people doing small hobby projects to even a major presidential campaign,” he said.

Although he couldn’t share the candidate’s name, he said the campaign was using Render for everything, including its infrastructure for hosting its website and its back-end administration. “Basically all of their cloud infrastructure is on Render,” he said.

Render has raised a $2.2 million seed round and is continuing to add services to the product, including several new services it will announce this week around storage, infrastructure as code and one-click deployment.


By Ron Miller

Confluent adds free tier to Kafka real-time streaming data cloud service

When Confluent launched a cloud service in 2017, it was trying to reduce some of the complexity related to running a Kafka streaming data application. Today, it introduced a free tier to that cloud service. The company hopes to expand its market beyond large technology company customers, and the free tier should make it easier for smaller companies to get started.

The new tier provides up to $50 of service a month for up to three months. Company CEO Jay Kreps says that while $50 might not sound like much, it actually buys hundreds of gigabytes of throughput and makes it easy to get started with the tool.

“We felt like we can make this technology really accessible. We can make it as easy as we can. We want to make it something where you can just get going in seconds, and not have to pay anything to start building an application that uses real time streams of data,” Kreps said.

Kafka has been available as open source since 2011, so it’s been free to download and install and to build applications with, but it still required a ton of compute and engineering resources to pull off. The cloud service was designed to simplify that, and the free tier lets developers get comfortable building a small application without making a large financial investment.

Once they get comfortable working with Kafka on the free version, users can then buy in whatever increments make sense for them, and only pay for what they use. That can be pennies’ worth of Kafka or hundreds of dollars, depending on a customer’s individual requirements. “After free, you can buy 11 cents worth of Kafka or you can buy $10 worth, all the way up to these massive users like Lyft that use Kafka Cloud at huge scale as part of their ride-sharing service,” he said.
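For a sense of what getting started looks like, here’s a minimal producer sketch using the confluent-kafka Python client; the broker address and API credentials are placeholders for values you’d get from a Confluent Cloud account.

```python
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<cluster>.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",     # placeholder
    "sasl.password": "<api-secret>",  # placeholder
})

def on_delivery(err, msg):
    # Invoked once per message to confirm delivery or surface an error.
    if err:
        print("delivery failed:", err)
    else:
        print("delivered to", msg.topic())

# Publish a real-time event; Kafka appends it durably to the topic's log.
producer.produce("rides", key="user-42", value="ride_requested",
                 on_delivery=on_delivery)
producer.flush()  # block until all outstanding messages are delivered
```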

While a free SaaS trial might feel like a common kind of marketing approach, Kreps says for a service like Kafka, it’s actually much more difficult to pull off. “With something like a distributed system where you get a whole chunk of infrastructure, it’s actually technically an extraordinarily difficult thing to provide zero to elastic scale up capabilities. And a huge amount of engineering goes into making that possible,” Kreps explained.

Kafka processes massive streams of data in real time. It was originally developed inside LinkedIn and open sourced in 2011. Confluent launched as a commercial entity on top of the open source project in 2014. In January the company raised $125 million on a $2.5 billion valuation. It has raised over $205 million, according to Crunchbase data.


By Ron Miller

Why is Dropbox reinventing itself?

According to Dropbox CEO Drew Houston, 80% of the product’s users rely on it, at least partially, for work.

It makes sense, then, that the company is refocusing to try and cement its spot in the workplace; to shed its image as “just” a file storage company (in a time when just about every big company has its own cloud storage offering) and evolve into something more immutably core to daily operations.

Earlier this week, Dropbox announced that the “new Dropbox” would be rolling out to all users. It takes the simple, shared folders that Dropbox is known for and turns them into what the company calls “Spaces” — little mini collaboration hubs for your team, complete with comment streams, AI for highlighting files you might need mid-meeting, and integrations into things like Slack, Trello and G Suite. With an overhauled interface that brings much of Dropbox’s functionality out of the OS and into its own dedicated app, it’s by far the biggest user-facing change the product has seen since launching 12 years ago.

Shortly after the announcement, I sat down with Dropbox VP of Product Adam Nash and CTO Quentin Clark. We chatted about why the company is changing things up, why they’re building this on top of the existing Dropbox product, and the things they know they just can’t change.

You can find these interviews below, edited for brevity and clarity.

Greg Kumparak: Can you explain the new focus a bit?

Adam Nash: Sure! I think you know this already, but I run products and growth, so I’m gonna have a bit of a product bias to this whole thing. But Dropbox… one of its differentiating characteristics is really that when we built this utility, this “magic folder”, it kind of went everywhere.


By Greg Kumparak

Battlefield vets StrongSalt (formerly OverNest) announces $3M seed round

StrongSalt, then known as OverNest, appeared at the TechCrunch Disrupt NYC Battlefield in 2016 and announced a product for searching encrypted code, something that remains unusual to this day. Today, the company announced a $3 million seed round led by Valley Capital Partners.

StrongSalt founder and CEO Ed Yu says encryption remains a difficult proposition, and that when you look at the majority of breaches, encryption wasn’t used. He said his company wants to simplify adding encryption to applications, and it came up with a new service that lets developers add encryption in the form of an API. “We decided to come up with what we call an API platform. It’s like infrastructure that allows you to integrate our solution into any existing or any new applications,” he said.

The company’s original idea was to create a product to search encrypted code, but Yu says the tech has much more utility as an API that’s applicable across applications, and that’s why they decided to package it as a service. It’s not unlike Twilio for communications or Stripe for payments, except in this case you can build in searchable encryption.

The searchable part is actually a pretty big deal because, as Yu points out, when you encrypt data it is no longer searchable. “If you encrypt all your data, you cannot search within it, and if you cannot search within it, you cannot find the data you’re looking for, and obviously you can’t really use the data. So we actually solved that problem,” he said.

Developers can add searchable encryption as part of their applications. For customers already using a commercial product, the company’s API actually integrates with popular services, enabling customers to encrypt the data stored there, while keeping it searchable.
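To illustrate the core idea (with a toy sketch invented for this article, not StrongSalt’s actual API), a searchable-encryption index stores keyed tokens rather than plaintext words, so the server can answer queries without ever decrypting the documents:

```python
import hashlib
import hmac
from collections import defaultdict

class ToySearchableEncryption:
    """Minimal symmetric searchable-encryption sketch: the index maps
    keyed word tokens to document ids, so lookups never need plaintext."""

    def __init__(self, key: bytes):
        self.key = key
        self.index = defaultdict(set)  # token -> doc ids (safe to host remotely)

    def _token(self, word: str) -> str:
        # A keyed PRF of the word; without the key, tokens reveal
        # nothing about the underlying words.
        return hmac.new(self.key, word.lower().encode(),
                        hashlib.sha256).hexdigest()

    def index_document(self, doc_id: str, text: str) -> None:
        # In a real system the document body is also encrypted and
        # stored; here we only build the search index.
        for word in set(text.split()):
            self.index[self._token(word)].add(doc_id)

    def search(self, word: str) -> set:
        # The client derives the token; the server just does a lookup.
        return self.index.get(self._token(word), set())

sse = ToySearchableEncryption(key=b"client-held-secret")
sse.index_document("memo-1", "quarterly revenue forecast")
print(sse.search("revenue"))  # {'memo-1'}
```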

“We will offer a storage API on top of Box, AWS S3, Google cloud, Azure — depending on what the customer has or wants. If the customer already has AWS S3 storage, for example, then when they use our API, and after encrypting the data, it will be stored in their AWS repository,” Yu explained.

For those companies who don’t have a storage service, the company is offering one. What’s more, they are using the blockchain to provide a mechanism for the sharing, auditing and managing encrypted data. “We also use the blockchain for sharing data by recording the authorization by the sender, so the receiver can retrieve the information needed to reconstruct the keys in order to retrieve the data. This simplifies key management in the case of sharing and ensures auditability and revocability of the sharing by the sender,” Yu said.

If you’re wondering how the company has been surviving since 2016, while only getting its seed round today, it had a couple of small seed rounds prior to this, and a contract with the US Department of Defense, which replaced the need for substantial earlier funding.

“The DOD was looking for a solution to have secure communication between computers, and they needed to have a way to securely store data, and so we were providing a solution for them,” he said. In fact, this work was what led them to build the commercial API platform they are offering today.

The company, which was founded in 2015, currently has 12 employees spread across the globe.


By Ron Miller

Chef CEO does an about-face, says company will not renew ICE contract

After stating clearly on Friday that he would honor a $95,000 contract with ICE, Chef CEO Barry Crist must have had a change of heart over the weekend. In a blog post this morning, he wrote that the company would not be renewing the contract with ICE after all.

“After deep introspection and dialog within Chef, we will not renew our current contracts with ICE and CBP when they expire over the next year. Chef will fulfill our full obligations under the current contracts,” Crist wrote in the blog post.

He also backed off the seemingly firm position he took on Friday, when he told TechCrunch: “It’s something that we spent a lot of time on, and I want to represent that there are portions of [our company] that do not agree with this, but I as a leader of the company, along with the executive team, made a decision that we would honor the contracts and those relationships that were formed and work with them over time.”

Today, he acknowledged that intense feelings inside the company against the contract led to his decision. The contract began in 2015 under the Obama administration and was aimed at modernizing programming approaches at DHS, but over time, as ICE family separation and deportation policies came under fire, there were calls internally (and later externally) to end the contract. “Policies such as family separation and detention did not yet exist [when we started this contract]. While I and others privately opposed this and various other related policies, we did not take a position despite the recommendation of many of our employees. I apologize for this,” he wrote.

Crist also indicated that the company would be donating the revenue from the contracts to organizations that work with people who have been affected by these policies. It’s similar to the approach Salesforce took when 618 of its employees protested a contract the company has with Customs and Border Protection (CBP). In response to the protests, Salesforce pledged $1 million to organizations helping affected families.

After a tweet last week exposed the contract, protests began on social media and culminated in programmer Seth Vargo removing pieces of his open source code from the repository in protest. Until today, the company sounded firmly committed to fulfilling the contract in spite of the internal and external calls for action and the widespread backlash it was facing.

Vargo told TechCrunch in an interview that he saw the issue in moral terms: “Contrary to Chef’s CEO’s publicly posted response, I do think it is the responsibility of businesses to evaluate how and for what purposes their software is being used, and to follow their moral compass,” he said. Apparently Crist has come around to this point of view. Vargo chose not to comment on the latest development.


By Ron Miller

Programmer who took down open source pieces over Chef ICE contract responds

On Friday afternoon, Chef CEO Barry Crist and CTO Corey Scobie sat down with TechCrunch to defend their contract with ICE after a firestorm on social media called for them to cut ties with the controversial agency. On Sunday, programmer Seth Vargo, the man who removed his open source components, which contributed to a partial shutdown of Chef’s commercial business for a time last week, responded.

While the Chef executives stated that the company was in fact the owner, Vargo made it clear he owned those pieces and he had every right to remove them from the repository. “Chef (the company) was including a third party software package that I owned. It was on my personal repository on GitHub and personal namespace on RubyGems,” he said. He believes that gave him the right to remove it.

Chef CTO Corey Scobie did not agree. “Part of the challenge was that [Vargo] actually didn’t have authorization to remove those assets. And the assets were not his to begin with. They were actually created under a time when that particular individual [Vargo] was an employee of Chef. And so therefore, the assets were Chef’s assets, and not his assets to remove,” he said.

Vargo says that simply isn’t true and Chef misunderstands the licensing terms. “No OSI license or employment agreement requires me to continue to maintain code of my personal account(s). They are conflating code ownership (which they can argue they have) over code stewardship,” Vargo told TechCrunch.

As further proof, Vargo added that he has even included detailed instructions in his will on how to deal with the code he owns when he dies. “I want to make it absolutely clear that I didn’t “hack” into Chef or perform any kind of privilege escalation. The code lived in my personal accounts. Had I died on Thursday, the exact same thing would have happened. My will requests all my social media and code accounts be deleted. If I had deleted my GitHub account, the same thing would have happened,” he explained.

Vargo said that Chef actually was in violation of the open source license when they restored those open source pieces without putting his name on it. “Chef actually violated the Apache license by removing my name, which they later restored in response to public pressure,” he said.

Scobie admitted that the company did forget to include Vargo’s name on the code, but added it back as soon as they heard about the problem. “In our haste to restore one of the objects, we inadvertently removed a piece of metadata that identified him as the author. We didn’t do that knowingly. It was absolutely a mistake in the process of trying to restore customers and our global customer base service. And as soon as we were notified of it, we reverted that change on this specific object in question,” he said.

As for why he took the open source components down, Vargo says he was taking a moral stand against the contract, which dates back to the Obama administration. He also explained that he attempted to contact Chef via multiple channels before taking action. “First, I didn’t know about the history of the contract. I found out via a tweet from @shanley and subsequently verified via the USA spending website. I sent a letter and asked Chef publicly via Twitter to respond multiple times, and I was met with silence. I wanted to know how and why code in my personal repositories was being used with ICE. After no reply for 72 hours, I decided to take action,” he said.

Since then, Chef’s CEO Barry Crist has made it clear he was honoring the contract, which Vargo felt further justified his actions. “Contrary to Chef’s CEO’s publicly posted response, I do think it is the responsibility of businesses to evaluate how and for what purposes their software is being used, and to follow their moral compass,” he said.

Vargo has a long career helping build development tools and contributing to open source. He currently works for Google Cloud. Previous positions include HashiCorp and Chef.


By Ron Miller

Chef CEO says he’ll continue to work with ICE in spite of protests

Yesterday, software development tool maker Chef found itself in the middle of a firestorm after a tweet called it out for doing business with DHS/ICE. Eventually it led to an influential open source developer removing a couple of key pieces of software from the project, bringing down some parts of Chef’s commercial business.

Chef intends to fulfill its contract with ICE, in spite of calls to cancel it. In a blog post published this morning, Chef CEO Barry Crist defended the decision. “I do not believe that it is appropriate, practical, or within our mission to examine specific government projects with the purpose of selecting which U.S. agencies we should or should not do business.”

He stood by the company’s decision this afternoon in an interview with TechCrunch, while acknowledging that it was a difficult and emotional decision for everyone involved. “For some portion of the community, and some portion of our company, this is a super, super-charged lightning rod, and this has been very difficult. It’s something that we spent a lot of time on, and I want to represent that there are portions of [our company] that do not agree with this, but I as a leader of the company, along with the executive team, made a decision that we would honor the contracts and those relationships that were formed and work with them over time,” he said.

He added, “I think our challenge as leadership right now is how do we collectively navigate through times like this, and through emotionally charged issues like the ICE contract.”

The deal with ICE, a $95,000-a-year contract for software development tools, dates back to the Obama administration, when the then-DHS CIO wanted to move the department toward more modern agile/DevOps development workflows, according to Crist.

He said that for people who might think it’s a purely economic decision, the money represents a fraction of the company’s more than $50 million in annual revenue (according to Crunchbase data), but he says it’s about a long-term business arrangement with the government that transcends individual administration policies. “It’s not about the $100,000, it’s about decisions we’ve made to engage the government. And I appreciate that not everyone in our world feels the same way or would make that same decision, but that’s the decision that we made as a leadership team,” Crist said.

Shortly after word of Chef’s ICE contract appeared on Twitter, according to a report in The Register, former Chef employee Seth Vargo removed a couple of key pieces of open source software from the repository, telling The Register that “software engineers have to operate by some kind of moral compass.” This move brought down part of Chef’s commercial software and it took them 24 hours to get those services fully restored, according to Chef CTO Corey Scobie.

Crist says he wants to be clear that his decision does not mean he supports current ICE policies. “I certainly don’t want to be viewed as I’m taking a strong stand in support of ICE. What we’re taking a strong stand on is our consistency with working with our customers, and again, our work with DHS started in the previous administration on things that we feel very good about,” he said.


By Ron Miller