Mirantis releases its first major update to Docker Enterprise

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year, and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new is Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, as well as support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.
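For context on what the device-plugin route means in practice: NVIDIA’s Kubernetes device plugin advertises GPUs as a schedulable resource, so workloads simply request them in their pod specs. Below is a minimal, hypothetical sketch using the open-source Kubernetes Python client; the pod name, image and namespace are placeholders, and this is generic Kubernetes usage rather than Mirantis-specific tooling.

```python
# Hypothetical sketch: requesting a GPU from a cluster where NVIDIA's device plugin
# is installed, using the open-source Kubernetes Python client. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:11.0-base",     # illustrative CUDA image
                command=["nvidia-smi"],            # print the GPU the pod was given
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # resource exposed by the device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```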

In addition to the product updates, Mirantis is also launching three new support options for its customers, which include, for example, 24×7 coverage for all support cases, enhanced SLAs for remote managed operations, designated customer success managers and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from being a high-flying OpenStack platform company to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90 percent of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters and become profitable in the last quarter, despite the COVID-19 pandemic.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and that they genuinely like the Docker Enterprise platform for its infrastructure independence, developer focus, security features and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one salesforce, with record results. So things have been extremely busy, but good and exciting.”


By Frederic Lardinois

11 VCs share their thoughts on enterprise startup trends and opportunities

Compared to other tech firms, enterprise companies have held up well during the pandemic.

If anything, the problems enterprises were facing prior to the economic downturn have become even more pronounced; if you were thinking about moving to the cloud or just dabbling in it, you’re probably accelerating that motion. If you were trying to move off of legacy systems, that has become even more imperative. And if you were attempting to modernize processes and workflows, whether engineer- and developer-related, or across other parts of the organization, chances are good that you are giving that a much closer look.

We won’t be locked down forever and employees will eventually return to offices, but it’s likely that many companies will take the lessons they learned during this era and put them to work inside their organizations. Startups are uniquely positioned to help companies solve these new modern kinds of problems, much more so than a legacy vendor (which could be itself trying to update its approach).

Venture capitalists certainly understand all of these dynamics and are always dutifully searching for startups that could help companies shift to a digital future more quickly.

We spoke to 11 of them to take their pulse and learn more about the trends that are exciting them, what they look for in an investment opportunity and which parts of the enterprise are ripe for startups to impact:

  • Max Gazor, CRV
  • Navin Chaddha, Mayfield
  • Matt Murphy, Menlo Ventures
  • Soma Somasegar, Madrona Venture Group
  • Jon Lehr, Work-Bench
  • Steve Herrod, General Catalyst
  • Jai Das, Sapphire Ventures
  • Ed Sim, Boldstart Ventures
  • Martin Casado, Andreessen Horowitz
  • Vas Natarajan, Accel

Max Gazor, CRV

What trends are you most excited about in the enterprise from an investing perspective?

It’s abundantly clear that cloud software markets are bigger than most people anticipated. We continue to invest heavily there as we have been doing for the last decade.

Specifically, the most exciting trend right now in enterprise is low-code software development. I’m on the board of Airtable, where I led the Series A and co-led the Series B investments, so I see first hand how this will play out. We are heading toward a future where hundreds of millions of people will be empowered to compose software that fits their own needs. Imagine the productivity and transformation that will unlock in the world! It may be one of the largest market opportunities we have seen since cloud computing.


By Ron Miller

Microsoft launches Project Bonsai, its new machine teaching service for building autonomous systems

At its Build developer conference, Microsoft today announced that Project Bonsai, its new machine teaching service, is now in public preview.

If that name sounds familiar, it’s probably because you remember that Microsoft acquired Bonsai, a company that focuses on machine teaching, back in 2018. Bonsai combined simulation tools with different machine learning techniques to build a general-purpose deep reinforcement learning platform, with a focus on industrial control systems.

It’s maybe no surprise then that Project Bonsai, too, has a similar focus on helping businesses teach and manage their autonomous machines. “With Project Bonsai, subject-matter experts can add state-of-the-art intelligence to their most dynamic physical systems and processes without needing a background in AI,” the company notes in its press materials.

“The public preview of Project Bonsai builds on top of the Bonsai acquisition and the autonomous systems private preview announcements made at Build and Ignite of last year,” a Microsoft spokesperson told me.

Interestingly, Microsoft notes that Project Bonsai is only the first block of a larger vision to help its customers build these autonomous systems. The company also stresses the advantages of machine teaching over other machine learning approaches, especially the fact that it’s less of a black box, which makes it easier for developers and engineers to debug systems that don’t work as expected.

In addition to Bonsai, Microsoft also today announced Project Moab, an open-source balancing robot that is meant to help engineers and developers learn the basics of how to build a real-world control system. The idea here is to teach the robot to keep a ball balanced on top of a platform that is held by three arms.

Potential users will be able to either 3D print the robot themselves or buy one when it goes on sale later this year. There is also a simulation, developed by MathWorks, that developers can try out immediately.

“You can very quickly take it into areas where doing it in traditional ways would not be easy, such as balancing an egg instead,” said Mark Hammond, Microsoft General Manager for Autonomous Systems. “The point of the Project Moab system is to provide that playground where engineers tackling various problems can learn how to use the tooling and simulation models. Once they understand the concepts, they can apply it to their novel use case.”


By Frederic Lardinois

Gremlin brings chaos engineering to Windows platform

Chaos engineering is about helping companies set up worst-case scenarios and testing them to see what causes their systems to fall over, but up until now, it has mostly been for teams running Linux servers. Gremlin, the startup that offers Chaos Engineering as a Service, released a new tool to give engineers working on Microsoft Windows systems access to a similar set of experiments.

Gremlin co-founder and CEO Kolton Andrus says that the four-year-old company started with Linux support, then moved to Docker containers and Kubernetes, but there has been significant demand for Windows support, and the company decided it was time to build this into the platform too.

“The same types of failure can occur, but it happens in different ways on different operating systems. And people need to be able to respond to that. So it’s been the blind spot, and we [decided to] prioritize the types of experiments that people [running Windows] need the most,” he said.

He added, “What we’re launching here is that core set of capabilities for customers so they can go out and get started right away.”

To that end, the Gremlin Windows agent lets engineers run shutdown, CPU, disk, I/O, memory and latency attacks. It’s worth noting that a third of the world’s servers still run on Windows, and the ability to test those systems in this way has mostly been confined to companies that could afford to build their own tooling in-house.

What Gremlin is doing for Windows is what it has done for the other supported systems. It’s enabling any company to take advantage of chaos engineering tools to help prevent system failure. During the pandemic, as some systems have become flooded with traffic, having this ability to experiment with different worst-case scenarios and figuring out what brings your system to its knees is more important than ever.

The Gremlin Windows agent not only gives the company a wider range of operating system support, it also broadens its revenue base, which is also increasingly important at a time of economic uncertainty.

The company, which is based in the San Francisco area, was founded in 2016 and has raised over $26 million, according to Crunchbase data. It raised the bulk of that, $18 million, in 2018.


By Ron Miller

Microsoft launches Azure Synapse Link to help enterprises get faster insights from their data

At its Build developer conference, Microsoft today announced Azure Synapse Link, a new enterprise service that allows businesses to analyze their data faster and more efficiently, using an approach that’s generally called ‘hybrid transaction/analytical processing’ (HTAP). That’s a mouthful, but it essentially means enterprises can use the same system for both analytical and transactional workloads. Traditionally, enterprises had to make a tradeoff between building a single system for both, which was often highly over-provisioned, or maintaining separate systems for transactional and analytics workloads.

Last year, at its Ignite conference, Microsoft announced Azure Synapse Analytics, an analytics service that combines analytics and data warehousing to create what the company calls “the next evolution of Azure SQL Data Warehouse.” Synapse Analytics brings together data from Microsoft’s services and those from its partners and makes it easier to analyze.

“One of the key things, as we work with our customers on their digital transformation journey, there is an aspect of being data-driven, of being insights-driven as a culture, and a key part of that really is that once you decide there is some amount of information or insights that you need, how quickly are you able to get to that? For us, time to insight and a secondary element, which is the cost it takes, the effort it takes to build these pipelines and maintain them with an end-to-end analytics solution, was a key metric we have been observing for multiple years from our largest enterprise customers,” said Rohan Kumar, Microsoft’s corporate VP for Azure Data.

Synapse Link takes the work Microsoft did on Synapse Analytics a step further by removing the barriers between Azure’s operational databases and Synapse Analytics, so enterprises can immediately get value from the data in those databases without going through a data warehouse first.

“What we are announcing with Synapse Link is the next major step in the same vision that we had around reducing the time to insight,” explained Kumar. “And in this particular case, a long-standing barrier that exists today between operational databases and analytics systems is these complex ETL (extract, transform, load) pipelines that need to be set up just so you can do basic operational reporting or where, in a very transactionally consistent way, you need to move data from your operational system to the analytics system, because you don’t want to impact the performance of the operational system in any way because that’s typically dealing with, depending on the system, millions of transactions per second.”

ETL pipelines, Kumar argued, are typically expensive and hard to build and maintain, yet enterprises are now building new apps — and maybe even line of business mobile apps — where any action that consumers take and that is registered in the operational database is immediately available for predictive analytics, for example.

From the user perspective, enabling this only takes a single click to link the two, while it removes the need for managing additional data pipelines or database resources. That, Kumar said, was always the main goal for Synapse Link. “With a single click, you should be able to enable real-time analytics on your operational data in ways that don’t have any impact on your operational systems, so you’re not using the compute part of your operational system to do the query. You actually have to transform the data into a columnar format, which is more adaptable for analytics, and that’s really what we achieved with Synapse Link.”

Because traditional HTAP systems on-premises typically share their compute resources with the operational database, those systems never quite took off, Kumar argued. In the cloud, with Synapse Link, though, that impact doesn’t exist because you’re dealing with two separate systems. Now, once a transaction gets committed to the operational database, the Synapse Link system transforms the data into a columnar format that is more optimized for the analytics system — and it does so in real time.
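To make the HTAP idea concrete, here is a deliberately simplified, purely conceptual sketch (it is not the Synapse Link API, and every name in it is made up): committed rows live in a row-oriented operational store and are copied into a separate column-oriented copy, and analytical queries scan only that copy, so they never consume the operational system’s compute.

```python
# Conceptual illustration of the HTAP pattern described above -- not Azure code.
from collections import defaultdict

row_store = []                    # operational side: one dict per committed row
column_store = defaultdict(list)  # analytical copy: one list of values per column

def replicate_to_columnar(row: dict) -> None:
    """Re-shape a committed row into the column-oriented analytical copy."""
    for column, value in row.items():
        column_store[column].append(value)

def commit(row: dict) -> None:
    """Commit a transaction to the operational store, then replicate it."""
    row_store.append(row)
    replicate_to_columnar(row)    # Synapse Link does this continuously and automatically

# Transactional writes...
commit({"order_id": 1, "amount": 42.0})
commit({"order_id": 2, "amount": 17.5})

# ...and an analytical query that scans a single column without touching row_store.
print(sum(column_store["amount"]))  # 59.5
```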

For now, Synapse Link is only available in conjunction with Microsoft’s Cosmos DB database. As Kumar told me, that’s because that’s where the company saw the highest demand for this kind of service, but you can expect the company to add support for Azure SQL, Azure Database for PostgreSQL and Azure Database for MySQL in the future.


By Frederic Lardinois

Venafi acquires Jetstack, the startup behind the cert-manager Kubernetes certificate controller

It seems that we are in the middle of a mini acquisition spree for Kubernetes startups, specifically those that can help with Kubernetes security. In the latest development, Venafi, a vendor of certificate and key management for machine-to-machine connections, is acquiring Jetstack, a UK startup that helps enterprises migrate to and work within Kubernetes and cloud-based ecosystems, and that has also been behind the development of cert-manager, a popular open-source, Kubernetes-native certificate management controller.

Financial terms of the deal, which is expected to close in June of this year, have not been disclosed, but Jetstack has been working with Venafi to integrate its services and had a strategic investment from Venafi’s Machine Identity Protection Development Fund.

Venafi is part of the so-called “Silicon Slopes” cluster of startups in Utah. It has raised about $190 million from investors that include TCV, Silver Lake and Intel Capital and was last valued at $600 million. That was in 2018, when it raised $100 million, so now it’s likely Venafi is worth more, especially considering its customers, which include the top five U.S. health insurers; the top five U.S. airlines; the top four credit card issuers; three out of the top four accounting and consulting firms; four of the top five U.S., U.K., Australian and South African banks; and four of the top five U.S. retailers.

For the time being, the two organizations will continue to operate separately, and cert-manager — which has hundreds of contributors and millions of downloads — will continue on as before, with a public release of version 1 expected in the June-July timeframe.

The deal underscores not just how quickly Kubernetes-based containers have gained momentum and critical mass in the enterprise IT landscape, in particular around digital transformation, but also the need to provide better security services around them at speed and at scale. The deal comes just one day after VMware announced that it was acquiring Octarine, another Kubernetes security startup, to fold into Carbon Black (an acquisition it made last year).

“Nowadays, business success depends on how quickly you can respond to the market,” said Matt Barker, CEO and co-founder of Jetstack. “This reality led us to re-think how software is built and Kubernetes has given us the ideal platform to work from. However, putting speed before security is risky. By joining Venafi, Jetstack will give our customers a chance to build fast while acting securely.”

To be clear, Venafi had been offering Kubernetes integrations prior to this — and Venafi and Jetstack have worked together for two years. But acquiring Jetstack will give it direct, in-house expertise to speed up development and deployment of better tools to meet the challenges of a rapidly expanding landscape of machines and applications, all of which require unique certificates to connect securely.

“In the race to virtualize everything, businesses need faster application innovation and better security; both are mandatory,” said Jeff Hudson, CEO of Venafi, in a statement. “Most people see these requirements as opposing forces, but we don’t. We see a massive opportunity for innovation. This acquisition brings together two leaders who are already working together to accelerate the development process while simultaneously securing applications against attack, and there’s a lot more to do. Our mutual customers are urgently asking for more help to solve this problem because they know that speed wins, as long as you don’t crash.”

The crux of the issue is the sheer volume of machines that are being used in computing environments, thanks to the growth of Kubernetes clusters, cloud instances, microservices and more, with each machine requiring a unique identity to connect, communicate, and execute securely, Venafi notes, with disruptions or misfires in the system leaving holes for security breaches.

Jetstack’s approach to information security came by way of its expertise in Kubernetes, developing cert-manager specifically so that its developer customers could easily create and maintain certificates for their networks.

“At Jetstack we help customers realize the benefits of Kubernetes and cloud native infrastructure, and we see transformative results to businesses firsthand,” said Matt Bates, CTO and co-founder of Jetstack, in a statement. “We developed cert-manager to make it easy for developers to scale Kubernetes with consistent, secure, and declared-as-code machine identity protection. The project has been a huge hit with the community and has been adopted far beyond our expectations. Our team is thrilled to join Venafi so we can accelerate our plans to bring machine identity protection to the cloud native stack, grow the community and contribute to a wider range of projects across the ecosystem.” Both Bates and Barker will report to Venafi’s Hudson and join the bigger company’s executive team.


By Ingrid Lunden

VMware to acquire Kubernetes security startup Octarine and fold it into Carbon Black

VMware announced today that it intends to buy early-stage Kubernetes security startup Octarine and fold it into Carbon Black, a security company it bought last year for $2.1 billion. The company did not reveal the price of today’s acquisition.

According to a blog post announcing the deal from Patrick Morley, general manager and senior vice president at VMware’s Security Business Unit, Octarine should fit in with what Carbon Black calls its “intrinsic security strategy” — that is, protecting content and applications wherever they live. In the case of Octarine, it’s cloud native containers in Kubernetes environments.

“Acquiring Octarine enables us to advance intrinsic security for containers (and Kubernetes environments), by embedding the Octarine technology into the VMware Carbon Black Cloud, and via deep hooks and integrations with the VMware Tanzu platform,” Morley wrote in a blog post.

This also fits in with VMware’s Kubernetes strategy: the company previously purchased Heptio, an early Kubernetes company started by Craig McLuckie and Joe Beda, two of the people who helped develop Kubernetes while at Google before starting their own company.

We covered Octarine last year when it released a couple of open-source tools to help companies define their Kubernetes security parameters. As we quoted head of product Julien Sobrier at the time:

“Kubernetes gives a lot of flexibility and a lot of power to developers. There are over 30 security settings, and understanding how they interact with each other, which settings make security worse, which make it better, and the impact of each selection is not something that’s easy to measure or explain.”

As for the startup, it now gets folded into VMware’s security business. While the CEO tried to put a happy face on the acquisition in a blog post, it seems its days as an independent entity are over. “VMware’s commitment to cloud native computing and intrinsic security, which have been demonstrated by its product announcements and by recent acquisitions, makes it an ideal home for Octarine,” the company CEO Shemer Schwarz wrote in the post.

Octarine was founded in 2017 and has raised $9 million, according to Pitchbook data.


By Ron Miller

FeaturePeek moves beyond Y Combinator with $1.8M seed

FeaturePeek’s founders graduated from Y Combinator in Summer 2019, which for an early stage startup must seem like a million years ago right now. Despite the current conditions though, the company announced a $1.8 million seed investment today.

The round was led by Matrix Partners, with some unnamed angel investors also participating.

The startup has built a solution to allow teams to review front-end designs throughout the development process instead of waiting until the end when the project has been moved to staging, co-founder Eric Silverman explained.

“FeaturePeek is designed to give front-end capabilities that enable developers to get feedback from all their different stakeholders at every stage in the development process and really fill in the missing gaps of the review cycle,” he said.

He added, “Right now, there’s no dedicated place to give feedback on that new work until it hits their staging environment, and so we’ll spin up ad hoc deployment previews, either on commit or on pull requests, and those fully running environments can be shared with the team. On top of that, we have our overlay where you can file bugs, annotate screenshots, record video or leave comments.”

Since last summer, the company has remained lean, with three full-time employees, but it has continued to build out the product. Alongside the funding, the company announced a free command-line version of the product for individual developers, complementing the team product it has been building since its Y Combinator days.

Ilya Sukhar, partner at Matrix Partners, says that as a former engineer he had experienced this kind of problem firsthand, and he knew that there was a lack of tooling to help. That’s what attracted him to FeaturePeek.

“I think FeaturePeek is kind of a company that’s trying to change that and try to bring all of these folks together in an environment where they can review running code in a way that really wasn’t possible before, and I certainly have been frustrated on both ends of this where as an engineer, you’re kind of like okay I wrote it, are you ever going to look at it,” he said.

Sukhar recognizes these are trying times to launch a startup, and nobody really knows how things are going to play out, but he encourages these companies not to get too caught up in the macro view at this stage.

Silverman knows that he needs to adapt his go-to-market strategy for the times, and he says the founders are making a concerted effort to listen to users, improve the product and find better ways to communicate with their target audience.


By Ron Miller

Microsoft partners with Redis Labs to improve its Azure Cache for Redis

For a few years now, Microsoft has offered Azure Cache for Redis, a fully managed caching solution built on top of the open-source Redis project. Today, it is expanding this service by adding Redis Enterprise, Redis Labs’ commercial offering, to its platform. It’s doing so in partnership with Redis Labs, and while Microsoft will offer some basic support for the service, Redis Labs will handle most of the software support itself.

Julia Liuson, Microsoft’s corporate VP of its developer tools division, told me that the company wants to be seen as a partner to open-source companies like Redis Labs, which was among the first companies to change its license to prevent cloud vendors from commercializing and repackaging its free code without contributing back to the community. Last year, Redis Labs partnered with Google Cloud to bring its own fully managed service to that platform, so maybe it’s no surprise that we are now seeing Microsoft make a similar move.

Liuson tells me that with this new tier for Azure Cache for Redis, users will get a single bill and native Azure management, as well as the option to deploy natively on SSD flash storage. The native Azure integration should also make it easier for developers on Azure to integrate Redis Enterprise into their applications.

It’s also worth noting that Microsoft will support Redis Labs’ own Redis modules, including RediSearch, a Redis-powered search engine, as well as RedisBloom and RedisTimeSeries, which provide support for new datatypes in Redis.

“For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications,” says Liuson. “We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.”
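Because the managed tier is built on Redis, it speaks the standard Redis protocol, so the distributed-cache use case Liuson mentions looks the same as it does against open-source Redis. Below is a minimal cache-aside sketch with the redis-py client; the hostname, port and access key are placeholders for whatever endpoint the managed service hands you, not values taken from Microsoft’s documentation.

```python
# Minimal cache-aside sketch against a Redis endpoint; hostname and key are placeholders.
import redis

r = redis.Redis(
    host="example.redis.cache.windows.net",  # placeholder managed-service hostname
    port=6380,                               # placeholder TLS port
    password="<access-key>",                 # placeholder credential
    ssl=True,
)

def load_profile_from_database(user_id: str) -> bytes:
    """Stand-in for a slow lookup against the system of record."""
    return f"profile-data-for-{user_id}".encode()

def get_profile(user_id: str) -> bytes:
    """Serve from the cache when possible; otherwise fetch, cache and return."""
    cached = r.get(f"profile:{user_id}")
    if cached is not None:
        return cached
    profile = load_profile_from_database(user_id)
    r.set(f"profile:{user_id}", profile, ex=300)  # expire after five minutes
    return profile
```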


By Frederic Lardinois

MemSQL raises $50M in debt facility for its real-time database platform

As a number of startups get back into fundraising in earnest, one that is on a growth tear has closed a substantial debt round to hold on to more equity in the company as it inches to being cash-flow positive. MemSQL — the relational, real-time database used by organisations to query and analyse large pools of fast-moving data across cloud, hybrid and on-premise environments (customers include major banks, telecoms carriers, ride sharing giants, and even those building COVID-19 tracing apps) — has secured $50 million in debt, money that CEO Raj Verma says should keep it “well capitalised for the next several years” and puts it on the road to an IPO or potential private equity exit.

The funding is coming from Hercules Capital, which has some $4.3 billion under management and has an interesting history. On the one hand, it’s invested in companies that include Facebook (this was back in 2012, when Facebook was still a startup), but it’s also been in the news because its CEO was one of the high fliers accused in the college cheating scandal of 2019.

MemSQL does not disclose its valuation but Verma confirmed it is now significantly higher than it was at its last equity raise of $30 million in 2018 when it was valued at about $270 million, per data from PitchBook.

Why raise debt rather than equity? The company is already backed by a long list of impressive investors starting with Y Combinator, and including Accel, Data Collective, DST, GV (one of Google-owner Alphabet’s venture capital vehicles), Khosla, IA Ventures, In-Q-Tel (the CIA-linked VC) and many more. Verma said in an interview with TechCrunch that the startup had started to look at this fundraise before the pandemic hit.

It had “multiple options to raise an equity round” from existing and new investors, which quickly produced some eight term sheets. Ultimately, it took the debt route mainly because it didn’t need the capital badly enough to give up equity, and terms “are favourable right now,” making a debt facility the best option. “Our cash burn is in the single digits,” he said, and “we still have independence.”

The company has been on a roll in recent times. It grew 75% last year (down from 200% in 2018), with cash burn of $8-9 million in that period, and now has annual recurring revenue of $40 million. Customers include three of the world’s biggest banks, which use MemSQL to power all of their algorithmic trading, major telecoms carriers, mapping providers (Verma declined to comment on whether investor Google is a customer), and more. While Verma today declines to talk about specific names, previously named customers have included Uber, Akamai, Pinterest, Dell EMC and Comcast.

And if the current health pandemic has put a lot of pressure on some companies in the tech world, MemSQL is one of the group that’s been seeing a strong upswing in business.

Verma noted that this is down to multiple reasons. First, its customer base has not had a strong crossover with sectors like travel that have been hit hard by the economic slowdown and the push to keep people indoors. Second, its platform has actually proven to be useful precisely in the present moment, with companies now being forced to reckon with legacy architecture and move to hybrid or all-cloud environments just to do business. And third, some customers, such as True Digital, are building contact-tracing applications on MemSQL specifically to help address the spread of the novel coronavirus.

The company plays in a well-crowded area that includes big players like Oracle and SAP. Verma said that its tech stands apart because of its hybrid architecture and because it can provide speed improvements of some 30x with technology that, as we have noted before, allows users to push millions of events per day into the service while querying the records in real time.

It also helps to have competitive pricing. “We are a favourable alternative,” Verma said.

“This structured investment represents a significant commitment from Hercules and provides an example of the breadth of our platform and our ability to finance growth-orientated, institutionally-backed technology companies at various stages. We are impressed with the work that the MemSQL management team has accomplished operationally and excited to begin our partnership with one of the promising companies in the database market,” said Steve Kuo, senior managing director technology group head for Hercules, in a statement.


By Ingrid Lunden

Health APIs usher in the patient revolution we have been waiting for

If you’ve ever been stuck using a health provider’s clunky online patient portal or had to make multiple calls to transfer medical records, you know how difficult it is to access your health data.

In an era when control over personal data is more important than ever before, the healthcare industry has notably lagged behind — but that’s about to change. This past month, the U.S. Department of Health and Human Services (HHS) published two final rules around patient data access and interoperability that will require providers and payers to create APIs that can be used by third-party applications to let patients access their health data.

This means you will soon have consumer apps that will plug into your clinic’s health records and make them viewable to you on your smartphone.

Critics of the new rulings have voiced privacy concerns over patient health data leaving internal electronic health record (EHR) systems and being surfaced to the front lines of smartphone apps. Vendors such as Epic and many health providers have publicly opposed the HHS rulings, while others, such as Cerner, have been supportive.

While that debate has been heated, the new HHS rulings represent a final decision that follows initial rules proposed a year ago. It’s a multi-year win for advocates of greater data access and control by patients.

The scope of what this could lead to — more control over your health records, and apps on top of it — is immense. Apple has been making progress with its Health Records app for some time now, and other technology companies, including Microsoft and Amazon, have undertaken healthcare initiatives with both new apps and cloud services.

It’s not just big tech that is getting in on the action: startups are emerging as well, such as Commure and Particle Health, which help developers work with patient health data. The unlocking of patient health data could be as influential as the unlocking of banking data by Plaid, which powered the growth of multiple fintech startups, including Robinhood, Venmo and Betterment.

What’s clear is that the HHS rulings are here to stay. In fact, many of the provisions require providers and payers to provide partial data access within the next 6-12 months. With this new market opening up, though, it’s time for more health entrepreneurs to take a deeper look at what patient data may offer in terms of clinical and consumer innovation.

The incredible complexity of today’s patient data systems


By Walter Thompson

Enterprise companies find MLOps critical for reliability and performance

Enterprise startups UiPath and Scale have drawn huge attention in recent years from companies looking to automate workflows, from RPA (robotic process automation) to data labeling.

What’s been overlooked in the wake of such workflow-specific tools has been the base class of products that enterprises are using to build the core of their machine learning (ML) workflows, and the shift in focus toward automating the deployment and governance aspects of the ML workflow.

That’s where MLOps comes in, and its popularity has been fueled by the rise of core ML workflow platforms such as Boston-based DataRobot. The company has raised more than $430 million and reached a $1 billion valuation this past fall serving this very need for enterprise customers. DataRobot’s vision has been simple: enabling a range of users within enterprises, from business and IT users to data scientists, to gather data and build, test and deploy ML models quickly.

Founded in 2012, the company has quietly amassed a customer base that boasts more than a third of the Fortune 50, with triple-digit yearly growth since 2015. DataRobot’s top four industries include finance, retail, healthcare and insurance; its customers have deployed over 1.7 billion models through DataRobot’s platform. The company is not alone, with competitors like H2O.ai, which raised a $72.5 million Series D led by Goldman Sachs last August, offering a similar platform.

Why the excitement? As artificial intelligence pushed into the enterprise, the first step was to go from data to a working ML model, which started with data scientists doing this manually, but today is increasingly automated and has become known as “auto ML.” An auto-ML platform like DataRobot’s can let an enterprise user quickly auto-select features based on their data and auto-generate a number of models to see which ones work best.
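As a rough, generic illustration of that auto-ML loop (a scikit-learn sketch of the pattern, not DataRobot’s platform or API), the core idea is to fit several candidate model families on the same data and keep whichever scores best:

```python
# Generic auto-ML-style model selection sketch (illustrative only, not DataRobot's API).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "gradient_boosting": GradientBoostingClassifier(),
}

# Score every candidate with 5-fold cross-validation and keep the best performer.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(f"best model: {best} (accuracy ~{scores[best]:.3f})")
```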

As auto ML became more popular, improving the deployment phase of the ML workflow has become critical for reliability and performance — and so enters MLOps. It’s quite similar to the way that DevOps has improved the deployment of source code for applications. Companies such as DataRobot and H2O.ai, along with other startups and the major cloud providers, are intensifying their efforts on providing MLOps solutions for customers.

We sat down with DataRobot’s team to understand how their platform has been helping enterprises build auto-ML workflows, what MLOps is all about and what’s been driving customers to adopt MLOps practices now.

The rise of MLOps


By Walter Thompson

Confluent introduces scale on demand for Apache Kafka cloud customers

We find ourselves in a time when certain businesses are being asked to scale to levels they never imagined. Sometimes that increased usage comes in bursts, which means you don’t want to pay for permanent extra capacity you might not always need. Today, Confluent introduced a new scale on demand feature for its Apache Kafka cloud service that will scale up and down as needed automatically.

Confluent CEO Jay Kreps says that elasticity is arguably one of the most important features of cloud computing, and this ability to scale up and down is one of the primary factors that has attracted organizations to the cloud. By automating that capability, Confluent is giving DevOps teams one less major thing to worry about.

“This new functionality allows users to dynamically scale Kafka and the other key ecosystem components like KSQL and Kafka Connect. This is a key missing capability that no other service provides,” Kreps explained.

He points out that this is particularly relevant right now, with people working at home. Systems are being taxed more than perhaps ever before, and this automated elasticity is going to come in handy, making the service more cost-effective and efficient than was previously possible.

“These capabilities let customers add capacity as they need it, or scale down to save money, all without having to pre-plan in advance,” he said.

The new elasticity feature in Confluent is part of a series of updates to the platform, known as Project Metamorphosis, that Confluent is planning to roll out throughout this year on a regular basis.

“Through the rest of the year we’ll be doing a sequence of releases that bring the capabilities of modern cloud data systems to the Kafka ecosystem in Confluent Cloud. We’ll be announcing one major capability each month, starting with elasticity,” he said.

Kreps first announced Metamorphosis last month when the company also announced a massive $250 million funding round on a $4.5 billion valuation. In spite of the current economic situation, driven by the ongoing pandemic, Confluent plans to continue to build out the product, as today’s announcement attests.


By Ron Miller

GitHub gets a built-in IDE with Codespaces, discussion forums and more

Under different circumstances, GitHub would be hosting its Satellite conference in Paris this week. Like so many other events, GitHub decided to switch Satellite to a virtual event, but that isn’t stopping the Microsoft-owned company from announcing quite a bit of news this week.

The highlight of GitHub’s announcement is surely the launch of GitHub Codespaces, which gives developers a full cloud-hosted development environment based on Microsoft’s VS Code editor. If that name sounds familiar, that’s likely because Microsoft itself rebranded Visual Studio Code Online to Visual Studio Codespaces a week ago — and GitHub is essentially taking the same concepts and technology and integrating them directly into its service. If you’ve seen VS Online/Codespaces before, the GitHub environment will look very similar.

“Contributing code to a community can be hard. Every repository has its own way of configuring a dev environment, which often requires dozens of steps before you can write any code,” writes Shanku Niyogi, GitHub’s SVP of Product, in today’s announcement. “Even worse, sometimes the environment of two projects you are working on conflict with one another. GitHub Codespaces gives you a fully-featured cloud-hosted dev environment that spins up in seconds, directly within GitHub, so you can start contributing to a project right away.”

Currently, GitHub Codespaces is in beta and available for free. The company hasn’t set any pricing for the service once it goes live, but Niyogi says the pricing will look similar to that of GitHub Actions, where it charges for computationally intensive tasks like builds. Microsoft currently charges VS Codespaces users by the hour and depending on the kind of virtual machine they are using.

The other major new feature the company is announcing today is GitHub Discussions. These are essentially discussion forums for a given project. While GitHub already allowed for some degree of conversation around code through issues and pull requests, Discussions are meant to enable unstructured threaded conversations. They also lend themselves to Q&As, and GitHub notes that they can be a good place for maintaining FAQs and other documents.

Currently, Discussions are in beta for open-source communities and will be available for other projects soon.

On the security front, GitHub is also announcing two new features: code scanning and secret scanning. Code scanning checks your code for potential security vulnerabilities. It’s powered by CodeQL and free for open-source projects. Secret scanning is now available for private repositories (a similar feature has been available for public projects since 2018). Both of these features are part of GitHub Advanced Security.

As for GitHub’s enterprise customers, the company today announced the launch of Private Instances, a new fully managed service for enterprise customers that want to use GitHub in the cloud but need to know that their code is fully isolated from the rest of GitHub’s users. “Private Instances provides enhanced security, compliance, and policy features including bring-your-own-key encryption, backup archiving, and compliance with regional data sovereignty requirements,” GitHub explains in today’s announcement.


By Frederic Lardinois

Sleuth raises $3M Seed to bring order to continuous deployment

Sleuth, an early stage startup from three former Atlassian employees, wants to bring some much-needed order to the continuous delivery process. Today, the company announced it has raised a $3 million seed round.

CRV led the round with participation from angel investors from New Relic, Atlassian and LaunchDarkly.

“Sleuth is a deployment tracker built to solve the confusion that comes when companies have adopted continuous delivery,” says CEO and co-founder Dylan Etkin. The company’s founders recognized that more and more companies were moving to continuous delivery, and they wanted to make it easier to track those deployments and figure out where the bottlenecks were.

He says that typically, on any given DevOps team, there are perhaps two or three people who know how the entire system works, and with more people spread out now, it’s more important than ever that everyone has that capability. Etkin says Sleuth lets everyone on the team understand the underlying complexity of the delivery system with the goal of helping them understand the impact of a given change they made.

“Sleuth is trying to make that better by targeting the developer and really giving them a communications platform, so that they can discuss the [tools] and understand what is changing and who has changed what. And then more importantly, what is the impact of my change,” he explained.


The company was founded by three Atlassian alumni — Etkin, along with Michael Knighten and Don Brown — all of whom were among the first 50 employees at the now tremendously successful development tools company.

That kind of pedigree tends to get the attention of investors like CRV, but it is also telling that people from three companies, including the founders’ former employer, saw enough potential to invest in the startup and to use the product.

Etkin recognizes this is a tricky time to launch an early-stage startup. He said that when the lockdown first began, his inclination was to hunker down, but the founders concluded that their tool would have even greater utility at the moment. “The founders took stock and we were always building a tool that was great for remote teams and collaboration in general, and that hasn’t changed… if anything, I think it’s becoming more important right now.”

The company plans to spend the next 6-9 months refining the product, adding a few folks to the five-person team and finding product-market fit. There is never an ideal time to start a company, but Sleuth believes now is its moment. It may not be easy, but they are taking a shot.


By Ron Miller