Microsoft launches Project Bonsai, its new machine teaching service for building autonomous systems

At its Build developer conference, Microsoft today announced that Project Bonsai, its new machine teaching service, is now in public preview.

If that name sounds familiar, it’s probably because you remember that Microsoft acquired Bonsai, a company that focuses on machine teaching, back in 2018. Bonsai combined simulation tools with different machine learning techniques to build a general-purpose deep reinforcement learning platform, with a focus on industrial control systems.

It’s maybe no surprise then that Project Bonsai, too, has a similar focus on helping businesses teach and manage their autonomous machines. “With Project Bonsai, subject-matter experts can add state-of-the-art intelligence to their most dynamic physical systems and processes without needing a background in AI,” the company notes in its press materials.

“The public preview of Project Bonsai builds on top of the Bonsai acquisition and the autonomous systems private preview announcements made at Build and Ignite of last year,” a Microsoft spokesperson told me.

Interestingly, Microsoft notes that Project Bonsai is only the first building block of a larger vision to help its customers build these autonomous systems. The company also stresses the advantages of machine teaching over other machine learning approaches, especially the fact that it’s less of a black box, which makes it easier for developers and engineers to debug systems that don’t work as expected.

In addition to Bonsai, Microsoft also today announced Project Moab, an open-source balancing robot that is meant to help engineers and developers learn the basics of how to build a real-world control system. The idea here is to teach the robot to keep a ball balanced on top of a platform that is held by three arms.

Potential users will be able to either 3D print the robot themselves or buy one when it goes on sale later this year. There is also a simulation, developed by MathWorks, that developers can try out immediately.
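To give a sense of what “a real-world control system” means here, the sketch below is a classical, hand-tuned baseline for the Moab task rather than anything from the Bonsai toolchain: a simple proportional-derivative (PD) controller that tilts the plate to push a simulated ball back toward the center. All constants and the toy physics are illustrative assumptions.

```python
import random

# A minimal sketch of the control problem Moab poses: keep a ball centered on
# a tiltable plate. This is NOT the Bonsai machine-teaching workflow, just a
# hand-tuned proportional-derivative (PD) baseline for the same task; the
# gains and the toy 1-D physics below are illustrative assumptions.

KP, KD = 3.0, 1.2          # proportional and derivative gains (assumed)
DT = 0.02                  # 50 Hz control loop

def pd_tilt(error: float, prev_error: float) -> float:
    """Plate tilt command (radians) from the ball's position error (meters)."""
    derivative = (error - prev_error) / DT
    return -(KP * error + KD * derivative)

# Toy simulation: the ball accelerates roughly in proportion to plate tilt.
pos, vel, prev_err = 0.10, 0.0, 0.10   # ball starts 10 cm off-center
for step in range(500):
    tilt = pd_tilt(pos, prev_err)
    prev_err = pos
    vel += 9.81 * tilt * DT + random.gauss(0, 0.001)  # gravity along tilt + sensor noise
    pos += vel * DT
print(f"ball position after 10 s: {pos:.4f} m")
```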

“You can very quickly take it into areas where doing it in traditional ways would not be easy, such as balancing an egg instead,” said Mark Hammond, Microsoft General Manager for Autonomous Systems. “The point of the Project Moab system is to provide that playground where engineers tackling various problems can learn how to use the tooling and simulation models. Once they understand the concepts, they can apply it to their novel use case.”


By Frederic Lardinois

Microsoft launches Azure Synapse Link to help enterprises get faster insights from their data

At its Build developer conference, Microsoft today announced Azure Synapse Link, a new enterprise service that allows businesses to analyze their data faster and more efficiently, using an approach that’s generally called ‘hybrid transaction/analytical processing’ (HTAP). That’s a mouthful, but it essentially means enterprises can run analytical and transactional workloads against the same database system. Traditionally, enterprises had to make tradeoffs: either build a single system for both, which was often highly over-provisioned, or maintain separate systems for transactional and analytics workloads.

Last year, at its Ignite conference, Microsoft announced Azure Synapse Analytics, an analytics service that combines analytics and data warehousing to create what the company calls “the next evolution of Azure SQL Data Warehouse.” Synapse Analytics brings together data from Microsoft’s services and those from its partners and makes it easier to analyze.

“One of the key things, as we work with our customers on their digital transformation journey, there is an aspect of being data-driven, of being insights-driven as a culture, and a key part of that really is that once you decide there is some amount of information or insights that you need, how quickly are you able to get to that? For us, time to insight and a secondary element, which is the cost it takes, the effort it takes to build these pipelines and maintain them with an end-to-end analytics solution, was a key metric we have been observing for multiple years from our largest enterprise customers,” said Rohan Kumar, Microsoft’s corporate VP for Azure Data.

Synapse Link takes the work Microsoft did on Synapse Analytics a step further by removing the barriers between Azure’s operational databases and Synapse Analytics, so enterprises can immediately get value from the data in those databases without going through a data warehouse first.

“What we are announcing with Synapse Link is the next major step in the same vision that we had around reducing the time to insight,” explained Kumar. “And in this particular case, a long-standing barrier that exists today between operational databases and analytics systems is these complex ETL (extract, transform, load) pipelines that need to be set up just so you can do basic operational reporting or where, in a very transactionally consistent way, you need to move data from your operational system to the analytics system, because you don’t want to impact the performance of the operational system in any way because that’s typically dealing with, depending on the system, millions of transactions per second.”

ETL pipelines, Kumar argued, are typically expensive and hard to build and maintain, yet enterprises are now building new apps — and maybe even line-of-business mobile apps — where any action a consumer takes that is registered in the operational database is immediately available for predictive analytics, for example.

From the user perspective, enabling this only takes a single click to link the two, and it removes the need for managing additional data pipelines or database resources. That, Kumar said, was always the main goal for Synapse Link. “With a single click, you should be able to enable real-time analytics on your operational data in ways that don’t have any impact on your operational systems, so you’re not using the compute part of your operational system to do the query, you actually have to transform the data into a columnar format, which is more adaptable for analytics, and that’s really what we achieved with Synapse Link.”

Because traditional HTAP systems on-premises typically share their compute resources with the operational database, those systems never quite took off, Kumar argued. In the cloud, with Synapse Link, though, that impact doesn’t exist because you’re dealing with two separate systems. Now, once a transaction gets committed to the operational database, the Synapse Link system transforms the data into a columnar format that is more optimized for the analytics system — and it does so in real time.
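The row-to-column pivot at the heart of that description is easy to illustrate. The snippet below is a purely conceptual sketch, not Synapse Link’s internal mechanism: it shows why an analytics engine prefers a columnar layout, since a query can then scan only the columns it needs.

```python
from collections import defaultdict

# Row-oriented records, as an operational (OLTP) store would hold them.
transactions = [
    {"order_id": 1, "customer": "acme", "amount": 120.0},
    {"order_id": 2, "customer": "globex", "amount": 75.5},
    {"order_id": 3, "customer": "acme", "amount": 210.0},
]

def to_columnar(rows):
    """Pivot a list of row dicts into a dict of columns (lists)."""
    columns = defaultdict(list)
    for row in rows:
        for field, value in row.items():
            columns[field].append(value)
    return dict(columns)

columnar = to_columnar(transactions)
# An analytical query now touches only the columns it needs:
print(sum(columnar["amount"]))             # 405.5
print(columnar["customer"].count("acme"))  # 2
```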

For now, Synapse Link is only available in conjunction with Microsoft’s Cosmos DB database. As Kumar told me, that’s because that’s where the company saw the highest demand for this kind of service, but you can expect the company to add support for Azure SQL, Azure Database for PostgreSQL and Azure Database for MySQL in the future.


By Frederic Lardinois

Microsoft launches industry-specific cloud solutions, starting with healthcare

Microsoft today announced the launch of the Microsoft Cloud for Healthcare, an industry-specific cloud solution for healthcare providers. This is the first in what is likely going to be a set of cloud offerings that target specific verticals and extends a trend we’ve seen among large cloud providers (especially Google), who tailor specific offerings to the needs of individual industries.

“More than ever, being connected is critical to create an individualized patient experience,” write Tom McGuinness, Corporate Vice President, Worldwide Health at Microsoft, and Dr. Greg Moore, Corporate Vice President, Microsoft Health, in today’s announcement. “The Microsoft Cloud for Healthcare helps healthcare organizations to engage in more proactive ways with their patients, allows caregivers to improve the efficiency of their workflows and streamline interactions with patients, with more actionable results.”

Like similar Microsoft-branded offerings from the company, Cloud for Healthcare is about bringing together a set of capabilities that already exist inside of Microsoft. In this case, that includes Microsoft 365, Dynamics, Power Platform and Azure, including Azure IoT for monitoring patients. The solution sits on top of a common data model that makes it easier to share data between applications and analyze the data they gather.

“By providing the right information at the right time, the Microsoft Cloud for Healthcare will help hospitals and care providers better manage the needs of patients and staff and make resource deployments more efficient,” Microsoft says in its press materials. “This solution also improves end-to-end security compliance and accessibility of data, driving better operational outcomes.”

Since Microsoft never passes up a chance to talk up Teams, the company also notes that its communications service will allow healthcare workers to communicate with each other more efficiently. Teams now also includes a Bookings app to help its users — including healthcare providers — schedule, manage and conduct virtual visits. Some of the healthcare systems that are already using Teams include St Luke’s University Health Network, Stony Brook Medicine, Confluent Health, and Calderdale & Huddersfield NHS Foundation Trust in the UK.

In addition to Microsoft’s own tools, the company is also working with its large partner ecosystem to provide healthcare providers with specialized services. These include the likes of Epic, Allscripts, GE Healthcare, Adaptive Biotechnologies and Nuance.


By Frederic Lardinois

Microsoft is acquiring Metaswitch Networks to expand its Azure 5G strategy

Just weeks after announcing a deal to acquire 5G specialist Affirmed Networks, Microsoft is making another acquisition to strengthen its cloud-based telecoms offering. It’s acquiring Metaswitch Networks, a UK-based provider of cloud-based communications products used by carriers and network providers (customers include the likes of BT in the UK, Sprint, and virtual network consortium RINA).

Terms of the deal were not disclosed in today’s announcement. Metaswitch’s investors included the PE firms Northgate and WRV, Francisco Partners and Sequoia, but it’s unclear how much it had raised or what its last valuation was. (The company has been around since 1981.)

The deal speaks to a growing focus among tech companies on leveraging cloud architectures and the adoption of new networking technologies — specifically 5G — to play a bigger role as service providers, both to carriers and to those who would like to build carrier-like services (potentially bypassing telcos in the process), by offering virtualised products delivered from their clouds.

It comes just one day after Rakuten, the Japanese e-commerce and streaming services giant, announced that it would be acquiring Innoeye, another specialist in cloud-based communications services. Others, like Amazon, have also been building up offerings within AWS that serve the same market.

Microsoft describes Metaswitch’s portfolio of cloud-native services — which includes 5G data, voice and unified communications (contact center) products — as “complementary” to Affirmed.

“Microsoft intends to leverage the talent and technology of these two organizations, extending the Azure platform to both deploy and grow these capabilities at scale in a way that is secure, efficient and creates a sustainable ecosystem,” the company said. 

The migration to 5G represents a window of opportunity for companies that provide services to carriers. The latter have long been saddled with expensive, ageing equipment and now have the potential to replace some or all of that with software-based services, delivered via the cloud, that can be more easily updated and modified in line with market demand. That is the hope, at least. The reality may be that many carriers will sweat out their assets and upgrade in small increments, as operational expenditure still represents a big investment and cost.

Microsoft is all too aware of that reality and also of the prospect of appearing like a threat, not a saviour.

“We will continue to support hybrid and multi-cloud models to create a more diverse telecom ecosystem and spur faster innovation, an expanded set of unique offerings and greater opportunities for differentiation,” it notes. “We will continue to partner with existing suppliers, emerging innovators and network equipment partners to share roadmaps and explore expanded opportunities to work together, including in the areas of radio access networks (RAN), next-generation core, virtualized services, orchestration and operations support system/business support system (OSS/BSS) modernization. A future that is interoperable has never been more important to ensure the success of customers and partners.”

Indeed, Microsoft’s been providing services to, and selling its own IT through, carriers for years before this. These latest acquisitions, however, represent a growing focus on what role it can play in that enterprise vertical in the years to come.


By Ingrid Lunden

Microsoft partners with Redis Labs to improve its Azure Cache for Redis

For a few years now, Microsoft has offered Azure Cache for Redis, a fully managed caching solution built on top of the open-source Redis project. Today, it is expanding this service by adding Redis Enterprise, Redis Labs’ commercial offering, to its platform. It’s doing so in partnership with Redis Labs and, while Microsoft will offer some basic support for the service, Redis Labs will handle most of the software support itself.

Julia Liuson, Microsoft’s corporate VP of its developer tools division, told me that the company wants to be seen as a partner to open-source companies like Redis Labs, which was among the first companies to change its license to prevent cloud vendors from commercializing and repackaging its free code without contributing back to the community. Last year, Redis Labs partnered with Google Cloud to bring its fully managed service to Google’s platform, so maybe it’s no surprise that we are now seeing Microsoft make a similar move.

Liuson tells me that with this new tier for Azure Cache for Redis, users will get a single bill and native Azure management, as well as the option to deploy natively on SSD flash storage. The native Azure integration should also make it easier for developers on Azure to integrate Redis Enterprise into their applications.

It’s also worth noting that Microsoft will support Redis Labs’ own Redis modules, including RediSearch, a Redis-powered search engine, as well as RedisBloom and RedisTimeSeries, which provide support for new datatypes in Redis.

“For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications,” says Liuson. “We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.”
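To make the cache use case concrete, here is a minimal cache-aside sketch using the open-source redis-py client. The hostname, access key and the database helper are placeholders, not details taken from Microsoft’s or Redis Labs’ documentation; Azure Cache for Redis exposes a standard Redis endpoint (TLS on port 6380), so any Redis-compatible client should work along these lines.

```python
import json
import redis

# Placeholder connection details for an Azure Cache for Redis instance.
r = redis.Redis(
    host="<your-cache-name>.redis.cache.windows.net",  # placeholder hostname
    port=6380,
    password="<access-key>",                            # placeholder key
    ssl=True,
)

def load_profile_from_database(user_id: str) -> dict:
    # Stand-in for a real database lookup (hypothetical helper).
    return {"id": user_id, "name": "example"}

def get_user_profile(user_id: str) -> dict:
    """Return a profile from the cache, falling back to the database on a miss."""
    cached = r.get(f"profile:{user_id}")
    if cached is not None:
        return json.loads(cached)
    profile = load_profile_from_database(user_id)
    r.set(f"profile:{user_id}", json.dumps(profile), ex=300)  # 5-minute TTL
    return profile
```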


By Frederic Lardinois

Microsoft and AWS exchange poisoned pen blog posts in latest Pentagon JEDI contract spat

Microsoft and Amazon are at it again as the fight for the Defense Department JEDI contract continues. In a recent series of increasingly acerbic pronouncements, the two companies continue their ongoing spat over the $10 billion, decade-long JEDI contract spoils.

As you may recall (or not), last fall in a surprise move, the DoD selected Microsoft as the winning vendor in the JEDI winner-take-all cloud infrastructure sweepstakes. The presumed winner was always AWS, but when the answer finally came down, it was not them.

To make a very long story short, AWS took exception to the decision and went to court to fight it. Later it was granted a stay of JEDI activities between Microsoft and the DoD, which, as you can imagine, did not please Microsoft. Since then, the two companies have been battling in PR pronouncements and blog posts trying to get the upper hand in the war for public opinion.

That fight took a hard turn this week when the two companies really went at it in dueling blog posts after Amazon filed its latest protest.

First there was Microsoft with PR exec Frank Shaw taking exception to AWS’s machinations, claiming the company just wants a do-over:

This latest filing – filed with the DoD this time – is another example of Amazon trying to bog down JEDI in complaints, litigation and other delays designed to force a do-over to rescue its failed bid.

Amazon’s Drew Herdner countered in a blog post published this morning:

Recently, Microsoft has published multiple self-righteous and pontificating blog posts that amount to nothing more than misleading noise intended to distract those following the protest.

The bottom line is that Microsoft believes it won the contract fair and square with a more competitive bid, while Amazon believes it should have won on technical superiority, and that there was political interference from the president because he doesn’t like Amazon CEO Jeff Bezos, who also owns the Washington Post.

If you’ve been following this story from the beginning (as I have), you know it has taken a series of twists and turns. It’s had lawsuits, complaints, drama and intrigue. The president has inserted himself into it too. There have been accusations of conflicts of interest. There have been investigations, lawsuits, and more investigations.

Government procurement tends to be pretty bland, but from the start, when the DoD chose to use the cutesy Star Wars-driven acronym for this project, it has been anything but. Now it’s come down to two of the world’s largest tech companies exchanging angry blog posts. Sooner or later this is going to end, right?


By Ron Miller

GitHub gets a built-in IDE with Codespaces, discussion forums and more

Under different circumstances, GitHub would be hosting its Satellite conference in Paris this week. Like so many other events, GitHub decided to switch Satellite to a virtual event, but that isn’t stopping the Microsoft-owned company from announcing quite a bit of news this week.

The highlight of GitHub’s announcements is surely the launch of GitHub Codespaces, which gives developers a full cloud-hosted development environment based on Microsoft’s VS Code editor. If that name sounds familiar, that’s likely because Microsoft itself rebranded Visual Studio Code Online to Visual Studio Codespaces a week ago — and GitHub is essentially taking the same concepts and technology and integrating them directly into its service. If you’ve seen VS Online/Codespaces before, the GitHub environment will look very similar.

“Contributing code to a community can be hard. Every repository has its own way of configuring a dev environment, which often requires dozens of steps before you can write any code,” writes Shanku Niyogi, GitHub’s SVP of Product, in today’s announcement. “Even worse, sometimes the environment of two projects you are working on conflict with one another. GitHub Codespaces gives you a fully-featured cloud-hosted dev environment that spins up in seconds, directly within GitHub, so you can start contributing to a project right away.”

Currently, GitHub Codespaces is in beta and available for free. The company hasn’t set any pricing for the service once it goes live, but Niyogi says the pricing will look similar to that of GitHub Actions, where it charges for computationally intensive tasks like builds. Microsoft currently charges VS Codespaces users by the hour, depending on the kind of virtual machine they are using.

The other major new feature the company is announcing today is GitHub Discussions. These are essentially discussion forums for a given project. While GitHub already allowed for some degree of conversation around code through issues and pull requests, Discussions are meant to enable unstructured threaded conversations. They also lend themselves to Q&As, and GitHub notes that they can be a good place for maintaining FAQs and other documents.

Currently, Discussions are in beta for open-source communities and will be available for other projects soon.

On the security front, GitHub is also announcing two new features: code scanning and secret scanning. Code scanning checks your code for potential security vulnerabilities. It’s powered by CodeQL and free for open-source projects. Secret scanning is now available for private repositories (a similar feature has been available for public projects since 2018). Both of these features are part of GitHub Advanced Security.

As for GitHub’s enterprise customers, the company today announced the launch of Private Instances, a new fully managed service for enterprise customers that want to use GitHub in the cloud but need to know that their code is fully isolated from the rest of the company’s users. “Private Instances provides enhanced security, compliance, and policy features including bring-your-own-key encryption, backup archiving, and compliance with regional data sovereignty requirements,” GitHub explains in today’s announcement.


By Frederic Lardinois

Microsoft to open first data center in New Zealand as cloud usage grows

In spite of a pandemic that is sowing economic uncertainty, one area that continues to thrive is cloud computing. Perhaps that explains why Microsoft, which saw Azure grow 59% in its most recent earnings report, announced plans to open a new data center in New Zealand once it receives approval from the Overseas Investment Office.

“This significant investment in New Zealand’s digital infrastructure is a testament to the remarkable spirit of New Zealand’s innovation and reflects how we’re pushing the boundaries of what is possible as a nation,” Vanessa Sorenson, general manager at Microsoft New Zealand said in a statement.

The company sees this project against the backdrop of the accelerating digital transformation we are seeing as the pandemic forces companies to move to the cloud more quickly, with employees spread out and often unable to work in offices around the world.

As CEO Satya Nadella noted on Twitter, this should help companies in New Zealand that are in the midst of this transformation. “Now more than ever, we’re seeing the power of digital transformation, and today we’re announcing a new datacenter region in New Zealand to help every organization in the country build their own digital capability,” Nadella tweeted.

The company wants to do more than simply build a data center. It will make this part of a broader investment across the country, including skills training and reducing the environmental footprint of the data center.

Once New Zealand comes on board, the company will boast 60 regions covering 140 countries around the world. The new data center won’t just be about Azure, either. It will help fuel usage of Office 365 and the Dynamics 365 back-office products, as well.


By Ron Miller

In spite of pandemic (or maybe because of it), cloud infrastructure revenue soars

It’s fair to say that even before the impact of COVID-19, companies had begun a steady march to the cloud. Maybe it wasn’t fast enough for AWS, as Andy Jassy made clear in his 2019 re:Invent keynote, but it was happening all the same, and the steady revenue increases across the cloud infrastructure market bore that out.

As we look at the most recent quarter’s earnings reports for the main players in the market, it seems the pandemic and the economic fallout have done little to slow that down. In fact, they may be contributing to its growth.

According to numbers supplied by Synergy Research, the cloud infrastructure market totaled $29 billion in revenue for Q1 2020.


Synergy’s John Dinsdale, who has been watching this market for a long time, says that the pandemic could be contributing to some of that growth, at least modestly. He doesn’t necessarily see these companies getting out of this unscathed either, but as companies shift operations away from offices, that could be part of the reason for the increased demand we saw in the first quarter.

“For sure, the pandemic is causing some issues for cloud providers, but in uncertain times, the public cloud is providing flexibility and a safe haven for enterprises that are struggling to maintain normal operations. Cloud provider revenues continue to grow at truly impressive rates, with AWS and Azure in aggregate now having an annual revenue run rate of well over $60 billion,” Dinsdale said in a statement.

AWS led the way with roughly a third of the market, or more than $10 billion in quarterly revenue, as it continues to hold a substantial lead in market share. Microsoft was second, growing at a brisker 59% and taking 18% of the market. While Microsoft doesn’t break out its numbers, using Synergy’s figures, that would work out to around $5.2 billion in Azure revenue. Meanwhile, Google came in third with $2.78 billion.
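For what it’s worth, the Azure figure is just implied arithmetic from Synergy’s estimates rather than a number Microsoft reports:

```python
# Back-of-the-envelope math behind the Azure estimate: Microsoft doesn't break
# out Azure revenue, so the figure is implied from Synergy's market numbers.
total_market = 29.0   # Q1 2020 cloud infrastructure revenue, $B (Synergy Research)
azure_share = 0.18    # Microsoft's estimated market share
print(f"implied Azure revenue: ${total_market * azure_share:.2f}B")  # ~$5.22B
```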

If you’re keeping track of market share at home, it comes out to 32% for AWS, 18% for Microsoft and 8% for Google. This split has remained fairly steady, although Microsoft has managed to gain a few percentage points over the last several quarters as its overall growth rate outpaces Amazon’s.


By Ron Miller

Microsoft makes it easier to get started with Windows Virtual Desktops

Microsoft today announced a slew of updates to various parts of its Microsoft 365 ecosystem. A lot of these aren’t all that exciting (though that obviously depends on your level of enthusiasm for products like Microsoft Endpoint Manager), but the overall thrust behind this update is to make life easier for the IT admins that help provision and manage corporate Windows — and Mac — machines, something that’s even more important right now, given how many companies are trying to quickly adapt to this new work-from-home environment.

For them, the highlight of today’s set of announcements is surely an update to Windows Virtual Desktop, Microsoft’s service that gives employees access to a virtualized desktop environment on Azure and allows IT departments to host multiple Windows 10 sessions on the same hardware. The company is launching a completely new management experience for this service that makes getting started significantly easier for admins.

Ahead of today’s announcement, Brad Anderson, Microsoft’s corporate VP for Microsoft 365, told me that it took a considerable amount of Azure expertise to get started with this service. With this update, you still need to know a bit about Azure, but the overall process of getting started is now significantly easier. And that, Anderson noted, is now more important than ever.

“Some organizations are telling me that they’re using on-prem [Virtual Desktop Infrastructure]. They had to go do work to basically free up capacity. In some cases, that means doing away with disaster recovery for some of their services in order to get the capacity,” Anderson said. “In some cases, I hear leaders say it’s going to take until the middle or the end of May to get the additional capacity to spin up the VDI sessions that are needed. In today’s world, that’s just unacceptable. Given what the cloud can do, people need to have the ability to spin up and spin down on demand. And that’s the unique thing that a Windows Virtual Desktop does relative to traditional VDI.”

Anderson also believes that remote work will remain much more common once things go back to normal — whenever that happens and whatever that will look like. “I think the usage of virtualization where you are virtualizing running an app in a data center in the cloud and then virtualizing it down will grow. This will introduce a secular trend and growth of cloud-based VDI,” he said.

In addition to making the management experience easier, Microsoft is now also making it possible to use Microsoft Teams for video meetings in these virtual desktop environments, using a feature called ‘A/V redirection’ that allows users to connect their local audio and video hardware to their virtual machines with low latency. It’ll take another month or so for this feature to roll out, though.

Also new is the ability to keep service metadata about Windows Virtual Desktop usage within a certain Azure region for compliance and regulatory reasons.

For those of you interested in Microsoft Endpoint Manager, the big news here is better support for macOS-based machines. Using the new Intune MDM agent for macOS, admins can use the same tool for managing repetitive tasks on Windows 10 and macOS.

Productivity Score — a product only an enterprise manager would love — is also getting an update. You can now see how people in an organization are reading, authoring and collaborating around content in OneDrive and SharePoint, for example. And if they aren’t, you can write a memo and tell them they should collaborate more.

There are also new dashboards here for looking at how employees work across devices and how they communicate. It’s worth noting that this is aggregate data and not another way for corporate to look at what individual employees are doing.

The one feature here that does actually seem really useful, especially given the current situation, is a new Network Connectivity category that helps IT to figure out where there are networking challenges.


By Frederic Lardinois

AWS launches Amazon AppFlow, its new SaaS integration service

AWS today launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. Like similar services, including Microsoft Azure’s Power Automate, for example, developers can trigger these flows based on specific events, at pre-set times or on-demand.

Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than a way to automate workflows, and while the data flow can be bi-directional, AWS’s announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.

“Developers spend huge amounts of time writing custom integrations so they can pass data between SaaS applications and AWS services so that it can be analysed; these can be expensive and can often take months to complete,” said AWS principal advocate Martin Beeby in today’s announcement. “If data requirements change, then costly and complicated modifications have to be made to the integrations. Companies that don’t have the luxury of engineering resources might find themselves manually importing and exporting data from applications, which is time-consuming, risks data leakage, and has the potential to introduce human error.”

Every flow (which AWS defines as a call to a source application to transfer data to a destination) costs $0.001 per run, though, in typical AWS fashion, there’s also a cost associated with data processing (starting at $0.02 per GB).
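As a rough illustration of what that pricing means in practice, here is a back-of-the-envelope estimate for an assumed scenario (an hourly flow moving 0.5 GB per run over a 30-day month); the scenario itself is made up, only the per-run and per-GB prices come from AWS’s announcement.

```python
# Illustrative cost estimate using the prices quoted above; the workload
# (an hourly flow moving 0.5 GB per run for a 30-day month) is assumed.
runs_per_month = 24 * 30                               # hourly schedule
gb_per_run = 0.5
flow_cost = runs_per_month * 0.001                     # $0.001 per flow run
processing_cost = runs_per_month * gb_per_run * 0.02   # $0.02 per GB processed
print(f"flows: ${flow_cost:.2f}, processing: ${processing_cost:.2f}, "
      f"total: ${flow_cost + processing_cost:.2f}")    # $0.72 + $7.20 = $7.92
```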

“Our customers tell us that they love having the ability to store, process, and analyze their data in AWS. They also use a variety of third-party SaaS applications, and they tell us that it can be difficult to manage the flow of data between AWS and these applications,” said Kurt Kufeld, Vice President, AWS. “Amazon AppFlow provides an intuitive and easy way for customers to combine data from AWS and SaaS applications without moving it across the public Internet. With Amazon AppFlow, our customers bring together and manage petabytes, even exabytes, of data spread across all of their applications – all without having to develop custom connectors or manage underlying API and network connectivity.”

At this point, the number of supported services remains comparatively low, with only 14 possible sources and four destinations (Amazon Redshift and S3, as well as Salesforce and Snowflake). Sometimes, depending on the source you select, the only possible destination is Amazon’s S3 storage service.

Over time, the number of integrations will surely increase, but for now, it feels like there’s still quite a bit more work to do for the AppFlow team to expand the list of supported services.

AWS has long left this market to competitors, even though it has tools like AWS Step Functions for building serverless workflows across AWS services and EventBridge for connecting applications. Interestingly, EventBridge currently supports a far wider range of third-party sources, but as the name implies, its focus is more on triggering events in AWS than moving data between applications.


By Frederic Lardinois

Google Cloud’s fully-managed Anthos is now generally available for AWS

A year ago, back in the days of in-person conferences, Google officially announced the launch of its Anthos multi-cloud application modernization platform at its Cloud Next conference. The promise of Anthos was always that it would allow enterprises to write their applications once, package them into containers and then manage their multi-cloud deployments across GCP, AWS, Azure and their on-prem data centers.

Until now, support for AWS and Azure was only available in preview, but today, the company is making support for AWS and on-premises generally available. Microsoft Azure support remains in preview, though.

“As an AWS customer now, or a GCP customer, or a multi-cloud customer, […] you can now run Anthos on those environments in a consistent way, so you don’t have to learn any proprietary APIs and be locked in,” Eyal Manor, the VP of engineering in charge of Anthos, told me. “And for the first time, we enable the portability between different infrastructure environments as opposed to what has happened in the past where you were locked into a set of API’s.”

Manor stressed that Anthos was designed to be multi-cloud from day one. As for why AWS support is launching ahead of Azure, Manor said that there was simply more demand for it. “We surveyed the customers and they said, hey, we want, in addition to GCP, we want AWS,” he said. But support for Azure will come later this year and the company already has a number of preview customers for it. In addition, Anthos will also come to bare metal servers in the future.

Looking even further ahead, Manor also noted that better support for machine learning workloads is on the way. Many businesses, after all, want to be able to update and run their models right where their data resides, no matter what cloud that may be. There, too, the promise of Anthos is that developers can write the application once and then run it anywhere.

“I think a lot of the initial response and excitement was from the developer audiences,” Jennifer Lin, Google Cloud’s VP of product management, told me. “Eric Brewer had led a white paper that we did to say that a lot of the Anthos architecture sort of decouples the developer and the operator stakeholder concerns. There hadn’t been a multi-cloud shared software architecture where we could do that and still drive emerging and existing applications with a common shared software stack.”

She also noted that a lot of Google Cloud’s ecosystem partners endorsed the overall Anthos architecture early on because they, too, wanted to be able to write once and run anywhere — and so do their customers.

Plaid is one of the launch partners for these new capabilities. “Our customers rely on us to be always available and as a result we have very high reliability requirements,” said Naohiko Takemura, Plaid’s head of engineering. “We pursued a multi-cloud strategy to ensure redundancy for our critical KARTE service. Google Cloud’s Anthos works seamlessly across GCP and our other cloud providers preventing any business disruption. Thanks to Anthos, we prevent vendor lock-in, avoid managing cloud-specific infrastructure, and our developers are not constrained by cloud providers.”

With this release, Google Cloud is also bringing deeper support for virtual machines to Anthos, as well as improved policy and configuration management.

Over the next few months, the Anthos Service Mesh will also add support for applications that run in traditional virtual machines. As Lin told me, “a lot of this is about driving better agility and taking the complexity out of it so that we have abstractions that work across any environment, whether it’s legacy or new or on-prem or AWS or GCP.”


By Frederic Lardinois

Fishtown Analytics raises $12.9M Series A for its open-source analytics engineering tool

Philadelphia-based Fishtown Analytics, the company behind the popular open-source data engineering tool dbt, today announced that it has raised a $12.9 million Series A round led by Andreessen Horowitz, with the firm’s general partner Martin Casado joining the company’s board.

“I wrote this blog post in early 2016, essentially saying that analysts needed to work in a fundamentally different way,” Fishtown founder and CEO Tristan Handy told me, when I asked him about how the product came to be. “They needed to work in a way that much more closely mirrored the way the software engineers work and software engineers have been figuring this shit out for years and data analysts are still like sending each other Microsoft Excel docs over email.”

The dbt open-source project forms the basis of this. It allows anyone who can write SQL queries to transform data and then load it into their preferred analytics tools. As such, it sits in-between data warehouses and the tools that load data into them on one end, and specialized analytics tools on the other.

As Casado noted when I talked to him about the investment, data warehouses have now made it affordable for businesses to store all of their data before it is transformed. So what was traditionally “extract, transform, load” (ETL) has now become “extract, load, transform” (ELT). Andreessen Horowitz is already invested in Fivetran, which helps businesses move their data into their warehouses, so it makes sense for the firm to also tackle the other side of this business.
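The ELT pattern dbt builds on is simple to sketch: load the raw data into the warehouse first, then transform it in place with SQL. The example below is not dbt’s actual API, just a minimal illustration of that order of operations, with sqlite3 standing in for a cloud data warehouse.

```python
import sqlite3

# "Load" step: raw, untransformed records land in the warehouse as-is.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "acme", 120.0), (2, "globex", 75.5), (3, "acme", 210.0)],
)

# "Transform" step: build an analyst-friendly model from the raw table,
# roughly what a dbt model (a SELECT statement) would materialize.
conn.execute("""
    CREATE TABLE customer_revenue AS
    SELECT customer, SUM(amount) AS total_revenue, COUNT(*) AS order_count
    FROM raw_orders
    GROUP BY customer
""")

for row in conn.execute("SELECT * FROM customer_revenue ORDER BY customer"):
    print(row)   # ('acme', 330.0, 2), ('globex', 75.5, 1)
```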

“Dbt is, as far as we can tell, the leading community for transformation and it’s a company we’ve been tracking for at least a year,” Casado said. He also argued that data analysts — unlike data scientists — are not really catered to as a group.

Before this round, Fishtown hadn’t raised a lot of money beyond a small SAFE round from Amplify, even though it has been around for a few years now.

But Handy argued that the company needed this time to prove that it was on to something and build a community. That community now consists of more than 1,700 companies that use the dbt project in some form and over 5,000 people in the dbt Slack community. Fishtown also now has over 250 dbt Cloud customers and the company signed up a number of big enterprise clients earlier this year. With that, the company needed to raise money to expand and also better service its current list of customers.

“We live in Philadelphia. The cost of living is low here and none of us really care to make a quadro-billion dollars, but we do want to answer the question of how do we best serve the community,” Handy said. “And for the first time, in the early part of the year, we were like, holy shit, we can’t keep up with all of the stuff that people need from us.”

The company plans to expand the team from 25 to 50 employees in 2020, and with those new hires, the team plans to improve and expand the product, especially its IDE for data analysts, which Handy admitted could use a bit more polish.


By Frederic Lardinois

Pileus helps businesses cut their cloud spend

Israel-based Pileus, which is officially launching today, aims to help businesses keep their cloud spend under control. The company also today announced that it has raised a $1 million seed round from a private angel investor.

Using machine learning, the company’s platform continuously learns about how a user typically uses a given cloud and then provides forecasts and daily personalized recommendations to help them stay within a budget.

Pileus currently supports AWS, with support for Google Cloud and Microsoft Azure coming soon.

With all of the information it gathers about your cloud usage, the service can also monitor that usage for any anomalies. And because, at its core, Pileus keeps a detailed log of all your cloud spend, it can also provide detailed reports and dashboards of what a user is spending on each project and resource.
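Pileus doesn’t publish how its models work, but a much simpler baseline conveys the idea of spend anomaly detection: flag a day whose spend deviates from the trailing average by more than a few standard deviations. The numbers below are made up for illustration.

```python
import statistics

# Illustrative only: a trivial threshold-based anomaly check on daily spend,
# standing in for whatever models Pileus actually uses internally.
daily_spend = [820, 790, 805, 840, 810, 795, 1420]  # last value is a spike
WINDOW, THRESHOLD = 6, 3.0

history, today = daily_spend[-WINDOW - 1:-1], daily_spend[-1]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

if abs(today - mean) > THRESHOLD * stdev:
    print(f"anomaly: today's spend ${today} vs trailing mean ${mean:.0f}")
else:
    print("spend within normal range")
```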

If you’ve ever worked on a project like this, you know that these reports are only as good as the tags you use to identify each project and resource, so Pileus makes that a priority on its platform, with a tagging tool that helps enforce tagging policies.

“My team and I spent many sleepless nights working on this solution,” says Pileus CEO Roni Karp. “We’re thrilled to finally be able to unleash Pileus to the masses and help everyone gain more efficiency of their cloud experience while helping them understand their usage and costs better than ever before.”

Pileus currently offers a free 30-day trial. After that, users can opt to pay either $180 per month or $800 per year. At those prices, the service isn’t exactly useful until your cloud spend is significantly more than that, of course.

The company isn’t just focused on individual businesses, though. It’s also targeting managed service providers that can use the platform to create reports and manage their own customer billing. Karp believes this will become a significant source of revenue for Pileus because “there are not many good tools in the field today, especially for Azure.”

It’s no secret that Pileus is launching into a crowded market, where well-known incumbents like Cloudability already share mindshare with a growing number of startups. Karp, however, believes that Pileus can stand out, largely because of its machine learning platform and its ability to provide users with immediate value, whereas, he argues, it often takes several weeks for other platforms to deliver results.

 


By Frederic Lardinois

DoD Inspector General report finds everything was basically hunky-dory with JEDI cloud contract bid

While controversy has dogged the $10 billion, decade-long JEDI contract since its earliest days, a report by the DoD’s Inspector General’s Office concluded today that, while there were some funky bits and potential conflicts, overall the contract procurement process was fair and legal, and the president did not unduly influence the process in spite of his public comments.

There were a number of issues along the way about whether the single contractor award was fair or reasonable, whether there was White House influence on the decision, and whether the president wanted to prevent Amazon founder Jeff Bezos, who also owns the Washington Post, from getting the contract.

There were questions about whether certain personnel, who had been or were about to become Amazon employees, had undue influence on the contents of the RFP; whether former Secretary of Defense James Mattis showed favor to Amazon, which ultimately did not even win the contract; and whether one of Mattis’ under secretaries in fact owned stock in Microsoft.

It’s worth noting that the report states clearly that it is not looking at the merits of the contract award or whether the correct company won on technical acumen. It was looking at how all of these controversial parts came up throughout the process. As the report stated:

“In this report, we do not draw a conclusion regarding whether the DoD appropriately awarded the JEDI Cloud contract to Microsoft rather than Amazon Web Services. We did not assess the merits of the contractors’ proposals or DoD’s technical or price evaluations; rather we reviewed the source selection process and determined that it was in compliance with applicable statutes, policies, and the evaluation process described in the Request for Proposals.”

Although the report indicates that the White House would not cooperate with the investigation into potential bias, the investigators claim they had enough discussions with parties involved with the decision to conclude that there was no undue influence on the White House’s part:

“However, we believe the evidence we received showed that the DoD personnel who evaluated the contract proposals and awarded Microsoft the JEDI Cloud contract were not pressured regarding their decision on the award of the contract by any DoD leaders more senior to them, who may have communicated with the White House,” the report stated.

The report chose to blame the media instead, at least partly, for giving the impression that the White House had influenced the process, stating:

“Yet, these media reports, and the reports of President Trump’s statements about Amazon, ongoing bid protests and “lobbying” by JEDI Cloud competitors, as well as inaccurate media reports about the JEDI Cloud procurement process, may have created the appearance or perception that the contract award process was not fair or unbiased.”

It’s worth noting that we reported that AWS CEO Andy Jassy made it clear in a press conference at AWS re:Invent in December that the company believed the president’s words had influenced the process.

“I think that we ended up with a situation where there was political interference. When you have a sitting president, who has shared openly his disdain for a company, and the leader of that company, it makes it really difficult for government agencies, including the DoD, to make objective decisions without fear of reprisal.”

As for other points of controversy, such as those previously referenced biases, all were found lacking by the Inspector General. The earliest complaints from Oracle and others were that Deap Ubhi and Victor Gavin, two individuals involved in drafting the RFP, failed to disclose that they had been offered jobs by Amazon during that time.

The report concluded that while Ubhi violated ethics rules, his involvement wasn’t substantial enough to influence the RFP (which again, Amazon didn’t win). “However, we concluded that Mr. Ubhi’s brief early involvement in the JEDI Cloud Initiative was not substantial and did not provide any advantage to his prospective employer, Amazon…,” the report stated.

The report found Gavin did not violate any ethics rules in spite of taking a job with Amazon because he had disqualified himself from the process, nor did the report find that former Secretary Mattis had any ethical violations in its investigation.

One final note: Stacy Cummings, Principal Deputy Assistant Secretary of Defense for Acquisition and Deputy Assistant Secretary of Defense for Acquisition Enablers, who worked for Mattis, owned some stock in Microsoft and did not disclose this. While the report found that was a violation of ethics guidelines, it ultimately concluded this did not unduly influence the award to Microsoft.

While the report is substantial, at 313 pages, it basically concludes that, as far as the purview of the Inspector General is concerned, the process was conducted fairly. The court case involving Amazon’s protest of the award to Microsoft, however, continues. And the project remains on hold until that is concluded.

Note: Microsoft and Amazon did not respond to requests from TechCrunch for comments before we published this article. If that changes, we will update accordingly.

Report on the Joint Enterprise Defense Infrastructure (JEDI) Cloud Procurement, DODIG-2020-079 (via Scribd)


By Ron Miller