Microsoft challenges Twilio with the launch of Azure Communication Services

Microsoft today announced the launch of Azure Communication Services, a new set of features in its cloud that enable developers to add voice and video calling, chat and text messages to their apps, as well as old-school telephony.

The company describes the new set of services as the “first fully managed communication platform offering from a major cloud provider,” and that seems right: Google and AWS offer some of these features (AWS's notification service, for example), but not as part of a cohesive communication service. Indeed, Azure Communication Services looks more like a competitor to the core features of Twilio or the up-and-coming MessageBird.

Over the course of the last few years, Microsoft has built up a lot of experience in this area, in large part thanks to the success of its Teams service. Unsurprisingly, that’s something Microsoft is also playing up in its announcement.

“Azure Communication Services is built natively on top of a global, reliable cloud — Azure. Businesses can confidently build and deploy on the same low latency global communication network used by Microsoft Teams to support 5B+ meeting minutes daily,” writes Scott Van Vliet, corporate vice president for Intelligent Communication at the company.

Microsoft also stresses that it offers a set of additional smart services that developers can tap into to build out their communication services, including its translation tools, for example. The company also notes that its services are encrypted to meet HIPAA and GDPR standards.

Like similar services, developers access the various capabilities through a set of new APIs and SDKs.

As for the core services, the capabilities here are pretty much what you’d expect. There’s voice and video calling (and the ability to shift between them). There’s support for chat and, starting in October, users will also be able to send text messages. Microsoft says developers will be able to send these to users anywhere, with Microsoft positioning it as a global service.

Provisioning phone numbers, too, is part of the services and developers will be able to provision those for in-bound and out-bound calls, port existing numbers, request new ones and — most importantly for contact-center users — integrate them with existing on-premises equipment and carrier networks.

“Our goal is to meet businesses where they are and provide solutions to help them be resilient and move their business forward in today’s market,” writes Van Vliet. “We see rich communication experiences – enabled by voice, video, chat, and SMS – continuing to be an integral part in how businesses connect with their customers across devices and platforms.”


By Frederic Lardinois

Google Cloud’s new BigQuery Omni will let developers query data in GCP, AWS and Azure

At its virtual Cloud Next ’20 event, Google today announced a number of updates to its cloud portfolio, but the public alpha launch of BigQuery Omni is probably the highlight of this year’s event. Powered by Google Cloud’s Anthos hybrid-cloud platform, BigQuery Omni allows developers to use the BigQuery engine to analyze data that sits in multiple clouds, including those of Google Cloud competitors like AWS and Microsoft Azure — though for now, the service only supports AWS, with Azure support coming later.

Using a unified interface, developers can analyze this data locally without having to move data sets between platforms.

“Our customers store petabytes of information in BigQuery, with the knowledge that it is safe and that it’s protected,” said Debanjan Saha, the GM and VP of Engineering for Data Analytics at Google Cloud, in a press conference ahead of today’s announcement. “A lot of our customers do many different types of analytics in BigQuery. For example, they use the built-in machine learning capabilities to run real-time analytics and predictive analytics. […] A lot of our customers who are very excited about using BigQuery in GCP are also asking, ‘how can they extend the use of BigQuery to other clouds?’ ”

Google has long said that it believes that multi-cloud is the future — something that most of its competitors would probably agree with, though they all would obviously like you to use their tools, even if the data sits in other clouds or is generated off-platform. It’s the tools and services that help businesses to make use of all of this data, after all, where the different vendors can differentiate themselves from each other. Maybe it’s no surprise then, given Google Cloud’s expertise in data analytics, that BigQuery is now joining the multi-cloud fray.

“With BigQuery Omni customers get what they wanted,” Saha said. “They wanted to analyze their data no matter where the data sits and they get it today with BigQuery Omni.”

He noted that Google Cloud believes that this will help enterprises break down their data silos and gain new insights into their data, all while allowing developers and analysts to use a standard SQL interface.

Today’s announcement is also a good example of how Google’s bet on Anthos is paying off by making it easier for the company to not just allow its customers to manage their multi-cloud deployments but also to extend the reach of its own products across clouds. This also explains why BigQuery Omni isn’t available for Azure yet, given that Anthos for Azure is still in preview, while AWS support became generally available in April.


By Frederic Lardinois

Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies have simplified this process to make switching between running containers locally and on ECS far easier.
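The workflow the partnership targets centers on ordinary Compose files. A minimal sketch (service names and image are invented for illustration): the same file that runs locally with `docker compose up` can, after creating and switching to an ECS context with `docker context create ecs myecs` and `docker context use myecs`, be deployed to ECS on Fargate with the same command.

```yaml
# docker-compose.yml — a hypothetical two-service app; the same file
# runs locally and, once an ECS context is active, deploys to Fargate.
version: "3.8"
services:
  web:
    image: myorg/web:latest   # hypothetical image name
    ports:
      - "80:80"
  redis:
    image: redis:alpine
```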

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for Compute Services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to solely focus on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.


By Frederic Lardinois

CodeGuru, AWS’s AI code reviewer and performance profiler, is now generally available

AWS today announced that CodeGuru, a set of tools that use machine learning to automatically review code for bugs and suggest potential optimizations, is now generally available. The tool launched into preview at AWS re:Invent last December.

CodeGuru consists of two tools, Reviewer and Profiler, and those names pretty much describe exactly what they do. To build Reviewer, the AWS team actually trained its algorithm with the help of code from more than 10,000 open source projects on GitHub, as well as reviews from Amazon’s own internal codebase.

“Even for a large organization like Amazon, it’s challenging to have enough experienced developers with enough free time to do code reviews, given the amount of code that gets written every day,” the company notes in today’s announcement. “And even the most experienced reviewers miss problems before they impact customer-facing applications, resulting in bugs and performance issues.”

To use CodeGuru, developers continue to commit their code to their repository of choice, no matter whether that’s GitHub, Bitbucket Cloud, AWS’s own CodeCommit or another service. CodeGuru Reviewer then analyzes that code, tries to find bugs and, if it does, it will also offer potential fixes. All of this is done within the context of the code repository, so CodeGuru will create a GitHub pull request, for example, and add a comment to that pull request with some more info about the bug and potential fixes.

To train the machine learning model, users can also provide CodeGuru with some basic feedback, though we’re mostly talking “thumbs up” and “thumbs down” here.

The CodeGuru Application Profiler has a somewhat different mission. It is meant to help developers figure out where there might be some inefficiencies in their code and identify the most expensive lines of code. This includes support for serverless platforms like AWS Lambda and Fargate.

One feature the team added since it first announced CodeGuru is that Profiler now attaches an estimated dollar amount to the lines of unoptimized code.

“Our customers develop and run a lot of applications that include millions and millions of lines of code. Ensuring the quality and efficiency of that code is incredibly important, as bugs and inefficiencies in even a few lines of code can be very costly. Today, the methods for identifying code quality issues are time-consuming, manual, and error-prone, especially at scale,” said Swami Sivasubramanian, vice president, Amazon Machine Learning, in today’s announcement. “CodeGuru combines Amazon’s decades of experience developing and deploying applications at scale with considerable machine learning expertise to give customers a service that improves software quality, delights their customers with better application performance, and eliminates their most expensive lines of code.”

AWS says a number of companies started using CodeGuru during the preview period. These include the likes of Atlassian, EagleDream and DevFactory.

“While code reviews from our development team do a great job of preventing bugs from reaching production, it’s not always possible to predict how systems will behave under stress or manage complex data shapes, especially as we have multiple deployments per day,” said Zak Islam, head of Engineering, Tech Teams, at Atlassian. “When we detect anomalies in production, we have been able to reduce the investigation time from days to hours and sometimes minutes thanks to Amazon CodeGuru’s continuous profiling feature. Our developers now focus more of their energy on delivering differentiated capabilities and less time investigating problems in our production environment.”


By Frederic Lardinois

Why AWS built a no-code tool

AWS today launched Amazon Honeycode, a no-code environment built around a spreadsheet-like interface that is a bit of a detour for Amazon’s cloud service. Typically, after all, AWS gives developers the tools to build their applications, but leaves it to them to put the pieces together. Honeycode, on the other hand, is meant to appeal to non-coders who want to build basic line-of-business applications. If you know how to work a spreadsheet and want to turn that into an app, Honeycode is all you need.

To understand AWS’s motivation behind the service, I talked to AWS VP Larry Augustin and Meera Vaidyanathan, a general manager at AWS.

“For us, it was about extending the power of AWS to more and more users across our customers,” explained Augustin. “We consistently hear from customers that there are problems they want to solve, they would love to have their IT teams or other teams — even outsourced help — build applications to solve some of those problems. But there’s just more demand for some kind of custom application than there are available developers to solve it.”

In that respect then, the motivation behind Honeycode isn’t all that different from what Microsoft is doing with its PowerApps low-code tool. That, too, after all, opens up the Azure platform to users who aren’t necessarily full-time developers. AWS is taking a slightly different approach here, though, by emphasizing the no-code part of Honeycode.

“Our goal with Honeycode was to enable the people in the line of business, the business analysts, project managers, program managers who are right there in the midst, to easily create a custom application that can solve some of the problems for them without the need to write any code,” said Augustin. “And that was a key piece. There’s no coding required. And we chose to do that by giving them a spreadsheet-like interface that we felt many people would be familiar with as a good starting point.”

A lot of low-code/no-code tools also allow developers to then “escape the code,” as Augustin called it, but that’s not the intent here and there’s no real mechanism for exporting code from Honeycode and taking it elsewhere, for example. “One of the tenets we thought about as we were building Honeycode was, gee, if there are things that people want to do and we would want to answer that by letting them escape the code — we kept coming back and trying to answer the question, ‘Well, okay, how can we enable that without forcing them to escape the code?’ So we really tried to force ourselves into the mindset of wanting to give people a great deal of power without escaping to code,” he noted.

There are, however, APIs that would allow experienced developers to pull in data from elsewhere. Augustin and Vaidyanathan expect that companies may do this for their users on the platform or that AWS partners may create these integrations, too.

Even with these limitations, though, the team argues that you can build some pretty complex applications.

“We’ve been talking to lots of people internally at Amazon who have been building different apps and even within our team and I can honestly say that we haven’t yet come across something that is impossible,” Vaidyanathan said. “I think the level of complexity really depends on how expert of a builder you are. You can get very complicated with the expressions [in the spreadsheet] that you write to display data in a specific way in the app. And I’ve seen people write — and I’m not making this up — 30-line expressions that are just nested and nested and nested. So I really think that it depends on the skills of the builder and I’ve also noticed that once people start building on Honeycode — myself included — I start with something simple and then I get ambitious and I want to add this layer to it — and I want to do this. That’s really how I’ve seen the journey of builders progress. You start with something that’s maybe just one table and a couple of screens, and very quickly, before you know, it’s a far more robust app that continues to evolve with your needs.”

Another feature that sets Honeycode apart is that a spreadsheet sits at the center of its user interface. In that respect, the service may seem a bit like Airtable, but I don’t think that comparison holds up, given that both then take these spreadsheets in very different directions. I’ve also seen it compared to Retool, which may be a better comparison, but Retool is going after a more advanced developer and doesn’t hide the code. There is a reason, though, why these services were built around spreadsheets, and it is simply that everybody is familiar with how to use them.

“People have been using spreadsheets for decades,” noted Augustin. “They’re very familiar. And you can write some very complicated, deep, very powerful expressions and build some very powerful spreadsheets. You can do the same with Honeycode. We felt people were familiar enough with that metaphor that we could give them that full power along with the ability to turn that into an app.”

The team itself used the service to manage the launch of Honeycode, Vaidyanathan stressed — and to vote on the name for the product (though Vaidyanathan and Augustin wouldn’t say which other names they considered).

“I think we have really, in some ways, a revolutionary product in terms of bringing the power of AWS and putting it in the hands of people who are not coders,” said Augustin.


By Frederic Lardinois

AWS launches Amazon AppFlow, its new SaaS integration service

AWS today launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. Like similar services, including Microsoft Azure’s Power Automate, for example, developers can trigger these flows based on specific events, at pre-set times or on-demand.

Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than a way to automate workflows and while the data flow can be bi-directional, AWS’s announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.

“Developers spend huge amounts of time writing custom integrations so they can pass data between SaaS applications and AWS services so that it can be analysed; these can be expensive and can often take months to complete,” said AWS principal advocate Martin Beeby in today’s announcement. “If data requirements change, then costly and complicated modifications have to be made to the integrations. Companies that don’t have the luxury of engineering resources might find themselves manually importing and exporting data from applications, which is time-consuming, risks data leakage, and has the potential to introduce human error.”

Every flow (which AWS defines as a call to a source application to transfer data to a destination) costs $0.001 per run, though, in typical AWS fashion, there is also a cost associated with data processing (starting at $0.02 per GB).
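As a rough back-of-the-envelope check using only the two prices quoted above (a sketch; real bills depend on region and additional charges):

```python
def appflow_cost(runs: int, gb_processed: float,
                 price_per_run: float = 0.001,
                 price_per_gb: float = 0.02) -> float:
    """Estimate a monthly AppFlow bill from the quoted prices:
    $0.001 per flow run plus $0.02 per GB of data processing."""
    return round(runs * price_per_run + gb_processed * price_per_gb, 2)

# e.g. 10,000 flow runs moving 50 GB in a month:
print(appflow_cost(10_000, 50))  # → 11.0
```

At these rates, the per-run fee dominates only for very chatty, low-volume flows; for bulk transfers the data-processing charge quickly becomes the larger line item.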

“Our customers tell us that they love having the ability to store, process, and analyze their data in AWS. They also use a variety of third-party SaaS applications, and they tell us that it can be difficult to manage the flow of data between AWS and these applications,” said Kurt Kufeld, Vice President, AWS. “Amazon AppFlow provides an intuitive and easy way for customers to combine data from AWS and SaaS applications without moving it across the public Internet. With Amazon AppFlow, our customers bring together and manage petabytes, even exabytes, of data spread across all of their applications – all without having to develop custom connectors or manage underlying API and network connectivity.”

At this point, the number of supported services remains comparatively low, with only 14 possible sources and four destinations (Amazon Redshift and S3, as well as Salesforce and Snowflake). Sometimes, depending on the source you select, the only possible destination is Amazon’s S3 storage service.

Over time, the number of integrations will surely increase, but for now, it feels like there’s still quite a bit more work to do for the AppFlow team to expand the list of supported services.

AWS has long left this market to competitors, even though it has tools like AWS Step Functions for building serverless workflows across AWS services and EventBridge for connecting applications. Interestingly, EventBridge currently supports a far wider range of third-party sources, but as the name implies, its focus is more on triggering events in AWS than moving data between applications.


By Frederic Lardinois

Google Cloud’s fully-managed Anthos is now generally available for AWS

A year ago, back in the days of in-person conferences, Google officially announced the launch of its Anthos multi-cloud application modernization platform at its Cloud Next conference. The promise of Anthos was always that it would allow enterprises to write their applications once, package them into containers and then manage their multi-cloud deployments across GCP, AWS, Azure and their on-prem data centers.

Until now, support for AWS and Azure was only available in preview, but today, the company is making support for AWS and on-premises generally available. Microsoft Azure support remains in preview, though.

“As an AWS customer now, or a GCP customer, or a multi-cloud customer, […] you can now run Anthos on those environments in a consistent way, so you don’t have to learn any proprietary APIs and be locked in,” Eyal Manor, the VP of engineering in charge of Anthos, told me. “And for the first time, we enable the portability between different infrastructure environments as opposed to what has happened in the past where you were locked into a set of API’s.”

Manor stressed that Anthos was designed to be multi-cloud from day one. As for why AWS support is launching ahead of Azure, Manor said that there was simply more demand for it. “We surveyed the customers and they said, hey, we want, in addition to GCP, we want AWS,” he said. But support for Azure will come later this year and the company already has a number of preview customers for it. In addition, Anthos will also come to bare metal servers in the future.

Looking even further ahead, Manor also noted that better support for machine learning workloads is on the way. Many businesses, after all, want to be able to update and run their models right where their data resides, no matter what cloud that may be. There, too, the promise of Anthos is that developers can write the application once and then run it anywhere.

“I think a lot of the initial response and excitement was from the developer audiences,” Jennifer Lin, Google Cloud’s VP of product management, told me. “Eric Brewer had led a white paper that we did to say that a lot of the Anthos architecture sort of decouples the developer and the operator stakeholder concerns. There hadn’t been a multi-cloud shared software architecture where we could do that and still drive emerging and existing applications with a common shared software stack.”

She also noted that a lot of Google Cloud’s ecosystem partners endorsed the overall Anthos architecture early on because they, too, wanted to be able to write once and run anywhere — and so do their customers.

Plaid is one of the launch partners for these new capabilities. “Our customers rely on us to be always available and as a result we have very high reliability requirements,” said Naohiko Takemura, Plaid’s head of engineering. “We pursued a multi-cloud strategy to ensure redundancy for our critical KARTE service. Google Cloud’s Anthos works seamlessly across GCP and our other cloud providers preventing any business disruption. Thanks to Anthos, we prevent vendor lock-in, avoid managing cloud-specific infrastructure, and our developers are not constrained by cloud providers.”

With this release, Google Cloud is also bringing deeper support for virtual machines to Anthos, as well as improved policy and configuration management.

Over the next few months, the Anthos Service Mesh will also add support for applications that run in traditional virtual machines. As Lin told me, “a lot of this is about driving better agility and taking the complexity out of it so that we have abstractions that work across any environment, whether it’s legacy or new or on-prem or AWS or GCP.”


By Frederic Lardinois

AWS and Facebook launch an open-source model server for PyTorch

AWS and Facebook today announced two new open-source projects around PyTorch, the popular open-source machine learning framework. The first of these is TorchServe, a model serving framework for PyTorch that will make it easier for developers to put their models into production. The other is TorchElastic, a library that makes it easier for developers to build fault-tolerant training jobs on Kubernetes clusters, including AWS’s EC2 spot instances and Elastic Kubernetes Service.

In many ways, the two companies are taking what they have learned from running their own machine learning systems at scale and are putting this into the project. For AWS, that’s mostly SageMaker, the company’s machine learning platform, but as Bratin Saha, AWS VP and GM for Machine Learning Services, told me, the work on PyTorch was mostly motivated by requests from the community. And while there are obviously other model servers like TensorFlow Serving and the Multi Model Server available today, Saha argues that it would be hard to optimize those for PyTorch.

“If we tried to take some other model server, we would not be able to quote optimize it as much, as well as create it within the nuances of how PyTorch developers like to see this,” he said. AWS has lots of experience in running its own model servers for SageMaker that can handle multiple frameworks, but the community was asking for a model server that was tailored toward how they work. That also meant adapting the server’s API to what PyTorch developers expect from their framework of choice, for example.

As Saha told me, the server that AWS and Facebook are now launching as open source is similar to what AWS is using internally. “It’s quite close,” he said. “We actually started with what we had internally for one of our model servers and then put it out to the community, worked closely with Facebook, to iterate and get feedback — and then modified it so it’s quite close.”

Bill Jia, Facebook’s VP of AI Infrastructure, also told me, he’s very happy about how his team and the community has pushed PyTorch forward in recent years. “If you look at the entire industry community — a large number of researchers and enterprise users are using AWS,” he said. “And then we figured out if we can collaborate with AWS and push PyTorch together, then Facebook and AWS can get a lot of benefits, but more so, all the users can get a lot of benefits from PyTorch. That’s our reason for why we wanted to collaborate with AWS.”

As for TorchElastic, the focus here is on allowing developers to create training systems that can work on large distributed Kubernetes clusters where you might want to use cheaper spot instances. Those are preemptible, though, so your system has to be able to handle that, while traditionally, machine learning training frameworks often expect a system where the number of instances stays the same throughout the process. That, too, is something AWS originally built for SageMaker. There, it’s fully managed by AWS, though, so developers never have to think about it. For developers who want more control over their dynamic training systems or stay very close to the metal, TorchElastic now allows them to recreate this experience on their own Kubernetes clusters.
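Setting TorchElastic's actual API aside, the underlying idea — training that survives workers disappearing by keeping a checkpoint and resuming from it instead of restarting — can be sketched in plain Python (all names here are invented for illustration; this is not the TorchElastic API):

```python
import random

def train(total_steps: int, state: dict) -> None:
    """Toy training loop; mutates `state` in place so progress made
    before a preemption survives (a stand-in for checkpointing)."""
    while state["step"] < total_steps:
        state["step"] += 1
        state["loss"] *= 0.9          # stand-in for an optimization step
        if random.random() < 0.1:     # simulate a spot instance reclaim
            raise InterruptedError("worker preempted")

ckpt = {"step": 0, "loss": 1.0}
while ckpt["step"] < 20:              # relaunch until training completes
    try:
        train(20, ckpt)
    except InterruptedError:
        pass                          # resume from the last checkpoint

print(ckpt["step"])  # → 20
```

The point of the sketch is the outer loop: a non-elastic framework would lose all progress on each `InterruptedError`, while here each relaunch picks up at the last recorded step, which is what makes cheap preemptible capacity usable for training.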

AWS has a bit of a reputation when it comes to open source and its engagement with the open-source community. In this case, though, it’s nice to see AWS lead the way to bring some of its own work on building model servers, for example, to the PyTorch community. In the machine learning ecosystem, that’s very much expected, and Saha stressed that AWS has long engaged with the community as one of the main contributors to MXNet and through its contributions to projects like Jupyter, TensorFlow and libraries like NumPy.


By Frederic Lardinois

Pileus helps businesses cut their cloud spend

Israel-based Pileus, which is officially launching today, aims to help businesses keep their cloud spend under control. The company also today announced that it has raised a $1 million seed round from a private angel investor.

Using machine learning, the company’s platform continuously learns about how a user typically uses a given cloud and then provides forecasts and daily personalized recommendations to help them stay within a budget.

Pileus currently supports AWS, with support for Google Cloud and Microsoft Azure coming soon.

With all of the information it gathers about your cloud usage, the service can also monitor usage for any anomalies. Because, at its core, Pileus keeps a detailed log of all your cloud spend, it also can provide detailed reports and dashboards of what a user is spending on each project and resource.
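Pileus hasn't published its detection method, but the general idea behind spend-anomaly monitoring can be sketched simply: flag any day whose spend deviates sharply from the recent baseline (the window and threshold below are invented for illustration):

```python
def spend_anomalies(daily_spend, window=7, factor=2.0):
    """Flag days whose spend exceeds `factor` times the mean of the
    previous `window` days; returns a list of (day_index, spend) pairs."""
    flagged = []
    for i in range(window, len(daily_spend)):
        baseline = sum(daily_spend[i - window:i]) / window
        if daily_spend[i] > factor * baseline:
            flagged.append((i, daily_spend[i]))
    return flagged

spend = [100, 98, 103, 101, 99, 102, 100, 104, 250, 101]
print(spend_anomalies(spend))  # → [(8, 250)]
```

A production system would learn per-resource baselines and seasonality rather than use a fixed window, but the shape of the problem — compare today's spend against a learned norm and alert on outliers — is the same.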

If you’ve ever worked on a project like this, you know that these reports are only as good as the tags you use to identify each project and resource, so Pileus makes that a priority on its platform, with a tagging tool that helps enforce tagging policies.
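The enforcement half of such a tagging tool reduces to a simple check: given a required-tag policy, report every resource missing a required key (the resource names, tag keys and values below are invented for illustration):

```python
REQUIRED_TAGS = {"project", "owner", "environment"}

def untagged_resources(resources):
    """Return {resource_name: missing_tag_keys} for every resource
    that violates the tagging policy."""
    violations = {}
    for name, tags in resources.items():
        missing = REQUIRED_TAGS - tags.keys()
        if missing:
            violations[name] = sorted(missing)
    return violations

resources = {
    "i-0abc": {"project": "karte", "owner": "roni", "environment": "prod"},
    "vol-9xyz": {"project": "karte"},
}
print(untagged_resources(resources))  # → {'vol-9xyz': ['environment', 'owner']}
```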

“My team and I spent many sleepless nights working on this solution,” says Pileus CEO Roni Karp. “We’re thrilled to finally be able to unleash Pileus to the masses and help everyone gain more efficiency of their cloud experience while helping them understand their usage and costs better than ever before.”

Pileus currently offers a free 30-day trial. After that, users can opt to pay either $180 per month or $800 per year. At those prices, the service isn’t exactly useful until your cloud spend is significantly more than that, of course.

The company isn’t just focused on individual businesses, though. It’s also targeting managed service providers that can use the platform to create reports and manage their own customer billing. Karp believes this will become a significant source of revenue for Pileus because “there are not many good tools in the field today, especially for Azure.”

It’s no secret that Pileus is launching into a crowded market, where well-known incumbents like Cloudability already share mindshare with a growing number of startups. Karp, however, believes that Pileus can stand out, largely because of its machine learning platform and its ability to provide users with immediate value, whereas, he argues, it often takes several weeks for other platforms to deliver results.


By Frederic Lardinois

Tech giants should let startups defer cloud payments

Google, Amazon and Microsoft are the landlords. Amidst the coronavirus economic crisis, startups need a break from paying rent. They’re in a cash crunch. Revenue has stopped flowing in, capital markets like venture debt are hesitant, and startups and small-to-medium-sized businesses are at risk of having to lay off huge numbers of employees and/or shut down.

Meanwhile, the tech giants are cash rich. Their success this decade means they’re able to weather the storm for a few months. Their customers cannot.

Cloud infrastructure costs are among many startups’ top expenses besides payroll. The option to pay these cloud bills later could save some from going out of business or axing huge parts of their staff. Both would hurt the tech industry, the economy, and the individuals laid off. But most worryingly for the giants, it could destroy their customer base.

The mass layoffs have already begun. Soon we’re sure to start hearing about sizable companies shutting down, upended by COVID-19. But there’s still an opportunity to stop a larger bloodbath from ensuing.

That’s why I have a proposal: cloud relief.

The platform giants should let startups and small businesses defer their cloud infrastructure payments for three to six months until they can pay them back in installments. Amazon AWS, Google Cloud, Microsoft Azure, these companies’ additional infrastructure products, and other platform providers should let customers pause payment until the worst of the first wave of the COVID-19 economic disruption passes. Profitable SaaS providers like Salesforce could give customers an extension, too.

There are plenty of altruistic reasons to do this. They have the resources to help businesses in need. We all need to support each other in these tough times. This could protect tons of families. Some of these startups are providing important services to the public and even discounting them, thereby ramping up their bills while decreasing revenue.

Then there are the PR reasons. After years of techlash and anti-trust scrutiny, here’s the chance for the giants to prove their size can be beneficial to the world. Recruiters could use it as a talking point. “We’re the company that helped save Silicon Valley.” There’s an explanation for them squirreling away so much cash: the rainy day has finally arrived.

But the capitalistic truth and the story they could sell to Wall Street is that it’s not good for our business if our customers go out of business. Look at what happened to infrastructure providers in the dotcom crash. When tons of startups vaporized, so did the profits for those selling them hosting and tools. Any government stimulus for businesses would be better spent by them paying employees than paying the cloud companies that aren’t in danger. Saving one future Netflix from shutting down could cover any short-term loss from helping 100 other businesses.

This isn’t a handout. These startups will still owe the money. They’d just be able to pay it a little later, spread out over their monthly bills for a year or so. Once mass shelter-in-place orders subside, businesses can operate at least a little closer to normal, and investors grow less cautious, customers will have the cash they need to pay their dues, plus interest if necessary.

Meanwhile, they’ll be locked in and loyal customers for the foreseeable future. Cloud vendors could gate the deferment to only customers that have been with them for X amount of months or that have already spent Y amount on the platform. The vendors could also offer the deferment on the condition that customers add a year or more to their existing contracts. Founders will remember who gave them the benefit of the doubt.


Consider it a marketing expense. Platforms often offer discounts or free trials to new customers. Now it’s existing customers that need a reprieve. Instead of airport ads, the giants could spend the money ensuring they’ll still have plenty of developers building atop them by the end of 2020.

Beyond deferred payment, platforms could simply push the due date on all outstanding bills to three or six months from now. Alternatively, they could offer a deep discount, such as 50% off for three months, if they didn’t want to deal with accruing debt and then servicing it. Customers with multi-year contracts could be offered the opportunity to downgrade or renegotiate their contracts without penalties. Any of these might require giving sales quota forgiveness to their account executives.

It would likely be far too complicated and risky to accept equity in lieu of cash, take a cut of revenue going forward, or provide loans or credit lines to customers. The clearest and simplest solution is to let startups skip a few payments, then pay more every month later until they clear their debt. When asked whether they’re considering payment deferment options, Microsoft declined to comment, while Amazon and Google did not respond.

To be clear, administering payment deferment won’t be simple or free. There are sure to be holes that cloud economists can poke in this proposal, but my goal is to get the conversation started. It could require the giants to change their earnings guidance. Rewriting deals with significantly sized customers will take work on both ends, and there’s a chance of breach-of-contract disputes. Giants would face the threat of customers recklessly using cloud resources before shutting down or skipping town.

Most taxing would be determining and enforcing the criteria of who’s eligible. The vendors would need to lay out which customers are too big so they don’t accidentally give a cloud-intensive but healthy media company a deferment they don’t need. Businesses that get questionably excluded could make a stink in public. Executing on the plan will require staff when giants are stretched thin trying to handle logistics disruptions, misinformation, and accelerating work-from-home usage.

Still, this is the moment when the fortunate need to lend a hand to the vulnerable. Not a hand out, but a hand up. Companies with billions in cash in their coffers could save those struggling to pay salaries. All the fundraisers and info centers and hackathons are great, but this is how the tech giants can live up to their lofty mission statements.

We all live in the cloud now. Don’t evict us. #CloudRelief

Thanks to Falon Fatemi, Corey Quinn, Ilya Fushman, Jason Kim, Ilya Sukhar, and Michael Campbell for their ideas and feedback on this proposal.


By Josh Constine

Big opening for startups that help move entrenched on-prem workloads to the cloud

AWS CEO Andy Jassy showed signs of frustration at his AWS re:Invent keynote address in December.

Customers weren’t moving to the cloud nearly fast enough for his taste, and he prodded them to move along. Some of their hesitation, as Jassy pointed out, was due to institutional inertia, but some of it also was due to a technology problem related to getting entrenched, on-prem workloads to the cloud.

When a challenge of this magnitude presents itself and you have the head of the world’s largest cloud infrastructure vendor imploring customers to move faster, you can be sure any number of players will start paying attention.

Sure enough, cloud infrastructure vendors have developed new migration solutions to help break that big data logjam. Large systems integrators like Accenture and Deloitte are also happy to help your company deal with migration issues, but the situation also presents a big opening for startups aiming to solve the hard problems associated with moving certain workloads to the cloud.

Think about problems like getting data off of a mainframe and into the cloud or moving an on-prem data warehouse. We spoke to a number of experts to figure out where this migration market is going and if the future looks bright for cloud-migration startups.

Cloud-migration blues

It’s hard to nail down exactly the percentage of workloads that have been moved to the cloud at this point, but most experts agree there’s still a great deal of growth ahead. Some of the more optimistic projections have pegged it at around 20%, with the U.S. far ahead of the rest of the world.


By Ron Miller

Despite JEDI loss, AWS retains dominant market position

AWS took a hard blow last year when it lost the $10 billion, decade-long JEDI cloud contract to rival Microsoft. Yet even without that mega deal for building out the nation’s Joint Enterprise Defense Infrastructure, the company remains fully in control of the cloud infrastructure market — and it intends to fight that decision.

In fact, AWS still owns almost twice as much cloud infrastructure market share as Microsoft, its closest rival. While the two will battle over the next decade for big contracts like JEDI, for now, AWS doesn’t have much to worry about.

There was a lot more to AWS’s year than simply losing JEDI. Per usual, the company came out with a flurry of announcements and enhancements to its vast product set. Among the more interesting moves were a shift to the edge, a more serious push into the chip business and a big dose of machine learning product announcements.

The fact is that AWS has such market momentum now, it’s a legitimate question to ask if anyone, even Microsoft, can catch up. The market is continuing to expand though, and the next battle is for that remaining market share. AWS CEO Andy Jassy spent more time than in the past trashing Microsoft at 2019’s re:Invent customer conference in December, imploring customers to move to the cloud faster and showing that his company is preparing for a battle with its rivals in the years ahead.

Numbers, please

AWS closed 2019 on a $36 billion run rate, growing from $7.43 billion in quarterly revenue in its first report in January to $9 billion in its most recent earnings report in October. Believe it or not, according to CNBC, that number failed to meet analysts’ expectations of $9.1 billion, but it still accounted for 13% of Amazon’s revenue in the quarter.

Regardless, AWS is a juggernaut, which is fairly amazing when you consider that it started as a side project for Amazon.com in 2006. In fact, if AWS were a stand-alone company, it would be a substantial business in its own right. While growth slowed a bit last year, that’s inevitable when you get as large as AWS, says John Dinsdale, VP, chief analyst and general manager at Synergy Research, a firm that follows all aspects of the cloud market.

“This is just math and the law of large numbers. On average over the last four quarters, it has incremented its revenues by well over $500 million per quarter. So it has grown its quarterly revenues by well over $2 billion in a twelve-month period,” he said.

Dinsdale added, “To put that into context, this growth in quarterly revenue is bigger than Google’s total revenues in cloud infrastructure services. In a very large market that is growing at over 35% per year, AWS market share is holding steady.”

Dinsdale says the cloud infrastructure market didn’t quite break $100 billion last year; even without full Q4 results, his firm’s models project a total of around $95 billion, up 37% over 2018. AWS has more than a third of that. Microsoft is way back at around 17%, with Google in third at around 8% or 9%.
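As a quick sanity check, the headline figures above hang together. Here is a back-of-the-envelope sketch in Python; the numbers are the ones cited in this piece, and "run rate" is simply the latest quarterly revenue annualized:

```python
# Back-of-the-envelope check of the figures cited in this piece.
# "Run rate" here is just the latest quarterly revenue annualized.

quarterly_revenue_b = 9.0              # most recent quarter, in $ billions
run_rate_b = quarterly_revenue_b * 4   # the "$36 billion run rate"

market_2019_b = 95.0                   # Synergy's projected 2019 market size
aws_share = run_rate_b / market_2019_b

print(f"Run rate: ${run_rate_b:.0f}B, AWS share ~ {aws_share:.0%}")
# Roughly 38%, consistent with "more than a third" of the market.
```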

While this is from Q1, it illustrates the relative positions of companies in the cloud market. Chart: Synergy Research

JEDI disappointment

It would be hard to do any year-end review of AWS without discussing JEDI. From the moment the Department of Defense announced its decade-long, $10 billion cloud RFP, it has been one big controversy after another.


By Ron Miller

AWS announces new savings plans to reduce complexity of reserved instances

Reserved instances (RIs) have provided a mechanism for companies that expect to use a certain level of AWS infrastructure resources to get some cost certainty, but as AWS’s Jeff Barr points out, they are on the complex side. To fix that, the company announced a new pricing model called Savings Plans.

“Today we are launching Savings Plans, a new and flexible discount model that provides you with the same discounts as RIs, in exchange for a commitment to use a specific amount (measured in dollars per hour) of compute power over a one or three year period,” Barr wrote in a blog post announcing the new program.

Amazon charges customers in a couple of ways. First, there is an on-demand price, which is basically the equivalent of the rack rate at a hotel. You are going to pay more for this because you’re walking up and ordering it on the fly.

Most organizations know they are going to need a certain level of resources over a period of time, and in these cases, they can save some money by buying in bulk up front. This gives them cost certainty as an organization, and it helps Amazon because it knows it’s going to have a certain level of usage and can plan accordingly.

While Reserved Instances aren’t going away yet, it sounds like Amazon is trying to steer customers to the new savings plans. “We will continue to sell RIs, but Savings Plans are more flexible and I think many of you will prefer them,” Barr wrote.

The Savings Plans come in two flavors. Compute Savings Plans provide up to 66% savings, similar to the discounts RIs offer. The aspect that customers should like is that the savings are broadly applicable across AWS products, and you can even move workloads between regions and maintain the same discounted rate.

The other is the EC2 Instance Savings Plan. With this one, similar to reserved instances, you can save up to 72% over the on-demand price, but you are limited to a single region. It does offer a measure of flexibility, though, allowing you to select different sizes of the same instance type or even switch operating systems from Windows to Linux without affecting your discount within your region of choice.
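To make the commitment mechanics concrete, here is a small illustrative sketch. The rates, usage numbers and the `hourly_cost` helper are all hypothetical, and real Savings Plan discounts vary by instance family, region and term; the basic idea is that usage up to your committed amount is billed at the discounted rate, and anything above it falls back to on-demand pricing:

```python
# Hypothetical sketch of how a Savings Plan commitment might net out
# against on-demand pricing. All rates and usage numbers are made up.

ON_DEMAND_RATE = 1.00       # $/hour per unit of compute, hypothetical
COMPUTE_SP_DISCOUNT = 0.66  # "up to 66%" for Compute Savings Plans
EC2_SP_DISCOUNT = 0.72      # "up to 72%" for EC2 Instance Savings Plans

def hourly_cost(usage: float, commitment: float, discount: float) -> float:
    """Usage up to the commitment is billed at the discounted rate;
    anything above it falls back to the on-demand rate."""
    discounted_rate = ON_DEMAND_RATE * (1 - discount)
    covered = min(usage, commitment)
    overflow = max(usage - commitment, 0.0)
    return covered * discounted_rate + overflow * ON_DEMAND_RATE

# 10 units of usage with 8 committed under a Compute Savings Plan:
# 8 * 0.34 + 2 * 1.00, i.e. about 4.72 instead of 10.00 on-demand.
print(hourly_cost(10, 8, COMPUTE_SP_DISCOUNT))
```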

You can sign up today through the AWS Cost Explorer.


By Ron Miller

Amazon migrates more than 100 consumer services from Oracle to AWS databases

AWS and Oracle love to take shots at each other, but as much as Amazon has knocked Oracle over the years, it had to admit that it was, in fact, a customer. Today in a company blog post, Amazon announced that it had shed Oracle for AWS databases and effectively turned off its final Oracle database.

The move involved 75 petabytes of internal data stored in nearly 7,500 Oracle databases, according to the company. “I am happy to report that this database migration effort is now complete. Amazon’s Consumer business just turned off its final Oracle database (some third-party applications are tightly bound to Oracle and were not migrated),” AWS’s Jeff Barr wrote in the company blog post announcing the migration.

Over the last several years, the company has been working to move off of Oracle databases, but moving projects at Amazon’s scale is no easy task. Barr wrote that there were lots of reasons the company wanted to make the move. “Over the years we realized that we were spending too much time managing and scaling thousands of legacy Oracle databases. Instead of focusing on high-value differentiated work, our database administrators (DBAs) spent a lot of time simply keeping the lights on while transaction rates climbed and the overall amount of stored data mounted,” he wrote.

More than 100 consumer services have been moved to AWS databases, including customer-facing tools like Alexa, Amazon Prime and Twitch, among others. The company also moved internal tools like AdTech, its fulfillment system, external payments and ordering. These are not minor matters; they are the heart and soul of Amazon’s operations.

Each team moved its Oracle database to an AWS database service such as Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS) or Amazon Redshift. Each group was allowed to choose the service it wanted based on its individual needs and requirements.

 


By Ron Miller

Annual Extra Crunch members can receive $1,000 in AWS credits

We’re excited to announce a new partnership with Amazon Web Services for annual members of Extra Crunch. Starting today, qualified annual members can receive $1,000 in AWS credits. You must also be a startup founder to claim this Extra Crunch community perk.

AWS is the premier service for your application hosting needs, and we want to make sure our community is well-resourced to build. We understand that hosting and infrastructure costs can be a major hurdle for tech startups, and we’re hoping that this offer will help better support your team.

What’s included in the perk:

  • $1,000 in AWS Promotional Credit valid for 1 year
  • 2 months of AWS Business Support
  • 80 credits for self-paced labs

Applications are processed within 7-10 days of receipt. Companies may not be eligible for AWS Promotional Credits if they previously received a similar or greater amount of credit. Companies that previously received a lower credit may be eligible to be “topped up” to a higher credit amount.

In addition to the AWS community perk, Extra Crunch members also get access to how-tos and guides on company building, intelligence on what’s happening in the startup ecosystem, stories about founders and exits, transcripts from panels at TechCrunch events, discounts on TechCrunch events, no banner ads on TechCrunch.com and more. To see a full list of the types of articles you get with Extra Crunch, head here.

You can sign up for annual Extra Crunch membership here.

Once you are signed up, you’ll receive a welcome email with a link to the AWS offer. If you are already an annual Extra Crunch member, you will receive an email with the offer at some point today. If you are currently a monthly Extra Crunch subscriber and want to upgrade to annual in order to claim this deal, head over to the “my account” section on TechCrunch.com and click the “upgrade” button.

This is one of several new community perks we’ve been working on for Extra Crunch members. Extra Crunch members also get 20% off all TechCrunch event tickets (email [email protected] with the event name to receive a discount code for event tickets). You can learn more about our events lineup here. You also can read about our Brex community perk here.


By Travis Bernard