The single-vendor requirement ultimately doomed the DoD’s $10B JEDI cloud contract

When the Pentagon killed the JEDI cloud program yesterday, it was the end of a long and bitter road for a project that never seemed to have a chance. The question is why it didn’t work out in the end, and ultimately I think you can blame the DoD’s stubborn adherence to a single-vendor requirement, a condition that never made sense to anyone, even the vendor that ostensibly won the deal.

In March 2018, the Pentagon announced a mega $10 billion, decade-long cloud contract to build the next generation of cloud infrastructure for the Department of Defense. It was dubbed JEDI, which, aside from the Star Wars reference, was short for Joint Enterprise Defense Infrastructure.

The idea was a 10-year contract with a single vendor that started with an initial two-year period. If all was going well, a five-year option would kick in, and a three-year option would finally close things out, with the contract worth roughly $1 billion a year.

While the total value of the contract, had it been completed, was quite large, a billion a year for companies the size of Amazon, Oracle or Microsoft is not a ton of money in the scheme of things. It was more about the prestige of winning such a high-profile contract and what it would mean for sales bragging rights. After all, if you passed muster with the DoD, you could probably handle just about anyone’s sensitive data, right?

Regardless, the idea of a single-vendor contract went against the conventional wisdom that the cloud gives you the option of working with best-in-class vendors. Microsoft, the eventual winner of the ill-fated deal, acknowledged that the single-vendor approach was flawed in an April 2018 interview:

Leigh Madden, who heads up Microsoft’s defense effort, says he believes Microsoft can win such a contract, but it isn’t necessarily the best approach for the DoD. “If the DoD goes with a single award path, we are in it to win, but having said that, it’s counter to what we are seeing across the globe where 80 percent of customers are adopting a multi-cloud solution,” Madden told TechCrunch.

Perhaps it was doomed from the start because of that. Yet even before the requirements were fully known, there were complaints that the deal would favor Amazon, the market share leader in cloud infrastructure. Oracle was particularly vocal, taking its complaints directly to the former president before the RFP was even published. It would later file a complaint with the Government Accountability Office and a couple of lawsuits alleging that the entire process was unfair and designed to favor Amazon. It lost every time, and of course, Amazon wasn’t ultimately the winner.

While there was a lot of drama along the way, in April 2019 the Pentagon named two finalists, and it was probably not too surprising that they were the two cloud infrastructure market leaders: Microsoft and Amazon. Game on.

The former president injected himself directly into the process in August of that year, when he ordered the defense secretary to review the matter over concerns that the process favored Amazon, a complaint that had by that point been refuted several times over by the DoD, the Government Accountability Office and the courts. To further complicate matters, a book by a former aide to defense secretary Jim Mattis claimed the president told Mattis to “screw Amazon out of the $10 billion contract.” The president’s goal appeared to be getting back at Jeff Bezos, who also owns The Washington Post.

In spite of all the claims that the process favored Amazon, when the winner was finally announced in October 2019, late on a Friday afternoon no less, it was not in fact Amazon. Instead, Microsoft won the deal, or at least it seemed that way. It wouldn’t be long before Amazon disputed the decision in court.

By the time AWS re:Invent hit a couple of months after the announcement, then-AWS CEO Andy Jassy was already pushing the idea that the president had unduly influenced the process.

“I think that we ended up with a situation where there was political interference. When you have a sitting president, who has shared openly his disdain for a company, and the leader of that company, it makes it really difficult for government agencies, including the DoD, to make objective decisions without fear of reprisal,” Jassy said at that time.

Then came the litigation. In November, Amazon indicated it would challenge the decision to choose Microsoft, charging that it was driven by politics rather than technical merit. In January 2020, Amazon asked the court to halt the project until the legal challenges were settled. In February, a federal judge agreed with Amazon and stopped the project. It would never restart.

In April, the DoD completed its own internal investigation of the contract procurement process and found no wrongdoing. As I wrote at the time:

While controversy has dogged the $10 billion, decade-long JEDI contract since its earliest days, a report by the DoD’s Inspector General’s Office concluded today that, while there were some funky bits and potential conflicts, overall the contract procurement process was fair and legal and the president did not unduly influence the process in spite of public comments.

Last September, the DoD completed a review of the selection process and once again concluded that Microsoft was the winner, but it didn’t really matter, as the litigation was still in motion and the project remained stalled.

The legal wrangling continued into this year, and yesterday the Pentagon finally pulled the plug on the project once and for all, saying it was time to move on because times have changed since 2018, when it announced its vision for JEDI.

The DoD finally came to the conclusion that a single-vendor approach wasn’t the best way to go, and not just because it could never get the project off the ground: it simply makes more sense from a technology and business perspective to work with multiple vendors and avoid getting locked into any particular one.

“JEDI was developed at a time when the Department’s needs were different and both the CSPs’ (cloud service providers) technology and our cloud conversancy was less mature. In light of new initiatives like JADC2 (the Pentagon’s initiative to build a network of connected sensors) and AI and Data Acceleration (ADA), the evolution of the cloud ecosystem within DoD, and changes in user requirements to leverage multiple cloud environments to execute mission, our landscape has advanced and a new way-ahead is warranted to achieve dominance in both traditional and non-traditional warfighting domains,” said John Sherman, acting DoD Chief Information Officer in a statement.

In other words, the DoD would benefit more from adopting a multi-cloud, multi-vendor approach like pretty much the rest of the world. That said, the department also indicated it would limit the vendor selection to Microsoft and Amazon.

“The Department intends to seek proposals from a limited number of sources, namely the Microsoft Corporation (Microsoft) and Amazon Web Services (AWS), as available market research indicates that these two vendors are the only Cloud Service Providers (CSPs) capable of meeting the Department’s requirements,” the department said in a statement.

That’s not going to sit well with Google, Oracle or IBM, but the department indicated it would continue to monitor the market to see whether other CSPs develop the chops to handle its requirements in the future.

In the end, the single-vendor requirement contributed greatly to an overly competitive and politically charged atmosphere that kept the project from ever coming to fruition. Now the DoD has to play technology catch-up, having lost three years to the histrionics of the JEDI procurement process, and that could be the most lamentable part of this long, sordid technology tale.


By Ron Miller

Salesforce, AWS announce extended partnership with further two-way integration

Salesforce and AWS are the two most successful cloud companies in their respective categories. Over the last few years, the two cloud giants have built an evolving partnership. Today they announced plans for a new set of integration capabilities to make it easier to share data and build applications that cross the two platforms.

Patrick Stokes, EVP and GM for Platform at Salesforce, points out that the companies have worked together in the past on features like secure sharing between the two services, but they were hearing from customers that they wanted to take it further, and today’s announcement is the first step toward making that happen.

“[The initial phases of the partnership] have really been massively successful. We’re learning a lot from each other and from our mutual customers about the types of things that they want to try to accomplish, both within the Salesforce portfolio of products, as well as all the Amazon products, so that the two solutions complement each other really nicely. And customers are asking us for more, and so we’re excited to enter into this next phase of our partnership,” Stokes explained.

He added, “The goal really is to unify our platforms, so bring [together] all the power of the Amazon services with all of the power of the Salesforce platform.” These capabilities could be the next step in accomplishing that.

This involves a couple of new features the companies are working on to help developers on both the platform and application sides of the equation. For starters, that includes enabling developers to virtualize Amazon data inside Salesforce without having to write all the code to make that happen manually.

“More specifically, we’re going to virtualize Amazon data within the Salesforce platform, so whether you’re working with an S3 bucket, Amazon RDS or whatever it is, we’re going to make it so that the data is virtualized and appears just like it’s native data on the Salesforce platform,” he said.

Similarly, developers building applications on Amazon will be able to access Salesforce data and have it appear natively there. This involves connectors between the two systems that let the data flow smoothly without a lot of custom code.
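Neither company has published developer documentation for these connectors yet, but what Stokes describes sounds a lot like Salesforce Connect, where outside data surfaces as “external objects” that can be queried as if they were native. Here is a minimal sketch of that pattern using the open source simple-salesforce Python library; the credentials and the Order__x external object are hypothetical:

```python
from simple_salesforce import Salesforce

# Hypothetical org credentials. In Salesforce, virtualized "external objects"
# carry an __x suffix; the rows themselves live outside the org, e.g. in RDS.
sf = Salesforce(
    username="admin@example.com",
    password="...",
    security_token="...",
)

# Queried with ordinary SOQL, as if the data were native to Salesforce.
results = sf.query("SELECT ExternalId, DisplayUrl FROM Order__x LIMIT 10")
for record in results["records"]:
    print(record["ExternalId"], record["DisplayUrl"])
```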

The companies are also announcing event sharing capabilities, which makes it easier for both Amazon and Salesforce customers to build microservices-based applications that cross both platforms.

“You can build microservices-oriented architecture that spans the services of Salesforce and Amazon platforms, again without having to write any code. To do that, [we’re developing] out of the box connectors so you can click and drag the events that you want.”
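The companies haven’t detailed the plumbing behind this event sharing, but cross-platform eventing of this kind typically flows through a bus such as Amazon EventBridge. As a rough sketch of the shape such an integration could take, here is a Salesforce-originated event published onto an AWS event bus with boto3; the bus name, source label and payload are all hypothetical:

```python
import json

import boto3

events = boto3.client("events")

# Hypothetical: an order-status event that originated in Salesforce, published
# onto a shared bus where AWS-side microservices can subscribe to it by rule.
events.put_events(
    Entries=[
        {
            "EventBusName": "salesforce-events",
            "Source": "custom.salesforce.orders",
            "DetailType": "OrderStatusChanged",
            "Detail": json.dumps({"orderId": "801-0001", "status": "Shipped"}),
        }
    ]
)
```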

The companies are also announcing plans to make it easier from an identity and access management perspective to access the platforms with a guided setup. Finally, the companies are working on applications to build Amazon Chime communications tooling into Service Cloud and other Salesforce services to build things like virtual call centers using AWS machine learning technology.

Amazon VP of Global Marketing Rachel Thornton says that having the two cloud giants work together in this way should make it easier for developers to create solutions that span the two platforms. “I just think it unlocks such possibilities for developers, and the faster and more innovative developers can be, it just unlocks opportunities for businesses, and creates better customer experiences,” Thornton said.

It’s worth noting that Salesforce also has extensive partnerships with other cloud providers including Microsoft Azure and Google Cloud Platform.

As is typically the case with Salesforce announcements, while all of these capabilities are being announced today, they are still in the development stage and won’t go into beta testing until later this year with GA expected sometime next year. The companies are expected to release more details about the partnership at Dreamforce and re:Invent, their respective customer conferences later this year.


By Ron Miller

AWS releases tool to open source that turns on-prem software into SaaS

AWS announced today that it’s releasing a tool called AWS SaaS Boost as open source distributed under the Apache 2.0 license. The tool, which was first announced at the AWS re:Invent conference last year, is designed to help companies transform their on-prem software into cloud-based Software as a Service.

In the charter for the software, the company describes its mission this way: “Our mission is to create a community-driven suite of extensible building blocks for Software-as-a-Service (SaaS) builders. Our goal is to foster an open environment for developing and sharing reusable code that accelerates the ability to deliver and operate multi-tenant SaaS solutions on AWS.”

What it effectively does is provide the tools to turn an application into one that lets you sign up users and serve them in a multi-tenant cloud context. Even though it’s open source, it is designed to move your application onto AWS, where you can access a number of AWS services such as AWS CloudFormation, AWS Identity and Access Management (IAM), Amazon Route 53, Elastic Load Balancing, AWS Lambda (Amazon’s serverless tool) and Amazon Elastic Container Service (Amazon’s container orchestration service), although presumably you could use alternative services if you were so inclined.
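SaaS Boost provisions that tenant plumbing for you, but the core pattern it automates is simple to state: resolve every request to a tenant, then scope all data access to that tenant’s partition. A generic sketch of the idea (not SaaS Boost’s actual code); the DynamoDB table name and the JWT claim are assumptions:

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table, partitioned by tenant_id


def list_orders(claims: dict) -> list:
    """Return only the rows that belong to the caller's tenant.

    `claims` stands in for a verified JWT payload; carrying the tenant in a
    custom claim such as "custom:tenant_id" is a common Cognito convention.
    """
    tenant_id = claims["custom:tenant_id"]
    response = table.query(KeyConditionExpression=Key("tenant_id").eq(tenant_id))
    return response["Items"]
```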

By making it open source, AWS gives companies that need this kind of service access to the source code, offering them a comfort level and the ability to contribute to the project, expand on the base product and give back to the community. That makes it a win for users, who get flexibility and the benefit of a community behind the tool, and a win for AWS, which gets that community working to improve and enhance the tool over time.

“Our objective with AWS SaaS Boost is to get great quality software based on years of experience in the hands of as many developers and companies as possible. Because SaaS Boost is open source software, anyone can help improve it. Through a community of builders, our hope is to develop features faster, integrate with a wide range of SaaS software, and to provide a high quality solution for our customers regardless of company size or location,” Amazon’s Adrian De Lucan wrote in a blog post announcing the intent to open source SaaS Boost.

This announcement comes just a couple of weeks after the company open sourced its DeepRacer device software, which runs its machine learning-fueled mini race cars. That said, Amazon has had a complicated relationship with open source over the past couple of years: companies like MongoDB, Elastic and CockroachDB have altered their open source licenses to prevent Amazon from offering its own hosted versions of their software.


By Ron Miller

Google’s Anthos multi-cloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multi-cloud platform, which lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and, in preview, Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called the project the ‘Google Cloud Services Platform,’ which launched three years ago). Hybrid and multi-cloud, it’s fair to say, play a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. And recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations between Anthos and a wider range of Google Cloud products. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services to run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.


By Frederic Lardinois

Why Adam Selipsky was the logical choice to run AWS

When AWS CEO Andy Jassy announced in an email to employees yesterday that Tableau CEO Adam Selipsky was returning to run AWS, it was probably not the choice most considered. But to the industry watchers we spoke to over the last couple of days, it was a move that made absolute sense once you thought about it.

Gartner analyst Ed Anderson says the cultural fit was probably too good for Jassy to pass up. Selipsky spent 11 years helping build the division; he was someone Jassy knew well and had worked side by side with for over a decade, and he could slide into the new role and be trusted to continue building the lucrative business.

Anderson says that even though the size and scope of AWS have changed dramatically since Selipsky left in 2016, when the company closed the year on a $16 billion run rate, the organization’s cultural dynamics haven’t changed all that much.

“Success in this role requires a deep understanding of the Amazon/AWS culture in addition to a vision for AWS’s future growth. Adam already knows the AWS culture from his previous time at AWS. Yes, AWS was a smaller business when he left, but the fundamental structure and strategy was in place and the culture hasn’t notably evolved since then,” Anderson told me.

Matt McIlwain, managing director at Madrona Venture Group, says the experience Selipsky gained after he left AWS will prove invaluable when he returns.

“Adam transformed Tableau from a desktop, licensed software company to a cloud, subscription software company that thrived. As the leader of AWS, Adam is returning to a culture he helped grow as the sales and marketing leader that brought AWS to prominence and broke through from startup customers to become the leading enterprise solution for public cloud,” he said.

Holger Mueller, an analyst with Constellation Research, says that Selipsky’s business experience gave him the edge over other candidates. “His business acumen won out over [internal candidates] Matt Garman and Peter DeSantis. Insight on how Salesforce works may be helpful and valued as well,” Mueller pointed out.

As for leaving Tableau, and with it Salesforce, the company that purchased it for $15.7 billion in 2019, Brent Leary, founder and principal analyst at CRM Essentials, believes it was only a matter of time before some of these acquired-company CEOs left to do other things. In fact, he’s surprised it didn’t happen sooner.

“Given Salesforce’s growing stable of top notch CEOs accumulated by way of a slew of high profile acquisitions, you really can’t expect them all to stay forever, and given Adam Selipsky’s tenure at AWS before becoming Tableau’s CEO, this move makes a whole lot of sense. Amazon brings back one of their own, and he is also a wildly successful CEO in his own right,” Leary said.

While the consensus is that Selipsky is a good choice, he is going to have awfully big shoes to fill. The division continues to grow at a remarkable clip for its size and is currently on a run rate of over $50 billion. With a track record like that to follow, and Jassy still close at hand, Selipsky simply has to keep letting the unit do its thing while putting his own unique stamp on it.

Any kind of change is disconcerting, though, and it will be up to him to put customers and employees at ease and plow ahead into the future. Same mission. New boss.


By Ron Miller

Tableau CEO Adam Selipsky is returning to AWS to replace Andy Jassy as CEO

When Amazon announced last month that Jeff Bezos was moving into the executive chairman role, and AWS CEO Andy Jassy would be taking over the entire Amazon operation, speculation began about who would replace Jassy.

People considered a number of internal candidates, such as Peter DeSantis, vice president of global infrastructure at AWS, and Matt Garman, vice president of sales and marketing. Not many would have chosen Tableau CEO Adam Selipsky, but sure enough, he is returning home to run the division he left in 2016.

In an email to employees, Jassy wasted no time getting to the point that Selipsky was his choice, noting that the former employee helped launch the division when he was hired in 2005 and spent 11 years helping Jassy build the unit before taking the job at Tableau. Through that lens, the choice makes perfect sense.

“Adam brings strong judgment, customer obsession, team building, demand generation, and CEO experience to an already very strong AWS leadership team. And, having been in such a senior role at AWS for 11 years, he knows our culture and business well,” Jassy wrote in the email.

Jassy has run AWS since its earliest days, taking it from humble beginnings as a kind of internal experiment in running a storage web service to a mega division currently on a $51 billion run rate. It is that juggernaut that will be Selipsky’s to run, but he seems well suited for the job.

He is a seasoned executive, and while he was away from AWS during the years when it really began to grow, he still understands the culture well enough to step smoothly into the role. At the same time, he’s leaving Tableau, a company he helped transform from a desktop software company into one firmly in the cloud.

Salesforce bought Tableau in June 2019 for a cool $15.7 billion and Selipsky has remained at the helm since then, but perhaps the lure of running AWS was too great and he decided to take the leap to the new job.

When we wrote a story at the end of last year about Salesforce’s deep bench of executive talent, Selipsky was one of the CEOs we pointed to as a possible successor to Marc Benioff. But with president and COO Bret Taylor looking more like the heir apparent, perhaps Selipsky was ready for a new challenge.

Selipsky will make his return to AWS on May 17 and spend a few weeks in a transition period with Jassy before taking over the division on his own. As Jassy slides into the Amazon CEO role, it’s clear the two will continue to work closely together, just like they did all those years ago.


By Ron Miller

Amazon will expand its Amazon Care on-demand healthcare offering U.S.-wide this summer

Amazon is apparently pleased with how its Amazon Care pilot in Seattle has gone, since it announced this morning that it will expand the offering across the U.S. this summer and open it up to companies of all sizes, in addition to its own employees. The Amazon Care model combines on-demand and in-person care, and is meant as the e-commerce giant’s solution to the shortfalls in current employer-sponsored healthcare offerings.

In a blog post announcing the expansion, Amazon touted the speed of access to care made possible for its employees and their families via the remote, chat and video-based features of Amazon Care. These are facilitated via a dedicated Amazon Care app, which provides direct, live chats with a nurse or doctor. Issues that require in-person care are then handled via a house call, meaning a medical professional is sent to your home to do things like administer blood tests or perform a chest exam, and prescriptions are delivered to your door as well.

The expansion is being handled differently across the in-person and remote variants of care. Remote services will be available starting this summer, both to Amazon’s own employees and to other companies that sign on as customers. The in-person side will roll out more slowly, starting with availability in Washington, D.C., Baltimore and “other cities in the coming months,” according to the company.

As of today, Amazon Care is expanding in its home state of Washington to begin serving other companies. The idea is that others will sign on to make Amazon Care part of their overall benefits packages for employees. Amazon is touting the speed of its testing services, including results delivery, for things like COVID-19 as a major strength of the service.

The Amazon Care model has a surprisingly Amazon twist, too – when using the in-person care option, the app will provide an updating ETA for when to expect your physician or medical technician, which is eerily similar to how its primary app treats package delivery.

While the Amazon Care pilot in Washington only launched a year-and-a-half ago, the company has had its collective mind set on upending the corporate healthcare industry for some time now. It announced a partnership with Berkshire Hathaway and JPMorgan back at the very beginning of 2018 to form a joint venture specifically to address the gaps they saw in the private corporate healthcare provider market.

That deep-pocketed all-star team ended up officially disbanding at the outset of this year, after having done a whole lot of not very much in the three years in between. One of the stated reasons Amazon and its partners gave for unpartnering was that each had made a lot of progress on its own in addressing the problems it had faced anyway. While Berkshire Hathaway and JPMorgan’s work in that regard might be less obvious, Amazon was clearly referring to Amazon Care.

It’s not unusual for large tech companies with lots of cash on the balance sheet and a need to attract and retain top-flight talent to spin up their own healthcare benefits for their workforces. Apple and Google both have their own on-campus wellness centers staffed by medical professionals, for instance. But Amazon’s ambitions have clearly exceeded those of its peers, and it looks intent on making a business line out of the work it did to improve its own employee care services — a strategy that isn’t too dissimilar from what happened with AWS, by the way.


By Darrell Etherington

Microsoft Azure expands its NoSQL portfolio with Managed Instances for Apache Cassandra

At its Ignite conference today, Microsoft announced the launch of Azure Managed Instance for Apache Cassandra, its latest NoSQL database offering and a competitor to Cassandra-centric companies like DataStax. Microsoft describes the new service as a ‘semi-managed’ offering that will help companies bring more of their Cassandra-based workloads into its cloud.

“Customers can easily take on-prem Cassandra workloads and add limitless cloud scale while maintaining full compatibility with the latest version of Apache Cassandra,” Microsoft explains in its press materials. “Their deployments gain improved performance and availability, while benefiting from Azure’s security and compliance capabilities.”

Like its counterpart, Azure SQL Managed Instance, the idea here is to give users access to a scalable, cloud-based database service. To use Cassandra in Azure before, businesses had to either move to Cosmos DB, Microsoft’s highly scalable database service, which supports the Cassandra, MongoDB, SQL and Gremlin APIs, or manage their own fleet of virtual machines or on-premises infrastructure.
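Because the service runs actual Apache Cassandra rather than an API-compatible emulation, existing client code should connect unchanged. A minimal sketch with the open source Python driver; the contact point and credentials are placeholders, and a production setup would also configure TLS:

```python
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Hypothetical endpoint and credentials for a managed Cassandra instance;
# the point is that the standard open source driver is all you need.
auth = PlainTextAuthProvider(username="cassandra", password="...")
cluster = Cluster(["cassandra-mi.example.azure.com"], port=9042, auth_provider=auth)
session = cluster.connect()

session.execute(
    "CREATE KEYSPACE IF NOT EXISTS inventory "
    "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}"
)
print(session.execute("SELECT release_version FROM system.local").one())
```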

Cassandra was originally developed at Facebook and then open sourced in 2008. A year later, it joined the Apache Foundation, and today it’s used widely across the industry, with companies like Apple and Netflix betting on it for some of their core services. AWS launched a managed Cassandra-compatible service at its re:Invent conference in 2019 (it’s called Amazon Keyspaces today), while Microsoft only launched the Cassandra API for Cosmos DB last November. With today’s announcement, though, the company can now offer a full range of Cassandra-based services for enterprises that want to move these workloads to its cloud.


By Frederic Lardinois

TigerGraph raises $105M Series C for its enterprise graph database

TigerGraph, a well-funded enterprise startup that provides a graph database and analytics platform, today announced that it has raised a $105 million Series C funding round. The round was led by Tiger Global and brings the company’s total funding to over $170 million.

“TigerGraph is leading the paradigm shift in connecting and analyzing data via scalable and native graph technology with pre-connected entities versus the traditional way of joining large tables with rows and columns,” said TigerGraph founder and CEO Yu Xu. “This funding will allow us to expand our offering and bring it to many more markets, enabling more customers to realize the benefits of graph analytics and AI.”

Current TigerGraph customers include the likes of Amgen, Citrix, Intuit, Jaguar Land Rover and UnitedHealth Group. These customers use the company’s services to store graph data and query it quickly with GSQL, a SQL-like query language. At the core of its offerings is the TigerGraphDB database and analytics platform, but the company also offers a hosted service, TigerGraph Cloud, with pay-as-you-go pricing, hosted on either AWS or Azure. With GraphStudio, the company also offers a graphical UI for creating data models and visually analyzing them.
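For a rough sense of the developer experience, here is a sketch using pyTigerGraph, a community Python client; the host, graph name, credentials and the installed query are all hypothetical, and some deployments additionally require requesting an API token:

```python
import pyTigerGraph as tg

# All connection details here are hypothetical placeholders.
conn = tg.TigerGraphConnection(
    host="https://mycluster.i.tgcloud.io",
    graphname="FraudGraph",
    username="tigergraph",
    password="...",
)

# Run a GSQL query that was previously written and installed on the server,
# e.g. one that walks transfer edges to flag suspicious payment patterns.
results = conn.runInstalledQuery("suspicious_transfers", params={"minAmount": 10000})
print(results)
```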

The promise for the company’s database services is that they can scale to tens of terabytes of data with billions of edges. Its customers use the technology for a wide variety of use cases, including fraud detection, customer 360, IoT, AI, and machine learning.

Like so many other companies in this space, TigerGraph is enjoying a tailwind from the fact that many enterprises have accelerated their digital transformation projects during the pandemic.

“Over the last 12 months with the COVID-19 pandemic, companies have embraced digital transformation at a faster pace driving an urgent need to find new insights about their customers, products, services, and suppliers,” the company explains in today’s announcement. “Graph technology connects these domains from the relational databases, offering the opportunity to shrink development cycles for data preparation, improve data quality, identify new insights such as similarity patterns to deliver the next best action recommendation.”


By Frederic Lardinois

Is overseeing cloud operations the new career path to CEO?

When Amazon announced last week that founder and CEO Jeff Bezos planned to step back from overseeing operations and shift into an executive chairman role, it also revealed that AWS CEO Andy Jassy, head of the company’s profitable cloud division, would replace him.

As Bessemer partner Byron Deeter pointed out on Twitter, Jassy’s promotion was similar to Satya Nadella’s ascent at Microsoft: in 2014, he moved from executive VP in charge of Azure to the chief exec’s office. Similarly, Arvind Krishna, who was promoted to replace Ginni Rometty as IBM CEO last year, was also formerly head of the company’s cloud business.

Could Nadella’s successful rise serve as a blueprint for Amazon as it makes a similar transition? While there are major differences in the missions of these companies, it’s inevitable that we will compare these two executives based on their former jobs. It’s true that they have an awful lot in common, but there are some stark differences, too.

Replacing a legend

For starters, Jassy is taking over for someone who founded one of the world’s biggest corporations. Nadella replaced Steve Ballmer, who had taken over for the company’s face, Bill Gates. Holger Mueller, an analyst at Constellation Research, says this notable difference could have a huge impact on Jassy, with his founder boss still looking over his shoulder.

“There’s a lot of similarity in the two situations, but Satya was a little removed from the founder Gates. Bezos will always hover and be there, whereas Gates (and Ballmer) had retired for good. [ … ] It was clear [they] would not be coming back. [ … ] For Jassy, the owner could [conceivably] come back anytime,” Mueller said.

But Andrew Bartels, an analyst at Forrester Research, says it’s not a coincidence that both leaders were plucked from the cloud divisions of their respective companies, even if it was seven years apart.

“In both cases, these hyperscale business units of Microsoft and Amazon were the fastest-growing and best-performing units of the companies. [ … ] In both cases, cloud infrastructure was seen as a platform on top of which and around which other cloud offerings could be developed,” Bartels said. The companies both believe that the leaders of these two growth engines were best suited to lead the company into the future.


By Ron Miller

What Andy Jassy’s promotion to Amazon CEO could mean for AWS

Blockbuster news struck late this afternoon when Amazon announced that Jeff Bezos would be stepping back as CEO of Amazon, the company he built from a business in his garage into a worldwide behemoth. As he takes on the role of executive chairman, his replacement will be none other than AWS CEO Andy Jassy.

With Jassy moving into his new role at the company, the immediate question is who replaces him to run AWS. Let the games begin. Among the names being tossed about in the rumor mill are Peter DeSantis, vice president of global infrastructure at AWS, and Matt Garman, vice president of sales and marketing. Both are members of Bezos’ elite executive team known as the S-team, and either would make sense as Jassy’s successor. Nobody knows for sure, though, and it could be any number of people inside the organization, or even someone from outside. (We asked Amazon PR for clarity on the successor, but as of publication we had not heard back.)

Holger Mueller, a senior analyst at Constellation Research, says that Jassy is being rewarded for doing a stellar job raising AWS from a tiny side business to one on a $50 billion run rate. “On the finance side it makes sense to appoint an executive who intimately knows Amazon’s most profitable business, that operates in more competitive markets. [Appointing Jassy] ensures that the new Amazon CEO does not break the ‘golden goose’,” Mueller told me.

Alex Smith, VP of channels at analyst firm Canalys, where he covers the cloud infrastructure market, says the writing has been on the wall that a transition was in the works. “This move has been coming for some time. Jassy is the second most public-facing figure at Amazon and has led one of its most successful business units. Bezos can go out on a high and focus on his many other ventures,” Smith said.

Smith adds that this move should enhance AWS’s place in the organization. “I think this is more of an AWS gain, in terms of its increasing strategic importance to Amazon going forwards, rather than loss in terms of losing Andy as direct lead. I expect he’ll remain close to that organization.”

Ed Anderson, a Gartner analyst, also sees Jassy as the obvious choice to take over for Bezos. “Amazon is a company driven by technology innovation, something Andy has been doing at AWS for many years now. Also, it’s worth noting that Andy Jassy has an impressive track record of building and running a very large business. Under Andy’s leadership, AWS has grown to be one of the biggest technology companies in the world and one of the most impactful in defining what the future of computing will be,” Anderson said.

In the company earnings report released today, AWS came in at $12.74 billion for the quarter, up 28% YoY from $9.95 billion a year ago. That puts the division on an elite $50 billion run rate. No other cloud infrastructure vendor, even the mighty Microsoft, is close in this category. Microsoft stands at around 20% market share compared to AWS’s approximately 33%.

It’s unclear what impact the executive shuffle will have on the company at large or AWS in particular. In some ways it feels like when Larry Ellison stepped down as CEO of Oracle in 2014 to take on the exact same executive chairman role. While Safra Catz and Mark Hurd took over as co-CEOs in that situation, Ellison has remained intimately involved with the company he helped found. It’s reasonable to assume that Bezos will do the same.

With Jassy, the company is getting a man who has risen through the ranks since joining the company in 1997 after getting an undergraduate degree and an MBA from Harvard. In 2002 he became VP/technical assistant, working directly under Bezos. It was in this role that he began to see the need for a set of common web services for Amazon developers to use. This idea grew into AWS, and Jassy became a VP at the fledgling division, working his way up until he was appointed CEO in 2016.


By Ron Miller

Twitter taps AWS for its latest foray into the public cloud

Twitter has a lot going on, and it’s not always easy to manage that kind of scale on your own. Today, Amazon announced that Twitter has chosen AWS to run its real-time timelines. It’s a major win for Amazon’s cloud arm.

While the companies have worked together in some capacity for over a decade, this marks the first time that Twitter is tapping AWS to help run its core timelines.

“This expansion onto AWS marks the first time that Twitter is leveraging the public cloud to scale their real-time service. Twitter will rely on the breadth and depth of AWS, including capabilities in compute, containers, storage, and security, to reliably deliver the real-time service with the lowest latency, while continuing to develop and deploy new features to improve how people use Twitter,” the company explained in the announcement.

Parag Agrawal, chief technology officer at Twitter, sees this as a way to expand and improve the company’s real-time offerings by taking advantage of AWS’s network of data centers to deliver content closer to the user. “The collaboration with AWS will improve performance for people who use Twitter by enabling us to serve Tweets from data centers closer to our customers at the same time as we leverage the Arm-based architecture of AWS Graviton2 instances. In addition to helping us scale our infrastructure, this work with AWS enables us to ship features faster as we apply AWS’s diverse and growing portfolio of services,” Agrawal said in a statement.

It’s worth noting that Twitter also has a relationship with Google Cloud. In 2018, it announced it was moving its Hadoop clusters to GCP.

This announcement could be considered a case of the rich getting richer, as AWS leads the cloud infrastructure market by far with around 33% market share. Microsoft is in second with around 18% and Google is in third with 9%, according to Synergy Research. In its most recent earnings report, Amazon reported $11.6 billion in AWS revenue, putting it on a run rate of over $46 billion.


By Ron Miller

Amazon S3 Storage Lens gives IT visibility into complex S3 usage

As your S3 storage footprint grows, it gets harder to understand exactly what you have, and this is especially true when it crosses multiple regions. That has broad implications for administrators, who have been forced to build their own solutions to get that missing visibility. AWS changed that this week when it announced Amazon S3 Storage Lens, a new product for understanding highly complex S3 storage environments.

The tool provides analytics that help you understand what’s happening across your S3 object storage installations and take action when needed. As the company wrote in a blog post describing the new service: “This is the first cloud storage analytics solution to give you organization-wide visibility into object storage, with point-in-time metrics and trend lines as well as actionable recommendations.”

[Image: the Amazon S3 Storage Lens console. Image Credits: Amazon]

The idea is to present a set of 29 metrics in a dashboard that help you “discover anomalies, identify cost efficiencies and apply data protection best practices,” according to the company. IT administrators can get a view of their storage landscape and can drill down into specific instances when necessary, such as if there is a problem that requires attention. The product comes out of the box with a default dashboard, but admins can also create their own customized dashboards, and even export S3 Lens data to other Amazon tools.
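The dashboards are also scriptable. A small sketch using boto3’s s3control client to list the Storage Lens configurations in an account, including the default dashboard that ships out of the box; the account ID is a placeholder:

```python
import boto3

s3control = boto3.client("s3control")
account_id = "123456789012"  # placeholder AWS account ID

# Each entry describes one Storage Lens dashboard configuration,
# such as the out-of-the-box "default-account-dashboard".
response = s3control.list_storage_lens_configurations(AccountId=account_id)
for config in response.get("StorageLensConfigurationList", []):
    print(config["Id"], config["HomeRegion"], config["IsEnabled"])
```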

For companies with complex storage requirements, as in thousands or even tens of thousands of S3 buckets, that have had to kludge together ways to understand what’s happening across those systems, this provides a single view across it all.

S3 Storage Lens is now available in all AWS regions, according to the company.


By Ron Miller

Amazon inks cloud deal with Airtel in India

Amazon has found a new partner to expand the reach of its cloud services business AWS in India, the world’s second-largest internet market.

On Wednesday, the e-commerce giant announced it has partnered with Bharti Airtel, the third-largest telecom operator in India with more than 300 million subscribers, to sell a wide range of AWS offerings under the Airtel Cloud brand to small, medium and large businesses in the country.

The deal could help AWS, which leads the cloud market in India, further expand its dominance in the country. The move follows a similar deal Reliance Jio, India’s largest telecom operator, struck with Microsoft last year to sell cloud services to small businesses. The two announced a 10-year partnership to “serve millions of customers.”

Airtel, which serves over 2,500 large enterprises and more than a million emerging businesses, itself signed a similar cloud deal with Google in January this year. That partnership is still in place.

“AWS brings over 175 services. We pretty much support any workload on the cloud. We have the largest and the most vibrant community of customers,” Puneet Chandok, president of AWS in India and South Asia, said on a call with reporters.

The two companies, which had a similar partnership in 2015, will also collaborate on building new services and help existing customers migrate to Airtel Cloud, they said.

Today’s deal illustrates Airtel’s push to build businesses beyond its telecom venture, Harmeen Mehta, global CIO and head of cloud and security business at Airtel, said on the call. Last month, Airtel partnered with Verizon — which is TechCrunch’s parent company — to sell the BlueJeans video conferencing service to business customers in India.

Deals with carriers were very common in India a decade ago, as tech giants looked to acquire new users in the country. That a similar strategy is being replicated now illustrates the current phase of cloud adoption in the nation.

Nearly half a billion people in India came online in the last decade. And slowly, small businesses and merchants are also beginning to use digital tools and storage services and to accept online payments.

India has emerged as one of the leading growth markets for cloud services. The country’s public cloud services market is estimated to reach $7.1 billion by 2024, according to research firm IDC.


By Manish Singh

Even as cloud infrastructure growth slows, revenue rises over $30B for quarter

The cloud market is coming into its own during the pandemic, as the novel coronavirus has forced many companies to accelerate plans to move to the cloud, even as the market was beginning to mature on its own.

This week, the big three cloud infrastructure vendors — Amazon, Microsoft and Google — all reported their earnings, and while the numbers showed that growth was beginning to slow down, revenue continued to increase at an impressive rate, surpassing $30 billion for a quarter for the first time, according to Synergy Research Group numbers.


By Ron Miller