AWS wants to rule the world

AWS, once a nice little side hustle for Amazon’s eCommerce business, has grown over the years into a behemoth that’s on a $27 billion run rate, one that’s still growing at around 45 percent a year. That’s a highly successful business by any measure, but as I listened to AWS executives last week at their AWS re:Invent conference in Las Vegas, I didn’t hear a group that was content to sit still and let the growth speak for itself. Instead, I heard one that wants to dominate every area of enterprise computing.

Whether it was hardware like the new Inferentia chip and Outposts (the new on-prem servers), or blockchain and a base station service for satellites, if AWS saw an opportunity, they were not ceding an inch to anyone.

Last year, AWS announced an astonishing 1,400 new features, and word was that they are on pace to exceed that this year. They get a lot of credit for not resting on their laurels and continuing to innovate like a much smaller company, even as they own gobs of market share.

The feature inflation probably can’t go on forever, but for now at least they show no signs of slowing down, as the announcements came at a furious pace once again. While they will tell you that every decision they make is about meeting customer needs, it’s clear that some of these announcements were also about answering competitive pressure.

Going after competitors harder

In the past, AWS kept criticism of competitors to a minimum, maybe giving a little jab to Oracle, but this year they seemed to ratchet it up. In their keynotes, AWS CEO Andy Jassy and Amazon CTO Werner Vogels continually flogged Oracle, a competitor in the database market, but hardly a major threat as a cloud company right now.

They went right for Oracle’s market, though, with a new on-prem system called Outposts, which allows AWS customers to operate on-prem and in the cloud using a single AWS control panel, or one from VMware if customers prefer. That is the kind of cloud vision that Larry Ellison might have put forth, but Jassy didn’t necessarily see it as going after Oracle or anyone else. “I don’t see Outposts as a shot across the bow of anyone. If you look at what we are doing, it’s very much informed by customers,” he told reporters at a press conference last week.

AWS CEO Andy Jassy at a press conference at AWS re:Invent last week.

Yet AWS didn’t reserve its criticism just for Oracle. It also took aim at Microsoft, jabbing at Microsoft SQL Server and announcing Amazon FSx for Windows File Server, a tool specifically designed to move Microsoft files to the AWS cloud.

Google wasn’t spared either: the launches of Inferentia and Elastic Inference put Google on notice that AWS wasn’t going to yield the AI market to Google’s TPU infrastructure. All of these tools and many more were about more than answering customer demand; they were about putting the competition on notice in every aspect of enterprise computing.

Upward growth trajectory

The cloud market is continuing to grow at a dramatic pace, and as market leader, AWS has been able to take advantage of its market dominance to this point. Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy says that AWS has been using its market position to keep expanding into different areas.

“AWS has the scale right now to do many things others cannot, particularly lesser players like Google Cloud Platform and Oracle Cloud. They are trying to make a point with the thousands of new products and features they bring out. This serves as a disincentive longer-term for other players, and I believe will result in a shakeout,” he told TechCrunch.

As for the frenetic pace of innovation, Moorhead believes it can’t go on forever. “To me, the question is, when do we reach a point where 95% of the needs are met, and the innovation rate isn’t required. Every market, literally every market, reaches a point where this happens, so it’s not a matter of if but when,” he said.

Certainly announcements like AWS Ground Station showed that AWS was willing to expand beyond the conventional confines of enterprise computing and into outer space to help companies process satellite data. This ability to think beyond traditional uses of cloud computing resources shows a level of creativity that suggests there could be other untapped markets for AWS that we haven’t yet imagined.

As AWS moves into more areas of the enterprise computing stack, whether on premises or in the cloud, they are showing their desire to dominate every aspect of the enterprise computing world, and last week they demonstrated that there is no area that they are willing to surrender to anyone.

more AWS re:Invent 2018 coverage


By Ron Miller

New AWS tool helps customers understand best cloud practices

Since 2015, AWS has had a team of solution architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced a new Well-Architected Tool that helps customers do this themselves in an automated way, without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that they even brought on a network of certified partners to help, but it still has not been enough to meet demand.

In typical AWS fashion, they decided to create a service to help customers measure how well they are doing in terms of operations, security, reliability, cost optimization and performance efficiency. Customers can run this tool against the AWS services they are using and get a full report of how they measure up against these five factors.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human to analyze your systems, you answer a series of questions and generate a report based on those answers. When the process is complete, you get a PDF report with all of the recommendations for your particular situation.
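The shape of such a review is easy to picture. Here is a toy sketch of a questionnaire-style check across the five pillars the article names; the questions, scoring scheme and function names are invented for illustration and are not the real tool's.

```python
# The five review areas named in the article.
PILLARS = ["operations", "security", "reliability",
           "cost optimization", "performance efficiency"]

def review(answers):
    """answers maps pillar -> list of booleans (True = best practice met).
    Returns pillar -> percentage of checks passed (0 if unanswered)."""
    report = {}
    for pillar in PILLARS:
        checks = answers.get(pillar, [])
        report[pillar] = round(100 * sum(checks) / len(checks)) if checks else 0
    return report

report = review({"security": [True, True, False],
                 "cost optimization": [True, False]})
print(report["security"], report["cost optimization"])  # 67 50
```

A real report would attach recommendations to each failed check; the point here is only that a fixed question set makes the review automatable.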

Image: AWS

While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it is a starting point to at least get you on the road to thinking about such things, and as a free service, you have little to lose by at least trying the tool and seeing what it tells you.



By Ron Miller

AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code (triggered by events) and AWS deals with whatever compute, memory and storage you need to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make Lambda more developer-friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving whatever resources you need to run your event and no more. It means you no longer have to worry about coding for all your infrastructure and you only pay for the computing you need at any given moment to make the application work.
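As a concrete illustration of that model, here is a minimal Lambda-style handler in Python: the function body is all you write, while the platform supplies the event and provisions compute around it. The event shape is invented for the example.

```python
import json

def handler(event, context=None):
    # event carries the trigger payload; context (unused here) would carry
    # runtime metadata supplied by the platform.
    name = event.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"})}

# Invoked directly for illustration; on AWS, an event source would call it.
print(handler({"name": "re:Invent"}))
```

You pay only while such a function is actually executing, which is the billing model the paragraph above describes.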

The way AWS works is that it tends to release a base service, then build more functionality on top of it as customer requirements grow. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate about tools, and everyone has their own idea of which tools to bring to the task every day.

For starters, they decided to please the language folks by introducing support for new languages. Developers who use Ruby can now use Ruby support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced C++ Lambda Runtime. If neither of those match your programming language tastes, AWS opened it up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”
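Under the hood, a custom runtime built on the Runtime API is just a loop: fetch the next invocation over HTTP, run the handler, post the result back. The sketch below abstracts the two HTTP endpoints as callables so the loop itself is visible; a real runtime would make HTTP calls against the local endpoint named by the `AWS_LAMBDA_RUNTIME_API` environment variable, and the callable signatures here are assumptions for illustration.

```python
def run_once(get_next, post_response, handler):
    """One iteration of a custom-runtime loop.

    get_next() -> (request_id, event); post_response(request_id, result).
    In a real runtime these would be HTTP calls to the Runtime API."""
    request_id, event = get_next()
    result = handler(event)
    post_response(request_id, result)

# Exercised with stub callables standing in for the HTTP endpoints.
sent = {}
run_once(
    lambda: ("req-1", {"n": 2}),                     # next pending invocation
    lambda rid, result: sent.update({rid: result}),  # report result back
    lambda event: event["n"] * 2,                    # the user's handler
)
print(sent)  # {'req-1': 4}
```

Because the loop is language-agnostic, any language that can speak HTTP can host Lambda functions, which is the point of the new API.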

AWS didn’t want to stop with languages though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean that all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.

As Lambda matures, developer requirements grow and these announcements and others are part of trying to meet those needs.



By Ron Miller

AWS launches a managed Kafka service

Kafka is an open source tool for handling incoming streams of data. Like virtually all powerful tools, it’s somewhat hard to set up and manage. Today, Amazon’s AWS is making this all a bit easier for its users with the launch of Amazon Managed Streaming for Kafka. That’s a mouthful, but it’s essentially Kafka as a fully managed, highly available service, now available on AWS as a public preview.

As AWS CTO Werner Vogels noted in his keynote, Kafka users traditionally had to do a lot of heavy lifting to set up a cluster on AWS and to ensure that it could scale and handle failures. “It’s a nightmare having to restart all the cluster and the main nodes,” he said. “This is what I would call the traditional heavy lifting that AWS is really good at solving for you.”

It’s interesting to see AWS launch this service, given that it already offers a very similar tool in Kinesis, a tool that also focuses on ingesting streaming data. There are plenty of applications on the market today that already use Kafka and AWS is clearly interested in giving those users a pathway to either move to a managed Kafka service or to AWS in general.

As with all things AWS, the pricing is a bit complicated, but a basic Kafka instance will start at $0.21 per hour. You’re not likely to just use one instance, so for a somewhat useful setup with three brokers and a good amount of storage and some other fees, you’ll quickly pay well over $500 per month.
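The math behind that estimate, using only the figures quoted above and an average month of roughly 730 hours:

```python
HOURS_PER_MONTH = 730   # average hours in a month
BROKER_RATE = 0.21      # $/broker-hour, the quoted base price
brokers = 3             # a minimal useful cluster

compute_cost = brokers * BROKER_RATE * HOURS_PER_MONTH
print(f"${compute_cost:,.0f}/month for brokers alone")
```

That comes to roughly $460 per month before storage and other fees, which is how the total quickly passes $500.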



By Frederic Lardinois

AWS is bringing the cloud on prem with Outposts

AWS has always been the pure cloud vendor, and while it had previously given hybrid only a nod, it is now fully embracing it. Today, in conjunction with VMware, it announced a pair of options to bring AWS into the data center.

Yes, you read that correctly. You can now put AWS into your data center with AWS hardware, the same design the company uses in its own data centers. The two new products are part of AWS Outposts.

There are two Outposts variations: VMware Cloud on AWS Outposts and AWS Outposts. The first uses the VMware control panel; the second allows customers to run compute and storage on-premises using the same AWS APIs that are used in the AWS cloud.

In fact, VMware CEO Pat Gelsinger joined AWS CEO Andy Jassy on stage for a joint announcement. The two companies have been working together for some time to bring VMware to the AWS cloud. Part of this announcement flips that on its head, bringing the AWS cloud on prem to work with VMware. In both cases, AWS sells you the hardware, installs it if you wish, and will even maintain it for you.

This is an area where AWS has lagged, preferring the vision of a pure cloud to a move back into the data center, but it’s a tacit acknowledgment that customers want to operate in both places for the foreseeable future.

The announcement also extends the company’s cloud-native vision. On Monday, the company announced Transit Gateway, which is designed to provide a single way to manage network resources, whether they live in the cloud or on-prem.

Now AWS is bringing its cloud on prem, something that Microsoft, Canonical, Oracle and others have had for some time. It’s worth noting that today’s announcement is a public preview. The actual release is expected in the second half of next year.



By Ron Miller

AWS Textract brings intelligence to OCR

One of the challenges just about every business faces is converting forms into a useful digital format, a job that has typically fallen to human data entry clerks. The state of the art involved using OCR to read forms automatically, but as AWS CEO Andy Jassy explained, OCR is basically just a dumb text reader; it doesn’t recognize text types. Amazon wanted to change that, and today it announced Textract, an intelligent OCR tool that moves data from forms into a more usable digital format.

In an example, he showed a form with tables. Regular OCR didn’t recognize the table and interpreted it as a string of text. Textract is designed to recognize common page elements like a table and pull the data in a sensible way.

Jassy said that forms also change often, and if you are using a template as a workaround for OCR’s lack of intelligence, the template breaks if you move anything. To fix that, Textract is smart enough to understand common data types like social security numbers, dates of birth and addresses, and it interprets them correctly no matter where they fall on the page.

“We have taught Textract to recognize this set of characters is a date of birth and this is a social security number. If forms change, Textract won’t miss it,” Jassy explained.
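The position-independent recognition Jassy describes can be sketched as shape-based classification: label a value by its pattern, not by where it sits on the page. The patterns below are a simplified stand-in invented for illustration, not Textract's actual logic or API.

```python
import re

# Field shapes to recognize, wherever they appear on the page.
PATTERNS = {
    "social security number": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "date": re.compile(r"^\d{2}/\d{2}/\d{4}$"),
}

def classify(value):
    # Match the value's shape against the known field patterns.
    for label, pattern in PATTERNS.items():
        if pattern.match(value):
            return label
    return "plain text"

# Moving a field on the form doesn't change how it classifies.
print([classify(v) for v in ["123-45-6789", "01/31/1970", "Jane Doe"]])
```

Because classification depends only on the value itself, rearranging the form leaves the results unchanged, which is what breaks under a template-based approach.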



By Ron Miller

AWS launches new time series database

AWS announced a new time series database today at AWS re:Invent in Las Vegas. The new product, called Amazon Timestream, is a fully managed database designed to track items over time, which can be particularly useful for Internet of Things scenarios.

“With time series data each data point consists of a timestamp and one or more attributes and it really measures how things change over time and helps drive real time decisions,” AWS CEO Andy Jassy explained.

He sees a problem, though, with existing open-source and commercial solutions, which he says don’t scale well and are hard to manage. This is, of course, a problem that a cloud service like AWS often helps solve.

Not surprisingly, as customers were looking for a good time series database solution, AWS decided to create one themselves.

Jassy said that they built Timestream from the ground up, with an architecture that organizes data by time intervals and enables time-series-specific data compression, which leads to less scanning and faster performance.
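A toy version of that storage idea, assumed here purely for illustration: bucket points by time interval so a query scans only the buckets it needs rather than the whole series.

```python
from collections import defaultdict

def bucket_by_hour(points):
    """points: iterable of (epoch_seconds, value) -> {hour_index: [values]}"""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts // 3600].append(value)  # group into hour-sized intervals
    return buckets

buckets = bucket_by_hour([(10, 1.0), (20, 2.0), (7200, 3.0)])
# A query over the first hour touches one bucket, not the whole series.
print(buckets[0])  # [1.0, 2.0]
```

Interval-local storage also helps compression, since neighboring points in a series tend to be similar.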

He claims it will be a thousand times faster at a tenth of the cost, and of course it scales up and down as required and includes all of the analytics capabilities you need to understand the data you are tracking.

This new service is available across the world starting today.



By Ron Miller

AWS announces new Inferentia machine learning chip

AWS is not content to cede any part of any market to any company. When it comes to machine learning chips, names like Nvidia or Google come to mind, but today at AWS re:Invent in Las Vegas, the company announced a new dedicated machine learning chip of its own called Inferentia.

“Inferentia will be a very high throughput low-latency, sustained performance very cost-effective processor,” AWS CEO Andy Jassy explained during the announcement.

Holger Mueller, an analyst with Constellation Research, says that while Amazon is far behind, this is a good step as companies try to differentiate their machine learning approaches in the future.

“The speed and cost of running machine learning operations — ideally in deep learning — are a competitive differentiator for enterprises. Speed advantages will make or break success of enterprises (and nations when you think of warfare). That speed can only be achieved with custom hardware, and Inferentia is AWS’s first step to get in to this game,” Mueller told TechCrunch. As he pointed out, Google has a 2-3 year head start with its TPU infrastructure.

Inferentia supports popular data types like INT8 and FP16, as well as mixed precision. What’s more, it supports multiple machine learning frameworks, including TensorFlow, Caffe2 and ONNX.

Of course, being an Amazon product, it also supports data from popular AWS products such as EC2, SageMaker and the new Elastic Inference Engine announced today.

While the chip was announced today, AWS CEO Andy Jassy indicated it won’t actually be available until next year.



By Ron Miller

AWS Lake Formation makes setting up data lakes easier

The concept of data lakes has been around for a long time, but setting up one of these systems, which store vast amounts of raw data in their native formats, was never easy. AWS wants to change this with the launch of AWS Lake Formation. At its core, the new service, which is available today, allows developers to create a secure data lake within a few days.

While “a few days” may still sound like a long time in this age of instant gratification, it’s nothing in the world of enterprise software.

“Everybody is excited about data lakes,” said AWS CEO Andy Jassy in today’s AWS re:Invent keynote. “People realize that there is significant value in moving all that disparate data that lives in your company in different silos and make it much easier by consolidating it in a data lake.”

Setting up a data lake today means you have to, among other things, configure your storage and (on AWS) S3 buckets, move your data, add metadata and add that to a catalog. And then you have to clean up that data and set up the right security policies for the data lake. “This is a lot of work and for most companies, it takes them several months to set up a data lake. It’s frustrating,” said Jassy.

Lake Formation is meant to handle all of these complications with just a few clicks. It sets up the right tags and cleans up and dedupes the data automatically. And it provides admins with a list of security policies to help secure that data.

“This is a step-level change for how easy it is to set up data lakes,” said Jassy.



By Frederic Lardinois

AWS launches a managed blockchain service

It was only a year ago that AWS CEO Andy Jassy said that he wasn’t all that interested in blockchain services. Clearly something has changed over the course of the last year because today, the company is launching two new blockchain services: Quantum Ledger Database and Amazon Managed Blockchain.

As the name implies, Amazon Managed Blockchain is a managed blockchain service. It supports Ethereum and Hyperledger Fabric.

“This service is going to make it much easier for you to use the two most popular blockchain frameworks,” said AWS CEO Andy Jassy. He noted that companies tend to use Hyperledger Fabric when they know the number of members in their blockchain network and want robust private operations and capabilities. AWS promises that the service will scale to thousands of applications and will allow users to run millions of transactions (though the company didn’t say with what kind of latency).

Support for Hyperledger Fabric is available today. Ethereum support is launching a few months from now.

Getting started with Managed Blockchain is a matter of using the AWS Console and configuring nodes, adding members and deploying applications.

“When we heard people saying ‘blockchain,’ we felt like they were weirdly convoluting and conflating what they really wanted,” said Jassy. “And as we spent time working with customers and figuring out the jobs they were really trying to solve, this is what we think people are trying to do with blockchain.”



By Frederic Lardinois

AWS tries to lure Windows users with Amazon FSx for Windows File Server

Amazon has had storage options for Linux file servers for some time, but it recognizes that a number of companies still use Windows file servers, and it is not content to cede that market to Microsoft. Today the company announced Amazon FSx for Windows File Server, a fully compatible Windows option.

“You get a native Windows file system backed by fully-managed Windows file servers, accessible via the widely adopted SMB (Server Message Block) protocol. Built on SSD storage, Amazon FSx for Windows File Server delivers the throughput, IOPS, and consistent sub-millisecond performance that you (and your Windows applications) expect,” AWS’s Jeff Barr wrote in a blog post introducing the new feature.

That means if you use this service, you have a first-class Windows system with all of the compatibility with Windows services that you would expect such as Active Directory and Windows Explorer.

AWS CEO Andy Jassy introduced the new feature today at AWS re:Invent, the company’s customer conference going on in Las Vegas this week. He said that even though Windows File Server usage is diminishing as more IT pros turn to Linux, there are still a fair number of customers who want a Windows-compatible system, and AWS wanted to provide a service for them to move their Windows files to the cloud.

Of course, it doesn’t hurt that it provides a path for Microsoft customers to use AWS instead of turning to Azure for these workloads. Companies undertaking a multi-cloud strategy should like having a fully compatible option.



By Ron Miller

AWS launches a base station for satellites as a service

Today at AWS re:Invent in Las Vegas, AWS announced a new service for satellite providers with the launch of AWS Ground Station, the first fully managed ground station as a service.

With this new service, AWS will provide ground antennas through its existing network of worldwide availability zones, along with data processing services, to simplify the entire data retrieval and processing workflow for satellite companies.

AWS CEO Andy Jassy, introducing the new service, explained that in the end this is a big data processing problem. Satellite operators need to get data down from the satellite, process it and then make it available for developers to use in applications. In that regard, it’s not that much different from any IoT device. It just so happens that these are flying around in space.

Jassy pointed out that they hadn’t really considered a service like this until customers asked for it. “Customers said that we have so much data in space with so many applications that want to use that data. Why don’t you make it easier,” Jassy said. He said they thought about that and figured they could bring their vast worldwide network to bear on the problem.

Prior to this service, companies had to build their own base stations to get data down from satellites as they passed overhead, wherever those base stations happened to be. That meant providers had to buy land, build the hardware and then deal with the data themselves. Offering this as a managed service greatly simplifies every aspect of the workflow.

The value proposition of any cloud service has always been about reducing the resource allocation required by a company to achieve a goal. With AWS Ground Station, AWS handles every aspect of the satellite data retrieval and processing problem for the company, greatly reducing the cost and complexity associated with it.

AWS claims customers can save up to 80 percent by using the on-demand model over building and owning their own ground stations. The company is starting with two ground stations as it launches the service today, but plans to expand to 12 by the middle of next year.



By Ron Miller

AWS Transit Gateway helps customers understand their entire network

Tonight at AWS re:Invent, the company announced a new tool called AWS Transit Gateway, designed to help customers build a network topology inside of AWS that lets them share resources across accounts and bring together on-premises and cloud resources in a single network.

Amazon already has a popular product called Amazon Virtual Private Cloud (VPC), which lets customers run their applications in isolated private networks. Transit Gateway is designed to help build connections between VPCs, which until now has been tricky to do.

As Peter DeSantis, VP of global infrastructure and customer support at AWS, explained at an event Monday night at re:Invent, AWS Transit Gateway gives you a single set of controls that lets you connect to a centrally managed gateway to grow your network easily and quickly.

Diagram: AWS

DeSantis said that this tool also gives you the ability to traverse your AWS and on-premises networks. “A gateway is another way that we’re innovating to enable customers to have secure, easy-to-manage networking across both on premise and their AWS cloud environment,” he explained.

AWS Transit Gateway lets you build connections across a network wherever the resources live, in a standard kind of network topology. “Today we are giving you the ability to use the new AWS Transit Gateway to build a hub-and-spoke network topology. You can connect your existing VPCs, data centers, remote offices, and remote gateways to a managed Transit Gateway, with full control over network routing and security, even if your VPCs, Active Directories, shared services, and other resources span multiple AWS accounts,” Amazon’s Jeff Barr wrote in a blog post announcing the new feature.
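The payoff of the hub-and-spoke shape Barr describes is arithmetic: n spokes need only n attachments to the hub, while pairwise peering needs n(n-1)/2 links. A small sketch, with spoke names invented for illustration:

```python
def attachments_needed(spokes, topology):
    """Count the links required so every spoke can reach every other spoke."""
    n = len(spokes)
    if topology == "hub-and-spoke":
        return n                 # one attachment per spoke to the gateway
    if topology == "full-mesh":
        return n * (n - 1) // 2  # pairwise peering connections
    raise ValueError(f"unknown topology: {topology}")

spokes = ["vpc-a", "vpc-b", "vpc-c", "datacenter", "remote-office"]
print(attachments_needed(spokes, "hub-and-spoke"))  # 5
print(attachments_needed(spokes, "full-mesh"))      # 10
```

The gap widens quickly: at 20 spokes it is 20 attachments versus 190 peering links, which is why a central gateway simplifies management.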

For much of its existence, AWS was about getting you to the cloud and managing your cloud resources. This makes sense for a pure cloud company like AWS, but customers tend to have complex configurations with some infrastructure and software still living on premises and some in the cloud. This could help bridge the two worlds.


By Ron Miller

AWS Global Accelerator helps customers manage traffic across regions

Many AWS customers have to run in multiple regions for many reasons, including performance requirements, regulatory issues or failover management. Whatever the reason, AWS tonight announced a new tool called AWS Global Accelerator, designed to help customers route traffic more easily across multiple regions.

Peter DeSantis, VP of global infrastructure and customer support at AWS, explained at an event Monday night at re:Invent that much of AWS customer traffic already flows over the company’s massive network, and that customers use AWS Direct Connect to give applications consistent performance and low network variability as traffic moves between AWS regions. What has been missing, he said, is a way to use the AWS global network to optimize their applications.

“Tonight I’m excited to announce AWS Global Accelerator. AWS Global Accelerator makes it easy for you to improve the performance and availability of your applications by taking advantage of the AWS global network,” he told the AWS re:Invent audience.

Graphic: AWS

“Your customer traffic is routed from your end users to the closest AWS edge location and from there traverses the congestion-free, redundant, highly available AWS global network. In addition to improving performance, AWS Global Accelerator has built-in fault isolation, which instantly reacts to changes in network health or your application’s configuration,” DeSantis explained.

In fact, network administrators can route traffic based on defined policies, such as health or geographic requirements, and traffic will move to the designated endpoints automatically based on those policies.
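That policy-driven failover can be sketched as follows; the endpoint records and the closest-healthy policy are invented for illustration, not the service's actual API.

```python
def pick_endpoint(endpoints):
    """endpoints: list of {'region', 'healthy', 'distance'} dicts.
    Return the closest healthy region, so traffic automatically
    fails over when health changes."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return min(healthy, key=lambda e: e["distance"])["region"]

endpoints = [
    {"region": "us-east-1", "healthy": False, "distance": 10},  # closest, but down
    {"region": "us-west-2", "healthy": True,  "distance": 40},
]
print(pick_endpoint(endpoints))  # us-west-2
```

Flip the first endpoint back to healthy and the same policy routes traffic to it again, with no change to the application.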

AWS plans to charge customers based on the number of accelerators they create. “An accelerator is the resource you create to direct traffic to optimal endpoints over the AWS global network. Customers will typically set up one accelerator for each application, but more complex applications may require more than one accelerator,” AWS’s Shaun Ray wrote in a blog post announcing the new feature.

AWS Global Accelerator is available today in several regions in the US, Europe and Asia.


By Ron Miller