Google’s Anthos multi-cloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multi-cloud platform, which lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and, in preview, Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at the company’s Cloud Next event in 2019 (before that, Google called this project the ‘Google Cloud Services Platform,’ which launched three years ago). Hybrid and multi-cloud, it’s fair to say, play a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. And recently, Google brought on industry veteran Jeff Reed as VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.



By Frederic Lardinois

Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the ‘cloud financial management’ space to establish best practices and standards. As the term implies, ‘cloud financial management’ is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that a number of successful startups do nothing but help businesses optimize their cloud spend (and ideally lower it).

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze, and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, Vice President of Engineering and Product at Google Cloud. “More visibility, efficiency, and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, Executive Director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the 2nd of 3 dedicated Premier Member Technical Advisory Council seats.”


By Frederic Lardinois

Google Cloud launches its Business Application Platform based on Apigee and AppSheet

Unlike some of its competitors, Google Cloud has recently started emphasizing how its large lineup of different services can be combined to solve common business problems. Instead of trying to sell individual services, Google is focusing on solutions and the latest effort here is what it calls its Business Application Platform, which combines the API management capabilities of Apigee with the no-code application development platform of AppSheet, which Google acquired earlier this year.

As part of this process, Google is also launching a number of new features for both services today. The company is launching the beta of a new API Gateway, built on top of the open-source Envoy project, for example. This is a fully managed service that is meant to make it easier for developers to secure and manage their APIs across Google’s cloud computing services and serverless offerings like Cloud Functions and Cloud Run. The new gateway, which has been in alpha for a while now, offers all the standard features you’d expect, including authentication, key validation and rate limiting.
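Google doesn’t detail how the gateway implements these features, but rate limiting in general is usually some variant of the token-bucket algorithm: each caller earns tokens at a fixed rate up to a burst cap, and a request is rejected when no token is available. A minimal Python sketch of the idea (an illustration of the general technique, not Google’s implementation):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)  # 5 requests/sec, burst of 2
results = [bucket.allow() for _ in range(3)]  # third back-to-back call exceeds the burst
```

In a managed gateway the same bookkeeping happens per API key, which is why key validation and rate limiting tend to ship together.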

As for its low-code service AppSheet, the Google Cloud team is now making it easier to bring in data from third-party applications thanks to the general availability of Apigee as a data source for the service. AppSheet already supported standard sources like MySQL, Salesforce and G Suite, but this new feature adds a lot of flexibility to the service.

With more data comes more complexity, so AppSheet is also launching new tools for automating processes inside the service today, thanks to the early access launch of AppSheet Automation. Like the rest of AppSheet, the promise here is that developers won’t have to write any code. Instead, AppSheet Automation provides a visual interface that, according to Google, “provides contextual suggestions based on natural language inputs.”

“We are confident the new category of business application platforms will help empower both technical and line of business developers with the core ability to create and extend applications, build and automate workflows, and connect and modernize applications,” Google notes in today’s announcement. And indeed, this looks like a smart way to combine the no-code environment of AppSheet with the power of Apigee.


By Frederic Lardinois

Google Cloud lets businesses create their own text-to-speech voices

Google launched a few updates to its Contact Center AI product today, but the most interesting one is probably the beta of its new Custom Voice service, which will let companies create their own text-to-speech voices to best represent their brands.

Maybe your company has a well-known spokesperson, for example, but it would be pretty arduous to have them record every sentence in an automated response system, or to bring them back to the studio whenever you launch a new product or procedure. With Custom Voice, businesses can bring their voice talent into the studio and have them record a script provided by Google. The company will then take those recordings and train its speech models on them.

As of now, this seems to be a somewhat manual task on Google’s side. Training and evaluating the model will take “several weeks,” the company says, and Google itself will conduct its own tests of the trained model before sending it back to the business that commissioned it. After that, the business must follow Google’s own testing process to evaluate the results and sign off on them.

For now, these custom voices are still in beta and only American English is supported so far.

It’s also worth noting that Google’s review process is meant to ensure that the result is aligned with its internal AI Principles, which it released back in 2018.

As with similar projects, I would expect custom voices for these contact center solutions to become mainstream quickly, despite the lengthy creation process. While it will just be a gimmick for some brands (remember those custom voices for stand-alone GPS systems back in the day?), it will allow the more forward-thinking brands to distinguish their contact center experiences from those of the competition. Nobody likes calling customer support, but a more thoughtful experience that doesn’t make you feel like you’re talking to a random phone tree may just help alleviate some of the stress, at least.


By Frederic Lardinois

Google Cloud Anthos update brings support for on-prem, bare metal

When Google announced Anthos last year at Google Cloud Next, it was a pretty big deal. Here was a cloud company releasing a product that purported to help you move your applications between cloud companies like AWS and Azure — that would be GCP’s competitors — because it’s what customers demanded.

Google tapped into genuine anxiety that tech leaders at customer companies have over vendor lock-in in the cloud. Back in the client-server days, most of these folks got locked into a tech stack where they were at the mercy of the vendor. It’s something companies desperately want to avoid this go-round.

With Anthos, Google claimed you could take an application, package it in a container, and then move it freely between clouds without having to rewrite it for the underlying infrastructure. It was and remains a compelling idea.

This year, the company is updating the product to include a couple of speciality workloads that didn’t get into version 1.0 last year. For starters, many customers aren’t just multi-cloud, meaning they have workloads on various infrastructure cloud vendors, they are also hybrid. That means they still have workloads on-prem in their own data centers, as well as in the cloud, and Google wanted to provide a way to include these workloads in Anthos.

Pali Bhat, VP of product and design at Google Cloud, says they have heard that customers still have plenty of applications on premises and want a way to package them as containerized, cloud-native workloads.

“They do want to be able to bring all of the benefits of cloud to both their own data centers, but also to any cloud they choose to use. And what Anthos enables them to do is go on this journey of modernization and digital transformation and be able to take advantage of it by writing once and running it anywhere, and that’s a really cool vision,” Bhat said.

And while some companies have made the move from on-prem to the cloud, they still want the comfort of working on bare metal, where they are the only tenant. The cloud typically offers a multi-tenant environment where users share space on servers, but bare metal gives customers the benefits of being in the cloud with the ability to control their own destiny, as they do on-prem.

Customers were asking for Anthos to support bare metal, so Google gave the people what they wanted and is releasing a beta of Anthos for bare metal this week, which Bhat says provides an answer for companies looking to get the benefits of Anthos at the edge.

“[The bare metal support] lets customers run Anthos […] at edge locations without using any hypervisor. So this is a huge benefit for customers who are looking to minimize unnecessary overhead and unlock new use cases, especially both in the cloud and on the edge,” Bhat said.

Anthos is part of a broader cloud modernization platform that Google Cloud is offering customers that includes GKE (the Kubernetes engine), Cloud Functions (the serverless offering) and Cloud Run (the container runtime platform). Bhat says this set of products taps into a couple of trends Google is seeing with customers. First of all, as we move deeper into the pandemic, companies are looking for ways to cut costs while making a faster push to the cloud. The second is taking advantage of that push by becoming more agile and innovative.

It seems to be working. Bhat reports that in Q2, the company has seen a lot of interest. “One of the things in Q2 of 2020 that we’ve seen is that just Q2, over 100,000 companies used our application modernization platform and services,” he said.


By Ron Miller

Nvidia’s Ampere GPUs come to Google Cloud

Nvidia today announced that its new Ampere-based data center GPUs, the A100 Tensor Core GPUs, are now available in alpha on Google Cloud. As the name implies, these GPUs were designed for AI workloads, as well as data analytics and high-performance computing solutions.

The A100 promises a significant performance improvement over previous generations. Nvidia says the A100 can boost training and inference performance by over 20x compared to its predecessors (though in most benchmarks you’ll see 6x or 7x improvements), and it tops out at about 19.5 TFLOPS of single-precision performance and 156 TFLOPS for Tensor Float 32 workloads.

Image Credits: Nvidia

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, Director of Product Management at Google Cloud, in today’s announcement. “With our new A2 VM family, we are proud to be the first major cloud provider to market Nvidia A100 GPUs, just as we were with Nvidia’s T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

Google Cloud users can get access to instances with up to 16 of these A100 GPUs, for a total of 640GB of GPU memory and 1.3TB of system memory.


By Frederic Lardinois

Google Cloud launches Filestore High Scale, a new storage tier for high-performance computing workloads

Google Cloud today announced the launch of Filestore High Scale, a new storage option — and tier of Google’s existing Filestore service — for workloads that can benefit from access to a distributed high-performance storage option.

With Filestore High Scale, which is based on technology Google acquired when it bought Elastifile in 2019, users can deploy shared file systems with hundreds of thousands of IOPS, tens of GB/s of throughput and hundreds of TBs of capacity.

“Virtual screening allows us to computationally screen billions of small molecules against a target protein in order to discover potential treatments and therapies much faster than traditional experimental testing methods,” says Christoph Gorgulla, a postdoctoral research fellow at Harvard Medical School’s Wagner Lab, which has already put the new service through its paces. “As researchers, we hardly have the time to invest in learning how to set up and manage a needlessly complicated file system cluster, or to constantly monitor the health of our storage system. We needed a file system that could handle the load generated concurrently by thousands of clients, which have hundreds of thousands of vCPUs.”

The standard Google Cloud Filestore service already supports some of these use cases, but the company notes that it specifically built Filestore High Scale for high-performance computing (HPC) workloads. In today’s announcement, the company specifically focuses on biotech use cases around COVID-19. Filestore High Scale is meant to support tens of thousands of concurrent clients, which isn’t necessarily a standard use case, but developers who need this kind of power can now get it in Google Cloud.

In addition to High Scale, Google also today announced that all Filestore tiers now offer beta support for NFS IP-based access controls, an important new feature for those companies that have advanced security requirements on top of their need for a high-performance, fully managed file storage service.


By Frederic Lardinois

Google makes it easier to migrate VMware environments to its cloud

Google Cloud today announced the next step in its partnership with VMware: the Google Cloud VMware Engine. This fully managed service provides businesses with a full VMware Cloud Foundation stack on Google Cloud to help them easily migrate their existing VMware-based environments to Google’s infrastructure. Cloud Foundation is VMware’s stack for hybrid and private cloud deployments.

Given Google Cloud’s focus on enterprise customers, it’s no surprise that the company continues to bet on partnerships with the likes of VMware to attract more of these companies’ workloads. Less than a year ago, Google announced that VMware Cloud Foundation would come to Google Cloud and that it would start supporting VMware workloads. Then, last November, Google Cloud acquired CloudSimple, a company that specialized in running VMware environments and that Google had already partnered with for its original VMware deployments. The company describes today’s announcement as the third step in this journey.

VMware Engine provides users with all of the standard Cloud Foundation components: vSphere, vCenter, vSAN, NSX-T and HCX. With this, Google Cloud General Manager June Yang notes in today’s announcement, businesses can quickly stand up their own software-defined data center in the Google Cloud.

“Google Cloud VMware Engine is designed to minimize your operational burden, so you can focus on your business,” she notes. “We take care of the lifecycle of the VMware software stack and manage all related infrastructure and upgrades. Customers can continue to leverage IT management tools and third-party services consistent with their on-premises environment.”

Google is also working with third-party providers like NetApp, Veeam, Zerto, Cohesity and Dell Technologies to ensure that their solutions work on Google’s platform, too.

“As customers look to simplify their cloud migration journey, we’re committed to build cloud services to help customers benefit from the increased agility and efficiency of running VMware workloads on Google Cloud,” said Bob Black, Dell Technologies Global Lead Alliance Principal at Deloitte Consulting. “By combining Google Cloud’s technology and Deloitte’s business transformation experience, we can enable our joint customers to accelerate their cloud migration, unify operations, and benefit from innovative Google Cloud services as they look to modernize applications.”


By Frederic Lardinois

Microsoft partners with Redis Labs to improve its Azure Cache for Redis

For a few years now, Microsoft has offered Azure Cache for Redis, a fully managed caching solution built on top of the open-source Redis project. Today, it is expanding this service by adding Redis Enterprise, Redis Labs’ commercial offering, to its platform. It’s doing so in partnership with Redis Labs, and while Microsoft will offer some basic support for the service, Redis Labs will handle most of the software support itself.

Julia Liuson, Microsoft’s corporate VP of its developer tools division, told me that the company wants to be seen as a partner to open-source companies like Redis Labs, which was among the first companies to change its license to prevent cloud vendors from commercializing and repackaging their free code without contributing back to the community. Last year, Redis Labs partnered with Google Cloud to bring its own fully managed service to its platform and so maybe it’s no surprise that we are now seeing Microsoft make a similar move.

Liuson tells me that with this new tier for Azure Cache for Redis, users will get a single bill and native Azure management, as well as the option to deploy natively on SSD flash storage. The native Azure integration should also make it easier for developers on Azure to integrate Redis Enterprise into their applications.

It’s also worth noting that Microsoft will support Redis Labs’ own Redis modules, including RediSearch, a Redis-powered search engine, as well as RedisBloom and RedisTimeSeries, which provide support for new datatypes in Redis.

“For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications,” says Liuson. “We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.”
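The distributed-cache use Liuson describes typically follows the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache so the next reader gets the fast path. A minimal Python sketch of the pattern, with a plain dict standing in for the Redis client (a real deployment would use a client library such as redis-py, with TTLs on the keys; the `query_database` helper here is hypothetical):

```python
# Cache-aside sketch: a dict stands in for a Redis client (TTL handling omitted).
cache = {}
db_reads = {"count": 0}

def query_database(user_id):
    # Placeholder for an expensive backing-store lookup.
    db_reads["count"] += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                     # cache hit: skip the database entirely
        return cache[key]
    value = query_database(user_id)      # cache miss: read through...
    cache[key] = value                   # ...then populate the cache
    return value

first = get_user(42)   # miss: hits the database
second = get_user(42)  # hit: served from the cache, no second database read
```

The session-store and message-broker use cases Liuson mentions build on the same primitive: fast keyed reads and writes shared by many application servers.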


By Frederic Lardinois

Google Cloud’s fully-managed Anthos is now generally available for AWS

A year ago, back in the days of in-person conferences, Google officially announced the launch of its Anthos multi-cloud application modernization platform at its Cloud Next conference. The promise of Anthos was always that it would allow enterprises to write their applications once, package them into containers and then manage their multi-cloud deployments across GCP, AWS, Azure and their on-prem data centers.

Until now, support for AWS and Azure was only available in preview, but today, the company is making support for AWS and on-premises generally available. Microsoft Azure support remains in preview, though.

“As an AWS customer now, or a GCP customer, or a multi-cloud customer, […] you can now run Anthos on those environments in a consistent way, so you don’t have to learn any proprietary APIs and be locked in,” Eyal Manor, the VP of engineering in charge of Anthos, told me. “And for the first time, we enable the portability between different infrastructure environments as opposed to what has happened in the past where you were locked into a set of API’s.”

Manor stressed that Anthos was designed to be multi-cloud from day one. As for why AWS support is launching ahead of Azure, Manor said that there was simply more demand for it. “We surveyed the customers and they said, hey, we want, in addition to GCP, we want AWS,” he said. But support for Azure will come later this year and the company already has a number of preview customers for it. In addition, Anthos will also come to bare metal servers in the future.

Looking even further ahead, Manor also noted that better support for machine learning workloads is on the way. Many businesses, after all, want to be able to update and run their models right where their data resides, no matter what cloud that may be. There, too, the promise of Anthos is that developers can write the application once and then run it anywhere.

“I think a lot of the initial response and excitement was from the developer audiences,” Jennifer Lin, Google Cloud’s VP of product management, told me. “Eric Brewer had led a white paper that we did to say that a lot of the Anthos architecture sort of decouples the developer and the operator stakeholder concerns. There hadn’t been a multi-cloud shared software architecture where we could do that and still drive emerging and existing applications with a common shared software stack.”

She also noted that a lot of Google Cloud’s ecosystem partners endorsed the overall Anthos architecture early on because they, too, wanted to be able to write once and run anywhere — and so do their customers.

Plaid is one of the launch partners for these new capabilities. “Our customers rely on us to be always available and as a result we have very high reliability requirements,” said Naohiko Takemura, Plaid’s head of engineering. “We pursued a multi-cloud strategy to ensure redundancy for our critical KARTE service. Google Cloud’s Anthos works seamlessly across GCP and our other cloud providers preventing any business disruption. Thanks to Anthos, we prevent vendor lock-in, avoid managing cloud-specific infrastructure, and our developers are not constrained by cloud providers.”

With this release, Google Cloud is also bringing deeper support for virtual machines to Anthos, as well as improved policy and configuration management.

Over the next few months, the Anthos Service Mesh will also add support for applications that run in traditional virtual machines. As Lin told me, “a lot of this is about driving better agility and taking the complexity out of it, so that we have abstractions that work across any environment, whether it’s legacy or new or on-prem or AWS or GCP.”


By Frederic Lardinois

Tech giants should let startups defer cloud payments

Google, Amazon, and Microsoft are the landlords. Amid the coronavirus economic crisis, startups need a break from paying rent. They’re in a cash crunch. Revenue has stopped flowing in, capital markets like venture debt are hesitant, and startups and small-to-medium-sized businesses are at risk of having to lay off huge numbers of employees and/or shut down.

Meanwhile, the tech giants are cash rich. Their success this decade means they’re able to weather the storm for a few months. Their customers cannot.

Cloud infrastructure costs are among many startups’ top expenses besides payroll. The option to pay these cloud bills later could save some from going out of business or axing huge parts of their staff. Both outcomes would hurt the tech industry, the economy, and the individuals laid off. But most worryingly for the giants, it could destroy their customer base.

The mass layoffs have already begun. Soon we’re sure to start hearing about sizable companies shutting down, upended by COVID-19. But there’s still an opportunity to stop a larger bloodbath from ensuing.

That’s why I have a proposal: cloud relief.

The platform giants should let startups and small businesses defer their cloud infrastructure payments for three to six months until they can pay them back in installments. Amazon AWS, Google Cloud, Microsoft Azure, these companies’ additional infrastructure products, and other platform providers should let customers pause payment until the worst of the first wave of the COVID-19 economic disruption passes. Profitable SaaS providers like Salesforce could give customers an extension too.

There are plenty of altruistic reasons to do this. They have the resources to help businesses in need. We all need to support each other in these tough times. This could protect tons of families. Some of these startups are providing important services to the public and even discounting them, thereby ramping up their bills while decreasing revenue.

Then there are the PR reasons. After years of techlash and anti-trust scrutiny, here’s the chance for the giants to prove their size can be beneficial to the world. Recruiters could use it as a talking point. “We’re the company that helped save Silicon Valley.” There’s an explanation for them squirreling away so much cash: the rainy day has finally arrived.

But the capitalistic truth and the story they could sell to Wall Street is that it’s not good for our business if our customers go out of business. Look at what happened to infrastructure providers in the dotcom crash. When tons of startups vaporized, so did the profits for those selling them hosting and tools. Any government stimulus for businesses would be better spent by them paying employees than paying the cloud companies that aren’t in danger. Saving one future Netflix from shutting down could cover any short-term loss from helping 100 other businesses.

This isn’t a handout. These startups will still owe the money. They’d just be able to pay it a little later, spread out over their monthly bills for a year or so. Once mass shelter-in-place orders subside, businesses can operate at least a little closer to normal, and investors get less cautious, customers will have the cash they need to pay their dues. Plus interest if necessary.
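The arithmetic of such a deferment is straightforward. As an illustration (all figures hypothetical), spreading a deferred balance over equal monthly installments, with optional simple interest, might look like:

```python
def repayment_schedule(deferred_total, months=12, annual_rate=0.0):
    """Spread a deferred cloud bill over `months` equal add-on payments,
    optionally charging simple interest on the deferred balance."""
    total_owed = deferred_total * (1 + annual_rate * months / 12)
    payment = round(total_owed / months, 2)
    schedule = [payment] * (months - 1)
    # The last payment absorbs rounding so the schedule sums exactly.
    schedule.append(round(total_owed - sum(schedule), 2))
    return schedule

# E.g., $30,000 of deferred bills repaid over 12 months at 5% simple interest:
# twelve extra charges of $2,625 added to the regular monthly invoices.
plan = repayment_schedule(30_000, months=12, annual_rate=0.05)
```

Each installment would simply ride along on the customer’s normal monthly bill until the deferred balance clears.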

Meanwhile, they’ll be locked in and loyal customers for the foreseeable future. Cloud vendors could gate the deferment to only customers that have been with them for X amount of months or that have already spent Y amount on the platform. The vendors could also offer the deferment on the condition that customers add a year or more to their existing contracts. Founders will remember who gave them the benefit of the doubt.


Consider it a marketing expense. Platforms often offer discounts or free trials to new customers. Now it’s existing customers that need a reprieve. Instead of airport ads, the giants could spend the money ensuring they’ll still have plenty of developers building atop them by the end of 2020.

Beyond deferred payment, platforms could just push the due date on all outstanding bills to three or six months from now. Alternatively, they could offer a deep discount, such as 50% off for three months, if they didn’t want to deal with accruing debt and then servicing it. Customers with multi-year contracts could be offered the opportunity to downgrade or renegotiate their contracts without penalties. Any of these options might require giving sales quota forgiveness to account executives.

It would likely be far too complicated and risky to accept equity in lieu of cash, a cut of revenue going forward, or to provide loans or credit lines to customers. The clearest and simplest solution is to let startups skip a few payments, then pay more every month later until they clear their debt. When asked for comment or about whether they’re considering payment deferment options, Microsoft declined, and Amazon and Google did not respond.

To be clear, administering payment deferment won’t be simple or free. There are sure to be holes that cloud economists can poke in this proposal, but my goal is to get the conversation started. It could require the giants to change their earnings guidance. Rewriting deals with significantly sized customers will take work on both ends, and there’s a chance of breach-of-contract disputes. Giants would face the threat of customers recklessly using cloud resources before shutting down or skipping town.

Most taxing would be determining and enforcing the criteria of who’s eligible. The vendors would need to lay out which customers are too big so they don’t accidentally give a cloud-intensive but healthy media company a deferment they don’t need. Businesses that get questionably excluded could make a stink in public. Executing on the plan will require staff when giants are stretched thin trying to handle logistics disruptions, misinformation, and accelerating work-from-home usage.

Still, this is the moment when the fortunate need to lend a hand to the vulnerable. Not a hand out, but a hand up. Companies with billions in cash in their coffers could save those struggling to pay salaries. All the fundraisers and info centers and hackathons are great, but this is how the tech giants can live up to their lofty mission statements.

We all live in the cloud now. Don’t evict us. #CloudRelief

Thanks to Falon Fatemi, Corey Quinn, Ilya Fushman, Jason Kim, Ilya Sukhar, and Michael Campbell for their ideas and feedback on this proposal.


By Josh Constine

Big opening for startups that help move entrenched on-prem workloads to the cloud

AWS CEO Andy Jassy showed signs of frustration at his AWS re:Invent keynote address in December.

Customers weren’t moving to the cloud nearly fast enough for his taste, and he prodded them to move along. Some of their hesitation, as Jassy pointed out, was due to institutional inertia, but some of it also was due to a technology problem related to getting entrenched, on-prem workloads to the cloud.

When a challenge of this magnitude presents itself and you have the head of the world’s largest cloud infrastructure vendor imploring customers to move faster, you can be sure any number of players will start paying attention.

Sure enough, cloud infrastructure vendors have developed new migration solutions to help break that big data logjam. Large systems integrators like Accenture and Deloitte are also happy to help your company deal with migration issues, but this opportunity also offers a big opening for startups aiming to solve the hard problems associated with moving certain workloads to the cloud.

Think about problems like getting data off of a mainframe and into the cloud or moving an on-prem data warehouse. We spoke to a number of experts to figure out where this migration market is going and if the future looks bright for cloud-migration startups.

Cloud-migration blues

It’s hard to nail down exactly what percentage of workloads have been moved to the cloud at this point, but most experts agree there’s still a great deal of growth ahead. Some of the more optimistic projections peg the share of workloads already in the cloud at around 20%, with the U.S. far ahead of the rest of the world.


By Ron Miller

Thomas Kurian on his first year as Google Cloud CEO

“Yes.”

That was Google Cloud CEO Thomas Kurian’s simple answer when I asked if he thought he’d achieved what he set out to do in his first year.

A year ago, he took the helm of Google’s cloud operations — which includes G Suite — and set about giving the organization a sharpened focus by expanding on a strategy his predecessor Diane Greene first set during her tenure.

It’s no secret that Kurian, with his background at Oracle, immediately put the entire Google Cloud operation on a course to focus on enterprise customers, with an emphasis on a number of key verticals.

So it’s no surprise, then, that the first highlight Kurian cited is that Google Cloud expanded its feature lineup with important capabilities that were previously missing. “When we look at what we’ve done this last year, first is maturing our products,” he said. “We’ve opened up many markets for our products because we’ve matured the core capabilities in the product. We’ve added things like compliance requirements. We’ve added support for many enterprise things like SAP and VMware and Oracle and a number of enterprise solutions.” Thanks to this, he stressed, analyst firms like Gartner and Forrester now rank Google Cloud “neck-and-neck with the other two players that everybody compares us to.”

If Google Cloud’s previous record made anything clear, though, it’s that technical know-how and great features aren’t enough. One of the first actions Kurian took was to expand the company’s sales team to resemble an organization that looked a bit more like that of a traditional enterprise company. “We were able to specialize our sales teams by industry — added talent into the sales organization and scaled up the sales force very, very significantly — and I think you’re starting to see those results. Not only did we increase the number of people, but our productivity improved as well as the sales organization, so all of that was good.”

He also cited Google’s partner business as a reason for its overall growth. Partner influence revenue increased by about 200% in 2019, and its partners brought in 13 times more new customers in 2019 when compared to the previous year.


By Frederic Lardinois

Google brings IBM Power Systems to its cloud

As Google Cloud looks to convince more enterprises to move to its platform, it needs to be able to give businesses an onramp for their existing legacy infrastructure and workloads that they can’t easily replace or move to the cloud. A lot of those workloads run on IBM Power Systems with their Power processors, and until now, IBM was essentially the only vendor that offered cloud-based Power systems. Now, however, Google is also getting into this game by partnering with IBM to launch IBM Power Systems on Google Cloud.

“Enterprises looking to the cloud to modernize their existing infrastructure and streamline their business processes have many options,” writes Kevin Ichhpurani, Google Cloud’s corporate VP for its global ecosystem in today’s announcement. “At one end of the spectrum, some organizations are re-platforming entire legacy systems to adopt the cloud. Many others, however, want to continue leveraging their existing infrastructure while still benefiting from the cloud’s flexible consumption model, scalability, and new advancements in areas like artificial intelligence, machine learning, and analytics.”

Power Systems support obviously fits in well here, given that many companies use them for mission-critical workloads based on SAP and Oracle applications and databases. With this, they can take those workloads and slowly move them to the cloud, without having to re-engineer their applications and infrastructure. Power Systems on Google Cloud is obviously integrated with Google’s services and billing tools.

This is very much an enterprise offering, without a published pricing sheet. Chances are, given the cost of a Power-based server, you’re not looking at a bargain, per-minute price here.

Since IBM has its own cloud offering, it’s a bit odd to see it work with Google to bring its servers to a competing cloud — though it surely wants to sell more Power servers. The move makes perfect sense for Google Cloud, though, which is on a mission to bring more enterprise workloads to its platform. Any roadblock the company can remove works in its favor and as enterprises get comfortable with its platform, they’ll likely bring other workloads to it over time.


By Frederic Lardinois

The 7 most important announcements from Microsoft Ignite

It’s Microsoft Ignite this week, the company’s premier event for IT professionals and decision-makers. But it’s not just about new tools for role-based access. Ignite is also very much a forward-looking conference that keeps the changing role of IT in mind. And while there isn’t a lot of consumer news at the event, the company does tend to make a few announcements for developers, as well.

This year’s Ignite was especially news-heavy. Ahead of the event, the company provided journalists and analysts with an 87-page document that lists all of the news items. If I counted correctly, there were about 175 separate announcements. Here are the top seven you really need to know about.

Azure Arc: you can now use Azure to manage resources anywhere, including on AWS and Google Cloud

What was announced: Microsoft was among the first of the big cloud vendors to bet big on hybrid deployments. With Arc, the company is taking this a step further. It will let enterprises use Azure to manage their resources across clouds — including those of competitors like AWS and Google Cloud. It’ll work for Windows and Linux Servers, as well as Kubernetes clusters, and also allows users to take some limited Azure data services with them to these platforms.

Why it matters: With Azure Stack, Microsoft already allowed businesses to bring many of Azure’s capabilities into their own data centers. But because it’s basically a local version of Azure, it only worked on a limited set of hardware. Arc doesn’t bring all of the Azure Services, but it gives enterprises a single platform to manage all of their resources across the large clouds and their own data centers. Virtually every major enterprise uses multiple clouds. Managing those environments is hard. So if that’s the case, Microsoft is essentially saying, let’s give them a tool to do so — and keep them in the Azure ecosystem. In many ways, that’s similar to Google’s Anthos, yet with an obvious Microsoft flavor, less reliance on Kubernetes and without the managed services piece.

Microsoft launches Project Cortex, a knowledge network for your company

What was announced: Project Cortex creates a knowledge network for your company. It uses machine learning to analyze all of the documents and contracts in your various repositories — including those of third-party partners — and then surfaces them in Microsoft apps like Outlook, Teams and its Office apps when appropriate. It’s the company’s first new commercial service since the launch of Teams.

Why it matters: Enterprises these days generate tons of documents and data, but it’s often spread across numerous repositories and is hard to find. With this new knowledge network, the company aims to surface this information proactively, but it also looks at who the people are who work on them and tries to help you find the subject matter experts when you’re working on a document about a given subject, for example.


Microsoft launches Endpoint Manager to modernize device management

What was announced: Microsoft is combining its ConfigMgr and Intune services that allow enterprises to manage the PCs, laptops, phones and tablets they issue to their employees under the Endpoint Manager brand. With that, it’s also launching a number of tools and recommendations to help companies modernize their deployment strategies. ConfigMgr users will now also get a license to Intune to allow them to move to cloud-based management.

Why it matters: In this world of BYOD, where every employee uses multiple devices, as well as constant attacks against employee machines, effectively managing these devices has become challenging for most IT departments. They often use a mix of different tools (ConfigMgr for PCs, for example, and Intune for cloud-based management of phones). Now they can get a single view of their deployments with Endpoint Manager, which Microsoft CEO Satya Nadella described as one of the most important announcements of the event. ConfigMgr users also get an easy path to cloud-based device management thanks to the Intune license they now have access to.

Microsoft’s Chromium-based Edge browser gets new privacy features, will be generally available January 15

What was announced: Microsoft’s Chromium-based version of Edge will be generally available on January 15. The release candidate is available now. That’s the culmination of a lot of work from the Edge team, and, with today’s release, the company is also adding a number of new privacy features to Edge that, in combination with Bing, offer some capabilities that some of Microsoft’s rivals can’t yet match, thanks to its newly enhanced InPrivate browsing mode.

Why it matters: Browsers are interesting again. After years of focusing on speed, the new focus is now privacy, and that’s giving Microsoft a chance to gain users back from Chrome (though maybe not Firefox). At Ignite, Microsoft also stressed that Edge’s business users will get to benefit from a deep integration with its updated Bing engine, which can now surface business documents, too.


You can now try Microsoft’s web-based version of Visual Studio

What was announced: At Build earlier this year, Microsoft announced that it would soon launch a web-based version of its Visual Studio development environment, based on the work it did on the free Visual Studio Code editor. This experience, with deep integrations into the Microsoft-owned GitHub, is now live in a preview.

Why it matters: Microsoft has long said that it wants to meet developers where they are. While Visual Studio Online isn’t likely to replace the desktop-based IDE for most developers, it’s an easy way for them to make quick changes to code that lives in GitHub, for example, without having to set up their IDE locally. As long as they have a browser, developers will be able to get their work done.

Microsoft launches Power Virtual Agents, its no-code bot builder

What was announced: Power Virtual Agents is Microsoft’s new no-code/low-code tool for building chatbots. It leverages a lot of Azure’s machine learning smarts to let you create a chatbot with the help of a visual interface. In case you outgrow that and want to get to the actual code, you can always do so, too.

Why it matters: Chatbots aren’t exactly at the top of the hype cycle, but they do have lots of legitimate uses. Microsoft argues that a lot of early efforts were hampered by the fact that the developers were far removed from the user. With a visual tool, though, anybody can come in and build a chatbot — and a lot of those builders will have a far better understanding of what their users are looking for than a developer who is far removed from that business group.

Cortana wants to be your personal executive assistant and read your emails to you, too

What was announced: Cortana lives — and it now also has a male voice. But more importantly, Microsoft launched a few new focused Cortana-based experiences that show how the company is positioning its voice assistant as a tool for productivity. In Outlook on iOS (with Android coming later), Cortana can now read you a summary of what’s in your inbox — and you can have a chat with it to flag emails, delete them or dictate answers. Cortana can now also send you a daily summary of your calendar appointments and important emails that need answers, and suggest focus time for you to get actual work done that isn’t email.

Why it matters: In this world of competing assistants, Microsoft is very much betting on productivity. Cortana didn’t work out as a consumer product, but the company believes there is a large (and lucrative) niche for an assistant that helps you get work done. Because Microsoft doesn’t have a lot of consumer data, but does have lots of data about your work, that’s probably a smart move.


SAN FRANCISCO, CA – APRIL 02: Microsoft CEO Satya Nadella walks in front of the new Cortana logo as he delivers a keynote address during the 2014 Microsoft Build developer conference on April 2, 2014 in San Francisco, California (Photo by Justin Sullivan/Getty Images)

Bonus: Microsoft agrees with you and thinks meetings are broken — and often it’s the broken meeting room that makes meetings even harder. To battle this, the company today launched Managed Meeting Rooms, which for $50 per room/month lets you delegate to Microsoft the monitoring and management of the technical infrastructure of your meeting rooms.


By Frederic Lardinois