Google Cloud announces four new regions as it expands its global footprint

Google Cloud today announced its plans to open four new data center regions. These regions will be in Delhi (India), Doha (Qatar), Melbourne (Australia) and Toronto (Canada) and bring Google Cloud’s total footprint to 26 regions. The company previously announced that it would open regions in Jakarta, Las Vegas, Salt Lake City, Seoul and Warsaw over the course of the next year. The announcement also comes only a few days after Google opened its Salt Lake City data center.

GCP already had a data center presence in India, Australia and Canada before this announcement, but with these newly announced regions, it will now offer two geographically separate regions in each of those countries, enabling in-country disaster recovery, for example.

Google notes that the Doha region marks the company’s first strategic collaboration agreement to launch a region in the Middle East, struck with the Qatar Free Zones Authority. One of the launch customers there is Bespin Global, a major managed services provider in Asia.

“We work with some of the largest Korean enterprises, helping to drive their digital transformation initiatives. One of the key requirements that we have is that we need to deliver the same quality of service to all of our customers around the globe,” said John Lee, CEO, Bespin Global. “Google Cloud’s continuous investments in expanding their own infrastructure to areas like the Middle East make it possible for us to meet our customers where they are.”


By Frederic Lardinois

Datastax acquires The Last Pickle

Data management company Datastax, one of the largest contributors to the Apache Cassandra project, today announced that it has acquired The Last Pickle (and no, I don’t know what’s up with that name either), a New Zealand-based Cassandra consulting and services firm that’s behind a number of popular open-source tools for the distributed NoSQL database.

As Datastax Chief Strategy Officer Sam Ramji, who you may remember from his recent tenure at Apigee, the Cloud Foundry Foundation, Google and Autodesk, told me, The Last Pickle is one of the premier Apache Cassandra consulting and services companies. The team there has been building Cassandra-based open-source solutions for the likes of Spotify, T-Mobile and AT&T since it was founded back in 2012. And while The Last Pickle is based in New Zealand, the company has engineers all over the world who do the heavy lifting and help these companies successfully implement the Cassandra database technology.

It’s worth mentioning that Last Pickle CEO Aaron Morton first discovered Cassandra when he worked for WETA Digital on the special effects for Avatar, where the team used Cassandra to allow the VFX artists to store their data.

“There’s two parts to what they do,” Ramji explained. “One is the very visible consulting, which has led them to become world experts in the operation of Cassandra. So as we automate Cassandra and as we improve the operability of the project with enterprises, their embodied wisdom about how to operate and scale Apache Cassandra is as good as it gets — the best in the world.” And The Last Pickle’s experience in building systems with tens of thousands of nodes — and the challenges that its customers face — is something Datastax can then offer to its customers as well.

And Datastax, of course, also plans to productize The Last Pickle’s open-source tools like the automated repair tool Reaper and the Medusa backup and restore system.

As both Ramji and Datastax VP of Engineering Josh McKenzie stressed, Cassandra has seen a lot of commercial development in recent years, with the likes of AWS now offering a managed Cassandra service, for example, but there wasn’t all that much hype around the project anymore. They argue that’s a good thing, though. Now that it is over ten years old, Cassandra has been battle-hardened. For the last ten years, Ramji argues, the industry tried to figure out what the de facto standard for scale-out computing should be. By 2019, it became clear that Kubernetes was the answer to that.

“This next decade is about what is the de facto standard for scale-out data? We think that’s got certain affordances, certain structural needs and we think that the decades that Cassandra has spent getting hardened puts it in a position to be data for that wave.”

McKenzie also noted that Cassandra’s built-in features, such as support for multiple data centers and geo-replication, rolling updates and live scaling, as well as wide support across programming languages, give it a number of advantages over competing databases.

“It’s easy to forget how much Cassandra gives you for free just based on its architecture,” he said. “Losing the power in an entire datacenter, upgrading the version of the database, hardware failing every day? No problem. The cluster is 100 percent always still up and available. The tooling and expertise of The Last Pickle really help bring all this distributed and resilient power into the hands of the masses.”

The two companies did not disclose the price of the acquisition.


By Frederic Lardinois

Google cancels Cloud Next because of coronavirus, goes online-only

Google today announced that it is canceling the physical part of Cloud Next, its cloud-focused event and its largest annual conference by far with around 30,000 attendees, over concerns around the current spread of COVID-19.

Given all of the recent conference cancellations, this announcement doesn’t come as a huge surprise, especially after Facebook canceled its F8 developer conference only a few days ago.

Cloud Next was scheduled to run from April 6 to 8. Instead of the physical event, Google will now host an online event under the “Google Cloud Next ’20: Digital Connect” moniker. So there will still be keynotes and breakout sessions, as well as the ability to connect with experts.

“Innovation is in Google’s DNA and we are leveraging this strength to bring you an immersive and inspiring event this year without the risk of travel,” the company notes in today’s announcement.

The virtual event will be free and in an email to attendees, Google says that it will automatically refund all tickets to this year’s conference. It will also automatically cancel all hotel reservations made through its conference reservation system.

It now remains to be seen what happens to Google’s other major conference, I/O, which is slated to run from May 12 to 14 in Mountain View. The same holds true for Microsoft’s rival Build conference in Seattle, which is scheduled to start on May 19. These are the two premier annual news events for both companies, but given the current situation, nobody would be surprised if they got canceled, too.


By Frederic Lardinois

Microsoft’s Cortana drops consumer skills as it refocuses on business users

With the next version of Windows 10, coming this spring, Microsoft’s Cortana digital assistant will lose a number of consumer skills around music and connected homes, as well as some third-party skills. That’s very much in line with Microsoft’s new focus for Cortana, but it may still come as a surprise to the dozens of loyal Cortana fans.

Microsoft is also turning off Cortana support in its Microsoft Launcher on Android by the end of April and on older versions of Windows that have reached their end-of-service date, which usually comes about 36 months after the original release.


As the company explained last year, it now mostly thinks of Cortana as a service for business users. The new Cortana is all about productivity, with deep integrations into Microsoft’s suite of Office tools, for example. In this context, consumer services are only a distraction, and Microsoft is leaving that market to the likes of Amazon and Google.

Because the new Cortana experience is all about Microsoft 365, the subscription service that includes access to the Office tools, email, online storage and more, it doesn’t come as a surprise that the assistant’s new features will give you access to data from these tools, including your calendar, Microsoft To Do notes and more.

And while some consumer features are going away, Microsoft stresses that Cortana will still be able to tell you a joke, set alarms and timers, and give you answers from Bing.

For now, all of this only applies to English-speaking users in the U.S. Outside of the U.S., most of the productivity features will launch in the future.


By Frederic Lardinois

Google Cloud’s newest data center opens in Salt Lake City

Google Cloud announced today that its new data center in Salt Lake City has opened, making it the 22nd such center the company has opened to date.

This Salt Lake City data center is the third in the western United States, joining Los Angeles and The Dalles, Oregon, with the goal of providing lower-latency compute power across the region.

“We’re committed to building the most secure, high-performance and scalable public cloud, and we continue to make critical infrastructure investments that deliver our cloud services closer to customers that need them the most,” Jennifer Chason, director of Google Cloud Enterprise for the Western States and Southern California, said in a statement.

Cloud vendors in general are trying to open more locations closer to potential customers. This is similar to the approach AWS took when it announced its LA Local Zone at AWS re:Invent last year. The idea is to reduce latency by moving compute resources closer to the companies that need them, or to spread workloads across a set of regional resources.

Google also announced that PayPal, a company that was already a customer, has signed a multi-year contract, and will be moving parts of its payment systems into the western region. It’s worth noting that Salt Lake City is also home to a thriving startup scene that could benefit from having a data center located close by.

Google Cloud’s parent company, Alphabet, recently shared the cloud division’s quarterly earnings for the first time, indicating that it was on a run rate of more than $10 billion. While it still has a long way to go to catch rivals Microsoft and Amazon, expanding its reach in this fashion could help grow that market share.


By Ron Miller

Thomas Kurian on his first year as Google Cloud CEO

“Yes.”

That was Google Cloud CEO Thomas Kurian’s simple answer when I asked if he thought he’d achieved what he set out to do in his first year.

A year ago, he took the helm of Google’s cloud operations — which includes G Suite — and set about giving the organization a sharpened focus by expanding on a strategy his predecessor Diane Greene first set during her tenure.

It’s no secret that Kurian, with his background at Oracle, immediately put the entire Google Cloud operation on a course to focus on enterprise customers, with an emphasis on a number of key verticals.

So it’s no surprise, then, that the first highlight Kurian cited is that Google Cloud expanded its feature lineup with important capabilities that were previously missing. “When we look at what we’ve done this last year, first is maturing our products,” he said. “We’ve opened up many markets for our products because we’ve matured the core capabilities in the product. We’ve added things like compliance requirements. We’ve added support for many enterprise things like SAP and VMware and Oracle and a number of enterprise solutions.” Thanks to this, he stressed, analyst firms like Gartner and Forrester now rank Google Cloud “neck-and-neck with the other two players that everybody compares us to.”

If Google Cloud’s previous record made anything clear, though, it’s that technical know-how and great features aren’t enough. One of the first actions Kurian took was to expand the company’s sales team into an organization that looks a bit more like that of a traditional enterprise company. “We were able to specialize our sales teams by industry — added talent into the sales organization and scaled up the sales force very, very significantly — and I think you’re starting to see those results. Not only did we increase the number of people, but our productivity improved as well as the sales organization, so all of that was good.”

He also cited Google’s partner business as a reason for its overall growth. Partner-influenced revenue increased by about 200% in 2019, and its partners brought in 13 times more new customers in 2019 compared to the previous year.


By Frederic Lardinois

Google Cloud opens its Seoul region

Google Cloud today announced that its new Seoul region, its first in Korea, is now open for business. The region, which it first talked about last April, will feature three availability zones and support for virtually all of Google Cloud’s standard services, ranging from Compute Engine to BigQuery, Bigtable and Cloud Spanner.

With this, Google Cloud now has a presence in 16 countries and offers 21 regions with a total of 64 zones. The Seoul region (with the memorable name of asia-northeast3) will complement Google’s other regions in the area, including two in Japan, as well as regions in Hong Kong and Taiwan, but the obvious focus here is on serving Korean companies with low-latency access to its cloud services.

“As South Korea’s largest gaming company, we’re partnering with Google Cloud for game development, infrastructure management, and to infuse our operations with business intelligence,” said Chang-Whan Sul, the CTO of Netmarble. “Google Cloud’s region in Seoul reinforces its commitment to the region and we welcome the opportunities this initiative offers our business.”

Over the course of this year, Google Cloud also plans to open more zones and regions in Salt Lake City, Las Vegas and Jakarta, Indonesia.


By Frederic Lardinois

Google Cloud acquires mainframe migration service Cornerstone

Google today announced that it has acquired Cornerstone, a Dutch company that specializes in helping enterprises migrate their legacy workloads from mainframes to public clouds. Cornerstone, which provides very hands-on migration assistance, will form the basis of Google Cloud’s mainframe-to-GCP solutions.

This move is very much in line with Google Cloud’s overall enterprise strategy, which focuses on helping existing enterprises move their legacy workloads into the cloud (and start new projects as cloud-native solutions from the get-go).

“This is one more example of how Google Cloud is helping enterprise customers modernize their infrastructure and applications as they transition to the cloud,” said John Jester, VP of Customer Experience at Google Cloud. “We’ve been making great strides to better serve enterprise customers, including introducing Premium Support, better aligning our Customer Success organization, simplifying our commercial contracting process to make it easier to do business with Google Cloud, and expanding our partner relationships.”

A lot of businesses still rely on their mainframes to power mission-critical workloads. Moving them to the cloud is often a very complex undertaking, which is where Cornerstone and similar vendors come in. It doesn’t help that a lot of these mainframe applications were written in Cobol, PL/1 or assembly. Cornerstone’s technology can automatically break these processes down into cloud-native services that are then managed within a containerized environment. It can also migrate databases as needed.

It’s worth noting that Google Cloud also recently introduced support for IBM Power Systems in its cloud. This, too, was a move to help enterprises move their legacy systems into the cloud. With Cornerstone, Google Cloud adds yet another layer on top of this by providing even more hands-on migration assistance for users who want to slowly modernize their overall stack without having to re-architect all of their legacy applications.



By Frederic Lardinois

Cloud spending said to top $30B in Q4 as Amazon, Microsoft battle for market share

We all know the cloud infrastructure market is extremely lucrative; analyst firm Canalys reports that the sector reached $30.2 billion in revenue for Q4 2019.

Cloud numbers are hard to parse because companies often lump cloud revenue into a single bucket regardless of whether it’s generated by infrastructure or software. What’s interesting about Canalys’s numbers is that the firm attempts to measure the pure infrastructure results on their own, without other cloud income mixed in:

As an example, Microsoft reported $12.5 billion in total combined cloud revenue for the quarter, but Canalys estimates that just $5.3 billion comes from infrastructure (Azure). Amazon has the purest number, with $9.8 billion of a reported $9.95 billion attributed to its infrastructure business. This helps explain why, even though Microsoft reported bigger overall cloud revenue and a higher growth rate, Amazon still has just less than double Microsoft’s market share in terms of IaaS spend.
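
For readers who want to check the math, here is a quick back-of-the-envelope calculation using the figures above; the small differences from the shares Canalys cites come from rounding in the reported revenue estimates.

```python
# Rough check of the Canalys Q4 2019 estimates cited in this article.
total = 30.2   # total cloud infrastructure spend, in $B
aws = 9.8      # Amazon infrastructure revenue, in $B
azure = 5.3    # Canalys's estimate of Microsoft's Azure infrastructure revenue, in $B
gcp = 1.8      # Google Cloud infrastructure revenue, in $B

for name, revenue in [("AWS", aws), ("Azure", azure), ("Google Cloud", gcp)]:
    print(f"{name}: {revenue / total:.1%} of the market")
# Prints roughly 32.5%, 17.5% and 6.0% -- in line with the 32.4%, 17.6%
# and 6% shares Canalys reports.

print(f"AWS infrastructure revenue is {aws / azure:.2f}x Azure's")
# About 1.85x, i.e. just less than double.
```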

That’s not to say Microsoft didn’t still have a good quarter — it garnered 17.6% of revenue for the period. That’s up from 14.5% in the same quarter a year ago. What’s more, Amazon lost a bit of ground, according to Canalys, dropping from 33.4% in Q4 2018 to 32.4% in the most recent quarter.

Part of the reason for that is that Microsoft is growing at close to twice the rate of Amazon — 62.3% versus Amazon’s 33.2%.

Meanwhile, number-three vendor Google came in at $1.8 billion in pure infrastructure revenue, good for 6% of the market, up from 4.9% a year ago, on a growth rate of 67.6%. Google reported $2.61 billion in overall cloud revenue, but that included software. Despite the smaller results, it was a good quarter for the Mountain View-based company.


By Ron Miller

Google closes $2.6B Looker acquisition

When Google announced that it was acquiring data analytics startup Looker for $2.6 billion, it was a big deal on a couple of levels. It was a lot of money, and it represented the first large deal under the leadership of Thomas Kurian. Today, the company announced that the deal has officially closed and Looker is part of the Google Cloud Platform.

While Kurian was happy to announce that Looker was officially part of the Google family, he made it clear in a blog post that the analytics arm would continue to support multiple cloud vendors beyond Google.

“Google Cloud and Looker share a common philosophy around delivering open solutions and supporting customers wherever they are—be it on Google Cloud, in other public clouds, or on premises. As more organizations adopt a multi-cloud strategy, Looker customers and partners can expect continued support of all cloud data management systems like Amazon Redshift, Azure SQL, Snowflake, Oracle, Microsoft SQL Server and Teradata,” Kurian wrote.

As is typical in a deal like this, Looker CEO Frank Bien sees the much larger Google giving his company the resources to grow much faster than it could have on its own. “Joining Google Cloud provides us better reach, strengthens our resources, and brings together some of the best minds in both analytics and cloud infrastructure to build an exciting path forward for our customers and partners. The mission that we undertook seven years ago as Looker takes a significant step forward beginning today,” Bien wrote in his post.

At the time the deal was announced in June, the company shared a slide showing where Looker fits in what it calls its “Smart Analytics Platform,” which provides ways to process, understand, analyze and visualize data. Looker fills in a spot in the visualization stack while continuing to support other clouds.


Looker was founded in 2011 and raised more than $280 million, according to Crunchbase. Investors included Redpoint, Meritech Capital Partners, First Round Capital, Kleiner Perkins, CapitalG and PremjiInvest. The last deal before the acquisition was a $103 million Series E investment on a $1.6 billion valuation in December 2018.


By Ron Miller

OpsRamp raises $37.5M for its hybrid IT operations platform

OpsRamp, a service that helps IT teams discover, monitor, manage and — maybe most importantly — automate their hybrid environments, today announced that it has closed a $37.5 million funding round led by Morgan Stanley Expansion Capital, with participation from existing investor Sapphire Ventures and new investor Hewlett Packard Enterprise.

OpsRamp last raised funding in 2017, when Sapphire led its $20 million Series A round.

At the core of OpsRamp’s services is its AIOps platform. Using machine learning and other techniques, this service aims to help IT teams manage increasingly complex infrastructure deployments, provide intelligent alerting, and eventually automate more of their tasks. The company’s overall product portfolio also includes tools for cloud monitoring and incident management.

The company says its annual recurring revenue increased by 300 percent in 2019 (though we obviously don’t know what number it started 2019 with). In total, OpsRamp says it now has 1,400 customers on its platform and alliances with AWS, ServiceNow, Google Cloud Platform and Microsoft Azure.


According to OpsRamp co-founder and CEO Varma Kunaparaju, most of the company’s customers are mid to large enterprises. “These IT teams have large, complex, hybrid IT environments and need help to simplify and consolidate an incredibly fragmented, distributed and overwhelming technology and infrastructure stack,” he said. “The company is also seeing success in the ability of our partners to help us reach global enterprises and Fortune 5000 customers.”

Kunaparaju told me that the company plans to use the new funding to expand its go-to-market efforts and product offerings. “The company will be using the money in a few different areas, including expanding our go-to-market motion and new pursuits in EMEA and APAC, in addition to expanding our North American presence,” he said. “We’ll also be doubling-down on product development on a variety of fronts.”

Given that hybrid clouds only increase the workload for IT organizations and introduce additional tools, it’s maybe no surprise that investors are now interested in companies that offer services that rein in this complexity. If anything, we’ll likely see more deals like this one in the coming months.

“As more of our customers transition to hybrid infrastructure, we find the OpsRamp platform to be a differentiated IT operations management offering that aligns well with the core strategies of HPE,” said Paul Glaser, Vice President and Head of Hewlett Packard Pathfinder. “With OpsRamp’s product vision and customer traction, we felt it was the right time to invest in the growth and scale of their business.”


By Frederic Lardinois

Google Cloud gets a Secret Manager

Google Cloud today announced Secret Manager, a new tool that helps its users securely store their API keys, passwords, certificates and other data. With this, Google Cloud is giving its users a single tool to manage this kind of data and a centralized source of truth, something that even sophisticated enterprise organizations often lack.

“Many applications require credentials to connect to a database, API keys to invoke a service, or certificates for authentication,” Google developer advocate Seth Vargo and product manager Matt Driscoll wrote in today’s announcement. “Managing and securing access to these secrets is often complicated by secret sprawl, poor visibility, or lack of integrations.”

With Berglas, Google already offered an open-source command-line tool for managing secrets. Secret Manager and Berglas will play well together and users will be able to move their secrets from the open-source tool into Secret Manager and use Berglas to create and access secrets from the cloud-based tool as well.

With KMS, Google also offers a fully managed key management system (as do Google Cloud’s competitors). The two tools are very much complementary. As Google notes, KMS does not actually store the secrets — it encrypts the secrets you store elsewhere. Secret Manager provides a way to easily store (and manage) these secrets in Google Cloud.

Secret Manager includes the necessary tools for managing secret versions and audit logging, for example. Secrets in Secret Manager are also project-based, global resources, the company stresses, while competing tools often manage secrets on a regional basis.
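
To give a sense of the developer workflow, here is a minimal sketch that reads a secret with the google-cloud-secret-manager Python client; the project and secret names are placeholders, and the exact client surface may differ slightly from what ships in the beta.

```python
# Minimal sketch: read the latest version of a secret with the
# google-cloud-secret-manager Python client (names are placeholders).
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

# Secrets are project-based, global resources:
# projects/<PROJECT_ID>/secrets/<SECRET_ID>/versions/<VERSION>
name = "projects/my-project/secrets/db-password/versions/latest"

response = client.access_secret_version(request={"name": name})
password = response.payload.data.decode("utf-8")

# Hand the value to whatever needs it (a database connection, an API call);
# each access shows up in the project's audit logs.
print("retrieved secret of length", len(password))
```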

The new tool is now in beta and available to all Google Cloud customers.


By Frederic Lardinois

Google acquires AppSheet to bring no-code development to Google Cloud

Google announced today that it is buying AppSheet, an eight-year-old no-code mobile application building platform. The company had raised over $17 million on a $60 million valuation, according to PitchBook data. The companies did not share the purchase price.

With AppSheet, Google gets a simple way for companies to build mobile apps without having to write a line of code. It works by pulling data from a spreadsheet, database or form, and using the field or column names as the basis for building an app.
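
To make that idea concrete, here is a toy sketch, not AppSheet’s actual implementation, of how column names and value types in a spreadsheet-like table can be turned into the fields of a generated form.

```python
# Conceptual illustration only (not AppSheet code): derive app form fields
# from the column names and sample values of a spreadsheet-like table.
from datetime import date

rows = [
    {"Customer": "Acme Corp", "Order Date": date(2020, 1, 15), "Amount": 1200.0},
    {"Customer": "Globex", "Order Date": date(2020, 1, 17), "Amount": 450.5},
]

def infer_fields(rows):
    """Map each column name to a simple widget type based on its values."""
    widgets = {str: "text input", float: "number input", date: "date picker"}
    first_row = rows[0]
    return {column: widgets.get(type(value), "text input")
            for column, value in first_row.items()}

# Each column becomes a field in the generated app's data-entry form.
for column, widget in infer_fields(rows).items():
    print(f"{column}: {widget}")
```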

It is already integrated with Google Cloud, pulling data from Google Sheets and Google Forms, but it also works with other tools, including AWS DynamoDB, Salesforce, Office 365, Box and others. Google says it will continue to support these other platforms, even after the deal closes.

As Amit Zavery wrote in a blog post announcing the acquisition, it’s about giving everyone a chance to build mobile applications, even companies lacking traditional developer resources to build a mobile presence. “This acquisition helps enterprises empower millions of citizen developers to more easily create and extend applications without the need for professional coding skills,” he wrote.

In a story we hear repeatedly from startup founders, Praveen Seshadri, co-founder and CEO at AppSheet, sees an opportunity to expand his platform and market reach under Google in ways he couldn’t as an independent company.

“There is great potential to leverage and integrate more deeply with many of Google’s amazing assets like G Suite and Android to improve the functionality, scale, and performance of AppSheet. Moving forward, we expect to combine AppSheet’s core strengths with Google Cloud’s deep industry expertise in verticals like financial services, retail, and media and entertainment,” he wrote.

Google sees this acquisition as extending its development philosophy with no-code working alongside workflow automation, application integration and API management.

No-code tools like AppSheet are not going to replace sophisticated development environments, but they will give companies that might not otherwise have a mobile app the ability to put something decent out there.


By Ron Miller

Google brings IBM Power Systems to its cloud

As Google Cloud looks to convince more enterprises to move to its platform, it needs to be able to give businesses an onramp for their existing legacy infrastructure and workloads that they can’t easily replace or move to the cloud. A lot of those workloads run on IBM Power Systems with their Power processors and until now, IBM was essentially the only vendor that offered cloud-based Power systems. Now, however, Google is also getting into this game by partnering with IBM to launch IBM Power Systems on Google Cloud.

“Enterprises looking to the cloud to modernize their existing infrastructure and streamline their business processes have many options,” writes Kevin Ichhpurani, Google Cloud’s corporate VP for its global ecosystem in today’s announcement. “At one end of the spectrum, some organizations are re-platforming entire legacy systems to adopt the cloud. Many others, however, want to continue leveraging their existing infrastructure while still benefiting from the cloud’s flexible consumption model, scalability, and new advancements in areas like artificial intelligence, machine learning, and analytics.”

Power Systems support obviously fits in well here, given that many companies use them for mission-critical workloads based on SAP and Oracle applications and databases. With this, they can take those workloads and slowly move them to the cloud, without having to re-engineer their applications and infrastructure. Power Systems on Google Cloud is obviously integrated with Google’s services and billing tools.

This is very much an enterprise offering, without a published pricing sheet. Chances are, given the cost of a Power-based server, you’re not looking at a bargain, per-minute price here.

Since IBM has its own cloud offering, it’s a bit odd to see it work with Google to bring its servers to a competing cloud — though it surely wants to sell more Power servers. The move makes perfect sense for Google Cloud, though, which is on a mission to bring more enterprise workloads to its platform. Any roadblock the company can remove works in its favor and as enterprises get comfortable with its platform, they’ll likely bring other workloads to it over time.


By Frederic Lardinois

Making sense of a multi-cloud, hybrid world at KubeCon

More than 12,000 attendees gathered this week in San Diego to discuss all things containers, Kubernetes and cloud-native at KubeCon.

Kubernetes, the container orchestration tool, turned five this year, and the technology appears to be reaching a maturity phase where it accelerates beyond early adopters to reach a more mainstream group of larger business users.

That’s not to say that there isn’t plenty of work to be done, or that most enterprise companies have completely bought in, but it’s clearly reached a point where containerization is on the table. If you think about it, the whole cloud-native ethos makes sense for the current state of computing and how large companies tend to operate.

If this week’s conference showed us anything, it’s an acknowledgment that it’s a multi-cloud, hybrid world. That means most companies are working with multiple public cloud vendors, while managing a hybrid environment that includes those vendors — as well as existing legacy tools that are probably still on-premises — and they want a single way to manage all of this.

The promise of Kubernetes and cloud-native technologies, in general, is that it gives these companies a way to thread this particular needle, or at least that’s the theory.

Kubernetes to the rescue


If you were to look at the Kubernetes hype cycle, we are probably right about at the peak where many think Kubernetes can solve every computing problem they might have. That’s probably asking too much, but cloud-native approaches have a lot of promise.

Craig McLuckie, VP of R&D for cloud-native apps at VMware, was one of the original developers of Kubernetes at Google in 2014. VMware thought enough of the importance of cloud-native technologies that it bought his former company, Heptio, for $550 million last year.

As we head into this phase of pushing Kubernetes and related tech into larger companies, McLuckie acknowledges it creates a set of new challenges. “We are at this crossing the chasm moment where you look at the way the world is — and you look at the opportunity of what the world might become — and a big part of what motivated me to join VMware is that it’s successfully proven its ability to help enterprise organizations navigate their way through these disruptive changes,” McLuckie told TechCrunch.

He says that Kubernetes does actually solve this fundamental management problem companies face in this multi-cloud, hybrid world. “At the end of the day, Kubernetes is an abstraction. It’s just a way of organizing your infrastructure and making it accessible to the people that need to consume it.

“And I think it’s a fundamentally better abstraction than we have access to today. It has some very nice properties. It is pretty consistent in every environment that you might want to operate, so it really makes your on-prem software feel like it’s operating in the public cloud,” he explained.
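
One way to see that abstraction in practice: the same declarative deployment object can be submitted to any conforming cluster, on-prem or in a public cloud, with only the active kubeconfig context changing. A minimal sketch with the official Kubernetes Python client follows; the image and names are arbitrary placeholders.

```python
# Minimal sketch: the same Deployment object works against any conforming
# Kubernetes cluster; only the kubeconfig context changes.
from kubernetes import client, config

config.load_kube_config()  # picks up the currently active cluster context

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.17")]
            ),
        ),
    ),
)

# Whether the context points at an on-prem cluster or a managed cloud one,
# the API call and the object are identical.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```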

Simplifying a complex world

One of the reasons Kubernetes and cloud-native technologies are gaining in popularity is because the technology allows companies to think about hardware differently. There is a big difference between virtual machines and containers, says Joe Fernandes, VP of product for Red Hat cloud platform.

“Sometimes people conflate containers as another form of virtualization, but with virtualization, you’re virtualizing hardware, and the virtual machines that you’re creating are like an actual machine with its own operating system. With containers, you’re virtualizing the process,” he said.

He said that this means it’s not coupled with the hardware. The only thing it needs to worry about is making sure it can run Linux, and Linux runs everywhere, which explains how containers make it easier to manage across different types of infrastructure. “It’s more efficient, more affordable, and ultimately, cloud-native allows folks to drive more automation,” he said.

Bringing it into the enterprise


It’s one thing to convince early adopters to change the way they work, but now this technology is entering the mainstream. Gabe Monroy, partner program manager at Microsoft, says that to carry this technology to the next level, we have to change the way we talk about it.


By Ron Miller