Google Cloud makes it easier to set up continuous delivery with Spinnaker

Google Cloud today announced Spinnaker for Google Cloud Platform, a new solution that makes it easier to install and run the Spinnaker continuous delivery (CD) service on Google’s cloud.

Spinnaker was created inside Netflix and is now jointly developed by Netflix and Google. Netflix open-sourced it back in 2015 and over the course of the last few years, it became the open-source CD platform of choice for many enterprises. Today, companies like Adobe, Box, Cisco, Daimler, Samsung and others use it to speed up their development process.

With Spinnaker for Google Cloud Platform, which runs on the Google Kubernetes Engine, Google is making the install process for the service as easy as a few clicks. Once up and running, the Spinnaker install includes all of the core tools, as well as Deck, the user interface for the service. Users pay for the resources used by the Google Kubernetes Engine, as well as Cloud Memorystore for Redis, Google Cloud Load Balancing and potentially other resources they use in the Google Cloud.


The company has pre-configured Spinnaker for testing and deploying code on Google Kubernetes Engine, Compute Engine and App Engine, though it will also work with any other public or on-prem cloud. It’s also integrated with Cloud Build, Google’s recently launched continuous integration service, and features support for automatic backups, as well as integrated auditing and monitoring with Google’s Stackdriver.

“We want to make sure that the solution is great both for developers and DevOps or SRE teams,” says Matt Duftler, Tech Lead for Google’s Spinnaker effort, in today’s announcement. “Developers want to get moving fast with the minimum of overhead. Platform teams can allow them to do that safely by encoding their recommended practice into Spinnaker, using Spinnaker for GCP to get up and running quickly and start onboarding development teams.”

 


By Frederic Lardinois

We’ll talk even more Kubernetes at TC Sessions: Enterprise with Microsoft’s Brendan Burns and Google’s Tim Hockin

You can’t go to an enterprise conference these days without talking containers — and specifically the Kubernetes container management system. It’s no surprise, then, that we’ll do the same at our inaugural TC Sessions: Enterprise event on September 5 in San Francisco. As we already announced last week, Kubernetes co-founder Craig McLuckie and Aparna Sinha, Google’s director of product management for Kubernetes, will join us to talk about the past, present and future of containers in the enterprise.

In addition, we can now announce that two other Kubernetes co-founders will join us: Google principal software engineer Tim Hockin, who currently works on Kubernetes and the Google Container Engine, and Microsoft distinguished engineer Brendan Burns, who was the lead engineer for Kubernetes during his time at Google.

With this, we’ll have three of the four Kubernetes co-founders onstage to talk about the five-year-old project.

Before joining the Kubernetes effort, Hockin worked on internal Google projects like Borg and Omega, as well as the Linux kernel. On the Kubernetes project, he worked on core features and early design decisions involving networking, storage, node, multi-cluster, resource isolation and cluster sharing.

While his colleagues Craig McLuckie and Joe Beda decided to parlay their work on Kubernetes into a startup, Heptio, which they then successfully sold to VMware for about $550 million, Burns took a different route and joined the Microsoft Azure team three years ago.

I can’t think of a better group of experts to talk about the role that Kubernetes is playing in reshaping how enterprises build software.

If you want a bit of a preview, here is my conversation with McLuckie, Hockin and Microsoft’s Gabe Monroy about the history of the Kubernetes project.

Early-Bird tickets are now on sale for $249; students can grab a ticket for just $75. Book your tickets here before prices go up.


By Frederic Lardinois

AWS remains in firm control of the cloud infrastructure market

It has to be a bit depressing to be in the cloud infrastructure business if your name isn’t Amazon. Sure, there’s a huge, growing market, and the companies chasing Amazon are growing even faster. Yet it seems no matter how fast they grow, Amazon remains a dot on the horizon.

It seems inconceivable that AWS can continue to hold sway over such a large market for so long, but as we’ve pointed out before, it has been able to maintain its position through true first-mover advantage. The other players didn’t even show up until several years after Amazon launched its first service in 2006, and they are paying the price for their failure to see, as Amazon did, how computing was about to change.

They certainly see it now, whether it’s IBM, Microsoft or Google, or Tencent and Alibaba, both of which are growing fast in the China/Asia markets. All of these companies are trying to find the formula to help differentiate themselves from AWS and give them some additional market traction.

Cloud market growth

Interestingly, even though companies have begun to move with increasing urgency to the cloud, the pace of growth slowed a bit in the first quarter to a 42 percent rate, according to data from Synergy Research, but that doesn’t mean the end of this growth cycle is anywhere close.


By Ron Miller

Google Cloud makes some strong moves to differentiate itself from AWS and Microsoft

Google Cloud held its annual customer conference, Google Cloud Next, this week in San Francisco. It had a couple of purposes. For starters, it could introduce customers to new CEO Thomas Kurian for the first time since his hiring at the end of last year. And second, and perhaps more importantly, it could demonstrate that it offers a value proposition distinct from AWS and Microsoft.

Kurian’s predecessor, Diane Greene, was fond of saying that it was still early days for the cloud market, and she’s still right, but while the pie has continued to grow substantially, Google’s share of the market has stayed stubbornly in single digits. It needed to use this week’s conference as at least a springboard to showcase its strengths.

Its lack of commercial cloud market clout has always been a bit of a puzzler. This is Google after all. It runs Google Search and YouTube and Google Maps and Google Docs. These are massive services that rarely go down. You would think being able to run these massive services would translate into massive commercial success, but so far it hasn’t.

Missing ingredients

Even though Greene brought her own considerable enterprise cred to GCP, having been a co-founder at VMware, the company that really made the cloud possible by popularizing the virtual machine, she wasn’t able to significantly change the company’s commercial cloud fortunes.

In a conversation with TechCrunch’s Frederic Lardinois, Kurian talked about missing ingredients like having people to talk to (or maybe a throat to choke). “A number of customers told us ‘we just need more people from you to help us.’ So that’s what we’ll do,” Kurian told Lardinois.

But of course, it’s never one thing when it comes to a market as complex as cloud infrastructure. Sure, you can add more bodies in customer support or sales, or more aggressively pursue high-value enterprise customers, or whatever Kurian has identified as holes in GCP’s approach up until now, but it still requires a compelling story, and Google took a big step toward having the ingredients for a new story this week.

Changing position

Google is trying to position itself in the same way as any cloud vendor going after AWS. It is selling itself as the hybrid cloud company that can help with your digital transformation. It’s a common strategy, but Google did more than throw out the usual talking points this week. It walked the walk, too.

For starters, it introduced Anthos, a single tool to manage your workloads wherever they live, even in a rival cloud. This is a big deal, and if it works as described, it gives that new beefed-up sales team at Google Cloud a stronger story to tell around integration. As my colleague Frederic Lardinois described it:

“So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure,” he wrote.

AWS hasn’t made many friends in the open source community of late, and Google reiterated that it was going to be the platform that is friendly to open source projects. To that end, it announced a number of major partnerships.

Finally, the company took a serious look at verticals, trying to put together packages of Google Cloud services designed specifically for a given vertical. As an example, it put together a package for retailers that included special services to help keep you up and running during peak demand, recommendation tools that suggest that if you like this, you might be interested in these items, contact center AI and other tools specifically geared toward the retail market. You can expect the company will be doing more of this to make the platform more attractive to a given market space.


All of this and more, way too much to summarize in one article, was exactly what Google Cloud needed to do this week. Now comes the hard part. It has come up with some good ideas, and it has to go out and sell them.

Nobody has ever claimed that Google lacks good technology. That has always been an obvious strength, but the company has struggled to translate it into substantial market share. That is Kurian’s challenge. As Greene used to say, in baseball terms, it’s still early innings. And it really still is, but the game is starting to move along, and Kurian needs to get the team moving in the right direction if it expects to be competitive.


By Ron Miller

Google Cloud takes aim at verticals starting with new set of tools for retailers

Google might not be Adobe or Salesforce, but it has a particular set of skills that fit nicely with retailer requirements and can, over time, help improve the customer experience, even if that just means making sure the website or app stays up, even at peak demand. Today, at Google Cloud Next, the company showed off a package of solutions as an example of its vertical strategy.

Just this morning, the company announced a new phase of its partnership with Salesforce, where it’s using its contact center AI tools and chatbot technology in combination with Salesforce data to produce a product that plays to each company’s strengths and helps improve the customer service experience.

But Google didn’t stop with a high-profile partnership. It has a few tricks of its own for retailers, starting with the classic retail scenario: Black Friday. The easiest way to explain the value of cloud scaling is to look at an event like Black Friday, when you know servers are going to be bombarded with traffic.

The cloud has always been good at scaling up for those kinds of events, but it’s not perfect, as Amazon learned last year when it slowed down on Prime Day. Google wants to help companies avoid those kinds of disasters, because a slow or down website translates into lots of lost revenue.

The company offers eCommerce Hosting, designed specifically for online retailers, and it is offering a special premium program, so retailers get “white glove treatment with technical architecture reviews and peak season operations support…” according to the company. In other words, it wants to help these companies avoid disastrous, money-losing results when a site goes down due to demand.

In addition, Google is offering real-time inventory tools, so customers and clerks can know exactly what stock is on hand, and it’s applying its AI expertise here as well, with tools like its Contact Center AI solution to help deliver better customer service experiences and Cloud Vision technology that lets customers point their cameras at a product and see similar or related products. It also offers Recommendations AI, a tool that suggests that if you bought these things, you might like this too, among other tools.
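To give a sense of what the Cloud Vision piece looks like from a developer’s point of view, here is a minimal, hypothetical sketch that labels a product photo using the google-cloud-vision Python client; the retail offering described above presumably layers product search and recommendations on top of calls like this, and the file name and credentials setup are assumptions.

```python
# Minimal sketch: label a product photo with the Cloud Vision API.
# Assumes the google-cloud-vision client library is installed and
# application-default credentials are configured; "product.jpg" is a
# hypothetical local image file.
from google.cloud import vision


def label_product_photo(path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Print whatever labels Vision finds, e.g. "Sneaker", "Footwear", ...
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")


if __name__ == "__main__":
    label_product_photo("product.jpg")
```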

The company counts retail customers like Shopify and Ikea. In addition, the company is working with SI partners like Accenture, Capgemini and Deloitte, and software partners like Salesforce, SAP and Tableau.

All of this is about creating a set of services built specifically for a given vertical to help that industry take advantage of the cloud. It’s one more way for Google Cloud to bring solutions to market and help increase its market share.


By Ron Miller

Google Cloud unveils new identity tools based on zero trust framework

Google Cloud announced some new identity tools today at Google Cloud Next designed to simplify identity and access management within the context of the BeyondCorp Zero Trust security model.

Zero Trust, as the name implies, means you have to assume you can’t trust anyone using your network. In the days before the cloud, you could set up a firewall and with some reasonable degree of certainty assume people inside had permission to be there. The cloud changed that, and Zero Trust was born to help provide a more modern security posture that took that into account.

The company wants to make it easier for developers to build identity into applications without a lot of heavy lifting. It sees identity not just as a way to access applications, but as an integral part of the security layer, especially in the context of the BeyondCorp approach. If you know who a person is, and can understand the context of how they are interacting with you, that can give strong clues as to whether they are who they say they are.

This is about more than protecting your applications; it’s about making sure that your entire system, from your virtual machines to your APIs, is similarly protected. “Over the past few months, we added context-aware access capabilities in Beta to Cloud Identity-Aware Proxy (IAP) and VPC Service Controls to help protect web apps, VMs and Google Cloud Platform (GCP) APIs. Today, we are making these capabilities generally available in Cloud IAP, as well as extending them in Beta to Cloud Identity to help you protect access to G Suite apps,” the company wrote in an introductory blog post.
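To make the Cloud IAP piece a little more concrete, here is a minimal, hypothetical sketch of how a backend application might verify the signed assertion IAP attaches to the requests it proxies, using the google-auth Python library; the audience string is a placeholder whose real value depends on your project number and backend service, so treat the exact values as assumptions.

```python
# Minimal sketch: verify the JWT that Cloud IAP attaches to proxied requests.
# Assumes the google-auth library; EXPECTED_AUDIENCE is a placeholder whose
# real value depends on your project number and backend service ID.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

IAP_PUBLIC_KEYS_URL = "https://www.gstatic.com/iap/verify/public_key"
EXPECTED_AUDIENCE = "/projects/PROJECT_NUMBER/global/backendServices/SERVICE_ID"


def verify_iap_jwt(iap_jwt: str) -> dict:
    """Return the decoded claims if the IAP assertion checks out."""
    return id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=EXPECTED_AUDIENCE,
        certs_url=IAP_PUBLIC_KEYS_URL,
    )

# In a web framework, the assertion arrives on the request header
# "x-goog-iap-jwt-assertion"; pass its value to verify_iap_jwt().
```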


This Context Aware Access layer protects all of these areas across the cloud. “Context-aware access allows you to define and enforce granular access to apps and infrastructure based on a user’s identity and the context of their request. This can help increase your organization’s security posture while giving users an easy way to more securely access apps or infrastructure resources, from virtually any device, anywhere,” the company wrote.

The G Suite protection is in Beta, but the rest is generally available starting today.


By Ron Miller

Google Cloud announces Traffic Director, a networking management tool for service mesh

With each new set of technologies comes a new set of terms. In the containerized world, applications are broken down into discrete pieces, or microservices. As these services proliferate, they create a service mesh: a network of services and the interactions between them. Each new technology like this requires a management layer, especially so that network administrators can understand and control the new concept, in this case the service mesh.

Today at Google Cloud Next, the company announced the Beta of Traffic Director for open service mesh, specifically to help network managers understand what’s happening in their service mesh.

“To accelerate adoption and reduce the toil of managing service mesh, we’re excited to introduce Traffic Director, our new GCP-managed, enterprise-ready configuration and traffic control plane for service mesh that enables global resiliency, intelligent load balancing, and advanced traffic control capabilities like canary deployments,” Brad Calder, VP of engineering for technical infrastructure at Google Cloud, wrote in a blog post introducing the tool.

Traffic Director provides a way for operations teams to deploy a service mesh on their networks and have more control over how it works and interacts with the rest of the system. The tool works with virtual machines via Compute Engine on GCP, or, in a containerized approach, with GKE on GCP.

The product is just launching into Beta today, but the road map includes additional security features and support for hybrid environments, and eventually integration with Anthos, the hybrid management tool the company introduced yesterday at Google Cloud Next.


By Ron Miller

Apigee jumps on hybrid bandwagon with new API for hybrid environments

This year at Google Cloud Next, the theme is all about supporting hybrid environments, so it shouldn’t come as a surprise that Apigee, the API company it bought in 2016 for $265 million, is also getting into the act. Today, Apigee announced the Beta of Apigee Hybrid, a new product designed for hybrid environments.

Amit Zavery, who recently joined Google Cloud after many years at Oracle, and Nandan Sridhar describe the new product in a joint blog post as “a new deployment option for the Apigee API management platform that lets you host your runtime anywhere—in your data center or the public cloud of your choice.”

As with Anthos, the company’s approach to hybrid management announced earlier today, the idea is to have a single way to manage your APIs no matter where you choose to run them.

“With Apigee hybrid, you get a single, full-featured API management solution across all your environments, while giving you control over your APIs and the data they expose and ensuring a unified strategy across all APIs in your enterprise,” Zavery and Sridhar wrote in the blog post announcing the new approach.

The announcement is part of an overall strategy by the company to support a customer’s approach to computing across a range of environments, often referred to as hybrid cloud. In the Cloud Native world, the idea is to present a single fabric to manage your deployments, regardless of location.

This appears to be an extension of that idea, which makes sense given that Google developed and open-sourced Kubernetes, which is at the forefront of containerization and Cloud Native computing. While this isn’t pure Cloud Native computing, it keeps true to that ethos, and it fits within the scope of Google Cloud’s approach to computing in general, especially as it is being defined at this year’s conference.


By Ron Miller

Google Cloud Run brings serverless and containers together

Two of the biggest trends in application development in recent years have been the rise of serverless and containerization. Today at Google Cloud Next, the company announced a new product called Cloud Run that is designed to bring the two together. At the same time, the company also announced Cloud Run for GKE, which is specifically designed to run on Google’s version of Kubernetes.

Oren Teich, director of product management for serverless, says these products came out of discussions with customers. As he points out, developers like the flexibility and agility they get using serverless architecture, but have been looking for more than just compute resources. They want to get access to the full stack, and to that end the company is announcing Cloud Run.

“Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We’re taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it’s end-to-end managed,” Teich explained.

As for the GKE tool, it provides the same kinds of benefits, except for developers running their containers on Google’s GKE version of Kubernetes. Keep in mind, developers could be using any version of Kubernetes their organizations happen to have chosen, so it’s not a given that they will be using Google’s flavor of Kubernetes.

“What this means is that a developer can take the exact same experience, the exact same code they’ve written — and they have the gcloud command line, the same UI and our console, and they can just, with one click, target the destination they want,” he said.

All of this is made possible through yet another open source project the company introduced last year called Knative. “Cloud Run is based on Knative, an open API and runtime environment that lets you run your serverless workloads anywhere you choose — fully managed on Google Cloud Platform, on your GKE cluster or on your own self-managed Kubernetes cluster,” Teich and Eyal Manor, VP of engineering, wrote in a blog post introducing Cloud Run.

Serverless, as you probably know by now, is a bit of a misnomer. It’s not really taking away servers, but it is eliminating the need for developers to worry about them. Instead of developers loading their application onto a particular virtual machine, the cloud provider, in this case Google, provisions the exact level of resources required to run an operation. Once that’s done, those resources go away, so you only pay for what you use at any given moment.
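To make the “container in, URL out” idea concrete, here is a minimal, hypothetical sketch of the kind of workload Cloud Run expects: a container whose process serves HTTP on the port passed in through the PORT environment variable. The handler and default port are illustrative assumptions; packaging this into a Docker image and deploying it is what produces the managed URL Teich describes.

```python
# Minimal sketch of a Cloud Run-style service: an HTTP server that listens
# on the port provided via the PORT environment variable (8080 by default).
# Only the standard library is used; the response text is illustrative.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))
    # The platform scales instances of this container up and down on demand,
    # so the process only needs to serve requests while it is running.
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```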


By Ron Miller

Google is bringing two new data centers online in 2020

At Google Cloud Next today, the company announced it is bringing two brand new data centers online in the 2020 timeframe with one in Seoul, South Korea and one in Salt Lake City, Utah.

The company, like many of its web-scale peers, has had the data center building pedal to the metal over the last several years. It has grown to 15 regions, with each region hosting multiple zones, for a total of 45 zones. In all, the company has a presence in 13 countries and says it has invested an impressive $47 billion (with a B) in CAPEX from 2016 to 2018.

Google Data Center Map. Photo: Google

“We’re going to be announcing the availability in early 2020 of Seoul, South Korea. So we are announcing a region there with three zones for customers to build their applications. Again, customers, either multinationals that are looking to serve their customers in that market or local customers that are looking to go global. This really helps address their needs and allows them to serve the customers in the way that they want to,” Dominic Preuss, director of product management, said.

He added, “Similarly, Salt Lake City is our third region in the western United States, along with Oregon and Los Angeles. And so it allows developers to build distributed applications across multiple regions in the western United States.”

In addition, the company announced that its new data center in Osaka, Japan is expected to come online sometime in the coming weeks. One in Jakarta, Indonesia, currently under construction, is expected to come online in the first half of next year.


By Ron Miller

Google Cloud challenges AWS with new open-source integrations

Google today announced that it has partnered with a number of top open-source data management and analytics companies to integrate their products into its Google Cloud Platform and offer them as managed services operated by its partners. The partners here are Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j and Redis Labs.

The idea here, Google says, is to provide users with a seamless user experience and the ability to easily leverage these open-source technologies in Google’s cloud. But there is a lot more at play here, even though Google never quite says so. That’s because Google’s move here is clearly meant to contrast its approach to open-source ecosystems with Amazon’s. It’s no secret that Amazon’s AWS cloud computing platform has a reputation for taking some of the best open-source projects and then forking those and packaging them up under its own brand, often without giving back to the original project. There are some signs that this is changing, but a number of companies have recently taken action and changed their open-source licenses to explicitly prevent this from happening.

That’s where things get interesting, because those companies include Confluent, Elastic, MongoDB, Neo4j and Redis Labs — and those are all partnering with Google on this new project, though it’s worth noting that InfluxData is not taking this new licensing approach and that while DataStax uses lots of open-source technologies, its focus is very much on its enterprise edition.

“As you are aware, there has been a lot of debate in the industry about the best way of delivering these open-source technologies as services in the cloud,” Manvinder Singh, the head of infrastructure partnerships at Google Cloud, said in a press briefing. “Given Google’s DNA and the belief that we have in the open-source model, which is demonstrated by projects like Kubernetes, TensorFlow, Go and so forth, we believe the right way to solve this is to work closely together with companies that have invested their resources in developing these open-source technologies.”

So while AWS takes these projects and then makes them its own, Google has decided to partner with these companies. While Google and its partners declined to comment on the financial arrangements behind these deals, chances are we’re talking about some degree of profit-sharing here.

“Each of the major cloud players is trying to differentiate what it brings to the table for customers, and while we have a strong partnership with Microsoft and Amazon, it’s nice to see that Google has chosen to deepen its partnership with Atlas instead of launching an imitation service,” Sahir Azam, the senior VP of Cloud Products at MongoDB, told me. “MongoDB and GCP have been working closely together for years, dating back to the development of Atlas on GCP in early 2017. Over the past two years running Atlas on GCP, our joint teams have developed a strong working relationship and support model for supporting our customers’ mission critical applications.”

As for the actual functionality, the core principle here is that Google will deeply integrate these services into its Cloud Console; for example, similar to what Microsoft did with Databricks on Azure. These will be managed services and Google Cloud will handle the invoicing and the billings will count toward a user’s Google Cloud spending commitments. Support will also run through Google, so users can use a single service to manage and log tickets across all of these services.

Redis Labs CEO and co-founder Ofer Bengal echoed this. “Through this partnership, Redis Labs and Google Cloud are bringing these innovations to enterprise customers, while giving them the choice of where to run their workloads in the cloud,” he said. “Customers now have the flexibility to develop applications with Redis Enterprise using the fully integrated managed services on GCP. This will include the ability to manage Redis Enterprise from the GCP console, provisioning, billing, support, and other deep integrations with GCP.”


By Frederic Lardinois

Cloud Foundry ❤ Kubernetes

Cloud Foundry, the open source platform-as-a-service project that more than half of the Fortune 500 companies use to help them build, test and deploy their applications, launched well before Kubernetes existed. Because of this, the team ended up building Diego, its own container management service. Unsurprisingly, given the popularity of Kubernetes, which has become somewhat of the de facto standard for container orchestration, a number of companies in the Cloud Foundry ecosystem started looking into how they could use Kubernetes to replace Diego.

The result of this is Project Eirini, which was first proposed by IBM. As the Cloud Foundry Foundation announced today, Project Eirini now passes the core functional tests the team runs to validate the software releases of its application runtime, the core Cloud Foundry service that deploys and manages applications (if that’s a bit confusing, don’t even think about the fact that there’s also a Cloud Foundry Container Runtime, which already uses Kubernetes, but which is mostly meant to give enterprises a single platform for running their own applications and pre-built containers from third-party vendors).

“That’s a pretty big milestone,” Cloud Foundry Foundation CTO Chip Childers told me. “The project team now gets to shift to a mode where they’re focused on hardening the solution and making it a bit more production-ready. But at this point, early adopters are also starting to deploy that [new] architecture.”

Childers stressed that while the project was incubated by IBM, which has been a long-time backer of the overall Cloud Foundry project, Google, Pivotal and others are now also contributing and have dedicated full-time engineers working on the project. In addition, SUSE, SAP and IBM are also active in developing Eirini.

Eirini started out as an incubation project, and while few doubted that this would be a successful project, there was a bit of confusion around how Cloud Foundry would move forward now that it essentially had two container engines for running its core service. At the time, there was even some concern that the project could fork. “I pushed back at the time and said: no, this is the natural exploration process that open source communities need to go through,” Childers said. “What we’re seeing now is that with Pivotal and Google stepping in, that’s a very clear sign that this is going to be the go-forward architecture for the future of the Cloud Foundry Application Runtime.”

A few months ago, by the way, Kubernetes was still missing a few crucial pieces the Cloud Foundry ecosystem needed to make this move. Childers specifically noted that Windows support — something the project’s enterprise users really need — was still problematic and lacked some important features. In recent releases, though, the Kubernetes team fixed most of these issues and improved its Windows support, rendering those issues moot.

What does all of this mean for Diego? Childers noted that the community isn’t at a point where it’ll halt development of that tool. At some point, though, it seems likely that the community will decide that it’s time to start the transition period and make the move to Kubernetes official.

It’s worth noting that IBM today announced its own preview of Eirini in its Cloud Foundry Enterprise Environment and that the latest version of SUSE’s Cloud Foundry-based Application Platform includes a similar preview as well.

In addition, the Cloud Foundry Foundation, which is hosting its semi-annual developer conference in Philadelphia this week, also announced that it has certified its first two systems integrators, Accenture and HCL, as part of its recently launched certification program for companies that work in the Cloud Foundry ecosystem and have at least ten certified developers on their teams.


By Frederic Lardinois

Pixeom raises $15M for its software-defined edge computing platform

Pixeom, a startup that offers a software-defined edge computing platform to enterprises, today announced that it has raised a $15M funding round from Intel Capital, National Grid Partners and previous investor Samsung Catalyst Fund. The company plans to use the new funding to expand its go-to-market capacity and invest in product development.

If the Pixeom name sounds familiar, that may be because you remember it as a Raspberry Pi-based personal cloud platform. Indeed, that’s the service the company first launched back in 2014. It quickly pivoted to an enterprise model, though. As Pixeom CEO Sam Nagar told me, that pivot came about after a conversation the company had with Samsung about adopting its product for that company’s needs. In addition, it was also hard to find venture funding. The original Pixeom device allowed users to set up their own personal cloud storage and other applications at home. While there is surely a market for these devices, especially among privacy-conscious tech enthusiasts, it’s not massive, especially as users became more comfortable with storing their data in the cloud. “One of the major drivers [for the pivot] was that it was actually very difficult to get VC funding in an industry where the market trends were all skewing towards the cloud,” Nagar told me.

At the time of its launch, Pixeom also based its technology on OpenStack, the massive open source project that helps enterprises manage their own data centers, which isn’t exactly known as a service that can easily be run on a single machine, let alone a low-powered one. Today, Pixeom uses containers to ship and manage its software on the edge.

What sets Pixeom apart from other edge computing platforms is that it can run on commodity hardware. There’s no need to buy a specific hardware configuration to run the software, unlike Microsoft’s Azure Stack or similar services. That makes it significantly more affordable to get started and allows potential customers to reuse some of their existing hardware investments.

Pixeom brands this capability as ‘software-defined edge computing’ and there is clearly a market for this kind of service. While the company hasn’t made a lot of waves in the press, more than a dozen Fortune 500 companies now use its services. With that, the company now has revenues in the double-digit millions and its software manages more than a million devices worldwide.

As is so often the case in the enterprise software world, these clients don’t want to be named, but Nagar tells me that they include one of the world’s largest fast food chains, for example, which uses the Pixeom platform in its stores.

On the software side, Pixeom is relatively cloud-agnostic. One nifty feature of the platform is that it is API-compatible with Google Cloud Platform, AWS and Azure and offers an extensive subset of those platforms’ core storage and compute services, including a set of machine learning tools. Pixeom’s implementation may be different, but for an app, the edge endpoint on a Pixeom machine reacts the same way as its equivalent endpoint on AWS, for example.
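To illustrate what that kind of API compatibility means in practice, here is a minimal, hypothetical sketch in Python: the same S3-style client code can target AWS or a local edge endpoint just by swapping the endpoint URL. The endpoint address, bucket name and credentials below are invented placeholders, and Pixeom’s actual endpoints and supported operations may differ.

```python
# Minimal sketch: the same S3-style client code can talk to AWS or to an
# API-compatible edge endpoint simply by changing endpoint_url.
# The URL, bucket and credentials below are hypothetical placeholders.
import boto3


def make_client(endpoint_url=None):
    # With endpoint_url=None, boto3 talks to AWS; with a URL, it talks to
    # whatever S3-compatible service is listening there.
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id="EDGE_ACCESS_KEY",
        aws_secret_access_key="EDGE_SECRET_KEY",
        region_name="us-east-1",
    )


if __name__ == "__main__":
    edge = make_client("http://edge-gateway.local:9000")  # hypothetical edge node
    edge.put_object(Bucket="telemetry", Key="reading.json", Body=b'{"temp": 21.5}')
    print(edge.get_object(Bucket="telemetry", Key="reading.json")["Body"].read())
```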

Until now, Pixeom mostly financed its expansion — and the salary of its over 90 employees — from its revenue. It only took a small funding round when it first launched the original device (together with a Kickstarter campaign). Technically, this new funding round is part of that early round, so depending on how you want to look at it, we’re either talking about a very large seed round or a Series A round.


By Frederic Lardinois

Google’s managed hybrid cloud platform is now in beta

Last July, at its Cloud Next conference, Google announced the Cloud Services Platform, its first real foray into bringing its own cloud services into the enterprise data center as a managed service. Today, the Cloud Services Platform (CSP) is launching into beta.

It’s important to note that the CSP isn’t — at least for the time being — Google’s way of bringing all of its cloud-based developer services to the on-premises data center. In other words, this is a very different project from something like Microsoft’s Azure Stack. Instead, the focus is on the Google Kubernetes Engine, which allows enterprises to then run their applications in both their own data centers and on virtually any cloud platform that supports containers.

As Google Cloud engineering director Chen Goldberg told me, the idea here is to help enterprises innovate and modernize. “Clearly, everybody is very excited about cloud computing, on-demand compute and managed services, but customers have recognized that the move is not that easy,” she said, and noted that the vast majority of enterprises are adopting a hybrid approach. And while containers are obviously still a very new technology, she feels good about this bet because most enterprises are already adopting containers and Kubernetes — and they are doing so at exactly the same time as they are adopting cloud and especially hybrid clouds.

It’s important to note that CSP is a managed platform. Google handles all of the heavy lifting like upgrades and security patches. And for enterprises that need an easy way to install some of the most popular applications, the platform also supports Kubernetes applications from the GCP Marketplace.

As for the tech itself, Goldberg stressed that this isn’t just about Kubernetes. The service also uses Istio, for example, the increasingly popular service mesh that makes it easier for enterprises to secure and control the flow of traffic and API calls between their applications.

With today’s release, Google is also launching its new CSP Config Management tool to help users create multi-cluster policies and set up and enforce access controls, resource quotas and more. CSP also integrates with Google’s Stackdriver Monitoring service and continuous delivery platforms.

“On-prem is not easy,” Goldberg said, and given that this is the first time the company is really supporting software in a data center that is not its own, that’s probably an understatement. But Google also decided that it didn’t want to force users into a specific set of hardware specifications like Azure Stack does, for example. Instead, CSP sits on top of VMware’s vSphere server virtualization platform, which most enterprises already use in their data centers anyway. That surely simplifies things, given that this is a very well-understood platform.


By Frederic Lardinois

Google doubles down on its Asylo confidential computing framework

Last May, Google introduced Asylo, an open source framework for confidential computing, a technique favored by many of the big cloud vendors because it allows you to set up trusted execution environments that are shielded from the rest of the (potentially untrusted) system. Workloads and their data basically sit in a trusted enclave that adds another layer of protection against network and operating system vulnerabilities.

That’s not a new concept, but as Google argues, it has been hard to adopt. “Despite this promise, the adoption of this emerging technology has been hampered by dependence on specific hardware, complexity and the lack of an application development tool to run in confidential computing environments,” Google Cloud Engineering Director Jason Garms and Senior Product Manager Nelly Porter write in a blog post today. The promise of the Asylo framework, as you can probably guess, is to make confidential computing easy.

Asylo makes it easier to build applications that can run in these enclaves and can use various software- and hardware-based security back ends like Intel’s SGX and others. Once an app has been ported to support Asylo, you should also be able to take that code with you and run it on any other Asylo-supported enclave.

Right now, though, many of these technologies and practices around confidential computing remain in flux. Google notes that there are no set design patterns for building applications that use the Asylo API and run in these enclaves, for example. The different hardware manufacturers also don’t necessarily work together to ensure their technologies are interoperable.

“Together with the industry, we can work toward more transparent and interoperable services to support confidential computing apps, for example, making it easy to understand and verify attestation claims, inter-enclave communication protocols, and federated identity systems across enclaves,” write Garms and Porter.

And to do that, Google is launching its Confidential Computing Challenge (C3) today. The idea here is to have developers create novel use cases for confidential computing — or to advance the current state of the technologies. If you do that and win, you’ll get $15,000 in cash, $5,000 in Google Cloud Platform credits and an undisclosed hardware gift (a Pixelbook or Pixel phone, if I had to guess).

In addition, Google now also offers developers three hands-on labs that teach how to build apps using Asylo’s tools. Those are free for the first month if you use the code in Google’s blog post.


By Frederic Lardinois