IBM confirms layoffs are happening, but won’t provide details

IBM confirmed overnight reports that it is conducting layoffs, but wouldn’t provide details about the locations, departments or number of employees involved. The company framed the move as swapping out employees for people with the skills it needs most as it tries to regroup under new CEO Arvind Krishna.

“IBM’s work in a highly competitive marketplace requires flexibility to constantly remix to high-value skills, and our workforce decisions are made in the long-term interests of our business,” an IBM spokesperson told TechCrunch.

Patrick Moorhead, principal analyst at Moor Insights & Strategy, says he’s hearing the layoffs are hitting across the business. “I’m hearing it’s a balancing act between business units. IBM is moving as many resources as it can to the cloud. Essentially, you lay off some of the people without the skills you need and who can’t be re-educated and you bring in people with certain skill sets. So not a net reduction in headcount,” Moorhead said.

It’s worth noting that IBM used a similar argument back in 2015 when it reportedly had layoffs. While there is no official number, Bloomberg is reporting that today’s number is in the thousands.

Holger Mueller, an analyst at Constellation Research, says that IBM is in a tough spot. “The bets of the past have not paid off. IBM Cloud as IaaS is gone, Watson did not deliver and Blockchain is too slow to keep thousands of consultants occupied,” he said.

Mueller adds that the company could also be feeling the impact of having workers at home instead of in the field. “Enterprises do not know and have not learnt how to do large software projects remotely. […] And for now enterprises are slowing down on projects as they are busy with reopening plans,” he said.

The news comes against the backdrop of companies large and small laying off large numbers of employees as the pandemic takes its toll on the workforce. IBM was probably due for a workforce reduction regardless of the current macro situation, as Krishna tries to right the financial ship.

The company has struggled in recent years, and with the acquisition of Red Hat for $34 billion in 2018, it is hoping to find its way as a more open hybrid cloud option. It apparently wants to focus on skills that can help it get there.

The company indicated that it would continue to subsidize medical expenses for laid off employees through June 2021, so there is that.


By Ron Miller

Incoming IBM CEO Arvind Krishna faces monumental challenges on multiple fronts

Arvind Krishna is not the only CEO to step into a new job this week, but he is the only one charged with helping turn around one of the world’s most iconic companies. Adding to the degree of difficulty, he took the role in the midst of a global pandemic and economic crisis. No pressure or anything.

IBM has struggled in recent years to find its identity as technology has evolved rapidly. While Krishna’s predecessor Ginni Rometty left a complex legacy as she worked to bring IBM into the modern age, she presided over a dreadful string of 22 straight quarters of declining revenue, a record Krishna surely hopes to avoid.

Strong headwinds

To her credit, under Rometty the company tried hard to pivot to more modern customer requirements, like cloud, artificial intelligence, blockchain and security. While the results weren’t always there, Krishna acknowledged in an email employees received on his first day that she left something to build on.

“IBM has already built enduring platforms in mainframe, services and middleware. All three continue to serve our clients. I believe now is the time to build a fourth platform in hybrid cloud. An essential, ubiquitous hybrid cloud platform our clients will rely on to do their most critical work in this century. A platform that can last even longer than the others,” he wrote.

But Ray Wang, founder and principal analyst at Constellation Research, says the market headwinds the company faces are real, and it’s going to take some strong leadership to get customers to choose IBM over its primary cloud infrastructure competitors.

“His top challenge is to restore the trust of clients that IBM has the latest technology and solutions and is reinvesting enough in innovation that clients want to see. He has to show that IBM has the same level of innovation and engineering talent as the hyperscalers Google, Microsoft and Amazon,” Wang explained.

Cultural transformation


By Ron Miller

Volterra announces $50M investment to manage apps in hybrid environment

Volterra is an early stage startup that has been quietly working on a comprehensive solution to help companies manage applications in hybrid environments. The company emerged from stealth today with a $50 million investment and a set of products.

Investors include Khosla Ventures and Mayfield along with strategic investors M12 (Microsoft’s venture arm), Itochu Technology Ventures and Samsung NEXT. The company, which was founded in 2017, already has 100 employees and more than 30 customers.

What has attracted these investors and customers is a full stack solution that includes both hardware and software to manage applications in the cloud or on prem. Volterra founder and CEO Ankur Singla says when he was at his previous company, Contrail Systems, which was acquired by Juniper Networks in 2012 for $176 million, he saw first-hand how large companies were struggling with the transition to hybrid.

“The big problem we saw was that building and operating applications at scale is a really hard problem. They were adopting multiple hybrid cloud strategies, and none of them solved the problem of unifying the application and the infrastructure layer, so that the application developers and DevOps teams don’t have to worry about that,” Singla explained.

He says the Volterra solution includes three main products, VoltStack​, VoltMesh and VoltConsole to help solve this scaling and management problem. As Volterra describes the total solution, “Volterra has innovated a consistent, cloud-native environment that can be deployed across multiple public clouds and edge sites — a distributed cloud platform. Within this SaaS-based offering, Volterra integrates a broad range of services that have normally been siloed across many point products and network or cloud providers.” This includes not only the single management plane, but security, management and operations components.

Diagram: Volterra

The money has come over a couple of rounds, helping to build the solution to this point, and it required a complex combination of hardware and software to get here. The company is hoping that organizations looking for a cloud-native approach to large-scale applications, such as industrial automation, will adopt this approach.


By Ron Miller

With $34B Red Hat deal closed, IBM needs to execute now

In a summer surprise this week, IBM announced it had closed its $34 billion blockbuster deal to acquire Red Hat. The deal, which was announced in October, was expected to take a year to clear all of the regulatory hurdles, but U.S. and EU regulators moved surprisingly quickly. For IBM, the future starts now, and it needs to find a way to ensure that this works.

There are always going to be layers of complexity in a deal of this scope, as IBM moves to incorporate Red Hat into its product family quickly and get the company moving. It’s never easy combining two large organizations, but with IBM mired in single-digit cloud market share and years of sluggish growth, it is hoping that Red Hat will give it a strong hybrid cloud story that can help begin to alter its recent fortunes.

As Box CEO (and IBM partner) Aaron Levie tweeted at the time the deal was announced, “Transformation requires big bets, and this is a good one.” While the deal is very much about transformation, we won’t know for some time if it’s a good one.

Transformation blues


By Ron Miller

Yellowbrick Data raises $81M Series C for hybrid data warehouse

There’s lots of data in the world these days, and there are a number of companies vying to store that data in data warehouses or lakes or whatever they choose to call them. Old-school companies have tended to be on prem, while newer ones like Snowflake are strictly in the cloud. Yellowbrick Data wants to play the hybrid angle, and today it got a healthy $81 million Series C to continue its efforts.

The round was led by DFJ Growth with help from Next47, Third Point Ventures, Menlo Ventures, GV (formerly Google Ventures), Threshold Ventures and Samsung. New investors joining the round included IVP and BMW i Ventures. Today’s investment brings the total raised to a brisk $173 million.

Yellowbrick sees a world that many of the public cloud vendors like Microsoft and Google see, one where enterprise companies will live in a hybrid world in which some data and applications stay on prem and some in the cloud. The company believes this situation will be in place for the foreseeable future, so its product plays to that hybrid angle, where your data can be on prem or in the cloud.

The company did not want to discuss valuation despite the large amount it has raised. Neither did it want to discuss revenue growth rates, other than to say that it is growing at a healthy rate.

Randy Glein, partner at DFJ Growth, did say one of the things that attracted his company to invest in Yellowbrick was its momentum along with the technology, which in his view, provides a more modern way to build data warehouses. “Yellowbrick is quickly providing a new generation of ultra-high performance data warehouse capabilities for large enterprises. The technology is a step function improvement on every dimension compared to legacy solutions, helping modern enterprises digest and interpret massive data workloads in a fraction of the time at a fraction of the cost,” he said in a statement.

It’s interesting that a company with just 100 employees would require this kind of money, but as company COO Jason Snodgress told TechCrunch, it costs a lot of money to build out a data warehouse. He’s not wrong. Snowflake, a company that’s building a cloud data warehouse, has raised almost a billion dollars.


By Ron Miller

AWS remains in firm control of the cloud infrastructure market

It has to be a bit depressing to be in the cloud infrastructure business if your name isn’t Amazon. Sure, there’s a huge, growing market, and the companies behind Amazon are growing even faster. Yet it seems no matter how fast they grow, Amazon remains a dot on the horizon.

It seems inconceivable that AWS can continue to hold sway over such a large market for so long, but as we’ve pointed out before, it has been able to maintain its position through true first-mover advantage. The other players didn’t even show up until several years after Amazon launched its first service in 2006, and they are paying the price for their failure to see the way computing would change the way Amazon did.

They certainly see it now, whether it’s IBM, Microsoft or Google, or Tencent and Alibaba, both of which are growing fast in the China/Asia markets. All of these companies are trying to find the formula to help differentiate themselves from AWS and give them some additional market traction.

Cloud market growth

Interestingly, even though companies have begun to move with increasing urgency to the cloud, the pace of growth slowed a bit in the first quarter to a 42 percent rate, according to data from Synergy Research, but that doesn’t mean the end of this growth cycle is anywhere close.


By Ron Miller

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition announced last year that is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then as you begin to shift to the cloud, you can move to Azure Red Hat OpenShift — such a catchy name — without any fuss, because you have the same management tools you’re used to using.

As Red Hat becomes part of IBM, it sees it as more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers as it holds its final customer conference as an independent company. Red Hat’s executive vice president and president of products and technologies certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft, understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.
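The scaling idea Lardinois describes, sizing a pool of event-driven workers to the backlog and dropping to zero when it is empty, can be sketched in a few lines. This is an illustrative sketch of the general approach, not KEDA’s actual code; the function and parameter names here are assumptions.

```python
import math

def desired_replicas(queue_length, msgs_per_replica, min_replicas=0, max_replicas=10):
    """Size a pool of event-driven workers to the current backlog,
    in the spirit of Kubernetes-based event-driven autoscaling."""
    if queue_length == 0:
        # No pending events: scale the workload down to its floor (often zero).
        return min_replicas
    wanted = math.ceil(queue_length / msgs_per_replica)
    # Clamp between the configured floor and ceiling.
    return max(min_replicas, min(wanted, max_replicas))

print(desired_replicas(0, 5))    # empty queue scales to zero
print(desired_replicas(12, 5))   # backlog of 12 wants 3 workers
print(desired_replicas(500, 5))  # large backlog is capped at max_replicas
```

The autoscaler, not the application, watches the event source (a queue, a topic, a stream) and applies logic of this shape, which is what lets workloads idle at zero cost between bursts of events.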

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations too, including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.


By Ron Miller

Apigee jumps on hybrid bandwagon with new API for hybrid environments

This year at Google Cloud Next, the theme is all about supporting hybrid environments, so it shouldn’t come as a surprise that Apigee, the API company Google bought in 2016 for $265 million, is also getting into the act. Today, Apigee announced the beta of Apigee Hybrid, a new product designed for hybrid environments.

Amit Zavery, who recently joined Google Cloud after many years at Oracle, and Nandan Sridhar describe the new product in a joint blog post as “a new deployment option for the Apigee API management platform that lets you host your runtime anywhere—in your data center or the public cloud of your choice.”

As with Anthos, the company’s approach to hybrid management announced earlier today, the idea is to have a single way to manage your APIs no matter where you choose to run them.

“With Apigee hybrid, you get a single, full-featured API management solution across all your environments, while giving you control over your APIs and the data they expose and ensuring a unified strategy across all APIs in your enterprise,” Zavery and Sridhar wrote in the blog post announcing the new approach.

The announcement is part of an overall strategy by the company to support a customer’s approach to computing across a range of environments, often referred to as hybrid cloud. In the Cloud Native world, the idea is to present a single fabric to manage your deployments, regardless of location.

This appears to be an extension of that idea, which makes sense given that Google was the first company to develop and open source Kubernetes, which is at the forefront of containerization and Cloud Native computing. While this isn’t pure Cloud Native computing, it is keeping true to its ethos and it fits in the scope of Google Cloud’s approach to computing in general, especially as it is being defined at this year’s conference.


By Ron Miller

Google’s hybrid cloud platform is coming to AWS and Azure

Google’s Cloud Services Platform for managing hybrid clouds that span on-premise data centers and the Google cloud, is coming out of beta today. The company is also changing the product’s name to Anthos, a name that either refers to a lost Greek tragedy, the name of an obscure god in the Marvel universe, or rosemary. That by itself would be interesting but minor news. What makes this interesting is that Google also today announced that Anthos will run on third-party clouds as well, including AWS and Azure.

“We will support Anthos on AWS and Azure as well, so people get one way to manage their application, and that one way works across their on-premise environments and all other clouds,” Google’s senior VP for its technical infrastructure, Urs Hölzle, explained in a press conference ahead of today’s announcement.

So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure.

“You can use one consistent approach — one open-source based approach — across all environments,” Hölzle said. “I can’t really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it’s not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds — and actually on-premise environments, too — look the same.”

Anthos/Google Cloud Services Platform is based on the Google Kubernetes Engine, as well as other open source projects like the Istio service mesh. It’s also hardware agnostic, meaning that users can take their current hardware and run the service on top of that without having to immediately invest in new servers.

Why is Google doing this? “We hear from our customers that multi-cloud and hybrid is really an acute pain point,” Hölzle said. He noted that containers are the enabling technology for this but that few enterprises have developed a unifying strategy to manage these deployments and that it takes expertise in all major clouds to get the most out of them.

Enterprises already have major investments in their infrastructure and created relationships with their vendors, though, so it’s no surprise that Google is launching Anthos with over 30 major hardware and software partners that range from Cisco to Dell EMC, HPE and VMware, as well as application vendors like Confluent, Datastax, Elastic, Portworx, Tigera, Splunk, GitLab, MongoDB and others.

Anthos is a subscription-based service, with list prices starting at $10,000/month per 100 vCPU block. Enterprise prices tend to be up for negotiation, though, so many customers will likely pay less.

It’s one thing to use a service like this for new applications, but many enterprises already have plenty of line-of-business tools that they would like to bring to the cloud as well. For them, Google is launching the first beta of Anthos Migrate today. This service will auto-migrate VMs from on-premises or other clouds into containers in the Google Kubernetes Engine. The promise here is that this is essentially an automatic process and once the container is on Google’s platform, you’ll be able to use all of the other features that come with the Anthos platform, too.

Google’s Hölzle noted that the emphasis here was on making this migration as easy as possible. “There’s no manual effort there,” he said.


By Frederic Lardinois

Rackspace announces it has laid off 200 workers

Rackspace, the hosted private cloud vendor, let go around 200 workers, or 3 percent of its worldwide workforce of 6,600 employees, this week. The company says it’s part of a recalibration in which it is trying to find workers who are better suited to its current business approach.

A Rackspace spokesperson told TechCrunch that it is “a stable and profitable company.” In fact, it hired 1,500 employees in 2018 and currently has 200 job openings. “We continue to invest in our business based on market opportunity and our customers’ needs – we take actions on an ongoing basis in some areas where we are over-invested and hire in areas where we are under-invested,” a company spokesperson explained.

The company, which went public in 2008 and private again for $4.3 billion in 2016, has struggled in a cloud market dominated by giants like Amazon, Microsoft and Google, but according to Synergy Research, a firm that keeps close watch on the cloud market, it is one of the top 3 companies in the Hosted Private Cloud category.

It’s worth noting that the top company in this category is IBM, and Rackspace could be a good target for Big Blue if it wanted to use its checkbook to get a boost in market share. IBM is in third or fourth place in the cloud infrastructure market, depending on whose numbers you look at, but it could move the needle a bit by buying a company like Rackspace. Neither company is suggesting this, however, and IBM bought Red Hat at the end of last year for $34 billion, making it less likely it will be in a spending mood this year.

For now the layoffs appear to be a company tweaking its workforce to meet current market conditions, but whatever the reason, it’s never a happy day when people lose their jobs.


By Ron Miller

Google’s managed hybrid cloud platform is now in beta

Last July, at its Cloud Next conference, Google announced the Cloud Services Platform, its first real foray into bringing its own cloud services into the enterprise data center as a managed service. Today, the Cloud Services Platform (CSP) is launching into beta.

It’s important to note that the CSP isn’t — at least for the time being — Google’s way of bringing all of its cloud-based developer services to the on-premises data center. In other words, this is a very different project from something like Microsoft’s Azure Stack. Instead, the focus is on the Google Kubernetes Engine, which allows enterprises to run their applications in both their own data centers and on virtually any cloud platform that supports containers.

As Google Cloud engineering director Chen Goldberg told me, the idea here is to help enterprises innovate and modernize. “Clearly, everybody is very excited about cloud computing, on-demand compute and managed services, but customers have recognized that the move is not that easy,” she said, noting that the vast majority of enterprises are adopting a hybrid approach. And while containers are obviously still a very new technology, she feels good about this bet because most enterprises are already adopting containers and Kubernetes — and they are doing so at exactly the same time as they are adopting cloud, and especially hybrid clouds.

It’s important to note that CSP is a managed platform. Google handles all of the heavy lifting like upgrades and security patches. And for enterprises that need an easy way to install some of the most popular applications, the platform also supports Kubernetes applications from the GCP Marketplace.

As for the tech itself, Goldberg stressed that this isn’t just about Kubernetes. The service also uses Istio, for example, the increasingly popular service mesh that makes it easier for enterprises to secure and control the flow of traffic and API calls between its applications.

With today’s release, Google is also launching its new CSP Config Management tool to help users create multi-cluster policies and set up and enforce access controls, resource quotas and more. CSP also integrates with Google’s Stackdriver Monitoring service and continuous delivery platforms.

“On-prem is not easy,” Goldberg said, and given that this is the first time the company is really supporting software in a data center that is not its own, that’s probably an understatement. But Google also decided that it didn’t want to force users into a specific set of hardware specifications like Azure Stack does, for example. Instead, CSP sits on top of VMware’s vSphere server virtualization platform, which most enterprises already use in their data centers anyway. That surely simplifies things, given that this is a very well-understood platform.


By Frederic Lardinois

Dell’s long game is in hybrid and private clouds

When Dell voted this morning to buy back the VMware tracking stock and go public again, you had to wonder what exactly the strategy was behind these moves. While it’s clearly about gaining financial flexibility, the $67 billion EMC deal has always been about setting the company up for a hybrid and private cloud future.

The hybrid cloud involves managing workloads on premises and in the cloud, while private clouds are ones that companies run themselves, either in their own data centers or on dedicated hardware in the public cloud.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, says this approach requires a longer investment timeline, and that necessitated the changes we saw this morning. “I believe Dell Technologies can better invest in its hybrid world with longer term investors as the investment will be longer term, at least five years,” he said. Part of that, he said, is because many more on-prem-to-public-cloud connector services still need to be built.

Dell could be the company that helps build some of those missing pieces. It has always been at its heart a hardware company, and as such either of these approaches could play to its strengths. When the company paid $67 billion for EMC in 2016, it had to have a long-term plan in mind. Michael Dell’s parents didn’t raise no fool and he saw an opportunity with that move to push his company in a new direction.

It was probably never about EMC’s core storage offerings, although a storage component was an essential ingredient in this vision. Dell and his investors’ eyes were probably more focused on other pieces inside the federation — the loosely coupled set of companies inside the broader EMC Corporation.

The VMware bridge

The crown jewel in that group was of course VMware, the company that introduced the enterprise to server virtualization. Today, it has taken residency in the hybrid world between the on-premises data center and the cloud. Armed with broad agreements with AWS, VMware finagled its way to being a key bridge between on prem and the monstrously popular Amazon cloud. IT pros used to working with VMware would certainly be comfortable using it as a cloud control panel as they shifted their workloads to AWS cloud virtual machines.

In fact, speaking at a press conference at AWS re:Invent earlier this month, AWS CEO Andy Jassy said the partnership with VMware has been really transformational for his company on a lot of different levels. “Most of the world is virtualized on top of VMware and VMware is at the core of most enterprises. When you start trying to solve people’s problems between being on premises and in the cloud, having the partnership we have with VMware allows us to find ways for customers to use the tools they’ve been using and be able to use them on top of our platform the way they want,” Jassy told the press conference.

The two companies also announced an extension of the partnership with the new AWS Outposts servers, which bring the AWS cloud on prem, where customers can choose between using VMware or AWS to manage the workloads, whether they live in the cloud or on-premises. It’s unclear whether AWS will extend this to other companies’ hardware, but if it does, you can be sure Dell would want to be a part of that.

Pivotal’s key role

But it’s not just VMware that Dell had its sights on when it bought EMC, it was Pivotal too. This is another company, much like VMware, that is publicly traded and operates independently of Dell, even while living inside of the Dell family of products. While VMware handles managing the server side of the house, Pivotal is about building software products.

When the company went public earlier this year, CEO Rob Mee told TechCrunch that Dell recognizes that Pivotal works better as an independent entity. “From the time Dell acquired EMC, Michael was clear with me: You run the company. I’m just here to help. Dell is our largest shareholder, but we run independently. There have been opportunities to test that [since the acquisition] and it has held true,” Mee said at the time.

Virtustream could also be a key piece, providing a link for running traditional enterprise applications on multi-tenant clouds. EMC bought the company in 2015 for $1.2 billion, then spun it out as a jointly owned venture of EMC and VMware later that year. The company provides another bridge to the cloud for applications like SAP that once ran only on prem.

Surely Dell had to take all the pieces to get the ones it wanted most. It might have been a big price to pay for transformation, especially since you could argue that some of the pieces were probably past their freshness dates (although even older products bring with them plenty of legacy licensing and maintenance revenue).

Even though the long-term trend is shifting toward moving to the cloud, there will be workloads that stay on premises for some time to come. It seems that Dell is trying to position itself as the hybrid/private cloud vendor and all that entails to serve those who won’t be all cloud, all the time. Whether this strategy will work long term remains to be seen, but Dell appears to be betting the house on this approach and today’s moves only solidified that.


By Ron Miller

Pivotal announces new serverless framework

Pivotal has always been about making open source tools for enterprise developers, but surprisingly up until now the arsenal has lacked a serverless component. That changed today with the alpha launch of Pivotal Function Service.

“Pivotal Function Service is a Kubernetes-based, multi-cloud function service. It’s part of the broader Pivotal vision of offering you a single platform for all your workloads on any cloud,” the company wrote in a blog post announcing the new service.

What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s open source, is that it has been designed to work both on prem and in the cloud in a cloud-native fashion, hence the Kubernetes-based aspect of it. This is unusual, to say the least.

The idea up until now has been that the large-scale cloud providers like Amazon, Google and Microsoft could dial up whatever infrastructure your functions require, then dial them down when you’re finished without you ever having to think about the underlying infrastructure. The cloud provider deals with whatever compute, storage and memory you need to run the function and no more.

Pivotal wants to take that same idea and make it available across any cloud service. It also wants to make it available on prem, which may seem curious at first, but Pivotal’s Onsi Fakhouri says customers want those same abilities both on prem and in the cloud. “One of the key values that you often hear about serverless is that it will run down to zero and there is less utilization, but at the same time there are customers who want to explore and embrace the serverless programming paradigm on-prem,” Fakhouri said. Of course, it is then up to IT to ensure that there are sufficient resources to meet the demands of the serverless programs.

The new package includes several key components for developers: an environment for building, deploying and managing functions; native eventing that provides a way to build rich event triggers to call whatever functionality you require; and the ability to do all of this within a Kubernetes-based environment. That last piece is particularly important as companies embrace hybrid use cases and need to manage events across on-prem and cloud environments in a seamless way.

One of the advantages of Pivotal’s approach is that, as an open product, it can work on any cloud. This is in contrast to cloud providers like Amazon, Google and Microsoft, which provide similar services that run exclusively on their own clouds. Pivotal is not the first to build an open source function-as-a-service offering, but it is attempting to package it in a way that makes it easier to use.

Serverless doesn’t actually mean there are no underlying servers. Instead, it means that developers don’t have to provision or manage servers themselves because the cloud provider takes care of whatever infrastructure is required. In an on-prem scenario, IT has to make those resources available.
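The programming model described above can be sketched in a few lines. Note that the `handle` signature and event shape here are illustrative assumptions, not the actual Pivotal Function Service API; the point is only what the developer does and does not write.

```python
# A hypothetical function-as-a-service handler, sketched in Python.
# In the serverless model the developer writes only this function; the
# platform (not shown) provisions compute on demand, invokes the function
# once per incoming event, and scales instances back down to zero when idle.

def handle(event: dict) -> dict:
    """Business logic only -- no server setup, routing or scaling code."""
    name = event.get("name", "world")
    return {"message": f"hello, {name}"}

# Locally, we can stand in for the platform by calling the function directly.
if __name__ == "__main__":
    print(handle({"name": "Pivotal"}))  # {'message': 'hello, Pivotal'}
```

Everything outside the function body, in other words, is the platform’s problem, whether that platform runs in a public cloud or on a Kubernetes cluster in your own data center.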


By Ron Miller

AWS is bringing the cloud on prem with Outposts

AWS has always been the pure cloud vendor, and while it has previously given only a nod to hybrid, it is now fully embracing it. Today, in conjunction with VMware, it announced a pair of options for bringing AWS into the data center.

Yes, you read that correctly. You can now put AWS into your data center on AWS hardware, built to the same design the company uses in its own data centers. The two new products fall under the AWS Outposts umbrella.

There are two Outposts variations: VMware Cloud on AWS Outposts and AWS Outposts. The first uses the VMware control plane; the second allows customers to run compute and storage on-premises using the same AWS APIs that are used in the AWS cloud.

In fact, VMware CEO Pat Gelsinger joined AWS CEO Andy Jassy on stage for a joint announcement. The two companies have been working together for some time to bring VMware to the AWS cloud. Part of today’s announcement flips that on its head, bringing the AWS cloud on prem to work with VMware. In both cases, AWS sells you its hardware, installs it if you wish, and will even maintain it for you.

This is an area in which AWS has lagged, preferring a pure cloud vision to moving back into the data center, but it’s a tacit acknowledgment that customers want to operate in both places for the foreseeable future.

The announcement also extends the company’s increasingly hybrid-friendly vision. On Monday, the company announced Transit Gateways, designed to provide a single way to manage network resources, whether they live in the cloud or on-prem.

Now AWS is bringing its cloud on prem, something that Microsoft, Canonical, Oracle and others have had for some time. It’s worth noting that today’s announcement is a public preview. The actual release is expected in the second half of next year.



By Ron Miller

Vista snaps up Apptio for $1.94B, as enterprise companies remain hot

It seems that Sunday has become a popular day to announce large deals involving enterprise companies. IBM announced the $34 billion Red Hat deal two weeks ago. SAP announced its intent to buy Qualtrics for $8 billion last night, and Vista Equity Partners got into the act too, announcing a deal to buy Apptio for $1.94 billion, representing a 53 percent premium for stockholders.

Vista paid $38 per share for Apptio, a Seattle company that helps companies manage and understand their cloud spending inside a hybrid IT environment that has assets on-prem and in the cloud. The company was founded in 2007, right as the cloud was beginning to take off, and grew as the cloud did. It recognized that companies would have trouble understanding their cloud assets alongside on-prem ones. It turned out to be a company in the right place at the right time with the right idea.

Investors like Andreessen Horowitz, Greylock and Madrona certainly liked the concept, showering the company with $261 million before it went public in 2016. The stock price has been up and down since, peaking in August at $41.23 a share before dropping to $24.85 on Friday. The $38 a share Vista paid comes close to the high-water mark for the stock.
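As a quick check, the 53 percent premium cited above follows directly from the offer price against Friday’s close:

```python
# Verifying the reported premium: Vista's $38.00 offer vs. Friday's $24.85 close.
offer = 38.00
friday_close = 24.85
premium = (offer - friday_close) / friday_close
print(f"{premium:.0%}")  # prints "53%"
```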

[Stock chart: Google]

Sunny Gupta, co-founder and CEO at Apptio, liked the idea of giving his shareholders a good return while providing a good landing spot to take his company private. Vista has a reputation for continuing to invest in the companies it acquires, and that prospect clearly excited him. “Vista’s investment and deep expertise in growing world-class SaaS businesses and the flexibility we will have as a private company will help us accelerate our growth…,” Gupta said in a statement.

The deal was approved by Apptio’s board of directors, which will recommend shareholders accept it. With such a high premium, it’s hard to imagine them turning it down. If it passes all of the regulatory hurdles, the acquisition is expected to close in Q1 2019.

It’s worth noting that the company has a 30-day “go shop” provision, which would allow it to look for a better price. Given how hot the enterprise market is right now and how popular hybrid cloud tools are, it is possible it could find another buyer, but it could be hard to find one willing to pay such a high premium.

Vista clearly likes to buy enterprise tech companies, having snagged Ping Identity for $600 million and Marketo for $1.8 billion in 2016. It grabbed Jamf, an Apple enterprise device management company, and Datto, a disaster recovery company, last year. And it flipped Marketo to Adobe for $4.75 billion just two months ago.


By Ron Miller