DREAMTECH NEWS

Apigee jumps on hybrid bandwagon with new API management product for hybrid environments

This year at Google Cloud Next, the theme is all about supporting hybrid environments, so it shouldn’t come as a surprise that Apigee, the API management company Google bought in 2016 for $625 million, is also getting into the act. Today, Apigee announced the beta of Apigee hybrid, a new product designed for hybrid environments.

Amit Zavery, who recently joined Google Cloud after many years at Oracle, and Nandan Sridhar describe the new product in a joint blog post as “a new deployment option for the Apigee API management platform that lets you host your runtime anywhere—in your data center or the public cloud of your choice.”

As with Anthos, the company’s approach to hybrid management announced earlier today, the idea is to have a single way to manage your APIs no matter where you choose to run them.

“With Apigee hybrid, you get a single, full-featured API management solution across all your environments, while giving you control over your APIs and the data they expose and ensuring a unified strategy across all APIs in your enterprise,” Zavery and Sridhar wrote in the blog post announcing the new approach.

The announcement is part of an overall strategy by the company to support a customer’s approach to computing across a range of environments, often referred to as hybrid cloud. In the Cloud Native world, the idea is to present a single fabric to manage your deployments, regardless of location.

This appears to be an extension of that idea, which makes sense given that Google developed and open-sourced Kubernetes, which is at the forefront of containerization and Cloud Native computing. While this isn’t pure Cloud Native computing, it stays true to that ethos, and it fits the scope of Google Cloud’s approach to computing in general, especially as it is being defined at this year’s conference.


By Ron Miller

Accenture announces intent to buy French cloud consulting firm

As Google Cloud Next opened today in San Francisco, Accenture announced its intent to acquire Cirruseo, a French cloud consulting firm that specializes in Google Cloud intelligence services. The companies did not share the terms of the deal.

Accenture says that Cirruseo’s strength and deep experience in Google’s cloud-based artificial intelligence solutions should help as Accenture expands its own AI practice. Google’s TensorFlow and other machine intelligence solutions are a popular approach to AI and machine learning, and the purchase should give Accenture a leg up in this area, especially in the French market.

“The addition of Cirruseo would be a significant step forward in our growth strategy in France, bringing a strong team of Google Cloud specialists to Accenture,” Olivier Girard, Accenture’s geographic unit managing director for France and Benelux, said in a statement.

With the acquisition, should it pass French regulatory muster, the company would add a team of 100 specialists trained in Google Cloud and G Suite to an existing team of 2,600 Google specialists worldwide.

The company sees this as a way to enhance its artificial intelligence and machine learning expertise, while giving it a much stronger market position in France in particular and the EU more broadly.

As the company stated, there are some hurdles before the deal becomes official. “The acquisition requires prior consultation with the relevant works councils and would be subject to customary closing conditions,” Accenture indicated in a statement. Should all that come to pass, Cirruseo will become part of Accenture.


By Ron Miller

Talk key takeaways from Google Cloud Next with TechCrunch writers

Google’s Cloud Next conference is taking over the Moscone Center in San Francisco this week and TechCrunch is on the scene covering all the latest announcements.

Google Cloud already powers some of the world’s premier companies and startups, and now it’s poised to put even more pressure on cloud competitors like AWS with its newly released products and services. TechCrunch’s Frederic Lardinois will be on the ground at the event, and Ron Miller will be covering from afar. Thursday at 10:00 am PT, Frederic and Ron will be sharing what they saw and what it all means with Extra Crunch members on a conference call.

Tune in to dig into what happened onstage and off, and ask Frederic and Ron about any and all things cloud or enterprise.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.


By Arman Tabatabai

Google is bringing two new data centers online in 2020

At Google Cloud Next today, the company announced it is bringing two brand new data centers online in 2020: one in Seoul, South Korea, and one in Salt Lake City, Utah.

The company, like many of its web-scale peers, has had the data center building pedal to the metal over the last several years. It has grown to 15 regions, with each region hosting multiple zones, for a total of 45 zones. In all, the company has a presence in 13 countries and says it has invested an impressive $47 billion (with a B) in capital expenditures from 2016 to 2018.

Google Data Center Map. Photo: Google

“We’re going to be announcing the availability in early 2020 of Seoul, South Korea. So we are announcing a region there with three zones for customers to build their applications. Again, customers, either multinationals that are looking to serve their customers in that market or local customers that are looking to go global. This really helps address their needs and allows them to serve the customers in the way that they want to,” Dominic Preuss, director of product management, said.

He added, “Similarly, Salt Lake City is our third region in the western United States, along with Oregon and Los Angeles. And so it allows developers to build distributed applications across multiple regions in the western United States.”

In addition, the company announced that its new data center in Osaka, Japan is expected to come online in the coming weeks. One in Jakarta, Indonesia, currently under construction, is expected to come online in the first half of next year.


By Ron Miller

Google’s hybrid cloud platform is coming to AWS and Azure

Google’s Cloud Services Platform, its service for managing hybrid clouds that span on-premise data centers and the Google cloud, is coming out of beta today. The company is also changing the product’s name to Anthos, a name that either refers to a lost Greek tragedy, the name of an obscure god in the Marvel universe, or rosemary. That by itself would be interesting but minor news. What makes this interesting is that Google also announced today that Anthos will run on third-party clouds as well, including AWS and Azure.

“We will support Anthos on AWS and Azure as well, so people get one way to manage their application, and that one way works across their on-premise environments and all other clouds,” Google’s senior VP for technical infrastructure, Urs Hölzle, explained in a press conference ahead of today’s announcement.

So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure.

“You can use one consistent approach — one open-source based approach — across all environments,” Hölzle said. “I can’t really stress how big a change that is in the industry, because this is really the stack for the next 20 years, meaning that it’s not really about the three different clouds that are all randomly different in small ways. This is the way that makes these three clouds — and actually on-premise environments, too — look the same.”

Anthos/Google Cloud Services Platform is based on the Google Kubernetes Engine, as well as other open-source projects like the Istio service mesh. It’s also hardware-agnostic, meaning that users can run the service on top of their current hardware without having to immediately invest in new servers.
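The portability that makes this possible comes from the uniform Kubernetes API underneath. As a rough sketch of that underlying idea (not Anthos itself; the cluster context names and container image here are placeholders), the same deployment spec can be pushed unchanged to clusters running in different environments:

```python
# Sketch of the "one spec, many clusters" idea that Anthos builds on.
# Assumes the official `kubernetes` Python client and kubeconfig contexts
# named "gke-cluster" and "on-prem-cluster" (both placeholder names).
from kubernetes import client, config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="hello",
                                   image="gcr.io/my-project/hello:v1"),
            ]),
        ),
    ),
)

# Because every cluster speaks the same Kubernetes API, the identical
# spec deploys to GKE, an on-premise cluster or another cloud unchanged.
for context in ["gke-cluster", "on-prem-cluster"]:
    api = client.AppsV1Api(config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace="default", body=deployment)
```

Anthos layers a managed control plane, policy and the service mesh on top of that consistency, which is the kind of “one consistent approach” Hölzle describes.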

Why is Google doing this? “We hear from our customers that multi-cloud and hybrid is really an acute pain point,” Hölzle said. He noted that containers are the enabling technology for this but that few enterprises have developed a unifying strategy to manage these deployments and that it takes expertise in all major clouds to get the most out of them.

Enterprises already have major investments in their infrastructure and established relationships with their vendors, though, so it’s no surprise that Google is launching Anthos with over 30 major hardware and software partners ranging from Cisco to Dell EMC, HPE and VMware, as well as application vendors like Confluent, DataStax, Elastic, Portworx, Tigera, Splunk, GitLab, MongoDB and others.

Anthos is a subscription-based service, with list prices starting at $10,000 per month per 100 vCPU block. Enterprise prices tend to be up for negotiation, though, so many customers will likely pay less.

It’s one thing to use a service like this for new applications, but many enterprises already have plenty of line-of-business tools that they would like to bring to the cloud as well. For them, Google is launching the first beta of Anthos Migrate today. This service will auto-migrate VMs from on-premises or other clouds into containers in the Google Kubernetes Engine. The promise here is that this is essentially an automatic process and once the container is on Google’s platform, you’ll be able to use all of the other features that come with the Anthos platform, too.

Google’s Hölzle noted that the emphasis here was on making this migration as easy as possible. “There’s no manual effort there,” he said.


By Frederic Lardinois

Google Cloud Run brings serverless and containers together

Two of the biggest trends in application development in recent years have been the rise of serverless and containerization. Today at Google Cloud Next, the company announced a new product called Cloud Run that is designed to bring the two together. At the same time, the company also announced Cloud Run for GKE, which is specifically designed to run on Google’s version of Kubernetes.

Oren Teich, director of product management for serverless, says these products came out of discussions with customers. As he points out, developers like the flexibility and agility they get using serverless architecture, but have been looking for more than just compute resources. They want to get access to the full stack, and to that end the company is announcing Cloud Run.

“Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We’re taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it’s end-to-end managed,” Teich explained.
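To make that container contract concrete, here is a minimal sketch of the kind of workload Cloud Run expects (an illustration, not code from Google): a container whose process serves HTTP on whatever port the platform passes in through the PORT environment variable.

```python
# app.py — a minimal HTTP service of the kind Cloud Run deploys.
# Cloud Run injects a PORT environment variable and routes requests
# from the service's public URL to whatever listens on that port.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a Cloud Run container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # Cloud Run sets PORT
    HTTPServer(("", port), Handler).serve_forever()
```

Build that into a Docker image and deploy it, and the service gets the URL Teich describes; because instances scale down when idle, you are only billed while requests are actually being served.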

As for the GKE tool, it provides the same kinds of benefits, except for developers running their containers on Google’s GKE version of Kubernetes. Keep in mind, developers could be using any version of Kubernetes their organizations happen to have chosen, so it’s not a given that they will be using Google’s flavor of Kubernetes.

“What this means is that a developer can take the exact same experience, the exact same code they’ve written — and they have the gcloud command line, the same UI and our console — and they can just, with one click, target the destination they want,” he said.

All of this is made possible through yet another open source project the company introduced last year called Knative. “Cloud Run is based on Knative, an open API and runtime environment that lets you run your serverless workloads anywhere you choose — fully managed on Google Cloud Platform, on your GKE cluster or on your own self-managed Kubernetes cluster,” Teich and Eyal Manor, VP of engineering, wrote in a blog post introducing Cloud Run.

Serverless, as you probably know by now, is a bit of a misnomer. It doesn’t really take away servers, but it does eliminate the need for developers to worry about them. Instead of developers loading their application onto a particular virtual machine, the cloud provider, in this case Google, provisions the exact level of resources required to run an operation. Once that’s done, those resources go away, so you only pay for what you use at any given moment.


By Ron Miller

Google Cloud challenges AWS with new open-source integrations

Google today announced that it has partnered with a number of top open-source data management and analytics companies to integrate their products into its Google Cloud Platform and offer them as managed services operated by its partners. The partners here are Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j and Redis Labs.

The idea here, Google says, is to provide users with a seamless user experience and the ability to easily leverage these open-source technologies in Google’s cloud. But there is a lot more at play here, even though Google never quite says so. That’s because Google’s move here is clearly meant to contrast its approach to open-source ecosystems with Amazon’s. It’s no secret that Amazon’s AWS cloud computing platform has a reputation for taking some of the best open-source projects and then forking those and packaging them up under its own brand, often without giving back to the original project. There are some signs that this is changing, but a number of companies have recently taken action and changed their open-source licenses to explicitly prevent this from happening.

That’s where things get interesting, because those companies include Confluent, Elastic, MongoDB, Neo4j and Redis Labs — and those are all partnering with Google on this new project, though it’s worth noting that InfluxData is not taking this new licensing approach and that while DataStax uses lots of open-source technologies, its focus is very much on its enterprise edition.

“As you are aware, there has been a lot of debate in the industry about the best way of delivering these open-source technologies as services in the cloud,” Manvinder Singh, the head of infrastructure partnerships at Google Cloud, said in a press briefing. “Given Google’s DNA and the belief that we have in the open-source model, which is demonstrated by projects like Kubernetes, TensorFlow, Go and so forth, we believe the right way to solve this is to work closely together with companies that have invested their resources in developing these open-source technologies.”

So while AWS takes these projects and then makes them its own, Google has decided to partner with these companies. While Google and its partners declined to comment on the financial arrangements behind these deals, chances are we’re talking about some degree of profit-sharing here.

“Each of the major cloud players is trying to differentiate what it brings to the table for customers, and while we have a strong partnership with Microsoft and Amazon, it’s nice to see that Google has chosen to deepen its partnership with Atlas instead of launching an imitation service,” Sahir Azam, the senior VP of Cloud Products at MongoDB told me. “MongoDB and GCP have been working closely together for years, dating back to the development of Atlas on GCP in early 2017. Over the past two years running Atlas on GCP, our joint teams have developed a strong working relationship and support model for supporting our customers’ mission critical applications.”

As for the actual functionality, the core principle here is that Google will deeply integrate these services into its Cloud Console, similar, for example, to what Microsoft did with Databricks on Azure. These will be managed services, and Google Cloud will handle the invoicing; the charges will count toward a user’s Google Cloud spending commitments. Support will also run through Google, so users can use a single service to manage and log tickets across all of these services.

Redis Labs CEO and co-founder Ofer Bengal echoed this. “Through this partnership, Redis Labs and Google Cloud are bringing these innovations to enterprise customers, while giving them the choice of where to run their workloads in the cloud,” he said. “Customers now have the flexibility to develop applications with Redis Enterprise using the fully integrated managed services on GCP. This will include the ability to manage Redis Enterprise from the GCP console, provisioning, billing, support, and other deep integrations with GCP.”


By Frederic Lardinois

Slack integration with Office 365 one more step toward total enterprise integration

Slack’s goal of integrating enterprise tools in the chat interface has been a major differentiator from the giant companies it’s competing with, like Microsoft and Facebook. Last year, it bought Astro specifically to bring enterprise productivity tools inside Slack, and today it announced new integrations with Microsoft OneDrive and Outlook.

Specifically, Slack is integrating calendar, files and calls, and bringing in integrations with other services, including Box, Dropbox and Zoom.

Andy Pflaum, director of project management at Slack, came over in the Astro deal, and he says one of the primary goals of the acquisition was to help build connections like this to Microsoft and Google productivity tools.

“When we joined Slack, it was to build out the interoperability between Slack and Microsoft’s products, particularly Office and Office 365 products, and the comparable products from Google, G Suite. We focused on deep integration with mail and calendar in Slack, as well as bringing in files and calls from Microsoft, Google and other leading providers like Zoom, Box and Dropbox,” Pflaum, who was co-founder and CEO at Astro, told TechCrunch.

For starters, the company is announcing a deep integration with Outlook that enables users to receive and respond to invitations in Slack. You can also join a meeting with a click directly from Slack, whether that’s Zoom, WebEx or Skype for Business. What’s more, when you’re in a meeting, your status will update automatically in Slack, saving you from doing this manually (or, more likely, forgetting to and getting a flurry of Slack questions in the middle of a meeting).

Another integration lets you share emails directly into Slack. Instead of copying and pasting or forwarding the email to a large group, you can click a Slack button in the Outlook interface and share it as a direct message, with a group or to your personal Slack channel.

File sharing is not being left behind here either; whether from Microsoft, Box or Dropbox, users will be able to share files inside of Slack easily. Finally, users will be able to view full Office document previews inside of Slack, another step in avoiding task switching to get work done.

Screenshot: Slack

Mike Gotta, an analyst at Gartner who has been following the collaboration space for many years, says the integration does a good job of preserving the user experience while allowing for a seamless connection between email, calendar and files. He says that this could give Slack an edge in the highly competitive collaboration market and, more importantly, allow users to maintain context.

“The collaboration market is highly fragmented with many vendors adding “just a little” collaboration to products designed for specific purposes. Buyers can find that this type of collaboration in context to the flow of work is more impactful than switching to a generalized tool that lacks situational awareness of the task at hand. Knowledge-based work often involves process and project related applications so the more we can handle transitions across tools the more productive the user experience becomes. More importantly there’s less context fragmentation for the individual and team,” Gotta told TechCrunch.

These updates are about staying one step ahead of the competition, and being able to run Microsoft tools inside of Slack gives customers another reason to stick with (or to buy) Slack instead of Microsoft’s competing product, Teams.

All of this new functionality is designed to work in both mobile and desktop versions of the product and is available today.


By Ron Miller

PubNub nabs $23M as its IaaS network hits 1.3T messages sent each month

There’s been a huge boom in the last decade of applications and services that rely on real-time notifications and other alerts as a core part of how they operate, and today one of the companies that powers those notifications is announcing a growth round. PubNub, an infrastructure-as-a-service provider that operates a real-time network to send and manage messaging traffic between companies, between companies and apps, and between internet-of-things devices, has raised $23 million in a Series D round of funding to ramp up its business internationally, with an emphasis on emerging markets.

The round adds another strategic investor to PubNub’s cap table: Hewlett Packard Enterprise is coming on as an investor, joining previous backers Sapphire Ventures (backed by SAP), Relay Ventures, Scale Venture Partners, Cisco Investments, Bosch and Ericsson in this round.

Todd Greene, the CEO of PubNub (who co-founded it with Stephen Blum), said the startup is not disclosing its valuation with this round except to say that “we are happy with it, and it’s a solid increase on where we were the last time.” That, according to PitchBook, was just under $155 million back in 2016 in a small extension to its Series C round. The company has raised around $70 million to date.

PubNub’s growth — along with that of competing companies and technologies, which include the likes of Pusher, RabbitMQ, Google’s Firebase and others — has come alongside the emergence of a number of use cases built on the premise of real-time notifications. These include a multitude of apps, for example, for on-demand commerce (e.g., ride hailing and online food ordering), medical services, entertainment services, IoT systems and more.

That’s pushed PubNub to a new milestone of enabling some 1.3 trillion messages per month for customers that include the likes of Peloton, Atlassian, athenahealth, JustEat, Swiggy, Yelp, the Sacramento Kings and Gett, who choose from some 70 SDKs to tailor what kinds of notifications and actions are triggered around their specific services.
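For a sense of what using one of those SDKs looks like, here is a minimal publish sketch with PubNub’s Python SDK (the keys, channel name and payload are placeholders for illustration, not details from the company):

```python
# Minimal publish example with PubNub's Python SDK.
# The demo keys, channel name and payload below are placeholders.
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

config = PNConfiguration()
config.publish_key = "demo"       # replace with your publish key
config.subscribe_key = "demo"     # replace with your subscribe key
config.uuid = "example-client"    # unique ID for this client

pubnub = PubNub(config)

# Push a real-time notification to every client subscribed to "orders".
envelope = pubnub.publish() \
    .channel("orders") \
    .message({"event": "order_update", "status": "out_for_delivery"}) \
    .sync()

if envelope.status.is_error():
    print("Publish failed:", envelope.status.error_data)
else:
    print("Published at timetoken", envelope.result.timetoken)
```

Subscribers on the same channel receive the payload in near real time, which is the basic mechanism behind the delivery-status and alerting use cases described above.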

Greene said that while some of the bigger services in the world have largely built their own messaging platforms to manage their notifications — Uber, for example, has taken this route — that process can result in “death by 1,000 paper cuts,” in Greene’s words. Others will opt for a PubNub-style alternative from the start.

“About 50 percent of our customers started by building themselves and then got to scale, and then decided to turn to PubNub,” Greene said.

It’s analogous to the same kind of decision businesses make regarding public cloud infrastructure: whether it makes sense to build and operate their own servers, or turn to a third-party provider — a decision that PubNub itself ironically is also in the process of contemplating.

Today the company runs its own business as an overlay on the public cloud, using a mixture of AWS and others, Greene said — the company has partnerships with Microsoft Azure, AWS, and IBM Watson — but “every year we evaluate the benefits of going into different kinds of data centres and interesting opportunities there. We are evaluating a cost and performance calculation,” he added.

And while he didn’t add it, that could potentially become an exit opportunity for PubNub down the line, too, aligning with a cloud provider that wanted to offer messaging infrastructure-as-a-service as an additional feature to customers.

The strategic relationship with its partners, in fact, is one of the engines for this latest investment. “Edge computing and realtime technologies will be at the heart of the next wave of technology innovation,” Vishal Lall, COO of Aruba, a Hewlett Packard Enterprise company, said in a statement. “PubNub’s global Data Stream Network has demonstrated extensive accomplishments powering both enterprise and consumer solutions. HPE is thrilled to be investing in PubNub’s fast-growing success, and to accelerate the commercial and industrial applications of PubNub’s real time platform.”


By Ingrid Lunden

Watch Google Cloud Next developer conference live right here

If you can’t stop dreaming about NoSQL databases, Google’s Cloud Next conference is the closest thing to heaven that you’ll find today. At 9 AM PT (12 PM ET, 4 PM GMT), some of the brightest minds in cloud computing are going to introduce the upcoming features of Google Cloud.

Along with Amazon Web Services and Microsoft Azure, Google is building the infrastructure of the web. Countless startups use Google Cloud as their only hosting provider, and these providers are now launching more and more specialized and niche services. So it’s going to be interesting to see what Google has in store to beat its competitors on the cloud front.

We’ll have a team on the ground covering all the announcements and explaining what they mean.


By Romain Dillet