Oracle delves deeper into blockchain with four new applications

Oracle is a traditional tech company that has been struggling to gain traction in the cloud, but it may see blockchain as a way to differentiate itself. At Oracle OpenWorld today it announced the Oracle Blockchain Applications Cloud, a series of four applications designed for transaction-based processing scenarios that use the Internet of Things as a data source.

“Customers struggle with how exactly to go from concepts like smart contracts, distributed ledger and cryptography to solving specific business problems,” Atul Mahamuni, VP of IoT and Blockchain at Oracle, told TechCrunch.

The company actually introduced a more generalized blockchain-as-a-service offering at OpenWorld last year, but this year it has decided to focus on specific use cases, announcing four new applications. The blockchain comes into play because of its nature as an irrefutable and immutable record.

In cases where there is a dispute over the accuracy of a particular piece of data, the blockchain can provide incontrovertible proof. As for the Internet of Things, that provides data points you can use to provide that proof. Your sensor feeds the data and it (or some reference to it) gets added to the blockchain, leaving no room for doubt.
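
To make that concrete, here is a minimal, hypothetical sketch in Python (our own illustration, not Oracle's implementation) of how each sensor reading can be chained to everything recorded before it, which is what makes the record tamper-evident:

```python
import hashlib
import json
import time

def chained_hash(prev_hash: str, payload: dict) -> str:
    """Commit a payload to the chain by hashing it with the previous hash."""
    record = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

ledger = []
prev = "0" * 64  # genesis value
for reading in [{"sensor": "truck-42", "temp_c": 3.9, "ts": time.time()}]:
    prev = chained_hash(prev, reading)
    ledger.append({"data": reading, "hash": prev})

# Altering any earlier reading changes its hash and breaks every later link,
# which is what makes the record effectively immutable.
```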

The four applications involve supply chain transaction data, including a track-and-trace capability to follow a product through its delivery from inception to market, proof of provenance for valuables like drugs, intelligent temperature tracking (what Oracle is calling Intelligent Cold Chain) and warranty and usage tracking. Intelligent Cold Chain ensures that a product that is supposed to be kept cold wasn’t exposed to higher-than-recommended temperatures, while warranty tracking ensures that a product was being used in a prescribed fashion and should be subject to warranty claims.

Each of these plays to some of Oracle’s strengths as a company that builds databases and ERP software. It can draw on the information it already tends to collect as part of its business processes and add it to a blockchain and other applications when it makes sense.

“So what we do is we get events and insights from IoT systems, as well as from supply chain ERP data, and we get those insights and translation from all of this and then put them into the blockchain and then do the correlations and artificial intelligence machine learning algorithms on top of those transactions,” Mahamuni explained.

This year, perhaps even more so than the last couple, Oracle is trying to differentiate itself from the rest of the cloud pack as it tries to right its cloud business. By building applications on top of base technologies like blockchain, IoT and artificial intelligence, while taking advantage of its domain knowledge around databases and ERP, the company is hoping to show customers it can offer something its cloud competitors can’t.


By Ron Miller

Twilio launches a new SIM card and narrowband dev kit for IoT developers

Twilio is hosting its Signal developer conference in San Francisco this week. Yesterday was all about bots and taking payments over the phone; today is all about IoT. The company is launching two new (but related) products today that will make it easier for IoT developers to connect their devices. The first is the Global Super SIM that offers global connectivity management through the networks of Twilio’s partners. The second is Twilio Narrowband, which, in cooperation with T-Mobile, offers a full software and hardware kit for building low-bandwidth IoT solutions and the narrowband network to connect them.

Twilio also announced that it is expanding its wireless network partnerships with the addition of Singtel, Telefonica and Three Group. Unsurprisingly, those are also the partners that make the company’s Super SIM project possible.

The Super SIM, which is currently in private preview and will launch in public beta in the spring of 2019, provides developers with a global network that lets them deploy and manage their IoT devices anywhere (assuming there is a cell connection or other internet connectivity, of course). The Super SIM gives developers the ability to choose the network they want to use or to let Twilio pick the defaults based on the local networks.

Twilio Narrowband is a slightly different solution. Its focus right now is on the U.S., where T-Mobile rolled out its Narrowband IoT network earlier this year. As the name implies, this is about connecting low-bandwidth devices that only need to send out small data packets like timestamps, GPS coordinates or status updates. Twilio Narrowband sits on top of this, using Twilio’s Programmable Wireless platform and SIM card. It then adds an IoT developer kit with an Arduino-based development board and the standard Grove sensors, as well as a T-Mobile-certified hardware module for connecting to the narrowband network. To program all of that, Twilio is launching an SDK for handling network registrations and optimizing the communication between the devices and the cloud.
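
To see why small packets matter here, consider a hedged sketch of the kind of compact payload a narrowband device might send. The byte layout below is our own invention for illustration, not Twilio's SDK:

```python
import struct
import time

# Hypothetical 13-byte payload: uint32 timestamp, two float32 coordinates,
# one uint8 status flag. Binary packing keeps the message far smaller than
# a JSON equivalent, which matters on a narrowband link.
payload = struct.pack(
    "<IffB",
    int(time.time()),   # timestamp (seconds since epoch)
    37.7749,            # latitude
    -122.4194,          # longitude
    1,                  # status flag (1 = OK)
)
print(len(payload), "bytes")  # 13 bytes
```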

The narrowband service will launch as a beta in early 2019 and offer three pricing plans: a developer plan for $2/month, an annual production plan for $10/year or $5/year at scale, and a five-year plan for $8/year or $4/year at scale.


By Frederic Lardinois

Atlassian launches the new Jira Software Cloud

Atlassian previewed the next generation of its hosted Jira Software project tracking tool earlier this year. Today, it’s available to all Jira users. To build the new Jira, Atlassian redesigned the back-end stack and rethought the user experience from the ground up. That’s not an easy change, given how important Jira has become for virtually every company that develops software — and given that it is Atlassian’s flagship product. And with this launch, Atlassian is now focusing on its hosted version of Jira (which runs on AWS) and prioritizing that over the self-hosted server version.

So the new version of Jira that’s launching to all users today doesn’t just have a new, cleaner look, but more importantly, new functionality that allows for a more flexible workflow that’s less dependent on admins and gives more autonomy to teams (assuming the admins don’t turn those features off).

Because changes to such a popular tool are always going to upset at least some users, it’s worth noting at the outset that the old classic view isn’t going away. “It’s important to note that the next-gen experience will not replace our classic experience, which millions of users are happily using,” Jake Brereton, head of marketing for Jira Software Cloud, told me. “The next-gen experience and the associated project type will be available in addition to the classic projects that users have always had access to. We have no plans to remove or sunset any of the classic functionality in Jira Cloud.”

The core tenet of the redesign is that software development in 2018 is very different from the way developers worked in 2002, when Jira first launched. Interestingly enough, the acquisition of Trello also helped guide the overall design of the new Jira.

“One of the key things that guided our strategy is really bringing the simplicity of Trello and the power of Jira together,” Sean Regan, Atlassian’s head of growth for Software Teams, told me. “One of the reasons for that is that modern software development teams aren’t just developers down the hall taking requirements. In the best companies, they’re embedded with the business, where you have analysts, marketing, designers, product developers, product managers — all working together as a squad or a triad. So JIRA, it has to be simple enough for those teams to function but it has to be powerful enough to run a complex software development process.”

Unsurprisingly, the influence of Trello is most apparent in the Jira boards, where you can now drag and drop cards, add new columns with a few clicks and easily filter cards based on your current needs (without having to learn Jira’s powerful but arcane query language). Gone are the days when you had to dig into the configuration to make even the simplest of changes to a board.

As Regan noted, when Jira was first built, it was built with a single team in mind. Today, there’s a mix of teams from different departments that use it. So while a singular permissions model for all of Jira worked for one team, it doesn’t make sense anymore when the whole company uses the product. In the new Jira then, the permissions model is project-based. “So if we wanted to start a team right now and build a product, we could design our board, customize our own issues, build our own workflows — and we could do it without having to find the IT guy down the hall,” he noted.

One feature the team seems to be especially proud of is roadmaps. That’s a new feature in Jira that makes it easier for teams to see the big picture. Like with boards, it’s easy enough to change the roadmap by just dragging the different larger chunks of work (or “epics,” in Agile parlance) to a new date.

“It’s a really simple roadmap,” Brereton explained. “It’s that way by design. But the problem we’re really trying to solve here is to bring in any stakeholder in the business and give them one view where they can come in at any time and know that what they’re looking at is up to date. Because it’s tied to your real work, you know that what you’re looking at is up to date, which seems like a small thing, but it’s a huge thing in terms of changing the way these teams work for the positive.”

The Atlassian team also redesigned what’s maybe the most-viewed page of the service: the Jira issue. Now, issues can have attachments of any file type, for example, making it easier to work with screenshots or files from designers.

Jira now also features a number of new APIs for integrations with Bitbucket and GitHub (which launched earlier this month), as well as InVision, Slack, Gmail and Facebook for Work.

With this update, Atlassian is also increasing the user limit to 5,000 seats, and Jira now features compliance with three different ISO certifications and SOC 2 Type II.


By Frederic Lardinois

Seva snares $2.4M seed investment to find info across cloud services

Seva, a New York City startup that wants to help customers find content wherever it lives across SaaS products, announced a $2.4 million seed round today. Avalon Ventures led the round with participation from Studio VC and Datadog founder and CEO Olivier Pomel.

Company founder and CEO Sanjay Jain says he started the company because he personally felt the frustration of having to hunt across different cloud services to find the information he was looking for. When he began researching the idea for the company, he found others who complained about the same fragmentation.

“Our fundamental vision is to change the way that knowledge workers acquire the information they need to do their jobs from one where they have to spend a ton of time actually seeking it out to one where the Seva platform can prescribe the right information at the right time when and where the knowledge worker actually needs it, regardless of where it lives.”

Seva, which is currently in beta, certainly isn’t the first company to try to solve this issue. Jain believes that with a modern application of AI, machine learning and single sign-on, Seva can provide a much more user-centric approach than past solutions, which were limited simply because the technology wasn’t there yet.

The way they do this is by looking across different information types. Today Seva supports a range of products including Gmail, Google Calendar, Google Drive, Box, Dropbox, Slack, Jira and Confluence. Jain says they will be adding additional services over time.

Screenshot: Seva

Customers can link Seva to these products by simply selecting one and entering the user credentials. Seva inherits all of the security and permissioning applied to each of the services, so when it begins pulling information from different sources, it doesn’t violate any internal permissioning in the process.

Jain says once connected to these services, Seva can start making logical connections between information wherever it lives. A salesperson might have an appointment with a customer in his or her calendar, information about the customer in a CRM and a training video related to the customer visit. Seva can deliver all of this information as a package, which users can share with one another within the platform, giving it a collaborative element.

Seva currently has six employees, but with the new funding is looking to hire a couple more engineers to add to the team. Jain hopes the money will be a bridge to a Series A round at the end of next year, by which time the product will be generally available.


By Ron Miller

Jeff Bezos is just fine taking the Pentagon’s $10B JEDI cloud contract

Some tech companies might have a problem taking money from the Department of Defense, but Amazon isn’t one of them, as CEO Jeff Bezos made clear today at the Wired25 conference. Just last week, Google pulled out of the running for the Pentagon’s $10 billion, 10-year JEDI cloud contract, but Bezos suggested that he was happy to take the government’s money.

Bezos has been surprisingly quiet about the contract up until now, but his company has certainly attracted plenty of attention from the companies competing for the JEDI deal. Just last week IBM filed a formal protest with the Government Accountability Office claiming that the contract was stacked in favor of one vendor. And while IBM didn’t name that vendor directly, the clear implication was that it was the company owned by Bezos.

Last summer Oracle also filed a protest, complaining that it believed the government had set up the contract to favor Amazon, a charge DOD spokesperson Heather Babb denied. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said last month.

While competitors are clearly worried about Amazon, which has a substantial lead in the cloud infrastructure market, the company itself has kept quiet on the deal until now. Bezos cast his company’s support in terms of patriotism and leadership.

“Sometimes one of the jobs of the senior leadership team is to make the right decision, even when it’s unpopular. And if big tech companies are going to turn their back on the US Department of Defense, this country is going to be in trouble,” he said.

“I know everyone is conflicted about the current politics in this country, but this country is a gem,” he added.

While Google tried to frame its decision as taking a principled stand against misuse of technology by the government, Bezos chose another tack, stating that all technology can be used for good or ill. “Technologies are always two-sided. You know there are ways they can be misused as well as used, and this isn’t new,” Bezos told Wired25.

He’s not wrong, of course, but it’s hard not to look at the size of the contract and see it as purely a business decision on his part. Amazon is as hot for that $10 billion contract as any of its competitors. What’s different in this talk is that Bezos made it sound like a purely patriotic decision, rather than an economic one.

The Pentagon’s JEDI contract could have a value of up to $10 billion with a maximum length of 10 years. The contract is framed as a two-year deal with two three-year options and a final two-year option. The DOD can opt out before exercising any of the options.

Bidding for the contract closed last Friday. The DOD is expected to choose the winning vendor next April.


By Ron Miller

Celonis brings intelligent process automation software to the cloud

Celonis has been helping companies analyze and improve their internal processes using machine learning. Today the company announced it was providing that same solution as a cloud service with a few nifty improvements you won’t find on prem.

The new approach, called Celonis Intelligent Business Cloud, allows customers to analyze a workflow, find inefficiencies and offer improvements very quickly. Companies typically follow a workflow that has developed over time and very rarely think about why it developed the way it did, or how to fix it. If they do, it usually involves bringing in consultants to help. Celonis puts software and machine learning to bear on the problem.

Co-founder and CEO Alexander Rinke says that his company deals with massive volumes of data and moving all of that to the cloud makes sense. “With Intelligent Business Cloud, we will unlock that [on prem data], bring it to the cloud in a very efficient infrastructure and provide much more value on top of it,” he told TechCrunch.

The idea is to speed up the whole ingestion process, allowing a company to see the inefficiencies in its business processes very quickly. Rinke says it starts with ingesting data from sources such as Salesforce or SAP and then creating a visual view of the process flow. There may be hundreds of variants of the main process workflow, but you can see which ones would give you the most value to change, based on the number of times each variation occurs.
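
Conceptually, surfacing the most common variants is a counting problem over an event log. Here is a toy sketch (our own illustration, not Celonis code) that groups events into per-case traces and ranks the variants by frequency:

```python
from collections import Counter, defaultdict

# Toy event log: (case_id, activity), already ordered by timestamp per case.
events = [
    ("order-1", "create"), ("order-1", "approve"), ("order-1", "ship"),
    ("order-2", "create"), ("order-2", "ship"),                # skips approval
    ("order-3", "create"), ("order-3", "approve"), ("order-3", "ship"),
]

traces = defaultdict(list)
for case_id, activity in events:
    traces[case_id].append(activity)

# Each distinct sequence of activities is one process variant.
variants = Counter(tuple(trace) for trace in traces.values())
for variant, count in variants.most_common():
    print(count, "->", " -> ".join(variant))
```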

Screenshot: Celonis

By packaging the Celonis tools as a cloud service, the company is reducing the complexity of running and managing them. It is also introducing an app store with over 300 pre-packaged options for popular products like Salesforce and ServiceNow and popular processes like order-to-cash. This should also help get customers up and running much more quickly.

New Celonis App Store. Screenshot: Celonis

The cloud service also includes an Action Engine, which Rinke describes as a big step toward moving Celonis from being purely analytical to operational. “Action Engine focuses on changing and improving processes. It gives workers concrete info on what to do next. For example, in process analysis, it would notice on-time delivery isn’t great because order-to-cash is too slow. It helps accelerate changes in system configuration,” he explained.

Celonis Action Engine. Screenshot: Celonis

The new cloud service is available today. Celonis was founded in 2011 and has raised over $77 million. The most recent round was a $50 million Series B at a valuation over $1 billion.


By Ron Miller

Anaplan hits the ground running with strong stock market debut up over 42 percent

You might think that Anaplan CEO Frank Calderoni would have had a few sleepless nights this week. His company picked a bad week to go public, as market instability rocked tech stocks. Still, he wasn’t worried, and today the company had, by any measure, a successful debut, with the stock soaring over 42 percent. As of 4 pm ET it hit $24.18, up from the IPO price of $17. Not a bad way to launch your company.

Stock Chart: Yahoo Finance

“I feel good because it really shows the quality of the company, the business model that we have and how we’ve been able to build a growing successful business, and I think it provides us with a tremendous amount of opportunity going forward,” Calderoni told TechCrunch.

Calderoni joined the company a couple of years ago, seeming to emerge from Silicon Valley central casting: he was formerly CFO at Red Hat and Cisco, with stints at IBM and SanDisk. He said he has often wished there had been a tool like Anaplan when he was in charge of a several-thousand-person planning operation at Cisco. He indicated that while that operation was successful, it could have been even more so with a tool like Anaplan.

“The planning phase has not had much change in several decades. I’ve been part of it and I’ve dealt with a lot of the pain. And so having something like Anaplan, I see it’s really being a disrupter in the planning space because of the breadth of the platform that we have. And then it goes across organizations to sales, supply chain, HR and finance, and as we say, really connects the data, the people and the plan to make for better decision making as a result of all that,” he said.

Calderoni describes Anaplan as a planning and data analysis tool. In his previous jobs, he says, he spent a ton of time just gathering data and making sure they had the right data, but precious little time on analysis. In his view, Anaplan lets companies concentrate more on the crucial analysis phase.

“Anaplan allows customers to really spend their time on what I call forward planning where they can start to run different scenarios and be much more predictive, and hopefully be able to, as we’ve seen a lot of our customers do, forecast more accurately,” he said.

Anaplan was founded in 2006 and raised almost $300 million along the way. It achieved a lofty valuation of $1.5 billion in its last round, a $60 million raise in 2017. The company has just under 1,000 customers, including Del Monte, VMware, Box and United.

Calderoni says that although the company has 40 percent of its business outside the US, there are plenty of markets left to conquer, and it hopes to use today’s cash infusion in part to continue expanding into a worldwide company.


By Ron Miller

IBM files formal JEDI protest a day before bidding process closes

IBM announced yesterday that it has filed a formal protest with the U.S. Government Accountability Office over the structure of the Pentagon’s winner-take-all $10 billion, 10-year JEDI cloud contract. The protest came just a day before the bidding process was scheduled to close. As IBM put it in a blog post, it takes issue with the single-vendor approach. It is certainly not alone.

Just about every vendor short of Amazon, which has remained mostly quiet, has been complaining about this strategy. IBM certainly faces a tough fight going up against Amazon and Microsoft.

IBM doesn’t disguise the fact that it thinks the contract has been written for Amazon to win, and it believes the one-vendor approach simply doesn’t make sense. “No business in the world would build a cloud the way JEDI would and then lock in to it for a decade. JEDI turns its back on the preferences of Congress and the administration, is a bad use of taxpayer dollars and was written with just one company in mind,” IBM wrote in the blog post explaining why it was protesting the deal before a decision was made or the bidding was even closed.

For the record, DOD spokesperson Heather Babb told TechCrunch last month that the bidding is open and no vendor is favored. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said.

Much like Oracle, which filed a protest of its own back in August, IBM is a traditional vendor that was late to the cloud. It began a journey to build a cloud business in 2013 when it purchased Infrastructure as a Service vendor SoftLayer and has been using its checkbook to buy software services to add on top of SoftLayer ever since. IBM has concentrated on building cloud services around AI, security, big data, blockchain and other emerging technologies.

Both IBM and Oracle have a problem with the one-vendor approach, especially one that locks in the government for a 10-year period. It’s worth pointing out that the contract is actually an initial two-year deal with two additional three-year options and a final two-year option. The DOD has left open the possibility that this might not go the entire 10 years.

It’s also worth putting the contract in perspective. While 10 years and $10 billion is nothing to sneeze at, neither is it as market-altering as it might appear, not when some are predicting the cloud will be a $100 billion-a-year market very soon.

IBM uses the blog post as a kind of sales pitch for why it’s a good choice, while at the same time pointing out the flaws in the single-vendor approach and complaining that it’s geared toward a single unnamed vendor that we all know is Amazon.

The bidding process closes today, and unless something changes as a result of these protests, the winner will be selected next April.


By Ron Miller

Zuora partners with Amazon Pay to expand subscription billing options

Zuora, the SaaS company helping organizations manage payments for subscription businesses, announced today that it had been selected as a Premier Partner in the Amazon Pay Global Partner Program. 

The “Premier Partner” distinction means businesses using Zuora’s billing platform can now easily integrate Amazon’s digital payment system as an option during checkout or recurring payment processes. 

The strategic rationale for Zuora is clear, as the partnership expands the company’s product offering to prospective and existing customers.  The ability to support a wide array of payment methodologies is a key value proposition for subscription businesses that enables them to service a larger customer base and provide a more seamless customer experience.

It also doesn’t hurt to have a deep-pocketed ally like Amazon in a fairly early-stage industry.  With omnipotent tech titans waging war over digital payment dominance, Amazon has reportedly doubled down on efforts to spread Amazon Pay usage, cutting into its own margins and offering incentives to retailers.

As adoption of Amazon Pay spreads, subscription businesses will be compelled to offer the service as an available payment option and Zuora should benefit from supporting early billing integration.

For Amazon Pay, teaming up with Zuora provides direct access to Zuora’s customer base, which caters to tens of millions of subscribers. 

With Zuora minimizing the complexity of adding payment options, which can often disrupt an otherwise unobtrusive subscription purchase experience, the partnership should help spur Amazon Pay adoption and reduce potential friction.

“By extending the trust and convenience of the Amazon experience to Zuora, merchants around the world can now streamline the subscription checkout experience for their customers,” said Vice President of Amazon Pay, Patrick Gauthier.  “We are excited to be working with Zuora to accelerate the Amazon Pay integration process for their merchants and provide a fast, simple and secure payment solution that helps grow their business.”

The world subscribed

The collaboration with Amazon Pay represents another milestone for Zuora, which completed its IPO in April of this year and is now looking to further differentiate its offering from competing in-house systems or large incumbents in the Enterprise Resource Planning (ERP) space, such as Oracle or SAP.   

Going forward, Zuora hopes to play a central role in ushering a broader shift towards a subscription-based economy. 

Tien Tzuo, founder and CEO of Zuora, told TechCrunch he wants the company to help businesses first realize they should be in the subscription economy and then provide them with the resources necessary to flourish within it.

“Our vision is the world subscribed,” said Tzuo. “We want to be the leading company that has the right technology platform to get companies to be successful in the subscription economy.”

The partnership will launch with publishers “The Seattle Times” and “The Telegraph”, with both now offering Amazon Pay as a payment method while running on the Zuora platform.


By Arman Tabatabai

Snowflake scoops up another blizzard of cash with $450 million round

When Snowflake, the cloud data warehouse, landed a $263 million investment earlier this year, CEO Bob Muglia speculated that it would be the last money his company would need before an eventual IPO. But just nine months after that statement, the company has announced a second, even larger round. This time it’s getting $450 million, as an unexpected level of growth led it to seek additional cash.

Sequoia Capital led the round, joined by new investor Meritech Capital and existing investors Altimeter Capital, Capital One Growth Ventures, Madrona Venture Group, Redpoint Ventures, Sutter Hill Ventures and Wing Ventures. Today’s round brings the total raised to over $928 million with $713 million coming just this year. That’s a lot of dough.

Oh, and the valuation has skyrocketed too, from $1.5 billion in January to $3.5 billion with today’s investment. “We are increasing the valuation from the prior round substantially, and it’s driven by the growth numbers of almost quadrupling the revenue and tripling the customer base,” company CFO Thomas Tuchscherer told TechCrunch.

At the time of the $263 million round, Muglia was convinced the company had enough funds and that the next fundraise would be an IPO. “We have put ourselves on the path to IPO. That’s our mid- to long-term plan. This funding allows us to go directly to IPO and gives us sufficient capital that, if we choose, IPO would be our next funding step,” he said in January.

Tuchscherer said that was in fact the plan at the time of the first batch of funding. He joined the company partly because of his experience bringing Talend public in 2016, but he said the growth has been so phenomenal that they felt it was necessary to change course.

“When we raised $263 million earlier in the year, we raised based on a plan that was ambitious in terms of growth and investment. We are exceeding and beating that, and it prompted us to explore how do we accelerate investment to continue driving the company’s growth,” he said.

Running on both Amazon Web Services and Microsoft Azure, which Snowflake added as a supported platform earlier this year, certainly contributed to the increased sales, and forced the company to rethink the amount of money it would take to fuel its growth spurt.

“I think it’s very important as a distinction that we view the funding as being customer driven, in the sense that in order to meet the demand that we’re seeing in the market for Snowflake, we have to invest in our infrastructure, as well as in our R&D capacity. So the funding that we’re raising now is meant to finance those two core investments,” he stressed.

The number of employees is skyrocketing as the company adds customers. Just eight months ago the company had around 350 employees; today it has close to 650. Tuchscherer expects that to grow to between 900 and 1,000 by the end of January, which is not that far off.

As for that IPO, surely that is still a goal, but the growth simply got in the way. “We are building the company to be autonomous and to be a large independent company. It’s definitely on the horizon,” he said.

While Tuchscherer wouldn’t definitively say that the company is looking to support at least one more cloud platform in addition to Amazon and Microsoft, he strongly hinted that such a prospect could happen.

The company also plans to plunge a lot of money into the sales team, building out new sales offices in the US and doubling its presence around the world, while also expanding the engineering and R&D teams to broaden its product offerings.

Just this year alone the company has added Netflix, Office Depot, DoorDash, Netgear, Ebates and Yamaha as customers. Other customers include Capital One, Lions Gate and Hubspot.


By Ron Miller

Google Cloud expands its networking features with Cloud NAT

It’s a busy week for news from Google Cloud, which is hosting its Next event in London. Today, the company used the event to launch a number of new networking features. The marquee launch today is Cloud NAT, a new service that makes it easier for developers to build cloud-based services that don’t have public IP addresses and can only be accessed from applications within a company’s virtual private cloud.

As Google notes, building this kind of setup was already possible, but it wasn’t easy. Obviously, this is a pretty common use case, though, so with Cloud NAT, Google now offers a fully managed service that handles all the network address translation (hence the NAT) and provides access to these private instances behind the Cloud NAT gateway.

Cloud NAT supports Google Compute Engine virtual machines as well as Google Kubernetes Engine containers, and offers both a manual mode where developers can specify their IPs and an automatic mode where IPs are automatically allocated.
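
For a sense of what configuring this might look like programmatically, here is a hedged sketch against the Compute Engine API. The project, region and router names are placeholders of our own, and the console or gcloud would work just as well:

```python
from googleapiclient import discovery

# A hedged sketch, not Google's documented quickstart: attach a NAT config
# to an existing Cloud Router via the Compute Engine API. Requires
# google-api-python-client and application-default credentials.
compute = discovery.build("compute", "v1")

nat_config = {
    "name": "my-nat",
    # Automatic mode: Google allocates the external NAT IPs.
    "natIpAllocateOption": "AUTO_ONLY",
    # NAT traffic from every subnet IP range in the region.
    "sourceSubnetworkIpRangesToNat": "ALL_SUBNETWORKS_ALL_IP_RANGES",
}

operation = compute.routers().patch(
    project="my-project",        # placeholder
    region="us-central1",        # placeholder
    router="my-router",          # assumes this Cloud Router already exists
    body={"nats": [nat_config]},
).execute()
print(operation["status"])
```

Switching `natIpAllocateOption` to `MANUAL_ONLY` and supplying reserved addresses corresponds to the manual mode described above.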

Also new in today’s release is Firewall Rules Logging, which is now in beta. Using this feature, admins can audit, verify and analyze the effects of their firewall rules. That means when there are repeated connection attempts that the firewall blocked, you can now analyze those and see whether somebody was up to no good or whether somebody misconfigured the firewall. Because the data is only delayed by about five seconds, the service provides near real-time access to this data — and you can obviously tie this in with other services like Stackdriver Logging, Cloud Pub/Sub and BigQuery to create alerts and further analyze the data.
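
As a sketch of that last analysis step, assuming the blocked-connection logs have been exported to a hypothetical BigQuery table named `logs.firewall` (the table and field layout below are assumptions about one plausible export, not a documented schema), ranking sources by denied attempts might look like this:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical export layout: adjust the table and field names to match
# however your log sink is actually configured.
query = """
    SELECT jsonPayload.connection.src_ip AS source,
           COUNT(*) AS blocked_attempts
    FROM `my-project.logs.firewall`
    WHERE jsonPayload.disposition = 'DENIED'
    GROUP BY source
    ORDER BY blocked_attempts DESC
    LIMIT 10
"""
for row in client.query(query):  # iterating waits for the job to finish
    print(row.source, row.blocked_attempts)
```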

Also new today are managed TLS certificates for HTTPS load balancers. The idea here is to take the hassle out of managing TLS certificates (the kind that ensure your user’s browser creates a secure connection to your app) when there is a load balancer in play. This feature, too, is now in beta.


By Frederic Lardinois

Google expands its identity management portfolio for businesses and developers

Over the course of the last year, Google has launched a number of services that bring to other companies the same BeyondCorp model for managing access to a company’s apps and data without a VPN that it uses internally. Google’s flagship product for this is Cloud Identity, which is essentially Google’s BeyondCorp, but packaged for other businesses.

Today, at its Cloud Next event in London, it’s expanding this portfolio of Cloud Identity services with three new products and features that enable developers to adopt this way of thinking about identity and access for their own apps and that make it easier for enterprises to adopt Cloud Identity and make it work with their existing solutions.

The highlight of today’s announcements, though, is Cloud Identity for Customers and Partners, which is now in beta. While Cloud Identity is very much meant for employees at a larger company, this new product allows developers to build into their own applications the same kind of identity and access management services.

“Cloud Identity is how we protect our employees and you protect your workforce,” Karthik Lakshminarayanan, Google’s product management director for Cloud Identity, said in a press briefing ahead of the announcement. “But what we’re increasingly finding is that developers are building applications and are also having to deal with identity and access management. So if you’re building an application, you might be thinking about accepting usernames and passwords, or you might be thinking about accepting social media as an authentication mechanism.”

This new service allows developers to build in multiple ways of authenticating the user, including through email and password, Twitter, Facebook, their phones, SAML, OIDC and others. Google then handles all of that authentication work. Google will offer both client-side (web, iOS and Android) and server-side SDKs (with support for Node.js, Java, Python and other languages).
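
Google didn’t publish the SDK surface in this announcement, but the product is closely related to Firebase Authentication, so a server-side token check might plausibly resemble the Firebase Admin SDK sketch below; treat the exact API as an assumption:

```python
import firebase_admin
from firebase_admin import auth

# Hedged sketch: whether Cloud Identity for Customers and Partners exposes
# exactly this surface is an assumption; this is the Firebase Admin SDK.
firebase_admin.initialize_app()  # picks up GOOGLE_APPLICATION_CREDENTIALS

def current_user(id_token: str) -> str:
    """Verify an ID token sent by the client and return the user's UID.

    The client obtains id_token after signing in with email/password, a
    social provider, SAML or OIDC; Google runs that authentication flow.
    """
    decoded = auth.verify_id_token(id_token)
    return decoded["uid"]
```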

“They no longer have to worry about getting hacked and their passwords and their user credentials getting compromised,” added Lakshminarayanan. “They can now leave that to Google, and the exact same scale that we have, the security that we have, the reliability that we have — that we are using to protect employees in the cloud — can now be used to protect that developer’s applications.”

In addition to Cloud Identity for Customers and Partners, Google is also launching a new feature for the existing Cloud Identity service, which brings support for traditional LDAP-based applications and IT services like VPNs to Cloud Identity. This feature is, in many ways, an acknowledgment that most enterprises can’t simply turn on a new security paradigm like BeyondCorp/Cloud Identity. With support for secure LDAP, these companies can still make it easy for their employees to connect to these legacy applications while still using Cloud Identity.

“As much as Google loves the cloud, a mantra that Google has is ‘let’s meet customers where they are.’ We know that customers are embracing the cloud, but we also know that they have a massive, massive footprint of traditional applications,” Lakshminarayanan explained. He noted that most enterprises today run two solutions: one that provides access to their on-premise applications and another that provides the same services for their cloud applications. Cloud Identity now natively supports access to many of these legacy applications, including Aruba Networks (HPE), Itopia, JAMF, Jenkins (Cloudbees), OpenVPN, Papercut, pfSense (Netgate), Puppet, Sophos and Splunk. Indeed, as Google notes, virtually any application that supports LDAP over SSL can work with this new service.

Finally, the third new feature Google is launching today is context-aware access for those enterprises that already use its Cloud Identity-Aware Proxy (yes, those names are all a mouthful). The idea here is to help enterprises provide access to cloud resources based on the identity of the user and the context of the request — all without using a VPN. That’s pretty much the promise of BeyondCorp in a nutshell, and this implementation, which is now in beta, allows businesses to manage access based on the user’s identity and a device’s location and its security status, for example. Using this new service, IT managers could restrict access to one of their apps to users in a specific country, for example.


By Frederic Lardinois

Google introduces dual-region storage buckets to simplify data redundancy

Google is playing catch-up in the cloud, and as such it wants to provide flexibility to differentiate itself from AWS and Microsoft. Today, the company announced a couple of new options to help separate it from the cloud storage pack.

Storage may seem stodgy, but it’s a primary building block for many cloud applications. Before you can build an application you need the data that will drive it, and that’s where the storage component comes into play.

One of the issues companies have as they move data to the cloud is making sure it stays close to the application to reduce latency. Customers also require redundancy in the event of a catastrophic failure, but still need low-latency access. The latter has been a hard problem to solve until today, when Google introduced a new dual-region storage option.

As Google described it in the blog post announcing the new feature, “With this new option, you write to a single dual-regional bucket without having to manually copy data between primary and secondary locations. No replication tool is needed to do this and there are no network charges associated with replicating the data, which means less overhead for you storage administrators out there. In the event of a region failure, we transparently handle the failover and ensure continuity for your users and applications accessing data in Cloud Storage.”

This gives companies redundancy with low latency, while letting them control where the data lives without having to move it manually should the need arise.
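
With the google-cloud-storage client library, creating such a bucket is mostly a matter of choosing a dual-region location code. A minimal sketch, where the bucket name is a placeholder and the exact location code ("NAM4" for the U.S. pairing) is our assumption:

```python
from google.cloud import storage

client = storage.Client()

# Objects written to this bucket are stored redundantly across both regions
# of the dual-region; failover between them is handled transparently.
bucket = client.create_bucket("my-dual-region-bucket", location="NAM4")
print(bucket.location)
```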

Knowing what you’re paying

Companies don’t always require instant access to data, and Google (and other cloud vendors) offer a variety of storage options, making it cheaper to store and retrieve archived data. As of today, Google is offering a clear way to determine costs based on the storage types customers choose. While it might not seem revolutionary to let customers know what they are paying, Dominic Preuss, Google’s director of product management, says it hasn’t always been a simple matter to calculate these kinds of costs in the cloud. Google decided to simplify it by clearly outlining the costs for medium-term (Nearline) and long-term (Coldline) storage across multiple regions.

As Google describes it, “With multi-regional Nearline and Coldline storage, you can access your data with millisecond latency, it’s distributed redundantly across a multi-region (U.S., EU or Asia), and you pay archival prices. This is helpful when you have data that won’t be accessed very often, but still needs to be protected with geographically dispersed copies, like media archives or regulated content. It also simplifies management.”

Under the new plan, you can select the type of storage you need and the kind of regional coverage you want, and you can see exactly what you are paying.
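
Expressed in the same client library, picking the class and the coverage amounts to two settings on the bucket; another hedged sketch with a hypothetical bucket name:

```python
from google.cloud import storage

client = storage.Client()

# Nearline storage spread across the US multi-region: archival pricing with
# geographically redundant copies, but still millisecond access latency.
bucket = storage.Bucket(client, name="my-archive-bucket")
bucket.storage_class = "NEARLINE"
client.create_bucket(bucket, location="US")
```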

Google Cloud storage pricing options. Chart: Google

Each of these new storage services has been designed to provide additional options for Google Cloud customers, giving them more transparency around pricing and flexibility and control over storage types, regions and the way they deal with redundancy across data stores.


By Ron Miller

Google’s Apigee officially launches its API monitoring service

It’s been about two years since Google acquired API management service Apigee. Today, the company is announcing new extensions that make it easier to integrate the service with a number of Google Cloud services, as well as the general availability of the company’s API monitoring solution.

Apigee API monitoring allows operations teams to get more insight into how their APIs are performing. The idea here is to make it easy for these teams to figure out when there’s an issue and what its root cause is by giving them very granular data. “APIs are now part of how a lot of companies are doing business,” Ed Anuff, Apigee’s former SVP of product strategy and now Google’s product and strategy lead for the service, told me. “So that tees up the need for API monitoring.”

Anuff also told me that he believes that it’s still early days for enterprise API adoption — but that also means that Apigee is currently growing fast as enterprise developers now start adopting modern development techniques. “I think we’re actually still pretty early in enterprise adoption of APIs,” he said. “So what we’re seeing is a lot more customers going into full production usage of their APIs. A lot of what we had seen before was people using it for maybe an experiment or something that they were doing with a couple of partners.” He also attributed part of the recent growth to customers launching more mobile applications where APIs obviously form the backbone of much of the logic that drives those apps.

API Monitoring was already available as a beta, but it’s now generally available to all Apigee customers.

Given that it’s now owned by Google, it’s no surprise that Apigee is also launching deeper integrations with Google’s cloud services now — specifically services like BigQuery, Cloud Firestore, Pub/Sub, Cloud Storage and Spanner. Some Apigee customers are already using this to store every message passed through their APIs to create extensive logs, often for compliance reasons. Others use Cloud Firestore to personalize content delivery for their web users or to collect data from their APIs and then send that to BigQuery for analysis.
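
As a rough illustration of that pattern (our own sketch, not an Apigee extension), an API layer can publish every transaction to Pub/Sub, from which a pipeline can land the messages in BigQuery or Cloud Storage for compliance logging. The project and topic names below are placeholders:

```python
import json
import time

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic = publisher.topic_path("my-project", "api-audit-log")  # placeholders

def log_api_call(method: str, path: str, status: int) -> None:
    """Publish one API transaction for downstream storage and analysis."""
    message = {"method": method, "path": path, "status": status, "ts": time.time()}
    future = publisher.publish(topic, json.dumps(message).encode("utf-8"))
    future.result()  # block until the message is accepted

log_api_call("GET", "/v1/orders/123", 200)
```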

Anuff stressed that Apigee remains just as open to third-party integrations as it always was. That is part of the core promise of APIs, after all.


By Frederic Lardinois

Egnyte hauls in $75M investment led by Goldman Sachs

Egnyte launched in 2007, just two years after Box, but unlike its enterprise counterpart, which went all-cloud and raised hundreds of millions of dollars, Egnyte chose a different path: a slow and steady growth strategy and a hybrid niche, recognizing that companies were going to keep some content in the cloud and some on prem. Up until today it had raised a rather modest $62.5 million, and hadn’t taken a dime since 2013, but that all changed when the company announced a whopping $75 million investment.

The entire round came from a single investor, Goldman Sachs’ Private Capital Investing arm, part of Goldman’s Special Situations group. Holger Staude, vice president of Goldman Sachs Private Capital Investing, will join Egnyte’s board under the terms of the deal. He says Goldman liked what it saw: a steady company poised for bigger growth with the right influx of capital. In fact, the company has had more than eight straight quarters of growth and has been cash flow positive since Q4 2016.

“We were impressed by the strong management team and the company’s fiscal discipline, having grown their top line rapidly without requiring significant outside capital for the past several years. They have created a strong business model that we believe can be replicated with success at a much larger scale,” Staude explained.

Company CEO Vineet Jain helped start the company as a way to store and share files in a business context, but over the years he has built that into a platform that includes security and governance components. Jain also saw a market poised for growth, with companies moving increasing amounts of data to the cloud, and felt the time was right to take on more significant outside investment. His first step, he said, was to build a list of investors, but Goldman stood out.

“Goldman had reached out to us before we even started the fundraising process. There was inbound interest. They were more aggressive compared to others. Given there were prior conversations, the path to closing was shorter,” he said.

He wouldn’t discuss a specific valuation, but did say the company has grown 6x since the 2013 round and that he got what he described as “a decent valuation.” As for an IPO, he predicted this would be the final round before the company eventually goes public. “This is our last fundraise. At this level of funding, we have more than enough funding to support a growth trajectory to IPO,” he said.

Philosophically, Jain has always believed that it wasn’t necessary to hit the gas until he felt the market was really there. “I started off from a point of view to say, keep building a phenomenal product. Keep focusing on a post sales experience, which is phenomenal to the end user. Everything else will happen. So this is where we are,” he said.

Jain indicated the round isn’t about taking on money for money’s sake. He believes this is going to fuel a huge growth stage for the company. He doesn’t plan to focus these new resources strictly on the sales and marketing department, as you might expect. He wants to scale every department in the company, including engineering, post-sales and customer success.

Today the company has 450 employees and more than 14,000 customers across a range of sizes and sectors including Nasdaq, Thoma Bravo, AppDynamics and Red Bull. The deal closed at the end of last month.


By Ron Miller