Wasabi scores $112M Series C on $700M valuation to take on cloud storage hyperscalers

Taking on Amazon S3 in the cloud storage game would seem to be a foolhardy proposition, but Wasabi has found a way to build storage cheaply and pass the savings on to customers. Today the Boston-based startup announced a $112 million Series C investment on a $700 million valuation.

Fidelity Management & Research Company led the round with participation from previous investors. The company reports that it has now raised $219 million in equity so far, along with additional debt financing; it takes a lot of money to build a storage business.

CEO David Friend says that business is booming and he needed the money to keep it going. “The business has just been exploding. We achieved a roughly $700 million valuation on this round, so you can imagine that business is doing well. We’ve tripled in each of the last three years and we’re ahead of plan for this year,” Friend told me.

He says that demand continues to grow and he’s been getting requests internationally. That was one of the primary reasons he went looking for more capital. What’s more, data sovereignty laws require that certain types of sensitive data, like financial and healthcare records, be stored in-country, so the company needs to build more capacity where it’s needed.

He says they have nailed down the process of building storage, typically inside co-location facilities, and during the pandemic they actually became more efficient as they hired a firm to put together the hardware for them onsite. They also put channel partners like managed service providers (MSPs) and value added resellers (VARs) to work by incentivizing them to sell Wasabi to their customers.

Wasabi storage starts at $5.99 per terabyte per month. That’s a heck of a lot cheaper than Amazon S3, which starts at $0.023 per gigabyte ($23.00 a terabyte) for the first 50 terabytes, considerably more than Wasabi’s offering.
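As a back-of-the-envelope sketch, the two list prices quoted above work out roughly as follows. This models only the published starting storage rates; real bills also depend on egress and request fees (and S3’s tiering above 50 TB), which are not included here.

```python
# Rough monthly storage-cost comparison using the list prices quoted above.
# Ignores egress, request fees, and S3 volume tiers beyond the first 50 TB.

WASABI_PER_TB = 5.99          # $/TB/month, flat list price
S3_PER_GB_FIRST_50TB = 0.023  # $/GB/month, S3 Standard first-50-TB tier

def monthly_cost(tb: float) -> tuple[float, float]:
    """Return (wasabi, s3) monthly storage cost in dollars for `tb` terabytes."""
    wasabi = tb * WASABI_PER_TB
    s3 = tb * 1000 * S3_PER_GB_FIRST_50TB  # 1 TB = 1,000 GB (decimal)
    return wasabi, s3

for tb in (1, 10, 50):
    w, s = monthly_cost(tb)
    print(f"{tb:>3} TB: Wasabi ${w:,.2f} vs S3 ${s:,.2f} (~{s / w:.1f}x)")
```

At these rates, S3 Standard lists at roughly 3.8 times Wasabi’s price per stored terabyte, which is the gap the article describes.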

But Friend admits that Wasabi still faces headwinds as a startup. No matter how cheap it is, companies want to be sure it’s going to be there for the long haul, and a round this size from an investor with the pedigree of Fidelity will give the company more credibility with large enterprise buyers, without the demands that typically come with venture capital firms.

“Fidelity to me was the ideal investor. […] They don’t want a board seat. They don’t want to come in and tell us how to run the company. They are obviously looking toward an IPO or something like that, and they are just interested in being an investor in this business because cloud storage is a virtually unlimited market opportunity,” he said.

He sees his company as the typical kind of market irritant. He says that his company has run away from competitors in his part of the market, and the hyperscalers aren’t paying attention because his business remains a fraction of theirs for the time being. While an IPO is far off, he took on an institutional investor this early because he believes one is eventually possible.

“I think this is a big enough market we’re in, and we were lucky to get in at just the right time with the right kind of technology. There’s no doubt in my mind that Wasabi could grow to be a fairly substantial public company doing cloud infrastructure. I think we have a nice niche cut out for ourselves, and I don’t see any reason why we can’t continue to grow,” he said.


By Ron Miller

DigitalOcean says data breach exposed customer billing data

DigitalOcean has emailed customers warning of a data breach involving customers’ billing data, TechCrunch has learned.

The cloud infrastructure giant told customers in an email on Wednesday, obtained by TechCrunch, that it has “confirmed an unauthorized exposure of details associated with the billing profile on your DigitalOcean account.” The company said the person “gained access to some of your billing account details through a flaw that has been fixed” over a two-week window between April 9 and April 22.

The email said customer billing names and addresses were accessed, as well as the last four digits of the payment card, its expiry date, and the name of the card-issuing bank. The company said that customers’ DigitalOcean accounts were “not accessed,” and passwords and account tokens were “not involved” in this breach.

“To be extra careful, we have implemented additional security monitoring on your account. We are expanding our security measures to reduce the likelihood of this kind of flaw occuring [sic] in the future,” the email said.

DigitalOcean said it fixed the flaw and notified data protection authorities, but it’s not clear what the apparent flaw was that put customer billing information at risk.

In a statement, DigitalOcean’s security chief Tyler Healy said 1% of billing profiles were affected by the breach, but declined to address our specific questions, including how the vulnerability was discovered and which authorities have been informed.

Companies with customers in Europe are subject to GDPR, and can face fines of up to 4% of their global annual revenue.

Last year, the cloud company raised $100 million in new debt, followed by another $50 million round, months after laying off dozens of staff amid concerns about the company’s financial health. In March, the company went public, raising about $775 million in its initial public offering. 


By Zack Whittaker

Wasabi announces $30M in debt financing as cloud storage business continues to grow

We may be in the thick of a pandemic with all of the economic fallout that comes from that, but certain aspects of technology don’t change no matter the external factors. Storage is one of them. In fact, we are generating more digital stuff than ever, and Wasabi, a Boston-based startup that has figured out a way to drive down the cost of cloud storage, is benefiting from that.

Today it announced a $30 million debt financing round led by Forestay Capital, the technology innovation arm of Waypoint Capital, with help from previous investors. As with the previous round, Wasabi is going with home office investors, rather than traditional venture capital firms. Today’s round brings the total raised to $110 million, according to the company.

Founder and CEO David Friend says the company needs the funds to keep up with the rapid growth. “We’ve got about 15,000 customers today, hundreds of petabytes of storage, 2500 channel partners, 250 technology partners — so we’ve been busy,” he said.

He says that revenue continues to grow in spite of the impact of COVID-19 on other parts of the economy. “Revenue grew 5x last year. It’ll probably grow 3.5x this year. We haven’t seen any real slowdown from the Coronavirus. Quarter over quarter growth will be in excess of 40% — this quarter over Q1 — so it’s just continuing on a torrid pace,” he said.

He said the money will be used mostly to continue to expand its growing infrastructure requirements. The more they store, the more data centers they need, and that takes money. He is going the debt route because his products are backed by a tangible asset: the infrastructure used to store all the data in the Wasabi system. And it turns out that debt financing is a lot cheaper than equity in terms of payback.

“Our biggest need is to build more infrastructure, because we are constantly buying equipment. We have to pay for it even before it fills up with customer data, so we’re raising another debt round now,” Friend said. He added, “Part of what we’re doing is just strengthening our balance sheet to give us access to more inexpensive debt to finance the building of the infrastructure.”

The challenge for a company like Wasabi, which is looking to capture a large chunk of the growing cloud storage market, is the infrastructure piece. It needs to keep building more capacity to meet increasing demand, while keeping costs down, which remains its primary value proposition with customers.

The money will help the company expand into new markets as many countries have data sovereignty laws that require data to be stored in-country. That requires more money and that’s the thinking behind this round.

The company launched in 2015. It previously raised $68 million in 2018.


By Ron Miller

Microsoft partners with Redis Labs to improve its Azure Cache for Redis

For a few years now, Microsoft has offered Azure Cache for Redis, a fully managed caching solution built on top of the open-source Redis project. Today, it is expanding this service by adding Redis Enterprise, Redis Labs’ commercial offering, to its platform. It’s doing so in partnership with Redis Labs, and while Microsoft will offer some basic support for the service, Redis Labs will handle most of the software support itself.

Julia Liuson, Microsoft’s corporate VP of its developer tools division, told me that the company wants to be seen as a partner to open-source companies like Redis Labs, which was among the first to change its license to prevent cloud vendors from commercializing and repackaging its free code without contributing back to the community. Last year, Redis Labs partnered with Google Cloud to bring its fully managed service to Google’s platform, so maybe it’s no surprise that we are now seeing Microsoft make a similar move.

Liuson tells me that with this new tier for Azure Cache for Redis, users will get a single bill and native Azure management, as well as the option to deploy natively on SSD flash storage. The native Azure integration should also make it easier for developers on Azure to integrate Redis Enterprise into their applications.

It’s also worth noting that Microsoft will support Redis Labs’ own Redis modules, including RediSearch, a Redis-powered search engine, as well as RedisBloom and RedisTimeSeries, which provide support for new datatypes in Redis.

“For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications,” says Liuson. “We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.”


By Frederic Lardinois

Storj brings low-cost decentralized cloud storage to the enterprise

Storj, a startup that developed a low-cost, decentralized cloud storage solution, announced a new version today called Tardigrade Decentralized Cloud Storage Service.

The new service comes with an enterprise service level agreement (SLA) that promises 99.9999999% file durability and over 99.95% availability, which it claims is on par with Amazon S3.
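To put those SLA percentages in concrete terms: an availability figure bounds the downtime a customer can expect per year, and the durability figure (nine nines) is conventionally read as an expected annual loss on the order of one object in a billion. A quick sketch of the availability arithmetic (generic math, not Storj’s contract terms):

```python
# Convert an availability SLA percentage into an annual downtime budget.

HOURS_PER_YEAR = 365 * 24  # 8,760, ignoring leap years

def max_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year permitted by an availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

print(f"99.95% -> {max_downtime_hours(99.95):.2f} hours/year")  # about 4.4 h
print(f"99.99% -> {max_downtime_hours(99.99):.2f} hours/year")
```

So the 99.95% figure allows roughly four and a half hours of unavailability per year.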

The company has come up with an unusual system to store files safely, taking advantage of excess storage capacity around the world. They are effectively doing with storage what Airbnb does with an extra bedroom, enabling people and organizations to sell that excess capacity to make extra money.

It’s fair to ask if that wouldn’t be a dangerous way to store files, but Storj Executive Chairman Ben Golub says that they have come up with a way of distributing the data across drives on their network so that no single file would ever be fully exposed.

“What we do in order to make this work is, first, before any data is uploaded, our customers encrypt the data, and they hold the keys so nobody else can decrypt the data. And then every part of a file is split into 80 pieces, of which any 30 can be used to reconstitute it. And each of those 80 pieces goes to a different drive on the network,” Golub explained.

That means even if a hacker were able to somehow get at one encrypted piece of the puzzle, he or she would need 29 others, and the encryption keys, to put the file back together again. “All a storage node operator sees is gibberish, and they only see a portion of the file. So if a bad person wanted to get your file, they would have to compromise something like 30 different networks in order to get [a single file], and even if they did that they would only have gibberish unless you also lost your encryption keys,” he said.
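The “any 30 of 80 pieces” property Golub describes is the hallmark of a k-of-n threshold scheme. Storj’s production system uses Reed-Solomon erasure coding, not the code below; as a toy illustration of the same any-k-of-n property, here is a Shamir-style byte splitter using polynomial interpolation over a small finite field. All names and parameters here are illustrative, not Storj’s implementation.

```python
import random

# Toy k-of-n secret splitting: each byte becomes 80 shares, any 30 of
# which suffice to rebuild it. Illustrative only -- Storj uses
# Reed-Solomon erasure coding, where pieces are fractions of file size.

P = 257          # prime modulus, larger than the share count and byte values
K, N = 30, 80    # threshold and share count from the article

def split_byte(secret: int, k: int = K, n: int = N) -> list[tuple[int, int]]:
    """Split one byte into n (x, y) shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_byte(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the secret byte."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

data = b"secret"
shares_per_byte = [split_byte(b) for b in data]
# Any 30 of the 80 shares per byte are enough to rebuild the data.
subset = [random.sample(s, K) for s in shares_per_byte]
assert bytes(recover_byte(s) for s in subset) == data
```

With 29 or fewer shares, the interpolation is underdetermined and every byte value remains equally likely, which is why a node operator holding a single piece learns nothing.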

The ability to buy excess capacity allows Storj to offer storage at much lower prices than typical cloud storage. Golub says his company’s list prices are one-half to one-third the price of Amazon S3 storage, and the service is S3-compatible.

The company launched in 2014 and has 20,000 users on 100,000 distributed nodes today, but this is the first time it has launched an enterprise version of the cloud storage solution.


By Ron Miller

What Nutanix got right (and wrong) in its IPO roadshow

Back in 2016, Nutanix decided to take the big step of going public. Part of that process was creating a pitch deck and presenting it during its roadshow, a coming-out party when a company goes on tour prior to its IPO and pitches itself to investors of all stripes.

It’s a huge moment in the life of any company, and after talking to CEO Dheeraj Pandey and CFO Duston Williams, it’s one we understand better. They spoke about how every detail helped define their company and demonstrate its long-term investment value to investors who might not have been entirely familiar with the startup or its technology.

Pandey and Williams reported going through more than 100 versions of the deck before they finished the one they took on the road. Pandey said they had a data room checking every fact, every number — which they then checked yet again.

In a separate Extra Crunch post, we looked at the process of building that deck. Today, we’re looking more closely at the content of the deck itself, especially the numbers Nutanix presented to the world. We want to see what investors saw more than three years ago and what’s happened since — did the company live up to its promises?

Plan of attack


By Ron Miller

OpsRamp raises $37.5M for its hybrid IT operations platform

OpsRamp, a service that helps IT teams discover, monitor, manage and — maybe most importantly — automate their hybrid environments, today announced that it has closed a $37.5 million funding round led by Morgan Stanley Expansion Capital, with participation from existing investor Sapphire Ventures and new investor Hewlett Packard Enterprise.

OpsRamp last raised funding in 2017, when Sapphire led its $20 million Series A round.

At the core of OpsRamp’s services is its AIOps platform. Using machine learning and other techniques, this service aims to help IT teams manage increasingly complex infrastructure deployments, provide intelligent alerting, and eventually automate more of their tasks. The company’s overall product portfolio also includes tools for cloud monitoring and incident management.

The company says its annual recurring revenue increased by 300 percent in 2019 (though we obviously don’t know what number it started 2019 with). In total, OpsRamp says it now has 1,400 customers on its platform and alliances with AWS, ServiceNow, Google Cloud Platform and Microsoft Azure.


According to OpsRamp co-founder and CEO Varma Kunaparaju, most of the company’s customers are mid to large enterprises. “These IT teams have large, complex, hybrid IT environments and need help to simplify and consolidate an incredibly fragmented, distributed and overwhelming technology and infrastructure stack,” he said. “The company is also seeing success in the ability of our partners to help us reach global enterprises and Fortune 5000 customers.”

Kunaparaju told me that the company plans to use the new funding to expand its go-to-market efforts and product offerings. “The company will be using the money in a few different areas, including expanding our go-to-market motion and new pursuits in EMEA and APAC, in addition to expanding our North American presence,” he said. “We’ll also be doubling-down on product development on a variety of fronts.”

Given that hybrid clouds only increase the workload for IT organizations and introduce additional tools, it’s maybe no surprise that investors are now interested in companies that offer services that rein in this complexity. If anything, we’ll likely see more deals like this one in the coming months.

“As more of our customers transition to hybrid infrastructure, we find the OpsRamp platform to be a differentiated IT operations management offering that aligns well with the core strategies of HPE,” said Paul Glaser, Vice President and Head of Hewlett Packard Pathfinder. “With OpsRamp’s product vision and customer traction, we felt it was the right time to invest in the growth and scale of their business.”


By Frederic Lardinois

Microsoft’s Azure Synapse Analytics bridges the gap between data lakes and warehouses

At its annual Ignite conference in Orlando, Fla., Microsoft today announced a major new Azure service for enterprises: Azure Synapse Analytics, which Microsoft describes as “the next evolution of Azure SQL Data Warehouse.” Like SQL Data Warehouse, it aims to bridge the gap between data warehouses and data lakes, which are often completely separate. Synapse also taps into a wide variety of other Microsoft services, including Power BI and Azure Machine Learning, as well as a partner ecosystem that includes Databricks, Informatica, Accenture, Talend, Attunity, Pragmatic Works and Adatis. It’s also integrated with Apache Spark.

The idea here is that Synapse allows anybody working with data in those disparate places to manage and analyze it from within a single service. It can be used to analyze relational and unstructured data, using standard SQL.


Microsoft also highlights Synapse’s integration with Power BI, its easy-to-use business intelligence and reporting tool, as well as Azure Machine Learning for building models.

With the Azure Synapse studio, the service provides data professionals with a single workspace for prepping and managing their data, as well as for their big data and AI tasks. There’s also a code-free environment for managing data pipelines.

As Microsoft stresses, businesses that want to adopt Synapse can continue to use their existing workloads in production with Synapse and automatically get all of the benefits of the service. “Businesses can put their data to work much more quickly, productively, and securely, pulling together insights from all data sources, data warehouses, and big data analytics systems,” writes Microsoft CVP of Azure Data, Rohan Kumar.

In a demo at Ignite, Kumar also benchmarked Synapse against Google’s BigQuery. Synapse ran the same query over a petabyte of data in 75% less time. He also noted that Synapse can handle thousands of concurrent users — unlike some of Microsoft’s competitors.


By Frederic Lardinois

Why is Dropbox reinventing itself?

According to Dropbox CEO Drew Houston, 80% of the product’s users rely on it, at least partially, for work.

It makes sense, then, that the company is refocusing to try and cement its spot in the workplace; to shed its image as “just” a file storage company (in a time when just about every big company has its own cloud storage offering) and evolve into something more immutably core to daily operations.

Earlier this week, Dropbox announced that the “new Dropbox” would be rolling out to all users. It takes the simple, shared folders that Dropbox is known for and turns them into what the company calls “Spaces” — little mini collaboration hubs for your team, complete with comment streams, AI for highlighting files you might need mid-meeting, and integrations into things like Slack, Trello and G Suite. With an overhauled interface that brings much of Dropbox’s functionality out of the OS and into its own dedicated app, it’s by far the biggest user-facing change the product has seen since launching 12 years ago.

Shortly after the announcement, I sat down with Dropbox VP of Product Adam Nash and CTO Quentin Clark. We chatted about why the company is changing things up, why they’re building this on top of the existing Dropbox product, and the things they know they just can’t change.

You can find these interviews below, edited for brevity and clarity.

Greg Kumparak: Can you explain the new focus a bit?

Adam Nash: Sure! I think you know this already, but I run products and growth, so I’m gonna have a bit of a product bias to this whole thing. But Dropbox… one of its differentiating characteristics is really that when we built this utility, this “magic folder”, it kind of went everywhere.


By Greg Kumparak

Enterprise software is hot — who would have thought?

Once considered the most boring of topics, enterprise software is now getting infused with such energy that it is arguably the hottest space in tech.

It’s been a long time coming. And it is the developers, software engineers and veteran technologists with deep experience building at-scale technologies who are energizing enterprise software. They have learned to build resilient and secure applications with open-source components through continuous delivery practices that align technical requirements with customer needs. And now they are developing application architectures and tools for at-scale development and management for enterprises to make the same transformation.

“Enterprise had become a dirty word, but there’s a resurgence going on and Enterprise doesn’t just mean big and slow anymore,” said JD Trask, co-founder of Raygun enterprise monitoring software. “I view the modern enterprise as one that expects their software to be as good as consumer software. Fast. Easy to use. Delivers value.”

The shift to scale-out computing and the rise of the container ecosystem, driven largely by startups, are disrupting the entire stack, notes Andrew Randall, vice president of business development at Kinvolk.

In advance of TechCrunch’s first enterprise-focused event, TC Sessions: Enterprise, The New Stack examined the commonalities between the numerous enterprise-focused companies who sponsor us. Their experiences help illustrate the forces at play behind the creation of the modern enterprise tech stack. In every case, the founders and CTOs recognize the need for speed and agility, with the ultimate goal of producing software that’s uniquely in line with customer needs.

We’ll explore these topics in more depth at The New Stack pancake breakfast and podcast recording at TC Sessions: Enterprise. Starting at 7:45 a.m. on Sept. 5, we’ll be serving breakfast and hosting a panel discussion on “The People and Technology You Need to Build a Modern Enterprise,” with Sid Sijbrandij, founder and CEO, GitLab, and Frederic Lardinois, enterprise writer and editor, TechCrunch, among others. Questions from the audience are encouraged and rewarded, with a raffle prize awarded at the end.

Traditional virtual machine infrastructure was originally designed to help manage server sprawl for systems-of-record software — not to scale out across a fabric of distributed nodes. The disruptors transforming the historical technology stack view the application, not the hardware, as the main focus of attention. Companies in The New Stack’s sponsor network provide examples of the shift toward software that they aim to inspire in their enterprise customers. Portworx provides persistent state for containers; NS1 offers a DNS platform that orchestrates the delivery of internet and enterprise applications; Lightbend combines the scalability and resilience of microservices architecture with the real-time value of streaming data.

“Application development and delivery have changed. Organizations across all industry verticals are looking to leverage new technologies, vendors and topologies in search of better performance, reliability and time to market,” said Kris Beevers, CEO of NS1. “For many, this means embracing the benefits of agile development in multicloud environments or building edge networks to drive maximum velocity.”

Enterprise software startups are delivering that value, while they embody the practices that help them deliver it.

The secrets to speed, agility and customer focus

Speed matters, but only if the end result aligns with customer needs. Faster time to market is often cited as the main driver behind digital transformation in the enterprise. But speed must also be matched by agility and the ability to adapt to customer needs. That means embracing continuous delivery, which Martin Fowler describes as the process that allows for the ability to put software into production at any time, with the workflows and the pipeline to support it.

Continuous delivery (CD) makes it possible to develop software that can adapt quickly, meet customer demands and provide a level of satisfaction with benefits that enhance the value of the business and the overall brand. CD has become a major category in cloud-native technologies, with companies such as CircleCI, CloudBees, Harness and Semaphore all finding their own ways to approach the problems enterprises face as they often struggle with the shift.

“The best-equipped enterprises are those [that] realize that the speed and quality of their software output are integral to their bottom line,” Rob Zuber, CTO of CircleCI, said.

Speed is also in large part why monitoring and observability have held their value and continue to be part of the larger dimension of at-scale application development, delivery and management. Better data collection and analysis, assisted by machine learning and artificial intelligence, allow companies to quickly troubleshoot and respond to customer needs with reduced downtime and tight DevOps feedback loops. Companies in our sponsor network that fit in this space include Raygun for error detection; Humio, which provides observability capabilities; InfluxData with its time-series data platform for monitoring; Epsagon, the monitoring platform for serverless architectures; and Tricentis for software testing.

“Customer focus has always been a priority, but the ability to deliver an exceptional experience will now make or break a ‘modern enterprise,’” said Wolfgang Platz, founder of Tricentis, which makes automated software testing tools. “It’s absolutely essential that you’re highly responsive to the user base, constantly engaging with them to add greater value. This close and constant collaboration has always been central to longevity, but now it’s a matter of survival.”

DevOps is a bit overplayed, but it still is the mainstay workflow for cloud-native technologies and critical to achieving engineering speed and agility in a decoupled, cloud-native architecture. However, DevOps is also undergoing its own transformation, buoyed by the increasing automation and transparency allowed through the rise of declarative infrastructure, microservices and serverless technologies. This is cloud-native DevOps. Not a tool or a new methodology, but an evolution of the longstanding practices that further align developers and operations teams — but now also expanding to include security teams (DevSecOps), business teams (BizDevOps) and networking (NetDevOps).

“We are in this constant feedback loop with our customers where, while helping them in their digital transformation journey, we learn a lot and we apply these learnings for our own digital transformation journey,” Francois Dechery, chief strategy officer and co-founder of CloudBees, said. “It includes finding the right balance between developer freedom and risk management. It requires the creation of what we call a continuous everything culture.”

Leveraging open-source components is also core in achieving speed for engineering. Open-source use allows engineering teams to focus on building code that creates or supports the core business value. Startups in this space include Tidelift and open-source security companies such as Capsule8. Organizations in our sponsor portfolio that play roles in the development of at-scale technologies include The Linux Foundation, the Cloud Native Computing Foundation and the Cloud Foundry Foundation.

“Modern enterprises … think critically about what they should be building themselves and what they should be sourcing from somewhere else,” said Chip Childers, CTO of Cloud Foundry Foundation. “Talented engineers are one of the most valuable assets a company can apply to being competitive, and ensuring they have the freedom to focus on differentiation is super important.”

You need great engineering talent, the ability to build secure and reliable systems at scale, and the trust to give those engineers direct access to hardware as a differentiator.

Is the enterprise really ready?

The bleeding edge can bleed too much for the liking of enterprise customers, said James Ford, an analyst and consultant.

“It’s tempting to live by mantras like ‘wow the customer,’ ‘never do what customers want (instead build innovative solutions that solve their need),’ ‘reduce to the max,’ … and many more,” said Bernd Greifeneder, CTO and co-founder of Dynatrace. “But at the end of the day, the point is that technology is here to help with smart answers … so it’s important to marry technical expertise with enterprise customer need, and vice versa.”

How the enterprise adopts new ways of working will affect how startups ultimately fare. The container hype has cooled a bit and technologists have more solid viewpoints about how to build out architecture.

One notable trend to watch: the role of cloud services through projects such as Firecracker. AWS Lambda is built on Firecracker, the open-source virtualization technology originally built at Amazon Web Services. Firecracker serves as a way to get the speed and density that come with containers, plus the hardware isolation and security capabilities that virtualization offers. Startups such as Weaveworks have developed a platform on Firecracker. Kata Containers, hosted by the OpenStack Foundation, also uses Firecracker.

“Firecracker makes it easier for the enterprise to have secure code,” Ford said. It reduces the attack surface. “With its minimal footprint, the user has control. It means less features that are misconfigured, which is a major security vulnerability.”

Enterprise startups are hot. How they succeed will determine how well they can stay distinctive in the face of the ever-consuming cloud services and at-scale startups that inevitably launch their own services. The answer may be in the middle, with purpose-built architectures that use open-source components such as Firecracker to provide the capabilities of containers and the hardware isolation that comes with virtualization.

Hope to see you at TC Sessions: Enterprise. Get there early. We’ll be serving pancakes to start the day. As we like to say, “Come have a short stack with The New Stack!”


By Frederic Lardinois

Clumio raises $51M to bring enterprise backup into the 21st century

Creating backups for massive enterprise deployments may feel like a solved problem, but for the most part, we’re still talking about complex hardware and software setups. Clumio, which is coming out of stealth today, wants to modernize enterprise data protection by eliminating the on-premise hardware in favor of a flexible, SaaS-style cloud-based backup solution.

For the first time, Clumio also today announced that it has raised a total of $51 million in a Series A and B round since it was founded in 2017. The $11 million Series A round closed in October 2017 and the Series B round in November 2018, Clumio founder and CEO Poojan Kumar told me. Kumar’s previous company, storage startup PernixData, was acquired by Nutanix in 2016. It doesn’t look like the investors made their money back, though.

Clumio is backed by investors like Sutter Hill Ventures, which led the Series A, and Index Ventures, which drove the Series B together with Sutter Hill. Other individual investors include Mark Leslie, founder of Veritas Technologies, and John Thompson, chairman of the board at Microsoft .


“Enterprise workloads are being ‘SaaS-ified’ because IT can no longer afford the time, complexity and expense of building and managing heavy on-prem hardware and software solutions if they are to successfully deliver against their digital transformation initiatives,” said Kumar. “Unlike legacy backup vendors, Clumio SaaS is born in the cloud. We have leveraged the most secure and innovative cloud services available, now and in the future, within our service to ensure that we can meet customer requirements for backup, regardless of where the data is.”

In its current iteration, Clumio can be used to secure data from on-premise, VMware Cloud for AWS and native AWS service workloads. Given this list, it doesn’t come as a surprise that Clumio’s backend, too, makes extensive use of public cloud services.

The company says that it already has several customers, though it didn’t disclose any in today’s announcement.


By Frederic Lardinois

With the acquisition closed, IBM goes all in on Red Hat

IBM’s massive $34 billion acquisition of Red Hat closed a few weeks ago and today, the two companies are announcing the first fruits of this process. For the most part, today’s announcement furthers IBM’s ambition to bring its products to any public and private cloud. That was very much the reason why IBM acquired Red Hat in the first place, of course, so this doesn’t come as a major surprise, though most industry watchers probably didn’t expect it to happen this fast.

Specifically, IBM is announcing that it is bringing its software portfolio to Red Hat OpenShift, Red Hat’s Kubernetes-based container platform, which is available on essentially any cloud that lets customers run Red Hat Enterprise Linux.

In total, IBM has already optimized more than 100 products for OpenShift and bundled them into what it calls “Cloud Paks.” There are currently five of these Paks: Cloud Pak for Data, Application, Integration, Automation and Multicloud Management. These technologies, which IBM’s customers can now run on AWS, Azure, Google Cloud Platform or IBM’s own cloud, among others, include DB2, WebSphere, API Connect, Watson Studio and Cognos Analytics.

“Red Hat is unlocking innovation with Linux-based technologies, including containers and Kubernetes, which have become the fundamental building blocks of hybrid cloud environments,” said Jim Whitehurst, president and CEO of Red Hat, in today’s announcement. “This open hybrid cloud foundation is what enables the vision of any app, anywhere, anytime. Combined with IBM’s strong industry expertise and supported by a vast ecosystem of passionate developers and partners, customers can create modern apps with the technologies of their choice and the flexibility to deploy in the best environment for the app – whether that is on-premises or across multiple public clouds.”

IBM argues that a lot of the early innovation on the cloud was about bringing modern, customer-facing applications to market, with a focus on basic cloud infrastructure. Now, however, enterprises are looking at how they can take their mission-critical applications to the cloud, too. For that, they want access to an open stack that works across clouds.

In addition, IBM today announced the launch of a fully managed Red Hat OpenShift service on its own public cloud, OpenShift on IBM Systems (including the IBM Z and LinuxONE mainframes) and new Red Hat consulting and technology services.


By Frederic Lardinois

Capital One CTO George Brady will join us at TC Sessions: Enterprise

When you think of old, giant mainframes that sit in the basement of a giant corporation, still doing the same work they did 30 years ago, chances are you’re thinking about a financial institution. It’s the financial enterprises, though, that are often leading the charge in bringing new technologies and software development practices to their employees and customers. That’s in part because they are in a period of disruption that forces them to become more nimble. Often, this means leaving behind legacy technology and embracing the cloud.

At TC Sessions: Enterprise, which is happening on September 5 in San Francisco, George Brady, Capital One’s executive VP in charge of its technology operations, will talk about the company’s journey from legacy hardware and software to embracing the cloud and open source, all while working in a highly regulated industry. Indeed, Capital One was among the first companies to embrace the Facebook-led Open Compute Project and it’s a member of the Cloud Native Computing Foundation. It’s this transformation at Capital One that Brady is leading.

At our event, Brady will join a number of other distinguished panelists to specifically talk about his company’s journey to the cloud. There, Capital One is using serverless compute, for example, to power its Credit Offers API with AWS’s Lambda service, as well as a number of other cloud technologies.
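To make the serverless pattern mentioned above concrete, here is a minimal sketch of what a Lambda-backed API endpoint looks like in Python. Everything here is a generic illustration of the AWS Lambda handler convention (an `event` dict from API Gateway in, a response dict out); the `creditTier` parameter, the offer data and the function names are invented for the example and have nothing to do with Capital One’s actual Credit Offers API.

```python
import json

# Stand-in offer data; a real service would back this with a data store.
_OFFERS = {
    "excellent": [{"card": "travel-rewards", "apr": 15.9}],
    "good": [{"card": "cash-back", "apr": 19.9}],
}


def handler(event, context):
    """Entry point Lambda invokes once per API Gateway request."""
    params = event.get("queryStringParameters") or {}
    tier = params.get("creditTier", "good")
    offers = _OFFERS.get(tier, [])
    # API Gateway proxy integration expects this response shape.
    return {
        "statusCode": 200 if offers else 404,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"creditTier": tier, "offers": offers}),
    }
```

The appeal for a regulated enterprise is that there is no server fleet to patch or scale: AWS runs the function on demand and the team ships only the handler and its configuration.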

Before joining Capital One as its CTO in 2014, Brady ran Fidelity Investments’ global enterprise infrastructure team from 2009 to 2014 and served as Goldman Sachs’ head of global business applications infrastructure before that.

Currently, he leads cloud application and platform productization for Capital One. Part of that portfolio is Critical Stack, a secure container orchestration platform for the enterprise. Capital One’s goal with this work is to help companies across industries become more compliant, secure and cost-effective operating in the public cloud.

Early bird tickets are still on sale for $249 – grab yours today before we sell out.

Student tickets are just $75 – grab them here.


By Frederic Lardinois

Three years after moving off AWS, Dropbox infrastructure continues to evolve

Conventional wisdom would suggest that you close your data centers and move to the cloud, not the other way around, but in 2016 Dropbox undertook the opposite journey. It (mostly) ended its long-time relationship with AWS and built its own data centers.

Of course, that same conventional wisdom would say it’s going to get prohibitively expensive and more complicated to keep this up. But Dropbox still believes it made the right decision and has found innovative ways to keep costs down.

Akhil Gupta, VP of Engineering at Dropbox, says that when Dropbox decided to build its own data centers, it realized that as a massive file storage service, it needed control over certain aspects of the underlying hardware that was difficult for AWS to provide, especially in 2016 when Dropbox began making the transition.

“Public cloud by design is trying to work with multiple workloads, customers and use cases and it has to optimize for the lowest common denominator. When you have the scale of Dropbox, it was entirely possible to do what we did,” Gupta explained.

Alone again, naturally

One of the key challenges of managing your own data centers, or of building a private cloud where you still act like a cloud company in a private context, is that it’s difficult to innovate and scale the way the public cloud companies do, especially AWS. Dropbox looked at the landscape and decided it would be better off building its own infrastructure anyway, and Gupta says that even with a small team — the original team was just 30 people — it’s been able to keep innovating.


By Ron Miller

CloudBees acquires Electric Cloud to build out its software delivery management platform

CloudBees, the enterprise continuous integration and delivery service (and the biggest contributor to the Jenkins open-source automation server), today announced that it has acquired Electric Cloud, a continuous delivery and automation platform that first launched all the way back in 2002.

The two companies did not disclose the price of the acquisition, but CloudBees has raised a total of $113.2 million while Electric Cloud raised $64.6 million from the likes of Rembrandt Venture Partners, U.S. Venture Partners, RRE Ventures and Next47.

CloudBees plans to integrate Electric Cloud’s application release automation platform into its offerings. Electric Cloud’s 110 employees will join CloudBees.

“As of today, we provide customers with best-of-breed CI/CD software from a single vendor, establishing CloudBees as a continuous delivery powerhouse,” said Sacha Labourey, the CEO and co-founder of CloudBees, in today’s announcement. “By combining the strength of CloudBees, Electric Cloud, Jenkins and Jenkins X, CloudBees offers the best CI/CD solution for any application, from classic to Kubernetes, on-premise to cloud, self-managed to self-service.”

Electric Cloud offers its users a number of tools for automating their release pipelines and managing the application lifecycle afterward.

“We are looking forward to joining CloudBees and executing on our shared goal of helping customers build software that matters,” said Carmine Napolitano, CEO, Electric Cloud. “The combination of CloudBees’ industry-leading continuous integration and continuous delivery platform, along with Electric Cloud’s industry-leading application release orchestration solution, gives our customers the best foundation for releasing apps at any speed the business demands.”

As CloudBees CPO Christina Noren noted during her keynote at CloudBees’ developer conference today, the company’s customers are getting more sophisticated in their DevOps platforms, but they are starting to run into new problems now that they’ve reached this point.

“What we’re seeing is that these customers have disconnected and fragmented islands of information,” she said. “There’s the view that each development team has […] and there’s not a common language, there’s not a common data model, and there’s not an end-to-end process that unites from left to right, top to bottom.” This kind of integrated system is what CloudBees is building toward (and what competitors like GitLab would argue they already offer). Today’s announcement marks a first step in this direction, toward building a full software delivery management platform, though others are likely to follow.

During his company’s developer conference, Labourey also today noted that CloudBees will profit from Electric Cloud’s long-standing expertise in continuous delivery and that the acquisition will turn CloudBees into a “DevOps powerhouse.”

Today’s announcement follows CloudBees’ acquisition of CI/CD tool CodeShip last year. As of now, CodeShip remains a stand-alone product in the company’s lineup. It’ll be interesting to see how CloudBees will integrate Electric Cloud’s products to build a more integrated system.


By Frederic Lardinois