Amagi tunes into $100M for cloud-based video content creation, monetization

Media technology company Amagi announced Friday a $100 million funding round to further develop its cloud-based SaaS technology for broadcast and connected TV.

Accel, Avataar Ventures and Norwest Venture Partners joined existing investor Premji Invest in the funding round, which included buying out stakes held by Emerald Media and Mayfield Fund. Nadathur Holdings continues as an existing investor. The latest round brings Amagi’s total funding raised to date to $150 million, Baskar Subramanian, co-founder and CEO of Amagi, told TechCrunch.

New Delhi-based Amagi provides cloud broadcast and targeted advertising software so that customers can create, monetize and distribute content via broadcast TV and streaming TV platforms like The Roku Channel, Samsung TV Plus and Pluto TV. The company already supports more than 2,000 channels on its platform across over 40 countries.

“Video is a complex technology to manage — there are large files and a lot of computing,” Subramanian said. “What Amagi does is enable a content owner with zero technology knowledge to simplify that complex workflow and scalable infrastructure. We want to make it easy to plug in and start targeting and monetizing advertising.”

As a result, Amagi customers see average operational cost savings of up to 40% compared with traditional delivery models, while their ad impressions grow between five and 10 times.

The new funding comes at a time when the company is experiencing rapid growth. For example, Amagi grew 30 times in the United States alone over the past few years, Subramanian said. Amagi commands an audience of over 2 billion people, and the U.S. is its largest market. The company also sees growth potential in both Latin America and Europe.

In addition, in the last year, revenue grew 136%, while new customer growth was 44% year over year, including NBCUniversal — Subramanian said the Tokyo Olympics were run on Amagi’s platform for NBC, USA Today and ABS-CBN.

With more video content being developed for connected television experiences, which he said is a $50 billion market, the company plans to use the new funding for sales expansion, R&D investment in its product pipeline and potential M&A opportunities. The company has not made any acquisitions yet, Subramanian added.

In addition to the broadcast operations in New Delhi, Amagi also has an innovation center in Bangalore and offices in New York, Los Angeles and London.

“Consumer behavior and infrastructure needs have reached a critical mass and new companies are bringing in the next generation of media, and we are a large part of that growth,” Subramanian said. “Sports will come on quicker, while live news and events are going to be one of the biggest growth areas.”

Shekhar Kirani, partner at Accel, said Amagi is taking a unique approach to enterprise SaaS due to that $50 billion industry shift happening in video content, where he sees half of the spend moving to connected television platforms quickly.

Some of the legacy players like Viacom and NBCUniversal created their own streaming platforms, where Netflix and Amazon have also been leading, but not many SaaS companies are enabling the transition, he said.

When Kirani met Subramanian five years ago, Amagi was already well funded, but Kirani was excited about the platform and wanted to help the company scale. He believes the company has a long tailwind because it is saving people time and enabling new content providers to move faster to get their content distributed.

“Amagi is creating a new category and will grow fast,” Kirani added. “They are already growing and doubling each year with phenomenal SaaS metrics because they are helping content providers to connect to any audience.”

 


By Christine Hall

Box, Zoom chief product officers discuss how the changing workplace drove their latest collaboration

If the past 18 months are any indication, the nature of the workplace is changing. And while Box and Zoom already have integrations together, it makes sense for them to continue to work more closely.

Their newest collaboration is the Box app for Zoom, a new type of in-product integration that allows users to bring apps into a Zoom meeting to provide the full Box experience.

While in Zoom, users can securely and directly access Box to browse, preview and share files from Zoom — even if they are not taking part in an active meeting. This new feature follows a Zoom integration Box launched last year with its “Recommended Apps” section that enables access to Zoom from Box so that workflows aren’t disrupted.

The companies’ chief product officers, Diego Dugatkin with Box and Oded Gal with Zoom, discussed with TechCrunch why seamless partnerships like these are a solution for the changing workplace.

With digitization happening everywhere, an integration of “best-in-breed” products for collaboration is essential, Dugatkin said. Not only that, people don’t want to be moving from app to app, instead wanting to stay in one environment.

“It’s access to content while never having to leave the Zoom platform,” he added.

It’s also access to content and contacts in different situations. When everyone was in an office, meeting at a moment’s notice internally was not a challenge. Now, more people are understanding the value of flexibility, and both Gal and Dugatkin expect that spending some time at home and some time in the office will not change anytime soon.

As a result, across the spectrum of a company, there is an increasing need for allowing and even empowering people to work from anywhere, Dugatkin said. That then leads to a conversation about sharing documents in a secure way for companies, which this collaboration enables.

The new Box and Zoom integration supports meetings in a hybrid workplace, whether over chat, video or audio and on computers or mobile devices, while making content accessible from all of those entry points, Gal said.

“Companies need to be dynamic as people make the decision of how they want to work,” he added. “The digital world is providing that flexibility.”

This long-term partnership is just scratching the surface of the continuous improvement the companies have planned, Dugatkin said.

Dugatkin and Gal expect to continue offering seamless integration before, during and after meetings: utilizing Box’s cloud storage, while also offering the ability for offline communication between people so that they can keep the workflow going.

“As Diego said about digitization, we are seeing continuous collaboration enhanced with the communication aspect of meetings day in and day out,” Gal added. “Being able to connect between asynchronous and synchronous with Zoom is addressing the future of work and how it is shaping where we go in the future.”


By Christine Hall

Bodo.ai secures $14M, aims to make Python better at handling large-scale data

Bodo.ai, a parallel compute platform for data workloads, is developing a compiler to make Python portable and efficient across multiple hardware platforms. It announced Wednesday a $14 million Series A funding round led by Dell Technologies Capital.

Python is one of the top programming languages used among artificial intelligence and machine learning developers and data scientists, but as Behzad Nasre, co-founder and CEO of Bodo.ai, points out, it is challenging to use when handling large-scale data.

Bodo.ai, headquartered in San Francisco, was founded in 2019 by Nasre and Ehsan Totoni, CTO, to make Python higher performing and production ready. Nasre, who had a long career at Intel before starting Bodo, met Totoni and learned about the project that he was working on to democratize machine learning and enable parallel learning for everyone. Parallelization is the only way to extend Moore’s Law, Nasre told TechCrunch.

Bodo does this via a compiler technology that automates the parallelization so that data and ML developers don’t have to use new libraries or APIs, or rewrite Python into other programming languages or graphics processing unit code, to achieve scalability. Its technology is being used to build real-time data analytics tools across industries like financial services, telecommunications, retail and manufacturing.
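To make that idea concrete, here is a minimal sketch of the decorator-driven style Bodo describes, in which ordinary pandas code is compiled and auto-parallelized rather than rewritten against a new API. The @bodo.jit decorator and the input file path are assumptions for illustration, not verified usage of Bodo’s product.

```python
import pandas as pd
import bodo  # assumed import, per Bodo's public materials


@bodo.jit  # assumed decorator: compile and auto-parallelize the pandas code below
def daily_revenue(path):
    # Plain pandas, no new APIs: read a large CSV, derive a column, aggregate.
    df = pd.read_csv(path, parse_dates=["order_date"])
    df["revenue"] = df["quantity"] * df["unit_price"]
    return df.groupby(df["order_date"].dt.date)["revenue"].sum()


if __name__ == "__main__":
    # The same function runs unmodified on a laptop or, per Bodo's claims,
    # scaled out across many cores/nodes (e.g. launched under MPI).
    print(daily_revenue("orders.csv"))  # hypothetical input file
```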

“For the AI revolution to happen, developers have to be able to write code in simple Python, and that high-performance capability will open new doors,” Totoni said. “Right now, they rely on specialists to rewrite them, and that is not efficient.”

Joining Dell in the round were Uncorrelated Ventures, Fusion Fund and Candou Ventures. Including the new funding, Bodo has raised $14 million in total. The company went after Series A dollars after its product had matured and there was good traction with customers, prompting Bodo to want to scale quicker, Nasre said.

Nasre feels Dell Technologies Capital was “uniquely positioned to help us in terms of reserves and the role they play in the enterprise at large, which is to have the most effective salesforce in enterprise.”

Though he was already familiar with Nasre, Daniel Docter, managing director at Dell Technologies, heard about Bodo from a data scientist friend who told Docter that Bodo’s preliminary results “were amazing.”

Much of Dell’s investments are in the early-stage and in deep tech founders that understand the problem. Docter puts Totoni and Nasre in that category.

“Ehsan fits this perfectly, he has super deep technology knowledge and went out specifically to solve the problem,” he added. “Behzad, being from Intel, saw and lived with the problem, especially seeing Hadoop fail and Spark take its place.”

Meanwhile, with the new funding, Nasre intends to triple the size of the team and invest in R&D to build and scale the company. It will also be developing a marketing and sales team.

The company is now shifting its focus from fundraising to customers and revenue as it aims to drive up adoption by the Python community.

“Our technology can translate simple code into the fast code that the experts will try,” Totoni said. “I joined Intel Labs to work on the problem, and we think we have the first solution that will democratize machine learning for developers and data scientists. Now, they have to hand over Python code to specialists who rewrite it for tools. Bodo is a new type of compiler technology that democratizes AI.”

 


By Christine Hall

Cribl raises $200M to help enterprises do more with their data

At a time when remote work, cybersecurity attacks and increased privacy and compliance requirements threaten a company’s data, more companies are collecting and storing their observability data, but are being locked in with vendors or have difficulty accessing the data.

Enter Cribl. The San Francisco-based company is developing an “open ecosystem of data” for enterprises that utilizes unified data pipelines, called “observability pipelines,” to parse and route any type of data that flows through a corporate IT system. Users can then choose their own analytics tools and storage destinations like Splunk, Datadog and Exabeam, but without becoming dependent on a vendor.
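As a rough illustration of that routing idea (a generic sketch, not Cribl’s actual configuration language or API), an observability pipeline parses each event once and forwards it to whichever destinations match, so the systems producing the data never have to know which vendor sits downstream:

```python
import json

# Hypothetical sinks; a real deployment would call vendor APIs (Splunk, Datadog,
# Exabeam, ...) or write to low-cost object storage instead of printing.
sinks = {
    "siem": lambda e: print("-> SIEM:", json.dumps(e)),
    "metrics": lambda e: print("-> metrics tool:", json.dumps(e)),
    "lake": lambda e: print("-> low-cost storage:", json.dumps(e)),
}

# Routing rules: a predicate plus a destination name, evaluated per event.
routes = [
    {"match": lambda e: e.get("type") == "auth_failure", "destination": "siem"},
    {"match": lambda e: "latency_ms" in e, "destination": "metrics"},
    {"match": lambda e: True, "destination": "lake"},  # everything also lands in the lake
]


def route_event(event):
    """Parse once, route anywhere: send the event to every matching destination."""
    for route in routes:
        if route["match"](event):
            sinks[route["destination"]](event)


route_event({"type": "auth_failure", "user": "alice", "source": "vpn"})
route_event({"service": "checkout", "latency_ms": 212})
```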

The company announced Wednesday a $200 million round of Series C funding to value Cribl at $1.5 billion, according to a source close to the company. Greylock and Redpoint Ventures co-led the round and were joined by new investor IVP, existing investors Sequoia and CRV and strategic investment from Citi Ventures and CrowdStrike. The new capital infusion gives Cribl a total of $254 million in funding since the company was started in 2017, Cribl co-founder and CEO Clint Sharp told TechCrunch.

Sharp did not discuss the valuation; however, he believes that the round is “validation that the observability pipeline category is legit.” Data is growing at a compound annual growth rate of 25%, and organizations are collecting five times more data today than they did 10 years ago, he explained.

“Ultimately, they want to ask and answer questions, especially for IT and security people,” Sharp added. “When Zoom sends data on who started a phone call, that might be data I need to know so I know who is on the call from a security perspective and who they are communicating with. Also, who is sending files to whom and what machines are communicating together in case there is a malicious actor. We can also find out who is having a bad experience with the system and what resources they can access to try and troubleshoot the problem.”

Cribl also enables users to choose how they want to store their data, which is different from competitors that often lock companies into using only their products. Instead, customers can buy the best products from different categories and they will all talk to each other through Cribl, Sharp said.

Though Cribl is developing a pipeline for data, Sharp sees it more as an “observability lake,” as more companies have differing data storage needs. He explains that the lake is where all of the data will go that doesn’t need to go into an existing storage solution. The pipelines will send the data to specific tools and then collect the data, and what doesn’t fit will go back into the lake so companies have it to go back to later. Companies can keep the data for longer and more cost effectively.

Cribl said it is seven times more efficient at processing event data and boasts a customer list that includes Whole Foods, Vodafone, FINRA, Fannie Mae and Cox Automotive.

Sharp went after additional funding after seeing huge traction in its existing customer base, saying that “when you see that kind of traction, you want to keep doubling down.” His aim is to have a presence in every North American city and in Europe, to continue launching new products and growing the engineering team.

Up next, the company is focusing on go-to-market and engineering growth. Its headcount is 150 currently, and Sharp expects to grow that to 250 by the end of the year.

Over the last fiscal year, Cribl grew its revenue 293%, and Sharp expects that same trajectory for this year. The company is now at a growth stage, and with the new investment, he believes Cribl is the “future leader in observability.”

“This is a great investment for us, and every dollar, we believe, is going to create an outsized return as we are the only commercial company in this space,” he added.

Scott Raney, managing director at Redpoint Ventures, said his firm is a big enterprise investor in software, particularly in companies that help organizations leverage data to protect themselves, a sweet spot that Cribl falls into.

He feels Sharp, who came from Splunk, is leading a team that has accomplished a lot, has a vision and a handle on the business, and knows the market well. Where Splunk is capturing the machine data and using its systems to extract the data, Cribl is doing something similar in directing the data where it needs to go, while also enabling companies to utilize multiple vendors and build apps to sit on top of its infrastructure.

“Cribl is adding opportunity by enriching the data flowing through, and the benefits are going to be meaningful in cost reduction,” Raney said. “The attitude out there is to put data in cheaper places, and afford more flexibility to extract data. Step one is to make that transition, and step two is how to drive the data sitting there. Cribl is doing something that will go from being a big business to a legacy company 30 years from now.”


By Christine Hall

ThirdAI raises $6M to democratize AI to any hardware

Houston-based ThirdAI, a company building tools to speed up deep learning technology without the need for specialized hardware like graphics processing units, brought in $6 million in seed funding.

Neotribe Ventures, Cervin Ventures and Firebolt Ventures co-led the investment, which will be used to hire additional employees and invest in computing resources, Anshumali Shrivastava, ThirdAI co-founder and CEO, told TechCrunch.

Shrivastava, who has a mathematics background, was always interested in artificial intelligence and machine learning, especially rethinking how AI could be developed in a more efficient manner. It was when he was at Rice University that he looked into how to make that work for deep learning. He started ThirdAI in April with some Rice graduate students.

ThirdAI’s technology is designed to be “a smarter approach to deep learning,” using its algorithm and software innovations to make general-purpose central processing units (CPUs) faster than graphics processing units for training large neural networks, Shrivastava said. Companies abandoned CPUs years ago in favor of graphics processing units that could more quickly render high-resolution images and video concurrently. The downside is that there is not much memory in graphics processing units, and users often hit a bottleneck while trying to develop AI, he added.

“When we looked at the landscape of deep learning, we saw that much of the technology was from the 1980s, and a majority of the market, some 80%, were using graphics processing units, but were investing in expensive hardware and expensive engineers and then waiting for the magic of AI to happen,” he said.

He and his team looked at how AI was likely to be developed in the future and wanted to create a cost-saving alternative to graphics processing units. Their algorithm, “sub-linear deep learning engine,” instead uses CPUs that don’t require specialized acceleration hardware.
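Shrivastava’s published research in this area (the SLIDE line of work) uses locality-sensitive hashing to evaluate only a small, input-dependent subset of neurons instead of all of them, which is what lets commodity CPUs compete with GPUs. The toy sketch below shows that core idea with a SimHash-style table; it is an illustration of the technique, not ThirdAI’s actual engine.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_neurons, n_bits = 64, 10_000, 12

weights = rng.standard_normal((n_neurons, dim))   # one weight vector per output neuron
planes = rng.standard_normal((n_bits, dim))       # random hyperplanes for SimHash


def simhash(v):
    """Bucket id derived from the sign pattern of random projections."""
    bits = (planes @ v) > 0
    return int(bits @ (1 << np.arange(n_bits)))


# Index the layer once: hash bucket -> ids of neurons whose weights land there.
table = {}
for i, w in enumerate(weights):
    table.setdefault(simhash(w), []).append(i)

x = rng.standard_normal(dim)
candidates = table.get(simhash(x), [])            # tiny, input-dependent subset
activations = weights[candidates] @ x             # dot products for those neurons only
print(f"evaluated {len(candidates)} of {n_neurons} neurons")
```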

Swaroop “Kittu” Kolluri, founder and managing partner at Neotribe, said this type of technology is still early. Current methods are laborious, expensive and slow, and for example, if a company is running language models that require more memory, it will run into problems, he added.

“That’s where ThirdAI comes in, where you can have your cake and eat it, too,” Kolluri said. “It is also why we wanted to invest. It is not just the computing, but the memory, and ThirdAI will enable anyone to do it, which is going to be a game changer. As technology around deep learning starts to get more sophisticated, there is no limit to what is possible.”

AI is already at a stage where it has the capability to solve some of the hardest problems, like those in healthcare and seismic processing, but he notes there is also a question about climate implications of running AI models.

“Training deep learning models can be more expensive than having five cars in a lifetime,” Shrivastava said. “As we move on to scale AI, we need to think about those.”

 


By Christine Hall

Youreka Labs spins out with $8M to provide smart mobile assistant apps to field workers

Mobile field service startup Youreka Labs Inc. raised an $8 million Series A round of funding co-led by Boulder Ventures and Grotech Ventures, with participation from Salesforce Ventures.

The Maryland-based company also officially announced its CEO — Bill Karpovich joined to lead the company after previously holding executive roles with General Motors and IBM Cloud & Watson Platform.

Youreka Labs spun out into its own company from parent company Synaptic Advisors, a cloud consulting business focused on customer relationship management transformations using Salesforce and other artificial intelligence and automation technologies.

The company is developing robotic smart mobile assistants that enable frontline workers to perform their jobs more safely and efficiently. This includes things like guided procedures, smart forms and photo or video capture. Youreka is also embedded in existing Salesforce mobile applications like Field Service Mobile so that end-users only have to operate from one mobile app.

Youreka has identified four use cases so far: healthcare, manufacturing, energy and utilities and the public sector. Working with companies like Shell, P&G, Humana and the Transportation Security Administration, the company’s technology makes it possible for someone to share their knowledge and processes with their colleagues in the field, Karpovich told TechCrunch.

“In the case of healthcare, we are taking complex medical assessments from a doctor and pushing them out to nurses out in the field by gathering data into a simple mobile app and making it useful,” he added. “It allows nurses to do a great job without being doctors themselves.”

Karpovich said the company went after Series A dollars because it was “time for it to be on its own.” He was receiving inbound interest from investors, and the capital would enable the company to proceed more rapidly. Today, the company is focused on the Salesforce ecosystem, but that can evolve over time, he added.

The funding will be used to expand the company’s reach and products. He expects to double the team in the next six to 12 months across engineering to be able to expand the platform. Youreka boasts 100 customers today, and Karpovich would also like to invest in marketing to grow that base.

In addition to the use cases already identified, he sees additional potential in financial services and insurance, particularly for those assessing damage. The company is also concentrated in the United States, and Karpovich has plans to expand in the U.K. and Europe.

In 2020, the company grew 300%, which Karpovich attributes to the need for this kind of tool in field service. Youreka has a licensing model that charges per end user per month, along with an administrative license for the people creating the apps that is also priced per user per month.

“There are 2.5 million jobs open today because companies can’t find people with the right skills,” he added. “We are making these jobs accessible. Some say that AI is doing away with jobs, but we are using AI to enhance jobs. If we can take 90% of the knowledge and give a digital assistant to less experienced people, you could open up so many opportunities.”

 


By Christine Hall

Talkdesk’s valuation jumps to $10B with Series D for smart contact centers

Talkdesk, a provider of cloud-based contact center software, announced $230 million in new Series D funding that more than triples the company’s valuation to $10 billion, Talkdesk founder and CEO Tiago Paiva confirmed to TechCrunch.

New investors Whale Rock Capital Management, TI Platform Management and Alpha Square Group came on board for this round and were joined by existing investors Amity Ventures, Franklin Templeton, Top Tier Capital Partners, Viking Global Investors and Willoughby Capital.

Talkdesk uses artificial intelligence and machine learning to improve customer service for midmarket and enterprise businesses. It counts over 1,800 companies as customers, including IBM, Acxiom, Trivago and Fujitsu.

“The global pandemic was a big part of how customers interact and how we interacted with our customers, all working from home,” Paiva said. “When you think about ordering things online, call, chat and email interactions became more important, and contact centers became core in every company.”

San Francisco-based Talkdesk now has $498 million in total funding since its inception in 2011. It was a Startup Battlefield contestant at TechCrunch Disrupt NY in 2012. The new funding follows a $143 million Series C raised last July that gave it a $3 billion valuation. Prior to that, Talkdesk brought in $100 million in 2018.

The 2020 round was planned to buoy the company’s growth and expansion to nearly 2,000 employees, Paiva said. For the Series D, there was much interest from investors, including a lot of inbound interest, he said.

“We were not looking for new money, and finished last year with more money in the bank than we raised in the last round, but the investors were great and wanted to make it work,” Paiva said.

Half of Talkdesk’s staff is in product and engineering, an area he intends to double down on with the new funding, as well as adding headcount to support customers. The company also has plans to expand in areas where it is already operating — Latin America, Europe, Asia and Australia.

This year, the company unveiled new features, including Talkdesk Workspace, a customizable interface for contact center teams, and Talkdesk Builder, a set of tools for customization across workspaces, routing, reporting and integrations. It also launched contact center tools designed specifically for financial services and healthcare organizations, as well as what it is touting as the “industry’s first human-in-the-loop tool for contact centers,” which it says continues to lower the barrier to adopting artificial intelligence solutions.

In addition to the funding, Talkdesk appointed its first chief financial officer, Sydney Carey, giving the company an executive team of 50% women, Paiva said. Carey has a SaaS background and joins the company from Sumo Logic, where she led the organization through an initial public offering in 2020.

“We were hiring our executive team over the past couple of years, and were looking for a CFO, but with no specific timeline, just looking for the right person,” Paiva added. “Sydney was the person we wanted to hire.”

Though Paiva didn’t hint at any upcoming IPO plans, TI Platform Management co-founders Trang Nguyen and Alex Bangash have followed Paiva since he started the company and said they anticipate the company heading in that direction in the future.

“Talkdesk is an example of what can happen when a strong team is assembled behind a winning idea,” they said in a written statement. “Today, Talkdesk has become near ubiquitous as a SaaS product with adoption across a broad array of industries and integrations with the most popular enterprise cloud platforms, including Salesforce, Zendesk and Slack.”

 


By Christine Hall

Salesforce’s Kathy Baxter is coming to TC Sessions: SaaS to talk AI

As the use of AI has grown and developed over the last several years, companies like Salesforce have tried to tap into it to improve their software and help customers operate faster and more efficiently. Kathy Baxter, principal architect for the ethical AI practice at Salesforce, will be joining us at TechCrunch Sessions: SaaS on October 27th to talk about the impact of AI on SaaS.

Baxter, who has more than 20 years of experience as a software architect, joined Salesforce in 2017 after more than a decade at Google in a similar role. We’re going to tap into her expertise on a panel discussing AI’s growing role in software.

Salesforce was one of the earlier SaaS adherents to AI, announcing its artificial intelligence tooling, which the company dubbed Einstein, in 2016. While the positioning makes it sound like a product, it’s actually much more than a single entity. It’s a platform component, which the various pieces of the Salesforce platform can tap into to take advantage of various types of AI to help improve the user experience.

That could involve feeding information to customer service reps on Service Cloud to make the call move along more efficiently, helping salespeople find the customers most likely to close a deal soon in the Sales Cloud or helping marketing understand the optimal time to send an email in the Marketing Cloud.

The company began building out its AI tooling early on with the help of 175 data scientists and has been expanding on that initial idea since. Other companies, from startups to established players like SAP, Oracle and Microsoft, have continued to build AI into their platforms, as Salesforce has. Today, many SaaS companies have some underlying AI built into their service.

Baxter will join us to discuss the role of AI in software today and how that helps improve the operations of the service itself, and what the implications are of using AI in your software service as it becomes a mainstream part of the SaaS development process.

In addition to our discussion with Baxter, the conference will also include Databricks’ Ali Ghodsi, UiPath’s Daniel Dines, Puppet’s Abby Kearns, and investors Casey Aylward and Sarah Guo, among others. We hope you’ll join us. It’s going to be a stimulating day.

Buy your pass now to save up to $100, and use CrunchMatch to make expanding your empire quick, easy and efficient. We can’t wait to see you in October!

Is your company interested in sponsoring or exhibiting at TC Sessions: SaaS 2021? Contact our sponsorship sales team by filling out this form.



By Ron Miller

DigitalOcean says data breach exposed customer billing data

DigitalOcean has emailed customers warning of a data breach involving customers’ billing data, TechCrunch has learned.

The cloud infrastructure giant told customers in an email on Wednesday, obtained by TechCrunch, that it has “confirmed an unauthorized exposure of details associated with the billing profile on your DigitalOcean account.” The company said the person “gained access to some of your billing account details through a flaw that has been fixed” over a two-week window between April 9 and April 22.

The email said customer billing names and addresses were accessed, as well as the last four digits of the payment card, its expiry date, and the name of the card-issuing bank. The company said that customers’ DigitalOcean accounts were “not accessed,” and passwords and account tokens were “not involved” in this breach.

“To be extra careful, we have implemented additional security monitoring on your account. We are expanding our security measures to reduce the likelihood of this kind of flaw occuring [sic] in the future,” the email said.

DigitalOcean said it fixed the flaw and notified data protection authorities, but it’s not clear what the apparent flaw was that put customer billing information at risk.

In a statement, DigitalOcean’s security chief Tyler Healy said 1% of billing profiles were affected by the breach, but declined to address our specific questions, including how the vulnerability was discovered and which authorities have been informed.

Companies with customers in Europe are subject to GDPR, and can face fines of up to 4% of their global annual revenue.

Last year, the cloud company raised $100 million in new debt, followed by another $50 million round, months after laying off dozens of staff amid concerns about the company’s financial health. In March, the company went public, raising about $775 million in its initial public offering. 


By Zack Whittaker

Google’s Anthos multi-cloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multi-cloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and, in preview, Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called this project the ‘Google Cloud Services Platform,’ which launched three years ago). Hybrid and multi-cloud, it’s fair to say, take a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. And recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.

 


By Frederic Lardinois

Immersion cooling to offset data centers’ massive power demands gains a big booster in Microsoft

LiquidStack does it. So does Submer. They’re both dropping servers carrying sensitive data into goop in an effort to save the planet. Now they’re joined by one of the biggest tech companies in the world in their efforts to improve the energy efficiency of data centers, because Microsoft is getting into the liquid-immersion cooling market.

Microsoft is using a liquid it developed in-house that’s engineered to boil at 122 degrees Fahrenheit (lower than the boiling point of water) to act as a heat sink, reducing the temperature inside the servers so they can operate at full power without any risks from overheating.

The vapor from the boiling fluid is converted back into a liquid through contact with a cooled condenser in the lid of the tank that stores the servers.

“We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development in Redmond, Washington, in a statement on the company’s internal blog. 

While that claim may be true, liquid cooling is a well-known approach to dealing with moving heat around to keep systems working. Cars use liquid cooling to keep their motors humming as they head out on the highway.

As technology companies confront the physical limits of Moore’s Law, the demand for faster, higher-performance processors means designing new architectures that can handle more power, the company wrote in a blog post. Power flowing through central processing units has increased from 150 watts to more than 300 watts per chip, and the GPUs responsible for much of Bitcoin mining, artificial intelligence applications and high-end graphics each consume more than 700 watts per chip.

It’s worth noting that Microsoft isn’t the first tech company to apply liquid cooling to data centers and the distinction that the company uses of being the first “cloud provider” is doing a lot of work. That’s because bitcoin mining operations have been using the tech for years. Indeed, LiquidStack was spun out from a bitcoin miner to commercialize its liquid immersion cooling tech and bring it to the masses.

“Air cooling is not enough”

More power flowing through the processors means hotter chips, which means the need for better cooling or the chips will malfunction.

“Air cooling is not enough,” said Christian Belady, vice president of Microsoft’s datacenter advanced development group in Redmond, in an interview for the company’s internal blog. “That’s what’s driving us to immersion cooling, where we can directly boil off the surfaces of the chip.”

For Belady, the use of liquid cooling technology brings the density and compression of Moore’s Law up to the datacenter level.

The results, from an energy consumption perspective, are impressive. Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI, and the investigation revealed that two-phase immersion cooling reduced power consumption for any given server by anywhere from 5% to 15% (every little bit helps).
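As a back-of-the-envelope illustration of what that range means at scale, the sketch below applies the 5%–15% savings to a hypothetical rack built from the per-chip wattages quoted earlier; the rack size and server makeup are assumptions for illustration, not Microsoft’s figures.

```python
# Assumed server makeup: two 300 W CPUs plus four 700 W GPUs (per-chip figures
# quoted above); assumed rack density of 40 servers, running year-round.
server_watts = 2 * 300 + 4 * 700        # 3,400 W per server (illustrative)
rack_servers = 40                       # illustrative rack density
hours_per_year = 24 * 365

for savings in (0.05, 0.15):            # the 5%-15% range Microsoft reported
    saved_kwh = server_watts * rack_servers * hours_per_year * savings / 1000
    print(f"{savings:.0%} savings ≈ {saved_kwh:,.0f} kWh per rack per year")
```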

Meanwhile, companies like Submer claim they reduce energy consumption by 50%, water use by 99%, and take up 85% less space.

For cloud computing companies, the ability to keep these servers up and running even during spikes in demand, when they’d consume even more power, adds flexibility and ensures uptime even when servers are overtaxed, according to Microsoft.

“[We] know that with Teams when you get to 1 o’clock or 2 o’clock, there is a huge spike because people are joining meetings at the same time,” Marcus Fontoura, a vice president on Microsoft’s Azure team, said on the company’s internal blog. “Immersion cooling gives us more flexibility to deal with these burst-y workloads.”

At this point, data centers are a critical component of the internet infrastructure that much of the world relies on for… well… pretty much every tech-enabled service. That reliance however has come at a significant environmental cost.

“Data centers power human advancement. Their role as a core infrastructure has become more apparent than ever and emerging technologies such as AI and IoT will continue to drive computing needs. However, the environmental footprint of the industry is growing at an alarming rate,” Alexander Danielsson, an investment manager at Norrsken VC noted last year when discussing that firm’s investment in Submer.

Solutions under the sea

If submerging servers in experimental liquids offers one potential solution to the problem — then sinking them in the ocean is another way that companies are trying to cool data centers without expending too much power.

Microsoft has already been operating an undersea data center for the past two years. The company actually trotted out the tech as part of a push from the tech company to aid in the search for a COVID-19 vaccine last year.

These pre-packed, shipping container-sized data centers can be spun up on demand and run deep under the ocean’s surface for sustainable, high-efficiency and powerful compute operations, the company said.

The liquid cooling project shares most similarity with Microsoft’s Project Natick, which is exploring the potential of underwater datacenters that are quick to deploy and can operate for years on the seabed sealed inside submarine-like tubes without any onsite maintenance by people. 

In those data centers nitrogen air replaces an engineered fluid and the servers are cooled with fans and a heat exchanger that pumps seawater through a sealed tube.

Startups are also staking claims to cool data centers out on the ocean (the seaweed is always greener in somebody else’s lake).

Nautilus Data Technologies, for instance, has raised over $100 million (according to Crunchbase) to develop data centers dotting the surface of Davy Jones’ locker. The company is currently developing a data center project co-located with a sustainable energy project in a tributary near Stockton, Calif.

With the two-phase immersion cooling tech, Microsoft is hoping to bring the benefits of ocean-cooling tech onto the shore. “We brought the sea to the servers rather than put the datacenter under the sea,” Microsoft’s Alissa said in a company statement.

Ioannis Manousakis, a principal software engineer with Azure (left), and Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development (right), walk past a container at a Microsoft datacenter where computer servers in a two-phase immersion cooling tank are processing workloads. Photo by Gene Twedt for Microsoft.


By Jonathan Shieber

Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the ‘cloud financial management’ space to establish best practices and standards. As the term implies, ‘cloud financial management’ is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that there are a number of successful startups that do nothing else but help businesses optimize their cloud spend (and ideally lower it).

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze, and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, Vice President of Engineering and Product at Google Cloud. “More visibility, efficiency, and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, Executive Director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the 2nd of 3 dedicated Premier Member Technical Advisory Council seats.”


By Frederic Lardinois

Testing platform Tricentis acquires performance testing service Neotys

If you develop software for a large enterprise company, chances are you’ve heard of Tricentis. If you don’t develop software for a large enterprise company, chances are you haven’t. The software testing company with a focus on modern cloud and enterprise applications was founded in Austria in 2007 and grew from a small consulting firm to a major player in this field, with customers like Allianz, BMW, Starbucks, Deutsche Bank, Toyota and UBS. In 2017, the company raised a $165 million Series B round led by Insight Venture Partners.

Today, Tricentis announced that it has acquired Neotys, a popular performance testing service with a focus on modern enterprise applications and a tests-as-code philosophy. The two companies did not disclose the price of the acquisition. France-based Neotys launched in 2005 and raised about €3 million before the acquisition. Today, it has about 600 customers for its NeoLoad platform. These include BNP Paribas, Dell, Lufthansa, McKesson and TechCrunch’s own corporate parent, Verizon.

As Tricentis CEO Sandeep Johri noted, testing tools were traditionally script-based, which also meant they were very fragile whenever an application changed. Early on, Tricentis introduced a low-code tool that made the automation process both easier and resilient. Now, as even traditional enterprises move to DevOps and release code at a faster speed than ever before, testing is becoming both more important and harder for these companies to implement.

“You have to have automation and you cannot have it be fragile, where it breaks, because then you spend as much time fixing the automation as you do testing the software,” Johri said. “Our core differentiator was the fact that we were a low-code, model-based automation engine. That’s what allowed us to go from $6 million in recurring revenue eight years ago to $200 million this year.”

Tricentis, he added, wants to be the testing platform of choice for large enterprises. “We want to make sure we do everything that a customer would need, from a testing perspective, end to end. Automation, test management, test data, test case design,” he said.

The acquisition of Neotys allows the company to expand this portfolio by adding load and performance testing as well. It’s one thing to do the standard kind of functional testing that Tricentis already did before launching an update, but once an application goes into production, load and performance testing becomes critical as well.
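For readers unfamiliar with the distinction, the sketch below shows the kind of question load testing answers that functional testing does not: how latency and error rates behave under many concurrent requests. It uses only the Python standard library and a placeholder URL; it is not NeoLoad or any Tricentis tooling.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/health"   # placeholder endpoint, not a real service
REQUESTS, CONCURRENCY = 200, 20


def timed_request(_):
    """Issue one request and return (success flag, elapsed seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start


with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_request, range(REQUESTS)))

latencies = sorted(t for _, t in results)
errors = sum(1 for ok, _ in results if not ok)
p95 = latencies[int(0.95 * len(latencies)) - 1]
print(f"median {statistics.median(latencies) * 1000:.0f} ms, "
      f"p95 {p95 * 1000:.0f} ms, errors {errors}/{REQUESTS}")
```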

“Before you put it into production — or before you deploy it — you need to make sure that your application not only works as you expect it, you need to make sure that it can handle the workload and that it has acceptable performance,” Johri noted. “That’s where load and performance testing comes in and that’s why we acquired Neotys. We have some capability there, but that was primarily focused on the developers. But we needed something that would allow us to do end-to-end performance testing and load testing.”

The two companies already had an existing partnership and had integrated their tools before the acquisition — and many of its customers were already using both tools, too.

“We are looking forward to joining Tricentis, the industry leader in continuous testing,” said Thibaud Bussière, president and co-founder at Neotys. “Today’s Agile and DevOps teams are looking for ways to be more strategic and eliminate manual tasks and implement automated solutions to work more efficiently and effectively. As part of Tricentis, we’ll be able to eliminate laborious testing tasks to allow teams to focus on high-value analysis and performance engineering.”

NeoLoad will continue to exist as a stand-alone product, but users will likely see deeper integrations with Tricentis’ existing tools over time, including Tricentis Analytics, for example.

Johri tells me that he considers Tricentis one of the “best kept secrets in Silicon Valley” because the company not only started out in Europe (even though its headquarters is now in Silicon Valley) but also because it hasn’t raised a lot of venture rounds over the years. But that’s very much in line with Johri’s philosophy of building a company.

“A lot of Silicon Valley tends to pay attention only when you raise money,” he told me. “I actually think every time you raise money, you’re diluting yourself and everybody else. So if you can succeed without raising too much money, that’s the best thing. We feel pretty good that we have been very capital efficient and now we’re recognized as a leader in the category — which is a huge category with $30 billion spend in the category. So we’re feeling pretty good about it.”


By Frederic Lardinois

Amazon will expand its Amazon Care on-demand healthcare offering U.S.-wide this summer

Amazon is apparently pleased with how its Amazon Care pilot in Seattle has gone, since it announced this morning that it will be expanding the offering across the U.S. this summer, and opening it up to companies of all sizes, in addition to its own employees. The Amazon Care model combines on-demand and in-person care, and is meant as a solution from the e-commerce giant to address shortfalls in current employer-sponsored healthcare offerings.

In a blog post announcing the expansion, Amazon touted the speed of access to care made possible for its employees and their families via the remote, chat and video-based features of Amazon Care. These are facilitated via a dedicated Amazon Care app, which provides direct, live chats with a nurse or doctor. Issues that require in-person care are then handled via a house call, so a medical professional is actually sent to your home to take care of things like administering blood tests or doing a chest exam, and prescriptions are delivered to your door as well.

The expansion is being handled differently across the in-person and remote variants of care: remote services will be available starting this summer to both Amazon’s own employees and other companies that sign on as customers. The in-person side will be rolling out more slowly, starting with availability in Washington, D.C., Baltimore, and “other cities in the coming months,” according to the company.

As of today, Amazon Care is expanding in its home state of Washington to begin serving other companies. The idea is that others will sign on to make Amazon Care part of their overall benefits packages for employees. Amazon is touting the speed advantages of testing services, including results delivery, for things including COVID-19 as a major strength of the service.

The Amazon Care model has a surprisingly Amazon twist, too – when using the in-person care option, the app will provide an updating ETA for when to expect your physician or medical technician, which is eerily similar to how its primary app treats package delivery.

While the Amazon Care pilot in Washington only launched a year-and-a-half ago, the company has had its collective mind set on upending the corporate healthcare industry for some time now. It announced a partnership with Berkshire Hathaway and JPMorgan back at the very beginning of 2018 to form a joint venture specifically to address the gaps they saw in the private corporate healthcare provider market.

That deep-pocketed all-star team ended up officially disbanding at the outset of this year, after having done a whole lot of not very much in the three years in between. One of the stated reasons that Amazon and its partners gave for unpartnering was that each had made a lot of progress on its own in addressing the problems it had faced anyway. While Berkshire Hathaway and JPMorgan’s work in that regard might be less obvious, Amazon was clearly referring to Amazon Care.

It’s not unusual for large tech companies with lots of cash on the balance sheet and a need to attract and retain top-flight talent to spin up their own healthcare benefits for their workforces. Apple and Google both have their own on-campus wellness centers staffed by medical professionals, for instance. But Amazon’s ambitions have clearly exceeded those of its peers, and it looks intent on making a business line out of the work it did to improve its own employee care services — a strategy that isn’t too dissimilar from what happened with AWS, by the way.


By Darrell Etherington

Microsoft Azure expands its NoSQL portfolio with Managed Instances for Apache Cassandra

At its Ignite conference today, Microsoft announced the launch of Azure Managed Instance for Apache Cassandra, its latest NoSQL database offering and a competitor to Cassandra-centric companies like Datastax. Microsoft describes the new service as a “semi-managed” offering that will help companies bring more of their Cassandra-based workloads into its cloud.

“Customers can easily take on-prem Cassandra workloads and add limitless cloud scale while maintaining full compatibility with the latest version of Apache Cassandra,” Microsoft explains in its press materials. “Their deployments gain improved performance and availability, while benefiting from Azure’s security and compliance capabilities.”

Like its counterpart, Azure SQL Managed Instance, the idea here is to give users access to a scalable, cloud-based database service. To use Cassandra in Azure before, businesses had to either move to Cosmos DB, its highly scalable database service that supports the Cassandra, MongoDB, SQL and Gremlin APIs, or manage their own fleet of virtual machines or on-premises infrastructure.
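Because the managed service advertises compatibility with Apache Cassandra, existing client code should largely carry over: a standard CQL driver simply points at the new cluster. The sketch below uses the open-source DataStax Python driver (cassandra-driver); the host, port and credentials are placeholders rather than actual Azure endpoints, and a production deployment would typically also configure TLS.

```python
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Placeholder contact point and credentials for illustration only.
auth = PlainTextAuthProvider(username="app_user", password="***")
cluster = Cluster(["cassandra.example.internal"], port=9042, auth_provider=auth)
session = cluster.connect()

# Standard CQL works regardless of where the cluster is hosted.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS shop
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS shop.orders (
        order_id uuid PRIMARY KEY,
        customer text,
        total decimal
    )
""")

for row in session.execute("SELECT release_version FROM system.local"):
    print("connected to Cassandra", row.release_version)

cluster.shutdown()
```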

Cassandra was originally developed at Facebook and then open-sourced in 2008. A year later, it joined the Apache Foundation, and today it’s used widely across the industry, with companies like Apple and Netflix betting on it for some of their core services, for example. AWS launched a managed Cassandra-compatible service at its re:Invent conference in 2019 (it’s called Amazon Keyspaces today), while Microsoft only launched the Cassandra API for Cosmos DB last November. With today’s announcement, though, the company can now offer a full range of Cassandra-based services for enterprises that want to move these workloads to its cloud.


By Frederic Lardinois