Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the ‘cloud financial management’ space to establish best practices and standards. As the term implies, ‘cloud financial management’ is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that a number of successful startups do nothing but help businesses optimize their cloud spend (and ideally lower it).
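
For a concrete, if toy, sense of what that practice looks like in code, the Python sketch below aggregates a made-up billing export by project to see which projects drive spend. The data, column names and numbers are purely illustrative; real cloud billing exports carry far more dimensions.

```python
import pandas as pd

# Purely illustrative billing data; real exports include SKUs, labels,
# regions, discounts and much more.
billing = pd.DataFrame([
    {"project": "checkout",  "service": "compute",   "cost_usd": 1200.0},
    {"project": "checkout",  "service": "storage",   "cost_usd": 310.0},
    {"project": "analytics", "service": "compute",   "cost_usd": 2750.0},
    {"project": "analytics", "service": "warehouse", "cost_usd": 980.0},
])

by_project = (billing.groupby("project")["cost_usd"]
              .sum()
              .sort_values(ascending=False))

print(by_project)                     # which projects drive spend
print(by_project / by_project.sum())  # each project's share of the total
```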

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze, and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, Vice President of Engineering and Product at Google Cloud. “More visibility, efficiency, and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, Executive Director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the 2nd of 3 dedicated Premier Member Technical Advisory Council seats.”


By Frederic Lardinois

Mirantis brings extensions to its Lens Kubernetes IDE, launches a new Kubernetes distro

Earlier this year, Mirantis, the company that now owns Docker’s enterprise business, acquired Lens, a desktop application that provides developers with something akin to an IDE for managing their Kubernetes clusters. At the time, Mirantis CEO Adrian Ionel told me that the company wants to offer enterprises the tools to quickly build modern applications. Today, it’s taking another step in that direction with the launch of an extensions API for Lens that will take the tool far beyond its original capabilities.

In addition to this update to Lens, Mirantis also today announced a new open-source project: k0s. The company describes it as “a modern, 100% upstream vanilla Kubernetes distro that is designed and packaged without compromise.”

It’s a single optimized binary without any OS dependencies (besides the kernel). Based on upstream Kubernetes, k0s supports Intel and Arm architectures and can run on any Linux host or Windows Server 2019 worker nodes. Given these requirements, the team argues that k0s should work for virtually any use case, ranging from local development clusters to private datacenters, telco clusters and hybrid cloud solutions.

“We wanted to create a modern, robust and versatile base layer for various use cases where Kubernetes is in play. Something that leverages vanilla upstream Kubernetes and is versatile enough to cover use cases ranging from typical cloud based deployments to various edge/IoT type of cases,” said Jussi Nummelin, Senior Principal Engineer at Mirantis and founder of k0s. “Leveraging our previous experiences, we really did not want to start maintaining the setup and packaging for various OS distros. Hence the packaging model of a single binary to allow us to focus more on the core problem rather than different flavors of packaging such as debs, rpms and what-nots.”

Mirantis, of course, has a bit of experience in the distro game. In its earliest iteration, back in 2013, the company offered one of the first major OpenStack distributions, after all.

As for Lens, the new API, which will go live next week to coincide with KubeCon, will enable developers to extend the service with support for other Kubernetes-integrated components and services.

“Extensions API will unlock collaboration with technology vendors and transform Lens into a fully featured cloud native development IDE that we can extend and enhance without limits,” said Miska Kaipiainen, the co-founder of the Lens open-source project and senior director of engineering at Mirantis. “If you are a vendor, Lens will provide the best channel to reach tens of thousands of active Kubernetes developers and gain distribution to your technology in a way that did not exist before. At the same time, the users of Lens enjoy quality features, technologies and integrations easier than ever.”

The company has already lined up a number of popular CNCF projects and vendors in the cloud-native ecosystem to build integrations. These include Kubernetes security vendors Aqua and Carbonetes, API gateway maker Ambassador Labs and AIOps company Carbon Relay. Venafi, nCipher, Tigera, Kong and StackRox are also currently working on their extensions.

“Introducing an extensions API to Lens is a game-changer for Kubernetes operators and developers, because it will foster an ecosystem of cloud-native tools that can be used in context with the full power of Kubernetes controls, at the user’s fingertips,” said Viswajith Venugopal, StackRox software engineer and developer of KubeLinter. “We look forward to integrating KubeLinter with Lens for a more seamless user experience.”


By Frederic Lardinois

With $29M in funding, Isovalent launches its cloud-native networking and security platform

Isovalent, a startup that aims to bring networking into the cloud-native era, today announced that it has raised a $29 million Series A round led by Andreessen Horowitz and Google. In addition, the company today officially launched its Cilium platform (which was in stealth until now) to help enterprises connect, observe and secure their applications.

The open-source Cilium project is already seeing growing adoption, with Google choosing it for its new GKE dataplane, for example. Other users include Adobe, Capital One, Datadog and GitLab. Isovalent is following what is now the standard model for commercializing open-source projects by launching an enterprise version.


The founding team of CEO Dan Wendlandt and CTO Thomas Graf has deep experience in working on the Linux kernel and building networking products. Graf spent 15 years working on the Linux kernel and created the Cilium open-source project, while Wendlandt worked on Open vSwitch at Nicira (and then VMware).


“We saw that first wave of network intelligence be moved into software, but I think we both shared the view that the first wave was about replicating the traditional network devices in software,” Wendlandt told me. “You had IPs, you still had ports, you created virtual routers, and this and that. We both had that shared vision that the next step was to go beyond what the hardware did in software — and now, in software, you can do so much more. Thomas, with his deep insight in the Linux kernel, really saw this eBPF technology as something that was just obviously going to be groundbreaking technology, in terms of where we could take Linux networking and security.”

As Graf told me, when Docker, Kubernetes and containers in general became popular, what he saw was that networking companies at first were simply trying to reapply what they had already done for virtualization. “Let’s just treat containers as many miniature VMs. That was incredibly wrong,” he said. “So we looked around, and we saw eBPF and said: this is just out there and it is perfect, how can we shape it forward?”

And while Isovalent’s focus is on cloud-native networking, the added benefit of how it uses the eBPF Linux kernel technology is that it also gains deep insights into how data flows between services and hence allows it to add advanced security features as well.

As the team noted, though, users definitely don’t need to understand or program eBPF themselves; the technology is essentially the next generation of Linux kernel modules.
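
For the curious, this is roughly what programming eBPF directly looks like. The minimal sketch below uses the bcc toolkit’s Python bindings to attach a tiny eBPF program to the kernel’s tcp_v4_connect function and print a trace line for every outbound IPv4 connection. It needs root privileges and bcc installed, and it is only meant to illustrate the kind of low-level work a platform like Cilium abstracts away, not how Cilium itself is built.

```python
from bcc import BPF

# A minimal eBPF program, compiled and loaded into the kernel by bcc.
# bcc auto-attaches functions named kprobe__<kernel_fn> as kprobes.
prog = r"""
int kprobe__tcp_v4_connect(struct pt_regs *ctx) {
    bpf_trace_printk("outbound IPv4 connect\n");
    return 0;
}
"""

b = BPF(text=prog)  # requires root and the bcc toolchain
print("Tracing tcp_v4_connect()... hit Ctrl-C to stop")
b.trace_print()     # stream the lines emitted by bpf_trace_printk
```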


“I have spent my entire career in this space, and the North Star has always been to go beyond IPs + ports and build networking visibility and security at a layer that is aligned with how developers, operations and security think about their applications and data,” said Martin Casado, partner at Andreessen Horowitz (and the founder of Nicira). “Until just recently, the technology did not exist. All of that changed with Kubernetes and eBPF. Dan and Thomas have put together the best team in the industry and, given the traction around Cilium, they are well on their way to upending the world of networking yet again.”

As more companies adopt Kubernetes, they are now reaching a stage where they have the basics down but are now facing the next set of problems that come with this transition. Those, almost by default, include figuring out how to isolate workloads and get visibility into their networks — all areas where Isovalent/Cilium can help.

The team tells me its focus, now that the product is out of stealth, is on building out its go-to-market efforts and, of course, continuing to build out its platform.


By Frederic Lardinois

Microsoft’s Edge browser is coming to Linux in October

Microsoft’s Edge browser is coming to Linux, starting with the Dev channel. The first of these previews will go live in October.

When Microsoft announced that it would switch its Edge browser to the Chromium engine, it vowed to bring it to every popular platform. At the time, Linux wasn’t part of that list, but by late last year, it became clear that Microsoft was indeed working on a Linux version. Later, at this year’s Build, a Microsoft presenter even used it during a presentation.


Starting in October, Linux users will be able to download the browser either from the Edge Insider website or through their native package managers. Linux users will get the same Edge experience as users on Windows and macOS, as well as access to its built-in privacy and security features. For the most part, I would expect the Linux experience to be on par with that on the other platforms.

Microsoft also today announced that its developers have made more than 3,700 commits to the Chromium project so far. Some of this work has been on support for touchscreens, but the team also contributed to areas like accessibility features and developer tools, on top of core browser fundamentals.

Currently, Microsoft Edge is available on Windows 7, 8 and 10, as well as macOS, iOS and Android.


By Frederic Lardinois

Mirantis acquires Lens, an IDE for Kubernetes

Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the company that originally developed Lens.

Lens itself was most recently owned by Lakend Labs, though, which describes itself as “a collective of cloud native compute geeks and technologists” that is “committed to preserving and making available the open-source software and products of Kontena.” Lakend open-sourced Lens a few months ago.


“The mission of Mirantis is very simple: we want to be — for the enterprise — the fastest way to [build] modern apps at scale,” Mirantis CEO Adrian Ionel told me. “We believe that enterprises are constantly undergoing this cycle of modernizing the way they build applications from one wave to the next — and we want to provide products to the enterprise that help them make that happen.”

Right now, that means a focus on helping enterprises build cloud-native applications at scale and, almost by default, that means providing these companies with all kinds of container infrastructure services.

“But there is another piece of the story that’s always been going through our minds, which is, how do we become more developer-centric and developer-focused, because, as we’ve all seen in the past 10 years, developers have become more and more in charge of what services and infrastructure they’re actually using,” Ionel explained. And that’s where the Kontena and Lens acquisitions fit in. Managing Kubernetes clusters, after all, isn’t trivial — yet now developers are often tasked with managing and monitoring how their applications interact with their company’s infrastructure.

“Lens makes it dramatically easier for developers to work with Kubernetes, to build and deploy their applications on Kubernetes, and it’s just a huge obstacle-remover for people who are turned off by the complexity of Kubernetes to get more value,” he added.

“I’m very excited to see that we found a common vision with Adrian for how to incorporate Lens and how to make life for developers more enjoyable in this cloud-native technology landscape,” Miska Kaipiainen, the former CEO of Kontena and now Mirantis’ Director of Engineering, told me.

He describes Lens as an IDE for Kubernetes. While you could obviously replicate Lens’ functionality with existing tools, Kaipiainen argues that it would take 20 different tools to do this. “One of them could be for monitoring, another could be for logs. A third one is for command-line configuration, and so forth and so forth,” he said. “What we have been trying to do with Lens is that we are bringing all these technologies [together] and provide one single, unified, easy to use interface for developers, so they can keep working on their workloads and on their clusters, without ever losing focus and the context on what they are working on.”

Among other things, Lens includes a context-aware terminal, multi-cluster management capabilities that work across clouds, and support for the open-source Prometheus monitoring service.
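
To get a feel for the plumbing Lens wraps into a single interface, here is a minimal sketch using the official Kubernetes Python client to walk every context in a local kubeconfig, list a few pods and tail some logs — the sort of thing Lens surfaces without any scripting. The pod and log choices are illustrative only.

```python
from kubernetes import client, config

# Walk every cluster defined in the local kubeconfig, the way a
# multi-cluster tool has to under the hood.
contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    name = ctx["name"]
    api = client.CoreV1Api(
        api_client=config.new_client_from_config(context=name))

    print(f"--- cluster context: {name}")
    pods = api.list_pod_for_all_namespaces(limit=5)
    for pod in pods.items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

    # Tail the logs of the first pod as an example (skip clusters with no pods).
    if pods.items:
        first = pods.items[0]
        print(api.read_namespaced_pod_log(
            name=first.metadata.name,
            namespace=first.metadata.namespace,
            tail_lines=10))
```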

For Mirantis, Lens is a very strategic investment and the company will continue to develop the service. Indeed, Ionel said that the Lens team now basically has unlimited resources.

Looking ahead, Kaipiainen said that the team is looking at adding extensions to Lens through an API within the next couple of months. “Through this extension API, we are actually able to collaborate and work more closely with other technology vendors within the cloud technology landscape so they can start plugging directly into the Lens UI and visualize the data coming from their components, so that will make it very powerful.”

Ionel also added that the company is working on adding more features for larger software teams to Lens, which is currently a single-user product. A lot of users are already using Lens in the context of very large development teams, after all.

While the core Lens tools will remain free and open-source, Mirantis will likely charge for some new features that require a centralized service for managing them. What exactly that will look like remains to be seen, though.

If you want to give Lens a try, you can download the Windows, macOS and Linux binaries here.


By Frederic Lardinois

Google launches the Open Usage Commons, a new organization for managing open-source trademarks

Google, in collaboration with a number of academic leaders and its consulting partner SADA Systems, today announced the launch of the Open Usage Commons, a new organization that aims to help open-source projects manage their trademarks.

To be fair, at first glance, open-source trademarks may not sound like a major problem (or even a really interesting topic), but there’s more here than meets the eye. As Google’s director of open source Chris DiBona told me, trademarks have increasingly become an issue for open-source projects, not necessarily because there have been legal issues around them, but because commercial entities that want to use the logo or name of an open-source project on their websites, for example, don’t have the reassurance that they are free to use those trademarks.

“One of the things that’s been rearing its ugly head over the last couple years has been trademarks,” he told me. “There’s not a lot of trademarks in open-source software in general, but particularly at Google, and frankly the higher tier, the more popular open-source projects, you see them more and more over the last five years. If you look at open-source licensing, they don’t treat trademarks at all the way they do copyright and patents, even Apache, which is my favorite license, they basically say, nope, not touching it, not our problem, you go talk.”

Traditionally, open-source licenses didn’t cover trademarks because there simply weren’t a lot of trademarks in the ecosystem to worry about. One of the exceptions here was Linux, a trademark that is now managed by the Linux Mark Institute on behalf of Linus Torvalds.

Because of this, commercial companies aren’t sure how to handle the situation, and developers also don’t know how to respond to these companies when they ask about their trademarks.

“What we wanted to do is give guidance around how you can share trademarks in the same way that you would share patents and copyright in an open-source license […],” DiBona explained. “And the idea is to basically provide that guidance, you know, provide that trademarks file, if you will, that you include in your source code.”

Google itself is putting three of its own open-source trademarks into this new organization: the Angular web application framework for mobile, the Gerrit code review tool and the Istio service mesh. “All three of them are kind of perfect for this sort of experiment because they’re under active development at Google, they have a trademark associated with them, they have logos and, in some cases, a mascot.”

One of those mascots is Diffi, the Kung Fu Code Review Cuckoo, because, as DiBona noted, “we were trying to come up with literally the worst mascot we could possibly come up with.” It’s now up to the Open Usage Commons to manage that trademark.

DiBona also noted that all three projects have third parties shipping products based on these projects (think Gerrit as a service).

Another thing DiBona stressed is that this is an independent organization. Besides himself, Jen Phillips, a senior engineering manager for open source at Google, is also on the board. But the team also brought in SADA’s CTO Miles Ward (who was previously at Google); Allison Randal, the architect of the Parrot virtual machine and member of the board of directors of the Perl Foundation and OpenStack Foundation, among others; Charles Isbell, the dean of the Georgia Institute of Technology College of Computing; and Cliff Lampe, a professor at the School of Information at the University of Michigan and a “rising star,” as DiBona pointed out.

“These are people who really have the best interests of computer science at heart, which is why we’re doing this,” DiBona noted. “Because the thing about open source — people talk about it all the time in the context of business and all the rest. The reason I got into it is because through open source we could work with other people in this sort of fertile middle space and sort of know what the deal was.”


By Frederic Lardinois

SUSE acquires Kubernetes management platform Rancher Labs

SUSE, which describes itself as ‘the world’s largest independent open source company,’ today announced that it has acquired Rancher Labs, a company that has long focused on making it easier for enterprises to manage their container clusters.

The two companies did not disclose the price of the acquisition, but Rancher was well funded, with a total of $95 million in investments. It’s also worth mentioning that it’s only been a few months since the company announced its $40 million Series D round led by Telstra Ventures. Other investors include the likes of Mayfield, Nexus Venture Partners, GRC SinoGreen and F&G Ventures.

Like similar companies, Rancher’s original focus was on Docker infrastructure before it pivoted to putting its emphasis on Kubernetes once that became the de facto standard for container orchestration. Unsurprisingly, this is also why SUSE is now acquiring the company. After a number of ups and downs — and various ownership changes — SUSE has now found its footing again, and today’s acquisition shows that it’s aiming to capitalize on its current strengths.

Just last month, the company reported that the annual contract value of its bookings increased by 30% year over year and that it saw a 63% increase in customer deals worth more than $1 million in the last quarter, with its cloud revenue growing 70%. While it is still in the Linux distribution business that the company was founded on, today’s SUSE is a very different company, offering various enterprise platforms (including its Cloud Foundry-based Cloud Application Platform), solutions and services. And while it already offered a Kubernetes-based container platform, Rancher’s expertise will only help it build out this business.

“This is an incredible moment for our industry, as two open source leaders are joining forces. The merger of a leader in Enterprise Linux, Edge Computing and AI with a leader in Enterprise Kubernetes Management will disrupt the market to help customers accelerate their digital transformation journeys,” said SUSE CEO Melissa Di Donato in today’s announcement. “Only the combination of SUSE and Rancher will have the depth of a globally supported and 100% true open source portfolio, including cloud native technologies, to help our customers seamlessly innovate across their business from the edge to the core to the cloud.”

The company describes today’s acquisition as the first step in its ‘inorganic growth strategy’ and Di Donato notes that this acquisition will allow the company to “play an even more strategic role with cloud service providers, independent hardware vendors, systems integrators and value-added resellers who are eager to provide greater customer experiences.”


By Frederic Lardinois

OpenStack adds the StarlingX edge computing stack to its top-level projects

The OpenStack Foundation today announced that StarlingX, a container-based system for running edge deployments, is now a top-level project. With this, it joins the main OpenStack private and public cloud infrastructure project, the Airship lifecycle management system, Kata Containers and the Zuul CI/CD platform.

What makes StarlingX a bit different from some of these other projects is that it is a full stack for edge deployments — and in that respect, it’s maybe more akin to OpenStack than the other projects in the foundation’s stable. It uses open-source components from the Ceph storage platform, the KVM virtualization solution, Kubernetes and, of course, OpenStack and Linux. The promise here is that StarlingX can provide users with an easy way to deploy container and VM workloads to the edge, all while being scalable, lightweight and providing low-latency access to the services hosted on the platform.

Early StarlingX adopters include China UnionPay, China Unicom and T-Systems. The original codebase was contributed to the foundation by Intel and Wind River Systems in 2018. Since then, the project has seen 7,108 commits from 211 authors.

“The StarlingX community has made great progress in the last two years, not only in building great open source software but also in building a productive and diverse community of contributors,” said Ildiko Vancsa, ecosystem technical lead at the OpenStack Foundation. “The core platform for low-latency and high-performance applications has been enhanced with a container-based, distributed cloud architecture, secure booting, TPM device enablement, certificate management and container isolation. StarlingX 4.0, slated for release later this year, will feature enhancements such as support for Kata Containers as a container runtime, integration of the Ussuri version of OpenStack, and containerization of the remaining platform services.”

It’s worth remembering that the OpenStack Foundation has gone through a few changes in recent years. The most important of these is that it is now taking on other open-source infrastructure projects that are not part of the core OpenStack project but are strategically aligned with the organization’s mission. The first of these to graduate out of the pilot project phase and become top-level projects were Kata Containers and Zuul in April 2019, with Airship joining them in October.

Currently, the only pilot project for the OpenStack Foundation is its OpenInfra Labs project, a community of commercial vendors and academic institutions, including the likes of Boston University, Harvard, MIT, Intel and Red Hat, that are looking at how to better test open-source code in production-like environments.



By Frederic Lardinois

IBM and Red Hat expand their telco, edge and AI enterprise offerings

At IBM’s Think Digital conference, the company and Red Hat today announced a number of new services that all center around 5G edge and AI. The fact that the companies are focusing on these two areas doesn’t come as a surprise, given that edge and AI are two of the fastest-growing businesses in enterprise computing. Virtually every telecom company is now looking at how to best capitalize on the upcoming 5G rollouts, and most forward-looking enterprises are trying to figure out how to best plan around this for their own needs.

As IBM’s recently minted president Jim Whitehurst told me ahead of today’s announcement, he believes that IBM (in combination with Red Hat) is able to offer enterprises a very differentiated service because, unlike the large hyperscale clouds, IBM isn’t interested in locking these companies into a homogeneous cloud.

“Where IBM is competitively differentiated, is around how we think about helping clients on a journey to what we call hybrid cloud,” said Whitehurst, who hasn’t done a lot of media interviews since he took the new role, which still includes managing Red Hat. “Honestly, everybody has hybrid clouds. I wish we had a more differentiated term. One of the things that’s different is how we’re talking about how you think about an application portfolio that, by necessity, you’re going to have in multiple ways. If you’re a large enterprise, you probably have a mainframe running a set of transactional workloads that probably are going to stay there for a long time because there’s not a great alternative. And there’s going to be a set of applications you’re going to want to run in a distributed environment that need to access that data — all the way out to you running a factory floor and you want to make sure that the paint sprayer doesn’t have any defects while it’s painting a door.”

BARCELONA, CATALONIA, SPAIN – 2019/02/25: The IBM logo is seen during MWC 2019. (Photo by Paco Freire/SOPA Images/LightRocket via Getty Images)

He argues that IBM, at its core, is all about helping enterprises think about how to best run their workloads from a software, hardware and services perspective. “Public clouds are phenomenal, but they are exposing a set of services in a homogeneous way to enterprises,” he noted, arguing that IBM is trying to weave all of these different pieces together.

Later in our discussion, he argued that the large public clouds essentially force enterprises to fit their workloads to those clouds’ services. “The public clouds do extraordinary things and they’re great partners of ours, but their primary business is creating these homogeneous services, at massive volumes, and saying ‘if your workloads fit into this, we can run it better, faster, cheaper etc.’ And they have obviously expanded out. They’ve added services. They are now saying we can put a box on-premise, but you’re still fitting into their model.”

On the news side, IBM is launching new services to automate business planning, budgeting and forecasting, for example, as well as new AI-driven tools for building and running automation apps that can handle routine tasks either autonomously or with the help of a human counterpart. The company is also launching new tools for call-center automation.

The most important AI announcement is surely Watson AIOps, though, which is meant to help enterprises detect, diagnose and respond to IT anomalies in order to reduce the effects of incidents and outages for a company.

On the telco side, IBM is launching new tools like the Edge Application Manager, for example, to make it easier to enable AI, analytics and IoT workloads on the edge, powered by IBM’s open-source Open Horizon edge computing project. The company is also launching a new Telco Network Cloud manager built on top of Red Hat OpenShift and the ability to also leverage the Red Hat OpenStack Platform (which remains an important platform for telcos and represents a growing business for IBM/Red Hat). In addition, IBM is launching a new dedicated IBM Services team for edge computing and telco cloud to help these customers build out their 5G and edge-enabled solutions.

Telcos are also betting big on a lot of different open-source technologies that often form the core of their 5G and edge deployments. Red Hat was already a major player in this space, but the acquisition has only accelerated this, Whitehurst argued. “Since the acquisition […] telcos have a lot more confidence in IBM’s capabilities to serve them long term and be able to serve them in mission-critical context. But importantly, IBM also has the capability to actually make it real now.”

A lot of the new telco edge and hybrid cloud deployments, he also noted, are built on Red Hat technologies but built by IBM, and neither IBM nor Red Hat could have really brought these to fruition in the same way. Red Hat never had the size, breadth and skills to pull off some of these projects, Whitehurst argued.

Whitehurst also argued that part of the Red Hat DNA that he’s bringing to the table now is helping IBM to think more in terms of ecosystems. “The DNA that I think matters a lot that Red Hat brings to the table with IBM — and I think IBM is adopting and we’re running with it — is the importance of ecosystems,” he said. “All of Red Hat’s software is open source. And so really, what you’re bringing to the table is ecosystems.”

It’s maybe no surprise then that the telco initiatives are backed by partners like Cisco, Dell Technologies, Juniper, Intel, Nvidia, Samsung, Packet, Equinix, Hazelcast, Sysdig, Turbonomics, Portworx, Humio, Indra Minsait, EuroTech, Arrow, ADLINK, Acromove, Geniatech, SmartCone, CloudHedge, Altiostar, Metaswitch, F5 Networks and ADVA.

In many ways, Red Hat pioneered the open-source business model and Whitehurst argued that having Red Hat as part of the IBM family means it’s now easier for the company to make the decision to invest even more in open source. “As we accelerate into this hybrid cloud world, we’re going to do our best to leverage open-source technologies to make them real,” he added.


By Frederic Lardinois

Nvidia acquires Cumulus Networks

Nvidia today announced its plans to acquire Cumulus Networks, an open-source centric company that specializes in helping enterprises optimize their data center networking stack. Cumulus offers both its own Linux distribution for network switches, as well as tools for managing network operations. With Cumulus Express, the company also offers a hardware solution in the form of its own data center switch.

The two companies did not announce the price of the acquisition, but chances are we are talking about a considerable amount, given that Cumulus had raised $134 million since it was founded in 2010.

Mountain View-based Cumulus already had a previous partnership with Mellanox, which Nvidia acquired for $6.9 billion. That acquisition closed only a few days ago. As Mellanox’s Amit Katz notes in today’s announcement, the two companies first met in 2013 and they formed a first official partnership in 2016.  Cumulus, it’s worth noting, was also an early player in the OpenStack ecosystem.

Having both Cumulus and Mellanox in its stable will give Nvidia virtually all of the tools it needs to help enterprises and cloud providers build out their high-performance computing and AI workloads in their data centers. While you may mostly think about Nvidia because of its graphics cards, the company has a sizable data center group, which delivered close to $1 billion in revenue in the last quarter, up 43 percent from a year ago. In comparison, Nvidia’s revenue from gaming was just under $1.5 billion.

“With Cumulus, NVIDIA can innovate and optimize across the entire networking stack from chips and systems to software including analytics like Cumulus NetQ, delivering great performance and value to customers,” writes Katz. “This open networking platform is extensible and allows enterprise and cloud-scale data centers full control over their operations.”


By Frederic Lardinois

Granulate announces $12M Series A to optimize infrastructure performance

As companies increasingly look to find ways to cut costs, Granulate, an early-stage Israeli startup, has come up with a clever way to optimize infrastructure usage. Today it was rewarded with a tidy $12 million Series A investment.

Insight Partners led the round with participation from TLV Partners and Hetz Ventures. Lonne Jaffe, managing director at Insight Partners, will be joining the Granulate board under the terms of the agreement. Today’s investment brings the total raised to $15.6 million, according to the company.

The startup claims it can cut infrastructure costs, whether on-prem or in the cloud, by between 20% and 80%. This is not insignificant if they can pull it off, especially in the economic maelstrom in which we find ourselves.

Asaf Ezra, co-founder and CEO at Granulate, says the company achieved this efficiency through an extensive study of how Linux virtual machines work. Over six months of experimentation, they simply moved the bottleneck around until they learned how to take advantage of the way the Linux kernel operates to gain massive efficiencies.

It turns out that Linux has been optimized for resource fairness, but Granulate’s founders wanted to flip this idea on its head and look for repetitiveness, concentrating on one function instead of fair allocation across many functions, some of which might not really need access at any given moment.

“When it comes to production systems, you have a lot of repetitiveness in the machine, and you basically want it to do one thing really well,” he said.

He points out that it doesn’t even have to be a VM. It could also be a container or a pod in Kubernetes. The important thing to remember is that you no longer care about the interactivity and fairness inherent in Linux; instead, you want the machine to be optimized for certain things.

“You let us know what your utility function for that production system is, then our agents basically optimize all the decision making for that utility function. That means that you don’t even have to do any code changes to gain the benefit,” Ezra explained.

What’s more, the solution uses machine learning to help understand how the different utility functions work to provide greater optimization to improve performance even more over time.
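
Granulate hasn’t published how its agents work, but the underlying idea of trading scheduler fairness for one hot workload can be illustrated with plain Linux primitives. The hypothetical Python sketch below pins a known-hot process to dedicated cores and raises its scheduling priority; the PID and core numbers are placeholders, and this is only a rough analogy for the kind of decision an automated agent might make, not Granulate’s actual approach.

```python
import os

# Placeholder: PID of the latency-critical service we want the machine to favor.
HOT_PID = 1234

# Give the hot process dedicated CPU cores so competing work stops preempting it.
# (Linux-only call; the core IDs here are arbitrary examples.)
os.sched_setaffinity(HOT_PID, {2, 3})

# Raise its scheduling priority; negative nice values need elevated privileges.
os.setpriority(os.PRIO_PROCESS, HOT_PID, -10)

print("Pinned", HOT_PID, "to cores", os.sched_getaffinity(HOT_PID))
```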

Insight’s Jaffe certainly recognized the potential of such a solution, especially right now.

“The need to have high-performance digital experiences and lower infrastructure costs has never been more important, and Granulate has a highly differentiated offering powered by machine learning that’s not dependent on configuration management or cloud resource purchasing solutions,” Jaffe said in a statement.

Ezra understands that a product like his could be particularly helpful at the moment. “We’re in a unique position. Our offering right now helps organizations survive the downturn by saving costs without firing people,” he said.

The company was founded in 2018 and currently has 20 employees. They plan to double that by the end of 2020.


By Ron Miller

Paul Cormier takes over as Red Hat CEO, as Jim Whitehurst moves to IBM

When Ginni Rometty indicated that she was stepping down as IBM CEO at the end of January, the company announced that Arvind Krishna would be taking over, while Red Hat CEO Jim Whitehurst would become president. To fill his role, Red Hat announced today that long-time executive Paul Cormier has been named president and CEO.

Cormier would seem to be a logical choice to run Red Hat, having been with the company since 2001. He joined as its VP of engineering and has seen the company grow from a small startup to a multi-billion dollar company.

Cormier spoke about the historical arc he has witnessed in his years at Red Hat. “Looking back to when I joined, we were in a different position and facing different issues, but the spirit was the same. We were on a mission to convince the world that open source was real, safe and enterprise-grade,” Cormier said in an email to employees about his promotion.

Former CEO Whitehurst certainly sees this as a sensible transition. “After working with him closely for more than a decade, I can confidently say that Paul was the natural choice to lead Red Hat. Having been the driving force behind Red Hat’s product strategy for nearly two decades, he’s been intimately involved in setting the company’s direction and uniquely understands how to help customers and partners make the most out of their cloud strategy,” he said in a statement.

In a Q&A with Cormier on the company website, he talked about the kind of changes he expects to see under his leadership in the next five years of the company. “There’s a term that we use today, ‘applications run the business.’ In five years, I see it becoming the case for the majority of enterprises. And with that, the infrastructure underpinning these applications will be even more critical. Management and security are paramount — and this isn’t just one environment. It’s bare metal and hypervisors to public and private clouds. It’s Linux, VMs, containers, microservices and more,” he said.

When IBM bought Red Hat in 2018 for $34 billion, there was widespread speculation that Whitehurst would eventually take over in an executive position there. Now that that has happened, Cormier will step in to run Red Hat.

While Red Hat is under the IBM umbrella, it continues to operate as a separate company with its own executive structure. Still, the vision Cormier outlined is in line with how Red Hat will fit within the IBM family as it tries to make its mark on the shifting cloud and enterprise open-source markets.


By Ron Miller

AWS launches Bottlerocket, a Linux-based OS for container hosting

AWS has launched its own open-source operating system for running containers on both virtual machines and bare metal hosts. Bottlerocket, as the new OS is called, is basically a stripped-down Linux distribution that’s akin to projects like CoreOS’s now-defunct Container Linux and Google’s container-optimized OS. The OS is currently in its developer preview phase, but you can test it as an Amazon Machine Image for EC2 (and by extension, under Amazon EKS, too).

As AWS chief evangelist Jeff Barr notes in his announcement, Bottlerocket supports Docker images and images that conform to the Open Container Initiative image format, which means it’ll basically run all Linux-based containers you can throw at it.

One feature that makes Bottlerocket stand out is that it does away with a package-based update system. Instead, it uses an image-based model that, as Barr notes, “allows for a rapid & complete rollback if necessary.” The idea here is that this makes updates easier. At the core of this update process is “The Update Framework,” an open-source project hosted by the Cloud Native Computing Foundation.
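
To make the contrast with package-based updates concrete, here is a deliberately simplified, hypothetical Python sketch of a two-slot, image-based update scheme: the new image lands in the inactive slot and a small state file is flipped atomically, so a complete rollback is just flipping it back. None of the file names or layout come from Bottlerocket itself, which uses its own tooling plus The Update Framework for signing and verification.

```python
import json
import os
import shutil
import tempfile

STATE = "state.json"                      # records which slot boots next
SLOTS = {"A": "slot_a.img", "B": "slot_b.img"}


def active_slot():
    with open(STATE) as f:
        return json.load(f)["active"]


def apply_update(new_image_path):
    """Write the new image to the inactive slot, then flip the pointer atomically."""
    current = active_slot()
    inactive = "B" if current == "A" else "A"
    shutil.copyfile(new_image_path, SLOTS[inactive])

    # os.replace is atomic on the same filesystem: the state file is always
    # either the old version or the new one, never half-written.
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        json.dump({"active": inactive, "previous": current}, f)
    os.replace(tmp, STATE)


def rollback():
    """Rolling back is just pointing the state file at the previous slot."""
    with open(STATE) as f:
        state = json.load(f)
    state["active"], state["previous"] = state["previous"], state["active"]
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE)
```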

AWS says it will provide three years of support (after General Availability) for its own builds of Bottlerocket. As of now, the project is very much focused on AWS, of course, but the code is available on GitHub and chances are we will see others expand on AWS’ work.

The company is launching the project in cooperation with a number of partners, including Alcide, Armory, CrowdStrike, Datadog, New Relic, Sysdig, Tigera, Trend Micro and Weaveworks.

“Container-optimized operating systems will give dev teams the additional speed and efficiency to run higher throughput workloads with better security and uptime,” said Michael Gerstenhaber, Director of Product Management at Datadog. “We are excited to work with AWS on Bottlerocket, so that as customers take advantage of the increased scale they can continue to monitor these ephemeral environments with confidence.”



By Frederic Lardinois

Databricks makes bringing data into its ‘lakehouse’ easier

Databricks today announced the launch of its new Data Ingestion Network of partners and its new Databricks Ingest service. The idea here is to make it easier for businesses to combine the best of data warehouses and data lakes into a single platform — a concept Databricks likes to call ‘lakehouse.’

At the core of the company’s lakehouse is Delta Lake, Databricks’ Linux Foundation-managed open-source project that brings a new storage layer to data lakes that helps users manage the lifecycle of their data and ensures data quality through schema enforcement, log records and more. Databricks users can now work with the first five partners in the Ingestion Network — Fivetran, Qlik, Infoworks, StreamSets, Syncsort — to automatically load their data into Delta Lake. To ingest data from these partners, Databricks customers don’t have to set up any triggers or schedules — instead, data automatically flows into Delta Lake.
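
For readers who haven’t used Delta Lake, here is a minimal PySpark sketch of what writing to and reading from a Delta table looks like once data has arrived. The paths are placeholders, and the two config lines are the documented way to enable the open-source delta-spark package; this is separate from anything Databricks Ingest does automatically.

```python
from pyspark.sql import SparkSession

# Enable the open-source Delta Lake extensions (pip install delta-spark,
# or run on a cluster that already ships the Delta jars).
spark = (
    SparkSession.builder
    .appName("delta-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Placeholder paths: a batch of raw JSON events and the Delta table location.
raw = spark.read.json("/data/raw/events/")

# Appending through the Delta format records the change in a transaction log
# and enforces the table's schema on write.
raw.write.format("delta").mode("append").save("/data/delta/events")

# The table reads back like any other Spark data source.
spark.read.format("delta").load("/data/delta/events").show(5)
```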

“Until now, companies have been forced to split up their data into traditional structured data and big data, and use them separately for BI and ML use cases. This results in siloed data in data lakes and data warehouses, slow processing and partial results that are too delayed or too incomplete to be effectively utilized,” says Ali Ghodsi, co-founder and CEO of Databricks. “This is one of the many drivers behind the shift to a Lakehouse paradigm, which aspires to combine the reliability of data warehouses with the scale of data lakes to support every kind of use case. In order for this architecture to work well, it needs to be easy for every type of data to be pulled in. Databricks Ingest is an important step in making that possible.”

Databricks VP of Product Marketing Bharath Gowda also tells me that this will make it easier for businesses to perform analytics on their most recent data and hence be more responsive when new information comes in. He also noted that users will be able to better leverage their structured and unstructured data for building better machine learning models, as well as to perform more traditional analytics on all of their data instead of just a small slice that’s available in their data warehouse.



By Frederic Lardinois

The 7 most important announcements from Microsoft Ignite

It’s Microsoft Ignite this week, the company’s premier event for IT professionals and decision-makers. But it’s not just about new tools for role-based access. Ignite is also very much a forward-looking conference that keeps the changing role of IT in mind. And while there isn’t a lot of consumer news at the event, the company does tend to make a few announcements for developers, as well.

This year’s Ignite was especially news-heavy. Ahead of the event, the company provided journalists and analysts with an 87-page document that lists all of the news items. If I counted correctly, there were about 175 separate announcements. Here are the top seven you really need to know about.

Azure Arc: you can now use Azure to manage resources anywhere, including on AWS and Google Cloud

What was announced: Microsoft was among the first of the big cloud vendors to bet big on hybrid deployments. With Arc, the company is taking this a step further. It will let enterprises use Azure to manage their resources across clouds — including those of competitors like AWS and Google Cloud. It’ll work for Windows and Linux Servers, as well as Kubernetes clusters, and also allows users to take some limited Azure data services with them to these platforms.

Why it matters: With Azure Stack, Microsoft already allowed businesses to bring many of Azure’s capabilities into their own data centers. But because it’s basically a local version of Azure, it only worked on a limited set of hardware. Arc doesn’t bring all of the Azure Services, but it gives enterprises a single platform to manage all of their resources across the large clouds and their own data centers. Virtually every major enterprise uses multiple clouds. Managing those environments is hard. So if that’s the case, Microsoft is essentially saying, let’s give them a tool to do so — and keep them in the Azure ecosystem. In many ways, that’s similar to Google’s Anthos, yet with an obvious Microsoft flavor, less reliance on Kubernetes and without the managed services piece.

Microsoft launches Project Cortex, a knowledge network for your company

What was announced: Project Cortex creates a knowledge network for your company. It uses machine learning to analyze all of the documents and contracts in your various repositories — including those of third-party partners — and then surfaces them in Microsoft apps like Outlook, Teams and its Office apps when appropriate. It’s the company’s first new commercial service since the launch of Teams.

Why it matters: Enterprises these days generate tons of documents and data, but it’s often spread across numerous repositories and is hard to find. With this new knowledge network, the company aims to surface this information proactively, but it also looks at who the people are who work on them and tries to help you find the subject matter experts when you’re working on a document about a given subject, for example.


Microsoft launches Endpoint Manager to modernize device management

What was announced: Microsoft is combining its ConfigMgr and Intune services that allow enterprises to manage the PCs, laptops, phones and tablets they issue to their employees under the Endpoint Manager brand. With that, it’s also launching a number of tools and recommendations to help companies modernize their deployment strategies. ConfigMgr users will now also get a license to Intune to allow them to move to cloud-based management.

Why it matters: In this world of BYOD, where every employee uses multiple devices, as well as constant attacks against employee machines, effectively managing these devices has become challenging for most IT departments. They often use a mix of different tools (ConfigMgr for PCs, for example, and Intune for cloud-based management of phones). Now, they can get a single view of their deployments with the Endpoint Manager, which Microsoft CEO Satya Nadella described as one of the most important announcements of the event, and ConfigMgr users will get an easy path to move to cloud-based device management thanks to the Intune license they now have access to.

Microsoft’s Chromium-based Edge browser gets new privacy features, will be generally available January 15

What was announced: Microsoft’s Chromium-based version of Edge will be generally available on January 15. The release candidate is available now. That’s the culmination of a lot of work from the Edge team, and, with today’s release, the company is also adding a number of new privacy features to Edge that, in combination with Bing, offers some capabilities that some of Microsoft’s rivals can’t yet match, thanks to its newly enhanced InPrivate browsing mode.

Why it matters: Browsers are interesting again. After years of focusing on speed, the new focus is now privacy, and that’s giving Microsoft a chance to gain users back from Chrome (though maybe not Firefox). At Ignite, Microsoft also stressed that Edge’s business users will get to benefit from a deep integration with its updated Bing engine, which can now surface business documents, too.


You can now try Microsoft’s web-based version of Visual Studio

What was announced: At Build earlier this year, Microsoft announced that it would soon launch a web-based version of its Visual Studio development environment, based on the work it did on the free Visual Studio Code editor. This experience, with deep integrations into the Microsoft-owned GitHub, is now live in a preview.

Why it matters: Microsoft has long said that it wants to meet developers where they are. While Visual Studio Online isn’t likely to replace the desktop-based IDE for most developers, it’s an easy way for them to make quick changes to code that lives in GitHub, for example, without having to set up their IDE locally. As long as they have a browser, developers will be able to get their work done.

Microsoft launches Power Virtual Agents, its no-code bot builder

What was announced: Power Virtual Agents is Microsoft’s new no-code/low-code tool for building chatbots. It leverages a lot of Azure’s machine learning smarts to let you create a chatbot with the help of a visual interface. In case you outgrow that and want to get to the actual code, you can always do so, too.

Why it matters: Chatbots aren’t exactly at the top of the hype cycle, but they do have lots of legitimate uses. Microsoft argues that a lot of early efforts were hampered by the fact that the developers were far removed from the user. With a visual tool, though, anybody can come in and build a chatbot — and a lot of those builders will have a far better understanding of what their users are looking for than a developer who is far removed from that business group.

Cortana wants to be your personal executive assistant and read your emails to you, too

What was announced: Cortana lives — and it now also has a male voice. But more importantly, Microsoft launched a few new focused Cortana-based experiences that show how the company is focusing on its voice assistant as a tool for productivity. In Outlook on iOS (with Android coming later), Cortana can now read you a summary of what’s in your inbox — and you can have a chat with it to flag emails, delete them or dictate answers. Cortana can now also send you a daily summary of your calendar appointments, important emails that need answers and suggest focus time for you to get actual work done that’s not email.

Why it matters: In this world of competing assistants, Microsoft is very much betting on productivity. Cortana didn’t work out as a consumer product, but the company believes there is a large (and lucrative) niche for an assistant that helps you get work done. Because Microsoft doesn’t have a lot of consumer data, but does have lots of data about your work, that’s probably a smart move.


SAN FRANCISCO, CA – APRIL 02: Microsoft CEO Satya Nadella walks in front of the new Cortana logo as he delivers a keynote address during the 2014 Microsoft Build developer conference on April 2, 2014 in San Francisco, California (Photo by Justin Sullivan/Getty Images)

Bonus: Microsoft agrees with you and thinks meetings are broken — and often it’s the broken meeting room that makes meetings even harder. To battle this, the company today launched Managed Meeting Rooms, which for $50 per room/month lets you delegate to Microsoft the monitoring and management of the technical infrastructure of your meeting rooms.


By Frederic Lardinois