Which emerging technologies are enterprise companies getting serious about in 2020?

Startups need to live in the future. They create roadmaps, build products and continually upgrade them with an eye on next year — or even a few years out.

Big companies, often the target customers for startups, live in a much more near-term world. They buy technologies that can solve problems they know about today, rather than those they may face a couple of bends down the road. In other words, they’re driving a Dodge, and most tech entrepreneurs are driving a DeLorean equipped with a flux capacitor.

That situation can lead to a huge waste of time for startups that want to sell to enterprise customers: a business development black hole. Startups are talking about technology shifts and customer demands that the executives inside the large company — even if they have “innovation,” “IT,” or “emerging technology” in their titles — just don’t see as an urgent priority yet, or can’t sell to their colleagues.

How do you avoid the aforementioned black hole? Some recent research that my company, Innovation Leader, conducted in collaboration with KPMG LLP, suggests a constructive approach.

Rather than asking large companies about which technologies they were experimenting with, we created four buckets, based on what you might call “commitment level.” (Our survey had 211 respondents, 62% of them in North America and 59% at companies with greater than $1 billion in annual revenue.) We asked survey respondents to assess a list of 16 technologies, from advanced analytics to quantum computing, and put each one into one of these four buckets. We conducted the survey at the tail end of Q3 2020.
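The ranking exercise behind the lists that follow is easy to picture as code. A minimal sketch in Python, using the survey’s bucket names but made-up response data rather than the actual survey results:

```python
from collections import Counter

# The four commitment levels used in the survey.
BUCKETS = [
    "not exploring or investing",
    "learning and exploring",
    "investing or piloting",
    "in-market or accelerating investment",
]

def rank_technologies(responses, bucket):
    """Rank technologies by how many respondents placed them in a bucket.

    `responses` is a list of (technology, bucket) pairs, one per
    respondent-technology assessment.
    """
    counts = Counter(tech for tech, b in responses if b == bucket)
    return [tech for tech, _ in counts.most_common()]

# Hypothetical responses, not the survey data itself.
responses = [
    ("quantum computing", "not exploring or investing"),
    ("quantum computing", "not exploring or investing"),
    ("blockchain", "learning and exploring"),
    ("advanced analytics", "investing or piloting"),
    ("advanced analytics", "investing or piloting"),
    ("AI/machine learning", "investing or piloting"),
]

print(rank_technologies(responses, "investing or piloting"))
# ['advanced analytics', 'AI/machine learning']
```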

Respondents in the first group were “not exploring or investing” — in other words, “we don’t care about this right now.” The top technology there was quantum computing.

Bucket #2 was the second-lowest commitment level: “learning and exploring.” At this stage, a startup gets to educate its prospective corporate customer about an emerging technology — but nabbing a purchase commitment is still quite a few exits down the highway. It can be constructive to begin building relationships when a company is at this stage, but your sales staff shouldn’t start calculating their commissions just yet.

Here are the top five things that fell into the “learning and exploring” cohort, in ranked order:

  1. Blockchain.
  2. Augmented reality/mixed reality.
  3. Virtual reality.
  4. AI/machine learning.
  5. Wearable devices.

Technologies in the third group, “investing or piloting,” may represent the sweet spot for startups. At this stage, the corporate customer has already discovered some internal problem or use case that the technology might address. They may have shaken loose some early funding. They may have departments internally, or test sites externally, where they know they can conduct pilots. Often, they’re assessing what established tech vendors like Microsoft, Oracle and Cisco can provide — and they may find their solutions wanting.

Here’s what our survey respondents put into the “investing or piloting” bucket, in ranked order:

  1. Advanced analytics.
  2. AI/machine learning.
  3. Collaboration tools and software.
  4. Cloud infrastructure and services.
  5. Internet of things/new sensors.

By the time a technology is placed into the fourth category, which we dubbed “in-market or accelerating investment,” it may be too late for a startup to find a foothold. There’s already a clear understanding of at least some of the use cases or problems that need solving, and return-on-investment metrics have been established. But some providers have already been chosen based on successful pilots, and you may need to dislodge someone the enterprise is already working with. It can happen, but the headwinds are strong.

Here’s what the survey respondents placed into the “in-market or accelerating investment” bucket, in ranked order:


By Walter Thompson

Qualcomm Ventures invests in four 5G startups

Qualcomm Ventures, Qualcomm’s investment arm, today announced four new strategic investments in 5G-related startups. These companies are private mobile network specialist Celona, mobile network automation platform Cellwize, the edge computing platform Azion and Pensando, another edge computing platform that combines its software stack with custom hardware.

The overall goal here is obviously to help jumpstart 5G use cases in the enterprise and — by extension — for consumers by investing in a wide range of companies that can build the necessary infrastructure to enable them.

“We invest globally in the wireless mobile ecosystem, with a goal of expanding our base of customers and partners — and one of the areas we’re particularly excited about is the area of 5G,” Quinn Li, a Senior VP at Qualcomm and the global head of Qualcomm Ventures, told me. “Within 5G, there are three buckets of areas we look to invest in: one is in use cases, second is in network transformation, third is applying 5G technology in enterprises.”

So far, Qualcomm Ventures has invested over $170 million in the 5G ecosystem, including this new batch. The firm did not disclose how much it invested in these four new startups, though.

Overall, this new set of companies touches upon the core areas Qualcomm Ventures is looking at, Li explained. Celona, for example, aims to make it as easy for enterprises to deploy private cellular infrastructure as it is to deploy Wi-Fi today.

“They built this platform with a cloud-based controller that leverages the available spectrum — CBRS — to be able to take the cellular technology, whether it’s LTE or 5G, into enterprises,” Li explained. “And then these enterprise use cases could be in manufacturing settings, could be in schools, could be in hospitals, or it could be on campus for universities.”

Cellwize, meanwhile, helps automate wireless networks to make them more flexible and manageable, in part by using machine learning to tune the network based on the data it collects. One of the main investment theses for this fund, Li told me, is that wireless technology will become increasingly software-defined and Cellwize fits right into this trend. The potential customer here isn’t necessarily an individual enterprise, though, but wireless and mobile operators.

Edge computing, where Azion and Pensando play, is obviously also a hot category right now, and one where 5G has some obvious advantages, so it’s maybe no surprise that Qualcomm Ventures is putting a bit of a focus on the space with these two investments.

“As we move forward, [you will] see a lot of the compute moving from the cloud into the edge of the network, which allows for processing happening at the edge of the network, which allows for low latency applications to run much faster and much more efficiently,” Li said.

In total, Qualcomm Ventures has deployed $1.5 billion and made 360 investments since its launch in 2000. Some of the more successful companies the firm has invested in include unicorns like Zoom, Cloudflare, Xiaomi, Cruise Automation and Fitbit.


By Frederic Lardinois

Microsoft launches Edge Zones for Azure

Microsoft today announced the launch of Azure Edge Zones, which will allow Azure users to bring their applications to the company’s edge locations. The focus here is on enabling real-time low-latency 5G applications. The company is also launching a version of Edge Zones with carriers (starting with AT&T) in preview, which connects these zones directly to 5G networks in the carrier’s data center. And to round it all out, Azure is also getting Private Edge Zones for those who are deploying private 5G/LTE networks in combination with Azure Stack Edge.

In addition to partnering with carriers like AT&T, as well as Rogers, SK Telecom, Telstra and Vodafone, Microsoft is also launching new standalone Azure Edge Zones in more than 10 cities over the next year, starting with L.A., Miami and New York later this summer.

“For the last few decades, carriers and operators have pioneered how we connect with each other, laying the foundation for telephony and cellular,” the company notes in today’s announcement. “With cloud and 5G, there are new possibilities by combining cloud services, like compute and AI, with high bandwidth and ultra-low latency. Microsoft is partnering with them to bring 5G to life in immersive applications built by organizations and developers.”

This may all sound a bit familiar and that’s because only a few weeks ago, Google launched Anthos for Telecom and its Global Mobile Edge Cloud, which at first glance offers a similar promise of bringing applications close to that cloud’s edge locations for 5G and telco usage. Microsoft argues that its offering is more comprehensive in terms of its partner ecosystem and geographic availability. But it’s clear that 5G is a trend all of the large cloud providers are trying to tap into. Microsoft’s own acquisition of 5G cloud specialist Affirmed Networks is yet another example of how it is looking to position itself in this market.

As far as the details of the various Edge Zone versions go, the focus of Edge Zones is mostly on IoT and AI workloads, while Microsoft notes that Edge Zones with Carriers is more about low-latency online gaming, remote meetings and events, as well as smart infrastructure. Private Edge Zones, which combine private carrier networks with Azure Stack Edge, are something only a small number of large enterprise companies are likely to look into, given the cost and complexity of rolling out such a system.

 


By Frederic Lardinois

Microsoft acquires 5G specialist Affirmed Networks

Microsoft today announced that it has acquired Affirmed Networks, a company that specializes in fully virtualized, cloud-native networking solutions for telecom operators.

With its focus on 5G and edge computing, Affirmed looks like the ideal acquisition target for a large cloud provider looking to get deeper into the telco business. According to Crunchbase, Affirmed had raised a total of $155 million before this acquisition and the company’s over 100 enterprise customers include the likes of AT&T, Orange, Vodafone, Telus, Turkcell and STC.

“As we’ve seen with other technology transformations, we believe that software can play an important role in helping advance 5G and deliver new network solutions that offer step-change advancements in speed, cost and security,” writes Yousef Khalidi, Microsoft’s corporate vice president for Azure Networking. “There is a significant opportunity for both incumbents and new players across the industry to innovate, collaborate and create new markets, serving the networking and edge computing needs of our mutual customers.”

With its customer base, Affirmed gives Microsoft another entry point into the telecom industry. Previously, telcos would often build their own data centers and stuff them with costly proprietary hardware (and the software to manage it). But thanks to today’s virtualization technologies, the large cloud platforms are now able to offer the same capabilities and reliability without any of the cost. And unsurprisingly, a new technology like 5G, with its promise of new and expanded markets, makes for a good moment to push forward with these new technologies.

Google recently made some moves in this direction with its Anthos for Telecom and Global Mobile Edge Cloud, too. Chances are, we will see all of the large cloud providers continue to go after this market in the coming months.

In a somewhat odd move, only yesterday Affirmed announced a new CEO and President, Anand Krishnamurthy. It’s not often that we see these kinds of executive moves hours before a company announces its acquisition.

The announcement doesn’t feature a single hint at today’s news and includes all of the usual cliches we’ve come to expect from a press release that announces a new CEO. “We are thankful to Hassan for his vision and commitment in guiding the company through this extraordinary journey and positioning us for tremendous success in the future,” Krishnamurthy wrote at the time. “It is my honor to lead Affirmed as we continue to drive this incredible transformation in our industry.”

We asked Affirmed for some more background about this and will update this post once we hear more.


By Frederic Lardinois

Microsoft Azure gets into ag tech with the preview of FarmBeats

At its annual Ignite event in Orlando, Fla., Microsoft today announced that Azure FarmBeats, a project that until now was mostly a research effort, will be available as a public preview and in the Azure Marketplace, starting today. FarmBeats is Microsoft’s project that combines IoT sensors, data analysis and machine learning.

“The goal of FarmBeats is to augment farmers’ knowledge and intuition about their own farm with data and data-driven insights,” Microsoft explained in today’s announcement. The idea behind FarmBeats is to take in data from a wide variety of sources, including sensors, satellites, drones and weather stations, and then turn that into actionable intelligence for farmers, using AI and machine learning.

In addition, FarmBeats also wants to be somewhat of a platform for developers who can then build their own applications on top of this data that the platform aggregates and evaluates.

As Microsoft noted during the development process, having satellite imagery is one thing, but that can’t capture all of the data on a farm. For that, you need in-field sensors and other data — yet all of this heterogeneous data then has to be merged and analyzed somehow. Farms also often don’t have great internet connectivity. Because of this, the FarmBeats team was among the first to leverage Microsoft’s efforts in using TV white space for connectivity and, of course, Azure IoT Edge for collecting all of the data.
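The data-fusion step described here can be pictured with a toy example. The function, field names and thresholds below are invented for illustration and are not part of FarmBeats:

```python
def fuse_readings(satellite_ndvi, soil_sensors, rainfall_mm):
    """Combine heterogeneous farm data into a per-field irrigation hint.

    satellite_ndvi: vegetation index per field (0..1), from satellite imagery
    soil_sensors:   soil moisture per field (0..1), from in-field sensors
    rainfall_mm:    recent rainfall per field, from weather stations
    """
    hints = {}
    for field in soil_sensors:
        # A field looks dry if the soil is parched and little rain fell.
        dry = soil_sensors[field] < 0.2 and rainfall_mm.get(field, 0) < 5
        # Satellite data fills in where ground truth is missing.
        stressed = satellite_ndvi.get(field, 1.0) < 0.4
        hints[field] = "irrigate" if (dry or stressed) else "ok"
    return hints

print(fuse_readings({"north": 0.35},
                    {"north": 0.15, "south": 0.5},
                    {"north": 2, "south": 20}))
# {'north': 'irrigate', 'south': 'ok'}
```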


By Frederic Lardinois

Arm brings custom instructions to its embedded CPUs

At its annual TechCon event in San Jose, Arm today announced Custom Instructions, a new feature of its Armv8-M architecture for embedded CPUs that, as the name implies, enables its customers to write their own custom instructions to accelerate their specific use cases for embedded and IoT applications.

“We already have ways to add acceleration, but not as deep and down to the heart of the CPU. What we’re giving [our customers] here is the flexibility to program your own instructions, to define your own instructions — and have them executed by the CPU,” Thomas Ensergueix, Arm’s senior director for its automotive and IoT business, told me ahead of today’s announcement.

He noted that Arm has always offered a continuum of options for acceleration, starting with its memory-mapped architecture for connecting GPUs and today’s neural processing units over a bus. This allows the CPU and the accelerator to run in parallel, but with the bus being the bottleneck. Customers can also opt for a co-processor that’s directly connected to the CPU, but today’s news essentially allows Arm customers to create their own accelerated algorithms that then run directly on the CPU. That means the latency is low, but it’s not running in parallel, as with the memory-mapped solution.


As Arm argues, this setup allows for the lowest-cost (and lowest-risk) path for integrating customer workload acceleration, as there are no disruptions to the existing CPU features, and it still allows customers to use the standard tools they are already familiar with.

For now, custom instructions will only be available to be implemented in the Arm Cortex-M33 CPUs, starting in the first half of 2020. By default, it’ll also be available for all future Cortex-M processors. There are no additional costs or new licenses to buy for Arm’s customers.

Ensergueix noted that as we’re moving to a world with more and more connected devices, more of Arm’s customers will want to optimize their processors for their often very specific use cases — and often they’ll want to do so because by creating custom instructions, they can get a bit more battery life out of these devices, for example.

Arm has already lined up a number of partners to support Custom Instructions, including IAR Systems, NXP, Silicon Labs and STMicroelectronics.

“Arm’s new Custom Instructions capabilities allow silicon suppliers like NXP to offer their customers a new degree of application-specific instruction optimizations to improve performance, power dissipation and static code size for new and emerging embedded applications,” writes NXP’s Geoff Lees, SVP and GM of Microcontrollers. “Additionally, all these improvements are enabled within the extensive Cortex-M ecosystem, so customers’ existing software investments are maximized.”

In related embedded news, Arm also today announced that it is setting up a governance model for Mbed OS, its open-source operating system for embedded devices that run an Arm Cortex-M chip. Mbed OS has always been open source, but the Mbed OS Partner Governance model will allow Arm’s Mbed silicon partners to have more of a say in how the OS is developed through tools like a monthly Product Working Group meeting. Partners like Analog Devices, Cypress, Nuvoton, NXP, Renesas, Realtek, Samsung and u-blox are already participating in this group.


By Frederic Lardinois

Satya Nadella looks to the future with edge computing

Speaking today at the Microsoft Government Leaders Summit in Washington DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge where computing is done locally before data is then transferred to the cloud for AI and machine learning purposes. What goes around, comes around.

But as Nadella sees it, this is not going to be about either edge or cloud. It’s going to be the two technologies working in tandem. “Now, all this is being driven by this new tech paradigm that we describe as the intelligent cloud and the intelligent edge,” he said today.

He said that to truly understand the impact the edge is going to have on computing, you have to look at research, which predicts there will be 50 billion connected devices in the world by 2030, a number even he finds astonishing. “I mean this is pretty stunning. We think about a billion Windows machines or a couple of billion smartphones. This is 50 billion [devices], and that’s the scope,” he said.

The key here is that these 50 billion devices, whether you call them edge devices or the Internet of Things, will be generating tons of data. That means you will have to develop entirely new ways of thinking about how all this flows together. “The capacity at the edge, that ubiquity is going to be transformative in how we think about computation in any business process of ours,” he said. As we generate ever-increasing amounts of data, whether for public-sector use cases or any business need, it’s going to be the fuel for artificial intelligence, and he sees the sheer amount of that data driving new AI use cases.

“Of course when you have that rich computational fabric, one of the things that you can do is create this new asset, which is data and AI. There is not going to be a single application, a single experience that you are going to build, that is not going to be driven by AI, and that means you have to really have the ability to reason over large amounts of data to create that AI,” he said.

Nadella would be more than happy to have his audience take care of all that using Microsoft products, whether Azure compute, database, AI tools or edge computers like the Data Box Edge it introduced in 2018. While Nadella is probably right about the future of computing, all of this could apply to any cloud, not just Microsoft.

As computing shifts to the edge, it’s going to have a profound impact on the way we think about technology in general, but it’s probably not going to involve being tied to a single vendor, regardless of how comprehensive their offerings may be.


By Ron Miller

Microsoft brings Plug and Play to IoT

Microsoft today announced that it wants to bring the ease of use of Plug and Play, which today allows you to plug virtually any peripheral into a Windows PC without having to worry about drivers, to IoT devices. Typically, getting an IoT device connected and up and running takes some work, even with modern deployment tools. The promise of IoT Plug and Play is that it will greatly simplify this process and do away with the hardware and software configuration steps that are still needed today.

As Azure corporate vice president Julia White writes in today’s announcement, “one of the biggest challenges in building IoT solutions is to connect millions of IoT devices to the cloud due to the heterogeneous nature of devices today – such as different form factors, processing capabilities, operating systems, memory and capabilities.” This, Microsoft argues, is holding back IoT adoption.

IoT Plug and Play, on the other hand, offers developers an open modeling language that will allow them to connect these devices to the cloud without having to write any code.
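To picture what a device capability model buys you, here is a deliberately simplified sketch; the schema and field names are invented for illustration and are not Microsoft’s actual modeling language:

```python
# A simplified, invented stand-in for a device capability model --
# not Microsoft's actual IoT Plug and Play schema.
thermostat_model = {
    "id": "example:thermostat:1",
    "telemetry": [{"name": "temperature", "schema": "double", "unit": "celsius"}],
    "properties": [{"name": "targetTemperature", "schema": "double", "writable": True}],
    "commands": [{"name": "reboot"}],
}

def validate_model(model):
    """Check that a model declares the sections a cloud service would need
    in order to bind to the device without device-specific code."""
    required = {"id", "telemetry", "properties", "commands"}
    missing = required - model.keys()
    if missing:
        raise ValueError(f"model missing sections: {sorted(missing)}")
    return True

print(validate_model(thermostat_model))  # True
```

The point of such a declaration is that the cloud side can generate the plumbing (telemetry parsing, property sync, command dispatch) from the model alone.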

Microsoft can’t do this alone, though, since it needs the support of the hardware and software manufacturers in its IoT ecosystem, too. The company has already signed up a number of partners, including Askey, Brainium, Compal, Kyocera, STMicroelectronics, Thundercomm and VIA Technologies. The company says that dozens of devices are already Plug and Play-ready, and potential users can find them in the Azure IoT Device Catalog.


By Frederic Lardinois

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET Core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.
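Docker’s multi-architecture tooling exposes this kind of build as a single command. A sketch of what that looks like from an x86 desktop (the image names are placeholders, not real repositories):

```shell
# Build a 64-bit Arm image from an x86 desktop; under the hood Docker
# emulates the Arm architecture (via QEMU) so no Arm hardware or
# cross-compilation setup is needed.
docker buildx build --platform linux/arm64 -t myorg/myapp:arm64 .

# The same command can produce a multi-architecture image in one go
# and push it to a registry:
docker buildx build --platform linux/amd64,linux/arm64 \
    -t myorg/myapp:latest --push .
```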

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Business Development David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to remove applications’ dependence on the infrastructure they run on. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.

 


By Frederic Lardinois

Armis nabs $65M Series C as IoT security biz grows in leaps and bounds

Armis is helping companies protect IoT devices on the network without using an agent, and it’s apparently a problem that is resonating with the market, as the startup reports 700 percent growth in the last year. That caught the attention of investors, who awarded it a $65 million Series C investment to help keep accelerating that growth.

Sequoia Capital led the round with help from new investors Insight Venture Partners and Intermountain Ventures. Returning investors Bain Capital Ventures, Red Dot Capital Partners and Tenaya Capital also participated. Today’s investment brings the total raised to $112 million, according to the company.

The company is solving a hard problem around device management on a network. If you have devices where you cannot apply an agent to track them, how do you manage them? Nadir Izrael, company co-founder and CTO, says you have to do it very carefully because even scanning for ports could be too much for older devices and they could shut down. Instead, he says that Armis takes a passive approach to security, watching and learning and understanding what normal device behavior looks like — a kind of behavioral fingerprinting.

“We observe what devices do on the network. We look at their behavior, and we figure out from that everything we need to know,” Izrael told TechCrunch. He adds, “Armis in a nutshell is a giant device behavior crowdsourcing engine. Basically, every client of Armis is constantly learning how devices behave. And those statistical models, those machine learning models, they get merged into master models.”
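That behavioral-fingerprinting idea can be sketched in a few lines: passively learn a baseline of what a device normally does, then flag anything outside it. This is an illustrative toy, not Armis’s actual models:

```python
from collections import defaultdict

class BehaviorModel:
    """Toy passive behavioral fingerprinting: learn which (protocol, port)
    pairs a device normally uses, then flag traffic outside that baseline.
    (Illustrative only -- not Armis's actual approach.)"""

    def __init__(self):
        self.baseline = defaultdict(set)

    def observe(self, device, protocol, port):
        # Learning phase: record normal behavior without probing the
        # device -- no ports are scanned, no agent is installed.
        self.baseline[device].add((protocol, port))

    def is_anomalous(self, device, protocol, port):
        # Anything the device has never been seen doing is suspect.
        return (protocol, port) not in self.baseline[device]

model = BehaviorModel()
model.observe("infusion-pump-7", "tcp", 443)   # normal: HTTPS to its server
print(model.is_anomalous("infusion-pump-7", "tcp", 443))  # False
print(model.is_anomalous("infusion-pump-7", "tcp", 23))   # True: telnet is new
```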

Whatever they are doing, they seem to have hit upon a security pain point. They announced a $30 million Series B almost exactly a year ago, and they went back for more because they were growing quickly and needed the capital to hire people to keep up.

That kind of growth is a challenge for any startup. The company expects to double its 125-person workforce before the end of the year, and it is working to put systems in place to incorporate those new people and serve all of those new customers.

The company plans to hire more people in sales and marketing, of course, but they will concentrate on customer support and building out partnership programs to get some help from systems integrators, ISVs and MSPs, who can do some of the customer hand-holding for them.


By Ron Miller

Microsoft gives 500 patents to startups

Microsoft today announced a major expansion of its Azure IP Advantage program, which provides its Azure users with protection against patent trolls. This program now also provides customers who are building IoT solutions that connect to Azure with access to 10,000 patents to defend themselves against intellectual property lawsuits.

What’s maybe most interesting here, though, is that Microsoft is also donating 500 patents to startups in the LOT Network. This organization, which counts companies like Amazon, Facebook, Google, Microsoft, Netflix, SAP, Epic Games, Ford, GM, Lyft and Uber among its well over 150 members, is designed to protect companies against patent trolls by giving them access to a wide library of patents from its member companies and other sources.

“The LOT Network is really committed to helping address the proliferation of intellectual property losses, especially ones that are brought by non-practicing entities, or so-called trolls,” Microsoft CVP and Deputy General Counsel Erich Andersen told me.

This new program goes well beyond basic protection from patent trolls, though. Qualified startups who join the LOT Network can acquire Microsoft patents as part of their free membership and, as Andersen stressed, the startups will own them outright. The LOT Network will be able to provide its startup members with up to three patents from this collection.

There’s one additional requirement here, though: to qualify for getting the patents, these startups also have to meet a $1,000 per month Azure spend. As Andersen told me, though, they don’t have to make any kind of forward pledge. The company will simply look at a startup’s last three monthly Azure bills.

“We want to help the LOT Network grow its network of startups,” Andersen said. “To provide an incentive, we are going to provide these patents to them.” He noted that startups are obviously interested in getting access to patents as a foundation of their companies, but also to raise capital and to defend themselves against trolls.

The patents we’re talking about here cover a wide range of technologies as well as geographies. Andersen noted that we’re talking about U.S. patents as well as European and Chinese patents, for example.

“The idea is that these startups come from a diverse set of industry sectors,” he said. “The hope we have is that when they approach LOT, they’ll find patents among those 500 that are going to be interesting to basically almost any company that might want a foundational set of patents for their business.”

As for the extended Azure IP Advantage program, it’s worth noting that any Azure customer that has spent more than $1,000 per month over the past three months and hasn’t filed a patent infringement lawsuit against another Azure customer in the last two years can automatically pick one of the patents in the program’s portfolio to defend itself against frivolous patent suits from trolls. (That’s a different library of patents from the one Microsoft is donating to the LOT Network as part of the startup program.)
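Spelled out, that eligibility rule is a simple predicate. A sketch that just restates the criteria from the text (the function name and inputs are illustrative, not Microsoft’s API):

```python
def eligible_for_patent_pick(monthly_azure_bills, sued_azure_customer_recently):
    """Sketch of the eligibility rule as described: more than $1,000 of
    Azure spend in each of the past three months, and no patent
    infringement suit filed against another Azure customer in the past
    two years. (Illustrative names and inputs.)"""
    last_three = monthly_azure_bills[-3:]
    if len(last_three) < 3:
        return False  # not enough billing history yet
    return all(bill > 1000 for bill in last_three) and not sued_azure_customer_recently

print(eligible_for_patent_pick([1200, 1500, 1100], False))  # True
print(eligible_for_patent_pick([1200, 1500, 1100], True))   # False
```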

As Andersen noted, the team looked at how it could enhance the IP program by focusing on a number of specific areas. Microsoft is obviously investing a lot into IoT, so extending the program to this area makes sense. “What we’re basically saying is that if the customer is using IoT technology — regardless of whether it’s Microsoft technology or not — and it’s connected to Azure, then we’re going to provide this patent pick right to help customers defend themselves against patent suits,” Andersen said.

In addition, for those who do choose to use Microsoft IoT technology across the board, Microsoft will provide indemnification, too.

Patent trolls have lately started acquiring IoT patents, so chances are they are getting ready to make use of them and that we’ll see quite a bit of patent litigation in this space in the future. “The early signs we’re seeing indicate that this is something that customers are going to care about in the future,” said Andersen.


By Frederic Lardinois

Arm expands its push into the cloud and edge with the Neoverse N1 and E1

For the longest time, Arm was basically synonymous with chip designs for smartphones and very low-end devices. But more recently, the company launched solutions for laptops, cars, high-powered IoT devices and even servers. Today, ahead of MWC 2019, the company is officially launching two new products for cloud and edge applications, the Neoverse N1 and E1. Arm unveiled the Neoverse brand a few months ago, but it’s only now that it is taking concrete form with the launch of these new products.

“We’ve always been anticipating that this market is going to shift as we move more towards this world of lots of really smart devices out at the endpoint — moving beyond even just what smartphones are capable of doing,” Drew Henry, Arm’s SVP and GM for Infrastructure, told me in an interview ahead of today’s announcement. “And when you start anticipating that, you realize that those devices out of those endpoints are going to start creating an awful lot of data and need an awful lot of compute to support that.”

To address these two problems, Arm decided to launch two products: one that focuses on compute speed and one that is all about throughput, especially in the context of 5G.

ARM NEOVERSE N1

The Neoverse N1 platform is meant for infrastructure-class solutions that focus on raw compute speed. The chips should perform significantly better than previous Arm CPU generations meant for the data center; the company says it saw speedups of 2.5x for Nginx and Memcached, for example. Chip manufacturers can optimize the 7nm platform for their needs, with core counts ranging from as few as 4 up to 128.

“This technology platform is designed for a lot of compute power that you could either put in the data center or stick out at the edge,” said Henry. “It’s very configurable for our customers so they can design how big or small they want those devices to be.”

The E1 is also a 7nm platform, but with a stronger focus on edge computing use cases where you also need some compute power to maybe filter out data as it is generated, but where the focus is on moving that data quickly and efficiently. “The E1 is very highly efficient in terms of its ability to be able to move data through it while doing the right amount of compute as you move that data through,” explained Henry, who also stressed that the company made the decision to launch these two different platforms based on customer feedback.

There’s no point in launching these platforms without software support, though. A few years ago, that would have been a challenge because few commercial vendors supported their data center products on the Arm architecture. Today, many of the biggest open-source and proprietary projects and distributions run on Arm chips, including Red Hat Enterprise Linux, Ubuntu, Suse, VMware, MySQL, OpenStack, Docker, Microsoft .Net, DOK and OPNFV. “We have lots of support across the space,” said Henry. “And then as you go down to that tier of languages and libraries and compilers, that’s a very large investment area for us at Arm. One of our largest investments in engineering is in software and working with the software communities.”

And as Henry noted, AWS also recently launched its Arm-based servers — and that surely gave the industry a lot more confidence in the platform, given that the biggest cloud supplier is now backing it, too.


By Frederic Lardinois

Salesforce wants to deliver more automated field service using IoT data

Salesforce has been talking about the Internet of Things for some time as a way to empower field service workers. Today, the company announced Field Service Lightning, a new component designed to deliver automated IoT data to service technicians in the field on their mobile devices.

Once sensors in the field are connected to Service Cloud, this information can be surfaced automatically to human customer service agents, along with other data about the customer pulled from Salesforce’s CRM system, giving the CSR a more complete picture of the customer.

“Drawing on IoT signals surfaced in the Service Cloud console, agents can gauge whether device failure is imminent, quickly determine the source of the problem (often before the customer is even aware a problem exists) and dispatch the right mobile worker with the right skill set,” Salesforce’s SVP and GM for Salesforce Field Service Lightning Paolo Bergamo wrote in a blog post introducing the new feature.

The field service industry has been talking for years about using IoT data from the field to deliver more proactive service and automate the customer service and repair process. That’s precisely what this new feature is designed to do. Let’s say you have a “smart home” with a heating and cooling system that can transmit data to the company that installed your equipment. With a system like this in place, the sensors could tell your HVAC dealer that a part is ready to break down and automatically start a repair process (that would presumably include calling the customer to tell them about it). When a CSR determines a repair visit is required, the repair technician would receive all the details on their smart phone.
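The proactive flow described above — a sensor reading suggests imminent failure, a case is opened, a technician is dispatched — boils down to threshold logic over incoming telemetry. A minimal sketch of that idea (all names, metrics, and thresholds here are hypothetical illustrations, not Salesforce’s API or data model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    device_id: str
    metric: str
    value: float

# Hypothetical per-metric failure thresholds; a real deployment would
# configure these per equipment model rather than hard-code them.
THRESHOLDS = {"compressor_vibration_mm_s": 7.1}

def assess(reading: SensorReading) -> Optional[str]:
    """Return a dispatch recommendation when a reading suggests imminent failure."""
    limit = THRESHOLDS.get(reading.metric)
    if limit is not None and reading.value > limit:
        return (f"Open case for {reading.device_id}: "
                f"{reading.metric}={reading.value} exceeds {limit}")
    return None

# A vibration reading above threshold triggers a case; a normal one does not.
print(assess(SensorReading("hvac-042", "compressor_vibration_mm_s", 9.3)))
print(assess(SensorReading("hvac-042", "compressor_vibration_mm_s", 1.2)))
```

The point of the product is that this kind of rule runs inside the Service Cloud console, so the agent sees the recommendation before the customer notices a problem.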

Customer Service Console view. Gif: Salesforce

It also could make for a smoother experience, because the repair technician can leave for the visit with the right equipment and parts for the job and a better understanding of what needs to be done before arriving at the customer location. This should, in theory, lead to more efficient service calls.

All of this is in line with a vision the field service industry has been talking about for some time: selling a subscription to a device like an air conditioning system instead of the device itself. The dealer would then be responsible for keeping the equipment up and running, and having access to data like this could bring that vision closer to reality.

In reality, most companies are probably not ready to implement a system like this and most equipment in the field has not been fitted with sensors to deliver this information to the Service Cloud. Still, companies like Salesforce, ServiceNow and ServiceMax (owned by GE) want to release products like this for early adopters and to have something in place as more companies look to put smarter systems in place in the field.


By Ron Miller

Twilio launches a new SIM card and narrowband dev kit for IoT developers

Twilio is hosting its Signal developer conference in San Francisco this week. Yesterday was all about bots and taking payments over the phone; today is all about IoT. The company is launching two new (but related) products today that will make it easier for IoT developers to connect their devices. The first is the Global Super SIM that offers global connectivity management through the networks of Twilio’s partners. The second is Twilio Narrowband, which, in cooperation with T-Mobile, offers a full software and hardware kit for building low-bandwidth IoT solutions and the narrowband network to connect them.

Twilio also announced that it is expanding its wireless network partnerships with the addition of Singtel, Telefonica and Three Group. Unsurprisingly, those are also the partners that make the company’s Super SIM project possible.

The Super SIM, which is currently in private preview and will launch in public beta in the spring of 2019, provides developers with a global network that lets them deploy and manage their IoT devices anywhere (assuming there is a cell connection or other internet connectivity, of course). The Super SIM gives developers the ability to choose the network they want to use or to let Twilio pick the defaults based on the local networks.

Twilio Narrowband is a slightly different solution. Its focus right now is on the U.S., where T-Mobile rolled out its Narrowband IoT network earlier this year. As the name implies, this is about connecting low-bandwidth devices that only need to send out small data packets like timestamps, GPS coordinates or status updates. Twilio Narrowband sits on top of this, using Twilio’s Programmable Wireless and SIM card. It then adds an IoT developer kit with an Arduino-based development board and the standard Grove sensors on top of that, as well as a T-Mobile-certified hardware module for connecting to the narrowband network. To program it all, Twilio is launching an SDK for handling network registrations and optimizing the communication between the devices and the cloud.
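Narrowband payloads need to be tiny. As a sketch of what packing the kind of status update mentioned above (timestamp, GPS fix, status flag) into a fixed 13-byte frame looks like — the field layout here is my own illustration, not Twilio’s or T-Mobile’s wire format:

```python
import struct
import time

def pack_status(lat: float, lon: float, status: int) -> bytes:
    """Pack a status update into 13 bytes, big-endian:
    4-byte unsigned unix timestamp, 4-byte signed latitude and
    4-byte signed longitude (both scaled to 1e-5 degrees, roughly
    1-meter resolution), and a 1-byte status flag."""
    return struct.pack(">Iiib",
                       int(time.time()),
                       int(lat * 1e5),
                       int(lon * 1e5),
                       status)

frame = pack_status(37.7749, -122.4194, 1)
print(len(frame))  # 13
```

Compared with sending the same fields as JSON, a packed frame like this is several times smaller, which matters when every transmitted byte costs battery and airtime on a narrowband link.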

The narrowband service will launch as a beta in early 2019 and offer three pricing plans: a developer plan for $2/month, an annual production plan for $10/year or $5/year at scale, and a five-year plan for $8/year or $4/year at scale.


By Frederic Lardinois

Foundries.io promises standardized open source IoT device security

IoT devices currently lack a standard way of applying security, which leaves consumers, whether businesses or individuals, wondering if their devices are secure and up to date. Foundries.io, a company that launched today, wants to change that by offering a standard way to secure devices and deliver updates over the air.

“Our mission is solving the problem of IoT and embedded space where there is no standardized core platform like Android for phones,” Foundries.io CEO George Grey explained.

What Foundries has created is an open, secure solution that saves everyone from building their own and reinventing the wheel every time. Grey says Foundries’ approach is not only secure, it also provides a long-term answer to the device update problem: a way to deliver updates over the air, in an automated manner, on any device from tiny sensors to smart thermostats to autonomous cars.

He says this approach will allow manufacturers to apply security patches in a similar way that Apple applies regular updates to iOS. “Manufacturers can continuously make sure their devices can be updated with the latest software to fix security flaws or Zero Day flaws,” he said.
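The update loop Grey describes reduces to a familiar pattern: a device reports its installed version, fetches the latest available one, and applies only verified newer builds. A minimal sketch of the two checks at the heart of that loop (the version scheme and digest-based verification are generic assumptions, not Foundries.io’s actual protocol):

```python
import hashlib

def should_update(installed: tuple, available: tuple) -> bool:
    """Apply an update only when the available build is strictly newer.
    Versions are (major, minor, patch) tuples, compared lexicographically."""
    return available > installed

def verify_artifact(payload: bytes, expected_sha256: str) -> bool:
    """Reject any downloaded update whose digest doesn't match the
    value published in the (trusted) update manifest."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

print(should_update((1, 4, 2), (1, 5, 0)))  # True: newer build available
print(should_update((1, 5, 0), (1, 5, 0)))  # False: already current
```

Automating exactly these checks fleet-wide, rather than leaving them to each manufacturer, is the gap the company says it is filling.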

The company offers two solutions, depending on the size and complexity of your device. The Zephyr RTOS microPlatform is designed for smaller, less complex devices. For those that are more complex, Foundries offers a version of Linux called the Linux OE microPlatform.

Diagram: Foundries.io

Grey claims that these platforms free manufacturers to build secure devices without having to hire a team of security experts. But he says the real beauty of the product is that the more people who use it, the more secure it will get, as more and more test it against their products in a virtuous cycle.

You may be wondering how the company makes money with this model: it charges a flat fee of $10,000 per year for Zephyr RTOS and $25,000 per year for Linux OE. These prices apply per product, regardless of how many units get sold, and there is no lock-in, according to Grey. Companies are free to back out at any time. “If you want to stop subscribing, you take over maintenance and you still have access to everything up to that point. You just have to arrange maintenance yourself,” he said.

There is also a hobbyist and education package for $10 a month.

The company spun off from research at Linaro, an organization that promotes development on top of Arm chips.

To be successful, Foundries.io needs to build a broad community of manufacturers. Today’s launch is the first step in that journey. If it eventually takes off, it has the potential to provide a consistent way of securing and updating IoT devices, a move which would certainly be welcome.


By Ron Miller