OpsRamp raises $37.5M for its hybrid IT operations platform

OpsRamp, a service that helps IT teams discover, monitor, manage and — maybe most importantly — automate their hybrid environments, today announced that it has closed a $37.5 million funding round led by Morgan Stanley Expansion Capital, with participation from existing investor Sapphire Ventures and new investor Hewlett Packard Enterprise.

OpsRamp last raised funding in 2017, when Sapphire led its $20 million Series A round.

At the core of OpsRamp’s services is its AIOps platform. Using machine learning and other techniques, this service aims to help IT teams manage increasingly complex infrastructure deployments, provide intelligent alerting, and eventually automate more of their tasks. The company’s overall product portfolio also includes tools for cloud monitoring and incident management.

The company says its annual recurring revenue increased by 300 percent in 2019 (though we obviously don’t know what number it started 2019 with). In total, OpsRamp says it now has 1,400 customers on its platform and alliances with AWS, ServiceNow, Google Cloud Platform and Microsoft Azure.

OpsRamp co-founder and CEO Varma Kunaparaju

According to OpsRamp co-founder and CEO Varma Kunaparaju, most of the company’s customers are mid to large enterprises. “These IT teams have large, complex, hybrid IT environments and need help to simplify and consolidate an incredibly fragmented, distributed and overwhelming technology and infrastructure stack,” he said. “The company is also seeing success in the ability of our partners to help us reach global enterprises and Fortune 5000 customers.”

Kunaparaju told me that the company plans to use the new funding to expand its go-to-market efforts and product offerings. “The company will be using the money in a few different areas, including expanding our go-to-market motion and new pursuits in EMEA and APAC, in addition to expanding our North American presence,” he said. “We’ll also be doubling-down on product development on a variety of fronts.”

Given that hybrid clouds only increase the workload for IT organizations and introduce additional tools, it’s maybe no surprise that investors are now interested in companies that offer services that rein in this complexity. If anything, we’ll likely see more deals like this one in the coming months.

“As more of our customers transition to hybrid infrastructure, we find the OpsRamp platform to be a differentiated IT operations management offering that aligns well with the core strategies of HPE,” said Paul Glaser, Vice President and Head of Hewlett Packard Pathfinder. “With OpsRamp’s product vision and customer traction, we felt it was the right time to invest in the growth and scale of their business.”


By Frederic Lardinois

Persona raises $17.5M for an identity verification platform that goes beyond user IDs and passwords

The proliferation of data breaches based on leaked passwords, and the rising tide of regulation that puts a hard stop on just how much user information can be collected, stored and used by companies have laid bare the holes in simple password and memorable-information-based verification systems.

Today a startup called Persona, which has built a platform to make it easier for organisations to implement more watertight methods based on third-party documentation, real-time evaluation, and AI to verify users, is announcing a funding round, speaking to the shift in the market and subsequent demand for new alternatives to the old way of doing things.

The startup has raised $17.5 million in a Series A from a list of impressive investors that include Coatue and First Round Capital, money that it plans to use to double down on its core product: a platform that businesses and organisations can access by way of an API, which lets them use a variety of documents, from government-issued IDs through to biometrics, to verify that customers are who they say they are.

Current customers include Rippling, Petal, UrbanSitter, Branch, Brex, Postmates, Outdoorsy, Rently, SimpleHealth and Hipcamp, among others. Persona’s target user today is any company involved in any kind of online financial transaction to verify for regulatory compliance, fraud prevention and for trust and safety.

The startup is young and is not disclosing its valuation. Previously, Persona had raised an undisclosed amount of funding from Kleiner Perkins and First Round, according to data from PitchBook. Angels in the company include Zach Perret and William Hockey (co-founders of Plaid), Dylan Field (founder of Figma), Scott Belsky (Behance) and Tony Xu (DoorDash).

Founded by Rick Song and Charles Yeh, respectively former engineers from Square and Dropbox (companies that have had their own concerns with identity verification and breaches), Persona’s main premise is that most companies are not security companies and therefore lack the people, skills, time and money to build strong authentication and verification services — much less to keep up with the latest developments on what is best practice.

And on top of that, there have been too many breaches that underscored the problem with companies holding too much information on users, collected for identification purposes but then sitting there waiting to be hacked.

The name of the game for Persona is to provide services that are easy for customers to use: for those who can’t or don’t want to touch the code of their apps or websites for registration flows, Persona can even verify users by way of email-based links.

“Digital identity is one of the most important things to get right, but there is no silver bullet,” Song, who is the CEO, said in an interview. “I believe longer term we’ll see that it’s not a one-size-fits-all approach.” Not least because malicious hackers have an ever-increasing array of tools to get around every system that gets put into place. (The latest is the rise of deep-fakes to mimic people, putting into question how to get around that in, say, a video verification system.)

At Persona, the company currently gives customers the option to ask for social security numbers, biometric verification such as fingerprints or pictures, or government ID uploads and phone lookups, some of which (like biometrics) are built by Persona itself and some of which are accessed via third-party partnerships. Added to that are other tools like quizzes and video-based interactions. Song said the list is expanding, and the company is looking at ways of using the AI engine that it’s building — which actually performs the matching — to also potentially suggest the best tools for each and every transaction.
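As a sketch of how a menu of checks like that might be driven programmatically, the snippet below assembles a verification request payload from a chosen set of checks. To be clear, the payload shape, field names and check identifiers here are illustrative assumptions, not Persona’s actual API:

```python
# Hypothetical payload builder for a hosted identity-verification service.
# The check names mirror the options described above; none of this is
# Persona's real API surface.
SUPPORTED_CHECKS = {"government_id", "selfie_biometric", "ssn_lookup",
                    "phone_lookup", "quiz", "video"}

def build_inquiry(checks, reference_id):
    """Assemble a verification 'inquiry' payload from a chosen set of checks."""
    unknown = set(checks) - SUPPORTED_CHECKS
    if unknown:
        raise ValueError(f"unsupported checks: {sorted(unknown)}")
    return {
        "data": {
            "type": "inquiry",
            "attributes": {
                # An opaque user ID of your own, so the business itself
                # holds no extra personal information.
                "reference-id": reference_id,
                "checks": list(checks),
            },
        }
    }

payload = build_inquiry(["government_id", "selfie_biometric"], "user-42")
```

The design point the article makes survives even in this toy: the business passes an opaque reference ID rather than storing the sensitive documents itself.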

The key point is that in every case, information is accessed from other databases, not kept by the customer itself.

This is a moving target, and one that is becoming increasingly harder to hit, given not just the rise in malicious hacking, but also regulation that limits how and when data can be accessed and used by online businesses. Persona notes a McKinsey forecast that the personal identity and verification market will be worth some $20 billion by 2022, which is not a surprising figure when you consider the nearly $9 billion in fines Google has faced in Europe so far, or the $700 million Equifax paid out, or the $50 million Yahoo (a sister company now) paid out for its own user-data breach.


By Ingrid Lunden

ServiceNow acquires Loom Systems to expand AIOps coverage

ServiceNow announced today that it has acquired Loom Systems, an Israeli startup that specializes in AIOps. The companies did not reveal the purchase price.

IT operations collects tons of data across a number of monitoring and logging tools, way too much for any team of humans to keep up with. That’s why there are startups like Loom turning to AI to help sort through it. It can find issues and patterns in the data that would be challenging or impossible for humans to find. Applying AI to operations data in this manner has become known as AIOps in industry parlance.
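As a toy illustration of the kind of pattern detection involved (real AIOps platforms like Loom’s work over far larger, multi-dimensional log and metric streams, with much more sophisticated models), a minimal statistical outlier check might look like this:

```python
import statistics

def find_anomalies(series, threshold=2.5):
    """Return indices of points more than `threshold` population standard
    deviations from the mean — a crude stand-in for the anomaly detection
    an AIOps product performs across monitoring data."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(series) if abs(x - mean) / stdev > threshold]

latency_ms = [12, 11, 13, 12, 14, 11, 250, 12, 13]
print(find_anomalies(latency_ms))  # → [6], the 250 ms spike
```

The value proposition is precisely that no human team can run even this simple a check continuously across thousands of signals, correlate the hits, and suggest a cause.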

ServiceNow is first and foremost a company trying to digitize the service process, however that manifests itself. IT service operations is a big part of that. Companies can monitor their systems, wait until a problem happens and then try and track down the cause and fix it, or they can use the power of artificial intelligence to find potential dangers to the system health and neutralize them before they become major problems. That’s what an AIOps product like Loom’s can bring to the table.

Jeff Hausman, vice president and general manager of IT Operations Management at ServiceNow sees Loom’s strengths merging with ServiceNow’s existing tooling to help keep IT systems running. “We will leverage Loom Systems’ log analytics capabilities to help customers analyze data, automate remediation and reduce L1 incidents,” he told TechCrunch.

Loom co-founder and CEO Gabby Menachem not surprisingly sees a similar value proposition. “By joining forces, we have the unique opportunity to bring together our AI innovations and ServiceNow’s AIOps capabilities to help customers prevent and fix IT issues before they become problems,” he said in a statement.

Loom had raised $16 million since it launched in 2015, according to PitchBook data. Its most recent round, for $10 million, came in November 2019. Today’s deal is expected to close by the end of this quarter.


By Ron Miller

Google brings IBM Power Systems to its cloud

As Google Cloud looks to convince more enterprises to move to its platform, it needs to be able to give businesses an onramp for their existing legacy infrastructure and workloads that they can’t easily replace or move to the cloud. A lot of those workloads run on IBM Power Systems with their Power processors and until now, IBM was essentially the only vendor that offered cloud-based Power systems. Now, however, Google is also getting into this game by partnering with IBM to launch IBM Power Systems on Google Cloud.

“Enterprises looking to the cloud to modernize their existing infrastructure and streamline their business processes have many options,” writes Kevin Ichhpurani, Google Cloud’s corporate VP for its global ecosystem in today’s announcement. “At one end of the spectrum, some organizations are re-platforming entire legacy systems to adopt the cloud. Many others, however, want to continue leveraging their existing infrastructure while still benefiting from the cloud’s flexible consumption model, scalability, and new advancements in areas like artificial intelligence, machine learning, and analytics.”

Power Systems support obviously fits in well here, given that many companies use them for mission-critical workloads based on SAP and Oracle applications and databases. With this, they can take those workloads and slowly move them to the cloud, without having to re-engineer their applications and infrastructure. Power Systems on Google Cloud is obviously integrated with Google’s services and billing tools.

This is very much an enterprise offering, without a published pricing sheet. Chances are, given the cost of a Power-based server, you’re not looking at a bargain, per-minute price here.

Since IBM has its own cloud offering, it’s a bit odd to see it work with Google to bring its servers to a competing cloud — though it surely wants to sell more Power servers. The move makes perfect sense for Google Cloud, though, which is on a mission to bring more enterprise workloads to its platform. Any roadblock the company can remove works in its favor and as enterprises get comfortable with its platform, they’ll likely bring other workloads to it over time.


By Frederic Lardinois

Zebra’s SmartSight inventory robot keeps an eye on store shelves

How many times have you gone into a store only to find the shelves out of the very item you came in for? This is a frequent problem, and it’s difficult, especially in larger retail establishments, to keep on top of stocking requirements. Zebra Technologies has a solution: a robot that scans the shelves and reports stock gaps to human associates.

The SmartSight robot is a hardware solution that roams the aisles of the store checking the shelves, using a combination of computer vision, machine learning, workflow automation and robotic capabilities. It can find inventory problems, pricing glitches and display issues. When it finds a problem, it sends a message to human associates via a Zebra mobile computer with the location and nature of the issue.

The robot takes advantage of Zebra’s EMA50 mobile automation technology and links to other store systems including inventory and online ordering systems. Zebra claims it increases available inventory by 95%, while reducing human time spent wandering the aisles to do inventory manually by an average of 65 hours.

While it will likely reduce the number of humans required to perform this type of task, Zebra’s Senior Vice President and General Manager of Enterprise Mobile Computing, Joe White, says it’s not always easy to find people to fill these types of positions.

“SmartSight and the EMA50 were developed to help retailers fully capitalize on the opportunities presented by the on-demand economy despite heightened competition and ongoing labor shortage concerns,” White said in a statement.

This is a solution that takes advantage of robotics to help humans keep store shelves stocked and find other issues. The SmartSight robot will be available on a subscription basis. That means retailers won’t have to worry about owning and maintaining the robot. If anything goes wrong, Zebra would be responsible for fixing it.


By Ron Miller

AWS announces new enterprise search tool powered by machine learning

Today at AWS re:Invent in Las Vegas, the company announced a new search tool called Kendra, which provides natural language search across a variety of content repositories using machine learning.

Matt Wood, AWS VP of artificial intelligence, said the new search tool uses machine learning, but doesn’t actually require machine learning expertise of any kind. Amazon is taking care of that for customers under the hood.

You start by identifying your content repositories. This could be anything from an S3 storage repository to OneDrive to Salesforce — anywhere you store content. You can use pre-built connectors from AWS, provide your credentials and connect to all of these different tools.

Kendra then builds an index based on the content it finds in the connected repositories, and users can begin to interact with the search tool using natural language queries. The tool understands concepts like time, so if the question is something like “When is the IT Help Desk open?,” the search engine understands that this is about time, checks the index and delivers the right information to the user.
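In code, a query goes through the standard AWS SDKs — for example, `boto3.client("kendra").query(IndexId=..., QueryText=...)` — and the response contains ranked result items. The helper below pulls out the extracted answer from such a response; the sample response here is fabricated for illustration, though the field names follow the shape of Kendra’s Query API:

```python
def top_answer(response):
    """Return the excerpt text of the first ANSWER-type result, if any."""
    for item in response.get("ResultItems", []):
        if item.get("Type") == "ANSWER":
            return item["DocumentExcerpt"]["Text"]
    return None

# Fabricated sample, shaped like a Kendra Query API response.
sample = {
    "ResultItems": [
        {"Type": "ANSWER",
         "DocumentTitle": {"Text": "IT FAQ"},
         "DocumentExcerpt": {"Text": "The IT Help Desk is open 9am-5pm, Mon-Fri."}},
    ]
}
print(top_answer(sample))  # → The IT Help Desk is open 9am-5pm, Mon-Fri.
```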

The beauty of this search tool is not only that it uses machine learning, but that, based on simple feedback from a user (like a smiley face or sad face emoji), it can learn which answers are good and which ones need improvement, and it does this automatically, without work from the search team.

Once you have it set up, you can drop the search on your company intranet or you can use it internally inside an application and it behaves as you would expect a search tool to do, with features like type ahead.


By Ron Miller

The Cerebras CS-1 computes deep learning AI problems by being bigger, bigger, and bigger than any other chip

Deep learning is all the rage these days in enterprise circles, and it isn’t hard to understand why. Whether it is optimizing ad spend, finding new drugs to cure cancer, or just offering better, more intelligent products to customers, machine learning — and particularly deep learning models — have the potential to massively improve a range of products and applications.

The key word though is ‘potential.’ While we have heard oodles of words sprayed across enterprise conferences the last few years about deep learning, there remain huge roadblocks to making these techniques widely available. Deep learning models are highly networked, with dense graphs of nodes that don’t “fit” well with the traditional ways computers process information. Plus, holding all of the information required for a deep learning model can take petabytes of storage and racks upon racks of processors in order to be usable.

There are lots of approaches underway right now to solve this next-generation compute problem, and Cerebras has to be among the most interesting.

As we talked about in August with the announcement of the company’s “Wafer Scale Engine” — the world’s largest silicon chip according to the company — Cerebras’ theory is that the way forward for deep learning is to essentially just get the entire machine learning model to fit on one massive chip. And so the company aimed to go big — really big.

Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, and also announced its first customer: Argonne National Laboratory.

The CS-1 is a “complete solution” product designed to be added to a data center to handle AI workflows. It includes the Wafer Scale Engine (or WSE, i.e. the actual processing core) plus all the cooling, networking, storage, and other equipment required to operate and integrate the processor into the data center. It’s 26.25 inches tall (15 rack units), and includes 400,000 processing cores, 18 gigabytes of on-chip memory, 9 petabytes per second of on-die memory bandwidth, 12 x 100 Gigabit Ethernet connections to move data in and out of the CS-1 system, and sucks just 20 kilowatts of power.

A cross-section look at the CS-1. Photo via Cerebras

Cerebras claims that the CS-1 delivers the performance of more than 1,000 leading GPUs combined — a claim that TechCrunch hasn’t verified, although we are intently waiting for industry-standard benchmarks in the coming months when testers get their hands on these units.

In addition to the hardware itself, Cerebras also announced the release of a comprehensive software platform that allows developers to use popular ML libraries like TensorFlow and PyTorch to integrate their AI workflows with the CS-1 system.

In designing the system, CEO and co-founder Andrew Feldman said that “We’ve talked to more than 100 customers over the past year and a bit,” in order to determine the needs for a new AI system and the software layer that should go on top of it. “What we’ve learned over the years is that you want to meet the software community where they are rather than asking them to move to you.”

I asked Feldman why the company was rebuilding so much of the hardware to power their system, rather than using already existing components. “If you were to build a Ferrari engine and put it in a Toyota, you cannot make a race car,” Feldman analogized. “Putting fast chips in Dell or [other] servers does not make fast compute. What it does is it moves the bottleneck.” Feldman explained that the CS-1 was meant to take the underlying WSE chip and give it the infrastructure required to allow it to perform to its full capability.

A diagram of the Cerebras CS-1 cooling system. Photo via Cerebras.

That infrastructure includes a high-performance water cooling system to keep this massive chip and platform operating at the right temperatures. I asked Feldman why Cerebras chose water, given that water cooling has traditionally been complicated in the data center. He said, “We looked at other technologies — freon. We looked at immersive solutions, we looked at phase-change solutions. And what we found was that water is extraordinary at moving heat.”

A side view of the CS-1 with its water and air cooling systems visible. Photo via Cerebras.

Why, then, make such a massive chip, which, as we discussed back in August, has huge engineering requirements to operate compared to smaller chips that get better yields from wafers? Feldman said that “it massively reduces communication time by using locality.”

In computer science, locality means placing data and compute in the right places within, let’s say, a cloud, to minimize delays and processing friction. By having a chip that can theoretically host an entire ML model on it, there’s no need for data to flow through multiple storage clusters or ethernet cables — everything the chip needs to work with is available almost immediately.

According to a statement from Cerebras and Argonne National Laboratory, Cerebras is helping to power research in “cancer, traumatic brain injury and many other areas important to society today” at the lab. Feldman said that “It was very satisfying that right away customers were using this for things that are important and not for 17-year-old girls to find each other on Instagram or some shit like that.”

(Of course, one hopes that cancer research pays as well as influencer marketing when it comes to the value of deep learning models).

Cerebras itself has grown rapidly, reaching 181 engineers today, according to the company. Feldman says the company is heads down on customer sales and additional product development.

It has certainly been a busy time for startups in the next-generation artificial intelligence workflow space. Graphcore just announced this weekend that it was being installed in Microsoft’s Azure cloud, while I covered the funding of NUVIA, a startup led by the former lead chip designers from Apple who hope to apply their mobile backgrounds to solve the extreme power requirements these AI chips force on data centers.

Expect ever more announcements and activity in this space as deep learning continues to find new adherents in the enterprise.


By Danny Crichton

Lawyers hate timekeeping. Ping raises $13M to fix it with AI

Counting billable time in six-minute increments is the most annoying part of being a lawyer. It’s a distracting waste. It leads law firms to conservatively under-bill. And it leaves lawyers stuck manually filling out timesheets after a long day, when they want to go home to their families.

Life is already short, as Ping CEO and co-founder Ryan Alshak knows too well. The former lawyer spent years caring for his mother as she battled a brain tumor before her passing. “One minute laughing with her was worth a million doing anything else,” he tells me. “I became obsessed with the idea that we spend too much of our lives on things we have no need to do — especially at work.”

That’s motivated him as he’s built his startup Ping, which uses artificial intelligence to automatically track lawyers’ work and fill out timesheets for them. There’s a massive opportunity to eliminate a core cause of burnout, lift law firm revenue by around 10%, and give them fresh insights into labor allocation.

Ping co-founder and CEO Ryan Alshak. Image Credit: Margot Duane

That’s why today Ping is announcing a $13.2 million Series A led by Upfront Ventures, along with BoxGroup, First Round, Initialized, and Ulu Ventures. Adding to Ping’s quiet $3.7 million seed led by First Round last year, the startup will spend the cash to scale up enterprise distribution and become the new timekeeping standard.

“I was a corporate litigator at Manatt Phelps down in LA and joke that I was voted the world’s worst timekeeper,” Alshak tells me. “I could either get better at doing something I dreaded or I could try and build technology that did it for me.”

The promise of eliminating the hassle could make any lawyer who hears about Ping an advocate for the firm buying the startup’s software, like how Dropbox grew as workers demanded easier file sharing. “I’ve experienced first-hand the grind of filling out timesheets,” writes Initialized partner and former attorney Alda Leu Dennis. “Ping takes away the drudgery of manual timekeeping and gives lawyers back all those precious hours.”

Traditionally, lawyers have to keep track of their time by themselves down to the tenth of an hour — reviewing documents for the Johnson case, preparing a motion to dismiss for the Lee case, a client phone call for the Sriram case. There are timesheets built into legal software suites like MyCase, legal billing software like Timesolv, and one-off tools like Time Miner and iTimeKeep. They typically offer timers that lawyers can manually start and stop on different devices, with some providing tracking of scheduled appointments, call and text logging, and integration with billing systems.

Ping goes a big step further. It uses AI and machine learning to figure out whether an activity is billable, for which client, a description of the activity, and its codification beyond just how long it lasted. Instead of merely filling in the minutes, it completes all the logs automatically with entries like “Writing up a deposition – Jenkins Case – 18 minutes”. Then it presents the timesheet to the user for review before they send it to billing.
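Ping’s actual system applies machine learning to activity signals; as a purely illustrative stand-in, a naive keyword heuristic that drafts entries in that shape might look like this (the matter names come from the examples above, and the keyword table is a toy assumption):

```python
# Toy sketch only: Ping uses ML, not hand-written keyword rules, and a
# substring match like this would misfire on real data.
CLIENT_KEYWORDS = {
    "Jenkins": ("deposition", "jenkins"),
    "Lee": ("motion to dismiss", "lee"),
}

def draft_entry(activity, minutes):
    """Turn a raw activity description into a draft billing-log line."""
    text = activity.lower()
    for client, words in CLIENT_KEYWORDS.items():
        if any(w in text for w in words):
            return f"{activity} - {client} Case - {minutes} minutes"
    return f"{activity} - Unassigned - {minutes} minutes"

print(draft_entry("Writing up a deposition", 18))
# → Writing up a deposition - Jenkins Case - 18 minutes
```

The hard part Ping is selling is everything this sketch skips: detecting the activity in the first place, judging billability, and describing the work in the firm’s own codes.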

The big challenge now for Alshak and the team he’s assembled is to grow up. They need to go from cat-in-sunglasses logo Ping to mature wordmark Ping. “We have to graduate from being a startup to being an enterprise software company,” the CEO tells me. That means learning to sell to C-suites and IT teams, rather than just building a solid product. In the relationship-driven world of law, that’s a very different skill set. Ping will have to convince clients it’s worth switching to, not just for the time savings and revenue boost, but for deep data on how they could run a more efficient firm.

Along the way, Ping has to avoid any embarrassing data breaches or concerns about how its scanning technology could violate attorney-client privilege. If it can win this lucrative first business in legal, it could barge into the consulting and accounting verticals next to grow truly huge.

With eager customers, a massive market, a weak status quo, and a driven founder, Ping just needs to avoid getting in over its head with all its new cash. Spent well, the startup could leap ahead of the less tech-savvy competition.

Alshak seems determined to get it right. “We have an opportunity to build a company that gives people back their most valuable resource — time — to spend more time with their loved ones because they spent less time working,” he tells me. “My mom will live forever because she taught me the value of time. I am deeply motivated to build something that lasts . . . and do so in her name.”


By Josh Constine

How Microsoft is trying to become more innovative

Microsoft Research is a globally distributed playground for people interested in solving fundamental science problems.

These projects often focus on machine learning and artificial intelligence, and since Microsoft is on a mission to infuse all of its products with more AI smarts, it’s no surprise that it’s also seeking ways to integrate Microsoft Research’s innovations into the rest of the company.

Across the board, the company is trying to find ways to become more innovative, especially around its work in AI, and it’s putting processes in place to do so. Microsoft is unusually open about this process, too, and actually made it somewhat of a focus this week at Ignite, a yearly conference that typically focuses more on technical IT management topics.

At Ignite, Microsoft will for the first time present these projects externally at a dedicated keynote. That feels similar to what Google used to do with its ATAP group at its I/O events and is obviously meant to showcase the cutting-edge innovation that happens inside of Microsoft (outside of making Excel smarter).

To manage its AI innovation efforts, Microsoft created the Microsoft AI group led by VP Mitra Azizirad, who’s tasked with establishing thought leadership in this space internally and externally, and helping the company itself innovate faster (Microsoft’s AI for Good projects also fall under this group’s purview). I sat down with Azizirad to get a better idea of what her team is doing and how she approaches getting companies to innovate around AI and bring research projects out of the lab.

“We began to put together a narrative for the company of what it really means to be in an AI-driven world and what we look at from a differentiated perspective,” Azizirad said. “What we’ve done in this area is something that has resonated and landed well. And now we’re including AI, but we’re expanding beyond it to other paradigm shifts like human-machine interaction, future of computing and digital responsibility, as more than just a set of principles and practices but an area of innovation in and of itself.”

Currently, Microsoft is doing a very good job at talking and thinking about horizon one opportunities, as well as horizon three projects that are still years out, she said. “Horizon two, we need to get better at, and that’s what we’re doing.”

It’s worth stressing that Microsoft AI, which launched about two years ago, marks the first time there’s a business, marketing and product management team associated with Microsoft Research, so the team does get a lot of insights into upcoming technologies. Just in the last couple of years, Microsoft has published more than 6,000 research papers on AI, some of which clearly have a future in the company’s products.


By Frederic Lardinois

Coveo raises $227M at $1B+ valuation for AI-based enterprise search and personalization

Search and personalization services continue to be a major area of investment among enterprises, both to make their products and services more discoverable (and used) by customers, and to help their own workers get their jobs done, with the market estimated to be worth some $100 billion annually. Today, one of the big startups building services in this area raised a large round of growth funding to continue tapping that opportunity. Coveo, a Canadian company that builds search and personalisation services powered by artificial intelligence — delivered to its enterprise customers as cloud-based software-as-a-service — has closed a $227 million round, which CEO Louis Tetu tells me values the company at “well above” $1 billion, “Canadian or US dollars.”

The round is being led by Omers Capital Private Growth Equity Group, the investing arm of the Canadian pensions giant that makes large, later-stage bets (the company has been stepping up the pace of investments lately), with participation also from Evergreen Coast Capital, FSTQ, and IQ Ventures. Evergreen led the company’s last round of $100 million in April 2018, and in total the company has now raised just over $402 million with this round.

The $1 billion+ valuation appears to be a huge leap in the context of Coveo’s funding history: in that last round, it had a post-money valuation of about $370 million, according to PitchBook data.

Part of the reason for that is because of Coveo’s business trajectory, and part is due to the heat of the overall market.

Coveo’s round is coming about two weeks after another company that builds enterprise search solutions, Algolia, raised $110 million. The two aim at slightly different ends of the market, Tetu tells me, not directly competing in terms of target customers, and even services. “Algolia is in a different zip code,” he said. Good thing, too, if that’s the case: Salesforce — which is one of Coveo’s biggest partners and customers — was also a strategic investor in the Algolia round. Even if these two do not compete, there are plenty of others vying for the same end of the enterprise search and personalization continuum — they include Google, Microsoft, Elastic, IBM, Lucidworks, and many more. That, again, underscores the size of the market opportunity.

In terms of Coveo’s own business, the company works with some 500 customers today and says SaaS subscription revenues grew more than 55 percent year-over-year this year. Five hundred may sound like a small number, but it covers a lot of very large enterprises spanning web-facing businesses, commerce-based organizations, service-facing companies, and enterprise solutions.

In addition to Salesforce, it includes Visa, Tableau (also Salesforce now!), Honeywell, a Fortune 50 healthcare company (whose name is not getting disclosed), and what Tetu described to me as an Amazon competitor that does $21 billion in sales annually but doesn’t want to be named.

Coveo’s basic selling point is that the better discoverability and personalization it provides helps its customers avoid call-center interactions (reducing operating expenditures), improve sales (boosting conversions and reducing cart abandonment), and simply work faster.

“We believe that Coveo is the market leader in leveraging data and AI to personalize at scale,” said Mark Shulgan, Managing Director and Head of Growth Equity at Omers, in a statement. “Coveo fits our investment thesis precisely: an A-plus leadership team with deep expertise in enterprise SaaS, a Fortune 1000 customer base who deeply love the product, and a track record of high growth in a market worth over $100 billion. This makes Coveo a highly-coveted asset. We are glad to be partnering to scale this business.”

Alongside business development under its own steam, the company is going to be using this funding for acquisitions. Tetu notes that Coveo still has a lot of money in the bank from previous rounds.

“We are a real company with real positive economics,” he said. “This round is mostly to have dry powder to invest in a way that is commensurate in the AI space, and within commerce in particular.” To get the ball rolling on that, this past July, Coveo acquired Tooso, a specialist in AI-based digital commerce technology.


By Ingrid Lunden

Microsoft’s Azure Synapse Analytics bridges the gap between data lakes and warehouses

At its annual Ignite conference in Orlando, Fla., Microsoft today announced a major new Azure service for enterprises: Azure Synapse Analytics, which Microsoft describes as “the next evolution of Azure SQL Data Warehouse.” Like SQL Data Warehouse, it aims to bridge the gap between data warehouses and data lakes, which are often completely separate. Synapse also taps into a wide variety of other Microsoft services, including Power BI and Azure Machine Learning, as well as a partner ecosystem that includes Databricks, Informatica, Accenture, Talend, Attunity, Pragmatic Works and Adatis. It’s also integrated with Apache Spark.

The idea here is that Synapse allows anybody working with data in those disparate places to manage and analyze it from within a single service. It can be used to analyze relational and unstructured data, using standard SQL.
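As a sketch of what “standard SQL across a warehouse and a lake” can look like in practice: Synapse exposes a SQL endpoint, and its SQL dialect can read raw files in a data lake via `OPENROWSET` and join them against warehouse tables in one query. The table name, lake path, and connection details below are hypothetical, for illustration only.

```python
# Illustrative sketch only: the table name, lake path, and connection string
# are hypothetical. It shows the shape of a single standard-SQL query that
# spans a warehouse table and raw Parquet files sitting in a data lake.

def lake_and_warehouse_query(lake_path: str, warehouse_table: str) -> str:
    """Build one SQL query joining a warehouse table with data-lake files."""
    return f"""
    SELECT w.customer_id, w.region, COUNT(*) AS click_events
    FROM {warehouse_table} AS w
    JOIN OPENROWSET(              -- reads raw Parquet straight from the lake
        BULK '{lake_path}',
        FORMAT = 'PARQUET'
    ) AS lake_clicks
      ON lake_clicks.customer_id = w.customer_id
    GROUP BY w.customer_id, w.region
    """

query = lake_and_warehouse_query(
    "https://mylake.dfs.core.windows.net/clicks/*.parquet",  # hypothetical path
    "dbo.Customers",                                         # hypothetical table
)
# The query could then be submitted through any SQL Server driver, e.g. pyodbc:
# conn = pyodbc.connect(synapse_connection_string)
# rows = conn.execute(query).fetchall()
```

The point of the sketch is that the analyst writes one query and one connection string; where each side of the join physically lives is Synapse’s problem.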


Microsoft also highlights Synapse’s integration with Power BI, its easy-to-use business intelligence and reporting tool, as well as Azure Machine Learning for building models.

With the Azure Synapse studio, the service provides data professionals with a single workspace for prepping and managing their data, as well as for their big data and AI tasks. There’s also a code-free environment for managing data pipelines.

As Microsoft stresses, businesses that want to adopt Synapse can continue to use their existing workloads in production with Synapse and automatically get all of the benefits of the service. “Businesses can put their data to work much more quickly, productively, and securely, pulling together insights from all data sources, data warehouses, and big data analytics systems,” writes Microsoft CVP of Azure Data, Rohan Kumar.

In a demo at Ignite, Kumar also benchmarked Synapse against Google’s BigQuery. Synapse ran the same query over a petabyte of data in 75% less time. He also noted that Synapse can handle thousands of concurrent users — unlike some of Microsoft’s competitors.


By Frederic Lardinois

Microsoft launches Power Virtual Agents, its no-code bot builder

Microsoft today announced the public preview of Power Virtual Agents, a new no-code tool for building chatbots. It is part of the company’s Power Platform, which also includes the Microsoft Flow automation tool (being renamed Power Automate today) and Power BI.

Built on top of Azure’s existing AI smarts and tools for building bots, Power Virtual Agents promises to make building a chatbot almost as easy as writing a Word document. With this, anybody within an organization could build a bot that, for example, walks a new employee through the onboarding experience.

“Power virtual agent is the newest addition to the Power Platform family,” said Microsoft’s Charles Lamanna in an interview ahead of today’s announcement. “Power Virtual Agent is very much focused on the same type of low code, accessible to anybody, no matter whether they’re a business user or business analyst or professional developer, to go build a conversational agent that’s AI-driven and can actually solve problems for your employees, for your customers, for your partners, in a very natural way.”

Power Virtual Agents handles the full lifecycle of the bot building experience, from the creation of the dialog to making it available in chat systems that include Teams, Slack, Facebook Messenger and others. Using Microsoft’s AI smarts, users don’t have to spend a lot of time defining every possible question and answer, but can instead rely on the tool to understand intentions and trigger the right action. “We do intent understanding, as well as entity extraction, to go and find the best topic for you to go down,” explained Lamanna. Like similar AI systems, the service also learns over time, based on feedback it receives from users.
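The two steps Lamanna names — intent understanding and entity extraction — can be illustrated with a toy sketch. Power Virtual Agents uses trained language models under the hood; the keyword-overlap matcher and regex below are only a minimal stand-in for the idea, and the topic names are hypothetical.

```python
import re
from typing import Optional

# Toy illustration of "intent understanding" (pick the best topic for an
# utterance) and "entity extraction" (pull structured values out of it).
# Real systems use trained models; this keyword-overlap scorer just shows
# the shape of the problem. Topic names are made up for the example.

TOPICS = {
    "reset_password": {"reset", "password", "forgot", "login"},
    "book_meeting_room": {"book", "reserve", "meeting", "room"},
    "order_equipment": {"laptop", "monitor", "keyboard", "order"},
}

def best_topic(utterance: str) -> str:
    """Pick the topic whose keyword set overlaps the user's words the most."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    return max(TOPICS, key=lambda topic: len(TOPICS[topic] & words))

def extract_date(utterance: str) -> Optional[str]:
    """Extract a simple ISO-style date entity like '2019-11-04', if present."""
    match = re.search(r"\d{4}-\d{2}-\d{2}", utterance)
    return match.group() if match else None

print(best_topic("I forgot my password and can't log in"))    # reset_password
print(extract_date("reserve a meeting room for 2019-11-04"))  # 2019-11-04
```

In the real product, the matched topic would then trigger the dialog the bot builder authored for it, and extracted entities (dates, names, order numbers) would be slotted into that dialog rather than re-asked.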

One nice feature here is that if your setup outgrows the no-code/low-code stage and you need to get to the actual code, you’ll be able to convert the bot to Azure resources since that’s what’s powering the bot anyway. Once you’ve edited the code, you obviously can’t take it back into the no-code environment. “We have an expression for Power Platform, which is ‘no cliffs.’ […] The idea of ‘no cliffs’ is that the most common problem with a low-code platform is that, at some point, you want more control, you want code. And that’s frequently where low-code platforms run out of gas and you really have issues because you can’t have the pro dev take it over, you can’t make it mission-critical.”

The service is also integrated with tools like Power Automate/Microsoft Flow to allow users to trigger actions on other services based on the information the chatbot gathers.

Lamanna stressed that the service also generates lots of advanced analytics for those who are building bots with it. With this, users can see what topics are being asked about and where the system fails to provide answers, for example. It also visualizes the different text inputs that people provide so that bot builders can react to that.

Over the course of the last two or three years, we went from a lot of hype around chatbots to deep disillusionment with the experience they actually delivered. Lamanna isn’t fazed by that. In part, those earlier efforts failed because the developers weren’t close enough to the users. They weren’t product experts or part of the HR team inside a company. By using a low-code/no-code tool, he argues, the actual topic experts can build these bots. “If you hand it over to a developer or an AI specialist, they’re geniuses when it comes to developing code, but they won’t know the details and ins and outs of, say, the shoe business – and vice versa. So it actually changes how development happens.”


By Frederic Lardinois

Cortana wants to be your personal executive assistant and read your emails to you, too

Only a few years ago, Microsoft hoped that Cortana could become a viable competitor to the Google Assistant, Alexa and Siri. Over time, as Cortana failed to make a dent in the marketplace (do you even remember that Cortana is built into your Windows 10 machine?), the company’s ambitions shrank a bit. Today, Microsoft wants Cortana to be your personal productivity assistant — and to be fair, given the overall Microsoft ecosystem, Cortana may be better suited to that than to telling you about the weather.

At its Ignite conference, Microsoft today announced a number of new features that help Cortana to become even more useful in your day-to-day work, all of which fit into the company’s overall vision of AI as a tool that is helpful and augments human intelligence.


The first of these is a new feature in Outlook for iOS that uses Microsoft’s text-to-speech technology to read your emails to you (using either a male or a female voice). Cortana can also now help you schedule meetings and coordinate participants, something the company first demoed at previous conferences.

Starting next month, Cortana will also be able to send you a daily email that summarizes all of your meetings, presents you with relevant documents and reminders to “follow up on commitments you’ve made in email.” This last part, especially, should be interesting as it seems to go beyond the basic (and annoying) nudges to reply to emails in Google’s Gmail.



By Frederic Lardinois

Google launches TensorFlow Enterprise with long-term support and managed services

Google open-sourced its TensorFlow machine learning framework back in 2015 and it quickly became one of the most popular platforms of its kind. Enterprises that wanted to use it, however, had to either work with third parties or do it themselves. To help these companies — and capture some of this lucrative market itself — Google is launching TensorFlow Enterprise, which includes hands-on, enterprise-grade support and optimized managed services on Google Cloud.

One of the most important features of TensorFlow Enterprise is that it will offer long-term support. For some versions of the framework, Google will offer patches for up to three years. For what looks to be an additional fee, Google will also offer engineering assistance from its Google Cloud and TensorFlow teams to companies that are building AI models.
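In operational terms, long-term support usually cashes out as pinning to a supported release line and failing fast when an environment drifts off it. The sketch below is illustrative only: the set of supported release lines is hypothetical (TensorFlow 1.15 was the series Google named at launch; any others here are assumptions), and the check itself is plain version-string parsing, not a Google API.

```python
# A minimal sketch of what "long-term support" means in practice: pin to a
# supported release series and fail fast if the environment drifts off it.
# The set of supported lines below is illustrative, not an official list.

LTS_SERIES = {"1.15"}  # hypothetical set of long-term-supported release lines

def release_series(version: str) -> str:
    """'1.15.2' -> '1.15': the major.minor line that LTS patches target."""
    major, minor, *_rest = version.split(".")
    return f"{major}.{minor}"

def check_lts(version: str) -> bool:
    """Return True if this TensorFlow build sits on a supported line."""
    return release_series(version) in LTS_SERIES

# In a real deployment this would read the installed framework's version:
# import tensorflow as tf; assert check_lts(tf.__version__)
print(check_lts("1.15.2"))  # True
print(check_lts("2.0.0"))   # False
```

The three-year patch window then applies to the whole series, so `1.15.2`, `1.15.3`, and so on all pass the same gate.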

All of this, of course, is deeply integrated with Google’s own cloud services. “Because Google created and open-sourced TensorFlow, Google Cloud is uniquely positioned to offer support and insights directly from the TensorFlow team itself,” the company writes in today’s announcement. “Combined with our deep expertise in AI and machine learning, this makes TensorFlow Enterprise the best way to run TensorFlow.”

Google also includes Deep Learning VMs and Deep Learning Containers to make getting started with TensorFlow easier and the company has optimized the enterprise version for Nvidia GPUs and Google’s own Cloud TPUs.

Today’s launch is yet another example of Google Cloud’s focus on enterprises, a move the company accelerated when it hired Thomas Kurian to run the Cloud businesses. After years of mostly ignoring the enterprise, the company is now clearly looking at what enterprises are struggling with and how it can adapt its products for them.


By Frederic Lardinois

Aurora Insight emerges from stealth with $18M and a new take on measuring wireless spectrum

Aurora Insight, a startup that builds and monitors in real time a “dynamic” global map of wireless connectivity, using AI combined with data from sensors on satellites, vehicles, buildings, aircraft and other objects, is emerging from stealth today with the launch of its first publicly available product: a cloud-based, data-as-a-service platform providing insights on wireless signal strength and quality across a range of spectrum bands.

“Our objective is to map the entire planet, charting the radio waves used for communications,” said Brian Mengwasser, the co-founder and CEO. “It’s a daunting task.” He said that to do this the company first “built a bunker” to test the system before rolling it out at scale.

With it, Aurora Insight is also announcing that it has raised $18 million in funding — an aggregate amount that reaches back to its founding in 2016 and covers both a seed round and a Series A — from an impressive list of investors. The funding was led by Alsop Louie Partners and True Ventures; backers also include Tippet Venture Partners, Revolution’s Rise of the Rest Seed Fund, Promus Ventures, Alumni Ventures Group, ValueStream Ventures, and Intellectus Partners.

The area of measuring wireless spectrum and figuring out where it might not be working well (in order to fix it) may sound like an arcane area, but it’s a fairly essential one.

Mobile technology — specifically, new devices and the use of wireless networks to connect people, objects and services — continues to be the defining activity of our time, with more than 5 billion mobile users on the planet (out of 7.5 billion people) today and the proportion continuing to grow. With that, we’re seeing a big spike in mobile internet usage, too, with more than 5 billion people, and 25.2 billion objects, expected to be using mobile data by 2025, according to the GSMA.

The catch to all this is that wireless spectrum — which enables the operation of mobile services — is inherently finite, and its reliability is subject to interference. That in turn creates a need for a better way of measuring how it is working, and of fixing it when it is not.

“Wireless spectrum is one of the most critical and valuable parts of the communications ecosystem worldwide,” said Rohit Sharma, partner at True Ventures and Aurora Insight board member, in a statement. “To date, it’s been a massive challenge to accurately measure and dynamically monitor the wireless spectrum in a way that enables the best use of this scarce commodity. Aurora’s proprietary approach gives businesses a unique way to analyze, predict, and rapidly enable the next-generation of wireless-enabled applications.”

If you follow the world of wireless technology and telcos, you’ll know that wireless network testing and measurement is an established field, about as old as the existence of wireless networks themselves (which says something about the general reliability of wireless networks). Aurora aims to disrupt this on a number of levels.

Mengwasser — who co-founded the company with CTO Jennifer Alvarez — tells me that a lot of the traditional testing and measurement has been geared at telecoms operators, who own the radio towers and tend to focus on narrower bands of spectrum and technologies.

The rise of 5G and other wireless technologies, however, has come with a completely new playing field and a new set of challenges for the industry.

Essentially, we are now in a market where a number of different technologies coexist: alongside 5G there are earlier network technologies (4G, LTE, Wi-Fi) and a potential set of new ones. And a new breed of companies is building services that need close knowledge of how networks are performing in order to make sure they remain up and reliable.

Mengwasser said Aurora is currently one of the few trying to tackle this opportunity by developing a network that measures multiple kinds of spectrum simultaneously, and it aims to provide that information not just to telcos (some of whom have been working with Aurora while still in stealth) but also to the other kinds of application and service developers building businesses on those new networks.

“There is a pretty big difference between us and performance measurement, which typically operates from the back of a phone and tells you what happens when you have a phone in a particular location,” he said. “We care about more than this, more than just phones, but all smart devices. Eventually, everything will be connected to the network, so we are aiming to provide intelligence on that.”

One example is drone operators who are building delivery networks: Aurora has been working with at least one while in stealth to help develop a service, Mengwasser said, although he declined to say which one. (He also, incidentally, specifically declined to say whether the company had talked with Amazon.)

5G is a particularly tricky area of mobile network spectrum and services to monitor and tackle, one reason why Aurora Insight has caught the attention of investors.

“The reality of massive MIMO beamforming, high frequencies, and dynamic access techniques employed by 5G networks means it’s both more difficult and more important to quantify the radio spectrum,” said Gilman Louie of Alsop Louie Partners, in a statement. “Having the accurate and near-real-time feedback on the radio spectrum that Aurora’s technology offers could be the difference between building a 5G network right the first time, or having to build it twice.” Louie is also sitting on the board of the startup.


By Ingrid Lunden