Cortana wants to be your personal executive assistant and read your emails to you, too

Only a few years ago, Microsoft hoped that Cortana could become a viable competitor to the Google Assistant, Alexa and Siri. Over time, as Cortana failed to make a dent in the marketplace (do you even remember that Cortana is built into your Windows 10 machine?), the company’s ambitions shrank a bit. Today, Microsoft wants Cortana to be your personal productivity assistant — and to be fair, given the overall Microsoft ecosystem, Cortana may be better suited to that than to telling you about the weather.

At its Ignite conference, Microsoft today announced a number of new features that make Cortana even more useful in your day-to-day work, all of which fit into the company’s overall vision of AI as a helpful tool that augments human intelligence.


The first of these is a new feature in Outlook for iOS that uses Microsoft text-to-speech technology to read your emails to you (with a choice of male or female voices). Cortana can also now help you schedule meetings and coordinate participants, something the company first demoed at previous conferences.

Starting next month, Cortana will also be able to send you a daily email that summarizes all of your meetings, presents you with relevant documents and reminders to “follow up on commitments you’ve made in email.” This last part, especially, should be interesting as it seems to go beyond the basic (and annoying) nudges to reply to emails in Google’s Gmail.



By Frederic Lardinois

Amazon adds Hindi to the Alexa Skills Kit

Users of Amazon’s voice assistant will soon be able to talk to Alexa in Hindi. Amazon announced today that it has added a Hindi voice model to its Alexa Skills Kit for developers. Alexa developers can also update their existing published skills in India for Hindi.

Amazon first revealed that it would add fluent Hindi to Alexa last month during its re:MARS machine learning and artificial intelligence conference. Previously, Alexa could understand only a few Hinglish (a portmanteau of Hindi and English) commands. Rohit Prasad, vice president and head scientist for Alexa, told Indian news agency IANS that adding Hindi to Alexa posed a “contextual, cultural as well as content-related challenge” because of the wide variety of dialects, accents and slang used in India.

Along with English, Hindi is one of India’s official languages (the Google Assistant also offers Hindi support). According to Citi Research, Amazon holds about a 30 percent share of India’s e-commerce market, about the same as its main competitor, Walmart-backed Flipkart.


By Catherine Shu

Alexa for Business opens up to third-party device makers

Last year, Amazon announced a new initiative, Alexa for Business, designed to introduce its voice assistant technology and Echo devices into a corporate setting. Today, it’s giving the platform a big upgrade by opening it up to device makers building their own solutions with Alexa built in.

The change came about based on feedback from organizations already using Alexa for Business, Amazon says. The company claims thousands of businesses have added an Amazon Echo alongside their existing office equipment since the program’s debut last year, including companies like Express Trucking, Fender and Propel Insurance.

But it heard from businesses that they want to have Alexa built in to existing devices, to minimize the amount of technology they need to manage and monitor.

With the update, device makers building with the Alexa Voice Service (AVS) SDK can now create products that can be registered with Alexa for Business and managed as shared devices across an organization.

The device management capabilities include the ability to configure settings like room designation and location, monitor device health, and manage which public and private skills are assigned to the shared devices.

A part of Alexa for Business is the ability for organizations to create their own internal – and practical – skills for a business setting, like voice search for employee directories, Salesforce data, or company calendar information.

Amazon also recently launched a feature that lets Alexa for Business users book conference rooms.

Amazon says it’s already working with several brands on integrating Alexa into their own devices including Plantronics, iHome, and BlackBerry. And it’s working with solution providers like Linkplay and Extron, it says. (Citrix has also begun to integrate with the ‘for Business’ platform.)

“We’ve been using Alexa for Business since its launch by pairing Echo devices with existing Polycom equipment,” noted Laura Marx, VP of Alliance Marketing at Plantronics, in a statement about its plans to make equipment that works with Alexa. “Integrating those experiences directly into products like Polycom Trio will take our customer experience to the next level of convenience and ease of use,” she said.

Plantronics provided an early look at the Alexa experience earlier this year, and iHome has an existing device with Alexa built-in – the iAVS16. However, it has not yet announced which product will be offered through Alexa for Business.

It’s still too soon to see how well any of Amazon’s business initiatives with Alexa pay off – after all, Echo devices today are often used for consumer-oriented purposes like playing music, getting news and information, setting kitchen timers, and making shopping lists. But if Amazon is able to penetrate businesses with Echo speakers and other Alexa-powered business equipment, it could make inroads into a profitable voice market, beyond the smart home.

But not everyone believes Alexa in the workplace is a good idea. Hackers envision how the devices could be used for corporate espionage and hacks, and warn that companies with trade secrets shouldn’t have listening devices set around their offices.

Amazon, however, is plowing ahead. It has even integrated with Microsoft’s Cortana so Alexa can gain access to Cortana’s productivity features, like calendar management, day at a glance, and customer email.

The Alexa for Business capabilities are provided as an extension to the AVS Device SDK, starting with version 1.10, available to download from GitHub.

 


By Sarah Perez

SoundHound has raised a big $100M round to take on Alexa and Google Assistant

As SoundHound looks to leverage its ten-plus years of experience and data to create a voice recognition tool that companies can bake into any platform, it’s raising another big $100 million round of funding to position its Houndify platform as a neutral third option alongside Alexa and the Google Assistant.

While Amazon works to get developers to adopt Alexa, SoundHound has been collecting data since it started as an early mobile app for iPhone and Android devices. That has given it more than a decade of data to work with as it builds a robust audio recognition engine and ties it into a system of queries and actions it can match to those sounds. The result was always a better SoundHound app, but the company has increasingly opened that technology up to developers, pitching it as more powerful (and accurate) than the other voice assistants on the market in hopes of getting them to use it in their services.

“We launched [Houndify] before Google and Amazon,” CEO Keyvan Mohajer said. “Obviously, good ideas get copied, and Google and Amazon have copied us. Amazon has the Alexa fund to invest in smaller companies and bribe them to adopt the Alexa Platform. Our reaction to that was, we can’t give $100 million away, so we came up with a strategy which was the reverse. Instead of us investing in smaller companies, let’s go after big successful companies that will invest in us to accelerate Houndify. We think it’s a good strategy. Amazon would be betting on companies that are not yet successful, we would bet on companies that are already successful.”

This round is all coming in from strategic investors. Part of the reason is that taking on these strategic investments allows SoundHound to capture important partnerships that it can leverage to get wider adoption for its technology. The companies investing, too, have a stake in SoundHound’s success and will want to see its technology deployed wherever possible. The new strategic investors include Tencent Holdings Limited, Daimler AG, Hyundai Motor Company, Midea Group, and Orange S.A. SoundHound’s existing strategic investors include Samsung, NVIDIA, KT Corporation, HTC, Naver, LINE, Nomura, Sompo, and Recruit. It’s a ridiculously long list, but again, the company is trying to get that technology baked in wherever it can.

So it’s pretty easy to see what SoundHound is going to get out of this: access to China through partners, deeper integration into cars, as well as increased expansion to other avenues through all of its investors. Mohajer said the company could try to get into China on its own (or ignore it altogether), but there has been a very limited number of companies that have had any success there whatsoever. Google and Facebook, two of the largest technology companies in the world, are not on that list of successes.

“China is a very important market, it’s very big and has a lot of potential, and it’s growing,” Mohajer said. “You can go to Canada without having to rethink a big strategy, but China is so different. We saw even companies like Google and Facebook tried to do that and didn’t succeed. When those bigger companies didn’t succeed, it was a signal to us that strategy wouldn’t work. [Tencent] was looking at the space and they saw we have the best technology in the world. They appreciated it and were respectful, they helped us get there. We looked at so many partners and [Tencent and Midea Group] were the ones that worked out.”

The idea here is that developers in all sorts of different markets — whether that’s cars or apps — will want to have some element of voice interaction. SoundHound is betting that companies like Daimler will want to control the experience in their cars, and not be saying “Alexa” whenever they want to make a request while driving. Instead, it may come down to something as simple as a wake word that could change the entire user experience, and that’s why SoundHound is pitching Houndify as a flexible and customizable option that isn’t demanding a brand on top of it.
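That wake-word point can be made concrete with a toy sketch. This is illustrative Python only, not SoundHound’s actual API: the `make_wake_word_detector` name and the “Hey Mercedes” phrase are assumptions used to show how a brand-configurable trigger phrase might replace a fixed “Alexa.”

```python
# Illustrative sketch of a brand-customizable wake word.
# Not SoundHound's API; names and phrases are hypothetical.

def make_wake_word_detector(wake_word: str):
    """Return a function that checks whether a transcribed
    utterance begins with the brand's chosen wake word."""
    normalized = wake_word.strip().lower()

    def detect(transcript: str) -> bool:
        # Case-insensitive prefix match on the transcript.
        return transcript.strip().lower().startswith(normalized)

    return detect

# A carmaker could ship its own branding instead of "Alexa":
detect = make_wake_word_detector("Hey Mercedes")
detect("Hey Mercedes, navigate home")  # → True
detect("Alexa, navigate home")         # → False
```

In a real system the detection runs on streaming audio rather than text, but the design point is the same: the trigger phrase is a configurable parameter of the platform, not a fixed brand name.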

SoundHound still does have its stable of apps. The original SoundHound app is around, though those features are also baked into Hound, its main consumer app. That is more of a personal assistant-style voice recognition service where you can string together a sentence of as many as a dozen parameters and get a decent search result back. It’s more of a party trick than anything else, but it is a good demonstration of the technical capabilities SoundHound has as it looks to embed that software into lots of different pieces of hardware and software.

SoundHound may have raised a big round with a fresh set of strategic partners, but that certainly doesn’t mean it’s a surefire bet. Amazon is, after all, one of the most valuable companies in the world and Alexa has proven to be a very popular platform, even if it’s mostly for nominal requests and listening to music (and party tricks) at this point. SoundHound is going to have to convince companies — small and large — to bake in its tools, rather than go with massive competitors like Amazon with pockets deep enough to buy a whole grocery chain.

“We think every company is going to need to have a strategy in voice AI, just like ten years ago everyone needed a mobile strategy,” Mohajer said. “Everyone should think about it. There aren’t many providers, mainly because it takes a long time to build the core technology. It took us 12 years. To Houndify everything we need to be global, we need to support all the main languages and regions in the world. We built the technology to be language independent, but there’s a lot of resources and execution involved.”


By Matthew Lynley

Suki raises $20M to create a voice assistant for doctors

When trying to figure out what to do after an extensive career at Google, Motorola, and Flipkart, Punit Soni spent a lot of time sitting in doctors’ offices.

It was there that Soni said he identified one of the most annoying pain points for doctors in any office: writing down notes and documentation. That’s why he decided to start Suki — previously Robin AI — to create a way for doctors to simply talk aloud to take notes when working with patients, rather than having to put everything into a medical record system or write those notes down by hand. That seemed like the lowest-hanging fruit: an opportunity to make life significantly easier for doctors who see dozens of patients, he said.

“We decided we had found a powerful constituency who were burning out because of just documentation,” Soni said. “They have underlying EMR systems that are much older in design. The solution aligns with the commoditization of voice and machine learning. If you put it all together, if we can build a system for doctors and allow doctors to use it in a relatively easy way, they’ll use it to document all the interactions they do with patients. If you have access to all data right from a horse’s mouth, you can use that to solve all the other problems on the health stack.”

The company said it has raised a $15 million funding round led by Venrock, with First Round, Social+Capital, Nat Turner of Flatiron Health, Marc Benioff, and other individual Googlers and angels participating. Venrock also previously led a $5 million seed financing round, bringing the company’s total funding to around $20 million. It’s also changing its name from Robin AI to Suki, though the reason is actually a pretty simple one: “Suki” is a better wake word for a voice assistant than “Robin,” because odds are there’s someone named Robin in the office.

The challenge for a company like Suki is not actually the voice recognition part. Indeed, that’s why Soni said they are actually starting a company like this today: voice recognition is commoditized. Trying to start a company like Suki four years ago would have meant having to build that kind of technology from scratch, but thanks to incredible advances in machine learning over just the past few years, startups can quickly move on to the core business problems they hope to solve rather than focusing on early technical challenges.

Instead, Suki’s problem is one of understanding language. It has to ingest everything a doctor is saying, parse it, and figure out what goes where in a patient’s documentation. That problem is even more complex because each doctor has a different way of documenting their work with a patient, meaning Suki has to take extra care in building a system that can scale to any number of doctors. As with any company, the more data it collects over time, the better those results get — and the more defensible the business becomes, because better data makes for a better, harder-to-displace product.
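The parsing problem described above can be sketched in miniature. This is a toy illustration only — Suki’s actual language understanding is far more sophisticated — and the section names, trigger phrases, and function names here are assumptions made for the sketch, not Suki’s design.

```python
# Toy sketch: routing a doctor's dictated sentences into note
# sections by trigger phrase. Hypothetical; not Suki's system.

SECTION_TRIGGERS = {
    "prescribe": "medications",
    "follow-up": "plan",
    "follow up": "plan",
    "document": "assessment",
}

def route_dictation(utterances):
    """Assign each dictated sentence to a note section based on
    the first trigger phrase it contains; unmatched sentences go
    into a default 'notes' bucket."""
    note = {}
    for sentence in utterances:
        lowered = sentence.lower()
        section = "notes"  # default when no trigger matches
        for trigger, target in SECTION_TRIGGERS.items():
            if trigger in lowered:
                section = target
                break
        note.setdefault(section, []).append(sentence)
    return note

note = route_dictation([
    "Document mild seasonal allergies.",
    "Prescribe 10mg loratadine daily.",
    "Schedule a follow-up visit in two weeks.",
])
```

The hard part, and the part a keyword table like this cannot do, is handling each doctor’s individual phrasing and implicit context, which is why the data advantage the article describes matters.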

“Whether you bring up the iOS app or want to bring it in a website, doctors have it in the exam room,” Soni said. “You can say, ‘Suki, make sure you document this, prescribe this drug, and make sure this person comes back to me for a follow-up visit.’ It takes all that, it captures it into a clinically comprehensive note and then pushes it to the underlying electronic medical record. [Those EMRs] are the system of record, it is not our job to day-one replace these guys. Our job is to make sure doctors and the burnout they are having is relieved.”

Given that voice recognition is commoditized, there will likely be others looking to build a scribe for doctors as well. There are startups like Saykara looking to do something similar, and in these situations it often seems like the companies that are able to capture the most data first are able to become the market leaders. And there’s also a chance that a larger company — like Amazon, which has made its interest in healthcare already known — may step in with its comprehensive understanding of language and find its way into the doctors’ office. Over time, Soni hopes that as it gets more and more data, Suki can become more intelligent and more than just a simple transcription service.

“You can see this arc where you’re going from an Alexa, to a smarter form of a digital assistant, to a device that’s a little bit like a chief resident of a doctor,” Soni said. “You’ll be able to say things like, ‘Suki, pay attention,’ and all it needs to do is listen to your conversation with the patient. I’m not building a medical transcription company. I’m basically trying to build a digital assistant for doctors.”


By Matthew Lynley

Apple, in a very Apple move, is reportedly working on its own Mac chips

Apple is planning to use its own chips for its Mac devices, which could replace the Intel chips currently running on its desktop and laptop hardware, according to a report from Bloomberg.

Apple already designs a lot of custom silicon, including the W-series chips for its Bluetooth headphones, the S-series in its watches, its A-series iPhone chips, and a customized GPU for the new iPhones. In that sense, Apple has in a lot of ways built its own internal fabless chip firm, which makes sense as it looks for its devices to tackle more and more specific use cases and to reduce its reliance on third parties for components. Apple is already in the middle of a very public spat with Qualcomm over royalties, and while the Mac is sort of a tertiary product in its lineup, it still contributes a significant portion of revenue to the company.

Creating an entire suite of custom silicon could do a lot of things for Apple, not least of which is bringing the Mac into a system where its devices can talk to each other more efficiently. Apple already has a lot of tools to shift user activities between all its devices, but making that more seamless makes it easier to lock users into the Apple ecosystem. If you’ve ever compared connecting headphones with a W1 chip to an iPhone against pairing typical Bluetooth headphones, you’ve probably seen the difference, and that could become even more pronounced with Apple’s own chipset. Bloomberg reports that Apple may implement the chips as soon as 2020.

Intel may be the clear loser here, and the market is reflecting that: Intel’s stock is down nearly 8% since the report came out, as the move would be a clear shift away from the architecture on which the Mac has long been built. Apple is not the only company looking to design its own silicon, either; Amazon is looking into building its own AI chips for Alexa, another move to create lock-in for its ecosystem. And while the biggest players examine their own architectures, an entire crop of startups is raising significant funding to build custom silicon geared toward AI.

Apple declined to comment.