AI startup Sorcero secures $10M for language intelligence platform

Sorcero announced Thursday a $10 million Series A round of funding to continue scaling its medical and technical language intelligence platform.

The latest funding round comes as the company, headquartered in Washington, D.C. and Cambridge, Massachusetts, sees increased demand for its advanced analytics from life sciences and technical companies. Sorcero’s natural language processing platform makes it easier for subject-matter experts to find answers to their questions to aid in better decision making.

CityRock Venture Partners, the growth fund of H/L Ventures, led the round and was joined by new investors Harmonix Fund, Rackhouse, Mighty Capital and Leawood VC, as well as existing investors, Castor Ventures and WorldQuant Ventures. The new investment gives Sorcero a total of $15.7 million in funding since it was founded in 2018.

Dipanwita Das, Sorcero’s co-founder and CEO, told TechCrunch that prior to starting the company she was working in public policy, a field where scientific content is useful but often a source of confusion and burden. She thought there had to be a more effective way to make better decisions across the healthcare value chain. That’s when she met co-founders Walter Bender and Richard Graves and started Sorcero.

“Everything is in service of subject-matter experts being faster, better and less prone to errors,” Das said. “Advances of deep learning with accuracy add a lot of transparency. We are used by science affairs and regulatory teams whose job it is to collect scientific data and effectively communicate it to a variety of stakeholders.”

The total addressable market for language intelligence is big — Das estimated it to be $42 billion just for the life sciences sector. Due to the demand, the co-founders have seen the company grow at 324% year over year since 2020, she added.

Raising a Series A enables the company to serve more customers across the life sciences sector. The company will invest in talent on both the engineering and commercial sides. It will also put some funds into Sorcero’s go-to-market strategy to go after other use cases.

In the next 12 to 18 months, a big focus for the company will be scaling into product-market fit in the medical affairs and regulatory space and closing new partnerships.

Oliver Libby, partner at CityRock Venture Partners, said Sorcero’s platform “provides the rails for AI solutions for companies” that have traditionally run into issues with AI technologies as they try to integrate existing data sets in order to run analysis effectively on top of them.

Rather than have to build custom technology and connectors, Sorcero is “revolutionizing it, reducing time and increasing accuracy,” and if AI is to have a future, it needs a universal translator that plugs into everything, he said.

“One of the hallmarks in the response to COVID was how quickly the scientific community had to do revolutionary things,” Libby added. “The time to vaccine was almost a miracle of modern science. One of the first things they did was track medical resources and turn them into a hook for pharmaceutical companies. There couldn’t have been a better use case for Sorcero than COVID.”

 


By Christine Hall

How artificial intelligence will be used in 2021

Scale AI CEO Alexandr Wang doesn’t need a crystal ball to see where artificial intelligence will be used in the future. He just looks at his customer list.

The four-year-old startup, which recently hit a valuation of more than $3.5 billion, got its start supplying autonomous vehicle companies with the labeled data needed to train machine learning models to develop and eventually commercialize robotaxis, self-driving trucks and automated bots used in warehouses and on-demand delivery.

The wider adoption of AI across industries has been a bit of a slow burn over the past several years as company founders and executives began to understand what the technology could do for their businesses.

In 2020, that changed as e-commerce, enterprise automation, government, insurance, real estate and robotics companies turned to Scale’s visual data labeling platform to develop and apply artificial intelligence to their respective businesses. Now, the company is preparing for the customer list to grow and become more varied.

How 2020 shaped up for AI

Scale AI’s customer list has included an array of autonomous vehicle companies such as Alphabet, Voyage, nuTonomy, Embark, Nuro and Zoox. While it began to diversify with additions like Airbnb, DoorDash and Pinterest, there were still sectors that had yet to jump on board. That changed in 2020, Wang said.

Scale began to see incredible use cases of AI within the government as well as enterprise automation, according to Wang. Scale AI began working more closely with government agencies this year and added enterprise automation customers like States Title, a residential real estate company.

Wang also saw an increase in uses around conversational AI, in both consumer and enterprise applications, as well as growth in e-commerce as companies sought out ways to use AI to provide personalized recommendations for their customers that were on par with Amazon’s.

Robotics continued to expand as well in 2020, although it spread to use cases beyond robotaxis, autonomous delivery and self-driving trucks, Wang said.

“A lot of the innovations that have happened within the self-driving industry, we’re starting to see trickle out throughout a lot of other robotics problems,” Wang said. “And so it’s been super exciting to see the breadth of AI continue to broaden and serve our ability to support all these use cases.”

The wider adoption of AI across industries has been a bit of a slow burn over the past several years as company founders and executives began to understand what the technology could do for their businesses, Wang said, adding that advancements in natural language processing of text, improved offerings from cloud companies like AWS, Azure and Google Cloud, and greater access to datasets helped sustain this trend.

“We’re finally getting to the point where we can help with computational AI, which has been this thing that’s been pitched for forever,” he said.

That slow burn heated up with the COVID-19 pandemic, said Wang, noting that interest has been particularly strong within government and enterprise automation as these entities looked for ways to operate more efficiently.

“There was this big reckoning,” Wang said of 2020 and the effect that COVID-19 had on traditional business enterprises.

If the future is mostly remote with consumers buying online instead of in-person, companies started to ask, “How do we start building for that?” according to Wang.

The push for operational efficiency, coupled with the capabilities of the technology, is only going to accelerate the use of AI for automating processes like mortgage applications or customer loans at banks, said Wang, who noted that outside of the tech world there are industries that still rely on a lot of paper and manual processes.


By Kirsten Korosec

AWS adds natural language search service for business intelligence from its data sets

When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016, the company wanted to provide product information and customer information for business users — not just developers.

At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.

Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.

“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”

That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.

“The way Q works. Type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X?’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”
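For readers curious what a question like that actually resolves to behind the scenes, here is a minimal pandas sketch of computing trailing-twelve-month sales for a product over a hypothetical sales table. It illustrates the aggregation Q automates, not AWS’s implementation of QuickSight Q.

```python
# A minimal sketch of the aggregation behind a question like
# "trailing twelve month sales of product X" -- hypothetical data,
# not AWS's implementation of QuickSight Q.
import pandas as pd

sales = pd.DataFrame({
    "product": ["X", "X", "Y", "X"],
    "date": pd.to_datetime(["2020-01-15", "2020-06-30", "2020-07-10", "2020-11-02"]),
    "amount": [1200.0, 850.0, 400.0, 975.0],
})

as_of = pd.Timestamp("2020-12-01")
window_start = as_of - pd.DateOffset(months=12)

ttm = sales[
    (sales["product"] == "X")
    & (sales["date"] > window_start)
    & (sales["date"] <= as_of)
]["amount"].sum()

print(f"Trailing twelve month sales of product X: {ttm:,.2f}")
```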

It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”


By Jonathan Shieber

Headroom, which uses AI to supercharge videoconferencing, raises $5M

Videoconferencing has become a cornerstone of how many of us work these days — so much so that one leading service, Zoom, has graduated into verb status because of how much it’s getting used.

But does that mean videoconferencing works as well as it should? Today, a new startup called Headroom is coming out of stealth, tapping into a battery of AI tools — computer vision, natural language processing and more — on the belief that the answer to that question is a clear — no bad WiFi interruption here — “no.”

Headroom not only hosts videoconferences, but then provides transcripts, summaries with highlights, gesture recognition, optimised video quality, and more, and today it’s announcing that it has raised a seed round of $5 million as it gears up to launch its freemium service into the world.

You can sign up to the waitlist to pilot it, and get other updates here.

The funding is coming from Anna Patterson of Gradient Ventures (Google’s AI venture fund); Evan Nisselson of LDV Capital (a specialist VC backing companies building visual technologies); Yahoo founder Jerry Yang, now of AME Cloud Ventures; Ash Patel of Morado Ventures; Anthony Goldbloom, the cofounder and CEO of Kaggle.com; and Serge Belongie, Cornell Tech associate dean and Professor of Computer Vision and Machine Learning.

It’s an interesting group of backers, but that might be because the founders themselves have a pretty illustrious background with years of experience using some of the most cutting-edge visual technologies to build other consumer and enterprise services.

Julian Green — a British transplant — was most recently at Google, where he ran the company’s computer vision products, including the Cloud Vision API that was launched under his watch. He came to Google by way of its acquisition of his previous startup Jetpac, which used deep learning and other AI tools to analyze photos to make travel recommendations. In a previous life, he was one of the co-founders of Houzz, another kind of platform that hinges on visual interactivity.

Russian-born Andrew Rabinovich, meanwhile, spent the last five years at Magic Leap, where he was the head of AI, and before that, the director of deep learning and the head of engineering. Before that, he too was at Google, as a software engineer specializing in computer vision and machine learning.

You might think that leaving their jobs to build an improved videoconferencing service was an opportunistic move, given the huge surge of use that the medium has had this year. Green, however, tells me that they came up with the idea and started building it at the end of 2019, when the term “Covid-19” didn’t even exist.

“But it certainly has made this a more interesting area,” he quipped, adding that it did make raising money significantly easier, too. (The round closed in July, he said.)

Given that Magic Leap had long been in limbo — AR and VR have proven to be incredibly tough to build businesses around, especially in the short- to medium-term, even for a startup with hundreds of millions of dollars in VC backing — and could have probably used some more interesting ideas to pivot to; and that Google is Google, with everything tech having an endpoint in Mountain View, it’s also curious that the pair decided to strike out on their own to build Headroom rather than pitch building the tech at their respective previous employers.

Green said the reasons were two-fold. The first has to do with the efficiency of building something when you are small. “I enjoy moving at startup speed,” he said.

And the second has to do with the challenges of building things on legacy platforms versus fresh, from the ground up.

“Google can do anything it wants,” he replied when I asked why he didn’t think of bringing these ideas to the team working on Meet (or Hangouts if you’re a non-business user). “But to run real-time AI on video conferencing, you need to build for that from the start. We started with that assumption,” he said.

All the same, the reasons why Headroom is interesting are also likely going to be the ones that will pose big challenges for it. The new ubiquity (and our present lives working at home) might make us more open to using video calling, but for better or worse, we’re all also now pretty used to what we already use. And many companies have now paid up as premium users to one service or another, so they may be reluctant to try out new and less-tested platforms.

But as we’ve seen in tech so many times, sometimes it pays to be a late mover, and the early movers are not always the winners.

The first iteration of Headroom will include features that will automatically take transcripts of the whole conversation, with the ability to use the video replay to edit the transcript if something has gone awry; offer a summary of the key points that are made during the call; and identify gestures to help shift the conversation.

And Green tells me that they are already working on features that will be added into future iterations. When the videoconference uses supplementary presentation materials, those can also be processed by the engine for highlights and transcription.

And another feature will optimize the pixels that you see for much better video quality, which should come in especially handy when you or the person/people you are talking to are on poor connections.

“You can understand where and what the pixels are in a video conference and send the right ones,” he explained. “Most of what you see of me and my background is not changing, so those don’t need to be sent all the time.”
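Green doesn’t spell out the algorithm, but the idea of sending only the pixels that change is essentially a block-level frame diff. Here is a minimal numpy sketch of that idea, using synthetic frames; it is an illustration of the concept, not Headroom’s pipeline.

```python
# Minimal sketch of picking out the blocks of a video frame that actually
# changed since the previous frame -- an illustration, not Headroom's code.
import numpy as np

HEIGHT, WIDTH, BLOCK = 480, 640, 16
THRESHOLD = 8.0  # mean per-pixel difference that counts as "changed"

prev_frame = np.random.randint(0, 256, (HEIGHT, WIDTH), dtype=np.uint8)
curr_frame = prev_frame.copy()
# Simulate motion in one region; the rest of the frame stays static.
curr_frame[100:180, 200:300] = np.random.randint(0, 256, (80, 100), dtype=np.uint8)

changed_blocks = []
for y in range(0, HEIGHT, BLOCK):
    for x in range(0, WIDTH, BLOCK):
        prev_block = prev_frame[y:y + BLOCK, x:x + BLOCK].astype(np.float32)
        curr_block = curr_frame[y:y + BLOCK, x:x + BLOCK].astype(np.float32)
        if np.abs(curr_block - prev_block).mean() > THRESHOLD:
            changed_blocks.append((y, x))

total_blocks = (HEIGHT // BLOCK) * (WIDTH // BLOCK)
print(f"{len(changed_blocks)} of {total_blocks} blocks changed; only these need to be sent")
```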

All of this taps into some of the more interesting aspects of sophisticated computer vision and natural language algorithms. Creating a summary, for example, relies on technology that is able to suss out not just what you are saying, but what are the most important parts of what you or someone else is saying.
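To give a flavor of what “sussing out the most important parts” can mean at its simplest, here is a toy extractive summarizer that scores sentences by how many frequent content words they contain. It is a sketch of the general technique, assuming nothing about Headroom’s actual models.

```python
# Toy extractive summarizer: score sentences by how many frequent content
# words they contain and keep the top ones. An illustration of the general
# technique, not Headroom's models.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "we", "is", "it", "that", "for"}

def summarize(text: str, max_sentences: int = 2) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return [s for s in sentences if s in ranked]  # keep original order

transcript = ("We agreed to ship the beta next month. The beta needs transcript editing. "
              "Someone mentioned lunch options. Gesture detection ships after the beta.")
print(summarize(transcript))
```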

And if you’ve ever been on a videocall and found it hard to make it clear you’ve wanted to say something, without straight-out interrupting the speaker, you’ll understand why gestures might be very useful.

But they can also come in handy if a speaker wants to know if he or she is losing the attention of the audience: the same tech that Headroom is using to detect gestures for people keen to speak up can also be used to detect when they are getting bored or annoyed and pass that information on to the person doing the talking.

“It’s about helping with EQ,” he said, with what I’m sure was a little bit of his tongue in his cheek, but then again we were on a Google Meet, and I may have misread that.

And that brings us to why Headroom is tapping into an interesting opportunity. At their best, when they work, tools like these not only supercharge videoconferences, but they have the potential to solve some of the problems you may have come up against in face-to-face meetings, too. Building software that actually might be better than the “real thing” is one way of making sure that it can have staying power beyond the demands of our current circumstances (which hopefully won’t be permanent circumstances).


By Ingrid Lunden

Lightspeed leads Laiye’s $42M round to bet on Chinese enterprise IT

Laiye, a Chinese startup that offers robotic process automation services to several major tech firms in the nation and government agencies, has raised $42 million in a new funding round as it looks to scale its business.

The new financing round, Series C, was co-led by Lightspeed Venture Partners and Lightspeed China Partners. Cathay Innovation, which led the startup’s Series B+ round, and Wu Capital, which led the Series B round, also participated in the new round.

China has been the hub for some of the cheapest labor in the world. But in recent years, a number of companies and government agencies have started to improve their efficiency with the help of technology.

That’s where Laiye comes into play. Robotic process automation (RPA) allows software to mimic several human behaviors such as keyboard strokes and mouse clicks.

“For instance, a number of banks did not previously offer APIs, so humans had to sign in and fetch the data and then feed it into some other software. Processes like these could be automated by our platform,” said Arvid Wang, co-founder and co-chief executive of Laiye, in an interview with TechCrunch.
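Wang’s example, signing in to a system that has no API and pulling data out of it by hand, is the kind of flow RPA scripts mimic. Here is a sketch using the Selenium browser-automation library purely as a stand-in; the URL and element IDs are hypothetical, and this is not Laiye’s UiBot.

```python
# Sketch of the manual "sign in, fetch the data, hand it to another system"
# flow that RPA automates. Selenium is used as a stand-in; the URL and element
# IDs are hypothetical, and this is not Laiye's UiBot.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://bank.example.com/login")  # hypothetical portal

driver.find_element(By.ID, "username").send_keys("report-bot")
driver.find_element(By.ID, "password").send_keys("not-a-real-secret")
driver.find_element(By.ID, "submit").click()

# Read the balance the way a human would read it off the screen...
balance = driver.find_element(By.CSS_SELECTOR, ".account-balance").text
driver.quit()

# ...then feed it into downstream software, here just a CSV for illustration.
with open("balances.csv", "a") as f:
    f.write(f"report-bot,{balance}\n")
```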

The four-and-a-half-year-old startup, which has raised more than $100 million to date, will use the fresh capital to hire talent from across the globe and expand its services. “We believe robotic process automation will achieve its full potential when it combines AI and the best human talent,” he said.

Laiye’s announcement today comes as the market for robotic process automation is still in a nascent stage in China. There are a handful of startups looking into this space, but Laiye, which counts Microsoft as an investor, and Sequoia-backed UiPath are the two clear leaders in the market currently.

As my colleague Rita Liao wrote last year, it was only recently that some entrepreneurs and investors in China started to shift their attention from consumer-facing products to business applications.

Globally, RPA has emerged as the fastest-growing market in the enterprise space. A Gartner report last year found that the RPA market grew more than 63% in 2018. Recent surveys have shown that most enterprises in China today are also showing interest in enhancing their RPA projects and AI capabilities.

Laiye today has more than 200 partners, and more than 200,000 developers have registered to use its multilingual UiBot RPA platform. UiBot enables integration with Laiye’s native and third-party AI capabilities such as natural language processing, optical character recognition, computer vision, chatbots and machine learning.

“We are very bullish on China, and the opportunities there are massive,” said Lightspeed partner Amy Wu in an interview. “Laiye is doing phenomenally there, and with this new fundraise, they can look to expand globally,” she said.


By Manish Singh

Battlefield winner Forethought adds tool to automate support ticket routing

Last year at this time, Forethought won the TechCrunch Disrupt Battlefield competition. A $9 million Series A investment followed last December. Today at TechCrunch Sessions: Enterprise in San Francisco, the company introduced the latest addition to its platform, called Agatha Predictions.

Forethought CEO and co-founder Deon Nicholas said that after launching its original product, Agatha Answers, to provide suggested answers to customer queries, customers were asking for help with the routing part of the process as well. “We learned that there’s a whole front end of that problem before the ticket even gets to the agent,” he said. Forethought developed Agatha Predictions to help sort the tickets and get them to the most qualified agent to solve the problem.

“It’s effectively an entire tool that helps triage and route tickets. So when a ticket is coming in, it can predict whether it’s a high priority or low priority ticket and which agent is best qualified to handle this question. And this all happens before the agent even touches the ticket. This really helps drive efficiencies across the organization by helping to reduce triage time,” Nicholas explained.

The original product Agatha Answers is designed to help agents get answers more quickly and reduce the amount of time it takes to resolve an issue. “It’s a tool that integrates into your Help Desk software, indexes your past support tickets, knowledge base articles and other [related content]. Then we give agents suggested answers to help them close questions with reduced handle time,” Nicholas said.

He says that Agatha Predictions is based on the same underlying AI engine as Agatha Answers. Both use Natural Language Understanding (NLU) developed by the company. “We’ve been building out our product, and the Natural Language Understanding engine, the engine behind the system, works in a very similar manner [across our products]. So as a ticket comes in the AI reads it, understands what the customer is asking about, and understands the semantics, the words being used,” he explained. This enables them to automate the routing and supply a likely answer for the issue involved.
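To make the triage idea concrete, here is a minimal sketch of predicting a ticket’s priority from its text with a bag-of-words classifier. The data and labels are invented for illustration; this is not Forethought’s NLU engine.

```python
# Minimal sketch of priority prediction for incoming tickets using a
# bag-of-words classifier -- an illustration of the triage idea, not
# Forethought's NLU engine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_tickets = [
    "Site is down and customers cannot check out",
    "Password reset email never arrived",
    "How do I export my invoices to CSV?",
    "Payment processing fails for every order",
]
priorities = ["high", "low", "low", "high"]  # labels from historical tickets

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_tickets, priorities)

new_ticket = "Checkout page throws an error for all users"
print(model.predict([new_ticket])[0])  # route high-priority tickets to senior agents
```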

Nicholas maintains that winning Battlefield gave his company a jump start and a certain legitimacy it lacked as an early-stage startup. Lots of customers came knocking after the event, as did investors. The company has grown from 5 employees when it launched last year at TechCrunch Disrupt to 20 today.


By Ron Miller

Ment.io wants to help your team make decisions

Getting even the most well-organized team to agree on anything can be hard. Tel Aviv’s Ment.io, formerly known as Epistema, wants to make this process easier by applying smart design and a dose of machine learning to streamline the decision-making process.

Like with so many Israeli startups, Ment.io’s co-founders Joab Rosenberg and Tzvika Katzenelson got their start in Israel’s intelligence service. Indeed, Rosenberg spent 25 years in the intelligence service, where his final role was that of the deputy head analyst. “Our story starts from there, because we had the responsibility of gathering the knowledge of a thousand analysts, surrounded by tens of thousands of collection unit soldiers,” Katzenelson, who is Ment.io’s CRO, told me. He noted that the army had turned decision making into a form of art. But when the founders started looking at the tech industry, they found a very different approach to decision making — and one that they thought needed to change.

If there’s one thing the software industry has, it’s data and analytics. These days, the obvious thing to do with all of that information is to build machine learning models, but Katzenelson (rightly) argues that these models are essentially black boxes. “Data does not speak for itself. Correlations that you may find in the data are certainly not causations,” he said. “Every time you send analysts into the data, they will come up with some patterns that may mislead you.”


So Ment.io is trying to take a very different approach. It uses data and machine learning, but it starts with questions and people. The service actually measures the level of expertise and credibility every team member has around a given topic. “One of the crazy things we’re doing is that for every person, we’re creating their cognitive matrix. We’re able to tell you within the context of your organization how believable you are, how balanced you are, how clearly you are being perceived by your counterparts, because we are gathering all of your clarification requests and every time a person challenges you with something.”

At its core, Ment.io is basically an internal Q&A service. Anybody can pose questions and anybody can answer them with any data source or supporting argument they may have.

“We’re doing structuring,” Katzenelson explained. “And that’s basically our philosophy: knowledge is just arguments and counterarguments. And the more structure you can put in place, the more logic you can apply.”
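Katzenelson’s “arguments and counterarguments” framing maps naturally onto a small tree-like data structure. The sketch below is hypothetical, in the spirit of that description rather than Ment.io’s actual data model.

```python
# Hypothetical sketch of structuring a question as arguments and
# counterarguments -- not Ment.io's actual schema.
from dataclasses import dataclass, field

@dataclass
class Argument:
    author: str
    text: str
    supporting: bool                      # True = argument for, False = counterargument
    counters: list["Argument"] = field(default_factory=list)

@dataclass
class Question:
    text: str
    arguments: list[Argument] = field(default_factory=list)

q = Question("Should we migrate the billing service to the new platform?")
a1 = Argument("dana", "The old platform has no audit logging.", supporting=True)
a2 = Argument("lee", "Migration would block the Q3 release.", supporting=False)
a1.counters.append(Argument("lee", "Audit logging could be bolted on instead.", supporting=False))
q.arguments.extend([a1, a2])

print(f"{q.text}: {len(q.arguments)} top-level arguments")
```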

In a sense, the company is doing this because natural language processing (NLP) technology isn’t yet able to understand the nuances of a discussion.

If you’re anything like me, though, the last thing you want is to have to use yet another SaaS product at work. The Ment.io team is quite aware of that and has built a deep integration with Slack already and is about to launch support for Microsoft Teams in the next few days, which doesn’t come as a surprise, given that the team has participated in the Microsoft ScaleUp accelerator program.

The overall idea here, Katzenelson explained, is to provide a kind of intelligence layer on top of tools like Slack and Teams that can capture a lot of the institutional knowledge that is now often shared in relatively ephemeral chats.

Ment.io is the first Israeli company to raise funding from Peter Thiel’s late-stage fund, as well as from the Slack Fund. That surely creates some interesting friction, given the company’s involvement with both Slack and Microsoft, but Katzenelson argues that this is not actually a problem.

Microsoft is also a current Ment.io customer, together with the likes of Intel, Citibank and Fiverr.



By Frederic Lardinois

Cathay Innovation leads Laiye’s $35M round to bet on Chinese enterprise IT

For many years, the boom and bust of China’s tech landscape have centered around consumer-facing products. As this space gets filled by Baidu, Alibaba, Tencent, and more recently Didi Chuxing, Meituan Dianping, and ByteDance, entrepreneurs and investors are shifting attention to business applications.

One startup making waves in China’s enterprise software market is four-year-old Laiye, which just raised a $35 million Series B round led by cross-border venture capital firm Cathay Innovation. Existing backers Wu Capital, a family fund, and Lightspeed China Partners, whose founding partner James Mi has been investing in every round of Laiye since Pre-A, also participated in this Series B.

The deal came on the heels of Laiye’s merger with Chinese company Awesome Technology, a team that’s spent the last 18 years developing Robotic Process Automation, a term for technology that lets organizations offload repetitive tasks like customer service onto machines. With this marriage, Laiye officially launched its RPA product UiBot to compete in the nascent and fast-growing market for streamlining workflow.

“There was a wave of B2C [business-to-consumer] in China, and now we believe enterprise software is about to grow rapidly,” Denis Barrier, co-founder and chief executive officer of Cathay Innovation, told TechCrunch over a phone interview.

Since launching in January, UiBot has collected some 300,000 downloads and 6,000 registered enterprise users. Its clients include major names such as Nike, Walmart, Wyeth, China Mobile, Ctrip and more.

Guanchun Wang, chairman and CEO of Laiye, believes there are synergies between AI-enabled chatbots and RPA solutions, as the combination allows business clients “to build bots with both brains and hands so as to significantly improve operational efficiency and reduce labor costs,” he said.

When it comes to market size, Barrier believes RPA in China will be a new area of growth. For one, Chinese enterprises, with a shorter history than those found in developed economies, are less hampered by legacy systems, which makes it “faster and easier to set up new corporate software,” the investor observed. There’s also a lot more data being produced in China given the population of organizations, which could give Chinese RPA a competitive advantage.

“You need data to train the machine. The more data you have, the better your algorithms become, provided you also have the right data scientists, as in China,” Barrier added.

However, the investor warned that the exact timing of RPA adoption by people and customers is not always certain, even though the product is ready.

Laiye said it will use the proceeds to recruit talent for research and development as well as sales of its RPA products. The startup will also work on growing its AI capabilities beyond natural language processing, deep learning, and reinforcement learning, in addition to accelerating commercialization of its robotic solutions across industries.


By Rita Liao

Gong.io nabs $40M investment to enhance CRM with voice recognition

With traditional CRM tools, salespeople add basic details about the companies to the database, then a few notes about their interactions. AI has helped automate some of that, but Gong.io wants to take it even further, using voice recognition to capture every word of every interaction. Today, it got a $40 million Series B investment.

The round was led by Battery Ventures with existing investors Norwest Venture Partners, Shlomo Kramer, Wing Venture Capital, NextWorld Capital and Cisco Investments also participating. Battery general partner Dharmesh Thakker will join the startup’s Board under the terms of the deal. Today’s investment brings the total raised so far to $68 million, according to the company.

$40 million is a hefty Series B, but investors see a tool that has the potential to have a material impact on sales, or at least give management a deeper understanding of why a deal succeeded or failed using artificial intelligence, specifically natural language processing.

Company co-founder and CEO Amit Bendov says the solution starts by monitoring all customer-facing conversation and giving feedback in a fully automated fashion. “Our solution uses AI to extract important bits out of the conversation to provide insights to customer-facing people about how they can get better at what they do, while providing insights to management about how staff is performing,” he explained. It takes it one step further by offering strategic input like how your competitors are trending or how are customers responding to your products.

Screenshot: Gong.io

Bendov says he started the company because he had this experience at previous startups where he wanted to know more about why he lost a sale, but there was no insight from looking at the data in the CRM database. “CRM could tell you what customers you have, how many sales you’re making, who is achieving quota or not, but never give me the information to rationalize and improve operations,” he said.

The company currently has 350 customers, a number that has more than tripled since the end of 2017 when it had 100. He says it’s not only adding new customers; existing ones are expanding, and churn is almost zero.

Today, Gong has 120 employees with headquarters in San Francisco and a 55-person R&D team in Israel. Bendov expects the number of employees to double over the next year with the new influx of money to keep up with the customer growth.


By Ron Miller

Einstein Voice gives Salesforce users gift of gab

Salespeople usually spend their days talking. They are on the phone and in meetings, but when it comes to updating Salesforce, they are back at the keyboard again typing notes and milestones, or searching for metrics about their performance. Today, Salesforce decided to change that by introducing Einstein Voice, a bit of AI magic that allows salespeople to talk to the program instead of typing.

In a world where Amazon Alexa and Siri make talking to our devices more commonplace in our non-work lives, it makes sense that companies are trying to bring that same kind of interaction to work.

In this case, you can conversationally enter information about a meeting, get daily briefings about key information on your day’s meetings (particularly nice for salespeople who spend their day in the car) and interact with Salesforce data dashboards by asking questions instead of typing queries.

All of these tools are designed to make life easier for busy salespeople. Most hate the administrative part of their jobs because when they are entering information, even if having a record will benefit them in the long run, they are not doing their primary job, which is selling stuff.

For the meetings notes part, instead of typing on a smartphone, which can be a challenge anyway, you simply touch Meeting Debrief in the Einstein Voice mobile tool and start talking to enter your notes. The tool interprets what you’re saying. As with most transcription services, this is probably not perfect and will require some correcting, but should get you most of the way there.

It can also pick out key data like dates and deal amounts and let you set action items to follow up on.
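As a rough illustration of what pulling dates and deal amounts out of a dictated note involves, here is a minimal regex-based sketch. It shows the kind of entity extraction described, not Salesforce’s Einstein implementation.

```python
# Sketch of pulling deal amounts and dates out of a dictated note with
# regular expressions -- an illustration of the extraction step, not
# Salesforce's Einstein.
import re

note = ("Met with Acme on October 3. They want a proposal for $45,000 "
        "by November 15 and a follow-up call next Tuesday.")

amounts = re.findall(r"\$\d[\d,]*(?:\.\d{2})?", note)
dates = re.findall(
    r"\b(?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2}\b",
    note,
)

print("Amounts:", amounts)   # ['$45,000']
print("Dates:", dates)       # ['October 3', 'November 15']
```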

Gif: Salesforce

Brent Leary, founder and principal analyst at CRM Essentials, says this is a natural progression for Salesforce as people get more comfortable using voice interfaces. “I think this will make voice-first devices and assistants as important pieces to the CRM puzzle from both a customer experience and an employee productivity perspective,” he told TechCrunch.

It’s worth pointing out that Tact.AI has been offering this type of voice interaction on top of Salesforce for some time. It’s likely ahead of Salesforce at this point, but Leary believes having Salesforce enter the voice arena will probably benefit the startup more than hurt it.

“The Salesforce tide will lift all boats, and companies like Tact will see their profile increased significantly because while Salesforce is the leader in the category, its share of the market is still less than 20%,” he pointed out.

Einstein is Salesforce’s catch-all brand for its artificial intelligence layer. In this case it’s using natural language processing, voice recognition technology and other artificial intelligence pieces to interpret the person’s voice and transcribe what they are saying or understand their request better.

Typically, Salesforce starts with a small set of functionality and then builds on that over time. That’s very likely what they are doing here, coming out with a product announcement in time for Dreamforce, their massive customer conference next week.


By Ron Miller

Fresh out of Y Combinator, Leena AI scores $2M seed round

Leena AI, a recent Y Combinator graduate focusing on HR chatbots to help employees answer questions like how much vacation time they have left, announced a $2 million seed round today from a variety of investors.

Company co-founder and CEO Adit Jain says the seed money is about scaling the company and gaining customers. They hope to have 50 enterprise customers within the next 12-18 months. They currently have 16.

We wrote about the company in June when it was part of the Y Combinator Summer 2018 class. At the time, Jain explained that they began in 2015 in India as a company called Chatteron. The original idea was to help others build chatbots, but like many startups, they realized there was a need not being addressed, in this case around HR, and they started Leena AI last year to focus specifically on that.

As they delved deeper into the HR problem, they found most employees had trouble getting answers to basic questions like how much vacation time they had or how to get a new baby on their health insurance. This forced a call to a help desk when the information was available online, but not always easy to find.

Jain pointed out that most HR policies are defined in policy documents, but employees don’t always know where they are. They felt a chatbot would be a good way to solve this problem and save a lot of time searching or calling for answers that should be easily found. What’s more, they learned that the vast majority of questions are fairly common and therefore easier for a system to learn.

Employees can access the Leena chatbot in Slack, Workplace by Facebook, Outlook, Skype for Business, Microsoft Teams and Cisco Spark. They also offer Web and mobile access to their service independent of these other tools.

Photo: Leena AI

What’s more, since most companies use a common set of backend HR systems like those from Oracle, SAP and NetSuite (also owned by Oracle), they have been able to build a set of standard integrators that are available out of the box with their solution.

The customer provides Leena with a handbook or a set of policy documents and they put their machine learning to work on that. Jain says, armed with this information, they can convert these documents into a structured set of questions and answers and feed that to the chatbot. They apply Natural Language Processing (NLP) to understand the question being asked and provide the correct answer.
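To show what matching an employee question to the closest policy Q&A pair can look like at its simplest, here is a minimal TF-IDF similarity sketch with invented policy data. It illustrates the general approach, not Leena AI’s system.

```python
# Minimal sketch of matching an employee question to the closest policy Q&A
# pair using TF-IDF similarity -- an illustration, not Leena AI's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_questions = [
    "How many vacation days do employees get per year?",
    "How do I add a new baby to my health insurance?",
    "What is the process for expense reimbursement?",
]
policy_answers = [
    "Full-time employees accrue 20 vacation days per year.",
    "Submit a dependent change form within 30 days of the birth.",
    "File expenses through the portal within 60 days with receipts attached.",
]

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(policy_questions)

employee_question = "How much vacation time do I have left this year?"
scores = cosine_similarity(vectorizer.transform([employee_question]), question_vectors)
best = scores.argmax()
print(policy_answers[best])
```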

They see room to move beyond HR and expand into other departments such as sales or customer service that could also take advantage of bots to answer a set of common questions. For now, as a recent YC graduate, they have their first bit of significant funding and they will concentrate on building HR chatbots and see where that takes them.


By Ron Miller

Forethought looks to reshape enterprise search with AI

Forethought, a 2018 TechCrunch Disrupt Battlefield participant, has a modern vision for enterprise search that uses AI to surface the content that matters most in the context of work. Its first use case involves customer service, but it has a broader ambition to work across the enterprise.

The startup takes a bit of an unusual approach to search. Instead of the keyword-driven experience we are used to with Google, Forethought uses an information retrieval model driven by artificial intelligence underpinnings that they then embed directly into the workflow, company co-founder and CEO Deon Nicholas told TechCrunch. They have dubbed their answer engine ‘Agatha.’

Much like any search product, it begins by indexing relevant content. Nicholas says they built the search engine to be able to index millions of documents at scale very quickly. It then uses natural language processing (NLP) and natural language understanding (NLU) to read the documents as a human would.

“We don’t work on keywords. You can ask questions without keywords and using synonyms to help understand what you actually mean, we can actually pull out the correct answer [from the content] and deliver it to you,” he said.
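Keyword-free retrieval of this kind is commonly built on text embeddings, where a question phrased with synonyms still lands near the right document. Here is a short sketch using the sentence-transformers library; it is illustrative only, since Forethought doesn’t describe Agatha at this level of detail.

```python
# Sketch of keyword-free retrieval with sentence embeddings: a question
# phrased with synonyms still lands on the right article. Illustrative only;
# not a description of Forethought's Agatha engine.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

articles = [
    "To reset your password, open Settings > Security and choose 'Reset'.",
    "Refunds are processed within 5 business days of a cancellation.",
    "You can export your data as CSV from the Reports page.",
]
article_embeddings = model.encode(articles, convert_to_tensor=True)

query = "I forgot my login credentials, how do I get back in?"  # no shared keywords
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, article_embeddings)[0]
print(articles[int(scores.argmax())])
```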

One of the first use cases where they are seeing traction is customer support. “Our AI, Agatha for Support, integrates into a company’s help desk software, either Zendesk or Salesforce Service Cloud, and then we [read] tickets and suggest answers and relevant knowledge base articles to help close tickets more efficiently,” Nicholas explained. He claims their approach has increased agent efficiency by 20-30 percent.

Forethought at work in Salesforce Service Cloud. Screenshot: Forethought

The plan is to eventually expand beyond the initial customer service use case into other areas of the enterprise and follow a similar path of indexing documents and embedding the solution into the tools that people are using to do their jobs.

When they reach Beta or general release, they will operate as a cloud service where customers sign up, enter their Zendesk or Salesforce credentials (or whatever other products happen to be supported at that point) and the product begins indexing the content.

Forethought in Zendesk. Screenshot: Forethought

The founding team, all in their mid-20s, have had a passion for artificial intelligence since high school. In fact, Nicholas built an AI program to read his notes and quiz him on history while still in high school. Later at the University of Waterloo he published a paper on machine learning and had internships at Palantir, Facebook and Dropbox. His first job out of school was at Pure Storage. All these positions had a common thread of working with data and AI.

The company launched last year and they debuted Agatha in private beta four months ago. They currently have six companies participating, the first of which has been converted to a paying customer.

They have closed a pre-seed round of funding too, and although they weren’t prepared to share the amount, the investment was led by K9 Ventures, with Village Global, Original Capital and other unnamed investors also participating.


By Ron Miller

Klarity uses AI to strip drudgery from contract review

Klarity, a member of the Y Combinator Summer 2018 class, wants to automate much of the contract review process by applying artificial intelligence, specifically natural language processing.

Company co-founder and CEO Andrew Antos has experienced the pain of contract reviews first hand. After graduating from Harvard Law, he landed a job spending 16 hours a day reviewing contract language, a process he called mind-numbing. He figured there had to be a way to put technology to bear on the problem and Klarity was born.

“A lot of companies are employing internal or external lawyers because their customers, vendors or suppliers are sending them a contract to sign,” Antos explained. “They have to get somebody to read it, understand it and figure out whether it’s something that they can sign or if it requires specific changes.”

You may think that this kind of work would be difficult to automate, but Antos said that contracts have fairly standard language and most companies use ‘playbooks.’ “Think of the playbook as a checklist for NDAs, sales agreements and vendor agreements — what they are looking for and specific preferences on what they agree to or what needs to be changed,” Antos explained.

Klarity is a subscription cloud service that checks contracts in Microsoft Word documents using NLP. It makes suggestions when it sees something that doesn’t match up with the playbook checklist. The product then generates a document, and a human lawyer reviews and signs off on the suggested changes, reducing the review time from an hour or more to 10 or 15 minutes.
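As a rough picture of what a playbook-style check over a Word document can look like, here is a minimal sketch that reads a contract’s paragraphs with python-docx and flags required clauses that appear to be missing. The playbook entries and filename are hypothetical, and simple keyword matching stands in for Klarity’s NLP.

```python
# Sketch of a playbook-style check over a contract: read the paragraphs of a
# Word document and flag required clauses that appear to be missing. Uses
# python-docx and simple keyword matching as an illustration, not Klarity's NLP.
from docx import Document

PLAYBOOK = {
    "governing law": ["governing law", "governed by the laws"],
    "confidentiality term": ["confidential information", "non-disclosure"],
    "limitation of liability": ["limitation of liability", "liable for"],
}

def check_contract(path: str) -> list[str]:
    text = " ".join(p.text.lower() for p in Document(path).paragraphs)
    return [
        clause for clause, phrases in PLAYBOOK.items()
        if not any(phrase in text for phrase in phrases)
    ]

missing = check_contract("vendor_nda.docx")   # hypothetical file
for clause in missing:
    print(f"Flag for review: no '{clause}' clause found")
```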

Screenshot: Klarity

They launched the first iteration of the product last year and have 14 companies using it with 4 paying customers so far including one of the world’s largest private equity funds. These companies signed on because they have to process huge numbers of contracts. Klarity is helping them save time and money, while applying their preferences in a consistent fashion, something that a human reviewer can have trouble doing.

He acknowledges the solution could be taking away work from human lawyers, something they think about quite a bit. Ultimately, though, they believe that contract review is so tedious that automating it frees up lawyers for work that requires a greater level of intellectual rigor and creativity.

Antos met his co-founder and CTO, Nischal Nadhamuni, at an MIT entrepreneurship class in 2016 and the two became fast friends. In fact, he says that they pretty much decided to start a company the first day. “We spent 3 hours walking around Cambridge and decided to work together to solve this real problem people are having.”

They applied to Y Combinator two other times before being accepted in this summer’s cohort. The third time was the charm. He says the primary value of being in YC is the community and friendships they have formed and the help they have had in refining their approach.

“It’s like having a constant mirror that helps you realize any mistakes or any suboptimal things in your business on a high speed basis,” he said.


By Ron Miller

Dialpad dials up $50M Series D led by Iconiq

Dialpad announced a $50 million Series D investment today, giving the company plenty of capital to keep expanding its business communications platform.

The round was led by Iconiq Capital with help from existing investors Andreessen Horowitz, Amasia, Scale Ventures, Section 32 and Work-Bench. With today’s round, the company has now raised $120 million.

As technologies like artificial intelligence and the internet of things advance, they are giving the company an opportunity to expand its platform. Dialpad products include UberConference conferencing software and VoiceAI for voice transcription applications.

The company is competing in a crowded market that includes giants like Google and Cisco and a host of smaller companies like GoToMeeting (owned by LogMeIn), Zoom and BlueJeans. All of these companies are working to provide cloud-based meeting and communications services.

Increasingly, that involves artificial intelligence like natural language processing (NLP) to provide on-the-fly transcription services. While none of these services is perfect yet, they are growing increasingly accurate.

VoiceAI was launched shortly after Dialpad acquired TalkIQ in May to take this idea a step further by applying sentiment analysis and analytics to voice transcripts. The company plans to use the cash infusion to continue investing in artificial intelligence on the Dialpad platform.

Post call transcript generated by VoiceAI. Screenshot: Dialpad

CEO Craig Walker certainly sees the potential of artificial intelligence for the company moving forward. “Smart CIOs know AI isn’t just another trendy tech tool, it’s the future of work. By arming sales and support teams, and frankly everybody in the organization, with VoiceAI’s real-time artificial intelligence and insights, businesses can dramatically improve customer satisfaction and ultimately their bottom line,” Walker said in a statement.

Dialpad is also working with voice-driven devices like the Amazon Alexa and it announced Alexa integration with Dialpad in April. This allows Alexa users to make calls by saying something like, “Alexa, call Liz Green with Dialpad” and the Echo will make the phone call on your behalf using Dialpad software.

According to the company website, it has over 50,000 customers including WeWork, Stitch Fix, Uber and Reddit. The company says it has added over 10,000 new customers since its last funding round in September 2017.


By Ron Miller

IBM can’t stop milking the Watson brand

More than seven years after IBM Watson beat a couple of human Jeopardy! champions, the company has continued to make hay with the brand. Watson, at its core, is simply an artificial intelligence engine, and while that’s not trivial by any means, neither is it the personified intelligence that its TV commercials would have the less technically savvy believe.

These commercials contribute to this unrealistic idea that humans can talk to machines in this natural fashion. You’ve probably seen some. They show this symbol talking to humans in a robotic voice explaining its capabilities. Some of the humans include Bob Dylan, Serena Williams and Stephen King.

In spite of devices like Alexa and Google Home, we certainly don’t have machines giving us detailed explanations, at least not yet.

IBM would probably be better served aiming its commercials at the enterprises it sells to, rather than the general public, who may be impressed by a talking box having a conversation with a star. However, those of us who have at least some understanding of the capabilities of such tech, and those who buy it, don’t need such bells and whistles. We need much more practical applications. While chatting with Serena Williams about competitiveness may be entertaining, it isn’t really driving home the actual value proposition of this tech for business.

The trouble with using Watson as a catch-all phrase is that it reduces the authenticity of the core technology behind it. It’s not as though IBM is alone in trying to personify its AI though. We’ve seen the same thing from Salesforce with Einstein, Microsoft with Cortana and Adobe with Sensei. It seems that these large companies can’t deliver artificial intelligence without hiding it behind a brand.

The thing is, though, this is not a consumer device like the Amazon Echo or Google Home. It’s a set of technologies like deep learning, computer vision and natural language processing, but that’s hard to sell, so these companies try to put a brand on it like it’s a single entity.

Just this week, at the IBM Think Conference in Las Vegas, we saw a slew of announcements from IBM that took on the Watson brand. That included Watson Studio, Watson Knowledge Catalog, Watson Data Kits and Watson Assistant. While they were at it, they also announced they were beefing up their partnership with Apple using — you guessed it — Watson and Apple Core ML. (Do you have anything without quite so much Watson in it?)

Marketers gonna market and there is little we can do, but when you overplay your brand, you may be doing your company more harm than good. IBM has saturated the Watson brand, and might not be reaching the intended audience as a result.