TouchCast raises $55M to grow its mixed reality-based virtual event platform

Events — when they haven’t been cancelled altogether in the last 12 months due to the global pandemic — have gone virtual and online, and a wave of startups helping people create and participate in those experiences is seeing a surge of attention, and funding.

In the latest development, New York video startup TouchCast — which has built a platform that lets companies produce lifelike virtual conferences and other events without much technical heavy lifting — has picked up $55 million in funding, money that co-founder and CEO Edo Segal said the startup will use to build out its services and teams after being “overrun by demand” in the wake of Covid-19.

The funding is being led by a strategic investor, Accenture Ventures — the investment arm of the systems integrator and consultancy behemoth — with Alexander Capital Ventures, Saatchi Invest, Ronald Lauder and other unnamed investors also participating. Up to now the startup has been largely self-funded, and while Segal isn’t disclosing the valuation, he said it is definitely in the nine figures (that is, somewhere in the hundreds of millions of dollars).

Accenture has been using TouchCast’s technology for its own events, but that is likely just one part of its interest: Accenture also has a lot of corporate customers that tap it to build and implement interactive services, so potentially this could funnel more customers into TouchCast’s pipeline.

(Case in point: my interview with Segal, over Zoom, found me speaking to him in the middle of a vast aircraft hangar, with a 747 from one of the big airlines of the world — I won’t say which — parked behind him. He said he’d just come from a business pitch with the airline in question.)

A lot of what we have seen in virtual events, and in particular conferences, has to date been, effectively, a managed version of a group call on one of the established videoconferencing platforms like Zoom, Google Hangouts, Microsoft Teams, Webex and so on.

You get a screen with participants’ individual video streams presented to you in a grid more reminiscent of the opening credits of the Brady Bunch or Hollywood Squares than an actual stage or venue.

There are some, of course, who are taking a much different route. Witness Apple’s online events in the last year, productions that have elevated what a virtual event can mean, with more detail and information, and less awkwardness, than an actual live event.

The problem is that not every company is Apple: most can’t afford, much less execute, Hollywood-level presentations.

The essence of what TouchCast has built, as Segal describes it, is a platform that combines computer vision, video streaming technology and natural language processing to let other organizations create experiences that are closer to that of the iPhone giant’s than they are to a game show.

“We have created a platform so that all companies can create events like Apple’s,” Segal said. “We’re taking them on a journey beyond people sitting in their home offices.”

Yet “home office” remains the operative phrase. With TouchCast, people (the organizers and the on-stage participants) still use basic videoconferencing solutions like Zoom and Teams — in their homes, even — to produce the action. But behind the scenes, TouchCast is taking those videos, using computer vision to trim out the people and place them into virtual “venues” so that they appear as if they are on stage in an actual conference.
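At its core, the pipeline Segal describes — segment the speaker out of a webcam feed, then place them over a rendered venue — is masked alpha compositing. Here is a minimal sketch in Python/NumPy, with the segmentation mask assumed to come from an upstream computer vision model; the function name and shapes are illustrative, not TouchCast's actual API:

```python
import numpy as np

def composite_speaker(frame, mask, venue, x, y):
    """Paste the masked speaker from `frame` onto the `venue` image
    at offset (x, y), using the mask as a per-pixel alpha channel."""
    out = venue.astype(float).copy()
    h, w = frame.shape[:2]
    region = out[y:y + h, x:x + w]
    alpha = mask[..., None].astype(float)  # shape (h, w, 1), values in [0, 1]
    region[:] = alpha * frame + (1 - alpha) * region
    return out.astype(np.uint8)

# Toy example: a 2x2 "speaker" frame whose mask keeps only the left column.
frame = np.full((2, 2, 3), 255, dtype=np.uint8)   # white speaker pixels
mask = np.array([[1, 0], [1, 0]])                  # segmentation mask
venue = np.zeros((4, 4, 3), dtype=np.uint8)        # black venue backdrop
result = composite_speaker(frame, mask, venue, 1, 1)
```

The real system does this per frame of live video, with the mask produced by a person-segmentation model rather than hand-written.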

These venues come from a selection of templates, or the organiser can arrange for a specific venue to be shot and used. And in addition to the actual event, TouchCast then also provides tools for audience members to participate with questions and to chat to each other. As the event is progressing, TouchCast also produces transcriptions and summaries of the key points for those who want them.

Segal said that TouchCast is not planning to make this a consumer-focused product, not even on the B2B2C side, but it’s preparing a feature so that when business conference organisers do want to hold a music segment with a special guest, those can be incorporated, too. (In all honesty, it seems like a small leap to use this for more consumer-focused events, too.)

TouchCast’s growth into a startup serving an audience of hungry and anxious event planners has been an interesting pivot that is a reminder to founders (and investors) that the right opportunities might not be the ones you think they are.

You might recall that the company first came out of stealth back in 2013, with former TechCrunch editor Erick Schonfeld one of the co-founders.

Back then, the company’s concept was to supercharge online video, by making it easier for creators to bring in interactive elements and media widgets into their work, to essentially make videos closer to the kind of interactivity and busy media mix that we find on webpages themselves.

All that might have been too clever by half. Or, it was simply not the right time for that technology. The service never made many waves, and one of my colleagues even assumed it had deadpooled at some point.

Not at all, it turns out. Segal (a serial entrepreneur who also used to work at AOL as VP of emerging platforms — AOL being the company that acquired TechCrunch and eventually became a part of Verizon) notes that the technology that TouchCast is using for its conferencing solution is essentially the same as what it built for its original video product.

After launching an earlier, less feature-rich version of what it has on the market today, the company spent about six months retooling it, adding more mixed reality customization via Unreal Engine, to make it what it is now and to meet the demand it started to see from customers, who approached the startup for their own events after attending TouchCast-powered conferences held by others.

“It took us eight years to get to our overnight success story,” Segal joked.

Figures from Grand View Research cited by TouchCast estimate that virtual events will be a $400 billion business by 2027, and that has made for a pretty large array of companies building out experiences that will make those events worth attending, and putting on.

They include the likes of Hopin and Bizzabo — both of which have recently also raised big rounds — but also more enhanced services from the big, established players in videoconferencing like Zoom, Google, Microsoft, Cisco and more.

It’s no surprise to see Accenture throwing its hat into that ring as a backer of what it has decided is one of the more interesting technology players in that mix.

The reason is that many understand and now accept that, as with working life in general, even when we do return to “live” events, the virtual component, and the expectation that it will work well and be compelling enough to watch, is here to stay.

“Digital disruption, distributed workforces, and customer experience are the driving forces behind the need for companies to transform how they do business and move toward the future of work,” said Tom Lounibos, managing director, Accenture Ventures, in a statement. “For organizations to harness the power of virtual experiences to deliver business impact, the pandemic has shown that quality interactions and insights are needed. Our investment in Touchcast demonstrates our commitment to identifying the latest technologies that help address our clients’ critical business needs.”


By Ingrid Lunden

Polarity raises $8.1M for its AI software that constantly analyzes employee screens and highlights key info

Reference docs and spreadsheets seemingly make the world go ’round, but what if employees could just close those tabs for good without losing that knowledge?

One startup is taking on that complicated challenge. Predictably, the solution is quite complicated as well from a tech perspective, involving an AI system that analyzes everything on your PC screen — all the time — and highlights onscreen text that you could use a little more context on. The team at Polarity wants its tech to help teams lower the knowledge barrier to getting stuff done and allow people to focus more on procedure and strategy than memorizing file numbers, IP addresses and jargon.

The Connecticut startup just closed an $8.1 million “AA” round led by TechOperators, with Shasta Ventures, Strategic Cyber Ventures, Gula Tech Adventures and Kaiser Permanente Ventures also participating in the round. The startup closed its $3.5 million Series A in early 2017.

Interestingly, the enterprise-centric startup pitches itself as an AR company, augmenting what’s happening on your laptop screen much like a pair of AR glasses could.

The startup’s computer vision software uses character recognition to analyze what’s on a user’s screen. That can be helpful for enterprise teams importing things like a company rolodex, so that bios are always collectively a click away, but the real utility comes from team-wide flagging of things like suspicious IP addresses, which lets entire teams learn about new threats and issues at the same time without having to constantly check in with their co-workers. The startup’s current product has a big focus on analysts and security teams.
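Stripped to its essentials, the flagging step is pattern matching over OCR'd screen text plus a lookup against a shared annotation store. A toy illustration in Python — the regex, the store and its contents are mine, not Polarity's:

```python
import re

# Hypothetical team-shared annotation store: flagged IPs -> analyst notes.
FLAGGED = {
    "203.0.113.7": "seen in last week's phishing campaign",
}

# Naive IPv4 matcher, good enough for a sketch.
IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def annotate(screen_text):
    """Return (ip, note) pairs for every flagged IP visible onscreen."""
    hits = []
    for ip in IP_PATTERN.findall(screen_text):
        if ip in FLAGGED:
            hits.append((ip, FLAGGED[ip]))
    return hits

# Text as it might come back from OCR of a live screen region.
ocr_text = "Inbound traffic from 203.0.113.7 and 198.51.100.4 at 09:14"
hits = annotate(ocr_text)
```

The hard part Polarity solves is upstream of this: extracting that text from arbitrary pixels continuously and cheaply.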

Image: Polarity’s interface, before and after (via Polarity)

Using character recognition to analyze a screen for specific keywords is useful in itself, but that’s also largely a solved computer vision problem.

Polarity’s big advance has been getting these processes to run consistently on-device without crushing a device’s CPU. CEO Paul Battista says that for the average customer, Polarity’s software generally eats up about 3-6% of their computer’s processing power, though it can spike much higher if the system is fed a ton of new information at once.

“We spent years building the tech to accomplish [efficiency], readjusting how people think of [optical character recognition] and then doing it in real time,” Battista tells me. “The more data that you have onscreen, the more power you use. So it does use a significant percentage of the CPU.”

Why bother with all of this AI trickery and CPU efficiency when you could pull this functionality off in certain apps with an API? The whole deliverable here is that it doesn’t matter if you’re working in Chrome, or Excel or pulling up a scanned document, the software is analyzing what’s actually being rendered onscreen, not what the individual app is communicating.

When it comes to a piece of software analyzing everything on your screen at all times, there are certainly some privacy concerns not only from the employee’s perspective but from a company’s security perspective.

Battista says the intent with this product isn’t to be some piece of corporate spyware, and that it won’t be something running in the background — it’s an app that users launch. “If [companies] wanted to, they could collect all of the data on everybody’s screens, but we don’t have any customers doing that. The software is built to have a user interface for users to interact with, so if the user didn’t choose to subscribe or turn on a metric, then [the company] wouldn’t be able to force them to collect it in the current product.”

Battista says that teams at seven Fortune 100 companies are currently paying for Polarity, with many more in pilot programs. The team is currently around 20 people and with this latest fundraise, Battista wants to double the size of the team in the next 18 months as they look to scale to larger rollouts at major companies.


By Lucas Matney

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer product goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements listed below, and we’ll consider your startup:

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, which is immediately followed by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.


By Neesha A. Tambe

Under the hood on Zoom’s IPO, with founder and CEO Eric Yuan

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Kate Clark sat down with Eric Yuan, the founder and CEO of video communications startup Zoom, to go behind the curtain on the company’s recent IPO process and its path to the public markets.

Since hitting the trading desks just a few weeks ago, Zoom stock is up over 30%. But Zoom’s path to becoming a Silicon Valley and Wall Street darling was anything but easy. Eric tells Kate how the company’s early focus on profitability, which is now helping drive the stock’s strong performance out of the gate, actually made it difficult to raise VC money early on, and how its consistent focus on user experience led to organic growth across different customer bases.

Eric: I experienced the year 2000 dot com crash and the 2008 financial crisis, and it almost wiped out the company. I only got seed money from my friends, and also one or two VCs like AME Cloud Ventures and Qualcomm Ventures.

And all other institutional VCs had no interest in investing in us. I was very paranoid and always thought, “wow, we are not going to survive next week because we cannot raise the capital.” And along the way, I thought we have to look to our own destiny. We wanted to be cash flow positive. We wanted to be profitable.

And so by doing that, people thought I wasn’t as wise, because we’d probably be sacrificing growth, right? And a lot of other companies, they did very well and were not profitable because they focused on growth. And in the future they could be very, very profitable.

Eric and Kate also dive deeper into Zoom’s founding and Eric’s initial decision to leave WebEx to work on a better video communication solution. Eric also offers his take on what the future of video conferencing may look like in the next five to 10 years and gives advice to founders looking to build the next great company.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Kate Clark: Well thanks for joining us Eric.

Eric Yuan: No problem, no problem.

Kate: Super excited to chat about Zoom’s historic IPO. Before we jump into questions, I’m just going to review some of the key events leading up to the IPO, just to give some context to any of the listeners on the call.


By Arman Tabatabai

Microsoft brings Azure SQL Database to the edge (and Arm)

Microsoft today announced an interesting update to its database lineup with the preview of Azure SQL Database Edge, a new tool that brings the same database engine that powers Azure SQL Database in the cloud to edge computing devices, including, for the first time, Arm-based machines.

Azure SQL Edge, Azure corporate vice president Julia White writes in today’s announcement, “brings to the edge the same performant, secure and easy to manage SQL engine that our customers love in Azure SQL Database and SQL Server.”

The new service, which will also run on x64-based devices and edge gateways, promises to bring low-latency analytics to edge devices as it allows users to work with streaming data and time-series data, combined with the built-in machine learning capabilities of Azure SQL Database. Like its larger brethren, Azure SQL Database Edge will also support graph data and comes with the same security and encryption features that can, for example, protect the data at rest and in motion, something that’s especially important for an edge device.

As White rightly notes, this also ensures that developers only have to write an application once and then deploy it to platforms that feature Azure SQL Database, good old SQL Server on premises and this new edge version.

SQL Database Edge can run in both connected and fully disconnected fashion, something that’s also important for many use cases where connectivity isn’t always a given, yet where users need the kind of data analytics capabilities to keep their businesses (or drilling platforms, or cruise ships) running.
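The promise here is running the analytics next to the data rather than shipping it to the cloud first. As a rough illustration of that edge-side pattern, here is a local time-series aggregate in Python, using the standard library's sqlite3 as a stand-in embedded database (it is not Azure SQL Database Edge, and the schema is invented for the example):

```python
import sqlite3

# In-memory embedded database standing in for an edge-resident SQL engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device TEXT, ts INTEGER, temp REAL)")

# Simulated stream of time-series sensor readings landing on the device.
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("rig-1", 0, 61.0), ("rig-1", 60, 65.0), ("rig-1", 120, 72.0)],
)

# Low-latency local analytics: per-device average over the stored window,
# computed without any round trip to the cloud.
row = conn.execute(
    "SELECT device, AVG(temp) FROM readings GROUP BY device"
).fetchone()
```

In the disconnected scenarios the article describes, results like this would be computed locally and synced upstream whenever connectivity returns.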


By Frederic Lardinois

Microsoft announces the $3,500 HoloLens 2 Development Edition

As part of its rather odd Thursday afternoon pre-Build news dump, Microsoft today announced the HoloLens 2 Development Edition. The company announced the much-improved HoloLens 2 at MWC Barcelona earlier this year, but it’s not shipping to developers yet. Currently, the best release date we have is “later this year.” The Development Edition will launch alongside the regular HoloLens 2.

The Development Edition, which will retail for $3,500 to own outright or on a $99 per month installment plan, doesn’t feature any special hardware. Instead, it comes with $500 in Azure credits and 3-month trials of Unity Pro and the Unity PiXYZ plugin for bringing engineering renderings into Unity.

To get the Development Edition, potential buyers have to join the Microsoft Mixed Reality Developer Program and those who already pre-ordered the standard edition will be able to change their order later this year.

As far as HoloLens news goes, that’s all a bit underwhelming. Anybody can get free Azure credits, after all (though usually only $200), and free trials of Unity Pro are also readily available (though typically limited to 30 days).

Oddly, the regular HoloLens 2 was also supposed to cost $3,500. It’s unclear if the regular edition will now be somewhat cheaper, cost the same but come without the credits, or really why Microsoft is doing this at all. Turning this into a special “Development Edition” feels more like a marketing gimmick than anything else, as well as an attempt to bring some of the futuristic glamour of the HoloLens visor to today’s announcements.

The folks at Unity are clearly excited, though. “Pairing HoloLens 2 with Unity’s real-time 3D development platform enables businesses to accelerate innovation, create immersive experiences, and engage with industrial customers in more interactive ways,” says Tim McDonough, GM of Industrial at Unity, in today’s announcement. “The addition of Unity Pro and PiXYZ Plugin to HoloLens 2 Development Edition gives businesses the immediate ability to create real-time 2D, 3D, VR, and AR interactive experiences while allowing for the importing and preparation of design data to create real-time experiences.”

Microsoft also today noted that Unreal Engine 4 support for HoloLens 2 will become available by the end of May.


By Frederic Lardinois

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not the newsfeed, that’s not pages, that’s not profiles. That’s marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”


Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


By Arman Tabatabai

Edgybees’s new developer platform brings situational awareness to live video feeds

San Diego-based Edgybees today announced the launch of Argus, its API-based developer platform that makes it easy to add augmented reality features to live video feeds.

Edgybees has long used this capability in its own drone platform for first responders and enterprise customers, which lets users tag and track objects and people in emergency situations, for example, to create better situational awareness on the ground.

I first saw a demo of the service a year ago, when the team walked a group of journalists through a simulated emergency, with live drone footage and an overlay of a street map and the location of ambulances and other emergency personnel. It’s clear how these features could be used in other situations as well, given that few companies have the expertise to combine the video footage, GPS data and other information, including geographic information systems, for their own custom projects.
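The core of an overlay like the one in that demo is registering GPS fixes (ambulances, personnel) to pixel positions in the video frame. A deliberately simplified sketch in Python, assuming the frame covers a known axis-aligned geographic bounding box under a linear projection; real systems must account for camera pose, lens distortion and terrain, and nothing here reflects Edgybees' actual Argus API:

```python
def gps_to_pixel(lat, lon, bounds, width, height):
    """Map a GPS fix to pixel coordinates in a video frame that covers
    `bounds` = (lat_min, lat_max, lon_min, lon_max), using a simple
    linear (equirectangular) mapping."""
    lat_min, lat_max, lon_min, lon_max = bounds
    x = (lon - lon_min) / (lon_max - lon_min) * width
    y = (lat_max - lat) / (lat_max - lat_min) * height  # y grows downward
    return round(x), round(y)

# An ambulance at the center of the area covered by a 1280x720 drone frame.
bounds = (32.05, 32.10, 34.75, 34.85)
x, y = gps_to_pixel(32.075, 34.80, bounds, 1280, 720)
```

Once each object has a pixel position per frame, drawing the icons and labels on top of the video is straightforward; keeping the registration accurate as the drone moves is the hard part.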

Indeed, that’s what inspired the team to open up its platform. As the Edgybees team told me during an interview at the Ourcrowd Summit last month, it’s impossible for the company to build a new solution for every vertical that could make use of it. So instead of even trying (though it’ll keep refining its existing products), it’s now opening up its platform.

“The potential for augmented reality beyond the entertainment sector is endless, especially as video becomes an essential medium for organizations relying on drone footage or CCTV,” said Adam Kaplan, CEO and co-founder of Edgybees. “As forward-thinking industries look to make sense of all the data at their fingertips, we’re giving developers a way to tailor our offering and set them up for success.”

In the run-up to today’s launch, the company already worked with organizations like the PGA to use its software to enhance the live coverage of its golf tournaments.


By Frederic Lardinois

Alibaba acquires Israeli AR startup Infinity Augmented Reality

Infinity Augmented Reality, an Israeli augmented reality startup, has been acquired by Alibaba, the companies announced this weekend. The deal’s terms were not disclosed. Alibaba and InfinityAR have had a strategic partnership since 2016, when Alibaba Group led InfinityAR’s Series C. Since then, the two have collaborated on augmented reality, computer vision and artificial intelligence projects.

Founded in 2013, the startup’s augmented glasses platform enables developers in a wide range of industries (retail, gaming, medical, etc.) to integrate AR into their apps. InfinityAR’s products include software for ODMs and OEMs and an SDK plug-in for 3D engines.

Alibaba’s foray into virtual reality started three years ago, when it invested in Magic Leap and then announced a new research lab in China to develop ways of incorporating virtual reality into its e-commerce platform.

InfinityAR’s research and development team will begin working out of Alibaba’s Israel Machine Laboratory, part of Alibaba DAMO Academy, the R&D initiative it is pouring $15 billion into with the goal of eventually serving two billion customers and creating 100 million jobs by 2036. DAMO Academy collaborates with universities around the world and Alibaba’s Israel Machine Laboratory has a partnership with Tel Aviv University focused on video analysis and machine learning.

In a press statement, the laboratory’s head, Lihi Zelnik-Manor, said “Alibaba is delighted to be working with InfinityAR as one team after three years of partnership. The talented team brings unique knowhow in sensor fusion, computer vision and navigation technologies. We look forward to exploring these leading technologies and offering additional benefits to customers, partners and developers.”


By Catherine Shu

Matterport raises $48M to ramp up its 3D imaging platform

The growth of augmented and virtual reality applications and hardware is ushering in a new age of digital media and imaging technologies, and startups that are putting themselves at the center of that are attracting interest.

TechCrunch has learned and confirmed that Matterport has raised $48 million. The company started out making cameras but has since diversified into a wider platform to capture, create, search and utilise 3D imagery of interior and enclosed spaces for immersive real estate, design, insurance and other B2C and B2B applications. Sources tell us the money came at a pre-money valuation of around $325 million, although the company is not commenting on that.

From what we understand, the funding is coming ahead of a larger growth round from existing and new investors, to tap into what they see as a big opportunity for building and providing (as a service) highly accurate 3D images of enclosed spaces.

The company in December appointed a new CEO, RJ Pittman — who had been the chief product officer at eBay, and before that held executive roles at Apple and Google — also to help fill out that bigger strategy.

Matterport had raised just under $63 million prior to this and had been valued at around $207 million, according to PitchBook estimates. This current round is coming from existing backers, which include Lux Capital, DCM, Qualcomm Ventures and more.

Matterport’s roots are in high-end cameras built to capture multiple images to create 3D interior imagery for a variety of applications from interior design and real estate to gaming. Changing tides in the worlds of industry and hardware have somewhat shifted its course.

On the hardware side, we’ve seen a rise in the functionality of smartphone cameras, as well as a proliferation of specialised 3D cameras at lower price points. So while Matterport still sells its own high-end cameras, it is also starting to work with less expensive devices with spherical lenses — such as the Ricoh Theta, which costs roughly a tenth of what Matterport’s Pro2 camera does — and smartphones.

Using an AI engine — which it has been building for some time — packaged into a service it calls Matterport Cloud 3.0, it converts 2D panoramic and 360-degree images into 3D ones. (Matterport Cloud 3.0 is currently in beta and will be launching fully on the 18th of March, initially supporting the Ricoh Theta V, the Theta Z1, the Insta360 ONE X, and the Leica Geosystems BLK360 laser scanner.)

Matterport is further using this technology to grow its wider database of images. It already has racked up 1.6 million 3D images and millions of 2D images, and at its current growth rate, the aim is to expand its library to 100 million in the coming years, positioning it as a Getty for 3D enclosed images.

These, in turn, will be used in two ways: to feed Matterport’s machine learning to train it to create better and faster 3D images; and to become part of a wider library, accessible to other businesses by way of a set of APIs.

And, from what I understand, the goal will not just be to use images as they are: people would be able to manipulate them to, for example, remove all the furniture in a room and re-stage it completely without needing to physically do that work ahead of listing a house for sale. Another use is adding immersive interior shots into mapping applications like Google’s Street View.

“We are a data company,” RJ Pittman told me when I met him for coffee last month.

The ability to convert 2D into 3D images using artificial intelligence to help automate the process is a potentially big area that Matterport, and its investors, believe will be in increasing demand. That’s not just because people still think there will one day be a bigger market for virtual reality headsets, which will need more interesting content; but because we as consumers already have come to expect more realistic and immersive experiences today, even when viewing things on regular screens; and because B2B and enterprise services (for example design or insurance applications) have also grown in sophistication and now require these kinds of images.

(That demand is driving the creation of other kinds of 3D imaging startups, too. Threedy.ai launched last week with a seed round from a number of angels and VCs to perform a similar kind of 2D-to-3D mapping technique for objects rather than interior spaces. It is already working with a number of e-commerce sites to bypass some of the costs and inefficiencies of more established, manual methods of 3D rendering.)

While Matterport is doubling down on its cloud services strategy, it’s also been making some hires to take the business to its next steps. In addition to Pittman, they have included adding Dave Lippman, formerly design head at eBay, as its chief design officer; and engineering veteran Lou Marzano as its VP of hardware, R&D and manufacturing, with more hires to come.


By Ingrid Lunden

Say hello to Microsoft’s new $3,500 HoloLens with twice the field of view

Microsoft unveiled the latest version of its HoloLens ‘mixed reality’ headset at MWC Barcelona today. The new HoloLens 2 features a significantly larger field of view, higher resolution and a more comfortable fit. Indeed, Microsoft says the device is three times as comfortable to wear (though it’s unclear how Microsoft measured this).

Later this year, HoloLens 2 will be available in the United States, Japan, China, Germany, Canada, United Kingdom, Ireland, France, Australia and New Zealand for $3,500.

One of the knocks against the original HoloLens was its limited field of view. When whatever you wanted to look at was small and straight ahead of you, the effect was striking. But when you moved your head a little bit or looked at a larger object, it suddenly felt like you were looking through a stamp-sized screen. HoloLens 2 features a field of view that’s twice as large as the original.

“Kinect was the first intelligent device to enter our homes,” HoloLens chief Alex Kipman said in today’s keynote, looking back at the device’s history. “It drove us to create Microsoft HoloLens. […] Over the last few years, individual developers, large enterprises, brand-new startups have been dreaming up beautiful things, helpful things.”

The HoloLens was always just as much about the software as the hardware, though. For HoloLens, Microsoft developed a special version of Windows, together with a new way of interacting with the AR objects through gestures like air tap and bloom. In this new version, the interaction is far more natural and lets you tap objects. The device also tracks your gaze more accurately to allow the software to adjust to where you are looking.

“HoloLens 2 adapts to you,” Kipman stressed. “HoloLens 2 evolves the interaction model by significantly advancing how people engage with holograms.”

In its demos, the company clearly emphasized how much faster and more fluid interaction with HoloLens applications becomes when you can adjust a slider, for example, by simply grabbing it and moving it, or tap a button with one or two fingers or your full hand. Microsoft even built a virtual piano that you can play with ten fingers to show off how well the HoloLens can track movement. The company calls this ‘instinctual interaction.’

Microsoft first unveiled the HoloLens concept at a surprise event on its Redmond campus back in 2015. After a limited, invite-only release that started days after the end of MWC 2016, the device went on sale to everybody in August 2016. Four years is a long time between hardware releases, but the company clearly wanted to seed the market and give developers a chance to build the first set of HoloLens applications on a stable platform.

To support developers, Microsoft is also launching a number of Azure services for HoloLens today. These include spatial anchors and remote rendering to help developers stream high-polygon content to HoloLens.

It’s worth noting that Microsoft never positioned the device as consumer hardware. It may have shown off the occasional game, but its focus was always on business applications, with a few educational applications thrown in, too. That trend continued today. Microsoft showed off the ability to have multiple people collaborate around a single hologram, for example. That’s not new, of course, but it goes to show how Microsoft is positioning this technology.

For these enterprises, Microsoft will also offer the ability to customize the device.

“When you change the way you see the world, you change the world you see,” Microsoft CEO Satya Nadella said, repeating a line from the company’s first HoloLens announcement four years ago. He noted that he believes connecting the physical world with the virtual world will transform the way we work.


By Frederic Lardinois

Microsoft bringing Dynamics 365 mixed reality solutions to smartphones

Last year Microsoft introduced several mixed reality business solutions under the Dynamics 365 enterprise product umbrella. Today, the company announced it would be moving these to smartphones in the spring, starting with previews.

The company announced Remote Assist on HoloLens last year. This tool allows a technician working onsite to show a remote expert what they are seeing. The expert can then walk the less experienced employee through the repair. This is great for those companies that have equipped their workforce with HoloLens for hands-free instruction, but not every company can afford the new equipment.

Starting in the spring, Microsoft is going to help with that by introducing Remote Assist for Android phones. Just about everyone has a phone with them, and those with Android devices will be able to take advantage of Remote Assist capabilities without investing in HoloLens. The company is also updating Remote Assist to include mobile annotations, group calling, deeper integration with Dynamics 365 for Field Service along with improved accessibility features on the HoloLens app.

iPhone users shouldn’t feel left out, though, because the company announced a preview of Dynamics 365 Product Visualize for iPhone. This tool lets users show a customer what a customized product will look like in context as they work together. Think of a furniture seller working with customers in their homes to customize the color, fabrics and design right in the room where they will place the furniture, or a car dealer offering different options such as color and wheel styles. Once a customer agrees to a configuration, the data gets saved to Dynamics 365 and shared in Microsoft Teams for greater collaboration across the group of employees working with that customer.

Both of these features are part of the Dynamics 365 spring release and are going to be available in preview starting in April. They are part of a broader release that includes a variety of new artificial intelligence features such as customer service bots and a unified view of customer data across the Dynamics 365 family of products.


By Ron Miller

Daily Crunch: Nvidia breaks with tradition at CES 2019

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here:

1. Nvidia launches the $349 GeForce RTX 2060

Nvidia broke with tradition and put a new focus on gaming at CES. Last night the company unveiled the RTX 2060, a $349 low-end version of its new Turing-based desktop graphics cards. The RTX 2060 will be available on Jan. 15.

2. Elon Musk’s vision of spaceflight is gorgeous 

This spring SpaceX intends to launch the next phase in its space exploration plans. The newly named Starship rocket, previously known as the BFR, is intended to be the rocket to rule them all. And it’s going to look good doing it.

3. Apple’s increasingly tricky international trade-offs

Far from its troubles in emerging markets like China, Apple is starting to face backlash from a European population that’s crying foul over the company’s perceived hypocrisy on data privacy. It’s become clear that Apple’s biggest success is now its biggest challenge in Europe.

Photo by Justin Sullivan/Getty Images

4. Marc Andreessen: audio will be “titanically important” and VR will be “1,000” times bigger than AR

In a recently recorded podcast, Marc Andreessen gave some predictions on the future of the tech industry. Surprisingly, the all-star investor is continuing his support of the shaky VR industry, saying that expanding the immersive world will require us to remove the head-mounted displays we’ve become accustomed to.

5. Fitness marketplace ClassPass acquires competitor GuavaPass

ClassPass, the five-year-old fitness marketplace, is in the midst of an expansion sprint. The company announced yesterday that it’s acquiring one of its competitors, GuavaPass, for an undisclosed amount to expand into Asia. The move now puts ClassPass in more than 80 markets across 11 countries, with plans to expand to 50 new cities in 2019.

6. Apple shows off new smart home products from HomeKit partners

Apple gave a snapshot of its future smart home ecosystem at CES. Looks like an array of smart light switches, door cameras, electrical outlets and more are on the way and will be configurable through the Home app and Siri.

7. Parcel Guard’s smart mailbox protects your packages from porch thieves

At CES, Danby is showing off its newly launched Parcel Guard smart mailbox, which allows deliveries to be left securely at customers’ doorsteps. Turns out you won’t need a farting glitter bomb to protect your packages after all. The Parcel Guard starts at $399, and pre-orders will be available this week.


By Taylor Nakagawa

Enterprise AR is an opportunity to “do well by doing good”, says General Catalyst

A founder-investor panel on augmented reality (AR) technology here at TechCrunch Disrupt Berlin suggests growth hopes for the space have regrouped around enterprise use-cases, after the VR consumer hype cycle landed with yet another flop in the proverbial ‘trough of disillusionment’.

Matt Miesnieks, CEO of mobile AR startup 6d.ai, conceded the space has generally been on another downer but argued it’s coming out of its third hype cycle now with fresh b2b opportunities on the horizon.

6d.ai investor General Catalyst‘s Niko Bonatsos was also on stage, and both suggested the challenge for AR startups is figuring out how to build for enterprises so the b2b market can carry the mixed reality torch forward.

“From my point of view the fact that Apple, Google, Microsoft, have made such big commitments to the space is very reassuring over the long term,” said Miesnieks. “Similar to the smartphone industry ten years ago we’re just gradually seeing all the different pieces come together. And as those pieces mature we’ll eventually, over the next few years, see it sort of coalesce into an iPhone moment.”

“I’m still really positive,” he continued. “I don’t think anyone should be looking for some sort of big consumer hit product yet but in verticals in enterprise, and in some of the core tech enablers, some of the tool spaces, there’s really big opportunities there.”

Investors shot the arrow over the target where consumer VR/AR is concerned because they’d underestimated how challenging the content piece is, Bonatsos suggested.

“I think what we got wrong is probably the belief that we thought more indie developers would have come into the space and that by now we would probably have, I don’t know, another ten Pokémon-type consumer massive hit applications. This is not happening yet,” he said.

“I thought we’d have a few more games because games always lead the adoption to new technology platforms. But in the enterprise this is very, very exciting.”

“For sure also it’s clear that in order to have the iPhone moment we probably need to have much better hardware capabilities,” he added, suggesting everyone is looking to the likes of Apple to drive that forward in the future. On the plus side, he said current sentiment is “much, much, much better than what it was a year ago”.

Discussing potential b2b applications for AR tech one idea Miesnieks suggested is for transportation platforms that want to link a rider to the location of an on-demand and/or autonomous vehicle.

Another area of opportunity he sees is working with hardware companies — adding spatial awareness to devices such as smartphones and drones to expand their capabilities.

More generally they mentioned training for technical teams, field sales and collaborative use-cases as areas with strong potential.

“There are interesting applications in pharma, oil & gas where, with the aid of the technology, you can do very detailed stuff that you couldn’t do before because… you can follow everything on your screen and you can use your hands to do whatever it is you need to be doing,” said Bonatsos. “So that’s really, really exciting.

“These are some of the applications that I’ve seen. But it’s early days. I haven’t seen a lot of products in the space. It’s more like there’s one dev shop is working with the chief innovation officer of one specific company that is much more forward thinking and they want to come up with a really early demo.

“Now we’re seeing some early stage tech startups that are trying to attack these problems. The good news is that good dollars is being invested in trying to solve some of these problems — and whoever figures out how to get dollars from the… bigger companies, these are real enterprise businesses to be built. So I’m very excited about that.”

At the same time, the panel delved into some of the complexities and social challenges facing technologists as they try to integrate blended reality into, well, the real deal.

Including raising the spectre of Black Mirror style dystopia once smartphones can recognize and track moving objects in a scene — and 6d.ai’s tech shows that’s coming.

Miesnieks showed a brief video demo of 3D technology running live on a smartphone that’s able to identify cars and people moving through the scene in real time.

“Our team were able to solve this problem probably a year ahead of where the rest of the world is at. And it’s exciting. If we showed this to anyone who really knows 3D they’d literally jump out of the chair. But… it opens up all of these potentially unintended consequences,” he said.

“We’re wrestling with what might this be used for. Sure it’s going to make Pokémon game more fun. It could also let a blind person walk down the street and have awareness of cars and people and they may not need a cane or something.

“But it could let you like tap and literally have people be removed from your field of view and so you only see the type of people that you want to look at. Which can be dystopian.”

He pointed to issues being faced by the broader technology industry now, around social impacts and areas like privacy, adding: “We’re seeing some of the social impacts of how this stuff can go wrong, even if you assume good intentions.

“These sort of breakthroughs that we’re having are definitely causing us to be aware of the responsibility we have to think a bit more deeply about how this might be used for the things we didn’t expect.”

From the investor point of view Bonatsos said his thesis for enterprise AR has to be similarly sensitive to the world around the tech.

“It’s more about can we find the domain experts, people like Matt, that are going to do well by doing good. Because there are a tonne of different parameters to think about here and have the credibility in the market to make it happen,” he suggested, noting: “It’s much more like traditional enterprise investing.”

“This is a great opportunity to use this new technology to do well by doing good,” Bonatsos continued. “So the responsibility is here from day one to think about privacy, to think about all the fake stuff that we could empower, what do we want to do, what do we want to limit? As well as, as we’re creating this massive, augmented reality, 3D version of the world — like who is going to own it, and share all this wealth? How do we make sure that there’s going to be a whole new ecosystem that everybody can take part of it. It’s very interesting stuff to think about.”

“Even if we do exactly what we think is right, and we assume that we have good intentions, it’s a big grey area in lots of ways and we’re going to make lots of mistakes,” conceded Miesnieks, after discussing some of the steps 6d.ai has taken to try to reduce privacy risks around its technology — such as local processing coupled with anonymizing/obfuscating any data that is taken off the phone.

“When [mistakes] happen — not if, when — all that we’re going to be able to rely on is our values as a company and the trust that we’ve built with the community by saying these are our values and then actually living up to them. So people can trust us to live up to those values. And that whole domain of startups figuring out values, communicating values and looking at this sort of abstract ‘soft’ layer — I think startups as an industry have done a really bad job of that.

“Even big companies. There’s only a handful that you could say… are pretty clear on their values. But for AR and this emerging tech domain it’s going to be, ultimately, the core that people trust us.”

Bonatsos also pointed to rising political risk as a major headwind for startups in this space — noting how China’s government has decided to regulate the gaming market because of social impacts.

“That’s unbelievable. This is where we’re heading with the technology world right now. Because we’ve truly made it. We’ve become mainstream. We’re the incumbents. Anything we build has huge, huge intended and unintended consequences,” he said.

“Having a government that regulates how many games that can be built or how many games can be released — like that’s incredible. No company had to think of that before as a risk. But when people are spending so many hours and so much money on the tech products they are using every day. This is the [inevitable] next step.”


By Natasha Lomas

Upskill launches support for Microsoft HoloLens

Upskill has been working on a platform to support augmented and mixed reality for almost as long as most people have been aware of the concept. It began developing an agnostic AR/MR platform way back in 2010. Google Glass didn’t even appear until two years later. Today, the company announced the early release of Skylight for Microsoft HoloLens.

Upskill has been developing Skylight as an operating platform to work across all devices, regardless of the manufacturer, but company co-founder and CEO Brian Ballard sees something special with HoloLens. “What HoloLens does for certain types of experiences, is it actually opens up a lot more real estate to display information in a way that users can take advantage of,” Ballard explained.

He believes the Microsoft device fits well within the broader approach his company has been taking over the last several years to support the range of hardware on the market while developing solutions for hands-free and connected workforce concepts.

“This is about extending Skylight into the spatial computing environment making sure that the workflows, the collaboration, the connectivity is seamless across all of these different devices,” he told TechCrunch.

Microsoft itself just announced some new HoloLens use cases for its Dynamics 365 platform around remote assistance and 3D layout, use cases which play to the HoloLens strengths, but Ballard says his company is a partner with Microsoft, offering an enhanced, full-stack solution on top of what Microsoft is giving customers out of the box.

That is certainly something Terry Farrell, Microsoft’s director of product marketing for mixed reality, recognizes and acknowledges. “As adoption of Microsoft HoloLens continues to rapidly increase in industrial settings, Skylight offers a software platform that is flexible and can scale to meet any number of applications well suited for mixed reality experiences,” he said in a statement.

That involves features like spatial content placement, which allows employees to work with digital content in HoloLens while keeping their hands free to work in the real world. They enhance this with the ability to see multiple reference materials across multiple windows at the same time, something we are used to doing with a desktop computer, but not with a device on our faces like HoloLens. Finally, workers can use hand gestures and simple gazes to navigate in virtual space, directing applications or moving windows, as we are used to doing with a keyboard or mouse.

Upskill also builds on the Windows 10 capabilities in HoloLens with its broad experience securely connecting to back-end systems to pull the information into the mixed reality setting wherever it lives in the enterprise.

The company is based outside of Washington, D.C. in Herndon, Virginia. It has raised over $45 million, according to Crunchbase. Ballard says the company currently has 70 employees. Customers using Skylight include Boeing, GE, Coca-Cola, Telstra and Accenture.


By Ron Miller