Messaging app Wire confirms $8.2M raise, responds to privacy concerns after moving holding company to the US

Big changes are afoot for Wire, an enterprise-focused end-to-end encrypted messaging app and service that advertises itself as “the most secure collaboration platform”. In February, Wire quietly raised $8.2 million from Morpheus Ventures and others, we’ve confirmed — the first funding amount it has ever disclosed — and alongside that external financing, it moved its holding company in the same month to the US from Luxembourg, a switch that Wire’s CEO Morten Brogger described in an interview as “simple and pragmatic.”

He also said that Wire is planning to introduce a freemium tier to its existing consumer service — which itself has half a million users — while working on a larger round of funding to fuel more growth of its enterprise business — a key reason for moving to the US, he added: There is more money to be raised there.

“We knew we needed this funding, and additional [capital], to support continued growth. We made the decision that at some point in time it will be easier to get funding in North America, where there’s six times the amount of venture capital,” he said.

While Wire has moved its holding company to the US, it is keeping the rest of its operations as is. Customers are licensed and serviced from Wire Switzerland; the software development team is in Berlin, Germany; and hosting remains in Europe.

The news of Wire’s US move and the basics of its February funding — sans value, date or backers — came out this week via a blog post that raises questions about whether a company that trades on the idea of data privacy should itself be more transparent about its activities.

The changes to Wire’s financing and legal structure had not been communicated to users until news started to leak out, which raises questions not just about transparency, but about how well Wire’s privacy commitments will hold up now that the company’s ownership sits on US soil.

It was an issue picked up and amplified by NSA whistleblower Edward Snowden. Via Twitter, he described the move to the US as “not appropriate for a company claiming to provide a secure messenger — claims a large number of human rights defenders relied on.”

The key question is whether Wire’s shift to the US puts users’ data at risk — a question that Brogger claims is straightforward to answer: “We are in Switzerland, which has the best privacy laws in the world” — it’s subject to Europe’s General Data Protection Regulation framework (GDPR) on top of its own local laws — “and Wire now belongs to a new group holding, but there [is] no change in control.”

In its blog post published in the wake of blowback from privacy advocates, Wire also claims it “stands by its mission to best protect communication data with state-of-the-art technology and practice” — listing several items in its defence:

  • All source code has been and will be available for inspection on GitHub (github.com/wireapp).
  • All communication through Wire is secured with end-to-end encryption — messages, conference calls, files. The decryption keys are stored only on user devices, not on Wire’s servers. Wire also gives companies the option to deploy their own instances of Wire in their own data centers.
  • Wire has started working on a federated protocol to connect on-premise installations and make messaging and collaboration more ubiquitous.
  • Wire believes that data protection is best achieved through state-of-the-art encryption and continues to innovate in that space with Messaging Layer Security (MLS).
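The device-held-keys model described in the list above can be illustrated with a toy key exchange: two parties derive the same secret without it ever crossing the wire, so a server in the middle only ever relays public values and ciphertext. This is a simplified stdlib-only sketch, not Wire’s actual Proteus or MLS protocol; the tiny prime and XOR cipher are for illustration only.

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement (illustration only: real systems use
# vetted curves such as X25519 and AEAD ciphers, as Wire's protocols do).
P = 2**61 - 1   # small Mersenne prime for demonstration, far too small for real use
G = 5

# Each device keeps its private exponent; only public values are exchanged.
a = secrets.randbelow(P - 2) + 2
b = secrets.randbelow(P - 2) + 2
A, B = pow(G, a, P), pow(G, b, P)

# Both sides derive the same key; it never crosses the wire.
k_alice = hashlib.sha256(str(pow(B, a, P)).encode()).digest()
k_bob = hashlib.sha256(str(pow(A, b, P)).encode()).digest()
assert k_alice == k_bob

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher: XOR data with SHA-256(key || counter) blocks."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out += bytes(x ^ y for x, y in zip(data[i:i + 32], block))
    return bytes(out)

ciphertext = xor_stream(k_alice, b"meet at noon")        # encrypted on sender's device
assert xor_stream(k_bob, ciphertext) == b"meet at noon"  # decrypted only on recipient's
```

The point of the sketch is structural: the server relaying A, B and the ciphertext learns nothing it can decrypt, which is why on-premise deployment changes where ciphertext is stored but not who holds the keys.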

But where data privacy and US law are concerned, it’s complicated. Snowden famously leaked scores of classified documents disclosing the extent of US government mass surveillance programs in 2013, including how data-harvesting was embedded in US-based messaging and technology platforms.

Six years on, the political and legal ramifications of that disclosure are still playing out — with a key judgement pending from Europe’s top court which could yet unseat the current data transfer arrangement between the EU and the US.

Privacy versus security

Wire launched at a time when interest in messaging apps was at a high watermark. The company made its debut in the middle of February 2014, and it was only one week later that Facebook acquired WhatsApp for the princely sum of $19 billion. We described Wire’s primary selling point at the time as a “reimagining of how a communications tool like Skype should operate had it been built today” rather than in 2003.

That meant encryption and privacy protection, but also better audio tools and file compression and more. It was a pitch that seemed especially compelling considering the background of the company. Skype co-founder Janus Friis and funds connected to him were the startup’s first backers (and they remain the largest shareholders); Wire was co-founded by Skype alums Jonathan Christensen and Alan Duric (no longer with the company); and even new investor Morpheus has Skype roots.

Even with the Skype pedigree, the strategy faced a big challenge.

“The consumer messaging market is lost to the Facebooks of the world, which dominate it,” Brogger said today. “However, we made a clear insight, which is the core strength of Wire: security and privacy.”

That, combined with the trend around the consumerization of IT that’s brought new tools to business users, is what led Wire to the enterprise market in 2017.

But fast forward to today, and even if security and privacy are two sides of the same coin, choosing which to optimise for in features and future development is not so simple. That trade-off is the open question now, and it is what critics are concerned with.

“Wire was always for profit and planned to follow the typical venture backed route of raising rounds to accelerate growth,” one source familiar with the company told us. “However, it took time to find its niche (B2B, enterprise secure comms).

“It needed money to keep the operations going and growing. [But] the new CEO, who joined late 2017, didn’t really care about the free users, and the way I read it now, the transformation is complete: ‘If Wire works for you, fine, but we don’t really care about what you think about our ownership or funding structure as our corporate clients care about security, not about privacy.’”

And that is the message you get from Brogger, too, who describes individual consumers as “not part of our strategy”, but also not entirely removed from it, either, as the focus shifts to enterprises and their security needs.

Brogger said there are still half a million individuals on the platform, and the company will come up with ways to continue to serve them under the same privacy policies and with the same kind of service as the enterprise users. “We want to give them all the same features with no limits,” he added. “We are looking to switch it into a freemium model.”

On the other side, “We are having a lot of inbound requests on how Wire can replace Skype for Business,” he said. “We are the only one who can do that with our level of security. It’s become a very interesting journey and we are super excited.”

Part of the company’s push into enterprise has also seen it make a number of hires. This has included bringing in two former Huddle C-suite execs, Brogger as CEO and Rasmus Holst as chief revenue officer — a bench that Wire expanded this week with three new hires from three other B2B businesses: a VP of EMEA sales from New Relic, a VP of finance from Contentful and a VP of Americas sales from Xeebi.

Such growth clearly comes with a price tag, which is why Wire is opening itself to more funding and more exposure in the US, but also to more scrutiny and questions from those who counted on its services before the change.

Brogger said inbound interest has been strong and he expects the startup’s next round to close in the next two to three months.


By Ingrid Lunden

Early stage privacy startup DataGrail gets boost from Okta partnership

When Okta launched its $50 million Okta Ventures investment fund in April, one of its investments was in an early stage privacy startup called DataGrail. Today, the companies announced a partnership that they hope will help boost DataGrail, while providing Okta customers with a privacy tool option.

DataGrail CEO and co-founder Daniel Barber says that with the increase in privacy legislation, from GDPR to the upcoming California Consumer Privacy Act (and many other proposed bills in various states of progress), companies need tools to help them comply and protect user privacy. “We are a privacy platform focused on delivering continuous compliance for businesses,” Barber says.

They do this in a way that fits nicely with Okta’s approach to identity. Whereas Okta provides a place to access all of your cloud applications from a single place with one logon, DataGrail connects to your applications with connectors to provide a way to monitor privacy across the organization from a single view.

It currently has 180 connectors to common enterprise applications like Salesforce, HubSpot, Marketo and Oracle. It then collects this data and presents it to the company in a central interface to help ensure privacy. “Our key differentiator is that we’re able to deliver a live data map of the customer data that exists within an organization,” Barber explained.
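A connector architecture of this kind, with per-app adapters feeding one aggregated view, can be sketched as follows. The connector names, record shapes and `records_for` interface here are hypothetical illustrations, not DataGrail’s actual API.

```python
from typing import Iterator, Protocol


class Connector(Protocol):
    """Hypothetical connector interface: one adapter per SaaS application."""
    name: str

    def records_for(self, email: str) -> Iterator[dict]: ...


class CRMConnector:
    name = "salesforce"

    def records_for(self, email: str) -> Iterator[dict]:
        # A real connector would call the vendor's API here.
        yield {"object": "Contact", "field": "Email", "value": email}


class MarketingConnector:
    name = "hubspot"

    def records_for(self, email: str) -> Iterator[dict]:
        yield {"object": "Subscriber", "field": "email", "value": email}


def build_data_map(connectors, email: str) -> dict:
    """One live view of where a person's data exists across connected apps."""
    return {c.name: list(c.records_for(email)) for c in connectors}


data_map = build_data_map([CRMConnector(), MarketingConnector()], "jane@example.com")
assert set(data_map) == {"salesforce", "hubspot"}
```

The aggregation step is what turns 180 independent integrations into the single “live data map” the article describes: each connector answers the same question, and the platform merges the answers.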

The company just launched last year, but Barber sees similarities in their approaches. “We see clear alignment on our go-to-market approach. The product that we built aligns very similarly to the way Okta is deployed, and we’re a true partner with the industry leader in identity management,” he said.

Monty Gray, SVP and head of corporate development at Okta, says that the company is always looking for innovative companies that fit well with Okta. The company liked DataGrail enough to contribute to the startup’s $5.2 million Series A investment in July.

Gray says that while DataGrail isn’t the only privacy company it’s partnering with, he likes how DataGrail is helping with privacy compliance in large organizations. “We saw how DataGrail was thinking about [privacy] in a modern fashion. They enable these technology companies to become not only compliant, but do it in a way where they were not directly in the flow, that they would get out of the way,” Gray explained.

Barber says having the help of Okta could help drive sales, and for a company that’s just getting off the ground, having a public company in your corner as an investor, as well as a partner, could help push the company forward. That’s all that any early startup can hope for.


By Ron Miller

Nadella warns government conference not to betray user trust

Microsoft CEO Satya Nadella, delivering the keynote at the Microsoft Government Leaders Summit in Washington, DC today, had a message for attendees: maintain user trust in their tools and technologies above all else.

He said it is essential to earn user trust, regardless of your business. “Now, of course, the power law here is all around trust because one of the keys for us, as providers of platforms and tools, trust is everything,” he said today. But he says it doesn’t stop with the platform providers like Microsoft. Institutions using those tools also have to keep trust top of mind or risk alienating their users.

“That means you need to also ensure that there is trust in the technology that you adopt, and the technology that you create, and that’s what’s going to really define the power law on this equation. If you have trust, you will have exponential benefit. If you erode trust it will exponentially decay,” he said.

He says Microsoft sees trust along three dimensions: privacy, security and ethical use of artificial intelligence. All of these come together in his view to build a basis of trust with your customers.

Nadella said he sees privacy as a human right, pure and simple, and it’s up to vendors to ensure that privacy or lose the trust of their customers. “The investments around data governance is what’s going to define whether you’re serious about privacy or not,” he said. For Microsoft, they look at how transparent they are about how they use the data, their terms of service, and how they use technology to ensure that’s being carried out at runtime.

He reiterated the call he made last year for a federal privacy law. With GDPR in Europe and California’s CCPA coming online in January, he sees a centralized federal law as a way to streamline regulations for business.

As for security, as you might expect, he defined it in terms of how Microsoft was implementing it, but the message was clear that you needed security as part of your approach to trust, regardless of how you implement that. He asked several key questions of attendees.

“Cyber is the second area where we not only have to do our work, but you have to [ask], what’s your operational security posture, how have you thought about having the best security technology deployed across the entire chain, whether it’s on the application side, the infrastructure side or on the endpoint side, and most importantly, around identity,” Nadella said.

The final piece, one which he said was just coming into play, was how you use artificial intelligence ethically, a sensitive topic for a government audience, but one he wasn’t afraid to broach. “One of the things people say is, ‘Oh, this AI thing is so unexplainable, especially deep learning.’ But guess what, you created that deep learning [model]. In fact, the data on top of which you train the model, the parameters and the number of parameters you use — a lot of things are in your control. So we should not abdicate our responsibility when creating AI,” he said.

Whether Microsoft or the US government can adhere to these lofty goals is unclear, but Nadella was careful to outline them both for his company’s benefit and this particular audience. It’s up to both of them to follow through.


By Ron Miller

Osano makes business risk and compliance (somewhat) sexy again

A new startup is clearing the way for other companies to better monitor and manage their risk and compliance with privacy laws.

Osano, an Austin, Texas-based startup, bills itself as a privacy platform, using a software-as-a-service solution to give businesses real-time visibility into their current privacy and compliance posture. That gives startups and enterprises large and small insight into whether or not they’re complying with global or state privacy laws, and helps them manage risk factors associated with their business, such as when a partner or vendor privacy policy changes.

The company launched its privacy platform at Disrupt SF on the Startup Battlefield stage.

Risk and compliance is typically a fusty, boring and frankly unsexy topic. But with ever-changing legal landscapes and constantly moving requirements, it’s hard to keep up. Although Europe’s GDPR has been around for a year, it’s still causing headaches. And stateside, the California Consumer Privacy Act is about to kick in, and large companies fear they won’t be able to comply with it.

Osano mixes tech with its legal chops to give companies, particularly smaller startups without their own legal support, a one-stop shop for insight, advice and guidance.

“We believe that any time a company does a better job with transparency and data protection, we think that’s a really good thing for the internet,” the company’s founder Arlo Gilbert told TechCrunch.

Gilbert, along with his co-founder and chief technology officer Scott Hertel, has built the company’s software-as-a-service solution with several components in mind, including maintaining its scorecard of 6,000 vendors and their privacy practices to objectively grade how a company fares, as well as monitoring vendor privacy policies to spot changes as soon as they are made.

One of its standout features is allowing its corporate customers to comply with dozens of privacy laws across the world with a single line of code.

You’ve seen them before: the “consent” popups that ask (or demand) you to allow cookies or you can’t come in. Osano’s consent management lets companies install dynamic consent management in just five minutes, which delivers the right consent message to the right people in the best language. Using the blockchain, the company says it can record and provide searchable and cryptographically verifiable proof-of-consent in the event of a person’s data access request.
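The tamper-evident record-keeping Osano describes can be approximated with a simple hash chain, where each consent record commits to the hash of its predecessor. This is a minimal sketch of the general technique, not Osano’s implementation; the record fields are hypothetical.

```python
import hashlib
import json
import time


def record_consent(chain: list, user_id: str, purpose: str, granted: bool) -> dict:
    """Append a tamper-evident consent record; each entry hashes its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"user": user_id, "purpose": purpose, "granted": granted,
            "ts": time.time(), "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body


def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True


chain = []
record_consent(chain, "u42", "analytics_cookies", True)
record_consent(chain, "u42", "marketing_cookies", False)
assert verify_chain(chain)

chain[0]["granted"] = False     # tampering with history...
assert not verify_chain(chain)  # ...is immediately detectable
```

Anchoring such hashes to a blockchain would add the public, independently verifiable timestamping the company claims; the chaining itself is what makes individual records provably unmodified.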

“There are 40 countries with cookie and data privacy laws that require consent,” said Gilbert. “Each of them has nuances about what they consider to be consent: what you have to tell them; what you have to offer them; when you have to do it.”

Osano also has an office in Dublin, Ireland, allowing its corporate customers to say it has a physical representative in the European Union — a requirement for companies that have to comply with GDPR.

And, for corporate customers with questions, they can dial-an-expert from Osano’s outsourced and freelance team of attorneys and privacy experts to help break down complex questions into bitesize answers.

Or as Gilbert calls it, “Uber, but for lawyers.”

The concept seems novel but it’s not restricted to GDPR or California’s upcoming law. The company says it monitors international, federal and state legislatures for new laws and changes to existing privacy legislation to alert customers of upcoming changes and requirements that might affect their business.

In other words, plug in a new law or two and Osano’s customers are as good as covered.

Osano is still in its pre-seed stage. But while the company is focusing on its product, it’s not thinking too much about money.

“We’re planning to kind of go the binary outcome — go big or go home,” said Gilbert, with his eye on the small- to medium-sized enterprise. “It’s greenfield right now. There’s really nobody doing what we’re doing.”

The plan is to take on enough funding to own the market, and then focus on turning a profit. Even so, Gilbert said, the company is registered as a B Corporation, a more socially conscious and less profit-driven corporate structure, allowing it to generate profits while maintaining its social vision.

The company’s idea is strong; its corporate structure seems mindful. But is it enough of an enticement for fellow startups and small businesses? It’s either dominate the market or bust, and only time will tell.


By Zack Whittaker

Segment’s new privacy portal helps companies comply with expanding regulations

With the EU’s sweeping GDPR privacy laws and the upcoming California Consumer Privacy Act (CCPA), companies have to figure out how to keep private data private or face massive fines. Segment today announced a new Privacy Portal that could help companies trying to remain in compliance.

Segment CEO and co-founder Peter Reinhardt says companies have built a false dichotomy between personalization and privacy, and he says that it doesn’t have to be that way. “We’ve noticed that a lot of companies feel this tension between privacy and growth. They basically see a paradox between being either privacy-respectful versus providing a very personalized experience,” he said.

The new Privacy Portal is designed to be a central place where customers can sort their data in an automated way and create an inventory of what data they have inside the company. “By introducing a single point of collection for all the data, it creates a choke point on the data collection to allow you to actually govern that, a single place to inspect, monitor, alert and have an inventory of all the data that you’re collecting, so that you can ensure that it’s compliant, and so that you can ensure that you’ve got consent, and all of those things,” he said.

The way this works is that as the data comes into the portal, it automatically gets put into a bucket based on the level of concern about it. “We are basically giving customers monitoring and a consolidated view over all of the different data points that are coming in. So we have matches that basically look for things that might be PII, and we automatically grade most of them with green, yellow or red in terms of the level of potential concern,” Reinhardt explained.

On top of that, companies can apply policies, based on the grades, say letting anything that’s green or yellow through, but preventing any red data (PII) from being shared with other applications.
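A policy layer like the one described, where incoming fields are graded and only allowed grades flow onward, can be sketched with simple pattern matching. The rules, grades and field names below are hypothetical stand-ins, not Segment’s actual matchers.

```python
import re

# Hypothetical grading rules: red for clear PII, yellow for possible
# identifiers, green for everything else.
RULES = [
    ("red", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),    # SSN-like pattern
    ("red", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),  # email address
    ("yellow", re.compile(r"\b\d{5}(-\d{4})?\b")),    # ZIP code
]


def grade(value: str) -> str:
    """Return the first (most severe) grade whose pattern matches."""
    for level, pattern in RULES:
        if pattern.search(value):
            return level
    return "green"


def apply_policy(event: dict, allowed=("green", "yellow")) -> dict:
    """Forward only the fields whose grade the policy allows."""
    return {k: v for k, v in event.items() if grade(str(v)) in allowed}


event = {"page": "/pricing", "zip": "94103", "email": "jane@example.com"}
assert grade(event["email"]) == "red"
assert apply_policy(event) == {"page": "/pricing", "zip": "94103"}
```

The design point mirrors the article’s “choke point” idea: because every event passes through one grading function before fan-out, a single policy change governs what all downstream tools receive.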

In addition, to make sure that the product can connect to as many marketing tools as possible for the most complete data picture, the company is releasing a new feature called Functions, which lets customers build their own custom data connectors in a low-code way. With thousands of marketing technology tools on the market, it’s impossible for Segment to supply connectors for all of them out of the box.

The two tools are available to Segment customers starting today.


By Ron Miller

BigID announces $50M Series C investment as privacy takes center stage

It turns out GDPR was just the tip of the privacy iceberg. With California’s privacy law coming online January 1st and dozens more in various stages of development, it’s clear that governments are taking privacy seriously, which means companies have to as well. New York startup BigID, which has been developing a privacy platform for the last several years, finds itself in a good position to help. Today, the company announced a $50 million Series C.

The round was led by Bessemer Venture Partners with help from SAP.io Fund, Comcast Ventures, Boldstart Ventures, Scale Venture Partners and ClearSky. New investor Salesforce Ventures also participated. Today’s investment brings the total raised to over $96 million, according to Crunchbase.

In addition to the funding, the company is also announcing the formation of a platform of sorts, which will offer a set of privacy services for customers. It includes data discovery, classification and correlation. “We’ve separated the product into some constituent parts. While it’s still sold as a broad-based solution, it’s much more of a platform now in the sense that there’s a core set of capabilities that we heard over and over that customers want,” CEO and co-founder Dimitri Sirota told TechCrunch.

He says that these capabilities really enable customers to see connections in the data across a set of disparate data sources. “There are a lot of products that do the request part, but there’s nobody that’s able to look across your entire data landscape, the hundreds of petabytes, and pick out the data in Salesforce, Workday, AWS, mainframe, and all these places you could have data on [an individual], and show how it’s all tied together,” Sirota explained.

It’s interesting to see the mix of strategic investors and traditional venture capitalists who are investing in the company. The strategics in particular see the privacy landscape as well as anyone, and Sirota says it’s a case of privacy mattering more than ever and his company providing the means to navigate the changing landscape. “Consumers care about privacy, which means legislators care about it, which ultimately means companies have to care about it,” he said. He added, “Strategics, whether they are companies that collect personal data or those that sell to those companies, therefore have an interest in BigID.”

The company has been growing fast and raising money quickly to help it scale to meet demand. Starting in January 2018, it raised $14 million. Just six months later, it raised another $30 million and you can tack on today’s $50 million. Sirota says having money in the bank and seeing these investments helps give enterprise customers confidence that the company is in this for the long haul.

Sirota wouldn’t give an exact valuation, only saying that while the company is not a unicorn, the valuation was a “robust number.” He says the plan now is to keep expanding the platform, and there will be announcements coming soon around partnerships, customers and new capabilities.

Sirota will be appearing at TechCrunch Sessions: Enterprise on September 5th at 11 am on the panel, Cracking the Code: From Startup to Scaleup in Enterprise Software.


By Ron Miller

Preclusio uses machine learning to comply with GDPR, other privacy regulations

As privacy regulations like GDPR and the California Consumer Privacy Act proliferate, more startups are looking to help companies comply. Enter Preclusio, a member of the Y Combinator Summer 2019 class, which has developed a machine learning-fueled solution to help companies adhere to these privacy regulations.

“We have a platform that is deployed on-prem in our customer’s environment, and helps them identify what data they’re collecting, how they’re using it, where it’s being stored and how it should be protected. We help companies put together this broad view of their data, and then we continuously monitor their data infrastructure to ensure that this data continues to be protected,” company co-founder and CEO Heather Wade told TechCrunch.

She says that the company made a deliberate decision to keep the solution on-prem. “We really believe in giving our clients control over their data. We don’t want to be just another third-party SaaS vendor that you have to ship your data to,” Wade explained.

That said, customers can run it wherever they wish, whether that’s on-prem or in the cloud in Azure or AWS. Regardless of where it’s stored, the idea is to give customers direct control over their own data. “We are really trying to alert our customers to threats or to potential privacy exceptions that are occurring in their environment in real time, and being in their environment is really the best way to facilitate this,” she said.

The product works by getting read-only access to the data, then begins to identify sensitive data in an automated fashion using machine learning. “Our product automatically looks at the schema and samples of the data, and uses machine learning to identify common protected data,” she said. Once that process is completed, a privacy compliance team can review the findings and adjust these classifications as needed.
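That schema-and-sample scan can be approximated in miniature: inspect column names for hints, then test a sample of values against known patterns. Preclusio uses machine learning for this step; the regex heuristics, thresholds and labels below are a hypothetical stand-in, not its actual model.

```python
import random
import re

# Hypothetical heuristics standing in for a trained classifier.
NAME_HINTS = re.compile(r"(email|phone|ssn|dob|address|name)", re.I)
VALUE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}


def classify_column(col_name: str, values: list, sample_size: int = 100) -> str:
    """Flag a column as protected from its name or a sample of its values."""
    if NAME_HINTS.search(col_name):
        return "protected"
    sample = random.sample(values, min(sample_size, len(values)))
    for pattern in VALUE_PATTERNS.values():
        hits = sum(bool(pattern.fullmatch(str(v))) for v in sample)
        if hits / len(sample) > 0.5:  # majority of sampled values match
            return "protected"
    return "unclassified"


# A column named innocuously but holding emails is still flagged.
assert classify_column("contact", ["a@x.com", "b@y.org", "c@z.net"]) == "protected"
assert classify_column("page_views", [3, 17, 42]) == "unclassified"
```

Sampling rather than scanning every row is what makes this kind of read-only pass cheap enough to run continuously, with the human compliance team reviewing and correcting the automated labels afterward, as the article describes.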

Wade, who started the company in March, says the idea formed at previous positions where she was responsible for implementing privacy policies and found there weren’t adequate solutions on the market to help. “I had to face the challenges first-hand of dealing with privacy and compliance and seeing how resources were really taken away from our engineering teams and having to allocate these resources to solving these problems internally, especially early on when GDPR was first passed, and there really were not that many tools available in the market,” she said.

Interestingly, Wade’s co-founder is her husband, John. She says they deal with the intensity of being married and startup founders by sticking to their areas of expertise. He’s the marketing person and she’s the technical one.

She says they applied to Y Combinator because they wanted to grow quickly, and that timing is important with more privacy laws coming online soon. She has been impressed with the generosity of the community in helping them reach their goals. “It’s almost indescribable how generous and helpful other folks who’ve been through the YC program are to the incoming batches, and they really do have that spirit of paying it forward,” she said.


By Ron Miller

OneTrust raises $200M at a $1.3B valuation to help organizations navigate online privacy rules

GDPR, and the newer California Consumer Privacy Act, have given a legal bite to ongoing developments in online privacy and data protection: it’s always good practice for companies with an online presence to take measures to safeguard people’s data, but now failing to do so can land them in some serious hot water.

Now — to underscore the urgency and demand in the market — one of the bigger companies helping organizations navigate those rules is announcing a huge round of funding. OneTrust, which builds tools to help companies navigate data protection and privacy policies both internally and with its customers, has raised $200 million in a Series A led by Insight that values the company at $1.3 billion.

It’s an outsized round for a Series A, made at an equally outsized valuation, especially considering that the company is only three years old. But that, according to CEO Kabir Barday, is down to the wide-ranging nature of the issue, and to OneTrust’s early moves and subsequent pole position in tackling it.

“We’re talking about an operational overhaul in a company’s practices,” Barday said in an interview. “That requires the right technology and reach to be able to deliver that at a low cost.” Notably, he said that OneTrust wasn’t actually in search of funding — it’s already generating revenue and could have grown off its own balance sheet — although he noted that having the capitalization and backing sends a signal to the market and in particular to larger organizations of its stability and staying power.

Currently, OneTrust has around 3,000 customers across 100 countries (and 1,000 employees), and the plan will be to continue to expand its reach geographically and to more businesses. Funding will also go towards the company’s technology: it already has 50 patents filed and another 50 applications in progress, securing its own IP in the area of privacy protection.

OneTrust offers technology and services covering three different aspects of data protection and privacy management.

Its Privacy Management Software helps an organization manage how it collects data, and it generates compliance reports in line with how a site is working relative to different jurisdictions. Then there is the famous (or infamous) service that lets internet users set their preferences for how they want their data to be handled on different sites. The third is a larger database and risk management platform that assesses how various third-party services (for example advertising providers) work on a site and where they might pose data protection risks.

These are all provided either as a cloud-based software as a service, or an on-premises solution, depending on the customer in question.

The startup also has an interesting backstory that sheds some light on how it was founded and how it identified the gap in the market relatively early.

Alan Dabbiere, who is the co-chairman of OneTrust, had been the chairman of Airwatch — the mobile device management company acquired by VMware in 2014 (Airwatch’s CEO and founder, John Marshall, is OneTrust’s other co-chairman). In an interview, he told me that it was when they were at Airwatch — where Barday had worked across consulting, integration, engineering and product management — that they began to see just how a smartphone “could be a quagmire of information.”

“We could capture apps that an employee was using so that we could show them to IT to mitigate security risks,” he said, “but that actually presented a big privacy issue. If [the employee] has dyslexia [and uses a special app for it] or if the employee used a dating app, you’ve now shown things to IT that you shouldn’t have.”

He admitted that in the first version of the software, “we weren’t even thinking about whether that was inappropriate, but then we quickly realised that we needed to be thinking about privacy.”

Dabbiere said that it was Barday who first brought that sensibility to light, and “that is something that we have evolved from.” After that, and after the VMware sale, it seemed a no-brainer that he and Marshall would come on to help the new startup grow.

Airwatch made a relatively quick exit, I pointed out. His response: the plan is to stay the course at OneTrust, with a lot more room for expansion in this market. He describes the issues of data protection and privacy as “death by 1,000 cuts.” I guess when you think about it from an enterprising point of view, that essentially presents 1,000 business opportunities.

Indeed, there is obvious growth potential to expand not just its funnel of customers, but to add in more services, such as proactive detection of malware that might leak customers’ data (which calls to mind the recently fined breach at British Airways), as well as tools to help stop such leaks once identified.

While there are a million other companies also looking to fix those problems today, what’s interesting is the point from which OneTrust is starting: by providing tools to organizations simply to help them operate in the current regulatory climate as good citizens of the online world.

This is what caught Insight’s eye with this investment.

“OneTrust has truly established themselves as leaders in this space in a very short timeframe, and are quickly becoming for privacy professionals what Salesforce became for salespeople,” said Richard Wells of Insight. “They offer such a vast range of modules and tools to help customers keep their businesses compliant with varying regulatory laws, and the tailwinds around GDPR and the upcoming CCPA make this an opportune time for growth. Their leadership team is unparalleled in their ambition and has proven their ability to convert those ambitions into reality.”

Wells added that while this is a big round for a Series A, it is something of an outlier — not a mark of how Series A rounds will go soon.

“Investors will always be interested in and keen to partner with companies that are providing real solutions, are already established and are led by a strong group of entrepreneurs,” he said in an interview. “This is a company that has the expertise to help solve for what could be one of the greatest challenges of the next decade. That’s the company investors want to partner with and grow, regardless of fund timing.”


By Ingrid Lunden

TextIQ, a machine learning platform for parsing sensitive corporate data, raises $12.6M

TextIQ, a machine learning system that parses and understands sensitive corporate data, has raised $12.6 million in Series A funding led by FirstMark Capital, with participation from Sierra Ventures.

TextIQ started as cofounder Apoorv Agarwal’s Columbia thesis project titled “Social Network Extraction From Text.” The algorithm he built was able to read a novel, like Jane Austen’s Emma, for example, and understand the social hierarchy and interactions between characters.
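The core idea, inferring a social graph from which characters appear together in a text, can be sketched in a few lines. This is a toy illustration (the sentences and the substring-matching logic are invented for this sketch; the thesis work used far richer NLP), not TextIQ’s actual algorithm:

```python
from collections import Counter
from itertools import combinations

def extract_interactions(sentences, characters):
    """Count pairwise character co-occurrences per sentence as a
    crude proxy for social interactions between those characters."""
    edges = Counter()
    for sentence in sentences:
        present = sorted(c for c in characters if c in sentence)
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges

sentences = [
    "Emma spoke with Mr. Knightley about Harriet.",
    "Harriet admired Mr. Elton.",
    "Emma and Harriet walked together.",
]
graph = extract_interactions(
    sentences, ["Emma", "Harriet", "Mr. Knightley", "Mr. Elton"]
)
# The strongest edge is Emma-Harriet, who share two sentences.
```

A real system would resolve pronouns and aliases and weight interactions by dialogue, but the output shape is the same: a weighted graph of who interacts with whom.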

This people-centric approach to parsing unstructured data eventually became the kernel of TextIQ, which helps corporations find what they’re looking for in a sea of unstructured, and highly sensitive, data.

The platform started out as a tool used by corporate legal teams. Lawyers often have to manually look through troves of documents and conversations (text messages, emails, Slack, etc.) to find specific evidence or information. Even using search, these teams spend loads of time and resources looking through the search results, which usually aren’t as accurate as they should be.

“The status quo for this is to use search terms and hire hundreds of humans, if not thousands, to look for things that match their search terms,” said Agarwal. “It’s super expensive, and it can take months to go through millions of documents. And it’s still risky, because they could be missing sensitive information. Compared to the status quo, TextIQ is not only cheaper and faster but, most interestingly, it’s much more accurate.”

Following success with legal teams, TextIQ expanded into HR/compliance, giving companies the ability to retrieve sensitive information about internal compliance issues without a manual search. Because TextIQ understands who a person is relative to the rest of the organization, and learns that organization’s ‘language’, it can more thoroughly extract what’s relevant to the inquiry from all that unstructured data in Slack, email, etc.

More recently, in the wake of GDPR, TextIQ has expanded its product suite to work in the privacy realm. When a company is asked by a customer to get access to all their data, or to be forgotten, the process can take an enormous amount of resources. Even then, bits of data might fall through the cracks.

For example, if a customer emailed Customer Service years ago, that might not come up in the company’s manual search efforts to find all of that customer’s data. But since TextIQ understands this unstructured data with a person-centric approach, that email wouldn’t slip by its system, according to Agarwal.

Given the sensitivity of the data, TextIQ functions behind a corporation’s firewall, meaning that TextIQ simply provides the software to parse the data rather than taking on any liability for the data itself. In other words, the technology comes to the data, and not the other way around.

TextIQ operates on a tiered subscription model, pricing the product at a fraction of the savings it delivers when clients switch over from a manual search. The company declined to share any further details on pricing.

Former Apple and Oracle General Counsel Dan Cooperman, former Verizon General Counsel Randal Milch, former Baxter International Global General Counsel Marla Persky, and former Nationwide Insurance Chief Legal and Governance Officer Patricia Hatler are on the advisory board for TextIQ.

The company has plans to go on a hiring spree following the new funding, looking to fill positions in R&D, engineering, product development, finance, and sales. Cofounder and COO Omar Haroun added that the company achieved profitability in its first quarter entering the market and has been profitable for eight consecutive quarters.


By Jordan Crook

Liberty’s challenge to UK state surveillance powers reveals shocking failures

A legal challenge to the UK’s controversial mass surveillance regime has revealed shocking failures in how the main state intelligence agency, which has broad powers to hack computers and phones and intercept digital communications, handles people’s information.

The challenge, by rights group Liberty, led last month to an initial finding that MI5 had systematically breached safeguards in the UK’s Investigatory Powers Act (IPA) — breaches the Home Secretary, Sajid Javid, euphemistically couched as “compliance risks” in a carefully worded written statement that was quietly released to parliament.

Today Liberty has put more meat on the bones of the finding of serious legal breaches in how MI5 handles personal data, culled from newly released (but redacted) documents that it says describe the “undoubtedly unlawful” conduct of the UK’s main security service which has been retaining innocent people’s data for years.

The series of 10 documents and letters from MI5 and the Investigatory Powers Commissioner’s Office (IPCO), the body charged with overseeing the intelligence agencies’ use of surveillance powers, show that the spy agency has failed to meet its legal duties for as long as the IPA has been law, according to Liberty.

The controversial surveillance legislation passed into UK law in November 2016 — enshrining a system of mass surveillance of digital communications which includes a provision that logs of all Internet users’ browsing activity be retained for a full year, accessible to a wide range of government agencies (not just law enforcement and/or spy agencies).

The law also allows the intelligence agencies to maintain large databases of personal information on UK citizens, even if they are not under suspicion of any crime. It also sanctions state hacking of devices, networks and services, including bulk hacking on foreign soil, and gives UK authorities the power to require a company to remove encryption, or limit the rollout of end-to-end encryption on a future service.

The IPA has faced a series of legal challenges since making it onto the statute books, and the government has been forced to amend certain aspects of it on court order — including beefing up restrictions on access to web activity data. Other challenges to the controversial surveillance regime, including Liberty’s, remain ongoing.

The newly released court documents include damning comments on MI5’s handling of data by the IPCO, which writes: “Without seeking to be emotive, I consider that MI5’s use of warranted data… is currently, in effect, in ‘special measures’ and the historical lack of compliance… is of such gravity that IPCO will need to be satisfied to a greater degree than usual that it is ‘fit for purpose’.”

Liberty also says MI5 knew for three years of failures to maintain key safeguards — such as the timely destruction of material, and the protection of legally privileged material — before informing the IPCO.

Yet a key government sales pitch for passing the legislation was the claim of a ‘world class’ double-lock authorization and oversight regime to enforce the claimed safeguards on intelligence agencies’ powers to intercept and retain data.

So the latest revelations stemming from Liberty’s legal challenge represent a major embarrassment for the government.

“It is of course paramount that UK intelligence agencies demonstrate full compliance with the law,” the home secretary wrote in the statement last month, before adding his own political spin: “In that context, the interchange between the Commissioner and MI5 on this issue demonstrates that the world leading system of oversight established by the Act is working as it should.”

Liberty comes to the opposite conclusion on that point — emphasizing that warrants for bulk surveillance were issued by senior judges “on the understanding that MI5’s data handling obligations under the IPA were being met — when they were not”.

“The Commissioner has pointed out that warrants would not have been issued if breaches were known,” it goes on. “The Commissioner states that ‘it is impossible to sensibly reconcile the explanation of the handling arrangements the Judicial Commissioners [senior judges] were given in briefings… with what MI5 knew over a protracted period of time was happening.’”

So, basically, it’s saying that MI5 — having at best misled judges, whose sole job it is to oversee its legal access to data, about its systematic failures to lawfully handle data — has rather made a sham of the entire ‘world class’ oversight regime.

Liberty also flags what it calls “a remarkable admission to the Commissioner” — made by MI5’s deputy director general — who it says acknowledges that personal data collected by MI5 is being stored in “ungoverned spaces”. It adds that the MI5 legal team claims there is “a high likelihood [of material] being discovered when it should have been deleted, in a disclosure exercise leading to substantial legal or oversight failure”.

“Ungoverned spaces” is not a phrase that made it into Javid’s statement last month on MI5’s “compliance risks”.

But the home secretary did acknowledge: “A report of the Investigatory Powers Commissioner’s Office suggests that MI5 may not have had sufficient assurance of compliance with these safeguards within one of its technology environments.”

Javid also said he had set up “an independent review to consider and report back to me on what lessons can be learned for the future”. Though it’s unclear whether that report will be made public. 

We reached out to the Home Office for comment on the latest revelations from Liberty’s litigation. But a spokesman just pointed us to Javid’s prior statement. 

In a statement, Liberty’s lawyer, Megan Goulding, said: “These shocking revelations expose how MI5 has been illegally mishandling our data for years, storing it when they have no legal basis to do so. This could include our most deeply sensitive information – our calls and messages, our location data, our web browsing history.

“It is unacceptable that the public is only learning now about these serious breaches after the Government has been forced into revealing them in the course of Liberty’s legal challenge. In addition to showing a flagrant disregard for our rights, MI5 has attempted to hide its mistakes by providing misinformation to the Investigatory Powers Commissioner, who oversees the Government’s surveillance regime.

“And, despite a light being shone on this deplorable violation of our rights, the Government is still trying to keep us in the dark over further examples of MI5 seriously breaching the law.”


By Natasha Lomas

Facebook’s new Study app pays adults for data after teen scandal

Facebook shut down its Research and Onavo programs after TechCrunch exposed how the company paid teenagers for root access to their phones to gain market data on competitors. Now Facebook is relaunching its paid market research program, but this time with principles — namely transparency, fair compensation and safety. The goal? To find out which other competing apps and features Facebook should buy, copy or ignore.

Today Facebook releases its “Study from Facebook” app for Android only. Some adults 18+ in the U.S. and India will be recruited by ads on and off Facebook to willingly sign up to let Facebook collect extra data from them in exchange for a monthly payment. They’ll be warned that Facebook will gather which apps are on their phone, how much time they spend using those apps, the app activity names of features they use in other apps, plus their country, device and network type.

Facebook promises it won’t snoop on user IDs, passwords or any of participants’ content, including photos, videos or messages. It won’t sell participants’ info to third parties, use it to target ads or add it to their account or the behavior profiles the company keeps on each user. Yet while Facebook writes that “transparency” is a major part of “Approaching market research in a responsible way,” it refuses to tell us how much participants will be paid.

“Study from Facebook” could give the company critical insights for shaping its product roadmap. If it learns everyone is using screensharing social network Squad, maybe it will add its own screensharing feature. If it finds group video chat app Houseparty is on the decline, it might not worry about cloning that functionality. Or if it finds Snapchat’s Discover mobile TV shows are retaining users for a ton of time, it might amp up teen marketing of Facebook Watch. But it also might rile up regulators and politicians who already see it as beating back competition through acquisitions and feature cloning.

An attempt to be less creepy

TechCrunch’s investigation from January revealed that Facebook had been quietly operating a research program codenamed Atlas that paid users ages 13 to 35 up to $20 per month in gift cards in exchange for root access to their phone so it could gather all their data for competitive analysis. That included everything the Study app grabs, but also their web browsing activity, and even encrypted information, as the app required users to install a VPN that routed all their data through Facebook. It even had the means to collect private messages and content shared — potentially including data owned by their friends.

Facebook pays teens to install VPN that spies on them

Facebook’s Research app also abused Apple’s enterprise certificate program designed for distributing internal use-only apps to employees without the App Store or Apple’s approval. Facebook originally claimed it obeyed Apple’s rules, but Apple quickly disabled Facebook’s Research app and also shut down its enterprise certificate, temporarily breaking Facebook’s internal test builds of its public apps, as well as the shuttle times and lunch menu apps employees rely on.

In the aftermath of our investigation, Facebook shut down its Research program. It then also announced in February that it would shut down its Onavo Protect app on Android, which branded itself as a privacy app providing a free VPN instead of paying users while it collected tons of data on them. After giving users until May 9th to find a replacement VPN, Facebook killed off Onavo Protect.

This was an embarrassing string of events that stemmed from unprincipled user research. Now Facebook is trying to correct its course and revive its paid data collection program but with more scruples.

How Study from Facebook works

Unlike Onavo or Facebook Research, users can’t freely sign up for Study. They have to be recruited through ads Facebook will show on its own app and others to both 18+ Facebook users and non-users in the U.S. and India. That should keep out grifters and make sure the studies stay representative of Facebook’s user base. Eventually, Facebook plans to extend the program to other countries.

If users click through the ad, they’ll be brought to Facebook’s research operations partner Applause’s website, which clearly identifies Facebook’s involvement, unlike Facebook Research, which hid that fact until users were fully registered. There they’ll be informed how the Study app is opt-in, what data they’ll give up in exchange for what compensation and that they can opt out at any time. They’ll need to confirm their age and have a PayPal account (which is only supposed to be available to users 18 and over), and Facebook will cross-check the age to make sure it matches the person’s Facebook profile, if they have one. They won’t have to sign an NDA like with the Facebook Research program.

Anyone can download the Study from Facebook app from Google Play, but only those who’ve been approved through Applause will be able to log in and unlock the app. It will again explain what Facebook will collect, and ask for data permissions. The app will send periodic notifications to users reminding them they’re selling their data to Facebook and offering them an opt-out. Study from Facebook will use standard Google-approved APIs and won’t use a VPN, SSL bumping, root access, enterprise certificates or permission profiles you install on your device like the Research program that ruffled feathers.

Different users will be paid the same amount to their PayPal account, but Facebook wouldn’t say how much it’s dealing out, or even whether it was in the ballpark of cents, dollars or hundreds of dollars per month. That seems like a stark departure from its stated principle of transparency. This matters because Facebook earns billions in profit per quarter. It has the cash to potentially offer so much to Study participants that it effectively coerces them to give up their data; $10 to $20 per month like it was paying Research participants seems reasonable in the U.S., but that’s enough money in India to make people act against their better judgment.

The launch shows Facebook’s boldness despite the threat of antitrust regulation focusing on how it has suppressed competition through its acquisitions and copying. Democratic presidential candidates could use Study from Facebook as a talking point, noting how the company’s huge profits earned from its social network domination afford it a way to buy private user data to entrench its lead.

At 15 years old, Facebook is at risk of losing touch with what the next generation wants out of their phones. Rather than trying to guess based on their activity on its own app, it’s putting its huge wallet to work so it can pay for an edge on the competition.


By Josh Constine

Apple is making corporate ‘BYOD’ programs less invasive to user privacy

When people bring their own devices to work or school, they don’t want I.T. administrators to manage the entire device. But until now, Apple only offered two ways for I.T. to manage its iOS devices: device enrollments, which gave admins device-wide management capabilities, or those same capabilities combined with an automated setup process. At Apple’s Worldwide Developer Conference last week, the company announced plans to introduce a third method: user enrollments.

This new MDM (mobile device management) enrollment option is meant to better balance the needs of I.T. to protect sensitive corporate data and manage the software and settings available to users, while at the same time allowing users’ private personal data to remain separate from I.T. oversight.

According to Apple, when both users’ and I.T.’s needs are in balance, users are more likely to accept a corporate “bring your own device” or BYOD program — something that can ultimately save the business money that doesn’t have to be invested in hardware purchases.

The new user enrollments option for MDM has three components: a managed Apple ID that sits alongside the personal ID; cryptographic separation of personal and work data; and a limited set of device-wide management capabilities for I.T.

The managed Apple ID will be the user’s work identity on the device, and is created by the admin in either Apple School Manager or Apple Business Manager — depending on whether this is for a school or a business. The user signs into the managed Apple ID during the enrollment process.

From that point forward until the enrollment ends, the company’s managed apps and accounts will use the managed Apple ID’s iCloud account.

Meanwhile, the user’s personal apps and accounts will use the personal Apple ID’s iCloud account, if one is signed into the device.

Third-party apps are then used in either managed or unmanaged mode.

That means users won’t be able to change modes or run the apps in both modes at the same time. However, some of the built-in apps like Notes will be account-based, meaning the app will use the appropriate Apple ID — either the managed one or personal — depending on which account they’re operating on at the time.

To separate work data from personal, iOS will create a managed APFS volume at the time of the enrollment. The volume uses separate cryptographic keys which are destroyed along with the volume itself when the enrollment period ends. (iOS had always removed the managed data when the enrollment ends, but this is a cryptographic backstop just in case anything were to go wrong during unenrollment, the company explained.)
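The crypto-erase idea behind that backstop is simple: if the managed volume’s contents only ever exist encrypted under a per-volume key, destroying the key is equivalent to destroying the data. A minimal sketch of the principle, using a one-time-pad XOR as a stand-in for the real APFS encryption (which Apple has not detailed here):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with a random key of equal length (a one-time pad): applying
    # it twice with the same key decrypts; without the key, the
    # ciphertext carries no recoverable information.
    return bytes(d ^ k for d, k in zip(data, key))

managed_data = b"work email attachment contents!!"
volume_key = secrets.token_bytes(len(managed_data))

ciphertext = xor_cipher(managed_data, volume_key)  # what the volume stores
restored = xor_cipher(ciphertext, volume_key)      # readable while enrolled

# Unenrollment: destroying the key "erases" every copy of the ciphertext,
# even if some bytes were missed during the normal deletion pass.
volume_key = None
```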

The managed volume will host the local data stored by any managed third-party apps along with the managed data from the Notes app. It will also house a managed keychain that stores secure items like passwords and certificates; the authentication credentials for managed accounts; and mail attachments and full email bodies.

The system volume does host a central database for mail, including some metadata and five-line previews, but this is removed as well when the enrollment ends.

Users’ personal apps and their data can’t be managed by the I.T. admin, so they’re never at risk of having their data read or erased.

And unlike device enrollments, user enrollments don’t provide a UDID or any other persistent identifier to the admin. Instead, the system creates a new identifier called the “enrollment ID,” which is used for all communications with the MDM server and is destroyed when enrollment ends.

Apple also noted that one of the big reasons users fear corporate BYOD programs is because they think the I.T. admin will erase their entire device when the enrollment ends — including their personal apps and data.

To address this concern, MDM queries under user enrollment can only return managed results.

In practice, that means I.T. can’t even find out what personal apps are installed on the device — something that can feel like an invasion of privacy to end users. (This feature will be offered for device enrollments, too.) And because I.T. doesn’t know what personal apps are installed, it also can’t restrict certain apps’ use.

User enrollments will also not support the “erase device” command — and they don’t have to, because I.T. will know the sensitive data and emails are gone. There’s no need for a full device wipe.

Similarly, the Exchange server can’t send its full remote wipe command — just the account-only remote wipe that removes the managed data.

Another new feature related to user enrollments is how traffic for managed accounts is guided through the corporate VPN. Using the per-app VPN feature, traffic from the Mail, Contacts, and Calendars built-in apps will only go through the VPN if the domains match that of the business. For example, mail.acme.com can pass through the VPN, but not mail.aol.com. In other words, the user’s personal mail remains private.
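The domain matching described above amounts to a simple suffix check against the business’s domains. A hypothetical sketch (the function name and exact matching rules are illustrative; Apple has not published the precise logic):

```python
def routes_through_vpn(host: str, corporate_domains: set) -> bool:
    """Send traffic through the per-app VPN only when the mail host
    belongs to one of the business's configured domains."""
    return any(
        host == domain or host.endswith("." + domain)
        for domain in corporate_domains
    )

corp = {"acme.com"}
routes_through_vpn("mail.acme.com", corp)  # corporate mail: through the VPN
routes_through_vpn("mail.aol.com", corp)   # personal mail: stays off the VPN
```

Note the `"." + domain` suffix check: it prevents a lookalike host such as `notacme.com` from matching `acme.com`.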

This addresses what has been an ongoing concern about how some MDM solutions operate — routing traffic through a corporate proxy meant the business could see the employees’ personal emails, social networking accounts, and other private information.

User enrollments also enforce only a six-digit non-simple passcode, since the MDM server can’t help users by clearing the passcode if they forget it.

Some today advise users to not accept BYOD MDM policies because of the impact to personal privacy. While a business has every right to manage and wipe its own apps and data, I.T. has overstepped with some of its remote management capabilities — including its ability to erase entire devices, access personal data, track a phone’s location, restrict personal use of apps, and more.

Apple’s MDM policies haven’t included GPS tracking, however, nor does this new option.

Apple’s new policy is a step towards a better balance of concerns but will require that users understand the nuances of these more technical details — which they may not.

That user education will come down to the businesses that insist on these MDM policies to begin with: they will need to produce their own documentation and explainers, and establish new privacy policies with their employees that detail what sort of data they can and cannot access, as well as what sort of control they have over corporate devices.


By Sarah Perez

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not the newsfeed, that’s not pages, that’s not profiles. That’s marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”


Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


By Arman Tabatabai

InCountry raises $7M to help multinationals store private data in countries of origin

The last few years have seen a rapid expansion of national regulations that, in the name of data protection, govern how and where organizations like healthcare and insurance companies, financial services companies and others store residents’ personal data that is used and collected through their services.

But keeping abreast of and following those rules has proven to be a minefield for companies. Now, a startup is coming out of stealth with a new product to help.

InCountry, which provides “data residency-as-a-service” to businesses and other organizations, is launching with $7 million in funding and its first product: Profile, which focuses on user profile and registration information in 50 countries on six continents. There will be more products launched covering payment, transaction and health data later in the year, co-founder and CEO Peter Yared said in an interview.

The funding — a seed round — is coming from Caffeinated Capital, Felicis Ventures, Ridge Ventures, Bloomberg Beta, Charles River Ventures and Global Founders Capital.

InCountry is founded and led by Yared, a repeat entrepreneur who most recently co-founded and eventually sold the “micro-app” startup Sapho, which was acquired by Citrix. Other companies he’s sold startups to include VMware, Sun and Oracle, and he was also once the CIO of CBS Interactive.

Yared told me in an interview that he has actually been self-funding, running and quietly accruing customers for InCountry for two years. He decided to raise this seed round — a number of investors on this list are repeat backers of his ventures — to start revving up the engines. (One of those ‘revs’ is an interesting talent hire: today the company is also announcing Alex Castro as chief product officer. Castro was an early employee working on Amazon Web Services and Microsoft’s move into CRM, and also worked on autonomous vehicles at Uber.)

If you have never heard of the term “data residency-as-a-service”, that might be because it’s something that has been coined by Yared himself to describe the function of his startup.

InCountry is part tech provider, part consultancy.

On the tech side, it handles the technical work of storing personal data within a specific country’s borders for companies that might otherwise run other aspects of their services from other locations. That includes SDKs that link to a variety of data centers and cloud service providers and allow new countries to be added in under 10 minutes; two types of encryption on the data to keep it secure; and managed services for its biggest clients. (InCountry is not disclosing any client names right now, except for video-editing company Revl.)
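Conceptually, such an SDK routes each record to a storage endpoint inside the user’s country of origin. A hypothetical sketch (the endpoint URLs and lookup API are invented for illustration, not InCountry’s actual SDK):

```python
# Map of country codes to in-country storage endpoints (hypothetical URLs).
RESIDENCY_ENDPOINTS = {
    "DE": "https://de.storage.example.com",
    "IN": "https://in.storage.example.com",
    "BR": "https://br.storage.example.com",
}

def endpoint_for(country_code: str) -> str:
    """Pick the data center inside the user's country of origin,
    failing loudly rather than silently storing the data elsewhere."""
    try:
        return RESIDENCY_ENDPOINTS[country_code]
    except KeyError:
        raise ValueError(f"no in-country storage configured for {country_code!r}")

endpoint_for("DE")  # a German user's record stays in Germany
```

The fail-loud default matters for compliance: writing a record to the wrong jurisdiction is the failure mode regulators fine, so an unconfigured country should block the write rather than fall back to a default region.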

On the consultancy side, it has an in-house team of researchers and partnerships with law firms to continually update its policies and ensure that customers remain compliant with any changes. InCountry says that to provide further assurance to customers, it provides insurance of up to three times the value of a customer’s spend.

InCountry’s aim is twofold: first, to solve the many pain points that a company or other organization has to go through when considering how to comply with data hosting regulations; and second, to make sure that by making it easy, companies actually do what’s required of them.

As Yared describes it, the process for becoming data compliant can be painful, but his startup is applying an economy of scale, since the process is essentially one that everyone will have to follow:

“They have to figure out what the requirements are, find the facility, audit the facility, which includes making sure it’s not owned by the state, make sure the network is properly segregated, develop the right software layer to manage the data, hire program managers, network operations people and more,” he said. And for those handling this themselves, cloud service providers will typically cover a smaller footprint of regions, 17 at most for the biggest. “We take care of all that, and add on more as we need to.”

The problem is that because the process is so painful, many companies often flout the requirements, which isn’t good for its customers, nor for the companies themselves, which run the risk of getting fined.

“It’s universally acknowledged that the way data is stored and handled by most companies is not meeting the average requirements of citizens’ rights,” Yared said. “That’s why we now have GDPR, and will see more GDPR-like regulations get rolled out.”

One thing that InCountry is not touching is data such as messages between users and other kinds of personal files — data that has been the subject of sometimes very controversial data regulations. Its remit is the pieces of personal information about users — bank details, health information, social security numbers, and so on — that are part and parcel of what we provide to companies in the course of interacting with them online.

“In early outreach, we have had people ask for private data storage, but we would be ethically uncomfortable with that,” Yared said. “We want to be in the business of helping people who have regulated data, by storing that in a compliant manner that is more helpful, and more fruitful, to users.”

The aim will be to add more services over time covering ever more countries, to keep in line with the growing trend among regulators to put more data residency laws in place.

“We’re witnessing more countries signing in data laws each week, and we’re only going to see those numbers increase,” said Sundeep Peechu, Managing Director at Felicis Ventures, in a statement. “We’re excited to be leading the round and reinvesting in Peter as he launches his seventh company. He recognized the problem early on and started working on a solution nearly two years ago that goes beyond regional data centers and patchwork in-house DIY solutions.”


By Ingrid Lunden

How to handle dark data compliance risk at your company

Slack and other consumer-grade productivity tools have been taking off in workplaces large and small — and data governance hasn’t caught up.

Whether it’s litigation, compliance with regulations like GDPR, or concerns about data breaches, legal teams need to account for new types of employee communication. And that’s hard when work is happening across the latest messaging apps and SaaS products, which make data searchability and accessibility more complex.

Here’s a quick look at the problem, followed by our suggestions for best practices at your company.

Problems

The increasing frequency of reported data breaches and the expanding jurisdiction of new privacy laws are prompting conversations about dark data and risks at companies of all sizes, even small startups. Data risk discussions necessarily include the risk of a data breach, as well as preservation of data. Just two weeks ago it was reported that Jared Kushner used WhatsApp for official communications and took screenshots of those messages for preservation, which commentators say complies with recordkeeping laws but raises questions about potential admissibility as evidence.


By Arman Tabatabai