Preclusio uses machine learning to comply with GDPR, other privacy regulations

As privacy regulations like GDPR and the California Consumer Privacy Act proliferate, more startups are looking to help companies comply. Enter Preclusio, a member of the Y Combinator Summer 2019 class, which has developed a machine learning-fueled solution to help companies adhere to these privacy regulations.

“We have a platform that is deployed on-prem in our customer’s environment, and helps them identify what data they’re collecting, how they’re using it, where it’s being stored and how it should be protected. We help companies put together this broad view of their data, and then we continuously monitor their data infrastructure to ensure that this data continues to be protected,” company co-founder and CEO Heather Wade told TechCrunch.

She says that the company made a deliberate decision to keep the solution on-prem. “We really believe in giving our clients control over their data. We don’t want to be just another third-party SaaS vendor that you have to ship your data to,” Wade explained.

That said, customers can run it wherever they wish, whether that’s on-prem or in the cloud in Azure or AWS. Regardless of where it’s stored, the idea is to give customers direct control over their own data. “We are really trying to alert our customers to threats or to potential privacy exceptions that are occurring in their environment in real time, and being in their environment is really the best way to facilitate this,” she said.

The product works by getting read-only access to the data, then identifying sensitive data in an automated fashion using machine learning. “Our product automatically looks at the schema and samples of the data, and uses machine learning to identify common protected data,” she said. Once that process is completed, a privacy compliance team can review the findings and adjust these classifications as needed.
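
Preclusio hasn’t published how its models work, but the schema-plus-samples approach Wade describes can be sketched with simple pattern heuristics standing in for a trained classifier. Below is a minimal sketch in Python; the patterns, column names and threshold are illustrative, not Preclusio’s:

```python
import re

# Illustrative stand-ins for a trained classifier: patterns for a few
# common categories of protected data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def classify_column(name, samples, threshold=0.5):
    """Guess whether a column holds protected data from its name and sampled values."""
    for label, pattern in PATTERNS.items():
        hits = sum(1 for value in samples if pattern.search(str(value)))
        if hits / max(len(samples), 1) >= threshold:
            return label
    # Fall back to schema hints in the column name itself.
    if any(hint in name.lower() for hint in ("email", "ssn", "phone", "dob")):
        return "name_hint"
    return None

# Read-only sampling of a customer table, per the article's description.
print(classify_column("contact", ["alice@example.com", "bob@example.org"]))  # email
```

As in the product Wade describes, anything a classifier like this flags would go to a human compliance team for review rather than being trusted outright.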

Wade, who started the company in March, says the idea formed at previous positions where she was responsible for implementing privacy policies and found there weren’t adequate solutions on the market to help. “I had to face the challenges first-hand of dealing with privacy and compliance and seeing how resources were really taken away from our engineering teams and having to allocate these resources to solving these problems internally, especially early on when GDPR was first passed, and there really were not that many tools available in the market,” she said.

Interestingly, Wade’s co-founder is her husband, John. She says they deal with the intensity of being married and startup founders by sticking to their areas of expertise. He’s the marketing person and she’s the technical one.

She says they applied to Y Combinator because they wanted to grow quickly, and that timing is important with more privacy laws coming online soon. She has been impressed with the generosity of the community in helping them reach their goals. “It’s almost indescribable how generous and helpful other folks who’ve been through the YC program are to the incoming batches, and they really do have that spirit of paying it forward,” she said.


By Ron Miller

OneTrust raises $200M at a $1.3B valuation to help organizations navigate online privacy rules

GDPR, and the newer California Consumer Privacy Act, have given a legal bite to ongoing developments in online privacy and data protection: it’s always good practice for companies with an online presence to take measures to safeguard people’s data, but now failing to do so can land them in some serious hot water.

Now — to underscore the urgency and demand in the market — one of the bigger companies helping organizations navigate those rules is announcing a huge round of funding. OneTrust, which builds tools to help companies navigate data protection and privacy policies both internally and with its customers, has raised $200 million in a Series A led by Insight that values the company at $1.3 billion.

It’s an outsized round for a Series A, made at an equally outsized valuation — especially considering that the company is only three years old. But that, according to CEO Kabir Barday, is down to the wide-ranging nature of the issue, and to OneTrust’s early moves and subsequent pole position in tackling it.

“We’re talking about an operational overhaul in a company’s practices,” Barday said in an interview. “That requires the right technology and reach to be able to deliver that at a low cost.” Notably, he said that OneTrust wasn’t actually in search of funding — it’s already generating revenue and could have grown off its own balance sheet — although he noted that having the capitalization and backing sends a signal to the market and in particular to larger organizations of its stability and staying power.

Currently, OneTrust has around 3,000 customers across 100 countries (and 1,000 employees), and the plan will be to continue to expand its reach geographically and to more businesses. Funding will also go towards the company’s technology: it already has 50 patents filed and another 50 applications in progress, securing its own IP in the area of privacy protection.

OneTrust offers technology and services covering three different aspects of data protection and privacy management.

Its Privacy Management Software helps an organization manage how it collects data, and it generates compliance reports in line with how a site is working relative to different jurisdictions. Then there is the famous (or infamous) service that lets internet users set their preferences for how they want their data to be handled on different sites. The third is a larger database and risk management platform that assesses how various third-party services (for example advertising providers) work on a site and where they might pose data protection risks.

These are all provided either as a cloud-based software as a service, or an on-premises solution, depending on the customer in question.

The startup also has an interesting backstory that sheds some light on how it was founded and how it identified the gap in the market relatively early.

Alan Dabbiere, who is the co-chairman of OneTrust, had been the chairman of Airwatch — the mobile device management company acquired by VMware in 2014 (Airwatch’s CEO and founder, John Marshall, is OneTrust’s other co-chairman). In an interview, he told me that it was when they were at Airwatch — where Barday had worked across consulting, integration, engineering and product management — that they began to see just how a smartphone “could be a quagmire of information.”

“We could capture apps that an employee was using so that we could show them to IT to mitigate security risks,” he said, “but that actually presented a big privacy issue. If [the employee] has dyslexia [and uses a special app for it] or if the employee used a dating app, you’ve now shown things to IT that you shouldn’t have.”

He admitted that in the first version of the software, “we weren’t even thinking about whether that was inappropriate, but then we quickly realised that we needed to be thinking about privacy.”

Dabbiere said that it was Barday who first brought that sensibility to light, and “that is something that we have evolved from.” After that, and after the VMware sale, it seemed a no-brainer that he and Marshall would come on to help the new startup grow.

Airwatch made a relatively quick exit, I pointed out. His response: the plan is to stay the course at OneTrust, with a lot more room for expansion in this market. He describes the issues of data protection and privacy as “death by 1,000 cuts.” I guess when you think about it from an enterprising point of view, that essentially presents 1,000 business opportunities.

Indeed, there is obvious growth potential to expand not just its funnel of customers, but to add in more services, such as proactive detection of malware that might leak customers’ data (which calls to mind the recently fined breach at British Airways), as well as tools to help stop that once identified.

While there are a million other companies also looking to fix those problems today, what’s interesting is the point from which OneTrust is starting: by providing tools to organizations simply to help them operate in the current regulatory climate as good citizens of the online world.

This is what caught Insight’s eye with this investment.

“OneTrust has truly established themselves as leaders in this space in a very short timeframe, and are quickly becoming for privacy professionals what Salesforce became for salespeople,” said Richard Wells of Insight. “They offer such a vast range of modules and tools to help customers keep their businesses compliant with varying regulatory laws, and the tailwinds around GDPR and the upcoming CCPA make this an opportune time for growth. Their leadership team is unparalleled in their ambition and has proven their ability to convert those ambitions into reality.”

Wells added that while this is a big round for a Series A, that’s because it is something of an outlier — not a mark of how Series A rounds will go soon.

“Investors will always be interested in and keen to partner with companies that are providing real solutions, are already established and are led by a strong group of entrepreneurs,” he said in an interview. “This is a company that has the expertise to help solve for what could be one of the greatest challenges of the next decade. That’s the company investors want to partner with and grow, regardless of fund timing.”


By Ingrid Lunden

TextIQ, a machine learning platform for parsing sensitive corporate data, raises $12.6M

TextIQ, a machine learning system that parses and understands sensitive corporate data, has raised $12.6 million in Series A funding led by FirstMark Capital, with participation from Sierra Ventures.

TextIQ started as cofounder Apoorv Agarwal’s Columbia thesis project titled “Social Network Extraction From Text.” The algorithm he built was able to read a novel, like Jane Austen’s Emma, for example, and understand the social hierarchy and interactions between characters.

This people-centric approach to parsing unstructured data eventually became the kernel of TextIQ, which helps corporations find what they’re looking for in a sea of unstructured, and highly sensitive, data.
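
The article doesn’t describe Agarwal’s algorithm, but the core idea, extracting a social graph from raw text, can be sketched crudely by counting which characters appear in the same sentence. Everything below (corpus, names, method) is illustrative only; real systems use far richer NLP:

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for a novel; sentences and names are invented.
SENTENCES = [
    "Emma spoke warmly to Harriet about Mr. Elton.",
    "Harriet admired Mr. Elton from across the room.",
    "Emma and Mr. Knightley argued about Harriet.",
]
CHARACTERS = ["Emma", "Harriet", "Mr. Elton", "Mr. Knightley"]

def interaction_graph(sentences, characters):
    """Count per-sentence co-occurrences of characters as a crude interaction signal."""
    edges = Counter()
    for sentence in sentences:
        present = [c for c in characters if c in sentence]
        for a, b in combinations(sorted(present), 2):
            edges[(a, b)] += 1
    return edges

for pair, weight in interaction_graph(SENTENCES, CHARACTERS).items():
    print(pair, weight)  # e.g. ('Emma', 'Harriet') 2
```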

The platform started out as a tool used by corporate legal teams. Lawyers often have to manually look through troves of documents and conversations (text messages, emails, Slack, etc.) to find specific evidence or information. Even using search, these teams spend loads of time and resources looking through the search results, which usually aren’t as accurate as they should be.

“The status quo for this is to use search terms and hire hundreds of humans, if not thousands, to look for things that match their search terms,” said Agarwal. “It’s super expensive, and it can take months to go through millions of documents. And it’s still risky, because they could be missing sensitive information. Compared to the status quo, TextIQ is not only cheaper and faster but, most interestingly, it’s much more accurate.”

Following success with legal teams, TextIQ expanded into HR/compliance, giving companies the ability to retrieve sensitive information about internal compliance issues without a manual search. Because TextIQ understands who a person is relative to the rest of the organization, and learns that organization’s ‘language’, it can more thoroughly extract what’s relevant to the inquiry from all that unstructured data in Slack, email, etc.

More recently, in the wake of GDPR, TextIQ has expanded its product suite to work in the privacy realm. When a company is asked by a customer to get access to all their data, or to be forgotten, the process can take an enormous amount of resources. Even then, bits of data might fall through the cracks.

For example, if a customer emailed Customer Service years ago, that might not come up in the company’s manual search efforts to find all of that customer’s data. But since TextIQ understands this unstructured data with a person-centric approach, that email wouldn’t slip by its system, according to Agarwal.

Given the sensitivity of the data, TextIQ functions behind a corporation’s firewall, meaning that TextIQ simply provides the software to parse the data rather than taking on any liability for the data itself. In other words, the technology comes to the data, and not the other way around.

TextIQ operates on a tiered subscription model, and offers the product for a fraction of the value it provides in savings when clients switch over from a manual search. The company declined to share any further details on pricing.

Former Apple and Oracle General Counsel Dan Cooperman, former Verizon General Counsel Randal Milch, former Baxter International Global General Counsel Marla Persky, and former Nationwide Insurance Chief Legal and Governance Officer Patricia Hatler are on the advisory board for TextIQ.

The company has plans to go on a hiring spree following the new funding, looking to fill positions in R&D, engineering, product development, finance, and sales. Cofounder and COO Omar Haroun added that the company achieved profitability in its first quarter entering the market and has been profitable for eight consecutive quarters.


By Jordan Crook

Liberty’s challenge to UK state surveillance powers reveals shocking failures

A legal challenge to the UK’s controversial mass surveillance regime has revealed shocking failures by the main state intelligence agency, which has broad powers to hack computers and phones and intercept digital communications, in handling people’s information.

The challenge, by rights group Liberty, led last month to an initial finding that MI5 had systematically breached safeguards in the UK’s Investigatory Powers Act (IPA) — breaches the Home Secretary, Sajid Javid, euphemistically couched as “compliance risks” in a carefully worded written statement that was quietly released to parliament.

Today Liberty has put more meat on the bones of the finding of serious legal breaches in how MI5 handles personal data, culled from newly released (but redacted) documents that it says describe the “undoubtedly unlawful” conduct of the UK’s main security service, which has been retaining innocent people’s data for years.

The series of 10 documents and letters from MI5 and the Investigatory Powers Commissioner’s Office (IPCO), the body charged with overseeing the intelligence agencies’ use of surveillance powers, show that the spy agency has failed to meet its legal duties for as long as the IPA has been law, according to Liberty.

The controversial surveillance legislation passed into UK law in November 2016 — enshrining a system of mass surveillance of digital communications which includes a provision that logs of all Internet users’ browsing activity be retained for a full year, accessible to a wide range of government agencies (not just law enforcement and/or spy agencies).

The law also allows the intelligence agencies to maintain large databases of personal information on UK citizens, even if they are not under suspicion of any crime. It sanctions state hacking of devices, networks and services, including bulk hacking on foreign soil. It also gives UK authorities the power to require a company to remove encryption, or limit the rollout of end-to-end encryption on a future service.

The IPA has faced a series of legal challenges since making it onto the statute books, and the government has been forced to amend certain aspects of it on court order — including beefing up restrictions on access to web activity data. Other challenges to the controversial surveillance regime, including Liberty’s, remain ongoing.

The newly released court documents include damning comments from the IPCO on MI5’s handling of data — the Commissioner writes: “Without seeking to be emotive, I consider that MI5’s use of warranted data… is currently, in effect, in ‘special measures’ and the historical lack of compliance… is of such gravity that IPCO will need to be satisfied to a greater degree than usual that it is ‘fit for purpose’.”

Liberty also says MI5 knew for three years of failures to maintain key safeguards — such as the timely destruction of material, and the protection of legally privileged material — before informing the IPCO.

Yet a key government sales pitch for passing the legislation was the claim of a ‘world class’ double-lock authorization and oversight regime to ensure safeguards on the intelligence agencies’ powers to intercept and retain data.

So the latest revelations stemming from Liberty’s legal challenge represent a major embarrassment for the government.

“It is of course paramount that UK intelligence agencies demonstrate full compliance with the law,” the home secretary wrote in the statement last month, before adding his own political spin: “In that context, the interchange between the Commissioner and MI5 on this issue demonstrates that the world leading system of oversight established by the Act is working as it should.”

Liberty comes to the opposite conclusion on that point — emphasizing that warrants for bulk surveillance were issued by senior judges “on the understanding that MI5’s data handling obligations under the IPA were being met — when they were not”.

“The Commissioner has pointed out that warrants would not have been issued if breaches were known,” it goes on. “The Commissioner states that ‘it is impossible to sensibly reconcile the explanation of the handling arrangements the Judicial Commissioners [senior judges] were given in briefings… with what MI5 knew over a protracted period of time was happening.’”

So, basically, it’s saying that MI5 — having at best misled judges, whose sole job it is to oversee its legal access to data, about its systematic failures to lawfully handle data — has rather made a sham of the entire ‘world class’ oversight regime.

Liberty also flags what it calls “a remarkable admission to the Commissioner” — made by MI5’s deputy director general — who it says acknowledges that personal data collected by MI5 is being stored in “ungoverned spaces”. It adds that the MI5 legal team claims there is “a high likelihood [of material] being discovered when it should have been deleted, in a disclosure exercise leading to substantial legal or oversight failure”.

“Ungoverned spaces” is not a phrase that made it into Javid’s statement last month on MI5’s “compliance risks”.

But the home secretary did acknowledge: “A report of the Investigatory Powers Commissioner’s Office suggests that MI5 may not have had sufficient assurance of compliance with these safeguards within one of its technology environments.”

Javid also said he had set up “an independent review to consider and report back to me on what lessons can be learned for the future”. Though it’s unclear whether that report will be made public. 

We reached out to the Home Office for comment on the latest revelations from Liberty’s litigation. But a spokesman just pointed us to Javid’s prior statement. 

In a statement, Liberty’s lawyer, Megan Goulding, said: “These shocking revelations expose how MI5 has been illegally mishandling our data for years, storing it when they have no legal basis to do so. This could include our most deeply sensitive information – our calls and messages, our location data, our web browsing history.

“It is unacceptable that the public is only learning now about these serious breaches after the Government has been forced into revealing them in the course of Liberty’s legal challenge. In addition to showing a flagrant disregard for our rights, MI5 has attempted to hide its mistakes by providing misinformation to the Investigatory Powers Commissioner, who oversees the Government’s surveillance regime.

“And, despite a light being shone on this deplorable violation of our rights, the Government is still trying to keep us in the dark over further examples of MI5 seriously breaching the law.”


By Natasha Lomas

Facebook’s new Study app pays adults for data after teen scandal

Facebook shut down its Research and Onavo programs after TechCrunch exposed how the company paid teenagers for root access to their phones to gain market data on competitors. Now Facebook is relaunching its paid market research program, but this time with principles — namely transparency, fair compensation and safety. The goal? To find out which other competing apps and features Facebook should buy, copy or ignore.

Today Facebook releases its “Study from Facebook” app for Android only. Some adults 18+ in the U.S. and India will be recruited by ads on and off Facebook to willingly sign up to let Facebook collect extra data from them in exchange for a monthly payment. They’ll be warned that Facebook will gather which apps are on their phone, how much time they spend using those apps, the app activity names of features they use in other apps, plus their country, device and network type.

Facebook promises it won’t snoop on user IDs, passwords or any of participants’ content, including photos, videos or messages. It won’t sell participants’ info to third parties, use it to target ads or add it to their account or the behavior profiles the company keeps on each user. Yet while Facebook writes that “transparency” is a major part of “Approaching market research in a responsible way,” it refuses to tell us how much participants will be paid.
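
Facebook hasn’t published the telemetry format, but the fields it lists translate to a record along these lines; the structure and names here are hypothetical:

```python
# Hypothetical shape of a single Study telemetry record, based on the
# fields Facebook says it collects.
study_record = {
    "installed_apps": ["com.example.squad", "com.example.houseparty"],
    "usage_minutes": {"com.example.squad": 42},            # time spent per app
    "feature_activity": ["com.example.squad/ShareSheet"],  # activity names used
    "country": "IN",
    "device": "Pixel 3",
    "network_type": "wifi",
    # Explicitly absent, per Facebook's promises: user IDs, passwords,
    # photos, videos and message content.
}
print(sorted(study_record))
```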

“Study from Facebook” could give the company critical insights for shaping its product roadmap. If it learns everyone is using screensharing social network Squad, maybe it will add its own screensharing feature. If it finds group video chat app Houseparty is on the decline, it might not worry about cloning that functionality. Or if it finds Snapchat’s Discover mobile TV shows are retaining users for a ton of time, it might amp up teen marketing of Facebook Watch. But it also might rile up regulators and politicians who already see it as beating back competition through acquisitions and feature cloning.

An attempt to be less creepy

TechCrunch’s investigation from January revealed that Facebook had been quietly operating a research program codenamed Atlas that paid users ages 13 to 35 up to $20 per month in gift cards in exchange for root access to their phone so it could gather all their data for competitive analysis. That included everything the Study app grabs, but also their web browsing activity, and even encrypted information, as the app required users to install a VPN that routed all their data through Facebook. It even had the means to collect private messages and content shared — potentially including data owned by their friends.

Facebook’s Research app also abused Apple’s enterprise certificate program designed for distributing internal use-only apps to employees without the App Store or Apple’s approval. Facebook originally claimed it obeyed Apple’s rules, but Apple quickly disabled Facebook’s Research app and also shut down its enterprise certificate, temporarily breaking Facebook’s internal test builds of its public apps, as well as the shuttle times and lunch menu apps employees rely on.

In the aftermath of our investigation, Facebook shut down its Research program. It then also announced in February that it would shut down its Onavo Protect app on Android, which branded itself as a privacy app providing a free VPN but, instead of paying users, collected tons of data on them. After giving users until May 9th to find a replacement VPN, Onavo Protect was killed off.

This was an embarrassing string of events that stemmed from unprincipled user research. Now Facebook is trying to correct its course and revive its paid data collection program but with more scruples.

How Study from Facebook works

Unlike Onavo or Facebook Research, users can’t freely sign up for Study. They have to be recruited through ads Facebook will show on its own app and others to both 18+ Facebook users and non-users in the U.S. and India. That should keep out grifters and make sure the studies stay representative of Facebook’s user base. Eventually, Facebook plans to extend the program to other countries.

If users click through the ad, they’ll be brought to the website of Facebook’s research operations partner Applause, which clearly identifies Facebook’s involvement, unlike Facebook Research, which hid that fact until users were fully registered. There they’ll be informed how the Study app is opt-in, what data they’ll give up in exchange for what compensation and that they can opt out at any time. They’ll need to confirm their age and have a PayPal account (which is only supposed to be available to users 18 and over), and Facebook will cross-check the age to make sure it matches the person’s Facebook profile, if they have one. They won’t have to sign an NDA like with the Facebook Research program.

Anyone can download the Study from Facebook app from Google Play, but only those who’ve been approved through Applause will be able to log in and unlock the app. It will again explain what Facebook will collect, and ask for data permissions. The app will send periodic notifications to users reminding them they’re selling their data to Facebook and offering them an opt-out. Study from Facebook will use standard Google-approved APIs and won’t use a VPN, SSL bumping, root access, enterprise certificates or permission profiles you install on your device like the Research program that ruffled feathers.

All users will be paid the same amount to their PayPal account, but Facebook wouldn’t say how much it’s dealing out, or even whether it was in the ballpark of cents, dollars or hundreds of dollars per month. That seems like a stark departure from its stated principle of transparency. This matters because Facebook earns billions in profit per quarter. It has the cash to potentially offer so much to Study participants that it effectively coerces them to give up their data; $10 to $20 per month like it was paying Research participants seems reasonable in the U.S., but that’s enough money in India to make people act against their better judgement.

The launch shows Facebook’s boldness despite the threat of antitrust regulation focusing on how it has suppressed competition through its acquisitions and copying. Democratic presidential candidates could use Study from Facebook as a talking point, noting how the company’s huge profits earned from its social network domination afford it a way to buy private user data to entrench its lead.

At 15 years old, Facebook is at risk of losing touch with what the next generation wants out of their phones. Rather than trying to guess based on their activity on its own app, it’s putting its huge wallet to work so it can pay for an edge on the competition.


By Josh Constine

Apple is making corporate ‘BYOD’ programs less invasive to user privacy

When people bring their own devices to work or school, they don’t want I.T. administrators to manage the entire device. But until now, Apple offered only two ways for I.T. to manage its iOS devices: device enrollments, which offered device-wide management capabilities to admins, or those same capabilities combined with an automated setup process. At Apple’s Worldwide Developers Conference last week, the company announced plans to introduce a third method: user enrollments.

This new MDM (mobile device management) enrollment option is meant to better balance the needs of I.T. to protect sensitive corporate data and manage the software and settings available to users, while at the same time allowing users’ private personal data to remain separate from I.T. oversight.

According to Apple, when both users’ and I.T.’s needs are in balance, users are more likely to accept a corporate “bring your own device” or BYOD program — something that can ultimately save the business money that doesn’t have to be invested in hardware purchases.

The new user enrollments option for MDM has three components: a managed Apple ID that sits alongside the personal ID; cryptographic separation of personal and work data; and a limited set of device-wide management capabilities for I.T.

The managed Apple ID will be the user’s work identity on the device, and is created by the admin in either Apple School Manager or Apple Business Manager — depending on whether this is for a school or a business. The user signs into the managed Apple ID during the enrollment process.

From that point forward until the enrollment ends, the company’s managed apps and accounts will use the managed Apple ID’s iCloud account.

Meanwhile, the user’s personal apps and accounts will use the personal Apple ID’s iCloud account, if one is signed into the device.

Third-party apps can then be used in either managed or unmanaged mode.

That means users won’t be able to change modes or run the apps in both modes at the same time. However, some of the built-in apps like Notes will be account-based, meaning the app will use the appropriate Apple ID — either the managed one or personal — depending on which account they’re operating on at the time.

To separate work data from personal data, iOS will create a managed APFS volume at the time of enrollment. The volume uses separate cryptographic keys, which are destroyed along with the volume itself when the enrollment period ends. (iOS has always removed managed data when an enrollment ends; the key destruction is a cryptographic backstop in case anything goes wrong during unenrollment, the company explained.)
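
Apple didn’t detail the mechanism, but the parenthetical describes what is often called crypto-shredding: if the managed volume’s contents only ever exist encrypted under a volume key, destroying that key makes any surviving bytes unreadable. A minimal illustration using the third-party cryptography package (this is a sketch of the concept, not Apple’s implementation):

```python
from cryptography.fernet import Fernet

# Encrypt "managed" data under a volume-specific key, as in the APFS scheme.
volume_key = Fernet.generate_key()
ciphertext = Fernet(volume_key).encrypt(b"managed corporate notes")

# While the enrollment lasts, the key decrypts the data.
assert Fernet(volume_key).decrypt(ciphertext) == b"managed corporate notes"

# At unenrollment, destroying the key is equivalent to destroying the data:
# the ciphertext may linger on disk, but nothing can decrypt it.
volume_key = None
```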

The managed volume will host the local data stored by any managed third-party apps along with the managed data from the Notes app. It will also house a managed keychain that stores secure items like passwords and certificates; the authentication credentials for managed accounts; and mail attachments and full email bodies.

The system volume does host a central database for mail, including some metadata and five-line previews, but this is removed as well when the enrollment ends.

Users’ personal apps and their data can’t be managed by the I.T. admin, so they’re never at risk of having their data read or erased.

And unlike device enrollments, user enrollments don’t provide a UDID or any other persistent identifier to the admin. Instead, a new identifier called the “enrollment ID” is created; it is used for all communications with the MDM server and is destroyed when enrollment ends.
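
The privacy difference between the two identifiers is that a UDID is permanent and device-bound, while an enrollment ID is minted fresh for each enrollment and discarded with it. A toy illustration (the UDID value is a made-up example):

```python
import uuid

# A UDID never changes: every MDM server that ever manages this device
# sees the same value, so it can correlate a user across jobs and years.
udid = "00008020-001C2D4A0E85002E"  # made-up example, shown for format only

# An enrollment ID is random and scoped to one enrollment, so the server
# can address the device without holding a persistent identifier.
enrollment_id = str(uuid.uuid4())
print(enrollment_id)
```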

Apple also noted that one of the big reasons users fear corporate BYOD programs is because they think the I.T. admin will erase their entire device when the enrollment ends — including their personal apps and data.

To address this concern, MDM queries can only return managed results.

In practice, that means I.T. can’t even find out what personal apps are installed on the device — something that can feel like an invasion of privacy to end users. (This feature will be offered for device enrollments, too.) And because I.T. doesn’t know what personal apps are installed, it also can’t restrict certain apps’ use.

User enrollments will also not support the “erase device” command — and they don’t have to, because I.T. will know the sensitive data and emails are gone. There’s no need for a full device wipe.

Similarly, the Exchange Server can’t send its remote wipe command — just the account-only remote wipe to remove the managed data.

Another new feature related to user enrollments is how traffic for managed accounts is guided through the corporate VPN. Using the per-app VPN feature, traffic from the Mail, Contacts, and Calendars built-in apps will only go through the VPN if the domains match that of the business. For example, mail.acme.com can pass through the VPN, but not mail.aol.com. In other words, the user’s personal mail remains private.
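
The rule Apple describes amounts to routing by domain suffix. A toy version of the decision, with a hypothetical corporate domain list:

```python
CORPORATE_DOMAINS = ("acme.com",)  # hypothetical; set by the MDM profile

def routes_through_vpn(host):
    """Send built-in app traffic through the corporate VPN only for corporate domains."""
    return any(host == d or host.endswith("." + d) for d in CORPORATE_DOMAINS)

print(routes_through_vpn("mail.acme.com"))  # True  -> corporate VPN
print(routes_through_vpn("mail.aol.com"))   # False -> stays private
```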

This addresses what has been an ongoing concern about how some MDM solutions operate — routing traffic through a corporate proxy meant the business could see the employees’ personal emails, social networking accounts, and other private information.

User enrollments also only enforce a six-digit non-simple passcode, as the MDM server can’t help users by clearing the passcode if they forget it.

Some today advise users not to accept BYOD MDM policies because of the impact on personal privacy. While a business has every right to manage and wipe its own apps and data, I.T. has overstepped with some of its remote management capabilities — including its ability to erase entire devices, access personal data, track a phone’s location, restrict personal use of apps, and more.

Apple’s MDM policies haven’t included GPS tracking, however, nor does this new option.

Apple’s new policy is a step towards a better balance of concerns but will require that users understand the nuances of these more technical details — which they may not.

That user education will come down to the businesses that insist on these MDM policies to begin with — they will need to create their own documentation and explainers, and establish new privacy policies with their employees that detail what sort of data they can and cannot access, as well as what sort of control they have over corporate devices.


By Sarah Perez

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

“That’s not the newsfeed, that’s not pages, that’s not profiles. That’s Marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”

Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


By Arman Tabatabai

InCountry raises $7M to help multinationals store private data in countries of origin

The last few years have seen a rapid expansion of national regulations that, in the name of data protection, govern how and where organizations like healthcare and insurance companies, financial services companies and others store residents’ personal data that is used and collected through their services.

But keeping abreast of and following those rules has proven to be a minefield for companies. Now, a startup is coming out of stealth with a new product to help.

InCountry, which provides “data residency-as-a-service” to businesses and other organizations, is launching with $7 million in funding and its first product: Profile, which focuses on user profile and registration information in 50 countries on six continents. There will be more products launched covering payment, transaction and health data later in the year, co-founder and CEO Peter Yared said in an interview.

The funding — a seed round — comes from Caffeinated Capital, Felicis Ventures, Ridge Ventures, Bloomberg Beta, Charles River Ventures and Global Founders Capital.

InCountry is founded and led by Yared, a repeat entrepreneur who most recently co-founded and eventually sold the “micro-app” startup Sapho, which was acquired by Citrix. Other companies he’s sold startups to include VMware, Sun and Oracle, and he was also once the CIO of CBS Interactive.

Yared told me in an interview that he has actually been self-funding, running and quietly accruing customers for InCountry for two years. He decided to raise this seed round — a number of investors in this list are repeat backers of his ventures — to start revving up the engines. (One of those ‘revs’ is an interesting talent hire: today the company is also announcing Alex Castro as chief product officer. Castro was an early employee working on Amazon Web Services and Microsoft’s move into CRM, and also worked on autonomous vehicles at Uber.)

If you have never heard of the term “data residency-as-a-service”, that might be because it’s something that has been coined by Yared himself to describe the function of his startup.

InCountry is part tech provider, part consultancy.

On the tech side, it handles the technical aspects of storing personal data within a specific national border for companies that might otherwise run other aspects of their services from other locations. That includes SDKs that link to a variety of data centers and cloud service providers, allowing new countries to be added in under 10 minutes; two types of encryption on the data to make sure that it remains secure; and managed services for its biggest clients. (InCountry is not disclosing any client names right now, except for video-editing company Revl.)
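
InCountry hasn’t published its SDK’s interface, so the following is only a conceptual sketch of “data residency-as-a-service”: a routing layer that keys storage location off the record subject’s country. The endpoints and function below are invented for illustration:

```python
# Hypothetical mapping from country code to an in-country storage endpoint.
ENDPOINTS = {
    "DE": "https://de.storage.example.com",
    "IN": "https://in.storage.example.com",
}

def save_profile(country, record):
    """Route a user-profile record to storage inside the user's own country."""
    endpoint = ENDPOINTS.get(country)
    if endpoint is None:
        raise ValueError(f"no compliant storage configured for {country}")
    # A real SDK would encrypt the record and send it here; we just report the routing.
    return f"stored in-country at {endpoint}"

print(save_profile("DE", {"name": "example user"}))
```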

On the consultancy side, it has an in-house team of researchers and partnerships with law firms to continually update its policies and ensure that customers remain compliant with any changes. InCountry says that to provide further assurance to customers, it provides insurance of up to three times the value of a customer’s spend.

InCountry’s aim is twofold: first, to solve the many pain points that a company or other organization has to go through when considering how to comply with data hosting regulations; and second, to make sure that by making it easy, companies actually do what’s required of them.

As Yared describes it, the process for becoming data compliant can be painful, but his startup is applying an economy of scale, since the process is essentially one that everyone will have to follow:

“They have to figure out what the requirements are, find the facility, audit the facility, which includes making sure it’s not owned by the state, make sure the network is properly segregated, develop the right software layer to manage the data, hire program managers, network operations people and more,” he said. And for those handling this themselves, cloud service providers will typically cover a smaller footprint of regions, 17 at most for the biggest. “We take care of all that, and add on more as we need to.”

The problem is that because the process is so painful, many companies often flout the requirements, which isn’t good for their customers, nor for the companies themselves, which run the risk of getting fined.

“It’s universally acknowledged that the way data is stored and handled by most companies is not meeting the average requirements of citizens’ rights,” Yared said. “That’s why we now have GDPR, and will see more GDPR-like regulations get rolled out.”

One thing that InCountry is not touching is data such as messages between users and other kinds of personal files — data that has been the subject of sometimes very controversial data regulations. Its scope is limited to the pieces of personal information about users — bank details, health information, social security numbers and so on — that are part and parcel of what we provide to companies in the course of interacting with them online.

“In early outreach, we have had people ask for private data storage, but we would be ethically uncomfortable with that,” Yared said. “We want to be in the business of helping people who have regulated data, by storing that in a compliant manner that is more helpful, and more fruitful to users.”

The aim will be to add more services over time covering ever more countries, to keep in line with the growing trend among regulators to put more data residency laws in place.

“We’re witnessing more countries signing in data laws each week, and we’re only going to see those numbers increase,” said Sundeep Peechu, Managing Director at Felicis Ventures, in a statement. “We’re excited to be leading the round and reinvesting in Peter as he launches his seventh company. He recognized the problem early on and started working on a solution nearly two years ago that goes beyond regional data centers and patchwork in-house DIY solutions.”


By Ingrid Lunden

How to handle dark data compliance risk at your company

Slack and other consumer-grade productivity tools have been taking off in workplaces large and small — and data governance hasn’t caught up.

Whether it’s litigation, compliance with regulations like GDPR, or concerns about data breaches, legal teams need to account for new types of employee communication. And that’s hard when work is happening across the latest messaging apps and SaaS products, which make data searchability and accessibility more complex.

Here’s a quick look at the problem, followed by our suggestions for best practices at your company.

Problems

The increasing frequency of reported data breaches and expanding jurisdiction of new privacy laws are prompting conversations about dark data and risks at companies of all sizes, even small startups. Data risk discussions necessarily include the risk of a data breach, as well as preservation of data. Just two weeks ago it was reported that Jared Kushner used WhatsApp for official communications and screenshots of those messages for preservation, which commentators say complies with recordkeeping laws but raises questions about potential admissibility as evidence.


By Arman Tabatabai

Can predictive analytics be made safe for humans?

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law as well as consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn’t used carefully, it may prevent thousands from getting loans, for instance, if an underwriting algorithm is biased against certain users.

I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.

“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.

“Traditional privacy regulation asks whether you gave someone notice and gave them a choice,” he explains. That principle is the bedrock for Europe’s GDPR law, and for the patchwork of laws in the U.S. that protect privacy. It’s based around the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.

The rise of “predictive analytics,” though, has completely demolished such privacy legislation. Predictive analytics is a fuzzy term, but it essentially means interpreting raw data and drawing new conclusions through inference. This is the story of the famous Target data crisis, where the retailer recommended pregnancy-related goods to women who had certain patterns of purchases. As Charles Duhigg explained at the time:

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to their delivery date.
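
To make Duhigg’s point concrete: the sensitive attribute is never collected, only computed from “surface” purchases. A toy scorer shows the mechanics; the weights are invented for illustration, whereas Target’s actual model was trained on purchase histories:

```python
# Invented weights for illustration only.
SIGNALS = {"unscented_soap": 0.25, "cotton_balls": 0.25,
           "hand_sanitizer": 0.25, "washcloths": 0.25}

def pregnancy_score(basket):
    """Combine innocuous surface purchases into an inferred, never-collected attribute."""
    return sum(weight for item, weight in SIGNALS.items() if item in basket)

basket = {"unscented_soap", "cotton_balls", "hand_sanitizer", "washcloths"}
print(pregnancy_score(basket))  # 1.0 -- an inference the shopper never agreed to share
```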

Predictive analytics is difficult to predict. Hirsch says, “I don’t think any of us are going to be intelligent enough to understand predictive analytics.” Talking about customers, he said, “They give up their surface items — like cotton balls and unscented body lotion — they know they are sharing that, but they don’t know they are giving up their pregnancy status. … People are not going to know how to protect themselves because they can’t know what can be inferred from their surface data.”

In other words, the scale of those predictions completely undermines notice and consent.

Even though the law hasn’t caught up to this exponentially more challenging problem, companies themselves seem to be responding in the wake of Target and Facebook’s very public scandals. “What we are hearing is that we don’t want to put our customers at risk,” Hirsch explained. “They understand that this predictive technology gives them really awesome power and they can do a lot of good with it, but they can also hurt people with it.” The key actors here are corporate chief privacy officers, a role that has cropped up in recent years to mitigate some of these challenges.

Hirsch is spending significant time trying to build new governance strategies to allow companies to use predictive analytics in an ethical way, so that “we can achieve and enjoy its benefits without having to bear these costs from it.” He’s focused on four areas: privacy, manipulation, bias, and procedural unfairness. “We are going to set out principles on what is ethical and what is not,” he said.

Much of that focus has been on how to help regulators build policies that can manage predictive analytics. Since people can’t understand the extent to which inferences can be made from their data, “I think a much better regulatory approach is to have someone who does understand, ideally some sort of regulator, who can draw some lines.” Hirsch has been researching how the FTC’s Unfairness Authority may be a path forward for getting such policies into practice.

He analogized this to the Food and Drug Administration. “We have no ability to assess the risks of a given drug [so] we give it to an expert agency and allow them to assess it,” he said. “That’s the kind of regulation that we need.”

Hirsch overall has a balanced perspective on the risks and rewards here. He wants analytics to be “more socially acceptable” but at the same time sees the need for careful scrutiny and oversight to ensure that consumers are protected. Ultimately, he sees that as incredibly beneficial to companies, which can extract the value from this tech without risking consumer ire.

Who will steal your data more: China or America?

Talking about data ethics, Europe is in the middle of a superpower pincer. China’s telecom giant Huawei has made expansion on the continent a major priority, while the United States has been sending delegation after delegation to convince its Western allies to reject Chinese equipment. The dilemma was quite visible last week at MWC-Barcelona, where the two sides each tried to make their case.

It’s been years since the Snowden revelations showed that the United States was operating an enormous eavesdropping infrastructure targeting countries throughout the world, including across Europe. Huawei has reiterated its stance that it does not steal information from its equipment, and has repeated its demands that the Trump administration provide public proof of flaws in its security.

There is an abundance of moral relativism here, but I see this as increasingly a litmus test of the West on China. China has not hidden its ambitions to take a prime role in East Asia, nor has it hidden its intentions to build a massive surveillance network over its own people or to influence the media overseas.

Those tactics, though, are straight out of the American playbook, which lost its moral legitimacy over the past two decades through some combination of the Iraq War, Snowden, WikiLeaks and other public scandals that have undermined trust in the country overseas.

Security and privacy might have been a competitive advantage for American products over their Chinese counterparts, but that advantage has been weakened for many countries to near zero. We are increasingly going to see countries choose a mix of Chinese and American equipment in sensitive applications, if only to ensure that if one country is going to steal their data, it might as well be balanced.

Obsessions

  • Perhaps some more challenges around data usage and algorithmic accountability
  • We have a bit of a theme around emerging markets, macroeconomics, and the next set of users to join the internet.
  • More discussion of megaprojects, infrastructure, and “why can’t we build things”

Thanks

To every member of Extra Crunch: thank you. You allow us to get off the ad-laden media churn conveyor belt and spend quality time on amazing ideas, people, and companies. If I can ever be of assistance, hit reply, or send an email to [email protected].

This newsletter is written with the assistance of Arman Tabatabai from New York.

You’re reading the Extra Crunch Daily. Like this newsletter? Subscribe for free to follow all of our discussions and debates.


By Danny Crichton

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks as France’s privacy regulator placed a record fine on Google under Europe’s General Data Protection Regulation (GDPR) rules, which the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include:

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: [email protected].


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more ‘privacy.’”

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies is a good thing, but it doesn’t solve the privacy paradox that those same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), or that the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of a rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign-in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for antitrust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws governing the commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data” and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases and defines practical standards. Cutting-edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for antitrust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches like the framework ITI released last fall must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and with reading and parsing as many different policies and settings. No individual has the time or the capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked by what they’re learning. They’re increasingly paying attention and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

With the EU GDPR in effect since last May (setting a standard other nations are emulating), with the adoption of a highly regulatory and broadly applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come when Congress will adopt a comprehensive federal privacy law.

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with highly developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.


By Arman Tabatabai

Facial recognition startup Kairos acquires Emotion Reader

Kairos, the facial recognition company whose technology is used for brand marketing, has announced the acquisition of EmotionReader.

EmotionReader is a Limerick, Ireland-based startup that uses algorithms to analyze facial expressions around video content. The startup lets brands and marketers measure viewers’ emotional response to video, analyze that response via an analytics dashboard, and adjust media spend based on viewer response.
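Neither company has published implementation details, but the general shape of this kind of video emotion analytics is straightforward: sample frames from a video, run each through a facial-expression classifier, and aggregate the results for a dashboard. The sketch below is a minimal illustration of that pattern in Python; the score_expression stub and the emotion labels are assumptions for illustration, not EmotionReader’s actual model.

```python
# Minimal sketch of per-frame emotion aggregation for video analytics.
# Illustrative only -- not EmotionReader's actual pipeline.
from collections import Counter

import cv2  # OpenCV, used here just to decode video frames


def score_expression(frame) -> str:
    """Placeholder for a trained facial-expression classifier.

    A real system would detect the face in the frame and run a model
    that outputs a label such as "happy" or "surprised".
    """
    return "neutral"  # stub value


def emotion_profile(video_path: str, sample_every: int = 30) -> Counter:
    """Score every Nth frame and tally the dominant emotions.

    The resulting counts are the kind of aggregate a dashboard could
    chart against a video timeline or media-spend decisions.
    """
    counts = Counter()
    capture = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video
            break
        if frame_idx % sample_every == 0:
            counts[score_expression(frame)] += 1
        frame_idx += 1
    capture.release()
    return counts  # e.g. Counter({"neutral": 42, "happy": 7})
```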

The acquisition makes sense considering that Kairos’ core business is focused on facial identification for enterprise clients. Knowing who someone is, paired with how they feel about your content, is a powerful tool for brands and marketers.

The idea for Kairos started when founder Brian Brackeen was making HR time-clocking systems for Apple. People were cheating the system, so he decided to implement facial recognition to ensure that employees were actually clocking in and out when they said they were.

That premise spun out into Kairos, and Brackeen soon realized that facial identification as a service was much more powerful than any niche time-clocking service.
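The article doesn’t describe how that original clock-in check worked. A common way to implement this kind of verification is to compare a face embedding captured at the clock against the employee’s enrolled embedding; the sketch below assumes those embeddings come from some face-recognition model, and the 0.8 threshold is an illustrative value, not a Kairos parameter.

```python
# Hypothetical sketch of face verification at clock-in.
# Illustrative only -- not Kairos' actual implementation.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_clock_in(live: np.ndarray, enrolled: np.ndarray,
                    threshold: float = 0.8) -> bool:
    """Accept a clock-in only if the live face matches the enrolled one.

    The threshold is an assumed value; production systems tune it
    against measured false-accept and false-reject rates.
    """
    return cosine_similarity(live, enrolled) >= threshold
```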

But Brackeen is very cautious with the technology Kairos has built.

While Kairos aims to make facial recognition technology (and all the powerful insights that come with it) accessible and available to all businesses, Brackeen has been very clear about the fact that Kairos isn’t interested in selling this technology to government agencies.

Brackeen recently contributed a post right here on TechCrunch outlining the various reasons why governments aren’t ready for this type of technology. Alongside the glaring invasion of personal privacy, there are also serious issues around bias against people of color.

From the post:

There is no place in America for facial recognition that supports false arrests and murder. In a social climate wracked with protests and angst around disproportionate prison populations and police misconduct, engaging software that is clearly not ready for civil use in law enforcement activities does not serve citizens, and will only lead to further unrest.

As part of the deal, EmotionReader CTO Dr. Stephen Moore will run Kairos’ new Singapore-based R&D center, allowing for upcoming APAC expansion.

Kairos has raised approximately $8 million from investors New World Angels, Kapor Capital, 500 Startups, Backstage Capital, Morgan Stanley, Caerus Ventures, and Florida Institute.


By Jordan Crook