President throws latest wrench in $10B JEDI cloud contract selection process

The $10 billion, decade-long JEDI cloud contract drama continues. It’s a process that has been dogged by complaints, regulatory oversight and court cases. Throughout the months-long selection process, the Pentagon has repeatedly denied accusations that the contract was somehow written to make Amazon a favored vendor, but today the Washington Post reports that President Trump has asked the newly appointed defense secretary, Mark T. Esper, to examine the process because of concerns over that very matter.

The Defense Department called for bids last year for a $10 billion, decade-long contract. From the beginning, Oracle in particular complained that the process favored Amazon. Even before the RFP process began, Oracle executive Safra Catz took her concerns directly to the president, but at that time he did not intervene. Later, the company filed a complaint with the Government Accountability Office, which ruled that the procurement process was fair.

Finally, the company took the case to court, alleging that a person involved in defining the selection process had a conflict of interest because he had worked at Amazon before joining the DoD. That case was dismissed last month.

In April, the DoD named Microsoft and Amazon as the two finalists, and the winner was finally expected to be named some time this month. It appeared that we were close to the finish line, but now that the president has intervened at the 11th hour, it’s impossible to know what the outcome will be.

What we do know is that this is a pivotal project for the DoD, which is aimed at modernizing the U.S. military for the next decade and beyond. The fact is that the two finalists made perfect sense. They are the two market leaders, and each has tools, technologies and experience working with sensitive government contracts.

Amazon is the market leader with 33% market share. Microsoft is number two with 16%. Google, the number three vendor, dropped out before the RFP process began. It is unclear at this point whether the president’s intervention will have any influence on the final decision, but the Washington Post reports it is an unusual departure from government procurement procedures.


By Ron Miller

Grasshopper’s Judith Erwin leaps into innovation banking

In the years following the financial crisis, de novo bank activity in the US slowed to a trickle. But as memories fade, the economy expands and the potential of tech-powered financial services marches forward, entrepreneurs have once again been asking the question, “Should I start a bank?”

And by bank, I’m not referring to a neobank, which sits on top of a bank, or a fintech startup that offers an interesting banking-like service of one kind or another. I mean a bank bank.

One of those entrepreneurs is Judith Erwin, a well-known business banking executive who was part of the founding team at Square 1 Bank, which was bought in 2015. Fast forward a few years and Erwin is back, this time as CEO of the cleverly named Grasshopper Bank in New York.

With over $130 million in capital raised from investors including Patriot Financial and T. Rowe Price Associates, Grasshopper has a notable amount of heft for a banking newbie. But as Erwin and her team seek to build share in the innovation banking market, she knows that she’ll need the capital as she navigates a hotly contested niche that has benefited from a robust start-up and venture capital environment.

Gregg Schoenberg: Good to see you, Judith. To jump right in, in my opinion, you were a key part of one of the most successful de novo banks in quite some time. You were responsible for VC relationships there, right?

…My background is one where people give me broken things, I fix them and give them back.

Judith Erwin: The VC relationships and the products and services managing the balance sheet around deposits. Those were my two primary roles, but my background is one where people give me broken things, I fix them and give them back.

Schoenberg: Square 1 was purchased for about 22 times earnings and 260% of tangible book, correct?

Erwin: Sounds accurate.

Schoenberg: Plus, the bank had a phenomenal earnings trajectory. Meanwhile, PacWest, which acquired you, was a “perfectly nice bank.” Would that be a fair characterization?

Erwin: Yes.

Schoenberg: Is part of the motivation to start Grasshopper to continue on a journey that maybe ended a little bit prematurely last time?

Erwin: That’s a great insight, and I did feel like we had sold too soon. It was a great deal for the investors — which included me — and so I understood it. But absolutely, a lot of what we’re working to do here are things I had hoped to do at Square 1.

Image via Getty Images / Classen Rafael / EyeEm

Schoenberg: You’re obviously aware of the 800-pound gorilla in the room in the form of Silicon Valley Bank. You’ve also got the megabanks that play in the segment, as well as Signature Bank, First Republic, Bridge Bank and others.


By Gregg Schoenberg

The startups creating the future of RegTech and financial services

Technology has been used to manage regulatory risk since the advent of the ledger book (or the Bloomberg terminal, depending on your reference point). However, the cost-consciousness internalized by banks during the 2008 financial crisis combined with more robust methods of analyzing large datasets has spurred innovation and increased efficiency by automating tasks that previously required manual reviews and other labor-intensive efforts.

So even if RegTech wasn’t born during the financial crisis, it was probably old enough to drive a car by 2008. The intervening 11 years have seen RegTech’s scope and influence grow.

RegTech startups targeting financial services, or FinServ for short, require very different growth strategies — even compared to other enterprise software companies. From a practical perspective, everything from the security requirements influencing software architecture and development to the sales process is substantially different for FinServ RegTechs.

The most successful RegTechs are those that draw on expertise from security-minded engineers, FinServ-savvy sales staff, and legal and compliance professionals from the industry. FinServ RegTechs have emerged in a number of areas due to the increasing directives emanating from financial regulators.

This new crop of startups performs sophisticated background checks and transaction monitoring for anti-money laundering purposes pursuant to the Bank Secrecy Act, the Office of Foreign Asset Control (OFAC) and FINRA rules; tracks supervision requirements and retention for electronic communications under FINRA, SEC, and CFTC regulations; as well as monitors information security and privacy laws from the EU, SEC, and several US state regulators such as the New York Department of Financial Services (“NYDFS”).

In this article, we’ll examine RegTech startups in these three fields to determine how solutions have been structured to meet regulatory demand as well as some of the operational and regulatory challenges they face.

Know Your Customer and Anti-Money Laundering


By Danny Crichton

Liberty’s challenge to UK state surveillance powers reveals shocking failures

A legal challenge to the UK’s controversial mass surveillance regime has revealed shocking failures by the main state intelligence agency in handling people’s information. The agency has broad powers to hack computers and phones and intercept digital communications.

The challenge, by rights group Liberty, led last month to an initial finding that MI5 had systematically breached safeguards in the UK’s Investigatory Powers Act (IPA) — breaches the Home Secretary, Sajid Javid, euphemistically couched as “compliance risks” in a carefully worded written statement that was quietly released to parliament.

Today Liberty has put more meat on the bones of the finding of serious legal breaches in how MI5 handles personal data, culled from newly released (but redacted) documents that it says describe the “undoubtedly unlawful” conduct of the UK’s main security service which has been retaining innocent people’s data for years.

The series of 10 documents and letters from MI5 and the Investigatory Powers Commissioner’s Office (IPCO), the body charged with overseeing the intelligence agencies’ use of surveillance powers, show that the spy agency has failed to meet its legal duties for as long as the IPA has been law, according to Liberty.

The controversial surveillance legislation passed into UK law in November 2016 — enshrining a system of mass surveillance of digital communications which includes a provision that logs of all Internet users’ browsing activity be retained for a full year, accessible to a wide range of government agencies (not just law enforcement and/or spy agencies).

The law also allows the intelligence agencies to maintain large databases of personal information on UK citizens, even if they are not under suspicion of any crime. It also sanctions state hacking of devices, networks and services, including bulk hacking on foreign soil, and gives U.K. authorities the power to require a company to remove encryption, or limit the rollout of end-to-end encryption on a future service.

The IPA has faced a series of legal challenges since making it onto the statute books, and the government has been forced to amend certain aspects of it on court order — including beefing up restrictions on access to web activity data. Other challenges to the controversial surveillance regime, including Liberty’s, remain ongoing.

The newly released court documents include damning comments on MI5’s handling of data by the IPCO — which writes that: “Without seeking to be emotive, I consider that MI5’s use of warranted data… is currently, in effect, in ‘special measures’ and the historical lack of compliance… is of such gravity that IPCO will need to be satisfied to a greater degree than usual that it is ‘fit for purpose’.”

Liberty also says MI5 knew for three years of failures to maintain key safeguards — such as the timely destruction of material, and the protection of legally privileged material — before informing the IPCO.

Yet a key government sales pitch for passing the legislation was the claim of a ‘world class’ double-lock authorization and oversight regime to ensure the claimed safeguards on intelligence agencies’ powers to intercept and retain data.

So the latest revelations stemming from Liberty’s legal challenge represent a major embarrassment for the government.

“It is of course paramount that UK intelligence agencies demonstrate full compliance with the law,” the home secretary wrote in the statement last month, before adding his own political spin: “In that context, the interchange between the Commissioner and MI5 on this issue demonstrates that the world leading system of oversight established by the Act is working as it should.”

Liberty comes to the opposite conclusion on that point — emphasizing that warrants for bulk surveillance were issued by senior judges “on the understanding that MI5’s data handling obligations under the IPA were being met — when they were not”.

“The Commissioner has pointed out that warrants would not have been issued if breaches were known,” it goes on. “The Commissioner states that ‘it is impossible to sensibly reconcile the explanation of the handling of arrangements the Judicial Commissioners [senior judges] were given in briefings… with what MI5 knew over a protracted period of time was happening.’”

So, basically, it’s saying that MI5 — having at best misled judges, whose sole job it is to oversee its legal access to data, about its systematic failures to lawfully handle data — has rather made a sham of the entire ‘world class’ oversight regime.

Liberty also flags what it calls “a remarkable admission to the Commissioner” — made by MI5’s deputy director general — who it says acknowledges that personal data collected by MI5 is being stored in “ungoverned spaces”. It adds that the MI5 legal team claims there is “a high likelihood [of material] being discovered when it should have been deleted, in a disclosure exercise leading to substantial legal or oversight failure”.

“Ungoverned spaces” is not a phrase that made it into Javid’s statement last month on MI5’s “compliance risks”.

But the home secretary did acknowledge: “A report of the Investigatory Powers Commissioner’s Office suggests that MI5 may not have had sufficient assurance of compliance with these safeguards within one of its technology environments.”

Javid also said he had set up “an independent review to consider and report back to me on what lessons can be learned for the future”. Though it’s unclear whether that report will be made public. 

We reached out to the Home Office for comment on the latest revelations from Liberty’s litigation. But a spokesman just pointed us to Javid’s prior statement. 

In a statement, Liberty’s lawyer, Megan Goulding, said: “These shocking revelations expose how MI5 has been illegally mishandling our data for years, storing it when they have no legal basis to do so. This could include our most deeply sensitive information – our calls and messages, our location data, our web browsing history.

“It is unacceptable that the public is only learning now about these serious breaches after the Government has been forced into revealing them in the course of Liberty’s legal challenge. In addition to showing a flagrant disregard for our rights, MI5 has attempted to hide its mistakes by providing misinformation to the Investigatory Powers Commissioner, who oversees the Government’s surveillance regime.

“And, despite a light being shone on this deplorable violation of our rights, the Government is still trying to keep us in the dark over further examples of MI5 seriously breaching the law.”


By Natasha Lomas

Beyond costs, what else can we do to make housing affordable?

This week on Extra Crunch, I am exploring innovations in inclusive housing, looking at how 200+ companies are creating more access and affordability. Yesterday, I focused on startups trying to lower the costs of housing, from property acquisition to management and operations.

Today, I want to focus on innovations that improve housing inclusion more generally, such as efforts to pair housing with transit, small business creation, and mental rehabilitation. These include social impact-focused interventions, interventions that increase income and mobility, and ecosystem-builders in housing innovation.

Nonprofits and social enterprises lead many of these innovations. Yet because these areas are perceived as less lucrative, fewer technologists and other professionals have entered them. New business models and technologies have the opportunity to scale many of these alternative institutions — and create tremendous social value. Social impact is increasingly important to millennials, with brands like Patagonia having created loyal fan bases through purpose-driven leadership.

While each of these sections could be its own market map, this overall market map serves as an initial guide to each of these spaces.

Social impact innovations

These innovations address:


By Arman Tabatabai

Innovations in inclusive housing

Housing is big money. The industry has trillions under management and hundreds of billions under development.

And investors have noticed the potential. Opendoor raised nearly $1.3 billion to help homeowners buy and sell houses more quickly. Katerra raised $1.2 billion to optimize building development and construction, and Compass raised the same amount to help brokers sell real estate better. Even Amazon and Airbnb have entered the fray with high-profile investments.

Amidst this frenetic growth is the seed of the next wave of innovation in the sector. The housing industry — and its affordability problem — is only likely to balloon. By 2030, 84% of the population of developed countries will live in cities.

Yet innovation in housing lags behind that of other industries. In construction, a major aspect of housing development, players spend less than 1% of their revenues on research and development. Technology companies, like the Amazons of the world, spend nearly 10% on average.

Innovations in older, highly-regulated industries, like housing and real estate, are part of what Steve Case calls the “third wave” of technology. VCs like Case’s Revolution Fund and the SoftBank Vision Fund are investing billions into what they believe is the future.

These innovations are far from silver bullets, especially if they lack involvement from underrepresented communities, avoid policy, and ignore distributive questions about who gets to benefit from more housing.

Yet there are hundreds of interventions reworking housing that cannot be ignored. To help entrepreneurs, investors, and job seekers interested in creating better housing, I mapped these innovations in this package of articles.

To make sense of this broad field, I categorize innovations into two main groups, which I detail in two separate pieces on Extra Crunch. The first (Part 1) identifies the key phases of developing and managing housing. The second (Part 2) identifies interventions that contribute to housing inclusion more generally, such as efforts to pair housing with transit, small business creation, and mental rehabilitation.

Unfortunately, many of these tools don’t guarantee more affordability. Lowering acquisition costs, for instance, doesn’t mean that renters or homeowners will necessarily benefit from those savings. As a result, some tools likely need to be paired with others to ensure cost savings that benefit end users — and promote long-term affordability. I detail efforts here so that mission-driven advocates as well as startup founders can adopt them for their own efforts.


Topics We Explore

Today:

Coming Tomorrow:

  • Part 2. Other contributions to housing affordability
    • Social Impact Innovations
    • Landlord-Tenant Tools
    • Innovations that Increase Income
    • Innovations that Increase Transit Accessibility and Reduce Parking
    • Innovations that Improve the Ability to Regulate Housing
    • Organizations that Support the Housing Innovation Ecosystem
  • This is Just the Beginning
  • I’m Personally Closely Watching the Following Initiatives.
  • The Limitations of Technology
  • Move Fast and Protect People


Please feel free to let me know what else is exciting by adding a note to your LinkedIn invite here.

If you’re excited about this topic, feel free to subscribe to my future of inclusive housing newsletter by viewing a past issue here.


By Arman Tabatabai

Market map: the 200+ innovative startups transforming affordable housing

In this section of my exploration into innovation in inclusive housing, I am digging into the 200+ companies impacting the key phases of developing and managing housing.

Innovations have reduced costs in the most expensive phases of the housing development and management process. I explore innovations in each of these phases, including construction, land, regulatory, financing, and operational costs.

Reducing Construction Costs

This is one of the top three challenges developers face, exacerbated by rising building material costs and labor shortages.


By Arman Tabatabai

Diving into Google Cloud Next and the future of the cloud ecosystem

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller offered up their analysis on the major announcements that came out of Google’s Cloud Next conference this past week, as well as their opinions on the outlook for the company going forward.

Google Cloud announced a series of products, packages and services that it believes will improve the company’s competitive position and differentiate it from AWS and other peers. Frederic and Ron discuss all of Google’s most promising announcements, including its product for managing hybrid clouds, its new end-to-end AI platform, as well as the company’s heightened effort to improve customer service, communication, and ease of use.

“They have all of these AI and machine learning technologies, they have serverless technologies, they have containerization technologies — they have this whole range of technologies.

But it’s very difficult for the average company to take these technologies and know what to do with them, or to have the staff and the expertise to be able to make good use of them. So, the more they do things like this where they package them into products and make them much more accessible to the enterprise at large, the more successful that’s likely going to be because people can see how they can use these.

…Google does have thousands of engineers, and they have very smart people, but not every company does, and that’s the whole idea of the cloud. The cloud is supposed to take this stuff, put it together in such a way that you don’t have to be Google, or you don’t have to be Facebook, you don’t have to be Amazon, and you can take the same technology and put it to use in your company”

Image via Bryce Durbin / TechCrunch

Frederic and Ron dive deeper into how the new offerings may impact Google’s market share in the cloud ecosystem and which verticals represent the best opportunity for Google to win. The two also dig into the future of open source in cloud and how they see customer use cases for cloud infrastructure evolving.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


By Arman Tabatabai

Much to Oracle’s chagrin, Pentagon names Microsoft and Amazon as $10B JEDI cloud contract finalists

Yesterday, the Pentagon announced two finalists in the $10 billion, decade-long JEDI cloud contract process — and Oracle was not one of them. In spite of lawsuits, official protests and even back-channel complaining to the president, the two finalists are Microsoft and Amazon.

“After evaluating all of the proposals received, the Department of Defense has made a competitive range determination for the Joint Enterprise Defense Infrastructure Cloud request for proposals, in accordance with all applicable laws and regulations. The two companies within the competitive range will participate further in the procurement process,” Elissa Smith, DoD spokesperson for Public Affairs Operations told TechCrunch. She added that those two finalists were in fact Microsoft and Amazon Web Services (AWS, the cloud computing arm of Amazon).

This contract procurement process has caught the attention of the cloud computing market for a number of reasons. For starters, it’s a large amount of money, but perhaps the biggest reason it had cloud companies going nuts was that it is a winner-take-all proposition.

It is important to keep in mind that whichever company is ultimately chosen for this contract, the winner may never see $10 billion, and the contract may not last 10 years, because there are a number of points where the DoD could back out. But the idea of a single winner has been irksome for participants in the process from the start.

Over the course of the last year, Google dropped out of the running, while IBM and Oracle have been complaining to anyone who will listen that the contract unfairly favored Amazon. Others have questioned the wisdom of even going with a single-vendor approach. Even at $10 billion, an astronomical sum to be sure, we have pointed out that in the scheme of the cloud business, it’s not all that much money, but there is more at stake here than money.

There is a belief here that the winner could have an upper hand in other government contracts, that this is an entree into a much bigger pot of money. After all, if you are building the cloud for the Department of Defense and preparing it for a modern approach to computing in a highly secure way, you would be in a pretty good position to argue for other contracts with similar requirements.

In the end, in spite of the protests of the other companies involved, the Pentagon probably got this right. The two finalists are the most qualified to carry out the contract’s requirements. They are the top two cloud infrastructure vendors on the market, although Microsoft is far behind with around 13 or 14 percent market share. Amazon is far ahead with around 33 percent, according to several companies that track such things.

Microsoft in particular has tools and resources that would be very appealing, especially Azure Stack, a mini private version of Azure that can be stood up anywhere, an approach with great appeal to the military. But both companies have experience with government contracts, and both bring strengths and weaknesses to the table. It will undoubtedly be a tough decision.

In February, the contract drama took yet another turn when the department reported it was investigating new evidence of conflict of interest by a former Amazon employee, who was involved in the RFP process for a time before returning to the company. Smith reports that the department found no such conflict, but there could be some ethical violations they are looking into.

“The department’s investigation has determined that there is no adverse impact on the integrity of the acquisition process. However, the investigation also uncovered potential ethical violations, which have been further referred to DOD IG,” Smith explained.

The DoD is supposed to announce the winner this month, but the drama has continued non-stop.


By Ron Miller

The right way to do AI in security

Artificial intelligence applied to information security can engender images of a benevolent Skynet, sagely analyzing more data than imaginable and making decisions at lightspeed, saving organizations from devastating attacks. In such a world, humans are barely needed to run security programs, their jobs largely automated out of existence, relegating them to a role as the button-pusher on particularly critical changes proposed by the otherwise omnipotent AI.

Such a vision is still in the realm of science fiction. AI in information security is more like an eager, callow puppy attempting to learn new tricks – minus the disappointment written on their faces when they consistently fail. No one’s job is in danger of being replaced by security AI; if anything, a larger staff is required to ensure security AI stays firmly leashed.

Arguably, AI’s highest use case currently is to add futuristic sheen to traditional security tools, rebranding timeworn approaches as trailblazing sorcery that will revolutionize enterprise cybersecurity as we know it. The current hype cycle for AI appears to be the roaring, ferocious crest at the end of a decade that began with bubbly excitement around the promise of “big data” in information security.

But what lies beneath the marketing gloss and quixotic lust for an AI revolution in security? How did AI ascend to supplant the lustrous zest around machine learning (“ML”) that dominated headlines in recent years? Where is there true potential to enrich information security strategy for the better – and where is it simply an entrancing distraction from more useful goals? And, naturally, how will attackers plot to circumvent security AI to continue their nefarious schemes?

How did AI grow out of this stony rubbish?

The year AI debuted as the “It Girl” in information security was 2017. The year prior, MIT completed a study showing that “human-in-the-loop” AI outperformed AI and humans individually in attack detection. Likewise, DARPA conducted the Cyber Grand Challenge, a battle testing AI systems’ offensive and defensive capabilities. Until this point, security AI was imprisoned in the contrived halls of academia and government. Yet the history of two vendors exhibits how enthusiasm surrounding security AI was driven more by growth marketing than user needs.


By Arman Tabatabai

Peter Kraus dishes on the market

During my recent conversation with Peter Kraus, which was supposed to be focused on Aperture and its launch of the Aperture New World Opportunities Fund, I couldn’t help veering off into tangents about the market in general. Below is Kraus’ take on the availability of alpha generation, the Fed, inflation vs. Amazon, housing, the cross-ownership of US equities by a few huge funds and high-frequency trading.

Gregg Schoenberg: Will alpha be more available over the next five years than it has been over the last five?

To think that at some point equities won’t become more volatile and decline 20% to 30%… I think it’s crazy.

Peter Kraus: Do I think it’s more available in the next five years than it was in the last five years? No. Do I think people will pay more attention to it? Yes, because when markets are up to 30%, if you get another five, it doesn’t matter. When markets are down 30% and I save you five by being 25% down, you care.

GS: Is the Fed’s next move up or down?

PK: I think the Fed does zero, nothing. In terms of its next interest rate move, in my judgment, there’s a higher probability that it’s down versus up.


By Gregg Schoenberg

How to handle dark data compliance risk at your company

Slack and other consumer-grade productivity tools have been taking off in workplaces large and small — and data governance hasn’t caught up.

Whether it’s litigation, compliance with regulations like GDPR, or concerns about data breaches, legal teams need to account for new types of employee communication. And that’s hard when work is happening across the latest messaging apps and SaaS products, which make data searchability and accessibility more complex.

Here’s a quick look at the problem, followed by our suggestions for best practices at your company.

Problems

The increasing frequency of reported data breaches and expanding jurisdiction of new privacy laws are prompting conversations about dark data and risks at companies of all sizes, even small startups. Data risk discussions necessarily include the risk of a data breach, as well as preservation of data. Just two weeks ago it was reported that Jared Kushner used WhatsApp for official communications and took screenshots of those messages for preservation, which commentators say complies with recordkeeping laws but raises questions about potential admissibility as evidence.


By Arman Tabatabai

Can predictive analytics be made safe for humans?

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law as well as consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn’t used carefully, it may prevent thousands from getting loans, for instance, if an underwriting algorithm is biased against certain users.

I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.

“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.


“Traditional privacy regulation asks whether you gave someone notice and gave them a choice,” he explains. That principle is the bedrock for Europe’s GDPR law, and for the patchwork of laws in the U.S. that protect privacy. It’s based around the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.

The rise of “predictive analytics,” though, has completely demolished such privacy legislation. Predictive analytics is a fuzzy term, but it essentially means interpreting raw data and drawing new conclusions through inference. This is the story of the famous Target case, in which the retailer recommended pregnancy-related goods to women who had certain patterns of purchases. As Charles Duhigg explained at the time:

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to their delivery date.
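To make the inference mechanism concrete, here is a deliberately toy sketch of how such a signal might be scored. Target’s actual model is not public; the item weights and threshold below are invented purely for illustration of how individually innocuous "surface" purchases can combine into a sensitive inference.

```python
# Toy illustration of inference from "surface" purchase data.
# Weights and threshold are invented for illustration; a real system
# would learn them from historical purchase data, not hand-tune them.

SIGNAL_WEIGHTS = {
    "unscented_soap": 0.25,
    "extra_large_cotton_balls": 0.20,
    "hand_sanitizer": 0.15,
    "washcloths": 0.15,
    "unscented_lotion": 0.25,
}
THRESHOLD = 0.6  # score at or above which the shopper gets flagged


def pregnancy_score(basket: set) -> float:
    """Sum the weights of any tracked signal items present in the basket."""
    return sum(w for item, w in SIGNAL_WEIGHTS.items() if item in basket)


def flag_shopper(basket: set) -> bool:
    """Return True if the basket's combined score crosses the threshold."""
    return pregnancy_score(basket) >= THRESHOLD


# One innocuous item reveals little on its own...
print(flag_shopper({"hand_sanitizer"}))  # False
# ...but a combination of them crosses the threshold.
print(flag_shopper({"unscented_soap", "unscented_lotion", "washcloths"}))  # True
```

The point of the sketch is Hirsch’s exact concern: the shopper knowingly shares each individual item, but cannot anticipate what the combination reveals.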

Predictive analytics is difficult to predict. Hirsch says “I don’t think any of us are going to be intelligent enough to understand predictive analytics.” Talking about customers, he said “They give up their surface items — like cotton balls and unscented body lotion — they know they are sharing that, but they don’t know they are giving up their pregnancy status. … People are not going to know how to protect themselves because they can’t know what can be inferred from their surface data.”

In other words, the scale of those predictions completely undermines notice and consent.

Even though the law hasn’t caught up to this exponentially more challenging problem, companies themselves seem to be responding in the wake of Target and Facebook’s very public scandals. “What we are hearing is that we don’t want to put our customers at risk,” Hirsch explained. “They understand that this predictive technology gives them really awesome power and they can do a lot of good with it, but they can also hurt people with it.” The key actors here are corporate chief privacy officers, a role that has cropped up in recent years to mitigate some of these challenges.

Hirsch is spending significant time trying to build new governance strategies to allow companies to use predictive analytics in an ethical way, so that “we can achieve and enjoy its benefits without having to bear these costs from it.” He’s focused on four areas: privacy, manipulation, bias, and procedural unfairness. “We are going to set out principles on what is ethical and what is not,” he said.

Much of that focus has been on how to help regulators build policies that can manage predictive analytics. Since people can’t understand the extent to which inferences can be made from their data, “I think a much better regulatory approach is to have someone who does understand, ideally some sort of regulator, who can draw some lines.” Hirsch has been researching how the FTC’s Unfairness Authority may be a path forward for getting such policies into practice.

He analogized this to the Food and Drug Administration. “We have no ability to assess the risks of a given drug [so] we give it to an expert agency and allow them to assess it,” he said. “That’s the kind of regulation that we need.”

Hirsch overall has a balanced perspective on the risks and rewards here. He wants analytics to be “more socially acceptable” but, at the same time, sees the need for careful scrutiny and oversight to ensure that consumers are protected. Ultimately, he sees that as incredibly beneficial to companies, which can extract the value of this tech without risking consumer ire.

Who will steal your data more: China or America?

The Huawei logo is seen in the center of Warsaw, Poland

Jaap Arriens/NurPhoto via Getty Images

Speaking of data ethics, Europe is in the middle of a superpower pincer. China’s telecom giant Huawei has made expansion on the continent a major priority, while the United States has been sending delegation after delegation to convince its Western allies to reject Chinese equipment. The dilemma was quite visible last week at MWC-Barcelona, where the two sides each tried to make their case.

It’s been years since the Snowden revelations showed that the United States was operating an enormous eavesdropping infrastructure targeting countries throughout the world, including across Europe. Huawei has reiterated its stance that it does not steal information from its equipment, and has repeated its demands that the Trump administration provide public proof of flaws in its security.

There is an abundance of moral relativism here, but I see this as increasingly a litmus test of the West on China. China has not hidden its ambitions to take a prime role in East Asia, nor has it hidden its intentions to build a massive surveillance network over its own people or to influence the media overseas.

Those tactics, though, are straight out of the American playbook, which lost its moral legitimacy over the past two decades from some combination of the Iraq War, Snowden, Wikileaks, and other public scandals that have undermined trust in the country overseas.

Security and privacy might have been a competitive advantage for American products over their Chinese counterparts, but that advantage has been weakened for many countries to near zero. We are increasingly going to see countries choose a mix of Chinese and American equipment in sensitive applications, if only to ensure that if one country is going to steal their data, it might as well be balanced.

Things that seem interesting that I haven’t read yet

Obsessions

  • Perhaps some more challenges around data usage and algorithmic accountability
  • A bit of a theme around emerging markets, macroeconomics, and the next set of users to join the internet
  • More discussion of megaprojects, infrastructure, and “why can’t we build things”

Thanks

To every member of Extra Crunch: thank you. You allow us to get off the ad-laden media churn conveyor belt and spend quality time on amazing ideas, people, and companies. If I can ever be of assistance, hit reply, or send an email to [email protected].

This newsletter is written with the assistance of Arman Tabatabai from New York.



By Danny Crichton

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks as France’s privacy regulator levied a record fine against Google under Europe’s General Data Protection Regulation (GDPR), a decision the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include:

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: [email protected].


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more ‘privacy.’”

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies are good things, but they don’t solve the privacy paradox: the same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), and the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of a rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign-in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of ‘purpose specification’ as a tool for antitrust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws to the commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for antitrust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high-tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches, like the framework ITI released last fall, must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and reading and parsing as many different policies and settings. No individual has the time nor capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, but not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

With the EU GDPR in effect since last May (setting a standard other nations are emulating), with the adoption of a highly regulatory and broadly applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come when Congress will adopt a comprehensive federal privacy law.

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with highly developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.


By Arman Tabatabai

HyperScience, the machine learning startup tackling data entry, raises $30 million Series B

HyperScience, the machine learning company that turns human-readable data into machine-readable data, today announced the close of a $30 million Series B funding round led by Stripes Group, with participation from existing investors FirstMark Capital and Felicis Ventures as well as new investors Battery Ventures, Global Founders Fund, TD Ameritrade, and QBE.

HyperScience launched out of stealth in 2016 with a suite of enterprise products focused on the healthcare, insurance, finance and government industries. The original products were HSForms (which handled data entry by converting handwritten forms to digital), HSFreeForm (which performed a similar function for handwritten emails or other non-form content) and HSEvaluate (which could parse complex data on a form to help insurance companies approve or deny claims by pulling out all the relevant info).

Now, the company has combined all three of those products into a single product called HyperScience. The product is meant to help companies and organizations reduce their data-entry backlog and better serve their customers, saving money and resources.

The idea is that many of the forms we use in life or in the workplace are in an arbitrary format. My bank statements don’t look the same as your bank statements, and invoices from your company might look different than invoices from my company.

HyperScience is able to take those forms and pipe them into the system quickly and easily, without help from humans.

Instead of charging per seat, HyperScience charges per document, as the mere use of HyperScience should mean that fewer humans are actually ‘using’ the product.

The latest round brings HyperScience’s total funding to $50 million, and the company plans to use a good deal of that funding to grow the team.

“We have a product that works and a phenomenally good product market fit,” said CEO Peter Brodsky. “What will determine our success is our ability to build and scale the team.”


By Jordan Crook