
GDPR Versus (Traditional) UX

Often, corporate entities hail user experience (“UX”) as an essential product feature. In fast-evolving tech markets, many believe, it is the web tool with the smoothest ride—the most frictionless UX—that absorbs and retains the most users. As a result, many platforms place a heavy premium on minimizing the steps between what the user wants and what the user gets. The fewer pages, options, or hoops to jump through in between, the better.

The General Data Protection Regulation (“GDPR”) purposefully disrupts this strategy.

Easily the most significant data privacy regulation of the last 20 years, the GDPR, whose compliance deadline is May 25, 2018, revolutionizes the way organizations must handle consumer information. Pivotally, the European Union (“EU”)-generated law requires any entity that collects, monitors, or targets EU residents’ data to provide those data subjects with broad access to and control of their information. The GDPR further requires covered entities to report data security breaches to local regulators; doing so is no longer merely a “best practice.” Perhaps the GDPR’s most monumental edict, however, lies in its muscle: entities that violate the GDPR’s strict provisions are liable for fines of up to €20 million or 4% of global annual turnover—whichever is greater.
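The fine cap’s “whichever is greater” structure is easy to see with a quick calculation. A minimal sketch follows; the turnover figures are hypothetical, chosen only to show which prong of the cap binds for small versus large companies:

```python
def gdpr_max_fine(global_turnover_eur):
    """Upper tier of GDPR administrative fines: the greater of a flat
    EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * global_turnover_eur)

# Hypothetical turnover figures, for illustration only:
print(gdpr_max_fine(100_000_000))     # small firm: flat EUR 20M prong binds
print(gdpr_max_fine(40_000_000_000))  # large firm: 4% prong binds -> EUR 1.6B
```

For a company with tens of billions in turnover, the percentage prong dwarfs the flat cap—which is precisely why the fine is framed this way.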

The GDPR’s purpose is no secret. It is intended to disrupt monolithic data companies such as Google and Facebook, forcing them to boost their privacy and security practices to a level that EU regulators believe adequately protects the consumers that provide the endless data such companies peddle.

So: with UX on one side and increasingly complex data consent, access, and control requirements on the other, what will mega-data companies do?

On April 18, Facebook invited a host of journalists to its new Building 23 at the social media giant’s Menlo Park HQ. There, Facebook revealed its GDPR compliance plan to the reporters. And the reporters were, reportedly, underwhelmed. Their chief criticisms:

- Facebook’s user consent prompt is placed beneath an “X” in a “big blue button.” This “X” prominently invites users to skip past the very consent process the GDPR requires before Facebook may lawfully process their personal information.

- Pages describing Facebook’s control of sensitive information—a crux of financial value, personal privacy, and privileged knowledge such as sexual preference, religious and political views—feature an “Accept And Continue” button in “pretty blue” and an “ugly gray” “Manage Data Settings” button. The former, which defaults to Facebook’s preferences, is selectable whether or not the user scrolls through the rules. This crucial page is “obviously designed to get users to breeze through it by offering no resistance to continue, but friction if you want to make changes.”

- In the U.S., user interactions with political groups and events pages trigger each user’s placement in “overarching personality categories” that Facebook sells to advertisers. The only way to opt out is to “remove any info you’ve shared in these categories.”

- Global facial recognition is enabled by default.

- To reject Facebook’s Terms of Service, users must locate a “see your options” hyperlink that is “tiny” and “isn’t even a button.” (The “I Accept” button, however, is “big.”) This “see your options” link leads to a “scary permanent delete” button and “another tiny ‘I’m ready to delete my account’ hyperlink.” If a user selects this option but wants to download their data first, that process can take hours. And the downloaded data’s portability has significant holes.

- Facebook may not collect sexual or political data from users between 13 and 15 years old, or serve them ads, unless the child obtains parental consent. That consent is obtained when the child provides Facebook with an email address and the address’s owner grants consent via email. No further controls attempt to verify that the email’s owner is actually the child’s parent or guardian.

In sum: instead of scaling back on UX to ensure that users—i.e., the providers and, per the GDPR, the proprietors of their data—understand what data Facebook elicits from them, how Facebook uses that data, and how users can control both of these processes, Facebook squeezed the GDPR’s requirements into its longstanding UX-first template.

GDPR or not, Facebook still “pushes” users “to speed through giving consent…with a design that encourages rapidly hitting the ‘Agree’ button.” Its platform “makes accepting the updates much easier than reviewing or changing them.”

Facebook and companies of its data-caliber made their bones on smooth UX. This methodology founded the bonds between users and these companies’ platforms, underwriting their success. But in an effort to continuously smooth users’ ride, UX-optimizers glossed over some weighty details. By enabling—read: training—users to hit “Agree” without reading the terms and conditions governing the services at play, data propagators obfuscated the true cost-benefit analysis underlying their products. They deprived users of a reasonable opportunity to make an informed decision about whether they could responsibly press “Post.”

The Cambridge Analytica/Facebook controversy is the latest indicator of this dissonant status quo. On April 10 and 11, Facebook CEO Mark Zuckerberg apologized to the Internet-surfing world for his company’s untrustworthy custodianship of user data. On his watch, political marketers scraped user data, aggregated it, and built a media machine of epic proportions and historic effectiveness.

In the wake of this scandal, Internet searches for “delete Facebook” reached a five-year high. This compounded a troubling trend for Facebook at the close of 2017, when the company lost daily users in the U.S. and Canada for the first time ever. And after the U.S. Federal Trade Commission confirmed its investigation of the company, Facebook’s stock dropped precipitously, shedding over $100 billion in value to match its lowest point since mid-2017.

The Cambridge Analytica revelations spotlighted yet again the reality of social media and many other online platforms: if you use them, your data may be forfeit. From Snowden to Yahoo to Uber to Target, this is not a new lesson for consumers who find themselves increasingly aware of the shady marketability of their data.

Aleksandr Kogan, the psychology professor hired by Cambridge Analytica to scrape millions of Facebook users’ profiles, agrees. He noted recently that users’ awareness that their data is improperly traded was a “core idea” underlying Cambridge Analytica’s practices—with a twist.

“Everybody knows,” Kogan said he and Cambridge Analytica believed, “and nobody cares.”

Now, post-fracas, Kogan believes the latter part of this theory was “wrong.” People not only know how their data is manipulated, but they care, too.

This uptick in user cognizance provides a pivotal impetus for Facebook, Google, and other blue-chip data stores to leave superficial UX, made of bubble letters and candy-colored buttons, behind. To invest in true UX via true transparency. To place a premium on educating their users on the inner workings of the relationship between human and platform. To smooth UX not by shrouding choice, but by building trust.

That is, after all, the new preferred experience.

Otherwise, regardless of Mr. Zuckerberg’s congressional apologies, Prof. Kogan’s revisions, and whether the GDPR’s impending fines are as damning as planned, users now know what happens to their data. Who is misusing it. And, UX or not, what to do about it.

Update: Cambridge Analytica announced on May 2 that it will file for bankruptcy. Its Facebook controversy has "driven away virtually all of the company’s customers."

ICO Contracts: Choice of Law, Venue Selection, and How Fraud Upends It All

Intro

ICOs are multiplying. Likely siphoning early-stage VC funding, initial coin offerings raised $4 billion in 2017. Bitcoin, the standard-bearer of cryptocurrencies worldwide and the most common ICO currency, hit an all-time high nearing $18,000 in mid-December. With commensurate speed, lawsuits and regulator crackdowns have followed.

In particular, a series of lawsuits surrounding the startup Tezos may provide some guidance on ICO contracts. That is, not the smart contracts that administer the cryptocurrency-for-ICO token exchange at the core of certain ICOs, but the paper contracts which (hopefully!) set forth the terms and conditions of an ICO exchange, including limitations of liability, tax responsibilities, venue selection provisions, and more.

Background

Tezos threw a phenomenally successful ICO: $232 million raised by co-founders and spouses Arthur and Kathleen Breitman, for an incomplete blockchain-based platform, in July 2017. Tezos’ haul shattered records for funds raised in an ICO—especially considering that these funds were ostensibly raised in bitcoin and ether, two currencies whose values continued to trend (substantially) up, pushing the estimated value of the ICO’s proceeds to $1.3 billion.
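The arithmetic behind that jump is straightforward: contributions were held in bitcoin and ether, so the ICO’s dollar value floats with those currencies. A toy revaluation makes the point; the BTC/ETH split and the prices below are hypothetical illustrations, not Tezos’ actual figures:

```python
# Hypothetical ICO treasury: amounts raised at ICO-time prices,
# then revalued at later market prices. All figures illustrative.
holdings = {
    "BTC": {"amount": 50_000, "ico_price": 2_500, "later_price": 15_000},
    "ETH": {"amount": 350_000, "ico_price": 300, "later_price": 700},
}

# Dollar value at the time of the ICO vs. after the market run-up.
raised_at_ico = sum(h["amount"] * h["ico_price"] for h in holdings.values())
value_later = sum(h["amount"] * h["later_price"] for h in holdings.values())

print(f"raised at ICO: ${raised_at_ico:,}")  # $230,000,000
print(f"value later:   ${value_later:,}")    # $995,000,000
```

Nothing about the enterprise changed between the two lines—only the quoted prices of the currencies in which the funds sit.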

Those Suits

Tezos faces at least five lawsuits, all class actions, filed in state and federal courts from Florida to California. One of these suits, captioned Gaviria v. Dynamic Ledger Solutions, Inc., et al., Case No. 6:17-CV-01959-ORL-40-KRS, attached to its complaint the Tezos Contribution and XTZ Allocation Terms and Explanatory Notes (“Tezos Terms”). The Tezos Terms, according to the complaint, memorialize the terms of the Tezos ICO’s fundraising offer—and are “unenforceable for a variety of reasons.” Gaviria, at 14.

Early Guidance

While the Tezos suits have yet to be resolved, and the validity of their arguments yet to be tested, some guidance can already be gleaned for the fast-moving ICO space. In particular, the Tezos suits offer a lesson for ICO contract drafters on choice of law and venue selection provisions.

Choice of Law & Venue—Meet Fraud

Tezos, like certain other ICOs, sought to adjudicate litigation concerning its enterprise in a foreign jurisdiction. Via the very last provision of the Tezos Terms, any disputes “arising out of or in connection with” Tezos’ ICO are restricted “exclusively and finally [to] the ordinary courts of Zug, Switzerland.” Gaviria, at Exhibit A. Tezos’ choice of law was Swiss as well. Id.

Organizations running ICOs, like many other enterprises, don’t want to travel far to litigate, produce witnesses, and transport evidence. Hence, venue selection clauses. Also like many other organizations, those running ICOs seek regulatory havens. Jurisdictions they think align with the claims they might make (and field) should litigation arise. In fact, Kathleen Breitman told Reuters in June that Tezos chose to incorporate the Tezos Foundation in Zug since Switzerland “has a regulatory authority that had a sufficient amount of oversight but not like anything too crazy.” Each party’s assessment along these lines informs its agreement’s choice of law clause.

Generally, courts afford venue selection clauses significant deference, even when the chosen jurisdiction is a non-U.S. state. After all, the parties assumedly negotiated these clauses prior to signing the agreement. Today, the majority of federal courts (including those of the 2nd, 4th, 7th, 8th, 9th, 10th and 11th circuits—which include New York, Florida, and California) strictly enforce forum-selection clauses. The Supreme Court of the U.S. blessed this trend, ruling that “forum-selection clauses should control except in unusual cases.” Atlantic Marine Construction Co. v. United States District Court for the Western District of Texas, 571 U.S. 49 (2013). The same applies to choice of law clauses: the Restatement (Second) of Conflict of Laws provides that choice of law provisions are presumptively enforceable.

That said, how can Tezos be sued—multiple times—in California and Florida, at opposite ends of the country whose laws Tezos sought to avoid altogether?

Because fraud wasn’t part of the agreement.

Fraud features heavily across the Tezos litigation. For example, each in their own way, the Tezos suits allege that the utility tokens (i.e., markers of purchased services or access) that Tezos distributed to its “donors” in exchange for their “donations” during the Tezos ICO were actually unregistered securities, sold in violation of the Securities Act of 1933. Gaviria, at 31. By misleading ICO participants about the unregistered securities status of these tokens—a “material fact” highly relevant to the ICO participants—Tezos “fraudulently induced [the ICO class] to participate in the ICO.” Id., at 34.

Fraud is kryptonite for forum selection clauses in federal court. Decades ago, the Supreme Court ruled that where enforcement of a forum selection clause would be “unreasonable and unjust, or that the clause was invalid for such reasons as fraud or overreaching,” it should not be enforced. The Bremen v. Zapata Off-Shore Co., 407 U.S. 1 (1972). As for choice of law clauses, fraud can defeat those too. Carnival Cruise Lines, Inc. v. Shute, 499 U.S. 585 (1991).

Therefore, by claiming that the Tezos Terms were “induced by fraud and overreaching” (Gaviria, at 24), the plaintiffs at play may succeed in superimposing their own venue selection—Florida, for instance—over Tezos and its Swiss preferences.

Conclusion

ICOs operate for now in a regulatory gray-space. While crypto-entrepreneurs consider the securities status of their tokens, publish ambitious marketing materials, and hunt for ICO participants, they must also consider the jurisdictional impact their decisions might have on their ICO contracts—regardless of the law and venue they select.