Cybersquatting and Bad Faith

Interesting recent decision from Judge Rakoff denying a cybersquatting defendant's motion to dismiss a Plaintiff law firm's ACPA complaint. 

Useful to businesses that seek to plead the bad faith required for a cybersquatting cause of action under the ACPA.

The caption is McAllister Olivarius v. Mermel, No. 17 CV 9941 (S.D.N.Y. April 2, 2018) (Rakoff, J.)

US v. Thompson - 5 Years for Cyberstalking

Link here to the government's sentencing letter in US v. Thompson, a cyberstalking prosecution involving some truly egregious [alleged] conduct (calling in bomb threats; filing false court papers; false public accusations of STIs) against an ex-romantic partner.

Cyberstalking is a federal offense, 18 U.S.C. § 2261A, and those who come under the glare of the FBI and the cybercrime divisions of the U.S. Attorneys' offices pay a hefty price.  The defendant here pled guilty.  He was sentenced to 60 months -- a Guidelines sentence.

SEC Enforcement Action Targets ICO Operator

Check this out:  the first of what promises to be many securities enforcement actions in the cryptocurrency space.

On Friday afternoon the SEC announced fraud charges against two companies and their principal, Maksim Zaslavskiy, in connection with the marketing and sale of allegedly fraudulent Initial Coin Offerings ("ICOs").

U.S. securities regulators published a memorandum in July warning the industry that many ICOs qualify, in the SEC's eyes, as unregistered sales of securities.  

ICO offerors and participants large and small, sophisticated and unsophisticated -- like the defendant in this first case -- ought to be following these developments.

FTC Settlements - Privacy Shield Claims

Today's FTC announcement demonstrates a quick and easy way for companies to earn themselves an enforcement action:  publicly claim to participate in the EU-US Privacy Shield, without participating in the EU-US Privacy Shield.  

(Note -- when you self-certify, the FTC adds you to the list that it maintains pursuant to the Privacy Shield framework.)   

Is your organization's privacy policy accurate on this point?

CFAA Bullies

The ruling is preliminary, but Judge Chen's ruling in hiQ v. LinkedIn calls out LinkedIn as a Computer Fraud and Abuse Act bully. SCOTUS is unlikely to grant certiorari in US v. Nosal, where the scope of the CFAA is at issue in the criminal context. Civil litigants ought to care too, as this case demonstrates. We await better guidance from the Courts of Appeals; in the meantime -- fight the bullies.

via EFF / Deeplinks

Illinois Biometric Class Action

A well-known privacy plaintiffs' firm has filed at least three class action lawsuits against Illinois employers alleging violations of an Illinois biometric privacy statute.

The claims allege violations of the Illinois Biometric Information Privacy Act (BIPA), which requires employers to provide notice to, and obtain written consent from, employees before collecting biometric information.  The complaints allege that putative employee-class members were required to use their fingerprints to clock in and out of work.

According to the plaintiffs, the law allows $1000 per violation, plus attorney fees to a prevailing plaintiff.

Clickwrap, No; Scrollwrap, Yes. Are your app's Terms of Service enforceable?

App publishers intent on enforcing an arbitration provision -- or, really, any portion of their Terms of Service -- can take guidance from this June decision by Judge Koeltl in the Southern District of New York.  The decision describes the types of agreement that do -- and do not -- constitute reasonable notice sufficient to create a binding consumer contract.

(Spoiler alert:  Lyft's scrollwrap agreement, which required the plaintiff-user to view and accept the Terms, was binding.  The clickwrap agreement, which required the plaintiff-user to click through to a linked set of terms, was not.)

Judge Koeltl found that Lyft's "click-wrap" arbitration agreement was unenforceable: while it required the plaintiff-user to click a box indicating that he had "read and agreed" to a hyperlinked copy of the Terms, the method of obtaining consent "did not alert reasonable consumers to the gravity of . . . 'clicks.'"  Among other things, the Judge observed that the font size of the links was too small and the hyperlink color insufficiently conspicuous.

But a later scroll-wrap update to Lyft's Terms created a binding agreement.   The method of obtaining user consent included a legend reading:

“Before you can proceed you must read & accept the latest Terms of Service.” The Terms of Service were set out on the screen to be scrolled through. No subtle hyperlink was needed. The Terms of Service begin with the warning: “These Terms of Service constitute a legally binding agreement . . . between you and Lyft, Inc.” Before proceeding, the user was required to click on a conspicuous bar that said: “I accept.”

Those, the Judge found, were sufficient to bind the user:  motion to compel arbitration granted.
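For product teams building these flows, the decision's logic reduces to a checklist.  Here's a minimal, hypothetical TypeScript sketch -- not the actual Lyft implementation, and not legal advice -- modeling the elements the Court found sufficient (terms displayed in full on screen, scrolled through, and an explicit "I accept" click); the interface and function names are invented for illustration:

```typescript
// Illustrative model of the scrollwrap elements described in the decision.
// All names are hypothetical; real requirements depend on jurisdiction and facts.

interface ConsentRecord {
  termsDisplayedInline: boolean; // full text set out on screen, not behind a hyperlink
  scrolledToEnd: boolean;        // user actually scrolled through the Terms
  clickedAccept: boolean;        // explicit, conspicuous "I accept" action
  acceptedAt?: Date;             // optional timestamp for the audit trail
}

// Returns true only when every scrollwrap element is present.
// A clickwrap-style flow (terms behind a link, never displayed) fails this check.
function isScrollwrapConsentComplete(c: ConsentRecord): boolean {
  return c.termsDisplayedInline && c.scrolledToEnd && c.clickedAccept;
}
```

Note the design choice: a mere hyperlink to the Terms (termsDisplayedInline = false) fails the check, mirroring the decision's treatment of the earlier clickwrap flow.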

Revenge Porn: "Extreme and Outrageous" as a Matter of Law?

Here's an interesting federal decision by Judge Roman (S.D.N.Y.) out of White Plains, handed down last week.  The subject matter is the sordid civil fallout from an incident of sexual abuse in 2013.

Both parties are proceeding anonymously under the pseudonyms Jane Doe (Plaintiff) and John Doe (Defendant).  Out of respect for the parties' privacy, and the Court's determination that they are entitled to proceed pseudonymously, we'll use the same names here.

As alleged in the Complaint, while Plaintiff and Defendant were dating, Defendant set up a video camera on his laptop to surreptitiously record the couple having sex.  He uploaded the video to PornHub, and sent links to several of his buddies.  Plaintiff discovered what her boyfriend had done.  She moved out.  She lawyered up.

NYPD got involved.  Defendant was charged with, and later pled guilty to, the New York crimes of unlawful surveillance and dissemination of an unlawful surveillance image.

A few years later, at the allocution on his criminal plea, Defendant admitted:  "For my own and another person's amusement and entertainment, I intentionally used and installed . . . an imaging device to surreptitiously . . . record a person undressing and sexual and other intimate parts of such person at a place and time when such person had a reasonable expectation of privacy without that person's knowledge and consent."

Plaintiff then filed a civil suit against Defendant in federal court, alleging causes of action for Intentional Infliction of Emotional Distress (IIED) and Negligent Infliction of Emotional Distress (NIED).  

IIED is the interesting one here.  Under New York law, it requires a plaintiff to prove four elements:  (i) extreme and outrageous conduct; (ii) intent to cause, or disregard of a substantial probability of causing, severe emotional distress; (iii) a causal connection between the conduct and injury; and (iv) severe emotional distress.  The standard for "outrageous" is high, for as the Court of Appeals has observed:

Unlike other intentional torts, intentional infliction of emotional distress does not proscribe specific conduct (compare, e.g., Restatement [Second] of Torts § 18 [battery]; id., § 35 [false imprisonment]), but imposes liability based on after-the-fact judgments about the actor's behavior.  Accordingly, the broadly defined standard of liability is both a virtue and a vice. The tort is as limitless as the human capacity for cruelty. The price for this flexibility in redressing utterly reprehensible behavior, however, is a tort that, by its terms, may overlap other areas of the law, with potential liability for conduct that is otherwise lawful. Moreover, unlike other torts, the actor may not have notice of the precise conduct proscribed (see, Givelber, Social Decency, 82 Colum L Rev, at 51-52).

Consequently, the "requirements of the rule are rigorous, and difficult to satisfy" (Prosser and Keeton, Torts § 12, at 60-61 [5th ed]; see also, Murphy, 58 NY2d, at 303 [describing the standard as "strict"]). Indeed, of the intentional infliction of emotional distress claims considered by this Court, every one has failed because the alleged conduct was not sufficiently outrageous (see, Freihofer v Hearst Corp.,65 NY2d, at 143-144; Burlew v American Mut. Ins. Co., 63 NY2d 412, 417-418; Murphy, 58 NY2d, at 303; Fischer v Maloney, 43 NY2d, at 557).  "'Liability has been found only where the conduct has been so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency, and to be regarded as atrocious, and utterly intolerable in a civilized community' " ( Murphy, 58 NY2d, at 303, quoting Restatement [Second] of Torts § 46, comment d).

Howell v. N.Y. Post Co., 81 N.Y.2d 115, 122, 596 N.Y.S.2d 350, 353-54, 612 N.E.2d 699, 702-03 (1993).

Prior to discovery, Plaintiff moved for summary judgment on the issue of liability.  The motion effectively asked the court to find, as a matter of law, that unauthorized and covert recording and dissemination of Plaintiff having sex with Defendant was "extreme and outrageous" so as to meet the requirements of the tort of IIED.

The Court agreed.

A good decision for victims of this atrocious 21st-century version of sexual abuse.  Useful in civil actions following surreptitious video recording and online dissemination of "sex vids."

It might give pause to defendants (and their counsel) in criminal actions involving these types of allegations.  Should your client plead guilty, even to reduced charges, when it means the victim has a slam-dunk civil case for the tort of outrage?

Finally, an observation about this tort, which the Court of Appeals has said requires proof of conduct that is "utterly intolerable in a civilized community."  Tort reformers might argue that these types of vaguely defined causes of action lead to uncertainty and economic inefficiency.  The other side of the coin is that some conduct is so OUTRAGEOUS! that it ought to give rise to tort liability -- even if it doesn't carry discrete physical injury or fall within the four corners of another cause of action, such as invasion of privacy or assault.

But is judicial determination -- at the summary judgment stage -- appropriate for a tort that by its definition depends on judgments as to what is "utterly intolerable in a civilized community"?  Aren't community standards the very reason we have the jury system?  Or do we all agree that revenge porn, at least under the circumstances admitted by John Doe here, is so tortiously outrageous as to carry tort liability as a matter of law?


Internet-connected Toys and Kids' Privacy

The FBI is warning parents of the privacy risks of internet-connected toys.

That's my cue: toy manufacturers, app merchants, and others who collect kids' information without appropriate attention to information security and privacy issues face major legal and reputational risk.

Child voice recordings and location history top the list.

Expanding New York's Journalist Shield Law

An appellate panel in New York ruled in favor of broader journalistic protections this week in a case testing the boundaries of New York's reporter Shield Law, which protects professional journalists against compelled disclosure of, among other things, their anonymous sources.

Petitioner in the case sought to compel disclosure of anonymous sources cited in reports from Reorg Research, a subscription publisher primarily serving bankruptcy investors.  A February 2017 decision by a New York trial court held that Reorg was not entitled to the protections of the Shield Law because it did not fall within the category of "professional journalist" protected by the law:

In her decision, Justice Edmead wrote that while “extraordinary protections are afforded to the press by our laws,” companies like Reorg, which cater to select, wealthy clients, “do not carry out the vital function of informing the public.” Indeed, she added, Reorg’s business model “deliberately keeps its information from reaching the general public.”

NYT.  The appellate panel unanimously reversed:

"We find that [the] respondent is exempt from having to disclose the names of its confidential sources by New York's Shield Law because it is a 'professional medium or agency which has as one of its main functions the dissemination of news to the public.'"  In re Murray Energy Corp. v. Reorg Research, Inc., 157797/16.

The decision means a broader category of publishers can rely on the protections of the Shield Law in carrying out their news- and information-gathering functions.

Drones, DJI and the Manufacturer-Regulated IoT Device

On Friday, the D.C. Circuit struck down the FAA's regulations on consumer drones, holding that the agency exceeded its statutory authority by regulating "model aircraft." Barring new developments or appeals, the ruling exempts drone hobbyists from compliance with the FAA registration and regulatory regime.

Today comes news from heavyweight drone manufacturer DJI, which will disable certain capabilities on customers' DJI-made drones unless the units are registered on the DJI website. Gizmodo's coverage is less than approving, leading with the headline that DJI is "crippling" its customers' drones.

Click through, though, and you'll see a more accommodating tone:

"This is actually a really responsible move on DJI’s part. By impelling customers to log in to their DJI accounts and activate the latest firmware for their drones, the company will be able to sync up each device with the specific regulations of the country where it’s being operated. (Note: Customers in China, where DJI is headquartered, won’t be required to go through this new activation process.)"


It's a new frontier of geo-specific compliance.

The always-on connection between manufacturer and device spans a range of products, from those that passively collect personal information (ahem, Alexa) to web-connected consumer goods with nifty -- and potentially dangerous -- functionality, like DJI drones. It creates new capabilities for manufacturer self-regulation of the ways, and the places, their devices can be used.

Companies like DJI can tailor the operation of their devices to the legal and regulatory regimes that the devices are used in. Not all consumers view these safety- and compliance-oriented programs as "crippling" the affected devices. For manufacturers rolling out use parameters across many jurisdictions, the customer-communications strategy may be as important to brand trust and enterprise value as the compliance goals.
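On the manufacturer side, geo-specific gating of device capabilities might look something like the following TypeScript sketch.  This is a hypothetical illustration, not DJI's actual scheme; every rule value, type, and function name here is invented:

```typescript
// Hypothetical manufacturer-side, geo-specific feature gating,
// loosely inspired by DJI's activation program. All values are invented.

type Region = "US" | "DE" | "TH" | "OTHER";

interface FlightRules {
  maxAltitudeMeters: number;  // ceiling enforced by firmware
  liveVideoAllowed: boolean;  // whether live streaming is enabled
}

// Invented example profiles -- real limits vary by jurisdiction.
const RULES: Record<Region, FlightRules> = {
  US: { maxAltitudeMeters: 120, liveVideoAllowed: true },
  DE: { maxAltitudeMeters: 100, liveVideoAllowed: true },
  TH: { maxAltitudeMeters: 90, liveVideoAllowed: false },
  OTHER: { maxAltitudeMeters: 50, liveVideoAllowed: false },
};

// Unregistered units fall back to the most restrictive profile --
// mirroring the "register or lose capabilities" incentive described above.
function activeRules(region: Region, registered: boolean): FlightRules {
  return registered ? RULES[region] : RULES.OTHER;
}
```

The design choice worth noting: the restrictive default makes registration the path to full functionality, which is exactly the customer-communications problem flagged above -- the gating reads as "crippling" unless the compliance rationale is explained.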

State Censors and Streisand

Gennie Gebhart at EFF writes informatively on why it's getting tougher and tougher for state actors to censor online content they don't like.  

The example comes on the heels of Thailand's efforts to compel Facebook to remove content depicting the Thai king wearing a crop-top and shopping with his mistress in a German shopping mall.  Apparently Thai lèse-majesté laws prohibit speech criticizing or insulting the monarch.

EFF's position is that:

Facebook and other companies responding to government requests must provide the fullest legally permissible notice to users whenever possible. This means timely, informative notifications, on the record, that give users information like what branch of government requested to take down their content, on what legal grounds, and when the request was made.

Notice, it is argued, powers the Streisand Effect -- and the Streisand Effect combats censorship.

Data is the world's most valuable resource. Protect yours.

This article from the Economist argues that antitrust law paradigms are obsolete in the world of big data.  One of the solutions proposed for busting the data monopoly concerns access:

[L]oosen the grip that providers of online services have over data and give more control to those who supply them. More transparency would help: companies could be forced to reveal to consumers what information they hold and how much money they make from it. Governments could encourage the emergence of new services by opening up more of their own data vaults or managing crucial parts of the data economy as public infrastructure, as India does with its digital-identity system, Aadhaar. They could also mandate the sharing of certain kinds of data, with users’ consent—an approach Europe is taking in financial services by requiring banks to make customers’ data accessible to third parties.

Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don’t want a data economy dominated by a few giants, they will need to act soon.

I don't agree.  But I'm listening.

Via the Economist

Donor Disclosure Requirements for NY Nonprofits

Under NY State law, issue-oriented charitable organizations that engage in lobbying activities must disclose the names of persons and organizations who donate more than $2,500.   N.Y. Executive Law § 172-e.

In addition, any 501(c)(3) nonprofit organization that donates “staff, staff time, personnel, offices, office supplies, financial support of any kind or any other resources” totaling $2,500 or more in a six-month period to a nonprofit lobbying organization in New York must file a report with the Attorney General. That report must include the identity of any donor to the 501(c)(3) of $2,500 or more, and will be posted to a public website within 30 days of filing.

There is plenty of action in agency tribunals and courts surrounding these disclosure rules.   The potential for disclosure of the identities of valuable charitable donors makes this a sensitive issue for nonprofits.  Organizations challenging the rules have argued that the requirements chill nonprofits' speech in violation of the First Amendment.

An exemption in the Legislative Law allows groups, including those operating "in the area of civil rights and civil liberties," to shield the identity of donors if disclosure would "lead to harm, threats, harassment or reprisals to a source of funding."  The New York Civil Liberties Union (NYCLU) applied for such an exemption in 2014.  The exemption was granted after an initial agency ruling denying it was overturned as clearly erroneous.

Last week, the NY JCOPE (Joint Commission on Public Ethics) rejected NYCLU's application to extend the exemption.  (NYLJ (subscription).)  Litigation will continue.  

***Mullen PC has advised donors to charitable organizations on the privacy implications of New York state laws, and has advised 501(c)(3) and (c)(4) organizations on compliance with New York donor disclosure requirements.

Bruce Schneier on U.S. Intelligence Leaks

On recent leaks of NSA tools (via shadow brokers) and CIA ops documents (via wikileaks):

For both of these leaks, one big question is attribution: who did this? A whistleblower wouldn't sit on attack tools for years before publishing. A whistleblower would act more like Snowden or Manning, publishing immediately—and publishing documents that discuss what the U.S. is doing to whom, not simply a bunch of attack tools. It just doesn't make sense. Neither does random hackers. Or cybercriminals. I think it's being done by a country or countries.

My guess was, and is still, Russia in both cases. Here's my reasoning. Whoever got this information years before and is leaking it now has to 1) be capable of hacking the NSA and/or the CIA, and 2) willing to publish it all. Countries like Israel and France are certainly capable, but wouldn't ever publish. Countries like North Korea or Iran probably aren't capable. The list of countries who fit both criteria is small: Russia, China, and ... and ... and I'm out of ideas.

Via Schneier on Security / Lawfare

Google Takedown Requests

Here is an interesting article from Eugene Volokh [Volokh Conspiracy / WAPO] commenting on one man's efforts to have Google remove links returned by a Google search for his name.  According to the article, the man in question was accused of criminal sexual conduct, but later exonerated.  The state criminal authorities (NJ) expunged his arrest records.

Prof. Volokh approaches First Amendment law (and plenty else) from a libertarian tack, so I suppose I expected him to come out firmly in favor of more speech -- and against restrictions on Google's ability to link to pages referring to accusations that have since proven false.  But his piece poses some thoughtful questions:

So my questions (and these are just to start with): What should our view be when someone tries to get the stories about them to vanish from search results this way? Should it matter that there is real evidence that he was innocent? (My sense is that the grand jury’s decision to essentially withdraw the indictment is such evidence.) Should we view attempts to vanish the exoneration stories differently from attemp[t]s to vanish stories about the accusation that hadn’t been updated to reflect the exoneration?

Almost a European approach.

In my experience with these types of deindexing requests on behalf of U.S. clients, Google is principled, process-oriented, and ultimately human when legitimate requests are brought to light.  Effective alternative procedures in civil courts are, relatively speaking, time consuming and costly.