View from inside the beast and state of the law re the freedom of speech at FB et al. (part 2 of 2)

Jun 24, 2019 13:58

This is the second part of the previous posting, The state of the public's collective mind re FB et al. (part 1 of 2).


2. A view from an insider (cont-d)

Facebook Has a Right to Block ‘Hate Speech’ - But Here’s Why It Shouldn’t
https://quillette.com/2019/02/07/facebook-has-a-right-to-block-hate-speech-but-heres-why-it-shouldnt/
In late August, I wrote a note to my then-colleagues at Facebook about the issues I saw with political diversity inside the company. You may have read it, because someone leaked the memo to the New York Times, and it spread outward rapidly from there. Since then, a lot has happened, including my departure from Facebook. I never intended my memos to leak publicly - they were written for an internal corporate audience. But now that I’ve left the company, there’s a lot more I can say about how I got involved, how Facebook’s draconian content policy evolved, and what I think should be done to fix it.
As of 2013, this was essentially Facebook’s content policy: “We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).”
By the time the 2016 U.S. election craze began (particularly after Donald Trump secured the Republican nomination), however, things had changed. The combination of Facebook’s corporate encouragement to “bring your authentic self to work” along with the overwhelmingly left-leaning political demographics of my former colleagues meant that left-leaning politics had arrived on campus. Employees plastered up Barack Obama “HOPE” and “Black Lives Matter” posters. The official campus art program began to focus on left-leaning social issues. In Facebook’s Seattle office, there’s an entire wall that proudly features the hashtags of just about every left-wing cause you can imagine - from “#RESIST” to “#METOO.”
As this culture developed inside the company, no one openly objected. This was perhaps because dissenting employees, having watched the broader culture embrace political correctness, anticipated what would happen if they stepped out of line on issues related to “equality,” “diversity,” or “social justice.” The question was put to rest when “Trump Supporters Welcome” posters appeared on campus - and were promptly torn down in a fit of vigilante moral outrage by other employees. Then Palmer Luckey, boy-genius Oculus VR founder, whose company we acquired for billions of dollars, was put through a witch hunt and subsequently fired because he gave $10,000 to fund anti-Hillary ads. Still feeling brave?
It’s not a coincidence that it was around this time that Facebook’s content policy evolved to more broadly define “hate speech.” The internal political monoculture and external calls from left-leaning interest groups for us to “do something” about hateful speech combined to create a sort of perfect storm.
The evolution of our content policy not only risked the core of Facebook’s mission, but jeopardized my own alignment with the company. As a result, my primary intellectual focus became Facebook’s content policy.
I quickly discovered that I couldn’t even talk about these issues without being called a “hatemonger” by colleagues. To counter this, I started a political diversity effort to create a culture in which employees could talk about these issues without risking their reputations and careers. Unfortunately, while the effort was well received by the 1,000 employees who joined it, and by most senior Facebook leaders, it became clear that they were committed to sacrificing free expression in the name of “protecting” people. As a result, I left the company in October.
Let’s fast-forward to present day. This is Facebook’s summary of their current hate speech policy:
We define hate speech as a direct attack on people based on what we call protected characteristics - race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.
The policy aims to protect people from seeing content they feel attacked by. It doesn’t just apply to direct attacks on specific individuals (unlike the 2013 policy), but also prohibits attacks on “groups of people who share one of the above-listed characteristics.”
If you think this is reasonable, then you probably haven’t looked closely at how Facebook defines “attack.” Simply saying you dislike someone with reference to a “protected characteristic” (e.g., “I dislike Muslims who believe in Sharia law”) or applying a form of moral judgment (e.g., “Islamic fundamentalists who forcibly perform genital mutilation on women are barbaric”) are both technically considered “Tier-2” hate speech attacks, and are prohibited on the platform.
This kind of social-media policy is dangerous, impractical, and unnecessary.
The inevitable result of this policy metastasis is that, eventually, anything that anyone finds remotely offensive will be prohibited.
Almost everything you can say is offensive to somebody. Offense isn’t a clear standard like imminent lawless action. It is subjective - left up to the offended to call it when they see it.
Perhaps even more importantly, you cannot prohibit controversy and offense without destroying the foundation needed to advance new ideas. … Risking being offended is the ante we all pay to advance our understanding of the world.
But let’s now come down to ground level, and focus on how Facebook’s policies actually work.
SB: Attention, attention - this is how FB implements its censorship:
When a post is reported as offensive on Facebook (or is flagged by Facebook’s automated systems), it goes into a queue of content requiring human moderation. That queue is processed by a team of about 8,000 (soon to be 15,000) contractors. These workers have little to no relevant experience or education, and often are staffed out of call centers around the world. Their primary training about Facebook’s Community Standards exists in the form of 1,400 pages of rules spread out across dozens of PowerPoint presentations and Excel spreadsheets. Many of these workers use Google Translate to make sense of these rules. And once trained, they typically have eight to ten seconds to make a decision on each post. Clearly, they are not expected to have a deep understanding of the philosophical rationale behind Facebook’s policies.
As a result, they often make wrong decisions. And that means the experience of having content moderated on a day-to-day basis will be inconsistent for users. This is why your own experience with content moderation not only probably feels chaotic, but is (in fact) barely better than random. It’s not just you. This is true for everyone.

Inevitably, some of the moderation decisions will affect prominent users, or frustrate a critical mass of ordinary users to the point that they seek media attention. When this happens, the case gets escalated inside Facebook, and a more senior employee reviews the case to consider reversing the moderation decision. Sometimes, the rules are ignored to insulate Facebook from “PR Risk.” Other times, the rules are applied more stringently when governments that are more likely to fine or regulate Facebook might get involved. Given how inconsistent and slapdash the initial moderation decisions are, it’s no surprise that reversals are frequent. Week after week, despite additional training, I’ve watched content moderators take down posts that simply contained photos of guns-even though the policy only prohibits firearm sales. It’s hard to overstate how sloppy this whole process is.
There is no path for something like this to improve. Many at Facebook, with admirable Silicon Valley ambition, think they can iterate their way out of this problem. This is the fundamental impasse I came to with Facebook’s leadership: They think they’ll be able to clarify the policies sufficiently to enforce them consistently, or use artificial intelligence (AI) to eliminate human variance. Both of these approaches are hopeless.
Iteration works when you’ve got a solid foundation to build on and optimize. But the Facebook hate speech policy has no such solid foundation because “hate speech” is not a valid concept in the first place. It lacks a principled definition - necessarily, because “hateful” speech isn’t distinguishable from subjectively offensive speech - and no amount of iteration or willpower will change that.
Consequently, hate speech enforcement doesn’t have a human variance problem that AI can solve. Machine learning (the relevant form of AI) works when the data is clear, consistent, and doesn’t require human discretion or context. For example, a machine-learning algorithm could “learn” to recognize a human face by reference to millions of other correctly identified human-face images. But the hate speech policy and Facebook’s enforcement of it is anything but clear and consistent, and everything about it requires human discretion and context.
Does it still make sense to pursue hate speech policies at all? I think the answer is a resounding “no.” Platforms would be better served by scrapping these policies altogether. But since all signs point to platforms doubling down on existing policies, what’s a user to do?

3. The state of the law

(a) Federal law proposal:

Supreme Court agrees to hear a case that could determine whether Facebook, Twitter and other social media companies can censor their users
https://www.cnbc.com/2018/10/16/supreme-court-case-could-decide-fb-twitter-power-to-regulate-speech.html
The Supreme Court has agreed to hear a case that could determine whether users can challenge social media companies on free speech grounds.
The case could have broader implications for social media and other media outlets. In particular, a broad ruling from the high court could open the country’s largest technology companies up to First Amendment lawsuits.
That could shape the ability of companies like Facebook, Twitter and Alphabet’s Google to control the content on their platforms as lawmakers clamor for more regulation and activists on the left and right spar over issues related to censorship and harassment.
While the First Amendment is meant to protect citizens against government attempts to limit speech, there are certain situations in which private companies can be subject to First Amendment liability.
A ruling against MNN on the broad question it has asked the court to consider could open social media companies to First Amendment suits, which would force them to limit the actions they take to control the content on their platforms.

The outcome:

SB: Unfortunately, the court's decision does not help in this situation - this is a major setback that will haunt us for years to come, unless the market itself takes care of the situation by making FB either change or disappear:

A Supreme Court Decision Could Have Implications for Social Media Free Speech
https://psmag.com/news/a-supreme-courts-decision-could-have-implications-for-social-media-free-speech
The court ruled that First Amendment protections don't apply to a corporation that operates a public access channel in New York.
In a 5-4 decision, split between the conservative and liberal justices, the court ruled that the Manhattan Neighborhood Network could not face lawsuits for deciding not to air content that criticized it. Two individuals had sued the corporation for removing their film, claiming that doing so violated their free speech rights.
Justice Brett Kavanaugh wrote on behalf of the majority that, while the First Amendment's free speech clause applies to "state actors" or governmental entities, the network is a private entity, not a state actor: "Providing some kind of forum for speech is not an activity that only governmental entities have traditionally performed," the decision reads. "Therefore, a private entity who provides a forum for speech is not transformed by that fact alone into a state actor."
SB: I'm afraid that Justice Kavanaugh did not fully grasp what is going on - or underestimated it. He played the part of a “strict constructionist” - a position frequently taken by conservative judges without taking into account the changes that have occurred within society since the original laws were introduced - without really appreciating what was at stake, imho.
Although this case does not deal with social media explicitly, it could have considerable implications for the regulation of free speech on such platforms. Sites like Facebook, Twitter, and YouTube provide forums for discussion, so if the First Amendment can't be enforced against a private entity that provides a public forum, the same reasoning may also be applied to social media sites. SB: In this particular case, I am in agreement with the Supreme Court's liberal justices:
In their dissent, the Supreme Court's liberal justices maintained that First Amendment constraints should apply to the Manhattan Neighborhood Network: "By accepting that agency relationship, MNN stepped into the City's shoes and thus qualifies as a state actor," Justice Sonia Sotomayor wrote, "subject to the First Amendment like any other.”

SCOTUS Ruling Could Let Tech Platforms Avoid First Amendment Constraints
https://www.law.com/nationallawjournal/2019/06/17/scotus-ruling-could-let-tech-platforms-avoid-first-amendment-constraints/
The Internet Association, whose members include Facebook, Twitter and Google, filed a brief in the case urging a narrow definition of a state actor.
Facebook, Twitter and other tech firms could benefit from a U.S. Supreme Court ruling Monday that redefines when private companies can be treated like government entities under the First Amendment.

First Amendment constraints don’t apply to private platforms, Supreme Court affirms
https://www.theverge.com/2019/6/17/18682099/supreme-court-ruling-first-amendment-social-media-public-forum
The case had caused concern for some online speech advocates
Nowhere is the internet or social media discussed in the ruling, but the idea that the decision could be used to penalize social media companies was raised by groups like the Electronic Frontier Foundation. The groups argued that too broad a decision could prevent other private entities like YouTube and Twitter from managing their platforms by imposing new constraints on them. The Internet Association, a trade group, said last year that such a decision could mean the internet “will become less attractive, less safe and less welcoming to the average user.”
SB: This is incredibly disingenuous on the part of these companies, given that all that is asked of them is to uphold the principle of free speech and not to engage in censorship. We certainly don’t want government regulation of private companies - but we do want them to uphold our rights, which were enshrined in the Constitution more than 200 years ago. The problem is that reality has changed since then, but the First Amendment hasn’t, as its framers could not have foreseen the revolutionary changes taking place now. Therefore, by being strict constructionists, the conservative high court majority hurt our freedom of speech for a long time to come.
The liberal justices on the court, in a dissenting ruling, argued instead that the terms under which the nonprofit ran the channels for the city should have bound it to First Amendment constraints. The nonprofit, Justice Sonia Sotomayor wrote, “stepped into the City’s shoes and thus qualifies as a state actor, subject to the First Amendment like any other.”

SCOTUS: Private Firms Not Bound by First Amendment
https://freebeacon.com/issues/scotus-private-firms-not-bound-by-first-amendment/
Turns back suit that could have made social media subject to free speech law
A private corporation that runs a public "forum" is not bound by the First Amendment, the Supreme Court ruled Monday morning.
The case, which nominally concerns a public access channel in New York, has attracted attention as a potential vector for regulation of social media firms facing charges of viewpoint bias.
The case made its way to the Second Circuit Court of Appeals, which is where it got interesting. Normally, to assess a First Amendment claim, a court would first determine whether or not the alleged violator was a state actor. But in this case, taking its cues from an opinion of now-retired Justice Anthony Kennedy, the Second Circuit instead ruled that, while MNN was a private entity, its fulfillment of certain roles made it a "public forum," and therefore subject to the requirements of the First Amendment.
SB: Thus, it would seem that the appellate court got it right, but theirs was not the final say in the matter.
Many major social media sites - Twitter, Facebook, YouTube, and so forth - operate as platforms for discussion, and thereby claim no legal responsibility for the content published on them. But if the First Amendment can be enforced against a private entity serving as a public forum, then these sites risk similar lawsuits.
This concern was enough to motivate amicus curiae briefs from both the Internet Association, a trade group representing a number of major tech firms, and the Electronic Frontier Foundation, the preeminent digital rights advocacy organization. The latter argued stridently against the idea that the mere operation of a public forum could qualify an otherwise private firm as a state actor subject to the First Amendment.
"Certainly, the mere fact that something is either labeled a ‘public forum’ or operated by a private entity as a space generally open for communication by others does not automatically transform that private entity into a state actor," the EFF's brief reads in part. "Internet users' rights are best served by preserving the constitutional status quo, whereby private parties who operate private speech platforms have a First Amendment right to edit and curate their sites, and thus exclude whatever other private speakers or speech they choose.” 
SB: Thus, the Electronic Frontier Foundation pushed us all to a different and rather authoritarian frontier than most of us want - namely, arbitrary decision making by private actors regarding our free speech rights.
This leaves unclear what, exactly, constitutes a public forum subject to the First Amendment. Sotomayor notes that "this Court has not defined precisely what kind of governmental property interest (if any) is necessary for a public forum to exist." Her dissent, and the majority ruling as well, is silent on the question of public fora that, while private firms, rely on a government-created resource, i.e. the internet, in their model.
SB: That last part was addressed by Daniel Greenfield in one of the articles cited above. But the Court chose not to deal with this major issue, which is a damn shame.
Still, the majority's ruling seems to preclude the application of the First Amendment to private actors like Twitter or Facebook. This is all the more significant because many prominent figures on the right - especially President Donald Trump - have invoked free speech norms to criticize perceived attacks by social media giants on conservatives. Today’s ruling means such an argument, at least in the courts, is unlikely to get very far.

SCOTUS: First Amendment Contradictions?
https://patriotpost.us/articles/63716-scotus-first-amendment-contradictions
Two cases of leftists either suppressing or compelling speech illustrate a big battle.
These aren’t your grandfather’s liberals. That’s the 75,000-foot view of the modern Left, which has declared its primary mission to suppress disfavored free speech. Two seemingly unrelated Supreme Court cases illustrate what has become the great battle of the early 21st century.
The broader implications are interesting. Does [Manhattan Community Access Corp. v. Halleck] provide a test case for how social media is governed under the First Amendment? In other words, can Facebook, Twitter, Google, et al. silence speech because they’re private companies not subject to the First Amendment?
On the one hand, we have private companies suppressing speech, while on the other we have private businesses compelled to make certain speech. In both, the authoritarian and totalitarian coercion is coming from one side: the Left. The specific First Amendment applications to private companies are not always clear, but what is plain as day is that the Left aims to constrain certain speech and compel other speech. In a nation built on the ideal of free speech, that’s a dangerous trend.

Google claims new Supreme Court ruling hurts PragerU's censorship claim
https://www.sott.net/article/415538-Google-claims-new-Supreme-Court-ruling-hurts-PragerUs-censorship-claim
As The Daily Wire first reported back in 2017, PragerU filed a lawsuit against YouTube and Google, its parent company, for "unlawfully censoring its educational videos and discriminating against its right to freedom of speech.”
PragerU CEO Marissa Streit underscored the far-reaching free speech implications of her organization's legal action against what has become "two of the most important public forums in the world” …
PragerU's legal team - which includes Harvard's Alan Dershowitz, former California Governor Pete Wilson, and Eric George of Browne George Ross, among several others - laid out the rationale for the lawsuit, which was prompted by Google/YouTube restricting or "demonetizing" over 50 PragerU videos for what YouTube claims is "inappropriate" content for younger audiences.
From a black-letter legal perspective, this is almost assuredly the correct outcome. Private actors are not synonymous with state actors, and our legal tradition and case law has always been imbued with that carefully delineated distinction.
But now, Google is publicly boasting that the ruling in Manhattan Community Access Corp. undermines PragerU's legal claim. As Mediapost reports:
Google is now telling the 9th Circuit Court of Appeals that the [Manhattan Community Access Corp.] ruling protects companies like itself from lawsuits alleging "censorship.” 
"YouTube is a private service provider, not a state actor, and its editorial decisions are not subject to First Amendment scrutiny," Google writes in new court papers. … 
The tech company writes that the ruling "affirmed that the 'Constitution does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property.'"
Comment: Rather interesting that Google is going this route. What they are essentially arguing is that they are a publisher, not a utility, meaning they can pick and choose what is published on their platforms (as opposed to a utility, like the phone company, which has no say in how its service is used by the public). However, if they're claiming publisher status, this means that they are putting themselves in the position of being responsible for everything that is put onto their platform by users, in the same way that a newspaper would be held responsible for what it publishes. It's a rather precarious position to put themselves in and could only make their job as content police more difficult. It also means that the future of YouTube, Google and other social media platforms will likely be more censorious than it currently is.
SB: This is a very important comment. This issue has been raised in at least one of the articles I cited above. Also see https://en.wikipedia.org/wiki/Manhattan_Community_Access_Corp._v._Halleck for more details. The mere fact that this was a 5-4 split decision probably means that this is not the last word on the matter.

(b) State of the state law:

However, not all is lost - there is also the legal issue of states' rights, and there will soon be a slugfest in the area of conflict of laws:

Texas bill would allow state to sue social media companies like Facebook and Twitter over free speech

https://www.texastribune.org/2019/04/23/texas-senate-bill-lets-state-sue-social-media-companies/

The proposal aims to protect users on social media platforms from censorship if a site advertises itself as impartial.
A bill before the Texas Senate seeks to prevent social media platforms like Facebook and Twitter from censoring users based on their viewpoints. Supporters say it would protect the free exchange of ideas, but critics say the bill contradicts a federal law that allows social media platforms to regulate their own content.
The measure - Senate Bill 2373 by state Sen. Bryan Hughes, R-Mineola - would hold social media platforms accountable for restricting users’ speech based on personal opinions. Hughes said the bill applies to social media platforms that advertise themselves as unbiased but still censor users. The Senate State Affairs Committee unanimously approved the bill last week. (Update: The Texas Senate approved the bill on April 25 in an 18-12 vote. It now heads to the House.)
“Senate Bill 2373 tries to prevent those companies that control these new public spaces, this new public square, from picking winners and losers based on content,” Hughes said in the committee hearing. “Basically if the company represents, ‘We’re an open forum and we don’t discriminate based on content,’ then they shouldn’t be able to discriminate based on content.”
The bill would apply the Texas Deceptive Trade Practices Consumer Protection Act, which protects consumers from bad or misleading actions in the trade industry. Users on social media platforms who feel like they are censored for their views would be able to file a consumer complaint with the Texas attorney general. The attorney general could then decide whether to bring a public case against the platform.
Other states have also filed legislation seeking to curb social media censorship. Lawmakers in California filed a bill that would prohibit anyone who operates a social media site in the state from removing content from the site based on political affiliation or viewpoint.

It's up to the Congress and/or the individual states to deal with the freedom of speech issue now, after SCOTUS blew it. Will it happen? Time will tell.

#resist, Политика, #metoo, politics
