Based on the chatter in the corridors of political power and from elite intellectuals these days, one can't help concluding that Twitter, Facebook and their social media cousins rank as the civilized world's leading threats to democracy.
The Information, a technology website, ponders "How to Curb Tech Executives' Power." Yaël Eisenstat, a former CIA officer, White House advisor, and overseer of election ads at Facebook, writes at Harvard Business Review about "How to Hold Social Media Accountable for Undermining Democracy."
Francis Fukuyama and two Stanford colleagues, counseling "How to Save Democracy from Technology" in the pages of Foreign Affairs, decry what they call "big tech's information monopoly" — though they name Amazon, Apple, Google, Facebook and Twitter as the culprits.
That makes it a monopoly of five and overlooks that those companies often compete ferociously with one another.
Still, legislators are listening. Congressional hoppers are brimming with proposals to regulate tweets, Facebook posts and the methods those platforms use to winnow out objectionable content posted by their users.
But because so little of the debate is based on facts and real understanding of how the internet works, the chances are mounting that whatever solution they ultimately agree on will yield bad policy.
"Congress is poised to do something," says Eric Goldman, an expert on social media regulation at Santa Clara University Law School. "Both Democrats and Republicans have vowed to do something. It's an exceptionally dangerous time for the internet we know and love."
He's right, in that discussions about regulating online speech are hopelessly confused over whether the platforms are doing too much moderation — sometimes called "censorship" — or not enough. The arguments are also colored by the behavior of one person: Donald Trump.
During Trump's presidency, the indulgence traditionally granted by news organizations to presidents' speech, whatever its subject or language, was constantly at war with the idea that the platforms should suppress hate speech and incitements to violence.
The balance only tipped decisively against Trump after the November election, when he went on a rampage claiming the election had been stolen, and eventually incited the Jan. 6 riot by his followers. That's when Twitter banned him permanently and Facebook suspended his account.
The difficulty in drawing the line evokes Supreme Court Justice Potter Stewart's lament in a 1964 obscenity case that he couldn't define hardcore pornography in advance, but "I know it when I see it."
The legislators' focus — like Trump's when, as president, he threw a conniption over online platforms' moderation of his own tweets and posts — is Section 230 of the 1996 Communications Decency Act.
Put simply, as applied to platforms hosting third-party content — such as Twitter, Facebook and newspaper websites with reader comment forums — Section 230 says the hosts can't be held liable for the posters' content as though they were its publishers. That's true even if the hosts take steps to moderate the content, say by rejecting a post or taking it down after it appears.
The Electronic Frontier Foundation aptly calls Section 230 "one of the most valuable tools for protecting freedom of expression and innovation on the Internet." It effectively places most responsibility for what appears in interactive forums where it belongs — on the author, not the forum.
Without it, Twitter and Facebook might not exist, for their legal exposure from anything appearing on their sites would be stupendous.
Yet Section 230 may be one of the most consistently misunderstood statutes on the books. Critics across the political spectrum have asserted that it applies only to "neutral" public forums.
The law says nothing of the kind. It "isn’t about neutrality," observes Daphne Keller, an expert on internet liability at Stanford and a former Google lawyer. "In fact, it explicitly encourages platforms to moderate and remove 'offensive' user content," leaving them to judge what's "offensive." (The law requires only that restrictions be imposed in "good faith.")
It's commonly asserted that 230's protections apply to virtually any content — that it has been interpreted as providing "blanket immunity to all internet companies ... for any third-party content they host," Eisenstat wrote.
Also not true. Among its exceptions, Section 230 doesn't indemnify platforms against violations of federal criminal law or content that infringes intellectual property rights. Nor does it apply to content related to sex trafficking, an exception Congress added in 2018.
The debate over whether to narrow 230's protections further is also infected by misunderstandings and misrepresentations. Veteran Republican lawyer Stewart Baker asserted in a Washington Post op-ed after the Jan. 6 insurrection that "Trump wanted changes that would discourage the suppression of conservative voices, while many on the left think the platforms don't do enough to suppress right-leaning speech."
He's wrong on both counts. First, there's no evidence that the online platforms have systematically suppressed conservative opinion — that's just a talking point of conservatives like Sen. Ted Cruz (R-Texas) and Trump himself.
Second, the criticism from the left isn't generally about the platforms' approach to "right-leaning speech" — both articles Baker uses to make his point refer to the platforms' toleration of hate speech. Is Baker, a former aide to both Bush presidents, saying that right-leaning speech and hate speech are the same thing? I doubt it.
That said, it's obvious that the online platforms have done a lousy job of ferreting out hateful content, even though they have the technological capability to do much better. Their failure to police their own front yards has given more force to critics seeking to reduce their independence.
Some misconceptions about Section 230 may stem from misunderstanding about the internet's infrastructure. Let's take a brief tour.
One can think of the infrastructure as a stack. The bedrock comprises internet service providers such as cable operators and phone companies, bringing the web to your door or devices. We want them to be absolutely oblivious to what's in the bits they're carrying, just as a power utility has no interest in how the electricity it provides to ratepayers is used.
If the power, or the bits, are used to commit a crime, that's a concern of law enforcement, not the utility or ISP. This is the concept known as net neutrality.
Republicans on the Federal Communications Commission have undermined net neutrality, but that's been chiefly to give ISPs more latitude to disadvantage competitors in their side hustles, such as video-on-demand. A Biden FCC might well be inclined to revive the concept.
Social media platforms occupy the top of the stack. We want them to be judicious about the content they carry, but only within limits. That's where the debate over Section 230 should be focused.
The gray area in the middle includes hosting firms such as Amazon Web Services, which essentially give websites and social media platforms their springboard onto the web. Whether and how to regulate them became an issue after Jan. 6, when AWS suspended the right-wing social media site Parler, contending that some users' posts were violating its rules by inciting violence.
The AWS suspension did more to suppress speech on Parler than almost anything Twitter or other platforms have done to moderate their users' content. Parler was effectively silenced until it could find alternative hosting services.
In a lawsuit it filed against AWS, Parler called the action a "death blow"; it's still struggling to find an alternative. (A federal judge in Seattle has refused to force Amazon to restore Parler's account.)
"Amazon's decision to stop hosting Parler felt more significant than a social media service terminating a single account or even a single item of content," Goldman told me. But it's also true that AWS has nothing approaching a monopoly over web hosting.
Nothing would have stopped Parler from contracting with a backup service so it wasn't dependent on AWS, or even spending the money to do its own hosting. (AWS controls 32% of the cloud services market.)
That brings us back to the social media platforms and their role in public discourse. One problem with the assertion that they have become the de facto "modern public square" is that their role is easily exaggerated.
Politicians, who debate restrictions on the platforms, and journalists, who cover the debate, have a skewed idea of the platforms' influence because they spend their waking hours drenched in tweets and Facebook posts.
That's not so for the majority of Americans. According to a 2019 survey by the Pew Research Center, only 22% of Americans use Twitter at all.
Twitter users tend to be younger, richer and better educated than the average American: About 73% are younger than 50, a cohort that comprises 54% of all adults; 42% of users are college graduates, who make up only 31% of all adults; and 41% have incomes over $75,000, compared with 32% of all adults.
The most popular source of news for Americans is still television (49% told Pew it's their source most often), followed by news websites (33%) and radio (26%), with social media coming in fourth at 20% and print newspapers last at 18%.
What has changed dramatically is Americans' perception of the influence of social media. Some 62% of Pew's respondents said that social media companies have too much control over the news, with Republicans especially fretful (75%). Republican efforts to put a leash on the social media platforms may be responding to those concerns — and feeding them, too.
Even if they have become the de facto public square, that doesn't mean they should be regulated. What's typically overlooked is that the labeling of social media platforms as the "modern public square" originated in a 2017 Supreme Court decision, Packingham vs. North Carolina, which overturned that state's restrictions on using the platforms, finding them to be infringements of 1st Amendment rights.
That hints at the difficulties legislators may face in trying to craft changes to Section 230 or finding other ways to regulate online platforms. No one seems to have found a comprehensive solution certain to pass legal muster or even to eradicate the noxious influences of social media without also suppressing the very features that make them popular.
Tom Wheeler, President Obama's FCC chairman, casts a wry eye at the platforms' recommendation algorithms, which push content at users related to what they've already chosen to see. That magnifies the content's impact, a genuine problem when the content is hateful or anti-democratic. YouTube, which is owned by Google, acknowledged as much in 2019 when it tweaked its algorithms to cut back on recommendations of harmful content.
Yet, although the algorithms serve the platforms' purposes by encouraging users to stay connected, they also serve users looking for benign content that fits their interests.
Others advocate somehow placing caps on any platform's user base, to foster the growth of competing platforms. Yet that overlooks that the "network effect" — networks become disproportionately more valuable as they expand; Metcalfe's law pegs a network's value as roughly proportional to the square of its user count — benefits not only the platform proprietors but also their users, who may enjoy being part of an expansive online community.
Another proposal would require platforms to be more open about their procedures for suspending users or banning content, and even to offer banned users an appeal process. That could come close to an unconstitutional infringement of the platforms' speech and might be unworkable in practical terms.
What remains is self-regulation, the situation we're in today. The attention they're getting from political leaders may prompt Facebook, Twitter and other platforms to become more open and consistent about the rules they employ to make moderation decisions. (Facebook has referred its suspension of Trump's account to an ostensibly independent committee it created to review such actions, though there's widespread skepticism about whether the panel's deliberations will be truly independent of Facebook's commercial interests.)
The fact is that Section 230 works as well as any alternative that has been proposed in balancing the principle that speech should be free online with consequences for those who breach social or legal norms. Even if one thinks the platforms deserve some blame for facilitating the Jan. 6 insurrection, which is debatable, it may be time to stop blaming them for social ills that arise from elsewhere.
"Riots and mobs have formed throughout our history without any help from social media," Goldman points out. "It's an error to think that social media are the source of the problem, when they're often the tail end of a much bigger social problem. There's this desire to see internet services achieve a level of pro-social behavior that we don't achieve in the offline world, and then to blame the internet for not achieving an impossible standard."
This story originally appeared in the Los Angeles Times.
"politic" - Google News
January 23, 2021 at 04:13AM
https://ift.tt/3c4Hljm
Column: The drive to regulate social media is about partisan politics — and it won't work - Yahoo Finance
"politic" - Google News
https://ift.tt/3c2OaPk
https://ift.tt/2Wls1p6
No comments:
Post a Comment