
Trump Is Right—And Wrong—About Section 230

Dive into Section 230 of the United States Communications Decency Act with GlobalGiving CEO Alix Guerrier, and discover why it matters to a nonprofit on a mission to accelerate community-led change—and you.

Trump’s tweet

Earlier this week, President Donald Trump shared a short but emphatic message on Twitter: “REPEAL SECTION 230!!!” This was apparently in response to Facebook deleting his post comparing COVID-19 to the flu and Twitter adding a misinformation warning to it. A brief flurry of social media activity followed, with some offering the view that repealing this law would likely turn social media giants into political censors.

This is a tough topic to follow; it’s arcane, the naming of the issue (“Section 230”) is inscrutable, and the text of the legislation is abstract for non-lawyers (I’m not one). However, the impact—and the threat—is real for people everywhere on the internet.

A chilling effect

The history and importance of the law have been described elsewhere; Casey Newton of Platformer, formerly of The Verge, did a particularly good job. In short, the law allows platforms to moderate content while preserving their immunity from legal responsibility for users’ behavior on their platforms.

Ostensibly, the rationale behind a call to repeal 230 is to remove that immunity, so that platform companies can be held responsible for unfair moderation. However, a repeal would very likely have a dramatic chilling effect; companies would be incentivized to curtail content and discussion on any controversial topic, and people might lose their ability to effectively engage in important discussions, or to take action on matters important to them.

Conversely, there is a clear danger on the other side—what if platforms are not doing enough to remove material that is widely acknowledged to be harmful? How can we hold platforms accountable?

What Trump gets right

1. He knows that platforms have not been doing an excellent job on moderation. The exact reasons for his dissatisfaction may be the same as or quite different from yours: He argues the platforms unfairly target conservative viewpoints. But I would bet that regardless of political preferences, most people have a sense that the status quo is not ideal. Interestingly, former vice president and Democratic presidential nominee Joe Biden has also called for a repeal of 230, although for a different reason: that too much non-factual content is allowed to stand.

2. His Department of Justice has sought to update Section 230, with a stated rationale that seems exactly right: the legislation was necessary for the initial growth of internet platforms but is now out of date. Facebook has acknowledged this.

3. The resulting DOJ proposals identify a key problem correctly: that platforms and their leaders can hide from their responsibility, whether behind the legislative shield of Section 230, behind the letter of their terms of service, or behind claims of neutrality that ultimately ring hollow.

What Trump gets wrong

1. The problem isn’t fundamentally about the squelching of conservative voices, nor is it about the reverse; rather, it is about striking the right balance between responsibility and protection.

2. Section 230 shouldn’t be repealed; as his DOJ, legislators from both major parties, and companies themselves suggest, it needs to be updated.

3. The DOJ’s proposed changes stop at stated criteria; they should instead create both the opening and the requirement for well-defined, transparent processes.

A nonprofit’s take on neutrality

We have been studying this issue intensively at GlobalGiving for the last 18 months. Why? Because we found that we were failing at the underlying problem. My colleague Rachel Smith wrote last year about our journey into the Neutrality Paradox. Even though we are small compared to Facebook, Twitter, and Reddit, and even though we are a nonprofit organization focused on supporting other nonprofits engaged in community-led change around the world, we found ourselves facing some of the very same dilemmas as the big companies. We checked with other social sector organizations and found the problem to be widespread.

The problem: At times, it can be very unclear whether a certain activity belongs on your platform or not. In fact, we found ourselves in situations where either choice—removing or keeping it on—seemed wrong. We called this the Neutrality Paradox.

We started a working group to systematically research the Neutrality Paradox. Some of the findings are being published in an ongoing series by Alliance Magazine. I’ll summarize a few below:

1. We found that the concept of neutrality as it applies to platforms is, at best, a well-intentioned but failed principle that has been proven inadequate by experience. At worst, it is a tool used deliberately by those looking to avoid responsibility.

2. There is no bright line between what’s acceptable and unacceptable on a platform. Each platform should invest in thoughtful terms of service, community standards, and moderation algorithms, but there will always be a gray space where it isn’t clear how the rules should apply.

3. As a corollary, any attempt by a platform to push off that responsibility onto a system of mechanistic rules and criteria will ultimately fail. We have seen platforms try this approach (“Well, our terms of service technically allow it…”) and ultimately end up reversing course. There’s no way to “set it and forget it” when it comes to moderating the really tough decisions.

Solving the Neutrality Paradox

Our work on these questions initially focused on deepening our understanding of the problem, but it is now starting to yield answers. We’re no longer claiming neutrality, but holding ourselves to a commitment of openness and inclusion. We acknowledge that some cases will require taking a human-centered (as opposed to algorithmic) approach to curation and moderation decisions. We also humbly offer that other platforms—both nonprofit and for-profit—can adopt the basic skeleton of our emerging approach. If they did, they would:

1. Spend time defining the basic principles that guide their work (their working Ethos). These principles are choices—there’s no universally accepted set of ethical standards. Furthermore, these principles will never serve as the complete set of rules that algorithmically resolve all questions. However, some modes of expression for these principles are more effective than others.

2. Work through historical examples of dilemmas (whether directly experienced or taken from other organizations) to help translate their Ethos principles into a working platform participation policy (e.g., terms of service, community standards, etc.). After all, most cases don’t fall into the aforementioned gray zone. Part of the leaders’ responsibility is efficiently handling the clear-cut cases, and that includes anticipating the “known unknowns” represented by historical cases. If a dilemma that could have been foreseen catches a platform by surprise, that is a failure of leadership.

3. Have a process to handle the remaining true dilemmas. There are different structures for this; for example, some include councils of external stakeholders. However, they all share a few characteristics, such as ultimate accountability lying with an identified leader or group, and the inclusion of a means of revisiting a decision.

4. Be as transparent as possible, recognizing that complete transparency is neither possible nor desirable.

5. Expect to learn and adjust, including changing the rules if necessary to protect a platform’s integrity. While this may upset participants, it recognizes our imperfection. Any system we design will have flaws that will in time be revealed; it would be irresponsible to cling to a system that has been proven flawed or incomplete. Even the powerful and time-tested document that is the US Constitution has a built-in means for amendment. In fact, we can draw lessons from how the legal system balances codified law with dynamic case law for insight.

Using this approach, the leaders of a platform organization are not only poised to handle dilemmas with confidence and fairness to their constituents; they can be prepared to create real value for those constituents. The unavoidable gray zone of Neutrality Paradox dilemmas simultaneously poses significant risk to platforms and offers the opportunity to do some of their most important work. Any organization undertaking a mission to connect people with other people, or people with powerful ideas, may end up delivering its greatest positive impact through responsibly and thoughtfully shepherding the cases where disagreement and controversy make those connections most difficult.

Learn more about GlobalGiving’s Ethos.


Featured Photo: Train 100 Nepali youth monitors so aid saves lives
by Integrity Action
