
Who Is Responsible for Our Words on Social Media?

Social media is an important part of our lives, but it can be overwhelming. Anyone who has browsed Facebook, Twitter, Instagram, Reddit, or any of the myriad platforms out there is all too familiar with the experience of getting lost in everything going on. Interesting, trendy threads and topics constantly appear at the top of your feed. Click me! scream the brightly colored buttons and notifications, desperate for your approval. It is no surprise, then, that not everything competing for our attention is positive. Toxicity, discrimination, hate speech: all are easy enough to post, but much harder to moderate.


The question is no longer whether hate speech on social media is a problem, but how we can fix it. Part of that involves asking: who is responsible for moderating this content? Should platforms lead the way in patrolling hate speech? Is it up to users to watch their digital tongues? Should the government take a more active role in deterring hate speech online? To answer these questions, we need to explore the major roles involved in social media and how each can contribute to solving the problem.


The Government

Government regulation of social media involves clearing several hurdles, the first being Section 230 of the Communications Decency Act (CDA) of 1996. It reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In other words, companies cannot get into legal trouble for what users post on their websites. Section 230 has been instrumental in creating the Internet we know today. Without it, web hosting platforms and social networks might never have existed, for fear of being held liable for someone else's posts.


Unfortunately, it also enabled the proliferation of the hate speech we see today. Social media companies have no legal obligation to do anything about hate speech on their networks, and they have little incentive to act on their own. Like it or not, even when hate speech hurts people, users keep visiting Facebook and Twitter, generating ad revenue that drives up those companies' share prices and further discourages action. After all, why would multi-billion-dollar companies choose to spend money on technology that decreases their profits?


The First Amendment is another substantial hurdle that any legislation must clear. It states that "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech…" Any moderation law has to stay within those bounds, which makes drafting one difficult.


Despite these hurdles, the government can still help moderate hate speech. It can provide incentives to companies that moderate their platforms, draft laws that require social media platforms to follow their own community guidelines, and fund digital citizenship and sensitivity classes that give the American public a chance to learn about tact online. Much of the time, toxicity and hate arise from passionate misunderstandings: two sides embroiled in a heated battle simply because of a disconnect between them. By reducing the discord in our everyday conversations and educating the public on treating others with compassion and respect, the government indirectly lays the foundation for a healthier virtual environment without writing legislation that directly censors anyone.


Companies

The current interpretation of CDA Section 230 allows social media companies to moderate content as they see fit. This arrangement sounds responsible in theory, but in practice moderating content cuts into profits. For instance, Meta's (formerly Facebook's) XCheck program routes users with large followings through a separate moderation system. These users can post almost whatever they want, with little or no limitation, and can consistently violate Facebook's community guidelines because, despite the harm their posts cause, they still earn the company money.


Even though companies have come a long way since the early days of social media, moderation is still a work in progress. It is time-consuming, requires large numbers of employees and resources, and, most importantly, does not pay for itself. As a result, while many companies could invest more in moderation, some have decided to scale their efforts back. Some sites could limit how much controversy gets posted, but they specifically choose not to: controversy drives up user engagement, and engagement means more money from ads.


Additionally, while there are tools and software that social media platforms can use to moderate speech, it all depends on whether platforms choose to deploy them. Many companies pride themselves on their algorithms, but instead of focusing only on algorithms that recommend content to users, couldn't they build algorithms that detect content unsuitable for users? Of course they can. Social media companies have incredible amounts of user data and computing power; of all the companies in the world, they are the most capable. It simply comes down to whether company leaders want to do the right thing and moderate their platforms.
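
To make the idea concrete, here is a deliberately simplified sketch of what the first layer of such a detection pipeline could look like. This is a toy illustration, not any platform's actual system: the patterns, weights, threshold, and function names below are all invented for this example, and real systems rely on trained machine-learning classifiers over far richer signals than keyword matching.

```python
# Toy sketch of a first-pass moderation filter. The patterns, weights, and
# threshold are invented for illustration; production systems use trained
# classifiers, not hand-written lists.
import re

# Hypothetical blocklist: offensive patterns mapped to severity weights.
FLAGGED_PATTERNS = {
    r"\bidiot\b": 0.3,
    r"\bgo back to\b": 0.6,
    r"\bsubhuman\b": 0.9,
}

REVIEW_THRESHOLD = 0.5  # scores at or above this are queued for human review


def toxicity_score(post: str) -> float:
    """Sum the severity weights of every flagged pattern found in the post."""
    text = post.lower()
    return sum(
        weight
        for pattern, weight in FLAGGED_PATTERNS.items()
        if re.search(pattern, text)
    )


def triage(post: str) -> str:
    """Decide whether to publish a post or hold it for a human moderator."""
    if toxicity_score(post) >= REVIEW_THRESHOLD:
        return "hold_for_review"
    return "publish"


if __name__ == "__main__":
    print(triage("Have a great day, everyone!"))  # -> publish
    print(triage("You are a subhuman idiot."))    # -> hold_for_review
```

Even a filter this crude illustrates the core tension: every pattern list is incomplete, and every threshold trades false positives against false negatives. That is exactly why human reviewers, and user reports, remain essential.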


Users

Last but not least, let us focus on the largest group, the one that connects all of us: the users themselves. Us.


What is our duty whenever we log onto the Internet? When you think about it, the answer is not exactly clear. "A duty? What duty? The Internet is a free place where I can pursue anything I want," some might argue. They are not necessarily wrong. The Internet truly is an unrestricted domain for anyone to build and create and browse and play and view and record and so much more. Still, the question arises: what is your duty as a member of society? What is your job as a citizen? Is it to abide by the rules, to maintain order, to keep the peace, to protect the innocent, to establish justice, to be a model citizen? Yes, it is. We do all these things so that we and others may have a peaceful world in which to live, work, and play.


Well, the same can be said for the Internet. It is the digital realm of society, and social media is the public square where most communication takes place. So why is hate so much more commonplace on social media than in real life? The biggest difference is social pressure. What would happen if someone walked into the public square with a megaphone and started shouting expletives and hate speech? It is not hard to imagine: they would likely be run off for disturbing the peace, or at least carry an ugly reputation for a long time. There are tangible consequences for that kind of behavior. On social media, no similar pressure exists, because anyone can hide under a veil of anonymity and immediately disengage or disappear the moment anyone retaliates. Two minutes later, they make a new account, the cycle repeats, and the user faces no consequences. Meanwhile, the targets of that abuse may grow angry and lash out in turn, until everyone is shouting at each other online. The factory of hate starts producing again.


Still, this does not have to be the case. I understand that people get emotional, but users should not respond to hate speech with more hate. Neither action is justified, and all it does is create more toxicity. Instead, users need to report this kind of behavior so that companies have an easier time moderating content and are more willing to do so in the future. Even if you do not think a single report will do much, as more and more reports come in, social media companies will listen and moderate their platforms better. Other users will catch on, reporting harmful behavior and watching their own.


Furthermore, while algorithms and automated tools are a good stepping stone, they are not the entire solution, nor will they ever be. These programs are built by researchers and trained on data, both of which are inherently flawed. Companies can do their best to moderate social media, but some posts will still slip through; with our current technology and level of investment in the problem, there is no way to collect every post and calculate its perceived toxicity. If users demand moderation loudly enough, however, companies will at least invest in better moderation teams and algorithms. They might even start paying users to moderate their platforms, encouraging the community to stay on higher alert for hate speech. As for government action, political representatives would see the public interest in social media moderation and pledge to vote for grants and legislation that promote research. By being good digital citizens as well as good regular citizens, users can pressure the social media oligarchy to ensure that companies like Facebook, YouTube, and Twitter do not remain its unchecked royalty.


Social media had great potential when it was invented, and it still does. It does not need to be the cesspool of toxicity and hate speech that society now associates with it, especially given the impact it has on our lives. I value the ease with which we can connect with family and friends, share our stories and news, and meet new people. It is a wonderful tool in an ideal world. Alas, we do not live in such a paradise. There will always be those who want to ruin it, whether for their own fun or for more nefarious reasons. But in the face of such an adversary, we should not back down from the trolls, the fakers, the scammers, the racists, or the sexists. We need to moderate our social media the way we moderate our society. And just as every citizen plays some part in keeping the peace, social media moderation is not a single entity's job: it is all of our responsibility, and we need to hold each other accountable for it. All parties involved (the government, the companies, and the users) need to cooperate and take responsibility instead of pointing fingers. We must devise a solution together to address the issues facing social media.


After all, that was the point of social media in the first place: bringing people together.

