Social media platforms today undoubtedly constitute the largest space for the exchange of individual opinions. While some fear the growing power of those who run these platforms, others denounce them for not doing enough in terms of content moderation, accusing them of tolerating hate speech, discrimination, and fake news in order to increase profits. The latter view formed the basis of the social media advertising boycott by several international companies, such as Coca-Cola and Unilever, which aimed to crack down on harmful content found on these platforms. It is striking that discussions about how social media handles content moderation abound, while doubts about the legitimacy of holding such power in the first place are raised far more quietly.

Social media's potential is unprecedented in human history because it relies on technology that was never available before. Facebook, for example, holds detailed data on at least two billion active users; it collects, maintains, and analyzes vast amounts of information about their behaviors, preferences, ideas, and beliefs. This gives Facebook a remarkable kind of power: we have seen it influence elections in some countries and enable revolutions in others, challenging the tyranny of certain regimes and at times serving as the only available space for expressing opinions. Facebook can –at least technically– promote certain ideas, slow the spread of others, and tailor what each individual sees according to their profile and data, in ways that are difficult to track and prove, as the Cambridge Analytica scandal surrounding the last US elections showed. All of these capabilities rest in the hands of a private company whose essential purpose is profit and which does not necessarily represent the interests of its users.

While social media platforms provide a vast, near-instantaneous space for expressing opinions and feelings and exchanging them with others, some posts carry harmful content: hate, incitement, false information, or other damaging speech. Certainly, some posts pose a tangible danger to society, such as those calling for or inciting terrorism. But in many cases, hate speech falls into a gray area that not everyone judges the same way, which makes things far more complicated. For instance, Facebook recently deleted a post by the President of Brazil in which he claimed the existence of a "proven" treatment for COVID-19, on the grounds that the post contained "obviously" false claims, as Facebook's management put it. Leaving such a claim up may pose a threat to society, but deleting it deprives Brazilians of knowing their President's beliefs, even if those beliefs are myths. On what basis, then, was this decision made?

Social media platform teams currently set policies, standards, and controls for content as they see fit, review posted content accordingly, and pass judgment on it; they then take specific actions such as deleting or concealing posts or attaching warning labels. Some may see a similarity between this process and the legislative, judicial, and executive authorities of states, albeit without a real separation of powers, and without any system of accountability and disclosure outside the umbrella of the company concerned. Even if this process is satisfactory or sufficient in certain cases, the major social media platforms used by the public are very few and often adopt monopolistic practices, so their internal policies affect societies and freedom of expression in general. It is therefore difficult to regard such companies as merely private entities doing what their own interests require.

Social media platforms may try, to the best of their ability, to adopt and implement the highest international standards for content policies –and doing so serves them, since it helps them avoid regulation that would limit their capabilities and functions in the short run– yet the authority they wield is far from the concept of democracy, as it is neither derived from nor representative of the will of the people; at best, these platforms play the role of a "benevolent dictator". For instance, an authority that chooses to keep or hide Trump's tweets, and sets its own standards for doing so, reflects Twitter's wishes, not the wishes of the American people, let alone the collective will of the world's population; nor is it subject to the kind of constraints that provide –to an extent– a guarantee against the abuse of power by the American government itself. If the President of the United States is affected by Twitter's controls, there is no doubt that every other user is subject to this “dictatorial” power as well. I stress here that dictatorship does not always imply tyrannical rule; but however benevolent it is, it will ultimately lead to tyranny. Does it make sense, then, to demand democratic regimes that govern our rights and duties in almost every respect, while leaving freedom of opinion at the disposal of parties that represent only themselves?

Despite Facebook's acknowledgment that it “should only make limited decisions about freedom of expression and digital security" and its attempt to transfer part of its authority to an oversight board that enjoys a certain degree of independence, its power remains largely concentrated and unrepresentative of the platform’s users, and there are many doubts about the aim and effectiveness of such a move. It must be noted that designing a solution that rises above the problems presented here is not easy. But it is clear today that the way content is managed on social media threatens freedom of expression to a certain degree, and I believe this basic right should be restricted only by an authority that derives its legitimacy from the people, represents their opinion, and guarantees their ability to change it whenever they wish. We must begin studying models for reformulating the existing arrangements.