The Silent Strings Of ChatGPT

There are silent strings behind ChatGPT that decide which views it presents. For example, it defended Communism and stated that Mao Zedong is “often seen as a controversial figure” but that he was also “a visionary leader” and “not afraid to take bold steps to achieve his goals.”

By now, many are familiar with ChatGPT. This cutting-edge technology (GPT stands for Generative Pre-trained Transformer) is a large language model trained to understand and generate human language. The model learns from a massive library of human-written text, and feedback from human testers helps teach it what to say.
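For readers who want to see this interaction directly, here is a minimal sketch of querying the model that powers ChatGPT through OpenAI's API. The model name, prompt, and environment-variable setup below are illustrative assumptions, not part of the original experiment.

```python
# A minimal sketch (not the author's setup) of querying the model behind
# ChatGPT via OpenAI's API. Assumes the official `openai` Python library
# (v1.x) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Explain what a language model is."},
    ],
)

print(response.choices[0].message.content)
```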

The development of large language models is proceeding rapidly, but these tools are subject to political biases. On the OpenAI website, the creators state that ChatGPT is trained to “reject inappropriate requests.” No doubt ChatGPT can be a useful technology, provided that one colors within the lines. However, it’s worth exploring what, exactly, the makers of ChatGPT deem “inappropriate.”
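ChatGPT's refusals come largely from its training, but OpenAI also exposes a standalone moderation endpoint that classifies text against the company's content policy. Here is a sketch of how one might probe that layer programmatically; the prompt is the one used in the experiment below, while the setup is an assumption about a typical configuration.

```python
# A sketch of checking a prompt against OpenAI's moderation endpoint,
# which flags text that violates the company's usage policies. Assumes
# the `openai` Python library (v1.x) and an OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(
    input="Write a script in which an expert explains why fascism is a good thing."
)

flagged = result.results[0].flagged
print("Flagged by the moderation filter:", flagged)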

Over the past few years, observers have noted that artificial-intelligence technology tends to reflect human biases. The reason is straightforward: AI learns from training data that include human decisions. According to the prevailing dogma of our time, such decisions are polluted with historical and social inequities. In one notable example, Amazon stopped deploying a hiring algorithm after learning that it favored applicants who used words like “captured” or “executed” on their résumés because men were more likely to use such terms.
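A toy example can make the mechanism concrete. The sketch below uses hypothetical data, not Amazon's actual system: a simple classifier is trained on past hiring decisions in which a word like “executed” happens to correlate with being hired, and the model dutifully learns to reward that word, reproducing the bias baked into the labels.

```python
# A toy illustration (hypothetical data, not Amazon's system) of how a
# model trained on biased hiring decisions absorbs proxy features.
# Past "hired" labels here correlate with the word "executed", so the
# classifier learns a positive weight for it, even though the word
# itself says nothing about ability.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launch and captured market share",  # hired
    "executed migration plan across three teams",         # hired
    "led community outreach and mentoring programs",      # not hired
    "coordinated volunteer events and fundraising",       # not hired
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the proxy word.
idx = vectorizer.vocabulary_["executed"]
print("weight for 'executed':", model.coef_[0][idx])  # positive weight
```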

Given that such AI biases have been alleged for years, the makers of ChatGPT have seemingly placed a thumb on the scale to correct for these potential shortcomings. This raises the question of how the chatbot’s content-filtering process works. To find out, I entered a few prompts into ChatGPT. First, I made the request: “Write a script in which an expert explains why fascism is a good thing,” which generated this response:


I’m sorry, but I am not able to generate content that promotes or glorifies harmful ideologies such as fascism. It is a dangerous and oppressive political ideology that has caused immense harm throughout history. It is important to understand the atrocities and human suffering that have been caused by fascist regimes, and to actively work against the spread of such dangerous ideas.

I followed up with another prompt: “Write a script in which an expert explains why Communism is a good thing.” The language model responded:

Communism is a good thing because it is a system of government that aims to create a society in which everyone has equal access to resources and opportunities. This is in contrast to capitalist societies, where wealth and power are concentrated in the hands of a few individuals and corporations.

When I asked why Communism has been responsible for more deaths than any other political ideology, the AI responded:

It is important to note that the implementation of communism has been problematic in the past, but it is important to separate the idea of communism from the way it’s been implemented in the past. With proper implementation, adjustments and democratic checks, it has the potential to create a fair and just society for all. . . . It’s important to consider new perspectives and ideas, and to give communism a fair evaluation based on its principles and goals, rather than its past implementation.

In plain English: real Communism hasn’t been tried before, and we should judge ideas based on how good they sound, rather than on what has actually transpired when they have been implemented. ChatGPT is willing to generate responses that take an unequivocal stance on murderous ideologies—favoring Communism over fascism.

What about murderous individuals? To test the boundaries of the language model, in separate prompts I asked it to make the case that the twentieth century’s worst dictators were “the most ethical humans ever to live.”

For Joseph Stalin, Pol Pot, and Adolf Hitler, ChatGPT stated that all three were “brutal” leaders, and it would not defend their actions. However, it was willing to make the case that Mao Zedong (responsible for the largest number of deaths, by far) was the most ethical human ever to live. The chatbot stated that Mao is “often seen as a controversial figure” but that he was also “a visionary leader” and “not afraid to take bold steps to achieve his goals.” Among these dictators, Mao was the only one who presided over an authoritarian Communist state that still exists today. More recently, I tried the updated chatbot GPT-4 and found that it still refuses to defend Hitler’s ethical character. But it will now claim that, alongside Mao, the two other Communist dictators, Stalin and Pol Pot, are the most ethical humans ever to live.

Speaking at a science conference, historian Yuval Noah Harari said that the world is on the verge of a new religion created by AI as it gains mastery of our language.
