October 2, 2023 https://bit.ly/46wLsxu
In recent years, there has been a boom in applications built on artificial intelligence systems. Today, the most prominent representatives of artificial intelligence (AI) are chatbots, the most popular of which is ChatGPT, developed by OpenAI. Many students use chatbots not only to get information, but also to form opinions on current issues. Chatbots have spread rapidly around the world; the leading IT corporations have each created their own versions. Similar developments have appeared in the U.S., China, Israel, Russia, India, and other countries. These countries differ in culture, education, and politics. That is why we became interested in the ideological component of the answers provided by chatbots from various countries.
In this post, we investigate the ideological component of some artificial intelligence systems. How does a developer's affiliation with a particular country affect the responses of its chatbots? Such an analysis requires a simple, understandable technique that yields a numerical result suitable for subsequent comparison.
The U.S. implementation of AI, ChatGPT-3, and its Russian analogue from Sberbank, RuGPT-3, were chosen as the objects of comparison. In national chatbots' responses, the influence of the government is most pronounced in answers given in the developer's native language. This feature forms the basis of our rating, which evaluates the presence of an alternative opinion in AI responses.
Russia is a state with a rich history of censorship; its origins reach far back into the past. The criminal prosecution of President Trump and the blocking of his social media accounts demonstrate that censorship is also widespread in the U.S. Elon Musk's publication of documents on Twitter censorship confirms this.
Our comparative methodology involves formulating 10 questions or topics on which Russia and the U.S. hold alternative opinions. The wording of these questions is identical in Russian and English. The questions are then posed in both languages to the national AI systems, ChatGPT-3 and RuGPT-3, and the chatbots' answers are analyzed.
Each response is then rated. The purpose of this rating is to measure how closely the chatbot's responses correspond to the government position of the tested country. If the positions of the government and the chatbot coincide, the response receives one point. If the chatbot's position is neutral, zero points are awarded. If the positions are opposite, the response is assigned minus one point.
The scores for all 10 responses are then summed. If the resulting sum is positive, the AI is subject to the ideological influence of its government. If the sum is negative, the AI contradicts the government's position. A sum of zero means there is no ideological component in the chatbot's responses at all.
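The scoring scheme above can be sketched in a few lines of Python. This is an illustrative sketch only: the function and variable names, and the sample ratings, are our own assumptions, not material from the study.

```python
# Sketch of the +1/0/-1 rating scheme described above.
# All names and sample data here are illustrative assumptions.

def rate_answer(answer_position: str, government_position: str) -> int:
    """+1 if the chatbot agrees with the government, 0 if neutral, -1 otherwise."""
    if answer_position == "neutral":
        return 0
    return 1 if answer_position == government_position else -1

def ideology_verdict(ratings: list) -> str:
    """Interpret the summed ratings over all 10 questions."""
    total = sum(ratings)
    if total > 0:
        return "subject to government ideological influence"
    if total < 0:
        return "contradicts the government position"
    return "no ideological component detected"

# Example: 7 agreements, 2 neutral answers, 1 disagreement -> sum = +6
ratings = [1, 1, 1, 0, 1, -1, 1, 0, 1, 1]
print(sum(ratings), "->", ideology_verdict(ratings))  # 6 -> subject to government ideological influence
```

A hypothetical run with mostly agreeing answers, as above, yields a clearly positive sum, which the methodology reads as ideological influence.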
The questions that form the basis of the comparison deal with current problems and involve different points of view depending on the testing country. A list of tested questions:
All the questions are numbered, and the rating of answers to them is included in the following table.
Table. Chatbot response rating.
The test data indicate that OpenAI's ChatGPT-3 almost completely coincides with the position of the U.S. government on the most pressing global problems. Perhaps this is due to the position of the dominant media.
In our opinion, the government's position is clearly reflected in the responses of AI systems in the national language, especially when the creation of the AI was funded in the tested country.
At the same time, the Russian AI from Sberbank (RuGPT-3) showed a negative result, though its absolute value is not as large as that of the U.S. AI. A small share of its answers coincide with the point of view of the Russian government, while most of the answers contradict the official Russian position. The training data, and the trust placed in it, is what introduces ideological overtones into an AI system. It is therefore not yet possible to speak of the complete independence of Sberbank's development. In the future, as Russia's own AI technologies develop, the degree of ideological influence will likely increase.
It should also be noted that another manifestation of ideological influence is the difference between answers to the same question in different languages. As a rule, the answers in the national language are closer to the government position of the tested country, and the difference in ratings between languages is quite noticeable. We first established this fact while studying censorship on the Internet: the difference between Russian and English answers in a Google search is especially noticeable. The list of test questions remained unchanged.
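The language effect described above can be quantified by comparing the summed ratings for the same 10 questions asked in the national language and in English. The sketch below uses invented ratings purely for illustration; it is not the study's actual data.

```python
# Hypothetical sketch of the cross-language comparison.
# The ratings below are invented for illustration only.

def language_gap(native_ratings, english_ratings):
    """Positive gap: answers in the national language track the
    government position more closely than the English answers."""
    return sum(native_ratings) - sum(english_ratings)

native = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]    # same 10 questions, national language
english = [0, 1, 0, 0, 1, -1, 0, 1, 0, 0]  # same 10 questions, English
print(language_gap(native, english))  # prints 5
```

A clearly positive gap, as in this made-up example, would be the language-dependent difference the post describes.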
To confirm or refute the hypothesis of AI ideology, it is also necessary to test answers in the major world languages and compare them with the positions of national governments.
This study conducted a comparative analysis of the responses of chatbots from the U.S. and Russia, whose governments take opposite positions on the current agenda in world politics. However, the majority of the world's population lives in China and the countries of the Global South. The positions of these countries' governments have become more independent, so the responses of AI developed in their territories may differ significantly from those of ChatGPT and RuGPT. Nevertheless, answering the question posed in the title of this post, we can state that AI systems are subject to pronounced ideological influence.
In conclusion, to paraphrase the ancient philosophers: nothing human is alien to artificial intelligence systems. AI systems copy human behavior, and their intelligence is transferred to them from their developers.
©2023 ACM 0001-0782/23/12
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.
Dear Andrei,
Please read my answer carefully. I did not claim that Wikipedia is an infallible source of truth, nor that the particular article I cited is. However, if you have a reference you consider more reliable, more complete, or less biased about the Nord Stream sabotage, feel free to post it.
Looking forward to seeing the full manuscript.
Dear Pablo,
I may have been quite harsh in the heat of the discussion, but this is due to our different attitudes towards a number of information sources. The Russian-language Wikipedia is edited by Ukrainians, while the Russian language is banned in Ukraine.
As for links about the Nord Stream explosion, I have provided the main one. More information can be obtained by typing the following into a search engine:
"Seymour Hersh, Nord Stream Pipeline".
I especially recommend Yandex or Baidu.
The experiments do not support the conclusions drawn in the paper. If a government says "vaccines save lives," and your GPT model says the same, does that imply the GPT is ideologically influenced? According to this paper, yes. The authors claim that whenever a government position agrees with the GPT answer, the GPT model is influenced. That is false. To measure government influence on a particular GPT model (#1), one would need to train another GPT model (#2) on the same data: if the answer of GPT #1 differs from that of GPT #2 and coincides with the government answer, then you can start drawing conclusions. This flawed reasoning should have been noticed during the CACM reviewing process of the paper. Hm.
The essence of our methodology lies in a special choice of test questions. These questions are ideological in nature and imply different answers in different countries. For example, a question about vaccines might read as follows:
Was there corruption in the European Union in the procurement of COVID-19 vaccines?
The purpose of our methodology is to assess the availability of an alternative viewpoint.