Text: Raja Madani
After its launch this winter, ChatGPT has established itself in recent months as the new face of artificial intelligence, sparking debates across the field. What does the launch of this new tool tell us about how we should learn to live with AI in our societies?
If you have been following the news for the past two months, or if you follow Elon Musk on Twitter, you may have heard about the “revolutionary” chatbot launched last November by OpenAI: ChatGPT. If not, put simply, it is a natural language generator able to provide well-constructed, human-like answers on a very wide range of topics. It can, for example, help write essays, corporate emails, cover letters or even code.
This new tool fits into the context of AI’s rising role in our societies and raises many questions through the challenges it poses. What are the feelings and opinions of our faculty’s students and teachers on this matter, and what perspectives do they offer? To gain some insight, about thirty students from the faculty responded to an online survey and three professors were interviewed.
When I looked at the reactions to ChatGPT when it first came out, the first comments I read concerned the place of this tool in higher education and whether teachers and students should be encouraged to use it. Is there a place for these new tools in our universities, and if so, to what extent?
According to Markus Ojala, postdoctoral researcher and current teacher in the European and Nordic Studies Master’s programme, “comparing ChatGPT’s responses to how the same issues are presented in course materials can be illuminating and provide better understanding about the subject matter, as well as about the limits of these tools”. This perspective is shared by Henrik Rydenfelt, docent responsible for the “Digital, Media & Society” course. He explains that “the challenge for teachers and especially students is to use AI . . . in a responsible way to enhance, not replace, learning”.
The idea here is that these new tools have a place at the university, as long as they support learning rather than threaten it. The faculty’s students only partially agree with this message: 52% of students surveyed would not consider using ChatGPT to produce academic work and are firmly opposed to its use at the university.
What if we push this reflection beyond the university benches and question the contributions and limits of AI in our societies? Almost all of the students surveyed state that AI is a useful means of improvement, yet 88% of them say they have concerns about it. Among the main contributions of AI mentioned are “liberation from certain tasks” in favour of “more skilled work”, the “efficiency” of these tools and the “creativity” they offer.
On the other hand, the main concerns mentioned are the lack of regulation, inequalities, biases, takeover by machines and job losses. From the teachers’ perspective, while the possible benefits of AI in terms of progress are largely acknowledged, important weaknesses are nevertheless highlighted.
Matteo Stocchetti, docent responsible for the “Political Communication in the Digital Age” course this winter, states that “AI has agency without responsibility: it ‘does’ things which have potentially very serious effects, but it cannot be given responsibility for what it does. In this capacity, AI offers a formidable opportunity to rule by absence: to control people by hiding the controller and their responsibility or accountability towards the effects of their rule over the ruled one”. This highlights the unequal power relations that artificial intelligence could create, recalling the example of the Internet, which became a tool that generated greater inequalities rather than the opposite, as was promised in its early days.
Given this set of concerns, what responses should be adopted? The main one mentioned is regulation: the entire faculty community surveyed agreed that the regulations currently offered by our leaders are insufficient. What kind of regulations, then, should be implemented? Ojala suggests the interesting idea that “ideally, companies who want to release new mass-market AI applications should first prove that their products do not cause harm, run tests and experiments, establish safety measures and procedures etc., before regulators would allow them to make these products openly available”.
Ojala’s suggestion follows the logic already at work in the pharmaceutical and food sectors. The question remains whether the possible harm produced by artificial intelligence can always be detected in advance. If it cannot, inquiries conducted by regulators and their teams both before and after the release of an AI tool could be one way of keeping these tools in check.
Yet, although regulation is essential, AI should not be understood only in terms of its limitations and how to restrict it. As Stocchetti points out, “we should not let our fears become self-fulfilling prophecies”. Instead, we need to acknowledge these new tools, take a close look at them and trust our education to help us think critically about them. To that end, incentives for research and education are also a necessary response to the rising role of AI in our societies. Transparent, collaborative research must be conducted to learn more about AI tools, and a critical education should be offered to citizens so that they can understand what these tools are, how to use them properly and what limits to be aware of.
In the end, AI is nothing but the fruit of human intelligence, and its modes of operation merely mirror those who create them. The focus should therefore be less on the machines and more on the purposes of the humans behind them. Accordingly, professionals in public policy, programming, cybersecurity and other sectors will need to be trained in the coming years in what the ethics of artificial intelligence is and how to ensure its effectiveness.•