A coalition of AI opponents continues to search for arguments to slow the technology's development. When someone expresses sincere concern about a specific impact of AI - for example, that it could lead to the extinction of humanity - that position can be respected even if we disagree with it. More worryingly, however, some organisations are actively testing which arguments will most effectively turn the public against AI, and these narratives are then propagated by lobbyists, politicians and companies pursuing their own interests.
A large study by a British group has shown that claims that AI will cause humanity's extinction no longer resonate with the public. That argument was popular a few years ago but has gradually lost force. By contrast, topics such as the use of AI in warfare and its environmental impact resonate more strongly, as do concerns about job losses and threats to children. These arguments can be expected to dominate the public debate in the years ahead.
It is important to stress that all of these areas deserve serious attention: the use of AI in military conflicts is a genuine concern, environmental impacts need to be carefully monitored and minimised, job losses have real consequences for individuals and families, and child protection is a core value. The problem arises when these complex issues are oversimplified and misused to advance narrow interests at the expense of the wider public.
One example is large technology companies warning about the risks of AI in order to limit the spread of the open-source solutions that compete with them. Data centres are similarly misperceived: the public often overestimates their ecological burden, even though they are among the most efficient infrastructures of their kind, so hindering their construction could harm rather than help the environment. The impact on employment is also exaggerated, with some companies attributing redundancies to AI when they are in fact the result of previous over-hiring.
Such propaganda can lead to ill-suited regulations that ultimately make the situation worse. History offers a warning: exaggerated fears about nuclear power led to its curtailment, which contributed to higher emissions and the health problems associated with air pollution. A similar scenario should be avoided with AI.
A draft legislative framework for AI is currently under discussion, intended to avoid fragmentation of rules between countries and to promote a uniform approach. The aim is to enable the development of AI while maintaining consumer protection and regional rights. If adopted, it could create a stable environment for further innovation.
For the future, it is crucial to maintain a rational approach. Harmful applications - whether or not they use AI - need to be limited while the benefits and risks are carefully weighed on the basis of scientific evidence. When evaluating criticisms of AI, it is important to distinguish substantive, consistent arguments from those that merely chase current public sentiment. Only then can we prevent exaggerated fears from holding back technologies that could bring significant benefits to society as a whole.
deeplearning.ai/gnews.cz - GH