Artificial intelligence (AI) is the ability of machines to reason, learn and carry out tasks associated with human intelligence by processing the data they receive. The concept comes from computer science and is realized through algorithms and computer programs. Global powers and companies are racing to develop AI for military, academic, economic and diplomatic applications. However, the rise of tools like ChatGPT has also raised questions about weak data protection, harmful biases and uses for disinformation.
Simulating human intelligence
Artificial intelligence was first approached in a technical way by the British mathematician Alan Turing, the father of modern computing. In 1950, Turing proposed a test to distinguish human language from language generated by a machine. In 1956, the American computer scientist John McCarthy introduced a technical definition: “It is the science and engineering of creating intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to be limited to biologically observable methods.”
As computing has developed, AI has come to mean the ability of machines to simulate human learning and reasoning in order to carry out specific tasks. This is possible thanks to algorithms that detect patterns in stored data and improve as more information is collected. AI is commonly divided by its goal: weak AI, which carries out specific activities, such as mobile assistants, and strong AI, which seeks to match human intelligence (artificial general intelligence) or even surpass it (artificial superintelligence). Machine learning can be guided by human intervention, or it can be deep learning, in which the system makes its own connections.
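The idea of a program that improves as it processes data can be illustrated with a minimal, hypothetical sketch: a one-parameter model learns the toy rule y = 3x by repeatedly nudging its parameter to reduce its prediction error. The dataset and learning rate here are invented for illustration, not taken from any real system.

```python
# A minimal sketch of "learning from data": the model starts knowing
# nothing (w = 0) and gradually fits the pattern hidden in its examples.

def train(samples, steps=200, lr=0.01):
    """Fit y ~ w * x by repeatedly adjusting w toward lower error."""
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y      # how far the prediction is off
            w -= lr * error * x    # nudge the parameter to reduce the error
    return w

# Hypothetical training data following the pattern y = 3x.
data = [(1, 3), (2, 6), (3, 9)]
learned = train(data)
print(round(learned, 2))  # the learned parameter approaches 3.0
```

The more examples and iterations the program sees, the closer its parameter gets to the underlying rule, which is the basic mechanism behind the pattern-based improvement described above.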
The geopolitics of artificial intelligence
Artificial intelligence is also a source of dispute between companies and countries seeking economic benefits and advantages in areas such as national security. The United States has strategies such as AI Next, from the Defense Advanced Research Projects Agency (DARPA), and the National AI Initiative Act of 2020, which led to the creation of the National AI Research Resources Task Force. Some companies collaborate with governments, such as HP with the US National Security Agency. China, for its part, has developed the Made in China 2025 strategy for its technology industry and a plan to lead in AI by 2030, while the European Union promotes specific projects.
There has also been progress in military applications of AI. Drones and autonomous weapons may lead to a new conception of war, with more destruction but fewer human casualties. In the current war, Ukraine is using AI to back up government data in case its ministries are destroyed and to detect enemy forces with the help of citizens, while Russia is using it to spread disinformation. AI can also improve security by helping to identify suspects, creating military training simulations or increasing the precision of weapons. It may likewise advance diplomacy, enabling conversations with speeches generated in real time or proposing different negotiation scenarios.
Opportunities, biases and misinformation
In everyday life, artificial intelligence can make personalized recommendations, translate texts or add automatic subtitles to videos. On a larger scale, it can help develop autonomous vehicles, improve medical diagnoses or detect natural disasters. However, many AI systems draw on user information, which can conflict with data protection laws such as those in Europe. And since AI is created by humans, in some cases its applications have incorporated sexist or racist biases.
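The personalized recommendations mentioned above can be sketched with a simple, hypothetical example: compare a user's ratings with those of other users and suggest an item liked by the most similar one. The user names, film titles and scores are invented for illustration; real recommender systems are far more elaborate.

```python
# A minimal sketch of a personalized recommendation based on
# similarity between users' ratings (all data here is made up).
from math import sqrt

ratings = {  # hypothetical user -> {item: score from 1 to 5}
    "ana":  {"film_a": 5, "film_b": 3},
    "ben":  {"film_a": 5, "film_b": 3, "film_c": 4},
    "carl": {"film_a": 1, "film_b": 5, "film_d": 2},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm = (sqrt(sum(u[i] ** 2 for i in shared))
            * sqrt(sum(v[i] ** 2 for i in shared)))
    return dot / norm

def recommend(user):
    """Suggest an unseen item from the most similar other user."""
    others = [(similarity(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, best = max(others)
    unseen = {i: s for i, s in ratings[best].items()
              if i not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ana"))  # suggests "film_c", rated highly by ben
```

Because "ana" and "ben" rated the shared films identically, "ben" is her closest match, and his highest-rated film she has not seen becomes the suggestion: the same pattern-matching on user data that also raises the privacy concerns noted above.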
The recent success of ChatGPT, developed by the company OpenAI, has been hailed as the beginning of an AI technological revolution. This chatbot answers questions directly, finds relevant information, translates texts and writes code, among many other possibilities. Its massive adoption has raised both opportunities and criticism in education, as well as privacy risks and the prospect of machines replacing human work. Another problem is misinformation. Fake images of the arrest of Donald Trump, of the Pope in a stylish coat and of Vladimir Putin kneeling before Xi Jinping, created with programs such as DALL-E, have already gone viral, as have videos like a fake interview with Bill Gates, and such fabrications will look more and more real.