Lawmakers and business leaders struggle with AI

Image by Nikki Zalewski | Bigstockphoto

When generative AI and ChatGPT emerged earlier this year, many well-known business leaders, including Elon Musk, signed an open letter urging a pause in AI development because they saw it as a risk to society.

Lawmakers plan to regulate AI

The theme of machines taking over the world and enslaving humans has been popular in books and movies for decades. With the rapid progress and the flood of new AI tools we are seeing today, these scenarios have become popular again. Some technology leaders have been calling for a pause in AI development, lawmakers plan to regulate AI, and some business leaders have wanted to go to China to agree on global principles. Many of these approaches sound very naïve.

The European Parliament passed a draft law known as the AI Act that aims to regulate AI and could serve as a model for policymakers around the globe. The law includes restrictions on facial recognition technology and requires transparency from the developers of AI systems such as ChatGPT.

The CEO of OpenAI, Sam Altman, has called for collaboration with China to counter AI risks. Speaking at a conference in Beijing, he emphasized the importance of collaboration between American and Chinese researchers in the field of AI, acknowledged China's talent in AI and called for contributions from Chinese researchers.

Equally naïve

There has been a lot of criticism of the EU Parliament, claiming it is naïve to create regulations for AI, especially when it takes a couple of years for any rules to make their way into EU directives and national laws. Many of the rules will probably be obsolete by then, since the technology and solutions are developing so rapidly. People joke that the EU always wants to regulate everything. This is probably fair criticism, but at the same time, it is normal for laws and regulations to follow technology development, just as most businesses and technologies are subject to some regulation.

The US and China will likely introduce laws for AI or AI-related solutions sooner or later, but they have different cultures for doing so. In the US, we will probably see AI company CEOs and technology experts testify before Congress, especially if any scandals occur. The Chinese government certainly follows AI development very carefully and also guides it.

Business leaders ask for a pause

If lawmakers are naïve about AI, how do the business leaders who ask to pause its development believe that talks in Beijing or business cooperation would remove the tension and prevent a potential AI arms race between China and the US? It sounds a bit like Elon Musk and Donald Trump telling us how they would make a peace deal in Ukraine.

We also see discussions among technology experts about whether AI could be a risk to the existence of the human race. As the Washington Post reminded us, we must take a closer look at the concerns surrounding the existential threat of AI and reach a more nuanced understanding. While some computer scientists advocate aligning AI systems with human values, recent discussions about AI's potential to destroy humanity have become exaggerated. This overplaying benefits the leading AI companies by making the field sound more exciting and their systems appear more powerful.

The focus on doomsday scenarios can divert our attention from other important issues, such as transparency, privacy and ethics in AI development. Instead of fixating on a distant apocalypse, it is more productive to examine AI systems critically and demand transparency from tech companies.

Realpolitik is back

The war in Ukraine, the tension between China and the US, rising populism and a new arms race all show that realpolitik is back. For the last 30 years, we thought politics would be based on ideological values and that businesses didn't need to consider geopolitics. Maybe that was never true, but it is certainly not the case now.

As mentioned, the draft AI Act includes restrictions on facial recognition technology and transparency requirements for the developers of AI systems such as ChatGPT. While the EU is ahead of the US and China in AI regulation, the effectiveness of such regulations remains uncertain. The law focuses on high-risk AI applications and proposes risk assessments similar to the drug approval process. However, there are still ongoing debates over issues such as facial recognition and data scraping. The final version of the law will be negotiated later this year.

Can AI be used against people?

It is easy to see why many EU lawmakers are interested in this law. They are worried about how AI can be used against people, for example, if people's privacy is not respected or if an unregulated AI makes mistakes in drug development and approval processes. EU lawmakers have also emphasized that they don't want systems like China's social credit scoring of people and their behaviour replicated in Europe. And when we look at the development in some EU countries, like Hungary, this might be a relevant risk. So, this law has many political purposes.

Sam Altman and other American business leaders wanted to achieve global cooperation on AI, especially in research and in its use for peaceful purposes. Despite the escalating AI competition between the US and China, the Beijing conference aimed to foster cross-border connections and avoid outcomes like an AI arms race. The US has imposed sanctions on China's access to cutting-edge chips for AI development, while China is prioritizing AI development and implementing its own regulations. Altman has been advocating for cautious regulation globally and engaging with leaders worldwide. His initiatives have a good purpose.

Acknowledge the facts

Acknowledging the facts is the beginning of wisdom. That is a popular phrase in Finland, where realpolitik has been present for decades. It is attributed to former Finnish President J. K. Paasikivi, who tried to find a way for Finland to live with the Soviet Union. (I had a long chat with ChatGPT about who originally said that phrase; it finally admitted it was Paasikivi, although at first it said it could have been Aristotle or Socrates.) Of course, we must remember that sometimes facts change, and sometimes we can change them too.

That is also a good principle for AI. We cannot pause AI development. There will always be parties that continue the development; even if all the good guys agreed to pause it, the bad guys would keep going. Whatever EU lawmakers try, regulation will always trail the technology and will therefore always be reactive. They must also consider Europe's competitiveness so that EU companies can try new things. Sam Altman and other AI business leaders simply cannot stop the arms race and get all parties to develop AI in harmony.

Recognizing the facts

All those goals are nice, and the people behind them probably mean well. Unfortunately, we are in a situation where realpolitik walks over ideological goals, and ideological behaviour can also cause harm. Of course, most of us want to make the world better, and the current state of world politics, and of how technology and data can be used against people, is not ideal. But we cannot stop the development, and we cannot ignore politics. We need to recognize these facts and then take our own small steps towards something better.
