Interview with Daron Acemoglu (Istanbul, 1967), a highly respected economist and professor whose work was recognised with the 2024 Nobel Prize in Economics, shared with his co-authors Simon Johnson and James A. Robinson.
In your most recent book, Power and Progress, you mention that we are at a critical moment regarding the relationship between technology, equality and democracy. What consequences do you foresee if the world doesn’t address the complex relationship between these three forces?
There are two sets of problems which probably share common causes. First, democracy in the industrialised world seems more vulnerable now than at any other time since the Second World War, or perhaps even before. Democracy underpins many other institutional features, such as civil rights, participation, and freedom of speech and the media. In the West and in certain Latin American countries, when democracy is weakened, these rights and institutions also suffer. The entire institutional fabric of these societies appears more fragile. I don’t think we should exaggerate: the decline or collapse of democracy is not imminent. However, we can see what’s happening in the United States with Donald Trump and his strongly anti-democratic agenda, and support for democracy among young people is currently at its lowest. We see similar trends across much of Latin America, where support for democracy is significantly lower than it was in the 2000s. This poses a major threat with potentially harmful implications for prosperity, freedom of speech and equality.
At the same time, we are on the edge of major changes, building on developments of the last 40 years that are likely to accelerate. Artificial intelligence, which relies on digital technologies, could amplify some of these trends in unique ways. Inequality has increased in many countries, including parts of Latin America, the United States and Europe, and this situation could get worse.
And what other factors?
Ageing is another critical factor. All industrialised countries are ageing, some faster than others. Latin America, in particular, will age at a rapid pace and is not adequately prepared to face these changes. Although countries like Japan, South Korea and Germany offer examples of how to manage ageing, I don’t think we’re ready to face these demographic challenges, or climate change, or the transformations of globalisation.
Some of today’s democratic challenges and political tensions cannot be fully understood without considering globalisation, although its nature might evolve over the coming decades. All of this demands institutions that are stronger than ever before to promote engagement, consensus and new solutions based on social dialogue and expertise. However, our current institutions make this difficult. Take the United States, for example, where polarisation has reached such levels that it complicates passing laws on climate change, worker training, or AI regulation. These are critical issues we must address.
In one of your recent works, you talk about the power and wealth that large tech firms are accumulating. Has any organisation in history had as much influence and control as today’s corporate giants?
In my opinion, no. We might compare them to the East India Company, which, with military and political backing, controlled the Indian subcontinent, but even its dominance was relatively superficial.
What is surprising about today’s tech giants, especially Facebook, Google, and to some extent Apple, Amazon and Microsoft, is that they are not only huge and multinational but also control the very fabric of society. They shape information, are integrated into all aspects of daily life, and influence public opinion. We have never had companies as powerful as the big tech firms.
Not even the big oil companies?
No, because these tech entities have significant influence over civil society and even over journalists. Standard Oil, for example, was extremely large and controlled a vital resource, but it never managed to embed itself into the fabric of public thought. It never managed to convince the media and the public that its activities were for the common good, as today’s tech companies do. That’s the situation we’re facing.
It’s often said that reining them in creates barriers to competitiveness – even the well-known Draghi Report talks about this. Do you believe that’s the case?
Yes, but I think we must be realistic about it. Regulation can certainly slow down business, especially if it’s not well designed, which can lead to inefficiencies. However, that doesn’t mean regulation is inherently negative or unnecessary. It has both costs and benefits.
When it comes to some of the most powerful companies in history, regulation becomes essential. Although I believe some of the claims about AI’s potential are exaggerated, even if just a fraction of them are true, this technology will be transformative. We definitely need mechanisms to counterbalance that power, even if the process turns out to be somewhat inefficient.
What’s your opinion of European regulation on AI?
I have a three-part view on European regulation. Firstly, Europe – and the European Commission in particular – deserves praise. They have always been ahead of the curve. European regulations largely reflect solid values such as democratic governance, human rights, civil rights, freedom of speech and privacy.
However, European regulation has also had limitations in some areas. The Draghi Report points out that Europe lags behind the United States and China in the AI field – and even behind Canada. Effective regulation is a challenge, as even well-intentioned rules can have unintended consequences. A more flexible regulatory approach is needed. Take, for example, the EU’s landmark General Data Protection Regulation (GDPR). I fully support the values behind GDPR, like data privacy and the protection of personal information. If I had been tasked with designing data protection rules, I might not have done it better. However, GDPR has had adverse effects. In fact, it has harmed small businesses. While large tech companies found ways to comply without significantly improving privacy, smaller companies have struggled under its weight.
So, what do we do?
It’s not about rejecting regulation. It’s about improving it. We need to understand the legal gaps, address them, and recognise democracy’s limits to achieve this. Europe, like the United States, is polarised and the European Commission lacks a strong democratic mandate. It’s difficult for the Commission to say: “Our GDPR, our creation, didn’t work as expected; we need to revise it.”
Finally, I believe we may need a different approach to regulation. Although European regulations defend excellent values, I see a problem in their reactive nature. Tech companies make the first move, launching products that may infringe on rights or bypass laws, and regulators respond afterwards. We see the same reactive approach in the United States. My argument, as described in Power and Progress and other work, is that we need to adopt a proactive stance. Instead of waiting for AI companies to develop these technologies and then reacting, we should direct development from the start in ways that maximise social benefit.
It seems like quite a challenge to predict technological advances, doesn’t it?
That’s true, but you don’t always need precise foresight to create proactive regulation. For example, Europe and the United States, though imperfectly and on a limited scale, have implemented proactive regulation in the energy sector. Instead of waiting to observe energy companies’ behaviour, they imposed carbon taxes and gave subsidies for innovation to promote renewable energy and reduce dependence on fossil fuels. That’s a truly proactive approach.
In Power and Progress, you warn about the use of AI. Do you think it’s a tool that could boost wealth creation in many countries, or is it more likely to worsen economic inequality?
It’s a bit early to say for sure. A lot will depend on how generative AI develops. As a platform that combines ideas, techniques and practices, it’s very promising and could even surpass other AI applications, like predictive AI, which powers recommendation algorithms on platforms like Netflix.
Predictive AI has had a big impact and influenced how we interact with technology daily. Generative AI has the potential to go even further. But it could develop in different directions. It might become a tool based on specific and expert knowledge, supporting fields like healthcare, skilled trades and journalism, offering solutions tailored to each context. That would be very beneficial.
Or it might lean towards a general intelligence model, like what we see with ChatGPT, aiming for broad automation without a specialised focus, which might not be as useful or transformative.
What I’d like to see is for generative AI to stop trying to mimic general intelligence and instead focus on providing accurate and contextual knowledge that professionals – such as electricians, nurses, plumbers and journalists – can rely on.
Have you used ChatGPT?
I’ve used it in the past, though not as much anymore. Initially, I experimented with ChatGPT for a few hours to understand its capabilities. I wanted to see if it could help me in two specific areas.
First, I tested it to edit an opinion article of around 1,100 words, asking it to shorten it. But honestly, it didn’t do well. It couldn’t identify the main arguments or distinguish between essential points and counterpoints. I concluded that it lacks the judgement needed for that task.
The second area was background research. It works reasonably well in this respect, but it often includes inaccuracies, so I end up fact-checking thoroughly.
Currently, I use Google and Google Scholar more for this purpose, although generative AI now appears indirectly in my searches, as Google has integrated it into its search function.