Lately, there has been an abundance of news about the existence of a financial bubble in the sector of companies linked to Artificial Intelligence. Bloomberg deployed its infographic showing the circular investments that serve to overvalue these companies, and even Sam Altman and Jeff Bezos have admitted that the sector is indeed immersed in a bubble dynamic. Despite these alarming diagnoses about the state of the industry, investments seem to continue their course and the CEOs of these companies remain optimistic.

So far, OpenAI, the largest company with the greatest potential in the sector, holds a market valuation of 500 billion dollars, even while losing billions of dollars every year, with inference costs that exceed its revenues. For now, it is losing money even on its $200 subscriptions, and the only reason the company stays afloat is the constant injection of capital from the financial market. To meet its financial obligations, OpenAI must find a way to increase its revenue from 13 billion to 1 trillion dollars in five years: an almost eighty-fold increase, which would require more than doubling revenue every single year. How will the company achieve this enormous jump in profitability? All this while the costs of inference tend to rise.

Now, one could argue that even if there is some overvaluation in the current market, the revolutionary potential of the technology is real, and that once AGI is achieved within the next five years, a viable business model to sustain these investments would naturally emerge. The problem is that no one knows whether achieving AGI is even possible. What we do know is that the current models behind LLMs are reaching the limit of their capabilities. As Carl Sagan said, extraordinary claims require extraordinary evidence. Let us suppose for a moment that developing AGI within the next five years is feasible: what, then, is the evidence for this new AI-based era? Attempts at futurology are always cumbersome and prone to error. We don't know when or how AGI will save us (or destroy us!). No one knows.

For now, both the vision of a fourth industrial revolution and that of a Skynet-like future feed the expectations and investments in the AI sector, inflating the current bubble. Both readings share the same premise: AGI is imminent and, for better or worse, it will trigger a radical restructuring of society as we know it. In short: a quasi-religious fascination with the potential of this technology.

So the bubble exists and is of biblical proportions. Why, then, do the biggest entrepreneurs and the most powerful states in the world continue to invest billions of dollars in its development? The explanation probably has multiple facets. On the one hand, speculative bubbles and the economic crises that follow them are a recurring phenomenon in capitalism: since 2000 alone we can point to the dot-com bubble of that year, the real estate bubble of 2008, and the multiple bubbles and scams surrounding cryptocurrencies and NFTs between 2016 and 2020. Economic crises are so inherent to the system that they were even given a somewhat more neutral name: the 'business cycle'. The irrationality of the market channels investment into whichever sectors promise the highest return, even when the growth is fictitious; the incentive to resell shares at a higher price is enough to feed the bubble. Jason Furman's analysis estimates that 92% of the growth of the US economy this year is due solely to capital investments in AI; without this factor, its economy would be in recession. In turn, there are military and geopolitical interests that drive the involvement of states: the rise of Palantir gives a clue to the current capability, and the future expectation, placed in AI to increase the military, surveillance, and control capacities of states and companies.

On the other hand, there are also environmental problems. The energy cost of inference for a ChatGPT model is about 1 MWh, equivalent to about 120 tons of CO2. Recently, Sam Altman announced a commitment to develop 17 GW of infrastructure, the equivalent of 17 large nuclear power plants, to power its data centers, even from the most polluting energy sources. And there are multiple environmental problems beyond energy consumption: the content generated by AI also has its costs, from heat generation and water use for cooling to the generation of e-waste. The latter could scale to absurd levels, given that the AI race forces companies to adopt the most modern GPUs, which NVIDIA launches to the market annually; likewise, depreciation from intensive use gives GPUs a lifespan of only 2 to 6 years. In the context of a global climate crisis, a civilizational threat with much greater scientific backing than the risk of a malevolent AGI, immediate reductions in emissions are required. What environmental costs are these entrepreneurs willing to sacrifice in the attempt?

After all, the great Sam Altman believes that the climate crisis is a minor problem that AGI will be able to solve easily. It seems serious, but there's nothing to worry about! This is exactly the kind of comment that reveals the mentality these company CEOs have: no matter what the scientific evidence or the most basic contrasts with reality say, AGI is imminent and will be able to solve any problem that threatens humanity. The central flaw of this reasoning is that all existing problems are perceived as merely technical ones. The climate crisis, for example, would simply be a problem of discovering and implementing superior technologies. But while it is true that technological advances are necessary and extremely useful for achieving an economy sustainable with the environment, we already largely know how to solve the climate crisis; the reason it continues its course is that solving it affects political and economic interests. It is not convenient for oil companies to eliminate fossil fuels. Like technical problems, social problems also have their economic and political legs.

The development of technique and science is not alien to society: it is shaped by the social, economic, and political context in which it is immersed. The logic of accumulation and exponential growth at a global level that capitalism demands is incompatible with the necessary reduction of production and consumption. In a system guided by the accumulation and perpetual growth of profits, we can expect nothing other than to see major technological developments benefit the continuity of this logic to the detriment of any social utility. What is important to highlight is not which predictions about technological development will turn out to be true or false, but which interests benefit from certain narratives. When the bubble finally bursts, who will be the losers and who will benefit?
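The revenue requirement cited above (from $13 billion to $1 trillion in five years) can be put in perspective with a back-of-the-envelope calculation. The dollar figures come from the text; the formula is just standard compound growth:

```python
# Implied annual growth rate for revenue to go from $13B to $1T
# in 5 years (figures from the text above).
current = 13e9   # current annual revenue, USD
target = 1e12    # required annual revenue, USD
years = 5

# Compound growth: target = current * (1 + r)**years
multiple = target / current         # overall multiple (~77x)
cagr = multiple ** (1 / years) - 1  # implied growth rate per year

print(f"Revenue must multiply ~{multiple:.0f}x overall")
print(f"Implied growth: {cagr:.0%} per year")
```

In other words, revenue would have to more than double every year for five consecutive years, a pace with essentially no precedent at this scale.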
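The scale of the announced 17 GW of data-center capacity can likewise be sketched with simple arithmetic. Only the 17 GW figure comes from the text; the capacity factor and grid carbon intensity below are illustrative assumptions, not sourced values, so the outputs are order-of-magnitude estimates:

```python
# Rough annual scale of 17 GW of data-center capacity (figure from the text).
# Capacity factor and grid intensity are ILLUSTRATIVE ASSUMPTIONS.
power_gw = 17
hours_per_year = 8760
capacity_factor = 0.8   # assumed: data centers run near-continuously
grid_intensity = 0.4    # assumed: tonnes CO2 per MWh (fossil-heavy mix)

# 1 GW running for 1 hour = 1 GWh; divide by 1000 for TWh.
energy_twh = power_gw * hours_per_year * capacity_factor / 1000

# 1 TWh = 1e6 MWh, and 1e6 tonnes = 1 Mt, so the factors cancel.
co2_mt = energy_twh * grid_intensity

print(f"~{energy_twh:.0f} TWh per year")
print(f"~{co2_mt:.0f} Mt CO2 per year at the assumed grid mix")
```

Under these assumptions the announced capacity would draw on the order of a hundred terawatt-hours per year, comparable to the electricity consumption of a mid-sized country, which is the crux of the environmental objection.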
The AI Bubble: Why Billions Continue to Be Invested
Analyzing the current state of the artificial intelligence sector, the author concludes that the market is in a financial bubble. Despite multi-billion dollar losses and the lack of a clear business plan, giants like OpenAI continue to attract investment. The article explores the reasons for this paradox, examining the economic, political, and even ecological aspects of the race to develop AGI, questioning its inevitability and the risks it poses to society.