
12 DEC, 2025

By Raphael Olszyna-Marzys, International Economist at J. Safra Sarasin
Early data suggest that artificial intelligence is already contributing to higher labor productivity in the United States, with gains ranging between 0.1% and 0.9%. Whether this translates into sustained economic growth will depend on how quickly adoption spreads, how effectively AI is integrated into workflows, and whether large technology companies can turn massive investments into sustainable profits.
It is clear that more companies are experimenting with AI. According to the Business Trends and Outlook Survey conducted by the U.S. Census Bureau, 9% of firms reported using AI in the production of goods and services in August, up from around 5% last December. About 14% expect to use AI over the next six months. The direction of travel is unmistakable.
However, does greater usage automatically lead to better results? So far, the answer is mixed. A recent MIT study on AI pilot projects found that only 5% delivered a significant increase in productivity or profits. Weak organizational integration, line managers with limited authority, and misallocation of resources – too much focus on sales tools and too little on back-office automation – are partly to blame. Even firms developing their own GenAI tools tend to underperform expectations. Stanford’s AI Index 2025 notes that, for now, revenue gains and cost savings remain modest.
That said, large-scale task-level productivity studies are far more encouraging. On average, workers are found to be around 30% more productive per hour when using GenAI.
The impact varies significantly by sector. IT services show the largest productivity gains (2.6%), while leisure and hospitality see only modest gains (0.6%). Overall, between 1.3% and 5.4% of total working hours are currently supported by AI, implying an aggregate productivity boost of 0.4% to 1.8%, with a median of 1.1%, broadly consistent with higher-end macroeconomic estimates.
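The aggregate figures above follow from a simple calculation: multiply the share of working hours supported by AI by the per-hour productivity gain. A minimal back-of-envelope sketch, assuming a uniform ~30% per-hour gain (the underlying studies use task- and sector-specific figures, so exact numbers differ slightly):

```python
# Back-of-envelope: aggregate productivity boost from AI-assisted hours.
# Illustrative assumption: a uniform ~30% per-hour gain applied to the
# share of total working hours supported by AI.

def aggregate_boost(hours_share: float, per_hour_gain: float) -> float:
    """Aggregate gain = share of AI-assisted hours x per-hour productivity gain."""
    return hours_share * per_hour_gain

PER_HOUR_GAIN = 0.30  # ~30% more productive per hour when using GenAI

for share in (0.013, 0.054):  # 1.3% to 5.4% of total working hours
    boost = aggregate_boost(share, PER_HOUR_GAIN)
    print(f"{share:.1%} of hours -> {boost:.2%} aggregate boost")
```

This yields roughly 0.4% at the low end and 1.6% at the high end, broadly in line with the 0.4%–1.8% range cited above.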
What explains the gap between these encouraging task-level results and the more disappointing firm-level evidence? A key factor is that much of today’s AI use remains informal. Employees may complete tasks faster without their managers’ knowledge and then use the saved time for personal activities rather than additional work – likely improving workplace well-being, but not necessarily measured productivity. As companies increasingly embed AI into formal workflows, these gains should gradually show up in official statistics.
Still, a paradox persists: while individual users – often supported by free tools – are driving adoption, companies remain cautious when it comes to paid AI solutions. This raises important questions for the major technology players, the so-called “hyperscalers”, which have invested billions in AI but have yet to fully reap the rewards.
Second-quarter data show that aggregate capital expenditure (capex) by the five largest hyperscalers has risen to around USD 100 billion per quarter, equivalent to roughly 30% of total S&P 500 capex. However, their share of S&P 500 net income has plateaued below 20% in recent quarters, as the anticipated AI-driven earnings boost has yet to materialize.
At present, AI-related revenues account for only a small fraction of hyperscalers’ total income, with most AI capex still financed through traditional revenue streams. As a result, free cash flow margins have nearly halved over the past year.
This sharp decline in free cash flow margins highlights the scale of resources being deployed. Unlike previous investment cycles – such as the mid-2010s shale boom or the early-2000s mobile expansion – this cycle has not relied on debt markets to finance data-center construction. While this underscores the strength of hyperscalers’ balance sheets, it also raises the risk of overinvestment, as the absence of creditor discipline leaves the profitability of these expenditures largely unchallenged.
Over the longer term, a key risk lies in the potential commoditization of AI models. While the cost of training models continues to rise exponentially, performance improvements are becoming increasingly incremental. At the same time, the cost of inference – the processing of user queries – continues to fall. This dynamic could ultimately lead to a market in which inference is priced at marginal cost, while large-scale model training becomes prohibitively expensive.
Much like infrastructure providers in the early days of the internet or mobile-network operators in the 2000s, hyperscalers could face margin compression in a highly competitive, oversupplied market. From the perspective of individual firms, the optimal response may be to scale rapidly, as competition increasingly hinges on cost rather than product differentiation. At the industry level, however, this strategy could further erode margins, with corporate clients, consumers and the broader economy emerging as the main beneficiaries.
In conclusion, as AI adoption accelerates, hyperscalers stand to benefit from a rapidly expanding market. However, they are also likely to face mounting margin pressures due to intensifying competition and the eventual commoditization of AI models. The primary beneficiaries of the fast-growing AI infrastructure build-out are expected to be customers, who will enjoy a wide range of models at varying price and quality levels.