
The Growing AI Divide

The emergence of AI has the potential to create society-wide benefits, but the costs must be shared equitably as well. This is how we achieve it.


I have written on several occasions about the productivity benefits enabled by AI, and how that will drive growth, profits, and the stock market. I remain completely confident in that assessment.

What I haven’t written about the AI revolution is this: at what cost?

A few weeks ago, I reposted a column from my friend and predecessor Tim Lutts on income disparity, opportunity, and the stock market.

In the article, he advocated for the idea of a guaranteed basic income. That was timely in 2015 when he wrote it and is even more timely today for one reason: AI.

In the AI era, we seem to be heading down a path where the productivity gains and profits overwhelmingly go to the wealthy, especially the megawealthy. At the same time, the costs are mostly external and will fall most heavily on those who benefit the least (if at all).

In the coming decades, the biggest harms of AI are likely to come from how its costs and benefits are distributed, not just from the technology itself.

The most plausible downside scenario is not “AI ruins everything,” but …

“AI creates a lot of value while concentrating too much of it, externalizing too many environmental costs, and moving faster than institutions can adapt.”

That is the thread running through a collection of studies and reports from an alphabet soup of organizations –

  • The Organisation for Economic Co-operation and Development (OECD)
  • The International Monetary Fund (IMF)
  • The International Labour Organization (ILO)
  • The UN Conference on Trade and Development (UNCTAD)
  • The UN Educational, Scientific, and Cultural Organization (UNESCO)
  • The UN Environment Programme (UNEP)
  • The World Health Organization (WHO)
  • The National Institute of Standards and Technology (NIST)
  • The International Energy Agency (IEA)

Economically, AI is likely to raise productivity and output but also to unsettle labor markets and bargaining power.

  • The IMF has warned that AI exposure is especially high in advanced economies, where many cognitive and office-based jobs can be augmented or partly automated; women, older workers and the highly educated are often among the most exposed groups.
  • The ILO’s 2025 update similarly finds that augmentation is more common than full automation, but the transition can still be painful: jobs change faster than pay structures, retraining systems, and social insurance can adjust.
  • UNCTAD adds that the gains from AI are already concentrated in a small number of firms and countries, which raises the risk that the next wave of growth accrues mainly to capital owners, frontier companies, and digitally advanced economies.

That means the economic costs we are most likely to feel are these:

  • wage polarization
  • greater returns to superstar firms
  • weaker job security in routine cognitive work
  • slower upward mobility for people or regions that cannot access quality training or complementary infrastructure

In plain English, AI could make many workers more productive without ensuring they share fairly in the gains.

It could also hollow out middle layers of clerical, administrative and analytic work, even if total employment does not collapse. That is a subtler but still serious cost: the ladder into the middle class can weaken even while GDP rises.

A second economic cost is market concentration.

UNCTAD’s 2025 report emphasizes that AI development is concentrated in a few countries and firms, with wide gaps in talent and digital infrastructure. That concentration can spill into pricing power, dependency on a handful of cloud platforms and model providers, and reduced competition in downstream sectors that rely on AI tooling.

Over 20 years, that could look like higher margins for platform owners, thinner margins for adopters, and less resilience if critical AI infrastructure is controlled by too few actors – concentrating ever greater resources and power in the hands of a small class of global multi-billionaires.

While the economic benefits are likely to be more and more concentrated, the externalities – the economic, environmental, and social costs – are spread broadly and will disproportionately affect those who do not receive the economic benefits of AI.

Environmentally, the clearest likely cost is energy demand.

The IEA’s 2025 report says AI is set to drive surging electricity demand from data centers in the coming decade. The key point is not that AI necessarily makes climate goals impossible; it is that the power system has to keep up.

If grids expand slowly or rely heavily on fossil generation, AI-driven compute growth can raise emissions and local air pollution. If grids decarbonize quickly and data centers become more efficient, the footprint can be moderated. So the environmental cost is real but can be substantially mitigated if we get the regulations and infrastructure right.

Water is another likely constraint.

UNEP has highlighted AI’s environmental footprint across the full lifecycle, and outside the energy question, there is growing concern about freshwater use for cooling and for electricity generation serving data centers. Such new AI-oriented facilities can intensify local water stress, especially in drought-prone regions.

This becomes a social as well as environmental issue, because the burden is local: nearby communities feel the tradeoff between digital infrastructure and scarce water resources. Tech sector needs will compete with agriculture and direct human needs.

There is also a materials and waste problem.

UNEP argues that the environmental footprint of AI has to be assessed end-to-end, including semiconductor manufacturing, mining, transport and e-waste. Faster hardware refresh cycles can increase pressure on supply chains for critical minerals and create more discarded equipment. The next 20 years will likely bring better chip efficiency, but also far more total deployment. That means rebound effects matter: making each model iteration cheaper does not guarantee lower overall resource use if usage grows even faster.

Socially, one of the biggest likely costs is a worsening AI divide.

The ILO and UNCTAD both warn that unequal access to infrastructure, skills and adoption capacity can deepen existing inequalities within and across countries. In practice, that means rich schools, hospitals, firms and regions may get better tools and better outcomes, while poorer communities are left with weaker services, more surveillance, and fewer opportunities to shape how AI is used. This unequal access can compound just like unequal wealth does.

Another social cost is institutional fragility: bias, opacity, misinformation and overreliance in high-stakes settings.

UNESCO’s ethics framework centers human rights, dignity and democratic values, and WHO’s guidance on large multimodal models warns that AI in health can mislead, encode bias, or be adopted without adequate safety, oversight and accountability. The risk is not merely that models sometimes make mistakes. It is that organizations offload judgment onto systems that are hard to audit, while affected people have limited recourse. That tends to erode trust in schools, medicine, hiring, insurance, policing, and government.

Cybersecurity is another likely social and economic cost.

NIST stresses that AI introduces new attack opportunities while also being useful defensively. Over time, AI can lower the cost of phishing, fraud, vulnerability discovery, and persuasive impersonation, while making critical infrastructure more dependent on complex software stacks and supply chains. The cost is not just more cyber incidents, but a higher baseline of verification and distrust in digital communication.


A Prescription for a Better Future

The good news is that these costs are not fixed. There are concrete steps that can spread the benefits and reduce the harms.

First, treat workforce adjustment as core infrastructure. That means wage insurance, portable benefits, sectoral retraining, mid-career education, apprenticeships for AI-complementary jobs, and redesign of jobs around augmentation rather than headcount reduction. The IMF and ILO both point toward preparedness as a major determinant of outcomes. The earlier institutions invest in adaptation, the less likely AI is to become a one-way transfer from labor to capital.

Second, use competition and interoperability policy to prevent excessive concentration. That includes antitrust scrutiny where warranted, open standards, portability, procurement that avoids single-vendor lock-in, and public investment in shared research and compute capacity. UNCTAD’s warning about concentration implies that inclusive growth will require more than innovation policy; it will require market structure policy too.

Third, make AI infrastructure environmentally accountable. The IEA and UNEP point toward a practical agenda: require transparent reporting of electricity, water and lifecycle impacts; steer new data centers toward cleaner grids and lower-water cooling options; improve chip and model efficiency; and align permitting with local environmental constraints. Sustainable AI is not just about better models, but about where facilities are built, how they are powered and cooled, and whether communities share in the benefits.

Fourth, reserve the strictest rules for the highest-risk uses. UNESCO, WHO and OECD all point in the same direction: rights-sensitive domains such as health, employment, education, policing, credit, and public services need stronger governance than low-stakes consumer uses. Minimum requirements should include human accountability, redress mechanisms, testing for bias and failure modes, transparency about when AI is being used, and limits on uses that are incompatible with democratic values or human rights.

Fifth, build security in from the start. NIST’s AI RMF and Cyber AI Profile give a usable template: threat modeling, secure development, incident reporting, provenance, access controls, continuous monitoring and clear responsibility for downstream use. That does not eliminate abuse, but it can prevent the worst pattern of the last tech era, where safety came after scale.

In the coming decades, AI will probably create enormous value, but the main costs are likely to be higher inequality, heavier infrastructure footprints, more institutional strain and more concentrated power.

None of that is inevitable.

The best way to share the benefits is to stop treating AI as purely a technology story and treat it as a labor, competition, infrastructure, environmental, and governance story at the same time.

If we do that, the benefits can be broad. If we do not, AI will still work technically, but it will work politically and socially for too few people.

The industrial revolution produced big winners and many, many losers. It took health and safety regulation and enforcement, support for unions, the creation of social safety nets, and more than a little violence and death to fully realize the benefits the industrial revolution had to offer. That divide will pale in comparison to the AI divide we are moving into. Getting this one right won’t be easy, but the greater stability and benefits of doing so will make the effort more than worthwhile.

______________

What do you think? Is AI just an opportunity? Just a threat? What are your greatest concerns about AI – how to maximize the benefit while minimizing the costs? Write to me at CEO@cabotwealth.com.


Ed Coburn has run Cabot Wealth Network since 2018 when he bought the company from longtime friend and colleague Tim Lutts. Ed is a graduate of Cornell University and holds an MBA from the Olin School of Management at Babson College. His career has brought him into many different sectors of the economy, from software and healthcare to transportation and manufacturing, and even oil spills. He is active in the Financial Media Association, a past Director of the Software & Information Industry Association, a member of the American Association of Individual Investors, and a frequent speaker at industry events.