AI Doomers, Optimists and Just Another Toolers

September 17, 2025

The recent release of ChatGPT 5 represented a further step along the road to cementing Artificial Intelligence (AI) as an integral part of modern life and work, but it was also a disappointment because it marked a slowing in the upward improvement trajectory of Large Language Models (LLMs).

Technically, ChatGPT 5 is a tour de force - it features improved reasoning, longer context handling, and greater reliability in multistep problems and real-world planning - but it mostly does what its predecessor does, albeit much better.

So it is a good time, then, to pause and review the LLM-AI journey so far.

There are three general viewpoints about where things are headed.

DOOMERS

AI "doomers" believe that AI would eventuality threaten humanity. That AI systems would become extremely capable and unpredictably misaligned with human goals, for example by optimizing for the wrong objective, or amplifying bias and disinformation at scale, by potentially taking control of critical infrastructure, or by enabling new forms of cyberwarfare and biothreats.

Some see scenarios of mass unemployment and social destabilization as AI-powered systems take over most of the productive work currently done by humans.

OPTIMISTS

AI "optimists" on the other hand, start from the opposite premise - that AI is an unprecedented engine for economic growth and scientific discovery. They argue that each new technological wave has ultimately created more and better jobs than it eliminated, and that AI will do the same thing but much faster!

The emphasis is on AI as a force multiplier for human creativity and productivity - a co-pilot that handles drudgery so people can spend more time on high-value, human-centered work. Their expectation is for rapid breakthroughs in healthcare, education, climate modeling, and accessibility, leading to new industries and higher living standards. So while acknowledging the need for safety and governance, they don't want heavy-handed restrictions that would introduce barriers and slow down advancement.

JUST ANOTHER TOOLERS

In the middle are those who believe that AI is just another tool in the long chain of technologies that have helped humanity improve its condition - fire, farming, the wheel, books, electricity, the internet - which in turn spawned powerful secondary benefits: settlements, travel and transport, knowledge retention and advancement, industrialisation, ecommerce and social media.

So, AI is powerful but ordinary in the sense that it will be domesticated into the same socio-technical patterns as were spreadsheets, cameras and robotic manufacturing. The expectation is for local, not apocalyptic, failures; incremental, not utopian, payoffs; and adoption curves that bend to budgets, workflows, liability, and culture.

Rather than arguing about extinction or abundance, toolers ask - Where’s the ROI? Who signs off? What are the failure modes we can test? They prefer guardrails, audits, and training to sweeping legislation or sweeping promises.

In their story, AI will slot into existing institutions as yet another tool in the metaphorical kit - sometimes replacing tasks, often reshuffling them, almost always requiring humans in the loop to define taste, provide context, and own accountability.

MIXED MOTIVES?

Why, then, do AI doomers so often include people with strong incentives to generate investment for their ventures?

Start with the dynamics of attention and capital: markets reward narratives that make a technology feel both inevitable and scarce, both potent and perilous. A founder who convinces investors that their system is on the cusp of superhuman capability will attract investment from those who fear missing out.

If doomers sometimes overstate threats to attract capital and control, optimists sometimes overstate frictionless progress - and in doing so leave too little room for the ongoing, irreducible role of human creative employment.

In the optimist's favorite graph, productivity soars as generative systems draft text, compose music, design interfaces, and storyboard video on demand. But this ignores a problem: when models can flood the zone with "good enough" content, the price of content tends toward zero, while the premium on curation, narrative voice, and live presence rises - and those are human-heavy tasks.

In both the doomers' and the optimists' dream worlds, we end up awash in AI slop.

AGI AND AGENTS

So where do Artificial General Intelligence (AGI) and Agents fit in?

AGI first. This one suffers from the problem of defining what it is. Probing into what different people say it is runs into the problem of mixed motives already discussed, since many of them seek to benefit from "delivering AGI". A common, easy-to-understand definition goes: a machine that is more intelligent than humans in most areas. And yet expert systems have long surpassed humans in specific domains - chess, Go, and now knowledge parsing and retrieval - but stringing together a series of expert capabilities in one unit does not AGI make.

There is the argument that "wet systems" are different - the reasoning and electrical impulses that form the basis of our neural networks are conducted in a substrate of biology and chemistry, so we will always reason differently from machines. Going into this fully - how we are trained over our lifetimes, the interaction of our knowledge with our experience, mind and body, all of which is perceived differently by each human - is beyond the scope of this short article. But many experts think that AGI, in the sense of an artificial system that resembles humans at a semantic rather than syntactic level, is not yet on the horizon.

Agentic systems have also existed for ages - autopilot, fly-by-wire, automated trading, loan approval - where the algorithm is given the agency to take actions, often with outcomes better than a human would achieve.

But humans still set the parameters and benefit from the results.

FINALLY

So the dial oscillates and eventually settles in the middle - for now, AI is just another tool, briefly over-hyped but settling down to be a solid helper and enabler. Personally, I would estimate that it is making me at least 2-3 times as productive in the area where I spend most of my time - software development. Imagine the benefits if we could harness that sort of improvement across all of work and life.

History suggests that new technologies rewire work and make people more creative and employable, sometimes after painful transitions.

The Industrial Revolution mechanized muscle and shattered cottage industries, but it also spawned whole new categories of jobs that didn't previously exist - machinists, mechanics, managers, industrial designers, advertisers, and journalists to chronicle and critique it all.

Factories standardized components; designers and artists invented new aesthetics to humanize mass production; publishing and entertainment scaled with urbanization; education systems expanded to train the new workforce.

The internet democratized distribution and discovery. A single person could build a store, publish a newsletter, upload an album, run a micro-studio, and reach global audiences. The "long tail" of creators, coders, marketers, editors, and community managers flourished around platforms that, for all their problems, made entry cheap and iteration fast.

Mobile phones dissolved the gap between "at work" and "at inspiration," turning every pocket into a studio and every commute into a marketplace. The app economy created developers, product managers, designers, influencers, and content moderators.

After the dust settled, the new technologies behaved like tools in the most important sense - they expanded what people could make and do, they raised the ceiling on productivity and expression, and they opened more kinds of jobs than they closed. AI is likely to rhyme with this pattern.

This is not the first time there has been the promise and hype of AI providing a breakthrough and improving how we work and play. Several times, the initial promise was followed by an "AI winter" in which investment dried up and AI researchers retreated into the gloom to make painful progress away from public attention.

But not this time.