Last week, the ABC interviewed ABC Finance presenter Alan Kohler.
Alan Kohler has found himself at the centre of an artificial intelligence controversy. An AI-generated article fabricated a confrontation with the CEO of the Commonwealth Bank, complete with AI-generated images of the set of the ABC's 7:30 program with Sarah Ferguson.
Kohler explained:
Well, it was supposedly a confrontation between me and Matt Comyn on the set of 7:30, and it was a written article that appeared. It looked like it was on the ABC's website. Had all the ABC signage on it. It looked really authentic, and was complete with pictures of me and Matt Comyn having a barney, and him apparently throwing off his microphone and storming off the set.
He also reported on two high-profile resignations.
In the last couple of weeks there were two fairly seismic shifts, with resignations, and a really interesting article by an AI entrepreneur, Matt Schumer, titled “Something big is happening” ... I thought it was fascinating ... and really sobering ... It's 5,000 words ... But also, a bloke from Anthropic, which makes Claude AI; he was in charge of safeguards there. He resigned saying the world is in peril.
Somewhat oddly, we got this:
The first thing that comes to mind is that social media is dead. I mean, you can't believe what you see on social media anymore. Nobody's going to use it - it's finished. I also think that photographic and video evidence in court will be useless in future, because you can't believe it. So a lot of what our legal system is based on, it seems to me, becomes worthless as well.
Kohler wrote an article for the ABC (Monday 16 February), “AI is evolving fast and may bring the fourth industrial revolution with it”.
The price of intelligence is falling towards zero as AI grows more sophisticated.
The article describes his experience and quotes Schumer:
"… on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch … more like the moment you realise the water has been rising around you and is now at your chest".
"I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just … appears."
Kohler suggests that the autonomous car is evidence of a great leap forward. In contrast, he cites Mrinank Sharma, the head of safeguards research at Anthropic, with his doomsday perspective:
"The world is in peril. Not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
Apparently, he said he's going off to study poetry and devote himself to the practice of "courageous speech".
Other gurus are cited, including Jimmy Ba, a co-founder of Elon Musk's xAI, which developed Grok AI, who resigned with this post:
"We are heading to an age of 100x productivity with the right tools. Recursive self-improvement loops likely go live in the next 12mo. It's time to recalibrate my gradient on the big picture. 2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species."
The gloom continues with Mustafa Suleyman, the head of AI at Microsoft and a co-founder of Google's DeepMind:
"White-collar work, where you're sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months."
Kohler then summarises:
Recent developments in AI have sparked widespread concern, including in financial markets, because the technology is advancing faster than plans to manage its impact. In early February, two key events intensified “AI fear” among investors. First, Anthropic released plug-ins for its Claude Cowork AI, enabling automation of professional tasks (starting with legal back-office work); shares of software and data firms such as Salesforce, Thomson Reuters and Adobe, and of Australian companies such as Technology One, Xero and WiseTech, dropped sharply. OpenAI released GPT-5.3 Codex around the same time. Shortly after, a Chinese lab, OpenBMB, launched MiniCPM-o 4.5, an open-source AI agent with 9 billion parameters that runs locally on devices like laptops and smartphones, challenging the need for massive data-centre investments.
When asked, Google’s Gemini AI noted that while it likely has over a trillion parameters, the comparison highlights different design philosophies rather than simple superiority. The central question, Kohler suggests, remains whether AI will replace jobs or serve as a powerful assistant.
Now, you might think that Kohler has covered all that needs to be said about where AI is heading. In fact, he is really just reporting the obvious trajectory: larger models, greater capability, and increasing pressure on human labour. But he misses a more fundamental constraint: not scale, but architecture.
The human brain operates on roughly 20 watts — about the energy of a dim lightbulb — yet performs massively parallel processing in a system where memory, perception, and action are tightly integrated. Memory is not stored in discrete locations but distributed across networks and reconstructed through association. There is no clear boundary between storage and processing, and cognition is grounded in a body interacting continuously with the physical world.
In other words, in a world where we crave ever-decreasing energy use by our devices, evolution has already produced the ultimate energy-saving system. Nothing in the electronic sphere comes close to its ratio of processing power to energy use.
By contrast, modern AI systems are energy-intensive, centrally manufactured, and architecturally separated from the environments they model. Their recent advances are real, but they are built on pattern recognition over vast datasets, not on embodied interaction with reality.
This distinction becomes most obvious when AI is applied outside clean, structured domains. In messy, real-world data — where categories blur and context matters — AI systems often struggle without extensive human intervention. What humans do naturally, by linking partial, ambiguous information through context and experience, machines still find difficult.
Similarly, AI-generated images can convincingly reproduce the appearance of phenomena like water, but this is not the same as modelling the underlying physics. In domains where behaviour must follow physical laws — such as simulation — developers rely on explicit mathematical models, not pattern generation.
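To make concrete what relying on "explicit mathematical models, not pattern generation" means, here is a minimal sketch (my illustration, not anything from Kohler's article): a 1D ripple evolved by discretising the wave equation d²u/dt² = c²·d²u/dx², so the behaviour obeys physical law by construction rather than by resembling training images.

```python
# Minimal finite-difference wave simulation on a periodic 1D domain.
# Parameter values are illustrative; they merely satisfy the stability
# condition c*dt/dx <= 1.
import numpy as np

def step_wave(u_prev, u_curr, c=1.0, dx=0.1, dt=0.05):
    """Advance the displacement field one leapfrog time step."""
    # Central difference for d^2u/dx^2 (np.roll gives periodic boundaries).
    laplacian = np.roll(u_curr, -1) - 2 * u_curr + np.roll(u_curr, 1)
    return 2 * u_curr - u_prev + (c * dt / dx) ** 2 * laplacian

x = np.linspace(-5, 5, 101)
u_prev = np.exp(-x ** 2)    # a single ripple in the middle of the domain
u_curr = u_prev.copy()
for _ in range(200):
    u_prev, u_curr = u_curr, step_wave(u_prev, u_curr)
```

The contrast is the point: every number the simulation produces is forced to obey the governing equation, whereas a generated image of water only has to look plausible.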
If you have ever tried to use an LLM to tidy messy data (a characteristic of most information systems in the world), you realise that AI is close to useless at it. And asking it to create something in SVG (a vector image format) based on what a thing ‘looks’ like demonstrates that it fails to integrate a mathematical description of something visual with our perception of it.
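For readers unfamiliar with SVG, this is the kind of mathematics-to-graphics link at issue: a hypothetical few lines (purely illustrative, not an example from the article) that turn a sine function into a vector polyline. The picture is the function, sampled into explicit coordinates; there are no pixels to imitate.

```python
# Plot a sine curve as an SVG polyline: explicit (x, y) geometry
# computed from y = 50 - 40*sin(x/20). Sizes are arbitrary choices.
import math

points = " ".join(
    f"{x},{50 - 40 * math.sin(x / 20):.1f}"
    for x in range(0, 201, 5)
)
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">'
    f'<polyline points="{points}" fill="none" stroke="black"/>'
    "</svg>"
)
print(svg)
```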
The same applies to modelling muscles beneath skin. Rather than simulating full biomechanics, game developers rely on simplified rigging and deformation systems that approximate how bodies move. These are not failures of understanding, but necessary compromises between physical accuracy and computational limits.
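The usual compromise is linear blend skinning: each vertex is moved by a weighted blend of its bones' transforms, v' = Σ wᵢ(Mᵢ v), rather than by simulated muscle. A minimal sketch, with the matrices and weights as toy placeholders:

```python
# Linear blend skinning: deform a vertex by a weighted blend of bone
# transforms. Real engines do this per vertex on the GPU; the bone
# matrices and weights here are illustrative values only.
import numpy as np

def skin_vertex(v, bone_matrices, weights):
    """Blend 4x4 bone transforms applied to a homogeneous vertex."""
    assert abs(sum(weights) - 1.0) < 1e-6   # weights must sum to one
    return sum(w * (M @ v) for M, w in zip(bone_matrices, weights))

theta = np.pi / 6                            # bend the second bone 30 degrees
rotate_z = np.eye(4)
rotate_z[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]]

v = np.array([1.0, 0.5, 0.0, 1.0])           # vertex in homogeneous coordinates
print(skin_vertex(v, [np.eye(4), rotate_z], [0.7, 0.3]))
```

The blend is cheap and stable, but it is geometry arithmetic, not physiology: exactly the trade-off between physical accuracy and computational limits described above.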
Humans do not need to simulate reality to understand it. We perceive and predict physical behaviour intuitively, grounded in embodied experience. Machines, by contrast, must either explicitly model the underlying physics or approximate the visible result. Until we can link AI directly to a sensory system, its intelligence will remain disembodied prediction and simulation, not learning.
The difference is not that AI cannot produce outputs, but that it lacks grounded understanding. A human can intuitively predict that an impossibly long neck will fail under its own weight, drawing on embodied experience of gravity and structure. AI, unless explicitly trained or constrained, has no such intuition - only patterns that resemble it.
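A back-of-the-envelope version of that intuition (my sketch, not Kohler's) is the square-cube law: strength scales with cross-sectional area, weight with volume, so scaling a neck up by a factor k multiplies the stress in it by roughly k.

```python
# Square-cube sketch: why an impossibly long neck fails under its own
# weight. Strength ~ area ~ k^2; weight ~ volume ~ k^3; so the stress
# (weight / area) grows linearly with the scale factor k.
for k in (1, 2, 5, 10):
    area = k ** 2       # load-bearing cross-section
    weight = k ** 3     # mass of the scaled-up structure
    print(f"scale x{k}: relative stress = {weight / area:.0f}x")
```

An embodied human never runs this calculation; the prediction comes for free from a lifetime spent under gravity.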
The real limitation of AI, then, is not scale but separation: it processes representations of the world without being situated within it. Until that gap is addressed, its capabilities will remain impressive, but uneven — powerful in narrow domains, and fragile outside them.
Brains are plentiful and, slavery aside, not owned by anybody. The recent skyrocketing of chip costs, driven by supply coming essentially from one maker, illustrates how vulnerable AI really is.
Even the brilliant success of AlphaFold, the AI that can predict the shape of proteins, is only useful at the level of a single protein. Investigating the behaviour of that protein within a natural context is completely beyond AI, precisely because it baulks at complex contexts. How does that protein interact with blood cells? We cannot know without laborious testing.
Complex contexts baffle AI. In the animal world, intelligence often emerges from simple rules: ants, for example, can solve spatial problems that would challenge even groups of humans. This intelligence does not reside in any individual ant’s brain, but in the colony’s evolved physical and social structure. It is distributed — and cannot be reduced to a matrix.
When we say "a collective behaviour is intelligent", what do we mean by intelligence here? How does a system solve a problem? "I need to reach food, because if I don't get that food my adaptive value is going to drop very quickly: I will starve and not be able to survive and reproduce." By this definition, if an individual can solve a survival problem better than chance, it qualifies as having some form of intelligence.
If a group can solve a problem better than chance, and without a central leader telling the group what to do, that group has swarm intelligence. Flocks of geese self-assemble into a V shape that slices through the air, providing a lift advantage more efficient than a single goose flying alone, despite no individual telling the others what to do. Schools of fish dazzle and confuse predators by sensing their neighbours' movements and synchronising their motion in a large group. Even the way we humans, without anyone telling us what to do, unconsciously organise into lanes when walking in crowded spaces is a form of unplanned, emergent swarm intelligence.
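For the technically minded, the classic way to see this is a boids-style simulation: each agent follows three purely local rules (separation, alignment, cohesion), and coherent group motion emerges with no leader. Below is a minimal sketch; the rule weights and world size are arbitrary illustrative choices, not taken from any study cited here.

```python
# Boids-style swarm: 50 agents, each reacting only to neighbours within
# a fixed radius. No agent ever sees the whole group, yet flock-like
# motion emerges from the three local rules.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (50, 2))    # positions in a 100x100 world
vel = rng.uniform(-1, 1, (50, 2))     # initial random headings

def step(pos, vel, radius=10.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < radius) & (d > 0)
        if not near.any():
            continue
        cohesion = pos[near].mean(axis=0) - pos[i]       # move toward neighbours
        alignment = vel[near].mean(axis=0) - vel[i]      # match their heading
        separation = (pos[i] - pos[near]).sum(axis=0)    # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.002 * separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = new_vel / np.clip(speed, 1e-9, None)       # constant speed
    return (pos + new_vel) % 100, new_vel                # wrap-around world

for _ in range(200):
    pos, vel = step(pos, vel)
```

Run it for a few hundred steps and the random headings tend to settle into coherent, flock-like drift, even though the "intelligence" lives in no individual agent.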
The ‘suggestion’ from evolution is that intelligence is best derived from experimentation and experience within a social context where much of the intelligence will come from ‘natural formations’ or ‘group consciousness’. So, unless our AI can get on with co-operative learning some time soon, it will fail at anything that has any kind of complexity.
And most of us who work in fields that require problem-solving in complex contexts will be in no danger of being replaced. Instead, our problem-solving will be enhanced by not having to do the drudgery work.