Hi
There used to be a saying in manufacturing circles that the factory of the future would only need two living things: a man and a dog. The man would be there to feed the dog, and the dog
would be there to stop the man from touching the machines.
It was a tongue-in-cheek nod to automation — and how humans often do more harm than good when interfering with highly optimised processes. But fast forward to today, and that joke is starting to feel like a preview, not a punchline.
In the age of AI, the dog doesn’t need feeding. In fact, it’s not even a dog anymore. It’s a robot dog — intelligent, self-charging, always on duty, always alert. And the man? Well, he might not be needed at all.
This isn’t science fiction. Walk into a modern distribution centre, a semiconductor cleanroom, or a high-tech food production line, and you’ll see
machinery running with minimal human oversight. Software manages inventory, predicts maintenance needs, adjusts for demand, and flags anomalies. Physical robots handle lifting, packing, sorting, and even cleaning.
And now we’re layering on agentic AI — intelligent, task-completing systems that not only follow rules, but understand objectives, query data in real time, and make decisions
without waiting for a human in the loop. These agents don’t just respond to inputs; they pursue outcomes. That’s the real shift.
So where does this leave people?
Not necessarily unemployed — but definitely repositioned.
In a traditional business, human effort flows into production. We make, we build, we ship, we fix. That labour is gradually being abstracted away. But the thinking — the strategic design, the ethical oversight, the creative problem-solving — is becoming more important than ever.
The challenge is that many businesses haven’t yet adjusted.
They still build organisations based on headcount and hierarchy, rather than outcome and capability. They measure productivity in hours worked, not in value created.
In a factory run by intelligent systems, the old models become irrelevant.
We’re heading towards a future where businesses
will be leaner, smarter, more data-driven — and potentially less forgiving of inefficiency or indecision. In such a world, the question becomes: what is the human role?
It’s not pushing buttons or feeding data into spreadsheets. Those jobs are already disappearing. The real opportunity is in asking better questions. Designing better systems. Holding AI accountable to human values — fairness,
transparency, sustainability.
And yes, sometimes stepping away from the machine entirely, to think about what we should be building, not just what we can automate.
So maybe the new factory won’t need a man and a dog after all.
But every business will still need someone — someone with curiosity, vision, and judgement — to set the direction, challenge the assumptions, and ask that vital question: what should we be building?
But there's a deeper problem looming — one that doesn’t get enough attention.
As AI takes over entry-level roles — not just in factories but across the professions — we’re seeing the erosion of the traditional learning ladder. In law, accountancy, journalism, software development, and even medicine, the junior work that once formed the foundation of professional expertise is vanishing.
No one is doing the photocopying, reconciling the
ledger, drafting the first version of the contract, or running the basic analysis. Because a machine can do it faster, cheaper, and with fewer errors.
From a cost perspective, this makes sense. But from a long-term capability perspective, it’s dangerous.
If early-career professionals no
longer have access to the routine work where they build judgement, learn context, and absorb standards, where will tomorrow’s experienced workers come from?
We’re not just risking job losses — we’re risking the atrophy of professional skill itself. If the pipeline of human expertise dries up, we’ll become entirely reliant on the AI that replaced it. And when that AI fails — or is misused,
misunderstood, or corrupted — who will be left to fix it?
No AI today can explain why its output is right, or what principles underlie a correct answer. That still requires a human brain — one that's been trained, tested, and exposed to complexity. The risk isn’t that we replace humans too quickly. It’s that we stop training them at all.
So while the robot dog might be impressive, and the factory may no longer need a man, we do still need people — not just to run things now, but to be ready to lead later.
If we remove the ladder, we lose the next generation of leaders.
So, is there a better way? Yes. But it starts with rethinking how we design work — not just for efficiency, but for development. It means using AI not to replace early-career roles, but to augment them — to give junior people smarter tools while still giving them ownership and exposure. It means treating learning and judgement as assets, not inefficiencies.
The future will be automated, yes. But the future still needs humans — experienced, capable ones — to ask the right questions, spot the grey areas, and steer the machine in the right direction.
Noel Guilford