THE UNWRITTEN DATASET
By Anne Canal
Publication date: 25 January 2026, 11:04 GMT
How Everyday Life Became the Quiet Superpower in the Global Robotics Race
The future rarely announces itself with trumpets. It tends to slip in sideways, through glasses, phones, assistants, and appliances, until one morning we wake up and realise that history has already moved.
Much of today’s debate about artificial intelligence and robotics still clings to the visible drama: export controls, semiconductor chokepoints, boardroom showdowns over access to NVIDIA’s most advanced chips. These are serious matters, and they dominate headlines for good reason. But they may also be the wrong battlefield on which to fix our gaze.
Because beneath the spectacle of silicon geopolitics lies something quieter, more human—and potentially far more decisive.
The Hidden Modality of Modern Robots
Robots are not born intelligent. They are trained.
Whether embodied as domestic assistants, warehouse automatons, humanoid prototypes, or autonomous vehicles, modern robots learn through multimodal data:
- vision (what the world looks like),
- proprioception (how movement feels),
- interaction (what happens when objects are touched, pushed, grasped, dropped),
- and context (how humans behave around other humans, tools, furniture, and machines).
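To make that concrete, here is a minimal, purely illustrative sketch of what a single multimodal training sample might look like. The names (MultimodalSample, make_dummy_sample) and fields are assumptions chosen for illustration, not any company's actual data format.

```python
from dataclasses import dataclass, field
from typing import Any
import numpy as np

@dataclass
class MultimodalSample:
    """One hypothetical time step of robot training data (illustrative only)."""
    # Vision: what the world looks like (an RGB frame from a head or wrist camera)
    rgb_frame: np.ndarray                      # shape (H, W, 3), uint8
    # Proprioception: how movement feels (joint angles and velocities)
    joint_positions: np.ndarray                # shape (num_joints,)
    joint_velocities: np.ndarray               # shape (num_joints,)
    # Interaction: what happened when objects were touched, pushed, grasped
    gripper_force: float                       # measured contact force, newtons
    contact_events: list[str] = field(default_factory=list)   # e.g. ["grasp", "slip"]
    # Context: surrounding humans, tools, furniture (detections, scene tags)
    scene_annotations: dict[str, Any] = field(default_factory=dict)

def make_dummy_sample(num_joints: int = 7) -> MultimodalSample:
    """Build a placeholder sample, standing in for real sensor readings."""
    return MultimodalSample(
        rgb_frame=np.zeros((480, 640, 3), dtype=np.uint8),
        joint_positions=np.zeros(num_joints),
        joint_velocities=np.zeros(num_joints),
        gripper_force=0.0,
        contact_events=["grasp"],
        scene_annotations={"objects": ["cup", "table"], "humans_present": 1},
    )
```

A single teleoperation session yields thousands of such samples; the question this piece raises is who can gather them at the greatest scale and variety.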
This is why, in the United States, some of the most advanced robotics firms still rely on what can only be described as artisanal data collection. At companies such as Boston Dynamics and newer humanoid ventures, teams of hundreds of paid operators don motion-capture suits, VR headsets, and robotic gloves—teleoperating machines to teach them how to walk, balance, grasp, open doors, fold laundry, or navigate a kitchen.
Five hundred humans training robots is considered a serious investment.
Five hundred humans is also a rounding error.
When a Society Becomes the Dataset
China, unlike liberal democracies, is a confirmed surveillance state. That fact is neither speculative nor controversial. Cameras, sensors, biometric systems, and increasingly consumer wearables are woven into daily life at a scale unprecedented in human history.
Among the most consequential of these devices are smart glasses.
While no official figure exists for the precise number of citizens actively wearing them at any given moment, market estimates put the number of units in circulation, or expected to enter circulation within a single year, at roughly 2.5 to 2.75 million.
That number should stop any serious analyst cold.
Because smart glasses are not merely gadgets. They are eyes. They capture:
- first-person human vision,
- natural head movement,
- object recognition in uncontrolled environments,
- real-world hand–object interactions,
- social spacing, gait, posture, and micro-behaviour.
And they do so passively, continuously, without the friction of a lab, a headset, or a consent form that reads like a legal novella.
The citizen is not “training a robot.”
The citizen is living.
Yet from a robotics perspective, living is the most valuable training protocol ever devised.
Unknowing Researchers of the State
Here is the uncomfortable thought Western discourse avoids:
Millions of ordinary people, simply by moving through their daily lives, may already be conducting the most extensive robotics research programme in history.
- Walking through doorways teaches spatial navigation.
- Cooking dinner teaches object affordances.
- Riding public transport teaches balance and anticipation.
- Navigating crowds teaches social robotics.
- Shopping teaches manipulation under uncertainty.
Every glance, every reach, every hesitation becomes labelled data when paired with modern computer vision and state-scale storage.
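To see why pairing raw first-person footage with computer vision amounts to labelling, consider the deliberately simplified sketch below. Everything in it is hypothetical: pseudo_label, detect_hand, detect_objects, and overlaps are stand-ins for whatever perception models and heuristics a real pipeline would use, not a description of any known system.

```python
from dataclasses import dataclass

@dataclass
class LabelledInteraction:
    """A hypothetical auto-generated training label for one video segment."""
    start_frame: int
    end_frame: int
    action: str          # e.g. "reach", "grasp", "hesitate"
    target_object: str   # e.g. "cup", "door handle"

def pseudo_label(frames, detect_objects, detect_hand) -> list[LabelledInteraction]:
    """Turn unlabelled egocentric video into weak action labels.

    `frames` is a sequence of images; `detect_objects` and `detect_hand` are
    placeholder perception models returning bounding boxes. The "label" is
    simply the span during which a hand box overlaps an object box; no human
    annotator is involved, which is the whole point of passive collection.
    """
    labels: list[LabelledInteraction] = []
    active = None  # (start_frame, object_name) of an interaction in progress
    for i, frame in enumerate(frames):
        hand = detect_hand(frame)                 # bounding box or None
        objects = detect_objects(frame)           # {name: bounding_box}
        touched = next((name for name, box in objects.items()
                        if hand is not None and overlaps(hand, box)), None)
        if touched and active is None:
            active = (i, touched)                 # a reach/grasp has begun
        elif active is not None and touched is None:
            start, name = active
            labels.append(LabelledInteraction(start, i, "grasp", name))
            active = None                         # the interaction has ended
    return labels

def overlaps(a, b) -> bool:
    """Axis-aligned box intersection test; boxes are (x1, y1, x2, y2)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
```

The point of the sketch is that no annotator ever touches the footage: the label falls out of the overlap between a hand and an object, frame after frame, at whatever scale the footage exists.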
In the United States, a humanoid robot learns to pick up a cup because an engineer told it to.
In China, a robot may learn to pick up a cup because two million people picked up cups today.
Why Chips Alone Don’t Win Wars
America’s strategy—restricting access to top-tier GPUs—assumes that compute is the bottleneck.
But compute is only half the equation.
The other half is experience.
A robot trained on limited, staged, teleoperated scenarios will always lag behind a robot trained on natural human behaviour at civilizational scale. No amount of FLOPS can compensate for impoverished sensory grounding.
This is the paradox of the moment:
- The West leads in model architecture.
- China may be leading in embodied experience.
And experience compounds.
The Army That Doesn’t Know It’s Enlisted
The most striking asymmetry is not technological but philosophical.
In the West, participation in AI training is explicit, contractual, compensated, and limited.
In China, participation may be ambient, continuous, and unremarked upon.
Two and a half million smart-glasses wearers are not a focus group.
They are not a pilot programme.
They are not a beta test.
They are an informal, distributed, civilian sensor network—one that could, if aligned with state-sponsored robotics firms, accelerate learning curves by orders of magnitude.
Not tomorrow.
Not in theory.
But quietly, already.
How on earth can the West compete?
The Question No One Wants to Ask
America may well have impeded China's imports of top-tier chips.
But what, precisely, can it do to combat a potential army of 2.5 million individuals, unknowingly contributing essential robotic data—vision, movement, interaction—to the state for the machines of tomorrow?
You cannot sanction a gesture.
You cannot embargo a glance.
You cannot regulate lived reality once it has been recorded.
The AI race, we are told, is about intelligence.
It may, in fact, be about who gets to learn from being human at scale.
And history suggests that whoever controls that learning will not merely build better robots.
They will define the future in which those robots live.