AI Steps Out of the Screen at CES. These 5 Things Stopped Me in My Tracks

James Thompson

By the time you’d crossed the central hall at CES 2026, it was obvious: artificial intelligence had stepped out of the browser window and onto the show floor. This was the year AI stopped being an abstraction in the cloud and started moving through the world on wheels, legs, wings and wearables. “Physical AI” and edge intelligence were everywhere, turning what had long been pitched as a software revolution into something you could watch and touch, and that, in some cases, quietly watched you back.

The first thing that stopped me was a robot that didn’t look like a gadget at all, but like infrastructure in waiting. At Hyundai and Boston Dynamics’ booth, the latest version of the Atlas humanoid drew a different kind of attention than in previous years. This wasn’t just about acrobatics or viral backflips. The focus was on autonomy: perception, motion planning and manipulation running on-device, with Arm-based compute handling decisions in milliseconds rather than shuttling every frame to the cloud. Nearby, logistics and service bots from companies like PUDU Robotics and AGIBOT traced careful paths through mock warehouses and hotel lobbies, adjusting their routes on the fly as crowds thickened. Watching them, you could almost see the business model forming: fleets of AI workers hired not by the hour, but by the square foot they can cover safely and efficiently. For out-of-home (OOH) advertisers, these are future moving media platforms as much as labor-saving devices.
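
To make the on-device point concrete, here is a minimal, purely illustrative perceive-plan-act loop with a fixed per-frame latency budget. The Camera, Policy and Actuator classes are stand-in stubs I invented for the sketch; nothing here reflects Boston Dynamics’ actual stack.

```python
import time
from dataclasses import dataclass

# Stand-in stubs: the real sensors and models are whatever the robot ships with.
class Camera:
    def read(self):
        return "frame"

class Policy:
    def infer(self, frame):
        time.sleep(0.004)            # pretend on-device inference takes ~4 ms
        return "step_forward"

@dataclass
class Actuator:
    degraded: bool = False
    def apply(self, action): pass
    def slow_down(self): self.degraded = True

def control_loop(camera, policy, actuator, steps=100, budget_ms=20.0):
    """Perceive, plan and act locally, within a fixed per-frame latency budget."""
    for _ in range(steps):
        start = time.perf_counter()
        action = policy.infer(camera.read())      # no cloud round trip
        actuator.apply(action)
        if (time.perf_counter() - start) * 1000 > budget_ms:
            actuator.slow_down()                  # miss the budget: degrade, don't stall

control_loop(Camera(), Policy(), Actuator())
```

The design point is the budget check: a robot that must answer in milliseconds can slow itself down when a frame runs long, but it can never afford to block on a network hop.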

Just across the aisle, another category of physical AI was quietly rewriting what “screen” even means. Lightweight AR smart glasses from XREAL and ThinkAR looked more like everyday eyewear than sci-fi props, but what mattered was what you couldn’t see: neural networks running locally to understand space, gestures and context. The glasses mapped booth layouts, overlaid directions, and responded to hand movements without a noticeable cloud round trip. For brands, that redefines the canvas of OOH. Instead of fighting for a slice of a static billboard, advertisers will soon be able to paint personalized layers over the physical world—different creative for a commuter, a tourist or a VIP attendee walking the same corridor. The infrastructure for that future—on-device AI, edge rendering, ultra-low-power chips—is suddenly here, and very real.
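
A thought experiment for what that layer-picking might look like: a local lookup keyed on an audience segment the glasses infer on-device, so the raw context never leaves the frame. The segment labels and creative table below are hypothetical, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class Viewer:
    segment: str   # e.g. "commuter", "tourist", "vip" -- inferred on-device
    locale: str

# Hypothetical creative table; a real system would fetch signed, cached assets.
CREATIVES = {
    "commuter": "quick-service breakfast overlay",
    "tourist": "landmark tour overlay",
    "vip": "lounge invitation overlay",
}

def pick_overlay(viewer: Viewer) -> str:
    """Choose a creative layer for the same physical corridor, locally."""
    return CREATIVES.get(viewer.segment, "default wayfinding overlay")

print(pick_overlay(Viewer(segment="tourist", locale="en-US")))
```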

If the glasses hinted at a new kind of OOH surface, the next showstopper was a reminder that the most valuable real estate is still the human body. AI wearables had their own mini-moment at CES, but one device summed up the shift. SwitchBot’s MindClip, showcased in the start-up zones, is a thumb-sized clip that promises to act like a “second brain,” recording, transcribing and organizing your life in real time using AI. It sits almost invisibly on clothing and continuously captures conversations and ambient sound, turning them into searchable memory. Other exhibitors showed AI rings and neural sensor bands that track micro-gestures or cognitive load using similarly tiny, always-on models. For marketers, this is not just another data stream; it is a persistent, context-rich log of what people do, where they go and what they react to in physical space. The privacy questions are profound and unresolved, but in an OOH context, devices like these hint at measurement tools capable of connecting a glance at a digital screen to a purchase days later, without ever firing a browser cookie.
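
To see why that matters, consider a deliberately naive sketch of cookieless, time-window attribution. It assumes a consented exposure log from a wearable and a purchase feed; both schemas are invented for illustration, and a real pipeline would add consent gates, aggregation and privacy review before anything like this ran.

```python
from datetime import datetime, timedelta

def match_exposures_to_purchases(exposures, purchases, window=timedelta(days=3)):
    """Toy attribution join: pair each purchase with any screen glance
    that preceded it within the window. No cookies, no device IDs,
    just timestamps from two hypothetical event logs."""
    matches = []
    for p_time, sku in purchases:
        for e_time, screen_id in exposures:
            if timedelta(0) <= p_time - e_time <= window:
                matches.append((screen_id, sku, p_time - e_time))
    return matches

exposures = [(datetime(2026, 1, 6, 9, 15), "times_square_07")]
purchases = [(datetime(2026, 1, 8, 18, 2), "sneaker-x1")]
print(match_exposures_to_purchases(exposures, purchases))
```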

Back in the home, LG’s CLOiD robot drew crowds for a different reason: it looked like a domestic worker you might actually want—and be able—to live with. Billed as part of a “Zero Labor Home” vision, CLOiD combined a wheeled base with a tilting torso and two articulated arms that could fold laundry, retrieve items from the fridge and load an oven. Its arms, with human-like range of motion and multi-fingered hands, performed surprisingly delicate tasks, all guided by onboard AI perception and control. Where last decade’s smart speakers turned the home into an audio interface, CLOiD turns it into a physical stage. Every surface becomes interactive, every room a potential experience zone. As home robots gain adoption, OOH won’t stop at the front door; brands will need to design campaigns that travel from the mall display to the living room robot that can act on a user’s interest—preheating, restocking or even physically assembling the thing you saw advertised.
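
There is no standard for that screen-to-robot handoff today, so treat the sketch below as pure speculation: an invented, consent-gated “intent” payload a campaign might drop into a home robot’s task queue. The RobotTaskQueue class and every field name are hypothetical.

```python
import json

class RobotTaskQueue:
    """Stub for a home robot's task API; invented for this example."""
    def enqueue(self, payload: str) -> str:
        return f"queued: {payload}"

def dispatch(robot: RobotTaskQueue, intent: dict):
    # Nothing moves in the home without an explicit opt-in.
    if not intent.get("user_consented"):
        return None
    return robot.enqueue(json.dumps(intent))

intent = {
    "campaign_id": "fall-bake-launch",    # hypothetical campaign
    "user_consented": True,
    "action": "preheat_oven",
    "params": {"temperature_c": 220},
}
print(dispatch(RobotTaskQueue(), intent))
```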

The most unexpected manifestation of physical AI came rolling by in the outdoor mobility area: Jackery’s Solar Mars Bot, a mobile solar generator that can autonomously seek out the best sunlight. Using AI-enhanced computer vision, it navigates terrain, tracks the sun and repositions itself throughout the day to maximize energy capture, then folds its panels away for storage and transport. It felt like a prototype for a new class of self-deploying infrastructure—machines that don’t just operate in the field, but create the power and data backbone for other systems around them. For OOH media owners, that’s an intriguing proposition: imagine pop-up digital installations that arrive on their own, power themselves, reposition to catch both the crowd and the sun, and then roll away after the campaign. When your energy source has wheels and a brain, the constraints on where a screen can go start to disappear.
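
For flavor, this is roughly the arithmetic a sun-seeking bot has to run: estimate solar elevation from date, time and location, then compare measured output against what that elevation should deliver. The approximation below ignores the equation of time and the atmosphere, and the 0.7 “probably shaded” threshold is a number I made up.

```python
import math
from datetime import datetime

def solar_elevation(lat_deg, lon_deg, when_utc):
    """Rough solar elevation in degrees (ignores the equation of time)."""
    day = when_utc.timetuple().tm_yday
    # Approximate solar declination for the day of year.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day + 10)))
    # Local solar time from UTC; hour angle is 15 degrees per hour off noon.
    solar_hour = when_utc.hour + when_utc.minute / 60.0 + lon_deg / 15.0
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)))

def should_reposition(measured_w, rated_w, elev_deg, floor=0.7):
    """Move if output falls well below what the sun angle should deliver,
    i.e. the panel is probably shaded. The 0.7 floor is arbitrary."""
    if elev_deg <= 0:
        return False  # night: nothing to chase
    return measured_w < floor * rated_w * math.sin(math.radians(elev_deg))

# Las Vegas, around local solar noon during CES week (20:00 UTC).
elev = solar_elevation(36.17, -115.14, datetime(2026, 1, 6, 20, 0))
print(round(elev, 1), should_reposition(120, 500, elev))
```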

Threaded through all these demos was an industrial story that may matter more to the long-term evolution of OOH than any single gadget. Siemens used its CES keynote to spotlight the Digital Twin Composer, a platform that marries its industrial digital twin technology with NVIDIA Omniverse simulations and real-time engineering data. The idea: create a living 3D model of a product, a process or even an entire plant, then test how it behaves over time, under different weather patterns, traffic conditions or usage scenarios. For cities and media networks, that kind of simulation is a preview engine for the urban canvas itself—planning where to place connected screens, how to choreograph fleets of robots, where sensors and solar bots should live, and what audiences will actually see. It turns the messy, analog world that OOH inhabits into something you can A/B test before a single panel is bolted into concrete.
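
Stripped to its essence, “A/B testing the street” is simulation before installation. Here is a toy Monte Carlo version with made-up pass-by probabilities; a real digital twin would derive those from modeled routes, sightlines and dwell times rather than from a random number generator.

```python
import random

def simulate_impressions(sites, pedestrians=10_000, trials=100, seed=42):
    """Monte Carlo sketch of comparing candidate panel locations.

    `sites` maps a candidate location to the probability that a simulated
    passer-by's route takes them within viewing range. Returns the average
    impressions per trial for each site. All numbers are hypothetical."""
    rng = random.Random(seed)
    results = {}
    for site, p_pass in sites.items():
        counts = [sum(rng.random() < p_pass for _ in range(pedestrians))
                  for _ in range(trials)]
        results[site] = sum(counts) / trials
    return results

print(simulate_impressions({"transit_hub": 0.32, "plaza_east": 0.18}))
```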

Taken together, these five encounters—the humanoid workers, the AR glasses, the “second brain” wearables, the chore-capable home robot and the self-aiming solar bot—tell a clear story. AI is no longer a thin layer on top of digital media. It is becoming the infrastructure of the physical world: the legs on the robot, the eyes in the glasses, the memory on your lapel, the arms in your kitchen, the wheels under your power supply. For the OOH industry, that means the medium itself is about to move, watch, respond and, occasionally, talk back.