AI is no longer a feature, but infrastructure
Virtually every major announcement at CES 2026 shows the same pattern:
AI is not a bonus, but a fundamental part of the product.
We see that in:
- Laptops and chips with powerful on-device AI (NPUs delivering 80 TOPS and more)
- Vehicles designed with a software-first approach
- Displays and smart home devices that learn to recognize context and behavior
- Robots that do not just move, but also reason
The common thread: AI must deliver measurable value, not just be clever marketing.
Sony & Honda: AFEELA is becoming a reality

Sony Honda Mobility showcased a new AFEELA Prototype 2026 and confirmed that the first production version, AFEELA 1, will actually be delivered in California later this year.
The focus is not on "the car as a means of transport", but on:
1. Vehicle AI (Autonomous Empathy)
The AFEELA uses AI not only to drive (Level 3 ADAS) but also to understand context. Thanks to more than 45 sensors (cameras, radar, LiDAR) and onboard AI computing power, the car anticipates the driver's state of mind and the traffic situation. The car is also an "expressive" interface that communicates with the outside world via its light bar (the Media Bar).
2. Entertainment and Content (The driving PS5)
Thanks to Sony’s expertise, the car is an entertainment hub. The AFEELA integrates Unreal Engine 5 for its interface and offers access to the full Sony Pictures catalog and PlayStation gaming via the cloud or local hardware. The interior is designed as a cocoon for media consumption, featuring a panoramic screen that spans the entire width of the dashboard.
3. Always-connected Software (SaaS on Wheels)
The AFEELA is never "finished." Via 5G and satellite connections, the car receives continuous updates. This goes beyond mere navigation; the AI assistant learns from your daily routines. SHM also introduced an open ecosystem where external developers can build apps for the car, similar to the App Store.
4. Collaboration with Qualcomm (The Snapdragon Digital Chassis)
The engine behind this experience is Qualcomm’s Snapdragon Digital Chassis. This platform, built around a central SoC (system on a chip), delivers the immense computing power required for both the AI safety systems and the graphical splendor of the entertainment. It confirms our earlier conclusion: hardware and software are fully converging.
This is not a traditional car. This is a rolling platform — and a clear precursor of where mobility is headed.
Samsung sets a new standard with Micro RGB

Samsung presented the world's first 130-inch Micro RGB TV. Not a gimmick, but a serious step forward in display technology:
1. Full BT.2020 color gamut coverage
While most current high-end screens already struggle to fully cover the DCI-P3 color space, this display reaches the wider BT.2020 standard. This means it can reproduce colors that were previously only visible in nature or in professional cinemas. The Micro RGB technique uses microscopic, self-illuminating LEDs that emit pure red, green, and blue light without color filters, resulting in unparalleled color purity.
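To make that difference concrete: each gamut is a triangle in the CIE 1931 chromaticity diagram, spanned by its red, green, and blue primaries. A small Python sketch (standard colorimetry, not Samsung's method) compares the two areas using the published primary coordinates:

```python
# Compare DCI-P3 and BT.2020 gamut sizes in the CIE 1931 xy diagram.
# Each gamut is the triangle spanned by its R, G, B primaries; the
# shoelace formula gives the triangle's area.

def gamut_area(primaries):
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

DCI_P3  = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]  # R, G, B
BT_2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # R, G, B

p3, bt = gamut_area(DCI_P3), gamut_area(BT_2020)
print(f"BT.2020 spans {bt / p3:.2f}x the chromaticity area of DCI-P3")
# -> BT.2020 spans 1.39x the chromaticity area of DCI-P3
```

That extra ~40% of chromaticity area is precisely where the deep greens and reds live that filtered panels cannot show.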
2. AI-driven image processing (The NPU at work)
The screen is powered by Samsung’s latest AI Quantum Processor (2026). Because the display is 130 inches, upscaling is of critical importance. The AI analyzes the image frame by frame at an object level, recognizes textures, and adjusts local brightness and sharpness. It removes noise without the image looking "polished," which is essential at this scale.
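To see why naive scaling is not enough at this size, consider what the simplest upscaler does. The sketch below (our illustration, not Samsung's pipeline) just repeats pixels, which produces exactly the blockiness that content-aware NPU processing exists to avoid:

```python
import numpy as np

# The simplest possible upscaler: repeat every pixel. At 130 inches each
# source pixel is physically large, so this blockiness becomes visible --
# which is why the panel's NPU does content-aware upscaling instead.

def nearest_neighbour(img: np.ndarray, factor: int) -> np.ndarray:
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

frame = np.array([[10, 200],
                  [200, 10]], dtype=np.uint8)   # tiny 2x2 "image"
print(nearest_neighbour(frame, 2))
# [[ 10  10 200 200]
#  [ 10  10 200 200]
#  [200 200  10  10]
#  [200 200  10  10]]
```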
3. Extremely high brightness and virtually no reflections
With a peak brightness well exceeding 4,000 nits, this screen remains perfectly visible even in a sun-drenched room. Combined with the new anti-reflection nanocoating (virtually 0% reflection), glare from windows or lamps all but disappears. This is a massive leap forward for both the living room and the corporate boardroom.
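A back-of-the-envelope calculation (our own illustrative numbers, not Samsung's specs) shows why reflectance matters as much as raw brightness: ambient light reflected by the panel lifts the black level and crushes perceived contrast:

```python
import math

# Perceived contrast in a bright room: reflected ambient light is added to
# both the brightest and darkest parts of the image. For a diffusely
# reflecting surface, reflected luminance (nits) ~= lux * reflectance / pi.
# All numbers below are our own illustrative assumptions.

def perceived_contrast(peak, black, ambient_lux, reflectance):
    reflected = ambient_lux * reflectance / math.pi
    return (peak + reflected) / (black + reflected)

ambient = 5_000              # lux, a sun-drenched living room
peak, black = 4_000, 0.0005  # nits; near-zero black for self-emissive pixels

for refl in (0.04, 0.01, 0.001):  # glossy glass, typical coating, nanocoating
    c = perceived_contrast(peak, black, ambient, refl)
    print(f"reflectance {refl:.1%}: perceived contrast ~ {c:,.0f}:1")
# reflectance 4.0%: perceived contrast ~ 64:1
# reflectance 1.0%: perceived contrast ~ 252:1
# reflectance 0.1%: perceived contrast ~ 2,513:1
```

In other words: without the coating, even 4,000 nits would wash out to double-digit contrast in daylight.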
4. The screen as an "Architectural Object"
This is the most important strategic shift. Samsung is moving away from the concept of the "black box on the wall."
- Bezel-less Design: The screen literally has no bezels.
- Ambient Mode 4.0: When the TV is "off," it transforms into a digital canvas that seamlessly blends with the wall texture or displays artworks with the texture of a real painting.
- Modular DNA: Although this is a 130-inch model, the technology is based on modules. This suggests that in the near future, screens can be custom-made for specific walls or architectural corners.
The strategic takeaway: displays stop being "televisions on the wall" and become interfaces — especially in combination with AI.
Robots get smart: Boston Dynamics & Google DeepMind

One of the most meaningful collaborations to date: Boston Dynamics + Google DeepMind.
The Atlas humanoid is being linked to Gemini Robotics, with the aim of:
1. Better perception (Multimodal Intelligence)
Thanks to Gemini, Atlas doesn't just "see" objects; it understands them. Where previous robots had to be programmed to recognize a specific lever, the new Atlas can now scan a cluttered workspace and understand what a "wrench" is, even if it's covered in dust or located in an unfamiliar spot.
2. Reasoning (Logic in Action)
This is the biggest breakthrough. If you give the robot the command: “Clean up the spilled liquid,” you don’t have to explain how. The robot reasons independently: it must find an absorbent cloth, dab the liquid, and then throw the cloth in the trash. If the trash can is full, it looks for an alternative.
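As a toy illustration of that plan-and-fallback pattern, here is a hand-written Python sketch. The point is the structure, not the method: Gemini Robotics derives such steps from language and camera input, not from hardcoded rules like these.

```python
# Toy illustration of plan-with-fallback reasoning for the command
# "Clean up the spilled liquid". Purely illustrative: a real system
# generates these steps itself from perception and language.

world = {"cloth_location": "shelf", "trash_can_full": True}

def plan_cleanup(world: dict) -> list[str]:
    steps = [f"fetch an absorbent cloth from the {world['cloth_location']}",
             "dab the spilled liquid"]
    if world["trash_can_full"]:          # fallback branch
        steps += ["trash can is full -> locate an alternative bin",
                  "dispose of the cloth in the alternative bin"]
    else:
        steps.append("dispose of the cloth in the trash can")
    return steps

for i, step in enumerate(plan_cleanup(world), 1):
    print(f"{i}. {step}")
```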
3. Using tools (Precision Motor Skills)
The new electric Atlas has a much wider range of motion and higher precision than its hydraulic predecessor. Combined with Gemini, it can use tools designed for humans — from power drills to fragile measuring instruments — without the software having to be rewritten for each specific tool.
4. Executing tasks independently (From Demo to Factory)
Boston Dynamics confirmed that Atlas is no longer training solely in a lab but is already being deployed in Hyundai’s automotive factories. There, the robot performs tasks that are too heavy, too repetitive, or too dangerous for humans, such as moving engine components in cramped spaces.
The first applications focus on industry and manufacturing. This is a clear step toward practically deployable humanoid robots, not just demonstrations.
Nvidia warns: AI is about computing power, not ideas

During his keynote, Jensen Huang was remarkably clear:
“The future of AI will be defined by compute.”
According to Nvidia, reasoning models, long contexts, and physical AI are creating an explosive demand for:
1. Computing Power (The Vera Rubin Superchip)
The new Rubin GPU delivers 5x the AI computing power of the previous (Blackwell) generation. Even more important is its pairing with the Vera CPU (based on the ARM architecture), which is specifically designed to keep up with the data hunger of modern AI models. Nvidia states that this combination reduces the cost of AI interactions (inference) by a factor of 7.
2. Memory (NPU-friendly storage)
A major innovation is Inference Context Memory Storage. Reasoning models (such as OpenAI’s o1 successors) need to "think" before they answer. This requires them to retrieve massive amounts of context information lightning-fast. Nvidia is now building this memory directly into the system architecture, eliminating the delay in long conversations or complex analyses.
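What that context memory most likely holds is the model's attention cache: keys and values for every token seen so far, which must be read back at full speed on every step and grow linearly with the conversation. A toy numpy sketch of the mechanism (our illustration, not Nvidia's implementation):

```python
import numpy as np

# Toy single-head attention with a growing key/value cache. Each new token
# attends over ALL previous tokens, so the whole cache is read every step,
# and the cache grows linearly with context length.

d_model, steps = 64, 8
rng = np.random.default_rng(0)
k_cache = np.empty((0, d_model))
v_cache = np.empty((0, d_model))

for t in range(steps):
    x = rng.standard_normal(d_model)          # current token's hidden state
    k_cache = np.vstack([k_cache, x])         # append this token's key
    v_cache = np.vstack([v_cache, x])         # ... and value (toy: reuse x)
    scores = k_cache @ x / np.sqrt(d_model)   # compare against all history
    w = np.exp(scores - scores.max()); w /= w.sum()
    out = w @ v_cache                         # context-aware output

print(f"cache size after {steps} tokens: {k_cache.nbytes + v_cache.nbytes:,} bytes")
# A production model with a 100k-token context can need tens of gigabytes
# of cache -- hence dedicated, fast context-memory hardware.
```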
3. Networking (Spectrum-X Photonics)
When AI systems are scaled up to thousands of chips (so-called “pods”), the speed of the connection between those chips becomes the limiting factor. With Spectrum-X Ethernet Photonics, Nvidia integrates light technology (optics) directly into the network, delivering a 10x improvement in data transfer between systems.
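For a rough feel of why link speed dominates at that scale, consider gradient synchronization across a pod (our own simplified numbers; real collective operations are more complex):

```python
# Simplified view of synchronization in a training "pod": every step,
# nodes must exchange data on the order of the model size. Illustrative
# numbers only, not Nvidia benchmarks.

params = 70e9                   # assume a 70B-parameter model
payload_bits = params * 2 * 8   # fp16 values, in bits (~140 GB)

for link_gbps in (400, 4_000):  # conventional link vs a 10x photonic link
    seconds = payload_bits / (link_gbps * 1e9)
    print(f"{link_gbps:>5} Gb/s link: ~{seconds:.2f} s per sync")
#   400 Gb/s link: ~2.80 s per sync
#  4000 Gb/s link: ~0.28 s per sync
```

When a step of compute takes well under a second, the slower link leaves the GPUs idle most of the time; the 10x link turns the network from bottleneck back into plumbing.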
4. Energy (Efficiency at scale)
Huang was honest: the demand for power is explosive. The Rubin platform is therefore designed to be much more efficient per calculated “token” (the basic unit of AI output). By extremely closely aligning hardware and software (co-design), Nvidia can deliver more intelligence with lower energy consumption per task.
Nvidia no longer positions itself as a chip manufacturer, but as a builder of complete AI systems. This is crucial for companies that want to get serious about AI.
Qualcomm brings AI PCs to a broader audience

With the Snapdragon X2 Plus, Qualcomm explicitly targets affordable yet AI-ready laptops.
Key specifications:
1. The 80 TOPS NPU (The “AI Engine”)
The jump from 45 to 80 TOPS (Trillion Operations Per Second) is gigantic. This means the laptop can do more than just simple background blurring; it can run full agentic AI experiences locally. Imagine an assistant that proactively manages your mailbox or performs complex Excel analyses without any data being sent to a server. It is the most powerful NPU in this price segment.
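To put 80 TOPS in perspective, here is a rough compute-bound estimate (our assumption of a 7B-parameter local model; in practice memory bandwidth, not raw TOPS, usually sets the real speed):

```python
# Rough upper bound on local LLM speed from raw NPU throughput. A
# transformer needs ~2 * parameters operations per generated token.
# This ignores memory bandwidth, which usually dominates in practice.

def max_tokens_per_sec(tops, params_billion, utilization=0.3):
    ops_per_token = 2 * params_billion * 1e9
    return tops * 1e12 * utilization / ops_per_token

for tops in (45, 80):
    rate = max_tokens_per_sec(tops, params_billion=7)
    print(f"{tops} TOPS -> up to ~{rate:,.0f} tokens/s on a 7B model")
# 45 TOPS -> up to ~964 tokens/s on a 7B model
# 80 TOPS -> up to ~1,714 tokens/s on a 7B model
```

The absolute numbers are theoretical ceilings, but the ratio is the point: nearly twice the headroom for agents, transcription, and analysis running side by side.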
2. 3rd Generation Oryon CPU (Speed & Efficiency)
The new 3nm architecture of the Oryon CPU (available in 6 or 10 cores) delivers 35% faster single-core performance than its predecessor. For the business user, this means heavy applications launch faster, while the chip consumes 43% less power. This results in a true “multi-day” battery life.
3. Windows 11 Copilot+ & 26H1
The X2 Plus is specifically built for the new generation of Windows 11 features (version 26H1). This includes enhanced Recall functions and local AI security. Qualcomm and Microsoft have tuned the software so closely to the chip that AI tasks barely impact the battery anymore.
4. No performance loss on battery
A crucial point for EasyComp Zeeland: unlike many traditional x86 laptops (Intel/AMD), the Snapdragon X2 Plus delivers the same performance on battery as it does when plugged in. This makes it the ideal machine for the mobile professional working on-site in Zeeland.
This is important for the business market: on-device AI becomes accessible, without cloud dependency.
Google TV gets creative with Gemini AI

Google expands Google TV with Gemini AI, including:
1. Nano Banana: Instant image creation and editing
Nano Banana is the engine behind Photos Remix. Users can bring up their own Google Photos on the big screen and completely transform them using voice commands. You can turn a vacation photo into an “Art Deco” painting or a “3D character.” It turns your TV into an interactive canvas where you can design images together with your family.
2. Veo: From static image to cinematic atmosphere
With Veo (Google’s most advanced video model), the TV goes a step further. You can now generate short, cinematic scenes or convert static photos into moving images. Imagine creating an atmospheric visual for a party by simply saying: “Create a calm video of a sunset on a Zeeland beach in the style of a movie.”
3. Deep Dive & Visual Search Results
Searching on Google TV is no longer just a list of titles. When you ask a question via Gemini, you get a “Deep Dive”: a visually rich presentation with videos, images, and real-time data (such as sports scores) specifically designed for the big screen. It turns the TV into an educational hub for the entire family.
4. Voice-Controlled Settings (The End of the Menu)
This is the most practical update: you never have to search through complicated menus again. By simply saying “Gemini, the picture is too dark” or “I can’t hear the voices clearly,” the AI instantly adjusts the brightness or audio settings (such as dialogue boost) without you having to pause the movie.
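Under the hood, this is intent recognition mapped onto a settings API. A deliberately minimal sketch, with a lookup table standing in for Gemini's actual language understanding (all names hypothetical):

```python
# Minimal voice-to-settings sketch. A real Gemini-based TV interprets
# free-form speech with a language model; this phrase table is only a
# stand-in for that step.

ACTIONS = {
    "too dark": ("brightness", +15),
    "too bright": ("brightness", -15),
    "can't hear the voices": ("dialogue_boost", +1),
}

settings = {"brightness": 50, "dialogue_boost": 0}

def handle(utterance: str) -> None:
    for phrase, (key, delta) in ACTIONS.items():
        if phrase in utterance.lower():
            settings[key] += delta
            print(f"{key} -> {settings[key]}")
            return
    print("intent not recognized")

handle("Gemini, the picture is too dark")   # brightness -> 65
```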
With this, the living room shifts from passive consumption to interactive creation. The TV becomes an interface, not just a screen.
LEGO Smart Play: merging physical and digital
LEGO introduced Smart Play, featuring a smart brick packed with sensors, light, and sound. Creations respond to touch and movement.
This is not a “toy with an app,” but a new platform for:
- education,
- creative learning,
- interactive experiences.
Pre-orders start on January 9 in selected markets.
OLED as an AI interface

Samsung Display showed how OLED screens are becoming the ultimate interface for AI:
1. Flexible (Form follows Function)
Samsung showcased the Flex Hybrid, a screen that can both fold and slide out. For an AI PC, this means the screen physically adapts to the task: a compact 10-inch display for a quick email check via AI voice commands, expanding into a 17-inch canvas when the AI detects you're starting a graphic design task.
2. Expressive (Emotion through Light)
In the AFEELA and in new AI home assistants, Samsung uses OLED to show “emotion.” By utilizing transparent OLED panels and light strips (the Media Bar), a device can use subtle color patterns and animations to show whether the AI is “thinking,” “listening,” or giving a warning. This makes the interaction between human and machine more human and intuitive.
3. Context-Aware (Adaptive UI)
This is the most groundbreaking innovation: the Sensor OLED. Samsung integrates sensors (such as fingerprint scanners and heart rate monitors) directly between the pixels, across the entire surface of the display; a minimal sketch of what that enables follows the list below.
- AI Integration: If you hold your hand over the screen, the AI recognizes who you are and instantly adapts the interface (UI) to your preferences and authorization level.
- Health: In a business setting (AI PC), the screen can measure your stress level or fatigue through your fingertips and, via the AI, advise you to take a break.
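Here is that sketch for the first scenario: personalization keyed to a fingerprint read anywhere on the panel. All names are hypothetical; Samsung has not published an API for this.

```python
# Illustrative flow for sensor-in-display login: a fingerprint read
# anywhere on the panel selects a user profile and adapts the UI.
# Entirely hypothetical identifiers and fields.

PROFILES = {
    "fp_alice": {"user": "alice", "role": "admin", "theme": "dense"},
    "fp_bob": {"user": "bob", "role": "viewer", "theme": "simple"},
}

def on_touch(fingerprint_id: str) -> dict:
    profile = PROFILES.get(fingerprint_id)
    if profile is None:
        return {"ui": "locked", "reason": "unknown fingerprint"}
    return {"ui": profile["theme"], "authorization": profile["role"]}

print(on_touch("fp_alice"))  # {'ui': 'dense', 'authorization': 'admin'}
```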
From cars to laptops and assistant displays: visual feedback and adaptive UIs are becoming crucial in AI-driven products.
What does this mean for businesses and consumers?
The most important conclusions so far:
- AI has come of age.
- Hardware and software are converging once again.
- Local processing (on-device AI) is becoming increasingly important.
- “Smart” now means: reliable, efficient, and practically applicable.
At EasyComp Zeeland, we focus primarily on what is now applicable:
- AI PCs for workplaces,
- smart infrastructure,
- edge AI,
- security and privacy by design.
CES 2026 shows: the experimental phase is over. The implementation phase has begun.
👉 We continue to follow CES 2026 and share relevant updates that truly have an impact — not just pretty demos.
To be continued.



