Meta’s AR glasses hit the streets
Meta brings AR screens to Ray-Ban smart glasses for live info and translations.
Key Highlights:
- Meta has launched Ray-Ban Display smart glasses featuring an in-lens AR screen that shows text and images and supports live calls.
- The glasses pair with a Neural Band wristband that reads EMG signals to enable pinch and swipe gestures, and the glasses themselves run for up to six hours per charge.
- A performance-focused Oakley Meta Vanguard model targets athletes, pairing cameras and AI features with workout tracking and Garmin device integration.
What’s Happening:
Meta introduced a lineup of three AI-powered smart glasses, headlined by the Ray-Ban Display. These glasses include a heads-up AR interface built directly into the lens, allowing users to view live captions, translations, navigation prompts, and notifications without looking at a phone.
Control comes from the Neural Band wristband, which interprets subtle hand and finger movements as input. Alongside this, Meta revealed the Oakley Meta Vanguard, a sport-centric variant designed to track workouts, capture POV footage, and automatically generate highlight clips for training or sharing. The Ray-Ban Display starts at $799, with U.S. availability beginning this month.
Why This Matters:
Everyday phone checks—reading messages, following directions, translating conversations, or setting timers—move directly into the user’s field of view. The EMG-based wristband tackles one of wearables’ biggest challenges by turning small, natural finger motions into reliable controls, making the glasses practical for commuting, multitasking, and hands-busy work.
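Meta hasn't published how the Neural Band decodes EMG, but the general recipe for this kind of gesture detection is well established: window the muscle signal, measure its energy, and fire an event on a threshold crossing. The Python sketch below illustrates that idea only; the sample rate, window size, and threshold are invented for illustration, not Meta's values.

```python
# Illustrative sketch of threshold-based EMG gesture detection.
# All constants are assumptions for illustration: Meta has not
# published the Neural Band's actual pipeline or parameters.
import numpy as np

SAMPLE_RATE_HZ = 1000        # assumed EMG sampling rate
WINDOW_MS = 50               # short windows keep input latency low
PINCH_RMS_THRESHOLD = 0.12   # assumed activation level (normalized units)

def rms_envelope(signal: np.ndarray, window: int) -> np.ndarray:
    """Root-mean-square energy of each non-overlapping window."""
    trimmed = signal[: len(signal) // window * window]
    frames = trimmed.reshape(-1, window)
    return np.sqrt((frames ** 2).mean(axis=1))

def detect_pinch(emg: np.ndarray) -> bool:
    """Report a pinch when any window's energy crosses the threshold."""
    window = SAMPLE_RATE_HZ * WINDOW_MS // 1000
    return bool((rms_envelope(emg, window) > PINCH_RMS_THRESHOLD).any())

# Simulated signal: quiet baseline plus a brief burst of muscle activity.
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 0.02, 2000)
emg[800:900] += rng.normal(0.0, 0.3, 100)   # the "pinch"
print(detect_pinch(emg))                    # True
```

A production system would replace the fixed threshold with a classifier trained per user, but windowed energy measurement is the standard starting point for surface EMG.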
For athletes, Garmin-linked performance stats and auto-generated highlight videos add training value. Creators gain seamless first-person capture, while users with hearing loss benefit from real-time subtitles. The biggest unanswered questions remain comfort, battery endurance, and social acceptance: are public spaces and workplaces ready for always-on, face-worn displays?
Google brings Gemini to Chrome

Google rolls out Gemini inside Chrome to bring AI help straight into browsing.
Key Highlights:
- Google has integrated Gemini directly into Google Chrome on Mac, Windows, and mobile—no additional sign-ups required.
- Users can ask Gemini to summarize webpages, work across multiple tabs, or schedule tasks directly from the browser.
- Upcoming agent-style capabilities will allow Gemini to handle actions like bookings and online shopping automatically.
What’s Happening:
Google is embedding Gemini directly into Chrome, enabling users to interact with webpages conversationally. People can ask the AI to explain content, locate videos, schedule meetings, or complete actions without leaving the current tab.
Gemini connects seamlessly with Google services such as Google Calendar, Google Maps, and YouTube. Google also plans to expand these features to Workspace users and introduce more advanced agent-like tools capable of managing everyday tasks, from booking appointments to ordering groceries.
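Chrome's built-in integration isn't a developer API, but the same model is reachable through Google's standalone google-generativeai Python SDK, which gives a feel for the summarization flow described above. A minimal sketch, assuming you have a Gemini API key and supply the page text yourself:

```python
# Minimal sketch of Gemini-powered page summarization via Google's
# public google-generativeai SDK (pip install google-generativeai).
# This illustrates the capability; it is not Chrome's internal mechanism.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

page_text = "...visible text of the webpage, extracted however you like..."
response = model.generate_content(
    "Summarize this page in three bullet points:\n\n" + page_text
)
print(response.text)
```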
Why This Matters:
Chrome is evolving from a passive browsing tool into an active productivity hub. With Gemini handling summaries, scheduling, and soon end-to-end tasks like shopping or bookings, simple clicks can turn into completed actions—all without switching tabs or apps. This shift could fundamentally change how people navigate the web, making the browser itself a central AI-powered workspace rather than just a window to websites.
Nvidia and Intel team up on AI chips

Nvidia invests $5B in Intel to co-develop AI-ready chips for data centers and PCs.
Key Highlights:
- Nvidia and Intel will co-develop custom x86 CPUs and system-on-chips, combining Intel processors with Nvidia RTX GPU chiplets.
- Nvidia will support the partnership with a $5 billion equity investment in Intel at $23.28 per share.
- The collaboration targets data centers, enterprise systems, and consumer PCs, with plans spanning multiple generations of AI products.
What’s Happening:
Nvidia and Intel are planning a deep hardware collaboration to design and manufacture next-generation CPUs and SoCs that tightly integrate Nvidia's AI and graphics architecture with Intel's x86 platform. Intel will produce custom x86 CPUs tailored for Nvidia's AI infrastructure, while also building PC chips that merge Intel CPUs with Nvidia RTX GPU chiplets linked by Nvidia's high-speed NVLink interconnect.
The goal is to create platforms optimized for AI workloads across servers and personal computers. Nvidia’s $5 billion investment cements the alliance and signals long-term commitment to jointly developed silicon.
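To see why chiplet-level integration matters, consider how today's discrete setups spend time just moving data between CPU and GPU. The PyTorch sketch below (our illustration, unrelated to the joint silicon itself) times the host-to-device copy against the matrix multiply it enables; shrinking that copy cost is precisely what a tighter CPU-GPU interconnect is for.

```python
# Illustrative timing of CPU-to-GPU transfer vs. GPU compute (PyTorch).
# On discrete systems the copy travels over PCIe; tightly coupled
# CPU+GPU silicon aims to shrink exactly this overhead.
import time
import torch

assert torch.cuda.is_available(), "requires an Nvidia GPU"

x = torch.randn(4096, 4096)       # ~64 MB float32 tensor on the CPU

t0 = time.perf_counter()
x_gpu = x.to("cuda")              # host-to-device copy over the interconnect
torch.cuda.synchronize()          # wait for the transfer to finish
copy_s = time.perf_counter() - t0

t0 = time.perf_counter()
y = x_gpu @ x_gpu                 # the actual work, done on the GPU
torch.cuda.synchronize()          # wait for the kernel to finish
compute_s = time.perf_counter() - t0

print(f"copy: {copy_s * 1e3:.1f} ms, matmul: {compute_s * 1e3:.1f} ms")
```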
Why This Matters:
The partnership could bring AI-heavy tasks—such as training large models or running local generative AI applications—closer to both servers and everyday PCs, reducing dependence on cloud-only GPUs. PC manufacturers may soon ship laptops capable of native 8K video editing or on-device AI coding assistants, while data centers gain new server configurations beyond traditional GPU rack setups.
Overall, the deal reshapes the competitive landscape by blending Nvidia’s AI acceleration with Intel’s CPU ecosystem, potentially redefining how AI workloads are built and deployed across devices.
Disclaimer: All logos, images, videos, trademarks, and brand images used in this blog are the property of their respective owners. They are used here for informational and educational purposes only. We do not claim ownership of, or affiliation with, these brands.