
Rolling Stone’s owner sues Google over AI Overviews

Source: bamboonine

Rolling Stone’s parent company takes Google to court over traffic lost to AI Overviews.

Key Highlights:

  • Penske Media—owner of Rolling Stone and The Hollywood Reporter—says Google’s AI summaries are reducing clicks to its websites.
  • The company reports affiliate revenue down by more than one-third this year, attributing the decline to falling Google-driven traffic.
  • Other publishers, including Chegg and several European media groups, have launched similar legal actions as scrutiny of AI-powered search intensifies.

What’s Happening:

Penske Media argues that Google’s AI Overviews extract key information from its journalism and present it directly in search results, leaving users with little incentive to click through to the original articles. According to the company, this shift has caused significant drops in page views and affiliate income.

Publishers now face a difficult trade-off: block Google from indexing their content and lose visibility, or allow continued access while effectively supplying material that fuels Google’s AI systems.

Why This Matters:

Media businesses that depend heavily on search referrals are confronting a future with more zero-click searches, as AI Overviews and AI Mode keep users on Google pages—often alongside ads. The takeaway for publishers is clear: search traffic is rented, not owned.

To adapt, many will need to strengthen direct relationships with readers, shift success metrics from raw traffic to subscriptions, sign-ups, and revenue per visit, and, where they buy media, experiment with AI-driven ad formats. While lawsuits could eventually lead to outcomes such as paid licensing or more prominent links, planning for a lower-click reality may be smarter than waiting for courts to reshape the system.

Chatbots double false info rate

Source: Newsguard

Leading AI chatbots are twice as likely to spread false claims as they were a year ago.

Key Highlights:

  • A report from NewsGuard finds that leading AI tools now repeat misinformation in 35% of news-related tests, a sharp increase from last year.
  • Adding real-time web search has reduced refusal rates but increased exposure to unreliable and misleading sources.
  • Russian propaganda efforts, including the Pravda network, are actively planting false stories designed to be picked up by AI systems.

What’s Happening:

NewsGuard’s latest analysis reveals a growing accuracy problem among popular AI chatbots when responding to current events. Compared to much lower error rates a year ago, today’s models repeat false or misleading claims more than one-third of the time.

Tools such as ChatGPT and Perplexity showed some of the steepest declines in reliability after integrating live web search features. While these updates allow models to answer more queries instead of refusing them, they also pull in content from low-quality or manipulated sources. NewsGuard notes that Russian influence operations are deliberately exploiting this behavior by flooding the web with fabricated narratives meant to surface in AI responses.

Why This Matters:

AI-generated answers to breaking news should be treated as leads, not facts. With refusal rates near zero and data sources increasingly polluted, roughly one in three news-focused responses now contains inaccuracies.

For teams using AI in research, customer support, journalism, or marketing, stronger safeguards are essential. That includes enabling trusted-source filters, routing time-sensitive queries through quick fact-check steps, logging citations that can be verified, and requiring confirmation from a second reliable source before publishing. Maintaining short, topic-specific lists of trusted outlets can help reduce risk as AI becomes more deeply embedded in daily workflows.
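The two-source rule described above can be sketched as a simple publishing gate. This is a minimal illustration, not anything from the NewsGuard report: the outlet list, function names, and two-source threshold are all hypothetical assumptions standing in for whatever a team's own trusted-outlet list looks like.

```python
# Minimal sketch of a trusted-source publishing gate.
# The allow-list and threshold below are illustrative assumptions only.
from urllib.parse import urlparse

# Hypothetical topic-specific allow-list; a real team would maintain its own.
TRUSTED_OUTLETS = {"reuters.com", "apnews.com", "bbc.com"}

def domain_of(url: str) -> str:
    """Extract the host from a citation URL, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def ready_to_publish(citations: list[str], min_trusted: int = 2) -> bool:
    """Require at least `min_trusted` distinct trusted outlets among the logged citations."""
    trusted = {domain_of(u) for u in citations} & TRUSTED_OUTLETS
    return len(trusted) >= min_trusted

# One trusted source is not enough under a two-source rule:
print(ready_to_publish(["https://www.reuters.com/a", "https://blog.example.com/b"]))  # False
# Two distinct trusted outlets pass the gate:
print(ready_to_publish(["https://reuters.com/a", "https://apnews.com/b"]))  # True
```

A gate like this only checks provenance, not accuracy; it is a cheap first filter before the human fact-check steps the workflow above calls for.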

xAI fires one-third of data annotation team

Source: REUTERS / Dado Ruvic

Musk’s xAI cuts 500 data labelers to focus on specialist AI tutors.

Key Highlights:

  • xAI has laid off roughly one-third of its 1,500-person data annotation workforce, eliminating about 500 roles.
  • Internal communications point to a strategic shift toward specialist AI tutors in fields like STEM, finance, medicine, and safety.
  • The company plans to expand its expert tutor team tenfold.

What’s Happening:

xAI, led by Elon Musk, has reduced its data annotation unit—responsible for labeling and preparing training data for the Grok model. According to internal emails, the company is scaling back broad, generalist labeling work in favor of accelerating hiring for domain-specific experts.

The move reflects a reallocation of resources: fewer general-purpose annotators and a much larger pool of highly trained tutors who can provide deeper, more accurate feedback in complex subject areas.

Why This Matters:

The shift signals a new approach to AI training—prioritizing expert judgment over sheer volume. By replacing low-skill labeling with specialist oversight, xAI appears to be positioning Grok for enterprise and professional use cases where precision matters more than scale, such as interpreting medical information or analyzing financial documents.

For workers, the change raises the bar for data-labeling jobs, shrinking entry-level opportunities while expanding demand for advanced expertise. For businesses and users, it could mean more reliable outputs and clearer accountability, as domain specialists effectively “sign off” on AI behavior. After recent setbacks in AI reliability, leaning on expert review may be the fastest way for xAI to build trust and credibility at scale.

Disclaimer: All logos, images, videos, trademarks, and brand images used in this blog are the property of their respective owners. They are used here for informational and educational purposes only. We do not claim any ownership or affiliation with these brands.
