Wikipedia Reports Decline in Traffic Due to AI Search Summaries and Social Media Videos

Is Wikipedia Losing Relevance in the Age of AI and Social Media?

Often hailed as the last reliable website, Wikipedia is now facing challenges in a landscape dominated by toxic social media and AI-generated content. Recent insights from Marshall Miller at the Wikimedia Foundation indicate a significant drop in human pageviews, down 8% year-over-year.

Understanding the Decline: The Role of Bots

The Wikimedia Foundation works to separate human traffic from bot activity. According to Miller, much of the apparent decline traces to bots that had previously evaded detection: after an update to the platform’s bot-detection systems, unusually high traffic recorded in May and June was reclassified as non-human.

The Shift in Information-Seeking Behavior

Why the decline in traffic? Miller cites the growing influence of generative AI and social media. As search engines increasingly deploy AI to deliver information directly to users, younger generations are turning to social video platforms over traditional sources like Wikipedia. Google has disputed claims that AI summaries are leading to reduced traffic from search queries.

Emphasizing Wikipedia’s Continued Importance

Despite these changes, Miller stresses that Wikipedia remains crucial for knowledge dissemination. Information from the encyclopedia still reaches users, even if they don’t visit the website directly. The platform has explored AI-generated summaries but paused the initiative after receiving backlash from its community.

The Risks of Reduced Engagement

This shift carries risks: with fewer visits to Wikipedia, the number of volunteer contributors and financial supporters may decline. Miller notes that the project is sustained by volunteers who go above and beyond in their commitment to the community, contributions that a shrinking audience puts at risk.

Encouraging More Traffic and Content Integrity

Miller advocates for AI and social media platforms to drive more visitors to Wikipedia. In response, the organization is developing a new framework for content attribution and has dedicated teams aimed at reaching new audiences, seeking volunteers to assist in these efforts.

Call to Action: Support Knowledge Integrity

He encourages readers to engage actively with content integrity: when searching online, look for citations and click through to the original sources. Miller also urges people to talk about why trusted, human-curated knowledge matters and to support the people whose work generative AI content ultimately draws on.


Five FAQs on the decline in website traffic attributable to AI search summaries and social video content:

FAQ 1: Why is website traffic falling?

Answer: Website traffic is declining primarily due to the rise of AI search summaries that provide users with quick answers to queries without needing to click through. This convenience reduces the number of visitors to traditional websites.


FAQ 2: How are AI search summaries impacting user behavior?

Answer: AI search summaries condense information from multiple sources into a single, easily digestible format. As users increasingly find answers directly on search engines, they are less likely to visit individual websites, leading to lower traffic volumes.


FAQ 3: What role does social video play in decreasing website traffic?

Answer: The popularity of social video platforms has led users to consume content in shorter, more engaging formats. This shift in preference diminishes the time users spend on websites, as they opt for quick video content that addresses their interests.


FAQ 4: Are all websites affected equally by this trend?

Answer: Not all websites are equally affected. While news and informational sites may experience more significant declines, niche websites with specialized content or unique offerings might maintain stable traffic levels, depending on their audience’s preferences.


FAQ 5: What can websites do to adapt to falling traffic?

Answer: Websites can adapt by focusing on creating engaging, high-quality content that provides value beyond quick answers, utilizing SEO strategies to improve visibility, and expanding into video content to meet users where they are consuming information. Engaging with audiences through social media can also help drive traffic.


OpenAI Employees Navigate the Company’s Social Media Initiative

OpenAI Launches Sora: A TikTok Rival Amid Mixed Reactions from Researchers

Several current and former OpenAI researchers are voicing their concerns regarding the company’s entry into social media with the Sora app. This TikTok-style platform showcases AI-generated videos, including deepfakes of Sam Altman. The debate centers around how this aligns with OpenAI’s nonprofit mission to advance AI for the benefit of humanity.

Voices of Concern: Researchers Share Their Thoughts

“AI-based feeds are scary,” expressed John Hallman, an OpenAI pretraining researcher, in a post on X. “I felt concerned when I first heard about Sora 2, but I believe the team did a commendable job creating a positive experience. We will strive to ensure AI serves humanity positively.”

A Mixed Bag of Reactions

Boaz Barak, an OpenAI researcher and Harvard professor, shared his feelings in a reply: “I feel both excitement and concern. While Sora 2 is technically impressive, it’s too early to say we’ve dodged the traps of other social media platforms and deepfakes.”

Rohan Pandey, a former OpenAI researcher, took the opportunity to promote his new startup, Periodic Labs, which focuses on building AI for scientific discovery: “If you’re not interested in building the next AI TikTok, but want to foster AI advancements in fundamental science, consider joining us at Periodic Labs.”

The Tension Between Profit and Mission

The launch of Sora underscores a persistent tension for OpenAI, which is rapidly becoming one of the world’s fastest-growing consumer tech companies while remaining an AI research organization with a nonprofit mission. Some former employees argue that a consumer business can, in theory, support that mission by funding research and broadening access to AI technology.

Sam Altman, CEO of OpenAI, articulated this in a post on X, explaining the rationale behind investing resources in Sora:

“We fundamentally need capital to develop AI for science and remain focused on AGI in our research efforts. It’s also enjoyable to present innovative tech and products, making users smile while potentially offsetting our substantial computational costs.”

Altman emphasized the nuanced reality companies face when weighing their missions against consumer interests.

What Does the Future Hold for OpenAI?

The key question remains: at what point does OpenAI’s consumer focus overshadow its nonprofit goals? How does the company make choices regarding lucrative opportunities that might contradict its mission?

This inquiry is particularly pressing as regulators closely monitor OpenAI’s transition to a for-profit model. California Attorney General Rob Bonta has expressed concerns about ensuring that the nonprofit’s safety mission stays prominent during this restructuring phase.

Critics have alleged that OpenAI’s mission serves as a mere branding tactic to attract talent from larger tech firms. Nevertheless, many insiders claim that this mission is why they chose to join the organization.

Initial Impressions of Sora

Currently, the Sora app is in its infancy, just a day post-launch. However, its emergence signals a significant growth trajectory for OpenAI’s consumer offerings. Unlike ChatGPT, designed primarily for usefulness, Sora aims for entertainment as users create and share AI-generated clips. The app bears similarities to TikTok and Instagram Reels, platforms notorious for fostering addictive behaviors.

Despite its playful premise, OpenAI says it is committed to sidestepping established pitfalls. In the blog post announcing Sora’s launch, the company acknowledged issues like doomscrolling and addiction, and said it aims for an experience centered on creativity rather than screen time, with notifications after prolonged sessions and a feed that prioritizes content from people users know.

This foundation appears stronger than Meta’s recent Vibes release — an AI-driven video feed that lacked sufficient safeguards. As noted by former OpenAI policy director Miles Brundage, there may be both positive and negative outcomes from AI video feeds, reminiscent of the chatbot era.

However, as Altman has acknowledged, the creation of addictive applications is often unintentional. The inherent incentives of managing a feed can lead developers down this path. OpenAI has previously experienced issues with sycophancy in ChatGPT, which was an unintended consequence of certain training methodologies.

In a June podcast, Altman elaborated on what he termed “the significant misalignment of social media.”

“One major fault of social media was that feed algorithms led to numerous unintentional negative societal impacts. These algorithms kept users engaged by promoting content they believed the users wanted at that moment but detracted from a balanced experience,” he explained.

The Road Ahead for Sora

Determining how well Sora aligns with user interests and OpenAI’s overarching mission will take time. Early users are already noticing engagement-driven features, such as dynamic emojis that pop up when liking a video, potentially designed to enhance user interaction.

The true challenge will be how OpenAI shapes Sora’s future. With AI increasingly dominating social media feeds, it is conceivable that AI-native platforms will soon find their place in the market. The real question remains: can OpenAI expand Sora without repeating the missteps of its predecessors?

Five FAQs on OpenAI’s social media efforts:

FAQ 1: Why is OpenAI increasing its presence on social media?

Answer: OpenAI aims to engage with a broader audience, share insights about artificial intelligence, and promote its research initiatives. Social media allows for real-time communication and helps demystify AI technologies.

FAQ 2: How does OpenAI ensure the responsible use of AI in its social media messaging?

Answer: OpenAI adheres to strict ethical guidelines and policies when sharing information on social media. This includes being transparent about the limitations of AI and promoting safe usage practices.

FAQ 3: What types of content can we expect from OpenAI’s social media channels?

Answer: Followers can expect a mix of content including research findings, educational resources, project updates, thought leadership articles, and community engagement initiatives aimed at fostering discussions about AI.

FAQ 4: How can the public engage with OpenAI on social media?

Answer: The public can engage by following OpenAI’s accounts, participating in discussions through comments and shares, and actively contributing to polls or Q&A sessions that OpenAI hosts.

FAQ 5: Will OpenAI address controversies or criticisms on its social media platforms?

Answer: Yes, OpenAI is committed to transparency and will address relevant controversies or criticisms in a professional and constructive manner to foster informed discussions around AI technologies.



Sam Altman: Bots Are Making Social Media Feel ‘Artificial’

X Shareholder Sam Altman’s Revelatory Insights on Bot Influence in Social Media

X shareholder and AI enthusiast Sam Altman recently had a realization: Bots are making it increasingly difficult to identify whether social media content is authored by real humans. He shared his thoughts on this phenomenon in a recent post.

The Epiphany from r/Claudecode Subreddit

Altman’s revelation emerged while he was engaging with posts from the r/Claudecode subreddit, where users were expressing their support for OpenAI Codex. This service, launched in May, competes with Anthropic’s Claude Code.

A Flood of Codex Users on Reddit

Recently, the subreddit has been inundated with announcements from self-identified users migrating to Codex. One user even humorously questioned, “Is it possible to switch to Codex without posting about it on Reddit?”

Are We Reading Bot-Generated Content?

Altman pondered how many of these posts were genuinely from humans. “I have had the strangest experience reading this: I assume it’s all fake/bots, even though I know the growth trend for Codex is real,” he wrote on X.

Human Behavior Mirrors AI Language Models

He elaborated on his thoughts: “Real people have picked up quirks of LLM-speak… The Extremely Online crowd behaves in correlated ways, driven by engagement optimization and creator monetization, and there’s always the possibility of bots,” he explained.

The Paradox of Mimicking Communication

Essentially, he suggests that humans are beginning to adopt the speech patterns of LLMs. Ironically, these language models, developed by OpenAI, were designed to replicate human communication.

Fandom Dynamics and Social Media Behavior

Altman accurately points out that fandoms led by hyperactive social media users can develop unhealthy dynamics, often devolving into negativity. The pressure to engage can create distorted perceptions.

Implications of Astroturfing and Engagement Motives

He further speculates that many pro-OpenAI posts may be the result of astroturfing, a practice where posts are generated by bots or paid individuals to mislead audiences about public support.

Reddit Reactions to OpenAI’s GPT-5

Although we lack concrete evidence of astroturfing, it’s notable how OpenAI’s subreddits turned critical following the controversial launch of GPT-5, producing many posts from discontented users.

Unraveling User Sentiments: Human or Bot?

During a Reddit AMA, Altman acknowledged rollout challenges and addressed user concerns, yet the GPT subreddit has struggled to regain its former enthusiasm.

The Ongoing Battle Against AI Overload

Altman concluded, “The net effect is that AI-driven platforms now feel much less authentic than they did a couple of years ago.”

Attributing Blame in the Age of AI

As LLMs become adept at mimicking human writing, they pose a challenge not just to social media platforms but also to schools, journalism, and even the legal system.

The Scope of Non-Human Traffic on the Internet

While the precise number of bot-generated or LLM-influenced Reddit posts remains uncertain, sources indicate that over half of internet traffic is now non-human, largely due to LLMs.

Speculating on Altman’s Intentions

Some skeptics believe Altman’s observations may serve as a strategic marketing move for OpenAI’s anticipated social media platform, purportedly in development to rival X and Facebook.

The Dilemma of Bots in Future Social Networks

If OpenAI goes ahead with a new social media network, the question arises: Can it remain free of bots? Interestingly, research shows even entirely bot-operated networks can develop their own echo chambers.

Five FAQs on Sam Altman’s statement that bots are making social media feel "fake":

FAQ 1: What did Sam Altman say about bots on social media?

Answer: Sam Altman expressed concern that the prevalence of bots on social media platforms is creating an inauthentic environment, making interactions feel less genuine and contributing to a perception of "fakeness" in online communities.

FAQ 2: How do bots on social media affect user experience?

Answer: Bots can affect user experience by flooding feeds with automated posts, manipulating trends, and creating artificial engagement. This can lead to a lack of trust in content and discourage genuine interactions among users.

FAQ 3: What implications do bots have for the authenticity of online conversations?

Answer: The presence of bots can skew discussions by amplifying certain viewpoints, spreading misinformation, and drowning out authentic voices. This can lead to a distorted understanding of public opinion and reduce the overall quality of online discourse.

FAQ 4: Are there any steps being taken to address the issue of bots on social media?

Answer: Many social media platforms are implementing measures to identify and reduce bot activity, such as enhancing verification processes, using AI to detect suspicious behavior, and promoting transparency about account origins and engagements.

FAQ 5: What can users do to navigate a social media landscape influenced by bots?

Answer: Users can be more discerning about the content they engage with, verify sources before sharing information, and report suspicious accounts. Being critical of interactions and seeking out genuine voices can help foster a more authentic online experience.


AI Social Learning: How Large Language Models are Teaching Each Other

The emergence of ChatGPT from OpenAI in 2022 has highlighted the importance of large language models (LLMs) in the field of artificial intelligence, particularly in natural language processing (NLP). These LLMs, designed to process and generate human-like text, have the potential to revolutionize AI by learning from a wide range of internet texts, allowing them to act as general-purpose problem solvers.

However, the process of fine-tuning these models for specific applications poses its own challenges, such as the need for labeled data, the risk of model drift and overfitting, and the requirement for significant resources. To address these challenges, Google researchers have introduced the concept of social learning, where AI systems can learn from interacting with each other, similar to human social learning. This interaction helps the models improve their effectiveness by sharing knowledge and experiences.

Social learning in AI draws on social learning theory, proposed by Albert Bandura in the 1970s, which holds that individuals learn by observing others. In the context of AI, it enables models to learn not only from direct experience but also from the actions of their peers, leading to faster skill acquisition and potentially to the development of their own “culture” of shared knowledge.

One key aspect of social learning in LLMs is the exchange of knowledge without sharing sensitive information. Researchers have adopted a teacher-student dynamic, where teacher models guide student models without revealing confidential details. By generating synthetic examples and providing directions, teacher models help student models learn specific tasks without accessing the original data. This approach promotes efficient learning while preserving privacy, showcasing the potential for LLMs to adapt and learn dynamically.
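The teacher–student flow described above can be sketched in a few lines. The code below is an illustrative toy, not Google’s implementation: `teacher_generate_synthetic` and `student_few_shot_classify` are stand-ins for real LLM calls, and the names, templates, and labels are all invented for the example. The point is only the data flow, in which the teacher sees the private examples while the student sees nothing but synthetic ones.

```python
# Toy sketch of social learning via a teacher-student dynamic.
# The "models" are plain functions standing in for LLM calls.

from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: str

# Private data the student must never see directly.
PRIVATE_DATA = [
    Example("Reset my password please", "account"),
    Example("Card was charged twice", "billing"),
]

def teacher_generate_synthetic(private: list[Example], n: int) -> list[Example]:
    """Stand-in for a teacher LLM: emit fresh examples that mimic the task
    distribution without copying any private text verbatim."""
    templates = {
        "account": "How do I change my login details?",
        "billing": "Why does my invoice show an extra charge?",
    }
    labels = [e.label for e in private]
    return [Example(templates[labels[i % len(labels)]], labels[i % len(labels)])
            for i in range(n)]

def student_few_shot_classify(demos: list[Example], query: str) -> str:
    """Stand-in for a student LLM prompted with the synthetic demos:
    here, a trivial word-overlap match against the demo texts."""
    best, best_score = demos[0].label, -1
    for demo in demos:
        overlap = len(set(query.lower().split()) & set(demo.text.lower().split()))
        if overlap > best_score:
            best, best_score = demo.label, overlap
    return best

demos = teacher_generate_synthetic(PRIVATE_DATA, n=4)
# Privacy check: no private text leaked into the synthetic set.
assert all(d.text not in {e.text for e in PRIVATE_DATA} for d in demos)
print(student_few_shot_classify(demos, "extra charge on my invoice"))  # prints: billing
```

The design choice to show: the student acquires the task purely from teacher-generated examples, so the original labeled data never crosses the boundary between models.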

Social learning offers several advantages in addressing the challenges of fine-tuning LLMs:

– Less Need for Labeled Data: By learning from synthetic examples, models reduce their reliance on labeled data.
– Avoiding Over-specialization: Exposing models to a wider range of examples helps them avoid becoming too specialized.
– Reducing Overfitting: Social learning broadens the learning experience, improving generalization and reducing overfitting.
– Saving Resources: Models can learn from each other’s experiences without requiring direct access to large datasets, making resource usage more efficient.

The potential for social learning in LLMs also opens up exciting avenues for future AI research:

– Hybrid AI Cultures: Investigating the emergence of common methodologies among LLMs and their impact on human interactions.
– Cross-Modality Learning: Extending social learning beyond text to include images, sounds, and more for a richer understanding of the world.
– Decentralized Learning: Exploring AI models learning from each other across a decentralized network to scale up knowledge sharing.
– Human-AI Interaction: Examining ways in which humans and AI can benefit from social learning in educational and collaborative settings.
– Ethical AI Development: Teaching AI to address ethical dilemmas through social learning for more responsible AI.
– Self-Improving Systems: Creating an ecosystem where AI models continuously learn and improve from each other’s experiences for accelerated innovation.
– Privacy in Learning: Ensuring the privacy of underlying data while enabling knowledge transfer through sophisticated methods.

In conclusion, Google researchers have introduced social learning among LLMs to enhance knowledge sharing and skill acquisition without compromising sensitive data. This innovative approach addresses key challenges in AI development and paves the way for more collaborative, versatile, and ethical AI systems. The future of artificial intelligence research and application is set to be reshaped by the potential of social learning.

## FAQs about AI Learns from AI: The Emergence of Social Learning Among Large Language Models

### What is social learning in AI?

– Social learning in AI refers to the process by which large language models, such as GPT-3, interact with and learn from each other to improve their performance and capabilities.

### How do large language models like GPT-3 interact with each other for social learning?

– Large language models like GPT-3 interact with each other through the exchange of data and algorithms. They can share information, insights, and strategies to collectively improve their understanding and performance.

### What are the benefits of social learning among large language models?

– The benefits of social learning among large language models include faster learning and adaptation to new tasks, improved generalization capabilities, and enhanced robustness to adversarial attacks.

### Can social learning among large language models lead to ethical concerns?

– Yes, social learning among large language models can raise ethical concerns related to data privacy, bias amplification, and unintended consequences. It is essential to monitor and regulate these interactions to mitigate potential risks.

### How can organizations leverage social learning among large language models for business applications?

– Organizations can leverage social learning among large language models for various business applications, such as natural language processing, content generation, and customer interactions. By harnessing the collective intelligence of these models, businesses can enhance their AI capabilities and deliver more sophisticated products and services.