TikTok Introduces Option to Control AI-Generated Content Visibility

TikTok is moving beyond purely user-generated content with a new feature that lets users control how much AI-generated content appears in their “For You” feed. The update also includes new labeling technology for greater transparency around AI-generated content.

New AI Content Control in the “Manage Topics” Tool

The AI-generated content (AIGC) control will be integrated into TikTok’s “Manage Topics” feature, allowing users to select what content they wish to view.

Tailoring Your Feed: Adjusting Content Preferences

According to TikTok, “Manage Topics allows users to customize the frequency of content across more than 10 categories such as Dance, Sports, and Food & Drinks.” The AIGC control aims to rebalance feeds without completely removing any type of content.

Industry Trends: The Rise of AI-Only Feeds

This update comes in response to competitors like OpenAI and Meta, both of which have launched AI-centric platforms. Meta introduced Vibes, a feed for short AI-generated videos, while OpenAI quickly followed with Sora, a new social media app.

Creative Uses of AI on TikTok

Following Sora’s launch, TikTok has seen a surge in realistic AI-generated videos, with users producing AI visuals on topics ranging from history to celebrities.

Adjust Your Content Preferences with Ease

Users can easily access this feature by navigating to Settings, selecting “Content Preferences,” and using the “Manage Topics” option to adjust their interest in AI-generated content.

Upcoming Rollout and Advanced AI Labeling Technology

TikTok plans to roll out these changes in the coming weeks. The company is also testing a technology it calls “invisible watermarking” to improve the labeling of AI-generated content.

The Importance of Reliable Content Labeling

Currently, TikTok requires users to label AI-generated videos and employs a cross-industry technology called Content Credentials, which attaches identifying metadata to content. However, those labels can be altered or stripped when content is shared elsewhere.

New Watermarking Technology for Enhanced Security

The forthcoming invisible watermarks will provide an extra layer of security, making it more difficult for users to remove identification from AI content created with TikTok’s in-app tools. This will bolster the platform’s ability to accurately categorize and label AI-generated content.
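TikTok has not published how its invisible watermarking works. The general idea, though, can be illustrated with a classic least-significant-bit scheme, in which a bit pattern is hidden directly in pixel values rather than in strippable metadata. A minimal sketch (the function names and the tiny grayscale "image" are illustrative, not TikTok's actual method):

```python
def embed_watermark(pixels, bits):
    """Hide a bit string in the least significant bit of each pixel,
    so the mark travels with the pixel data itself rather than with
    removable metadata."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the LSB with a watermark bit
    return out

def extract_watermark(pixels, length):
    """Read the hidden bit string back out of the LSBs."""
    return [p & 1 for p in pixels[:length]]

# A tiny 8-pixel grayscale "image" carrying the watermark 1,0,1,1
image = [200, 201, 198, 50, 51, 52, 53, 54]
marked = embed_watermark(image, [1, 0, 1, 1])
assert extract_watermark(marked, 4) == [1, 0, 1, 1]
# Each pixel changes by at most 1, so the mark is imperceptible
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```

Because the mark lives in the pixel data, it survives re-uploads that discard metadata, which is precisely the weakness of label-based approaches described above. Production systems use far more robust schemes that survive compression and cropping.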

A $2 Million Fund for AI Literacy Initiatives

Alongside these improvements, TikTok has announced a $2 million AI literacy fund to support organizations such as the nonprofit Girls Who Code in educating the public about AI safety and literacy.

Here are five FAQs about TikTok’s new feature that allows users to choose how much AI-generated content they want to see:

FAQ 1: How does TikTok’s new AI content feature work?

Answer: TikTok now allows users to customize their experience by choosing how much AI-generated content they’d like to see. Users can adjust settings in their preferences to either increase or decrease the amount of AI-generated posts in their feed, giving them more control over their viewing experience.

FAQ 2: Why did TikTok introduce the option for AI-generated content?

Answer: TikTok introduced this feature to enhance user experience and cater to individual preferences. By allowing users to choose their level of AI-generated content, TikTok aims to create a more personalized feed, ensuring that users engage with content that resonates with them.

FAQ 3: How can I adjust my settings for AI-generated content on TikTok?

Answer: To adjust your AI content settings, go to Settings, tap “Content Preferences,” and use the “Manage Topics” option to set how much AI-generated content you want to see in your feed.

FAQ 4: Will adjusting my AI content settings affect my overall TikTok experience?

Answer: Yes, adjusting your AI content settings will influence the types of videos that appear in your feed. By customizing these settings, you can enhance the relevance of the content you see, allowing for a more enjoyable and tailored TikTok experience.

FAQ 5: Is AI-generated content clearly labeled on TikTok?

Answer: Yes. TikTok requires users to label AI-generated videos and attaches Content Credentials metadata, and it is testing invisible watermarking to make those labels harder to remove. Clear labeling lets users make informed choices about the content they engage with.

Studio Ghibli and Other Japanese Publishers Urge OpenAI to Cease Training on Their Content

Japanese Publishers Demand OpenAI Halt Unauthorized Use of Copyrighted Content

The Content Overseas Distribution Association (CODA), representing prominent publishers such as Studio Ghibli, has sent a letter to OpenAI, urging the tech giant to cease training its AI models on copyrighted materials without explicit consent.

Studio Ghibli’s Position in the AI Landscape

Studio Ghibli, renowned for classics like “Spirited Away” and “My Neighbor Totoro,” has faced significant repercussions from OpenAI’s AI capabilities. Following the launch of ChatGPT’s image generator in March, users began requesting AI re-creations of personal photos in the distinctive style of Ghibli films. OpenAI CEO Sam Altman even altered his profile picture on X to sport a “Ghiblified” version.

CODA’s Formal Request Amid Growing Concerns

As access to OpenAI’s Sora app expands, CODA has formally called on OpenAI to refrain from utilizing its members’ content for AI training without prior agreement.

OpenAI’s History of Handling Copyrighted Content

This request reflects ongoing frustration with OpenAI’s tendency to ask forgiveness rather than permission when it comes to copyrighted material, an approach that has led to widespread user-generated creations involving copyrighted characters and deceased celebrities. Rights holders including Nintendo and the estate of Dr. Martin Luther King, Jr. have raised concerns about the practice.

The Legal Landscape and Copyright Implications

Ultimately, it will be OpenAI’s decision to comply with these requests. If the company chooses not to, affected parties may resort to litigation. However, U.S. copyright law offers limited clarity regarding the use of copyrighted materials for AI training.

With few legal precedents, judges are navigating uncharted waters on copyright interpretation, especially since the law has not been significantly updated since 1976. In one recent case, U.S. federal judge William Alsup ruled that Anthropic did not break the law by training its AI on copyrighted texts, although the company was penalized for using pirated copies.

In contrast, CODA asserts that such practices could qualify as copyright infringement under Japanese law.

Japanese Copyright Law and AI Training

CODA argues that for AI applications like Sora 2, reproducing specific copyrighted works may constitute copyright infringement. Under Japan’s copyright system, prior permission is generally required before copyrighted content is used, so there is no mechanism for avoiding liability by addressing objections after the fact.

Hayao Miyazaki’s Stance on AI Interpretations

Hayao Miyazaki, co-founder of Studio Ghibli, has yet to publicly address the rise of AI-generated content inspired by his work. However, his previous response to seeing AI-generated 3D animation was one of profound disapproval; he stated in 2016 that he was “utterly disgusted,” emphasizing, “I can’t watch this stuff and find it interesting. I feel strongly that this is an insult to life itself.”

Here are five FAQs regarding the situation with Studio Ghibli and other Japanese publishers wanting OpenAI to stop training on their work:

FAQ 1: Why are Studio Ghibli and other Japanese publishers asking OpenAI to stop training on their works?

Answer: Studio Ghibli and other Japanese publishers are concerned about the use of their intellectual property in developing AI models. They want to protect their creative works from unauthorized use and ensure that their content is not exploited without proper consent or compensation.

FAQ 2: What specific works are these publishers concerned about?

Answer: The concerns revolve around a range of intellectual properties, including animated films, characters, and other creative works produced by these publishers. This encompasses both popular titles from Studio Ghibli and other regional media that embody unique storytelling and artistry.

FAQ 3: How might this request impact AI development?

Answer: If OpenAI were to comply with this request, it could limit the training data available for AI models, potentially reducing the models’ exposure to certain cultural nuances and storytelling techniques found in Japanese media. This could affect the richness and diversity of AI-generated content.

FAQ 4: Are there any legal implications for OpenAI regarding this request?

Answer: Yes, if these publishers choose to pursue legal action, there could be significant implications. Copyright laws protect their works, and unauthorized use in AI training could be deemed an infringement, which might lead to lawsuits or demands for licensing agreements.

FAQ 5: How can fans of Studio Ghibli and Japanese media respond to this situation?

Answer: Fans can support the creators by advocating for fair compensation and recognition of intellectual property rights. Engaging in discussions about copyright laws, attending events, and supporting official merchandise from these publishers can also help raise awareness of the importance of protecting creative works.

The Dangers of AI Built on AI-Generated Content: When Artificial Intelligence Turns Toxic

In the fast-evolving landscape of generative AI technology, the rise of AI-generated content has been both a boon and a bane. While it enriches AI development with diverse datasets, it also brings about significant risks like data contamination, data poisoning, model collapse, echo chambers, and compromised content quality. These threats can lead to severe consequences, ranging from inaccurate medical diagnoses to compromised security.

Generative AI: Dual Edges of Innovation and Deception

The availability of generative AI tools has empowered creativity but also opened avenues for misuse, such as creating deepfake videos and deceptive texts. This misuse can fuel cyberbullying, spread false information, and facilitate phishing schemes. Moreover, AI-generated content can significantly impact the integrity of AI systems, leading to biased decisions and unintentional leaks.

Data Poisoning

Malicious actors can corrupt AI models by injecting false information into training datasets, leading to inaccurate decisions and biases. This can have severe repercussions in critical fields like healthcare and finance.
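A toy example makes the mechanism concrete. Below, a deliberately simple nearest-centroid classifier (both the classifier and the poisoned points are invented for illustration) shows how a handful of mislabeled injections can flip a prediction:

```python
def centroid(points):
    """Mean of a list of 1-D training values."""
    return sum(points) / len(points)

def train_and_predict(data, x):
    """Fit per-class centroids, then label x by the nearest centroid."""
    c0 = centroid([v for v, label in data if label == 0])
    c1 = centroid([v for v, label in data if label == 1])
    return 1 if abs(x - c1) < abs(x - c0) else 0

# Clean 1-D training data: class 0 clusters near 0, class 1 near 10
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]

# An attacker injects far-out points mislabeled as class 0,
# dragging that class's centroid away from its true cluster
poisoned = clean + [(30.0, 0), (32.0, 0), (34.0, 0)]

print(train_and_predict(clean, 2.5))     # → 0 (correct: near the class-0 cluster)
print(train_and_predict(poisoned, 2.5))  # → 1 (poisoning shifted the decision)
```

Three bad points out of nine were enough to corrupt the model's decision boundary; in high-stakes domains like healthcare or finance, the same effect at scale produces systematically wrong decisions.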

Model Collapse

Training AI models on datasets that contain AI-generated content can cause them to favor synthetic data patterns; over successive generations this narrows the distribution the model learns and degrades performance on real-world data.
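The generational dynamic can be sketched with a toy experiment, in which each "generation" of training data is a bootstrap resample of the previous generation's data (a deliberate stand-in for the generate-then-retrain loop). Diversity can never increase and, in practice, steadily collapses:

```python
import random

random.seed(0)

# Generation 0: 100 distinct "real world" data points
data = list(range(100))
diversity = [len(set(data))]

# Each generation trains only on samples of the previous generation's
# output; a bootstrap resample stands in for generate-then-retrain.
for _ in range(10):
    data = [random.choice(data) for _ in data]
    diversity.append(len(set(data)))

# Unique values never increase across generations and shrink in
# practice, mirroring how synthetic-only training narrows outputs.
print(diversity[0], "->", diversity[-1])
assert diversity[-1] < diversity[0]
```

Each resampling step can only repeat existing values, never reintroduce lost ones, which is why recovery requires injecting fresh real-world data rather than more synthetic samples.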

Echo Chambers and Degradation of Content Quality

Training AI models on biased data can create echo chambers, limiting users’ exposure to diverse viewpoints and decreasing the overall quality of information.
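A hypothetical recommender (invented here for illustration) makes the feedback loop concrete: serving only the items closest to a user's past engagement quickly freezes their exposure to a narrow slice of the catalog:

```python
def recommend(history, catalog, k=3):
    """Serve the k catalog topics nearest the user's average
    engagement so far -- a caricature of similarity-based ranking."""
    center = sum(history) / len(history)
    return sorted(catalog, key=lambda topic: abs(topic - center))[:k]

catalog = list(range(10))   # topics 0..9
history = [4, 6]            # the user starts with moderately varied tastes
seen = set(history)

# The user engages with whatever is served, which feeds the next round
for _ in range(5):
    served = recommend(history, catalog)
    history.extend(served)
    seen.update(served)

print(sorted(seen))  # → [4, 5, 6]: exposure is stuck in a narrow band
```

After five rounds the user has seen only 3 of 10 topics, and nothing in the loop can ever widen that window; real systems counteract this with explicit exploration or diversity terms in the ranking objective.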

Implementing Preventative Measures

To safeguard AI models against data contamination, strategies like robust data verification, anomaly detection algorithms, diverse training data sources, continuous monitoring, transparency, and ethical AI practices are crucial.
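As one example of the anomaly-detection strategies listed above, a simple z-score filter can hold suspicious points back for review before they reach training. This is a minimal sketch, not any platform's actual pipeline:

```python
def zscore_filter(values, threshold=3.0):
    """Split values into (clean, flagged), flagging points more than
    `threshold` standard deviations from the mean as anomalies."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    clean, flagged = [], []
    for v in values:
        (flagged if abs(v - mean) > threshold * std else clean).append(v)
    return clean, flagged

samples = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]  # one injected outlier
clean, flagged = zscore_filter(samples, threshold=2.0)
print(flagged)  # → [55.0]: the suspicious point is held back for review
```

Simple statistical screens like this catch crude injections; subtler poisoning attacks that stay within the data's normal range require the complementary measures listed above, such as provenance checks and continuous monitoring.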

Looking Forward

Addressing the challenges of AI-generated content requires a strategic approach that blends best practices with data integrity mechanisms, anomaly detection, and ethical guidelines. Regulatory frameworks like the EU’s AI Act aim to ensure responsible AI use.

The Bottom Line

As generative AI evolves, balancing innovation with data integrity is paramount. Preventative measures like stringent verification and ethical practices are essential to maintain the reliability of AI systems. Transparency and understanding AI processes are key to shaping a responsible future for generative AI.

FAQ

Can AI-generated content be harmful?

– Yes, AI-generated content can be harmful if used irresponsibly or maliciously. It can spread misinformation, manipulate public opinion, and even be used to generate fake news.

How can AI poison other AI systems?

– AI can poison other AI systems by injecting faulty data or misleading information into their training datasets. This can lead to biased or incorrect predictions and decisions made by AI systems.

What are some risks of building AI on AI-generated content?

– Some risks of building AI on AI-generated content include perpetuating biases present in the training data, lowering the overall quality of the AI system, and potentially creating a feedback loop of misinformation. It can also lead to a lack of accountability and transparency in AI systems.