Manny Medina’s AI Agent Startup, Paid, Secures $21.6M Seed Funding for Results-Based Billing

Manny Medina’s New Venture Paid Secures $21.6 Million Seed Round

Manny Medina, the visionary behind the $4.4 billion sales automation platform Outreach, has captivated investors with his latest startup, Paid.

Successful Seed Round Boosts Company’s Valuation

Paid has successfully closed an oversubscribed $21.6 million seed funding round led by Lightspeed. Coupled with a €10 million pre-seed round raised in March, the London-based startup has accumulated a remarkable $33.3 million before even reaching its Series A stage. Sources indicate that Paid’s valuation now exceeds $100 million.

Innovative Approach in the AI Landscape

Emerging from stealth mode in March, Paid presents a unique contribution to the AI ecosystem. Rather than offering agents directly, the company empowers agent developers to charge clients based on the tangible value provided by their algorithms. This concept, often referred to as “results-based billing,” is gaining traction in the AI space.

A Revolutionary Pricing Model for AI

Medina emphasizes that Paid enables agent developers to monetize the margin savings delivered to their clients. This innovative pricing model marks a departure from traditional software fees, moving away from the per-user pricing structures prevalent in the SaaS era.

Why Traditional Payment Models Fall Short

Conventional per-user fees fall short because agent developers incur variable usage costs from both model providers and cloud services. Without a pricing strategy that tracks those costs, the underlying economics can become unsustainable, a squeeze already familiar to startups in the AI coding space.
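A back-of-the-envelope sketch makes the squeeze concrete. All figures below are hypothetical, chosen only to illustrate the two pricing models; they are not numbers from Paid or any real vendor:

```python
# Hypothetical comparison of per-seat vs. results-based pricing for an
# AI agent vendor. Every number here is an illustrative assumption.

def per_seat_margin(seats, fee_per_seat, tasks_per_seat, cost_per_task):
    """Margin under flat per-user pricing: revenue is fixed per seat,
    but inference costs scale with how heavily each seat uses the agent."""
    revenue = seats * fee_per_seat
    cost = seats * tasks_per_seat * cost_per_task
    return revenue - cost

def results_based_margin(outcomes, savings_per_outcome, take_rate,
                         tasks_per_outcome, cost_per_task):
    """Margin under results-based billing: the vendor keeps a share
    (take_rate) of the measured savings each successful outcome delivers."""
    revenue = outcomes * savings_per_outcome * take_rate
    cost = outcomes * tasks_per_outcome * cost_per_task
    return revenue - cost

# A heavy-usage customer turns per-seat pricing unprofitable...
print(per_seat_margin(seats=10, fee_per_seat=50,
                      tasks_per_seat=2000, cost_per_task=0.0625))  # -750.0
# ...while results-based revenue scales with the value delivered.
print(results_based_margin(outcomes=200, savings_per_outcome=25, take_rate=0.25,
                           tasks_per_outcome=10, cost_per_task=0.0625))  # 1125.0
```

The point of the sketch: under per-seat pricing, revenue is capped while costs grow with usage, whereas outcome-linked revenue grows with the same activity that drives costs.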

Measuring Value in a Quiet AI Workforce

Medina notes that “if you’re a quiet agent, you don’t get paid.” Effective infrastructure is crucial for agents to be compensated for their contributions. As agents operate in the background, demonstrating their effectiveness becomes essential for securing their continued engagement.

The Risks of Traditional Billing and Market Hesitation

Adopting a monthly fee for a limited number of credits poses significant risk to agent developers. Many businesses hesitate to invest in AI solutions that yield minimal value. A recent MIT study revealed that approximately 95% of enterprise AI projects fail to produce tangible benefits, with only 5% making it to production.

Driving Engagement with Effective AI Solutions

Businesses are reluctant to pay for agents whose output is simply more emails that go unread.

Early Adoption and Success Stories

One of Paid’s initial clients is Artisan, a popular sales automation startup. Artisan’s CEO, Jaspar Carmichael-Jack, will be discussing these developments at TechCrunch Disrupt next month.

Paid is also gaining traction among SaaS companies eager to leverage agents for growth, having recently signed ERP vendor IFS as a client.

Lightspeed’s Confidence in Paid’s Vision

Alexander Schmitt from Lightspeed shared that the firm has invested over $2.5 billion in AI infrastructure and application layers over the past three years, observing firsthand the high failure rates of AI pilots. He believes the crux of the issue lies in the inability to attribute value to agents’ contributions.

A Unique Market Positioning with Future Potential

Schmitt perceives Paid as a distinctive player in the market, highlighting its innovative approach as unprecedented in the industry. As Paid’s model gains traction, increased competition in results-based billing for agents could stimulate a significant shift in how AI solutions are utilized.

New investor FUSE, along with existing investor EQT Ventures, also participated in this latest funding round.

Here are five FAQs regarding Manny Medina’s startup, Paid, which uses a results-based billing model and recently raised $21.6 million in seed funding:

FAQ 1: What is Paid’s business model?

Answer: Paid operates on a results-based billing model, meaning clients only pay for tangible outcomes achieved through the services provided. This aligns the company’s incentives with the success of its clients, creating a win-win scenario.

FAQ 2: Who is the founder of Paid and what is their background?

Answer: Paid was founded by Manny Medina, an entrepreneur with a proven track record in the tech industry. Prior to launching Paid, Medina was involved in several successful startups and has expertise in leveraging AI for business solutions.

FAQ 3: How much funding has Paid recently raised?

Answer: Paid has raised $21.6 million in seed funding, which will be used to enhance its technology, expand its team, and further develop its results-based billing services.

FAQ 4: What industries can benefit from Paid’s services?

Answer: Paid’s results-based billing approach can benefit various industries, particularly those that rely heavily on measurable outcomes, such as marketing, sales, and customer service. Its services can be tailored to meet the specific needs of different sectors.

FAQ 5: How does Paid ensure the quality of its results?

Answer: Paid employs robust analytical tools and AI technologies to track performance and outcomes effectively. By focusing on data-driven results, the company ensures it delivers value to clients while maintaining accountability for the services rendered.


Beware of Coworkers Who Generate AI-Driven ‘Workslop’

Unveiling “Workslop”: The Dangers of Low-Quality AI-Generated Content

A recent study by BetterUp Labs in partnership with the Stanford Social Media Lab introduces a concerning new term: “workslop.”

What is Workslop?

According to a revealing article published in the Harvard Business Review, workslop refers to “AI-generated work content that pretends to be high quality but lacks the substance needed to effectively complete a task.”

The Impact of Workslop on Organizations

Researchers from BetterUp Labs point to workslop as one factor behind the roughly 95% of organizations that have experimented with AI yet report no return on their investment. They note that workslop can be “unhelpful, incomplete, or lack essential context,” leading to increased workloads for employees.

The Hidden Burden of Workslop

The researchers highlight the deeper issue of workslop by explaining, “Its insidious nature shifts the burden downstream, demanding that the recipient interpret, correct, or completely redo the work.”

Prevalence of Workslop Among Employees

In a survey conducted among 1,150 full-time U.S.-based employees, researchers found that 40% of respondents reported encountering workslop in the past month, underscoring the issue’s widespread nature.

How to Combat Workslop in the Workplace

To mitigate the effects of workslop, researchers recommend that workplace leaders “model purposeful and intentional AI use” and “establish clear guidelines for teams regarding acceptable practices.”

Here are five FAQs regarding the concept of "workslop" generated by AI:

FAQ 1: What is "workslop"?

Q: What does the term "workslop" refer to in the context of AI-generated content?
A: "Workslop" refers to low-quality or subpar output produced by AI tools, often lacking depth, accuracy, or relevance. This content can result from poor prompts or minimal human oversight.

FAQ 2: How can I identify AI-generated workslop in my team’s output?

Q: What are some signs that indicate a coworker’s work might be AI-generated "workslop"?
A: Look for generic responses, lack of specific detail, inconsistent style, and factual inaccuracies. Additionally, if the content feels overly formulaic or lacks a personal touch, it might be AI-generated.

FAQ 3: What are the risks of relying on AI-generated workslop?

Q: Why is it important to be cautious of AI-generated workslop in a professional setting?
A: Relying on workslop can lead to misleading information, decreased team productivity, and potential damage to an organization’s reputation. It may also undermine the value of human creativity and critical thinking.

FAQ 4: How can I improve the quality of AI-generated work?

Q: What steps can I take to ensure that AI-generated content is of higher quality?
A: Provide clear and specific prompts, review and edit the output for accuracy and relevancy, and combine AI-generated content with human insights. Collaboration with AI should enhance rather than replace human contribution.

FAQ 5: What should I do if I encounter workslop from a coworker?

Q: How should I address the issue if I notice a coworker consistently produces AI-generated workslop?
A: Approach the situation with constructive feedback. Encourage open discussions about the importance of quality in work and suggest resources for improving AI usage. Promote a culture of collaboration and learning to elevate overall standards.


What’s Driving the Headlines on Massive AI Data Centers?

<div>
    <h2>Silicon Valley's AI Infrastructure Investment Surge: What You Need to Know</h2>

    <p id="speakable-summary" class="wp-block-paragraph">This week, Silicon Valley dominated the news with jaw-dropping investments in AI infrastructure.</p>

    <h3>Nvidia's Massive Commitment to OpenAI</h3>
    <p class="wp-block-paragraph">Nvidia announced plans to <a target="_blank" href="https://techcrunch.com/2025/09/22/nvidia-plans-to-invest-up-to-100b-in-openai/">invest up to $100 billion in OpenAI</a>. This investment marks a significant leap in AI capabilities, with the potential to reshape the industry landscape.</p>

    <h3>OpenAI's Expansion with New Data Centers</h3>
    <p class="wp-block-paragraph">In response, OpenAI revealed plans for <a target="_blank" href="https://techcrunch.com/2025/09/23/openai-is-building-five-new-stargate-data-centers-with-oracle-and-softbank/">five new Stargate AI data centers</a> in collaboration with Oracle and SoftBank, set to vastly increase their processing capacity over the coming years. To fund this ambitious project, Oracle disclosed it <a target="_blank" href="https://techcrunch.com/2025/09/24/oracle-is-reportedly-looking-to-raise-15b-in-corporate-bond-sale/">raised $18 billion in bonds</a>.</p>

    <h3>The Bigger Picture: A Race for AI Capability</h3>
    <p class="wp-block-paragraph">Individually, these deals are remarkable, but collectively, they illustrate Silicon Valley’s relentless drive to equip OpenAI with the necessary resources to train and deploy advanced versions of ChatGPT.</p>

    <h3>Deep Dive on AI Infrastructure Deals</h3>
    <p class="wp-block-paragraph">On this week’s episode of <a target="_blank" href="https://techcrunch.com/podcasts/equity/">Equity</a>, Anthony Ha and I (Max Zeff) explore the real implications behind these substantial AI infrastructure investments.</p>

    <p>
        <iframe loading="lazy" class="tcembed-iframe tcembed--megaphone wp-block-tc23-podcast-player__embed" height="200px" width="100%" frameborder="no" scrolling="no" seamless="" src="https://playlist.megaphone.fm?e=TCML4042279995"></iframe>
    </p>

    <h3>OpenAI's Innovative New Feature: Pulse</h3>
    <p class="wp-block-paragraph">In a timely move, OpenAI launched <a target="_blank" href="https://techcrunch.com/2025/09/25/openai-launches-chatgpt-pulse-to-proactively-write-you-morning-briefs/">Pulse</a>, an intelligent feature in ChatGPT designed to deliver personalized morning briefings to users. This functionality operates independently, offering a morning news experience without user posts or advertisements—at least for now.</p>

    <h3>Capacity Challenges for OpenAI Users</h3>
    <p class="wp-block-paragraph">While OpenAI aims to broaden access to these innovative features, they are currently constrained by server capacity. Presently, Pulse is exclusively available to Pro subscribers for $200 a month.</p>

    <div class="wp-block-techcrunch-inline-cta">
        <div class="inline-cta__wrapper">
            <p>Join Us at the TechCrunch Event</p>
            <div class="inline-cta__content">
                <p>
                    <span class="inline-cta__location">San Francisco</span>
                    <span class="inline-cta__separator">|</span>
                    <span class="inline-cta__date">October 27-29, 2025</span>
                </p>
            </div>
        </div>
    </div>

    <h3>The Big Question: Are These Investments Justified?</h3>
    <p class="wp-block-paragraph">As the debate simmers, can features like Pulse truly justify the hundreds of billions being funneled into AI data centers? While Pulse is intriguing, the stakes are exceptionally high.</p>

    <h3>Stay Tuned for More Insights</h3>
    <p class="wp-block-paragraph">Tune into the full episode for an in-depth discussion on the monumental AI infrastructure investments shaping Silicon Valley, TikTok's ownership dilemmas, and the policy shifts affecting the biggest tech players.</p>

</div>


Here are five FAQs on what’s behind the massive AI data center headlines:

FAQ 1: What is driving the expansion of AI data centers?

Answer: The rapid growth in artificial intelligence applications, particularly in machine learning and deep learning, has led to an increasing demand for computing power. This expansion is driven by the need for large-scale processing of data, enabling more sophisticated AI models and faster training times.

FAQ 2: How do AI data centers differ from traditional data centers?

Answer: AI data centers are specifically designed to optimize the performance and efficiency of AI workloads. They typically employ specialized hardware, such as GPUs and TPUs, which are more capable of handling the high computational demands of AI tasks compared to traditional servers that often rely on standard CPUs.

FAQ 3: What are the environmental impacts of massive AI data centers?

Answer: The growth of AI data centers raises concerns about energy consumption and carbon footprint. These facilities require substantial amounts of electricity, contributing to greenhouse gas emissions. However, many companies are exploring sustainable practices, such as using renewable energy sources and improving energy efficiency, to mitigate these effects.

FAQ 4: Are there any challenges associated with the rapid development of AI data centers?

Answer: Yes, challenges include the need for significant capital investment, ensuring reliable cooling systems, managing high energy demands, and addressing security concerns. Additionally, there is a scarcity of skilled professionals in AI and data center management, complicating operational efficiency.

FAQ 5: What is the future outlook for AI data centers?

Answer: The future of AI data centers looks promising, with ongoing advancements in technology and architecture expected to further enhance capabilities. As AI continues to integrate into various industries, the demand for more efficient and powerful data centers will likely grow, leading to increased innovation in this space.


Reinventing Within the Box: Aaron Levie’s Insights at Disrupt 2025

Unlocking Innovation: Aaron Levie at TechCrunch Disrupt 2025

Join us at TechCrunch Disrupt 2025, from October 27–29 at Moscone West, San Francisco, where Box CEO Aaron Levie will engage in a thought-provoking discussion on true innovation in public companies, the impact of AI on enterprise software, and the importance of questioning all ideas—including your own.


Image: TechCrunch Disrupt 2025 with Aaron Levie

The Evolution of a Cloud Pioneer

Box emerged before “the cloud” became a household term, enduring through a sea of competitors who couldn’t scale effectively. As both a visionary founder and long-time CEO of a public company, Aaron Levie offers a unique perspective on navigating challenges and leveraging pivots in a fast-paced tech environment.

Why You Can’t Miss Aaron Levie’s Session

Aaron Levie has been a frontrunner in the cloud collaboration space long before it became mainstream, and he continues to raise the bar. This engaging fireside chat will deep-dive into the ingredients required for sustained success—encompassing product development, corporate culture, strategic initiatives, and the right mindset. Whether you’re a startup founder or managing a growing enterprise, you won’t want to miss this insightful session.

Join over 10,000 innovators, founders, and investors at Disrupt this October. Don’t miss out on savings of up to $668, available until September 26 at 11:59 p.m. PT—secure your spot today!


Image: Disrupt 2024 Main Stage. Image Credits: Kimberly White / Getty Images

Here are five FAQs on Aaron Levie and reinvention at Disrupt 2025:

FAQ 1: What is the main focus of Aaron Levie’s talk at Disrupt 2025?

Answer: Aaron Levie’s talk centers on the concept of reinvention in the business landscape. He discusses how companies can adapt to rapid changes in technology and market dynamics, emphasizing the importance of innovative thinking and staying ahead of the curve.

FAQ 2: How does Levie suggest companies approach reinvention?

Answer: Levie advocates for a mindset of continuous learning and experimentation. He encourages organizations to embrace failure as part of the process, fostering a culture where teams feel empowered to explore new ideas and pivot when necessary.

FAQ 3: What role does technology play in the reinvention process, according to Levie?

Answer: Technology is seen as a catalyst for reinvention in Levie’s perspective. He highlights how leveraging emerging technologies can streamline operations, enhance customer experiences, and drive new business models, making it crucial for companies to integrate tech into their strategies.

FAQ 4: Can you summarize any key strategies Levie recommends for leaders in this era of change?

Answer: Levie suggests that leaders should focus on agility and adaptability. Key strategies include fostering collaboration across departments, embracing data-driven decision-making, and investing in employee development to build a resilient workforce capable of navigating change.

FAQ 5: What can attendees expect to take away from Levie’s presentation?

Answer: Attendees can expect to gain actionable insights into fostering a culture of innovation within their organizations. Levie’s experiences and examples will provide a framework for understanding how to successfully navigate the challenges of reinvention in today’s fast-paced business environment.


It’s Not Just in Your Head: Google Cloud Dominates the Landscape

The Game-Changing $100 Billion Nvidia and OpenAI Partnership: What It Means for AI Infrastructure

The $100 billion collaboration between Nvidia and OpenAI, announced this Monday, marks a pivotal shift in the AI infrastructure landscape. Under the agreement, Nvidia takes non-voting shares tied to substantial chip purchases, adding computing capacity that will draw as much power as over 5 million U.S. households, strengthening the ties between two titans of AI technology.

Google Cloud’s Bold Strategy: Attracting the Next Generation of AI Companies

In contrast, Google Cloud is taking a unique route. While major industry players solidify their partnerships, Google is focused on securing the next wave of AI innovators before they grow too large to engage.

The Multi-Faceted Experience of Google Cloud COO Francis deSouza

Francis deSouza, the COO of Google Cloud, offers a multifaceted perspective on the AI revolution. With experience as the former CEO of genomics leader Illumina and as co-founder of the AI alignment startup Synth Labs, he has faced the challenges of managing advanced model safety. Now, as part of Google Cloud’s executive team, he is navigating a significant investment in the next phase of AI development.

Impressive Statistics: Google’s Dominance in AI Infrastructure

DeSouza loves to share compelling figures. In a recent discussion, he emphasized that nine of the top ten AI labs rely on Google’s infrastructure. Additionally, almost all generative AI unicorns utilize Google Cloud, with 60% of global generative AI startups opting for Google as their cloud provider. His announcement of $58 billion in new revenue commitments over the next two years, more than doubling the current annual rate, showcases Google’s growing influence in the sector.

Consolidation in AI Infrastructure: The Nvidia-OpenAI Deal

The Nvidia-OpenAI agreement highlights the consolidation trends reshaping the AI landscape. Microsoft’s initial $1 billion investment in OpenAI has ballooned to nearly $14 billion, while Amazon’s $8 billion input into Anthropic has led to specialized hardware customizations optimizing AI training for its infrastructure. Oracle also emerged as a key player, negotiating a $30 billion cloud deal with OpenAI, plus a staggering $300 billion five-year commitment starting in 2027.

Meta’s Competitive Moves Amid Infrastructure Developments

Even Meta, which is building its own infrastructure, has signed a $10 billion deal with Google Cloud, while planning $600 billion in U.S. infrastructure spending through 2028. The involvement of the Trump administration’s $500 billion “Stargate” project with SoftBank, OpenAI, and Oracle adds another layer of complexity to these partnerships.

Google’s Response: Targeting Startups and Unconventional Partnerships

Despite seeming sidelined in the larger deal-making frenzy, Google is not idle. Google Cloud is securing partnerships with smaller companies like Lovable and Windsurf—identified by deSouza as “primary computing partners”—without making massive upfront investments. This strategy reflects both an opportunity and a necessity, as companies can swiftly escalate from startups to billion-dollar enterprises.

Google Cloud’s Competitive Edge for AI Startups

To enhance its appeal, Google offers AI startups $350,000 in cloud credits, access to technical teams, and go-to-market strategies through its marketplace. The “no compromise” AI stack, featuring everything from chips to models and applications, is designed to empower customers with choice at each level.

Ambitious Expansion of Google’s Custom AI Chip Business

Recently, Google has intensified its efforts to expand its custom AI chip business. Reports indicate the company is negotiating to place its tensor processing units (TPUs) in other cloud providers’ data centers, including a deal with London-based Fluidstack that entails up to $3.2 billion in funding for a New York venture.

Balancing Competition and Collaboration in the AI Landscape

Competing directly with AI firms while providing them with infrastructure requires a nuanced approach. Google Cloud supplies TPU chips to OpenAI and hosts Anthropic’s Claude model via its Vertex AI platform, even while its Gemini models contend with both. Notably, Alphabet holds a 14% stake in Anthropic, termed by deSouza as a “multi-layered partnership.”

Google’s Commitment to Openness in AI Development

Google’s strategy of positioning itself as an open platform aims to foster, rather than stifle, competition. This approach aligns with its history of open-source contributions, from Kubernetes to the pivotal “Attention is All You Need” research that laid the foundation for many modern AI architectures.

Regulatory Scrutiny: Navigating Challenges Ahead

Google Cloud’s initiatives are especially pertinent given recent regulatory scrutiny. A federal ruling on the government’s five-year-old search monopoly case highlighted concerns over Google’s potential dominance in AI due to its extensive search data, prompting fears of monopolistic practices in AI development.

A Vision for a Better Future: Google’s Role in Advancing AI

In conversation, deSouza offers an optimistic outlook. He envisions Google Cloud as a driver of innovation, helping research into Alzheimer’s, Parkinson’s, and climate technologies. “We aim to pioneer technologies that facilitate this crucial work,” he states.

Conclusion: Google Cloud’s Strategic Positioning in a Competitive Landscape

While skepticism remains regarding Google’s motives, its positioning as an open platform that empowers emerging AI innovators may strategically bolster its stance in the face of regulatory pressures.

For our full discussion with deSouza, check out this week’s StrictlyVC Download podcast; new episodes drop every Tuesday.

Here are five FAQs based on the concept of Google Cloud’s extensive growth and presence:

FAQ 1: What does "flooding the zone" mean in the context of Google Cloud?

Answer: "Flooding the zone" refers to Google Cloud’s strategy of saturating the market with its services, products, and partnerships. This involves aggressive marketing, widespread adoption, and integration across various industries to establish a strong foothold in the cloud computing market.

FAQ 2: How is Google Cloud expanding its offerings?

Answer: Google Cloud is continually expanding its offerings by enhancing existing services like machine learning, data analytics, and infrastructure solutions, as well as launching new features. Additionally, they are acquiring complementary businesses and forming strategic partnerships to enhance their capabilities.

FAQ 3: What industries are most impacted by Google Cloud’s expansion?

Answer: Google Cloud’s expansion affects numerous industries, including finance, healthcare, retail, and technology. Its robust solutions cater to various needs, such as data management, application hosting, and cloud security, making it appealing across diverse sectors.

FAQ 4: How does Google Cloud’s strategy benefit businesses?

Answer: Businesses benefit from Google Cloud’s strategy through access to cutting-edge technologies, scalable solutions, and competitive pricing. The emphasis on innovation allows organizations to leverage advanced tools for data analytics, AI, and collaboration, enhancing their operational efficiency and decision-making.

FAQ 5: What are the challenges for competitors in light of Google Cloud’s growth?

Answer: Competitors face challenges such as the need to innovate rapidly, price competition, and the constant pressure to enhance their cloud offerings. Google Cloud’s extensive resources and aggressive market presence make it difficult for other providers to maintain their market share and attract new customers.


OpenAI Partners with Oracle and SoftBank to Construct Five New Stargate Data Centers

OpenAI Expands Horizons: New AI Data Centers to Power Innovation

On Tuesday, OpenAI announced plans to establish five new AI data centers across the United States. In collaboration with partners Oracle and SoftBank, the Stargate project aims to enhance its capacity to 7 gigawatts—sufficient energy to power over 5 million homes.

Strategic Partnerships Boost Expansion

Three of the upcoming data centers are being developed in partnership with Oracle, strategically located in Shackelford County, Texas; Doña Ana County, New Mexico; and an undisclosed spot in the Midwest. Meanwhile, SoftBank is collaborating on two sites in Lordstown, Ohio, and Milam County, Texas.

Fueling AI Innovation with Significant Investments

These new facilities are integral to OpenAI’s ambitious infrastructure expansion, which is focused on training increasingly powerful AI models. Recently, OpenAI revealed a remarkable $100 billion investment from Nvidia, aimed at acquiring advanced AI processors and further developing its network of data centers.

Here are five FAQs regarding OpenAI’s initiative to build five new Stargate data centers in collaboration with Oracle and SoftBank:

FAQ 1: What is the Stargate project?

Answer: The Stargate project refers to OpenAI’s collaboration with Oracle and SoftBank to build five new data centers. This initiative aims to enhance the infrastructure needed for AI development, providing advanced computational resources and improved accessibility for AI applications.

FAQ 2: Why is OpenAI partnering with Oracle and SoftBank?

Answer: OpenAI has partnered with Oracle and SoftBank due to their expertise in cloud infrastructure and telecommunications. This collaboration allows for scalable data processing, security, and global reach, ensuring robust support for AI models and applications.

FAQ 3: Where will these new data centers be located?

Answer: Four of the five Stargate sites have been named: Shackelford County, Texas, and Doña Ana County, New Mexico (in partnership with Oracle), plus Lordstown, Ohio, and Milam County, Texas (in partnership with SoftBank). A third Oracle-partnered site will be in an as-yet-undisclosed location in the Midwest.

FAQ 4: What are the expected benefits of the Stargate data centers?

Answer: The Stargate data centers will provide enhanced computational power, improved data management, increased security, and lower latency for AI applications. This infrastructure will support more complex models and better service delivery for developers and businesses using OpenAI technology.

FAQ 5: When will the Stargate data centers be operational?

Answer: The timeline for the operational launch of the Stargate data centers has not been officially announced. However, OpenAI, Oracle, and SoftBank are committed to accelerating the development process, with updates likely to follow as the project progresses.


OpenAI Introduces Affordable ChatGPT Go Plan in Indonesia Following Launch in India

<div>
    <h2>OpenAI Expands Budget-Friendly ChatGPT Subscription Beyond India</h2>

    <p id="speakable-summary" class="wp-block-paragraph">
        OpenAI is broadening access to its affordable ChatGPT subscription plan, recently launched in India and now making its way to Indonesia. The <a target="_blank" href="https://techcrunch.com/2025/08/18/openai-launches-a-sub-5-chatgpt-plan-in-india/">sub-$5 ChatGPT Go paid plan</a> is available for Indonesian users for Rp75,000 (approximately $4.50) per month.
    </p>

    <h3>Introducing the ChatGPT Go Plan</h3>
    <p class="wp-block-paragraph">
        The ChatGPT Go plan offers a balanced option between OpenAI’s free service and the premium $20 monthly ChatGPT Plus plan. Subscribers enjoy 10 times the usage limits of the free version, allowing for more inquiries, image generation, and file uploads. Additionally, the plan enhances ChatGPT's memory of past conversations, paving the way for increasingly personalized interactions, as noted by ChatGPT head Nick Turley on X.
    </p>

    <h3>Positive Reception and Growth</h3>
    <p class="wp-block-paragraph">
        Since the rollout of the ChatGPT Go plan in India, the number of paid subscribers has more than doubled, highlighting a strong demand for affordable AI services.
    </p>

    <h3>Competing with Google’s AI Plus Subscription</h3>
    <p class="wp-block-paragraph">
        This strategic move positions OpenAI in direct competition with Google, which recently launched its own <a target="_blank" rel="nofollow" href="https://x.com/GeminiApp/status/1965490977000640833">similarly-priced AI Plus subscription plan</a> in Indonesia. Google’s offering includes access to its Gemini 2.5 Pro chatbot, as well as creative tools for image and video production like Flow, Whisk, and Veo 3 Fast. Moreover, the plan enhances features for Google’s AI research assistant, NotebookLM, and integrates AI functionalities into Gmail, Docs, and Sheets, alongside 200GB of cloud storage.
    </p>
</div>


Here are five FAQs regarding the launch of the ChatGPT Go plan in Indonesia:

FAQ 1: What is the ChatGPT Go plan?

Answer: The ChatGPT Go plan is an affordable subscription option launched by OpenAI in Indonesia, designed to provide users with access to ChatGPT’s capabilities at a lower price point. This plan aims to make AI-powered conversational tools more accessible to a wider audience.


FAQ 2: How much does the ChatGPT Go plan cost in Indonesia?

Answer: The exact pricing details for the ChatGPT Go plan in Indonesia may vary. Users are encouraged to check OpenAI’s official website or app for the latest information on subscription fees and any promotional offers that may be available.


FAQ 3: What features are included in the ChatGPT Go plan?

Answer: The ChatGPT Go plan typically includes access to the core features of ChatGPT, such as text generation, personalized responses, and support for various queries. Check the OpenAI website for specific feature listings associated with the Go plan.


FAQ 4: How can I sign up for the ChatGPT Go plan?

Answer: To sign up for the ChatGPT Go plan, users can visit the OpenAI website or download the ChatGPT app. From there, you can follow the prompts to create an account and select the Go plan during the subscription process.


FAQ 5: Is there a trial period for the ChatGPT Go plan in Indonesia?

Answer: OpenAI may offer a trial period or promotional access for new users subscribing to the ChatGPT Go plan. It’s best to check the official website or app for information regarding any current trial offers or promotions.


Silicon Valley Makes Major Investments in ‘Environments’ for AI Agent Training

Big Tech’s Quest for More Robust AI Agents: The Role of Reinforcement Learning Environments

For years, executives at major tech companies have envisioned autonomous AI agents that can execute tasks across software applications. But anyone who tests today’s consumer AI agents, such as OpenAI’s ChatGPT Agent or Perplexity’s Comet, quickly runs into their limitations. Making these agents more robust may require new training techniques that are still being worked out.

The Importance of Reinforcement Learning Environments

One of the key strategies being developed is the creation of simulated workspaces for training AI agents on complex, multi-step tasks—commonly referred to as reinforcement learning (RL) environments. Much like how labeled datasets propelled earlier AI advancements, RL environments now appear essential for developing capable AI agents.

AI researchers, entrepreneurs, and investors shared insights with TechCrunch regarding the increasing demand for RL environments from leading AI laboratories, and numerous startups are emerging to meet this need.

“Top AI labs are building RL environments in-house,” Jennifer Li, a general partner at Andreessen Horowitz, explained in an interview with TechCrunch. “However, as you can imagine, creating these datasets is highly complex, leading AI labs to seek third-party vendors capable of delivering high-quality environments and assessments. Everyone is exploring this area.”

The drive for RL environments has spawned a wave of well-funded startups, including Mechanize and Prime Intellect, that aspire to dominate this emerging field. Additionally, established data-labeling companies like Mercor and Surge are investing significantly in RL environments to stay competitive as the industry transitions from static datasets to interactive simulations. There’s speculation that major labs, such as Anthropic, could invest over $1 billion in RL environments within the next year.

Investors and founders alike hope one of these startups will become the “Scale AI for environments,” akin to the $29 billion data labeling giant that fueled the chatbot revolution.

The essential question remains: will RL environments truly advance the capabilities of AI?

Understanding RL Environments

At their essence, RL environments simulate the tasks an AI agent might undertake within a real software application. One founder likened constructing them to “creating a very boring video game” in a recent interview.

For instance, an RL environment might simulate a Chrome browser in which an AI agent’s task is to purchase a pair of socks on Amazon. The agent’s performance is evaluated, and it receives a reward signal upon success (for example, completing the checkout correctly).

While this task seems straightforward, there are numerous potential pitfalls. The AI could struggle with navigating dropdown menus or might accidentally order too many pairs of socks. Since developers can’t predict every misstep an agent will take, the environment must be sophisticated enough to account for unpredictable behaviors while still offering meaningful feedback. This complexity makes developing environments far more challenging than crafting a static dataset.
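
To make the idea concrete, here is a minimal, hypothetical sketch of such an environment using the Gym-style `reset()`/`step()` interface common in RL tooling. The class name, actions, and reward logic are illustrative assumptions, not any vendor’s actual product:

```python
# Minimal, hypothetical sketch of an RL environment for the sock-buying
# task described above. All names and logic here are illustrative.
class SockShopEnv:
    ACTIONS = ("search", "add_to_cart", "remove_from_cart", "checkout")

    def reset(self):
        """Start a fresh episode with an empty cart."""
        self.cart = 0   # pairs of socks currently in the cart
        self.done = False
        return {"cart": self.cart}

    def step(self, action):
        """Apply one agent action; return (observation, reward, done)."""
        assert action in self.ACTIONS and not self.done
        reward = 0.0
        if action == "add_to_cart":
            self.cart += 1
        elif action == "remove_from_cart":
            self.cart = max(0, self.cart - 1)
        elif action == "checkout":
            self.done = True
            # The reward signal fires only for the intended outcome:
            # checking out with exactly one pair in the cart.
            reward = 1.0 if self.cart == 1 else 0.0
        return {"cart": self.cart}, reward, self.done

env = SockShopEnv()
env.reset()
env.step("search")
env.step("add_to_cart")
obs, reward, done = env.step("checkout")   # reward == 1.0, done == True
```

An agent that checks out with zero pairs, or five, earns no reward here, and a realistic environment has to anticipate and score many more such missteps, which is part of why building one is harder than assembling a static dataset.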

Some environments are highly complex, allowing AI agents to utilize tools and interact with the internet, while others focus narrowly on training agents for specific enterprise software tasks.

The current excitement around RL environments isn’t without precedent. One of OpenAI’s first projects, back in 2016, was Gym, an open-source toolkit that served much the same purpose as today’s RL environments. The same year, Google DeepMind’s AlphaGo defeated a world champion at Go, relying on RL techniques inside a simulated environment.

Today’s environments have an added twist—researchers aspire to develop computer-using AI agents powered by large transformer models. Unlike AlphaGo, which operated in a closed, specialized environment, contemporary AI agents aim for broader capabilities. While AI researchers start with a stronger foundation, they also face heightened complexity and unpredictability.

A Competitive Landscape

AI data labeling agencies such as Scale AI, Surge, and Mercor are racing to build robust RL environments. These companies possess greater resources than many startups in the field and maintain strong ties with AI labs.

Edwin Chen, CEO of Surge, reported a “significant increase” in demand for RL environments from AI labs. Surge, which reportedly generated $1.2 billion in revenue last year working with labs such as OpenAI, Google, Anthropic, and Meta, has responded by forming a dedicated internal team to build RL environments.

Close behind is Mercor, a startup valued at $10 billion, which has also partnered with giants like OpenAI, Meta, and Anthropic. Mercor pitches investors on its capability to build RL environments tailored to coding, healthcare, and legal domain tasks, as suggested in promotional materials seen by TechCrunch.

CEO Brendan Foody remarked to TechCrunch that “few comprehend the vast potential of RL environments.”

Scale AI once led the data-labeling market but has lost ground since Meta invested $14 billion in the company and hired away its CEO. Google and OpenAI subsequently stopped working with Scale AI, and the startup even faces competition for data-labeling work inside Meta itself. Still, Scale is trying to adapt by investing in RL environments.

“This reflects the fundamental nature of Scale AI’s business,” explained Chetan Rane, Scale AI’s head of product for agents and RL environments. “Scale has shown agility in adapting. We achieved this with our initial focus on autonomous vehicles. Following the ChatGPT breakthrough, Scale AI transitioned once more to frontier spaces like agents and environments.”

Some nascent companies are focusing exclusively on environments from inception. For example, Mechanize, founded only six months ago, ambitiously aims to “automate all jobs.” Co-founder Matthew Barnett told TechCrunch that their initial efforts are directed at developing RL environments for AI coding agents.

Mechanize aims to supply AI labs with a small number of robust RL environments, in contrast with larger data firms that offer a broad array of simpler ones. To attract talent, the startup is offering software engineers $500,000 salaries, significantly more than contractors at Scale AI or Surge earn.

Sources indicate that Mechanize is already collaborating with Anthropic on RL environments, although neither party has commented on the partnership.

Additionally, some startups anticipate that RL environments will play a significant role outside AI labs. Prime Intellect, backed by AI expert Andrej Karpathy, Founders Fund, and Menlo Ventures, is targeting smaller developers with its RL environments.

Recently, Prime Intellect unveiled an RL environments hub, aiming to become a “Hugging Face for RL environments”: a place where open-source developers can find resources typically reserved for large AI labs, along with the computational power to use them.

Training versatile agents in RL environments is generally more computationally intensive than prior AI training approaches, according to Prime Intellect researcher Will Brown. Alongside startups creating RL environments, GPU providers that can support this process stand to gain from the increase in demand.

“RL environments will be too expansive for any single entity to dominate,” said Brown in a recent interview. “Part of our aim is to develop robust open-source infrastructure for this domain. Our service revolves around computational resources, providing a convenient entry point for GPU utilization, but we view this with a long-term perspective.”

Can RL Environments Scale Effectively?

A central concern with RL environments is whether this approach can scale as efficiently as previous AI training techniques.

Reinforcement learning has been the backbone of significant advancements in AI over the past year, contributing to innovative models like OpenAI’s o1 and Anthropic’s Claude Opus 4. These breakthroughs are crucial as traditional methods for enhancing AI models have begun to show diminishing returns.

Environments form a pivotal part of AI labs’ strategic investment in RL, a direction many believe will continue to propel progress as they integrate more data and computational power. Researchers at OpenAI involved in developing o1 previously stated that the company’s initial focus on reasoning models emerged from their investments in RL and test-time computation because they believed it would scale effectively.

While the best methods for scaling RL remain uncertain, environments appear to be a promising solution. Rather than simply rewarding chatbots for text output, they enable agents to function in simulations with the tools and computing systems at their disposal. This method demands increased resources but, importantly, could yield more significant outcomes.

However, skepticism persists regarding the long-term viability of RL environments. Ross Taylor, a former AI research lead at Meta and co-founder of General Reasoning, expressed concerns that RL environments can fall prey to reward hacking, where AI models exploit loopholes to obtain rewards without genuinely completing assigned tasks.

“I think there’s a tendency to underestimate the challenges of scaling environments,” Taylor stated. “Even the best RL environments available typically require substantial modifications to function optimally.”
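
The reward-hacking failure mode Taylor describes can be made concrete with a toy sketch, reusing the sock-buying task from earlier. Everything here is a hypothetical illustration: if the reward function is mis-specified to scale with items purchased, a reward-maximizing agent learns to over-order rather than to complete the task as intended.

```python
# Toy, hypothetical illustration of reward hacking: a mis-specified
# reward lets an agent score highly without doing the intended task.

def naive_reward(cart):
    # Flawed specification: reward grows with the number of items
    # bought, instead of rewarding "exactly one pair of socks".
    return float(len(cart))

def intended_reward(cart):
    # What the environment designer actually wanted.
    return 1.0 if cart == ["socks"] else 0.0

honest_cart = ["socks"]          # completes the task
greedy_cart = ["socks"] * 100    # exploits the loophole

assert naive_reward(greedy_cart) > naive_reward(honest_cart)
assert intended_reward(greedy_cart) == 0.0  # the task itself failed
```

Closing loopholes like this one by one is part of the substantial modification work that, per Taylor, even the best available environments still require.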

Sherwin Wu, OpenAI’s head of engineering for its API business, said on a recent podcast that he is somewhat skeptical of RL environment startups. While acknowledging how competitive the space is, he noted that AI research is evolving so quickly that it is hard to serve AI labs well.

Karpathy, an investor in Prime Intellect who has labeled RL environments a potential game-changer, has also voiced caution regarding the broader RL landscape. In a post on X, he expressed apprehensions about the extent to which further advancements can be achieved through RL.

“I’m optimistic about environments and agent interactions, but I’m more cautious regarding reinforcement learning in general,” Karpathy noted.

Update: Earlier versions of this article referred to Mechanize as Mechanize Work. This has been amended to reflect the company’s official name.

Frequently Asked Questions: AI Training Environments

FAQ 1: What are AI training environments?

Q: What are AI training environments, and why are they important?

A: AI training environments are simulated or created settings in which AI agents learn and refine their abilities through interaction. These environments allow AI systems to experiment, make decisions, and learn from feedback in a safe and controlled manner, which is crucial for developing robust AI solutions that can operate effectively in real-world scenarios.


FAQ 2: How is Silicon Valley investing in AI training environments?

Q: How is Silicon Valley betting on these training environments for AI?

A: Silicon Valley is investing heavily in the development of sophisticated training environments by funding startups and collaborating with research institutions. This includes creating virtual worlds, gaming platforms, and other interactive simulations that provide rich settings for AI agents to learn and adapt, enhancing their performance in various tasks.


FAQ 3: What are the benefits of using environments for AI training?

Q: What advantages do training environments offer for AI development?

A: Training environments provide numerous benefits, including the ability to test AI agents at scale, reduce costs associated with real-world trials, and ensure safety during the learning process. They also enable rapid iteration and the exploration of diverse scenarios, which can lead to more resilient and versatile AI systems.


FAQ 4: What types of environments are being developed for AI training?

Q: What kinds of environments are currently being developed for training AI agents?

A: Various types of environments are being developed, including virtual reality simulations, interactive video games, and even real-world environments with sensor integration. These environments range from straightforward tasks to complex scenarios involving social interactions, decision-making, and strategic planning, catering to different AI training needs.


FAQ 5: What are the challenges associated with training AI in these environments?

Q: What challenges do companies face when using training environments for AI agents?

A: Companies face several challenges, including ensuring the environments accurately simulate real-world dynamics and behaviors, addressing the computational costs of creating and maintaining these environments, and managing the ethical implications of AI behavior in simulated settings. Additionally, developing diverse and rich environments that cover a wide range of scenarios can be resource-intensive.


Latest Announcements from Made on YouTube: Studio Updates, YouTube Live Enhancements, New AI Tools, and More

Exciting New Features Unveiled at YouTube’s Annual Event

YouTube’s recent Made on YouTube event introduced a wealth of updates and tools designed for creators, including enhancements to YouTube Live, innovative monetization options, and much more.

Studio upgrades feature advanced “likeness” detection, lip-synced dubbing, and AI tools aimed at helping podcasters promote their shows.

Transforming the Studio Experience

YouTube CEO Neal Mohan at Made on YouTube 2025 (Image credits: YouTube)

The newly revamped Studio includes powerful tools to help creators manage their channels effectively. Notable features are an inspiration tab, A/B testing for titles, and an auto-dubbing function.

A highlight is the “likeness” detection feature, now in open beta, enabling individuals to manage and flag unauthorized videos featuring their likeness.

Furthermore, the AI-powered Ask Studio is here to assist users by answering account-related queries. Creators can now collaborate with up to five others on a single video, expanding their audience reach.

Enhancements to YouTube Live

YouTube Live 2025 (Image credits: YouTube)

YouTube Live also witnessed significant updates, such as enabling creators to incorporate minigames during streams, broadcasting in both horizontal and vertical formats, and AI-generated highlights of the stream. A new ad format will enhance viewer experience by displaying ads adjacent to the main content.

AI-powered highlights will identify key moments for Shorts creation, making it easier for creators to share engaging content quickly.

YouTube is set to introduce a customized version of Veo 3, Google’s text-to-video model, for Shorts, alongside a remixing tool and an “Edit with AI” feature.

Innovations in YouTube Music

YouTube Music is also getting fresh updates that aim to foster deeper connections between artists and fans. Features like countdown timers for new releases and “thank you” videos allow artists to express gratitude to their supporters. Additionally, a pilot program will offer exclusive merchandise drops for U.S. listeners.

YouTube merchandise (Image credits: YouTube Music)

AI Innovations for Podcasters

Video podcasters in the U.S. can now leverage AI suggestions to create clips more efficiently. A forthcoming feature will allow the transformation of audio podcasts into video formats.

New Monetization Opportunities for Creators

YouTube is unveiling new ways for creators to monetize their content.

New features include brand deals and the YouTube Shopping program, which lets creators earn by tagging products in their videos. Creators can also now swap out brand sponsorships in long-form videos.

Additionally, features like auto timestamps for product tags and a brand link feature for Shorts optimize the monetization process. An AI-powered system will automatically display product tags at highlight moments, enhancing the viewer’s purchasing experience.

Creators of Shorts can now include links to brand websites, and YouTube will proactively recommend creators compatible with brands through its creator partnerships hub.

Frequently Asked Questions: Made on YouTube Updates

FAQ 1: What new features have been added to YouTube Studio?

Answer: YouTube Studio has introduced an enhanced analytics dashboard, improved content management tools, and enhanced video editing capabilities. Creators can now access real-time performance metrics and engage more effectively with their audience through updated community features.


FAQ 2: How has YouTube Live been improved?

Answer: YouTube Live now offers new interactive features, including live polls and Q&A capabilities, allowing creators to engage with their audience in real time. Additionally, the streaming quality has been optimized for better performance, supporting higher resolutions and reduced latency.


FAQ 3: What are the new generative AI tools introduced for creators?

Answer: The latest generative AI tools empower creators by simplifying video creation and editing processes. These tools can automatically generate video suggestions, create captions, and even assist in scriptwriting, helping creators save time and enhance their content quality.


FAQ 4: Are there any new monetization options for creators?

Answer: Yes, YouTube has expanded monetization options, including new subscription models and merchandise integrations. Creators can now offer exclusive content through channel memberships and easily promote merchandise during their videos, enhancing their revenue streams.


FAQ 5: How does YouTube plan to support community engagement with these updates?

Answer: YouTube is focusing on enhancing community engagement through features like improved comment moderation, audience feedback tools, and enhanced community posts. These updates aim to foster a more interactive environment for both creators and viewers, allowing for better communication and connection.




How California’s SB 53 Could Effectively Regulate Major AI Companies

California’s New AI Safety Bill: SB 53 Awaits Governor Newsom’s Decision

California’s state senate has recently approved a pivotal AI safety bill, SB 53, and now it’s in the hands of Governor Gavin Newsom for potential signing or veto.

A Step Back in Legislative History: The Previous Veto

This scenario might sound familiar; Newsom previously vetoed another AI safety measure, SB 1047, drafted by Senator Scott Wiener. However, SB 53 is more focused, targeting substantial AI companies with annual revenues exceeding $500 million.

Insights from TechCrunch’s Podcast Discussion

In a recent episode of TechCrunch’s Equity podcast, I had the opportunity to discuss SB 53 with colleagues Max Zeff and Kirsten Korosec. Max noted that this new bill has an increased likelihood of becoming law, partly due to its focus on larger corporations and its endorsement by AI company Anthropic.

The Importance of AI Safety Legislation

Max: The significance of AI safety legislation lies in its potential to serve as a check on the growing power of AI companies. As these organizations rise in influence, regulatory measures like SB 53 offer a much-needed framework for accountability.

Unlike SB 1047, which met substantial resistance, SB 53 imposes meaningful regulations, such as mandatory safety reports and incident reporting to the government. It also establishes a secure channel for lab employees to voice concerns without fear of backlash.

California as a Crucial Player in AI Legislation

Kirsten: The unique position of California as a hub of AI activity enhances the importance of this legislation. The vast majority of major AI companies are either headquartered or have significant operations in the state, making its legislative decisions impactful.

Complexities and Exemptions of SB 53

Max: While SB 53 is narrower than its predecessor, it features a range of exceptions designed to protect smaller startups, which face less stringent reporting requirements. This targeting of larger AI firms, like OpenAI and Google DeepMind, aims to shield the burgeoning startup ecosystem in California.

Anthony: Smaller startups are indeed required to share some safety information, but the demands are far less extensive compared to larger corporations.

Broader Regulatory Landscape: Challenges Ahead

As the federal landscape shifts, the current administration favors minimal regulation for AI. Discussions are ongoing about potential measures to restrict states from establishing their own AI regulations, which could create further challenges for California’s efforts.

Join us for enlightening conversations every week on Equity, TechCrunch’s flagship podcast, produced by Theresa Loconsolo, featuring new episodes every Wednesday and Friday.

Frequently Asked Questions: California’s SB 53

FAQ 1: What is California’s SB 53?

Answer: California’s SB 53 is a legislative bill aimed at regulating the deployment and use of artificial intelligence technologies by large companies. It focuses on ensuring transparency, accountability, and ethical practices in AI development, particularly concerning consumer data and privacy.

FAQ 2: How does SB 53 aim to check big AI companies?

Answer: SB 53 seeks to impose strict guidelines on how AI companies collect and utilize data. It includes requirements for regular audits, transparency in algorithmic decision-making processes, and measures to prevent discriminatory outcomes. These regulations hold companies accountable, compelling them to prioritize ethical AI practices.

FAQ 3: What are the benefits of implementing SB 53 for consumers?

Answer: By enforcing regulations on AI technologies, consumers can expect enhanced privacy protections, increased transparency regarding how their data is used, and greater assurance against discriminatory practices. This could lead to more trustworthy interactions with AI-driven services and technologies.

FAQ 4: What challenges do opponents of SB 53 raise?

Answer: Critics of SB 53 argue that the regulations could stifle innovation and competitiveness within the AI industry. They express concerns that excessive regulation may burden smaller companies, possibly leading to reduced technological advancements in California, which is a hub for tech innovation.

FAQ 5: What impact could SB 53 have on the future of AI regulation?

Answer: If successful, SB 53 could set a precedent for other states and countries to adopt similar regulations. This legislation could pave the way for a more robust framework governing AI technologies, fostering ethical practices across the industry and shifting the balance of power away from large corporations to consumers and regulatory bodies.
