European AI Innovator Nexos.ai Secures €30M to Accelerate Enterprise AI Adoption

<div>
    <h2>Nexos.ai: Bridging the Gap Between AI Potential and Corporate Security</h2>

    <p id="speakable-summary" class="wp-block-paragraph">For many enterprise companies, AI remains either an unfulfilled promise or a security threat. Lithuania's entrepreneurial duo is tackling this challenge, attracting interest and substantial funding.</p>

    <h3>Significant Funding Boost for AI Startup</h3>
    <p class="wp-block-paragraph">Just months after emerging from stealth mode with an <a target="_blank" rel="nofollow" href="http://nexos.ai">$8 million funding round led by Index Ventures</a>, Nexos.ai founders Tomas Okmanas and Eimantas Sabaliauskas have secured a €30 million Series A (approximately $35 million). The platform helps businesses implement AI tools safely, acting as an intermediary between employees and AI systems.</p>

    <h3>Tackling Data Security Challenges</h3>
    <p class="wp-block-paragraph">Okmanas warns of “the biggest corporate data leak” impending as employees upload sensitive information to large language models (LLMs). Rather than suppressing AI, he envisions Nexos.ai as a “Switzerland for LLMs,” ensuring data remains under control while enhancing productivity.</p>

    <h3>Why Investors are Taking Notice</h3>
    <p class="wp-block-paragraph">With a valuation of €300 million (approximately $350 million), the round attracted notable investors, including Evantic Capital, which persisted in pursuing the deal even though Nexos.ai was not actively raising. The founders, long accustomed to bootstrapping their ventures, now recognize the value VCs can bring.</p>

    <h3>Leveraging Expertise for Product Development</h3>
    <p class="wp-block-paragraph">Nexos.ai benefits from the guidance of former Sequoia Capital partner Matt Miller and his <a target="_blank" rel="nofollow" href="https://www.linkedin.com/pulse/copy-welcome-evantic-matt-miller-irpye/">‘Legends’ network</a>, comprising 140 industry veterans advising on strategy. Okmanas, a member of this network, aims to utilize this knowledge to refine their offerings.</p>

    <h3>An Innovative Approach to AI Integration</h3>
    <p class="wp-block-paragraph">Currently, Nexos’ AI product includes an AI Workspace for employees and an AI Gateway for developers, functioning as a control hub for security and compliance while minimizing fragmentation. The gateway grants access to around 200 AI models, with plans to enhance support for private models handling sensitive data.</p>

    <h3>Focus on Tech-Savvy Enterprises and Regulated Industries</h3>
    <p class="wp-block-paragraph">Okmanas reports the team conducts 50 to 60 demo calls weekly, but notes that many traditional businesses still have considerable work ahead to persuade their boards to adopt AI. Nexos.ai is initially targeting tech-savvy firms and companies in regulated sectors that are concerned about how sensitive data is governed and transmitted.</p>

    <h3>Addressing the AI Governance Gap</h3>
    <p class="wp-block-paragraph">The founders identified a significant void in AI governance while overseeing a diverse portfolio at their company, Tesonet. Among the clientele benefiting from Nexos.ai are shared portfolio companies and the Bulgarian fintech unicorn Payhawk. The recent funding will fuel expansion efforts in Europe and North America.</p>

    <h3>Realizing the Value of AI Adoption</h3>
    <p class="wp-block-paragraph">Okmanas' mission is to eliminate obstacles hindering broader AI adoption. Within Tesonet's portfolio, they’ve observed success stories, including Hostinger, where an AI assistant minimized human support needs, thereby saving €10 million this year alone.</p>

    <h3>Future Growth Plans for Nexos.ai</h3>
    <p class="wp-block-paragraph">While Okmanas refrained from disclosing Nexos.ai's revenue figures, he anticipates the team will expand to 100 personnel by the time they celebrate their first anniversary, primarily in Europe. This growth may also lead to new opportunities within public institutions facing data sovereignty issues.</p>

    <p class="wp-block-paragraph"><em>The headline of this story has been corrected for accuracy.</em></p>
</div>


Frequently Asked Questions

FAQ 1: What is Nexos.ai?

Answer: Nexos.ai is a Lithuanian-founded AI startup that helps enterprises adopt AI tools safely. It acts as an intermediary between employees and AI systems, keeping corporate data under control while enhancing productivity.


FAQ 2: How much funding did Nexos.ai raise, and what is its purpose?

Answer: Nexos.ai raised a €30 million Series A (approximately $35 million). The capital will fund further product development and expansion in Europe and North America, helping enterprises implement AI technologies more safely and effectively.


FAQ 3: Why is enterprise AI adoption important?

Answer: Enterprise AI adoption is crucial because it enables organizations to leverage AI for improved data analysis, automation of processes, and enhancing customer experiences. This ultimately leads to increased productivity, innovation, and competitive advantage.


FAQ 4: How will Nexos.ai support businesses in adopting AI?

Answer: Nexos.ai offers an AI Workspace for employees and an AI Gateway for developers, which together act as a control hub for security and compliance. The gateway provides access to around 200 AI models, with planned support for private models that handle sensitive data.


FAQ 5: What industries can benefit from Nexos.ai’s solutions?

Answer: Nexos.ai is initially targeting tech-savvy firms and regulated industries, where governance of sensitive data is a priority. Any sector seeking to adopt AI without losing control of its data can benefit from the platform.


Scale AI Alum Secures $9M Funding for AI Solutions in Critical MENA Industries

Introducing 1001 AI: Revolutionizing Decision-Making in MENA’s Critical Industries

Bilal Abu-Ghazaleh has recently relocated to London, dividing his time between the UK and Dubai.

A New Venture Backed by Strong Investors

After nearly ten years in the U.S., including a role at Scale AI, Abu-Ghazaleh is channeling his experience into 1001 AI, a startup focused on developing AI infrastructure for essential sectors across the Middle East and North Africa (MENA).

Recently, the startup secured a $9 million seed round led by CIV, General Catalyst, and Lux Capital, with additional support from notable angel investors including Chris Ré, Amjad Masad (Replit), and others.

Addressing Inefficiencies in High-Stakes Industries

Abu-Ghazaleh asserts that 1001 AI aims to streamline operations in sectors such as aviation, logistics, and oil and gas through a sophisticated AI-native operating system for smarter decision-making.

“In just a few industries like airports, ports, and construction, we recognize over $10 billion in inefficiencies across the Gulf,” he shared in an interview with TechCrunch. “This represents a monumental opportunity in markets like the UAE, Saudi Arabia, and Qatar.”

Improved efficiencies in airport operations, for instance, can generate substantial savings that benefit both the airport and its airlines. Notably, nine out of ten major projects in the region exceed budgets or timelines, meaning even marginal enhancements can yield significant financial returns.

Strategic Partnerships and Future Goals

1001 AI aims to market its decision-making technology to new projects, with plans for its inaugural product launch by the end of the year. The company is already in discussions with some of the largest construction firms and airports in the Gulf, according to Abu-Ghazaleh.

From Jordan to Silicon Valley: A Journey of Innovation

Born in Jordan, Abu-Ghazaleh ventured to the U.S. for his education before immersing himself in the Bay Area’s startup ecosystem. He began his career at Hive AI and later advanced to a director role at Scale AI, responsible for scaling its contributor network for training data annotation.

Initially set to join Scale’s public sector unit focused on AI solutions for governments, he pivoted to establish 1001 AI when opportunities shifted following Meta’s investment in Scale.

The Gulf: A Hotbed for AI Innovation

The Gulf region, particularly the UAE and Saudi Arabia, has emerged as a leader in AI adoption, with substantial investments targeting the development of local infrastructure and the attraction of global talent.

For Abu-Ghazaleh, the convergence of appetite, funding, and urgency presents a unique opportunity for 1001 AI, which focuses on transforming physical operations—an area identified by investors as possessing exceptional potential.

Innovative Solutions for Real-World Challenges

Although the product is under development, Abu-Ghazaleh shared insights into its functionality. The system utilizes data from existing software, models workflows, and provides real-time directives to enhance efficiency.

“In today’s landscape, an operations manager might need to manually reroute a fuel truck or assign a cleaning crew,” he explained. “Our AI-driven system automates this orchestration, using real-time data to optimize operations seamlessly.”

Building the Future: Deployment and Growth Plans

Unlike many early-stage AI startups that concentrate on a single sector, 1001 AI aims to create solutions applicable across a variety of industries whose operational flows share similar features.

The company employs a hands-on approach, engaging closely with clients to customize its systems over weeks of co-development workshops.

“Bilal is constructing a decision engine to streamline complexities with proven execution tailored for the region,” noted Neeraj Arora from General Catalyst.

Funding to Boost Deployment and Team Growth

The recent funding will expedite initial deployments in aviation, logistics, and infrastructure while also facilitating recruitment in engineering and operational roles as the team expands across Dubai and London.

Looking Ahead: Plans for Expansion

1001 AI plans to commence its first customer deployment in construction by the year’s end. Over the next five years, Abu-Ghazaleh envisions the company becoming the primary orchestration layer for these industries in the Gulf, with aspirations for global reach.

Frequently Asked Questions

FAQ 1: What is the primary focus of the AI startup founded by the Scale AI alum?

Answer: The startup, 1001 AI, builds AI infrastructure for critical industries in the MENA region, including aviation, logistics, and oil and gas, with the goal of improving efficiency and decision-making through an AI-native operating system.

FAQ 2: How much funding has the startup raised, and who provided it?

Answer: The startup raised a $9 million seed round led by CIV, General Catalyst, and Lux Capital, with additional backing from angel investors including Chris Ré and Amjad Masad of Replit.

FAQ 3: What challenges does the startup aim to address in the MENA region?

Answer: The startup aims to tackle operational inefficiencies in critical sectors, which Abu-Ghazaleh estimates at over $10 billion across the Gulf, where nine out of ten major projects exceed budgets or timelines.

FAQ 4: How does the experience at Scale AI contribute to the startup’s success?

Answer: The founder’s experience at Scale AI, a leader in AI data services, provides valuable insights into best practices for AI development, project execution, and scaling operations, which are crucial for the startup’s growth and innovation in the MENA market.

FAQ 5: What impact is this funding expected to have on the startup’s operations?

Answer: The $9 million funding will enable the startup to enhance its product offerings, expand its team, and accelerate its go-to-market strategy, positioning it to better serve critical industries and contribute to economic development in the MENA region.


Wikipedia Reports Decline in Traffic Due to AI Search Summaries and Social Media Videos

Is Wikipedia Losing Relevance in the Age of AI and Social Media?

Often hailed as the last reliable website, Wikipedia is now facing challenges in a landscape dominated by toxic social media and AI-generated content. Recent insights from Marshall Miller at the Wikimedia Foundation indicate a significant drop in human pageviews, down 8% year-over-year.

Understanding the Decline: The Role of Bots

The Wikimedia Foundation is working to differentiate between human traffic and bot activity. According to Miller, the recent decline is attributed to high traffic from bots that had evaded detection, especially in May and June following an update to the platform’s bot detection systems.

The Shift in Information-Seeking Behavior

Why the decline in traffic? Miller cites the growing influence of generative AI and social media. As search engines increasingly deploy AI to deliver information directly to users, younger generations are turning to social video platforms over traditional sources like Wikipedia. Google has disputed claims that AI summaries are leading to reduced traffic from search queries.

Emphasizing Wikipedia’s Continued Importance

Despite these changes, Miller stresses that Wikipedia remains crucial for knowledge dissemination. Information from the encyclopedia still reaches users, even if they don’t visit the website directly. The platform has explored AI-generated summaries but paused the initiative after receiving backlash from its community.

The Risks of Reduced Engagement

This shift poses risks: with fewer visits to Wikipedia, there may be fewer volunteer contributors and financial supporters. Miller highlights volunteers who have gone above and beyond in their commitment to the community, illustrating the valuable contributions that could be lost.

Encouraging More Traffic and Content Integrity

Miller advocates for AI and social media platforms to drive more visitors to Wikipedia. In response, the organization is developing a new framework for content attribution and has dedicated teams aimed at reaching new audiences, seeking volunteers to assist in these efforts.

Call to Action: Support Knowledge Integrity

He encourages readers to engage actively with content integrity: when searching online, look for citations and visit original sources. Miller also urges people to discuss the importance of trusted, human-curated knowledge and to support the human contributors whose work generative AI relies on.


Frequently Asked Questions

FAQ 1: Why is website traffic falling?

Answer: Website traffic is declining primarily due to the rise of AI search summaries that provide users with quick answers to queries without needing to click through. This convenience reduces the number of visitors to traditional websites.


FAQ 2: How are AI search summaries impacting user behavior?

Answer: AI search summaries condense information from multiple sources into a single, easily digestible format. As users increasingly find answers directly on search engines, they are less likely to visit individual websites, leading to lower traffic volumes.


FAQ 3: What role does social video play in decreasing website traffic?

Answer: The popularity of social video platforms has led users to consume content in shorter, more engaging formats. This shift in preference diminishes the time users spend on websites, as they opt for quick video content that addresses their interests.


FAQ 4: Are all websites affected equally by this trend?

Answer: Not all websites are equally affected. While news and informational sites may experience more significant declines, niche websites with specialized content or unique offerings might maintain stable traffic levels, depending on their audience’s preferences.


FAQ 5: What can websites do to adapt to falling traffic?

Answer: Websites can adapt by focusing on creating engaging, high-quality content that provides value beyond quick answers, utilizing SEO strategies to improve visibility, and expanding into video content to meet users where they are consuming information. Engaging with audiences through social media can also help drive traffic.


Silicon Valley Spooks AI Safety Advocates

<div>
    <h2>Silicon Valley Leaders Challenge AI Safety Advocates Amid Growing Controversy</h2>

    <p id="speakable-summary" class="wp-block-paragraph">This week, prominent figures from Silicon Valley, including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, sparked significant debate with their remarks regarding AI safety advocacy. They insinuated that some advocates are driven by self-interest rather than genuine concern for the public good.</p>

    <h3>AI Safety Groups Respond to Accusations</h3>
    <p class="wp-block-paragraph">In conversations with TechCrunch, representatives from various AI safety organizations claim that the comments made by Sacks and OpenAI mark an ongoing trend in Silicon Valley to intimidate critics. This is not the first instance; last year, certain venture capitalists circulated false rumors that a California AI safety bill would lead to severe penalties for startup founders. Despite the Brookings Institution denouncing these claims as misrepresentations, Governor Gavin Newsom ultimately vetoed the bill.</p>

    <h3>Intimidation Tactics Leave Nonprofits Feeling Vulnerable</h3>
    <p class="wp-block-paragraph">Whether intentional or not, Sacks and OpenAI's statements have unsettled many advocates within the AI safety community. When approached by TechCrunch, multiple nonprofit leaders requested to remain anonymous, fearing backlash against their organizations.</p>

    <h3>A Growing Divide: Responsible AI vs. Consumerism</h3>
    <p class="wp-block-paragraph">This situation highlights the escalating conflict in Silicon Valley between responsible AI development and the push for mass consumer products. This week's episode of the <em>Equity</em> podcast delves deeper into these issues, including California's recent AI safety legislation and OpenAI's handling of sensitive content in ChatGPT.</p>

    <p>
        <iframe loading="lazy" class="tcembed-iframe tcembed--megaphone wp-block-tc23-podcast-player__embed" height="200px" width="100%" frameborder="no" scrolling="no" seamless="" src="https://playlist.megaphone.fm?e=TCML8283045754"></iframe>
    </p>

    <h3>Accusations of Fearmongering: The Case Against Anthropic</h3>
    <p class="wp-block-paragraph">On Tuesday, Sacks took to X to accuse Anthropic of using fear tactics regarding AI risks to advance its interests. He argued that Anthropic was leveraging societal fears around issues like unemployment and cyberattacks to push for regulations that could stifle smaller competitors. Notably, Anthropic was the sole major AI player endorsing California's SB 53, which mandates safety reporting for large companies.</p>

    <h3>Reaction to Concern: A Call for Transparency</h3>
    <p class="wp-block-paragraph">Sacks’ comments followed a notable essay by Anthropic co-founder Jack Clark, delivered at a recent AI safety conference. Clark expressed genuine concerns regarding AI's potential societal harms, but Sacks portrayed these as calculated efforts to manipulate regulations.</p>

    <h3>OpenAI Targets Critics with Subpoenas</h3>
    <p class="wp-block-paragraph">This week, Jason Kwon from OpenAI outlined why the company has issued subpoenas to AI safety nonprofits, including Encode, which openly criticized OpenAI’s reorganization following a lawsuit from Elon Musk. Kwon cited concerns over funding and coordination among opposing organizations as reasons for the subpoenas.</p>

    <h3>The AI Safety Movement: A Growing Concern for Silicon Valley</h3>
    <p class="wp-block-paragraph">Brendan Steinhauser, CEO of Alliance for Secure AI, suggests that OpenAI’s approach is more about silencing criticism than addressing legitimate safety concerns. This sentiment resonates amid a growing apprehension that the AI safety community is becoming more vocal and influential.</p>

    <h3>Public Sentiment and AI Anxiety</h3>
    <p class="wp-block-paragraph">Recent studies indicate a significant portion of the American population feels more apprehensive than excited about AI technology. Major concerns include job displacement and the risk of deepfakes, yet discussions about catastrophic risks from AI often dominate the safety dialogue.</p>

    <h3>Balancing Growth with Responsibility</h3>
    <p class="wp-block-paragraph">The ongoing debate suggests a crucial balancing act: addressing safety concerns while sustaining rapid growth in AI development. As the safety movement gathers momentum into 2026, Silicon Valley's defensive strategies may indicate the rising effectiveness of these advocacy efforts.</p>
</div>


Frequently Asked Questions

FAQ 1: Why are AI safety advocates concerned about developments in Silicon Valley?

Answer: AI safety advocates worry that rapid advancements in AI technology without proper oversight could lead to unintended consequences, such as biased algorithms, potential job displacement, or even existential risks if highly autonomous systems become uncontrollable.

FAQ 2: What specific actions are being taken by companies in Silicon Valley that raise red flags?

Answer: Many companies are prioritizing rapid product development and deployment of AI technologies, often opting for innovation over robustness and safety. This includes releasing AI tools that may not undergo thorough safety evaluations, which can result in high-stakes errors.

FAQ 3: How does the competitive environment in Silicon Valley impact AI safety?

Answer: The intensely competitive atmosphere encourages companies to expedite AI advancements to gain market share. This can lead to shortcuts in safety measures and ethical considerations, as firms prioritize speed and profit over thorough testing and responsible practices.

FAQ 4: What organizations are monitoring AI development in Silicon Valley?

Answer: Various non-profits, academic institutions, and regulatory bodies are actively monitoring AI developments. Organizations like the Partnership on AI and the Future of Humanity Institute advocate for ethical standards and safer AI practices, urging tech companies to adopt responsible methodologies.

FAQ 5: How can AI safety advocates influence change in Silicon Valley?

Answer: AI safety advocates can influence change by raising public awareness, engaging in policy discussions, promoting ethical AI guidelines, and collaborating with tech companies to establish best practices. Advocacy efforts grounded in research and public dialogue can encourage more responsible innovation in the field.


OpenAI Halts Sora Video Generations Featuring Martin Luther King Jr.

OpenAI Halts Video Creation of Martin Luther King Jr. Following Controversy

OpenAI has suspended the creation of AI-generated videos of the late civil rights leader Martin Luther King Jr. in its Sora video model. The decision follows concerns from Dr. King’s estate over disrespectful depictions created by some users.

Safeguards Requested by King’s Estate

OpenAI stated, “While there are strong free speech interests in depicting historical figures, we believe that public figures and their families should ultimately control how their likeness is used.” The company’s official post on X emphasized that authorized representatives can request the exclusion of their likeness from Sora videos.


Sora Launch Sparks Debate on AI Ethics

This decision follows closely on the heels of Sora’s launch, a platform allowing users to create AI-generated videos of historical figures and beyond. This feature has ignited passionate discussions on the ethical implications of AI-generated content and the need for protective measures.

Family Concerns Over AI Renderings

Dr. Bernice King, daughter of Dr. King, voiced her concerns on Instagram, pleading for an end to AI videos of her father. Her sentiments were echoed by others, including the daughter of Robin Williams.

Disrespectful Content Generated by Users

According to reports, instances of inappropriate AI-generated videos featuring Dr. King have surfaced, including portrayals of him making monkey noises and engaging in mock confrontations with Malcolm X. Similar crude depictions of other public figures are reported in the Sora app, including Bob Ross and Whitney Houston.

Broader Implications of Sora’s Launch

The controversy also highlights ongoing questions regarding how platforms should regulate AI representations of copyrighted material. The Sora app is rife with content featuring characters from popular culture, adding further complexity to the discussion.

Copyright Controls and AI Ethics

In response to criticisms, OpenAI has introduced specific restrictions to enhance copyright holder control over AI-generated likenesses. This move appears to be a reaction to an unfavorable initial response from Hollywood regarding Sora.

Balancing AI Innovation with Social Responsibility

As OpenAI implements these changes, it continues to adopt a more lenient approach to content moderation in ChatGPT. Recently, OpenAI announced plans to allow adult users to engage in “erotic” chats in the near future.

OpenAI’s Journey of Understanding AI Technology

OpenAI seems to be navigating the challenges of AI video generation as they strive to find a balance between innovation and public sentiment. CEO Sam Altman has acknowledged feelings of “trepidation” regarding the impact of Sora upon its release.

Learning from Experience

Nick Turley, head of ChatGPT, remarked earlier this month that the best approach to educating the public about new technologies is to actively engage with them. OpenAI is learning lessons both from ChatGPT and from Sora, indicating a growth in understanding how to responsibly introduce AI innovations.

Frequently Asked Questions

FAQ 1: Why has OpenAI paused Sora video generations featuring Martin Luther King Jr.?

Answer: OpenAI has paused Sora video generations for Martin Luther King Jr. to ensure that the content aligns with ethical guidelines and respects the sensitive nature of historical figures and their legacies.

FAQ 2: What does the pause on Sora video generations mean for users?

Answer: This pause means that users will not be able to create or access new video content featuring Martin Luther King Jr. while OpenAI reviews its policies and practices surrounding the representation of significant historical figures.

FAQ 3: Will the pause be permanent?

Answer: The duration of the pause is currently uncertain. OpenAI is evaluating the situation to determine the appropriate guidelines for generating content related to historical figures like Martin Luther King Jr.

FAQ 4: How can I stay updated on the status of Sora video generations?

Answer: Users can stay informed by following OpenAI’s official communications, including updates on their website and social media channels regarding any changes to Sora video generation policies.

FAQ 5: Are there alternative ways to learn about Martin Luther King Jr.?

Answer: Yes, users can explore a variety of educational resources, including books, documentaries, academic articles, and reputable websites that provide in-depth information about Martin Luther King Jr. and his contributions to civil rights.


Co-Founders of Eightfold Secure $35M for Viven, an AI Startup Creating Digital Twins for Accessing Unavailable Co-Workers

Revolutionizing Workplace Communication: Viven’s Digital Twin Technology

In today’s fast-paced work environment, effective communication is critical. However, when key team members are unavailable—whether on vacation or working across time zones—productivity suffers. Viven aims to change this dynamic.

Introducing Viven: A Game-Changer in Workforce Accessibility

Founded by Ashutosh Garg and Varun Kacholia, the minds behind the $2.1 billion AI recruiting startup Eightfold, Viven leverages the latest advances in Large Language Models (LLMs) and data privacy technologies to address these challenges. This innovative digital twin startup allows employees to access vital information from colleagues, even when they are not present.

Viven Emerges from Stealth Mode with Robust Backing

Recently launched, Viven secured $35 million in seed funding from prominent investors like Khosla Ventures, Foundation Capital, and FPV Ventures, marking a significant step in transforming workplace communication.

How Viven Creates Personalized Digital Twins

Viven builds a specialized LLM for each employee, essentially crafting a digital twin by analyzing their internal documents, including emails, Slack messages, and Google Docs. This allows other team members to query a colleague’s digital twin for immediate insights on shared projects, enhancing collaboration.

The Assurance of Privacy: Pairwise Context Technology

A critical concern is privacy, as employees often handle sensitive information. Viven addresses this through “pairwise context and privacy,” enabling LLMs to expertly manage what information can be shared and with whom, ensuring confidentiality while promoting accessibility.

Maintaining Integrity: Safeguards Against Inappropriate Queries

Viven’s system is designed to understand personal contexts, filtering out sensitive topics and protecting employee privacy. Each user has access to their digital twin’s query history, acting as a deterrent against inappropriate inquiries.

Strong Demand: Early Adoption by Major Enterprises

Viven is already in action at several enterprise-level clients including Genpact and Eightfold, demonstrating its potential to reshape workplace dynamics. Both Garg and Kacholia continue to balance their efforts between leading Eightfold and Viven.

Facing Competition: Viven’s Unique Market Position

Garg asserts that Viven stands alone in the digital twin market for enterprises. His discussions with Vinod Khosla confirmed the absence of direct competitors, which led to Khosla’s investment.

Future Landscape: Anticipating Market Changes

While there are no immediate rivals, Garg acknowledges that other tech giants like Anthropic, Google, Microsoft, and OpenAI could eventually explore similar offerings. Viven aims to maintain its edge through its innovative pairwise context technology.

Frequently Asked Questions

FAQ 1: What is Viven?

Answer: Viven is an AI digital twin startup focused on creating virtual representations of co-workers, allowing users to query unavailable team members for insights, knowledge, and decisions, enhancing collaboration and productivity.

FAQ 2: How much funding did Viven raise?

Answer: Viven successfully raised $35 million in funding, which will be used to further develop its technology and expand its market reach.

FAQ 3: Who are the co-founders of Viven?

Answer: Viven was co-founded by the founders of Eightfold, a company known for its innovative approaches in AI and talent management, leveraging their expertise to drive Viven’s vision.

FAQ 4: What problem does Viven aim to solve?

Answer: Viven addresses the challenge of accessibility to knowledge and expertise when co-workers are unavailable. By creating digital twins, Viven enables teams to glean valuable insights even in the absence of key personnel.

FAQ 5: How does Viven’s technology work?

Answer: Viven’s technology utilizes AI to create digital replicas of individuals based on their knowledge, communication styles, and decision-making patterns. This allows users to interact with these digital twins to access information and insights as if they were conversing with the actual co-worker.

Source link

OpenAI Has Five Years to Transform $13 Billion into $1 Trillion

How OpenAI is Revolutionizing Revenue: A Look at Its Billion-Dollar Strategy

OpenAI is on a lucrative path, generating around $13 billion in annual revenue. According to the Financial Times, an impressive 70% of this comes from everyday users subscribing to its AI chat service for just $20 a month. With 800 million active users and only 5% opting for paid subscriptions, the scale of the consumer business is hard to ignore.
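As a quick sanity check, the reported figures hang together. The back-of-the-envelope calculation below uses only numbers from the article, plus one assumption of ours: that every paying user is on the $20-per-month consumer tier.

```python
# All figures come from the article; the single-tier pricing is our assumption.
active_users = 800_000_000   # reported active users
paid_share = 0.05            # ~5% opt for paid subscriptions
monthly_price = 20           # USD per month, consumer tier (assumed for all payers)
total_revenue = 13e9         # ~$13B annual revenue
consumer_share = 0.70        # ~70% reportedly from consumer subscriptions

subscribers = active_users * paid_share
annual_subscription_revenue = subscribers * monthly_price * 12
implied_consumer_revenue = total_revenue * consumer_share

print(f"{subscribers / 1e6:.0f}M paying subscribers")
print(f"~${annual_subscription_revenue / 1e9:.1f}B from subscriptions")
print(f"~${implied_consumer_revenue / 1e9:.1f}B implied by the 70% figure")
```

The two consumer-revenue estimates ($9.6B from subscriber counts vs. $9.1B implied by the 70% share) land within about 5% of each other, so the reported numbers are at least internally consistent.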

The High Stakes of OpenAI’s Ambitious Spending Plans

Despite its impressive earnings, OpenAI has set an ambitious goal of investing over $1 trillion in the next decade. This monumental spending spree includes securing over 26 gigawatts of computing power from giants like Oracle, Nvidia, AMD, and Broadcom—costing significantly more than the current revenue influx.

Innovative Approaches to Address Financial Gaps

To manage this financial disparity, OpenAI is diversifying its revenue streams. The Financial Times reveals a five-year strategy that explores government contracts, online shopping tools, video services, consumer hardware, and even establishing its own computing supply network through the Stargate data center initiative.

The Broader Implications for America’s Business Landscape

As more prominent companies turn to OpenAI for critical contracts, there’s an increased emphasis on the company’s stability. Should OpenAI face setbacks, the ripple effects could have serious repercussions for the broader U.S. market.

Here are five FAQs regarding OpenAI’s goal to turn $13 billion into $1 trillion within five years:

FAQ 1: Why does OpenAI have a $1 trillion target?

Answer: The $1 trillion figure refers to the spending OpenAI has committed to over the next decade, largely on computing infrastructure. The company argues this scale of investment is needed to meet growing demand and maintain its position across industries that depend on AI.

FAQ 2: What strategies will OpenAI employ to achieve this goal?

Answer: According to the Financial Times, OpenAI’s five-year plan includes new revenue streams such as government contracts, online shopping tools, video services, consumer hardware, and building out its own computing supply through the Stargate data center initiative.

FAQ 3: Where does OpenAI’s current $13 billion in revenue come from?

Answer: Roughly 70% of OpenAI’s $13 billion in annual revenue comes from consumers subscribing to its AI chat service at $20 a month. Only about 5% of its 800 million active users currently pay, leaving substantial room to convert free users.

FAQ 4: What challenges might OpenAI face in reaching this target?

Answer: Potential challenges include competition from other tech companies, regulatory hurdles, public perception of AI, and the need for continuous innovation to stay ahead in a rapidly evolving field.

FAQ 5: What would success, or failure, mean for OpenAI?

Answer: Growing revenue fast enough to cover more than $1 trillion in committed spending would cement OpenAI’s position as a leader in the AI industry. Falling short, however, could ripple through partners such as Oracle, Nvidia, AMD, and Broadcom, and through the broader U.S. market.

Source link

California Leads the Way as the First State to Regulate AI Companion Chatbots

California Takes Bold Step in AI Regulation with New Bill for Chatbot Safety

California Governor Gavin Newsom has recently signed a groundbreaking bill, making California the first state in the nation to mandate safety protocols for AI companion chatbots aimed at protecting children and vulnerable users online.

Introducing SB 243: A Shield for Young Users

The newly enacted law, SB 243, aims to safeguard children and other vulnerable users from the potential risks linked to AI companion chatbots. Under this legislation, companies—including major players like Meta and OpenAI as well as emerging startups such as Character AI and Replika—will be held legally accountable for their chatbot operations, ensuring compliance with established safety standards.

Driven by Tragedy: The Catalyst for Change

Introduced by state senators Steve Padilla and Josh Becker, SB 243 gained urgency following the tragic suicide of teenager Adam Raine, who engaged in harmful interactions with OpenAI’s ChatGPT. The bill also addresses alarming revelations about Meta’s chatbots, which were reportedly allowed to engage minors in inappropriate conversations. Additionally, a recent lawsuit against Character AI highlights the real-world implications of unregulated chatbot interactions.

Governor Newsom’s Commitment to Child Safety

“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom stated. “We’re committed to leading responsibly in AI technology, emphasizing that our children’s safety is non-negotiable.”

Key Provisions of SB 243: What to Expect

The new law will take effect on January 1, 2026. It mandates companies to put in place crucial measures like age verification, user warnings about social media interactions, and stronger penalties for producing illegal deepfakes (up to $250,000 per offense). Additionally, companies must develop protocols for dealing with issues related to suicide and self-harm, sharing relevant data with California’s Department of Public Health.

Transparency and User Protection Measures

The legislation stipulates that platforms clarify when interactions are AI-generated, and prohibits chatbots from posing as healthcare professionals. Companies are also required to implement reminders for minors to take breaks and block access to explicit content generated by the chatbots.

Industry Response: Initial Safeguards and Compliance

Some organizations have proactively begun introducing safeguards. OpenAI has rolled out parental controls and a self-harm detection system for its ChatGPT, while Replika, targeting an adult audience, emphasizes its commitment to user safety through extensive content-filtering measures and adherence to regulations.

Collaborative Future: Engaging Stakeholders in AI Regulation

Character AI has commented on its compliance with SB 243, stating that all chatbot interactions are fictionalized. Senator Padilla has expressed optimism, viewing the bill as a vital step toward establishing necessary safeguards for powerful technologies and urging other states to follow suit.

California’s Continued Leadership in AI Regulation

SB 243 is part of a larger trend of stringent AI oversight in California. Just weeks earlier, Governor Newsom enacted SB 53, which requires larger AI companies to boost transparency around safety protocols and offers whistleblower protections for their employees.

The National Conversation on AI and Mental Health

Other states, including Illinois, Nevada, and Utah, have passed legislation to limit or prohibit AI chatbots as substitutes for licensed mental health care. The national discourse around regulation reinforces the urgency for comprehensive measures aimed at protecting the most vulnerable.

TechCrunch has reached out for comments from Meta and OpenAI.

This article has been updated with responses from Senator Padilla, Character AI, and Replika.

Here are five FAQs regarding California’s regulation of AI companion chatbots:

FAQ 1: What is the new regulation regarding AI companion chatbots in California?

Answer: California has become the first state to regulate AI companion chatbots. SB 243 requires safety protocols to protect children and vulnerable users, mandates that chatbots disclose their artificial nature, and holds companies legally accountable for how their chatbots operate.

FAQ 2: How will this regulation affect users of AI companion chatbots?

Answer: Users will benefit from enhanced transparency, as chatbots will now be required to clearly identify themselves as AI. This helps users make informed decisions about their interactions and understand how their personal data may be used.

FAQ 3: Are there penalties for companies that do not comply with these regulations?

Answer: Yes. Companies that fail to comply face legal accountability, including penalties of up to $250,000 per offense for producing illegal deepfakes. This enforcement structure is designed to encourage responsible deployment of AI companion chatbots.

FAQ 4: What are the main goals of regulating AI companion chatbots?

Answer: The primary goals include protecting user privacy, establishing clear guidelines for ethical AI use, and fostering greater trust between users and technology. The regulation aims to mitigate risks associated with misinformation and emotional manipulation.

FAQ 5: How might this regulation impact the development of AI technologies in California?

Answer: This regulation may drive developers to prioritize ethical considerations in AI design, leading to safer and more transparent technologies. It could also spark a broader conversation about AI ethics and inspire similar regulations in other states or regions.

Source link

Nvidia’s AI Dominance: Exploring Its Major Startup Investments

<div>
  <h2>Nvidia: Leading the Charge in AI Investments</h2>
  <p id="speakable-summary" class="wp-block-paragraph">No company has harnessed the AI revolution as effectively as Nvidia. Since the launch of ChatGPT and the wave of competitive generative AI services, Nvidia has seen its revenue, profitability, and cash reserves soar. With a market capitalization of $4.5 trillion, the company’s stock has skyrocketed, marking it as a formidable player in the tech industry.</p>

  <p class="wp-block-paragraph">As the premier manufacturer of high-performance GPUs, Nvidia has leveraged its increasing fortunes to bolster investments in AI startups.</p>

  <p class="wp-block-paragraph">In 2025, Nvidia has already engaged in 50 venture capital deals, surpassing the 48 completed in all of 2024, according to PitchBook data. Notably, these figures do not include investments made through its corporate VC fund, NVentures, which has also accelerated its investment pace significantly during this period.</p>

  <p class="wp-block-paragraph">Nvidia aims to enrich the AI landscape by investing in startups that are viewed as “game changers and market makers.”</p>

  <p class="wp-block-paragraph">The following list showcases startups that have raised over $100 million since 2023 with Nvidia as an investor, arranged from the highest to lowest funding amounts.</p>

  <h3>The Billion-Dollar Funding Contenders</h3>

  <p class="wp-block-paragraph"><strong>OpenAI:</strong> Nvidia made its first investment in ChatGPT’s creator in October 2024, contributing $100 million to a monumental $6.6 billion funding round that valued the company at $157 billion. Although Nvidia did not take part in OpenAI’s March $40 billion funding round, it later declared plans to invest up to $100 billion over time to foster a strategic partnership aimed at deploying robust AI infrastructure.</p>

  <p class="wp-block-paragraph"><strong>xAI:</strong> In December 2024, despite OpenAI’s advice against investing in competitors, Nvidia participated in the $6 billion funding round for Elon Musk’s xAI. It also plans to invest up to $2 billion in xAI’s anticipated $20 billion funding effort.</p>

  <p class="wp-block-paragraph"><strong>Mistral AI:</strong> Nvidia increased its investment in this French language model developer with a €1.7 billion ($2 billion) Series C round in September, at a remarkable post-money valuation of €11.7 billion ($13.5 billion).</p>

  <p class="wp-block-paragraph"><strong>Reflection AI:</strong> Nvidia spearheaded a $2 billion funding round in October for Reflection AI, a startup aimed at competing with Chinese firms by offering cost-effective open-source models.</p>

  <p class="wp-block-paragraph"><strong>Thinking Machines Lab:</strong> Backed by Nvidia among others, Mira Murati’s startup raised a $2 billion seed round, achieving a $12 billion valuation.</p>

  <p class="wp-block-paragraph"><strong>Inflection:</strong> Nvidia was a key investor in Inflection’s $1.3 billion round in June 2023. Less than a year later, however, Microsoft hired away its founders and much of its staff, leaving the company’s future uncertain.</p>

  <p class="wp-block-paragraph"><strong>Nscale:</strong> After raising $1.1 billion in September, Nvidia further supported Nscale with a $433 million SAFE funding in October, enabling the startup to build data centers for OpenAI’s Stargate project.</p>

  <p class="wp-block-paragraph"><strong>Wayve:</strong> Nvidia participated in a $1.05 billion funding round in May 2024 for this U.K. startup dedicated to self-learning autonomous systems, with additional investment slated.</p>

  <p class="wp-block-paragraph"><strong>Figure AI:</strong> In September, Nvidia took part in a Series C funding round valuing the humanoid robotics company at $39 billion.</p>

  <h3>The Hundreds of Millions Club</h3>

  <p class="wp-block-paragraph"><strong>Commonwealth Fusion:</strong> Nvidia contributed to an $863 million funding round in August 2025 for this nuclear fusion-energy startup alongside notable investors like Google.</p>

  <p class="wp-block-paragraph"><strong>Crusoe:</strong> Engaging in a $686 million funding round in November 2024, this startup focuses on building data centers with various big-name collaborators including Nvidia.</p>

  <p class="wp-block-paragraph"><strong>Cohere:</strong> Nvidia features prominently in multiple funding rounds for this enterprise AI model provider, including a recent $500 million Series D round.</p>

  <p class="wp-block-paragraph"><strong>Perplexity:</strong> Nvidia also backed this AI search engine through various rounds, including a $500 million round, keeping its momentum intact as the company’s valuation surged.</p>

  <p class="wp-block-paragraph"><strong>Runway:</strong> Nvidia participated in Runway’s $308 million round, further solidifying its investment in generative AI technologies for media.</p>

  <h3>Significant Fundraising Deals</h3>

  <p class="wp-block-paragraph"><strong>Ayar Labs:</strong> Nvidia invested in a $155 million funding round for Ayar Labs, which focuses on developing optical interconnects for enhanced AI compute efficiency.</p>

  <p class="wp-block-paragraph"><strong>Kore.ai:</strong> This enterprise AI chatbot developer raised $150 million in December 2023, with Nvidia among the participating investors.</p>

  <p class="wp-block-paragraph"><strong>Sandbox AQ:</strong> In April, Nvidia backed Sandbox AQ in a $150 million round, which expanded the company’s valuation to $5.75 billion.</p>

  <p class="wp-block-paragraph"><strong>Hippocratic AI:</strong> This healthcare-focused AI startup successfully raised $141 million in January, marking Nvidia’s commitment to healthcare innovations.</p>

  <p class="wp-block-paragraph"><strong>Weka:</strong> In May 2024, Nvidia supported a $140 million funding round for Weka, emphasizing growth in AI-native data management.</p>

  <p class="wp-block-paragraph"><em>This article was originally published in January 2025.</em></p>
</div>

Here are five FAQs related to Nvidia’s investment in AI startups:

FAQ 1: What is Nvidia’s role in the AI startup ecosystem?

Answer: Nvidia is a leading player in the AI sector, providing essential hardware and software tools. The company invests in AI startups to foster innovation, support emerging technologies, and expand its ecosystem, leveraging its GPUs and AI frameworks.

FAQ 2: What types of startups does Nvidia typically invest in?

Answer: Nvidia invests in a diverse range of AI startups, including those focused on machine learning, data analytics, autonomous vehicles, healthcare technologies, and creative applications. This variety allows Nvidia to enhance its portfolio and support groundbreaking advancements in AI.

FAQ 3: How does Nvidia’s investment strategy benefit its business?

Answer: By investing in AI startups, Nvidia gains early access to innovative technologies and ideas, which can be integrated into its products. This strategy not only broadens Nvidia’s technological capabilities but also positions it as a key player in shaping the future of AI.

FAQ 4: Are there any notable success stories from Nvidia’s investments in startups?

Answer: Yes, several startups backed by Nvidia have achieved significant success. For instance, companies specializing in AI for healthcare or autonomous driving have leveraged Nvidia’s technology to create groundbreaking solutions, showcasing the potential impact of Nvidia’s strategic investments.

FAQ 5: How can startups approach Nvidia for investment opportunities?

Answer: Startups interested in seeking investment from Nvidia can typically submit their proposals through the company’s venture capital arm or during specific innovation events. It’s essential for startups to demonstrate how their technology aligns with Nvidia’s goals and the AI landscape.

Source link

Andrew Tulloch, Co-Founder of Thinking Machines Lab, Joins Meta

Thinking Machines Lab Loses Co-Founder to Meta: A Shift in the AI Landscape

Thinking Machines Lab, an innovative AI startup led by former OpenAI CTO Mira Murati, is experiencing a leadership change as co-founder Andrew Tulloch departs for Meta.

News of Departure Confirmed

According to The Wall Street Journal, Tulloch announced his decision to leave in a message to employees on Friday. A spokesperson for Thinking Machines Lab verified his departure, explaining that he “has decided to pursue a different path for personal reasons.”

Meta’s Aggressive Recruitment Strategy

In August, reports indicated that Mark Zuckerberg’s ambitious AI recruitment efforts included an attempt to acquire Thinking Machines Lab. When that proposition fell through, Zuckerberg reportedly offered Tulloch a lucrative compensation package potentially worth up to $1.5 billion over six years. Meta later dismissed the WSJ’s account of this offer as “inaccurate and ridiculous.”

A Rich Background in AI

Prior to co-founding Thinking Machines Lab, Tulloch gained valuable experience at OpenAI and at Facebook AI Research, making his move to Meta a significant development in the tech industry.

Here are five FAQs regarding Andrew Tulloch’s move from Thinking Machines Lab to Meta:

FAQ 1: Who is Andrew Tulloch?

Answer: Andrew Tulloch is a co-founder of Thinking Machines Lab, known for his expertise in artificial intelligence and machine learning. He has played a significant role in the development of innovative AI solutions.

FAQ 2: Why is Andrew Tulloch moving to Meta?

Answer: Andrew Tulloch is joining Meta to leverage his skills in AI and contribute to the company’s focus on advancing machine learning technologies. His expertise will likely help enhance Meta’s capabilities in various areas, including social media and virtual reality.

FAQ 3: What impact might Tulloch’s move have on Thinking Machines Lab?

Answer: Andrew Tulloch’s departure could lead to changes in the leadership and direction of Thinking Machines Lab. However, it may also create opportunities for other team members to step up and contribute to ongoing projects.

FAQ 4: How does Andrew Tulloch’s expertise align with Meta’s goals?

Answer: Tulloch’s background in AI and machine learning aligns well with Meta’s goals of improving user experiences and developing cutting-edge technologies. His knowledge will be beneficial in driving innovation within Meta’s products and services.

FAQ 5: What are the potential implications for the AI community with Tulloch at Meta?

Answer: Tulloch’s transition to Meta could foster stronger collaborations between academia and the tech industry, stimulating advancements in AI research. His work may influence industry standards and practices, leading to more responsible and ethical AI development.

Source link