Ex-Splunk Executives’ Startup Resolve AI Achieves $1 Billion Valuation in Series A Funding Round

Resolve AI Secures $1 Billion Valuation with Series A Funding for Autonomous Site Reliability Engineer

Resolve AI, a startup building an autonomous site reliability engineer (SRE) that maintains software systems automatically, has raised a Series A funding round led by Lightspeed Venture Partners, according to multiple sources familiar with the transaction.

Funding Details and Valuation Insights

The headline valuation for the round is $1 billion, but the blended valuation is lower because the investment is structured in multiple tranches: investors bought a portion of the equity at the $1 billion valuation and a larger share of the round at a reduced price. This tranched structure is gaining traction among investors in the competitive AI startup landscape.

Leadership Team with Proven Success

Founded less than two years ago, Resolve AI is helmed by former Splunk executives Spiros Xanthos and Mayank Agarwal, who previously collaborated in creating Omnition, a startup acquired by Splunk in 2019. Their partnership dates back twenty years to their graduate studies at the University of Illinois Urbana-Champaign, underscoring their strong collaborative foundation.

Transforming Site Reliability Engineering

Human SREs typically troubleshoot and resolve system outages manually. Resolve AI automates that work, autonomously identifying, diagnosing, and addressing production issues in real time. This addresses a significant challenge for organizations facing increasingly complex cloud-based software environments.
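To make the idea concrete, here is a minimal sketch of what a detect-diagnose-remediate loop can look like in code. It is purely illustrative: the names, signals, and thresholds below are assumptions, and it does not describe Resolve AI's actual product, models, or APIs.

```python
# Illustrative toy triage loop for the general idea of an "autonomous SRE".
# All names (Metric, Incident, detect_anomaly, ...) are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    service: str
    error_rate: float      # fraction of failed requests
    recent_deploy: bool    # was this service deployed in the last hour?

@dataclass
class Incident:
    service: str
    diagnosis: str
    action: str

def detect_anomaly(m: Metric, threshold: float = 0.05) -> bool:
    """Flag a service whose error rate exceeds a simple threshold."""
    return m.error_rate > threshold

def diagnose(m: Metric) -> str:
    """Pick the most likely cause from the signals available."""
    return "bad deploy" if m.recent_deploy else "resource exhaustion"

def remediate(diagnosis: str) -> str:
    """Map a diagnosis to a first remediation step."""
    return {"bad deploy": "roll back last release",
            "resource exhaustion": "scale out replicas"}[diagnosis]

def triage(metrics: list[Metric]) -> list[Incident]:
    """Detect, diagnose, and propose a first fix for each unhealthy service."""
    incidents = []
    for m in metrics:
        if detect_anomaly(m):
            diagnosis = diagnose(m)
            incidents.append(Incident(m.service, diagnosis, remediate(diagnosis)))
    return incidents

if __name__ == "__main__":
    snapshot = [Metric("checkout", 0.12, recent_deploy=True),
                Metric("search", 0.01, recent_deploy=False)]
    for incident in triage(snapshot):
        print(incident)   # only "checkout" is flagged: bad deploy -> roll back
```

A production system would plug real telemetry, richer causal analysis, and guarded remediation into each of these steps; the sketch only shows the shape of the loop.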

Benefits of Automation in Site Reliability

As software systems evolve, companies often struggle to maintain a skilled workforce of SREs necessary for smooth operations. Automating these processes can significantly reduce downtime, lower operational costs, and allow engineering teams to focus on developing new features, rather than continually managing production issues.

Recent Funding Milestones

Last October, Resolve AI raised a $35 million seed round led by Greylock, with participation from notable figures including World Labs founder Fei-Fei Li and Google DeepMind's Jeff Dean.

Competitive Landscape: Resolve AI vs. Traversal

Resolve AI competes with Traversal, another AI-driven SRE startup, which recently secured $48 million in Series A funding led by Kleiner Perkins with support from Sequoia, underscoring how heated the market has become.


Frequently Asked Questions

FAQ 1: What is Resolve AI?

Answer: Resolve AI is a technology startup founded by former Splunk executives Spiros Xanthos and Mayank Agarwal. It is building an autonomous site reliability engineer (SRE), an AI agent that automatically identifies, diagnoses, and resolves production issues in software systems.


FAQ 2: What does it mean for Resolve AI to reach a $1 billion valuation?

Answer: A $1 billion valuation indicates that Resolve AI has achieved "unicorn" status, which means investors believe the company has significant growth potential. This valuation is based on factors such as its market position, technology, and future revenue projections.


FAQ 3: How much funding did Resolve AI raise in its Series A round?

Answer: The exact size of the Series A has not been disclosed. The round was led by Lightspeed Venture Partners and values the company at $1 billion at the headline tranche, with a lower blended valuation across the multi-tranche structure.


FAQ 4: What specific problems does Resolve AI aim to solve?

Answer: Resolve AI aims to automate site reliability engineering. Its autonomous SRE identifies, diagnoses, and addresses production issues in real time, helping organizations manage increasingly complex cloud-based environments, reduce downtime, and free engineering teams to focus on building new features.


FAQ 5: Who are the key investors in Resolve AI’s Series A funding?

Answer: The Series A round was led by Lightspeed Venture Partners. Resolve AI's earlier $35 million seed round was led by Greylock, with participation from notable figures including World Labs founder Fei-Fei Li and Google DeepMind's Jeff Dean.



OpenAI Allegedly Aiming to Secure $100 Billion at an $830 Billion Valuation

OpenAI Eyes Ambitious $100 Billion Funding Round

OpenAI is in discussions to secure a staggering $100 billion in funding, potentially valuing the ChatGPT creator at $830 billion, according to a recent report by the Wall Street Journal.

Funding Goals and Strategies

OpenAI aims to finalize the round by the end of Q1 next year and is looking to attract investment from sovereign wealth funds, the WSJ reports. Earlier reporting by The Information had pegged the valuation at $750 billion.

Strategic Spending and Revenue Generation

The anticipated funding comes as OpenAI commits to extensive expenditures and forms new partnerships globally in the competitive AI landscape. The cash influx would help cover rising inference costs, which are increasingly being paid for in cash rather than with cloud credits.

Intensifying Competition in AI

With competitors such as Anthropic and Google advancing their offerings, OpenAI is pushing to enhance its model releases and expand within the developer ecosystem to maintain its lead.

Market Sentiment and Investment Concerns

Recent investor sentiment around AI has cooled, with doubts about the sustainability of massive, debt-driven investments from major players. Additionally, memory chip shortages threaten to impact the tech industry, compounding these concerns.

Potential IPO and Revenue Prospects

OpenAI is reportedly exploring an IPO to generate significant capital for its ongoing development, with current annual revenues estimated at around $20 billion. There are also discussions with Amazon regarding a $10 billion investment, which would provide access to advanced AI computing resources.

Implications of Successful Fundraising

Should the fundraising go through, it would significantly bolster OpenAI's financial resources; the company has already raised more than $64 billion and was recently valued at about $500 billion in a secondary transaction.

OpenAI has not yet responded to requests for comment regarding this funding initiative.


Frequently Asked Questions

FAQ 1:

Why is OpenAI attempting to raise $100 billion?

OpenAI aims to secure significant funding to accelerate its research and development of artificial intelligence technologies. The capital will support expanding its workforce, enhancing computational power, and facilitating broader applications of AI across various industries.

FAQ 2:

What does an $830 billion valuation imply for OpenAI?

A valuation of $830 billion suggests that investors believe OpenAI has immense potential for growth and market influence in the AI sector. This high valuation reflects confidence in the company’s innovative capabilities and future revenue generation.

FAQ 3:

How will the funds raised be utilized?

The funds are expected to be allocated toward research projects, talent acquisition, developing new technologies, and scaling existing products. Additionally, investments may be directed toward expanding partnerships and collaborations within the tech industry.

FAQ 4:

Who are the potential investors in this funding round?

Potential investors may include venture capital firms, private equity investors, tech conglomerates, and possibly sovereign wealth funds. They are likely to be those interested in the AI landscape and looking to capitalize on its transformative potential.

FAQ 5:

What impact could this funding have on the AI landscape?

If successful, this funding could significantly enhance OpenAI’s resources and capabilities, potentially leading to breakthroughs in AI technology. This may spur competition and innovation across the industry, accelerating the pace of AI development and its integration into various sectors.


Lovable, the Vibe-Coding Startup, Secures $330M at a $6.6B Valuation


Lovable Achieves Remarkable Valuation Surge in Just Five Months

Swedish vibe-coding startup Lovable has more than tripled its valuation in just five months.

Massive Funding Boost: $330 Million Series B Round

Stockholm-based Lovable announced on Thursday a Series B funding round totaling $330 million, led by CapitalG and Menlo Ventures, bringing its valuation to $6.6 billion. Notable participants included Khosla Ventures, Salesforce Ventures, and Databricks Ventures.

Rapid Growth Following Series A Success

The funding comes just months after Lovable raised $200 million in a Series A round that valued the startup at $1.8 billion in July.

Innovative Vibe-Coding Technology Driving Success

Lovable, which capitalized swiftly on the AI trend, offers a "vibe-coding" tool that lets users generate code and build complete applications from simple text prompts. Launched in 2024, the company reached the $100 million ARR milestone within just eight months and surpassed $200 million in annual recurring revenue only four months later.

Major Clients and Impressive Project Volume

Lovable counts industry leaders such as Klarna, Uber, and Zendesk among its clients. The platform sees more than 100,000 new projects created daily, with over 25 million projects built in its first year.

Future Plans Fueled by New Funding

The latest round will support Lovable's efforts to deepen integrations with third-party applications, expand enterprise-level features, and strengthen the platform infrastructure, including databases, payments, and hosting, needed to build robust applications and services.

Staying Rooted in Europe: A Strategic Decision

At the recent Slush conference in Helsinki, co-founder and CEO Anton Osika emphasized his decision to keep Lovable in Europe despite investor pressure to move to Silicon Valley. He said, "I [can] sit here now and say, 'Look, guys, you can build a global AI company from this country.'"

Addressing Tax Compliance Issues

In November, Lovable faced scrutiny for not paying VAT, a standard tax in the European Union. In a LinkedIn post, Osika acknowledged the oversight and said the company would resolve it, countering criticism that such tax issues hinder high-growth startups in the EU.

The Hot Trend of Vibe Coding in Venture Capital

Vibe coding continues to attract significant investment from VCs. Cursor, a competing vibe-coding startup, raised $2.3 billion in November at a $29.3 billion valuation, doubling its valuation within the year.

TechCrunch has reached out to Lovable for additional comment.


Frequently Asked Questions

FAQ 1: What is Lovable’s primary focus as a startup?

Answer: Lovable is a Stockholm-based vibe-coding startup. Its tool lets users generate code and build complete applications from simple text prompts, without traditional hand-coding.

FAQ 2: How much funding has Lovable recently raised?

Answer: Lovable has raised $330 million in its latest funding round.

FAQ 3: What is Lovable’s current valuation?

Answer: After the recent funding round, Lovable’s valuation has reached $6.6 billion.

FAQ 4: Who are some of Lovable’s investors in this funding round?

Answer: The Series B round was led by CapitalG and Menlo Ventures, with participation from Khosla Ventures, Salesforce Ventures, and Databricks Ventures.

FAQ 5: How will Lovable use the funds from this fundraising round?

Answer: Lovable plans to use the new funding to deepen integrations with third-party applications, expand enterprise-level features, and strengthen its platform infrastructure, including databases, payments, and hosting.


Skana Robotics Enhances Communication Between Underwater Robot Fleets

Revolutionizing Underwater Communications: Skana Robotics’ Innovative Approach

The realm of underwater defense operations stands to benefit immensely from autonomous vessels and robots. Historically, submersibles have struggled with long-distance communication, often needing to surface—a risky move. Skana Robotics is poised to change the game with groundbreaking AI-driven underwater communication technologies.

A Breakthrough in Underwater AI Communications

Skana Robotics is making waves in the defense sector with a new underwater communication capability, built on AI that diverges from traditional large language models.

Introducing SeaSphere: A New Era of Fleet Management

Based in Tel Aviv, Skana has enhanced its SeaSphere fleet management software to enable underwater communication among vessels over long distances. This innovative system allows robots to share critical data and adapt their tasks based on received information while collaboratively pursuing a unified mission.

Addressing Communication Challenges in Multi-Vessel Operations

“Effective communication among vessels is one of the primary obstacles in deploying multi-domain operations,” says Idan Levy, co-founder and CEO of Skana Robotics. “Our solution focuses on the practical deployment of numerous unmanned vessels, ensuring seamless data sharing and communication both above and below the water.”

The Science Behind the Breakthrough

Led by AI expert Teddy Lazebnik from the University of Haifa, the research team utilized older, mathematically-driven AI algorithms to create their advanced decision-making system. Lazebnik explained, “While these algorithms may be less predictable, they provide superior performance, explainability, and general applicability.”

Skana Robotics: A Focused Mission Amidst Rising Threats

Founded in 2024 and emerging from stealth mode earlier this year, Skana aims to cater to governments and companies in Europe, particularly given escalating maritime threats due to the ongoing conflict between Russia and Ukraine.

Future Endeavors: Government Contracts and Commercial Launch

Levy revealed that the company is negotiating a significant government contract expected to be finalized by year-end. In 2026, they plan to launch a commercial version of their technology and demonstrate its real-world capabilities.

Proving Their Concept: A Call to Action for Military Leaders

“We’re prepared to demonstrate our capacity to execute complex maritime operations,” Lazebnik stated. “We invite military leaders in the EU to witness the efficacy of our technology firsthand and assess our results.”

Frequently Asked Questions

FAQ 1: What is Skana Robotics?

Answer: Skana Robotics is a technology company specializing in communication systems for fleets of underwater robots. Its solutions enable seamless communication between autonomous underwater vehicles (AUVs), enhancing their coordination and efficiency in various marine tasks.


FAQ 2: How do your underwater robots communicate with each other?

Answer: The underwater robots use acoustic communication protocols to send and receive messages underwater. This enables reliable data exchange in environments where traditional radio frequencies cannot penetrate, facilitating real-time collaboration among the robots.


FAQ 3: What are the primary applications of Skana Robotics’ technology?

Answer: The communication solutions are designed for applications including environmental monitoring, underwater exploration, submarine maintenance, and marine research. By enabling effective communication, the technology expands what fleets of underwater robots can accomplish in complex underwater tasks.


FAQ 4: How does Skana Robotics ensure the reliability of underwater communication?

Answer: The systems implement robust error-checking algorithms and redundancy measures to maintain high reliability in underwater communication. They are designed to mitigate the effects of signal degradation caused by water conditions, so that messages are accurately transmitted and received.
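As a concrete illustration of the "error checking plus redundancy" pattern described above, here is a minimal sketch of framing a message with a CRC32 checksum and retransmitting when verification fails over a simulated noisy link. This is a generic reliability technique, not Skana's actual protocol; the frame layout, retry policy, and corruption model are all assumptions.

```python
# Generic "checksum + retransmit" sketch for unreliable links (not Skana's protocol).
import random
import struct
import zlib

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length and append a CRC32 checksum."""
    return struct.pack(">H", len(payload)) + payload + struct.pack(">I", zlib.crc32(payload))

def parse(frame_bytes: bytes) -> bytes | None:
    """Return the payload if the frame is well-formed and the checksum matches."""
    try:
        (length,) = struct.unpack(">H", frame_bytes[:2])
        payload = frame_bytes[2:2 + length]
        if len(payload) != length:
            return None
        (crc,) = struct.unpack(">I", frame_bytes[2 + length:2 + length + 4])
        return payload if zlib.crc32(payload) == crc else None
    except struct.error:
        return None

def noisy_channel(data: bytes, corrupt_rate: float = 0.3) -> bytes:
    """Simulate corruption by occasionally flipping one byte."""
    if data and random.random() < corrupt_rate:
        i = random.randrange(len(data))
        data = data[:i] + bytes([data[i] ^ 0xFF]) + data[i + 1:]
    return data

def send_reliably(payload: bytes, max_retries: int = 5) -> bytes | None:
    """Retransmit until the receiver verifies the checksum or retries run out."""
    for _ in range(max_retries):
        received = parse(noisy_channel(frame(payload)))
        if received is not None:
            return received
    return None

print(send_reliably(b"waypoint: 59.91N 10.75E"))
```

Real acoustic links also contend with very low bandwidth and long propagation delays, which is why production protocols add forward error correction and scheduling on top of simple retransmission.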


FAQ 5: Can your technology be integrated with existing underwater robot systems?

Answer: Yes, the communication solutions are designed to be compatible with a wide range of existing underwater robot systems. Skana provides integration support and consultation so the technology can enhance a current fleet's operational capabilities without extensive modifications.


Adobe Firefly Introduces Prompt-Based Video Editing and Expands Third-Party Model Support

Adobe Firefly Revolutionizes AI Video Editing with New Features

Adobe is enhancing its AI video-generation platform, Firefly, by introducing an innovative video editor that facilitates precise prompt-based edits. This update also incorporates new third-party models for image and video generation, notably Black Forest Labs’ FLUX.2 and Topaz Astra.

Streamlined Editing with Prompt-Based Controls

Previously, if any part of a generated clip fell short, Firefly's only option was to recreate the whole clip. The newly launched editor accepts text prompts to refine individual video elements, adjusting colors, camera angles, and more, and users can manipulate frames, audio, and other features through an intuitive timeline view.

Introducing New Models and Features

Initially announced in October as a private beta, the video editor is now accessible to all users. With the integration of Runway’s Aleph model, creators can provide specific instructions such as “Change the sky to overcast and lower the contrast” or “Zoom in slightly on the main subject.”
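To make "prompt-based editing" concrete, here is a small, hypothetical sketch of how such an edit instruction could be represented and queued in client code. Adobe has not published the interface described in this article, so the EditRequest type, the field names, and the placeholder model ID below are assumptions for illustration only, not Adobe Firefly's actual API.

```python
# Hypothetical sketch of structuring prompt-based edit requests (not Adobe's API).
from dataclasses import dataclass, field

@dataclass
class EditRequest:
    clip_id: str
    prompt: str                          # natural-language instruction
    model: str = "example-video-model"   # placeholder, not a real model ID
    params: dict = field(default_factory=dict)

    def to_payload(self) -> dict:
        """Serialize the request as the JSON body a prompt-editing service might expect."""
        return {"clip": self.clip_id, "instruction": self.prompt,
                "model": self.model, **self.params}

edits = [
    EditRequest("clip-001", "Change the sky to overcast and lower the contrast"),
    EditRequest("clip-001", "Zoom in slightly on the main subject", params={"strength": 0.4}),
]
for edit in edits:
    print(edit.to_payload())
```

The point is only that each edit is a targeted instruction against an existing clip rather than a prompt that regenerates the clip from scratch.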

Advanced Camera Manipulation and Upscaling Capabilities

Users can leverage Adobe's own Firefly Video model by uploading a starting frame and a reference video to recreate desired camera angles. Topaz Labs' Astra model enables video upscaling to 1080p or 4K, while FLUX.2 will enhance image generation in the app. A collaborative boards feature is also slated to arrive soon.

Immediate Availability and Future Releases

FLUX.2 is now available across all platforms within Firefly, with Adobe Express users gaining access starting in January. Adobe seeks to engage more users by continually improving Firefly amid competition in image and video generation tools.

Special Offers for Firefly Subscribers

To attract users, Adobe will provide unlimited generations from all image models, as well as the Firefly Video Model, to subscribers of Firefly Pro, Firefly Premium, and certain credit plans until January 15.

A Year of Transformative Updates

Adobe has significantly revamped Firefly this year, launching subscriptions for varied image and video generation levels, followed by a new Firefly web app and mobile applications, alongside enhanced support for additional third-party models.

Frequently Asked Questions

FAQ 1: What is prompt-based video editing in Adobe Firefly?

Answer: Prompt-based video editing allows users to generate and modify video content using natural language prompts. This means you can describe what you want to see in the video, and Adobe Firefly will assist in creating or editing the footage accordingly.

FAQ 2: How does the addition of third-party models enhance Adobe Firefly’s capabilities?

Answer: The integration of third-party models expands the range of creative possibilities by allowing users access to diverse AI tools and resources. This helps in generating more customized and varied content tailored to specific needs or styles.

FAQ 3: What types of video editing tasks can I perform using Adobe Firefly’s prompt-based features?

Answer: You can perform a variety of tasks, including scene alterations, adding visual effects, color grading, and more—simply by inputting descriptive prompts. This aims to streamline the editing process and make it more intuitive.

FAQ 4: Is there a learning curve for using these new features in Adobe Firefly?

Answer: While there might be an initial adjustment period, Adobe Firefly is designed with user-friendliness in mind. Many find that using prompts simplifies tasks, making it accessible even for those with limited video editing experience.

FAQ 5: Are there any additional costs associated with using third-party models in Adobe Firefly?

Answer: The pricing structure for using third-party models may vary based on the specific models or services being utilized. It’s best to check Adobe’s official documentation or pricing page for the latest information on any additional costs.


Nvidia Considers Increasing H200 Production to Address Rising Demand in China

Nvidia Gains Approval to Sell H200 Chips in China Amid Surge in Demand

Nvidia has successfully lobbied the Trump administration for approval to sell its H200 chips in China. Reports suggest the company is now poised to increase production to meet rising orders from Chinese firms, according to Reuters sources.

H200 Chips: A Game Changer for AI Training

The H200 chips, Nvidia's most advanced offering from the Hopper GPU generation, were previously barred from sale in China under the Biden administration's export limits on top-tier AI chips. A recent decision by the Department of Commerce has cleared the way for Nvidia to sell H200 GPUs in China, with Nvidia reportedly remitting a 25% cut of those sales to the US government.

Chinese Demand Sparks Potential Production Expansion

Faced with remarkable demand from Chinese tech companies, Nvidia is contemplating increasing its production capacity, as reported by Reuters. Nevertheless, Chinese authorities are still deliberating on permitting the import of the H200 chips, which boast significantly enhanced capabilities compared to the previous H20 models designed for the Chinese market.

Opportunities and Challenges in the Race for AI Development

For Nvidia, ramping up H200 chip production would enable the company to capitalize on the untapped demand in China, a nation eager to develop its own AI chip capabilities. Rising competition and national security anxieties in Western countries have restricted access to the latest high-performance hardware essential for AI training, prompting Chinese firms to prioritize efficiency over sheer scale.

Key Players Eager to Secure H200 Chip Orders

Major Chinese companies, including Alibaba and ByteDance, are already engaging with Nvidia to secure substantial orders for the H200 chips, which are currently in limited supply, the report indicates.

Nvidia has not yet responded to requests for comment.

Frequently Asked Questions

FAQ 1: What is the H200?

Answer: The H200 is Nvidia's most advanced GPU from the Hopper generation, designed for data centers, AI training, and advanced computing tasks. It offers significant improvements in processing power and memory capacity, making it suitable for a wide range of applications, including machine learning and data analytics.

FAQ 2: Why is Nvidia increasing H200 production for China?

Answer: Nvidia is reportedly weighing an increase in H200 production to meet surging demand from the Chinese market. As companies in China invest heavily in AI and data center technologies, Nvidia wants to ensure its products are available to meet that growing need.

FAQ 3: How does this increase in production affect prices?

Answer: While an increase in production generally aims to stabilize or lower prices by meeting demand, other factors such as global supply chain issues, manufacturing costs, and trade regulations may also affect pricing. Therefore, it’s unclear if prices will drop as a direct result of increased H200 production.

FAQ 4: When can we expect the increased production to reflect in the market?

Answer: The timeline for increased production typically depends on multiple factors, including manufacturing capacity and logistical considerations. Analysts suggest that significant changes may become evident within a few months, but specific timelines can vary.

FAQ 5: Will these changes impact Nvidia’s other products?

Answer: While the focus on increasing H200 production primarily addresses current demand, it may also affect Nvidia’s overall production strategy. Resources and attention may shift, potentially influencing the availability or development timelines of other products in the Nvidia lineup.


AI Data Center Surge May Spell Trouble for Other Infrastructure Initiatives

Data Center Boom Threatens Infrastructure Development

Accelerating data center construction may jeopardize crucial improvements to roads, bridges, and other vital infrastructure, as highlighted by a recent Bloomberg report.

Record Debt Sales to Fund Infrastructure Projects

In 2025, state and local governments achieved record debt sales for the second consecutive year, with projections indicating an additional $600 billion in sales slated for the upcoming year. The majority of these funds are earmarked for infrastructure enhancements.

Private Sector Spending on Data Centers Surges

According to Census Bureau data, private spending on data center construction is currently running at an annualized rate of more than $41 billion, roughly matching what state and local governments spend on transportation initiatives.

Labor Shortages Complicate Construction Efforts

These competing construction projects are likely to contend for a limited pool of workers, particularly as the industry grapples with labor shortages caused by retirements and the effects of President Trump’s immigration policies.

Expert Insight: Data Centers Detract from Infrastructure Progress

Andrew Anagnost, CEO of Autodesk, emphasized to Bloomberg that there is “absolutely no doubt” that the surge in data center construction is diverting resources away from essential infrastructure projects. He remarked, “I guarantee you a lot of those [infrastructure] projects are not going to move as fast as people want.”

FAQ: AI Data Center Boom Impact on Infrastructure Projects

1. What is the AI data center boom?

The AI data center boom refers to the rapid growth and expansion of data centers specifically designed to support artificial intelligence applications. This trend has been driven by increasing demand for AI processing power, necessitating more robust infrastructure and energy resources.


2. Why could the growth of AI data centers negatively affect other infrastructure projects?

The massive energy and resource demands of AI data centers can divert attention and funding away from other critical infrastructure projects. As governments and companies prioritize data center construction, investments in roads, bridges, and public utilities may be delayed or scaled back.


3. What specific infrastructure projects could be impacted?

Key infrastructure projects that could face delays include renewable energy installations, public transportation enhancements, and water supply systems. Since AI data centers require significant electricity and cooling resources, existing infrastructure may struggle to accommodate these demands without significant upgrades.


4. Are there any environmental concerns associated with the AI data center boom?

Yes, the increased energy consumption from AI data centers raises concerns about carbon emissions and resource depletion. Many regions lack the necessary infrastructure to support this growth sustainably, leading to potential environmental degradation and increased energy costs for local communities.


5. What can be done to mitigate the negative effects on infrastructure from the AI data center expansion?

To alleviate the issues, comprehensive planning is essential. Policymakers and industry leaders should collaborate on sustainable energy solutions, prioritize balanced investment across infrastructure sectors, and assess the long-term impacts of AI data centers on community resources to ensure equitable development.


What’s Happening with LinkedIn’s Algorithm?

LinkedIn Gender Experiment Raises Questions About Algorithm Bias

In November, a product strategist we'll call Michelle (a pseudonym) logged into her LinkedIn profile, switched her gender to male, and changed her name to Michael. She was taking part in an experiment called #WearthePants, designed to probe potential bias against women in LinkedIn's algorithm.

The #WearthePants Experiment: Testing Algorithmic Bias

Michelle joined a growing number of women seeking to verify claims of algorithmic bias on LinkedIn. The test came in response to frequent users reporting decreased engagement and visibility on their posts, coinciding with recent algorithm updates.

Engagement Imbalances: A Closer Look

Michelle has more than 10,000 followers and ghostwrites for her husband, whose profile has about 2,000. Despite the follower disparity, both received similar engagement. "The only significant variable was gender," she pointed out.

User Reports of Gender-Based Visibility Changes

Users like Marilynn Joyner also noted stark differences after changing their gender on LinkedIn. After switching to male, she reported a 238% increase in post impressions within a single day. Many others echoed the pattern, fueling discussion of gender bias on the platform.

The Response from LinkedIn: No Bias, No Problem?

In response to these claims, LinkedIn said its algorithms do not use demographic information to determine visibility in users' feeds. Experts note, however, that implicit biases can still persist within such systems.

Understanding the Algorithm: Complexity and Bias

According to data ethics consultant Brandeis Marshall, LinkedIn's algorithms are complex and may inadvertently favor certain communication styles, which makes it difficult to pinpoint specific causes for visibility variations.

Roots of the #WearthePants Movement

The #WearthePants initiative originated with entrepreneurs Cindy Gallop and Jane Evans, who questioned whether gender was influencing engagement levels. By having men post similar content, they highlighted stark discrepancies in reach.

Algorithmic Transparency: A Call to Action

While some participants demand accountability from LinkedIn over potential bias, the company's secrecy about how the algorithm works complicates the issue. Marshall emphasizes the need for platforms like LinkedIn to address biases that can stem from how their AI systems are trained.

User Experiences: Mixed Reviews on Engagement

Many users across genders express frustration with the new algorithm. Some see increased impressions, while others struggle to reach the engagement levels they saw before the changes.

The Search for Clarity and Fairness

The algorithm's opaque nature leaves users confused and seeking answers. "I want transparency," Michelle said, capturing the broader demand for accountability from social media platforms.


Frequently Asked Questions

FAQ 1: What does LinkedIn’s algorithm prioritize in user feeds?

Answer: LinkedIn’s algorithm prioritizes content that encourages engagement, such as likes, comments, and shares. It looks for posts that are relevant to your interests, industry, and connections, promoting high-quality, meaningful interactions over irrelevant content.
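For illustration, a toy engagement-weighted ranking score is sketched below. LinkedIn's real ranking system is proprietary and far more complex; the features, weights, and numbers here are invented solely to show what "prioritizing meaningful engagement and relevance" can mean in code.

```python
# Toy example only: LinkedIn's actual feed ranking is proprietary.
# The features and weights below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    comments: int
    reactions: int
    shares: int
    topic_match: float  # 0..1: how well the post matches the viewer's interests

def score(post: Post) -> float:
    """Weight conversation-driving signals (comments, shares) above passive reactions."""
    engagement = 3.0 * post.comments + 1.0 * post.reactions + 2.0 * post.shares
    return engagement * (0.5 + post.topic_match)  # relevance scales the whole score

feed = [
    Post("a", comments=12, reactions=40, shares=2, topic_match=0.9),
    Post("b", comments=1, reactions=60, shares=0, topic_match=0.2),
]
for post in sorted(feed, key=score, reverse=True):
    print(post.author, round(score(post), 1))  # "a" ranks first despite fewer reactions
```

In this toy scoring, post "a" outranks "b" because comments and topic relevance count for more than raw reaction volume, which is the behavior the FAQ above describes in prose.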

FAQ 2: How can I improve the visibility of my posts on LinkedIn?

Answer: To enhance the visibility of your posts, focus on creating engaging, original content that sparks conversation. Use relevant hashtags, tag connections, and post during peak hours when your audience is most active. Consistent interaction with your network also boosts your overall visibility.

FAQ 3: Are videos prioritized over text posts?

Answer: Yes, the algorithm tends to favor video content, as it often generates higher engagement rates. Incorporating video into your LinkedIn strategy can help attract more views and interactions compared to traditional text posts or images.

FAQ 4: Does commenting on others’ posts affect my own reach?

Answer: Absolutely! Engaging with others’ posts can expand your network and enhance your own visibility. When you comment on posts, your name is visible to the original poster’s connections, potentially increasing your reach and encouraging reciprocal engagement.

FAQ 5: How does LinkedIn determine what’s “high-quality” content?

Answer: LinkedIn assesses content quality based on user engagement metrics, relevance, and whether it fosters conversation. Posts that result in meaningful discussions, high interaction rates, and positive feedback from users are considered high-quality and are more likely to be promoted in feeds.


Google Unveils Its Most Advanced AI Research Agent on the Same Day OpenAI Releases GPT-5.2

Google Unveils Enhanced Gemini Deep Research Agent Powered by Gemini 3 Pro

On Thursday, Google unveiled a revamped version of its research agent, Gemini Deep Research, now enhanced with the cutting-edge Gemini 3 Pro foundation model.

Empowering Developers with New Research Capabilities

This updated agent goes beyond generating research reports to allow developers to integrate Google’s state-of-the-art research functionalities into their own applications through the new Interactions API. This innovation marks a significant advancement in the evolving landscape of agentic AI.

Versatile Solutions for Diverse Applications

The latest Gemini Deep Research tool is adept at synthesizing vast amounts of data, capable of managing substantial context within prompts. Google highlights its use for a variety of purposes, including due diligence and drug toxicity investigations.

Integrating AI Into Everyday Services

Google plans to weave this new deep research agent into key platforms, including Google Search, Google Finance, Gemini App, and its widely utilized NotebookLM. This strategy anticipates a future where AI agents will handle information queries, reducing the need for users to search online themselves.

Minimizing AI Hallucinations for Enhanced Accuracy

The Deep Research tool benefits significantly from Gemini 3 Pro’s status as the “most factual” model, specifically designed to reduce hallucinations, a pressing issue during complex, long-term reasoning tasks.

New Benchmark: DeepSearchQA

To validate its capabilities, Google introduced the DeepSearchQA benchmark, tailored for evaluating agents on intricate, multi-step information-seeking tasks, which has been made open source for broader community use.

Performance Comparisons with Other Leading AI

Additionally, Google tested Deep Research on the intriguingly named Humanity's Last Exam and on the BrowserComp benchmark. While Google's new agent led on its own benchmark and on Humanity's Last Exam, OpenAI's ChatGPT 5 Pro proved a robust competitor, slightly outperforming Google on BrowserComp.

Rivalry Heating Up: OpenAI Launches GPT 5.2

The benchmark announcements from Google coincided with OpenAI’s release of the much-anticipated GPT 5.2, codenamed Garlic. OpenAI posits that its latest model outperforms competitors in crucial benchmark tests, including its own.

Strategic Timing for AI Announcements

The timing of Google’s announcement seems strategic, as it aims to capture attention amidst the buzz surrounding OpenAI’s Garlic, highlighting its commitment to innovation in AI technologies.

Frequently Asked Questions

FAQ 1: What is Google’s new AI research agent?

Answer: Google's new research agent is an upgraded version of Gemini Deep Research, built on the Gemini 3 Pro foundation model. It synthesizes large amounts of information into research reports, handles substantial context within prompts, and can be embedded in other applications through the new Interactions API.

FAQ 2: How does this release compare to OpenAI’s GPT-5.2?

Answer: While both Google’s new AI agent and OpenAI’s GPT-5.2 push the boundaries of natural language processing, they may differ in specific capabilities, underlying architecture, and intended use cases. Google’s model is designed to enhance interactive and contextual understanding, while GPT-5.2 focuses on refining conversational flow and accuracy.

FAQ 3: What are the potential applications of Google’s AI research agent?

Answer: Google’s AI research agent can be applied in various fields, including customer service, content creation, coding assistance, and educational tools. Its advanced capabilities are aimed at improving user interactions, delivering personalized experiences, and aiding researchers in data analysis.

FAQ 4: Are there any ethical concerns associated with these AI advancements?

Answer: Yes, with the advancement of AI technology comes ethical considerations, including bias in algorithms, privacy concerns, and potential job displacement. Both Google and OpenAI emphasize the importance of developing these technologies responsibly and are actively working on guidelines to address these issues.

FAQ 5: How can users access Google’s new AI research agent?

Answer: Google plans to integrate the new research agent into existing products such as Google Search, Google Finance, the Gemini app, and NotebookLM, and developers will be able to build on it through the new Interactions API, though specific access details have not been fully announced.


Interest in Spoor’s AI Software for Bird Monitoring is Skyrocketing

Spoor: Pioneering Computer Vision Technology to Protect Bird Populations from Wind Turbines

Spoor, founded in 2021, aims to harness computer vision to mitigate the impact of wind turbines on local bird species. The startup has validated its technology and is witnessing increasing demand from wind farms and other sectors.

Innovative Technology Tracking Bird Migration Patterns

Based in Oslo, Norway, Spoor has developed advanced software that utilizes computer vision to monitor and identify bird populations and their migration routes. This technology can detect birds within a 2.5-kilometer range (approximately 1.5 miles) and is compatible with any standard high-resolution camera.
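For a sense of what camera-based monitoring involves at its simplest, here is a minimal motion-detection loop using OpenCV background subtraction on a standard video feed. This is a generic illustration under assumed thresholds and a hypothetical video file name; Spoor's production detection and species-identification models are not described in this article and would differ substantially.

```python
# Minimal motion-based detection sketch using OpenCV (illustrative only; not Spoor's system).
import cv2

def detect_moving_objects(video_source="sky_camera.mp4", min_area=150):
    """Yield each frame plus bounding boxes of moving objects (e.g., bird candidates)."""
    capture = cv2.VideoCapture(video_source)   # webcam index or video file path
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)         # foreground mask = moving pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
        yield frame, boxes                     # downstream: classify species, track, log
    capture.release()

if __name__ == "__main__":
    for frame, boxes in detect_moving_objects("sky_camera.mp4"):  # hypothetical file
        print(f"{len(boxes)} candidate detections in this frame")
```

A real system would follow such detections with species classification, distance estimation, and multi-frame tracking to build the migration-pattern data the article describes.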

Enhancing Wind Farm Operations through Smart Data

Wind farm operators can leverage this crucial information to strategically determine wind farm placements and better understand migration patterns. They can even adjust turbine operations—slowing or stopping them—during peak migration times to protect bird populations.

A Passion for Conservation: Insights from Spoor’s Co-Founder

Ask Helseth, co-founder and CEO of Spoor, shared his motivation with TechCrunch last year, citing the lack of effective tracking tools in the wind farm industry despite stringent regulations governing their operations and their impact on local avifauna.

“Regulators are raising expectations, yet the industry lacks adequate solutions,” Helseth noted. “Many field assessments still rely on binoculars and trained dogs to tally bird collisions with turbines.”

Expanded Capabilities: From 1 km to 2.5 km

Since its seed funding round in 2024, Spoor has extended its tracking range from 1 kilometer to 2.5 kilometers. That expansion, along with the influx of data feeding its AI algorithms, has pushed bird-identification accuracy to a remarkable 96%.

Collaborations and Future Expansions

Spoor now operates on three continents and collaborates with over 20 leading energy companies worldwide. The startup is also generating interest from diverse sectors, including airports and aquaculture, and has partnered with mining giant Rio Tinto to monitor bat populations.

Exploring New Opportunities with Caution

While there is growing curiosity about using Spoor’s technology to track other small objects, such as drones, Helseth humorously noted, “Drones, in our view, are like plastic birds. They possess a different movement pattern and structure, so we are currently not focusing on collecting that data.”

Funding and Future Aspirations

Spoor successfully secured an €8 million ($9.3 million) Series A funding round led by SET Ventures, with contributions from Ørsted Ventures, Superorganism, and other strategic investors.

Helseth anticipates a growing demand for this kind of technology as regulatory scrutiny intensifies. Recently, French authorities shut down a wind farm due to its adverse effects on local bird species, imposing hefty fines in the process.

“Our mission is to ensure harmony between industry and nature,” Helseth concluded. “While we’ve made strides, we remain a small startup with much to demonstrate. Our goal is to solidify our status in the wind sector and to evolve as a global leader in addressing these pressing challenges, while also showing the wider value of our technology.”

Frequently Asked Questions

FAQ 1: What is Spoor’s bird monitoring AI software?

Answer: Spoor’s bird monitoring AI software utilizes advanced artificial intelligence algorithms to analyze bird populations, track species movement, and gather data on their habitats. This software assists researchers and conservationists in understanding avian behaviors and making informed decisions for species protection.


FAQ 2: How does Spoor’s AI software improve bird monitoring?

Answer: The software enhances bird monitoring by automating data collection and analysis, allowing for more efficient tracking of various species. It can process vast amounts of audio and visual data, identify species, and detect behavioral patterns, significantly reducing the time and manpower needed for traditional monitoring methods.


FAQ 3: Who can benefit from using Spoor’s bird monitoring AI software?

Answer: The software is beneficial for researchers, conservation organizations, wildlife enthusiasts, and educators. It serves as a valuable tool for anyone interested in avian studies, habitat conservation, and biodiversity assessment.


FAQ 4: Is Spoor’s bird monitoring AI software user-friendly?

Answer: Yes, Spoor’s software is designed with user-friendliness in mind. It features an intuitive interface that requires minimal technical expertise. Comprehensive tutorials and support resources are also available to assist users in navigating the software effectively.


FAQ 5: How can I get started with Spoor’s bird monitoring AI software?

Answer: To get started, visit Spoor’s official website to request a demo or download the software. You may also explore subscription options suitable for individual or organizational use. Getting in touch with their support team can provide further guidance on implementation and features.
