U.S. and Indian Venture Capitalists Join Forces in a $1B+ Initiative to Support India’s Deep Tech Startups

Groundbreaking Alliance Forms to Boost India’s Deep Tech Startups

A coalition of eight prominent U.S. and Indian venture capital and private equity firms, including Accel, Blume Ventures, Celesta Capital, and Premji Invest, has joined forces to invest over $1 billion in India’s deep tech startups over the next decade, enhancing U.S.-India tech collaboration.

Tackling Funding Concerns in India’s Startup Ecosystem

The alliance responds to a funding debate sparked by Indian Commerce Minister Piyush Goyal, who drew backlash after criticizing local startups for focusing on food delivery services rather than deeper innovation. Founders countered that capital for deep tech ventures is scarce in India. The coalition aims to address that gap by channeling long-term private investment into technologies that have historically struggled to attract funding.

Unprecedented Collaboration Among Investors

The newly formed India Deep Tech Investment Alliance is notable because it formally unites investors who traditionally compete for deals. While collaboration typically happens on a case-by-case basis, this group is committed to pooling resources and efforts under a unified banner.

Members Commit to Long-Term Investment

The alliance, consisting of Celesta Capital, Accel, Blume Ventures, Gaja Capital, Ideaspring Capital, Premji Invest, Tenacity Ventures, and Venture Catalysts, announced its formation shortly after the Indian government approved a ₹1 trillion (approximately $11 billion) Research, Development, and Innovation (RDI) scheme aimed at promoting deep tech R&D.

Strategic Focus on Indian-Domiciled Startups

Each member of the alliance will commit private capital over the next five to ten years to support deep tech startups domiciled in India. Many notable deep tech companies with Indian founders are currently incorporated in the U.S., and the new RDI scheme requires local incorporation, a condition the coalition intends to work within by directing its funding toward Indian-domiciled companies.

Providing Mentorship and Expanding Networks

Beyond funding, the alliance plans to offer mentorship and networking opportunities to startups, while also assisting portfolio companies with their expansion into the Indian market.

Navigating Geopolitical Challenges

Despite the complex geopolitical landscape, including recent tensions between the U.S. and India, the alliance is optimistic about India’s potential as a startup hub for foundational technologies like AI, semiconductors, and biotech.

Investment Opportunities for U.S. Companies

“India presents a particularly compelling market, not only for local companies but also for U.S. firms looking to expand,” noted Sriram Vishwanathan, founding managing partner at Celesta Capital, highlighting the alliance’s goal to invigorate the Indian startup ecosystem.

Focusing on Early-Stage Startups

The alliance’s initial focus will be on early-stage startups, from seed to Series B funding, with an eye on attracting further participation from both VC and private equity firms in the future.

Engagement with Government Policies

Members of the alliance intend to engage proactively with the Indian government to advocate for favorable policies, aiming to create a unified voice to support industry interests while adhering to RDI conditions.

Potential Risks and Rewards

While the collaborative effort is positioned as a boon for the deep tech ecosystem, there is an inherent risk that poor coordination among the member firms could leave startups without the support they are counting on. Nevertheless, optimism remains high about India’s ability to produce transformative technologies over the next decade.

“The future is bright: ambition, talent, and patient capital are converging to transform the Indian startup landscape,” stated Accel partner Anand Daniel.

Frequently Asked Questions

FAQ 1: What is the purpose of the $1B+ alliance between U.S. and Indian VCs?

Answer: The alliance aims to fund and support India’s deep tech startups, fostering innovation and growth in sectors such as artificial intelligence, robotics, 5G, and biotechnology. By pooling resources and expertise, the VCs intend to accelerate the development of cutting-edge technologies in India.


FAQ 2: Which specific sectors will the alliance focus on?

Answer: The alliance will primarily concentrate on deep tech sectors, including artificial intelligence, machine learning, robotics, 5G communications, biotechnology, and other advanced technologies that have the potential for significant impact and scalability.


FAQ 3: How will this funding impact Indian startups?

Answer: The partnership is expected to provide significant financial resources, mentorship, and access to global markets, enabling Indian startups to scale their operations, innovate rapidly, and compete on an international level. This could lead to job creation and technological advancements within India.


FAQ 4: Are there any eligibility criteria for startups to secure funding from this alliance?

Answer: While specific criteria may vary, startups typically need to demonstrate innovative technology, scalability potential, a strong business model, and a capable management team. Startups will likely need to apply through designated channels or partners associated with the alliance.


FAQ 5: How can startups apply for funding through this alliance?

Answer: Startups interested in funding from this alliance should prepare a comprehensive business plan and proposal. They can monitor announcements from the participating VCs for application procedures, investment windows, and specific criteria. Networking at industry events and utilizing platforms connected to the alliance may also enhance visibility to potential investors.


Assessing the Effectiveness of AI Agents in Genuine Research: A Deep Dive into the Research Bench Report

Unleashing the Power of Large Language Models for Deep Research

As large language models (LLMs) continue to advance, their role as research assistants is growing more significant. These models are moving beyond simple factual lookups into “deep research” tasks, which demand multi-step reasoning, evaluation of conflicting information, sourcing of data from across the web, and synthesis of that information into coherent outputs.

This emerging capability is marketed under various brand names by leading labs—OpenAI terms it “Deep Research,” Anthropic refers to it as “Extended Thinking,” Google’s Gemini offers “Search + Pro” features, and Perplexity calls theirs “Pro Search” or “Deep Research.” But how effective are these models in real-world applications? A recent report from FutureSearch, titled Deep Research Bench (DRB): Evaluating Web Research Agents, delivers a comprehensive evaluation, showcasing both remarkable abilities and notable shortcomings.

What Is Deep Research Bench?

Developed by the FutureSearch team, Deep Research Bench is a meticulously designed benchmark that assesses AI agents on multi-step, web-based research tasks. These are not simple inquiries but reflect the complex, open-ended challenges faced by analysts, policymakers, and researchers in real-world situations.

The benchmark comprises 89 distinct tasks across eight categories, including:

  • Find Number: e.g., “How many FDA Class II medical device recalls occurred?”
  • Validate Claim: e.g., “Is ChatGPT 10x more energy-intensive than Google Search?”
  • Compile Dataset: e.g., “Job trends for US software developers from 2019–2023.”

Each task is carefully crafted with human-verified answers, utilizing a frozen dataset of scraped web pages termed RetroSearch. This approach ensures consistency across model evaluations, eliminating the variable nature of the live web.
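
To make the setup concrete, here is a minimal sketch of what a DRB-style task record might look like. The schema, field names, and example values are assumptions for illustration; they are not the benchmark’s actual data format.

```python
from dataclasses import dataclass

@dataclass
class ResearchTask:
    """One web-research task with a human-verified answer (illustrative schema, not DRB's)."""
    task_id: str
    category: str          # e.g. "Find Number", "Validate Claim", "Compile Dataset"
    prompt: str            # the open-ended research question posed to the agent
    verified_answer: str   # human-checked ground truth used for scoring
    snapshot_date: str     # date of the frozen RetroSearch pages the task is pinned to

# Hypothetical example record, loosely modeled on the task types listed above.
example_task = ResearchTask(
    task_id="drb-validate-007",
    category="Validate Claim",
    prompt="Is ChatGPT 10x more energy-intensive than Google Search?",
    verified_answer="Not supported by available evidence",
    snapshot_date="2024-06-01",
)
```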

The Agent Architecture: ReAct and RetroSearch

Central to Deep Research Bench is the ReAct architecture, which stands for “Reason + Act.” This model mirrors how human researchers approach problems by contemplating the task, executing relevant searches, observing outcomes, and deciding whether to refine their approach or conclude.

While earlier models explicitly followed this loop, newer “thinking” models often embed reasoning more fluidly into their actions. To ensure evaluation consistency, DRB introduces RetroSearch—a static version of the web. Agents utilize a curated archive of web pages gathered through tools like Serper, Playwright, and ScraperAPI. For complex tasks like “Gather Evidence,” RetroSearch can offer access to over 189,000 pages, all time-stamped to ensure a reliable testing environment.
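
As a rough illustration of the loop described above, the sketch below pairs a ReAct-style reason/act cycle with a search function that reads only from a frozen, pre-scraped archive standing in for RetroSearch. The function names, prompt wording, `llm` callable, and archive format are all assumptions for illustration, not DRB’s actual implementation.

```python
from typing import Callable

# Frozen archive standing in for RetroSearch: query -> time-stamped page snippets.
ARCHIVE: dict[str, list[str]] = {
    "fda class ii recalls": ["Snapshot 2024-01-01: the FDA recall database lists ..."],
}

def static_search(query: str) -> list[str]:
    """Look the query up in the frozen archive instead of hitting the live web."""
    return ARCHIVE.get(query.lower(), [])

def react_agent(task: str, llm: Callable[[str], str], max_steps: int = 5) -> str:
    """Minimal Reason + Act loop: think, search the frozen archive, observe, then refine or answer."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Next step? Reply 'SEARCH: <query>' or 'ANSWER: <answer>'.")
        if step.startswith("ANSWER:"):
            return step.removeprefix("ANSWER:").strip()
        query = step.removeprefix("SEARCH:").strip()
        observation = static_search(query)
        transcript += f"Action: {step}\nObservation: {observation}\n"
    return "No answer reached within the step budget."
```

Freezing the archive in this way is what lets every model see identical pages, so score differences reflect the agent rather than a shifting web.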

Top Performing AI Agents

Among the models evaluated, OpenAI’s o3 stood out, scoring 0.51 out of 1.0 on the Deep Research Bench. That may seem modest, but it has to be read against the benchmark’s difficulty: because of task ambiguity and scoring nuances, even an exemplary model would likely top out around 0.8, the so-called “noise ceiling.” In other words, even today’s leading models still trail well-informed, methodical human researchers.

The evaluation’s insights are illuminating. o3 not only led the results but also demonstrated efficiency and consistency across nearly all task types. Anthropic’s Claude 3.7 Sonnet followed closely, showcasing adaptability in both its “thinking” and “non-thinking” modes. Google’s Gemini 2.5 Pro excelled in structured planning and step-by-step reasoning tasks. Interestingly, the open-weight model DeepSeek-R1 kept pace with GPT-4 Turbo, illustrating a narrowing performance gap between open and closed models.

A discernible trend emerged: newer “thinking-enabled” models consistently outperformed older iterations, while closed-source models held a marked advantage over open-weight alternatives.

Challenges Faced by AI Agents

The failure patterns identified in the Deep Research Bench report felt alarmingly familiar. I’ve often experienced the frustration of an AI agent losing context during long research or content creation sessions. As the context window fills up, the model may struggle to stay coherent: key details fade, objectives blur, and responses become disjointed or aimless. In such cases, it is often more efficient to reset the process entirely and discard the previous output.

This kind of forgetfulness isn’t merely anecdotal; it was identified as the primary predictor of failure in the evaluations. Other recurring issues include repetitive tool use (agents running the same search in a loop), poorly formulated queries, and premature conclusions that deliver only partially formed answers lacking substantive insight.

Notably, the differences among the top models were pronounced. GPT-4 Turbo, for instance, tended to forget previous steps, while DeepSeek-R1 was prone to hallucinating plausible but inaccurate information. Across the board, models frequently neglected to cross-validate sources or substantiate findings before finalizing their outputs. For anyone relying on AI for critical tasks, these shortcomings are all too familiar and underscore how far we still have to go to build agents that approach human-like thinking and research ability.

Memory-Based Performance Insights

Intriguingly, the Deep Research Bench also assessed “toolless” agents—language models that function without access to external resources, such as the web or document retrieval. These models rely exclusively on their internal information, generating responses based solely on their training data. This means they can’t verify facts or conduct online searches; instead, they form answers based purely on recollections.
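
To make the distinction concrete, the sketch below contrasts a “toolless” configuration, which answers in a single shot from whatever the model already knows, with a tool-augmented one that retrieves evidence first. The prompts, helper names, and the `llm` and `search` callables are illustrative assumptions rather than the benchmark’s actual harness.

```python
def toolless_agent(claim: str, llm) -> str:
    """Single-shot judgment from the model's internal knowledge; no retrieval allowed."""
    return llm(f"Using only what you already know, is this claim plausible? {claim}")

def tool_augmented_agent(claim: str, llm, search) -> str:
    """Retrieve evidence first, then ask the model to judge the claim against it."""
    evidence = search(claim)
    return llm(f"Evidence found:\n{evidence}\n\nGiven this evidence, is the claim plausible? {claim}")
```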

Surprisingly, some toolless agents performed nearly as well as their fully equipped counterparts on specific tasks. For instance, in the Validate Claim task—measuring the plausibility of a statement—they scored 0.61, just shy of the 0.62 average achieved by tool-augmented agents. This suggests that models like o3 and Claude possess strong internal knowledge, often able to discern the validity of common assertions without needing to perform web searches.

However, on more challenging tasks like Derive Number—requiring the aggregation of multiple values from diverse sources—or Gather Evidence, which necessitates locating and evaluating various facts, these toolless models struggled significantly. Without current information or real-time lookup capabilities, they fell short in generating accurate or comprehensive answers.

This contrast reveals a vital nuance: while today’s LLMs can simulate “knowledge,” deep research does not rely solely on memory but also on reasoning with up-to-date and verifiable information—something that only tool-enabled agents can genuinely provide.

Concluding Thoughts

The DRB report underscores a crucial reality: the finest AI agents can outperform average humans on narrowly defined tasks, yet they still lag behind adept generalist researchers—particularly in strategic planning, adaptive processes, and nuanced reasoning.

This gap is especially evident during long or intricate sessions, something I have experienced firsthand, where an agent gradually loses sight of the overarching objective and the output becomes frustratingly disjointed and less useful.

The value of Deep Research Bench lies not only in assessing surface-level knowledge but also in probing the interplay of tool use, memory, reasoning, and adaptability, which mirrors actual research more realistically than benchmarks like MMLU or GSM8K.

As LLMs increasingly integrate into significant knowledge work, tools like FutureSearch's DRB will be crucial for evaluating not just the knowledge of these systems, but also their operational effectiveness.

Frequently Asked Questions

FAQ 1: What is the Deep Research Bench Report?

Answer: The Deep Research Bench Report is a comprehensive analysis that evaluates the effectiveness of AI agents in conducting real research tasks. It assesses various AI models across different domains, providing insights into their capabilities, limitations, and potential improvements.


FAQ 2: How do AI agents compare to human researchers in conducting research?

Answer: AI agents can process and analyze vast amounts of data quickly, often outperforming humans in data-heavy tasks. However, they may lack the critical thinking and creative problem-solving skills that human researchers possess. The report highlights that while AI can assist significantly, human oversight remains crucial.


FAQ 3: What specific areas of research were evaluated in the report?

Answer: The report evaluated AI agents across several research domains, including medical research, scientific experimentation, and literature review. It focused on metrics such as accuracy, speed, and the ability to generate insights relevant to real-world applications.


FAQ 4: What were the key findings regarding AI agents’ performance?

Answer: The report found that while AI agents excel in data analysis and pattern recognition, they often struggle with nuanced concepts and contextual understanding. Their performance varied across domains, showing stronger results in structured environments compared to more ambiguous research areas.


FAQ 5: What are the implications of these findings for future research practices?

Answer: The findings suggest that integrating AI agents into research processes can enhance efficiency and data handling, but human researchers need to guide and validate AI-generated insights. Future research practices should focus on collaboration between AI and human intellect to leverage the strengths of both.


Comparison of AI Research Agents: Google’s AI Co-Scientist, OpenAI’s Deep Research, and Perplexity’s Deep Research

Redefining Scientific Research: A Comparison of Leading AI Research Agents

Google’s AI Co-Scientist: Streamlining Data Analysis and Literature Reviews

Google’s AI Co-Scientist is a collaborative tool designed to assist researchers in gathering relevant literature, proposing hypotheses, and suggesting experimental designs. With seamless integration with Google’s ecosystem, this agent excels in data processing and trend analysis, though human input is still crucial for hypothesis generation.

OpenAI’s Deep Research: Empowering Deeper Scientific Understanding

OpenAI’s Deep Research relies on advanced reasoning capabilities to generate accurate responses to scientific queries and offer insights grounded in broad scientific knowledge. While it excels in synthesizing existing research, limited dataset exposure may impact the accuracy of its conclusions.

Perplexity’s Deep Research: Enhancing Knowledge Discovery

Perplexity’s Deep Research serves as a search engine for scientific discovery, aiming to help researchers locate relevant papers and datasets efficiently. While it may lack computational power, its focus on knowledge retrieval makes it valuable for researchers seeking precise insights from existing knowledge.

Choosing the Right AI Research Agent for Your Project

Selecting the optimal AI research agent depends on the specific needs of your research project. Google’s AI Co-Scientist is ideal for data-intensive tasks, OpenAI’s Deep Research excels in synthesizing scientific literature, and Perplexity’s Deep Research is valuable for knowledge discovery. By understanding the strengths of each platform, researchers can accelerate their work and drive groundbreaking discoveries.

  1. What sets Google’s AI Co-Scientist apart from OpenAI’s Deep Research and Perplexity’s Deep Research?
    Google’s AI Co-Scientist stands out for its collaborative approach, allowing researchers to work alongside the AI system to generate new ideas and insights. OpenAI’s Deep Research focuses more on independent research, while Perplexity’s Deep Research emphasizes statistical modeling.

  2. How does Google’s AI Co-Scientist improve research outcomes compared to other AI research agents?
    Google’s AI Co-Scientist uses advanced machine learning algorithms to analyze vast amounts of data and generate new hypotheses, leading to more innovative and impactful research outcomes. OpenAI’s Deep Research and Perplexity’s Deep Research also use machine learning, but may not have the same level of collaborative capability.

  3. Can Google’s AI Co-Scientist be integrated into existing research teams?
    Yes, Google’s AI Co-Scientist is designed to work alongside human researchers, providing support and insights to enhance the overall research process. OpenAI’s Deep Research and Perplexity’s Deep Research can also be integrated into research teams, but may not offer the same level of collaboration.

  4. How does Google’s AI Co-Scientist handle large and complex datasets?
    Google’s AI Co-Scientist is equipped with advanced algorithms that are able to handle large and complex datasets, making it well-suited for research in diverse fields. OpenAI’s Deep Research and Perplexity’s Deep Research also have capabilities for handling large datasets, but may not offer the same collaborative features.

  5. Are there any limitations to using Google’s AI Co-Scientist for research?
    While Google’s AI Co-Scientist offers many benefits for research, it may have limitations in certain areas compared to other AI research agents. Some researchers may prefer the more independent approach of OpenAI’s Deep Research, or the statistical modeling focus of Perplexity’s Deep Research, depending on their specific research needs.


AI’s Transformation of Knowledge Discovery: From Keyword Search to OpenAI’s Deep Research

AI Revolutionizing Knowledge Discovery: From Keyword Search to Deep Research

The Evolution of AI in Knowledge Discovery

Over the past few years, advancements in artificial intelligence have revolutionized the way we seek and process information. From keyword-based search engines to the emergence of agentic AI, machines now have the ability to retrieve, synthesize, and analyze information with unprecedented efficiency.

The Early Days: Keyword-Based Search

Before AI-driven advancements, knowledge discovery heavily relied on keyword-based search engines like Google and Yahoo. Users had to manually input search queries, browse through numerous web pages, and filter information themselves. While these search engines democratized access to information, they had limitations in providing users with deep insights and context.

AI for Context-Aware Search

With the integration of AI, search engines began to understand user intent behind keywords, leading to more personalized and efficient results. Technologies like Google’s RankBrain and BERT improved contextual understanding, while knowledge graphs connected related concepts in a structured manner. AI-powered assistants like Siri and Alexa further enhanced knowledge discovery capabilities.

Interactive Knowledge Discovery with Generative AI

Generative AI models have transformed knowledge discovery by enabling interactive engagement and summarizing large volumes of information efficiently. Platforms like OpenAI SearchGPT and Perplexity.ai incorporate retrieval-augmented generation to enhance accuracy while dynamically verifying information.
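
The sketch below shows the basic retrieval-augmented generation pattern in miniature: retrieve a few relevant passages, then have the model answer only from them. The naive keyword retriever, prompt wording, and `llm` callable are assumptions for illustration and do not describe any particular platform’s implementation.

```python
def retrieve(query: str, corpus: dict[str, str], top_k: int = 3) -> list[str]:
    """Naive keyword-overlap retriever over a small in-memory corpus of documents."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus.values(),
        key=lambda text: len(terms & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def rag_answer(question: str, corpus: dict[str, str], llm) -> str:
    """Ground the model's answer in retrieved passages instead of memory alone."""
    passages = retrieve(question, corpus)
    context = "\n".join(f"- {p}" for p in passages)
    return llm(f"Answer using only these passages:\n{context}\n\nQuestion: {question}")
```

Production systems typically replace the keyword matcher with embedding-based vector search, but the grounding step is the same.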

The Emergence of Agentic AI in Knowledge Discovery

Despite advancements in AI-driven knowledge discovery, deep analysis, synthesis, and interpretation still require human effort. Agentic AI, exemplified by OpenAI’s Deep Research, represents a shift towards autonomous systems that can execute multi-step research tasks independently.

OpenAI’s Deep Research

Deep Research is an AI agent optimized for complex knowledge discovery tasks, employing OpenAI’s o3 model to autonomously navigate online information, critically evaluate sources, and provide well-reasoned insights. This tool streamlines information gathering for professionals and enhances consumer decision-making through hyper-personalized recommendations.

The Future of Agentic AI

As agentic AI continues to evolve, it will move towards autonomous reasoning and insight generation, transforming how information is synthesized and applied across industries. Future developments will focus on enhancing source validation, reducing inaccuracies, and adapting to rapidly evolving information landscapes.

The Bottom Line

The evolution from keyword search to AI agents performing knowledge discovery signifies the transformative impact of artificial intelligence on information retrieval. OpenAI’s Deep Research is just the beginning, paving the way for more sophisticated, data-driven insights that will unlock unprecedented opportunities for professionals and consumers alike.

  1. How does keyword search differ from using AI for deep research?
    Keyword search relies on specific terms or phrases to retrieve relevant information, whereas AI for deep research uses machine learning algorithms to understand context and relationships within a vast amount of data, leading to more comprehensive and accurate results.

  2. Can AI be used in knowledge discovery beyond just finding information?
    Yes, AI can be used to identify patterns, trends, and insights within data that may not be easily discernible through traditional methods. This can lead to new discoveries and advancements in various fields of study.

  3. How does AI help in redefining knowledge discovery?
    AI can automate many time-consuming tasks involved in research, such as data collection, analysis, and interpretation. By doing so, researchers can focus more on drawing conclusions and making connections between different pieces of information, ultimately leading to a deeper understanding of a subject.

  4. Are there any limitations to using AI for knowledge discovery?
    While AI can process and analyze large amounts of data quickly and efficiently, it still relies on the quality of the data provided to it. Biases and inaccuracies within the data can affect the results generated by AI, so it’s important to ensure that the data used is reliable and relevant.

  5. How can researchers incorporate AI into their knowledge discovery process?
    Researchers can use AI tools and platforms to streamline their research process, gain new insights from their data, and make more informed decisions based on the findings generated by AI algorithms. By embracing AI technology, researchers can push the boundaries of their knowledge discovery efforts and achieve breakthroughs in their field.


AI Evolution: Transitioning from Search Retrieval to Deep Reasoning

With the rise of generative AI, the way we search for information is undergoing a radical transformation. Traditional search engines are giving way to advanced systems that utilize AI for deeper insights and more accurate results.

The New Era of Search Technology

The evolution of search technology began with keyword-based systems but has now advanced to AI-driven platforms that can understand context and intent, revolutionizing the search experience.

From Keywords to Conversations

AI-driven search engines now enable users to ask questions in a conversational manner, enhancing the relevance and accuracy of search results.

Advancing Search with Deep Reasoning

While AI has improved search accuracy, current systems still struggle with complex queries that require deeper reasoning. There is a need for search engines to provide more nuanced and detailed responses.

Introducing Retrieval-Augmented Generation

Retrieval-Augmented Generation (RAG) is transforming search technology by combining information retrieval with generative AI, offering more precise and efficient search experiences.

The Role of Knowledge Graphs in Search

Knowledge graphs are essential for enabling reasoning in search technologies, allowing AI systems to provide contextually relevant answers by understanding relationships between concepts.
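
As a minimal sketch of the idea, the toy graph below stores facts as (subject, relation, object) triples and follows relations a couple of hops outward, which is the kind of structured traversal that lets a search system connect related concepts. The entities and relations here are made up for illustration.

```python
# Toy knowledge graph stored as (subject, relation, object) triples.
TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "is_a", "nsaid"),
    ("nsaid", "may_cause", "stomach irritation"),
]

def neighbors(entity: str) -> list[tuple[str, str]]:
    """Return (relation, object) pairs directly connected to an entity."""
    return [(rel, obj) for subj, rel, obj in TRIPLES if subj == entity]

def related_facts(entity: str, depth: int = 2) -> list[str]:
    """Follow relations up to `depth` hops so an answer can draw on connected concepts."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in neighbors(node):
                facts.append(f"{node} {rel} {obj}")
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

print(related_facts("aspirin"))
# ['aspirin treats headache', 'aspirin is_a nsaid', 'nsaid may_cause stomach irritation']
```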

Enhancing Search with Cognitive Computing

Cognitive computing takes AI-driven search to the next level by considering user intent and context, transforming search engines into intelligent assistants capable of engaging in meaningful conversations.

Implications Across Industries

The shift towards deep reasoning in search technology has significant implications for industries like healthcare, finance, and education, impacting decision-making processes and improving learning outcomes.

Conclusion

As search technology continues to evolve, AI-driven systems are moving towards becoming more intuitive assistants that can provide thoughtful responses to complex queries, making search a valuable resource for decision-making and learning.

  1. What is the current role of AI in search engines?
    AI currently plays a major role in search engines by assisting with retrieval tasks, such as understanding user queries and returning relevant search results.

  2. How will AI move from retrieval to deep reasoning in search?
    AI will move from retrieval to deep reasoning in search by utilizing advanced algorithms to not only understand user queries, but also analyze and synthesize complex information to provide more contextually relevant search results.

  3. What benefits will deep reasoning AI bring to search?
    Deep reasoning AI will bring benefits such as improved accuracy in search results, better understanding of user intent, and the ability to provide more nuanced answers to complex queries.

  4. Will deep reasoning AI in search engines replace human expertise?
    While deep reasoning AI may enhance search engines, it is not expected to completely replace human expertise. Human input and oversight will still be necessary to ensure the accuracy and relevance of search results.

  5. How will deep reasoning AI impact the future of search engine optimization (SEO)?
    Deep reasoning AI may impact SEO by requiring website owners and marketers to create more high-quality, relevant content that can be understood and interpreted by advanced AI algorithms.
