Chime Investor Lauren Kolodny Invests in AI to Transform Estate Processing

Lauren Kolodny: Championing Technology for Financial Inclusion

Lauren Kolodny, partner at Acrew Capital, is a staunch advocate for technology’s role in democratizing access to financial services for everyday individuals.

Backing Chime: A Bold Investment in Financial Innovation

In 2016, when the nascent neobank Chime faced skepticism from investors about its potential to serve the working class, Kolodny was the sole VC willing to back it, leading a $9 million Series A extension just as the company was nearing insolvency.

Reaping the Rewards: Chime’s Phenomenal Growth

That decision proved lucrative when Chime went public last month at an impressive $14.5 billion valuation.

Continuing the Mission: Investing in Consumer-Centric Tech

Kolodny, a three-time member of the Forbes Midas List, remains dedicated to funding tech solutions that empower consumers to better manage their finances.

Pioneering AI in Estate Settlements: Kolodny’s Latest Investment

Recently, she led a significant $20 million Series A investment in Alix, an innovative startup utilizing AI to streamline the estate settlement process.

A Personal Journey Fuels Innovation

Alix’s founder, Alexandra Mysoor, found inspiration after assisting a friend with settling a family estate. She shared with TechCrunch that the endeavor consumed 900 hours over 18 months, involving tedious tasks like contacting banks to transfer assets and locating various accounts.

Transforming an Archaic Process

“I was shocked at how complicated this process was,” said Mysoor. “It’s paper-driven and outdated. You’re searching for to-do lists that aren’t helpful and contacting attorneys who provide only a fraction of the needed work, charging exorbitant fees.”

Revolutionizing Estate Settlement with AI

This experience led Mysoor to realize that many labor-intensive elements of trust administration, such as document management and communication with banks, can be efficiently managed by AI technologies.

Kolodny’s Vision: Solving a Pressing Problem

Upon meeting Mysoor and understanding Alix’s mission, Kolodny recognized a significant issue that she couldn’t shake from her mind. As economists predict the transfer of trillions of dollars to younger generations, the burdensome paperwork for estate settlements largely falls on grieving families.

A Gap in the Market for Estate Services

Kolodny found that while some startups, such as Empathy, assist with account closures during bereavement, no companies provided end-to-end estate settlement solutions.

Aha Moment: The Need for a Comprehensive Solution

“How is it that such a complicated process, which requires extensive project management, lacks meaningful solutions?” Kolodny remarked. “It was a true ‘aha’ moment for me. This is exactly the type of challenge AI can tackle.”

Alix: Democratizing Financial Services

Kolodny believes that Alix could be among the first wave of AI-powered startups that will democratize financial and administrative services that were historically reserved for the wealthy.

Transparent Pricing: Alix’s Approach to Fees

Alix charges a fee of 1% of an estate’s total value. For inheritances below $1 million, clients can anticipate costs ranging between $9,000 and $12,000, depending on the estate’s complexity.
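As a rough illustration of that fee structure, the sketch below models the pricing described above in Python. How complexity maps onto the quoted $9,000–$12,000 range is a guess for illustration only, not Alix's actual schedule.

```python
def estimate_alix_fee(estate_value: float, complexity: float = 0.5) -> float:
    """Illustrative estimate of a 1%-of-estate-value fee.

    For estates under $1M the article quotes a $9,000-$12,000 range,
    modeled here as a linear blend on a 0-1 `complexity` score. This
    schedule is a hypothetical reading of the article, not real pricing.
    """
    if estate_value < 1_000_000:
        low, high = 9_000, 12_000
        return low + complexity * (high - low)
    return 0.01 * estate_value

# A $2.5M estate at the flat 1% rate:
print(estimate_alix_fee(2_500_000))  # → 25000.0
```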

Frequently Asked Questions: Lauren Kolodny and AI in Estate Processing

FAQ 1: Who is Lauren Kolodny?

Answer: Lauren Kolodny is a partner at Acrew Capital and a three-time member of the Forbes Midas List, known for backing consumer fintech startups such as Chime. She focuses on leveraging technology, including artificial intelligence, to transform traditional financial and administrative processes such as estate settlement.

FAQ 2: What is the significance of using AI in estate processing?

Answer: AI can streamline and automate various aspects of estate processing, improving efficiency and accuracy. This includes tasks such as document management, data analysis, and decision-making, ultimately making the process faster and reducing costs.

FAQ 3: How does Lauren Kolodny believe AI will change the estate industry?

Answer: Kolodny believes that AI will revolutionize the estate industry by providing tools that enhance transparency, improve speed, and reduce human error. This technological shift can also lead to better user experiences for clients navigating estate services.

FAQ 4: What challenges might arise with the integration of AI in estate processing?

Answer: Challenges may include data privacy concerns, the need for regulatory compliance, and the potential for resistance from traditional stakeholders. Additionally, there may be technical hurdles in implementing AI systems effectively within existing frameworks.

FAQ 5: How can startups in the estate processing field benefit from AI?

Answer: Startups can leverage AI to differentiate themselves in a competitive market by offering innovative solutions that simplify processes, enhance customer interactions, and provide valuable insights through data analytics. This can lead to increased customer satisfaction and potentially higher revenue.

NTT Introduces AI Inference Chip for Real-Time 4K Video Processing on the Edge

In a significant advancement for edge AI processing, NTT Corporation has introduced a revolutionary AI inference chip capable of processing real-time 4K video at 30 frames per second while consuming less than 20 watts of power. This cutting-edge large-scale integration (LSI) chip is the first of its kind globally to achieve high-performance AI video inferencing in power-constrained environments, marking a breakthrough for edge computing applications.

Bringing AI Power to the Edge: NTT’s Next-Gen Chip Unveiled

Debuted at NTT’s Upgrade 2025 summit in San Francisco, this chip is designed specifically for deployment in edge devices, such as drones, smart cameras, and sensors. Unlike traditional AI systems that rely on cloud computing for inferencing, this chip delivers potent AI capabilities directly to the edge, significantly reducing latency and eliminating the need to transmit ultra-high-definition video to centralized cloud servers for analysis.

The Significance of Edge Computing: Redefining Data Processing

In the realm of edge computing, data is processed locally on or near the device itself. This approach slashes latency, conserves bandwidth, and enables real-time insights even in settings with limited or intermittent internet connectivity. Moreover, it fortifies privacy and data security by minimizing the transmission of sensitive data over public networks, a paradigm shift from traditional cloud computing methods.

NTT’s revolutionary AI chip fully embraces this edge-centric ethos by facilitating real-time 4K video analysis directly within the device, independent of cloud infrastructure.

Unlocking New Frontiers: Real-Time AI Applications Redefined

Equipped with this advanced chip, a drone can now detect people or objects from distances up to 150 meters, surpassing traditional detection ranges limited by resolution or processing speed. This breakthrough opens doors to various applications, including infrastructure inspections, disaster response, agricultural monitoring, and enhanced security and surveillance capabilities.

All of this is achieved with a chip that consumes less than 20 watts, a fraction of the hundreds of watts typically drawn by GPU-powered AI servers, which makes those servers impractical for mobile or battery-operated systems.
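For a sense of scale, the figures above imply the raw pixel throughput computed below. The 300-watt GPU-server number is an assumed stand-in for "hundreds of watts", used only to make the per-watt comparison concrete.

```python
# Raw pixel throughput implied by the article's figures: 4K (3840x2160)
# at 30 frames per second under a 20 W budget. The 300 W server figure
# is an assumption standing in for "hundreds of watts".
WIDTH, HEIGHT, FPS = 3840, 2160, 30
CHIP_WATTS, GPU_SERVER_WATTS = 20, 300

pixels_per_second = WIDTH * HEIGHT * FPS
chip_px_per_watt = pixels_per_second / CHIP_WATTS
gpu_px_per_watt = pixels_per_second / GPU_SERVER_WATTS

print(f"{pixels_per_second:,} px/s")                         # 248,832,000 px/s
print(f"chip: {chip_px_per_watt:,.0f} px/s/W")               # 12,441,600
print(f"assumed GPU server: {gpu_px_per_watt:,.0f} px/s/W")  # 829,440
```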

Breaking Down the Chip’s Inner Workings: NTT’s AI Inference Engine

Central to the LSI’s performance is NTT’s custom AI inference engine, which delivers rapid, precise results while minimizing power consumption. Notable innovations include interframe correlation, dynamic bit-precision control, and native YOLOv3 execution, enabling robust AI performance in power-constrained settings.
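NTT has not published the chip's internals, but the general idea behind interframe correlation can be sketched as skipping fresh inference when consecutive frames barely differ. The `run_inference` callback and the difference threshold below are hypothetical illustrations, not NTT's actual design.

```python
def frame_delta(a: list, b: list) -> float:
    """Mean absolute pixel difference between two flattened grayscale frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def process_stream(frames, run_inference, threshold=2.0):
    """Re-run inference only when a frame differs enough from the last
    inferred frame; otherwise reuse the cached result. A generic sketch
    of interframe-correlation skipping, not NTT's implementation."""
    last_frame, last_result = None, None
    results = []
    for frame in frames:
        if last_frame is None or frame_delta(frame, last_frame) > threshold:
            last_result = run_inference(frame)
            last_frame = frame
        results.append(last_result)
    return results

# Static scene: inference runs once, and its result is reused for
# near-identical frames until a large change arrives.
calls = []
detect = lambda f: calls.append(1) or "person"
frames = [[10] * 16, [10] * 16, [11] * 16, [90] * 16]
print(process_stream(frames, detect))  # ['person', 'person', 'person', 'person']
print(len(calls))                      # 2  (first frame, then the big change)
```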

Commercialization and Beyond: NTT’s Vision for Integration

NTT plans to commercialize the chip within fiscal year 2025 through NTT Innovative Devices Corporation. Researchers are also exploring its integration into the Innovative Optical and Wireless Network (IOWN), NTT’s forward-looking vision for next-generation communications infrastructure. Coupled with All-Photonics Network technology for ultra-low-latency communication, the chip’s local processing power amplifies its impact on edge devices.

Additionally, NTT is collaborating with NTT DATA, Inc. to merge the chip’s capabilities with Attribute-Based Encryption (ABE) technology, fostering secure, fine-grained access control over sensitive data. Together, these technologies will support AI applications necessitating speed and security, such as in healthcare, smart cities, and autonomous systems.

Empowering a Smarter Tomorrow: NTT’s Legacy of Innovation

This AI inference chip epitomizes NTT’s commitment to fostering a sustainable, intelligent society through deep technological innovation. As a global leader with a vast reach, NTT’s new chip heralds the dawn of a new era in AI at the edge—a realm where intelligence seamlessly melds with immediacy, paving the way for transformative advancements in various sectors.

  1. What is NTT’s breakthrough AI inference chip?
    NTT has unveiled a breakthrough AI inference chip designed for real-time 4K video processing at the edge. This chip is able to quickly and efficiently analyze and interpret data from high-resolution video streams.

  2. What makes this AI inference chip different from others on the market?
    NTT’s AI inference chip stands out from others on the market due to its ability to process high-resolution video data in real-time at the edge. This means that it can analyze information quickly and provide valuable insights without needing to send data to a centralized server.

  3. How can this AI inference chip be used in practical applications?
    This AI inference chip has a wide range of practical applications, including security monitoring, industrial automation, and smart city infrastructure. It can help analyze video data in real-time to improve safety, efficiency, and decision-making in various industries.

  4. What are the benefits of using NTT’s AI inference chip for real-time 4K video processing?
    Using NTT’s AI inference chip for real-time 4K video processing offers several benefits, including faster data analysis, reduced latency, improved security monitoring, and enhanced efficiency in handling large amounts of video data.

  5. Is NTT’s AI inference chip available for commercial use?
    NTT plans to commercialize the chip through NTT Innovative Devices Corporation within fiscal year 2025. Until then, it remains in development and testing, with integration efforts underway for NTT’s IOWN infrastructure.

The Emergence of Neural Processing Units: Improving On-Device Generative AI for Speed and Sustainability

How Generative AI Is Reshaping Computing

Generative AI is reshaping not only our computing experiences but also the hardware that powers them. Neural processing units (NPUs) are emerging to meet the challenge of running generative AI on devices with limited computational resources.

Overcoming Challenges in On-device Generative AI Infrastructure

Generative AI tasks such as image synthesis, text generation, and music composition demand significant computational resources. Cloud platforms have traditionally met these demands, but offloading to the cloud brings latency, bandwidth, and privacy trade-offs for on-device use cases. NPUs are emerging as a solution to these challenges.

The Rise of Neural Processing Units (NPUs)

NPUs are specialized processors, loosely inspired by the structure of the human brain, built to execute neural-network workloads efficiently. They offer a power-efficient, sustainable way to run generative AI directly on devices.

Adapting to Diverse Computational Needs of Generative AI

Integrated into System-on-Chip (SoC) designs alongside CPUs and GPUs, NPUs form part of a heterogeneous computing architecture in which each task is allocated to the processor best suited to it: CPUs handle sequential control logic, GPUs handle highly parallel workloads, and NPUs handle neural-network inference.
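The allocation idea above can be sketched as a toy dispatcher. The task categories and the task-to-processor mapping here are illustrative assumptions; a real SoC scheduler also weighs load, power, and memory, not just task type.

```python
from enum import Enum

class Processor(Enum):
    CPU = "cpu"  # sequential control flow, I/O, scheduling
    GPU = "gpu"  # large parallel workloads such as image rendering
    NPU = "npu"  # neural-network inference

# Illustrative mapping of task kinds to a preferred processor.
TASK_AFFINITY = {
    "ui_event": Processor.CPU,
    "image_render": Processor.GPU,
    "text_generation": Processor.NPU,
    "image_synthesis": Processor.NPU,
}

def dispatch(task_kind: str) -> Processor:
    """Route a task to its preferred processor, defaulting to the CPU."""
    return TASK_AFFINITY.get(task_kind, Processor.CPU)

print(dispatch("text_generation").value)  # npu
print(dispatch("disk_io").value)          # cpu
```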

Real World Examples of NPUs

  • Discover how leading tech giants like Qualcomm, Apple, Samsung, and Huawei are integrating NPUs into their devices to enhance AI performance and user experiences.

NPUs and the Future of On-Device AI

NPUs make on-device AI applications more responsive and energy-efficient. As the technology continues to evolve, NPUs are set to play a central role in the future of computing.

1. What is a Neural Processing Unit (NPU) and how does it enhance generative AI on devices?
A Neural Processing Unit (NPU) is a specialized hardware component designed to accelerate the processing of neural networks, particularly for tasks like generative AI. By offloading intensive computations to an NPU, devices can run AI algorithms more efficiently and with greater speed, resulting in enhanced on-device generative AI capabilities.

2. How does the rise of NPUs contribute to the speed and sustainability of generative AI?
NPUs enable devices to perform complex AI tasks locally, without relying on cloud servers for processing. This reduces latency and enhances the speed of generative AI applications, while also lowering energy consumption and promoting sustainability by reducing the need for constant data transfer to and from remote servers.

3. What are some examples of how NPUs are being used to enhance on-device generative AI?
NPUs are being integrated into a wide range of devices, including smartphones, smart cameras, and IoT devices, to enable real-time AI-driven features such as image recognition, natural language processing, and content generation. For example, NPUs can power features like enhanced photo editing tools, voice assistants, and personalized recommendations without needing to rely on cloud resources.

4. How do NPUs compare to traditional CPUs and GPUs in terms of generative AI performance?
While traditional CPUs and GPUs are capable of running AI algorithms, NPUs are specifically optimized for neural network processing, making them more efficient and faster for tasks like generative AI. NPUs are designed to handle parallel computations required by AI algorithms, ensuring improved performance and responsiveness compared to general-purpose processors.

5. How can developers leverage NPUs to optimize their generative AI applications for speed and sustainability?
Developers can take advantage of NPUs by optimizing their AI models for deployment on devices with NPU support. By leveraging NPU-friendly frameworks and tools, developers can ensure that their generative AI applications run efficiently and sustainably on a variety of devices, delivering a seamless user experience with minimal latency and energy consumption.