Pat Gelsinger Seeks to Preserve Moore’s Law with Support from the Federal Government

Pat Gelsinger’s New Chapter: Leading xLight in the Semiconductor Arena

After a tumultuous exit from Intel, Pat Gelsinger continues to rise at dawn, navigating the complex semiconductor landscape from a fresh vantage point. As a general partner at Playground Global, Gelsinger has a hand in 10 startups, with xLight—a promising semiconductor startup—drawing much of his focus. xLight recently announced a preliminary agreement for up to $150 million from the U.S. Commerce Department, a deal that would make the federal government a key stakeholder in the company.

A Major Win After Intel

Gelsinger’s 35-year journey at Intel came to an unexpected end when the board dismissed him amid doubts about his turnaround strategy. Nevertheless, the xLight deal highlights a trend that is raising eyebrows in Silicon Valley: the Trump administration’s willingness to take equity stakes in technology companies it deems essential.

Silicon Valley’s Unease

California Governor Gavin Newsom expressed the industry’s discomfort at a recent event, questioning, “What happened to free enterprise?” This sentiment reverberates through a tech sector historically rooted in free-market ideals.

Driving Innovation in Lithography

During a TechCrunch StrictlyVC event, Gelsinger, who holds the role of executive chairman at xLight, wasn’t fazed by these concerns. His focus is on tackling a crucial bottleneck in semiconductor production: lithography. xLight aims to develop large-scale “free electron lasers” powered by particle accelerators, potentially transforming chip manufacturing processes.

Reviving Moore’s Law

“I’m dedicated to revitalizing Moore’s law in semiconductor technology,” Gelsinger stated, referencing the long-standing observation that the number of transistors on a chip doubles roughly every two years. “We believe this innovation will reinvigorate Moore’s law.”
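
As a rough back-of-the-envelope illustration (not a figure from the article), that doubling cadence can be written as

N(t) = N0 · 2^(t/2)

where N0 is today’s transistor count and t is the number of years elapsed. Compounded over a decade, that works out to roughly a 32-fold increase, which is why even a modest slowdown in the cadence has large long-term consequences.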

Securing Future Funding

The xLight deal marks the first Chips and Science Act award of Trump’s second term and draws on funds set aside for promising early-stage firms. The funding is still at the letter-of-intent stage, and Gelsinger was candid about the work remaining: “We’ve agreed in principle, but we still have work to do.”

Ambitious Technological Developments

xLight’s vision is to build colossal machines—approximately 100 meters by 50 meters—that generate extreme ultraviolet light at a remarkably precise wavelength of 2 nanometers, going beyond what ASML, the company that dominates the EUV lithography market, offers today.

Transforming the Light Source Paradigm

“Half of the semiconductor industry’s investment goes into lithography,” Gelsinger explained. “Innovating on light wavelength and power is crucial for advancing semiconductor technology.” Leading xLight, Nicholas Kelez brings a unique perspective from his experience in quantum computing and large-scale X-ray science initiatives.
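
For background—this is a standard optics relation, not something xLight or Gelsinger cited—the smallest feature a lithography tool can print scales with its light source according to the Rayleigh criterion:

CD = k1 · λ / NA

where λ is the wavelength of the light, NA is the numerical aperture of the projection optics, and k1 is a process-dependent factor. Plugging in typical published figures for today’s EUV tools (λ = 13.5 nm, NA = 0.33, k1 ≈ 0.4) gives a minimum feature size of roughly 16 nanometers, which illustrates why a shorter-wavelength source matters for future nodes; source power, separately, governs how quickly wafers can be exposed.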

Embracing Viability in New Technologies

Kelez explained why xLight’s approach is feasible now, even though ASML abandoned a similar strategy years ago: the underlying technology has advanced considerably, and EUV lithography is now ubiquitous in semiconductor manufacturing, leaving the industry primed for this kind of light source.

Looking Ahead to 2028

With ambitions of producing silicon wafers by 2028 and launching a commercial system by 2029, xLight is poised for significant growth.

Collaborative Strategies

xLight is not positioning itself as a direct competitor to ASML but rather as a partner, aiming to integrate its technology with ASML’s systems. Gelsinger said that while no major chipmakers have signed contracts yet, discussions with potential partners are ongoing.

Navigating Complex Competitive Dynamics

As competition intensifies, other startups like Substrate are emerging with similar technologies. However, Gelsinger views them as potential collaborators rather than rivals.

Political Underpinnings of xLight’s Funding

Gelsinger’s engagement with the Trump administration adds complexity to the narrative. Earlier discussions with Secretary of Commerce Howard Lutnick paved the way for this significant funding. While recent developments fueled criticism from some quarters, Gelsinger remains steadfast in framing government engagement as vital for national competitiveness.

Minimal Strings Attached

According to Kelez, the government investment comes with few conditions, allowing xLight the freedom to innovate without heavy oversight. With plans to raise additional funds soon, Gelsinger is optimistic about xLight’s trajectory.

Paving a New Path in Semiconductor Tech

Ultimately, xLight represents more than just another venture for Gelsinger; it’s an opportunity to reinforce his influence in the semiconductor landscape he helped shape, even as he navigates the shifting tides of Silicon Valley ethics.

A Commitment to Corporate Leadership

Gelsinger emphasizes the need for corporate leaders to stay above the political fray, saying, “CEOs and companies should neither be Republican nor Democrat.” The primary goal, he argues, is achieving business objectives while taking advantage of beneficial policies, regardless of which party they come from.

Reflecting on New Opportunities

In response to queries about managing multiple startups post-Intel, Gelsinger expressed contentment, asserting that influencing a broad spectrum of technologies excites him. “I’m just grateful the Playground team welcomed me,” he remarked, before humorously adding, “And I gave my wife back her weekends.” While this may seem a light comment, those familiar with Gelsinger’s work ethic might ponder how long this arrangement will hold.

Frequently Asked Questions

FAQ 1: Who is Pat Gelsinger?

Answer: Pat Gelsinger is the former CEO of Intel Corporation and a prominent figure in the tech industry. He is now a general partner at Playground Global and executive chairman of xLight, and he has long advocated for advancing semiconductor technology and maintaining the pace of innovation encapsulated by Moore’s Law, which predicts the doubling of transistors on a microchip approximately every two years.

FAQ 2: What is Moore’s Law?

Answer: Moore’s Law is an observation made by Gordon Moore, co-founder of Intel, which states that the number of transistors on a microchip doubles approximately every two years, leading to an increase in computing power and efficiency. It has been a driving principle behind the rapid advancement of technology.

FAQ 3: How does Pat Gelsinger plan to save Moore’s Law?

Answer: Gelsinger is backing xLight, a startup developing particle-accelerator-powered free electron lasers to serve as the light source for EUV lithography, and he is working with the federal government: the U.S. Commerce Department has signed a preliminary agreement for up to $150 million under the Chips and Science Act. He argues that improving the light source at the heart of chipmaking is key to keeping innovation on the pace that Moore’s Law describes.

FAQ 4: What role does the federal government play in this initiative?

Answer: In this case, the Commerce Department has agreed in principle to invest up to $150 million in xLight, which would make the U.S. government a stakeholder in the company. More broadly, the federal government can provide funding for manufacturing facilities, tax incentives for companies investing in advanced technologies, and support for educational programs to develop a skilled semiconductor workforce.

FAQ 5: Why is it important to maintain Moore’s Law?

Answer: Maintaining Moore’s Law is crucial because it drives technological advancements that are foundational to various industries, including computing, telecommunications, and consumer electronics. Continued progress under Moore’s Law leads to faster, cheaper, and more efficient computing solutions, ultimately benefiting society through better technologies in healthcare, transportation, and many other fields.


California’s New AI Safety Law Demonstrates That Regulation and Innovation Can Coexist

California’s Landmark AI Bill: SB 53 Brings Safety and Transparency Without Stifling Innovation

Recently signed into law by Gov. Gavin Newsom, SB 53 is a testament to the fact that state regulations can foster AI advancement while ensuring safety.

Policy Perspectives from Industry Leaders

Adam Billen, vice president of public policy at the youth-led advocacy group Encode AI, emphasized in a recent Equity podcast episode that lawmakers are aware of the need for effective policies that protect innovation and ensure product safety.

The Core of SB 53: Transparency in AI Safety

SB 53 is the first law in the U.S. to require large AI laboratories to disclose their safety protocols and the measures they take to mitigate risks such as cyberattacks and bio-weapons development. Compliance will be enforced by California’s Office of Emergency Services.

Industry Compliance and Competitive Pressures

According to Billen, many companies are already engaging in safety testing and providing model cards, although some may be cutting corners due to competitive pressures. He highlights the necessity of such legislation to uphold safety standards.

Facing Resistance from Tech Giants

Some AI companies have signaled that they might relax their safety standards under competitive pressure, as OpenAI has suggested in public statements about its safety framework. Billen believes firm policy can help keep labs from backsliding on their safety commitments as competition heats up.

Future Challenges for AI Regulation

Despite muted opposition to SB 53 compared to California’s previous AI legislation, many in Silicon Valley argue that any regulations could impede U.S. advancements in AI technologies, especially in comparison to China.

Funding Pro-AI Initiatives

Prominent tech entities and investors are significantly funding super PACs to support pro-AI candidates, which is part of a broader strategy to prevent state-level AI regulations from gaining traction.

Coalition Efforts Against AI Moratorium

Encode AI successfully mobilized over 200 organizations to fight a proposed federal moratorium on state AI laws, but the struggle continues as efforts to establish federal preemption resurface, which could override state regulations.

Federal Legislation and Its Implications

Billen warns that narrowly-framed federal AI laws could undermine state sovereignty and hinder the regulatory landscape for a crucial technology. He believes SB 53 should not be the sole regulatory framework for all AI-related risks.

The U.S.-China AI Race: Regulatory Impacts

While he acknowledges the significance of competing with China in AI, Billen argues that dismantling state-level legislation does not confer an advantage in that race. He points instead to measures like the Chip Security Act, which aims to keep advanced AI chips from being diverted to China, as a way to compete without sacrificing necessary regulations.

Inconsistent Export Policies and Market Dynamics

Nvidia, a major player in AI chip production, has a vested interest in maintaining sales to China, which complicates the picture. Mixed signals from the Trump administration on AI chip exports have muddied the debate around state regulation even further.

Democracy in Action: Balancing Safety and Innovation

According to Billen, SB 53 exemplifies democracy at work, showcasing the collaboration between industry and policymakers to create legislation that benefits both innovation and public safety. He asserts that this process is fundamental to America’s democratic and economic systems.

This article was first published on October 1.

Frequently Asked Questions

FAQ 1: What is California’s new AI safety law?

Answer: California’s new AI safety law, SB 53, signed by Gov. Gavin Newsom, is the first U.S. law to require large AI laboratories to disclose their safety protocols, including how they guard against risks such as cyberattacks and bio-weapons development. It aims to ensure transparency and accountability in frontier AI systems while fostering continued innovation in the technology sector.


FAQ 2: How does this law promote innovation?

Answer: The law promotes innovation by providing a clear regulatory framework that encourages developers to create AI solutions with safety and ethics in mind. By setting standards, it reduces uncertainty for businesses, enabling them to invest confidently in AI technologies without fear of future regulatory setbacks.


FAQ 3: What are the key provisions of the AI safety law?

Answer: Key provisions require large AI labs to publish their safety and security protocols and to describe how they mitigate catastrophic risks such as cyberattacks and bio-weapons development, with compliance overseen by California’s Office of Emergency Services. These provisions are designed to protect the public while still allowing for rapid advances in AI.


FAQ 4: How will this law affect consumers?

Answer: Consumers can benefit from increased safety and trust in AI applications. The law aims to minimize risks associated with AI misuse, ensuring that technologies are developed responsibly. This could lead to more reliable services and products tailored to user needs without compromising ethical standards.


FAQ 5: Can other states adopt similar regulations?

Answer: Yes, other states can adopt similar regulations, and California’s law may serve as a model for them. As AI technology grows in importance, states may look to California’s approach to balance innovation with necessary safety measures, potentially leading to a patchwork of regulations across the country.
