In 1982, I received my first US patent. I had immigrated to this country because it was the only place on Earth where a young engineer could turn ideas into products, products into companies, and companies into something that actually mattered. Over the next four decades, I built AVG Group of Companies into an American electronics and automation manufacturing firm. And I watched, with my own eyes, as China systematically dismantled the American electronics sector through state subsidies, intellectual property theft, and a patient long-term strategy we never took seriously until it was too late.
I am not writing this to relitigate that loss. I am writing this because the same pattern is repeating itself: faster, at higher stakes, with less time to correct course.
Artificial intelligence is not just another technology. It is the technology that will determine which civilization shapes the 21st century. Every policy choice we make in the next five years is simultaneously a geopolitical choice. I say this not as a partisan but as an engineer and industrialist who has spent a career at the intersection of technology and statecraft: we need to get this right, and getting it right requires being honest about both the threats and the tradeoffs.
The Competitive Reality Is Not Hypothetical
China’s 2026 five-year plan places AI and robotics at the center of its economic and military strategy. A bipartisan House report in April 2025 declared DeepSeek a “profound threat” to US national security, not because the technology itself is magic, but because it demonstrated that capable AI can be built at dramatically lower cost than American firms assumed, potentially narrowing the lead we thought we had. Chinese firms are actively routing AI hardware through third countries to evade export controls. This is not speculation; it is documented and ongoing.
I want to be precise here, because precision matters: China’s AI ambitions do not make every American regulatory concern illegitimate, and acknowledging real AI risks does not mean surrendering to Beijing. These are not contradictory positions. The question is how we manage risk, not whether risk exists.
What “Pro-Innovation” Actually Requires
The debate in Washington has calcified into a false binary: either you support aggressive AI regulation, or you support American competitiveness. This framing is wrong, and it is costing us.
The European Union’s AI Act is the cautionary tale here, but we should be honest about why it is one, not just that it is. The EU’s instinct toward comprehensive, precautionary regulation came from a real place: American social media platforms caused documented, measurable harm with minimal accountability for years. European policymakers watched that and drew conclusions. Those conclusions led to a compliance-heavy framework that, in practice, amounts to an innovation tax. European AI startups are relocating to the United States. European engineering talent is leaving. The EU has chosen to be a regulatory referee rather than a technological competitor.
That is a genuine failure, but the lesson is not "regulation bad." The lesson is that the design of oversight matters as much as its existence. Rules that require pre-market approval for every AI application will indeed strangle innovation. Rules that establish clear liability frameworks, require transparency in high-stakes applications, and empower companies to self-certify against published standards can protect citizens and preserve the speed that makes American AI competitive.
The difference between those two approaches is not philosophy. It is engineering. We should design regulatory systems the way we design products: with clear performance requirements, testable criteria, and accountability at the deployment layer rather than the research layer.
When it comes to national security, I have little patience for ambiguity. DeepSeek operating on American infrastructure is a genuine risk, not a hypothetical one. China is not the only adversary. We should be equally clear-eyed about any AI system where the data flows, the training provenance, or the corporate ownership creates vectors for exfiltration or manipulation. Enforcement here is not over-regulation. It is basic strategic hygiene.
Getting regulatory design right is a necessary condition for American AI leadership. But it is not sufficient. The deeper question is whether democratic nations can build the shared infrastructure, supply chains, and governance frameworks that make that leadership durable. That requires partners. And one partnership in particular deserves far more attention than it is getting.
Why the US-India Partnership Is the Right Model
I have spent over a decade working on the US-India strategic relationship at the highest levels of government. The conviction driving that work has always been straightforward: the world’s oldest democracy and the world’s largest democracy share a fundamental interest in ensuring the future is built on pluralism and open competition, not authoritarian control.
On February 20, 2026, that conviction became operational policy.
The US-India AI Opportunity Partnership, signed at the India AI Impact Summit, is a bilateral addendum to the Pax Silica Declaration, a US-led international framework that recognizes the physical substrate of AI (semiconductors, critical minerals, energy infrastructure, compute capacity) as the defining geopolitical contest of our era. It builds on the TRUST and COMPACT frameworks launched by both governments in February 2025, translating strategic alignment into concrete execution: cross-border venture capital flows, coordinated data center investment, and shared regulatory standards designed by practitioners rather than bureaucrats.
This matters for reasons that go beyond bilateral trade statistics.
India brings 1.4 billion people, the world's largest pool of STEM graduates, a democratic legal system with genuine IP protections, and a government that has demonstrated, through UPI, Aadhaar, and the Digital Public Infrastructure stack, that it understands how to deploy technology at population scale. America brings capital depth, the world's most dynamic private technology ecosystem, and intellectual property frameworks that reward invention.
Critically, India has also developed something the US-EU relationship has never quite produced: a pragmatic approach to AI governance that takes safety seriously without weaponizing it against deployment speed. The Ministry of Electronics and Information Technology has engaged substantively with questions of algorithmic accountability, data governance, and cross-border AI liability, reflecting genuine technical understanding rather than bureaucratic pattern-matching.
The partnership’s founding document states: “A significant risk facing the free world is not the advancement of AI, but the failure to lead it.” I would add one clarification: the risk is not just falling behind in capability. It is allowing the norms of AI development to be set by default by whoever builds fastest. How systems are audited, how liability is assigned, how data is governed, how military applications are bounded: these questions will be answered by someone. Democratic nations building together have a chance to answer them deliberately. That window will not stay open indefinitely.
What Responsible Speed Actually Looks Like
I am an engineer. I understand what these systems can do, and I take seriously the risks that serious researchers have identified: brittleness in high-stakes deployment, potential for misuse in influence operations, concentration of capability in very few hands, and longer-horizon questions about systems that may eventually exceed human oversight capacity in narrow domains.
None of that argues for regulatory paralysis. All of it argues for building safety architecture into the development process rather than treating it as a compliance checkbox bolted on afterward.
Concretely, this means three things:
First, hard national security guardrails with real enforcement. Export controls on AI hardware need teeth, and the existing third-country routing loopholes need to be closed. Foreign AI systems operating on critical American infrastructure need to meet the same security standards we apply to telecommunications hardware. This is not controversial among serious national security professionals.
Second, technical standards co-designed by government and industry, developed transparently, and adopted through a credible process. The people who build these systems understand their failure modes in ways that no regulator writing rules from the outside can fully replicate. But industry left entirely to set its own standards will optimize for speed and market share, not safety. The answer is structured collaboration: published performance requirements, testable criteria, third-party auditing, and liability frameworks that make accountability real. Engineers and entrepreneurs at the table, not as lobbyists seeking exemptions, but as the technical authors of standards they will then be held to.
Third, replication of the US-India model as a template for democratic AI governance. The framework being built through the Pax Silica Declaration works because it is designed around what democratic partners can actually offer each other: complementary strengths, compatible legal systems, and shared incentives to move fast without ceding norm-setting to authoritarian competitors. It is not built on lowest-common-denominator compliance, which is where multilateral frameworks tend to collapse. It is built on concrete execution: specific supply chain agreements, defined investment flows, and safety standards authored by practitioners who will be legally accountable for the results.
That is what makes it extensible. Japan, South Korea, and Australia share the same profile: democratic governance, serious technical capacity, and direct exposure to Chinese technological competition. The architecture scales because the logic scales.
I will not pretend the risks are zero. US-India relations carry their own friction: divergent trade interests, competing industrial policies, and a history of strategic hedging on India’s part that is not simply going to vanish. The test of this partnership is not the signing ceremony. It is whether the frameworks survive bureaucratic inertia, political turnover, and the inevitable moments when national interests create real tension. I believe they can. But that requires both governments to treat this as structural policy, not a diplomatic photo opportunity.
The Honest Assessment
I have watched America lose a technology race before. I know what the early stages look like: the dismissiveness, the assumption that our lead is structural rather than earned, the willingness to sacrifice long-term position for short-term political comfort. I have seen what happens to workers, to communities, to a nation’s industrial base when that complacency compounds over decades. It is not abstract. It is factories that close and do not reopen.
We are not there yet. The United States still leads in frontier AI capability, in the concentration of top research talent, and in the private capital available to scale that capability. But leads erode. They erode through underinvestment, through regulatory miscalculation, through the failure to build the diplomatic and institutional architecture that makes technological advantage durable.
The window to build those institutions is open right now. The US-India partnership is one piece of that architecture. Serious export enforcement is another. Co-designed safety standards are a third. None of it is glamorous. None of it fits on a bumper sticker. But that is what statecraft actually looks like when it is working: not declarations, but frameworks; not summits, but execution.
I have spent four decades building things. I know the difference between a press release and a foundation. What is being built now, if we do not squander it, is a foundation. The question is whether we have the discipline and the seriousness to build on it before the window closes.
That question does not have a partisan answer. It has an engineering answer: start with the requirements, design to meet them, test rigorously, and iterate fast. America still knows how to do that. The task is to prove it.
Shalabh “Shalli” Kumar is an Indian-American industrialist and technologist. He is the founder and former CEO of AVG Group of Companies, a US electronics and automation manufacturer with operations across North America and Asia. He has worked directly with heads of government on US-India technology and trade policy for over a decade, including on the frameworks described in this piece.