Future Tense amid Shifting Tides in AI Race without Referee

The release of DeepSeek refuelled the conversation about an emerging ‘Cold War’, bringing into focus the global race for AI dominance. And the 2025 AI Action Summit in Paris did little to allay the fears associated with a technology that is fast reshaping our lives. The fault line was wide open as world leaders failed to broker a consensus on making AI inclusive, open, ethical and safe.

The US and the UK refused to sign the declaration, citing concerns about excessive regulation stifling innovation in the AI sector.

The communique signed by 58 countries talks of “ensuring AI is open, inclusive, transparent [and] ethical”, making AI “sustainable for people and the planet” and “taking into account international frameworks”. The agreement also spoke of AI energy use, but it did not go far enough in addressing the possible risks and harms caused by AI. Unleashing AI with minimal checks was the ‘Mool Mantra’, with proponents arguing that regulations would raise barriers for new start-ups and hinder innovation, thus killing a transformative industry.

This is a far cry from the goal set at the previous two summits, in the UK and South Korea, which was to define red lines for AI: risk thresholds that would require mitigations at the international level, with the technology evolving far too rapidly to understand or control.

‘Innovation first, regulation later’ is what it finally boiled down to, with countries pushing for increased investment and a change in narrative to stay competitive.

Among the six priorities, only one, “AI should be safe and ethical”, is loosely linked to regulation. The US did not agree to “inclusive and sustainable AI” as it wants to power data centres with fossil fuels.

Global data centres are expected to emit 2.5 billion tonnes of carbon dioxide through 2030, roughly equal to Russia’s total annual emissions.

Chinese researchers present at the conference also bemoaned the emerging new cold war between Washington and Beijing, saying that it made the whole world less safe.

In short, regulations took a backseat with the word “safety” banished in favour of “action” in the Paris Summit’s name.

AI WAR OF DATA, CHIPS & ENERGY
This ‘Cold War’ centers around four key “accelerator technologies”: artificial intelligence, semiconductor chips, quantum computing, and biotechnology.

When AI went mainstream with OpenAI’s ChatGPT and Google Gemini, the general belief was that developing Large Language Models (LLMs) required deep pockets.

China’s DeepSeek, however, opened the gateway for affordable AI development despite restrictions on chip exports. It upended US dominance by training an open-source LLM for just USD 5.6 million and led to carnage on Wall Street. Silicon Valley got a reality check with China’s grand entry onto the AI stage.

The West has an advantage in semiconductors, but for how long? Taiwan produces about 90 per cent of the world’s most advanced semiconductors, but China’s largest chipmaker, SMIC, is fast catching up. It recently unveiled a 5-nanometer chip using alternative methods such as quadruple patterning. This breakthrough came after it hired more than 100 TSMC engineers, a ‘brain drain’ from its industrial sector, as Taiwan calls it. Though its chips are a little behind the cutting edge, SMIC is certainly making up lost ground.

Subsidies from the Chinese government are helping these AI start-ups not only to fuel innovation but also to slash their costs, giving them a massive competitive advantage over their Western rivals.

The energy divide also puts the US at a severe disadvantage. China added about 300 gigawatts of renewable capacity in 2024 alone, far outpacing America’s sluggish adoption of nuclear and other expensive forms of energy that the West is struggling with right now.

Experts also view the open-source DeepSeek as a ‘poison pill’ strategy where the aim is not to democratise AI but flood the market with cheap alternatives, forcing Western firms into a pricing race which they cannot win.

The EU, on the other hand, has a new liability directive which forces companies to prove that their models did not train on copyrighted or unethical data, a rule which China could wilfully ignore as it focuses on ideological compliance.

CALL OF REGULATION TO AVOID ‘LOSS OF CONTROL’
Are we ready for human-like AI by 2030 with no concrete roadmap and guardrails in place?

Technology has always been a double-edged sword. It can be wielded for both good and bad. Even as we look at improving efficiency and productivity, its misuse or abuse can lead to untold harmful consequences.

AI has woven itself into the fabric of our everyday lives. From healthcare to transportation and customer service, AI’s influence is far-reaching. It can be used to diagnose diseases, increase productivity and the efficiency of care delivery, automate routine tasks, and improve security and the accuracy of decision-making in a wide range of fields.

However, it also carries significant risks, including the potential to worsen existing inequalities and create new ethical and societal challenges.

Calls for regulation have been made due to AI’s existential risk to humanity. It can harm millions through errors in healthcare, weaponised misinformation, impersonation of people and displacement of workers.

The EU AI Act rests on a simple principle: make the technology more “human-centric”. The idea is to regulate AI based on its capacity to harm society, ranking applications as low, medium or high risk. The higher the risk, the stricter the rules. France and Germany, however, are pushing for a more balanced approach that protects innovation.

On one hand, the US advocates a “hands-off” approach while emphasising values like data privacy and algorithmic transparency; China, on the other, prioritises state control at the expense of personal freedoms, requiring providers to ensure AI-generated content aligns with the government’s core socialist values.

Reports suggest that Russia is doubling down on AI in defence, despite technological limitations from sanctions.

While India has done away with ‘explicit permission’ for deploying under-tested AI models, these can now be made available to users only after “appropriately labelling the possible inherent fallibility or unreliability of the output generated”.

The need, however, is for global regulation that balances innovation and economic growth with ethical and societal concerns and is consistent across countries. This will ensure that AI is developed and deployed in a way that maximizes its benefits while protecting privacy, safety, and human dignity.

It is important to strike a balance between regulation and embracing safe innovation. Keeping the corporate AI race from becoming reckless requires establishing clear rules and enforcing legal guardrails.

In November last year, Anthropic, founded by former members of OpenAI, warned that the window for proactive risk prevention is closing fast and that governments should urgently act on AI policy within the next eighteen months. “Dragging our feet might lead to the worst of both worlds: poorly designed, knee-jerk regulation that hampers progress while also failing to be effective at preventing risks.”

It added that the potential for misuse is likely to increase as models advance in capabilities.

The challenge is to keep pace with the digital era’s velocity of change and protect the public interest in this fastest-ever race happening without a referee.

Since AI is a multi-faceted capability, Anthropic added, regulations have to be risk-based and targeted.

Policies should therefore focus on how AI is designed, used and sold, while setting rules for verifying data inputs and outputs. One essential step is pilot testing of AI models before release.

ANXIETY OF BEING REPLACED BY AI
One of the greatest fears regarding AI is that it will eliminate jobs, yet the summit failed to address it. The joint statement said an observatory would be set up to analyse the impact of AI on jobs and help decide how education and training should evolve. There were no concrete proposals or enforceable guidelines. It is not without reason that tech companies are describing the summit as a missed opportunity.

This disruptive innovation is set to reshape the world of employment in the years to come.

According to the International Monetary Fund, AI will impact 40 per cent of all jobs, but the world seems unprepared for it. Besides engineers, who may be left high and dry due to the application gap in education, AI can affect jobs in technical writing, legal assistance, market research, travel planning, translation, customer service and more.

A UK think tank report stated that women, young workers and the lowest-paid will be the most affected as routine jobs are automated.

Till Leopold, Head of Work, Wages and Job Creation at the World Economic Forum, stressed the need for businesses and governments to work together, invest in skills and build an equitable and resilient global workforce.

The International Labour Organisation, in its policy recommendations, has emphasised prioritising redeployment and training over job loss, while focusing on the most exposed sectors. It has suggested social protection coverage and access to retraining in case of displacement. Beyond these, policies should be designed to address gender-specific needs in the transition process. It has also stressed investment in under-funded sectors that have the potential to be a source of good-quality jobs, such as the care and green economies.

The US administration is looking at maintaining a pro-worker growth path for AI so it can be an important tool for job creation. It believes that while eliminating some jobs and creating others, AI will probably make workers more productive, thus benefiting the economy.

India shares similar views as it sees a change in the nature of work with evolving technology. The emphasis here is on the need to invest in skilling and re-skilling people for an AI-driven future.

WHERE DOES INDIA STAND
India, which has been playing catch-up, having approved its artificial intelligence mission less than a year ago, has also announced plans to develop its own generative AI model. The government is in touch with six developers for building the foundational model, which could take anywhere from four to six months. India’s AI spending also got a major boost with a Rs 2,200 crore allocation in the annual budget.

It is also focusing on building related technologies to establish a solid presence on its own turf. India wants to capitalise on data centres, which are the backbone of the AI economy. While it holds 20 per cent of global data, it has only 3 per cent of total data-centre capacity, a gap it wants to close.

Mukesh Ambani-led Reliance Industries Ltd is planning to build what may become the world’s largest data centre—potentially three times the capacity of current leading facilities, with an estimated investment of USD 20-30 billion.

India is also expecting an investment of USD 300 billion in hyperscalers and data centres over the next three years.

At the Paris summit, India pitched for collective global efforts to establish governance and standards for AI that uphold shared values and address concerns related to cyber security, disinformation and deepfakes, while underlining the need for open-source systems. Its priorities included ensuring AI access for Global South nations, upskilling to prevent job losses and building data sets “free from biases” while creating people-centric applications.

It also intensively engaged with AI leaders across the globe to build support for the philosophy of ‘Data for Development’.

India is looking to not only position itself as a key player in AI but also as a hub for the next generation of technological innovations.
