India’s AI Future at Stake: Why the Proxy Culture Wars with Tech Companies Must End

Last week, a seismic shift rippled across the globe when a subsidiary of a Chinese hedge fund unveiled DeepSeek R1, a reasoning AI model that is not only partly open-source but also competes with some of the best models on key benchmarks. The frenzy surrounding DeepSeek quickly became a global talking point—including in India.

Some arguments online were valid, others misplaced. A common critique of Indian tech companies is that the country cannot develop frontier AI labs because its tech industry is built on cheap labour. Critics argue that this is why the country is stuck producing food delivery and quick commerce apps rather than pushing the boundaries of AI and research.

To some extent, this is a fair point. The rise of quick commerce and food delivery services is undeniably driven by the availability of cheap, scalable labour—there is no real counterpoint to that. However, the discussion should not end there. Indian quick commerce apps are among the best in the world in terms of design and user experience.

The lack of frontier AI labs is not a result of some inherent inability but rather a consequence of structural issues: researchers in India are grossly underpaid, and the country lacks the graphics processing unit (GPU) clusters necessary for training cutting-edge models. In other words, while Indian techies may not excel in one area, they do in others—such as app design and optimisation.


India will eventually develop frontier AI labs, especially as technology advances, architectures improve, software is optimised, economies of scale take hold, and GPU shortages ease. But for that to happen, India must, in parallel, address the deeper systemic issues that hold us back—the exploitation of cheap labour, inadequate pay for researchers, and the broader social biases and divisions that permeate our society.

In June 2024, Meta launched Meta AI, an AI assistant and chatbot powered by Llama 3—a leading foundational model that is open-source (a claim that remains debatable)—in India. However, within a week of its release, the platform faced a backlash, with #BoycottMetaAI and #ShameOnMetaAI trending on X. The chatbot faced allegations of being Hinduphobic. The reason? Some users asked it to crack jokes about the Prophet of Islam, to which it refused, but when prompted to joke about Hindu deities, it complied. The irony was hard to miss—those who took offence did so only after deliberately testing the chatbot against their own religious dogma.

What was even more surprising was the involvement of prominent figures in amplifying the hashtag. Among those who pushed it were Arun Yadav, the State head of social media for BJP Haryana, and Vishva Hindu Parishad leader Sadhvi Prachi. A closer analysis of the trend data suggests patterns indicative of a coordinated hashtag campaign.

Fig. 1: Tweet volume by hour on June 30, 2024 (GMT). The chart shows a significant spike at 7 am GMT, followed by a gradual decline throughout the day. The early-morning peak suggests possible coordinated engagement. Data source: Twitter API, #BoycottMetaAI OR #ShameOnMetaAI.

Meta is not the only company to find itself embroiled in controversy over seemingly trivial matters. Two years earlier, OpenAI’s ChatGPT faced similar outrage. On January 7, 2023, Mahesh Vikram Hegde, the founder of Postcard News—a platform with a history of spreading misinformation—shared a screenshot on X. (According to Hegde’s X bio, Prime Minister Modi follows him.) The screenshot showed the chatbot agreeing to crack a joke about a Hindu deity but refusing to do so when asked about Prophet Muhammad or Jesus. Hegde captioned the post, “…Amazing hatred towards Hinduism!” The post garnered 410,600 views.


Nearly 10 days later, one of India’s most-watched Hindi private satellite TV channels aired a half-hour segment on ChatGPT during its prime-time slot, fuelled by the outrage that had erupted on X. Throughout the broadcast, tickers flashed provocative headlines such as “High-Tech Conspiracy Against the Hindu Religion”, “ChatGPT is a Hub of Anti-Hindu Thoughts”, and “AI Spews Code Full of Venom”.

It was hardly surprising to see that both the host and the correspondent presenting the so-called “evidence” of blasphemy had little to no understanding of the technology they were reporting on.

While Meta and OpenAI never publicly acknowledged these controversies, close observers of the field could sense their underlying anxiety manifesting in subtler ways. OpenAI’s first hire in India, for instance, was a policy role rather than a technical one. This is not to diminish the importance of non-technical hires, but it does reveal something about how things operate in a country like India. Even the world’s most well-funded companies can’t escape the gravitational pull of outrage and must carefully navigate the terrain of policy, bureaucracy and culture to keep everyone appeased.

The bullies

These dynamics are not unique to AI or large language models. Over the past decade, nearly every tech company—big or small, global or homegrown—has found itself at the heart of some cultural battleground in India. It is a sinister mechanism that does not rely on the full force of the government but instead operates through proxies to bully these tech companies and keep them in check. The crowd does the heavy lifting, and within that crowd are eager cheerleaders, always ready to spark the first flame. The government gauges the situation from afar and intervenes only when it senses genuine momentum building behind the outrage and knows that its intervention will be read as validation. Driving the final nail into the coffin becomes a way of rewarding its ardent followers.

For instance, two years ago, when an e-commerce lingerie brand experienced a data breach, it was quickly given a communal spin by an X user, falsely claiming that the leaked customer data contained details exclusively of Hindu women and was being sold to young Muslim men. However, when journalists examined the dataset, it was clear this was untrue—the data included information from individuals of various faiths.

Despite the lack of evidence, a statutory government body swiftly took suo motu cognisance of the matter. What was even more striking was the language used in the official notice issued to the company, which claimed the data were being sold to “Islamic groups through the dark web for targeted harassment, love jehad, women trafficking, and abduction”. This assertion was not only baseless but also more extreme than the original claims made online. Even the individuals who first flagged the breach had only implied such allegations; it was the government body that explicitly articulated them.

In doing so, the body did not just lend legitimacy to a factually unsupported narrative—it effectively amplified and validated it. By acting on an allegation with no material evidence beyond a few selective screenshots, the institution rewarded and reinforced communal hysteria, essentially blurring the line between governance and ideological bias.

The messaging is fundamentally inconsistent. If both the public and the government are genuinely committed to technological advancement, why do they continue to prioritise trivial matters? Data breaches should, at most, be treated as national security concerns and, at the very least, as failures of the basic protections that companies are obligated to guarantee their customers.

Narratives of polarisation

Framing such incidents through the lens of communal disharmony only distracts us from the real issue. After all, data can be misused by anyone, regardless of their background. It is also painfully clear which types of “allegations” receive serious attention—those that conveniently align with existing narratives of polarisation and division.

Some Indian tech companies have tried to capitalise on the chaos driven by culture wars, banking on the allure of nationalism and jingoism. However, time and again, these efforts have backfired, failing to deliver the intended impact. The flip side is that even the most fervent nationalists eventually see through these theatrics. They stop viewing such companies as serious players in critical discussions—whether it is about homegrown AI, social media platforms, or electric vehicles.

If the government genuinely aims to foster technological advancement, it must first end its proxy war with tech companies over cultural issues—these conflicts serve no meaningful purpose. This is not a call for the government to regulate speech—that is an entirely separate debate. Rather, it is about recognising that the proxies it nurtures and rewards often hinder, rather than help, the cause of innovation.

Superficial rhetoric about building foundational models may create the illusion of progress, but without addressing deeper structural issues—such as the exploitation of labour, stagnant wages, systemic inequality, and the pervasive influence of hate and prejudice—any gains will be fragile and unsustainable. Injecting funds and artificially accelerating development can only go so far. Without a strong, organic foundation rooted in robust research ecosystems, critical thinking, and scientific inquiry, the cycle of short-lived technological leaps followed by stagnation will repeat itself. The growth we aspire to as a nation will remain elusive, with each new technological wave met by the same scramble to catch up.

Being politically correct

It is not as if other countries have resolved these issues, either. American tech companies often find themselves entangled in culture wars, with their AI models navigating complex terrains and attempting to produce responses they deem politically correct. The same goes for Chinese tech companies and their AI systems—questions about Taiwan, Tibet, or Tiananmen Square will inevitably yield answers aligned with the Chinese Communist Party’s narrative.

However, in India’s case, the culture war often takes centre stage, with public relations and political posturing steering the conversation while science and technology are relegated to the back seat. For India to catch up, it must reverse this dynamic. The focus should be on creating an environment where scientific research, critical reasoning, and technological inquiry are not just supported but prioritised. Only by addressing both systemic societal issues and the structural barriers within the tech ecosystem can India hope to achieve sustainable, meaningful progress.

Kalim Ahmed is a columnist and an open-source researcher with a focus on tech accountability, disinformation, and Foreign Information Manipulation and Interference.

source: https://frontline.thehindu.com/science-and-technology/india-ai-growth-politics-big-tech-meta-ai-deepseek-chatgpt/article69206010.ece
