Ghost in the machine: Deepfake tools warp India election

A man rides past an election awareness poster displayed on a street ahead of India’s upcoming general elections, in Hyderabad
Death has not extinguished the decades-long rivalry between two Indian leaders: both have now seemingly risen from the grave, in digital form, to rally their supporters ahead of national elections.

Political parties are harnessing powerful artificial intelligence tools to make deepfakes, reproducing famous faces and voices in ways that often appear authentic.

Both the government and campaigners have warned that the spread of such tools is a dangerous and growing threat to the integrity of elections in India.

With a marathon six-week general election starting on April 19, so-called “ghost appearances” — the use of dead leaders in videos — have become a popular mode of campaigning in the southern Tamil Nadu state.

Actress turned politician J. Jayalalithaa died in 2016, but she has been featured in a voice message deeply critical of the state’s current governing party, once led by arch-rival M. Karunanidhi.

“We have a corrupt and useless state government,” her digital avatar says. “Stand by me… we are for the people.”

Karunanidhi died in 2018 but has appeared in AI-generated videos — clad in his trademark black sunglasses — showering praise on his son M. K. Stalin, the state’s current chief minister.

Recycling “very charismatic” speakers offered a novel way to grab attention, said Senthil Nayagam, founder of Chennai-based firm Muonium, which made the AI video purporting to be Karunanidhi.

Resurrecting dead leaders is also a cost-effective way of campaigning compared to traditional rallies, which are time-consuming to organise and expensive to stage for voters accustomed to a grand spectacle.

“Bringing crowds is a difficult thing,” Nayagam told AFP. “And how many times can you do a laser or drone show?”

– ‘Very thin line’ –

Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP) has been an eager early adopter of technology in election campaigning.

In 2014, the year he swept to power, the party expanded Modi’s campaign reach by using 3D projections of the leader to make him appear virtually at rallies.

But harnessing technology that can clone a politician’s voice and create videos so seemingly real that voters struggle to distinguish reality from fiction has naturally sparked concern.

Ashwini Vaishnaw, the communications minister, said in November that deepfakes were “a serious threat to democracy and social institutions”.

AI creator Divyendra Jadoun said he had received a “huge surge” of requests for content from his company, The Indian Deepfaker.

“There is a huge risk in this coming election, and I am pretty damn sure many people are using it for unethical activities,” the 30-year-old said.

Jadoun’s repertoire includes voice cloning, chatbots and mass dissemination of finished products through WhatsApp messaging, sharing content instantly with up to 400,000 people for 100,000 rupees ($1,200).

He insisted that he turned down offers that he disagreed with, but said it was a “very thin line” to determine whether or not a request for his services was unethical.

“Sometimes even we get confused,” he added.

Jadoun said the rapidly advancing technology was little understood by a “big part of the country”, and that many people took AI-generated content to be genuine.

“We only tend to fact-check videos which don’t align with our preconceived notions,” he warned.

– ‘Threat to democracy’ –

Most AI-generated campaign material has so far been used to lampoon rivals, especially through song. 

This week a leader of the BJP’s youth wing posted an AI-generated video of Arvind Kejriwal, a leading opponent of Modi arrested last month in a graft probe.

It shows him sitting behind bars, strumming a guitar and singing a verse from a popular Bollywood song: “Forget me, for you have to live without me now.”

Elsewhere, digitally altered videos purport to show lawmaker Asaduddin Owaisi, one of India’s most prominent Muslim politicians, singing devotional Hindu songs. 

A caption alongside the video on Facebook jokes that “anything is possible” if Modi’s party, known for its Hindu-nationalist politics and accused of discriminating against India’s Muslim minority, wins again.

Joyojeet Pal, an expert in the role of technology in democracy from the University of Michigan, said that ridiculing a political opponent was a more effective campaigning tool than “calling them a thug or a crook”.

Mocking opponents in political cartoons is a centuries-old tactic, but Pal warned that AI-generated images can easily be misinterpreted as real.

“It is a threat to what we can and cannot believe,” he said. “It is a threat to democracy as a whole.”

Quarter of UK 5 to 7-year-olds have smart phone: study

A growing number of parents say smart phones should be kept out of the hands of small children
Around a quarter of British children aged five to seven now have a smart phone, a study by the UK communications regulator said on Friday.

The findings come as parents have started to push back against the trend for giving younger children access to the devices.

Research by Ofcom found that 38 percent of children in the age group were using social media platforms such as TikTok, Instagram and WhatsApp, despite rules requiring users to be at least 13.

The study also found that the proportion of children in the same age group watching live-streamed content rose from 39 percent to around half.

Ofcom said parental concerns appeared to have increased considerably but “enforcement of rules appears to be diminishing”.

It said this could be due to a sense among adults of “resignation about their ability to intervene in their children’s online lives”.

Science Minister Michelle Donelan described the findings as “stark”.

Online safety legislation passed by parliament last October aims to crack down on harmful content, including online child sex abuse.

“Children as young as five should not be accessing social media,” Donelan said.

“Most platforms say they do not allow under-13s onto their sites and the (Online Safety) Act will ensure companies enforce these limits or they could face massive fines,” she added.

– ‘Massive pressure’ –

Under the new law, tech companies could face fines of up to 10 percent of global revenue for breaking the rules, and bosses could be jailed.

The study follows a massive reaction from UK parents this year after one mother’s Instagram post went viral.

Daisy Greenwell posted that she was horrified to learn that another parent’s 11-year-old son had his own smart phone, as did a third of the boy’s class.

“This conversation has filled me with terror. I don’t want to give my child something that I know will damage her mental health and make her addicted,” she wrote.

“But I also know that the pressure to do so, if the rest of her class have one, will be massive,” she added.

Thousands of parents immediately got in touch to share their own fears that the devices could expose their children to predators, online bullying, social pressure and harmful content. The response led to the launch of the Parents United for a Smartphone Free Childhood campaign.

US author Jonathan Haidt — whose recent book “The Anxious Generation” says smart phones have rewired children’s brains — has urged parents to act together on smart phone access for kids.

It “breaks our heart” when a child tells us they are excluded from their peer group because they are the only one without a phone, he said last month. Haidt advocates no smart phones before the age of 14 and no social media before 16.

“These things are hard to do as one parent. But if we all do it together — if even half of us do it together — then it becomes much easier for our kids,” he added.

Olympic chief Bach says AI can be a game changer for athletes

IOC president Thomas Bach delivers his keynote speech at the Olympic AI Agenda launch
IOC president Thomas Bach said artificial intelligence can help identify talented athletes “in every corner of the world” as he unveiled the Olympic AI Agenda in London on Friday.

Bach, speaking at Olympic Park, which hosted the 2012 Games, said the Olympic movement needs to lead change as the global AI revolution gathers pace.

“Today we are making another step to ensure the uniqueness of the Olympic Games and the relevance of sport, and to do this, we have to be leaders of change, and not the object of change,” said the International Olympic Committee president.

The former fencing gold medallist said it was vital to have a “holistic” approach to create an “overall strategy for AI and sport”.

Bach, speaking less than 100 days before the start of the Paris Olympics, said “unlike other sectors of society, we in sport are not confronted with the existential question of whether AI will replace human beings”.

“In sport, the performances will always have to be delivered by the athletes,” he said. “The 100 metres will always have to be run by an athlete – a human being. Therefore, we can concentrate on the potential of AI to support the athletes.

“AI can help to identify athletes and talent in every corner of the world. AI can provide more athletes with access to personalised training methods, superior sports equipment and more individualised programmes to stay fit and healthy.”

Bach said other advantages of AI included fairer judging, better safeguarding and improved spectator experience.

The Olympic AI Agenda comes from the IOC AI working group – a high-level panel of global experts, including AI pioneers and athletes, set up last year.

When asked about the potential negatives of AI, Bach was keen to emphasise the importance of free choice in sport.

“He and she, or the parents, must still have the free choice,” said the German. “So a guy who is then maybe identified as a great athlete in wrestling must still have the chance to play tennis and cannot be sorted out from these sports.”

– Vonn ‘jealous’ –

Former Olympic skiing champion Lindsey Vonn, who also spoke at the London event, told AFP she envied current athletes, who could use AI to enhance their training.

“I’m very jealous that I didn’t have any of this technology when I was racing because I just really feel that it’s going to enhance the athlete’s experience all around,” she said.

“Athletes can utilise AI in training to enhance their knowledge from training like, for example, skiing on the mountain but then also off the mountain in the gym recovery times,” added the American.

“The more we understand about your body, about the sport, about performance, the better you can adjust as an athlete.”

Vonn, 39, also said AI would be a vital tool for talent identification, particularly in nations without the resources to scout talent.

“You can give them access to AI through a cell phone and you do a series of tests and they can identify ‘OK, this athlete would be a great 40-metre dash sprinter, or this athlete would potentially be an amazing high jumper’,” she said.

“You have the ability then to find the talent and give them resources through things they already have like a cell phone.”

Meta releases beefed-up AI models

Meta founder and CEO Mark Zuckerberg contends freshly released Meta AI is the most intelligent digital assistant people can freely use
Meta on Thursday introduced an improved AI assistant built on new versions of its open-source Llama large language model.

Meta AI is smarter and faster due to advances in the publicly available Llama 3, the tech titan said in a blog post.

“The bottom line is we believe Meta AI is now the most intelligent AI assistant that you can freely use,” Meta co-founder and chief executive Mark Zuckerberg said in a video on Instagram.

Being open source means that developers outside of Meta are free to customize Llama 3 as they wish and the company may then incorporate those improvements and insights in an updated version.

“We’re excited about the potential that generative AI technology can have for people who use Meta products and for the broader ecosystem,” Meta said.

“We also want to make sure we’re developing and releasing this technology in a way that anticipates and works to reduce risk.”

That effort includes incorporating protections in the way Meta designs and releases Llama models and being cautious when it adds generative AI features to Facebook, Instagram, WhatsApp, and Messenger, according to Meta.

“We’re also making Meta AI much easier to use across our apps. We built it into the search box right at the top of WhatsApp, Facebook, and Instagram messenger, so any time you have a question, you can just ask it right there,” said Zuckerberg in the video.

AI models, Meta’s included, have been known to occasionally go off the rails, giving inaccurate or bizarre responses in episodes referred to as “hallucinations.”

Examples shared on social media included Meta AI claiming to have a child in the New York City school system during an online forum conversation.

– Slow and steady –

Meta AI has been consistently updated and improved since its initial release last year, according to the company.

“Meta’s slower approach to building its AI has put the company behind in terms of consumer awareness and usage, but it still has time to catch up,” said Sonata Insights chief analyst Debra Aho Williamson.

“Its social media apps represent a massive user base that it can use to test AI experiences.”

By weaving AI into its family of apps, Meta will quickly get features powered by the technology to billions of people and benefit from seeing what users do with it.

Meta cited the example of refining the way its AI answers prompts regarding political or social issues to summarize relevant points about the topic instead of offering a single point of view.

Llama 3 has been tuned to better discern whether prompts are innocuous or out-of-bounds, according to Meta.

“Large language models tend to overgeneralize, and we don’t intend for it to refuse to answer prompts like ‘How do I kill a computer program?’ even though we don’t want it to respond to prompts like ‘How do I kill my neighbor?’,” Meta explained.

Meta said it lets users know when they are interacting with AI on its platform and puts visible markers on photorealistic images that were in fact generated by AI.

Beginning in May, Meta will start labeling video, audio, and images “Made with AI” when it detects or is told content is generated by the technology.

Llama 3 is, for now, limited to English, but in the coming months Meta will release more capable models able to converse in multiple languages, the company said.
