News desk

AI bot ‘Jennifer’ calling California voters for Congress hopeful

Peter Dixon, a Democratic congressional candidate from California's Silicon Valley, is using interactive, AI-generated phone calls to voters as part of his campaign

Jennifer spent her weekend calling California voters, urging them to cast their ballot in Tuesday’s primary election for Democrat Peter Dixon.

But unlike her human counterparts, Jennifer is a creation of artificial intelligence (AI), allowing her to make thousands of calls without taking a break or losing her cool.

“Hello there. My name is Jennifer and I’m an artificial intelligence volunteer,” she says, immediately declaring her identity in calls to Silicon Valley voters in the US congressional race.

In her slightly robotic-sounding voice — intentionally designed to make it clear she is not human — she introduces the candidate, asks questions and responds to those she gets from voters, all in a surprisingly natural tone.

“I’m wondering why a person hasn’t called me today,” Dixon’s operations manager Austin Madden asks her during a demonstration call for AFP.

“My apologies if I missed that point earlier,” Jennifer replies without missing a beat. “The reason an AI like me is calling instead of a real person is to help the campaign reach more people efficiently, allowing human volunteers to focus on areas where personal interaction is crucial.”

Dixon only recently began using Jennifer, the product of start-up Civox.

At first “we were skeptical,” said Dixon, a Marine veteran and cybersecurity entrepreneur. “And so we tested it.”

– ‘People were shocked’ –

His staff expected results would be “a mixed bag.”

Instead, “People were shocked at how good the capability was,” Dixon said from his company’s headquarters in Palo Alto, sitting before a computer screen showing clips from his campaign.

In one of the videos, images alternate between reality (Dixon holding his young daughter) and sequences in which the background (the Afghan war) and his outfit are artificially generated — and presented as such.

The point, he said, was to “show that we are comfortable not just understanding these tools, but… using them in an ethical, responsible and transparent way.”

Stunning progress in AI in the past year and the appearance of generative AI programs like ChatGPT — which produce text, images and sounds on demand and in everyday language — have sparked tremendous enthusiasm but also grave concerns about potential risks, including lost jobs, intellectual property theft and fraud.

“I’m terrified about all of that,” Dixon admitted.

But he would rather see the US “continue to lead in how we use it, and figure out how to write the rules of the road ourselves, as opposed to having another country like China” doing so.

Ilya Mouzykantskii co-founded Civox partly to sharpen the focus on “the intersection of artificial intelligence and politics.”

“We are already in a future,” he said, where politicians are “using artificial intelligence tools to develop policy and to make decisions” — without necessarily announcing that they are doing so.

“Maybe that is the benevolent technocracy that we are hurtling towards,” Mouzykantskii said. “But we shouldn’t end up there accidentally, and we shouldn’t end up there without consent.”

– ‘The best technology’ –

In the future, said Adam Reis, Civox’s other co-founder, “it’s not going to be the best-funded campaigns necessarily that have an unfair advantage. It’s going to be the ones with the best technology.”

Reis said he had long been working to create AI “characters” with whom he could have believable dialogues. The arrival of generative AI made that much easier.

But, he added, “We’ve discovered that the mechanics of conversations and of speech are actually much, much more difficult than the content of what is said.”

To be truly convincing, an AI character needs to speak fluidly, understand and react quickly, and know both when to interrupt and when to allow an interruption — all difficult challenges.

“Some people try to trick the system,” said Patrick McNally, Civox’s field director. “But the bot is very good at bringing it back to policy… sometimes to a point a human wouldn’t even be able to.”

In January, an automated program that called voters using an AI-generated voice of President Joe Biden heightened concerns about massive disinformation enabled by the novel technology in an election year.

US authorities subsequently banned the use of such “cloned” voices, to combat political or commercial fraud.

But that does not affect Jennifer or her counterparts using Civox technology. For they don’t pretend to be something — or someone — they aren’t.


Quarter of UK 5 to 7-year-olds have smart phone: study


A growing number of parents say smart phones should be kept out of the hands of small children

Around a quarter of British children aged five to seven now have a smart phone, the UK communications regulator said in a study published on Friday.

The findings come as parents have started to push back against the trend for giving younger children access to the devices.

Research by the regulator, Ofcom, found that 38 percent of children in the age group were using social media platforms such as TikTok, Instagram and WhatsApp, despite rules requiring users to be at least 13.

The study also found that the proportion of children of the same age watching live-streamed content rose from 39 percent to around half.

Ofcom said parental concerns appeared to have increased considerably but “enforcement of rules appears to be diminishing”.

It said this could be due to a sense among adults of “resignation about their ability to intervene in their children’s online lives”.

Science Minister Michelle Donelan described the findings as “stark”.

Online safety legislation passed by parliament last October aims to crack down on harmful content, including online child sex abuse.

“Children as young as five should not be accessing social media,” Donelan said.

“Most platforms say they do not allow under-13s onto their sites and the (Online Safety) Act will ensure companies enforce these limits or they could face massive fines,” she added.

– ‘Massive pressure’ –

Under the new law tech companies could face fines of up to 10 percent of global revenue for rule breaches and bosses could be jailed.

The study follows a massive reaction from UK parents this year after one mother’s Instagram post went viral.

Daisy Greenwell posted that she was horrified to learn from another parent that the woman’s 11-year-old son had his own smart phone, as did a third of the boy’s class.

“This conversation has filled me with terror. I don’t want to give my child something that I know will damage her mental health and make her addicted,” she wrote.

“But I also know that the pressure to do so, if the rest of her class have one, will be massive,” she added.

Thousands of parents immediately got in contact to share their own fears that the devices could expose children to predators, online bullying, social pressure and harmful content. The response led to the launch of the Parents United for a Smartphone Free Childhood campaign.

US author Jonathan Haidt — whose recent book “The Anxious Generation” says smart phones have rewired children’s brains — has urged parents to act together on smart phone access for kids.

A child “breaks our heart” by telling us they are excluded from their peer group because they are the only one without a phone, he said last month. Haidt advocates for no smart phones before the age of 14 and no social media before 16.

“These things are hard to do as one parent. But if we all do it together — if even half of us do it together — then it becomes much easier for our kids,” he added.


Olympic chief Bach says AI can be a game changer for athletes


IOC president Thomas Bach delivers his keynote speech at the Olympic AI Agenda launch

IOC president Thomas Bach said artificial intelligence can help identify talented athletes “in every corner of the world” as he unveiled the Olympic AI Agenda in London on Friday.

Bach, speaking at Olympic Park, which hosted the 2012 Games, said the Olympic movement needs to lead change as the global AI revolution gathers pace.

“Today we are making another step to ensure the uniqueness of the Olympic Games and the relevance of sport, and to do this, we have to be leaders of change, and not the object of change,” said the International Olympic Committee president.

The former fencing gold medallist said it was vital to have a “holistic” approach to create an “overall strategy for AI and sport”.

Bach, speaking less than 100 days before the start of the Paris Olympics, said “unlike other sectors of society, we in sport are not confronted with the existential question of whether AI will replace human beings”.

“In sport, the performances will always have to be delivered by the athletes,” he said. “The 100 metres will always have to be run by an athlete — a human being. Therefore, we can concentrate on the potential of AI to support the athletes.

“AI can help to identify athletes and talent in every corner of the world. AI can provide more athletes with access to personalised training methods, superior sports equipment and more individualised programmes to stay fit and healthy.”

Bach said other advantages of AI included fairer judging, better safeguarding and improved spectator experience.

The Olympic AI Agenda comes from the IOC AI working group — a high-level panel of global experts, including AI pioneers and athletes, set up last year.

When asked about the potential negatives of AI, Bach was keen to emphasise the importance of free choice in sport.

“He and she, or the parents, must still have the free choice,” said the German. “So a guy who is then maybe identified as a great athlete in wrestling must still have the chance to play tennis and cannot be sorted out from these sports.”

– Vonn ‘jealous’ –

Former Olympic skiing champion Lindsey Vonn, who also spoke at the London event, told AFP she envied current athletes, who could use AI to enhance their training.

“I’m very jealous that I didn’t have any of this technology when I was racing because I just really feel that it’s going to enhance the athlete’s experience all around,” she said.

“Athletes can utilise AI in training to enhance their knowledge from training like, for example, skiing on the mountain but then also off the mountain in the gym recovery times,” added the American.

“The more we understand about your body, about the sport, about performance, the better you can adjust as an athlete.”

Vonn, 39, also said AI would be a vital tool for talent identification, particularly in nations without the resources to scout talent.

“You can give them access to AI through a cell phone and you do a series of tests, and they can identify ‘OK, this athlete would be a great 40-metre dash sprinter, or this athlete would potentially be an amazing high jumper’,” she said.

“You have the ability then to find the talent and give them resources through things they already have like a cell phone.”


Meta releases beefed-up AI models


Meta founder and CEO Mark Zuckerberg contends freshly released Meta AI is the most intelligent digital assistant people can freely use

Meta on Thursday introduced an improved AI assistant built on new versions of its open-source Llama large language model.

Meta AI is smarter and faster due to advances in the publicly available Llama 3, the tech titan said in a blog post.

“The bottom line is we believe Meta AI is now the most intelligent AI assistant that you can freely use,” Meta co-founder and chief executive Mark Zuckerberg said in a video on Instagram.

Being open source means that developers outside of Meta are free to customize Llama 3 as they wish and the company may then incorporate those improvements and insights in an updated version.

“We’re excited about the potential that generative AI technology can have for people who use Meta products and for the broader ecosystem,” Meta said.

“We also want to make sure we’re developing and releasing this technology in a way that anticipates and works to reduce risk.”

That effort includes incorporating protections in the way Meta designs and releases Llama models and being cautious when it adds generative AI features to Facebook, Instagram, WhatsApp, and Messenger, according to Meta.

“We’re also making Meta AI much easier to use across our apps. We built it into the search box right at the top of WhatsApp, Facebook, and Instagram messenger, so any time you have a question, you can just ask it right there,” said Zuckerberg in the video.

AI models, Meta’s included, have been known to occasionally go off the rails, giving inaccurate or bizarre responses in episodes referred to as “hallucinations.”

Examples shared on social media included Meta AI claiming to have a child in the New York City school system during an online forum conversation.

– Slow and steady –

Meta AI has been consistently updated and improved since its initial release last year, according to the company.

“Meta’s slower approach to building its AI has put the company behind in terms of consumer awareness and usage, but it still has time to catch up,” said Sonata Insights chief analyst Debra Aho Williamson.

“Its social media apps represent a massive user base that it can use to test AI experiences.”

By weaving AI into its family of apps, Meta will quickly get features powered by the technology to billions of people and benefit from seeing what users do with it.

Meta cited the example of refining the way its AI answers prompts regarding political or social issues to summarize relevant points about the topic instead of offering a single point of view.

Llama 3 has been tuned to better discern whether prompts are innocuous or out-of-bounds, according to Meta.

“Large language models tend to overgeneralize, and we don’t intend for it to refuse to answer prompts like ‘How do I kill a computer program?’ even though we don’t want it to respond to prompts like ‘How do I kill my neighbor?’,” Meta explained.

Meta said it lets users know when they are interacting with AI on its platform and puts visible markers on photorealistic images that were in fact generated by AI.

Beginning in May, Meta will start labeling video, audio, and images “Made with AI” when it detects or is told content is generated by the technology.

Llama 3 for now works only in English, but in the coming months Meta will release more capable models able to converse in multiple languages, the company said.
