California looks to Europe to rein in AI

Legislators in the California state capitol are working on a flurry of laws aiming to crack down on abusive uses of artificial intelligence on the home turf of some of the world's most powerful tech titans

California, home to Silicon Valley, is eager to rein in the deployment of artificial intelligence and is looking to Europe’s tough-on-big-tech approach for inspiration.

The richest state in the United States by GDP, California is a hotbed of no-holds-barred tech innovation, but lawmakers in state capital Sacramento want to give the industry laws and guardrails it has largely been spared in the internet age.

Brussels has enacted a barrage of laws on US-dominated tech and sprinted to pass the AI Act after OpenAI’s Microsoft-backed ChatGPT arrived on the scene in late 2022, unleashing a global AI race.

“What we’re trying to do is actually learn from the Europeans, but also work with the Europeans, and figure out how to put regulations in place on AI,” said David Harris, senior policy advisor at the California Initiative for Technology and Democracy.

As they have in the past with EU laws on private data, lawmakers in California are looking to recent European legislation on AI, especially since there is little hope of equivalent national legislation coming out of Washington.

There are at least 30 different bills proposed by California state legislators that relate to various aspects of AI, according to Harris, who said he has advised officials in California and in Europe on such laws.

Proposed laws in California range from requiring AI makers to reveal what data was used to train their models to banning election ads containing any computer-generated features.

“One of the aspects I think is really important is the question of how we deal with deepfakes or fake text created to look like a human being is sending you messages,” Harris told AFP.

State assembly member Gail Pellerin is backing a bill she says would essentially ban the spreading of deceptive digital content created with generative AI in the months leading up to and the weeks following an election.

“Bad actors who are utilizing this are really hoping to create chaos in an election,” Pellerin said.

– Law-breaking ‘bad guys’ –

Industry association NetChoice is dead set against importing aspects of European legislation on AI, or any other EU tech regulation.

“They are taking, essentially, a European approach on artificial intelligence – which is that we must ban the technology,” said Carl Szabo, the general counsel of the association, which advocates for light touch regulation of tech.

“Outlawing AI won’t stop (anything). It’s bad because bad guys don’t follow the law,” Szabo argued.

“That’s what makes them bad guys.”

US computer software giant Adobe, like most tech giants, worked with Europe on the AI Act, according to Adobe General Counsel and Chief Trust Officer Dana Rao.

At the heart of the EU AI Act is a risk-based approach, with AI practices deemed more risky getting more scrutiny.

“We feel good about where the AI Act ended up” with its high-risk, low-risk approach, said Rao.

Already, Adobe engineers carry out “impact assessments” to rate risk before making AI products available, according to Rao.

“You want to think about nuclear safety, about cybersecurity, about when AI is making substantial decisions over human rights,” Rao said.

– ‘Watching California’ –

In California, Rao said he expected the problem of deepfakes to be the first to fall under the authority of a new law.

Assembly Bill 602 would criminalize non-consensual deepfake pornography, while Assembly Bill 730 would ban the use of AI deepfakes during election campaign season.

To fight this, Adobe joined other companies to create “content credentials” that Rao equated to a “nutrition label” for digital content.

Assemblywoman Pellerin expects AI laws adopted in California to be replicated in other states.

“People are watching California,” Pellerin said, with a slew of US states also working on their own AI deepfake bills.

“We’re all in this together; we have to stay ahead of the folks that are trying to wreak havoc in an election,” she said.

Quarter of UK 5 to 7-year-olds have smart phone: study

A growing number of parents say smart phones should be kept out of the hands of small children

Around a quarter of British children aged five to seven now have a smart phone, a study by the UK communications regulator said on Friday.

The findings come as parents have started to push back against the trend for giving younger children access to the devices.

Research by the regulator, Ofcom, found that 38 percent of children in the age group were using social media platforms such as TikTok, Instagram and WhatsApp, despite rules requiring users to be at least 13.

The study also found that the proportion of children in the same age group watching live-streamed content rose from 39 percent to around half.

Ofcom said parental concerns appeared to have increased considerably but “enforcement of rules appears to be diminishing”.

It said this could be due to a sense among adults of “resignation about their ability to intervene in their children’s online lives”.

Science Minister Michelle Donelan described the findings as “stark”.

Online safety legislation passed by parliament last October aims to crack down on harmful content, including online child sex abuse.

“Children as young as five should not be accessing social media,” Donelan said.

“Most platforms say they do not allow under-13s onto their sites and the (Online Safety) Act will ensure companies enforce these limits or they could face massive fines,” she added.

– ‘Massive pressure’ –

Under the new law, tech companies could face fines of up to 10 percent of global revenue for rule breaches, and bosses could be jailed.

The study follows a massive reaction from UK parents this year after one mother’s Instagram post went viral.

Daisy Greenwell posted that she was horrified to learn that another parent’s 11-year-old son had his own smart phone, as did a third of the boy’s class.

“This conversation has filled me with terror. I don’t want to give my child something that I know will damage her mental health and make her addicted,” she wrote.

“But I also know that the pressure to do so, if the rest of her class have one, will be massive,” she added.

Thousands of parents immediately got in contact to share their own fears that the devices could expose children to predators, online bullying, social pressure and harmful content. It resulted in the launch of the Parents United for a Smartphone Free Childhood campaign.

US author Jonathan Haidt — whose recent book “The Anxious Generation” says smart phones have rewired children’s brains — has urged parents to act together on smart phone access for kids.

A child “breaks our heart” by telling us they are excluded from their peer group by being the only one without a phone, he said last month. Haidt advocates for no smart phones before the age of 14 or social media before 16.

“These things are hard to do as one parent. But if we all do it together — if even half of us do it together — then it becomes much easier for our kids,” he added.

Olympic chief Bach says AI can be a game changer for athletes

IOC president Thomas Bach delivers his keynote speech at the Olympic AI Agenda launch

IOC president Thomas Bach said artificial intelligence can help identify talented athletes “in every corner of the world” as he unveiled the Olympic AI Agenda in London on Friday.

Bach, speaking at Olympic Park, which hosted the 2012 Games, said the Olympic movement needs to lead change as the global AI revolution gathers pace.

“Today we are making another step to ensure the uniqueness of the Olympic Games and the relevance of sport, and to do this, we have to be leaders of change, and not the object of change,” said the International Olympic Committee president.

The former fencing gold medallist said it was vital to have a “holistic” approach to create an “overall strategy for AI and sport”.

Bach, speaking less than 100 days before the start of the Paris Olympics, said “unlike other sectors of society, we in sport are not confronted with the existential question of whether AI will replace human beings”.

“In sport, the performances will always have to be delivered by the athletes,” he said. “The 100 metres will always have to be run by an athlete, a human being. Therefore, we can concentrate on the potential of AI to support the athletes.

“AI can help to identify athletes and talent in every corner of the world. AI can provide more athletes with access to personalised training methods, superior sports equipment and more individualised programmes to stay fit and healthy.”

Bach said other advantages of AI included fairer judging, better safeguarding and improved spectator experience.

The Olympic AI Agenda comes from the IOC AI working group, a high-level panel of global experts, including AI pioneers and athletes, that was set up last year.

When asked about the potential negatives of AI, Bach was keen to emphasise the importance of free choice in sport.

“He and she, or the parents, must still have the free choice,” said the German. “So a guy who is then maybe identified as a great athlete in wrestling must still have the chance to play tennis and cannot be sorted out from these sports.”

– Vonn ‘jealous’ –

Former Olympic skiing champion Lindsey Vonn, who also spoke at the London event, told AFP she envied current athletes, who could use AI to enhance their training.

“I’m very jealous that I didn’t have any of this technology when I was racing because I just really feel that it’s going to enhance the athlete’s experience all around,” she said.

“Athletes can utilise AI in training to enhance their knowledge from training like, for example, skiing on the mountain but then also off the mountain in the gym recovery times,” added the American.

“The more we understand about your body, about the sport, about performance, the better you can adjust as an athlete.”

Vonn, 39, also said AI would be a vital tool for talent identification, particularly in nations without the resources to scout talent.

“You can give them access to AI through a cell phone and you do a series of tests and they can identify ‘OK, this athlete would be a great 40-metre dash sprinter, or this athlete would potentially be an amazing high jumper,’” she said.

“You have the ability then to find the talent and give them resources through things they already have like a cell phone.”

Meta releases beefed-up AI models

Meta founder and CEO Mark Zuckerberg contends freshly released Meta AI is the most intelligent digital assistant people can freely use

Meta on Thursday introduced an improved AI assistant built on new versions of its open-source Llama large language model.

Meta AI is smarter and faster due to advances in the publicly available Llama 3, the tech titan said in a blog post.

“The bottom line is we believe Meta AI is now the most intelligent AI assistant that you can freely use,” Meta co-founder and chief executive Mark Zuckerberg said in a video on Instagram.

Being open source means that developers outside of Meta are free to customize Llama 3 as they wish and the company may then incorporate those improvements and insights in an updated version.

“We’re excited about the potential that generative AI technology can have for people who use Meta products and for the broader ecosystem,” Meta said.

“We also want to make sure we’re developing and releasing this technology in a way that anticipates and works to reduce risk.”

That effort includes incorporating protections in the way Meta designs and releases Llama models and being cautious when it adds generative AI features to Facebook, Instagram, WhatsApp, and Messenger, according to Meta.

“We’re also making Meta AI much easier to use across our apps. We built it into the search box right at the top of WhatsApp, Facebook, and Instagram messenger, so any time you have a question, you can just ask it right there,” said Zuckerberg in the video.

AI models, Meta’s included, have been known to occasionally go off the rails, giving inaccurate or bizarre responses in episodes referred to as “hallucinations.”

Examples shared on social media included Meta AI claiming to have a child in the New York City school system during an online forum conversation.

– Slow and steady –

Meta AI has been consistently updated and improved since its initial release last year, according to the company.

“Meta’s slower approach to building its AI has put the company behind in terms of consumer awareness and usage, but it still has time to catch up,” said Sonata Insights chief analyst Debra Aho Williamson.

“Its social media apps represent a massive user base that it can use to test AI experiences.”

By weaving AI into its family of apps, Meta will quickly get features powered by the technology to billions of people and benefit from seeing what users do with it.

Meta cited the example of refining the way its AI answers prompts regarding political or social issues to summarize relevant points about the topic instead of offering a single point of view.

Llama 3 has been tuned to better discern whether prompts are innocuous or out-of-bounds, according to Meta.

“Large language models tend to overgeneralize, and we don’t intend for it to refuse to answer prompts like ‘How do I kill a computer program?’ even though we don’t want it to respond to prompts like ‘How do I kill my neighbor?’,” Meta explained.

Meta said it lets users know when they are interacting with AI on its platform and puts visible markers on photorealistic images that were in fact generated by AI.

Beginning in May, Meta will start labeling video, audio, and images “Made with AI” when it detects or is told content is generated by the technology.

Llama 3, for now, is based on English, but in the coming months Meta will release more capable models able to converse in multiple languages, the company said.
