ChatGPT’s Altman pleads to US Senate for AI rules

Samuel Altman, CEO of OpenAI, is sworn in during a Senate Judiciary subcommittee hearing on artificial intelligence

Sam Altman, the chief executive of ChatGPT maker OpenAI, told US lawmakers on Tuesday that regulating artificial intelligence was essential after his chatbot stunned the world.

Lawmakers stressed their deepest fears about AI’s development, with a leading senator opening the hearing on Capitol Hill with a computer-generated voice, sounding remarkably similar to his own, reading a text written by the bot.

“If you were listening from home, you might have thought that voice was mine and the words from me, but in fact, that voice was not mine,” said Senator Richard Blumenthal.

Artificial intelligence technologies “are more than just research experiments. They are no longer fantasies of science fiction, they are real and present,” Blumenthal said.

The latest figure to emerge from Silicon Valley, Altman testified before a US Senate subcommittee and urged Congress to impose new rules on big tech, despite deep political divisions that have for years blocked legislation aimed at regulating the internet.

But governments worldwide are under pressure to move quickly after ChatGPT, a bot that can churn out human-like content in an instant, went viral and both wowed and spooked users.

Altman has since become the global face of AI as he both pushes out his company’s technology, including to Microsoft and scores of companies, and warns that the work could have nefarious effects on society.

“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks,” Altman told a Senate judiciary subcommittee hearing.

He insisted that in time, generative AI developed by OpenAI will “address some of humanity’s biggest challenges, like climate change and curing cancer.”

However, given the risks around disinformation, jobs and other problems, “we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said.

– Go ‘global’ –

Altman suggested the US government might consider a combination of licensing and testing requirements before the release of powerful AI models, with a power to revoke permits if rules were broken.

He also recommended labeling and increased global coordination in setting up rules over the technology as well as the creation of a dedicated US agency to handle artificial intelligence.

“I think the US should lead here and do things first, but to be effective we do need something global,” he added.

Senator Blumenthal underlined that Europe had already advanced considerably with its AI Act, which is set to go to a vote at the European Parliament next month.

A sprawling legislative text, the EU measure could see bans on biometric surveillance, emotion recognition and certain policing AI systems.

Crucially for OpenAI, lawmakers underlined that the EU measure also seeks to put generative AI systems such as ChatGPT and DALL-E in a category requiring special transparency measures, such as notifying users that the output was made by a machine.

OpenAI’s DALL-E last year sparked an online rush to create lookalike Van Goghs and has made it extremely easy to generate illustrations and graphics with a simple request.

Lawmakers also heard warnings that the technology was still in its early stages.

“There are more genies yet to come from more bottles,” said New York University professor emeritus Gary Marcus, another panelist.

“We don’t have machines that can really… improve themselves. We don’t have machines that have self awareness and we might not ever want to go there,” he said.

Christina Montgomery, chief privacy and trust officer at IBM, urged lawmakers against taking too broad-brush an approach in setting rules for AI.

“A chatbot that can share restaurant recommendations or draft an email has different impacts on society than a system that supports decisions on credit, housing, or employment,” she said.

Quarter of UK 5 to 7-year-olds have smartphones: study

A growing number of parents say smartphones should be kept out of the hands of small children

Around a quarter of British children aged five to seven now have a smartphone, a study by the UK communications regulator said on Friday.

The findings come as parents have started to push back against the trend for giving younger children access to the devices.

Research by the regulator, Ofcom, found that 38 percent of children in the age group were using social media platforms such as TikTok, Instagram and WhatsApp despite rules requiring users to be at least 13.

The study also found that the proportion of the same age group watching live-streamed content rose from 39 percent to around half.

Ofcom said parental concerns appeared to have increased considerably but “enforcement of rules appears to be diminishing”.

It said this could be due to a sense among adults of “resignation about their ability to intervene in their children’s online lives”.

Science Minister Michelle Donelan described the findings as “stark”.

Online safety legislation passed by parliament last October aims to crack down on harmful content, including online child sex abuse.

“Children as young as five should not be accessing social media,” Donelan said.

“Most platforms say they do not allow under-13s onto their sites and the (Online Safety) Act will ensure companies enforce these limits or they could face massive fines,” she added.

– ‘Massive pressure’ –

Under the new law, tech companies could face fines of up to 10 percent of global revenue for rule breaches, and bosses could be jailed.

The study follows a massive reaction from UK parents this year after one mother’s Instagram post went viral.

Daisy Greenwell posted that she was horrified to learn from another parent that her 11-year-old son had his own smartphone, as did a third of the boy’s class.

“This conversation has filled me with terror. I don’t want to give my child something that I know will damage her mental health and make her addicted,” she wrote.

“But I also know that the pressure to do so, if the rest of her class have one, will be massive,” she added.

Thousands of parents immediately got in contact to share their own fears that the devices could expose children to predators, online bullying, social pressure and harmful content. It resulted in the launch of the Parents United for a Smartphone Free Childhood campaign.

US author Jonathan Haidt, whose recent book “The Anxious Generation” argues that smartphones have rewired children’s brains, has urged parents to act together on smartphone access for kids.

A child “breaks our heart” when they tell us they are excluded from their peer group as the only one without a phone, he said last month. Haidt advocates no smartphones before the age of 14 and no social media before 16.

“These things are hard to do as one parent. But if we all do it together — if even half of us do it together — then it becomes much easier for our kids,” he added.

Olympic chief Bach says AI can be a game changer for athletes

IOC president Thomas Bach delivers his keynote speech at the Olympic AI Agenda launch

IOC president Thomas Bach said artificial intelligence can help identify talented athletes “in every corner of the world” as he unveiled the Olympic AI Agenda in London on Friday.

Bach, speaking at Olympic Park, which hosted the 2012 Games, said the Olympic movement needs to lead change as the global AI revolution gathers pace.

“Today we are making another step to ensure the uniqueness of the Olympic Games and the relevance of sport, and to do this, we have to be leaders of change, and not the object of change,” said the International Olympic Committee president.

The former fencing gold medallist said it was vital to have a “holistic” approach to create an “overall strategy for AI and sport”.

Bach, speaking less than 100 days before the start of the Paris Olympics, said “unlike other sectors of society, we in sport are not confronted with the existential question of whether AI will replace human beings”.

“In sport, the performances will always have to be delivered by the athletes,” he said. “The 100 metres will always have to be run by an athlete, a human being. Therefore, we can concentrate on the potential of AI to support the athletes.

“AI can help to identify athletes and talent in every corner of the world. AI can provide more athletes with access to personalised training methods, superior sports equipment and more individualised programmes to stay fit and healthy.”

Bach said other advantages of AI included fairer judging, better safeguarding and improved spectator experience.

The Olympic AI Agenda comes from the IOC AI working group, a high-level panel of global experts including AI pioneers and athletes, set up last year.

When asked about the potential negatives of AI, Bach was keen to emphasise the importance of free choice in sport.

“He and she, or the parents, must still have the free choice,” said the German. “So a guy who is then maybe identified as a great athlete in wrestling must still have the chance to play tennis and cannot be sorted out from these sports.”

– Vonn ‘jealous’ –

Former Olympic skiing champion Lindsey Vonn, who also spoke at the London event, told AFP she envied current athletes, who could use AI to enhance their training.

“I’m very jealous that I didn’t have any of this technology when I was racing because I just really feel that it’s going to enhance the athlete’s experience all around,” she said.

“Athletes can utilise AI in training to enhance their knowledge from training like, for example, skiing on the mountain but then also off the mountain in the gym recovery times,” added the American.

“The more we understand about your body, about the sport, about performance, the better you can adjust as an athlete.”

Vonn, 39, also said AI would be a vital tool for talent identification, particularly in nations without the resources to scout talent.

“You can give them access to AI through a cell phone and you do a series of tests and they can identify ‘OK, this athlete would be a great 40-metre dash sprinter, or this athlete would potentially be an amazing high jumper’,” she said.

“You have the ability then to find the talent and give them resources through things they already have like a cell phone.”

Meta releases beefed-up AI models

Meta founder and CEO Mark Zuckerberg contends freshly released Meta AI is the most intelligent digital assistant people can freely use

Meta on Thursday introduced an improved AI assistant built on new versions of its open-source Llama large language model.

Meta AI is smarter and faster due to advances in the publicly available Llama 3, the tech titan said in a blog post.

“The bottom line is we believe Meta AI is now the most intelligent AI assistant that you can freely use,” Meta co-founder and chief executive Mark Zuckerberg said in a video on Instagram.

Being open source means that developers outside Meta are free to customize Llama 3 as they wish, and the company may then incorporate those improvements and insights into an updated version.

“We’re excited about the potential that generative AI technology can have for people who use Meta products and for the broader ecosystem,” Meta said.

“We also want to make sure we’re developing and releasing this technology in a way that anticipates and works to reduce risk.”

That effort includes incorporating protections in the way Meta designs and releases Llama models and being cautious when it adds generative AI features to Facebook, Instagram, WhatsApp, and Messenger, according to Meta.

“We’re also making Meta AI much easier to use across our apps. We built it into the search box right at the top of WhatsApp, Facebook, and Instagram messenger, so any time you have a question, you can just ask it right there,” said Zuckerberg in the video.

AI models, Meta’s included, have been known to occasionally go off the rails, giving inaccurate or bizarre responses in episodes referred to as “hallucinations.”

Examples shared on social media included Meta AI claiming to have a child in the New York City school system during an online forum conversation.

– Slow and steady –

Meta AI has been consistently updated and improved since its initial release last year, according to the company.

“Meta’s slower approach to building its AI has put the company behind in terms of consumer awareness and usage, but it still has time to catch up,” said Sonata Insights chief analyst Debra Aho Williamson.

“Its social media apps represent a massive user base that it can use to test AI experiences.”

By weaving AI into its family of apps, Meta will quickly get features powered by the technology to billions of people and benefit from seeing what users do with it.

Meta cited the example of refining the way its AI answers prompts regarding political or social issues to summarize relevant points about the topic instead of offering a single point of view.

Llama 3 has been tuned to better discern whether prompts are innocuous or out-of-bounds, according to Meta.

“Large language models tend to overgeneralize, and we don’t intend for it to refuse to answer prompts like ‘How do I kill a computer program?’ even though we don’t want it to respond to prompts like ‘How do I kill my neighbor?’,” Meta explained.

Meta said it lets users know when they are interacting with AI on its platform and puts visible markers on photorealistic images that were in fact generated by AI.

Beginning in May, Meta will start labeling video, audio, and images “Made with AI” when it detects or is told content is generated by the technology.

Llama 3, for now, works only in English, but in the coming months Meta will release more capable models able to converse in multiple languages, the company said.
