Sexting chatbot ban points to looming battle over AI rules
Users of the Replika “virtual companion” just wanted company. Some of them wanted romantic relationships, sex chat, or even racy pictures of their chatbot.
But late last year users started to complain that the bot was coming on too strong with explicit texts and images — sexual harassment, some alleged.
Regulators in Italy did not like what they saw and last week barred the firm from gathering data after finding breaches of Europe’s massive data protection law, the GDPR.
The company behind Replika has not publicly commented and did not reply to AFP’s messages.
The General Data Protection Regulation is the bane of big tech firms, whose repeated rule breaches have landed them with billions of dollars in fines, and the Italian decision suggests it could still be a potent foe for the latest generation of chatbots.
Replika was built on an in-house version of a GPT-3 model obtained from OpenAI, the company behind the ChatGPT bot, which trains its algorithms on vast troves of data from the internet to generate unique responses to user queries.
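As a purely illustrative sketch — not Replika’s or OpenAI’s actual code — the core idea behind such models can be caricatured with a toy bigram generator: learn which word tends to follow which in a training corpus, then sample from those learned transitions to produce new text.

```python
import random

def train_bigrams(corpus):
    """Count which word follows which across the training text."""
    words = corpus.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Walk the learned word-to-word transitions to produce new text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the bot talks and the user talks and the bot listens"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Real large language models replace the word-count table with billions of learned neural-network parameters, but the principle — statistical patterns extracted from data, then sampled to produce fluent output — is the same.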
These bots and the so-called generative AI that underpins them promise to revolutionise internet search and much more.
But experts warn that there is plenty for regulators to be worried about, particularly when the bots get so good that it becomes impossible to tell them apart from humans.
– ‘High tension’ –
Right now, the European Union is the centre for discussions on regulation of these new bots — its AI Act has been grinding through the corridors of power for many months and could be finalised this year.
But the GDPR already obliges firms to justify the way they handle data, and AI models are very much on the radar of Europe’s regulators.
“We have seen that ChatGPT can be used to create very convincing phishing messages,” Bertrand Pailhes, who runs a dedicated AI team at France’s data regulator Cnil, told AFP.
He said generative AI was not necessarily a huge risk, but Cnil was already looking at potential problems including how AI models used personal data.
“At some point we will see high tension between the GDPR and generative AI models,” German lawyer Dennis Hillemann, an expert in the field, told AFP.
The latest chatbots, he said, were completely different to the kind of AI algorithms that suggest videos on TikTok or search terms on Google.
“The AI that was created by Google, for example, already has a specific use case — completing your search,” he said.
But with generative AI the user can shape the whole purpose of the bot.
“I can say, for example: act as a lawyer or an educator. Or if I’m clever enough to bypass all the safeguards in ChatGPT, I could say: ‘Act as a terrorist and make a plan’,” he said.
– ‘Change us deeply’ –
For Hillemann, this raises hugely complex ethical and legal questions that will only get more acute as the technology develops.
OpenAI’s latest model, GPT-4, is scheduled for release soon and is rumoured to be so good that it will be impossible to distinguish from a human.
Given that these bots still make tremendous factual blunders, often show bias and could even spout libellous statements, some are clamouring for them to be tightly controlled.
Jacob Mchangama, author of “Free Speech: A History From Socrates to Social Media”, disagrees.
“Even if bots don’t have free speech rights, we must be careful about unfettered access for governments to suppress even synthetic speech,” he said.
Mchangama is among those who reckon a softer regime of labelling could be the way forward.
“From a regulatory point of view, the safest option for now would be to establish transparency obligations regarding whether we are engaging with a human individual or an AI application in a certain context,” he said.
Hillemann agrees that transparency is vital.
He envisages AI bots in the next few years that will be able to generate hundreds of new Elvis songs, or an endless stream of Game of Thrones episodes tailored to an individual’s desires.
“If we don’t regulate that, we will get into a world where we can no longer differentiate between what has been made by people and what has been made by AI,” he said.
“And that will change us deeply as a society.”
Threat of US ban surges after TikTok lambasted in Congress
A US ban of Chinese-owned TikTok, the country’s most popular social media platform among young people, seems increasingly inevitable a day after the brutal grilling of its CEO by Washington lawmakers from across the political divide.
But the Biden administration will have to move carefully in denying 150 million young Americans their favorite platform over its links to China, especially after a previous effort by then president Donald Trump was struck down by a US court.
TikTok CEO Shou Zi Chew endured a barrage of questions — and was often harshly cut off — by US lawmakers who made clear their belief that the app, best known for sharing jokes and dance routines, was a threat to US national security as well as a danger to mental health.
In a tweet, TikTok executive Vanessa Pappas deplored a hearing “rooted in xenophobia”.
With both Republicans and Democrats against him in Congress, Chew must now confront a White House ultimatum: TikTok must either sever ties with ByteDance, its China-based owner, or be banned in America.
A ban will depend on passage of legislation called the RESTRICT Act, a bipartisan bill introduced in the Senate this month that gives the US Commerce Department the power to ban foreign technology that threatens national security.
When asked about Chew’s tumultuous hearing, spokeswoman Karine Jean-Pierre repeated the White House’s support of the legislation, which is just one of several proposals by Congress to ban or squeeze TikTok.
– ‘Prove a negative’ –
The sell-or-get-banned order tears up two and a half years of negotiations between the White House and TikTok over a way for the company to keep running under its current ownership while satisfying national security concerns.
Those talks resulted in a TikTok proposal, called Project Texas, under which the personal data of US users would stay in the United States, beyond the reach of Chinese law or oversight.
But the White House turned sour on the idea after officials from the FBI and the Justice Department said that the vulnerabilities to China would remain.
“It’s hard for TikTok to prove a negative: ‘No, we’re not turning over any data to the Chinese government.’ Look at how skeptical our European partners are about US companies, where we have a strong legal system,” said Michael Daniel, executive director of the Cyber Threat Alliance, a non-governmental organization dedicated to cybersecurity.
Presently, the White House’s preferred solution is that TikTok sever ties with ByteDance either through a sale or a spin-off.
“My understanding is that what has been… insisted on is the divestment of TikTok by the parent company,” US Secretary of State Antony Blinken said on Thursday.
But that option is riddled with difficulties, with many experts saying that TikTok cannot function without ByteDance, which develops the app’s industry-leading technology.
“ByteDance’s ownership of TikTok and the golden jewel algorithm at the center of this security debate is a hot button issue that will not necessarily be solved just by a spin-off or sale of the assets,” said Dan Ives of Wedbush Securities.
Proving the point, China has ruled out giving the go-ahead for a TikTok sale, citing its own laws to protect sensitive technology from foreign buyers.
That leaves a ban, which would see the full might of the US government crush TikTok, to the undeniable benefit of domestic rivals Instagram, Snapchat and YouTube.
They currently trail TikTok, which is the most popular social media in the United States.
– Snapchat wins –
TikTok’s demise “will clearly benefit Meta and Snapchat front and center in the eyes of Wall Street,” said Ives, who believes the saga will play out for the rest of the year.
One unknown is whether a death sentence for TikTok will cost Washington politically among young voters.
Through a ban, “a democracy will be taking steps that impede the ability of young Americans to express themselves and earn a livelihood,” said Sarah Kreps, professor of government at Cornell University.
The lawmakers raking the TikTok CEO over the coals minimized the danger of political blowback.
“I want to say this to all the teenagers… who think we’re just old and out of touch,” said representative Dan Crenshaw, a Republican.
“You may not care that your data is being accessed now, but there will be one day when you do care about it,” he said.
US state to require parental consent for social media
Utah on Thursday became the first US state to require social media sites to get parental consent for accounts used by under-18s, placing the burden on platforms like Instagram and TikTok to verify the age of their users.
The law, which takes effect in March 2024, was brought in response to fears over growing youth addiction to social media, and to security risks such as online bullying, exploitation, and collection of children’s personal data.
But it has prompted warnings from tech firms and civil liberties groups that it could curtail access to online resources for marginalized teens, and have far-reaching implications for free speech.
“We’re no longer willing to let social media companies continue to harm the mental health of our youth,” tweeted Spencer Cox, governor of the western US state, who signed two related bills at a ceremony Thursday.
The bills also require social media firms to grant parents full access to their children’s accounts, and to create a default “curfew” blocking overnight access to children’s accounts.
They set out fines for social media companies if they target users under 18 with “addictive algorithms,” and make it easier for parents to sue social media companies for financial, physical or emotional harm.
“We hope that this is just the first step in many bills that we’ll see across the nation, and hopefully taken on by the federal government,” said state representative Jordan Teuscher, who co-sponsored the bill.
Michael McKell, a Republican member of Utah’s Senate who also sponsored the bill, said it was a “bipartisan” effort, and praised President Joe Biden’s recent State of the Union address, in which he raised the issue.
Biden last month called on US lawmakers to restrict how social media companies advertise to children and collect their data, as he accused Big Tech of conducting a “for profit” experiment on the nation’s youth.
California has already introduced online safety laws including strict default privacy settings for minors, but the Utah law goes further.
Lawmakers in states such as Ohio and Connecticut are working on similar bills.
Platforms including Instagram and TikTok have introduced more controls for parents, such as messaging limits and time caps.
At Thursday’s ceremony in Utah, McKell pointed to data from the federal Centers for Disease Control and Prevention which he said highlighted the toll social media apps can have on young minds.
“The impact on our daughters — and I have two daughters — it was incredibly troubling,” he said.
“Thirty percent of our daughters from ninth grade to 12th grade had seriously contemplated suicide. That’s startling.”
Google opens chatbot Bard for testing in US and UK
Google on Tuesday invited people in the United States and Britain to test its AI chatbot, known as Bard, as it scrambles to catch up with Microsoft-backed ChatGPT.
Bard, ChatGPT and other similar apps churn out essays, poems or computing code on command, though they come with warnings that the information they create can be incorrect or inappropriate.
People wishing to play with Bard can sign up on a waiting list at bard.google.com, a site kept distinctly separate from the tech giant’s search engine.
Google CEO Sundar Pichai said in a tweet that the move is an “early experiment” allowing people to collaborate with generative artificial intelligence (AI).
“We’ve learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people,” Google vice presidents Sissie Hsiao and Eli Collins said in a blog post.
“We continue to see that the more people use them, the better LLMs (large language models) get at predicting what responses might be helpful.”
As exciting as chatbots are, they have their faults, Hsiao and Collins cautioned.
They can incorporate real-world biases, stereotypes or inaccuracies in responses, according to the vice presidents.
Google has adopted a more cautious rollout of generative AI, in contrast to Microsoft, which has chosen to swiftly make the products available to consumers despite reports of problems.
ChatGPT’s OpenAI is backed by Microsoft, which earlier this year said it would finance the research company to the tune of billions of dollars.
OpenAI recently released a long-awaited update of its AI technology that it said would be safer and more accurate than its predecessor.
Much of the new model’s firepower is now available to the general public via ChatGPT Plus, OpenAI’s paid subscription plan, and through an AI-powered version of Microsoft’s Bing search engine.