From “intelligent” vacuum cleaners and driverless cars to advanced techniques for diagnosing diseases, artificial intelligence has burrowed its way into every arena of modern life.
Its promoters reckon it is revolutionising human experience, but critics stress that the technology risks putting machines in charge of life-changing decisions.
Regulators in Europe and North America are worried.
The European Union is likely to pass legislation next year — the AI Act — aimed at reining in the age of the algorithm.
The United States recently published a blueprint for an AI Bill of Rights and Canada is also mulling legislation.
Looming large in the debates has been China’s use of biometric data, facial recognition and other technology to build a powerful system of control.
Gry Hasselbalch, a Danish academic who advises the EU on the controversial technology, argued that the West was also in danger of creating “totalitarian infrastructures”.
“I see that as a huge threat, no matter the benefits,” she told AFP.
But before regulators can act, they face the daunting task of defining what AI actually is.
– ‘Mug’s game’ –
Suresh Venkatasubramanian of Brown University, who co-authored the AI Bill of Rights, said trying to define AI was “a mug’s game”.
Any technology that affects people’s rights should be within the scope of the bill, he tweeted.
The 27-nation EU is taking the more tortuous route of attempting to define the sprawling field.
Its draft law lists the kinds of approaches defined as AI, and it includes pretty much any computer system that involves automation.
The problem stems from the changing use of the term AI.
For decades, it described attempts to create machines that simulated human thinking.
But funding largely dried up for this research — known as symbolic AI — in the early 2000s.
The rise of the Silicon Valley titans saw AI reborn as a catch-all label for their number-crunching programs and the algorithms they generated.
This automation allowed them to target users with advertising and content, helping them to make hundreds of billions of dollars.
“AI was a way for them to make more use of this surveillance data and to mystify what was happening,” Meredith Whittaker, a former Google worker who co-founded New York University’s AI Now Institute, told AFP.
So the EU and US have both concluded that any definition of AI needs to be as broad as possible.
– ‘Too challenging’ –
But from that point, the two Western powerhouses have largely gone their separate ways.
The EU’s draft AI Act runs to more than 100 pages.
Among its most eye-catching proposals is the complete prohibition of certain “high-risk” technologies — the kind of biometric surveillance tools used in China.
It also drastically limits the use of AI tools by migration officials, police and judges.
Hasselbalch said some technologies were “simply too challenging to fundamental rights”.
The AI Bill of Rights, on the other hand, is a brief set of principles framed in aspirational language, with exhortations like “you should be protected from unsafe or ineffective systems”.
The bill was issued by the White House and relies on existing law.
Experts reckon no dedicated AI legislation is likely in the United States until 2024 at the earliest because Congress is deadlocked.
– ‘Flesh wound’ –
Opinions differ on the merits of each approach.
“We desperately need regulation,” Gary Marcus of New York University told AFP.
He points out that “large language models” — the AI behind chatbots, translation tools, predictive text software and much else — can be used to generate harmful disinformation.
Whittaker questioned the value of laws aimed at tackling AI rather than the “surveillance business models” that underpin it.
“If you’re not addressing that at a fundamental level, I think you’re putting a band-aid over a flesh wound,” she said.
But other experts have broadly welcomed the US approach.
AI was a better target for regulators than the more abstract concept of privacy, said Sean McGregor, a researcher who chronicles tech failures for the AI Incident Database.
But he said there could be a risk of over-regulation.
“The authorities that exist can regulate AI,” he told AFP, pointing to the likes of the US Federal Trade Commission and the housing regulator HUD.
But where experts broadly agree is on the need to strip away the hype and mysticism that surround AI technology.
“It’s not magical,” McGregor said, likening AI to a highly sophisticated Excel spreadsheet.
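McGregor’s spreadsheet analogy can be made concrete: at bottom, a machine-learning model is learned numbers combined by multiply-and-add arithmetic, the same operation as a spreadsheet’s SUMPRODUCT. The sketch below is purely illustrative — the weights and inputs are made-up values, not any real system’s.

```python
import numpy as np

# Illustrative only: a one-neuron "model" is a weighted sum plus a
# decision rule -- the same arithmetic a spreadsheet performs over cells.
rng = np.random.default_rng(0)
weights = rng.normal(size=3)            # "learned" numbers, nothing more
features = np.array([0.2, 1.5, -0.3])   # hypothetical input values

score = features @ weights              # multiply-and-add, like SUMPRODUCT()
prediction = score > 0                  # a simple threshold decision
print(score, prediction)
```

Large models differ in scale, not in kind: billions of such weighted sums, but no step in the pipeline is anything other than arithmetic.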
Elon Musk, White House discuss electric vehicles
Tesla head Elon Musk met with senior White House officials Friday to discuss the Biden administration’s push to grow the electric vehicle market, Press Secretary Karine Jean-Pierre said.
“That meeting did happen today,” she told reporters.
Musk, who has had sometimes openly prickly relations with President Joe Biden, met infrastructure development coordinator Mitch Landrieu and clean energy advisor John Podesta.
They discussed “electrification and how the bipartisan infrastructure law and the Inflation Reduction Act can advance EVs and increase the electrification more broadly,” Jean-Pierre said, referring to two major pieces of legislation passed under Biden providing subsidies and incentives to bolster clean energy, electric vehicles and general infrastructure.
Jean-Pierre said Biden did not meet with Musk, but “it’s important that senior members of his team had a meeting.”
The billionaire entrepreneur occupies an unusual place at the intersection of cutting-edge industry and politics, with ownership of the country’s most famous EV brand, space projects and Twitter.
He has often tangled with the Biden administration and has used Twitter to embrace right-wing talking points.
On Thursday, he said he met with Republican Speaker of the House Kevin McCarthy and Democratic minority leader Hakeem Jeffries as Congress explores potential curbs on social media platforms.
Musk tweeted that he went to “discuss ensuring that this platform is fair to both parties.”
Hive ransomware: modern, efficient business model
The US Justice Department’s shutdown Thursday of the Hive ransomware operation — which extorted some $100 million from more than 1,500 victims worldwide — highlights how hacking has become an ultra-efficient, specialized industry that can allow anyone to become a cyber-shakedown artist.
– Modern business model –
Hive operated in what cybersecurity experts call a “ransomware as a service” style, or RaaS — a business that leases its software and methods to others for use in extorting a target.
The model is central to the larger ransomware ecosystem, in which actors specialize in one skill or function to maximize efficiency.
According to Ariel Ropek, director of cyber threat intelligence at cybersecurity firm Avertium, this structure makes it possible for criminals with minimal computer fluency to get into the ransomware game by paying others for their expertise.
“There are quite a few of them,” Ropek said of RaaS operations.
“It is really a business model nowadays,” he said.
– How it works –
On the so-called dark web, providers of ransomware services and support pitch their products openly.
At one end are the initial access brokers, who specialize in breaking into corporate or institutional computer systems.
They then sell that access to the hacker, or ransomware operator.
But the operator depends on RaaS developers like Hive, which have the programming skills to create the malware needed to carry out the operation and avoid counter-security measures.
Typically, their programs — once inserted by the ransomware operator into the target’s IT systems — are manipulated to freeze, via encryption, the target’s files and data.
The programs also exfiltrate copies of the data back to the ransomware operator.
RaaS developers like Hive offer a full service to the operators, for a large share of the ransom paid out, said Ropek.
“Their goal is to make the ransomware operation as turnkey as possible,” he said.
– Polite but firm –
When the ransomware is planted and activated, the target receives a message telling them how to correspond and how much to pay to get their data unencrypted.
That ransom can run from thousands to millions of dollars, usually depending on the financial strength of the target.
Inevitably the target tries to negotiate on the portal. They often don’t get very far.
Menlo Security, a cybersecurity firm, last year published the conversation between a target and Hive’s “Sales Department” that took place on Hive’s special portal for victims.
In it, the Hive operator courteously and professionally offered to prove the decryption would work with a test file.
But when the target repeatedly offered a fraction of the $200,000 demanded, Hive was firm, insisting the target could afford the total amount.
Eventually, the Hive agent gave in and offered a significant reduction — but drew the line there.
“The price is $50,000. It’s final. What else to say?” the Hive agent wrote.
If a target organization refuses to pay, the RaaS developers hold a backup position: they threaten to release the hacked confidential files online or sell them.
Hive maintained a separate website, HiveLeaks, to publish the data.
On the back end of the deal, according to Ropek, there are specialist operations to collect the money, making sure those taking part get their shares of the ransom.
Others, known as cryptocurrency tumblers, help launder the ransom for the hacker to use above-ground.
– Modest blow –
Thursday’s action against Hive was only a modest blow against the RaaS industry.
There are numerous other ransomware specialists similar to Hive still operating.
The biggest current threat is LockBit, which attacked Britain’s Royal Mail in early January and a Canadian children’s hospital in December.
In November, the Justice Department said LockBit had reaped tens of millions of dollars in ransoms from 1,000 victims.
And it isn’t hard for Hive’s operators to just start again.
“It’s a relatively simple process of setting up new servers, generating new encryption keys. Usually there’s some kind of rebrand,” said Ropek.
Madison Square Garden’s facial recognition blacklisting sparks outcry
The heated debate over facial recognition technology has a new flashpoint: Manhattan’s celebrated Madison Square Garden, home to the New York Knicks basketball team and countless Billy Joel concerts.
The operator of the arena, where Joe Frazier defeated Muhammad Ali in 1971’s “Fight of the Century,” is under fire for using the software to identify and eject certain lawyers from events at the venue — because they are associated with ongoing litigation involving MSG.
Local lawmakers want to halt the crackdown, which rights campaigners say is a gross abuse of a technology that is already raising fears about privacy and control from America to China.
“When the rich and powerful are free to use facial recognition to track the public it puts everyone else at risk,” said Albert Fox Cahn, executive director of STOP, a non-profit that advocates for privacy.
“Here we see a chilling example of how petty the retaliation can be,” he told AFP.
Last October, Barbara Hart and her husband were approaching their seats at “The Garden” for a Brandi Carlile concert to celebrate their wedding anniversary when security guards stopped them.
She said the guards identified her without seeing her ID card, and despite the tickets being in her husband’s name, before removing the couple from the venue.
The attorney believes the guards used technology to match her face with an image of herself taken from her company’s website.
Hart said she was targeted because her firm is engaged in a lawsuit against the venue’s parent company, MSG Entertainment, even though she is not on the case.
“It was bewildering and upsetting. Bullying with fancy tools,” the 62-year-old told AFP.
Hart is among at least four lawyers removed recently from MSG Entertainment venues because their firms are locked in legal disputes with the company.
Kerry Conlon told local media she was refused entry to Radio City Music Hall in November while trying to see dancers the Rockettes with her 9-year-old daughter.
Two other attorneys said they were denied entry to MSG to watch the Knicks and NHL team the Rangers respectively.
Billionaire businessman James Dolan’s MSG Entertainment says it has a “straightforward policy that precludes attorneys from firms pursuing active litigation against the company from attending events at our venues until that litigation has been resolved.”
New York’s attorney general, Letitia James, warned him Tuesday that the policy “may violate” state civil rights legislation.
State senators this week proposed closing a loophole in the law, which prohibits the “wrongful refusal of admission” of patrons with a valid ticket to entertainment venues.
– ‘Orwellian’ –
For rights advocates, the proposed amendment, while welcomed, doesn’t deal with the crux of the issue — growing surveillance in the age of the algorithm.
Facial recognition technology is legal in New York. It is used by police and at airports.
In 2020, the state government temporarily banned its use in schools. Campaigners like Cahn support a total ban.
He says the Madison Square Garden example shows that private business can use facial recognition “to exclude anyone whose voice you want to silence.”
MSG has deployed facial recognition technology since 2018. A New York Times report that year said the venue uses an algorithm to compare images taken by a camera to a stored database of photographs.
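The Times report describes an algorithm comparing camera images against a photo database. Conceptually, such systems map each face to a numeric “embedding” vector and flag a match when two vectors are close enough. The sketch below is a generic illustration of that idea, not MSG’s actual system — the embeddings and threshold are invented values.

```python
import numpy as np

def cosine_similarity(a, b):
    """Score how closely two embedding vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical stored embedding, e.g. derived from a photo scraped
# from a law firm's website (values are made up for illustration).
database = {
    "attorney_photo": np.array([0.9, 0.1, 0.4]),
}
camera_capture = np.array([0.88, 0.12, 0.41])  # hypothetical live capture

THRESHOLD = 0.95  # match tolerance set by the system's operator
for name, stored in database.items():
    if cosine_similarity(camera_capture, stored) > THRESHOLD:
        print("match:", name)
```

The operator-chosen threshold is where the policy questions live: set it loosely and false matches rise, which critics say falls hardest on ethnic minorities.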
“The facial recognition technology system does not retain images of individuals, with the exception of those who were previously advised they are prohibited from entering our venues, or whose previous misconduct in our venues has identified them as a security risk,” an MSG Entertainment spokesperson told AFP.
The United States and the European Union are among those grappling with how to regulate the use of biometric data, facial recognition and artificial intelligence.
Supporters say facial recognition bolsters security, but critics say the imperfect technology is prone to false matches, particularly among ethnic minorities, and is discriminatory.
Detractors also highlight Chinese police’s use of it to track down and detain recent protesters.
MSG’s use “paints an Orwellian picture of the society we’re in right now,” Daniel Schwarz of the New York Civil Liberties Union told AFP.