An amateur photographer who goes by the name “ibreakphotos” decided to do an experiment on his Samsung phone last month to find out how a feature called “space zoom” actually works.
The feature, first released in 2020, claims a 100x zoom, and Samsung used crystal-clear images of the Moon in its marketing.
Ibreakphotos took his own pictures of the Moon — blurry and without detail — and watched as his phone added craters and other details.
The phone’s artificial intelligence software was using data from its “training” on many other pictures of the Moon to add detail where there was none.
“The Moon pictures from Samsung are fake,” he wrote, leading many to wonder whether the shots people take are really theirs anymore — or if they can even be described as photographs.
Samsung has defended the technology, saying it does not “overlay” images, and pointed out that users can switch off the function.
The firm is not alone in the race to pack its smartphone cameras with AI — Google and Apple have been marketing such features on their Pixel devices and iPhones since 2016.
The AI can do all the things photographers used to labour over — tweaking the lighting, blurring backgrounds, sharpening eyes — without the user ever knowing.
But it can also transform backgrounds or simply wipe away people from the image entirely.
And the debate over AI is not limited to hobbyists on message boards — professional bodies are raising the alarm too.
– Sidestepping the tech –
The industry is awash with AI, from cameras to software like Photoshop, said Michael Pritchard of the Royal Photographic Society of Britain.
“This automation is increasingly blurring boundaries between a photograph and a piece of artwork,” he told AFP.
The nature of AI is different to previous innovations, he said, because the technology can learn and bring new elements beyond those recorded by film or sensor.
This brings opportunities but also “fundamental challenges around redefining what photography is, and how ‘real’ a photograph is”, Pritchard said.
Nick Dunmur of the Britain-based Association of Photographers said professionals most often use “RAW” files on their digital cameras, which capture images with as little processing as possible.
But sidestepping the tech is less easy for a casual smartphone shooter.
Ibreakphotos, who posted his finding on Reddit, pointed out that technical jargon around AI is not always easy to understand — perhaps deliberately so.
“I wouldn’t say that I am happy with the use of AI in cameras, but I am OK with it as long as it is communicated clearly what each processing pipeline actually does,” he told AFP, asking that his real name not be used.
– Not ‘human-authored’ –
What professional photographers are most concerned about, though, is the rise of AI tools that generate completely new images.
In the past year, DALL-E 2, Midjourney and Stable Diffusion have exploded in popularity thanks to their ability to create images in hundreds of styles with just a short text prompt.
“This is not human-authored work,” Dunmur said, “and in many cases is based on the use of training datasets of unlicensed work.”
These issues have already led to court cases in the United States and Europe.
According to Pritchard, the tools risk disrupting the work of anyone “from photographers, to models, to retouchers and art directors”.
But Jos Avery, an American amateur photographer who recently tricked thousands on Instagram by filling his feed with stunning portraits he had created with Midjourney, disagreed.
He said the lines drawn between “our work” and “the tool’s work” were arbitrary, pointing out that his Midjourney images often took many hours to create.
But there is broad agreement on one fundamental aspect of the debate — the risk for photography is not existential.
“AI will not be the death of photography,” Avery said.
Pritchard agreed, noting that photography had endured from the daguerreotype to the digital era, and photographers had always risen to technical challenges.
That process would continue even in a world awash with AI-generated images, he said.
“The photographer will bring a deeper understanding to the resulting image even if they haven’t directly photographed it,” he said.
AI chip crunch: startups vie for Nvidia’s vital component
The artificial intelligence revolution is fully underway, but soaring demand for its most crucial component has startups scratching their heads over how to deliver on AI’s promise.
Generative AI’s lifeblood is a book-sized semiconductor known as the graphics processing unit (GPU) — built by one company, Nvidia.
Nvidia’s CEO and founder Jensen Huang made a wild bet years ago that the world would soon clamor for a powerful chip, usually used for making video games, that could build AI as well.
No company working with the generative AI models that fuel today’s frenzy can get off the ground without Nvidia’s singular product — the latest model being the H100 — and its accompanying software.
That painful reality is one that Amazon, Intel, AMD and others are scrambling to fix with their own alternatives, but those attempts could take years.
– ‘Not a lot of GPUs’ –
And with the biggest tech companies throwing all their financial might into generative AI, the smaller fish must go on the hunt to secure Nvidia’s holy grail.
“Around the world, it is becoming very hard to get thousands of GPUs because all these big companies are putting in billions of dollars, stockpiling GPUs,” said Fangbo Tao, co-founder of Mindverse.AI, a Singapore-based startup.
“There’s not a lot of GPUs around,” he said.
Tao spoke to AFP at the TechCrunch Disrupt conference in San Francisco, where AI startups jostled to make their pitches to Silicon Valley’s venture capitalists (VC).
ChatGPT took the world by storm just as Silicon Valley was nursing a nasty hangover from the pandemic years, when investors had thrown money at startups, convinced that life had gone irreversibly online.
That turned out to be far-fetched, and the US tech scene entered a downturn marked by rounds of layoffs as VC money dried up.
Thanks to AI, some of the old mojo is back, and anyone with those two letters on their resume will likely see a red carpet rolled out on the legendary Sand Hill Road, home to Silicon Valley’s most storied investors.
But as the startups walk away with their VC cash, the money in their pockets will be quickly forked over to Nvidia for GPUs, either directly or through cloud providers, to bring their AI dreams to life.
“We call on a lot of the big cloud providers (Microsoft, AWS and Google), and they all tell us even they are having trouble getting supplies,” said Laurent Daudet, CEO of AI startup LightOn.
The problem is most acute for companies that train generative AI models, which requires power-hungry GPUs working at peak capacity to process troves of data ingested from the internet.
The computing needs are so massive that only a few companies can stump up the cash to build one of these state-of-the-art large language models.
– ‘Sucking the oxygen’ –
Microsoft’s $10 billion investment in OpenAI is widely understood to be paid out as credits to access purpose-built data centers humming with Nvidia GPUs.
Google has built its own models, and Amazon said Monday it was pumping $4 billion into Anthropic, another company that trains AI models.
Training on that mountain of data “is sucking out almost all the oxygen from the GPU market right now,” said Said Ouissal, CEO of Zededa, a company that works on making AI less power-hungry.
“You’re looking at mid-next year, maybe late next year before you’re actually going to get delivery on new orders. The shortage doesn’t seem to be letting up,” added Wes Cummins, CEO of Applied Digital, a company that supplies AI infrastructure.
Companies on the AI front lines also point out that Nvidia’s pivotal role makes it the de facto kingmaker on where the technology is going.
The market is “almost entirely driven by the big players — Googles, Amazons, Metas” that have the “enormous amounts of data and enormous amounts of capital,” former Nvidia engineer Jacopo Pantaleoni told The Information.
“This was not the world I wanted to help build,” he said.
Some veterans of Silicon Valley said that the frenzied days of Nvidia GPUs will not last forever and that other options will inevitably emerge.
Or the cost of entry will prove too high, even for the giants, bringing the current boom down to earth.
Blue Origin to remain grounded for now following crash probe
US aviation regulators said Wednesday that Blue Origin must complete “21 corrective actions” before it can resume launches, closing a probe into an uncrewed crash last year that set back Jeff Bezos’s space company.
The Federal Aviation Administration report into the September 12, 2022 “mishap” said an engine nozzle failure, brought on by higher-than-expected engine operating temperatures, caused the New Shepard rocket to fall back to the ground shortly after liftoff, even as the capsule carrying research experiments escaped and floated safely back to Earth.
“During the mishap the onboard launch vehicle systems detected the anomaly, triggered an abort and separation of the capsule from the propulsion module as intended and shut down the engine,” said the FAA.
The fact that the capsule ejected right away was viewed positively, suggesting that any crew aboard would have been safe.
But “the closure of the mishap investigation does not signal an immediate resumption of New Shepard launches,” the agency said.
Blue Origin responded with a post on the social media site X, saying “We’ve received the FAA’s letter and plan to fly soon.”
In all, Blue Origin has flown 31 people — some as paying customers and others as guests — since July 2021, when Bezos himself took part in the first flight.
While it has been grounded, rival Virgin Galactic, the company founded by British billionaire Richard Branson, has pressed on, flying four spaceflights so far this year.
The two companies compete in the emerging space tourism sector, offering a few minutes of weightlessness in “suborbital” space.
Virgin Galactic tickets have sold for between $200,000 and $450,000, while Blue Origin does not disclose its ticket prices publicly.
Meta putting AI in smart glasses, assistants and more
Meta chief Mark Zuckerberg on Wednesday said the tech giant is putting artificial intelligence into digital assistants and smart glasses as it seeks to make up lost ground in the AI race.
Zuckerberg made his announcements at the Connect developers conference at Meta’s headquarters in Silicon Valley, the company’s main annual product event.
“Advances in AI allow us to create different (applications) and personas that help us accomplish different things,” Zuckerberg said as he kicked off the gathering.
“And smart glasses are going to eventually allow us to bring all of this together into a stylish form factor that we can wear.”
Smart glasses are one of the many ways that tech companies have tried to move beyond the smartphone as a user-friendly device, but so far with little success.
The second-generation Meta Ray-Ban smart glasses made in a partnership with EssilorLuxottica will have a starting price of $299 when they hit the market on October 17.
The smart glasses also add the ability for users to stream what they are seeing in real time, Zuckerberg said.
“Smart glasses are the ideal form factor for you to let AI assistants see what you’re seeing and hear what you’re hearing.”
Meta also introduced 28 “AIs” that people can message on WhatsApp, Messenger, and Instagram with “personalities” based on celebrities including Snoop Dogg, Paris Hilton and YouTube star MrBeast.
Zuckerberg demonstrated an interaction with one such AI from the stage in a text-based chat, promising that the new bots would soon be voiced.
“This is our first effort at training a bunch of AI that are a bit more fun,” Zuckerberg said.
“But look, this is early stuff and these still have a lot of limitations, which you will see when you use them.”
The event was the first in-person edition of Connect since 2019, before the pandemic, and announcements on generative AI were widely expected.
Meta has taken a much more cautious approach than its rivals Microsoft, OpenAI and Google in pushing out AI products, prioritizing small steps and making its in-house models available to developers and researchers.
– ‘Best value’ –
Meta also unveiled the latest version of its Quest virtual reality headset with richer graphics, improved audio, and the ability for a wearer to see what is around them without taking the gear off, a demonstration for AFP showed.
“This is going to be a big game changer and a big capacity improvement for these headsets,” Zuckerberg told developers gathered in a Meta headquarters courtyard.
Quest 3 headsets were priced starting at $499 and will begin shipping on October 10, according to Meta.
This is substantially cheaper than Apple’s Vision Pro, which will cost a hefty $3,499 when it becomes available early next year, in the United States only.
The Quest 3 “is going to be the best value on the market for a long time to come”, said Meta Chief Technology Officer Andrew Bosworth, to laughter from the audience.
New game titles for Quest 3 included “Assassin’s Creed Nexus” from Ubisoft as well as a Roblox game.
“Meta is trying to bring a much upgraded version of (mixed-reality) to the masses,” said Insider Intelligence principal analyst Yory Wurmser.