Business

Japan 1st-quarter GDP shrank as Omicron wave hit

Japan's economy shrank in the first quarter of the year as Covid restrictions were imposed

Japan’s economy shrank slightly in the first quarter of 2022, official data showed Wednesday, hit by Covid-19 restrictions and higher prices.

The world’s third-largest economy shrank 0.2 percent quarter-on-quarter in the January-March period, a smaller contraction than the 0.4 percent decline markets had expected.

It followed a modest rebound in the final three months of 2021 that proved short-lived after Japan put Covid restrictions in place as an outbreak fuelled by the Omicron coronavirus variant took hold in January.

Growth was also hit by the rising cost of imports with energy prices surging and the yen falling to its lowest level against the dollar in 20 years.

Economists expect the economy to recover in the April-June quarter now that virus restrictions have been lifted, but caution that risks remain.

“We see three headwinds to this expected recovery,” said UBS economists Masamichi Adachi and Go Kurihara in a note ahead of the GDP data release.

“First is a rise in food and energy prices. Second is a drag from the lockdown in China,” and third is the risk of a potential resurgence in virus infections, they said.

Others point to ongoing uncertainties linked to “tensions in international relations and military conflicts”, according to a survey among economists conducted by the Japan Center for Economic Research.

During the current earnings season, major Japanese firms such as Sony and Nissan have offered cautious forecasts because of the uncertainty, particularly over supply chain disruption and the effect of Covid lockdowns in China.

Wednesday’s data showed the economy’s rebound in the last quarter of 2021 was 0.9 percent, slightly weaker than an initial estimate of 1.1 percent growth.

– Rising prices –

Japan is battling a series of economic headwinds linked to the pandemic and Russia’s invasion of Ukraine, which has sent energy costs soaring.

The yen has also slumped against the dollar, with a widening gap between Japan’s ultra-loose monetary policy and tightening in the United States as the Federal Reserve attempts to combat inflation.

Rising energy prices and other hikes are squeezing Japanese consumers and businesses, with household spending dipping 2.3 percent in March from a year earlier.

Analysts have warned that the pace of nominal wage increases in Japan is unlikely to track rising prices, dampening spending appetites.

Last month, the government unveiled a 6.2 trillion yen (around $48 billion) economic package that included handouts for low-income families to help cushion the impact of rising prices and energy costs.

Looking ahead, “net trade will boost growth over the coming months as supply shortages ease and the weak yen boosts exports and softens demand for imports,” Tom Learmouth, Capital Economics economist, said in a note.

“With coronavirus cases continuing to fall and nearly 60 percent of the population triple-jabbed, another round of restrictions looks unlikely for now.”

“However, we expect GDP growth to disappoint across 2022 due to the hit to household income from higher inflation and signs that elderly consumers remain wary of catching the virus,” he added.

Japan has seen a smaller Covid outbreak than many countries, although cases surged because of the highly transmissible Omicron variant.

The country has recorded around 30,050 deaths despite avoiding harsh lockdowns.


Business

Robots are starting to deliver takeout orders. Are they here to stay?


Task Group analyzed the state of autonomous delivery systems, both nationally and internationally, to see how far along this technology has come.  

In a March 2023 Deloitte survey, 47% of Americans said they would order from a restaurant that delivers food with a drone or an autonomous vehicle. That’s up 3 percentage points from the company’s 2021 survey about restaurant trends.

In that first survey, researchers noted there was “massive uncertainty in the industry, and many worried that restaurant patronage might never recover” from the COVID-19 pandemic. It found that two-thirds of consumers believed they would not immediately return to their pre-pandemic restaurant habits.

In 2023, most restaurant customer behavior is back to normal—though some changes have blended into the industry’s practices. Task Group analyzed the state of autonomous delivery systems, both nationally and internationally, to measure the progress of this technology post-pandemic.

As with other industries, technology has helped restaurants maximize efficiency and improve customer satisfaction. Business owners learned new service methods, marketing strategies, and technical terminology. Food delivery skyrocketed during lockdowns, driving greater strides in restaurant efficiency and, in some cases, profits. Many restaurant owners paired apps that let customers order without talking to a human with state-of-the-art delivery systems that don’t require a driver.

Restaurants and transportation companies in North America and Europe are experimenting with new automated delivery techniques that can reduce their costs as long as they do not compromise customer satisfaction. And consumers are ready—but how soon will it become standard practice?


Food delivery robots on pathway.

Julija Sh // Shutterstock

What are drones and sidewalk delivery vehicles?

The robots most commonly used in the food delivery industry are aerial drones and wheeled autonomous delivery vehicles that travel along sidewalks to reach customers.

Drones are classified by how they generate lift—with fixed wings, rotors, or a combination—by how they’re used, such as food delivery, and what equipment they have on board, including batteries and cameras.

In the U.S., the Federal Aviation Administration regulates drone use. The agency requires drone pilots to be certified and restricts flights near airports.

For many years, the FAA stood in the way of companies seeking to use drones for deliveries, but in 2019, the agency agreed to allow uncrewed delivery flights beyond the pilot’s line-of-sight by UPS and Wing Aviation, owned and operated by Google’s mothership Alphabet. Since then, the agency has approved drone delivery operations for several companies, including Amazon and Walmart.

According to a study published by the Harvard Kennedy School in 2022, autonomous delivery vehicles are not the future. They’re already here. Self-driving machines about the size of a large cooler are already traveling down our sidewalks and crosswalks to deliver various packages.

Policymakers question how these vehicles will work and interact with people and other vehicles in already congested and chaotic urban environments. The Harvard researchers believe these vehicles “offer the promise of less congestion and greener shipments,” but also “raise concerns about safety and use of road and sidewalk infrastructure.”

While the debate continues, the manufacturers of these robots continue to advance their technology, including using machine learning to improve navigation, efficiency, and safety.

Food delivery drone in flight.

Canva

How far off are drone or sidewalk deliveries?

Estonia-based delivery startup Bolt, working with Starship Technologies, has been trialing sidewalk deliveries in Estonia, the U.K., and the U.S. and plans to formally launch robot deliveries later this year in as many as 500 cities in 45 countries.

Bolt’s main competitor, Uber, signed a deal in 2022 with autonomous vehicle startup Nuro “to test driverless food deliveries” in Mountain View, California, and Houston, Texas. Before the agreement, Uber ran a pilot program for sidewalk delivery in Los Angeles, while Nuro delivered Domino’s pizzas in specific areas of Houston for a year. 

Story editing by Jeff Inglis. Copy editing by Kristen Wegrzyn.

This story originally appeared on Task Group and was produced and
distributed in partnership with Stacker Studio.


Business

AI “superusers” seek education, fun, and productivity with generative AI

A look at two separate studies by SparkToro and Salesforce on people’s generative AI use.


Maybe it was through your job. Or simply out of curiosity.

With the rise of generative AI, you’ve probably tried out ChatGPT or a similar tool. But how often are people using these? More interestingly, what motivates them? Both Salesforce and SparkToro sought to find out with two separate studies. 

Here are highlights from each report and how they compare:

Work automation and educational pursuits top priorities for AI users

Both Salesforce and SparkToro agree on this. SparkToro highlighted that professional use of the platform is at an “all-time high,” then ranked categories of interest across more than 4,000 ChatGPT prompts, with these in the top five:

  • Programming: 29.14%
  • Education: 23.30%
  • Content: 20.79%
  • Sales and Marketing: 13.47%
  • Personal & Other: 6.73%

Salesforce found that 75% of generative AI users are motivated by streamlined work communications and task automation. The second-highest motivation? “Messing around” (38%), with learning and education a close third (34%). Both SparkToro and Salesforce posit that education doesn’t just mean homework or university coursework—users also turn to tools like ChatGPT to build knowledge of other topics they want to learn about.

Younger generations more likely to use AI than older ones despite general decline in usage

Salesforce surveyed 4,000 people to find out how they use generative AI and what their demographics are. Turns out, most “superusers” — aka those who use the tool every day — are Millennials or Gen Zers (65%). Plus, 70% of the Gen Z participants surveyed said they use generative AI. 

Still, SparkToro notes an overall decline in generative AI use regardless of age. After studying monthly traffic data on OpenAI provided by Datos, SparkToro found overall traffic fell by nearly 30%. 

Users ask ChatGPT to write, create, and list

“Write,” “create,” and “list” were the three most common words in SparkToro’s assessment of ChatGPT prompts. The analysis also noted a notable prevalence of the words “game” and “SEO.” Other terms that appeared less frequently but still surfaced in the results included “judge,” “SaaS pricing,” “curriculum,” “employment,” and “employer.”

Read the SparkToro report here and the Salesforce report here.


Business

Trends in AI ethics before and after ChatGPT


Artificial intelligence is disrupting the world, including by creating ethical dilemmas. Magnifi, an AI investing platform, analyzed complaints collected by AIAAIC to see how AI concerns have grown over the last decade.

Computational systems demonstrating logic, reasoning, and understanding of verbal, written, and visual inputs have been around for decades. But development has sped up in recent years with work on so-called generative AI by companies such as OpenAI, Google, and Microsoft.

When OpenAI announced the launch of its generative AI chatbot ChatGPT in 2022, the system quickly gained more than 100 million users, earning it the fastest adoption rate of any piece of computer software in history.

With the rise of AI, many are embracing the technology’s possibilities for facilitating decision-making, speeding up information gathering, reducing human error in repetitive tasks, and enabling 24-7 availability for various tasks. But ethical concerns are also growing. Private companies are behind much of the development of AI, and for competitive reasons, they’re opaque about the algorithms they use in developing these tools. The systems make decisions based on the data they’re fed, but where that data comes from isn’t necessarily shared with the public.

Users don’t always know if they’re using AI-based products, nor whether their personal information is being used to train AI tools. Some worry that the data could be biased and lead to discrimination, disinformation, and, in the case of AI-based software in automobiles and other machinery, accidents and deaths.

The federal government is on its way to establishing regulatory powers to oversee AI development in the U.S. to help address these concerns. The National AI Advisory Committee recommends companies and government agencies create Chief Responsible AI Officer roles, whose occupants would be encouraged to enforce a so-called AI Bill of Rights. The committee, established through a 2020 law, also recommended embedding AI-focused leadership in every government agency.

In the meantime, an independent organization called AIAAIC has taken up the torch in making AI-related issues more transparent. Magnifi, an AI investing platform, analyzed ethics complaints collected by AIAAIC regarding artificial intelligence dating back to 2012 to see how concerns about AI have grown over the last decade. Complaints originate from media reports and submissions reviewed by the AIAAIC.


Hands using laptop with AI renderings.

SomYuZu // Shutterstock

A significant chunk of the public struggles to understand AI and fears its implications

Many consumers are aware when they’re interacting with AI-powered technology, such as when they ask a chatbot questions or get shopping recommendations based on past purchases. However, they’re less aware of how widespread these technologies have become.

When Pew Research surveyed Americans in December 2022 and asked whether they were aware of six specific examples of how AI is used, only 3 in 10 adults knew all of them. The examples included how AI helps email services organize inboxes, how wearable fitness trackers use AI, and how security cameras can recognize faces. This limited understanding of how AI manifests in daily life contributes to Americans’ attitudes toward the technology: Pew found that 38% of Americans are more concerned than excited about the growing use of AI.

Ethics concerns about artificial intelligence use by companies and governments more than doubled from 2019 to 2020, according to data collected by the independent watchdog group AIAAIC.

Magnifi

As AI works its way into consumer tech, concerns grow to a fever pitch

Concerns about AI initially focused on social media companies and their algorithms—like the 2014 Facebook study in which the company’s researchers manipulated 700,000 users’ feeds without their knowledge, or the algorithms that spread disinformation and propaganda during the 2020 presidential election.

The viral adoption of ChatGPT and multimedia creation tools over the last year has fueled concerns about AI’s effects on society, particularly its potential to increase plagiarism, racism, sexism, bias, and the spread of inaccurate data.

In September 2022, an AIAAIC complaint against Upstart, a consumer lending company that used AI, alleged racial discrimination in how the company determined loan recipients. Other complaints focus on a lack of ethics in how AI tools are trained.

In June 2023, Adobe users and contributors filed an AIAAIC complaint about Adobe’s Firefly AI art generator, saying the company was unethical when it failed to inform them it used their images to train Firefly.

A chart showing government as having the most complaints concerning AI, according to data kept by the AIAAIC. Tech and media follow.

Magnifi

Government, technology, and media emerge as leading industries of concern

While the AIAAIC data set is imperfect and subjective, it’s among the few sources that track ethical concerns about AI tools. Many of the government agencies that have embraced AI—particularly law enforcement—have found themselves on the receiving end of public complaints. Examples include facial recognition technology causing wrongful arrests in Louisiana and a quickly scrapped 2022 San Francisco Police Department policy that would have allowed remote-controlled robots to kill suspects.

Not surprisingly, many citizens and organizations have concerns about technology companies’ use of AI amid the rise of chatbots. Some complaints involving ChatGPT and Google Bard center on plagiarism and inaccurate information, which can reflect poorly on individuals and companies and spread misinformation.

The automotive industry is another sector where major players like Tesla leverage AI in their sprint toward autonomous vehicles. Tesla’s Autopilot software is the subject of much scrutiny, with the National Highway Traffic Safety Administration reporting the software has been connected with 736 crashes and 17 fatalities since 2019.

Health care professional analyzing tablet and AI concepts.

Chinnapong // Shutterstock

The optimistic case for AI’s future is rooted in the potential for scientific, medical, and educational advancements

As the federal government works toward legislation that establishes clearer regulatory powers to oversee AI development in the U.S. and ensure accountability, many industries ranging from agriculture and manufacturing to banking and marketing are poised to see major transformations.

The health care sector is one field gaining attention for how AI may significantly improve health outcomes and advance human society. The 2022 release of a technology that can predict protein shapes, for example, is helping medical researchers better understand diseases. AI can also help pharmaceutical companies create new medications faster and more cheaply by speeding up data analysis in the search for potential new drug molecules.

AI has the potential to benefit the lives of millions of patients as it fuels the expansion of telemedicine. It could help expand access to health care; assist with the diagnosis, treatment, and management of chronic conditions; and help more people age at home, all while potentially lowering costs.

Scientists also see potential for new insights by leveraging AI’s ability to crunch data and speed up scientific discovery. One example is Earth-2, a project that uses an AI weather prediction tool to better forecast extreme weather events and help people prepare for them. Even in education, experts believe AI tools could improve access to learning for underserved communities and help develop more personalized learning experiences.

In the financial sector, experts say AI raises considerable ethical concerns. Gary Gensler, the head of the US Securities and Exchange Commission, told the New York Times that herding behavior (everyone relying on the same information), faulty advice, and conflicts of interest could spell economic disaster if not preempted. “You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor,” Gensler said. To address those concerns, the SEC put forward a proposal that would regulate platforms’ use of AI, prohibiting them from putting their business needs before their customers’ best interests.

Story editing by Jeff Inglis. Copy editing by Kristen Wegrzyn.

This story originally appeared on Magnifi and was produced and
distributed in partnership with Stacker Studio.
