
Business

Think you’re immune to cybercrime because you’re young and tech savvy? Think again


Using government data, Twingate has compiled surprising facts about how different age groups fall victim to fraud and cybercrime.

While generalizations are rarely true, there is one that holds up pretty well: People tend to believe (and take comfort in the idea) that different kinds of crime could never happen to them—notably cybercrime. They’re too smart, too careful, and too tech savvy.

But, of course, the truth is more complicated than that.

A 2021 study by the Federal Trade Commission found that fewer than 5% of mass-market consumer fraud victims report their experiences to either the Better Business Bureau or a government agency. The study also described notable variation in how inclined victims of different forms of fraud are to report the malfeasance at all.

For example, while 58% of people duped into purchasing a product or service that was never delivered registered a complaint to the vendor, less than 20% of victims of fraudulent credit card insurance or computer repair logged complaints. And overall, only 12% of victims of any form of digital fraud complained to their credit card company, bank, or other financial service provider, despite the protections such institutions provide their clientele.

One could speculate that embarrassment keeps many people from seeking justice, or perhaps they assume filing a complaint won’t get them anywhere. Age is most certainly a factor. Older Americans lose more money overall from cyberscams than younger age groups, though those younger age groups experience a higher total volume of cybercrimes—meaning that while it costs older folks more cash, there are more young victims than old.

While sorting fraud victims by age is reductive, especially for cybercrime, there are nonetheless real (and quite alarming) variations in cybercrime that become visible when the issue is viewed through the lens of age. Twingate collected and analyzed information from the FBI’s Internet Crime Complaint Center and the Federal Trade Commission’s Consumer Sentinel to understand how online crime differed between age groups in 2021.

The FBI receives an average of 2,300 complaints per day about online crime, and the bureau estimates there was almost $7 billion lost to it in 2021 alone. No small potatoes. How it breaks down among the population’s generations provides key insights into how cybercrime affects every American.

Detailed closeup of a list of spam emails.

Kaspri // Shutterstock

People under age 50 lost around $2.7 billion to internet scams in 2021

– Under 20 years old: $101 million
– 20-29 years old: $431 million
– 30-39 years old: $937 million
– 40-49 years old: $1.2 billion
– 50-59 years old: $1.3 billion
– 60+ years old: $1.7 billion

It’s true that the older you get, the more dollars your age group has been scammed out of. When you consider the history of the digital world, it isn’t until the younger end of the 40-49 range that you begin to see people who grew up with the internet as an integral part of their lives from an early age.

It is therefore not terribly surprising that those over 40 have suffered the greatest monetary losses to cybercrime. The losses shown here rise with near-uniformity until the over-40 age ranges, where they climb into the billions. This raises the question of precisely how cyberthieves are targeting older age groups.

According to the FBI’s 2021 Internet Crime Report, confidence fraud (also known as romance scams), tech support fraud, phishing, and personal data theft are all high on the list of the most common forms of cybercrime.

Unpaid parcel shipping fees scam text

mundissima // Shutterstock

People in younger age groups are scammed at higher rates than those over 60

– Under 20 years old: 182 reports per million people
– 20-29 years old: 1,580 per million
– 30-39 years old: 1,948 per million
– 40-49 years old: 2,181 per million
– 50-59 years old: 1,753 per million
– 60+ years old: 1,198 per million

While younger people are scammed for less cash each time they are targeted, they are nonetheless scammed more frequently. This makes intuitive sense, too.

High-profile, high-cost scams like romance scams and predatory telemarketing scams are more likely to affect older people, while it’s easy to imagine younger people buying, for example, counterfeit sneakers—a lousy circumstance, but one that might cost just $300 instead of $30,000.

A young woman looking at smartphone frustrated.

fizkes // Shutterstock

41% of people in their 20s reported losing money to fraud

– Compared to 18% of people ages 70-79

This number is staggering—it means that 2 in 5 people in their 20s have lost money to fraud. That’s more than 18 million victims nationwide. But it makes more sense when you consider the full scope of things that count as cybercrimes.
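As a rough sanity check, the 18 million figure follows directly from the fraud rate, assuming roughly 45 million U.S. residents in their 20s (an approximate Census figure not stated in the article):

```python
# Back-of-envelope check of the "more than 18 million victims" figure.
# Assumes ~45 million U.S. residents aged 20-29 (approximate Census estimate).
population_20s = 45_000_000
share_reporting_fraud_loss = 0.41  # 41% reported losing money to fraud

victims = population_20s * share_reporting_fraud_loss
print(f"{victims:,.0f}")  # prints 18,450,000
```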

Older people may be more likely to lose more money in one fell swoop, but younger people are surrounded by opportunities to log into new websites and buy from new advertisers, both of which give legitimate-looking but fraudulent sites openings to steal personal data or financial information.

Man installing software in laptop in dark at night.

Tero Vesalainen // Shutterstock

People in their 40s were the fastest-growing segment of online crime victims

– 44,878 reports in 2017 vs 89,184 in 2021 (99% increase)

Between 2017 and 2021, cybercrimes against people in their 40s nearly doubled, increasing by 99%. That this makes them the fastest-growing cybercrime demographic is not all that surprising.

Consider that this age range grew up with an older version of the internet and therefore may overestimate their ability to stay safe online as technology evolves away from familiar modes and methods, most notably in how payment transactions take place and in the perceived security of bank and credit card information.
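Given the raw report counts, the percentage increase works out as follows:

```python
# Percentage increase in cybercrime reports from people in their 40s, 2017-2021,
# using the FBI report counts cited above.
reports_2017 = 44_878
reports_2021 = 89_184

pct_increase = (reports_2021 - reports_2017) / reports_2017 * 100
print(f"{pct_increase:.1f}%")  # prints 98.7%, i.e. reports nearly doubled
```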

Close up female hands holding credit card and smartphone.

fizkes // Shutterstock

Adults under 40 were more than twice as likely to be the victims of social media scams

– Social media was the most profitable method for scammers, with about $770 million in losses in 2021

The $770 million in cybercrime losses over social media in 2021, per the FTC, represents more than 13% of the total amount all age groups were scammed out of that year. People under 40 are by far the largest group on every major social media site, so it stands to reason that they are more than twice as likely to be the victims of social media-related scams. What’s interesting about the dollar amount is that it isn’t larger, given that social media was the most profitable channel for cybercrime. Common forms of social media scams include clickbait and impersonation scams, sweepstakes or lottery scams, and various money-making or “get rich quick” schemes.

This story originally appeared on Twingate and was produced and
distributed in partnership with Stacker Studio.



AI “superusers” seek education, fun, and productivity with generative AI

A look at two separate studies by SparkToro and Salesforce on people’s generative AI use.


Maybe it was through your job. Or simply out of curiosity.

With the rise of generative AI, you’ve probably tried out ChatGPT or a similar tool. But how often are people using these? More interestingly, what motivates them? Both Salesforce and SparkToro sought to find out with two separate studies. 

Here are highlights from each report and how they compare:

Work automation and educational pursuits top priorities for AI users

Both Salesforce and SparkToro agree on this. SparkToro described professional use of ChatGPT as at an “all-time high,” then ranked categories of interest across more than 4,000 ChatGPT prompts, with these in the top five:

  • Programming: 29.14%
  • Education: 23.30%
  • Content: 20.79%
  • Sales and Marketing: 13.47%
  • Personal & Other: 6.73%

Salesforce found that 75% of generative AI users are motivated by streamlined work communications and task automation. The second-highest motivation? “Messing around” (38%), with learning and education a close third (34%). Both SparkToro and Salesforce posit that education doesn’t just mean homework or university coursework: users also turn to tools like ChatGPT to build knowledge of other topics they want to learn.

Younger generations more likely to use AI than older ones despite general decline in usage

Salesforce surveyed 4,000 people to find out how they use generative AI and what their demographics are. Turns out, most “superusers” — aka those who use the tool every day — are Millennials or Gen Zers (65%). Plus, 70% of the Gen Z participants surveyed said they use generative AI. 

Still, SparkToro notes an overall decline in generative AI use regardless of age. After studying monthly traffic data on OpenAI provided by Datos, SparkToro found overall traffic fell by nearly 30%. 

Users ask ChatGPT to write, create, and list

These were the three most common words in SparkToro’s assessment of ChatGPT prompts. The researchers also note a notable prevalence of the words “game” and “SEO” in prompts. Other terms that came up less often but still registered in the results included “judge,” “SaaS pricing,” “curriculum,” “employment,” and “employer.”

Read the full SparkToro and Salesforce reports for more detail.



Trends in AI ethics before and after ChatGPT


Artificial intelligence is disrupting the world, including by creating ethical dilemmas. Magnifi, an AI investing platform, analyzed complaints collected by AIAAIC to see how AI concerns have grown over the last decade.

Computational systems demonstrating logic, reasoning, and understanding of verbal, written, and visual inputs have been around for decades. But development has sped up in recent years with work on so-called generative AI by companies such as OpenAI, Google, and Microsoft.

When OpenAI announced the launch of its generative AI chatbot ChatGPT in 2022, the system quickly gained more than 100 million users, earning it the fastest adoption rate of any piece of computer software in history.

With the rise of AI, many are embracing the technology’s possibilities for facilitating decision-making, speeding up information gathering, reducing human error in repetitive tasks, and enabling 24-7 availability for various tasks. But ethical concerns are also growing. Private companies are behind much of the development of AI, and for competitive reasons, they’re opaque about the algorithms they use in developing these tools. The systems make decisions based on the data they’re fed, but where that data comes from isn’t necessarily shared with the public.

Users don’t always know if they’re using AI-based products, nor if their personal information is being used to train AI tools. Some worry that data could be biased and lead to discrimination, disinformation, and, in the case of AI-based software in automobiles and other machinery, accidents and deaths.

The federal government is on its way to establishing regulatory powers to oversee AI development in the U.S. to help address these concerns. The National AI Advisory Committee recommends companies and government agencies create Chief Responsible AI Officer roles, whose occupants would be encouraged to enforce a so-called AI Bill of Rights. The committee, established through a 2020 law, also recommended embedding AI-focused leadership in every government agency.

In the meantime, an independent organization called AIAAIC has taken up the torch in making AI-related issues more transparent. Magnifi, an AI investing platform, analyzed ethics complaints collected by AIAAIC regarding artificial intelligence dating back to 2012 to see how concerns about AI have grown over the last decade. Complaints originate from media reports and submissions reviewed by the AIAAIC.


Hands using laptop with AI renderings.

SomYuZu // Shutterstock

A significant chunk of the public struggles to understand AI and fears its implications

Many consumers are aware when they’re interacting with AI-powered technology, such as when they ask a chatbot questions or get shopping recommendations based on past purchases. However, they’re less aware of how widespread these technologies have become.

When Pew Research Center surveyed Americans in December 2022 and asked whether they knew about six specific examples of how AI is used, only 3 in 10 adults knew all of them. These examples include AI organizing email inboxes, powering wearable fitness trackers, and enabling security cameras to recognize faces. This low awareness of how AI manifests in daily life shapes Americans’ attitudes toward the technology. Pew found that 38% of Americans are more concerned than excited about the increased use of AI.

Ethics concerns about artificial intelligence use by companies and governments more than doubled from 2019 to 2020, according to data collected by independent watchdog group AIAAIC.

Magnifi

As AI works its way into consumer tech, concerns grow to a fever pitch

Concerns about AI initially focused on social media companies and their algorithms—like the 2014 Facebook study in which the company’s researchers manipulated 700,000 users’ feeds without their knowledge, or algorithms spreading disinformation and propaganda during the 2020 presidential election.

The viral adoption of ChatGPT and multimedia creation tools in the last year has fueled concerns about AI’s effects on society, particularly around plagiarism, racism, sexism, bias, and the proliferation of inaccurate data.

In September 2022, an AIAAIC complaint against Upstart, a consumer lending company that used AI, cited racial discrimination in determining loan recipients. Other complaints focus on a lack of ethics used in training AI tools.

In June 2023, Adobe users and contributors filed an AIAAIC complaint about Adobe’s Firefly AI art generator, saying the company was unethical when it failed to inform them it used their images to train Firefly.

A chart showing government as having the most complaints concerning AI, according to data kept by the AIAAIC. Tech and media follow.

Magnifi

Government, technology, and media emerge as leading industries of concern

While the AIAAIC data set is imperfect and subjective, it’s among the few sources tracking ethical concerns with AI tools. Many of the government agencies that have embraced AI—particularly law enforcement—have found themselves on the receiving end of public complaints. Examples include facial recognition technology causing wrongful arrests in Louisiana and a quickly scrapped 2022 San Francisco Police Department policy that would have allowed remote-controlled robots to kill suspects.

Not surprisingly, many citizens and organizations have concerns about technology companies’ use of AI, particularly with the rise of chatbots. Some complaints involving ChatGPT and Google Bard center on plagiarism and inaccurate information, which can reflect poorly on individuals and companies and spread misinformation.

The automotive industry is another sector where major players like Tesla leverage AI in their sprint toward autonomous vehicles. Tesla’s Autopilot software is the subject of much scrutiny, with the National Highway Traffic Safety Administration reporting the software has been connected with 736 crashes and 17 fatalities since 2019.

Health care professional analyzing tablet and AI concepts.

Chinnapong // Shutterstock

The optimistic case for AI’s future is rooted in the potential for scientific, medical, and educational advancements

As the federal government works toward legislation that establishes clearer regulatory powers to oversee AI development in the U.S. and ensure accountability, many industries ranging from agriculture and manufacturing to banking and marketing are poised to see major transformations.

The health care sector is one field gaining attention for how AI may significantly improve health outcomes and advance human society. The 2022 release of a technology that can predict protein shapes is helping medical researchers better understand diseases, for example. AI can help pharmaceutical companies create new medications faster and more cheaply through more rapid data analysis in the search for potential new drug molecules.

AI has the potential to benefit millions of patients as it fuels the expansion of telemedicine: it could help expand access to health care; assist with diagnosis, treatment, and management of chronic conditions; and help more people age at home, potentially at lower cost.

Scientists see potential for creating new understandings by leveraging AI’s ability to crunch data and speed up scientific discovery. One example is Earth-2, a project that uses an AI weather prediction tool to forecast extreme weather events better and help people better prepare for them. Even in education, experts believe AI tools could improve learning accessibility to underserved communities and help develop more personalized learning experiences.

In the financial sector, experts say AI raises considerable ethical concerns. Gary Gensler, head of the U.S. Securities and Exchange Commission, told the New York Times that herding behavior (everyone relying on the same information), faulty advice, and conflicts of interest could spell economic disaster if not preempted. “You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor,” Gensler said in the interview. To address those concerns, the SEC put forward a proposal that would regulate platforms’ use of AI, prohibiting them from putting their business needs before their customers’ best interests.

Story editing by Jeff Inglis. Copy editing by Kristen Wegrzyn.

This story originally appeared on Magnifi and was produced and
distributed in partnership with Stacker Studio.



1 in 5 companies founded in 2021 closed within the year—a story all too familiar in the US


PlanPros investigated what it takes for a business to make it through its first year—a milestone that 1 in 5 companies don't achieve.

Whether a startup is successful in its first year depends on a variety of factors—from industry type and location to funding and money management strategies. PlanPros investigated what it takes for a business to make it through its first year—a milestone that 1 in 5 companies don’t achieve.

Entrepreneurship is a core tenet of American culture. As many as 55% of Americans have started at least one business in their lifetime, according to a 2019 survey by the Global Entrepreneurship Monitor consortium at Babson College. In fact, there are more than 33 million small businesses—which have fewer than 500 employees—in operation today, according to estimates from the Small Business Administration. However, the Bureau of Labor Statistics reports that since 1994, about 20% of new businesses have not survived their first year.

The success of a small business affects more than just the business owners’ livelihood. According to the SBA Small Business Facts Report, small businesses are responsible for 2 in 3 jobs created in the past 25 years. Additionally, the SBA estimates that small businesses are responsible for about 44% of all economic activity in the United States.

Market research

According to a 2022 Skynova survey of 492 startup founders, 58% said they wished they had done more market research before starting their business. Put simply, market research involves evaluating how likely a product or service is to be received well by its intended customers.

Where a startup is based can have a significant effect on its finances. Business taxes vary across states, as does the availability of various government grant and loan programs designed to aid small businesses. Residents’ purchasing power also varies geographically. The first-year failure rate for small businesses by state ranged from 18.2% to 36.6% in 2019, the most recent data available—California had the lowest first-year failure rate, while Washington-based startups faced the highest.

Startups can face certain advantages and disadvantages depending on the nature of their industry as well. According to the Small Business Funding lending agency, small businesses in the health care industry have the highest chance of surviving to at least their fifth year at 60%. Conversely, small businesses in the transportation industry have the lowest chance of surviving through their fifth year at 30%.

Funding and well-managed cash flow

The primary reason new businesses fail is a lack of cash or of available financial support in its absence, according to the aforementioned Skynova report. In 2022, 47% of startup failures were attributed to a lack of financing or investors, while running out of money contributed to 44% of failures that year. A 2019 SBA-funded study of 1,000 startup small business owners attributes 82% of startup failures to cash flow problems and mismanagement. These data underscore the importance of adhering to a strict budget and limiting expenses as much as possible in the first year.

It is also important to identify potential sources of funding or support ahead of any immediate need; doing so can help a business avoid unsustainable growth. Many government programs exist to help startups survive, including state and federal grants, some of which are designated for certain demographics and industries.

Even after a business is fairly well established, it is important to monitor cash flow closely. Businesses need to survive well beyond just the first year. According to data from the Bureau of Labor Statistics, roughly half of small businesses fail within five years. After 15 years, about 3 in 4 small businesses will have failed.

But the end of a company is not necessarily the end of entrepreneurship for every small business owner. A study by University of Michigan and Stanford economists suggests that business owners who start a second business after their first failure are more likely to succeed on the second attempt.

Story editing by Jeff Inglis. Copy editing by Tim Bruns.

