Privacy law violations: who investigates and what are the consequences?
Eighty-five percent of American adults say they go online daily—and 31% say they’re online constantly—which is likely no surprise considering how much of our modern lives have become tethered to the internet. It’s not only the hours we spend scrolling through our social media feeds, checking email, and streaming music playlists. Many of the businesses and services we use to send money, sign documents, view bills, schedule doctor appointments, or check our bank statements store our information digitally long after we’ve logged off. To protect all the countless pieces of our digital lives stored online, on the cloud, and on computer servers, privacy laws are critical to deterring theft and safeguarding our confidential information.
To learn about the different privacy laws in the U.S., including what types of privacy they protect, who enforces them, and what the consequences of violating them are, TripleBlind compiled a list of federal privacy laws and investigated who enforces them using a variety of government and academic sources.
When it comes to how the U.S. manages privacy, its management processes are very siloed—especially compared to how Europe protects privacy, for example. The European Union allows for the free flow of information among member nations under the General Data Protection Regulation, an umbrella law that governs nearly every form of personal data and sets strict requirements for the protection of all processing and personal data. The U.S. protects particular data types under specific circumstances, as reflected in most privacy acts passed.
The U.S. Constitution does not specify any provisions for privacy protection. Still, several constitutional amendments have been interpreted in legal decisions as bearing weight on various forms of privacy, including the Third Amendment’s protection of the privacy of one’s home and the Fifth Amendment’s protection against self-incrimination, which also extends to the security of private information.
Privacy legislation is extensive in terms of both what it addresses and what it leaves out. It can be challenging to understand precisely what type of privacy each act protects, which government entity or entities investigate violations of it, and what the consequences of violating it look like.
Read on for a breakdown of privacy law and consequences for violations in the U.S.
Casper1774 Studio // Shutterstock
Fair Credit Reporting Act of 1970
One of the earliest federal privacy laws to be passed, the Fair Credit Reporting Act of 1970 protects personal financial information collected by credit agencies, tenant screening services, and medical information companies. In essence, it guarantees the privacy and accuracy of the information in consumer credit bureau files and empowers action in the event of inaccuracies.
The Federal Trade Commission is the government entity that enforces the FCRA, though the Consumer Financial Protection Bureau is primarily responsible for rulemaking. Violations can come in many forms, including inaccurate debt reporting, failing to send poor credit rating notifications, disseminating credit reporting information without consent, and failing to provide a satisfactory process to prevent identity theft.
Such violations can result in various damages awarded, plus court costs and attorney’s fees. Actual damages are those that result from a proven failure to act, or from an action by an individual, business, or agency that causes harm; they are case-specific and thus have no limit. Statutory damages don’t require supporting evidence and range from $100 to $1,000. Punitive damages are awarded as punishment against an individual, business, or agency found in violation of the FCRA and are meant to deter the guilty party from further wrongdoing. Which damages are available depends on whether a violation was willful or merely negligent: negligent violations support actual damages, while willful violations can additionally bring statutory and punitive damages.
Nirat.pix // Shutterstock
Privacy Act of 1974
The Privacy Act of 1974 prevents federal agencies from disclosing personal information they collect or control without authorization. The act also requires that federal agencies publicly disclose their systems of records in the Federal Register, the U.S. government’s official record. The act was passed in response to concerns over how the creation and use of computerized databases affected personal privacy; it is important to note, however, that it applies only to federal agencies, not to state or local agencies.
Many agencies share the duty of enforcing this act due to its range of protections, but the director of the Office of Management and Budget has the power to create guidelines for how agencies should follow the act. Penalties differ for violations of specific sections of the act and can be civil or criminal in nature, or both. In civil court, an individual can sue to have a record amended should an agency refuse to do so—and the individual can also have reasonable litigation costs paid by the government if the court so rules. An individual can also sue to have records produced in civil court. Should a court find that any agency has committed a violation intentionally or willfully, the court can award actual damages to the individual and the individual’s reasonable attorney fees.
Suppose a government agency employee or officer willfully and knowingly discloses personally identifiable information or deliberately maintains a records system without disclosing relevant details or even the system’s existence. In that case, they can be fined up to $5,000 and cited for a misdemeanor. Moreover, the same misdemeanor penalty can apply to anyone who willfully and knowingly requests the record of an individual from an agency under false pretenses.
REDPIXEL.PL // Shutterstock
Computer Fraud and Abuse Act of 1986
The Computer Fraud and Abuse Act of 1986 is an anti-hacking law prohibiting unauthorized use of any protected device connected to the internet, including computers and smartphones. The act has been amended several times since its original passage and has drawn scrutiny for language that critics consider vague enough to let the law be interpreted so broadly that it criminalizes everyday activities.
Fortunately, in June 2021, the Supreme Court narrowed the act, saying that the law should not apply to people using systems they’ve been allowed to access, as otherwise, a large number of everyday computer activities would, in effect, be criminal.
The Department of Justice enforces this act and recently updated its enforcement policy so that good-faith security research—accessing a computer solely with good-faith vulnerability or security flaw correction, investigation, or testing purposes—would not be charged. Within the DOJ, the FBI has primary investigative authority regarding cases involving foreign relations or national defense issues, foreign counterintelligence, restricted data, and suspicion of espionage. The Secret Service is also authorized to investigate instances of fraud.
The CFAA criminalizes unauthorized access of a computer or the obtaining of protected information by exceeding authorized access; extortion involving computers; intentional and unauthorized access to a computer that causes reckless damage; and any attempts to commit such offenses, even if ultimately unsuccessful. A first offense can result in a maximum of 10 years in prison; a second offense increases the sentence to 20 years.
fizkes // Shutterstock
Children’s Online Privacy Protection Act of 1998
The Children’s Online Privacy Protection Act imposes requirements on online services that are directed at, and collect information from, children younger than 13. Such services must provide specific parental controls and the ability to opt out, and must make their privacy policies available and easily accessible.
The Federal Trade Commission enforces the application of this act and investigates violations thereof—most recently turning its attention to online education tools. When a COPPA violation occurs, the violator could receive a fine of up to $43,280 per violation. This figure throws into stark relief the $170 million fine levied against Google in 2019 for COPPA violations on YouTube. The web service collected children’s personal information without consent and then used it to target these children with advertising.
Many companies have committed COPPA violations over the years by improperly gathering children’s personal information, including WW International and Kurbo Inc. in 2018, Musical.ly (TikTok) in 2019, We Heart It in 2020, HyperBeard in 2020, OpenX in 2021, and Recolor in 2021.
Surasak_Ch // Shutterstock
Gramm-Leach-Bliley Act of 1999
The Gramm-Leach-Bliley Act requires financial institutions to safeguard the public’s nonpublic personal information and provide their customers with an explanation of their information-sharing practices. It also mandates that consumers and customers be able to opt out of having their information shared with nonaffiliated third parties. The act is enforced by several authorities, primarily the Federal Trade Commission; federal banking agencies, other federal regulatory authorities, and state insurance oversight agencies also share enforcement responsibility.
Penalties for violations of the GLBA can include severe personal and financial consequences for employees and executives. A financial institution can be fined up to $100,000 for each violation, and an institution’s directors and officers can face fines of up to $10,000, five years in prison, or both. Additionally, companies that violate the act risk losing their customers’ confidence and face increased legal exposure.
ldutko // Shutterstock
Health Insurance Portability and Accountability Act of 1996
The Health Insurance Portability and Accountability Act ensures the proper protection of individuals’ health information by setting disclosure and use standards. The Office for Civil Rights at the Department of Health and Human Services is responsible for enforcing HIPAA privacy and security rules. The office investigates complaints and conducts compliance reviews per HIPAA standards.
Penalties for HIPAA violations can be levied as both civil and criminal. Civil penalties start at a minimum of $100 per violation; repeated violations of the same provision can bring fines of up to $25,000. Such penalties are applied if an individual was aware of wrongdoing or is proven to have failed to exercise the due diligence that would have made them aware. Penalties do not apply in the absence of willful neglect or if the individual corrects the violation within 30 days of being made aware of it.
Criminal penalties are, of course, much stiffer. A willful violation bears a minimum fine of $50,000 up to a maximum of $250,000. Moreover, the guilty individual may have to pay restitution to any victims involved. Imprisonment is also possible. Prison terms can vary from up to one year in the case of criminal negligence to up to 10 years for violating HIPAA rules with malicious intent or for personal profit.
Telephone Records and Privacy Protection Act of 2006
The Telephone Records and Privacy Protection Act made it a criminal offense to engage in pretexting—using manipulation or false statements to obtain personal information—to acquire phone records from telecommunication companies. It not only prohibits a person from using fraudulent tactics to obtain phone data, but also makes it illegal to try accessing confidential phone data online or on computers. Selling and transferring phone records that were illegally obtained is also prohibited.
With the passage of the act, violators can incur fines or be sentenced up to 10 years imprisonment. Both of these penalties can also increase based on the severity of the crime: If the fraudulent activity had more than 50 victims or involved more than $100,000, fines can double and an additional five years could be added to a prison sentence. Another additional five years could be added if the fraudulently acquired phone records were used to commit violent crimes, crimes against law enforcement officers, or domestic violence.
This story originally appeared on TripleBlind and was produced and distributed in partnership with Stacker Studio.
How has US wealth evolved since the 1980s?
America’s economy has exploded since 1989.
Gross domestic product, which measures all of the goods and services produced in a year, grew from $9.9 trillion to $22.5 trillion from 1989 to 2023 (after accounting for inflation), according to the Bureau of Economic Analysis. This figure represents a massive increase in economic output.
This increased productivity has fed into a similarly significant increase in wealth. The Wealth Enhancement Group used data from the Federal Reserve to look at how the assets held by U.S. households have evolved over time.
Data shows that American households owned a combined $161 trillion in assets in the third quarter of 2023, up from $24 trillion in 1989. That makes for a roughly 570% increase, or 170% after adjusting for inflation.
After accounting for debt, such as mortgages, America’s total household net worth grew to $142 trillion, up from $20 trillion. Although the number is down by about 1% from its peak in the second quarter of 2022, it still reflects a dramatic increase over time.
The most valuable asset class the typical American family holds is real estate. Aside from a significant drop during the 2000s subprime mortgage crisis and a brief dip following interest rate hikes in 2022, housing has been a reliable generator of wealth for the middle class.
Wealth Enhancement Group
Household assets have skyrocketed since 1989
For Americans in the bottom half of the wealth distribution, housing made up 51% of their assets. Wealthier households, in contrast, tend to have higher shares of their savings in equities.
Households in the top 0.1% held 60% of their assets in shares of public and private companies in 2023. Meanwhile, households in the bottom half of wealth in the United States held only around 6% of assets in equities.
Yet, despite how much housing has grown in value, its ascent pales compared to the fastest-growing asset class: public equities.
Between 1989 and 2023, the value of public stocks held by American households grew by nearly 1,700%, rising from $2 trillion in value to $37 trillion. This trend, coupled with the fact that shares in companies are held disproportionately by the rich, has caused the share of American household assets held by the top 0.1% to increase from 8% to 12%.
Wealth Enhancement Group
The wealthy tend to own shares in companies
Some economists argue that, in theory, the ratio of a country’s wealth to its economy, as measured by GDP, should be constant over time.
Yet data from the Bureau of Economic Analysis and the Federal Reserve shows that the ratio of the net worth of American households and nonprofit organizations to GDP rose from around 3.6 in the 1980s to 5.5 in the third quarter of 2023.
In 2022, YiLi Chien and Ashley Stewart, two researchers at the St. Louis Federal Reserve, offered a few theories to explain how this ratio has increased over time. They suggest that American companies might now have greater market power, allowing them to charge more. The authors also note that since the internet era, many of America’s biggest companies, such as Meta and Google, offer their services to consumers for free—while investors may value their economic contributions, they do not count for much in the GDP numbers.
However, assets are not net worth. The rich are more likely to own their homes outright: in the third quarter of 2023, households in the top 0.1% owned $1.83 trillion worth of real estate while owing just $70 billion in mortgages. In contrast, households in the bottom 50% of wealth owned $4.87 trillion of real estate against $3 trillion of housing debt.
Story editing by Ashleigh Graf. Copy editing by Kristen Wegrzyn.
This story originally appeared on Wealth Enhancement Group and was produced and distributed in partnership with Stacker Studio.
Deepfakes cause 30% of organizations to doubt biometrics, Gartner finds
A look at AI deepfakes, their impact on security, and ways to mitigate the risks
A fake moustache and trenchcoat don’t make a convincing disguise, right? But a digitally altered video that makes your face identical to someone else’s?
That’s a different story.
Deepfakes are artificial images or videos that imitate a person’s likeness so convincingly that they can be nearly impossible to recognize as fake. Hackers use them to impersonate people’s faces and voices. This can have monumental impacts — even $25 million worth, which is what one undisclosed company lost in a deepfake scam.
Even with all the money a company spends on voice authentication and facial biometrics, it can all be in vain if a deepfake hacker manages to fool them.
Gartner explores the impact of deepfakes on organizational policy, and we’ll share some risk management considerations to address the trend.
30% of organizations can’t rely on facial recognition software and biometrics
Biometrics rely on presentation attack detection (PAD) to assess a person’s identity and liveness. The problem is that today’s PAD standards don’t protect against injection attacks powered by AI deepfakes. Once a bulletproof security strategy, biometrics alone are no longer considered reliable by 30% of the companies surveyed by Gartner.
“These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Akif Khan, VP Analyst at Gartner.
The solution is a demand for more innovative cybersecurity tech. Gartner advises organizations to update the minimum requirements they expect from cybersecurity vendors to include all of the following:
- Injection attack detection (IAD)
- Image inspection
On top of that, you can beef up security with:
- Device identification: Numerical values or codes to identify a user’s device
- Behavioural analytics: Machine learning algorithms to detect any shifts in day-to-day online behaviour
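To make the behavioural analytics idea concrete, here is a loose illustrative sketch (our own simplification, not Gartner guidance or any vendor's actual method): keep a baseline of a user's normal activity and flag sessions that deviate sharply from it. Real systems use far richer signals and machine learning models; this z-score heuristic only shows the underlying principle.

```python
# Illustrative sketch: flag activity that deviates sharply from a
# user's historical baseline (a simple z-score heuristic).
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Return True if `current` lies more than `threshold` standard
    deviations from the user's historical average."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Example: typical session lengths in minutes, then a sudden outlier.
baseline = [12, 15, 11, 14, 13, 12, 16]
print(is_anomalous(baseline, 13))   # → False (normal session)
print(is_anomalous(baseline, 240))  # → True (flagged for review)
```

In practice such a check would be one signal among many (device, location, typing cadence), feeding a risk score rather than a hard block.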
So, how can you account for deepfakes risks and mitigation in practice? Here are a few more tips to consider:
- Educate employees: Hold monthly or quarterly meetings with experts in the field to help your employees identify common signs of deepfakes, including blurred or pixelated images in a person’s video, or distorted audio. Greater awareness of what to look out for can allow employees to flag suspicions.
- Don’t rely on one authentication process: Multi-factor authentication demands 2+ pieces of evidence to verify a user before admitting them into a network. Include email, phone, or voice verification in addition to biometrics.
- Invest in deepfake detection software: Consider a subscription to Sensity AI, Deepware Scan, Truepic, or Microsoft Video Authenticator.
Gartner plans to share more findings and research on deepfakes at their security and risk management summits taking place in various countries around the world.
Read more about those summits and see the news release here.
Veronica Ott is a freelance writer and digital marketer with a specialization in finance and business. As a CPA with experience in the industry, she’s able to provide unique insight into various monetary, financial and economic topics. When Veronica isn’t writing, you can find her watching the latest films!
Where companies have adopted AI—and where they are planning to do so in the near future
On Nov. 30, 2022, OpenAI launched ChatGPT, a chatbot driven by artificial intelligence. The app spread like wildfire. Not only did it provide an entertaining companion to chat with, but it also showed promise as a piece of productivity software.
ChatGPT allows users to ask questions about myriad topics and get useful responses in a way that search engines like Google cannot provide. Similar technologies have emerged in all kinds of domains, including image generation, language translation, transcription, computer programming, and more.
Firms across the U.S. are embracing artificial intelligence. To find out which regions are the most enthusiastic about AI, Verbit analyzed data from surveys taken by the Census Bureau in December 2023. Overall, 4.9% of businesses said they had used AI to produce goods or services in the past two weeks, while 6.7% said they planned to within the next six months.
Unsurprisingly, information technology companies are the most eager to use artificial intelligence—22% of respondents from American tech companies said they had used AI for their products or services within the past two weeks. That number actually understates AI’s impact in the field. A survey of computer programmers conducted by JetBrains, a software company, found that 77% of respondents used ChatGPT, while 46% used GitHub Copilot, an AI coding assistant.
Professional, scientific, and technical services were the second-most likely type of firm to respond that they used AI tools, according to the Census Bureau. Law firms are using tools to scan through thousands of past cases. And, according to Tess Bennett, a technology reporter for Financial Review, consultants and accountants are using AI to create PowerPoint presentations and conduct exploratory data analysis.
Some businesses have been quicker to adopt AI than others. Companies in Rhode Island lead the way on this front—8.7% of businesses in the state are currently using AI, nearly twice the rate of companies in the United States as a whole.
Companies on the West Coast and the Southwest tended to be more AI-friendly, while companies in the Rust Belt were likelier to have the lowest interest in using AI tools.
This story matches the Census survey numbers with data on what kinds of companies each state has within its borders and the education level of its workforce to understand why these disparities across states exist.
In general, states with a higher share of businesses in the technology sector also were likely to have more businesses use AI to produce goods and services. However, the weak correlation suggests that despite all of the hype surrounding AI, companies have still been slow to change their practices to adopt the technology.
Getting on the bandwagon
Businesses in Washington, D.C., were the most likely to say they planned to adopt AI in the next six months, at 13.7%. Meanwhile, about 9% of businesses in Maryland, Alaska, New Mexico, Rhode Island, and Florida said they planned on implementing AI. Alabama and Delaware were the least enthusiastic about AI adoption—only 3.3% of businesses in the two states reported plans to implement AI.
This analysis of Census data found a much stronger correlation between how many of a state’s firms are in the tech sector and their willingness to implement AI in their business practices in the near future.
Similar trends were found when it came to states with highly educated workforces—in general, the higher the share of a state’s residents with college degrees, the more likely its businesses were to say they were planning on implementing AI. Artificial intelligence might be the future. But Census data reveals it is still early days.
Story editing by Ashleigh Graf. Copy editing by Kristen Wegrzyn.
This story originally appeared on Verbit and was produced and distributed in partnership with Stacker Studio.