Technology

Amazon adds fear detection and age ranges to its facial-recognition tech as Customs and Border Protection looks to award a $950 million contract

  • Amazon Web Services has added several new features to its facial-recognition technology, Rekognition.
  • This includes expanded age-recognition capabilities and the new ability to recognize fear.
  • Rekognition is a controversial technology and has been the subject of much criticism and protests — from both inside and outside Amazon.
  • These new features drew some flak from commenters on Twitter.
  • Meanwhile, US Customs and Border Protection is looking for quotes on a sweeping new border-protection system that includes more facial-recognition tech.

Amazon Web Services has expanded the capabilities of its controversial facial-recognition technology called Rekognition.

It now detects narrower age ranges and can also detect fear, the company announced in a blog post on Monday.

The company explained (emphasis ours):

“Today, we are launching accuracy and functionality improvements to our face analysis features. Face analysis generates metadata about detected faces in the form of gender, age range, emotions, attributes such as ‘Smile’, face pose, face image quality and face landmarks. With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’. Lastly, we have improved age range estimation accuracy; you also get narrower age ranges across most age groups.”
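For developers, these attributes surface in the response to Rekognition's DetectFaces call (made with `Attributes=['ALL']`). As a rough sketch of what the announcement describes — the payload below is hypothetical sample data shaped like AWS's documented `FaceDetail` structure, not output from a real call — the new fear emotion and narrower age ranges can be read out like this:

```python
# Hypothetical FaceDetail payload, shaped like the response from
# AWS Rekognition's DetectFaces API (via boto3:
# rekognition.detect_faces(Image=..., Attributes=["ALL"])).
sample_face = {
    "AgeRange": {"Low": 29, "High": 35},  # narrower ranges per the update
    "Gender": {"Value": "Female", "Confidence": 99.1},
    "Emotions": [                          # now includes the new FEAR type
        {"Type": "CALM", "Confidence": 12.4},
        {"Type": "FEAR", "Confidence": 78.9},
        {"Type": "SURPRISED", "Confidence": 8.7},
    ],
}

def top_emotion(face_detail):
    """Return the (type, confidence) pair of the highest-confidence emotion."""
    best = max(face_detail["Emotions"], key=lambda e: e["Confidence"])
    return best["Type"], best["Confidence"]

def age_range(face_detail):
    """Return the estimated age range as a (low, high) tuple."""
    r = face_detail["AgeRange"]
    return r["Low"], r["High"]

print(top_emotion(sample_face))  # ('FEAR', 78.9)
print(age_range(sample_face))    # (29, 35)
```

Note that Rekognition's documentation cautions that emotion detection predicts apparent facial expression, not a person's actual internal state.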

Earlier this month AWS also announced that Rekognition can now detect violent content such as blood, wounds, weapons, self-injury, corpses, as well as sexually explicit content.

But it was the news of expanded age ranges and fear detection that drew criticism on Twitter.

Just last month several protesters interrupted Amazon AWS CTO Werner Vogels during a keynote speech at an AWS conference in New York.

They were protesting AWS’s work with the U.S. Immigration and Customs Enforcement (ICE) and the family separation policy at the Southern Border. Amazon hasn’t acknowledged whether ICE uses its Rekognition technology, but the company did meet with ICE officials to pitch its facial-recognition tech, among other AWS services, as revealed by emails between Amazon and various government officials obtained by the American Civil Liberties Union Foundations.

Amazon’s Rekognition has come under fire from a wide range of groups who want the company to stop selling it to law enforcement agencies. In April, AI experts penned an open letter to Amazon about it. Civil rights groups have protested it. Last year, 100 Amazon employees sent a letter to management asking the company to stop selling Rekognition to law enforcement. Another 500 signed a letter this year asking Amazon to stop working with ICE altogether.

“AWS comes under fire for Rekognition sales to the federal government, who in turn is building concentration camps for children, and AWS’s response is to improve ‘age range estimation’ and ‘fear detection’ in the service? Are you f– KIDDING ME?!” tweeted Corey Quinn from the Duckbill Group, a consultancy that helps companies manage their AWS bills. Quinn also hosts the Screaming in the Cloud podcast.

Another developer tweeted, “In 25 years we’re going to be talking about how AWS handled this situation in the same way we talk about how IBM enabled the Holocaust. Every engineer and ML researcher who worked on this should be ashamed of themselves.”

The CBP is looking to buy more facial-recognition tech

Meanwhile, the U.S. Customs and Border Protection (CBP), a sister agency to ICE, has put out a new request for quotes on a sweeping new border-security system that includes expanded use of facial-recognition technology.

“Integration of facial recognition technologies is intended throughout all passenger applications,” the RFQ documents say.

The CBP already uses facial recognition at various airports, such as in Mexico City, where it matches passengers’ faces with photos taken from their passports or other government documents, it says.

And the CBP uses other biometric information, such as taking fingerprints of people at the border if it suspects that they are entering the country illegally, it says.

“CBP’s future vision for biometric exit is to build the technology nationwide using cloud computing,” the agency wrote in a 2017 article about the use of facial recognition and fingerprint tech.

The new contract for border-security technologies is expected to begin in early 2020 and could be worth $950 million over its lifespan, according to the RFQ documents.

This article was originally published on Business Insider. Copyright 2019.


Technology

“AI for everything is not a future that I cosign on”

Data scientist and NYU associate professor Meredith Broussard on AI’s negative impact on social problems and beyond.


AI is the talk of the town, with applications like ChatGPT and smart homes taking the world by storm. 

But AI isn’t just a passing trend. Though few noticed, the technology has been developing for decades: virtual reality and natural language processing were already advancing in the 1990s, and robots kept popping up throughout the 2000s.

Then there’s Apple and Amazon — giants synonymous with their AI counterparts, Siri and Alexa.

Convenient? Definitely. Flawless? Absolutely not, according to Meredith Broussard. The AI researcher, author, and NYU associate professor shared some insights on AI’s limitations after noticing AI’s role in her own cancer diagnostics.

Here are some highlights from the interview: 

On the notion that AI bias is the only obstacle to widespread AI application:

One of the big issues I have with this argument is this idea that somehow AI is going to reach its full potential. AI is just math. I don’t think that everything in the world should be governed by math. Computers are really good at solving mathematical issues. But they are not very good at solving social issues, yet they are being applied to social problems. This kind of imagined endgame of Oh, we’re just going to use AI for everything is not a future that I cosign on.

On AI predictive grading in the education system:

As a professor, predicting student grades in advance is the opposite of what I want in my classroom. I want to believe in the possibility of change. I want to get my students further along on their learning journey. An algorithm that says “This student is this kind of student, so they’re probably going to be like this,” is counter to the whole point of education, as far as I’m concerned. 

On AI’s limitations in the police system:

Police are also no better at using technology than anybody else. If we were talking about a situation where everybody was a top-notch computer scientist who was trained in all of the intersectional sociological issues of the day, and we had communities that had fully funded schools and we had, you know, social equity, then it would be a different story. But we live in a world with a lot of problems, and throwing more technology at already overpoliced Black, brown, and poorer neighborhoods in the United States is not helping. 

On how to make machine learning less discriminatory: 

That’s a really good question. All of my talk about auditing sort of explodes our notion of the “black box.” As I started trying to explain computational systems, I realized that the “black box” is an abstraction that we use because it’s convenient and because we don’t often want to get into long, complicated conversations about math. 

When we’re writing about machine-learning systems, it is tempting to not get into the weeds. But we know that these systems are being discriminatory. The time has passed for reporters to just say Oh, we don’t know what the potential problems are in the system. We can guess what the potential problems are and ask the tough questions. Has this system been evaluated for bias based on gender, based on ability, based on race? Most of the time the answer is no, and that needs to change.

Read the full interview in the MIT Technology Review.


Technology

How are AI tools like ChatGPT deployed in retail?

“ChatGPT is no doubt amplifying customers’ shopping experiences, leading to more sales and profit for retailers,” says one tech founder.


While many people are asking online AI tool ChatGPT all kinds of fun questions, it is also being put to work by retailers to improve the way they do business.

And the use cases vary widely.

According to an article from the National Automobile Dealers Association, Fiat and Kia Germany have begun using ChatGPT to answer questions in an interactive digital showroom. The report describes how it can dispense information about vehicles and financing options, and provide a more personalized experience for customers.

In another example, retailers can now create a profile of a customer’s sizing, previous purchases, and browsing history, and integrate ChatGPT into their recommendation engines to personalize product suggestions for customers. In a similar vein, software company ElifTech created ElifMail, an email marketing solution powered by ChatGPT, which helps automate the process of responding to customer inquiries, so that the retailer can focus on more critical tasks.
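As a loose illustration of that kind of integration — the profile fields and function names below are hypothetical, not from ElifTech or any retailer mentioned here — a recommendation prompt could be assembled from a customer profile before being sent to a chat-completion API:

```python
# Hypothetical customer profile; field names are illustrative only.
profile = {
    "sizing": "medium",
    "previous_purchases": ["trail running shoes", "rain jacket"],
    "browsing_history": ["hiking backpacks", "wool socks"],
}

def build_recommendation_messages(profile):
    """Turn a customer profile into chat messages for a completion API."""
    summary = (
        f"Size: {profile['sizing']}. "
        f"Past purchases: {', '.join(profile['previous_purchases'])}. "
        f"Recently viewed: {', '.join(profile['browsing_history'])}."
    )
    return [
        {"role": "system",
         "content": "You are a retail assistant. Suggest three products."},
        {"role": "user", "content": summary},
    ]

messages = build_recommendation_messages(profile)
# These messages would then be sent to a chat-completion endpoint
# (e.g. OpenAI's chat API) and the reply fed into the storefront UI.
```

The point of the sketch is the shape of the glue code: the retailer's own data is summarized into the prompt, and the model supplies only the final recommendation text.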

Insider Intelligence reported that French supermarket chain Carrefour is “experimenting with ChatGPT and generative AI to create videos answering common customer questions, such as how to eat healthier for less.”

ChatGPT is also helpful for retailers when it comes to visual product searches, noted Ryan Faber, founder of Copymatic, a cutting-edge AI-powered platform that helps businesses create content.  

“It comes with visual recognition technology that retailers are utilizing to attract and give more options to customers,” he explains. 

“Before, only text and voice searches were available, but with visual product searches, customers are getting more satisfied as they can easily upload an image and get exact results. ChatGPT is no doubt amplifying customers’ shopping experiences, leading to more sales and profit for retailers.”

Instacart is already using the service as a chatbot, reports The Wall Street Journal. Chatbots are inherently able to handle a large volume of customer inquiries and provide personalized recommendations, product information, and support.

One benefit for the consumer is that the purchase could be completed entirely within the chat window, noted Abby McNally, Director of Planning and Awareness Media at Collective Measures, a marketing agency based in Minneapolis.

“There’s no need to navigate to a different page or to enter any credit card information. This can lead to higher conversion rates and more sales.” Backing up this point, a LivePerson survey showed that 68 percent of consumers become more loyal to a brand if they can resolve issues through a chatbot, and 60 percent of those aged 18 to 24 actually prefer a chatbot interaction to a human one.

McNally added that AI chatbots in retail will come in handy as a tool for employee training. A generative AI chatbot could draft fictional customer service scenarios for associates to respond to.

Nevertheless, as anyone who has used ChatGPT knows, the responses can sometimes be flawed, inaccurate, or incomplete. Chani Jos, a freelance web programmer in Montreal who has worked for Google and Waze, said she has concerns that it has “the potential to be misused for malicious purposes, such as spreading misinformation or conducting scams.” She also warned that the programming is not yet at a point where it can replicate real-life needs for customer service.

Since ChatGPT and similar generative AI tools are accessible to virtually anyone, we’re sure to see more startups using GPT-3 to build various tools.

“ChatGPT and AI tools can be great for small businesses to save time and effort on their marketing, product-description analysis, and customer service, while achieving better results,” said marketing consultant and cofounder of business consulting firm 172Group, Shari Wright Pilo. She’s been researching AI and ChatGPT to help her clients.

One example? Tasked with analyzing a client’s executive summary, Wright Pilo asked ChatGPT to develop a marketing plan that included social and email campaigns for special offers.

After tweaking prompts, she and the client were able to create a content calendar for three months, complete with post copy, calls to action, and visual suggestions. This took 20 minutes, not including revision time.

The client transferred the plan to a Google sheet, where she personalized the posts to match her tone and branding, creating visuals in Canva. 

The result: the client had a complete social media and email marketing strategy for the next three months, created within two days, freeing up her time to focus on other business tasks and generate more revenue.

“Even better, she’s now engaging with her audience on social media, growing her email list, and keeping her customers happy,” noted Wright Pilo.


Business

Only 13% of Web3 founding teams include any women, BCG study finds

A look into a BCG report highlighting gender disparity in Web3 and STEM.


It’s shocking that, in 2023, vast gender disparity still pervades entire industries. Unfortunately, STEM and sub-industries like Web3 see it the most.

If you haven’t heard, Web3 is a vision for a blockchain-based internet built on cryptocurrency technology.

The Boston Consulting Group (BCG) found that only 13% of Web3 companies included any women on their founding teams. Another key finding was that only 3% of Web3 company founding teams consisted of all women. 

Talk about archaic, for such a progressive industry. 

We dove into the report to understand the severity of that disparity and what companies can do about it. Let’s start with some of the report’s key findings on founders:

  • 13% of Web3 company founding teams have at least one woman
  • 3% of Web3 company founding teams consist entirely of women
  • 93% of Web3 founders are men

These findings above remain consistent not only in North America, but also in the Asia-Pacific and Europe. Now, this disparity unfortunately continues even when you look at the wider workforce of Web3 companies:

  • 73% of Web3 companies’ entire workforce are men
  • 88% of technical roles at Web3 companies are held by men

BCG also examined the role of women in Web3 founding teams by startup stage and funding amount. Sadly, the bigger the investment, the less likely a woman was to sit on the founding team. Only 7% of Web3 companies with $1B invested had women on their founding teams, and companies that raised between $500M and $999M had all-male founding teams.

STEM companies show similar results. While US Census data shows more women entering STEM roles, the disparity is still present. BCG’s report backs this up as well:

  • 33% of STEM company workforces are women
  • 25% of technical roles at STEM companies are held by women

What does BCG propose we do about it? Luckily, the early nature of Web3 offers time to rectify the gender disparity. Here are some strategies discussed:

  • Monitor the data: Granular, objective data collection will keep track of female representation within a company’s workforce and founders. 
  • Include women on VC investment teams: All-male investment teams are more likely to back all-male founding teams. 
  • Create inclusive brand experiences: The Web3 experience should cater to a broad audience. 
  • Stay close to regulators: Collaborate with government and organizational entities to shape regulations for this new industry.
  • Build mentorship and support opportunities: Diverse networks and mentorship opportunities can keep companies in check with gender equality. 

Read BCG’s full press release.
