facial recognition

Smart Glasses Bring Facial Recognition Concerns

Harvard students have demonstrated that "smart glasses" can be used to look at somebody in public and reveal their identity and personal information. Meta, which made the glasses used in the demonstration, says it has adequate security safeguards in place. The Ray-Ban smart glasses, produced by Facebook owner Meta, connect wirelessly to a smartphone. They include a camera, speaker and microphone, and allow a range of hands-free actions such as filming, taking photos and making calls. (Source: meta.com)

Facial Recognition Abused

AnhPhu Nguyen and Caine Ardayfio of Harvard University ...





facial recognition

Retailers and malls embrace facial recognition and video analytics for enhanced security and footfall analysis

In the past two to three years, an increasing number of malls and retail chains have adopted real-time video analytics and facial recognition to enhance security, customer experience and footfall analysis.

Some of these technologies are showcased this week at the NRF Protect Conference in Long Beach, California.




facial recognition

Facial recognition technique could improve hail forecasts

Full Text:

The same artificial intelligence technique typically used in facial recognition systems could help improve prediction of hailstorms and their severity, according to a new, National Science Foundation-funded study. Instead of zeroing in on the features of an individual face, scientists trained a deep learning model called a convolutional neural network to recognize features of individual storms that affect the formation of hail and how large the hailstones will be, both of which are notoriously difficult to predict. The promising results highlight the importance of taking into account a storm's entire structure, something that's been challenging to do with existing hail-forecasting techniques.
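As a rough illustration of the approach the study describes, a convolutional network slides small learned filters across a gridded storm field to pick out structural features. The toy sketch below (plain NumPy, with a hand-picked edge filter standing in for learned weights, and a made-up reflectivity grid) shows the convolve, ReLU, and pool steps at the heart of such a model; it is a minimal illustration, not the researchers' actual architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Zero out negative responses, keeping only activated features."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample by taking the strongest response in each size-by-size block."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# A toy "storm" field: a reflectivity-like grid with one bright core.
storm = np.zeros((8, 8))
storm[3:5, 3:5] = 1.0

# A hand-picked vertical-edge kernel; a trained CNN learns many such filters.
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])

features = max_pool(relu(conv2d(storm, kernel)))
```

In a real hail-forecasting model, many such filters are stacked in layers and their weights are learned from labeled storm data, so the network can respond to a storm's overall structure rather than any single grid cell.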

Image credit: Carlye Calvin




facial recognition

What happens when facial recognition gets it wrong – Week in security with Tony Anscombe

A facial recognition system misidentifies a woman in London as a shoplifter, igniting fresh concerns over the technology's accuracy and reliability




facial recognition

China drafts rules for using facial recognition technology

The use of the technology will also require individuals' consent, the CAC said in a statement. It added that non-biometric identification solutions should be favored over facial recognition in cases where such methods are equally effective.




facial recognition

ARGOS Identity Provides Identity Verification and Facial Recognition Technology to Sports Betting Platform Stadiobet

The Need for Identity Verification in Sports Betting Platforms









facial recognition

How does facial recognition work and is it safe? | WIRED Explains

In May 2019, San Francisco became the first US city to ban the use of facial recognition, but this is an isolated example of resistance to this controversial technology. In the UK, it's been used on numerous occasions, while London's Metropolitan Police has confirmed that it will start using the technology as part of its regular policing. But how does facial recognition work, and is it accurate and safe? In this WIRED Explains video, security editor Matt Burgess breaks down the ins and outs of the technology and the issues surrounding its use. This video was produced as part of Digital Society, a publishing partnership between WIRED and Vontobel where all content is editorially independent. Visit Vontobel Impact for more stories on how technology is shaping the future of society: https://www.vontobel.com/en-int/about-vontobel/impact/




facial recognition

Lawsuit Challenges Clearview's Use of Scraped Social Media Images for Facial Recognition

Facial recognition technology is getting more sophisticated, more reliable, and more pervasive as the world eases its way toward becoming an all-encompassing surveillance state. That surveillance state does not even have to be built; it is increasingly ready for deployment as law enforcement agencies cut deals with private companies that have already assembled the tools and databases for use. As with cell phone tracking, that plug-and-play quality does an end-run around safeguards that, at least nominally, restrict government actors, and invites legal challenges based on civil liberties concerns.




facial recognition

SnapPay launches facial recognition payments for North American merchants

(The Paypers) SnapPay has announced the availability of facial recognition payment technology for North...








facial recognition

Bridge Makes Patient Portal Login Faster and More Secure With Fingerprint and Facial Recognition

Bridge Patient Portal introduces biometric authentication on mobile devices for fast, easy, and secure patient portal login




facial recognition

VR Office Place Introduces Real Time Facial Recognition and Social Fingerprint Analysis

The world is ripe for a new shift in how organizations use facial recognition. VR Office Place is proud to present real-time social fingerprint identification in the form of facial recognition and publicly available social presence.




facial recognition

Number of players determined using facial recognition

There is provided a system and method for determining the number of players present using facial recognition. There is provided a method comprising capturing an image of the players present, and determining the number of players present based on the image. In this manner, players may more easily configure game settings, and spectators may be presented with a more engaging experience.
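The abstract describes counting players from a single captured image. Assuming a face detector has already returned bounding boxes, a minimal sketch of the counting step might deduplicate overlapping detections of the same person with an intersection-over-union check; the detector, the box values, and the 0.5 threshold here are illustrative assumptions, not details from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def count_players(detections, iou_threshold=0.5):
    """Count distinct faces, merging detections that overlap heavily."""
    kept = []
    for box in detections:
        # A new box only counts if it doesn't overlap an already-kept face.
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return len(kept)

# Two overlapping boxes (the same face detected twice) plus one separate player.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
players = count_players(boxes)  # 2 distinct players
```

The game could then use the returned count to, for example, pre-populate the number of player slots in a settings menu.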




facial recognition

Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals

Given all we know about facial recognition tech, it is literally jaw-dropping that anyone could make this claim… especially without being vetted independently.

A group of Harrisburg University professors and a PhD student have developed an automated computer facial recognition software capable of predicting whether someone is likely to be a criminal.

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

There's a whole lot of "what even the fuck" in CBS 21's reprint of a press release, but let's start with the claim about "no racial bias." That's a lot to swallow when the underlying research hasn't been released yet. Let's see what the National Institute of Standards and Technology has to say on the subject. This is the result of the NIST's examination of 189 facial recognition AI programs -- all far more established than whatever it is Harrisburg researchers have cooked up.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Why is this acceptable? The report inadvertently supplies the answer:

Middle-aged white men generally benefited from the highest accuracy rates.

Yep. And guess who's making laws or running police departments or marketing AI to cops or telling people on Twitter not to break the law or etc. etc. etc.

To craft a terrible pun, the researchers' claim of "no racial bias" is absurd on its face. Per se stupid af to use legal terminology.

Moving on from that, there's the 80% accuracy, which is apparently good enough since it will only threaten the life and liberty of 20% of the people it's inflicted on. I guess if it's the FBI's gold standard, it's good enough for everyone.

Maybe this is just bad reporting. Maybe something got copy-pasted wrong from the spammed press release. Let's go to the source… one that somehow still doesn't include a link to any underlying research documents.

What does any of this mean? Are we ready to embrace a bit of pre-crime eugenics? Or is this just the most hamfisted phrasing Harrisburg researchers could come up with?

A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.

The most charitable interpretation of this statement is that the wrong-20%-of-the-time AI is going to be applied to the super-sketchy "predictive policing" field. Predictive policing -- a theory that says it's ok to treat people like criminals if they live and work in an area where criminals live -- is its own biased mess, relying on garbage data generated by biased policing to turn racist policing into an AI-blessed "work smarter not harder" LEO equivalent.

The question about "likely" is answered in the next paragraph, somewhat assuring readers the AI won't be applied to ultrasound images.

With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.

There's a big difference between "going to be" and "is," and researchers using actual science should know better than to use both phrases to describe their AI efforts. One means scanning someone's face to determine whether they might eventually engage in criminal acts. The other means matching faces to images of known criminals. They are far from interchangeable terms.

If you think the above quotes are, at best, disjointed, brace yourself for this jargon-fest which clarifies nothing and suggests the AI itself wrote the pullquote:

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

"Minute features in an image that are highly predictive of criminality." And what, pray tell, are those "minute features?" Skin tone? "I AM A CRIMINAL IN THE MAKING" forehead tattoos? Bullshit on top of bullshit? Come on. This is word salad, but a salad pretending to be a law enforcement tool with actual utility. Nothing about this suggests Harrisburg has come up with anything better than the shitty "tools" already being inflicted on us by law enforcement's early adopters.

I wish we could dig deeper into this, but we'll all have to wait until this excitable group of clueless researchers decides to publish its findings. According to this site, the research is being sealed inside a "research book," which means it will take a lot of money to actually prove this isn't any better than anything that's been offered before. This could be the next Clearview, but we won't know until the research is published. If we're lucky, that will happen before Harrisburg patents this awful product and starts selling it to all and sundry. Don't hold your breath.




facial recognition

Communities come face-to-face with the growing power of facial recognition technology

As law enforcement agencies deploy AI-powered facial recognition systems, some communities are pushing back, insisting on having a say in how they’re used.




facial recognition

Facial recognition on the rise: can current laws protect the public?

The ICO is investigating reports that a property developer has quietly installed a facial recognition system in London's King's Cross. We spoke to experts from the legal and technology sectors to find some clarity about the rules




facial recognition

Smart Energy Council calls for state to abandon facial recognition

Some users have been brought to tears by 'broken' facial recognition software now required to approve solar rebate applications.




facial recognition

Facial Recognition Is Tech's Biggest Mistake

Biometrics are generally a good alternative to passwords, but authentication via face-scanning is a terrible idea, according to security expert Max Eddy.








facial recognition

Trump CTO Addresses AI, Facial Recognition, Immigration, Tech Infrastructure, and More

Michael Kratsios, the fourth U.S. Chief Technology Officer, explains administration policies at the Fall Conference of the Stanford Institute for Human-Centered Artificial Intelligence




facial recognition

Opinion: Worried about how facial recognition technology is being used? You should be

Facial recognition surveillance, powered by artificial intelligence, is being used — or misused — in cities worldwide.




facial recognition

5 questions policymakers should ask about facial recognition, law enforcement, and algorithmic bias

In the futuristic 2002 film “Minority Report,” law enforcement uses a predictive technology that includes artificial intelligence (AI) for risk assessments to arrest possible murderers before they commit crimes. However, a police officer is now one of the accused future murderers and is on the run from the Department of Justice to prove that the…





facial recognition

How to build guardrails for facial recognition technology

Facial recognition technology has raised many questions about privacy, surveillance, and bias. Algorithms can identify faces but do so in ways that threaten privacy and introduce biases. Already, several cities have called for limits on the use of facial recognition by local law enforcement officials. Now, a bipartisan bill introduced in the Senate proposes new…





facial recognition

Singapore plans to launch country-wide facial recognition system that will replace photo IDs by 2022

The government of Singapore is preparing to transition to a facial recognition program it hopes will eliminate the need for ID cards by 2022, and paper checks for retail transactions by 2025.




facial recognition

Google, YouTube and Twitter send cease and desist order to facial recognition app Clearview AI

Using Clearview AI police can upload a photo of an unknown person they would like to identify, and see a list of matches culled from a database of over three billion photos.
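Systems of this kind are generally understood to work by comparing an embedding of the query face against embeddings of every photo in the database and returning the closest matches. The sketch below illustrates that ranking step with cosine similarity; the tiny 4-dimensional vectors and identity names are invented stand-ins for a real face-embedding model's output, not Clearview's actual method or data.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_matches(query, gallery, k=3):
    """Rank gallery identities by similarity to the query embedding."""
    scored = [(name, cosine_sim(query, emb)) for name, emb in gallery.items()]
    return sorted(scored, key=lambda t: -t[1])[:k]

# Toy 4-d embeddings standing in for a real model's high-dimensional output.
gallery = {
    "person_a": np.array([1.0, 0.0, 0.0, 0.0]),
    "person_b": np.array([0.0, 1.0, 0.0, 0.0]),
    "person_c": np.array([0.7, 0.7, 0.0, 0.0]),
}
query = np.array([0.9, 0.1, 0.0, 0.0])  # embedding of the uploaded photo
matches = top_matches(query, gallery, k=2)
```

At the scale described in the article (billions of photos), a production system would replace the linear scan with an approximate nearest-neighbor index, but the ranking principle is the same.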




facial recognition

Controversial facial recognition company Clearview AI considered for coronavirus contact tracing

The controversial facial recognition company Clearview AI is in negotiations with several unnamed federal agencies and three US states to provide contact tracing services during the coronavirus pandemic.




facial recognition

Why Some Cities Are Banning Facial Recognition Technology

A handful of US cities have banned government use of facial recognition technology due to concerns over its accuracy and privacy. WIRED's Tom Simonite talks with computer vision scientist and lawyer Gretchen Greene about the controversy surrounding the use of this technology.