Showing results for tags 'facial recognition'.


  1. Ban facial recognition in Europe, says EU privacy watchdog

Image credit: Håkan Dahlström

(Reuters) — Facial recognition should be banned in Europe because of its “deep and non-democratic intrusion” into people’s private lives, EU privacy watchdog the European Data Protection Supervisor (EDPS) said on Friday.

The comments come two days after the European Commission proposed draft rules that would allow facial recognition to be used to search for missing children or criminals and in cases of terrorist attacks. The draft rules, which need to be thrashed out with EU countries and the European Parliament, are an attempt by the Commission to set global rules for artificial intelligence, a technology dominated by China and the United States.

The privacy watchdog said it regretted that the Commission had not heeded its earlier call to ban facial recognition in public spaces. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” it said in a statement. “The EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy.”

The Commission’s proposals have drawn criticism from civil rights groups, concerned about loopholes that may allow authoritarian governments to abuse AI to clamp down on people’s rights.

Source: Ban facial recognition in Europe, says EU privacy watchdog
  2. Some UK Stores Are Using Facial Recognition to Track Shoppers

Branches of the British grocer Southern Co-op are using surveillance technology to look for potential shoplifters. Photograph: Chris Ratcliffe/Bloomberg/Getty Images

Branches of Co-op in the south of England have been using real-time facial recognition cameras to scan shoppers entering stores. In total, 18 shops from the Southern Co-op franchise have been using the technology in an effort to reduce shoplifting and abuse against staff. As a result of the trials, other regional Co-op franchises are now believed to be trialing facial recognition systems.

Use of facial recognition by police forces has been controversial, with the Court of Appeal ruling parts of its use to be unlawful earlier this year. But its use has been creeping into the private sector, and the true scale of its use remains unknown.

Southern Co-op's facial recognition was quietly introduced for limited trials during the last 18 months. While shops with face-recognizing cameras displayed signs telling customers about its operation, no general public announcement was made before the trials started. The rollout has left privacy advocates questioning whether the shops can fully justify the use of the technology under data protection laws. They also worry about creeping surveillance and the ability of police forces to access private systems.

Southern Co-op is using facial recognition technology from Facewatch, a London-based startup. Every time someone enters one of the 18 shops using the tech, cameras scan their face. These CCTV images are converted to numerical data and compared against a watchlist of ‘suspects’ to see if there’s a match. If a match is made, staff within the store receive notifications on their smartphones.

“The system alerts our store teams immediately when someone enters their store who has a past record of theft or anti-social behavior,” Gareth Lewis, Southern Co-op’s loss prevention lead, wrote in a blog post on the Facewatch website. The post is the only public acknowledgement of the use of the technology, and Lewis says it has been “successful,” with the tech being deployed in branches where there are higher levels of crime.

In response to police use of facial recognition technology, the Court of Appeal criticized a lack of transparency around the creation of watchlists and who could be on them. Co-op staff decide who is added to its watchlists based on behavior. A spokesperson for the firm says its “limited and targeted” use of facial recognition is to “identify when a known repeat offender enters one of our stores.”

“Only images of individuals known to have offended within our premises, including those who have been banned/excluded, are used on our facial recognition platform,” the spokesperson says. “Using facial recognition in this limited way has improved the safety of our store colleagues.”

Southern Co-op says there has been an 80 percent increase in assaults and violence against store staff this year, and the “number one” trigger is staff trying to apprehend shoplifters. “This gives our colleagues time to decide on any action they need to take, for example, asking them to politely leave the premises or notifying police if this is a breach of a banning order,” the spokesperson says. They add that it is not planning on rolling out the tech to all of its Southern Co-op stores.
In a Facewatch promotional video published in October, Co-op’s Lewis says the tech has been used in inner-city stores for 18 months and has “diverted over 3,000 incidents of theft.” In the same video, Facewatch CEO Nick Fisher says the Co-op has “the best watchlist in the UK.”

The Facewatch system doesn’t store or add everyone’s faces to a central database but instead amalgamates watchlists created by the companies it works with. Facewatch says “subjects of interest” can be individuals “reasonably suspected” of carrying out crimes that have been witnessed by CCTV or staff members. A person does not have to be charged with or convicted of a crime to be flagged, and their data is kept for two years. “The data is then held stored and shared proportionally with other retailers creating a bigger watchlist where all benefit,” a spokesperson for Facewatch says. Its website claims it is the “ONLY shared national facial recognition watchlist”; the watchlist works by essentially linking up multiple private facial recognition networks. It adds that since the Southern Co-op trial it has started a trial with another division of Co-op.

Facewatch refuses to say who all of its clients are, citing confidentiality, but its website includes case studies from petrol stations and other shops in the UK. Last year, the Financial Times reported that Humber prison is using its tech, as are police and retailers in Brazil. Facewatch said its tech was going to be used in 550 stores across London. This can mean huge numbers of people have their faces scanned. In Brazil during December 2018, 2.75 million faces were captured by the tech, with the company's founders telling the FT it reduced crime “overall by 70 percent.” (The report also said one Co-op food store around London’s Victoria station was using the tech.)

However, civil liberties advocates and regulators are cautious about the expansion of private facial recognition networks, with concerns about their regulation and proportionality. “Once anyone walks into a Co-op store, they'll be subject to facial recognition scans... that might deter people from entering the stores during a pandemic,” says Edin Omanovic, an advocacy director who has been focusing on facial recognition at NGO Privacy International. The group has written to Co-op, regulators and law enforcement about the use of the tech.

Further than this, his colleague Ioannis Kouvakas says the use of the Facewatch technology raises legal concerns. “It's unnecessary and disproportionate,” says Kouvakas, a legal officer at Privacy International. Facewatch and Co-op both rely upon their legitimate business interests under GDPR and data protection laws for scanning people’s faces. They say that using the facial recognition technology allows them to minimize the impact of crimes and improve safety for staff. “You still need to be necessary and proportionate. Using an extremely intrusive technology to scan people's faces without them being 100 percent aware of the consequences and without them having the choice to provide explicit, freely given, informed and unambiguous consent, it's a no go,” Kouvakas says.

It’s not the first time Facewatch’s technology has been questioned. Other legal experts have cast doubt on whether there is a substantial public interest in using the facial recognition technology. The UK’s data protection regulator, the Information Commissioner’s Office (ICO), says companies must have clear evidence that there’s a legal basis for these systems to be used.
“Public support for the police using facial recognition to catch criminals is high, but less so when it comes to the private sector operating the technology in a quasi-law enforcement capacity,” a spokesperson for the ICO says. The ICO is investigating where live facial recognition is being used in the private sector and expects to report its findings early next year. “The investigation includes assessing the compliance of a range of private companies who have used, or are currently using, facial recognition technology,” the ICO spokesperson says. “Facewatch is amongst the organizations under consideration.”

Part of the ICO’s investigation into private sector facial recognition use covers where police forces are involved. There is growing concern about how police officials and law enforcement may be able to access images captured by privately run surveillance systems. In the US, Amazon’s smart Ring doorbells, which include motion tracking and face recognition, have been set up to provide data to police in some circumstances. And London’s Met Police was forced to apologize after handing images of seven people to a controversial private facial recognition system in Kings Cross in October 2019.

Both Co-op and Facewatch say their work involves no data sharing with police. “No facial images are shared with the police or with any other organization, nor are any other organization’s images shared with us for use within facial recognition,” the shop’s spokesperson says. However, Facewatch has in the past talked about striking relationships with police bodies around the UK. “Facewatch do not share data with the police and vice versa,” a spokesperson says.

In the coming years, the use of private facial recognition networks is certainly set to increase. The cameras and the cloud technology needed to run the AI systems are becoming more powerful and cheaper. Civil liberties groups say that as this expansion grows it needs to be transparent and properly regulated. “Public spaces in general will become completely surrounded by surveillance networks of some sort,” Omanovic says. “So if police are having access to any of them, or a large proportion of them, it will essentially obliterate the ability to walk down the street or enter any retail centre or any cafe without somehow being subject to surveillance network.”

This story originally appeared on WIRED UK.

Source: Some UK Stores Are Using Facial Recognition to Track Shoppers
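The pipeline the article describes is simple to sketch: a camera image is reduced to numerical data, compared against a watchlist, and a hit pushes an alert to staff phones. The following is a minimal, hypothetical illustration of that flow, not Facewatch's actual system; the threshold, the embedding source and the notification hook are all invented for the example.

```python
# Hypothetical sketch of a store watchlist check: faces arrive as numerical
# embeddings, are compared against stored watchlist embeddings, and a
# sufficiently close match triggers a staff notification.
import numpy as np

MATCH_THRESHOLD = 0.6  # invented cutoff; real systems tune this carefully


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def notify_staff(message: str) -> None:
    print("ALERT:", message)  # stand-in for the smartphone push described above


def on_entry(embedding: np.ndarray, watchlist: dict[str, np.ndarray]) -> None:
    """Compare one visitor's embedding against every watchlist entry."""
    for entry_id, stored in watchlist.items():
        if cosine_similarity(embedding, stored) > MATCH_THRESHOLD:
            notify_staff(f"possible watchlist match: {entry_id}")
            return
```

Note that everything contentious in the article lives outside this loop: who gets added to `watchlist`, how long entries persist, and who else the list is shared with.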
  3. It's the latest city in the US to ban the use of facial recognition by officials.

Portland, Maine, is the latest in the growing list of cities in the US to ban facial recognition technologies. According to the Bangor Daily News, people voted in favor of passing a new measure that strengthens Portland’s existing ban on the use of facial recognition tech by law enforcement agencies and public officials. City councilors originally agreed on a ban back in August with the understanding that the voter referendum would replace their ordinance if it passed. Now that it has passed, it can’t be touched for at least five years.

Back in September, Portland, Oregon, passed what could be the strictest municipal ban on facial recognition in the country — one that prohibits even private businesses from deploying the technology in public spaces. Before that, Boston, San Francisco and Oakland also prohibited its use, though their rules only apply to public officials.

This pushback against the use of facial recognition technologies stems from various studies showing that they struggle with race and gender bias. They still tend to misidentify women and people of color, putting them at risk. Case in point: false facial recognition matches led to at least a couple of wrongful arrests in Detroit this year.

The new measure Portland, Maine has passed will allow citizens to sue the city for illegal surveillance and will make them eligible to receive $100 per violation or $1,000, whichever is higher. It will also require the city to suppress illegally obtained evidence. Plus, violating the ordinance will now be grounds for a city official’s suspension or termination.

Source
  4. This summer, when officials in a few cities started using facial recognition software to identify protesters, many cried foul. Those objections turned ironic when protesters used facial recognition to identify police officers who had covered their badges or nameplates during protests. Powerful technology beloved by police had become a tool for accountability: David defeats Goliath.

(Photo: San Francisco, the first U.S. city to ban local government agencies’ use of facial recognition technology.)

Possibly satisfying—but profoundly naive. Protesters, civil libertarians, and ordinary Americans have far more to lose than gain from the normalization of facial recognition software. These incidents simply highlight the pressing need for more comprehensive regulation of this increasingly cheap and powerful tool, one that threatens to alter the balance of power between citizens and their government.

Law enforcement likes to talk the most about the least objectionable use of this technology: identifying suspected criminals from security footage. Police in Maryland, for example, identified the man who shot eight people at the Capital Gazette newspaper by running his photo against the Maryland Image Repository System, which contains millions of mug shots and driver’s license photos. But already, law enforcement is using this technology for mass surveillance and data-gathering purposes, not just one-off identifications where there is reasonable suspicion that a crime has been committed by an identifiable suspect. Except for a few localities that have banned its use, the only real limitation on law enforcement’s use of facial recognition technology is fear of public outrage.

In China, facial recognition is used to identify jaywalkers, who receive text messages warning them of punishment for second offenses. The hard truth is that there are few current legal obstacles to American police using the technology in much the same way. Congress has resisted calls for federal regulation. Decades of judicial curtailment of privacy protections—usually in the name of the drug war—have reduced the scope of Fourth and Fifth Amendment protections, and Americans out and about on public streets have surprisingly few privacy rights that police must observe. The only real obstacle to a China-like panopticon in the U.S. is funding and policymakers’ hesitance to outrage Americans.

Americans concerned about their privacy should not rely on the scruples of politicians and police, however. The NYPD, for example, recently issued a facial recognition policy that permits the technology’s use in investigating any crime, no matter how minor, including shoplifting. Florida courts recently greenlighted a lawsuit by a Coral Gables resident who alleged that local police use automated license plate readers to identify nearly every vehicle that enters or exits the city, and at many points in between. These readers note the license plate number, date, time, and location of hundreds of thousands of cars every day. The data is stored for three years and shared with other law enforcement agencies in the state—which means that police in one Florida beach town can build, and then share with other departments, an astonishingly detailed history of a person’s movements through town and into other localities. Other Florida towns have pressed to expand their own use of plate reader technology. Coral Gables didn’t really deny the allegations.
Instead, like all police departments with similar programs, it tends to talk out of both sides of its mouth, insisting that the law-abiding have nothing to fear while denying that they have any privacy interest in their public movements in the first place.

But the legal precedents cited by police rely on an unspoken assumption that resource constraints operate as their own kind of protection. Police don’t have the time or money to track and monitor every person moving about public byways and property, so they will necessarily limit their monitoring to those suspected of criminal activity. The average citizen’s anonymity is protected, while public safety is enhanced.

In an age of technologies such as license plate readers, “geofencing” (which uses smartphones’ GPS capability to map social networks and track personal data), and especially facial recognition, this unspoken assumption has been upended. Surveillance and monitoring data can be collected and stored on a mass basis, giving law enforcement the ability to build astonishingly detailed portraits of people’s lives. When the plaintiff in the Coral Gables case obtained his license plate reader records under a state sunshine law, the report ran for more than 80 pages. In our mass surveillance future, every trip to the doctor’s office, girlfriend’s apartment, library, church, gay bar, pharmacy, or liquor store can be identified, stored, and analyzed by law enforcement—with no need for individualized monitoring or individualized suspicion.

Facial recognition programs do have weaknesses, and they are currently far better at identifying white males than women or persons of color (which leads to the likelihood that the technology will misidentify black men and women as criminal suspects). But improvements have been exponential—the 127 leading software algorithms got 20 times better at searching photo databases between 2014 and 2018—and near-flawless facial recognition technology is likely within the industry’s grasp. Law enforcement, especially the FBI, has poured billions of dollars into databases of photographs to improve facial recognition, and is expanding efforts to identify persons based on voice prints or even walking gait. This technology is spreading, getting better, cheaper, and more powerful with every year, and legal precedents and policy debates haven’t kept up.

This is why protesters should not be so quick to embrace the use of this technology, even to hold police accountable. Nine times out of ten, Goliath beats David. Hoping for the occasional miracle isn’t a policy.

Source
  5. Facial recognition: Don't use it to snoop on how staff are feeling, says watchdog

The Council of Europe's new guidelines call for a ban on some applications of facial recognition, and stringent privacy safeguards when the technology is deployed.

Some applications of facial recognition that can lead to discrimination should be banned altogether, according to Europe's human rights watchdog, following months of deliberation on how best to regulate the technology. The Council of Europe has published new guidelines to be followed by governments and private companies that are considering the deployment of facial recognition technologies. For example, workplaces that use digital tools to gauge worker engagement based on their facial expressions, or insurance companies using the technology to determine customers' health or social status, could all be affected by the new guidelines.

The watchdog effectively advises that where the technology is used exclusively to determine an individual's skin color, religious belief, sex, ethnic origin, age, health or social status, the use of facial recognition should be prohibited, unless it can be shown that its deployment is necessary and proportionate. Under the same conditions, the ban should also apply to some of the digital tools that can recognize emotions, detect personality traits or mental health conditions, and which can be used unfairly in hiring processes or to determine access to insurance and education.

"At its best, facial recognition can be convenient, helping us to navigate obstacles in our everyday lives. At its worst, it threatens our essential human rights, including privacy, equal treatment and non-discrimination, empowering state authorities and others to monitor and control important aspects of our lives – often without our knowledge or consent," said Council of Europe Secretary General Marija Pejčinović Burić. "But this can be stopped. These guidelines ensure the protection of people's personal dignity, human rights and fundamental freedoms, including the security of their personal data."

In addition to a ban on specific applications, the organization also designed regulations to protect citizens' privacy when facial recognition technology is deemed a suitable tool to use. For example, there should be strict parameters and criteria that law enforcement agencies must adhere to when they find it justifiable to use facial recognition tools; and where the use of the technology is covert, it should only be allowed to "prevent an imminent and substantial risk to public security."

The Council of Europe also called for a public debate on the deployment of the technology in public places and schools, where it argued that less intrusive mechanisms exist. Private companies should not be allowed to use facial recognition in environments like shopping centers, be it for marketing or private security purposes. When they deploy the technology, they must get explicit consent from those who will be affected and offer them an alternative solution.

The Council of Europe's new guidelines were built on top of an agreement known as Convention 108, first published in 1981, which constituted at the time the first legally binding instrument in the field of data protection. In 2018, the convention was modernized as "Convention 108+" to adapt the agreement to the digital age, and it now has 55 participating states.
Despite the rewriting of the convention, experts have worried that European regulation is not suited to the age of AI and can lead to detrimental outcomes for citizens, especially in the case of problematic technologies like facial recognition. Martin Ebers, the co-founder of the Robotics and AI Law Society (RAILS), told ZDNet: "We have regulatory frameworks that are not specifically tailored to AI systems, but are nevertheless applied to AI systems. For example, there are no specific rules at an EU level to deal with facial recognition systems."

The last few years have seen repeated attempts from various European institutions and activists to impose stricter regulation on AI systems, and particularly facial recognition tools. In a white paper on artificial intelligence published last year, the EU said it would consider banning the technology altogether, which was shortly followed by the European Data Protection Supervisor Wojciech Wiewiórowski arguing in favor of a moratorium on the use of facial recognition in public spaces.

Although the guidelines are a set of reference measures rather than legally binding laws, the document provides the most extensive set of proposals so far to regulate facial recognition technology in Europe. The measures will go through the European Parliament before being passed as new laws.

Fanny Hidvégi, Europe Policy Manager at the Brussels-based think tank Access Now, told ZDNet: "We urge the Council of Europe to take the next step and support a ban for applications that are in inherent conflict with fundamental rights. No democratic debate, temporary pause or safeguards can mitigate individual and societal harms caused by such use of these technologies."

Source: Facial recognition: Don't use it to snoop on how staff are feeling, says watchdog
  6. Human rights group urges New York to ban police use of facial recognition

The technology has repeatedly come up short in tests for racial bias and has been restricted in other major cities across the US.

Facial recognition technology amplifies racist policing, threatens the right to protest and should be banned globally, Amnesty International said as it urged New York City to pass a ban on its use in mass surveillance by law enforcement.

“Facial recognition risks being weaponised by law enforcement against marginalised communities around the world,” said Matt Mahmoudi, AI and human rights researcher at Amnesty. “From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights. New Yorkers should be able to go about their daily lives without being tracked by facial recognition. Other major cities across the US have already banned facial recognition, and New York must do the same.”

Albert Fox Cahn of New York’s Urban Justice Centre, which is supporting Amnesty’s Ban the Scan campaign, said: “Facial recognition is biased, broken, and antithetical to democracy. For years, the [New York police department] has used facial recognition to track tens of thousands of New Yorkers, putting New Yorkers of colour at risk of false arrest and police violence. Banning facial recognition won’t just protect civil rights: it’s a matter of life and death.”

In tests for racial bias, facial recognition technology has repeatedly come up short. In 2016, a team at Georgetown University analysed more than 10,000 pages of documents on the use of the technology by US police departments. It found that the departments were applying the technology to databases that were “disproportionately African American”, even while using software that was particularly bad at recognising black faces. In another study, the American Civil Liberties Union fed photos of members of Congress into Amazon’s facial recognition tool, Rekognition, and asked it to find those that matched with a mugshot database. The system returned 28 matches, “disproportionately of people of colour”.

Initially, Amnesty will ask New Yorkers to file official comments on NYPD use of facial recognition under the Public Oversight of Surveillance Technologies Act, passed by the city last summer, which required police to disclose their use of surveillance tech by 12 January this year and provided a 45-day window for public comment. Amnesty will also help New Yorkers generate freedom of information requests to see where facial recognition technology is being used, and run an open-source intelligence campaign to spot and tag facial-recognition-capable cameras. Eventually, the organisation said, it hopes to make the campaign global.

Amnesty is calling “for a total ban on the use, development, production, and sale of facial recognition technology for mass surveillance purposes by the police and other government agencies”, as well as for a ban on exports of such systems. It is not alone in calling for the technology to be banned from law enforcement, nor in highlighting the risks of racial discrimination and suppression of the right to protest. In March, the UK Equality and Human Rights Commission said use of the technology should be suspended until its impact had been independently scrutinised and laws regulating its application actively passed. “The law is clearly on the back foot with invasive [automated facial recognition] and predictive policing technologies,” said EHRC chief executive Rebecca Hilsenrath.
“It is essential that their use is suspended until robust, independent impact assessments and consultations can be carried out, so that we know exactly how this technology is being used and are reassured that our rights are being respected.” Source: Human rights group urges New York to ban police use of facial recognition
  7. Intel launches RealSense ID for on-device facial recognition

Intel today launched the newest addition to RealSense, its product range of depth and tracking technologies designed to give machines depth perception capabilities. Called RealSense ID, it’s an on-device solution that combines an active depth sensor with a machine learning model to perform facial authentication. Intel claims RealSense ID adapts to users as physical features like facial hair and glasses change over time, and works in various lighting conditions for people “with a wide range of heights or complexions.”

But numerous studies and VentureBeat’s own analyses of public benchmark data have shown facial recognition algorithms are susceptible to various biases. One issue is that the datasets used to train the algorithms skew white and male. IBM found that 81% of people in the three face-image collections most widely cited in academic studies have lighter-colored skin. Academics have found that photographic technology and techniques can also favor lighter skin, including everything from sepia-tinged film to low-contrast digital cameras. As a result, Amazon, IBM, Microsoft, and others have self-imposed moratoriums on the sale of facial recognition systems.

In response to this and other criticism, Intel says RealSense ID, which starts at $99 and will launch in Q1 2021, has built-in anti-spoofing technology to protect against false entry attempts using photographs, videos, or masks. The company also says it has a “one-in-1-million” false acceptance rate, processes facial images locally, encrypts user data, and is activated through “user awareness.” In other words, it won’t authenticate unless prompted by a preregistered user. Intel sees RealSense ID initially being used for smart locks, access control, point-of-sale, ATMs, and kiosks. The company eventually plans to expand into subcategories like finance, health care, and smart access control.

“We’ve done extensive data collection of all ethnicities from Asia, Europe, Middle East, Africa,” Intel head of product management and marketing Joel Hagberg told reporters during a briefing this week. “We were very careful to ensure that we have all ethnicities covered.” Corporate vice president Sagi Ben Moshe added: “We invested a lot of money to build a proprietary database … It was a massive data-capturing project that took us a lot of time, and this is what makes it unique.” He and Hagberg didn’t reveal the size of the database, whether audits had been conducted to identify biases it might have, or the method by which photos and subjects contained within the database were selected.

RealSense ID follows the launch of Amazon Web Services’ Panorama Appliance, a plug-in device that connects to a network and identifies video streams from existing cameras. The Panorama Appliance ships with computer vision models for manufacturing, retail, construction, and other industries, supporting models built in SageMaker. It also integrates with AWS IoT services, including SiteWise, to send data for broader analysis. Alongside the Panorama Appliance is the AWS Panorama SDK, which enables hardware vendors to build new cameras that run computer vision at the edge. It works with chips designed for computer vision and deep learning from Nvidia and Ambarella, and Panorama-compatible cameras will work out of the box with AWS machine learning services.

Source: Intel launches RealSense ID for on-device facial recognition
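A "one-in-1-million" false acceptance rate is, in effect, a statement about where the match threshold sits on the distribution of impostor scores. The toy calculation below uses synthetic score distributions with invented means and variances (nothing here comes from Intel); it shows how a vendor might pick a threshold to hit a target FAR, and why a stricter threshold trades off against falsely rejecting genuine users.

```python
# Toy threshold selection for a face matcher. Impostor pairs are different
# people; genuine pairs are the same person. Real systems measure these
# score distributions on large labeled datasets; here they are simulated.
import numpy as np

rng = np.random.default_rng(0)
impostor_scores = rng.normal(0.30, 0.10, 1_000_000)  # invented distribution
genuine_scores = rng.normal(0.85, 0.05, 10_000)      # invented distribution

target_far = 1e-6  # one false acceptance per million impostor attempts
threshold = np.quantile(impostor_scores, 1 - target_far)

far = np.mean(impostor_scores >= threshold)  # ~target_far by construction
frr = np.mean(genuine_scores < threshold)    # genuine users wrongly rejected

print(f"threshold={threshold:.3f}  FAR={far:.2e}  FRR={frr:.2%}")
```

The design tension is visible in the last two lines: pushing the threshold up cuts false acceptances but raises the false rejection rate, which is why vendors quote FAR and tune for usability separately.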
  8. If you’ve been thinking about trying your hand at social media’s 10 Year Challenge and are concerned about your privacy, you may want to take a moment to see why some are saying the trend may not be so harmless. Like many fads in the social realm, this one could come with some unintended consequences.

First, for those who are catching up on the 10-year craze: the challenge, otherwise known as #2009vs2019, the #HowHardDidAgingHitYouChallenge and the #GloUpChallenge, involves posting two photos of yourself – one from 2009 and one from 2019. In lieu of that, 2008 and 2018, or some other decade or substantial length of time. On Facebook, people shared their first profile picture alongside their current picture. In all cases, the idea is to show how you’ve changed (or stayed the same, like Reese Witherspoon) over that period. Celebrities ranging from Janet Jackson to Snooki, Kevin Smith, Fat Joe and Tyra Banks have taken up the challenge. Some, like Smith and Fat Joe, showed off a considerable slimdown, while others just had fun looking back 10 years. (Or 50, like Samuel L. Jackson.)

What could go wrong? “Y’all posting these #2009v2019 comparison photos and that’s how you get your identity stolen,” tweeted Desus Nice of the upcoming Showtime series “Desus vs. Mero” on Sunday.

Author and tech consultant Kate O’Neill raised a subtler concern. “Imagine that you wanted to train a facial recognition algorithm on age-related characteristics, and, more specifically, on age progression (e.g. how people are likely to look as they get older),” she says. “Ideally, you’d want a broad and rigorous data set with lots of people’s pictures. It would help if you knew they were taken a fixed number of years apart—say, 10 years.”

It’s not that Facebook or Twitter or Instagram didn’t already have photos of you, she says. It’s just that they weren’t clearly organized in a specific, labeled progression; the date you posted a profile picture doesn’t necessarily mean that’s when it was taken. With this trend, we are providing more detailed data by denoting when each photo was taken. “In other words, thanks to this meme, there’s now a very large data set of carefully curated photos of people from roughly 10 years ago and now,” O’Neill says.

If you’re OK with that, by all means, proceed with showing off your glo-up. But know this: “Age progression could someday factor into insurance assessment and healthcare,” O’Neill says, allowing the lighthearted trend a dystopian ending. “For example, if you seem to be aging faster than your cohorts, perhaps you’re not a very good insurance risk. You may pay more or be denied coverage.” And law enforcement could use facial recognition technology to track people — she notes that Amazon sold its facial recognition services to police departments. But O’Neill also says the technology can be used to find missing children.

Ultimately, every digital footprint comes with a wide host of implications for how that information can be used. Of course, it’s up to you to decide what photos and information you want to share, even if you’re just doing it for the “likes.”

Source
  9. It's a novel way of raising funds in Brexit Britain. Good to know police won't be abusing new technology.

GOOD NEWS! THE MET POLICE'S controversial facial recognition trial has earned the public purse £90 it wouldn't have otherwise had. The bad news is that the way the money was earned should really make everyone stop and have a long hard think about where society is going.

This particular chilling anecdote comes from campaign group Big Brother Watch, and describes a man who saw the warning of automatic facial recognition cameras and took steps to avoid them. "He simply pulled up the top of his jumper over the bottom of his face, put his head down and walked past," explained Big Brother Watch director Silkie Carlo to The Independent. "There was nothing suspicious about him at all … you have the right to avoid [the cameras], you have the right to cover your face. I think he was exercising his rights."

Carlo explained that this was enough to trigger suspicions, and the man was followed and eventually accosted by officers who "pulled him over to one side" and demanded to see his ID, which he provided. It became heated, and the man told the officers to "piss off" - we think, anyway; the Independent has prissily censored the word, so it might be "pony." Probably not, though, as said words landed the man a £90 fine for a public order offence for swearing. "He was really angry," Carlo added, although in the circumstances we think that's understandable.

The Metropolitan Police had previously put out a statement saying that "anyone who declines to be scanned will not necessarily be viewed as suspicious." It looks like the word "necessarily" is doing an awful lot of heavy lifting in that sentence.

Source
  10. Samsung Galaxy S10 facial recognition fooled by a video of the phone owner

There's a reason why Samsung tells users to avoid using facial recognition screen locking on Galaxy S10 smartphones.

Experts have proven once again that facial recognition on modern devices remains hilariously insecure and can be bypassed using simple tricks such as showing an image or a video in front of a device's camera. The latest device to fall victim to such attacks is the Samsung Galaxy S10, Samsung's latest top-tier phone and considered one of the world's most advanced smartphones to date.

Unfortunately, the Galaxy S10's facial recognition feature remains just as weak as the one supported in its previous versions or on the devices of its competitors, according to Lewis Hilsenteger, a smartphone reviewer better known as Unbox Therapy on YouTube. Hilsenteger showed in a demo video uploaded to his YouTube channel last week how putting up a video of the phone owner in front of the Galaxy S10's front camera would trick the facial recognition system into unlocking the device. Similarly, an Italian journalist from SmartWorld.it unlocked a Galaxy S10 using nothing but a photo, which would be much easier for an attacker to obtain than a front-facing video of the device owner. However, this method didn't always yield the same result when others tried to replicate it -- unlike Hilsenteger's approach, which seemed to work almost every time.

Hearing that users have cracked the facial recognition screen lock feature in one of the world's top phones didn't trigger the same shock-and-awe reaction that it used to a few years back. This is because in the past few years, both security researchers and regular users alike have bypassed the facial recognition feature on a plethora of devices. For example, users bypassed the facial recognition on a Samsung S8 using a photo, they bypassed Apple's Face ID feature on an iPhone X with a $150 mask, they broke into many top-tier Android phones using a 3D-printed head, and they used the same 3D-printed-head method to gain access to a Windows 10 device protected by the Windows Hello biometrics solution. In fact, the issue is quite widespread. A study by a Dutch non-profit last year found that investigators could bypass face unlock-type features on 42 out of the 110 smartphones they tested.

The issue with all these facial recognition systems implemented in current commercial products is that they don't perform any type of 3D depth scan of the tested face, but merely look at the position of the eyes, nose, or mouth to authorize a person and unlock a device -- hence the reason most of them can be bypassed by flashing photos or videos in front of their cameras.

Source
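The article's closing point (2D systems compare landmark geometry, and a photo reproduces that geometry perfectly) can be made concrete. Below is a deliberately simplified sketch with invented numbers, not any vendor's code: the landmark check passes for a flat photo, and only an added depth-based liveness test catches it.

```python
# Toy illustration: a 2D landmark-geometry check cannot distinguish a live
# face from a photo of it, because the photo reproduces the same geometry.
# A depth-based liveness check (as on devices with 3D sensors) rejects the
# flat image. All numbers are invented for the example.
import numpy as np


def landmark_match(probe: np.ndarray, enrolled: np.ndarray, tol: float = 0.05) -> bool:
    """Compare normalized distances between eyes, nose and mouth."""
    return bool(np.linalg.norm(probe - enrolled) < tol)


def looks_live(depth_map_mm: np.ndarray, min_relief_mm: float = 20.0) -> bool:
    """A real face has tens of millimetres of relief; a photo is nearly flat."""
    return float(depth_map_mm.max() - depth_map_mm.min()) >= min_relief_mm


enrolled = np.array([0.42, 0.31, 0.27])  # owner's stored landmark ratios
photo_probe = enrolled + 0.001           # a photo copies the geometry almost exactly
photo_depth = np.full((64, 64), 300.0)   # flat print held ~30 cm from the sensor

print(landmark_match(photo_probe, enrolled))                              # True: 2D check fooled
print(landmark_match(photo_probe, enrolled) and looks_live(photo_depth))  # False: depth rejects it
```

This is why the 3D-printed-head attacks mentioned above still worked against some depth-aware systems: a printed head defeats a flatness check, so robust anti-spoofing layers on further signals such as infrared response or micro-movement.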
  11. Facial recognition: Apple, Amazon, Google and the race for your face

Facial recognition technology is both innovative and worrisome. Here's how it works and what you need to know.

Facial recognition is a blossoming field of technology that is at once exciting and problematic. If you've ever unlocked your iPhone by looking at it, or asked Facebook or Google to go through an unsorted album and show you pictures of your kids, you've seen facial recognition in action. Whether you want it to or not, facial recognition (sometimes called simply "face recognition") is poised to play an ever-growing role in your life. Your face could be scanned at airports or concerts, with or without your knowledge. You could be targeted by personalized ads thanks to cameras at shopping malls.

Facial recognition has plenty of upside. The tech could help smart home gadgets get smarter, sending you notifications based on who it sees and offering more convenient access to friends and family. But at the very least, facial recognition raises questions of privacy. Experts have concerns ranging from the overreach of law enforcement, to systems with hidden racial biases, to hackers gaining access to your secure information.

Over the next few weeks, CNET will be diving into facial recognition with in-depth pieces on a wide variety of topics, including the science that allows it to work and the implications, both positive and negative, for many of its applications. To get you up to speed, here's a brief overview: what facial recognition is, how it works, where you'll find it in use today, and a few of the implications of this rapidly expanding corner of technology.

What is facial recognition?

Facial recognition is a form of biometric authentication, which uses body measurements to verify your identity. It is the subset of biometrics that identifies people by measuring the unique shape and structure of their faces. Different systems use different techniques, but at its core, facial recognition uses the same principles as other biometric authentication techniques, such as fingerprint scanners and voice recognition.

How does facial recognition work?

All facial recognition systems capture either a two- or three-dimensional image of a subject's face, and then compare key information from that image to a database of known images. For law enforcement, that database could be collected from mugshots. For smart home cameras, the data likely comes from pictures of people you've identified as relatives or friends via the accompanying app.

Woodrow "Woody" Bledsoe first developed facial recognition software at a firm called Panoramic Research back in the 1960s using two-dimensional images, with funding for the research coming from an unnamed intelligence agency. Even now, most facial recognition systems rely on 2D images, either because the camera doesn't have the ability to capture depth information -- such as the length of your nose or the depth of your eye socket -- or because the reference database consists of 2D images such as mugshots or passport photos. 2D facial recognition primarily uses landmarks such as the nose, mouth and eyes to identify a face, gauging both the width and shape of the features, and the distance between the various features of the face.
Those measurements are converted to a numerical code by facial recognition software, which is used to find matches. This code is called a faceprint. This geometric system can struggle with different angles and lighting: a straight-on shot of a face will show a different distance from nose to eyes, for instance, than a shot of a face turned to the side. The problem can be somewhat mitigated by mapping the 2D image onto a 3D model and undoing the rotation.

Adding a third dimension

3D facial recognition software isn't as easily fooled by angles and light, and doesn't rely on average head size to guess at a faceprint. With cameras that sense depth, the faceprint can include the contours and curve of the face as well as the depth of the eyes and distances from points like the tip of your nose. Most cameras gauge this depth by projecting invisible spectrums of light onto a face and using sensors to capture the distance of various points of this light from the camera itself. Even though these 3D sensors can capture much more detail than a 2D version, the basis of the technology remains the same -- turning the various shapes, distances and depths of a face into a numerical code and matching that code to a database. If that database consists of 2D images, software needs to convert the 3D faceprint back to a 2D faceprint to get a match.

Apple's Face ID uses 30,000 infrared dots that map the contours of your face. The iPhone then remembers the relative location of those dots the next time you try to unlock your phone. Even these more advanced systems can be defeated by something as simple as different facial expressions, or glasses or scarves that obscure parts of your face. Apple's Face ID can struggle to match your tired, squinting, just-woke-up face to your made-up, caffeinated, ready-for-the-day face.

Reading your pores

A more recent development, called skin texture analysis, could help future applications overcome all of these challenges. Developed by Identix, a tech company focused on developing secure means of identification, skin texture analysis differentiates itself by functioning at a much smaller scale. Instead of measuring the distance between your nose and your eyes, it measures the distance between your pores, then converts those numbers into a mathematical code. This code is called a skinprint. The method could theoretically be so precise that it can tell the difference between twins. Identix is currently working to integrate skin texture analysis into facial recognition systems alongside a more typical 3D face map, and the company claims the tech increases accuracy by 25 percent.
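All three variants described here (2D landmarks, 3D depth, skin texture) reduce to the same operation: turn measurements into a numerical vector and find the closest stored vector. Below is a toy illustration of that core loop under simplified assumptions; it uses a handful of 2D landmark points and plain Euclidean distance, where real faceprints are far richer.

```python
# Toy "faceprint": pairwise distances between detected 2D landmarks are
# packed into a vector, normalized so overall face size drops out, and
# matched against a database by Euclidean distance. Real systems use far
# richer features, but the capture -> encode -> compare loop is the same.
from itertools import combinations
import numpy as np


def faceprint(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (N, 2) array of points (eyes, nose tip, mouth corners...)."""
    dists = np.array([np.linalg.norm(a - b) for a, b in combinations(landmarks, 2)])
    return dists / dists.max()  # scale-invariant


def best_match(probe: np.ndarray, database: dict[str, np.ndarray]) -> tuple[str, float]:
    scores = {name: float(np.linalg.norm(probe - fp)) for name, fp in database.items()}
    name = min(scores, key=scores.get)
    return name, scores[name]


db = {"alice": faceprint(np.array([[0, 0], [4, 0], [2, 3], [2, 5]], dtype=float)),
      "bob":   faceprint(np.array([[0, 0], [5, 0], [2, 2], [3, 6]], dtype=float))}
probe = faceprint(np.array([[0, 0], [4.1, 0], [2, 3.1], [2, 5]], dtype=float))
print(best_match(probe, db))  # ('alice', <small distance>)
```

Normalizing by the largest distance is what lets the same face match at different distances from the camera; it is also why pose and expression changes, which alter the ratios themselves, degrade 2D systems.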
Where is facial recognition being used?

While Bledsoe laid the groundwork for the tech, modern facial recognition began in earnest in the 1980s and '90s thanks to mathematicians at MIT. Since then, facial recognition has been integrated into all manner of commercial and institutional applications with varying degrees of success. The Chinese government uses facial recognition for large-scale surveillance through public CCTV cameras, both to catch criminals and to monitor the behavior of all individuals, with the intent of turning the data into a score. Seemingly harmless offenses like buying too many video games or jaywalking can lower your score. China uses that score for a sort of "social credit" system that determines whether the individual should be allowed to get a loan, buy a house, or even do much simpler things like board a plane or access the internet.

The London Metropolitan Police also use it as a tool when narrowing their search for criminals, though their system supposedly isn't very accurate -- with incorrect matches reported in a whopping 98 percent of cases. In the US, police departments in Oregon and Florida are teaming up with Amazon to install facial recognition into government-owned cameras. Facial recognition is undergoing trials at airports to help move people through security more quickly. The Secret Service is testing facial recognition systems around the White House. Taylor Swift even used it to help identify stalkers at one of her concerts. Facial recognition famously led to the arrest of the Capital Gazette shooter in 2018 by matching a picture of the suspect to an image repository of mugshots and pictures from driver's licenses. The upcoming 2020 Olympics in Tokyo will be the first to use facial recognition to help improve security.

Facial recognition could have large implications for retail outlets and marketers as well, beyond simply watching for thieves. At CES 2019, consumer goods giant Procter & Gamble showed a concept store where cameras could recognize your face and make personalized shopping recommendations.

Bringing facial recognition home

Aside from large-scale installations, facial recognition has several uses in consumer products. Beyond iPhones, some phones with Google's Android operating system, like the Google Pixel 2 and the Samsung Galaxy S9, are capable of facial recognition, but the technology on Android isn't yet secure enough to verify mobile payments. The next version of Android is expected to get a more secure facial recognition system closer to Apple's Face ID, although Samsung did not incorporate any facial recognition into its newest phone, the Galaxy S10, as many industry watchers had expected.

Facebook has used facial recognition for years to suggest tags for pictures. Other photo applications, such as Google Photos, are getting better at doing the same. In the smart home, after starting as a niche feature in connected cams such as the Netatmo Welcome, facial recognition is now built into several popular models, including the Nest Hello video doorbell. We saw a bunch of new gadgets with the tech on display at CES 2019.

Connected cams compare faces with others they've seen before so you can customize notifications based on who the camera sees. All the models we've tested take a while to learn faces, as they need to be able to recognize the members of your household at various angles and in various outfits. Once the cameras learn, you can use facial recognition to make your connected security system that much smarter by making your notifications more relevant to what you actually want to know. Beyond the security uses in the home, even robots like Lovot and Sony's Aibo robot dog can recognize faces. Aibo and others learn faces not to track who comes and goes, but to adapt to the specific preferences of different people over time.

What are the implications?

Unlike other forms of biometric authentication, cameras can gather information about your face with or without your knowledge or consent. If you're a privacy-minded person, you could potentially be exposing your data when in a public place without knowing it. Because the technology is so new, there aren't any laws in the US limiting what companies can do with images of your face after they capture it.
A bipartisan bill was recently introduced in the Senate to rectify the lack of regulation. The American Civil Liberties Union delivered a petition to Amazon last year asking it to stop giving its facial recognition technology to law enforcement agencies and the government, calling the prospect "a user manual for authoritarian surveillance."

According to a report by Buzzfeed, the US Customs and Border Protection agency plans to implement facial recognition to verify the identity of passengers on international flights in airports across the country. The Electronic Privacy Information Center shared documents with Buzzfeed suggesting that CBP skipped over gathering public feedback before starting to implement these systems, that the systems have a questionable accuracy rate, and that little established privacy regulation governs what the airlines can do with this facial data after they collect it.

NBC News reported that the databases of pictures used to improve facial recognition often come from social media sites, without the consent of the subject or photographer. Companies like IBM have the stated goal of using these images to try to improve the accuracy of facial recognition, particularly among people of color. Theoretically, by ingesting the data from a large catalog of faces, the system can fine-tune its algorithms to account for a larger variety of facial structures. The Electronic Frontier Foundation notes that current facial recognition systems tend to produce a disproportionately high number of false positives when identifying minorities. NBC's story also details how it can be tedious to impossible for private citizens to opt out of having their pictures used in these databases.

Facebook faces a class action lawsuit over its own facial recognition technology, called DeepFace, which identified people in photos without their consent. Smart home company Ring, an Amazon subsidiary, also came under fire last year for filing patents based on facial recognition technology that could have violated civil rights. Ring's video doorbells would have monitored neighborhoods for known sex offenders and those on "most wanted" lists, and could then have automatically notified law enforcement. The idea was criticized as likely to target those unfairly deemed a threat, and potentially even political activists.

The science behind facial recognition is certainly exciting, and the tech could lead to a safer and more personal smart home, but facial recognition could easily result in a loss of privacy, unjust profiling and violations of personal rights. While the impact of facial recognition is still being determined and debated, it's important to recognize that facial recognition is no longer some distant concept reserved for science fiction. For better or worse, facial recognition is here now and spreading quickly. Check back throughout the month as CNET dives deeper into the implications of this developing technology.

Source
  12. Facial recognition isn't ready to spot terrorists on the road.

New York's bid to identify road-going terrorists with facial recognition isn't going very smoothly so far. The Wall Street Journal has obtained a Metropolitan Transportation Authority email showing that a 2018 technology test on New York City's Robert F. Kennedy Bridge not only failed, but failed spectacularly -- it couldn't detect a single face "within acceptable parameters." An MTA spokesperson said the pilot program would continue at RFK as well as other bridges and tunnels, but it's not an auspicious start.

The problem may be inherent to the early state of facial recognition at these speeds. Oak Ridge National Laboratory achieved more than 80 percent accuracy in a study that spotted faces through windshields, but that was at low speed. It might not be ready for someone barrelling down a bridge.

Not that privacy advocates will necessarily mind. Facial recognition is already a contentious issue, let alone when it's being used to peep into cars. Whether or not you see it as an Orwellian intrusion that could lead to abuses of power, there are accuracy problems at the best of times. It sometimes has trouble recognizing non-white people and women, and it assumes a culprit won't wear a mask or another disguise. While no terrorist detection system is foolproof, there are real concerns that current approaches could generate false positives or let suspects slip through the cracks.

Source
  13. With facial recognition, shoplifting may get you banned in places you've never been

There are hundreds of stores using facial recognition -- and no rules or standards to prevent abuse.

(Photo: A live demonstration of artificial intelligence and facial recognition in dense-crowd spatial-temporal technology at the Horizon Robotics exhibit at the Las Vegas Convention Center during CES 2019. David McNew/AFP/Getty Images)

At my bodega down the block, photos of shoplifters sometimes litter the windows, a warning to would-be thieves that they're being watched. Those unofficial wanted posters come and go as incidents fade from the owner's memory. But with facial recognition, getting caught in one store could mean a digital record of your face is shared across the country. Stores are already using the technology for security purposes and can share that data -- meaning that if one store considers you a threat, every business in that network could come to the same conclusion. One mistake could mean never being able to shop again.

While that may be good news for shopkeepers, it raises concerns about potential overreach. It's just one example of how facial recognition straddles the line between being a force for good and being a possible violation of personal privacy. Privacy advocates fear that regulations can't keep up with the technology -- found everywhere from your phone to selfie stations -- leading to devastating consequences. "Unless we really rein in this technology, there's a risk that what we enjoy every day -- the ability to walk around anonymous, without fearing that you're being tracked and identified -- could be a thing of the past," said Neema Singh Guliani, the American Civil Liberties Union's senior legislative counsel.

The technology is appearing in more places every day. Taylor Swift uses it at her concerts to spot potential stalkers, with cameras hidden in kiosks for selfies. It's being used in schools in Sweden to mark attendance and at airports in Australia for passengers checking in. Supermarkets in the UK are using it to determine whether customers are old enough to buy beer. Millions of photos uploaded onto social media are being used to train facial recognition without people's consent. Revenue from facial recognition is expected to reach $10 billion by 2025, more than double the market's total in 2018.

But despite that forecast for rapid growth, there's no nationwide regulation of the technology in the US. The gap in standards means it's possible the technology being used at US borders has the same accuracy rate as facial recognition used to take selfies at a concert. Accuracy rates matter -- they're the difference between facial recognition determining you're a threat or an innocent bystander -- but there's no standard on how precise the technology needs to be.

Without any legal restrictions, companies can use facial recognition without limits. That means being able to log people's faces without telling customers their data is being collected. Two facial recognition providers told CNET that they don't check on their customers to make sure they're using the data properly. There are no laws requiring them to.
"So far, we haven't been able to convince our legislators that this is a big problem and will be an even larger problem in the future," said Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. "The time is now to regulate this technology before it becomes embedded in our everyday lives." Faced everywhere At the International Security Conference in New York last November, I walked past booths with hundreds of surveillance cameras. Many of them used facial recognition to log my gaze. These companies want this technology to be part of our daily routines -- in stores, in offices and in apartment buildings. One company, Kogniz, boasted it was capable of automatically enrolling people as they enter a camera's view. "Preemptively catalogues everyone ever seen by the camera so they can be placed on a watchlist," Kogniz's business card says. Kogniz's business card, boasting that it could create automatic watchlists. Alfred Ng / CNET This technology is available and advertised as a benefit to stores without any privacy concerns in mind. As more stores adopt this dragnet approach to facial recognition, data on your appearance could be logged anywhere you go. California-based video surveillance startup Kogniz launched in 2016 and now has about 30 retail and commercial customers, with thousands of security cameras using its facial recognition technology. Stores use Kogniz's facial recognition to identify known shoplifters. If a logged person tries entering the store, Kogniz's facial recognition will be able to detect that and flag security, Daniel Putterman, the company's co-founder and director, said in an interview. And it's not just for that one location. "We are a cloud system, so we're inherently multi-location," Putterman said. If someone is barred from one store because of facial recognition, that person could potentially be prevented from visiting another branch of that same store ever again. Kogniz also offers a feature called "collaborative security," which lets clients opt in to share facial recognition data with other customers and share potential threats across locations. That would mean facial recognition could detect you in a store you've never even visited to before. Putterman said none of Kogniz's customers has opted into that program yet, but it's still available. Recognition without regulation People don't have to be convicted of a crime to be placed on a private business' watch list. There aren't any rules or standards governing how companies use facial recognition technology. "The clients use it at their own discretion for their own purposes," Putterman said. "If someone is falsely accused of being a shoplifter, that's beyond our control." Amazon provides Rekognition, its facial recognition software, to law enforcement agencies like the Washington County sheriff's office in Oregon. The tech giant recommends that police use Rekognition with a 99 percent confidence threshold for the most accurate results. But the sheriff's office told Gizmodo in January that it doesn't use any established standards when employing facial recognition. Security cameras with facial recognition tech inside Gemalto, a digital security company, has been providing facial recognition to the Department of Homeland Security, which uses it at US exits to log foreign visitors leaving the country. The company also works with local police departments on facial recognition. 
As with Amazon's Rekognition, its customers aren't beholden to any standards when using its facial recognition. "Once the customer has it, they're going to operate with the standards that they define," said Neville Pattinson, Gemalto's senior vice president for government programs. "It's not our responsibility to have involvement past the point of delivery." The lack of standards across the entire industry leads to a massive potential for abuse, privacy advocates say. One store that uses Kogniz shares its login information with its local police department, Putterman said. He declined to disclose which store, but the police are automatically alerted when the store's facial recognition detects a flagged face -- even if the person is not committing a crime. "The retailer has given them permission to log in and see what's going on," Putterman said. "That local police department can look at the live footage and decide whether or not they want to act on it." That practice comes with legal concerns, Guliani said. Police need a warrant to track a specific person's whereabouts, as the Supreme Court decided last June regarding phone location data. But with facial recognition, authorities can get around this limitation. "That means without a warrant, without any oversight, what law enforcement can do is track your movements any time you walk into a store," Guliani said. And as facial recognition expands into more places, privacy advocates worry that more businesses will provide that access to law enforcement agencies with no limits. "If they're using it for when you're shopping, or driving through the Holland Tunnel for your daily commute, what happens when law enforcement wants to tap into all of those systems and use them for recording?" said Jake Laperruque, a senior counsel at the Constitution Project.
'We need clear rules'
The federal government hasn't taken action on facial recognition yet, but local governments are starting to limit how the technology can be used. In late January, San Francisco supervisor Aaron Peskin proposed legislation to completely ban the city's government agencies from using facial recognition. State lawmakers across the US have suggested similar legislation, in states like Washington and Massachusetts. Last Thursday, senators proposed a bill that would stop businesses from using the technology without notifying customers and prevent them from sharing that data without people's consent. If passed, it would be the first federal law protecting your privacy from businesses using the technology. Facial recognition providers balked at the proposed regulations, arguing that the technology's benefits outweigh privacy concerns. "I would hate to see the technology end up with reactionary restrictions on the basis of concerns on privacy," Pattinson said. Putterman said Kogniz understands the potential for abuse, and said his company does not sell facial recognition to government agencies. He hopes for regulations to arrive so that the technology can be used responsibly. As it finds its way into every corner of our daily lives, facial recognition offers benefits -- but without regulations, it could grow into an uncontrollable privacy violation, advocates argue. Before it does, many are calling for lawmakers to protect your privacy. "This isn't something that needs to be completely banned or cut off, but we need clear rules," Laperruque said.
Source
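To make the confidence-threshold point above concrete: Amazon's Rekognition exposes that 99 percent figure as an ordinary request parameter. Below is a minimal sketch of a watchlist lookup against Rekognition's public API at Amazon's recommended setting; the collection name and image file are hypothetical, and this illustrates the API rather than any particular store's or agency's deployment.

```python
# Hypothetical watchlist lookup with AWS Rekognition via boto3.
# "store-watchlist" and "entrance_frame.jpg" are illustrative names only.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

with open("entrance_frame.jpg", "rb") as f:
    frame = f.read()

response = rekognition.search_faces_by_image(
    CollectionId="store-watchlist",   # hypothetical pre-indexed watchlist
    Image={"Bytes": frame},
    FaceMatchThreshold=99,            # Amazon's recommended setting for police use
    MaxFaces=1,
)

for match in response["FaceMatches"]:
    face = match["Face"]
    # ExternalImageId is whatever label was attached when the face was indexed.
    name = face.get("ExternalImageId", face["FaceId"])
    print(f"Watchlist hit: {name} ({match['Similarity']:.1f}% similarity)")
```

Nothing in the service enforces that figure: a caller can pass a far lower FaceMatchThreshold just as easily, which is exactly the standards gap the sheriff's office example illustrates.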
14. MoviePass founder wants to use facial recognition to score you free movies
Facial recognition is the linchpin to PreShow, an app that will earn you free movie tickets for watching ads -- so long as you're OK with your phone watching you back.
Everything free comes with a price. But PreShow, a new company from a founder of MoviePass, wants advertising to be the only price of a movie ticket. Launching a campaign on Kickstarter Thursday, PreShow is developing an app to earn you free movie tickets -- to any film in any theater -- if you watch 15 to 20 minutes of high-end advertising. Like a sponsored session of ad-free Spotify that you unlock with a special commercial, PreShow wants to make going to the movies feel free. But PreShow hinges on what some may consider a cost and others consider a bargain: facial recognition. "If it weren't for facial recognition, I don't think we could still do it," Stacy Spikes, the founder and chief executive of PreShow, said in an interview last week. "If not, they could game this all day long." This is part of a CNET special report exploring the benefits and pitfalls of facial recognition. Forgoing a password, PreShow's app will only unlock with your phone's facial recognition technology. And while you're watching the ads to earn that free ticket, your phone's camera monitors your level of attention. Walk away or even obscure part of your face? The ad will pause after five seconds. Facial recognition is already playing an ever-growing role in your life, for good and for ill. It's the technology that helps you find all the snapshots of a particular friend in Google Photos; it's central to how some smart-home technologies aim to make your life simpler. But it also could scan your face at an airport or concert without your knowledge. As face recognition advances and spreads, it opens up a host of privacy and security worries. But without facial recognition, PreShow wouldn't be possible. "We had two problems to solve: We didn't want people creating dummy accounts, and we're dealing with real currency at the end of the day, so we needed to uniquely lock it," Spikes said. "Facial recognition at the phone level is just a year and a half old. You couldn't do this company two years ago."
Watching a phone watching you
PreShow uses facial recognition for identification and verification -- it needs to make sure you're the only person who can open the app (and you only have one account) and that you're actually watching those ads while they play. The unlocking mechanism is built upon your phone's existing face recognition capabilities. In a demo of PreShow, Spikes unlocked the app via his iPhone 7's front-facing camera. The viewing accountability part was harder. PreShow built its app so that when you're watching an ad, a green border glows around the edge of the video. So long as the front-facing camera can recognize that your face is looking at the screen, the border stays green. Walk away, direct your head away from the camera or even obscure part of your face, and the outline turns to red in the demo. After five seconds of red, the commercial automatically pauses.
PreShow CEO Stacy Spikes was a founder of MoviePass. PreShow
Like the app itself, PreShow's privacy specifics are still in flux. A private beta launch of the PreShow app is slated to start in July, with Kickstarter patrons being the first to use and share it. The company hasn't specified when it will be made available to the public at large.
So, for example, PreShow hasn't finalized an end user license agreement yet. Yes, that's the "fine print" digital contract you always sign, probably without reading, whenever you sign up for an online service. But it would be where the company officially spells out how it plans to use its facial recognition. The beta testing will be key, PreShow says, as it will be working on terms and conditions during the beta as it learns from user feedback. Once PreShow starts letting people into the beta test, it will be sending out more details about the user experience overall, including the facial recognition technology, it said. Among the specifics PreShow would define today: The app will not be recording anyone as they watch, and it won't be sharing personally identifiable data with third parties. For its advertising partners, it will provide aggregated and anonymized data -- as Spikes put it, "they will not get anything other than a 30,000-foot view" of your activity. That 30,000-foot view will be based on basic demographic data you volunteer in the signup process. PreShow plans to ask you to identify your age, geography and gender so it can provide the aggregated demographic details that advertisers want. Will PreShow use facial recognition to verify that you were honest in identifying yourself as a man or a woman, for example? Facial recognition raises questions that PreShow will need to address in black-and-white disclosures. Spikes notes that in other ways, this kind of advertising is more sensitive to your privacy. "What's happening today in media is the brand will buy a bunch of data, and then it will trade that data and it will follow you around and it will embed cookies -- it's a covert art," he said. "This is much more out in the open."
A MoviePass second act
Spikes was a co-founder of MoviePass in 2011, but he parted ways with the company after Helios and Matheson Analytics took control of it and introduced the $10-a-month, unlimited-daily-ticket deal. That unlimited deal was beloved, and new members subscribed in droves -- until it became notorious. The initiative drove MoviePass to the edge of insolvency, pushing the company into a farce of constantly changing prices and benefits.
Stacy Spikes was a MoviePass founder but parted ways with the company after it introduced its cut-rate $10 monthly subscription for unlimited daily movies. MoviePass
But before MoviePass unraveled, Spikes (as a PreShow representative put it) "was let go." "You'd have to ask MoviePass about specific rationale for the decision, but Stacy didn't agree with the direction the company was going. That likely had something to do with it," the spokeswoman said. "He started working on PreShow soon thereafter." Like MoviePass, Spikes hopes that PreShow can help pull cinema into new ways of doing business. "If the innovation is there in a big way -- that is, universal from the consumer standpoint -- it helps cinema to leap forward. And hopefully, making moviegoing possibly free will radically do that again," he said. But unlike MoviePass, where the cost of movie admissions for a single member could far outstrip the revenue their subscription brings in, PreShow won't be a business that operates in the red, Spikes said. The consumer pays his or her own way by virtue of watching the ads, whether the company's user base is 3,000 or 30 million. Thursday's Kickstarter launch will fund the company's initial free-ticket campaigns.
In addition, PreShow has funding from an unnamed angel investor who is a former wireless industry executive with an interest in cinema. Spikes also raised the possibility that earned points in PreShow could be applied to buy products, like a pair of tennis shoes, in addition to movie tickets. The advertising in the app will be video made by brands with integrations in movies -- think Dodge putting its cars in the Fast and the Furious franchise of movies. It would also include behind-the-scenes-style videos involving brands and films -- imagine a featurette about Brie Larson's training regimen for Captain Marvel that's sponsored by a fitness company.
Demos of PreShow's design opened the app to splashy full-screen image ads that would lead into video advertising when tapped. PreShow
Not all advertising in PreShow will earn you credit. When an ad is newly released, it will run as a promotion that earns credit, but once that inventory is used up, the video will go to a library where it's still viewable without any rewards. For example, when a new Marvel film is coming out, the advertising for it may drop into PreShow two weeks in advance. The company earns enough ad dollars for, say, 100,000 tickets. But once the promotional inventory "sells out," the credits are gone but the content will remain. "There will be scarcity in the ticket, but the advertising will have an afterlife," Spikes said. But the price of a PreShow free movie ticket isn't just advertising, it's facial recognition too. It'll be up to you to decide whether that price is a steal. Source
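The attention check PreShow describes -- a green border while a face is visible, a pause after five seconds without one -- maps onto a simple detection loop. Below is a minimal sketch of that logic using OpenCV's stock face detector; the pause_ad() hook is a placeholder for real player control, and genuine gaze or attention estimation (as opposed to mere face presence) would require considerably more than this.

```python
# Sketch of a "pause the ad when the viewer disappears" loop, assuming a
# front-facing camera at index 0. Detection here means "a face is visible",
# a rough stand-in for the attention tracking the app describes.
import time
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def pause_ad():
    print("Ad paused: viewer not detected")  # placeholder for real playback control

cap = cv2.VideoCapture(0)
last_seen = time.monotonic()
paused = False

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        last_seen = time.monotonic()   # the border would stay green here
        paused = False
    elif not paused and time.monotonic() - last_seen > 5.0:
        pause_ad()                     # "red" for five seconds: pause playback
        paused = True

cap.release()
```

Notably, a frontal-face detector like this one already loses the face when you turn your head away, which loosely matches the behavior the demo showed.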
15. A Chinese subway is experimenting with facial recognition to pay for fares
Fare would be automatically deducted from a linked payment method
Scanning your face on a screen to get into the subway might not be that far off in the future. In China’s tech capital, Shenzhen, a local subway operator is testing facial recognition subway access, powered by a 5G network, as spotted by the South China Morning Post. The trial is limited to a single station thus far, and it’s not immediately clear how this will work for twins or lookalikes. People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account. There are some advantages to the system. For example, riders won’t have to worry about forgetting their subway card or a low balance, but at the same time, it likely means that their every journey into the subway will be tracked down to the pixels of their faces. It’s not clear if that’s any more tracking than what’s already being done. Many major Chinese cities have extensive surveillance camera systems that log citizens’ faces, ages, genders, and how long they’ve been staying in the area. The algorithms for the facial recognition tech were designed in a lab overseen by Shenzhen Metro and phone maker Huawei. Shenzhen Metro hasn’t given a timeline for when facial recognition could reach all of its stations and subway lines. We’ve reached out to Huawei for comment.
PEOPLE COULD ALREADY BUY FRIED CHICKEN BY SCANNING THEIR FACES
Using facial recognition for payments isn’t new, although using it on subways is. At KFC stores across China, people can scan their face to buy fried chicken, which has been possible since 2017. China is ahead of the US when it comes to mobile payments, as nearly half of the country used their phones to make payments in 2018. Payments made through WeChat Pay or Alipay were so popular that China’s central bank had to warn stores last year not to reject cash or face unspecified penalties. Still, on one occasion when I was in China last year, a shop could not accept cash because it didn’t have enough bills in its coffers to make proper change. Source
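The register-then-deduct flow described above reduces to a small amount of logic once the face matching itself is abstracted away. A purely hypothetical sketch follows: the 128-dimensional embedding, the distance threshold, the fare amount and the rider records are all stand-ins, not details of Shenzhen Metro's actual system. The threshold is also where the article's twins-and-lookalikes question lives: two faces whose embeddings fall within it are indistinguishable to the gate.

```python
# Toy model of a face-pay fare gate. embed() output is assumed to come from
# some face-embedding model running on the gate camera; everything here is
# illustrative, including the fare and the match threshold.
import numpy as np

FARE = 5.0                       # illustrative fare amount
MATCH_THRESHOLD = 0.6            # max embedding distance to accept a match

# Enrolled at registration time: rider id -> stored template and linked balance.
riders = {
    "rider-001": {"template": np.zeros(128), "balance": 42.0},
}

def gate_scan(probe: np.ndarray) -> bool:
    """probe: face embedding computed from the gate camera's scan."""
    rider_id, dist = min(
        ((rid, float(np.linalg.norm(r["template"] - probe)))
         for rid, r in riders.items()),
        key=lambda pair: pair[1],
    )
    if dist > MATCH_THRESHOLD:
        return False                      # unregistered face: gate stays shut
    account = riders[rider_id]
    if account["balance"] < FARE:
        return False                      # insufficient balance
    account["balance"] -= FARE            # fare deducted from the linked account
    return True

print(gate_scan(np.zeros(128)))           # matches rider-001 -> True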
  16. (Reuters) - A federal appeals court on Thursday rejected Facebook Inc’s (FB.O) effort to undo a class action lawsuit claiming that it illegally collected and stored biometric data for millions of users without their consent. The 3-0 decision from the 9th U.S. Circuit Court of Appeals in San Francisco over Facebook’s facial recognition technology exposes the company to billions of dollars in potential damages to the Illinois users who brought the case. It came as the social media company faces broad criticism from lawmakers and regulators over its privacy practices. Last month, Facebook agreed to pay a record $5 billion fine to settle a Federal Trade Commission data privacy probe. “This biometric data is so sensitive that if it is compromised, there is simply no recourse,” Shawn Williams, a lawyer for plaintiffs in the class action, said in an interview. “It’s not like a Social Security card or credit card number where you can change the number. You can’t change your face.” Facebook said it plans to appeal. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email. Google, a unit of Alphabet Inc (GOOGL.O), won the dismissal of a similar lawsuit in Chicago last December. The lawsuit began in 2015, when Illinois users accused Facebook of violating that state’s Biometric Information Privacy Act in collecting biometric data. Facebook allegedly accomplished this through its “Tag Suggestions” feature, which allowed users to recognize their Facebook friends from previously uploaded photos. Writing for the appeals court, Circuit Judge Sandra Ikuta said the Illinois users could sue as a group, rejecting Facebook’s argument that their claims were unique and required individual lawsuits. She also said the 2008 Illinois law was intended to protect individuals’ “concrete interests in privacy,” and Facebook’s alleged unauthorized use of a face template “invades an individual’s private affairs and concrete interests.” The court returned the case to U.S. District Judge James Donato in San Francisco, who had certified a class action in April 2018, for a possible trial. Illinois’ biometric privacy law provides for damages of $1,000 for each negligent violation and $5,000 for each intentional or reckless violation. Williams, a partner at Robbins Geller Rudman & Dowd, said the class could include 7 million Facebook users. The FTC probe arose from the discovery that Facebook had let British consulting firm Cambridge Analytica harvest users’ personal information. Facebook’s $5 billion payout still requires U.S. Department of Justice approval. The case is Patel et al v Facebook Inc, 9th U.S. Circuit Court of Appeals, No. 19-15982. Source
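The "billions of dollars in potential damages" follows directly from the statutory figures and the class size quoted above. A back-of-the-envelope check, assuming (as a simplification, since the statute counts per violation) a single violation per class member:

```python
# BIPA exposure estimate from the figures in the article: a class of about
# 7 million users, $1,000 per negligent violation, $5,000 per intentional
# or reckless violation, one violation assumed per class member.
class_size = 7_000_000
negligent = 1_000     # dollars per negligent violation
reckless = 5_000      # dollars per intentional or reckless violation

print(f"Negligent floor:  ${class_size * negligent:,}")   # $7,000,000,000
print(f"Reckless ceiling: ${class_size * reckless:,}")    # $35,000,000,000
```

Even the lower bound lands at $7 billion, which is why the ruling is described as exposing the company to billions in potential damages.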
17. Welsh cops' use of facial recognition is legal, High Court rules
THE USE of facial recognition by South Wales Police has been deemed lawful in a ruling on Wednesday by the High Court in London following a judicial review. Civil rights group Liberty and local Cardiff resident Ed Bridges had challenged the deployment of facial recognition in the first legal challenge to UK police use of facial recognition technology. It was first used by South Wales Police in a trial during the Champions League Final at Cardiff's Millennium Stadium in June 2017. In total, South Wales Police is believed to have scanned the faces of more than 500,000 members of the public. Bridges claimed that he had been scanned at least twice - on Queen Street in Cardiff in December 2017 and at a protest against the arms trade in March 2018. The Metropolitan Police has also trialled facial recognition, with less than convincing results. Liberty had claimed in court that facial recognition systems were little different from police fingerprinting or obtaining DNA, around which tight legal safeguards exist. However, the court ruled that while facial recognition might infringe upon people's privacy rights, it wasn't unlawful per se. The court declared that the current legal framework governing facial recognition is adequate, but ought to be subject to periodic review. Liberty, though, is campaigning for a complete ban on what it describes as an "authoritarian surveillance tool". Liberty lawyer Megan Goulding said: "This disappointing judgment does not reflect the very serious threat that facial recognition poses to our rights and freedoms. "Facial recognition is a highly intrusive surveillance technology that allows the police to monitor and track us all. It is time that the government recognised the danger this dystopian technology presents to our democratic values and banned its use. Facial recognition has no place on our streets." Police use of facial recognition, Liberty added, involves the processing of sensitive, personal data of everyone who is scanned, not just those on a watchlist. The organisation has vowed to appeal. South Wales Police typically use facial recognition in cameras attached to vans. These take scans of people's faces, making a biometric map of each face, which is then run against a database of facial biometric maps. When a positive match is made, the image is flagged for a manual review. UK police have around 20 million mugshots in various databases. South Wales Police is also planning to put the technology onto police mobile phones, which will make its use even more widespread. Source
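The encode-and-compare pipeline the article outlines -- derive a biometric map from a camera frame, run it against a database of stored maps, and queue positive matches for manual review -- can be sketched generically with the open-source face_recognition library. This illustrates the technique, not the vendor system South Wales Police actually uses; the threshold and the file paths are assumptions.

```python
# Generic watchlist-matching sketch with the face_recognition library.
# Paths are illustrative; 0.6 is this library's customary distance cutoff.
import numpy as np
import face_recognition

THRESHOLD = 0.6

def flag_matches(frame_path, watchlist):
    """Return watchlist ids whose stored encoding matches a face in the frame.

    watchlist: dict mapping an id to a precomputed 128-d face encoding.
    """
    frame = face_recognition.load_image_file(frame_path)
    names = list(watchlist)
    stored = np.array([watchlist[n] for n in names])
    flagged = []
    for encoding in face_recognition.face_encodings(frame):
        distances = face_recognition.face_distance(stored, encoding)
        best = int(np.argmin(distances))
        if distances[best] < THRESHOLD:
            flagged.append(names[best])   # positive match: queue for human review
    return flagged
```

The manual-review step the article mentions is the part outside the code: a flagged image is a candidate, not a confirmed identification.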
18. UK's Information Commissioner's Office challenges the interpretation of a court ruling that gave the green light for using facial recognition on the public. Police forces should be subject to a code of practice if they want to use live facial recognition technology on the public, according to the UK's Information Commissioner's Office (ICO). ICO commissioner Elizabeth Denham has released her opinion on the use of live facial recognition on the public by police in response to a recent High Court ruling that South Wales Police didn't violate human rights or UK law by deploying the technology in a public space. Denham argues facial recognition should be restricted to targeted deployments that are informed by intelligence and time-limited, rather than ongoing. She also reckons the High Court's decision "should not be seen as a blanket authorisation for police forces to use [live facial recognition] systems in all circumstances". The case concerned police using live CCTV feeds to extract individuals' facial biometric information and matching it against a watchlist of people of interest to police. Large scale trials of facial recognition tech by the South Wales Police and the Metropolitan Police Service (Met) for public safety have irked some people who fear a dystopian future of mass surveillance combined with automated identification. The ICO kicked off an investigation in August over the use of surveillance cameras to track commuters and passersby in London. Denham raised concerns over people being identified in public without gaining an individual's consent. Surveillance cameras themselves make some people uncomfortable, but technology that automatically identifies people raises new questions for privacy in public spaces. The Met began trialling the tech on shoppers in London last Christmas. Denham said live facial recognition was a significant change in policing techniques that raises "serious concerns". "Never before have we seen technologies with the potential for such widespread invasiveness. The results of that investigation raise serious concerns about the use of a technology that relies on huge amounts of sensitive personal information," she said. Denham argues the UK needs "a statutory and binding code of practice" for the technology's deployment due to a failure in current laws to manage the risks it poses. The privacy watchdog will be pushing the idea of a code of practice with the UK's chief surveillance bodies, including policing bodies, the Home Office and the Investigatory Powers Commissioner. Denham argues in her opinion statement that for police to use facial recognition, they need to meet the threshold of "strict necessity" and also consider proportionality. She believes this is likelier to be met in small-scale operations, such as when "police have specific intelligence showing that suspects are likely to be present at a particular location at a particular time." Another example is airports, where live facial recognition supports "tailored security measures". Source: Facial recognition could be most invasive policing technology ever, warns watchdog (via ZDNet)
19. California will block police officers from including facial recognition technology in their body cameras, joining two other states that have created similar laws: Oregon and New Hampshire. On Tuesday Gov. Gavin Newsom signed AB1215 into law, barring law enforcement from using any “biometric surveillance system” in their body cameras, and enabling people to take legal action against officers who violate the law. As the San Francisco Chronicle points out, state legislators were encouraged to pass the bill following an ACLU demonstration in which Amazon’s Rekognition software misidentified 26 lawmakers, incorrectly deeming them criminal suspects. “We wanted to run this as a demonstration about how this software is absolutely not ready for prime time,” said the bill’s author, Assemblymember Phil Ting, at a press conference following the test. “While we can laugh about it as legislators, it’s no laughing matter if you are an individual who is trying to get a job, if you are an individual trying to get a home, if you get falsely accused of an arrest, what happens, it could impact your ability to get employment, it absolutely impacts your ability to get housing.” However, when the technology advances and is ready for “prime time,” it will only raise more ethical concerns. “When you’re talking about an AI tool on a body camera, then these are extra-human abilities,” Brian Brackeen, CEO of AI startup Kairos, told Gizmodo last year for a story about racial bias in face recognition technology. A month earlier Brackeen revealed Kairos had turned down a contract with Axon, manufacturer of body cameras. “Let’s say an officer can identify 30 images an hour. If you were to ask a police department if they were willing to limit [recognition] to 30 recognitions an hour they would say no,” Brackeen told Gizmodo. “Because it’s not really about the time of the officer. It’s really about a superhuman ability to identify people, which changes the social contract.” According to the Chronicle, the California Peace Officers’ Association claims that no California law enforcement agencies currently use facial recognition technology in body cameras. But the newspaper reports that some agencies have considered adopting the technology. The original bill would have created a permanent ban, but Ting compromised due to protests from the Peace Officers’ Association and other police advocacy groups. For now, the law expires in 2023. Source
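Brackeen's thought experiment -- capping automated recognition at the roughly 30 identifications an hour a human officer could manage -- is, mechanically, just a rate limit. A minimal sketch of such a budget follows; it is purely illustrative and not drawn from any real body-camera product.

```python
# Sliding-window budget limiting recognitions to a human-scale rate.
# The 30/hour cap is Brackeen's hypothetical, not a real product setting.
import time
from collections import deque

class RecognitionBudget:
    def __init__(self, max_per_hour=30):
        self.max_per_hour = max_per_hour
        self.timestamps = deque()

    def allow(self):
        now = time.monotonic()
        # Drop recognitions that fell out of the one-hour window.
        while self.timestamps and now - self.timestamps[0] > 3600:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_per_hour:
            return False          # over budget: skip recognition on this frame
        self.timestamps.append(now)
        return True
```

His point, of course, is that departments would decline exactly this cap, because the appeal of the tool is throughput beyond human ability.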
20. The Pixel 4's facial recognition will unlock your phone with your eyes closed
Google announced its Pixel 4 series of handsets earlier this week, and one thing that's different this time around is that there's no fingerprint sensor on the back. Instead, they use a new face unlock feature. Unfortunately, it turns out that with face unlock, you don't actually have to be looking at the phone. On Google's face unlock support page, the company confirmed that your Pixel 4 can be "unlocked by someone else if it’s held up to your face, even if your eyes are closed". The firm also noted that looking at the phone can unlock it when you don't mean to, and it can be unlocked by someone that "looks a lot like you". The only other mainstream flagship smartphones with facial recognition and no fingerprint sensor are Apple's iPhones. And even when Apple's Face ID was first introduced with the iPhone X two years ago, it required not only that your eyes be open, but that you're actually looking at the device. Sadly, there's no option to require your eyes to be open on the Pixel, although it wouldn't be surprising if Google adds that at some point. If you're worried about someone using your face to unlock your phone while you're sleeping, your only option is to turn off the feature completely. Source: The Pixel 4's facial recognition will unlock your phone with your eyes closed (Neowin)
21. Allowing facial recognition technology to spread without understanding its impact could have serious consequences. In the last few years facial recognition has been gradually introduced across a range of different technologies. Some of these are relatively modest and useful; thanks to facial recognition software you can open your smartphone just by looking at it, and log into your PC without a password. You can even use your face to get cash out of an ATM, and increasingly it's becoming a standard part of your journey through the airport. And facial recognition is still getting smarter. Increasingly it's not just faces that can be recognised, but emotional states too, if only with limited success right now. Soon it won't be too hard for a camera to not only recognise who you are, but also to make a pretty good guess at how you are feeling. But one of the biggest potential applications of facial recognition on the near horizon is, of course, for law and order. It is already being used by private companies to deter persistent shoplifters and pickpockets. In the UK and other countries police have been testing facial recognition in a number of situations, with varying results. There's a bigger issue here, as the UK's Information Commissioner Elizabeth Denham notes: "How far should we, as a society, consent to police forces reducing our privacy in order to keep us safe?" She warns that when it comes to live facial recognition "never before have we seen technologies with the potential for such widespread invasiveness," and has called for police, government and tech companies to work together to eliminate bias in the algorithms used, particularly that associated with ethnicity. She is not the only one to be raising questions about the use of facial recognition by police; similar questions are being asked in the US, and rightly so. There is always a trade-off between privacy and security. Deciding where to draw the line between the two is key. But we also have to make the decision clearly and explicitly. At the moment there is a great risk that as the use of facial recognition technology by government and business spreads, the decision will be taken away from us. In the UK we've already built up plenty of the infrastructure that you'd need if you were looking to build a total surveillance state. There are probably somewhere around two million private and local government security cameras in the UK, a number that is rising rapidly as we add our own smart doorbells or other web-connected security cameras to watch over our homes and businesses. In many cases it will be very easy to add AI-powered facial recognition analysis to all those video streams. I can easily see a scenario where we achieve an almost-accidental surveillance state, through small steps, each of which makes sense on its own terms but which together combine to hugely reduce our privacy and freedoms, all in the name of security and efficiency. It is much easier to have legitimate concerns about privacy addressed before facial recognition is a ubiquitous feature of society. And the same applies to other related technologies like gait recognition or other biometric systems that can recognise us from afar. New technology rolled out in the name of security is all but impossible to roll back. For sure, these technologies can have many benefits, from making it quicker to unlock your phone to recognising criminals in the street.
But allowing these technologies to become pervasive without rigorous debate about the need for them, the effectiveness of them and their broader impact on society is deeply unwise and could leave us facing much bigger problems ahead. Source: We must stop smiling our way towards a surveillance state (via ZDNet)
22. A university lecturer in east China is suing a wildlife park for breach of contract after it replaced its fingerprint-based entry system with one that uses facial recognition, according to a local newspaper report. Guo Bing, an associate law professor at Zhejiang Sci-tech University, bought an annual pass to Hangzhou Safari Park for 1,360 yuan (RM803) in April, Southern Metropolis Daily reported on Sunday. But when he was told last month about the introduction of the new system he became concerned it might be used to “steal” his identity and asked for a refund, the report said. The park declined to return his money, so Guo filed a civil lawsuit last week at a district court in Fuyang, Hangzhou, the capital of Zhejiang province. The report said the court had accepted the case, in which Guo is demanding 1,360 yuan (RM803) compensation plus costs. “The purpose of the lawsuit is not to get compensation but to fight the abuse of facial recognition,” he was quoted as saying. Guo said that when he bought the ticket – which offers 12 months’ unlimited visits to the park for himself and a family member – he was required to provide his name, phone number and fingerprints. He complied with the request and said he had visited the park on several occasions since. However, when the attraction upgraded its admission system, all annual pass-holders were asked to update their records – including having their photograph taken – before Oct 17 or they would no longer be allowed to enter, the report said. Guo said he believed the change was an infringement of his consumer rights. Zhao Zhanling, a lawyer at the Beijing Zhilin Law Firm, said it was possible the park had breached the conditions of the original contract. “The plaintiff’s concern is totally understandable,” he said. Facial identities were “highly sensitive”, Zhao said, and the authorities “should strictly supervise how data is collected, used, stored and transferred.” A manager at the park, who gave her name only as Yuan, said the upgrade was designed to improve efficiency at the entrance gates. Since the launch of the central government’s “smart tourism” initiative in 2015, more than 270 tourist attractions around the country have introduced facial recognition systems, Xinhua reported earlier. Source: Chinese professor sues wildlife park after it introduces facial recognition entry system (via The Star Online)
23. The complaint claims Google “failed to obtain consent from anyone” when it introduced facial recognition to its cloud service for storing and sharing photos. The lawsuit comes in the wake of the announcement of a proposed $550 million settlement that Facebook Inc. reached in a BIPA class action. Google has been hit with a new lawsuit accusing the company of violating an Illinois biometric privacy law by compiling faceprints. “Unbeknownst to the average consumer ... Google’s proprietary facial recognition technology scans each and every photo uploaded to the cloud-based Google Photos for faces, extracts geometric data relating to the unique points and contours (i.e., biometric identifiers) of each face, and then uses that data to create and store a template of each face -- all without ever informing anyone of this practice,” Illinois resident Brandon Molander alleges in a class-action complaint filed Thursday in U.S. District Court for the Northern District of California. This new complaint comes one week after Facebook said it had agreed to pay $550 million to settle a similar lawsuit. Molander, who says he has had a Google Photos account for five years, claims the company is violating the Illinois Biometric Information Privacy Act. That law requires companies to obtain consumers' written consent before collecting or storing scans of their facial geometry. “Molander’s Google Photos account contains dozens of photos depicting Plaintiff Molander that were taken with his smart phone and automatically uploaded in Illinois to Google Photos,” his complaint alleges. “Google analyzed these photos by automatically locating and scanning Plaintiff Molander’s face, and by extracting geometric data relating to the contours of his face and the distances between his eyes, nose, and ears -- data which Google then used to create a unique template of Plaintiff Molander’s face.” The complaint appears similar to one filed against Google in 2016, in federal district court in Illinois. That case, brought by Illinois residents Lindabeth Rivera and Joseph Weiss, was decided in Google's favor last year by U.S. District Court Judge Edmond Chang, who ruled that the company's alleged practices didn't cause the kind of concrete injury that warrants a lawsuit. Chang said in his ruling that faces are “public” information, and that Google didn't violate people's privacy by using facial recognition technology on photos of faces. “All that Google did was to create a face template based on otherwise public information -- plaintiffs’ faces,” he wrote. But federal judges in California came to the opposite conclusion in a lawsuit accusing Facebook of violating the Illinois privacy law. In that matter, a trial judge and a three-judge panel of the 9th Circuit Court of Appeals rejected Facebook's argument that its alleged faceprint collection didn't cause the kind of concrete injury that would support a lawsuit. Source
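The complaint's description of a faceprint -- geometric data about the points and contours of a face, such as the distances between the eyes and nose -- corresponds to what off-the-shelf facial-landmark tooling produces. A sketch with the open-source face_recognition library, not Google's proprietary pipeline; the photo path and the two chosen measurements are illustrative.

```python
# Derive a simple geometric "template" for each face found in a photo,
# of the general kind the complaint describes. Illustrative only.
import numpy as np
import face_recognition

def face_templates(photo_path):
    """Return a list of geometric measurements, one dict per detected face."""
    image = face_recognition.load_image_file(photo_path)
    templates = []
    for lm in face_recognition.face_landmarks(image):
        left_eye = np.mean(lm["left_eye"], axis=0)
        right_eye = np.mean(lm["right_eye"], axis=0)
        nose_tip = np.mean(lm["nose_tip"], axis=0)
        templates.append({
            "eye_distance": float(np.linalg.norm(right_eye - left_eye)),
            "eyes_to_nose": float(np.linalg.norm(
                (left_eye + right_eye) / 2 - nose_tip)),
        })
    return templates

print(face_templates("uploaded_photo.jpg"))   # hypothetical photo path
```

Under BIPA's theory, as the 9th Circuit's Facebook ruling framed it, creating and storing such a template without written consent is itself the injury, regardless of any later misuse.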
24. Meanwhile, Europe wants to ban the technology for up to five years. London's Metropolitan Police Service has begun using live facial recognition (LFR) technology. At key areas throughout the city, signposted cameras will scan the faces of passing people, alerting officers to potential matches with wanted criminals. According to the Met, "this will help tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation and help protect the vulnerable". In a tweet, the Met assured the public that any images obtained that don't trigger a potential alert are deleted immediately -- and that it's up to officers whether they decide to stop someone based on an alert or not. The technology operates from a standalone system, and isn't linked to any other imaging platforms, such as CCTV or bodycams. Despite the Met's insistence that the technology can be used for good, however, some critics have lambasted LFR as ineffectual and, in some cases, unlawful. In April 2019, for example, a report from the University of Essex found that the Met's LFR technology has an inaccuracy rate of 81 percent. The previous year, technology used by police in South Wales mistakenly identified 2,300 innocent people as potential criminals. The Met's new endeavor launches at a tumultuous time for facial recognition technology. Just last week the European Commission revealed it's considering a ban on the use of LFR in public areas for up to five years, while regulators figure out how to prevent the tech being abused. Meanwhile, privacy campaign group Big Brother Watch -- supported by more than 18 UK politicians and 25 additional campaign groups -- has called for a halt to adoption, citing concerns about implementation without proper scrutiny. Source
25. Every few days, China finds a new way to introduce facial recognition into people’s daily lives. According to a report from the South China Morning Post, Shanghai is testing face recognition terminals at pharmacies to catch folks attempting to buy controlled substances in substantial quantities, likely for resale. The report noted buyers of drugs containing sedatives and psychotropic substances will have to verify themselves through the terminal. The system will scan both pharmacists and buyers to prevent any misdoings. The move is also meant to prevent people from obtaining medicines that contain raw materials for illegal drugs. For instance, ephedrine or pseudoephedrine, found in drugs for the common cold, is a key ingredient in producing crystal meth. The system has been adopted by 31 healthcare institutions and has performed over 300 scans. Shanghai’s city administration aims to cover the whole city with these terminals by the first half of 2021. Previously, China has experimented with facial recognition for subway rides, payments, catching criminals, and buying SIM cards. While some solutions may sound dystopian, this new system is relatively well-intentioned, as it could help curb drug abuse. Source