Amazon-Powered AI Cameras Used to Detect Emotions of Unwitting UK Train Passengers


    CCTV cameras and AI are being combined to monitor crowds, detect bike thefts, and spot trespassers.


    Thousands of people catching trains in the United Kingdom likely had their faces scanned by Amazon software as part of widespread artificial intelligence trials, new documents reveal. The image recognition system was used to predict travelers’ age, gender, and potential emotions—with the suggestion that the data could be used in advertising systems in the future.


During the past two years, eight train stations around the UK—including major hubs such as London’s Euston and Waterloo and Manchester Piccadilly, alongside smaller stations—have tested CCTV-based AI surveillance technology, with the aim of alerting staff to safety incidents and potentially reducing certain types of crime.


The extensive trials, overseen by rail infrastructure body Network Rail, have used object recognition—a type of machine learning that can identify items in video feeds—to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behavior (“running, shouting, skateboarding, smoking”), and spot potential bike thieves. Separate trials have used wireless sensors to detect slippery floors, full bins, and drains that may overflow.
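The trials’ actual detection pipeline is not public, but the trespass use case can be sketched in a few lines. The sketch below assumes an upstream object-recognition model has already produced labeled bounding boxes per video frame; the `TRACK_ZONE` region, the detection format, and the function names are all hypothetical, chosen only to illustrate the kind of zone-intersection rule such a system could apply.

```python
# Hypothetical sketch of a trespass-alert rule; the real system's
# logic is not public. Assumes an object-recognition model has
# already emitted labeled bounding boxes for the current frame.

def boxes_overlap(a, b):
    """Axis-aligned boxes as (x1, y1, x2, y2); True if they intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def trespass_alerts(detections, track_zone):
    """Return every 'person' detection that intersects the restricted zone."""
    return [d for d in detections
            if d["label"] == "person" and boxes_overlap(d["box"], track_zone)]

TRACK_ZONE = (0, 400, 1920, 600)  # illustrative pixel region covering the tracks
frame = [
    {"label": "person", "box": (100, 450, 160, 580)},   # on the tracks
    {"label": "person", "box": (300, 100, 360, 230)},   # on the platform
    {"label": "bicycle", "box": (50, 420, 120, 560)},   # ignored: not a person
]
print(len(trespass_alerts(frame, TRACK_ZONE)))  # 1
```

A production system would also need debouncing and an exclusion rule for authorized staff—the article notes the deployed analytics filter out track workers by their PPE.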


    The scope of the AI trials, elements of which have previously been reported, was revealed in a cache of documents obtained in response to a freedom of information request by civil liberties group Big Brother Watch. “The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step,” says Jake Hurfurt, the head of research and investigations at the group.


The AI trials used a combination of “smart” CCTV cameras that can detect objects or movements from images they capture and older cameras whose video feeds are connected to cloud-based analysis. Between five and seven cameras or sensors were included at each station, note the documents, which are dated from April 2023. One spreadsheet lists 50 possible AI use cases, although not all of these appear to have been used in the tests. One station, London Euston, was due to trial a “suicide risk” detection system, but the documents say the camera failed and staff did not see the need to replace it because Euston is a “terminus” station.


    Hurfurt says the most “concerning” element of the trials focused on “passenger demographics.” According to the documents, this setup could use images from the cameras to produce a “statistical analysis of age range and male/female demographics,” and is also able to “analyze for emotion” such as “happy, sad, and angry.”


    The images were captured when people crossed a “virtual tripwire” near ticket barriers, and were sent to be analyzed by Amazon’s Rekognition system, which allows face and object analysis. It could allow passenger “satisfaction” to be measured, the documents say, noting that “this data could be utilized to maximum advertising and retail revenue.”
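Amazon’s Rekognition service exposes this kind of analysis through its DetectFaces API, which—when called with `Attributes=["ALL"]`—returns per-face `AgeRange`, `Gender`, and `Emotions` fields. The sketch below shows how a response of that shape could be reduced to the “statistical analysis of age range and male/female demographics” the documents describe. The sample response is fabricated for illustration; a live system would obtain one via the `boto3` Rekognition client rather than a hard-coded dict.

```python
# Hedged sketch: the sample_response dict mimics the documented shape of
# Rekognition's DetectFaces output (AgeRange, Gender, Emotions per face)
# but its values are invented here for illustration.
from collections import Counter

sample_response = {
    "FaceDetails": [
        {"AgeRange": {"Low": 25, "High": 35},
         "Gender": {"Value": "Female", "Confidence": 99.1},
         "Emotions": [{"Type": "HAPPY", "Confidence": 88.0},
                      {"Type": "CALM", "Confidence": 10.0}]},
        {"AgeRange": {"Low": 40, "High": 52},
         "Gender": {"Value": "Male", "Confidence": 97.5},
         "Emotions": [{"Type": "SAD", "Confidence": 61.0},
                      {"Type": "ANGRY", "Confidence": 20.0}]},
    ]
}

def summarize(response):
    """Aggregate the kind of demographic statistics the documents describe."""
    faces = response["FaceDetails"]
    genders = Counter(f["Gender"]["Value"] for f in faces)
    ages = [(f["AgeRange"]["Low"] + f["AgeRange"]["High"]) / 2 for f in faces]
    # Take each face's highest-confidence emotion as its dominant one.
    emotions = [max(f["Emotions"], key=lambda e: e["Confidence"])["Type"]
                for f in faces]
    return {"genders": dict(genders),
            "mean_age_estimate": sum(ages) / len(ages),
            "emotions": emotions}

print(summarize(sample_response))
```

Note that the `Emotions` confidences express only how strongly a face *appears* to show an expression—the kind of inference the ICO and AI researchers quoted below consider unreliable.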


    AI researchers have frequently warned that using the technology to detect emotions is “unreliable,” and some say the technology should be banned due to the difficulty of working out how someone may be feeling from audio or video. In October 2022, the UK’s data regulator, the Information Commissioner’s Office, issued a public statement warning against the use of emotion analysis, saying the technologies are “immature” and “they may not work yet, or indeed ever.”


    Network Rail did not answer questions about the trials sent by WIRED, including questions about the current status of AI usage, emotion detection, and privacy concerns.


    “We take the security of the rail network extremely seriously and use a range of advanced technologies across our stations to protect passengers, our colleagues, and the railway infrastructure from crime and other threats,” a Network Rail spokesperson says. “When we deploy technology, we work with the police and security services to ensure that we’re taking proportionate action, and we always comply with the relevant legislation regarding the use of surveillance technologies.”


    It is unclear how widely the emotion detection analysis was deployed, with the documents at times saying the use case should be “viewed with more caution” and reports from stations saying it is “impossible to validate accuracy.” However, Gregory Butler, the CEO of data analytics and computer vision company Purple Transform, which has been working with Network Rail on the trials, says the capability was discontinued during the tests and that no images were stored when it was active.


    The Network Rail documents about the AI trials describe multiple use cases involving the potential for the cameras to send automated alerts to staff when they detect certain behavior. None of the systems use controversial face recognition technology, which aims to match people’s identities to those stored in databases.


    “A primary benefit is the swifter detection of trespass incidents,” says Butler, who adds that his firm’s analytics system, SiYtE, is in use at 18 sites, including train stations and alongside tracks. In the past month, Butler says, there have been five serious cases of trespassing that systems have detected at two sites, including a teenager collecting a ball from the tracks and a man “spending over five minutes picking up golf balls along a high-speed line.”


    At Leeds train station, one of the busiest outside of London, there are 350 CCTV cameras connected to the SiYtE platform, Butler says. “The analytics are being used to measure people flow and identify issues such as platform crowding and, of course, trespass—where the technology can filter out track workers through their PPE uniform,” he says. “AI helps human operators, who cannot monitor all cameras continuously, to assess and address safety risks and issues promptly.”


    The Network Rail documents claim that cameras used at one station, Reading, allowed police to speed up investigations into bike thefts by being able to pinpoint bikes in the footage. “It was established that, whilst analytics could not confidently detect a theft, but they could detect a person with a bike,” the files say. They also add that new air quality sensors used in the trials could save staff time from manually conducting checks. One AI instance uses data from sensors to detect “sweating” floors, which have become slippery with condensation, and alert staff when they need to be cleaned.
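The physics behind the “sweating” floors is straightforward: condensation forms when a surface sits at or below the air’s dew point. The actual sensor logic in the trials is not public, but one plausible alerting rule—sketched below under that assumption—computes the dew point from air temperature and relative humidity via the standard Magnus approximation and flags floors whose measured temperature comes within a safety margin of it. The function names and the margin value are illustrative.

```python
# Illustrative sketch only: the trials' real sensor logic isn't public.
# A surface "sweats" when its temperature is at or below the air's dew
# point; this rule alerts slightly early via a configurable margin.
import math

def dew_point_c(air_temp_c, relative_humidity_pct):
    """Dew point in Celsius via the Magnus approximation (standard a, b)."""
    a, b = 17.27, 237.7
    gamma = (a * air_temp_c) / (b + air_temp_c) + math.log(relative_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

def floor_sweating(floor_temp_c, air_temp_c, rh_pct, margin_c=1.0):
    """Alert when the floor is within margin_c of the dew point."""
    return floor_temp_c <= dew_point_c(air_temp_c, rh_pct) + margin_c

# Humid concourse air (20 C, 85% RH) over a cold 12 C floor: dew point
# is about 17.4 C, well above the floor temperature, so condensation.
print(floor_sweating(floor_temp_c=12.0, air_temp_c=20.0, rh_pct=85.0))  # True
```

In practice such a rule would save exactly the manual checks the documents mention: staff are alerted only when conditions actually favor condensation.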


While the documents detail some elements of the trials, privacy experts say they are concerned about the overall lack of transparency and debate about the use of AI in public spaces. Hurfurt from Big Brother Watch says one document, designed to assess data protection issues with the systems, shows a “dismissive attitude” toward people who may have privacy concerns. One question asks: “Are some people likely to object or find it intrusive?” A staff member writes: “Typically, no, but there is no accounting for some people.”


    At the same time, similar AI surveillance systems that use the technology to monitor crowds are increasingly being used around the world. During the Paris Olympic Games in France later this year, AI video surveillance will watch thousands of people and try to pick out crowd surges, use of weapons, and abandoned objects.


“Systems that do not identify people are better than those that do, but I do worry about a slippery slope,” says Carissa Véliz, an associate professor in philosophy at the Institute for Ethics in AI at the University of Oxford. Véliz points to similar AI trials on the London Underground that had initially blurred faces of people who might have been dodging fares, but then changed approach, unblurring photos and keeping images for longer than was initially planned.


    “There is a very instinctive drive to expand surveillance,” Véliz says. “Human beings like seeing more, seeing further. But surveillance leads to control, and control to a loss of freedom that threatens liberal democracies.”



