Security Researcher Demos Microsoft Copilot Flaws at Black Hat Conference


    Former Microsoft security architect Michael Bargury demonstrated multiple flaws that malicious hackers can exploit to abuse Microsoft Copilot, bypassing the protections the software giant put in place.

    Bargury demonstrated the Copilot flaws this past week during two sessions at Black Hat USA 2024, “15 Ways to Break Your Copilot” and “Living off Microsoft Copilot,” and he posted more information on the website of Zenity Labs, the company he co-founded after leaving Microsoft. In each case, he was specifically highlighting Copilot for Microsoft 365, because that service relies on access to the sensitive internal data stored by Microsoft’s corporate customers. And despite security controls designed to keep that data private, Bargury was able to mine and exfiltrate it in some cases.

    Some of this involves social engineering. His most dramatic demo, a spear-phishing attack he calls LOLCopilot, can gain access to internal emails, draft new messages that mimic the author’s writing style, and send mass mailings on their behalf. It requires that the user’s account first be compromised in some way, an important caveat. But Copilot’s ability to automate malicious actions using so much internal data dramatically amplifies the damage an attacker can do.

    “I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf,” Bargury told Wired. “A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.”

    Unlike the security researchers who undermined the Recall feature that Microsoft planned to release in June with new Copilot+ PCs, Bargury responsibly disclosed the flaws he discovered to the software giant in private. He’s complimentary of the work Microsoft has done securing Copilot, and he’s working with the company to help address the underlying problems.

    “The risks of post-compromise abuse of AI are similar to other post-compromise techniques,” Microsoft head of AI incident detection and response Phillip Misner said of Bargury’s findings. “Security prevention and monitoring across environments and identities help mitigate or stop such behaviors.”

    Microsoft has pushed its AI technologies into the market at an unusually aggressive pace over the past few years. But the worry is that, in doing so, the software giant may have left Copilot open to attack and abuse. Gaining the upper hand against competitors isn’t just about speed, after all: if Copilot is found to be unsafe, corporations will ignore it, and those that have adopted it will drop it.

    Not coincidentally, Microsoft this past week described the “red teaming” it does to emulate real-world attacks against its AI systems so that it can proactively protect corporate data. But this is particularly challenging because those systems are changing rapidly.

    “The practice of AI red teaming not only covers probing for security vulnerabilities, but also includes probing for other system failures, such as the generation of potentially harmful content,” Microsoft AI Red Team lead Ram Shankar Siva Kumar explains. “AI systems come with new risks, and red teaming is core to understanding those novel risks, such as prompt injection and producing ungrounded content. ... Microsoft recently committed that all high-risk AI systems will go through independent red teaming before deployment.”

    Source

