Microsoft researchers leak 38TB of sensitive data due to misconfigured storage on GitHub


    Karlston

As AI projects involve massive datasets, accidental exposures become more common as data is shared between teams. It was recently reported that Microsoft accidentally exposed "tens of terabytes" of sensitive internal data online due to a misconfigured cloud storage access setting.

     

Cloud security firm Wiz discovered that an Azure storage container linked from a GitHub repository used by Microsoft AI researchers had an overly permissive shared-access-signature (SAS) token assigned. The token gave anyone who accessed the storage URL full control over the entire storage account, not just the files intended for sharing.
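The danger of an SAS token lies in how broadly it is scoped and how long it lives. As a rough sketch (not Microsoft's actual code; the account name, key, and container below are placeholders), here is how the Azure Python SDK distinguishes an account-wide, long-lived SAS from a narrowly scoped, short-lived one:

```python
# Hypothetical illustration, not Microsoft's actual code: the account name,
# key, and container names are placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    ContainerSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_container_sas,
)

ACCOUNT_NAME = "examplestorage"
ACCOUNT_KEY = "<account-key>"

# Overly permissive: read/write/delete/list over every container in the
# account, valid for years -- roughly the class of token Wiz described.
risky_sas = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, delete=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365 * 10),
)

# Least privilege: read-only access to one container, expiring in an hour.
scoped_sas = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name="shared-models",
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
```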

     

For those not familiar, Azure Storage is a service that lets you store data as files, disks, blobs, queues, or tables. The exposed data amounted to 38 terabytes of files, including personal backups from two Microsoft employees' workstations that contained passwords, secret keys, and more than 30,000 internal Microsoft Teams messages.
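To make the exposure concrete: a blob container URL with an embedded SAS token is a self-contained credential, so anyone who finds the URL can enumerate its contents without any separate login. A minimal, hypothetical sketch (placeholder URL, not the one from the incident):

```python
# Hypothetical sketch: a container URL with an embedded SAS token is itself
# the credential, so no separate authentication is needed to read the data.
from azure.storage.blob import ContainerClient

# Placeholder URL; the query string is where the SAS token lives.
sas_url = "https://examplestorage.blob.core.windows.net/research?sv=2021-08-06&sig=REDACTED"

container = ContainerClient.from_container_url(sas_url)
for blob in container.list_blobs():
    print(blob.name, blob.size)
```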

     

    The data had been accessible since 2020 due to the misconfiguration. Wiz notified Microsoft of the issue on June 22, and the company revoked the SAS token two days later.
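Revoking an SAS token is less direct than it sounds: an ad-hoc SAS is signed with a storage account key and leaves no server-side record, so invalidating it means rotating the key that signed it (or deleting the stored access policy, if one was used). A minimal sketch with the Azure management SDK, assuming placeholder resource and subscription names:

```python
# Hypothetical sketch; resource group, account, and subscription are
# placeholders. Rotating key1 invalidates every ad-hoc SAS it signed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
client.storage_accounts.regenerate_key(
    resource_group_name="example-rg",
    account_name="examplestorage",
    regenerate_key={"key_name": "key1"},
)
```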

     

An investigation found that no customer data was involved. However, the exposure could have allowed malicious actors to delete, modify, or inject files into Microsoft's systems and internal services over an extended period.

     

In a blog post, Microsoft wrote:

     

    No customer data was exposed, and no other internal services were put at risk because of this issue. No customer action is required in response to this issue... Our investigation concluded that there was no risk to customers as a result of this exposure.

     

In response to Wiz's findings, Microsoft has enhanced GitHub's secret scanning service. Microsoft's Security Response Center said it will now monitor all public open-source code modifications for instances where credentials or other secrets are exposed in plain text.
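GitHub's production scanner is proprietary, but the core idea can be illustrated with a toy check (hypothetical names throughout): SAS URLs are easy to flag because every one carries the 'sv' (service version) and 'sig' (signature) query parameters:

```python
# Toy illustration of secret scanning, not GitHub's actual implementation.
import re

# Candidate blob-storage URLs; quotes and whitespace end the match.
BLOB_URL = re.compile(r"https://[\w-]+\.blob\.core\.windows\.net/[^\s\"']+")

def find_sas_urls(text: str) -> list[str]:
    """Flag blob URLs whose query string carries both the 'sv' (service
    version) and 'sig' (signature) parameters typical of an SAS token."""
    return [u for u in BLOB_URL.findall(text) if "sv=" in u and "sig=" in u]

# Example: scanning file content before it lands in a public repository.
sample = 'MODEL_URL = "https://examplestorage.blob.core.windows.net/data?sv=2021-08-06&sig=abc123"'
print(find_sas_urls(sample))
# ['https://examplestorage.blob.core.windows.net/data?sv=2021-08-06&sig=abc123']
```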

     

In an interview with TechCrunch, Wiz co-founder Ami Luttwak said:

     

    AI unlocks huge potential for tech companies. However, as data scientists and engineers race to bring new AI solutions to production, the massive amounts of data they handle require additional security checks and safeguards.

     

With many development teams needing to manipulate massive amounts of data, share it with their peers, or collaborate on public open-source projects, cases like Microsoft's are increasingly hard to monitor and avoid.

     

    Source: Microsoft's Security Response Center via TechCrunch

     

    Source



    Recommended Comments

    I've said this once, and I think it is worth repeating.

     

    CEO Nadella = Weak Leadership

    Weak Leadership = Weak Security

    Weak Security = Data Leaks

     

    Now what part of Nadella running Microsoft Corp. into the ground do you not understand?
