Microsoft’s AI team accidentally leaks 38TB of private company data

AI researchers at Microsoft made a big mistake.
According to a new report from cloud security company Wiz, Microsoft’s AI research team accidentally leaked 38TB of the company’s private data.
38 terabytes. That is a lot of files.
The exposed data included full backups of two employees’ computers. These backups contained sensitive personal data, including passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from more than 350 Microsoft employees.
So how did this happen? The report explains that Microsoft’s AI team had uploaded a large batch of training data containing open source code and AI models for image recognition. Users who came across the GitHub repository were provided with a download link to Azure Storage, Microsoft’s cloud storage service, in order to download the models.
One problem: The link provided by Microsoft’s AI team gave visitors full access to the entire Azure storage account. And visitors could not only see everything in the account, but also upload, overwrite, or delete files.
Wiz says this is due to an Azure feature called Shared Access Signature (SAS) tokens, which the company describes as “a signed URL that grants access to Azure Storage data.” A SAS token can be configured to restrict access to specific files only. This particular link, however, was configured to grant full access to the whole account.
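To illustrate the difference, here is a minimal sketch using the azure-storage-blob Python SDK. The account, container, key, and blob names are placeholders, and this is not Microsoft’s actual configuration. The first token is scoped to reading a single file for one hour; the second is an account-level token that grants read, write, delete, and list permissions across the entire storage account for years, roughly the kind of over-permissive link described in the report.

```python
from datetime import datetime, timedelta

from azure.storage.blob import (
    AccountSasPermissions,
    BlobSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_blob_sas,
)

ACCOUNT_NAME = "exampleaccount"   # placeholder storage account
ACCOUNT_KEY = "<account-key>"     # placeholder key; never publish real keys
CONTAINER = "models"              # placeholder container name
BLOB = "image-recognition.ckpt"   # placeholder blob name

# Narrowly scoped SAS: read-only, a single blob, expires in one hour.
scoped_sas = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

# Over-permissive account SAS: read/write/delete/list on the whole
# account, valid for years -- the kind of token the report warns about.
risky_sas = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(
        read=True, write=True, delete=True, list=True
    ),
    expiry=datetime.utcnow() + timedelta(days=365 * 10),
)

# Anyone holding these URLs gets the corresponding level of access.
scoped_url = (
    f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}/{BLOB}?{scoped_sas}"
)
risky_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/?{risky_sas}"
```

In this sketch, sharing the first URL exposes only one model file for a limited window, while sharing the second hands out the entire account, including write and delete rights, until the distant expiry date.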
Adding to the potential problems, according to Wiz, is that this data appears to have been exposed since 2020.
Wiz contacted Microsoft earlier this year, on June 22, to warn them of their discovery. Two days later, Microsoft invalidated the SAS token, solving the problem. Microsoft conducted and completed an investigation into the possible impact in August.
Microsoft provided a statement to TechCrunch, claiming, “No customer data was exposed and no other internal services were compromised as a result of this issue.”
Topics: Cybersecurity, Microsoft