Microsoft's AI research division accidentally leaked 38 terabytes of sensitive data starting in July 2020 while contributing open-source AI learning models to a public GitHub repository.
Microsoft has learned an important lesson after having to clean up a major data leak resulting from an “overly permissive” shared access signature (SAS) token accidentally disclosed by one of its researchers.
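A SAS token is a signed query string appended to an Azure Storage URL that delegates access to storage resources; the scope of that delegation is set entirely by whoever generates the token. As a rough illustration of the difference between a tightly scoped token and an account-wide, full-permission one of the kind described here (this is a sketch, not Microsoft's actual configuration, and the account, container, and blob names are hypothetical), a read-only, time-limited token for a single blob can be issued with the azure-storage-blob Python SDK:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Hypothetical names and a placeholder key, for illustration only.
ACCOUNT_NAME = "exampleresearchstore"
ACCOUNT_KEY = "A" * 64  # replace with the real base64 storage account key
CONTAINER = "published-models"
BLOB = "model-checkpoint.ckpt"

# Grant read-only access to one blob, expiring after 24 hours,
# rather than full read/write/delete access to the whole storage account.
sas_token = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)

# The resulting URL exposes only this blob, only for reading, only until expiry.
url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas_token}"
print(url)
```

Because the token's permissions, scope, and expiry are baked into the signature at generation time, a token minted with broader rights than intended cannot be narrowed after the fact; it has to be revoked by rotating the account key.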