

Microsoft is facing scrutiny after a bug in Microsoft 365 Copilot reportedly allowed the AI assistant to summarise confidential emails without proper authorisation for several weeks. The flaw bypassed organisational Data Loss Prevention (DLP) safeguards, raising concerns about how sensitive corporate information could be processed despite protective labels.
The issue, first detected in late January, affected the work mode of the Copilot chat feature, where the AI system accessed emails stored in users' Sent Items and Drafts folders. Even messages carrying strict confidentiality labels, which are normally meant to block automated access, were incorrectly analysed and summarised because of the error.
Microsoft attributed the problem to a coding issue and said it began rolling out a fix in early February. The company is continuing to monitor the update and is contacting affected customers to confirm the correction works as intended, though it has not disclosed how many were impacted. The incident comes as the firm rapidly expands AI features across its productivity ecosystem.











