Microsoft Error Exposes Confidential Emails to AI Tool Copilot
Microsoft has acknowledged a technical error that caused its AI work assistant, Microsoft 365 Copilot Chat, to mistakenly access and summarise some users' confidential emails.
The company markets Microsoft 365 Copilot Chat as a secure generative AI chatbot designed for workplace use by staff and enterprises.
However, Microsoft revealed that a recent issue led the tool to display information from messages stored in users' drafts and sent email folders, including emails marked as confidential.
In response, Microsoft has deployed an update to resolve the problem and stated that it "did not provide anyone access to information they weren't already authorised to see."
Despite this, experts have cautioned that the rapid pace at which companies are integrating new AI features makes such mistakes inevitable.
Copilot Chat operates within Microsoft applications such as Outlook and Teams, enabling users to obtain answers to queries or summarise messages.
A Microsoft spokesperson said:
"We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop."
They added:
"While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access."
"A configuration update has been deployed worldwide for enterprise customers."
The incident was first reported by tech news outlet BleepingComputer, which cited a service alert confirming the issue.
The alert noted that "users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat."
It further explained that a work tab within Copilot Chat had summarised email messages stored in a user's drafts and sent folders, even when those emails had sensitivity labels and data loss prevention policies configured to block unauthorised data sharing.
Reports indicate Microsoft first became aware of the error in January.
The company also shared a notice about the bug on a support dashboard for NHS workers in England, attributing the root cause to a "code issue."
The notice's appearance on that dashboard suggests the health service was among the organisations affected.
However, Microsoft said the contents of any draft or sent emails processed by Copilot Chat remained with their creators and that no patient information had been exposed.
'Data leakage will happen'
Enterprise AI tools like Microsoft 365 Copilot Chat, which are available to organisations with a Microsoft 365 subscription, typically include stricter controls and security protections to prevent the sharing of sensitive corporate data.
Nonetheless, experts say this incident highlights the risks associated with adopting generative AI tools in certain professional environments.
This sort of fumble is "unavoidable" given the frequent release of "new and novel AI capabilities," said Nader Henein, a data protection and AI governance analyst at Gartner.
He said organisations using these AI products often lack the tools needed to protect themselves and to manage each new feature.
"Under normal circumstances, organisations would simply switch off the feature and wait till governance caught up,"Henein said.
"Unfortunately the amount of pressure caused by the torrent of unsubstantiated AI hype makes that near-impossible,"he added.
Cyber-security expert Professor Alan Woodward, from the University of Surrey, emphasised the importance of making such AI tools private-by-default and opt-in only.
"There will inevitably be bugs in these tools, not least as they advance at break-neck speed, so even though data leakage may not be intentional it will happen,"he told .