A new attack lets hackers steal data from Copilot conversations when the victim simply clicks a legitimate Microsoft link, silently and even when the chatbot is closed.
Reprompt is the name of an attack targeting Copilot, discovered by the research team at Varonis Threat Labs. It lets cyberattackers bypass the AI's security measures and evade detection in order to access conversation data.
The attack relies on a legitimate Microsoft link; the victim only has to click on it. Doing so gives the attackers full control over the victim's Copilot session, even allowing them to query the chatbot for the information they want to steal, as explained in the research shared on the Varonis blog.
The attack hinges on what is known as the “q” parameter of a URL: a query-string parameter appended to a legitimate Microsoft Copilot link whose value Copilot treats as a prompt, submitted directly through the URL instead of being typed manually into the chat.
An example, provided by Varonis Threat Labs, would be: http://copilot.microsoft.com/?q=Hello. Clicking this URL makes Copilot receive the message “Hello.” If a specific question or instruction is supplied instead of a greeting, the AI will process and execute it just the same.
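The mechanics of prefilling a prompt through the “q” parameter can be sketched with standard URL encoding. This is a minimal illustration, not the attack itself: the base URL and parameter name come from the research, while the instruction text and function name are placeholders.

```python
from urllib.parse import urlencode

# Base URL and parameter name as described in the Varonis research.
BASE = "http://copilot.microsoft.com/"

def build_prefilled_link(instruction: str) -> str:
    # urlencode percent-encodes the instruction so the entire prompt
    # travels inside the single "q" query parameter of the link.
    return BASE + "?" + urlencode({"q": instruction})

# A benign greeting reproduces the example from the research:
print(build_prefilled_link("Hello"))
# A longer instruction is encoded the same way; Copilot would
# process it as if the user had typed it into the chat.
print(build_prefilled_link("Summarize my last conversation"))
```

Because the prompt is just another query parameter, nothing about the link's domain or appearance distinguishes it from an ordinary Copilot URL.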
The risk is that cybercriminals can embed a malicious instruction in the “q” parameter and send the link to their victims, who click on it because it appears legitimate; nothing further is required for the attack to succeed.
In Reprompt, that instruction lets the attackers take control of the Copilot session, establishing a continuous exchange between the chatbot and the attackers’ server, from which the information to be exfiltrated is requested.
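The continuous exchange described above can be modeled abstractly as a request/response loop: each round trip delivers stolen data to the attackers’ server, which answers with the next instruction. Everything in this sketch is hypothetical; the real attack’s protocol details are not public, and the instruction list, function names, and payloads here are purely illustrative.

```python
# Hypothetical model of the Reprompt exchange loop: the hijacked
# session repeatedly "beacons" to an attacker server, which logs
# whatever data arrives and replies with the next instruction.
INSTRUCTIONS = [
    "List the topics of my recent conversations",
    "Summarize the most recent conversation",
]

exfiltrated = []  # data the attacker-side server has collected

def handle_beacon(payload: str):
    """Simulate one round trip: record the data sent by the chatbot,
    then return the next instruction, or None when the list is done."""
    exfiltrated.append(payload)
    step = len(exfiltrated) - 1
    return INSTRUCTIONS[step] if step < len(INSTRUCTIONS) else None

# The loop continues until the server has no further instructions,
# which is what makes the exchange "continuous" rather than one-shot.
reply = handle_beacon("initial check-in")
while reply is not None:
    reply = handle_beacon(f"response to: {reply!r}")

print(len(exfiltrated))  # one entry per round trip
```

The point of the model is that a single clicked link can seed an open-ended dialogue: the attacker never touches the victim’s machine again, yet keeps steering the session remotely.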
According to Varonis Threat Labs, Microsoft has already fixed this vulnerability to prevent future exploits.
