Microsoft Open-Sources Tool for Testing AI Models in Any Cloud Environment

Microsoft’s new open-source tool is poised to help organizations protect their artificial intelligence (AI) systems against adversarial machine-learning attacks. Developers can use the tool, called Counterfit, to test the security of AI systems hosted in any cloud environment, on-premises, or on edge networks.

The Counterfit project is available on GitHub. Microsoft says the project was prompted by an earlier study that found most organizations lack the tools to address adversarial attacks against their machine-learning models.

“This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities with the goal of proactively securing AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative,” Microsoft said in a blog post.

The new command-line tool from Microsoft is a “generic automation tool to attack multiple AI systems at scale.” Microsoft’s own red team has used it to test the company’s AI models, and Microsoft says Counterfit can also be used during the AI development phase.

Developers can run Counterfit via Azure Shell from a browser or install it locally in an Anaconda Python environment.
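
For the local route, the following is a minimal sketch of the setup, based on the installation steps in the project’s GitHub README at the time of release; the environment name, Python version, and repository URL are taken from there, but check the repository for current instructions.

```
# Sketch based on the Counterfit README; verify against the GitHub repo before use
# Create and activate a dedicated Anaconda environment (the README specifies Python 3.8)
conda create --yes -n counterfit python=3.8
conda activate counterfit

# Clone the project and install its dependencies
git clone https://github.com/Azure/counterfit.git
cd counterfit
pip install -r requirements.txt

# Launch the interactive Counterfit terminal
python counterfit.py
```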

Microsoft says the tool is model-agnostic and strives to be data-agnostic: it works with models that use text, images, or generic input.

“Our tool makes published attack algorithms accessible to the security community and helps to provide an extensible interface from which to build, manage, and launch attacks on AI models,” Microsoft says.

The tool could be used, in part, to defend against adversarial machine-learning attacks, in which an attacker tricks a machine-learning model with manipulated input. One example is McAfee’s hack of an older Tesla equipped with Mobileye cameras: researchers tricked the cameras into misreading the speed limit by placing strips of black tape on speed-limit signs. Another is Microsoft’s Tay chatbot, which was tampered with until it tweeted racist comments.

Microsoft says Counterfit’s workflow was also modeled on established offensive frameworks such as Metasploit and PowerShell Empire.

“The tool comes preloaded with published attack algorithms that can be used to bootstrap red team operations to evade and steal AI models,” explains Microsoft.

Developers can use it to perform vulnerability scanning of AI systems and create logs to record attacks against a target model.
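
To show how scanning and logging fit into the workflow, here is a hypothetical session in Counterfit’s Metasploit-style terminal. The command names, the changing prompt, and the creditfraud demo target are drawn from the project’s documentation but should be treated as assumptions rather than a verified transcript; HopSkipJump is one of the published evasion attacks preloaded via the Adversarial Robustness Toolbox (ART).

```
# Hypothetical session; command names are approximations, not a verified transcript
counterfit> list targets           # enumerate the built-in demo targets
counterfit> interact creditfraud   # select a target model to attack
creditfraud> load art              # load attacks from the Adversarial Robustness Toolbox
creditfraud> use HopSkipJump       # choose a published black-box evasion attack
creditfraud> run                   # launch the attack; results are written to the attack logs
creditfraud> scan                  # run a batch of attacks for vulnerability scanning
```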

Microsoft has successfully tested Counterfit with several customers, among them aerospace giant Airbus.

 

About the author

CIM Team

CyberIntelMag is the trusted authority in cybersecurity, comprising leading industry experts with over 20 years of experience, dedicated to serving cybersecurity professionals. Our goal is to provide a one-stop shop for the knowledge and insight needed to navigate today’s evolving cybersecurity landscape through in-depth coverage of breaking news, tutorials, product reviews, videos, and industry influencers.
