Microsoft launches open source tool Counterfit to help secure AI systems against attack
Microsoft has launched an open source tool to help developers assess the security of their machine learning systems.
The Counterfit project, now available on GitHub, comprises a command-line tool and a generic automation layer that let developers simulate cyber attacks against AI systems. Microsoft's red team has used Counterfit to test the company's own AI models, and the wider company is also exploring use of the tool in AI development.
Anyone can download the tool and deploy it through Azure Shell, to run in-browser, or locally in an Anaconda Python environment. It can assess AI models hosted in various cloud environments, on-premises, or at the edge. Microsoft also highlights the tool's flexibility: it is agnostic to AI models and supports a variety of data types, including text, images, and generic input.
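For the local route, a setup in an Anaconda Python environment would look roughly like the following sketch. The repository URL, environment name, and Python version shown here are assumptions based on the project's public GitHub page at launch and may have changed; consult the Counterfit README for the current steps.

```shell
# Fetch the Counterfit source (assumed public repository URL)
git clone https://github.com/Azure/counterfit.git
cd counterfit

# Create and activate an isolated Anaconda environment
# (environment name and Python version are illustrative)
conda create --yes -n counterfit python=3.8
conda activate counterfit

# Install the project's dependencies and launch the command-line tool
pip install -r requirements.txt
python counterfit.py
```

From the resulting interactive prompt, developers can point the tool at a target model and run simulated attacks against it, whether the model is hosted in the cloud, on-premises, or at the edge.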