Google has formed the Coalition for Secure AI (CoSAI) to advance security measures for AI, expanding on its previously announced Secure AI Framework (SAIF). The group aims to create collaborative, open-source solutions to ensure AI security, with companies like Amazon, IBM, Microsoft, NVIDIA, and OpenAI joining the initiative.
According to Google: “AI needs a security framework and applied standards that can keep pace with its rapid growth. That’s why last year we shared the Secure AI Framework (SAIF), knowing that it was just the first step. Of course, to operationalize any industry framework requires close collaboration with others - and above all a forum to make that happen.”
This move is part of a broader trend of industry groups forming around safe and secure AI development. Examples include the Frontier Model Forum (FMF), which aims to establish industry standards and best practices for AI development, with participation from Meta, Amazon, Google, Microsoft, and OpenAI; Thorn's ‘Safety by Design’ program, which focuses on responsibly sourced AI training datasets that guard against child sexual abuse material and is supported by Meta, Google, Amazon, Microsoft, and OpenAI; and the Tech Accord to Combat Deceptive Use of AI, in which representatives from major tech companies agreed to take precautions against AI tools being used to disrupt democratic elections.
While these forums and agreements are positive steps toward safer AI development, they are voluntary commitments by AI developers and are not legally enforceable. Critics argue that such measures serve mainly to delay stricter regulation. Officials in the EU and other regions are evaluating AI's potential harms and considering binding regulatory frameworks, including financial penalties for non-compliance.