The National Security Agency (NSA) has announced the establishment of a new Artificial Intelligence Security Center (AISC) that will serve as the focal point for developing and integrating secure and responsible AI capabilities across U.S. national security systems and the defense industrial base. The AISC will also leverage the NSA’s foreign intelligence insights and collaborate with industry, academia, and international partners to promote best practices and standards for AI security.
The AISC was unveiled by NSA Director and U.S. Cyber Command Commander Gen. Paul Nakasone on Thursday, September 28, 2023, at an event hosted by the National Press Club in Washington, D.C. Nakasone said that the AISC will be part of the NSA’s Cybersecurity Collaboration Center (CCC), which was established in 2020 to enhance the cybersecurity of the U.S. defense and intelligence sectors, as well as their critical suppliers and partners.
Nakasone said that the AISC will have four main functions:
- To provide guidance, principles, evaluation methodology, and risk frameworks for AI security, based on the NSA’s expertise and experience in securing national security systems and information.
- To conduct research and development on AI security, including identifying and mitigating vulnerabilities, threats, and risks associated with AI technologies and applications.
- To support the secure adoption and integration of AI capabilities across the national security enterprise and the defense industrial base, by providing technical assistance, training, and certification.
- To engage with U.S. industry, national labs, academia, the intelligence community, the Department of Defense, and select foreign partners, to share information, best practices, and lessons learned on AI security, and to foster innovation and collaboration.
Nakasone emphasized the importance and urgency of securing AI, as the technology is becoming increasingly consequential for national security, diplomacy, technology, and the economy. He said that the U.S. currently leads in AI, but that this lead should not be taken for granted: the U.S. faces fierce competition and challenges from its adversaries, especially China, which has stolen and exploited U.S. intellectual property to advance its own interests and capabilities in AI.
Nakasone said that the U.S. must ensure that its AI capabilities are developed and used in a responsible and ethical manner, and that they are protected from sabotage, manipulation, or misuse by malicious actors. He said that the NSA, as the nation’s premier cryptologic and cybersecurity agency, has a unique and vital role to play in securing AI, and that the AISC will help the NSA fulfill this role.
Nakasone also highlighted some of the recent initiatives and achievements of the NSA and the U.S. government in advancing and securing AI, such as:
- Updating the 2012 directive that governs the responsible development of autonomous weapon systems, to align with the standards and advances in AI.
- Publishing the Responsible AI Strategy and Implementation Pathway, which outlines the vision, goals, and actions for the Department of Defense to lead in the development and use of trustworthy AI.
- Introducing a political declaration on the responsible military use of AI, which seeks to establish norms and principles for the ethical and lawful use of AI in military operations.
- Disclosing and patching two zero-day vulnerabilities affecting the WebP and WebM media formats (in the libwebp image library and the libvpx video codec library, respectively), which were being exploited in the wild by a commercial surveillance vendor.
Nakasone concluded that the AISC is a “significant step” for the NSA and the U.S. in securing AI and maintaining U.S. leadership and competitive edge in this critical domain. He said that the AISC will continue to monitor and investigate the AI security landscape and its implications, and to hold responsible parties accountable for any malicious or harmful activities involving AI.