ORNL is home to the world's fastest exascale supercomputer, Frontier, which was built in part to facilitate energy-efficient and scalable AI-based algorithms and simulations. Credit: Carlos Jones/ORNL, U.S. Dept. of Energy
- In October, President Biden signed an executive order outlining how the United States will promote safe, secure and trustworthy AI.
- It supports the creation of standards, tools and tests to regulate the field, alongside cybersecurity programs that can find and fix vulnerabilities in critical software.
- The executive order aligns nicely with ORNL's AI Initiative, which supports the field's development by connecting subject matter experts with the laboratory's resources to develop secure, trustworthy and energy-efficient AI for scientific discovery, experimental facilities and national security applications.
As artificial intelligence technologies improve, they increase the efficiency and capabilities of research across the scientific spectrum. Because of the rapid pace of the field, AI tools must be developed sustainably, a guiding principle for the Department of Energy's Oak Ridge National Laboratory throughout its 40 years of AI research. Now, its extensive array of resources is supporting the nation as it harnesses the power of these transformative technologies.
In October, President Biden signed an executive order outlining how the United States will promote safe, secure and trustworthy AI. The order establishes various requirements for AI across industry, academia, national laboratories and other federal institutions. It supports the creation of standards, tools and tests to regulate the field, alongside cybersecurity programs that can find and fix vulnerabilities in critical software.
Other goals of the executive order include:
- Developing tools to understand and mitigate the risks of AI
- Establishing a pilot program to enhance training programs for scientists
- Reducing risks at the intersection of AI and chemical, biological, radiological and nuclear, or CBRN, threats
- Developing guidelines, standards and best practices for AI safety and security
- Expanding new capabilities in AI to accelerate progress and identifying the pressing need for scientific grounding in areas such as bias, transparency, security and validation
The executive order aligns well with ORNL's AI Initiative, which supports the field's development by connecting subject matter experts with the laboratory's resources. "The overarching goal is to develop secure, trustworthy and energy-efficient AI for scientific discovery and experimental facilities and national security applications," said Prasanna Balaprakash, director of AI programs at ORNL. "The initiative empowers systems that align with both the scientific objectives and goals, creating technologies that support ethical and societal goals."
The Oak Ridge Leadership Computing Facility, or OLCF, is an important resource for the AI community because it enables researchers to tackle a large range of the most complex scientific questions and was constructed in part to facilitate AI applications. "The OLCF has Frontier, which is the fastest supercomputer in the world and the first to break the exascale barrier," said Balaprakash. "Its expansive and energy-efficient power gives us the capability to train large AI models in a responsible way."
Further, ORNL established the Center for AI Security Research, or CAISER, to address and respond to threats against AI in government and industry. The center supports basic and applied scientific research about the vulnerabilities, risks and national security threats related to AI.
"Some call national laboratories the brains of the federal government," said ORNL's Edmon Begoli, founding director of CAISER. "We take that responsibility seriously. We observe potential vulnerabilities in AI systems, and we work to understand those experimentally and theoretically."
"This executive order highlights areas where ORNL has been in a very strong leadership position for quite a few years," he added. "Earlier this year, CAISER was established as one of the earliest research organizations to study these topics in a scientific setting. We create capabilities to test and evaluate the robustness and vulnerabilities of AI tools and products."
CAISER also provides outreach to inform the public, policymakers and the national security community on the true promise, and potential pitfalls, of AI. Because there's a conception among many that AI is inherently harmful, CAISER works to both protect and educate the public on responsible policies.
Overall, ORNL's community and infrastructure will support the goals and guidance set forth in the recent executive order, helping ensure that this promising technology is developed to be safe, secure and trustworthy.
"AI is completely changing the way that we do science," Balaprakash said. "It's transformative, but it is important to evaluate and develop these models in a much more systematic, rigorous and responsible way that maximizes the potential while minimizing the risks."
UT-Battelle manages ORNL for the Department of Energy's Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science. – Reece Brown