This article calls on the U.S. government to implement concrete guidelines for Artificial Intelligence (AI) systems, weighing their potential risks and benefits. Drawing on existing frameworks, the proposed approach centers on strengthening AI safety, testing, and transparency measures.
Commitment to safer AI: A start but not enough
The U.S. government has recently made strides toward promoting safer AI practices. Most notable among these is securing voluntary commitments from the CEOs of major AI companies to follow safety guidelines, including comprehensive testing and independent evaluation of AI systems before deployment. However, these commitments currently lack detailed specifications and are non-binding, raising questions about their effectiveness.
The federal government's position as a major player in the AI field, as both regulator and customer, allows it to significantly influence AI practices. Legislation would certainly help enforce these commitments, but even without it, the federal government's decisions can drive essential changes in the AI landscape.
Blueprint and NIST: Guideposts for AI regulation
Two detailed and comprehensive frameworks, the Blueprint for an AI Bill of Rights and the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST), provide a clear roadmap for regulating AI system development and deployment. These guidelines focus on ensuring system safety, effectiveness, transparency, and bias mitigation, and are rooted in over a decade of responsible AI research and development.
Implementing AI best practices via executive order
An executive order could enforce these AI best practices on multiple fronts. For instance, it could require all government agencies to comply, require vendors supplying AI to federal agencies to adhere to the same standards, mandate compliance by entities that use AI and receive federal funding, and direct regulatory agencies to update their rulemaking processes accordingly. This would not only standardize AI safety measures across the board but also protect individual rights and opportunities.
Some may worry that implementing safety measures will slow AI development, but the importance of these systems to millions of users cannot be overstated. Compliance requirements should be proportionate to the impact of the software; for small businesses, regulations could be tailored to the degree of impact their software has. Furthermore, the federal government's role as a market influencer can encourage the translation of best practices into commercial testing regimes, demonstrating the practicality of these requirements.