Connecticut is one of the states that has taken a proactive approach to regulating artificial intelligence (AI) in both the government and the private sector. The state plans to inventory all of its government systems that use AI by the end of 2023 and post the information online. Starting next year, state officials must regularly review these systems to ensure they won't lead to unlawful discrimination. [1]
Connecticut state Sen. James Maroney, a Democrat who has become a go-to AI authority in the General Assembly, said Connecticut lawmakers will likely focus on private industry next year. He plans to work this fall on model AI legislation with lawmakers in Colorado, New York, Virginia, Minnesota and elsewhere that includes “broad guardrails” and focuses on matters like product liability and requiring impact assessments of AI systems. [1]
“It’s rapidly changing and there’s a rapid adoption of people using it. So we need to get ahead of this,” he said in a later interview. “We’re actually already behind it, but we can’t really wait too much longer to put in some form of accountability.” [1]
Other states follow suit with AI bills and resolutions
Connecticut is not alone in its efforts to address the opportunities and challenges posed by AI. According to the National Conference of State Legislatures (NCSL), at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills this year. As of late July, 14 states and Puerto Rico had adopted resolutions or enacted legislation. [1] The list doesn’t include bills focused on specific AI technologies, such as facial recognition or autonomous cars, which NCSL is tracking separately.
Some of the common themes among the state-level AI bills and resolutions are:
- Creating advisory bodies to study and monitor the AI systems used by state agencies, as in Texas, North Dakota, West Virginia and Puerto Rico. [1]
- Forming new technology and cybersecurity committees to study AI’s impact on state operations, procurement and policy, as in Louisiana. [1]
- Establishing ethical principles and guidelines for the development and use of AI, as in Utah, Hawaii and New Jersey. [2]
- Requiring transparency and accountability in AI decision-making, as in Washington, Illinois and Maryland. [2]
- Addressing potential bias and discrimination caused by AI, as in California, New York and Oregon. [2]
The need for federal action on AI
While state lawmakers are trying to get a handle on fast-evolving AI technology, some experts argue that there is also a need for federal action on AI. They point out that AI has implications for national security, economic competitiveness, civil rights and human dignity that transcend state boundaries. [3]
For example, the Center for Democracy & Technology (CDT), a nonprofit advocacy group, has called for a national strategy on AI that includes:
- A coordinated federal research agenda that prioritizes public interest issues such as fairness, accountability and privacy.
- A robust regulatory framework that ensures AI is developed and deployed in a manner consistent with human rights and democratic values.
- A comprehensive education and workforce development plan that prepares Americans for the opportunities and challenges of an AI-driven economy.
- Strong international engagement that promotes global cooperation and standards on AI.
CDT also urges Congress to pass legislation that would establish an independent commission on AI to provide guidance and oversight for federal agencies and policymakers.
“AI is transforming every aspect of our society, from health care to education to transportation. We need a national strategy that ensures AI serves the public interest, respects human dignity and protects our democracy,” said Alexandra Givens, president and CEO of CDT.