State lawmakers seek to regulate AI in government and the private sector

Connecticut is one of the states taking a proactive approach to regulating artificial intelligence (AI) in both government and the private sector. The state plans to inventory all of its government systems that use AI by the end of 2023 and post the information online. And starting next year, state officials must regularly review these systems to ensure they won’t lead to unlawful discrimination.

Connecticut state Sen. James Maroney, a Democrat who has become a go-to AI authority in the General Assembly, said Connecticut lawmakers will likely focus on private industry next year. He plans to work this fall on model AI legislation with lawmakers in Colorado, New York, Virginia, Minnesota and elsewhere that includes “broad guardrails” and focuses on matters like product liability and requiring impact assessments of AI systems.

“It’s rapidly changing and there’s a rapid adoption of people using it. So we need to get ahead of this,” he said in a later interview. “We’re actually already behind it, but we can’t really wait too much longer to put in some form of accountability.”

Other states follow suit with AI bills and resolutions

Connecticut is not alone in its efforts to address the opportunities and challenges posed by AI. According to the National Conference of State Legislatures (NCSL), at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills this year. As of late July, 14 states and Puerto Rico had adopted resolutions or enacted legislation. The list doesn’t include bills focused on specific AI technologies, such as facial recognition or autonomous cars, which NCSL tracks separately.

Some of the common themes among the state-level AI bills and resolutions are:

The need for federal action on AI

While state lawmakers are trying to get a handle on fast-evolving AI technology, some experts argue that there is also a need for federal action on AI. They point out that AI has implications for national security, economic competitiveness, civil rights and human dignity that transcend state boundaries.

For example, the Center for Democracy & Technology (CDT), a nonprofit advocacy group, has called for a national strategy on AI that includes:

  • A coordinated federal research agenda that prioritizes public interest issues such as fairness, accountability and privacy.
  • A robust regulatory framework that ensures AI is developed and deployed in a manner consistent with human rights and democratic values.
  • A comprehensive education and workforce development plan that prepares Americans for the opportunities and challenges of an AI-driven economy.
  • Strong international engagement that promotes global cooperation and standards on AI.

CDT also urges Congress to pass legislation that would establish an independent commission on AI to provide guidance and oversight for federal agencies and policymakers.

“AI is transforming every aspect of our society, from health care to education to transportation. We need a national strategy that ensures AI serves the public interest, respects human dignity and protects our democracy,” said Alexandra Givens, president and CEO of CDT.
