
AI in Government: Experts Are of Two Minds

As a California commission with the power to recommend policy to the state Legislature studies artificial intelligence, it is getting mixed signals about just how heavy-handed the state should be in regulating the technology. Some experts think the state should wait to regulate AI until it understands the technology better, while others think it should start regulating sooner rather than later.

The Little Hoover Commission, tasked with studying California government operations and emerging economic issues, called together several experts at the state Capitol on Thursday to ask their opinions on the matter. They ran the gamut from industry association representatives to university professors.

Generally the speakers said it’s good for the state to turn to a broad coalition of AI experts to better understand the technology before moving forward. But Annette Bernhardt, a senior researcher for the Institute for Research on Labor and Employment at the University of California, Berkeley, said the state should be regulating AI sooner rather than later.

“There’s a lot of economic research coming out that is suggesting that left alone, our path forward will be growing inequality (and) greater polarization, and yet it’s not inevitable,” she said. “This is the great thing about the McKinsey (report). We have some on-ramp here, and so this is exactly the time to jump into the policy arena.”

In some areas, the use of AI is more sensitive than others, pointed out Ryan Calo, an associate professor at the University of Washington School of Law. For example, using AI to teach somebody to play chess better is relatively harmless. But in cases where AI technology can decide whether or not somebody goes to jail — like predictive policing algorithms, which some have accused of amplifying bias against minorities — the state needs to start considering things like equality and due process.

And yet, Calo said, the same technology can cut both ways. Where it could send police officers to minority neighborhoods more often, it might also be turned inward to help police departments identify officers most likely to use excessive force before something bad happens.

Ultimately, he said, he thinks that the state needs to better understand the technology and see what effects it has on society before heavily regulating it.

And while the government is still learning about the technology, he thinks it should hold off on using it internally.

“Right now these systems are either not doing what they’re supposed to be doing or they’re doing things in ways that allow the companies that are selling them to hide behind trade secrets, so what I would say … is that California should not deploy these systems in any aspect of government until it really feels like it understands what the system does, and that the system is amenable to the kinds of guarantees and processes and procedures that we have made formally to our citizens,” Calo said.

Robert Atkinson, president of the Information Technology and Innovation Foundation, urged the state to, in so many words, back off.

“I would argue we really need to do two separate things. Go slower on regulation — they’re just brand new technologies, you don’t know where they’re going, you don’t have any clue what they’re going to do and how well and where,” Atkinson said. “So slow down on regulation, but speed up on helping workers transition. That’s work that has to get done now, because we do know with pretty strong confidence there will be more labor market disruption.”

Instead, he recommended the state do two things: focus on setting up paths for workers to retrain, and establish an outfit under the state CIO's office to look for ways the government can use AI.

“Bring in 10 really great AI people who understand the use of AI in organizations, and then systematically go through and find the low-hanging fruit where you can apply AI in California government organizations — the ones where there’s less political sensitivity around putting somebody in jail, but the ones that would be easy to do,” he said.

What virtually all the speakers agreed on was that the state should begin looking into education initiatives sooner rather than later. A representative of the consultancy McKinsey and Co. said wholesale job market destruction is not imminent; perhaps 15 percent of work might be automated within the next couple of decades.

The problem is that if 15 percent of workers find themselves out of a job, the state lacks institutions to retrain them en masse — most people finish their education by their early 20s and never look back.

“I think the U.S. will need to figure out how to get employers to invest in on-the-job training again,” said Bernhardt. “The public education system is not going to be able to do this by itself. U.S. employers invest in on-the-job training the least of any developed country. That is one of the big nuts we’re going to have to crack.”

The meeting was the first on AI for the commission. When studying issues, the commission typically holds several meetings and public hearings over a process that can stretch from several months to more than a year, according to LHC Deputy Executive Director Terri Hardy. At the end of the study, the commission will put out a report with recommendations for how the state should proceed on AI policy.

Ben Miller is the associate editor of data and business for Government Technology.