AI Hearing: Encourage Research, Beware Pitfalls
When it comes to artificial intelligence, California lawmakers on Tuesday received a clear message: Its potential to transform society is extremely exciting, but great care must be taken.
“We are at the threshold between AI being a new development that promises significant growth of the economy and AI also causing significant disruptions in society,” warned Olaf Groth, a professor of global strategy, innovation and economics at Hult International Business School.
The role of state government in the emerging field of artificial intelligence, otherwise known as AI, was the subject of a more than three-hour joint legislative hearing held by the Assembly Privacy and Consumer Protection Committee and the Assembly Select Committee on Emerging Technologies and Innovation.
It was convened to educate lawmakers and the public about AI — its potential to spur economic growth, create higher-wage jobs, assist law enforcement and transform medicine, to name a few of the benefits. But there is also its capacity to displace workers, infringe on people’s privacy and generate biased data that could affect policing or policy decisions.
“Our approach to technology policy in the state has the potential of not only informing the rest of the country, but the rest of the world as well,” Assemblyman Ed Chau, who chaired the hearing, told his colleagues. “We as legislators must facilitate that dialogue to find a balance between protecting our state’s population while simultaneously driving innovation and productivity as a leader in the global marketplace.”
When it comes to artificial intelligence, the array of technologies is vast — speech and image recognition, language translation and predictive analytics. There are self-driving cars and computers winning chess games. But the artificial intelligence of a machine acting like a human — often depicted by Hollywood movies — is decades away, industry experts say.
Nevertheless, policymakers in California and Washington, D.C., are seeking to understand what role, if any, government should have in the development, deployment and regulation of artificial intelligence.
California's Little Hoover Commission is studying the impacts of artificial intelligence, with a project intended to investigate how quickly and in what ways AI is changing California, its economic impact, and how it might be used to solve societal problems.
And in December, California Reps. Ted Lieu and Mark DeSaulnier co-authored legislation that would create a federal advisory commission to study artificial intelligence issues.
The state itself is already deploying artificial intelligence in its defensive efforts against cyberattacks, using machine learning programs to chew through massive amounts of data to try to identify malicious and suspicious activities, Peter Liebert, California’s chief information security officer, told the panel.
“That helps our analysts to be able to do what they do,” Liebert said. “It’s a continuously evolving skill set. It’s something that has to be tuned and constantly monitored, but it definitely has helped.”
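The approach Liebert describes — machine learning sifting large volumes of security data to surface suspicious activity for human analysts — is, at its simplest, a form of statistical anomaly detection. The sketch below is illustrative only, not the state's actual system: it flags hours whose failed-login counts sit far above the historical norm, using hypothetical data and a threshold chosen for the example.

```python
import statistics

# Hypothetical hourly counts of failed login attempts on a network segment.
# The final value simulates a burst consistent with a brute-force attack.
hourly_failures = [12, 9, 14, 11, 10, 13, 8, 15, 11, 97]

def flag_anomalies(counts, threshold=2.5):
    """Return counts more than `threshold` standard deviations above the mean.

    A real system would use richer features and models; this shows only
    the core idea of scoring events against a learned baseline.
    """
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [c for c in counts if (c - mean) / stdev > threshold]

print(flag_anomalies(hourly_failures))  # the 97-failure hour stands out
```

As Liebert notes, such a detector must be continuously tuned: thresholds that are too tight bury analysts in false positives, while loose ones let tailored attacks slip through.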
On the flip side, however, is the growing use of artificial intelligence by cyberattackers, whose attacks will become faster, more tailored and harder to attribute, Liebert warned.
Industry and business leaders say government has a role in ensuring that workers — especially displaced workers — get digital training, and in requiring that students as young as elementary school age be exposed to computer science. The current focus in schools puts California at a competitive disadvantage globally, said Robert Atkinson, president of the Information Technology and Innovation Foundation.
“It’s striking that in California, for example, a larger share of its students take pottery in high school than take computer science,” Atkinson said. “This is a problem. We need more of our students taking computer science.”
By comparison, computer science is a requirement for elementary school students in China, he added.
Broader education is also key to helping the public understand artificial intelligence, with its use fully explained and methods made transparent so that the public and policymakers can trust it, advocates say.
“We need to get ahead of this and make sure that AI facilitates our human fabric, our trust, our prosperity … human growth and human development,” said Berkeley’s Groth. “AI needs to facilitate us. We shouldn’t serve AI.”
“At the end of the day,” he said, “this should be about us creating a better society solving society’s problems rather than being run by the machines.”