
Commentary: Consider AI's Inhibitors, Strengths and Quick Wins

Commentary: The ethics of artificial intelligence and machine learning are among the considerations that government must keep in mind as it explores new uses for these technologies, says IT veteran Robbie Schmidt.

Every day, artificial intelligence (AI) and machine learning (ML) are finding new applications in government: customer service, fraud detection, dispatch optimization and digital health are among the fields already using these technologies, and more will follow. Only in recent years has AI moved from a minor specialty of computer science academics to a technology that will change the world as we know it.

That said, I first programmed AI in Prolog in 1991, so I have come full circle. Having served 25 years in the public sector, and now working in academics and early-stage startups, I have an exciting opportunity to bridge the gap between AI and business. It is from this vantage point that I will explore AI and ML's inhibitors, strengths and quick wins.

A key area of common concern about AI and ML use in government is that the public sector requires algorithmic transparency on behalf of the citizens who entrust it with their data. AI and ML algorithms are commonly referred to as a “black box”: data goes in and an answer flows out, but in most cases it is unknown how the answer was derived, what justifies it, or whether the result could be mathematically proven. An academic research team from UC Berkeley, the University of Amsterdam and Facebook AI Research is seeking to change just that by requiring AI to provide a justification for its decisions. That is exactly what the public sector will need: AI applications that are “white box.” In the public sector, a “white box” has its inner components or logic available for inspection, audit and traceability. The public expects this level of transparency, and it is up to government and industry to deliver on this requirement.
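To make the distinction concrete, here is a minimal sketch, assuming the scikit-learn library and its bundled iris data set: a shallow decision tree is one example of a “white box” model, because every rule it learns can be printed and audited. (This is purely illustrative, not the Berkeley team's approach.)

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small, fully inspectable model on a standard example data set.
data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Every decision path in the tree can be printed, traced and audited,
# which is what "white box" means in practice.
print(export_text(model, feature_names=list(data.feature_names)))
```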

A second area of interest is data sets. AI and ML typically require three data sets: training, validation and test. Work usually starts with one large data set: most of the data is devoted to training, a smaller portion to validating model fit and tuning parameters, and a final portion to testing the finished model. In other words, AI prefers a large data set to work with. Fortunately, California’s open data portal is a data scientist's gold mine, with more than 1,432 data sets across California state government and more open data added over time.
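As a minimal sketch, assuming the pandas and scikit-learn libraries and a synthetic table standing in for real open data, the conventional three-way split can be produced with two calls to train_test_split:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a table downloaded from an open data portal.
df = pd.DataFrame({"feature": np.arange(1000), "label": np.arange(1000) % 2})

# Carve off 30% as a holdout, then split the holdout in half,
# yielding roughly a 70/15/15 train/validation/test split.
train, holdout = train_test_split(df, test_size=0.30, random_state=42)
validation, test = train_test_split(holdout, test_size=0.50, random_state=42)

print(len(train), len(validation), len(test))  # 700 150 150
```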

Third is public appetite for AI in government services. According to an Accenture survey, “The survey of more than 6,000 citizens from the United States, Australia, the United Kingdom, Singapore, France and Germany found that more than half (54 percent) of citizens said they are willing to use AI services delivered by government, with even more expressing willingness when presented with the potential benefits derived from AI.”

I believe the AI “killer app” for the public sector has yet to be developed but will likely involve citizen engagement, customer service and empowerment of the constituent to perform what government does routinely on their behalf. 

I recently explored this use case and developed a text and voice customer service chatbot for a learning system platform at Georgia Tech on behalf of a nonprofit, Yellow Circle. What I discovered is that chatbots based on AI and Natural Language Processing (NLP) require a new approach to end-user design and testing. In this use case, the target audience was between 13 and 18 years old, so I had to consider the nuances of language, text and voice platforms. My research findings noted that chatbots could provide a 24/7/365 communication channel in multiple languages across several social media and voice platforms rapidly and at a nominal cost.
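That chatbot is not reproduced here, but a hypothetical sketch of the simplest possible core of such a system, keyword-based intent matching, shows the basic shape. The intents, keywords and responses below are invented for illustration; production NLP platforms replace this lookup with statistically trained intent classifiers.

```python
# Invented intents and keyword lists; a real system would learn these.
INTENTS = {
    "hours": ("open", "hours", "close"),
    "password": ("password", "reset", "login"),
}

RESPONSES = {
    "hours": "The lab environment is available 24/7/365.",
    "password": "You can reset your password from the login page.",
    None: "Sorry, I didn't catch that. Could you rephrase?",
}

def classify(message: str):
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None

print(RESPONSES[classify("What are your hours?")])  # hours intent matched
```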

Fourth are ethical principles. Last week, Google CEO Sundar Pichai announced “AI at Google: our principles,” a primer on current AI issues. Those principles include “Be socially beneficial,” “Avoid creating or reinforcing unfair bias,” “Be accountable to people,” “Incorporate privacy design principles” and “Be made available for uses that accord with these principles,” all of which apply to business and government alike. We technologists need to recognize our obligation to uphold ethics in AI and to ensure that the development of AI and ML algorithms embraces ethical principles from the start. Government needs to establish ethical principles for its own use of AI; doing so will help the public sector weigh the risks of AI and ML against their benefits.

AI has been around since the 1950s, but only in recent years has it received this level of attention and investment. Those of us in the public sector need to learn where AI and ML can deliver quick wins. By weighing the risks against the benefits, the public sector can find solutions to problems that were previously impossible to solve.

In time, who knows? AI and ML may even help discover a cure for cancer.

Robbie Schmidt is ranked a Top 10 AI influencer. He is on Twitter as @iamrws, and is a tech co-founder and keynote speaker. He is a graduate researcher in computer science at the Georgia Institute of Technology with a specialization in machine learning; his current research involves 3-D motion and control software for autonomous flight and flying transportation. Schmidt previously served as chief of the California Department of Technology's Office of Technology Services and as an agency chief information officer (AIO). He can be reached at rschmidt34@gatech.edu.