Commentary: Speaking the Same Language in Chatbots

"I have been thinking about a theoretical architecture requiring a (chatbot) broker service, a single configuration point in all environments. What are the performance impacts? You might have to wait an extra half a second, but that still beats the hold music on the phone, or waiting in line."

The big problem with a “dynamic chatbot” is applying the same logic across all flavors of virtual assistants, such as Google Assistant or Alexa.

By itself, dynamic processing requires additional configuration for each chatbot or assistant environment. I have been thinking about a theoretical architecture built around a broker service: a single configuration point for all environments.

What are the performance impacts? You might have to wait an extra half a second, but that still beats the hold music on the phone, or waiting in line. It just becomes essential to give the user feedback that the response is on its way.

Also, does this theory only apply to governments? The answer is simple: It does not have to. However, I personally speak from the perspective of a public servant.

This broker service would be charged with handling several complexities. First, to remain system-agnostic, the service would have to handle several request types, replacing the existing webhook API of each platform; a few assistants allow only one webhook API. This is the first of a few strategic issues in building a balanced system that sustains a good level of performance and decent response times.
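
As an illustration only, here is a minimal sketch of what that single configuration point could look like: one broker exposing a webhook-style route per platform and normalizing each proprietary payload into a common internal request. The route names, payload fields and normalize helpers are hypothetical, not any assistant's actual webhook contract.

```python
# Hypothetical broker sketch: one service, one endpoint per assistant platform,
# each payload normalized into a single internal request type. The field names
# below are illustrative, not the real Dialogflow or Alexa schemas.
from dataclasses import dataclass
from flask import Flask, request, jsonify

app = Flask(__name__)

@dataclass
class BrokerRequest:
    """Platform-agnostic request that the rest of the broker operates on."""
    platform: str
    utterance: str
    session_id: str

def normalize_dialogflow(payload: dict) -> BrokerRequest:
    # Illustrative mapping only; real Dialogflow webhook payloads differ.
    return BrokerRequest("dialogflow", payload.get("queryText", ""), payload.get("session", ""))

def normalize_alexa(payload: dict) -> BrokerRequest:
    # Illustrative mapping only; real Alexa request envelopes differ.
    return BrokerRequest("alexa", payload.get("utterance", ""), payload.get("sessionId", ""))

NORMALIZERS = {"dialogflow": normalize_dialogflow, "alexa": normalize_alexa}

@app.route("/webhook/<platform>", methods=["POST"])
def webhook(platform: str):
    if platform not in NORMALIZERS:
        return jsonify({"error": "unsupported platform"}), 400
    broker_request = NORMALIZERS[platform](request.get_json(force=True))
    # Hand off to the NLP and routing stages described below.
    return jsonify({"received": broker_request.platform})
```

Each assistant would simply be pointed at its own route on the same broker, so only one configuration has to live in each environment.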

The second issue is brokering the dynamic requests. As we all know at this point in the game, on-premise solutions will never disappear entirely; some systems, whether legacy or new, will require both on-premise and cloud-based components for the foreseeable future. So how would this be accomplished? That’s the million-dollar question. To put it simply: natural language processing, or NLP.

Google has a natural language API that can be called manually to process requests. It is not as accurate as the proprietary engines behind Dialogflow and Alexa, but it will suffice. The API does require more direct verbiage; however, a question that routes to a dynamic intent is most likely phrased directly anyway, so manually using an NLP API to route to the proper dynamic service provider should work as expected.
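
To make that manual step concrete, a sketch along the following lines could call Google's Cloud Natural Language REST endpoint to pull entities out of an utterance and pick a dynamic service provider from them. Only the endpoint and request shape follow Google's documented API; the keyword-to-provider table, the provider URLs and the API-key handling are assumptions for illustration.

```python
# Sketch: call Google's Cloud Natural Language API (documents:analyzeEntities)
# to extract entities from an utterance, then choose a dynamic back end.
# The PROVIDERS table below is hypothetical.
import os
import requests

NL_API_URL = "https://language.googleapis.com/v1/documents:analyzeEntities"

def extract_entities(utterance: str) -> list[str]:
    response = requests.post(
        NL_API_URL,
        params={"key": os.environ["GOOGLE_NL_API_KEY"]},
        json={"document": {"type": "PLAIN_TEXT", "content": utterance}},
        timeout=5,
    )
    response.raise_for_status()
    return [entity["name"].lower() for entity in response.json().get("entities", [])]

# Hypothetical mapping from entity keywords to dynamic service providers.
PROVIDERS = {
    "permit": "https://permits.example.gov/api/status",
    "property tax": "https://tax.example.gov/api/balance",
}

def pick_provider(utterance: str) -> str | None:
    entities = extract_entities(utterance)
    for keyword, endpoint in PROVIDERS.items():
        if any(keyword in entity for entity in entities):
            return endpoint
    return None  # fall back to the assistant's static intents
```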

The final major process is routing. The broker service, really more of an application or engine, would process the request, parse out the appropriate information, and route it to the proper destination for an accurate, dynamic response, which would then be returned to the user's assistant through the broker service.

As for standards or a governance board, we already have them for IT security and databases (NIST, DISA and IEEE); why not for assistants and chatbots, too?
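
Returning to the routing step itself, that stage of the engine might look roughly like the sketch below: forward the normalized request to the chosen destination, whether on-premise or cloud, and wrap the answer in the originating assistant's response format before sending it back. The destination contract and the response envelopes are placeholders, not the platforms' exact schemas.

```python
# Sketch of the broker's routing stage: dispatch the parsed request to the
# proper destination and translate the dynamic answer back into the format
# the originating assistant expects. All shapes below are placeholders.
import requests

def route(broker_request, provider_url: str) -> str:
    """Forward the normalized request and return the back end's plain-text answer."""
    reply = requests.post(
        provider_url,
        json={"utterance": broker_request.utterance, "session": broker_request.session_id},
        timeout=3,  # keep the added latency well under the half-second budget
    )
    reply.raise_for_status()
    return reply.json().get("answer", "Sorry, I could not find that.")

def to_assistant_format(platform: str, answer: str) -> dict:
    # Illustrative output envelopes; real Dialogflow and Alexa response schemas differ.
    if platform == "dialogflow":
        return {"fulfillmentText": answer}
    if platform == "alexa":
        return {"response": {"outputSpeech": {"type": "PlainText", "text": answer}}}
    return {"text": answer}
```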

The bottom line is that this broker service is being used in part to decipher the proprietary formats that each assistant uses. At the end of the day, they all work and process very similarly.

As the past will tell — take search engines, for example — the battle only lasts so long. I think at this point it is safe to say a few digital assistants have already won the battle for cellphone applications, and third parties should begin to analyze how to interface with them. Requiring users to have to download additional applications to their phones is not sustainable.

Benjamin Palacio is a Senior IT Analyst on the ESSG-Enterprise Solutions Team in the Placer County Information Technology Department and is a CSAC-credentialed IT Executive. The views expressed here are his own. He may be reached at ben.palacio@gmail.com.