I’ve always found that travel clears my head. In the middle of a very busy and demanding work schedule, I decided to take my 23-year-old Jaguar on a trip from Spartanburg County, South Carolina, to Chattanooga, Tennessee, for a special automotive-related event. I had never been to Chattanooga before, although I have spent some time in Memphis and Nashville.
My chosen route was simple; it was neither the fastest nor the shortest, but the most scenic (as far as interstate highways can be scenic), winding up through Asheville, North Carolina, across the Blue Ridge, and on through Knoxville, Tennessee, before dropping down to Chattanooga. I selected it from the three possible routes presented by the map app on my phone and started my Saturday trip.
Before I even reached the first stretch of interstate, the app was telling me that there was an accident “170 miles ahead” and started taking me toward one of the other routes. It did not give me the option to stay on my current route; it changed the route without input from me. I pulled over at the next opportunity and changed the selected route back to the original.
Clue Number 1: The mapping app does not understand the likelihood that an accident many miles ahead will be cleared long before I get there.
Even before I reached Asheville, I had heard just about enough of the app telling me to take exits I was not in the least interested in taking “to avoid delays”—again deciding that any route was preferable to the one I was on.
Yes, there was heavy traffic getting up to the Blue Ridge Parkway exits, a big destination (apparently) for RVs and camper-trailers. Yes, I had to slow down substantially, but while I had been to Asheville more than once, I had never gone beyond the city and across into Tennessee. I wanted to go through the mountains, along the twisting highway, and back down to the flatter country approaching Knoxville.
Clue Number 2: The mapping app does not understand what I want or why I want it.
Once I was out of the mountains, the app only interrupted my thoughts twice to tell me of possible delays and tell me, “You are still on the fastest route.” The rest of its comments were expected, warning me of upcoming exits to take and which lanes to be in. After 269 miles, I reached my destination in Chattanooga only slightly later than I expected.
The technology behind mapping applications is awesome. GPS (Global Positioning System) knows where I am as well as my direction and speed of movement. The speed limit for nearly every stretch of road is displayed onscreen. My ETA (estimated time of arrival) is continuously updated. Traffic is gauged by the speed of other vehicles along the route. So far, so good.
When the app’s back-end attempts to put some intelligence to work, it both succeeds and fails. Yes, there was an accident 170 miles ahead, but it was highly unlikely that it would still impede my progress by the time I got there. More importantly, however, there is no comprehension in the system. It does not understand anything outside a very narrow set of parameters: fastest route, shortest route, delays. Most scenic? Not so much.
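To make that narrowness concrete, here is a minimal, purely illustrative sketch of how a route scorer limited to those parameters might behave. The `Route` class, the weights, and the delay figures are hypothetical and are not any vendor’s actual API; the point is simply that nothing in the objective represents “scenic,” and a reported delay far ahead is treated as if it will still be there when you arrive.

```python
# Toy route scorer: only the narrow parameters a mapping app optimizes.
# All names and numbers are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    distance_miles: float
    base_minutes: float            # drive time in normal traffic
    reported_delay_minutes: float  # e.g., an accident reported 170 miles ahead

def score(route: Route, prefer: str = "fastest") -> float:
    """Lower is better. Note there is no term for 'most scenic'."""
    if prefer == "fastest":
        return route.base_minutes + route.reported_delay_minutes
    if prefer == "shortest":
        return route.distance_miles
    raise ValueError(f"Unknown preference: {prefer}")

routes = [
    Route("via Asheville and Knoxville", 269, 255, 25),  # accident far ahead
    Route("via the alternate interstate", 300, 270, 0),
]

best = min(routes, key=score)
print(best.name)  # the app reroutes, even though the delay may well clear
```

Run as written, the scorer abandons the scenic route the moment a delay is reported, exactly the behavior that prompted Clue Number 1.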
I will admit to feeling concerned that the system was changing my route autonomously. I was not prepared to put a technological system completely in charge of my trip—or shall I say in the best 2021 parlance—my travel experience. I was willing to follow basic directions from the system, but not allow it to govern my choices.
This all led me to consider the state of our relationship with AI. What’s lacking at this point is better communication. If I had been able to say aloud, “Stay on my chosen route” when the system changed my itinerary, I would have been more relaxed. Instead, I felt like I was being ordered to change my route by an automaton. The uneasy feeling that resulted was a window into what’s holding back AI (or machine learning if you prefer) from being more widely accepted: It doesn’t understand.
Make no mistake—things have gotten much better since the early days of people yelling, “Representative!” at their phones in frustration, but we are a long way from being able to give up the steering wheel.
What does this all have to do with Digital Transformation?
Consider the “organizational GPS” that intelligent systems are aiming to provide. Contact center reps are being coached by machine learning systems giving them in-ear prompts or onscreen suggestions. Executives are being aided in decision making by similar systems. AI is plowing through huge stores of previously “dark data” and using analytics to suggest and predict what’s next.
While technology is not transformation, Digital Transformation (DX) is enabled by new and emerging technologies. Without confidence in those technologies, we’re headed for failure. We are being asked to entrust our livelihoods, and in the case of autonomous driving our lives, to systems most of us don’t comprehend and which don’t seem to comprehend us very well either. This is the seedbed of fear of AI and reluctance to adopt new tech.
The resolution of this impasse lies, I believe, in the rapid improvement of Natural Language Processing (NLP), which I have come to regard as the most important component of AI, but which has its own drawbacks apart from its immaturity. NLP systems such as Amazon’s Alexa, Google’s Nest, and Apple’s Siri are making great strides toward integration and comprehension.
I was recently on a call with executives of a company that is moving toward systems capable of completely transforming the experience of using the voice (phone) channel for customer service: NLP handles most conversations, with the option of transferring to a live agent when necessary. This has a tremendous upside, taking the voice channel out of the synchronous realm and cutting queue time to zero; live agents can attend to other tasks while the system handles all but the most complex calls. Again, confidence and trust will be required.
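The article doesn’t describe how that company implements the handoff, but the pattern it describes can be sketched in a few lines. Everything below is an assumption for illustration: the function name, the confidence threshold, and the trigger phrases are hypothetical, not the company’s actual system.

```python
# Minimal sketch of an NLP-to-agent handoff: the automated system handles
# the call until its confidence drops or the caller asks for a person,
# then routes the conversation to a live agent. Hypothetical example only.

CONFIDENCE_THRESHOLD = 0.6  # below this, the system should not guess

def handle_turn(transcript: str, intent: str, confidence: float) -> str:
    """Decide whether the automated system keeps the call or escalates."""
    wants_human = any(phrase in transcript.lower()
                      for phrase in ("representative", "live agent", "human"))
    if wants_human or confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_agent"   # agent joins when free; no hold queue
    return f"automate:{intent}"      # system completes the request itself

# A clear request is automated; a murky or frustrated one is escalated.
print(handle_turn("I want to check my order status", "order_status", 0.92))
print(handle_turn("Representative!", "unknown", 0.15))
```

The design choice that matters here is the escalation path: because the agent is pulled in only when needed, the voice channel stops behaving like a queue and starts behaving like any other asynchronous channel.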
Of course, I’ll again use the GPS app to guide me in unfamiliar territory, but I’ll also be prepared to force it to return to my chosen route when it doesn’t comprehend my choice. Will executives be doing that with AI-enabled business guidance? You bet they will. Studies repeatedly show that execs don’t trust their own data, so why should they trust guidance based on it?
We need more mature technology to develop the trust that will take DX forward.
Tags: Artificial Intelligence, Customer Experience