The previous post started out as a response to a comment I’d made on Facebook; what follows became the long tail of the same blog post, and was a response to a post on LinkedIn that same day about how rubbish customer-facing tech is, in the main.
There is an element of the comedic ridiculous to the alleged help bots, and even to answerphone systems, depending on where they have been bought and how they are utilised: option systems that pointlessly route you through a series of gates, rather than plainly stating that you are in a queue and will be answered shortly, or giving an engaged tone. After all, does my mechanic, or the various other mechanics and the occasional receptionist, ever answer the phone with the words `Hello, you’ve reached the Automania MOT reception’? Or, in other circumstances, `Van-hire reception’ or `Service reception’? No. But you have selected these options. Where’s the benefit, for me or the mechanic?
It’s the same at the doctor’s. If someone is measuring something as you request appointments, results, cancellations and so on, then what is being measured is invisible to me. However, I can understand it if you are picking options and the system is mapping each option to a specific statistical marker: one hundred calls were for appointments, ten were for cancellations, and yet forty people didn’t turn up. For spotting clues by measuring what the general activity is, it has utility for BAU planning. Simple.
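The kind of tallying described above amounts to little more than counting labelled selections; a minimal sketch, assuming entirely hypothetical category names rather than any real surgery’s system:

```python
from collections import Counter

# Hypothetical call log: each entry is the menu option a caller picked
# (labels are illustrative, not from any real phone system).
calls = (
    ["appointment"] * 100
    + ["cancellation"] * 10
    + ["did-not-attend"] * 40
)

# Tally the selections to get the statistical markers described above.
tally = Counter(calls)
print(tally["appointment"])     # 100
print(tally["cancellation"])    # 10
print(tally["did-not-attend"])  # 40
```

All the menu gates really buy the practice, on this reading, is a labelled counter per option.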
If, on the other hand, it has been incorporated so that it directs your call to a specific function of reception and gives them some form of preparedness, then whatever that sift relates to has no influence on how you are greeted or how you are dealt with. Remember, you think you are selecting options on a directed path, preparing reception for the nature of the call based on your selections. No. It’s just keeping you busy while reception deals with another call. Then it puts you on hold, then tells you all the things it will do when it answers the phone, and all the options that you will be presented with, which you just went through to get to the point where you are on hold. It’s a ludicrous mix of good intention, badly implemented, and sold on the nebulous notion of giving the receiver of the call an advantage in customer service, whether it’s for an MOT or a funny rash. Clearly it’s deeper than this, but that isn’t the point; you need to read to the end.
The other technology that falls into this poorly implemented or poorly planned help space is the ever more ubiquitous `ChatBot’. The reason for this long tail is the story on LinkedIn about someone being engaged by a ChatBot implemented by an airline.
On the human side of the interface, someone has typed that they want to alter their flight booking. The ChatBot responds that it doesn’t understand the enquiry. The ChatBot has a name, as a person does, and is pretending to be human. And yet this `thing’ that is pretending to be a human doesn’t actually understand a question relating to its designated function, relative to the page it popped up on: `Flight bookings’.
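A failure like that often suggests crude keyword matching rather than any real language understanding; a minimal sketch of how such a bot might be wired, with entirely hypothetical phrases and replies:

```python
# Naive keyword-based intent matching of the kind that yields
# "I don't understand": only exact keyword hits are recognised,
# so a reasonable paraphrase falls straight through to the fallback.
INTENTS = {
    "change booking": "Sure, let's change your booking.",
    "cancel flight": "Okay, let's cancel your flight.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in INTENTS.items():
        if keywords in text:
            return answer
    return "Sorry, I don't understand your enquiry."

print(reply("I want to change booking ABC123"))
# -> Sure, let's change your booking.
print(reply("I want to alter my flight booking"))
# -> Sorry, I don't understand your enquiry.
```

The second caller means exactly what the bot was built for, but `alter’ isn’t in the keyword list, so the pretend human shrugs.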
`They’ want you to think this `thing’, using the over-arching term AI and its component algorithms, has your best interests at heart. And as the original LinkedIn poster, Tom Goodwin, pointed out, if this blunt, stupid instrument (tool) is supposed to imbue you with confidence and loyalty, then it has failed spectacularly.
There are so many of these ill-thought-through bots nowadays; they are as intrusive as a pushy shop assistant, or their cousins, Rude and Clueless.
It’s quite difficult to bend your head around `The Computer Says No’. But it isn’t funny when it’s reality, a lot of the time. Some of it is bells and whistles where clean lines would do, a splash of décor and bunting that’s supposed to make you feel like the web interface is something other than a gateway to something you want.
Without question, every time you go online it is for something you want more than you need, regardless of that need’s actual parameters (goods, services, pleasure, knowledge).
However, I wonder how machines ended up being programmed with almost the same failings as humans themselves.
Does it actually say more about our inability to communicate, regardless of industry sector (a retail meeter-and-greeter, or the person who programmes one), at some deeper level? Or is it that programmers programme from experience and have the wrong impressions? Which loops straight back to core communication. Not communication skills per se, just that instinctual psychological need to step back from the approaching stranger: `They are likely to want something... I may not be able to provide that something... this may result in hostility.’
It sounds over-simple once you have thought it through, even though it took a while to edit this into what looks like a reasonable set of sentences.
Are the systems behaving more like gatekeepers than gateways because there is a personal-space issue built in, one that comes from our deep, not in the slightest bit philanthropic, defensive lizard brain? I don’t know.