I want you to remember all those times you
had to talk to an automated phone system.
How was your experience? How did you feel?
I bet you felt frustrated by the number of
times you had to repeat what you said over
and over again; and even then, you might
have been transferred to the wrong department.
And when all that was over, you had to wait
in a long queue, listening to a repetitive
song and a voice saying, 'Your call is
extremely important to us; we will assist
you as soon as the next agent is available.'
Despite all the frustration we go through,
using these speech-enabled automated phone
systems rather than traditional touchtone
ones can save an average call centre up to
£3 million annually, and by 2017 this industry
could be worth up to £2 billion.
So it seems there is no way to avoid these
speech-enabled voice jails.
But why don't these systems understand what
we say?
Because they are not as clever as humans,
and they can't adapt to a new speaker's
accent. In fact, a few months ago, Birmingham
City Council spent £11 million on an automated
phone system that, sadly, couldn't recognise
the locals' accent.
The goal of my PhD is to solve this by creating
an accent-robust automatic speech recognition
system. To do so, I have a massive database of
recorded speech from people with different
regional accents.
When an unknown speaker talks to my system,
it first recognises their accent, then selects
data from other speakers in the database with
similar accents to build a personalised model
of their speech. This improves the system's
performance significantly: for some difficult
accents, such as Glaswegian, it reduces errors
by up to 50%.
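
For readers who like to see the idea in code, here is a much-simplified sketch of that three-step pipeline (accent recognition, data selection, model adaptation). Everything in it is made up for illustration: the centroids, speaker IDs, and feature vectors are invented, a toy nearest-centroid classifier stands in for the real accent recogniser, and the adaptation step is only a placeholder.

```python
import math

# Hypothetical per-accent "centroids": average acoustic feature
# vectors computed offline from the labelled speech database.
ACCENT_CENTROIDS = {
    "glaswegian": [0.8, 0.1, 0.3],
    "brummie":    [0.2, 0.7, 0.4],
    "rp":         [0.5, 0.5, 0.9],
}

# Which database speakers carry each accent label (made-up IDs).
SPEAKERS_BY_ACCENT = {
    "glaswegian": ["spk01", "spk07", "spk12"],
    "brummie":    ["spk03", "spk05"],
    "rp":         ["spk02", "spk09"],
}

def identify_accent(features):
    """Step 1: label the unknown caller with the nearest accent."""
    return min(ACCENT_CENTROIDS,
               key=lambda a: math.dist(features, ACCENT_CENTROIDS[a]))

def select_adaptation_data(accent):
    """Step 2: pull the database speakers with a similar accent."""
    return SPEAKERS_BY_ACCENT[accent]

def build_personalised_model(speakers):
    """Step 3: placeholder for adapting the acoustic model on the
    selected speakers' recordings."""
    return f"model adapted on {len(speakers)} similar-accent speakers"

caller_features = [0.75, 0.15, 0.35]  # unknown caller (invented)
accent = identify_accent(caller_features)
model = build_personalised_model(select_adaptation_data(accent))
print(accent, "->", model)  # glaswegian -> model adapted on 3 ...
```
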
The end result is that customers will find
something more productive to do with their
time instead of waiting in call-centre queues,
and companies will have fewer angry customers.
In the next stage of my PhD, I am going to
address the issues faced by minority groups
of speakers, such as children, the elderly,
and people with speech disorders.
