KEVIN: David, the last time we talked about
AI, you said it was concerned with developing
computer programs that perform tasks which
require intelligence when done by humans.
DAVID: Yes, exactly.
KEVIN: OK. But then you gave some examples:
playing games, understanding natural language,
forming plans, er … proving theorems. You
also gave the example of visual perception.
Now surely visual perception – seeing – doesn’t
require intelligence?
DAVID: (laughs) It may not seem so, Kevin,
but perceptual tasks such as seeing and hearing
involve a lot more computation than is apparent.
But because this computation is unconscious
in humans, it’s much harder to simulate.
KEVIN: So you’re saying that AI is better
at intellectual tasks such as game playing
and proving theorems than at perceptual tasks
like seeing and hearing.
DAVID: I’m saying AI has been more successful
at intellectual tasks.
KEVIN: Right, but in any case, AI is trying
to simulate human behaviour.
DAVID: That’s an oversimplification. Sometimes
programs are intended to simulate human behaviour,
as in computational psychology, but sometimes
they’re simply built for technological application,
as in the case of expert systems.
KEVIN: Expert systems are part of AI. Right?
DAVID: Well, to be more accurate, expert systems
are programs built using the programming techniques
of AI, especially techniques developed for problem-solving.
The actual subdiscipline of AI concerned with
building expert systems is called ‘knowledge
engineering’.
KEVIN: What are expert systems used for?
DAVID: They’re built for commercial applications.
Up to now they’ve been used for a variety
of tasks – medical diagnosis, electronic
faultfinding, machine translation, and so
on. But the point about them is that you can
interrogate them about how they came to a
particular conclusion.
KEVIN: So, in that respect, they imitate human
experts.
DAVID: Yes. I read recently about a Japanese
system that can be used by lawyers to draw
conclusions about new legal cases. It refers
to databases of statutory laws and legal precedents
and is able to see similarities in the reasoning
processes used to decide each case – exactly
as a skilled lawyer would.
KEVIN: How can it do that?
DAVID: The system has two reasoning mechanisms,
known as inference engines, which work in
parallel. One operates on the written laws,
the other operates on the legal precedents.
They draw all the possible conclusions and
then output them in the form of inference
trees.
KEVIN: Inference trees?
DAVID: Yes. Inference trees show how each
conclusion was arrived at. And that is what
makes this program different from any normal program.
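[An inference tree of the kind David mentions can be sketched like this. It is a toy illustration in Python; the "statute" and "precedent" rules are invented, not taken from the Japanese system he describes:]

```python
# Toy inference trees: each derived conclusion keeps pointers to the facts
# or sub-conclusions that support it, so the chain of reasoning can be
# printed back as a tree. The legal-style rules are invented for illustration.

rules = [
    # (rule name, premises, conclusion)
    ("statute-1", ["signed", "witnessed"], "valid_contract"),
    ("precedent-7", ["valid_contract", "breach"], "damages_owed"),
]

def derive(facts):
    """Forward-chain; tree[c] lists the premises supporting conclusion c."""
    tree = {f: [] for f in facts}          # given facts have no premises
    label = {}                             # conclusion -> rule that derived it
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if conclusion not in tree and all(p in tree for p in premises):
                tree[conclusion] = premises
                label[conclusion] = name
                changed = True
    return tree, label

def show(conclusion, tree, label, depth=0):
    """Print the inference tree for one conclusion, indented by depth."""
    tag = f"  [{label[conclusion]}]" if conclusion in label else "  (given)"
    print("  " * depth + conclusion + tag)
    for premise in tree[conclusion]:
        show(premise, tree, label, depth + 1)

tree, label = derive(["signed", "witnessed", "breach"])
show("damages_owed", tree, label)
# damages_owed  [precedent-7]
#   valid_contract  [statute-1]
#     signed  (given)
#     witnessed  (given)
#   breach  (given)
```

[Printing the tree shows exactly how each conclusion was arrived at, which is the explanatory behaviour that sets such a program apart from an ordinary one.]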
