In contrast to forward chaining is backward chaining. This is sometimes called goal-directed inference, because no inferences are made until the system is asked to prove a particular goal (i.e. to answer a question). This is a lazy kind of inference: it does no work until absolutely necessary, unlike forward chaining, where the system eagerly snaps up new facts and immediately tries to apply existing rules to them as soon as they arrive.
So, if we had the same knowledge base again and we were to add
the system would not try to match it against any rule conditions. But suppose we were then to ask the question
The system would try to prove this by searching for a suitable rule or unconditional fact, i.e. one whose conclusion is about the same ``verb'', needs_to.
Here, the last fact is the only one. What this fact means, in effect, is ``to prove someone needs to work, prove they have Finals''. So the system would then pose itself the subtask of proving that John has Finals. The first fact says something about this: ``to prove someone has Finals, prove they're a third year''. So the system now poses the sub-sub-task of proving John to be a third year. And we have an unconditional fact which says he is. So we can complete the proof.
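This chain of subtasks can be sketched as a small backward chainer. The knowledge base below is an assumed, variable-free reconstruction of the one described in the text (the exact facts and rules are not shown in this excerpt); a real system would use rules with variables rather than ground goals.

```python
# Assumed knowledge base: each goal maps to the subgoals needed to
# prove it; an empty subgoal list marks an unconditional fact.
rules = {
    ("needs_to", "john", "work"): [("has", "john", "finals")],
    ("has", "john", "finals"): [("is_a", "john", "third_year")],
    ("is_a", "john", "third_year"): [],  # unconditional fact
}

def prove(goal, depth=0):
    """Try to prove `goal` by recursively posing its subgoals."""
    print("  " * depth + "trying to prove:", goal)
    if goal not in rules:
        return False  # nothing in the KB concludes this goal
    # Unconditional facts have no subgoals, so all() succeeds at once.
    return all(prove(sub, depth + 1) for sub in rules[goal])

print(prove(("needs_to", "john", "work")))  # True
```

Asking the question triggers the whole cascade: the goal needs_to reduces to has Finals, which reduces to third year, which is an unconditional fact, so the proof succeeds. Nothing is inferred until the question is posed.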