Erann Gat <gat@jpl.nasa.gov> wrote:
+---------------
| Let me tell you a story.
| When I was a graduate student I did my master's thesis on something called
| "the problem of referential transparency". It has to do with avoiding
| unwanted conclusions in AI systems based on formal logic. The classic
| example is, "Mary knows Bill's phone number" and "Bill's phone number is
| the same as John's phone number." From this one does not want to conclude
| that Mary knows John's phone number, but under standard formal logic this
| conclusion is unavoidable if one models these two statements as:
|
| knows(Mary, phone-of(Bill))
| phone-of(Bill) = phone-of(John)
+---------------
It seems obvious that those two statements are in fact *not* a
good model of the situation of interest. They should instead be:

    knows(Mary, belongs-to(phone-of(Bill), Bill))

and:

    phone-of(Bill) = phone-of(John)

While from those two you *can* conclude that:

    knows(Mary, belongs-to(phone-of(John), Bill))

you cannot conclude that:

    knows(Mary, belongs-to(phone-of(John), John))

nor:

    knows(Mary, belongs-to(phone-of(Bill), John))

nor:

    knows(Mary, phone-of(Bill) = phone-of(John))
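The substitution behavior described above can be checked mechanically. Here is a minimal sketch (mine, not from the thread) that represents the logical terms as nested Python tuples and applies unrestricted substitution of equals, the rewriting that standard first-order logic licenses everywhere, including inside knows(...):

```python
def substitute(term, old, new):
    """Replace every occurrence of `old` inside `term` with `new`
    (unrestricted substitution of equals, as first-order logic allows)."""
    if term == old:
        return new
    if isinstance(term, tuple):
        return tuple(substitute(t, old, new) for t in term)
    return term

# The equality: phone-of(Bill) = phone-of(John)
lhs = ("phone-of", "Bill")
rhs = ("phone-of", "John")

# Naive model: knows(Mary, phone-of(Bill))
naive = ("knows", "Mary", ("phone-of", "Bill"))
print(substitute(naive, lhs, rhs))
# -> ('knows', 'Mary', ('phone-of', 'John'))   the unwanted conclusion

# belongs-to model: knows(Mary, belongs-to(phone-of(Bill), Bill))
better = ("knows", "Mary", ("belongs-to", ("phone-of", "Bill"), "Bill"))
print(substitute(better, lhs, rhs))
# -> ('knows', 'Mary', ('belongs-to', ('phone-of', 'John'), 'Bill'))
# Mary still only knows that the number belongs to Bill.
```

Note that the belongs-to encoding survives substitution precisely because the attribution ("Bill", the second argument of belongs-to) is a distinct term that the equality never touches; only the number-denoting subterm gets rewritten.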
-Rob
-----
Rob Warnock, PP-ASEL-IA <rpw3@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607