CommonSense Reasoning : The Education Of A Machine — Part 5/7

One of my pet peeves has been (… still is and … will be for a long time) that the machines (… and algorithms) do not understand us; an algorithmic view into a thin slice of human interaction doesn’t define the human — not by a mile, not by a million light years !

The machines need to walk a mile in our shoes — good in many ways — “they will be a mile away from us and we don’t have our shoes” !

“Commonsense Reasoning: An Event Calculus Based Approach” is an interesting book that addresses this line of thinking.

Erik has done a good job not only of capturing how to think about commonsense but also of representing its dynamics.

The definition of Commonsense Reasoning, as a process that involves taking information about certain aspects of a scenario in the world and making inferences about other aspects of the scenario based on our commonsense knowledge, is very relevant.

As I had written earlier, from an interaction point of view, there are at least four modes. No modality or synchrony is assumed, i.e. the interaction can be via voice, via an interactive UI like the one in Minority Report, or even with no immediate interaction at all.

We probably can get away without extensive Commonsense Reasoning for the first three modes, but we need it for Conversation (… or for building a positronic brain !). Commonsense Reasoning is also needed for any type of interactive robot — I am using the word robot in a general sense: it could be an autonomous car, a companion machine for the elderly, a teaching assistant, or anything that augments humans in their tasks, …

For machines to achieve skills at this level, they first need representations of Objects, Properties, Events, Space and Time, plus the ability to reason about object identities.

Canonical Narratives

I really like the simplicity and complexity (!) of the five narratives in this book, listed below.

The obvious conclusions by humans are shown in italics — the question is what is required for machines to come to similar or better conclusions ? One key point to remember is that NLP alone can’t solve this, as there is not enough fidelity in the sentences for the conclusions; the inferences and common sense are hidden and can’t be gleaned from words and sentences alone.

  1. In the living room, Lisa picked up a newspaper and walked into the kitchen. Where did the newspaper end up? It ended up in the kitchen.
  2. Kate set a book on a coffee table and left the living room. When she returned, the book was gone. What happened to the book? Someone must have taken it.
  3. Jamie walks to the kitchen sink, puts the stopper in the drain, turns on the faucet, and leaves the kitchen. What will happen as a result? The water level will increase until it reaches the rim of the sink. Then the water will start spilling onto the floor.
  4. Kimberly turns on a fan. What will happen? The fan will start turning. What if the fan is not plugged in? Then the fan will not start turning.
  5. A hungry cat saw some food on a nearby table. The cat jumped onto a chair near the table. What was the cat about to do? The cat was about to jump from the chair onto the table in order to eat the food.

Essential Event Calculus Primitives

Let me jot down a few interesting event calculus primitives — the book has a lot more …

Events, Fluents & TimePoints

  • An Event occurs at an instant in time [Happens(e,t)]
  • A Fluent is a very useful concept — it represents a time-varying property of the world, such as the location of a physical object. For example, Lisa carrying the newspaper or the open faucet with a closed drain are both Fluents. Fluents are initiated [Initiates(e,f,t)] and terminated [Terminates(e,f,t)] by events. I like fluents !!
  • Of course, to resolve the actual state (e.g. did the fan actually turn on ?), we also need to express and consider things like preconditions and state constraints.
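A minimal sketch of these primitives in Python, applied to the Lisa-and-the-newspaper narrative (the event names, fluent names and the forward-simulation style are my own illustration, not the book's axiomatization):

```python
# Minimal discrete event calculus sketch: Happens(e, t) as a set of
# (event, time) pairs; Initiates / Terminates as functions; HoldsAt(f, t)
# computed by forward simulation over integer timepoints.

# Narrative 1: Lisa picks up the newspaper and walks to the kitchen.
happens = {
    ("pickup(Lisa, Newspaper)", 1),
    ("walk(Lisa, LivingRoom, Kitchen)", 2),
}

def initiates(event, state):
    """Fluents the event turns on (preconditions checked against state)."""
    if event == "pickup(Lisa, Newspaper)":
        return {"holding(Lisa, Newspaper)"}
    if event == "walk(Lisa, LivingRoom, Kitchen)":
        started = {"at(Lisa, Kitchen)"}
        if "holding(Lisa, Newspaper)" in state:  # precondition: carried objects move too
            started.add("at(Newspaper, Kitchen)")
        return started
    return set()

def terminates(event, state):
    """Fluents the event turns off."""
    if event == "walk(Lisa, LivingRoom, Kitchen)":
        ended = {"at(Lisa, LivingRoom)"}
        if "holding(Lisa, Newspaper)" in state:
            ended.add("at(Newspaper, LivingRoom)")
        return ended
    return set()

def holds_at(fluent, t, initial):
    """HoldsAt(f, t): simulate forward from the initial state."""
    state = set(initial)
    for now in range(t):
        for event, when in happens:
            if when == now:
                state = (state - terminates(event, state)) | initiates(event, state)
    return fluent in state

initial = {"at(Lisa, LivingRoom)", "at(Newspaper, LivingRoom)"}
print(holds_at("at(Newspaper, Kitchen)", 3, initial))     # True: it ended up in the kitchen
print(holds_at("at(Newspaper, LivingRoom)", 3, initial))  # False
```

Note how the precondition check inside initiates is what lets the newspaper "travel" with Lisa; without it, walking would move Lisa but leave the newspaper behind.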

The Commonsense Law of Inertia

  • Another very relevant and powerful concept.
  • A quality of the commonsense world is that objects tend to stay in the same state unless they are affected by events. A book sitting on a table remains on the table unless it is picked up, a light stays on until it is turned off, and a falling object continues to fall until it hits something. This is known as the commonsense law of inertia.
  • This is what gives us the ability to resolve overflowing sinks and even walking books !
  • I am thinking of modeling fluents and the CLOI as lambdas in Python.
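One way that lambda idea could look (a sketch under my own assumptions, not the book's formulation): a fluent becomes a function of time, and inertia falls out of making its value depend only on the most recent initiating or terminating event.

```python
def inertial_fluent(initiations, terminations):
    """Return a fluent f(t) obeying the commonsense law of inertia:
    it holds at t iff it was initiated at or before t and has not been
    terminated since; between events, its value simply persists."""
    return lambda t: (
        max((i for i in initiations if i <= t), default=-1)
        > max((j for j in terminations if j <= t), default=-1)
    )

# A light turned on at t=2 and off at t=7 (the times are my own example).
light_on = inertial_fluent(initiations=[2], terminations=[7])
print(light_on(5))  # True: stays on by inertia, no event needed at t=3..6
print(light_on(9))  # False: stays off after being turned off
```

The fluent never stores a state table; persistence is free because only the latest relevant event matters.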

The Mental state of agents

  • This is another important piece of the puzzle. We need to model both reactive and goal-driven behaviors.
  • The cat example has interesting connotations — the cat will jump from the chair to the table and keep on seeking food until its goal is satisfied. But if it finds satisfying food on the chair, it might not proceed beyond it to the table.
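The cat's goal-driven behavior can be sketched as a toy loop (the names and the path are my own framing, not the book's agent axioms): the agent keeps seeking until the goal is satisfied, so food found on the chair preempts the jump to the table.

```python
def seek_food(food_locations, path):
    """Goal-driven loop: follow the path, stopping once the goal (eating) is met."""
    actions = []
    for spot in path:
        actions.append(f"jump_to({spot})")
        if spot in food_locations:
            actions.append(f"eat_at({spot})")
            break  # goal satisfied: no further seeking
    return actions

# Food only on the table: the cat jumps chair -> table and eats there.
print(seek_food({"table"}, ["chair", "table"]))
# ['jump_to(chair)', 'jump_to(table)', 'eat_at(table)']

# Food already on the chair: the cat never proceeds to the table.
print(seek_food({"chair"}, ["chair", "table"]))
# ['jump_to(chair)', 'eat_at(chair)']
```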

The Default Reasoning

  • Yet another important piece of the puzzle (in case you are wondering, all pieces of a puzzle are equally important !)
  • Commonsense reasoning requires default reasoning.
  • As we engage in commonsense reasoning, we make certain assumptions so that we can proceed. If we later gain more information, then we may have to revise those assumptions.
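Default reasoning in the spirit of the fan example can be sketched as defeasible assumptions (a minimal illustration of my own, not the book's circumscription machinery): a conclusion drawn from a default is retracted when more specific information arrives.

```python
def fan_turning(facts):
    """Conclude whether the fan turns; by default, assume it is plugged in."""
    plugged_in = facts.get("plugged_in", True)  # default assumption, revisable
    return facts.get("switched_on", False) and plugged_in

# With only "Kimberly turns on a fan", the default lets us conclude it turns.
print(fan_turning({"switched_on": True}))                       # True
# New information ("the fan is not plugged in") revises that conclusion.
print(fan_turning({"switched_on": True, "plugged_in": False}))  # False
```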

In short …

A good book, very rich in concepts, with a broader appeal. In the section on Applications, I really enjoyed the depth on IBM Watson, which I have been following for a long time (here, here & here)

I was also able to get good information on knowledge acquisition initiatives like WordNet, ThoughtTreasure and Open Mind Common Sense. I need to explore these and create a knowledge base for reasoning.

I haven’t yet figured out an easy way to implement this — it is impossible to express the world literally with event calculus predicates; there are too many things and rules, and it becomes very complex too fast … any thoughts ?