Research

My research interests broadly concern developing robots that can make good decisions in the face of uncertainty. Specifically, I am interested in robots that can navigate through environments about which they have incomplete knowledge. In many environments where a robot might operate, it cannot be pre-programmed with all of the knowledge it will need in order to make good decisions. These range from environments about which the programmer knows little to ones whose elements change in rapid, unpredictable ways.

Thus far, much of the success in building mobile robots, such as the Mars rovers or the DARPA Grand Challenge vehicles, has come under one of two circumstances. The Mars rovers have been relatively closely overseen by human operators: the one-way communication delay between Earth and Mars ranges from roughly 3 to 22 minutes, short enough to permit close supervision. As a result, the rovers have been programmed to move very slowly, and to stop at the first sign of trouble and request instructions from Earth.

The Grand Challenge vehicles highlight the other case - one where the environment is thoroughly known in advance. True, the exact race course was not known, but the consistency of the roadway was well understood, the types of turns and obstacles the vehicles would encounter were well characterized, and the only circumstance in which they had to account for other vehicles was when passing them. Even in the Urban Challenge, the laws governing how motor vehicles operate on the roadway are well understood. The failures that occurred were largely failures of either engineering, such as parts breaking, or of programming, such as failing to resolve a set of conflicting rules appropriately.

But what if we want to put a robot into an environment where it is not possible to know all the parameters that might affect the robot's performance, and it is not feasible to monitor that performance? An exotic example is a rover on the surface of Pluto. Our knowledge of Pluto's surface is sparse at best, and the one-way communication delay between Earth and Pluto is several hours, so human oversight is infeasible. Any robot placed on the surface there would have to operate with a high degree of autonomy: it would have to learn about its environment and make decisions about how to act without complete information about that environment or about the results of its actions.

A more down-to-earth example might be a robot designed to clear tables in a cafeteria. Cafeterias are confusing places, with lots of entities moving around in ways that can't be predicted by the robot's programmer. The robot will itself have to make decisions about how to move without having 100% of the information it might want, much like people have to make decisions every day.

In cognitive science, a distinction is made between converting stimulus energy into neural signals ("sensation"), selecting which signals to actively process ("attention"), and interpreting those signals to build a model of the environment ("perception"). Robots and other intelligent systems are fairly good at what is called "direct perception" or "bottom-up" processing, where there is an explicit connection between the information gathered in sensation and the resulting perceptual model. Such a connection might take the form "If you see an object that is red and is an octagon, that object is a stop sign." This does not, however, leave much room for ambiguity. What if the robot sees an object that is red and is half an octagon? As humans, we easily recognize that the stop sign is being occluded by something else, thanks to the Gestalt principle of closure. We know what stop signs look like, and we know that there are not many half stop signs in the world. Our knowledge about objects in the world and the rules that govern them allows us to fill in the gap created by the incomplete sensation and create a coherent perceptual model of the world; this is called "top-down" processing.
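The contrast can be made concrete with a small sketch. Here the features, object templates, and scoring rule are all invented for illustration: a rigid bottom-up rule rejects a half-occluded octagon outright, while a crude top-down step compares the partial observation against prior knowledge of whole objects and still recovers the most plausible interpretation.

```python
def bottom_up_stop_sign(color, sides):
    """Direct perception: an explicit rule from sensed features to a label."""
    return color == "red" and sides == 8

# Prior knowledge of whole objects (hypothetical templates).
KNOWN_OBJECTS = {
    "stop sign": {"color": "red", "sides": 8},
    "yield sign": {"color": "red", "sides": 3},
}

def top_down_interpret(color, visible_sides):
    """Fill in missing sensation using templates of known objects."""
    best_label, best_score = None, 0.0
    for label, template in KNOWN_OBJECTS.items():
        if template["color"] != color:
            continue
        # Occlusion can hide sides but never add them, so score partial views
        # by the fraction of the template's sides that are actually visible.
        if visible_sides <= template["sides"]:
            score = visible_sides / template["sides"]
            if score > best_score:
                best_label, best_score = label, score
    return best_label, best_score

print(bottom_up_stop_sign("red", 8))  # True: the full octagon matches
print(bottom_up_stop_sign("red", 4))  # False: the rigid rule fails when occluded
print(top_down_interpret("red", 4))   # ('stop sign', 0.5): prior knowledge fills the gap
```

The point is not this particular scoring rule, but that the top-down version consults a model of whole objects rather than demanding an exact feature match.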

Robotic systems have a harder time with this sort of ambiguity. We can try to overcome it by programming robots with a range of cases that would all reasonably indicate that a stop sign is present, but it is impossible to predict or enumerate every case in which a stop sign could reasonably be perceived. The problem gets significantly more complex if we step outside of environments that we as programmers know well and put the robot into one where, for example, we might not even be aware of the existence of stop signs at all, much less the various circumstances under which we might encounter them.

My research explores ways to integrate learning and top-down processing into autonomous robotic systems so that they can operate in environments where the information available to them is necessarily incomplete. This may involve one or more techniques such as pattern matching (as in prototype theories of human categorization), belief structures, subsumption hierarchies, or knowledge bases and production systems.
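As a flavor of the first of those techniques, here is a toy sketch of prototype-based categorization: each category is summarized by a prototype (the mean of its example feature vectors), and a new observation is assigned to the category whose prototype is nearest. The feature choice (width and height in meters) and the example data are invented for illustration, not taken from any real system.

```python
import math

def prototype(examples):
    """The prototype of a category: the mean of its example feature vectors."""
    n = len(examples)
    return tuple(sum(values) / n for values in zip(*examples))

def classify(observation, prototypes):
    """Assign an observation to the category with the nearest prototype."""
    return min(prototypes, key=lambda c: math.dist(observation, prototypes[c]))

# Hypothetical (width_m, height_m) examples of objects a cafeteria robot might see.
prototypes = {
    "table": prototype([(1.2, 0.75), (1.0, 0.72), (1.4, 0.78)]),
    "chair": prototype([(0.5, 0.90), (0.45, 0.95), (0.55, 0.85)]),
}

print(classify((1.1, 0.74), prototypes))  # table
print(classify((0.5, 0.92), prototypes))  # chair
```

Unlike a hand-written rule, the prototype can be updated as new examples arrive, which is one way a robot could keep learning about categories its programmer never fully specified.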