Autonomous Robot: What’s that mean?

We design and build autonomous security robots, and one of the questions we hear most often is “What does autonomous mean?” Autonomy is a tricky word. One can trace its roots back to ancient Greece, where, at its simplest, it meant the ability to say “no.” From that came the idea of self-rule or self-governance: the right to decide, for oneself, what to do. It is a simple idea, and one that we take for granted as people. We have autonomy – the right to decide for ourselves what we will do next, and how we will do it. But how does that apply to a robot? For robots, we think of autonomy as the ability to choose actions or behaviors in order to achieve the robot’s goals.

A couple of images come to mind when we think of robots: the bomb disposal robot on a newscast, the industrial welding robot we see in commercials assembling cars, perhaps a vacuum cleaning robot scurrying around the floor, and, of course, the movie robots like R2-D2 or WALL-E. Each of these can be used to explore the idea of an autonomous robot. Around the lab, we break down these robots into four categories:

  1. tele-operation,
  2. programmatic,
  3. reactive, and
  4. autonomous.

Let’s look at these to see what makes them tick.


Tele-operation

Tele-operation sounds a lot more impressive than it is.  From the same roots as telescope (‘far sight’) and telephone (‘far sound’), we have ‘far control’.  Basically, a tele-operated robot, like a bomb disposal robot, is like a big radio-controlled car. The operator is at a distance from the robot, and can control every motion. For dangerous tasks (like bomb disposal) this keeps people safe. This is a key task for robotics: to enable the robot to take risks rather than people.

Woods Hole submersible exploring the wreck of the Titanic

We use tele-operation on many high-profile robotic systems – the rovers on Mars rely on instructions from Earth, the drones flying over war zones receive control commands from thousands of miles away, and the deep-sea submersibles exploring the oceans under water pressures that would kill a person are driven by operators in shirt sleeves thousands of feet above on the surface.

However, there is a drawback.  Without those second-by-second control signals, the robot sits like a lump. It does nothing on its own; it requires constant control. So, while we keep the operator at a safe distance, we do not get the benefit of reducing the workload.  In fact, for many tele-operated systems the workload is increased. It can take as many as three or four operators to keep a drone flying and doing its job, because the system relies on the human operators to understand the situation, make decisions, plan the next steps, and then execute that plan by sending control signals to the robot. That is the trade-off of tele-operated robots.  So, these robots have almost no autonomy; they do what they are told with little or no self-governance. Note: several of these robots have limited safety systems that can override commands that would put them in danger, but that is about it.


Programmatic Robots

At the opposite end of the spectrum are the industrial robots that have made such a huge impact on modern manufacturing. These robots, once they are programmed, require little or no supervision.  They just work. They work day in and day out, doing the same thing over, and over, and over again. This is the kind of thing that we want robots to do: relieve us of the deadly, dull tedium of doing the same thing over and over. And they work.  According to a recent study by the International Federation of Robotics, there are over 1.4 million robots at work in our factories, producing our goods. This is a job that robots are really, really good at doing.

Auto assembly line with robots

However, here too there is a downside.  The robots work really well at routine, tightly defined tasks.  But to enable them to work, they need a very tightly controlled environment. Because they are pre-programmed to do the same thing over and over, they cannot react to any changes.

This has resulted in numerous accidents, and even deaths, when a person was in the workspace of the robot (the envelope) while the robot was moving.  This is also why we have not seen a breakthrough in service robots like the one in industrial robots.  Human environments – homes, shops, offices, and so on – cannot be as tightly controlled as a factory floor. In human ‘workspaces’ constant change is the order of the day, and pre-programmed robots are unable to react to those changes. As far as autonomy goes, as far as having the ability to say ‘no’, these robots have none, and that is what makes them potentially dangerous.

Reactive System

‘Reactive system’ is a term associated with Rodney Brooks, a co-founder of iRobot. A reactive system is one that does not follow a rigid sequence of pre-programmed steps; rather, it senses the world on a moment-by-moment basis and selects a behavior. This behavior remains in effect until the sensors show a state that requires a different behavior.

As an example, imagine a robotic vacuum cleaner. When activated, it starts a behavior that says go forward 10 feet. When the sensors show 10 feet traveled, it starts a behavior that causes it to execute a spiral pattern, cleaning the floor. When a sensor indicates an obstacle, it turns until clear, then goes in a straight line until the next obstacle, and this repeats, and repeats.  This behavior (known as a drunkard’s walk) will, eventually, cover the floor, and all will be clean. When the sensors indicate a low battery, the robot turns until it detects the home beacon, then heads for home to recharge.  No real plan, no idea what it is doing, but what the robot does next is totally under the control of the reactive behaviors. So, is this autonomy?
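The behavior selection described above can be sketched as a short, priority-ordered rule list. This is a hypothetical illustration – the sensor names and thresholds are invented, not the firmware of any real vacuum robot:

```python
# A minimal sketch of a reactive behavior selector for a vacuum robot.
# Sensor names and thresholds are hypothetical; a real robot has many
# more inputs and carefully tuned priorities.

def select_behavior(sensors):
    """Pick a behavior from current sensor data alone: no plan, no memory."""
    if sensors["battery"] < 0.15:
        return "seek_home_beacon"   # highest priority: get home to recharge
    if sensors["obstacle"]:
        return "turn_until_clear"   # react to whatever is in the way
    if sensors["distance_traveled"] < 10.0:
        return "drive_straight"     # the initial straight run
    return "spiral_clean"           # default cleaning pattern
```

The key point is that the function is re-evaluated every tick against fresh sensor data; nothing about the robot's past or its goals enters the decision.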

That is a surprisingly tricky question. Clearly, the robot is not under direct control. Nor is it mindlessly following a step-by-step set of instructions. But neither is there any ability to say ‘no,’ any ability to choose among behaviors beyond what the sensors dictate. In one sense, this robot is exactly like an industrial robot, except that the pre-programmed actions are triggered by immediate sensor data, rather than by a strict sequence.

Autonomous Robot

So, what is an autonomous robot?  An autonomous robot can make decisions based on the current goals, and the current situation. We will use our security robots as an example, since we are very familiar with them.

Suppose that you are in charge of security for a warehouse, and you have a Vigilus™ security robot on duty. Every night, from 11pm until 7am, it does security patrols, carrying a camera and sensors around the warehouse. Then, one night, there is a problem down at the loading dock. You need the security robot down there, carrying all its sensors and gear. Your simple command tells the robot “Go to the Loading Dock,” and the security robot is on its way.

Robot patrolling the receiving dock, and monitoring changing temperatures.

Unlike a tele-operated robot, you don’t have to drive it there, you just tell it to go, and it figures out the best way to get there. It knows where it is, so it can look at hundreds of different possible routes to take, and select one. It can keep track of changes that might affect its travel, and reject a route that requires going through a congested hallway. Once it picks a route, it drives itself – no one needs to run a joystick or a game controller. Along the way it monitors what is going on. If there are problems (perhaps someone left a chair in the middle of the room), it decides what to do: go around the chair, maybe push the chair out of the way, or plan a different route that does not go through this room at all. If it can’t figure out a way to meet the goal it was given, it can decide to try again, or to call for help.
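The route selection described above can be illustrated with a toy example. This is only a sketch of the general idea – the map, node names, and costs are invented, and it uses plain Dijkstra shortest-path search rather than whatever the Vigilus planner actually does:

```python
from heapq import heappush, heappop

# Toy sketch of goal-directed route planning over a known map.
# The warehouse graph, node names, and costs are invented for illustration.
def plan_route(graph, start, goal, blocked=frozenset()):
    """Dijkstra's shortest path, skipping nodes known to be congested."""
    frontier = [(0, start, [start])]  # (cost so far, node, path taken)
    visited = set()
    while frontier:
        cost, node, path = heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, step_cost in graph.get(node, []):
            if nxt not in visited and nxt not in blocked:
                heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return None  # no route at all: time to call for help

warehouse = {
    "office": [("hall_a", 1), ("hall_b", 3)],
    "hall_a": [("loading_dock", 4)],
    "hall_b": [("loading_dock", 1)],
}
```

With nothing blocked, the robot takes the cheaper route through hall_b; if hall_b is reported congested, it replans through hall_a; and if both halls are blocked, it gets `None` back and must ask for help – the same fallback described above.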

In short, an intelligent autonomous robot acts much as you would expect a teammate to. If you give it a task, it will try to get the task done, using its model of the way things are, and its model of the things it knows how to do, to come up with a plan and carry it out.

So, it differs from the other types of robots in key ways, ways that make it more useful:

  1. Unlike a tele-operated robot, it doesn’t need a driver – it acts on its own,
  2. Unlike a pre-programmed robot, it can deal with a dynamic, changing world, and
  3. Unlike a reactive system, it builds a model of the world, and plans out actions to achieve its goals.
Next up:  Why do you want an autonomous robot?

Where is your robot?  Ours are being built by Gamma Two Robotics, here in Colorado.

  1. #1 by Tony Lewis on June 13, 2012 - 11:38 am

    Here is a hierarchy of autonomy:

    Radio Controlled Car: Not autonomous. Controlled by a human every 20 ms or so.
    Radio Controlled Helicopter: Next level up. Part of the control decisions are managed by an onboard computer fed by an IMU, i.e. the system makes quick, low-level decisions and keeps the helicopter stable. This was not always the case. It is much easier to pilot a toy helicopter today than a hobby helicopter 20 years ago (trust me on that one!).

    Robot programmed to run a maze: Here the robot has sensing, and makes decisions autonomously (to turn right or left). Also includes the ability to learn.
    Robot vacuum cleaners: Use clever algorithms to get the job done. Use more sensors. May include map building or a kind of geometric memory of the world.

    Now we start getting into the launching pad for higher level autonomy.

    Here are the necessary components, not in any particular order.

    (1) Understanding the environment. Beyond geometric memorization using SLAM, the robot knows the locations of objects.
    (2) The robot understands how it can interact with those objects using its onboard capability.
    (3) The robot can do more than one qualitatively different thing. It can pick up coke cans, but it can also play with the kids.
    (4) The robot can choose which activity to engage in on its own.
    (5) The robot has episodic memory. It can build constructs where it can report, in narrative form, the *important* events of the day. Not just a video playback.
    (6) The robot can reason based on its episodic memory to always improve its performance (i.e. it can reflect).
    (7) The robot is capable of acquiring skill-based memory. It can learn the fine manipulation skills of interacting with objects via experience.

    (8) The robot is a social being. It calls on help from other robots to accomplish tasks that it cannot.

    (9) The robot sees its human owner as a peer. And can use it as part of its social structure to accomplish tasks. I.e. “Lift me up so I can grab a glass on the shelf… ”

    (10) Most importantly, the robot can utter “no” when commanded to do things. Being a peer of a human, it can choose to cooperate (if it gets something out of the deal) or not. It views interactions with humans purely as transactions.

    So, this is a specific list that we can flesh out and start building autonomy around. Obviously people have been thinking about all of these things, but IMHO these are crucial elements of the next level of autonomy.

    Ultimately, the next level of a robot operating system will have all of these features cooked into it so that a programmer does not have to re-invent the wheel.

    Projects like ROS, as well as modular actuators and sensors (like the Kinect), allow the next generation of roboticists to be the Facebook stars of the future.

    Like the internet, few people know the names of the pioneers who put it together, but everyone knows Zuckerberg, who, with minuscule effort in comparison to those who built the internet, made billions.

    While I am not religious at all, we are wandering in the desert and may not live to see the “promised land” of fully autonomous robots in our lifetime.
    But we have to keep focused, cooperate with each other via modularity, division of labor and a focus on a clear vision.

    Well. My coffee is finished brewing so I am signing off…

  2. #2 by gammatworobotics on June 14, 2012 - 11:56 am

    Absolutely love your hierarchy! I especially like #5 – summarization of ‘important events’ is such a tricky one. And it is so dependent on the state of the entity – Ask a 6 year old and his dad to summarize the trip to the park, and you’ll get wildly different summaries.

  3. #3 by Deres on June 20, 2012 - 6:32 am

    In fact, each system can be broken down into multiple functions, and each of these functions can use a different level of autonomy. For instance, for a flying military reconnaissance drone, you can identify several independent functions: control, navigation, guidance, payload management, taking off, landing. In the first drones, all of these functions were teleoperated. Nowadays, control and navigation are fully automated. Some functions, like taking off and landing, are automated at the customer’s choice. Guidance is mainly manual, but some autonomous treatment exists; for instance, a drone that loses its command link for a long time will automatically return home without any orders. Thus, the last stage of autonomy is payload management. You currently need someone to look at the screen to identify what is on the ground. They are nevertheless currently working on it … algorithms to automatically identify people and vehicles already exist, and algorithms to follow a target are commonly used.

    The road to full autonomy is a long one and will be traveled step by step …

  4. #4 by BillT on August 5, 2012 - 4:51 pm

    All that’s good.

    From the perspective of an employer (purchaser), autonomous essentially means “able to do the whole job without me worrying about it”, both the monotonous and active parts. For a security robot, consider interviewing human guards and determining exactly what their jobs entail. Some of the simplest things they do may be challenging to automate.

    Can it have someone sign in on a paper form and verify that it’s done properly? Can it inspect and verify the validity of presented identification? Can it find a pen? Can it provide an ID badge, and ensure it’s properly attached/displayed? Can it determine “at a glance” whether each individual in its vicinity is properly displaying an ID badge? Can it instill in people a sense of being more secure and safe? Can it do sophisticated analysis of risk based on subtle signals in the environment, facial expressions, or behavior? Can it be suspicious of something or someone? Can it escalate the situation appropriately, calling human guards, police, fire department, plumbers, or others? Can it activate appropriate alarms? Can it verify when everyone’s vacated the building in case of a fire drill or actual alarm? Can it watch a bank of many security cams, evaluating each view? Can it slowly or rapidly move to indoor/outdoor locations to further investigate the situation? Can it remind visitors, as they leave, to drop off their badges before they make it out the door? Can it delay or detain suspects?

    Most of those capabilities require the higher levels of Tony Lewis’ hierarchy. Many of them might be done indirectly by the whole security system, for example detaining in an “airlock” entryway.

    It’s definitely many steps to that point, but maybe less time than we expect.
