
I.K.bot

Ivan Kirigin's views on Robotics & Culture: future. perfect. progress.

Tuesday, October 30, 2007

Novel Interfaces for Tele-Operated Robotics

I've been thinking about the various ways to control a teleoperated robot. Autonomous robots can be given task-level commands: "clean the floor", "sweep this building".

But teleoperation implies a lower level of user control, making it a harder problem. Live video feeds stream over often unreliable networks, while the operator tries to control the robot.

High-level systems haven't yet been deployed. Direct a robot to a particular location by clicking on a map, and command it to grasp an object by clicking on it in the video stream. The mapping, obstacle avoidance, navigation, and manipulation are all essentially automatic.
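Here's a rough sketch, in Python with made-up names (not any vendor's API), of what those two commands look like from the robot's side: a map click becomes a navigation goal, a video click becomes a grasp request, and everything below that is left to the autonomy.

```python
# A minimal sketch of task-level teleoperation: the operator clicks a map point
# to navigate, or clicks a pixel in the video feed to request a grasp, and the
# robot's onboard autonomy handles planning, avoidance, and manipulation.
from dataclasses import dataclass

@dataclass
class NavigateTo:
    x_m: float          # goal in map coordinates, meters
    y_m: float

@dataclass
class GraspAt:
    pixel_u: int        # click location in the video frame
    pixel_v: int

class TaskLevelRobot:
    """Stand-in for an autonomous robot that accepts high-level commands."""
    def handle(self, cmd) -> None:
        if isinstance(cmd, NavigateTo):
            # Mapping, path planning, and obstacle avoidance are all internal.
            print(f"Planning a path to ({cmd.x_m:.1f}, {cmd.y_m:.1f}) m")
        elif isinstance(cmd, GraspAt):
            # Perception turns the clicked pixel into a 3D grasp target.
            print(f"Grasping the object at pixel ({cmd.pixel_u}, {cmd.pixel_v})")

robot = TaskLevelRobot()
robot.handle(NavigateTo(12.5, -3.0))   # operator clicked the map
robot.handle(GraspAt(320, 240))        # operator clicked the video stream
```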

iRobot has a research project called Sentinel. It uses sensors that aren't on any deployed systems, and is able to receive such commands. The controller is a tablet with a map-based interface that can be used to control multiple robots. The operator can still drill down and control each robot, but with the autonomy in the robots, why would they want to? This is the future, though. Right now, multiple people are needed to control a single robot. That ratio needs to be inverted, and Sentinel can do it.




Deployed systems are unfortunately dumber and thus harder to control.

Take the PackBot EOD. It has a three-link arm with over 10 degrees of freedom. You can't control this adroitly with a simple controller. The solution is a set of "pucks", which are high-degree-of-freedom joysticks. They can be pushed forward and back, pushed side to side, lifted, pushed down, and twisted. Different user modes route those inputs to the chassis treads, the chassis flippers, the arm, and the manipulator.
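To make the mode idea concrete, here's a toy sketch of how a single puck's axes could be routed to different subsystems depending on the active mode. The axis names and mappings are my own illustrations, not the actual PackBot control scheme.

```python
# Mode switching: one high-DOF "puck" reading is routed to whichever subsystem
# the operator has selected. Subsystem names and mixes are illustrative only.
from dataclasses import dataclass

@dataclass
class PuckInput:
    fwd: float      # push forward/back
    side: float     # push side to side
    lift: float     # lift / push down
    twist: float    # twist about the vertical axis

def dispatch(mode: str, p: PuckInput) -> dict:
    """Map the same physical input to different degrees of freedom per mode."""
    if mode == "TREADS":
        return {"left_track": p.fwd + p.twist, "right_track": p.fwd - p.twist}
    if mode == "FLIPPERS":
        return {"flipper_rate": p.lift}
    if mode == "ARM":
        return {"shoulder": p.fwd, "elbow": p.lift, "turret": p.twist}
    if mode == "GRIPPER":
        return {"wrist_roll": p.twist, "grip_close": p.lift}
    raise ValueError(f"unknown mode {mode!r}")

print(dispatch("ARM", PuckInput(fwd=0.2, side=0.0, lift=-0.1, twist=0.5)))
```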

You can see a video overview of the PackBot EOD here:



Interestingly, this lets experts pull off amazing feats, while novice users complain about the complexity. Embedded video training has been one suggested solution.

Soon, ground robots are going to have a controller that looks very familiar:


Kids who join the armed forces already know how to use it -- they've applied it to many very complicated games. This is a good direction, especially considering the chaos of many vendors providing many different controllers.
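Part of the appeal is how simple the mapping can be. A toy example, with normalized values and not any fielded operator control unit's actual mixing: one thumbstick becomes forward speed and turn rate, mixed into left and right track commands.

```python
# Skid-steer mixing: a familiar gamepad thumbstick drives a tracked robot.
# Inputs and outputs are normalized to [-1, 1]; the mapping is illustrative.
def stick_to_tracks(stick_y: float, stick_x: float) -> tuple:
    """Mix forward (stick_y) and turn (stick_x) into left/right track speeds."""
    left = max(-1.0, min(1.0, stick_y + stick_x))
    right = max(-1.0, min(1.0, stick_y - stick_x))
    return left, right

print(stick_to_tracks(0.8, 0.3))   # gentle right turn while driving forward
```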


AnyBots is working on a humanoid robot. They intend it to replace human laborers, with remote workers providing the brains. Watch this video, which shows a panoply of screens used by the operator. Note the use of accelerometers and multiple joysticks.



They will need to add autonomy to make grasping tasks easier. I also worry about the lag in the video. You need enough pixels to convey the environment, but you also need to present the scene and relay back commands fast enough to avoid faults.
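A quick back-of-the-envelope, with made-up numbers, shows why the lag matters: between compressing and shipping each frame, the one-way network delay, and the return trip for the command, the robot can cover a meaningful distance before a stop actually lands.

```python
# Rough latency budget for teleoperation over a lossy link.
# All numbers below are illustrative assumptions, not measurements.
frame_kbits        = 640 * 480 * 0.05 * 8 / 1000   # ~0.05 bytes/pixel after compression
link_kbps          = 1500.0                        # assumed radio throughput
one_way_latency_s  = 0.15                          # assumed network delay, each way
robot_speed_mps    = 1.0

transmit_s   = frame_kbits / link_kbps             # time to push one frame
view_age_s   = one_way_latency_s + transmit_s      # how stale the displayed frame is
react_loop_s = view_age_s + one_way_latency_s      # see it, then the command travels back
overshoot_m  = robot_speed_mps * react_loop_s      # distance covered before the robot reacts

print(f"frame transmit: {transmit_s*1000:.0f} ms")
print(f"operator sees a scene {view_age_s*1000:.0f} ms old")
print(f"robot moves ~{overshoot_m:.2f} m before a stop command takes effect")
```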


The Nintendo Wii is a new gaming platform whose controller uses accelerometers to sense sweeping motions. The applications to robotics are pretty obvious. I've seen it used to control a PackBot EOD arm with impressive results. Here is a video from US Mechatronics, where they're playing Riil Wii Tenniis.
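To give a flavor of the idea (this is just the concept, not the US Mechatronics setup): tilt angles recovered from the accelerometer become joint-rate commands for the arm.

```python
# Driving an arm from handheld motion sensing: estimate wrist pitch and roll
# from a 3-axis accelerometer, then map them proportionally to joint rates.
# Joint names and gains are illustrative.
import math

def tilt_from_accel(ax: float, ay: float, az: float) -> tuple:
    """Estimate pitch and roll (radians) from a static accelerometer reading in g."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

def arm_rates(pitch: float, roll: float, gain: float = 0.5) -> dict:
    """Proportional mapping from operator wrist pose to arm joint rates."""
    return {"shoulder_rate": gain * pitch, "wrist_roll_rate": gain * roll}

# Controller held tilted forward and slightly rolled:
pitch, roll = tilt_from_accel(ax=0.3, ay=0.1, az=0.95)
print(arm_rates(pitch, roll))
```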




Finally, a note about UAVs. UAVs are easier to control than ground robots. There are very few obstacles, no manipulation, and localization is easy. Still, teleoperation can be difficult. Task-level commands, such as "patrol on this GPS waypoint circuit", are often used. Automated landing and takeoff are common, even on aircraft carriers. One component that is still lacking is situational awareness. Luckily, the operators can be in Missouri while bombing Afghanistan, so comfy chairs and multiple monitors abound. Here is a view of Raytheon's Universal Control System.
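A patrol circuit is about as simple as task-level commands get: the operator hands over a list of waypoints once, and the autopilot cycles through them until retasked. A toy sketch with made-up coordinates:

```python
# "Patrol on this GPS waypoint circuit": one command replaces continuous piloting.
# Coordinates are illustrative (lat, lon, altitude in meters).
from itertools import cycle

patrol_circuit = [
    (34.000, 65.000, 3000.0),
    (34.020, 65.010, 3000.0),
    (34.015, 65.040, 3200.0),
]

def patrol(waypoints, legs_to_fly: int) -> None:
    """Repeatedly hand the next waypoint to the (simulated) autopilot."""
    for leg, (lat, lon, alt) in zip(range(legs_to_fly), cycle(waypoints)):
        print(f"leg {leg}: fly to ({lat:.3f}, {lon:.3f}) at {alt:.0f} m")

patrol(patrol_circuit, legs_to_fly=5)   # in practice this loops until retasked
```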



But can it play Doom?
