PR2 Teleoperation


A few people have been teleoperating robots with a Kinect now, so I thought I'd have a go.  The OpenNI software performs decent skeletal tracking, so it is easier than ever to wave your arms around and have your robot obey.  Anywho, I realized that the PR2's design offers me a unique opportunity:  I can control all of the PR2's joints with my own!

Ok, that needs some explanation:  I’ve worked with 3 big fancy arms recently.

The PR2, by Willow Garage; the LWA3, by Schunk; and the WAM, by Barrett.  The PR2 and the Schunk have 7 degrees of freedom, and the WAM has 6.

Herb's WAM arm

Barrett WAM

We technically have a ton of degrees of freedom (DOF) in our arms, but our range is really limited in most of them.  In fact, if you look at the picture of my arm, you can see that our effective DOF count is about 7, and the motion in what I labeled as 6 is about 45 degrees max, and very uncomfortable.  Now, we have joints in very nice places which, along with the other limited degrees of freedom (like being able to shrug), give us a large workspace (the volume our arm can reach).  But we still only have about 6 main DOFs.

Spankybot's Schunk arm

Schunk LWA3

Unfortunately, our joints don't usually map well to robot joints.  Take the Schunk arm, for example.  My arm's joint 2 looks like it is the Schunk's joint 2, and my joint 3 looks like the Schunk's joint 1.  But where is my joint 1?  There isn't any equivalent on the Schunk.

PR2

It gets to places by assuming joint angles I cannot reach, and vice versa.  The WAM arm is a little better, but it still has the same problem.  The PR2, however, is a great match for the human arm.  Up to joint 5, our arms match very well.  It even has similar joint limits: just like our elbow, the PR2's joint 4 does not bend more than 10 degrees past horizontal.  The only real limitation that I see is that the PR2's joint 2 does not go up very high.  As for the other two DOFs, our joint 7 is basically the PR2's joint 6.  Our joint 6 (which is not so useful) is replaced by the PR2's joint 7, which can spin continuously, which is great for screwing things in.
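
Concretely, the correspondence I'm describing looks roughly like this.  Treat it as a sketch: the PR2 names are the usual right-arm joint names, and the human joint numbers follow my labeled photo.

```python
# Rough human-to-PR2 joint correspondence, as described above.
HUMAN_TO_PR2 = {
    1: 'r_shoulder_pan_joint',
    2: 'r_shoulder_lift_joint',   # the one that doesn't go up very high
    3: 'r_upper_arm_roll_joint',
    4: 'r_elbow_flex_joint',      # elbow flex, with human-like limits
    5: 'r_forearm_roll_joint',
    7: 'r_wrist_flex_joint',      # my joint 7 ~ the PR2's joint 6
    6: 'r_wrist_roll_joint',      # my (not very useful) joint 6 -> continuous roll
}
```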

Now, because robot arms are often very different from our own, we cannot just tell their joint 1 to be at whatever angle our joint 1 is at (ignoring, for now, that it used to be pretty difficult to get a live update of what angle our joint 1 is at).  Roboticists usually get around this limitation by using the 6-DOF pose of our hand to tell the robot where to put its hand.  The robot then has to figure out what angles its joints would have to be at to achieve this (and in many cases, the robot just can't get there), but hey, that is what the study of inverse kinematics is all about!
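
To make that concrete, here is a toy two-link planar IK solver.  It is obviously not the PR2's real 7-DOF solver, just an illustration of the "pose in, joint angles out, if a solution exists" idea:

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.4):
    """Toy 2-link planar IK: joint angles that place the 'hand' at (x, y).
    Real arms need a 6/7-DOF solver, but the idea is the same:
    pose in, joint angles out -- if a solution exists at all."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if abs(cos_elbow) > 1.0:
        return None  # target is out of reach: no IK solution
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(0.5, 0.3))  # reachable: returns a pair of angles
print(two_link_ik(2.0, 0.0))  # unreachable: returns None
```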

However, with the PR2, I can directly map my joint angles to the robot's joint angles, allowing me to, for example, lift my elbow to move out of the way of an obstacle.
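
The core of that mapping is just geometry on the tracked skeleton points.  As a rough sketch (the function, the point layout, and the sign convention are my own illustration, not the exact code), the elbow angle falls out of three tracked points like this:

```python
import numpy as np

def elbow_flex_angle(shoulder, elbow, hand):
    """Elbow bend (radians) from three tracked 3D skeleton points.
    Returns 0 for a fully straight arm, pi/2 for a 90 degree bend.
    (The sign/offset would still need to match the PR2's own convention.)"""
    upper = np.asarray(shoulder, dtype=float) - np.asarray(elbow, dtype=float)
    lower = np.asarray(hand, dtype=float) - np.asarray(elbow, dtype=float)
    cos_a = np.dot(upper, lower) / (np.linalg.norm(upper) * np.linalg.norm(lower))
    interior = np.arccos(np.clip(cos_a, -1.0, 1.0))  # pi when the arm is straight
    return np.pi - interior

# Elbow bent roughly 90 degrees:
print(elbow_flex_angle([0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.3, -0.3, 0.0]))
```

And here's the video: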

I just knocked this out on Wednesday, and it needs a bit more work.  In particular, the delay is caused by the fact that the skeletal tracker is a bit noisy, and the arm will jump around if I don't put a filter on it.  I put a pretty aggressive window filter on, which greatly increases the delay and isn't terribly effective at smoothing, but it is really easy to implement.
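
The filter itself is nothing fancy; think of a fixed-size moving average like the sketch below (the window size here is a made-up number), which is exactly why it adds so much lag:

```python
from collections import deque

class WindowFilter(object):
    """Moving average over the last `size` samples.  Trivial to write,
    but a big window lags the input by roughly size/2 frames, which is
    where most of the delay comes from."""
    def __init__(self, size=15):
        self.window = deque(maxlen=size)

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / float(len(self.window))

# One filter per joint; feed it the raw tracker angle every frame.
elbow_filter = WindowFilter(size=15)
smoothed_angle = elbow_filter.update(1.2)
```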

I'm very happy with the results so far.  Leaning (forward, back, and to the sides) translates the robot, and turning my torso turns the robot base.  I control the grippers by opening and closing my hands.  The only feature left to implement before I start fine tuning is using the hand orientation from the hand detector to orient the gripper.
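
The base control boils down to "torso displacement becomes velocity, with a dead zone so I can stand still".  Here's a hedged sketch of that idea (the thresholds and gains are made-up numbers, not the ones I actually used):

```python
import math

def base_command(lean_x, lean_y, torso_yaw, dead_zone=0.05, gain=1.0):
    """Turn torso lean (meters from a neutral pose) and torso yaw (radians)
    into a (forward, sideways, turn) velocity command.  The dead zone lets
    me stand roughly still without the base creeping around."""
    def scaled(v):
        if abs(v) < dead_zone:
            return 0.0
        return gain * (v - math.copysign(dead_zone, v))

    forward = scaled(lean_x)     # lean forward/back  -> drive forward/back
    sideways = scaled(lean_y)    # lean left/right    -> strafe sideways
    turn = scaled(torso_yaw)     # twist the torso    -> rotate the base
    return forward, sideways, turn

print(base_command(0.12, 0.0, -0.3))
```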
