So now we feel pretty confident about our ability to design controllers that take a robot to a goal location. In fact, we've seen it in design, we've seen it in simulation, and we've even seen it in real life. But we have not discussed the issue of: well, what if the robot needs to do something slightly more elaborate than just get to the point? For instance, you typically don't want to hit things on the way over there. So the one behavior that I want to talk a little bit about is obstacle avoidance, because go-to-goal and obstacle avoidance are really the dynamic duo of mobile robotics. We're going to do more things, of course. But underneath the hood, there will always be a go-to-goal and an obstacle-avoidance behavior. And let's think a little bit about how one should avoid driving into obstacles, because going to a goal location was not particularly complicated.
Well, we can clearly use the same idea as we did in go-to-goal by defining a desired heading and then simply, you know, steering in that direction. So, let's say that we have the following. The robot, well, it's the blue ball. And then we have this little red disc, which is the obstacle, located at (x_obst, y_obst). And the reason that we know its location is the disc abstraction we talked about when we talked about the sensors. Okay, if this were a goal, we would steer towards it. That much is clear. Now, the question is: if it's an obstacle, which direction should we steer? That's not as clear. I mean, here is a direction we can go in. You know, let's run away from the obstacle. But that's a little overly cautious, I think. At least sometimes. If I'm not even on my way towards the red thing, why do I all of a sudden have to insist on going in the opposite direction? So this is one direction we could go in, but it seems a little, how should I put it, a little skittish, or paranoid. We should be able to be a little more clever, maybe like this: if we're going in this general direction, then we could go perpendicular to the direction to the obstacle. That's one way we could be thinking, but there are other choices we could make.

Let's say that we have a goal. Again, we're not just avoiding obstacles, we're actually trying to go somewhere. This obstacle, the red obstacle we see here, well, it doesn't seem to matter if I'm going towards the goal, so what do I care about that obstacle? So, hey, we could just ignore it. That's one direction we could go in. Or we could somehow combine the direction to the goal with some way of avoiding the obstacle, so that we kind of move away from the obstacle while still somewhat heading towards the goal.

The point here is that there is no obvious correct answer. Going to a goal, it's clear which direction we want to go in. When we're avoiding an obstacle, it's not as clear. We don't obviously have to go in one particular direction; we have choices, and somehow some choices are better than others. So let's look at some of these choices a little bit.
Okay. So we have the robot in blue, we have the obstacle in red, and we have the goal in yellow. This was choice one: phi_1, which I'm going to call phi_obst plus pi. So phi_obst is this angle, right? Here is phi_obst. And in fact, we can write phi_obst as the arctangent of (y_obst minus y) over (x_obst minus x). So that is how we compute the angle to the obstacle, and then we can say: well, phi_1, suggestion one, which is the super-paranoid robot that avoids obstacles at all cost, adds pi to the mix. And by the way, why am I adding pi and not subtracting pi? All right, so here's the angle I want to go to, this is phi_obst, and what I'm doing now is adding pi. Well, the point is that it actually doesn't matter whether you add pi or subtract pi, because by now we know that angles are slightly scary objects, and we always take something like the arctangent, atan2, to ensure that we stay within minus pi and pi. Adding pi or subtracting pi, as long as we take some safety measures, it doesn't matter, so we can choose either one. But that's one way. Now, this direction is pure in the sense that I don't care where the goal is; I'm just going to move away from the obstacle as much as I can. So I'm going to call this pure avoidance: no notion of where I'm supposed to be going.
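To make that concrete, here is a minimal sketch of what that pure-avoidance heading might look like in Python. The function and variable names are mine, not from the course code, and the result is re-wrapped with atan2 exactly as discussed above.

```python
import math


def pure_avoidance_heading(x, y, x_obst, y_obst):
    """Pure-avoidance heading phi_1 = phi_obst + pi: point straight away
    from the obstacle, ignoring the goal entirely."""
    # Angle from the robot (x, y) to the obstacle (x_obst, y_obst).
    phi_obst = math.atan2(y_obst - y, x_obst - x)

    # Add pi to turn away from the obstacle, then re-wrap into (-pi, pi]
    # with atan2, so adding or subtracting pi gives the same result.
    phi_1 = math.atan2(math.sin(phi_obst + math.pi),
                       math.cos(phi_obst + math.pi))
    return phi_1
```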
Well, we had another choice, right? We said: what if we go perpendicular to this direction? So phi_2 is phi_obst plus or minus pi over 2. Well, what does that mean? It means that if I take minus pi over 2, I go in this direction, and if I take plus pi over 2, I go in that direction. And here it actually matters whether I do plus or minus, and the question is which one we should choose. Well, typically that depends on where the goal is. So we should pick, in this case, minus pi over 2, because that moves us closer to the goal, while plus pi over 2 moves us further away from the goal. And the punchline here is really that this is not a pure strategy, because we need to know where the goal is. Instead, what we're doing is what I'm calling blended, in the sense that we're taking the direction to the goal into account when we're figuring out which direction we should be going in. So it's not pure obstacle avoidance. If I just ask you to avoid an obstacle, you'd say: I can't, because I need to know where the goal is. In that sense, it's not pure. So, that's one choice.
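Before we move on, here is a small sketch of one way that sign choice could be made: pick whichever perpendicular direction is angularly closest to the goal heading. The selection rule and all of the names are my own illustration, not necessarily what the course simulator does.

```python
import math


def perpendicular_avoidance_heading(x, y, x_obst, y_obst, x_goal, y_goal):
    """Perpendicular heading phi_2 = phi_obst +/- pi/2, with the sign
    chosen so we swerve toward the side that is closer to the goal."""
    phi_obst = math.atan2(y_obst - y, x_obst - x)
    phi_goal = math.atan2(y_goal - y, x_goal - x)

    best_heading, best_error = None, None
    for sign in (+1.0, -1.0):
        # Candidate heading, wrapped back into (-pi, pi].
        candidate = phi_obst + sign * math.pi / 2.0
        candidate = math.atan2(math.sin(candidate), math.cos(candidate))
        # Angular distance between this candidate and the goal heading.
        error = abs(math.atan2(math.sin(phi_goal - candidate),
                               math.cos(phi_goal - candidate)))
        if best_error is None or error < best_error:
            best_heading, best_error = candidate, error
    return best_heading
```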
Well, remember this one? We said: you know what, this obstacle is no big deal to us, we're simply going to go in the direction of the goal. Well, this is pure go-to-goal. We're just running one behavior, go-to-goal, and we don't care about the obstacle.
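And just so it's written down somewhere, the pure go-to-goal heading is simply the angle to the goal; a one-line sketch with illustrative names:

```python
import math


def go_to_goal_heading(x, y, x_goal, y_goal):
    """Pure go-to-goal heading phi_3: steer straight at the goal and
    ignore the obstacle completely."""
    return math.atan2(y_goal - y, x_goal - x)
```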
we don't care about the obstacle. And what's more interesting is this choice, Fi
4 which is really a combination of the direction to the goal and the direction to
the obstacle. And the interesting thing here is that this is clearly a blended
mechanism, somehow we combining go to goal, type oyds with obstacl e avoidance
of ideas and the punch line here is that there are really two fundamentally
different ways of combining, avoiding slimming interstuff and getting to goal
points and these ways of combining things is called an arbitration Mechanism so we
saw typically in this case two fundamentally different arbitration
mechanisms one is the winner takes all approach which is a hard switch when we're
just going straight away from the obstacle right so here was the obstacle here was
the robot with the robot was going there we're doing just avoid obstacles. Or, if
the goal was here, right? And we're going straight to the goal, we're doing just go
to goal. So, these would be two examples of hard switches. Now, the two other
examples we saw were blended behaviors. One was, you're combining somehow, the
angle to the goal and the angle to the obstacle to produce a new desired angle.
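To tie the two arbitration styles together, here is a small sketch of both: a blended heading that mixes the go-to-goal and avoid-obstacle directions, and a winner-takes-all switch that runs exactly one behavior at a time. The weight alpha and the safe_dist threshold are made-up tuning parameters for illustration; the lecture only says that the directions get combined or switched, not how.

```python
import math


def blended_heading(x, y, x_obst, y_obst, x_goal, y_goal, alpha=0.75):
    """Blended arbitration (phi_4): mix the go-to-goal direction and the
    avoid-obstacle direction into one new desired heading."""
    phi_goal = math.atan2(y_goal - y, x_goal - x)
    phi_avoid = math.atan2(y - y_obst, x - x_obst)  # points away from the obstacle

    # Blend unit vectors rather than raw angles to avoid wrap-around trouble;
    # alpha is an assumed tuning weight on the go-to-goal direction.
    ux = alpha * math.cos(phi_goal) + (1.0 - alpha) * math.cos(phi_avoid)
    uy = alpha * math.sin(phi_goal) + (1.0 - alpha) * math.sin(phi_avoid)
    return math.atan2(uy, ux)


def hard_switch_heading(x, y, x_obst, y_obst, x_goal, y_goal, safe_dist=0.5):
    """Winner-takes-all arbitration: exactly one behavior runs at a time.
    If the obstacle is closer than safe_dist (an assumed threshold), the
    avoid-obstacle behavior wins; otherwise go-to-goal wins."""
    if math.hypot(x_obst - x, y_obst - y) < safe_dist:
        return math.atan2(y - y_obst, x - x_obst)   # pure obstacle avoidance
    return math.atan2(y_goal - y, x_goal - x)       # pure go-to-goal
```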