>> Yeah, so in that case it looks like,
since we would have to visit every single unvisited node in the graph,
wouldn't they be very similar in terms of run time?
>> Yeah, so- >> I agree.
>> All right,
now let's think through this question together.
Let's think about, in particular, what the worst case analysis should look like.
And so we have to think about what our algorithm will do
in the very worst case.
So the worst case is when we have to do the most work.
And we can think about starting somewhere,
and then having to visit more and more nodes.
And we'd like to get to the goal, and when will that happen in the worst case?
Well, it's when we've explored everything else that was possible,
gone through all the dead ends we had,
and still need to keep going further until we get to the goal.
And so if we think about how many nodes we have to visit until we get to the goal,
in the worst case we're gonna have to visit
every single one of the nodes before we get to the goal.
And so the number of nodes that each one of these algorithms will visit will be
the size of the vertex set minus one, which is on the order of the size of the vertex set.
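To make that worst case concrete, here is a small sketch (my own illustration in Python; the path graph and function names are hypothetical, not from the course): the goal sits at the far end of a line of vertices, so a search starting at the other end must visit every other vertex first.

```python
def path_graph(n):
    """Adjacency list for a path graph 0 - 1 - ... - (n - 1)."""
    return {v: [u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)}

def visits_until_goal(graph, start, goal):
    """Depth-first search, counting how many nodes get visited up to the goal."""
    visited = set()
    stack = [start]
    count = 0
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        count += 1
        if node == goal:
            return count
        stack.extend(u for u in graph[node] if u not in visited)
    return count

graph = path_graph(8)                  # |V| = 8 vertices in a line
print(visits_until_goal(graph, 0, 7))  # 8: the goal is reached only after
                                       # visiting the other |V| - 1 vertices
```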
Now, is that it?
Is that just the run time that we have to think about?
And when we think back to the algorithm, in either case we definitely
do something for every node that we visit. But as part of our algorithm we also
have to keep track of the edges available to us from the particular
node that we are visiting or exploring at that moment.
And so we end up doing some work for each of the current node's
unvisited neighbors, which means going through all possible edges.
And so not only do we have to think about which nodes are visited,
we also have to think about which edges are traversed by this algorithm.
And in the worst case we traverse every single one of them.
And so, taking into account that we do a constant amount of work for
traversing each edge in the graph, we see that our run time will have a contribution
that corresponds to O of the number of edges in the graph as well.
And so putting that together, the worst case run time for
both depth first search and breadth first search is gonna be O of
the number of vertices plus the number of edges.
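Seeing both contributions in code can help. Below is a minimal breadth-first search sketch, my own illustration rather than the course's code: the queue discharges each vertex at most once (the vertex term), and each vertex's edge list is scanned once when it is dequeued (the edge term).

```python
from collections import deque

def bfs(graph, start, goal):
    """Return True if goal is reachable from start in an adjacency-list graph.

    Each vertex enters the queue at most once: O(|V|) dequeue steps.
    Each edge list is scanned once, when its vertex is dequeued: O(|E|) total.
    Together that gives the O(|V| + |E|) worst-case running time.
    """
    visited = {start}
    queue = deque([start])
    while queue:                      # at most |V| iterations
        node = queue.popleft()
        if node == goal:
            return True
        for neighbor in graph[node]:  # across all iterations: O(|E|) checks
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return False
```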
Now you might stop there and say, well hold on a second.
I know my asymptotics, and I know that if the number of edges is
on the order of the square of the number of vertices,
then the number of edges is going to be the dominant term.
So I can just drop the number of vertices and
say the worst case running time is just O of the number of edges.
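As a quick back-of-the-envelope check (hypothetical numbers, not from the lecture): in a complete graph, every pair of vertices is joined, so the edge count is quadratic and swamps the vertex count.

```python
V = 1_000
E_complete = V * (V - 1) // 2   # every pair of vertices joined: 499,500 edges
print(V + E_complete)           # 500,500 -- the edge term dominates,
                                # so O(|V| + |E|) behaves like O(|V|**2) here
```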
But it's useful to us in our analysis of an algorithm to keep the number of
vertices and the number of edges separate, and to track how each impacts performance.
That way we can tune our analysis to the actual graphs that we're working with.
So for example, in the graph that you're working with for the project,
you might notice that the number of edges isn't anywhere near quadratic
in the number of vertices; it's really linear in the number of vertices.
And so in that situation, the performance that we have for depth first search and
breadth first search is really going to be linear in the size of the graph
rather than quadratic.
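For contrast, here is a sparse sketch with hypothetical numbers (the actual project graph isn't reproduced here): if each vertex touches only a handful of neighbors, the edge count grows linearly with the vertex count, and the whole bound is linear.

```python
V = 1_000
avg_degree = 4                  # assumption: each vertex has about 4 neighbors
E_sparse = V * avg_degree // 2  # each edge shared by two endpoints: 2,000 edges
print(V + E_sparse)             # 3,000 -- linear in |V|, far from the
                                # 500,500 of the dense case above
```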