Hi, this video is about conditional independence and d-separation. We have three objectives here. The first is to understand what is meant by blocking. The second is to understand what colliders are and what happens when you condition on them. Finally, our last goal is to understand what d-separation is and the rules associated with it.

We will begin with blocking. Paths can be blocked by conditioning on nodes in the path. Consider the chain where A affects G and G affects B. Imagine now that we condition on G, the node in the middle of this chain. In that case, we would block the path from A to B. Previously, we showed that in a chain like this, information does get from A to B, so A and B are dependent, but they are dependent through G: A affects G, and G ultimately affects B. In a sense, it's G that's really causing this association between A and B; it's the thing that links the two together. Conditioning on G, which is another way to say controlling for G, blocks this path. So the main idea here is that if G is sitting in the middle of this chain and we block it, we get independence between A and B.

As a more concrete hypothetical example, let A be the outside temperature, and let G be whether or not the sidewalks are icy. Clearly, the outside temperature A will affect whether or not the sidewalks are icy. Then B is whether or not someone falls, and clearly G would affect B: whether or not the sidewalk is icy might affect whether or not somebody falls. But there is no arrow from A to B; we'll assume, just for illustration, that temperature does not directly affect whether or not somebody falls. So if it's cold outside but there's no ice, you're no more likely to fall than if it's warm outside and there's no ice. Here, A and B are marginally associated with each other. Clearly, temperature and whether or not someone falls are associated: if it's a warm day, there's no chance of ice, and so somebody is unlikely to fall. However, imagine now that we block G, that we condition on G. Conditioning on G means that we make it homogeneous; in this case, that means we hold fixed whether or not the sidewalks are icy. If we hold that fixed, there should not be an association between A and B. For example, by controlling for G, I might imagine a world where the sidewalk is icy; in that world, temperature and falling should be unrelated. Similarly, if I conditioned on G and said the sidewalk is not icy, temperature and falling should still be unrelated. So we're blocking G by making it the same for everybody. That's another way of saying that we're controlling for it. That's what blocking is.
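To make the chain example concrete, here is a minimal simulation sketch in Python. The variable names and all the probabilities are illustrative assumptions, not from the lecture; it generates the chain A → G → B as temperature → icy sidewalk → fall, and shows that A and B are marginally associated but approximately independent once we condition on G.

```python
import random

random.seed(0)

def simulate(n=100_000):
    rows = []
    for _ in range(n):
        cold = random.random() < 0.5                      # A: is it cold outside?
        icy = random.random() < (0.6 if cold else 0.05)   # G: ice depends only on A
        fall = random.random() < (0.4 if icy else 0.02)   # B: falling depends only on G
        rows.append((cold, icy, fall))
    return rows

rows = simulate()

def p_fall(subset):
    return sum(f for _, _, f in subset) / len(subset)

# Marginally, A and B are associated: P(fall | cold) is well above P(fall | warm).
print(p_fall([r for r in rows if r[0]]), p_fall([r for r in rows if not r[0]]))

# Conditioning on G blocks the path: within the icy stratum,
# P(fall | cold, icy) is approximately equal to P(fall | warm, icy).
icy = [r for r in rows if r[1]]
print(p_fall([r for r in icy if r[0]]), p_fall([r for r in icy if not r[0]]))
```

The same comparison within the not-icy stratum would show the same thing: once we fix G, temperature carries no information about falling.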
Associations on a fork can also be blocked. So far I've only shown you blocking on a chain, but we can also block on a fork. Here we have a path where G affects both A and B. In this case, A and B are again associated with each other, because G affects both of them; we talked about that in a previous video. But if we condition on G, we block the path from A to B. In this DAG, the only reason A and B are associated is that G affects both of them. So if we hold G fixed, if we pick a value for G, if we condition on it, we make it the same for everybody. Now there is no relationship between G and A, or between G and B; in other words, G is no longer affecting A and B, because we're holding it fixed. We've blocked that path, and so A and B are now independent of each other. So if we condition on G, we block this path. Hopefully that's straightforward enough: we can block those kinds of paths, we can block forks, we can block chains.

But colliders create a different situation, something you could almost think of as the opposite. Consider an inverted fork, where we have a collision at G. If you recall from a previous video, A and B are actually independent here; A and B are not associated via this path, because information is colliding at G. So if we want A and B to be independent of each other, we don't have to do anything: there's no path that needs to be blocked, and we would just be done. However, if we were to condition on G, we would actually create an association between A and B. So here, blocking G has the opposite effect of what we saw previously.

Because I think this is sometimes counterintuitive, a really simple example might help get the idea across. Imagine A is the state of an on/off switch, a light switch, for example; it's either on or off. Imagine there's a second on/off switch, and B is its state. So there are two on/off switches, A and B, and G is whether or not a lightbulb is lit up. Now, imagine that we determine A by a coin flip, so A is totally independent of everything: we just flip a coin and decide whether to flip A on or off. Same thing with B: we have a separate coin, we flip it, and we use that to determine whether switch B is on or off. And now imagine that G is lit up only if both A and B are in the on position. Clearly, A and B are independent of each other, because they were just based on two coin flips. We can depict that with this DAG: A and B both affect G, because G can only be lit up if both A and B are on, but A and B are independent of each other. So, as we said, A and B are independent; for example, if I told you B is on, that tells you nothing about A. However, A and B are actually dependent given G. If you block G, if you condition on G, you induce an association between A and B. I'm depicting blocking here with a rectangle around G, and I'm putting a dashed curve between A and B to show that when you condition on G, you open up a path between A and B. Imagine I condition on G, meaning I tell you what G is, or I fix G. Suppose I told you that the light is off. Well, then A must be off if B is on, and vice versa. So we know something about the relationship between A and B once I tell you what G is. Previously, if I told you what B is, you wouldn't have known anything about A, because A was just determined by a coin flip. But once I tell you what G is, A and B have information about each other. We're depicting that in the DAG below with the dashed curve, indicating that if you block G, you open up a new path between A and B. So now A and B are conditionally dependent, given G.
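Here is a similar Python sketch for the collider, again with made-up numbers: A and B are independent coin flips, and the bulb G lights only when both switches are on. Marginally, B tells you nothing about A; but among the trials where the bulb is off, B suddenly becomes informative about A.

```python
import random

random.seed(0)
data = []
for _ in range(100_000):
    a = random.random() < 0.5   # switch A, set by a coin flip
    b = random.random() < 0.5   # switch B, set by an independent coin flip
    g = a and b                 # bulb G is lit only if both switches are on
    data.append((a, b, g))

def p_a_on(rows):
    return sum(a for a, _, _ in rows) / len(rows)

# Marginally, B tells you nothing about A: both conditionals are about 0.5.
print(p_a_on([r for r in data if r[1]]), p_a_on([r for r in data if not r[1]]))

# Condition on G = off (the bulb is dark): now B is informative about A.
# Among "bulb off" trials, P(A on | B on) = 0, but P(A on | B off) is about 0.5.
off = [r for r in data if not r[2]]
print(p_a_on([r for r in off if r[1]]), p_a_on([r for r in off if not r[1]]))
```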
Next, we're going to discuss something called d-separation. We're going to use the rules that we just learned to think about whether a set of nodes creates independence between variables on a given path. If a set of nodes does do that, we're going to call the path d-separated. The 'd' here refers to dependence: we're thinking of a path that carries a dependency between nodes, and we want to know whether a set of variables C removes that dependency. If it does, we say that C d-separates them. This will become clearer as we get into it, but that's the main idea.

A path is d-separated by a set of nodes C if it contains a chain and the middle part is in C. What we mean by the middle part, in this simple DAG, is that E is in the middle. So when we think about which nodes need to be controlled for to have d-separation, to have independence essentially, we would need E to be in there. We saw that previously. You can think of C as the collection of all the nodes we are going to control for; as long as the middle part of the chain is in there, we should be okay. A path is also d-separated if it contains a fork and the middle part is in C. Here the middle part is what we saw earlier as the top of the path, where information flows down to the other variables; we would need to control for that if there's a fork. Or the path could contain an inverted fork, in which case we do not want to control for the middle part. We saw previously that if we control for that middle part, the collider, we induce an association. So if the path contains an inverted fork, we don't want the middle part to be in C, and we also don't want any descendants of the collider to be in C. At this point, some of this might be unclear, but we're going to go through it in much more detail; those are the main ideas. And you'll notice that the three conditions I just pointed out all follow from earlier slides, where we looked at these different types of paths, when there are associations, and when we can block them.

On the previous slide, we talked about d-separation of a path. But of course, there could be many paths between two variables, so next we'll think about d-separation between two nodes in general. Two nodes A and B are said to be d-separated by a set of nodes C if C blocks every path from A to B. You can imagine many paths from A to B, and we will attempt to create conditional independence between A and B by blocking all of them using the set of nodes C. We'll condition on C; we'll control for C. And if we're successful, if C blocks all the paths, then A and B are d-separated, and we can say that A is independent of B conditional on C.

So far, we've been learning a lot of these rules, but we're building up to something. In particular, as motivation, recall the ignorability assumption: treatment assignment A is independent of the potential outcomes, conditional on X. Hopefully, you can see the relationship here. Our long-term goal is to identify a set of variables X that will create conditional independence between A and the potential outcomes; in other words, we would have d-separation between A and the potential outcomes. That is what we're moving toward. At this point, we're strictly learning the rules that we'll have to use to figure out when ignorability holds.
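As a sketch of how these path rules can be checked programmatically, the networkx library provides a d-separation test (this assumes networkx 3.3 or later, where the function is named nx.is_d_separator; older releases called it nx.d_separated). The toy DAG below is an illustration, not a graph from the lecture: it has a chain path A → G → B and a collider path A → H ← B.

```python
import networkx as nx

# Chain A -> G -> B, plus a second path through the collider A -> H <- B.
dag = nx.DiGraph([("A", "G"), ("G", "B"), ("A", "H"), ("B", "H")])

# Conditioning on {G} blocks the chain, and the collider path is already
# closed because H (and its descendants) are not conditioned on.
print(nx.is_d_separator(dag, {"A"}, {"B"}, {"G"}))       # True

# Adding the collider H to the conditioning set reopens the second path.
print(nx.is_d_separator(dag, {"A"}, {"B"}, {"G", "H"}))  # False

# Conditioning on nothing leaves the chain open, so A and B are dependent.
print(nx.is_d_separator(dag, {"A"}, {"B"}, set()))       # False
```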