So, in Lecture 3, we're going to continue talking about computational Boolean algebra. But we're going to focus on real, practical data structures and operators: basically, the foundational methods that everybody uses to do large, industrial-scale designs.

When we talked about the Unate Recursive Paradigm at the end of the last lecture, we introduced a simple kind of representation and just one basic operation, Tautology. URP is a historically important representational scheme, but it's not really the way we do things today.

So what I want to start talking about is one of the most powerful data structures in the Boolean universe, and that's the binary decision diagram. They're called BDDs. In this lecture, we're going to introduce what a decision diagram is, and then we're going to start manipulating it and restricting it and constraining it so that it becomes a really powerful data structure on which we can do important operations.

So let's start. So, here we are again, focusing on representations and how important they are in our study of computational Boolean algebra, where the goal is to be able to manipulate Boolean objects as data structures that are operated on by operators that we can build in software. And by way of example, I'm just showing what we ended up with at the end of the last lecture, which was the URP tautology

algorithm. And you know, this is really a great warm-up example. I've just got a little example on the right here. We've got an internal node in a tautology computation that has cubes b, c, and b-bar c-bar. It would seem that we've already set a to 1 before we got here. We cannot tell anything about tautology for this particular collection of cubes.

So, we're going to have to go a little further: set b equal to 1 on the left-hand side, set b equal to 0 on the right-hand side. Doing the cofactoring, we see on the left-hand side, oh, I can tell it's a tautology, because I've got an all-don't-cares cube; there's a 1 in there. Now on the right-hand side, we can again see it's a tautology, because we've got two single-variable cubes, c and c-bar. And when we OR those together, we get a 1. This really does show the big idea of Boolean functions as things we manipulate with software.
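As a sketch of how that cofactor-and-recurse idea looks in software, here is a minimal, illustrative URP-style tautology check. The dict-based cube representation and the naive choice of splitting variable are my own assumptions for the sketch, not the lecture's notation:

```python
# Illustrative URP-style tautology check (simplified sketch).
# A cube is a dict mapping a variable name to 0 or 1;
# variables missing from a cube are don't-cares.

def cofactor(cubes, var, val):
    """Cofactor a cube list with respect to var = val."""
    out = []
    for c in cubes:
        if c.get(var) == 1 - val:
            continue  # this cube vanishes under the assignment
        out.append({v: b for v, b in c.items() if v != var})
    return out

def is_tautology(cubes):
    if any(len(c) == 0 for c in cubes):
        return True   # an all-don't-cares cube covers everything
    if not cubes:
        return False  # empty cover is the zero function
    var = next(iter(cubes[0]))  # naive splitting-variable choice
    return (is_tautology(cofactor(cubes, var, 1)) and
            is_tautology(cofactor(cubes, var, 0)))

# The lecture's example: cubes b, c, and b-bar c-bar -- a tautology
cubes = [{'b': 1}, {'c': 1}, {'b': 0, 'c': 0}]
print(is_tautology(cubes))  # True
```

Splitting on b reproduces the argument from the slide: the b = 1 cofactor contains an all-don't-cares cube, and the b = 0 cofactor contains the single-variable cubes c and c-bar.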

But I have to admit that URP is, you know, not the real way that people do it. It's not the most popular, the most influential, the most widely deployed way; it's not any of those. So what we want to do in this sequence of lectures is look at a really important, elegant way to do this, a really powerful and interesting way: these things called binary decision diagrams. So let's talk about those.

Now, decision diagrams were actually studied by lots and lots of people, Lee and then Boute and then Akers, way back when. But they got seriously useful when Randy Bryant of Carnegie Mellon University made a really remarkable breakthrough. These things are technically called reduced ordered BDDs, or ROBDDs, although honestly, everybody just calls them BDDs these days. This was really the breakthrough that made these things useful as a remarkable data structure. There's a big and interesting Wikipedia page on these that's certainly worth looking at. And I'll also say that Randy provided lots of the nice BDD pictures in this lecture and some of the foundational structure of this lecture, so big thanks to Randy Bryant for letting us use that material. So, our first step in the development of reduced ordered BDDs is just to talk about what a decision diagram is.

So the first big idea is just to turn a Boolean function into a decision diagram. And you can kind of think of this as taking the truth table, and I've got one shown at the upper left, and turning it on its side and putting it in the form of a tree. So this is a truth table for three binary variables, x1, x2, x3, and so it's got eight rows. The first four values are 0, 0, 0, 1, and the second four are 0, 1, 0, 1.

And the big idea of a decision tree is that it answers the question "what is the value of the function?" as a sequence of decisions. So every vertex represents a variable, and here is a vertex that represents variable x1, which I've got shown here. And the edges that go out of those vertices are decisions to set that variable to a value. So, if you follow a dotted green line, then you've decided to set the value to a 0, and so I can say over here x1 equals 0. And if you go on a solid red line, you've decided to set the value equal to a 1, so I can write x1 equals 1 over here.

Now, all of these arrows are really directed; they're all going down. But by convention we just don't draw the arrowheads, because we know where they go, and the answer is that they all point down. Right, so I don't really need to draw that. And then the idea is that you follow a sequence of decisions as you move through the variables and out their edges, until you have visited all the variables you need to be able to make a decision.

And so, for example, take the first row of the truth table, and I'm just going to highlight it like this. On the first row of the truth table you'd say, well, if x1 is 0, now what? Okay, well, now I need to know something else. Okay, well, if x2 is 0, now what? Well, now I need to know something else. Well, what if x3 is 0? Oh, well, actually, I know enough to tell you the value of this function: the value of this function is a 0. That's just that row of the truth table. And this very simple decision tree, which completely defines every single row of the truth table as a unique path through the decision tree, just kind of maps out the truth table.

So it's really going to be 0, 0, 0, 1, like the first four rows of the truth table, and 0, 1, 0, 1, like the next four rows of the truth table, as we map that decision tree out. So that's really all there is to a decision tree. It's kind of like a truth table turned on its side, rendered in the form of a sequence of variable-value decisions until you can tell me what the value of the function is.
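To make the "truth table turned on its side" idea concrete, here is a small sketch that builds the full decision tree for this truth table and evaluates it by following decisions down to a constant leaf. The class and function names are my own, purely illustrative:

```python
# Sketch: a decision tree for the lecture's truth table, whose
# values are 0,0,0,1, 0,1,0,1 down the eight rows (order x1 x2 x3).

class Node:
    def __init__(self, var, low, high):
        self.var = var    # variable tested at this vertex
        self.low = low    # child for var = 0 (the dotted green edge)
        self.high = high  # child for var = 1 (the solid red edge)

def evaluate(node, assignment):
    """Follow one path of decisions down to a constant leaf."""
    while isinstance(node, Node):
        node = node.high if assignment[node.var] else node.low
    return node  # a constant, 0 or 1

def build(vars_left, rows):
    """Turn a list of truth-table values into a full decision tree."""
    if not vars_left:
        return rows[0]
    half = len(rows) // 2
    return Node(vars_left[0],
                build(vars_left[1:], rows[:half]),   # var = 0 half
                build(vars_left[1:], rows[half:]))   # var = 1 half

tree = build(['x1', 'x2', 'x3'], [0, 0, 0, 1, 0, 1, 0, 1])
print(evaluate(tree, {'x1': 0, 'x2': 0, 'x3': 0}))  # 0, the first row
print(evaluate(tree, {'x1': 0, 'x2': 1, 'x3': 1}))  # 1
```

Every row of the truth table corresponds to exactly one root-to-leaf path in this tree, which is the "unique path" property from the slide.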

So there's a little bit of terminology; we can just walk through that here. We talked about the vertices, but we just tend to call them variables. That just makes life easier, because they're all associated with a variable. And we often talk about the edges that come out of a node as being pointers, because they are. And we talk about the low pointer, or the low child, as being what happens when I assign the variable the value 0. We also talk about the high pointer, or the high child, as being what happens when we assign the variable the value 1.

And a variable ordering is just the order in which the decisions about the variables are made. So I'm just circling that big one right here: x1 gets a decision, and then x2 gets a decision, and then x3 gets a decision, and then we know enough to actually tell you what the value of the function is. In this case it's a 1. That's called the variable ordering.

And the things at the bottom are called constants, right? No surprise, because they're not variables, and they're also the leaves of the tree. They're the things that you get at the bottom when you can't go any further and you don't need to visit any more variables. Now, one of the things to note is that different variable orders are possible. I mean, I haven't said anything about why one would choose a particular variable order. I did it in a somewhat obvious manner here: all of the x1s are at the top of the tree, and then the next level is x2s, and then the next level is x3s, but there's no reason why I had to do that. I could have flipped that. So in this particular example, after I choose to, say, set x1 equal to 1, I could decide that x3 is the next variable that I'd like to look at, and then x2.

And in this case I'm going to get a different tree. In this case, when x3 is a 0, then it doesn't matter what x2 is; the value of the function is a 0. And when x3 is a 1, it also does not matter what the value of x2 is; the value of the function is a 1. And so this is a different tree.
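You can check that collapse numerically. For this truth table (values 0,0,0,1, 0,1,0,1 down the rows), the collapse happens on the x1 = 1 branch: once x3 is decided there, x2 no longer affects the answer. This little check is my own illustration:

```python
# Check the variable-ordering collapse on the lecture's truth table.
rows = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
f = dict(zip(rows, [0, 0, 0, 1, 0, 1, 0, 1]))

# On the x1 = 1 branch, testing x3 next makes x2 irrelevant:
for x3 in (0, 1):
    vals = {f[(1, x2, x3)] for x2 in (0, 1)}
    print(f"x1=1, x3={x3}: f takes values {vals}")  # a single value each time
```

Both branches under x3 reach a constant leaf without ever looking at x2, which is exactly why the tree with ordering x1, x3, x2 comes out smaller on that side.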