So welcome to the summary of module four. So Jimmy, what's happening in the story? >> Well, quite a bit happened. Cao Cao was getting stronger. At the same time, he was suspicious that Liu Bei was trying to compete with him. So they had the famous dinner together, warming wine and discussing the heroes of the age. And of course, Emperor Xian also found Cao Cao to be quite arrogant, so he sent a secret order to Liu Bei to have Cao Cao removed. Liu Bei, of course, understood that he was still under Cao Cao's power, so he was trying his best to gain Cao Cao's trust as well. We also saw for the first time the appearance of the famous doctor, Hua Tuo, who used his acupuncture expertise to heal Guan Yu of his battle wounds. >> So technically, what we've looked at are permutation and matching problems. These are a very important class of discrete optimization problems, and the key is that for each of these kinds of problems, we have this idea of multiple modeling and multiple viewpoints. >> The idea of a viewpoint is very important. When we look at a problem from a particular angle, we can decide on a set of decision variables, and based on those decision variables we can write down the restrictions or rules of the problem as constraints. When we look at the problem from a different viewpoint, we will have different decision variables, and we will express our restrictions and conditions with different constraints. And with different viewpoints and different models, one thing we can do is channel them together to form a combined model, and that's where we also introduced the global constraint, inverse. >> Yes, so the inverse global constraint is exactly a way of looking at a bijective function: you can think of a bijective function as a mapping from the domain to the co-domain, but also as a mapping from the co-domain back to the domain.
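The semantics of the inverse constraint described above can be sketched in plain Python (the course itself uses MiniZinc's `inverse` global constraint; this is just an illustrative check of the property it enforces, with hypothetical names):

```python
# A Python sketch of the property the `inverse` global constraint enforces:
# two arrays, each a viewpoint on the same bijection, must be inverses.

def satisfies_inverse(f, g):
    """True iff g[f[i]] == i and f[g[j]] == j for all indices 0..n-1."""
    n = len(f)
    if len(g) != n:
        return False
    return (all(0 <= f[i] < n and g[f[i]] == i for i in range(n)) and
            all(0 <= g[j] < n and f[g[j]] == j for j in range(n)))

# Viewpoint 1: position -> item.  Viewpoint 2: item -> position.
pos_to_item = [2, 0, 3, 1]
item_to_pos = [1, 3, 0, 2]
print(satisfies_inverse(pos_to_item, item_to_pos))   # True: the viewpoints agree
print(satisfies_inverse(pos_to_item, [0, 1, 2, 3]))  # False: they disagree
```

In a real model the two arrays are decision variables and the constraint prunes assignments where the viewpoints disagree; here we only verify the relationship on fixed values.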
And so the inverse constraint says exactly that these two functions, these two viewpoints of the function, are inverses of each other. >> So essentially, the inverse constraint makes the two models agree with each other. >> Absolutely. Whenever we're doing multiple modeling, where we have two viewpoints of the same discrete optimization problem, the decisions have to be made to agree, and that's what this notion of channeling constraints is about. >> Okay, we've been talking about combining models, but why do we want to do it in the first place? >> Well, one of the reasons is that some constraints will be very easy to express from one viewpoint and very difficult to express from the other. So if we have multiple models, we can choose the viewpoint that makes a constraint easiest to express, and express that constraint only in that viewpoint. That can make it much easier to write the model, and because the constraints are written in a simpler way, it's typically also easier for the solver to solve, so the model becomes more efficient. >> Actually, we have also seen an example in which it is impossible to express certain constraints in a particular viewpoint. >> Right, and in that case, we really have to have the other viewpoint in order to express the constraint at all. >> Right, okay. Towards the end of the module, we also saw a very interesting example of channeling where we are not channeling viewpoints. What are we channeling? >> So previously, we've seen that we can represent a set in multiple ways, and we can channel between those representations as well. If we're making use of two representations of a set and we want them to agree, then the channeling is just the constraints that force those two representations of the set, rather than two viewpoints of the model, to agree. >> Okay, let's talk about applications, Peter.
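Channeling between two representations of a set, as discussed above, can also be sketched in Python. Here the two representations are an explicit collection of elements and an array of boolean indicator flags (the names are illustrative, not from the course):

```python
# Channeling between two representations of the same set over a
# universe 0..n-1: an explicit set of elements, and an indicator
# array where flags[i] is True iff element i is in the set.

def channels(elements, flags):
    """True iff flags[i] == (i in elements) for every i in the universe."""
    return all(flags[i] == (i in elements) for i in range(len(flags)))

chosen = {0, 2, 3}
flags = [True, False, True, True, False]
print(channels(chosen, flags))                            # True: they agree
print(channels(chosen, [True, True, True, True, False]))  # False: they disagree
```

In a constraint model, both representations would be decision variables, and the channeling constraints let each representation's propagation strengthen the other.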
>> Yeah, so we've seen some traveling salesman and routing style applications in this module, which is a very classic use of discrete optimization. As usual, if you just want to solve a pure traveling salesman problem, you could use specific technology for that, like Concorde, which is a very powerful algorithm for exactly that problem. But the real world is not full of pure traveling salesman problems. Real problems often have side constraints, and then we may need to look at more generic technologies like the ones we're using in this course. >> Right. Actually, the belt problem that we saw in the lectures is a variant of the famous Langford's problem, which finds applications in circuit design and other areas as well. >> All right, now the workshops and assignments. >> Okay, my understanding is that we're going to introduce yet another important person, the future wife of Liu Bei. >> Absolutely. In the workshop, Liu Bei's future wife is going to set a task for him, which is a musical composition. This is an example of a permutation problem, and we'll see two different viewpoints of that problem that can be used, and we'll look at which is the best way of modeling it. >> And of course, either one of the models would be able to solve the problem entirely, but by combining them we can make things much more efficient. >> We hope so. In assignment four, we're going to look at a matching problem: matching horses with riders. Again, this problem has two viewpoints, and we'll see some complex constraints and need to figure out which viewpoint we should use for each of those constraints. It's a challenging assignment that brings us to the end of course one, and I hope you find it interesting. >> Of course, in course number one you have learned many basic modeling concepts.
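To make Langford's problem, mentioned above, concrete: it asks us to arrange two copies of each number 1..n in a row so that exactly k other numbers sit between the two copies of k. A tiny brute-force Python sketch (a real course model would use a constraint solver, not enumeration) finds all arrangements for small n:

```python
from itertools import permutations

def langford_solutions(n):
    """Enumerate Langford arrangements of two copies of 1..n (brute force).

    The two copies of k must be exactly k + 1 positions apart, so that
    k numbers lie between them.
    """
    solutions = set()
    for perm in set(permutations(list(range(1, n + 1)) * 2)):
        if all(perm.index(k, perm.index(k) + 1) - perm.index(k) == k + 1
               for k in range(1, n + 1)):
            solutions.add(perm)
    return solutions

print(sorted(langford_solutions(3)))
# n = 3 has exactly two arrangements, each the reverse of the other:
# (2, 3, 1, 2, 1, 3) and (3, 1, 2, 1, 3, 2)
```

Brute force is hopeless beyond tiny n, which is exactly why the permutation viewpoints and global constraints from this module matter.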
We have also given you enough tools to be able to solve some discrete optimization problems, but that is not enough. >> That's right, we highly encourage you to move on to course two in the specialization, which will introduce many more discrete optimization techniques, global constraints, and things like that. >> In its first module we will talk about debugging, because as in computer programming, bugs happen all the time, and of course we have bugs in our models as well. >> Yes, that's a critical skill that will really improve what you can get out of modeling discrete optimization problems. We'll then move on to predicates. A predicate is a concept that allows us to encapsulate part of our model under a name and reuse it multiple times in the same model or in different models. >> That's actually very similar to procedures and functions in conventional programming languages. >> Absolutely, a predicate is equivalent to a procedure in a regular language. >> And that makes our models much more concise and easier to understand. >> Absolutely. After that, we'll look at the really important application areas of scheduling and packing. These are very challenging discrete optimization problems, and we'll look at the global constraints we can use to describe them effectively. >> Towards the end, we'll also look at symmetries in more detail, and at various ways of removing symmetries to increase solving efficiency. >> Yes, symmetry is a bugbear of any kind of discrete optimization, and being able to handle symmetries effectively is important, as is recognizing them in the first place. >> Okay, we hope to see you in course two. >> Yes. >> Goodbye. >> Goodbye.