Is the quality or reliability of our assessment robust enough to make high-stakes decisions, and how could we secure that quality in order to make the right decisions? I think the quality of assessment is defined at various levels. The first level is that you should have a good assessment program within your curriculum. You should think about: what does the workplace ask of my students, and how should I design my curriculum accordingly? What do students bring when they come into the program, and what do you want them to know and be able to do when they go out of it? And how do I then put different assessment forms within that program? At another level, say the course or module level, you should align three things: the learning goals (what students should learn in a given phase, a given program, or a given course), the educational format and resources you provide students (the training, the lectures, or whatever), and the assessment. They should be aligned. Quite a few theories say they should be aligned; if they are not, you have a non-functioning program. And there is always one thing that happens then: students tend to look at the assessment, at the bottom line they need to meet, and they will do that, and the rest will not function properly. The lowest layer is that you can have this nice program and this nice alignment, but the devil is in the details. Within assessment, the weakest link defines the strength of the whole thing. You can have a very well-suited, very valid multiple-choice exam, but if you put in bad questions, it is still a poor exam, even though a multiple-choice exam might be a very valid choice. It can be very valid to have a project in your curriculum.
But if the task of the project is not well defined, if the judgment criteria are not well defined, if you do not think about how feedback is generated within the student group, or from the tutor to the students, and how it will be used, then it can still be a poor form of assessment for learning. So you should take care of the little details in everything you do in assessment. Those are the three main levels, I think. I can also easily demonstrate that most of our assessments in actual practice are simply unreliable, so we make many false-negative and false-positive decisions. And if we make such decisions often, which we normally do with hurdles, we stack those errors. What you then see is that your attrition becomes problematic. That is why it is important either not to take a decision at every hurdle or, if you do take decisions, to allow a bit of compensation across different methods of assessment.

So should teachers be trained to make that professional judgment? Definitely, definitely. To be more specific, teachers should be trained in how to give feedback, and learners should be trained in how to give and receive it. Giving and receiving feedback, to me, is a learning outcome. Fortunately, with a multiple-choice exam, students do not fail because of the one question that was poorly designed; they fail because they answered incorrectly on the other questions, which were well designed. So yes, it is fortunate that a multiple-choice exam has not one question but forty or fifty; that is the reason to put in fifty questions, for example. The same is true for, say, year one of a curriculum: if a student fails exam one because it was poorly designed, fortunately there are exams two, three, and four, so they compensate for one another. And that is needed because each exam is a compromise, as I already said, in time and money, but it is also error-prone.
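The error-stacking point above can be sketched numerically. This is an illustration of my own (the 5% false-negative rate is an assumed figure, not one from the interview): even a modest per-exam error rate compounds quickly when every hurdle can end a student's progress.

```python
# Hypothetical illustration: suppose each hurdle exam wrongly fails a truly
# competent student 5% of the time (a false negative). If hurdles are
# independent and each one alone can stop the student, the chance of at
# least one wrongful fail grows with every hurdle added.
def p_at_least_one_false_negative(false_negative_rate, n_hurdles):
    """Probability a competent student fails at least one of n hurdles."""
    return 1 - (1 - false_negative_rate) ** n_hurdles

for n in (1, 5, 10):
    p = p_at_least_one_false_negative(0.05, n)
    print(f"{n:2d} hurdles -> {p:.1%} chance of at least one false fail")
# -> 5.0%, 22.6%, and 40.1% respectively
```

The same arithmetic, run in reverse, is why fifty questions rescue an exam from one bad item: a single flawed question carries only a fiftieth of the decision.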
Not a single measurement instrument is without error, and our instruments actually have a lot of error, so we should expect it to be there.

And yet a lot of programs don't allow compensation across their assessments? Yes, that is true, and people feel strongly about this. They say, well, we cannot work with compensation; it has to be conjunctive in this case. But for the longer run, for being able to perform well in the workplace, it doesn't matter. And if you have a non-compensating curriculum, that is actually very sad for your students and for yourself, because you weed out too many students who would eventually be very well suited for their domain. Making sure that you don't compensate apples with oranges is very well doable, but it feels counter-intuitive to do it. I am actually in favor of a concept called programmatic assessment. In programmatic assessment, we see every individual assessment as one data point. I can easily demonstrate that any individual assessment has a number of poor qualities; it will never be perfect, it will be far from perfect. Therefore, in programmatic assessment we take one assessment as only one data point, and that data point is optimized for its learning function to the student, so we provide feedback.

And then you take the decision making away from the individual data point, right? Yes. Because we actually say, and I can prove it, that this individual data point, this information, is not sufficient to take a decision. So let's remove the decision and focus entirely on providing meaningful information to the learner. It shouldn't be a hurdle; it should be a moment of feedback. And I call that low stakes, right? Yeah, yeah. Then, in an education program, you have a number of these data points, and naturally at some point in time decisions should be made about progress.
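The compensatory-versus-conjunctive contrast can be made concrete with a toy simulation. All numbers here are my own assumptions for illustration (six exams, a cutoff of 5.5, competent students with true ability 6.0, measurement noise of one point), not figures from the interview:

```python
import random

# Toy simulation: a "conjunctive" rule requires passing every exam, while a
# "compensatory" rule requires only that the average clears the cutoff.
# We measure how often each rule wrongly rejects a truly competent student.
random.seed(42)

N_EXAMS, CUTOFF, NOISE = 6, 5.5, 1.0  # assumed values

def false_negative_rate(rule, trials=20_000):
    """Share of truly competent students (ability 6.0) rejected by the rule."""
    rejected = 0
    for _ in range(trials):
        scores = [random.gauss(6.0, NOISE) for _ in range(N_EXAMS)]
        if not rule(scores):
            rejected += 1
    return rejected / trials

conjunctive = lambda scores: all(s >= CUTOFF for s in scores)
compensatory = lambda scores: sum(scores) / len(scores) >= CUTOFF

print(f"conjunctive rejects  {false_negative_rate(conjunctive):.1%}")
print(f"compensatory rejects {false_negative_rate(compensatory):.1%}")
```

With these assumed numbers the conjunctive rule rejects the large majority of competent students, while the compensatory rule rejects only a small fraction, which is the "checking out well-suited students" effect described above.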
But the heaviness, the stakes, of the decision really depends on the number of data points you have. If you have a lot of data points, and if the emerging picture is clear, then you can take decisions, but not before that. So in programmatic assessment you have a lot of different data points and a lot of different methods of assessment as well; you want a lot of variability, because you want your learners to defend themselves orally here, to write there, to do a project elsewhere. You want variability in the format. All of that will have feedback attached to it, and ultimately it will also be used for decision making. I think you can have a very nicely designed program of assessment in your curriculum, but if the practical execution is bad, the whole thing still fails. So we should put more effort into carefully designing assessment tasks: carefully designing multiple-choice and open-ended questions, carefully designing project tasks, carefully designing judgment schemes. There we could improve a lot.

So the quality of the instrument? The quality of the instruments can be improved quite heavily, I think. But what we currently see in higher education is these teaching qualifications, with assessment as part of them, so I think we are moving in a better direction and will slowly have more experts. There is even a Master's program on assessment now, where you spend a year full-time developing your assessment expertise. So slowly, I think, things are changing, and in the end every teacher will be [inaudible]. I am hopeful of it. Because in my view, assessment and teaching should be merged as much as possible; they should not be distinct from each other, because that is asking for trouble.
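The claim that decisions become defensible only once enough data points accumulate follows directly from classical measurement: if each assessment observes true ability with independent noise, averaging n data points shrinks the uncertainty by a factor of the square root of n. The noise value below is an assumed figure for illustration, not one from the interview:

```python
import math

SIGMA = 1.2  # assumed noise (standard error) of a single assessment

def standard_error(n_data_points, sigma=SIGMA):
    """Uncertainty of the aggregated (mean) judgment over n data points,
    assuming independent noise: sigma / sqrt(n)."""
    return sigma / math.sqrt(n_data_points)

for n in (1, 4, 16):
    print(f"{n:2d} data points -> standard error {standard_error(n):.2f}")
# -> 1.20, 0.60, 0.30: sixteen low-stakes data points support a judgment
#    four times sharper than any single exam
```

This is the statistical core of programmatic assessment: each data point stays low stakes, and only the aggregate carries the high-stakes decision.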