Welcome to Lesson 1 of Module 11 on Normalization Concepts and Practice. I'm going to start with an important motivational question about eliminating unwanted redundancy through the normalization process: which processing environment has more motivation to avoid modification anomalies?

Lesson 1 begins your learning about normalization concepts and practice. Before getting immersed in details, you should clearly understand the motivation for eliminating unwanted redundancy in a table design. This lesson covers modification anomalies, the motivation for removing unwanted redundancy. The objectives in this lesson involve obtaining a solid grounding in the motivation for avoiding unwanted redundancies in a table design. You should be able to define modification anomaly and provide examples of the three types of modification anomalies: insert, update, and delete. With this background, you should understand the processing orientation that is most sensitive to modification anomalies.

A good database design ensures that users can change rows of a database without unexpected side effects. For example, in a university database, a user should be able to insert a new course without having to simultaneously insert a new offering of the course and a new student enrolled in the course. Likewise, when a student is deleted from the database due to graduation, course data should not be inadvertently lost. These types of problems are examples of modification anomalies: unexpected side effects that occur when changing the rows of a table with excessive redundancies. A good database design avoids modification anomalies by eliminating excessive redundancies. To avoid redundancy, a database design should support one fact in one place. The details of the normalization process will provide clarity to this goal.

To understand more precisely the impact of modification anomalies, let us consider a poorly designed database. Imagine that a university database consists of a single table, with a primary key that is the combination of student number and offer number. This big university database table contains columns about students, offerings, courses, and enrollments. The sample rows depict the meaning of the table. For example, the first row indicates that junior student S1 has achieved a grade of 3.5 in offering O1 of database course C1 in 2013. That sentence is a mouthful, which is partial evidence of a poor design. For brevity, some of the columns have been removed; otherwise, the sentence documenting a new row would have been even more convoluted. This single-table design is not as extreme as it may seem. Users without proper database training often design a database using a single table.

This poor design makes it easy to identify modification anomalies. Let's review examples of the three types of modification anomalies for insert, update, and delete operations. In these examples, you should remember that the combination of student number and offer number is the primary key of the big university database table.

An insertion anomaly occurs when extra column values beyond the target values must be provided in an insert statement. For example, to insert a new course C4, it is necessary to know a student and an offering enrolled in the course, because the combination of student number and offer number is the primary key. Remember that a row cannot exist with a null value in any part of its primary key.
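To make the single-table design and the insertion anomaly concrete, here is a minimal SQL sketch. The table and column names (BigUniversity, StdNo, OfferNo, CourseNo, and so on) and the inserted values are assumptions for illustration; the lesson does not prescribe exact names.

    -- Hypothetical single-table design: one row per (student, offering) enrollment,
    -- with student, offering, course, and enrollment facts all stored together.
    CREATE TABLE BigUniversity (
        StdNo     CHAR(6)      NOT NULL,   -- student number, e.g., 'S1'
        StdClass  VARCHAR(10),             -- e.g., 'JUNIOR'
        OfferNo   CHAR(6)      NOT NULL,   -- offering number, e.g., 'O1'
        OffYear   INTEGER,                 -- e.g., 2013
        CourseNo  CHAR(6),                 -- e.g., 'C1'
        CrsDesc   VARCHAR(50),             -- course description
        EnrGrade  DECIMAL(3,1),            -- e.g., 3.5
        PRIMARY KEY (StdNo, OfferNo)       -- composite primary key
    );

    -- Insertion anomaly: to record only the fact that course C4 exists,
    -- a student (StdNo) and an offering (OfferNo) must also be supplied,
    -- because no part of the primary key may be null.
    INSERT INTO BigUniversity (StdNo, OfferNo, OffYear, CourseNo, CrsDesc)
    VALUES ('S5', 'O5', 2013, 'C4', 'New course description');

Notice that the insert cannot record the new course by itself: because StdNo and OfferNo form the primary key, a student and an offering must be supplied just to store a single course fact.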
An update anomaly occurs when it is necessary to change multiple rows to modify only a single fact. For example, if we change the course description of course C2, two rows must be changed. If C2 had an enrollment of 10 students, 10 rows must be changed.

A deletion anomaly occurs whenever deleting a row inadvertently causes other data to be deleted. For example, if we delete the first row, with student S1 enrolled in offering O1, we lose information about offering O1 and course C1.

To deal with these anomalies, users may work around them, such as by using a default primary key value to insert a new course. Alternatively, database analysts may write code to prevent inadvertent loss of data. A better solution is to modify the table design to remove the redundancies that cause the anomalies.

Let's wrap up Lesson 1 about modification anomalies. A modification anomaly is an unexpected side effect from trying to insert, update, or delete a row. Essentially, more data must be provided to accomplish an operation than would be expected. Avoidance of modification anomalies is the motivation for the normalization process. This lesson provided examples of the three types of modification anomalies for insert, update, and delete operations.

In answer to the opening question, transaction processing is more sensitive to modification anomalies than business intelligence processing. To understand this point, let us consider two alternative designs for the university database: the single-table design presented in this lesson, and the five-table design used in past modules. You can think of the single-table database as a natural join of the five-table design. The single-table solution is easier to query because join operations are not necessary. However, modification anomalies make transaction processing more complex to code and more resource intensive. Business intelligence processing, dominated by reporting, may prefer a single-table design, or at least fewer tables. Transaction processing strongly prefers the larger five-table design.

After this background on the motivation for normalization, you are ready to learn the details in the remaining lessons of Module 11. You should not lose sight of this motivation as you master the normalization details in those lessons.
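As a concrete recap of the update and delete anomaly examples from this lesson, here is a minimal SQL sketch against the hypothetical BigUniversity table sketched earlier; the specific statements and values are assumptions for illustration only.

    -- Update anomaly: changing one fact (the description of course C2)
    -- requires changing every enrollment row that repeats that fact.
    UPDATE BigUniversity
    SET CrsDesc = 'Intermediate Database Design'   -- illustrative new description
    WHERE CourseNo = 'C2';    -- 2 rows in the sample data; 10 rows with 10 enrollments

    -- Deletion anomaly: removing the only enrollment of student S1 in
    -- offering O1 also discards the facts about offering O1 and course C1.
    DELETE FROM BigUniversity
    WHERE StdNo = 'S1' AND OfferNo = 'O1';

In the normalized five-table design, the course description lives in a single row of a course table, so the same change touches exactly one row, and deleting an enrollment does not discard offering or course facts.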