The goal throughout this program, of course, has been to prepare you for the DP-203 Data Engineering on Microsoft Azure exam. You covered a range of topics and key concepts. Let's recap those now.

You looked at how the world of data is evolving, not least because of the increasing amounts of data, different data types, and new requirements for the processing of data. The development of new technologies, new and changing roles, and emerging approaches to working with data are affecting data professionals everywhere. Anyone who wants to work with data needs to understand the continuing change in the data landscape, and must realize how they can adapt to the roles and technologies that are evolving. As a data engineer, you should understand how these changes affect your career path and your daily working life. You learned to explain the difference between on-premises and cloud-based servers. You saw how the Data Engineer, Data Scientist, and Artificial Intelligence Engineer roles are developing in modern data projects. You also learned to describe Azure technologies that analyze text and images and relational, non-relational, or streaming data. You learned how to outline a high-level architecting process for a data engineering project, and how to choose Microsoft Azure technologies that meet different business needs and scale to meet demand securely.

You learned how to choose the right model for storing data in the cloud in different business scenarios. This can involve using Azure Storage, Azure SQL Database, and Azure Cosmos DB, or a combination of them. You also created an Azure Storage account with the correct options for your business needs. You created a simple application and added configuration, client library references, and code to connect your application to Azure Storage. You explored the use of access keys to secure networks and used Advanced Threat Protection to proactively monitor your system. You learned how to build an app that stores user files with Azure Blob Storage.

You also learned to describe the core components of Azure Data Factory that enable you to create large-scale data ingestion solutions in the cloud, and the various methods that can be used to ingest data between data stores using Azure Data Factory. You performed common data transformation and cleansing activities within Azure Data Factory without writing any code, and implemented slowly changing dimensions using Azure Data Factory or Azure Synapse pipelines. You learned how to use Azure Data Factory to orchestrate large-scale data movement by using other Azure Data Platform and machine learning technologies. You integrated SQL Server Integration Services packages into an Azure Data Factory solution and published your Azure Data Factory work between different environments. You learned how to ingest data from Azure Data Share into Azure Data Factory pipelines to build automated ingestion pipelines, and then reviewed the features and components that Azure Synapse Analytics offers to provide a one-stop shop for all your analytical needs. You worked through the various components of Azure Synapse Analytics that enable you to build your analytical solutions in one place. You reviewed the core Azure Synapse Studio application used to interact with the various components of Azure Synapse Analytics, and learned that it organizes itself into hubs, which allow you to perform a wide range of activities against your data.
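As an illustration of the application-to-storage connection recapped above, the pattern looks roughly like the following sketch, which uses the Python azure-storage-blob client library to upload a user file to Blob Storage. The environment variable, container name, and file name here are hypothetical placeholders, not values from the course.

```python
# Minimal sketch: connect a simple app to Azure Blob Storage and upload a user file.
# The connection string source, container name, and file name are hypothetical examples.
import os
from azure.storage.blob import BlobServiceClient

# Read the storage account connection string from app configuration
# (kept out of source code, in line with the access-key guidance above).
connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
service_client = BlobServiceClient.from_connection_string(connection_string)

# Create (or reuse) a container for user files.
container_client = service_client.get_container_client("user-files")
if not container_client.exists():
    container_client.create_container()

# Upload a local file as a blob, overwriting any existing blob with that name.
with open("report.pdf", "rb") as data:
    container_client.upload_blob(name="uploads/report.pdf", data=data, overwrite=True)
```

Pulling the connection string from configuration rather than embedding it in code keeps the access key out of source control, which ties in with the access-key and Advanced Threat Protection practices mentioned above.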
You learned how Azure Synapse Analytics enables you to build data warehouses using modern architecture patterns, and explained the common schemas implemented in a data warehouse. You learned about data loading and how to load data into a data warehouse in Azure Synapse Analytics. You also learned how to optimize query performance within Azure Synapse Analytics using various methods such as indexes, caching, and materialized views. You learned how to integrate SQL and Apache Spark pools and explored the language capabilities that are available to create a data warehouse in Azure Synapse Analytics. You learned about some of the features you can use to manage and monitor Azure Synapse Analytics, and analyzed information used to optimize a data warehouse. You also investigated the approach to use when implementing security to protect your data with Azure Synapse Analytics.

You learned how to differentiate between Apache Spark, Azure Databricks, HDInsight, and SQL pools. You used Apache Spark notebooks in Azure Synapse Analytics to ingest data and transformed complex data types by using DataFrames in Azure Synapse Studio. You learned how to integrate SQL and Apache Spark pools, monitor Spark pools in Azure Synapse Analytics using the Monitor hub, and manage data engineering workloads with Apache Spark. You learned how to configure and enable Azure Synapse Link to create a tight, seamless integration between Azure Cosmos DB and Azure Synapse Analytics, and explored how hybrid transactional and analytical processing can help you perform operational analytics. You queried Azure Cosmos DB with Apache Spark for Azure Synapse Analytics and performed a transactional store query using Azure Cosmos DB with SQL serverless.

You explored the capabilities of Azure Databricks and the Apache Spark notebook. You also looked at the Azure Databricks platform to identify the types of tasks well suited for Apache Spark, and the architecture of an Azure Databricks Spark cluster and Spark jobs. You discovered how to use Azure Databricks to perform reads, writes, and queries to prepare data for advanced analytics and machine learning operations, and to support day-to-day data handling functions. You performed data transformations in DataFrames and executed actions to display the transformed data, and explained the difference between a transformation and an action, lazy and eager evaluation, wide and narrow transformations, and other optimizations in Azure Databricks. You used the DataFrame Column class in Azure Databricks to apply column-level transformations such as sorts, filters, and aggregations, and used advanced DataFrame functions and operations to manipulate data, apply aggregates, and perform date and time operations in Azure Databricks. You learned about the Azure Databricks platform architecture and how it is secured. You used Azure Key Vault to store secrets used by Azure Databricks and other services, and accessed Azure Storage with Key Vault-based secrets. You learned how to use Delta Lake to create, append, and upsert data to Apache Spark tables, taking advantage of built-in reliability and optimizations, and how to navigate the Azure Databricks Delta Lake architecture. You used Azure Event Hubs together with structured streaming to process and analyze messages in real time, and created production workloads on Azure Databricks with Azure Data Factory. You learned how to put an Azure Databricks Notebook under version control in an Azure DevOps repository.
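To make the DataFrame and Delta Lake recap above concrete, here is a minimal PySpark sketch of column-level transformations followed by a Delta Lake upsert. It assumes a Spark session with Delta Lake support, such as an Azure Databricks cluster or a Synapse Spark pool with the Delta libraries available; the paths and column names are hypothetical.

```python
# Minimal sketch: column-level transformations and a Delta Lake upsert.
# Assumes Delta Lake support on the cluster; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Column-level transformations: filter, aggregate, and sort a DataFrame.
orders = spark.read.parquet("/mnt/raw/orders")            # hypothetical source path
daily_totals = (
    orders
    .filter(F.col("status") == "complete")                # narrow transformation
    .groupBy("order_date")                                 # wide transformation (shuffle)
    .agg(F.sum("amount").alias("total_amount"))
    .orderBy("order_date")
)
daily_totals.show(5)                                       # action: triggers the lazy plan

# Upsert (merge) the aggregates into a Delta table, creating it on first run.
target_path = "/mnt/curated/daily_totals"                  # hypothetical Delta table path
if DeltaTable.isDeltaTable(spark, target_path):
    (DeltaTable.forPath(spark, target_path).alias("t")
        .merge(daily_totals.alias("s"), "t.order_date = s.order_date")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    daily_totals.write.format("delta").save(target_path)
```

Note that the filter is a narrow transformation, the grouped aggregation is a wide transformation that requires a shuffle, and nothing executes until the show action runs, which reflects the lazy-versus-eager evaluation distinction covered in the Azure Databricks modules.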
By using Azure DevOps, you can then build deployment pipelines to manage your release process. You discovered how to integrate Azure Databricks and Azure Synapse Analytics as part of your data architecture, and learned best practices for workspace administration, security tools, integration, the Databricks runtime, high availability and disaster recovery, and clusters in Azure Databricks.

You learned that Azure Data Lake Storage Gen2 provides a cloud storage service that is available, secure, durable, scalable, and redundant, and that Azure Data Lake Storage brings new efficiencies to processing big data analytics workloads. You also discovered the various ways to upload data to Azure Data Lake Storage Gen2. You learned how Azure Storage provides multi-layered security to protect your data, and found out how to use access keys to secure networks and Advanced Threat Protection to proactively monitor your system. You learned about the concepts of event processing and streaming data and how they apply to Azure Stream Analytics, and set up a Stream Analytics job to stream data. Finally, you learned how to manage and monitor a running job.

Well done on getting this far. You are now ready to take the full practice exam. Good luck.