Covers various topics in Data Engineering in support of decision support systems, data analytics, data mining, machine learning, and artificial intelligence. Studies on-premises data warehouse architecture, dimensional modeling of data warehouses, Extract-Transform-Load (ETL) integration from source systems to data warehouse, On-line Analytical Processing (OLAP) systems, and the evolving world of data quality and data governance. Offers students an opportunity to design, develop and maintain cloud-based data pipelines. Both on-premises and cloud-based platforms will be used to illustrate and implement Data Engineering techniques using operational and analytical data warehouses.



August 2025
9 assignments
There are 6 modules in this course
In this module, you'll learn about ETL (Extract, Transform, Load) processes, an essential part of Data Warehousing and Data Integration solutions. ETL processes can be complex and costly, but effective design and modeling can significantly reduce development and maintenance costs. You'll be introduced to the basics of Business Process Modeling Notation (BPMN), including key components such as flow objects, gateways, events, and artifacts, which are essential for modeling business processes. You will then explore how BPMN can be adapted for conceptual modeling of ETL tasks, with a particular focus on differentiating control tasks from data tasks: control tasks manage the orchestration of ETL processes, while data tasks handle data manipulation, and both are critical in conceptualizing ETL workflows. By the end of this module, you'll have a solid understanding of how to design ETL processes using BPMN, enabling greater flexibility and adaptability across various tools.
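The control-task/data-task split described above can be sketched in plain Python. This is a minimal illustration, not course material: the function names, source rows, and error handling are all hypothetical.

```python
def extract():
    # data task: pull raw rows from a source system (stubbed here)
    return [{"id": 1, "amount": "10.50"}, {"id": 2, "amount": "7.25"}]

def transform(rows):
    # data task: cast amounts from text to numbers
    return [{**r, "amount": float(r["amount"])} for r in rows]

def load(rows):
    # data task: write to the warehouse (stubbed: report rows loaded)
    return len(rows)

def run_pipeline():
    # control task: sequence the data tasks and route failures,
    # much like a BPMN sequence flow with an error event
    rows = extract()
    try:
        rows = transform(rows)
    except (KeyError, ValueError):
        return 0  # exception path: nothing loaded
    return load(rows)
```

The point of the sketch is the separation of concerns: the three data tasks know nothing about ordering or error routing, which is exactly what the control layer owns in a BPMN-style ETL model.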
What's included
2 videos8 readings2 assignments
In this module, you will dive into Talend Studio, a powerful Eclipse-based data integration platform that transforms complex ETL operations into intuitive visual workflows. By exploring Talend's drag-and-drop interface, you will learn to navigate the core components of the platform. You'll master fundamental ETL operations by studying essential components like tMap for complex data transformations and joins, tJoin for straightforward data linking, and various input/output components for connecting to databases, files, and APIs. By the end of the module, you will understand how Talend automatically generates executable Java code from visual designs, enabling you to create scalable, production-ready data integration solutions that can handle both batch processing and real-time data scenarios across diverse technological environments.
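Conceptually, a tJoin-style inner join links a main flow to a lookup flow on a key. A rough Python equivalent of that behavior is shown below; note that Talend itself generates Java, and the row values here are invented for illustration.

```python
def inner_join(main_rows, lookup_rows, key):
    # build a lookup table keyed on the join column
    lookup = {r[key]: r for r in lookup_rows}
    # keep only matched rows, merging fields from both flows
    return [{**m, **lookup[m[key]]} for m in main_rows if m[key] in lookup]

orders = [{"cust_id": 1, "total": 40}, {"cust_id": 3, "total": 12}]
customers = [{"cust_id": 1, "name": "Ada"}, {"cust_id": 2, "name": "Bo"}]
joined = inner_join(orders, customers, "cust_id")
# cust_id 3 has no matching customer, so only customer 1's order survives
```

A tMap component generalizes this idea: multiple lookup flows, inner or left-outer match modes, and per-column transformation expressions on the merged output.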
What's included
3 readings1 assignment
In this module, we transition from on-premises Data Warehousing to Data Engineering. While Data Engineering has its roots in Data Warehousing, it encompasses much more. We’ll explore the key enablers of this evolution, specifically cloud computing and DevOps. You will learn about the benefits of cloud development, including enhanced scalability, cost efficiency, and flexibility in data operations. We will also dive into how traditional IT infrastructure components—such as security, networking, and compute resources—are redefined in cloud environments using AWS. Additionally, you'll gain an understanding of DevOps in the cloud, focusing on the use of virtual machines and containers to streamline continuous integration and deployment. We will cover key DevOps practices like Infrastructure as Code (IaC), CI/CD pipelines, and automated testing, emphasizing their role in ensuring consistency, faster development cycles, and secure applications. You will then explore what data engineering entails and the skills required to become a data engineer. Finally, we’ll introduce the concept of the data engineering lifecycle and its different phases, focusing on the first two: Data Generation and Storage.
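The core Infrastructure-as-Code idea mentioned above can be reduced to a small sketch: declare desired state, then converge toward it idempotently. The resource fields below are made up for illustration and do not correspond to any real provider.

```python
def reconcile(current, desired):
    # compute only the fields that differ, then apply them; running this
    # twice makes no further changes, which is the idempotence IaC relies on
    changes = {k: v for k, v in desired.items() if current.get(k) != v}
    current.update(changes)
    return changes

state = {"bucket": "analytics-raw", "versioning": False}
spec = {"versioning": True, "encryption": "aws:kms"}
first_run = reconcile(state, spec)   # applies two changes
second_run = reconcile(state, spec)  # no-op: state already matches spec
```

Real IaC tools (e.g. CloudFormation or Terraform) apply this same declare-diff-apply loop to actual cloud resources, which is what makes environments reproducible across CI/CD runs.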
What's included
1 video12 readings2 assignments
In this module, we will explore the next two phases of the data engineering lifecycle: Ingestion and Transformation. Data ingestion refers to the process of moving data from source systems into storage, making it available for processing and analysis. As you delve into the reading, you will examine key ingestion patterns, including batch versus streaming ingestion, synchronous versus asynchronous methods, and push, pull, and hybrid approaches. You’ll also explore essential engineering considerations such as scalability, reliability, and data quality management, along with the challenges posed by schema changes. The reading will introduce various technologies that enable data ingestion, such as JDBC/ODBC, Change Data Capture (CDC), APIs, and event-streaming platforms like Kafka. We then shift focus to the transformation phase of the lifecycle, exploring different types of transformations that integrate complex business logic into data pipelines. At the end of the module, we will focus on data architecture and implementing good architecture principles to build scalable and reliable data pipelines.
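As one illustrative example of those engineering considerations, a pull-based batch ingest can quarantine rows whose schema has drifted rather than loading them. The expected schema and sample records below are invented for the sketch.

```python
EXPECTED_FIELDS = {"id", "event", "ts"}

def ingest_batch(fetch):
    # pull pattern: the pipeline calls the source on its own schedule;
    # a push pattern would have the source deliver records (e.g. via Kafka)
    accepted, quarantined = [], []
    for row in fetch():
        if set(row) == EXPECTED_FIELDS:
            accepted.append(row)
        else:
            quarantined.append(row)  # schema drift: route aside, don't load
    return accepted, quarantined

def fake_source():
    return [{"id": 1, "event": "click", "ts": 100},
            {"id": 2, "event": "view"}]  # missing "ts": schema has changed
```

Quarantining instead of failing the whole batch is one common reliability trade-off; the right choice depends on how downstream consumers tolerate missing versus malformed data.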
What's included
4 videos12 readings2 assignments2 app items
In this module, we will explore data characteristics and how they drive infrastructure decisions. In today’s data-driven world, understanding the properties of your data is essential for designing robust data pipelines. We’ll go over key characteristics like volume, which refers to the size of datasets, and velocity, which concerns how frequently new data is generated. We’ll also take a look at variety, which focuses on data formats and sources, and veracity, which emphasizes data accuracy and trustworthiness. The ultimate goal is to uncover value from data through insightful analysis. As we delve into pipeline design, you'll learn how these characteristics influence key decisions, such as the choice of storage, processing, and analytics tools. We will also cover essential AWS services like Amazon S3, Glue, and Athena, exploring how they support scalable and flexible data engineering. By the end of this module, you’ll have a comprehensive understanding of how to build effective data solutions to meet both technical and business needs.
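One way to see how those characteristics drive tool choice is a toy decision function. The thresholds and stack labels below are illustrative assumptions, not AWS guidance.

```python
def suggest_stack(volume_gb, velocity, variety):
    # velocity first: frequently arriving data needs a streaming path
    if velocity == "streaming":
        return "stream platform -> S3 landing zone"
    # high volume or semi-structured variety favors a data lake pattern
    if volume_gb > 500 or variety == "semi-structured":
        return "S3 + Glue catalog + Athena"
    # modest, structured, batch data fits a relational warehouse
    return "relational warehouse"
```

The real design process weighs many more factors (veracity checks, cost, team skills), but the shape is the same: characterize the data first, then let those properties constrain the storage and query tooling.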
What's included
6 readings1 assignment
Welcome to the final stage of the data engineering lifecycle: serving data. In this module, we will focus on how to effectively serve data for analytics, machine learning (ML), and reverse ETL to ensure that the data products you design are reliable, actionable, and trusted by stakeholders. Key topics include setting SLAs, identifying use cases, evolving data products with feedback, standardizing data definitions, and exploring delivery methods such as file exchanges, databases, and streaming systems. We’ll also cover the use of reverse ETL to improve business processes and discuss the importance of context for choosing the best visualization type and tools. We then delve into KPIs and metrics and how to classify them, including how to identify robust KPIs based on the business context. Finally, we will focus on creating intuitive dashboards by choosing the right analysis, visualizations, and metrics to showcase based on the business context and audience involved. By the end of this module, you will understand how to design and serve data solutions that drive meaningful action and are trusted by end users.
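Reverse ETL, in miniature, means computing a warehouse-side metric and pushing it back into an operational tool. In this sketch the metric, the CRM callback, and all field names are hypothetical.

```python
def lifetime_value(orders):
    # warehouse-side aggregate: total spend per customer
    totals = {}
    for o in orders:
        totals[o["cust_id"]] = totals.get(o["cust_id"], 0) + o["total"]
    return totals

def sync_to_crm(totals, crm_update):
    # reverse ETL step: write the computed metric back into the
    # operational system so business teams can act on it directly
    for cust_id, value in totals.items():
        crm_update(cust_id, {"lifetime_value": value})
    return len(totals)

orders = [{"cust_id": 1, "total": 40}, {"cust_id": 1, "total": 10}]
crm = {}
synced = sync_to_crm(lifetime_value(orders),
                     lambda c, fields: crm.setdefault(c, {}).update(fields))
```

Production reverse ETL adds the concerns this module covers around serving generally: SLAs on freshness, standardized metric definitions, and delivery via APIs or streaming rather than an in-memory callback.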
What's included
11 readings1 assignment
Frequently asked questions
To access the course materials and assignments and to earn a Certificate, you will need to purchase the Certificate experience when you enroll in the course. You can try a Free Trial instead, or apply for Financial Aid. The course may also offer a 'Full Course, No Certificate' option, which lets you see all course materials, submit required assessments, and get a final grade, but does not include a Certificate.
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile.
You will be eligible for a full refund until two weeks after your payment date, or (for courses that have just launched) until two weeks after the first session of the course begins, whichever is later. You cannot receive a refund once you’ve earned a Course Certificate, even if you complete the course within the two-week refund period. See our full refund policy.