Northeastern University
Applied Natural Language Processing in Engineering Part 2

Instructor: Ramin Mohammadi

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
3 weeks to complete at 10 hours a week
Flexible schedule: learn at your own pace

Details to know

Shareable certificate: add to your LinkedIn profile
Recently updated: October 2025
Assessments: 21 assignments
Taught in English


There are 7 modules in this course

This module delves into the critical preprocessing step of tokenization in NLP, where text is segmented into smaller units called tokens. You will explore various tokenization techniques, including character-based, word-level, Byte Pair Encoding (BPE), WordPiece, and Unigram tokenization. Then you’ll examine the importance of normalization and pre-tokenization processes to ensure text uniformity and improve tokenization accuracy. Through practical examples and hands-on exercises, you will learn to handle out-of-vocabulary (OOV) issues, manage large vocabularies efficiently, and understand the computational complexities involved. By the end of the module, you will be equipped with the knowledge to implement and optimize tokenization methods for diverse NLP applications.
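
To make the subword idea concrete, here is a minimal sketch of the BPE training loop in plain Python. The toy corpus, end-of-word marker, and five-merge budget are illustrative assumptions, not the course's exact materials.

```python
from collections import Counter

def get_pair_counts(corpus):
    """Count adjacent symbol pairs across all words, weighted by word frequency."""
    pairs = Counter()
    for word, freq in corpus.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

def merge_pair(pair, corpus):
    """Fuse the chosen pair into a single new symbol everywhere it occurs."""
    old, new = " ".join(pair), "".join(pair)
    # str.replace is a simplification; production tokenizers match symbol boundaries.
    return {word.replace(old, new): freq for word, freq in corpus.items()}

# Words represented as space-separated characters with an end-of-word marker.
corpus = {"l o w </w>": 5, "l o w e r </w>": 2,
          "n e w e s t </w>": 6, "w i d e s t </w>": 3}

for step in range(5):                     # five merges, for illustration
    pairs = get_pair_counts(corpus)
    best = max(pairs, key=pairs.get)      # most frequent adjacent pair
    corpus = merge_pair(best, corpus)
    print(f"merge {step + 1}: {best}")
```

Each iteration promotes the most frequent adjacent pair to a new vocabulary symbol, which is how BPE grows subwords from characters and sidesteps most OOV failures.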

What's included

1 video · 13 readings · 2 assignments · 1 app item

In this module, we will explore foundational models in natural language processing (NLP), focusing on language models, feedforward neural networks (FFNNs), and Hidden Markov Models (HMMs). Language models are crucial for predicting and generating sequences of text, assigning probabilities to words or phrases within a sentence and enabling applications such as autocomplete and text generation. FFNNs, though limited to fixed-size contexts, are foundational neural architectures used in language modeling, learning complex word relationships through non-linear transformations. In contrast, HMMs model sequences based on hidden states that influence observable outcomes, making them particularly useful in tasks like part-of-speech tagging and speech recognition. As the module progresses, we will also examine modern advancements like neural transition-based parsing and the evolution of language models into sophisticated architectures such as transformers and large-scale pre-trained models like BERT and GPT. This module provides a comprehensive view of how language modeling has developed from statistical methods to cutting-edge neural architectures.
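
As a concrete illustration of how a language model assigns probabilities to word sequences, here is a minimal bigram sketch using maximum-likelihood counts; the three toy sentences are invented, and a real model would add smoothing or a neural estimator.

```python
from collections import Counter

sentences = [["<s>", "the", "cat", "sat", "</s>"],
             ["<s>", "the", "dog", "sat", "</s>"],
             ["<s>", "the", "cat", "ran", "</s>"]]

context_counts = Counter()
bigram_counts = Counter()
for sent in sentences:
    context_counts.update(sent[:-1])            # every token that can start a bigram
    bigram_counts.update(zip(sent, sent[1:]))   # adjacent token pairs

def bigram_prob(prev, word):
    """Maximum-likelihood estimate of P(word | prev)."""
    total = context_counts[prev]
    return bigram_counts[(prev, word)] / total if total else 0.0

print(bigram_prob("the", "cat"))  # 2/3: "cat" follows "the" in 2 of 3 sentences
print(bigram_prob("cat", "sat"))  # 1/2
</code is counting only; autocomplete would pick the argmax next word per context
```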

What's included

2 videos · 19 readings · 4 assignments

In this module, we will explore Recurrent Neural Networks (RNNs), a fundamental architecture in deep learning designed for sequential data. RNNs are particularly well-suited for tasks where the order of inputs matters, such as time series prediction, language modeling, and speech recognition. Unlike traditional neural networks, RNNs have connections that allow them to “remember” information from previous steps by sharing parameters across time steps. This ability enables them to capture temporal dependencies in data, making them powerful for sequence-based tasks. However, RNNs come with challenges like vanishing and exploding gradients, which affect their ability to learn long-term dependencies. Throughout the module, you will explore different RNN variants such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), which address these challenges. You will also delve into advanced training techniques and applications of RNNs in real-world NLP and time series problems.
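
The recurrence itself is compact enough to sketch directly. Below is a minimal NumPy version of a vanilla RNN step, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b); the sizes and random data are placeholders, and deep learning frameworks provide equivalent (and LSTM/GRU) layers.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 4, 8, 5

# One set of weights, reused at every time step (parameter sharing).
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)

x_seq = rng.normal(size=(seq_len, input_size))  # toy input sequence
h = np.zeros(hidden_size)                       # initial hidden state

for t in range(seq_len):
    # The hidden state carries information from all previous steps forward.
    h = np.tanh(W_xh @ x_seq[t] + W_hh @ h + b)

print(h.round(3))  # final state summarizes the whole sequence
```

Repeated multiplication by W_hh during backpropagation through time is also where vanishing and exploding gradients come from, which motivates the LSTM and GRU gating covered in this module.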

What's included

2 videos · 22 readings · 2 assignments · 1 app item

This module introduces advanced Natural Language Processing (NLP) techniques, focusing on foundational tasks such as Part-of-Speech (PoS) tagging, sentiment analysis, and sequence modeling with recurrent neural networks (RNNs). You will examine how PoS tagging helps in understanding grammatical structures, enabling applications such as machine translation and named entity recognition (NER). The module delves into sentiment analysis, highlighting various approaches from traditional machine learning models (e.g., Naive Bayes) to advanced deep learning techniques (e.g., bidirectional RNNs and transformers). You will learn to implement both forward and backward contextual understanding using bidirectional RNNs, an approach that improves accuracy in tasks where sequence order impacts meaning. By the end of the module, you will gain hands-on experience building NLP models for real-world applications, equipping you to handle sequential data and capture complex dependencies in text analysis.
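
As a small taste of the traditional baseline, here is a hedged sketch of a Naive Bayes sentiment classifier; the choice of scikit-learn and the four-sentence training set are assumptions for illustration, not the course's prescribed tooling or data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real exercise would use a labeled corpus.
texts = ["great movie, loved it",
         "terrible plot, wasted my time",
         "what a wonderful film",
         "boring and painful to watch"]
labels = ["pos", "neg", "pos", "neg"]

# Bag-of-words counts feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["a wonderful and great film"]))  # expected: ['pos']
```

A bag-of-words model like this ignores word order entirely, which is exactly the gap that bidirectional RNNs and transformers close.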

What's included

1 video · 15 readings · 4 assignments

This module introduces you to core tasks and advanced techniques in Natural Language Processing (NLP), with a focus on structured prediction, machine translation, and sequence labeling. You will explore foundational topics such as Named Entity Recognition (NER), Part-of-Speech (PoS) tagging, and sentiment analysis, and apply neural network architectures like Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Conditional Random Fields (CRFs). The module will cover key concepts in sequence modeling, such as bidirectional and multi-layer RNNs, which capture both past and future context to enhance the accuracy of tasks like NER and PoS tagging. Additionally, you will delve into Neural Machine Translation (NMT), examining encoder-decoder models with attention mechanisms to address challenges in translating long sequences. Practical implementations will involve integrating these models into real-world applications, focusing on handling complex language structures, rare words, and sequential dependencies. By the end of this module, you will be proficient in building and optimizing deep learning models for a variety of NLP tasks.
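
Sequence labelers built on HMMs and CRFs share one decoding step: finding the highest-scoring tag path. Here is a compact NumPy sketch of Viterbi decoding; the random emission and transition scores stand in for a trained model's potentials.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_start):
    """Most likely tag sequence for T tokens over K tags.
    log_emit: (T, K) per-token tag scores.
    log_trans: (K, K) score of moving from tag i to tag j.
    log_start: (K,) score of starting in each tag."""
    T, K = log_emit.shape
    dp = log_start + log_emit[0]          # best score ending in each tag at step 0
    back = np.zeros((T, K), dtype=int)    # backpointers to the best previous tag
    for t in range(1, T):
        scores = dp[:, None] + log_trans  # (K, K): previous tag -> current tag
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + log_emit[t]
    tags = [int(dp.argmax())]             # best final tag, then walk backpointers
    for t in range(T - 1, 0, -1):
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]

rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(6, 3)), rng.normal(size=(3, 3)), np.zeros(3)))
```

The same dynamic program performs inference when a CRF layer sits on top of a BiLSTM encoder, a standard pairing for NER and PoS tagging.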

What's included

3 videos · 18 readings · 4 assignments

In this module we’ll focus on attention mechanisms and explore the evolution and significance of attention in neural networks, starting with its introduction in neural machine translation. We’ll cover the challenges of traditional sequence-to-sequence models and how attention mechanisms, particularly in Transformer architectures, address issues like long-range dependencies and parallelization, enhancing the model’s ability to focus dynamically on relevant parts of the input sequence. Then, we’ll turn our attention to Transformers and delve into the revolutionary architecture introduced by Vaswani et al. in 2017, which has significantly advanced natural language processing. We’ll cover the core components of Transformers, including self-attention, multi-head attention, and positional encoding, and explain how these innovations address the limitations of traditional sequence models and enable efficient parallel processing and handling of long-range dependencies in text.
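
The heart of the architecture fits in a few lines. Below is a minimal NumPy sketch of single-head scaled dot-product self-attention; the random weights and sequence are placeholders, and real Transformers add multiple heads, masking, positional encodings, and learned projections per layer.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention over X of shape (T, d_model)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (T, T) pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # each position mixes all values

rng = np.random.default_rng(0)
T, d_model, d_k = 4, 8, 8
X = rng.normal(size=(T, d_model))
W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 8)
```

Because the (T, T) score matrix relates every position to every other in one matrix product, the model captures long-range dependencies without stepping through the sequence, which is what makes parallel training possible.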

What's included

2 videos · 25 readings · 3 assignments · 2 app items

In this module, we’ll home in on pre-training and explore its foundational role in modern NLP models, highlighting how models are initially trained on large, general datasets to learn language structures and semantics. This pre-training phase, often involving tasks like masked language modeling, equips models with broad linguistic knowledge; the models can then be fine-tuned on specific tasks, enhancing performance and reducing the need for extensive task-specific data.
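
To illustrate the masked-language-modeling objective, here is a sketch of the input-corruption step; the 15% rate follows BERT's recipe, while BERT's 80/10/10 mask/random/keep refinement is omitted for brevity.

```python
import random

MASK, MASK_RATE = "[MASK]", 0.15  # BERT-style masking rate

def mask_tokens(tokens, rng):
    """Hide a random subset of tokens; the model must predict the originals."""
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < MASK_RATE:
            corrupted.append(MASK)
            targets.append(tok)    # loss is computed only at masked positions
        else:
            corrupted.append(tok)
            targets.append(None)
    return corrupted, targets

rng = random.Random(0)
tokens = "models learn broad linguistic knowledge from large corpora".split()
print(mask_tokens(tokens, rng))
```

Because the labels come from the text itself, pre-training requires no manual annotation, which is why it scales to very large corpora before fine-tuning.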

What's included

1 video · 19 readings · 2 assignments

Instructor

Ramin Mohammadi
Northeastern University
4 courses · 488 learners

Offered by Northeastern University

