Northeastern University

Applied Natural Language Processing in Engineering Part 1

Instructor: Ramin Mohammadi

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
3 weeks to complete
at 10 hours a week
Flexible schedule
Learn at your own pace

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

October 2025

Assessments

22 assignments

Taught in English

See how employees at top companies are mastering in-demand skills

Logos of Petrobras, TATA, Danone, Capgemini, P&G, and L'Oréal

There are 7 modules in this course

This module provides an in-depth exploration of Natural Language Processing (NLP), a crucial area of artificial intelligence that enables computers to understand, interpret, and generate human language. By combining computational linguistics with machine learning, NLP is applied in various technologies, from chatbots and sentiment analysis to machine translation and speech recognition. The module introduces fundamental NLP tasks such as text classification, Named Entity Recognition (NER), and neural machine translation, showcasing how these applications shape real-world interactions with AI. Additionally, it highlights the complexities of teaching language to machines, including handling ambiguity, grammar, and cultural nuances. Throughout the course, you will gain hands-on experience with key techniques like word representation and distributional semantics, preparing you to solve language-related challenges in modern AI systems.
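
As a taste of one task named above, text classification, here is a minimal illustrative sketch using a bag-of-words pipeline in scikit-learn; the tiny sentiment dataset and labels are invented for illustration and are not course material.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment data: 1 = positive, 0 = negative (illustrative only).
texts = ["great movie, loved it",
         "terrible plot and acting",
         "loved the soundtrack",
         "terrible, a waste of time"]
labels = [1, 0, 1, 0]

# Bag-of-words features fed into a logistic regression classifier.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["what a great soundtrack"]))  # likely [1] on this toy data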

What's included

4 videos · 19 readings · 2 assignments · 1 app item

This module focuses on optimization techniques critical for machine learning, particularly in natural language processing (NLP) tasks. It introduces Gradient Descent (GD), a fundamental algorithm used to minimize cost functions by iteratively adjusting model parameters. You’ll explore variants like Stochastic Gradient Descent (SGD) and Mini-Batch Gradient Descent and learn how they handle large datasets more efficiently. Advanced methods such as Momentum and Adam are covered to show how they speed up convergence by smoothing updates and adapting learning rates. The module also covers second-order techniques like Newton’s Method and Quasi-Newton methods (e.g., BFGS), which leverage curvature information for more direct optimization steps, although they come with higher computational costs. Overall, this module emphasizes balancing efficiency, accuracy, and computational feasibility in optimizing machine learning models.
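
To make the update rules concrete, here is a minimal numpy-only sketch (not the course's own code) contrasting plain gradient descent with Adam on a least-squares objective; the data and hyperparameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # toy design matrix
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

def grad(w):
    # gradient of the mean-squared-error cost (1/2n) * ||Xw - y||^2
    return X.T @ (X @ w - y) / len(y)

# Plain gradient descent: w <- w - lr * grad(w)
w_gd = np.zeros(3)
for _ in range(500):
    w_gd -= 0.1 * grad(w_gd)

# Adam: per-parameter adaptive steps from first/second moment estimates
w_adam = np.zeros(3)
m = v = np.zeros(3)
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8
for t in range(1, 501):
    g = grad(w_adam)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat, v_hat = m / (1 - beta1**t), v / (1 - beta2**t)
    w_adam -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(w_gd.round(2), w_adam.round(2))   # both should approach [2, -1, 0.5]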

What's included

2 videos · 16 readings · 3 assignments

This module explores Named Entity Recognition (NER), a core task in Natural Language Processing (NLP) that identifies and classifies entities like people, locations, and organizations in text. We’ll begin by examining how logistic regression can be used to model NER as a binary classification problem, though this approach faces limitations with complexity and context capture. We’ll then transition to more advanced techniques, such as neural networks, which excel at handling the complex patterns and large-scale data that traditional models struggle with. Neural networks' ability to learn hierarchical features makes them ideal for NER tasks, as they can capture contextual information more effectively than simpler models. Throughout the module, we compare these methods and highlight how deep learning approaches such as Recurrent Neural Networks (RNNs) and transformers like BERT improve NER accuracy and scalability.
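
As a minimal sketch of the logistic-regression framing described above, the snippet below treats NER as per-token binary classification with a few hand-crafted features; the single labeled sentence is invented for illustration, and a real system would need far more data and richer features.

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

tokens = ["Barack", "Obama", "visited", "Paris", "in", "May"]
labels = [1, 1, 0, 1, 0, 0]             # 1 = token is part of a named entity

def features(i):
    # simple per-token features: word identity plus capitalization cues
    tok = tokens[i]
    return {
        "word": tok.lower(),
        "is_capitalized": tok[0].isupper(),
        "prev_capitalized": i > 0 and tokens[i - 1][0].isupper(),
    }

X = DictVectorizer().fit_transform(features(i) for i in range(len(tokens)))
clf = LogisticRegression().fit(X, labels)

print(clf.predict(X))  # predictions for the same toy sentence (not a meaningful evaluation)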

What's included

2 videos · 14 readings · 3 assignments · 1 app item

The Word2Vec and GloVe models are popular word embedding techniques in Natural Language Processing (NLP), each offering unique advantages. Word2Vec, developed by Google, operates via two key architectures, Continuous Bag of Words (CBOW) and Skip-gram, which predict a word from its context or the context from a word. GloVe, developed at Stanford, instead combines count-based and predictive approaches by leveraging word co-occurrence matrices to learn word vectors. Both models represent words in a high-dimensional vector space and capture semantic relationships. Word2Vec focuses on local contexts, learning efficiently from large datasets, while GloVe emphasizes global word co-occurrence patterns across the entire corpus, revealing deeper word associations. These embeddings enable tasks like analogy-solving, semantic similarity, and other linguistic computations, making them central to modern NLP applications.
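
As an illustrative sketch (not course code), the snippet below trains a tiny skip-gram Word2Vec model with gensim; parameter names assume gensim 4.x, and the four-sentence corpus is far too small to produce meaningful embeddings, serving only to show the API shape.

from gensim.models import Word2Vec

corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "dog", "chases", "the", "ball"],
    ["the", "puppy", "chases", "the", "ball"],
]

# sg=1 selects the skip-gram objective; sg=0 would select CBOW.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1, epochs=200)

# Each word is now a dense vector; words sharing contexts tend to land nearby.
print(model.wv.most_similar("king", topn=3))
print(model.wv.similarity("dog", "puppy"))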

What's included

3 videos · 29 readings · 4 assignments · 1 app item

This module delves into the evaluation techniques for Natural Language Processing (NLP) models, focusing on both intrinsic and extrinsic evaluation methods. Intrinsic evaluation assesses the model's performance based on internal criteria, such as word embedding quality, parsing accuracy, and language model perplexity. In contrast, extrinsic evaluation measures the model's effectiveness in real-world applications, including tasks like machine translation, sentiment analysis, and named entity recognition. You’ll also examine the key differences between these evaluation types and the importance of context and application in determining a model's utility. Additionally, you’ll review specific metrics such as cross-entropy loss, perplexity, BLEU, and ROUGE scores, gaining a comprehensive understanding of how to evaluate and improve NLP models.
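
To ground two of the metrics named above, here is a minimal numpy-only sketch of how cross-entropy loss relates to perplexity; the per-token probabilities are made up for illustration.

import numpy as np

# Probabilities a toy language model assigned to each token actually observed
# in a held-out sentence (illustrative values only).
token_probs = np.array([0.20, 0.05, 0.40, 0.10, 0.25])

cross_entropy = -np.mean(np.log(token_probs))   # average negative log-likelihood, in nats
perplexity = np.exp(cross_entropy)              # perplexity = exp(cross-entropy)

print(f"cross-entropy: {cross_entropy:.3f} nats, perplexity: {perplexity:.2f}")
# Lower perplexity means the model found the held-out text less "surprising".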

What's included

9 readings · 2 assignments · 1 app item

This module explores various techniques for topic modeling in natural language processing (NLP), focusing on methods like Latent Semantic Analysis (LSA), Non-Negative Matrix Factorization (NMF), and Latent Dirichlet Allocation (LDA). It begins with an introduction to matrix factorization and the importance of transforming textual data into numerical representations. You’ll delve into the mechanics of LSA and NMF, paying attention to their use of TF-IDF and Singular Value Decomposition (SVD) to uncover latent semantic structures. Additionally, you’ll review LDA's probabilistic approach to topic modeling, which relies on Dirichlet distributions and Bayesian inference to identify hidden topics within a corpus. Through detailed examples and mathematical explanations, the module provides a comprehensive understanding of how these techniques can be applied to extract meaningful topics from large text datasets.
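
Here is a minimal illustrative sketch of the LSA pipeline described above (TF-IDF followed by truncated SVD) using scikit-learn; the four-document corpus is invented, and get_feature_names_out assumes scikit-learn 1.0 or later.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the stock market fell as investors sold shares",
    "bank shares dropped after the earnings report",
    "the team won the match with a late goal",
    "the coach praised the players after the game",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                    # documents x terms TF-IDF matrix

svd = TruncatedSVD(n_components=2, random_state=0)
doc_topics = svd.fit_transform(X)                # documents projected into latent topic space
print(doc_topics.round(2))

# Top-weighted terms for each latent dimension ("topic").
terms = tfidf.get_feature_names_out()
for k, component in enumerate(svd.components_):
    top = component.argsort()[::-1][:4]
    print(f"topic {k}:", [terms[i] for i in top])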

What's included

1 video · 16 readings · 4 assignments · 1 app item

This module delves into the essential techniques of syntactic and semantic parsing in natural language processing (NLP). You’ll begin with an exploration of linguistic structures, focusing on phrase structure and dependency structure, which are fundamental for understanding sentence syntax. Then you’ll review various parsing methods, including transition-based and graph-based dependency parsing, highlighting their respective advantages and challenges. Additionally, you’ll examine neural transition-based parsing, which leverages neural networks for improved accuracy and efficiency. Finally, the module turns to semantic parsing, emphasizing its role in mapping sentences to formal representations of meaning, which is crucial for applications like dialogue systems and information extraction.
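
As a small illustrative sketch of inspecting a dependency structure in practice, the snippet below uses spaCy (it assumes the en_core_web_sm model has been installed separately); this is a generic example, not the course's own parser.

import spacy

# Requires: python -m spacy download en_core_web_sm (assumed installed).
nlp = spacy.load("en_core_web_sm")
doc = nlp("The engineer parsed the sentence with a neural model.")

# Each token points to its syntactic head via a labeled dependency relation.
for token in doc:
    print(f"{token.text:<10} --{token.dep_}--> {token.head.text}")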

What's included

2 videos · 32 readings · 4 assignments

Instructor

Ramin Mohammadi
Northeastern University
4 Courses · 488 learners

Offered by

Explore more from Machine Learning

Why people choose Coursera for their career

Felipe M.
Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."
Jennifer J.
Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."
Larry W.
Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."
Chaitanya A.
"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."
Coursera Plus

Open new doors with Coursera Plus

Unlimited access to 10,000+ world-class courses, hands-on projects, and job-ready certificate programs - all included in your subscription

Advance your career with an online degree

Earn a degree from world-class universities - 100% online

Join over 3,400 global companies that choose Coursera for Business

Upskill your employees to excel in the digital economy

Frequently asked questions