I am now done watching the week 2 videos from my NLP course on Coursera. I have never been this enthusiastic about taking an online course, or followed one this seriously. One feature of Coursera I really like is that all the lectures are split into short video segments of 10-15 minutes each. This is good because, as a listener, I don't get bored or get bombarded with too much information at once.
The week 2 videos were a little harder and involved a lot of probabilistic math, but Professor Collins did a good job of explaining all the concepts with good examples :) Here is what I gained from the week 2 videos:
- The tagging problem
  - Part-of-speech tagging
  - Named-entity tagging
- Supervised learning problems
- Generative models and the noisy-channel model for supervised learning
- Hidden Markov Model (HMM) taggers
- Trigram Hidden Markov Models (trigram HMMs)
  - Basic definitions
  - Parameter estimation
  - The Viterbi algorithm
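To make the parameter-estimation part concrete for myself, here is a rough sketch of how the maximum-likelihood estimates for a trigram HMM can be computed from counts. The tiny two-sentence "corpus" and all names here are my own toy example, not anything from the course:

```python
from collections import Counter

def estimate(tagged_sentences):
    """Toy MLE estimates for a trigram HMM.

    q(s | u, v)  = count(u, v, s) / count(u, v)   (tag trigram probabilities)
    e(word | s)  = count(s, word) / count(s)      (emission probabilities)
    Tag sequences are padded with two "*" start symbols and a "STOP" symbol.
    """
    trigram, bigram, emit, tag_count = Counter(), Counter(), Counter(), Counter()
    for sent in tagged_sentences:
        tags = ["*", "*"] + [t for _, t in sent] + ["STOP"]
        for i in range(2, len(tags)):
            trigram[(tags[i - 2], tags[i - 1], tags[i])] += 1
            bigram[(tags[i - 2], tags[i - 1])] += 1
        for word, tag in sent:
            emit[(tag, word)] += 1
            tag_count[tag] += 1
    q = {k: v / bigram[k[:2]] for k, v in trigram.items()}
    e = {k: v / tag_count[k[0]] for k, v in emit.items()}
    return q, e

# A made-up mini corpus: D = determiner, N = noun, V = verb
corpus = [[("the", "D"), ("dog", "N"), ("barks", "V")],
          [("the", "D"), ("cat", "N"), ("sleeps", "V")]]
q, e = estimate(corpus)
print(q[("*", "*", "D")])  # 1.0 -- every toy sentence starts with a determiner
print(e[("N", "dog")])     # 0.5 -- "dog" is one of two observed nouns
```

In practice you would also need smoothing (e.g. interpolating trigram, bigram, and unigram estimates), since raw counts give zero probability to unseen trigrams.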
Now I am waiting for the programming assignment to be posted, which will probably involve implementing the Viterbi algorithm.
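Since the assignment will likely be Viterbi, I sketched the idea in advance. This is a minimal version for a *bigram* HMM (the course uses trigram HMMs, which add one more dimension to the dynamic-programming table); the states and probability tables are made-up toy values, not course data:

```python
def viterbi(words, states, q, e, start="*"):
    """Most likely tag sequence for `words` under a bigram HMM.

    q[(s_prev, s)]  : transition probability of tag s after tag s_prev
    e[(s, word)]    : emission probability of word given tag s
    Missing entries are treated as probability 0.
    """
    # pi[k][s]: max probability of any tag sequence ending in state s at position k
    pi = [{start: 1.0}]
    bp = [{}]  # backpointers
    for k, word in enumerate(words, 1):
        pi.append({})
        bp.append({})
        for s in states:
            # Maximize over the previous state
            best_prev, best_prob = None, 0.0
            for s_prev, prev_prob in pi[k - 1].items():
                p = prev_prob * q.get((s_prev, s), 0.0) * e.get((s, word), 0.0)
                if p > best_prob:
                    best_prev, best_prob = s_prev, p
            pi[k][s] = best_prob
            bp[k][s] = best_prev
    # Pick the best final state, then follow backpointers to recover the path
    last = max(pi[-1], key=pi[-1].get)
    tags = [last]
    for k in range(len(words), 1, -1):
        tags.append(bp[k][tags[-1]])
    return list(reversed(tags))

# Toy example: tag "the dog barks" with D(eterminer), N(oun), V(erb)
states = ["D", "N", "V"]
q = {("*", "D"): 0.8, ("*", "N"): 0.2, ("D", "N"): 0.9,
     ("N", "V"): 0.7, ("N", "N"): 0.1, ("V", "N"): 0.5}
e = {("D", "the"): 0.6, ("N", "dog"): 0.4,
     ("V", "barks"): 0.3, ("N", "barks"): 0.05}
print(viterbi("the dog barks".split(), states, q, e))  # ['D', 'N', 'V']
```

The key point is that the table `pi` keeps only the best-scoring path into each state at each position, so the search is linear in sentence length instead of exponential. For the trigram version from the lectures, each table entry is indexed by the previous *two* tags rather than one.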