---
layout: post
title: HMM and Viterbi
description: Notes on reading the Stanford CS229 ML course
modified: 2014-05-28 20:58:37 -0400
category:
tags:
image:
comments:
share:
---
- Stanford CS229 handout
- Artificial Intelligence: A Modern Approach (R&N), Chapter 15
An HMM assumes the Markov property (the current state depends only on the previous state) and that the transition probabilities are time-homogeneous:

\[P(z_t|z_{t-1}, z_{t-2}, \ldots, z_1) = P(z_t|z_{t-1})\]
\[P(z_t|z_{t-1}) = P(z_2|z_1)\quad t\in \{2, \ldots, T\}\]
To estimate the transition matrix, the model maximizes the log-likelihood of the state sequence. Here \( \vec{z}\) is the state sequence and \(A\) is the transition matrix, where \(A_{ij} = P(z_t = s_j | z_{t-1} = s_i)\).
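As a sketch of that maximization, following the standard maximum-likelihood derivation under the notation just introduced (with \(\mathbf{1}\{\cdot\}\) denoting the indicator function), the log-likelihood of the state sequence and the resulting estimate of each transition probability are:

\[\log P(\vec{z}; A) = \sum_{t=2}^{T} \log A_{z_{t-1} z_t}\]

\[\hat{A}_{ij} = \frac{\sum_{t=2}^{T} \mathbf{1}\{z_{t-1} = s_i,\ z_t = s_j\}}{\sum_{t=2}^{T} \mathbf{1}\{z_{t-1} = s_i\}}\]

In other words, the maximum-likelihood transition probability from \(s_i\) to \(s_j\) is just the count of observed \(s_i \to s_j\) transitions, normalized by the total number of transitions out of \(s_i\).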
Both Wikipedia and R&N give a good example that illustrates the intuition behind HMMs. Basically, we have a known observation sequence and an unknown state sequence. In the Wikipedia example, the known observation sequence is how the patient feels (normal, cold, dizzy) and the unknown state sequence is the patient's health condition (Healthy, Fever). Below is the image from the Wikipedia article on the Viterbi algorithm.
Given the state transition probabilities (e.g. P(Healthy->Fever)) and the emission matrix mapping states to observations (e.g. P(Normal\|Healthy)), we can infer the most likely state sequence, or predict the next state or even the next observation, depending on the task.
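To make the inference step concrete, here is a minimal Python sketch of Viterbi decoding on that Healthy/Fever example. The probability values are the ones from the Wikipedia article; the function and variable names are my own.

```python
# Viterbi decoding on the Wikipedia Healthy/Fever example.
# Probabilities below are taken from the Wikipedia article.
states = ("Healthy", "Fever")
observations = ("normal", "cold", "dizzy")

start_p = {"Healthy": 0.6, "Fever": 0.4}
trans_p = {
    "Healthy": {"Healthy": 0.7, "Fever": 0.3},
    "Fever":   {"Healthy": 0.4, "Fever": 0.6},
}
emit_p = {
    "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
    "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the most likely path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]  # back[t][s] = predecessor of s on that best path
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Pick the best previous state to transition from.
            prev, p = max(
                ((ps, V[t - 1][ps] * trans_p[ps][s]) for ps in states),
                key=lambda x: x[1],
            )
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # Trace back from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path)), V[-1][last]

path, prob = viterbi(observations, states, start_p, trans_p, emit_p)
print(path, prob)  # ['Healthy', 'Healthy', 'Fever'] 0.01512
```

Running it recovers the most likely state sequence ['Healthy', 'Healthy', 'Fever'] with probability 0.01512, matching the worked example in the article.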