Course: Digital Speech Processing
Instructor: 李琳山 (Lin-shan Lee)
Department: Electrical Engineering
Exam date: 2006/5/16
Exam questions:
1.(20) Given an HMM λ = (A, B, π), an observation sequence Ō = o_1 o_2 ... o_t ... o_T,
   and a state sequence q̄ = q_1 q_2 ... q_t ... q_T, define
       α_t(i) = Prob[o_1 o_2 ... o_t, q_t = i | λ]
       β_t(i) = Prob[o_{t+1} o_{t+2} ... o_T | q_t = i, λ]
   (a) (5) Show that Prob(Ō | λ) = Σ_{i=1}^{N} α_t(i) β_t(i)
   (b) (5) Show that Prob(q_t = i | Ō, λ) = α_t(i) β_t(i) / Σ_{j=1}^{N} α_t(j) β_t(j)
   (c) (10) Formulate and describe the procedure of the Viterbi algorithm for finding
       the best state sequence q̄* = q_1* q_2* ... q_t* ... q_T*
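As a study aid for (c), a minimal sketch of the Viterbi recursion on a toy 2-state HMM; all the probabilities and the observation sequence here are hypothetical, not from the course:

```python
import numpy as np

# Toy HMM (all numbers are made up): 2 states, 3 observation symbols.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])            # transition probabilities a_ij
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])       # emission probabilities b_j(o)
pi = np.array([0.6, 0.4])             # initial state probabilities

def viterbi(obs):
    """Return the best state sequence q* for an observation sequence."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))          # delta_t(i): best partial-path prob ending in state i
    psi = np.zeros((T, N), dtype=int) # back-pointers
    delta[0] = pi * B[:, obs[0]]      # initialization
    for t in range(1, T):             # recursion
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    # termination and back-tracking
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2]))             # → [0, 0, 1]
```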
2.(10) Given a discrete-valued random variable X with probability distribution
       { p_i = Prob(X = x_i), i = 1, 2, ..., M },  Σ_{i=1}^{M} p_i = 1,
   explain the meaning of H(X) = - Σ_{i=1}^{M} p_i log(p_i)
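For question 2, a quick numeric check of the formula; the fair-coin example is my own, not from the exam:

```python
import math

def entropy(p, base=2.0):
    """H(X) = -sum_i p_i log(p_i); zero-probability outcomes contribute nothing."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))       # fair coin → 1.0 bit
print(entropy([1.0]))            # certain outcome → 0.0 bits
```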
3.(10) What is the problem of coarticulation and context dependency considered in
   acoustic modeling? Why are tri-phone models difficult to train?
4.(10) For Chinese language models, the N-gram can be trained based on
either characters or words. Discuss the considerations in the choice
between them.
5.(10) Explain the basic principles of back-off and interpolation to be
used for language model smoothing.
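For question 5, a toy sketch of one of the two named schemes, linear interpolation of a bigram with a unigram; the corpus, the λ value, and the use of the unigram count as the history count are all simplifying assumptions of mine:

```python
from collections import Counter

# Hypothetical toy corpus (word-based n-gram).
corpus = "a b a b c a b".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
N = len(corpus)

def p_interp(w, prev, lam=0.7):
    """Interpolated bigram: lam * P_ML(w|prev) + (1-lam) * P_ML(w).

    Uses the unigram count of `prev` as the history count (a simplification).
    An unseen bigram still gets nonzero probability via the unigram term,
    which is the point of smoothing.
    """
    p_uni = unigrams[w] / N
    p_bi = bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
    return lam * p_bi + (1 - lam) * p_uni
```

Back-off differs in that it uses the lower-order model only when the higher-order count is zero, with a discount to keep the distribution normalized.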
6.(10) In feature extraction for speech recognition, after you obtain 12
MFCC parameters plus a short-time energy (a total of 13 parameters),
explain how to obtain the other 26 parameters and what they are.
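For question 6, the other 26 parameters are typically the first- and second-order time derivatives (delta and delta-delta) of the 13 static parameters, giving 39 in total. A sketch using a simple ±1-frame difference; the frame count and random data are assumptions:

```python
import numpy as np

def deltas(feat):
    """First-order time derivatives via a +/-1 frame difference (a common simplification;
    real front-ends often use a longer regression window)."""
    padded = np.pad(feat, ((1, 1), (0, 0)), mode="edge")  # repeat edge frames
    return (padded[2:] - padded[:-2]) / 2.0

T, D = 100, 13
static = np.random.randn(T, D)       # 13 static params per frame (12 MFCC + energy, assumed)
d1 = deltas(static)                  # 13 delta parameters
d2 = deltas(d1)                      # 13 delta-delta parameters
full = np.hstack([static, d1, d2])   # 39-dimensional feature vector per frame
```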
7.(10) Explain why the use of a window with finite length, w(n),
n = 0, 1, 2, ..., L-1, is necessary for feature extraction in speech
recognition.
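For question 7, one common finite-length window is the Hamming window; a sketch of building and applying one, where the frame length and sample values are my own assumptions:

```python
import numpy as np

L = 400                                            # e.g. 25 ms at 16 kHz (assumed)
n = np.arange(L)
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (L - 1))  # Hamming window w(n), n = 0..L-1
frame = np.random.randn(L)                         # hypothetical speech samples
windowed = frame * w                               # tapered frame, fed to the DFT/MFCC steps
```

The taper to near-zero at both ends reduces the spectral leakage caused by truncating the signal to a finite frame.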
8.(10) What do we mean by spoken document understanding and organization?
9.(30) Write down anything you learned about the following subjects that was NOT
   mentioned in class. Don't write anything mentioned in class.
(a)(15) classification and regression tree (CART)
(b)(15) search problem/algorithm for large vocabulary continuous speech
recognition.
※ Posted from: PTT (ptt.cc)