Course name: Introduction to Digital Speech Processing
Course type: Elective
Instructor: Lin-shan Lee (李琳山)
College: College of Electrical Engineering and Computer Science
Department: Department of Computer Science and Information Engineering
Exam date (Y/M/D): 2010.5.19
Exam duration (minutes): 120
Eligible for reward points: Yes
(If not explicitly indicated, no reward will be given)
Exam questions:
# OPEN BOOK: lecture PowerPoint slides (printed version) and personal notes
# You must answer all questions in CHINESE sentences, but you may use
  English terminology.
# Total points: 160
------------------------------------------------------------------------------
1.(20) Given an HMM λ = (A, B, π) with N states and an observation sequence
  O = o_1 o_2 ... o_t ... o_T, define
     α_t(i) = Prob[o_1 o_2 ... o_t, q_t = i | λ]
     β_t(i) = Prob[o_{t+1} o_{t+2} ... o_T | q_t = i, λ]
  (a) Derive the meaning of Σ_{i=1}^{N} α_t(i) β_t(i) in probability.
  (b) Derive the meaning of α_t(i) β_t(i) / Σ_{j=1}^{N} α_t(j) β_t(j)
      in probability.
  (c) Derive the meaning of α_t(i) a_{ij} b_j(o_{t+1}) β_{t+1}(j) in
      probability.
  (d) Formulate and describe the Viterbi algorithm to find the best state
      sequence q* = q*_1 q*_2 ... q*_t ... q*_T giving the highest
      probability Prob[O, q* | λ]. Why do we need backtracking?
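For question 1(d), the recursion and the role of backtracking can be sketched as follows; the toy HMM parameters in the test below are assumptions for illustration, not part of the exam.

```python
# Sketch of the Viterbi algorithm for an HMM lambda = (A, B, pi).
import numpy as np

def viterbi(A, B, pi, obs):
    """Find the state sequence q* maximizing Prob[O, q | lambda].

    A:   N x N transition matrix, A[i, j] = a_ij
    B:   N x M emission matrix,   B[j, o] = b_j(o)
    pi:  initial state distribution
    obs: observation sequence o_1 ... o_T as symbol indices
    """
    N, T = len(pi), len(obs)
    delta = np.zeros((T, N))           # delta_t(j): best path prob ending in state j
    psi = np.zeros((T, N), dtype=int)  # psi_t(j): best predecessor of state j at time t

    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]

    # Backtracking: the maximization at time T only identifies the final
    # state; the stored psi pointers are needed to recover the whole best
    # state sequence, which is why backtracking is required.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    path.reverse()
    return path, float(np.max(delta[-1]))
```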
2.(20) What are the problems of coarticulation and context dependency
   considered in acoustic modeling? Describe a situation in which a
   mono-phone model outperforms a tri-phone model.
3.(20) In large vocabulary continuous speech recognition, explain:
   (a) What the "language model weight" is.
   (b) Why the language model also functions as a penalty against inserting
       extra words.
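Question 3 concerns how decoding combines acoustic and language model scores. A common formulation, sketched below with illustrative names and values (an assumption, not the exam's notation), is a weighted sum of log-probabilities:

```python
# Sketch: combined decoding score with a language model weight.
def decoding_score(log_p_acoustic, log_p_lm, num_words,
                   lm_weight=10.0, word_insertion_penalty=-1.0):
    # lm_weight rescales LM log-probabilities to be comparable in
    # magnitude to acoustic log-likelihoods. Because every extra word
    # contributes another (negative) LM log-probability term, the
    # language model itself penalizes inserting extra words.
    return (log_p_acoustic
            + lm_weight * log_p_lm
            + num_words * word_insertion_penalty)
```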
4.(20) Explain what a class-based language model is. What are the possible
   reasons if a class-based N-gram model performs worse than a word-based
   N-gram model?
5.(20)What is the perplexity of a language source? What is the perplexity of
a language model with respect to a corpus? How are they related to a
"virtual vocabulary"?
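For question 5, the perplexity of a model with respect to a corpus is PP = 2^(-(1/N) Σ log2 P(w_i | history)); a minimal sketch, with the per-word probabilities supplied directly as an assumption:

```python
# Sketch: perplexity of a language model over a corpus of N words.
import math

def perplexity(probs):
    """probs: the model probability assigned to each of the N corpus words."""
    N = len(probs)
    log2_sum = sum(math.log2(p) for p in probs)
    return 2 ** (-log2_sum / N)
```

If the model assigns uniform probability 1/V to every word, the perplexity is exactly V: the model behaves as if choosing uniformly from a "virtual vocabulary" of that size.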
6.(20)In feature extraction for speech recognition, after you obtain 12 MFCC
parameters plus a short-time energy (a total of 13 parameters), explain how
to obtain the other 26 parameters and what they are.
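For question 6, the other 26 parameters are the first (delta) and second (delta-delta, or acceleration) time derivatives of the 13 static parameters, giving 39 per frame. A common regression-based approximation is sketched below; the window size and edge-padding convention are assumptions:

```python
# Sketch: delta features by linear regression over +-K neighboring frames.
import numpy as np

def delta(features, K=2):
    """features: T x D static parameters; returns T x D delta features.

    delta_t = sum_{k=1..K} k * (c_{t+k} - c_{t-k}) / (2 * sum_{k} k^2),
    with edge frames padded by repetition (a common convention, assumed).
    Applying delta() to the delta features gives the delta-delta features.
    """
    T, D = features.shape
    denom = 2 * sum(k * k for k in range(1, K + 1))
    padded = np.pad(features, ((K, K), (0, 0)), mode="edge")
    out = np.zeros_like(features, dtype=float)
    for t in range(T):
        for k in range(1, K + 1):
            out[t] += k * (padded[t + K + k] - padded[t + K - k])
    return out / denom
```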
7.(20) In training HMM models for isolated word recognition, do you think
   that the more iterations you perform, the higher the recognition accuracy
   you will get? Note that the likelihood is guaranteed to increase at each
   iteration. Explain your answer.
8.(20)What is the pre-emphasis procedure in MFCC extraction? Why do we perform
pre-emphasis?
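Question 8 refers to the first-order high-pass filter y[n] = x[n] - a·x[n-1] applied before framing, which boosts high frequencies to compensate for the spectral tilt of voiced speech. A minimal sketch, with the typical coefficient a ≈ 0.97 as an assumed default:

```python
# Sketch: pre-emphasis filter y[n] = x[n] - a * x[n-1].
def pre_emphasis(x, a=0.97):
    # The first sample has no predecessor; keep it unchanged (a common choice).
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]
```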
--
※ Posted from: 批踢踢實業坊 PTT (ptt.cc)
◆ From: 140.112.30.96