EXPERIMENTAL RESULTS
In this paper we present the results of speaker identification experiments using short utterances. The commercially available TI46 (Texas Instruments) speech data corpus was used to compare the classifiers. There are 16 speakers, 8 female and 8 male, labelled f1-f8 and m1-m8, respectively. The vocabulary contains a set of ten single-word computer commands: {enter, erase, go, help, no, rubout, repeat, stop, start, yes}. Each speaker repeated the words ten times in a single training session, and then twice more in each of 8 later testing sessions.
The corpus is sampled at 12500 samples per second with 12 bits per sample. The data were processed in 20.48 ms frames (256 samples) at a frame rate of 125 frames per second (100-sample shift). Frames were Hamming windowed and pre-emphasised with μ = 0.9. For each frame, 46 mel-spectral bands with a width of 110 mel and 20 mel-frequency cepstral coefficients (MFCCs) were determined [15]. In the training phase, 100 training tokens (10 utterances x 1 training session x 10 repetitions) per speaker were used to train codebooks of 32, 64 and 128 codevectors using the LBG algorithm.
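For concreteness, the sketch below outlines a front-end and an LBG training step consistent with this description. Only the numerical settings (12.5 kHz sampling, 256-sample Hamming windows, 100-sample shift, pre-emphasis coefficient 0.9, 46 mel bands, 20 MFCCs, power-of-two codebook sizes) are taken from the text; the function names, triangular mel-filter design, DCT-II cepstra and split-by-perturbation details are illustrative assumptions, not the exact routines used in the experiments.

import numpy as np

FS, FRAME_LEN, FRAME_SHIFT = 12500, 256, 100   # 12.5 kHz, 20.48 ms frames, 125 frames/s
N_MEL, N_MFCC = 46, 20

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_fft=FRAME_LEN, n_mel=N_MEL, fs=FS):
    # Triangular mel filters spanning 0 Hz .. fs/2 (illustrative design).
    mel_pts = np.linspace(0.0, hz_to_mel(fs / 2.0), n_mel + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_mel, n_fft // 2 + 1))
    for i in range(n_mel):
        left, centre, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, centre):
            fb[i, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fb[i, k] = (right - k) / max(right - centre, 1)
    return fb

# DCT-II basis used to map 46 log mel energies to the first 20 cepstral coefficients.
DCT = np.cos(np.pi * np.outer(np.arange(N_MFCC), np.arange(N_MEL) + 0.5) / N_MEL)

def mfcc_frames(signal):
    # signal: 1-D numpy array of samples at 12.5 kHz.
    # Pre-emphasis (mu = 0.9), Hamming window, 46 log mel bands, 20 MFCCs per frame.
    x = np.append(signal[0], signal[1:] - 0.9 * signal[:-1])
    window = np.hamming(FRAME_LEN)
    fb = mel_filterbank()
    feats = []
    for start in range(0, len(x) - FRAME_LEN + 1, FRAME_SHIFT):
        frame = x[start:start + FRAME_LEN] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        feats.append(DCT @ np.log(fb @ power + 1e-10))
    return np.array(feats)

def lbg_codebook(features, size=64, eps=0.01, iters=10):
    # Grow the codebook 1 -> 2 -> ... -> size by splitting, then refine with k-means passes.
    codebook = features.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            dist = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = dist.argmin(axis=1)
            for j in range(len(codebook)):
                members = features[labels == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
    return codebook

A speaker model would then be built by pooling the MFCC frames of that speaker's 100 training tokens and passing them to lbg_codebook with size set to 32, 64 or 128.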
Speaker identification was carried out by testing all 2560 test tokens (16 speakers x 10 utterances x 8 test sessions x 2 repetitions) in the following three experiments:
· Experiment 1: the nearest prototype classifier, using the distortions in (1) and (7) and the decision rule in (8);
· Experiment 2: the fuzzy nearest prototype classifier, using the distortion in (1), the fuzzy membership function in (9) with degree of fuzziness m = 1.17, and the decision rule in (13);
· Experiment 3: the fuzzy nearest prototype classifier, using the distortion in (5), the fuzzy membership function in (9) with degrees of fuzziness m = 1.17 and n = 1.06, and the decision rule in (13). A code sketch of the nearest prototype and fuzzy nearest prototype decisions is given after this list.
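The sketch below summarises the two decision procedures. Because equations (1), (5), (7)-(9) and (13) are defined in earlier sections and not repeated here, standard forms are assumed: the distortion is taken as the average minimum squared distance between the test frames and a speaker's codebook, and the membership is assumed to follow the usual fuzzy c-means form with fuzzifier m; the second fuzziness parameter n used with the distortion in (5) is not modelled. The names vq_distortion, identify_crisp and identify_fuzzy are illustrative, not from the paper.

import numpy as np

def vq_distortion(frames, codebook):
    # Average minimum squared distance of the test frames to one speaker's codebook
    # (assumed stand-in for the distortion measures referenced in (1), (5) and (7)).
    dist = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dist.min(axis=1).mean()

def identify_crisp(frames, codebooks):
    # Experiment 1: nearest prototype decision; the least-distortion codebook wins.
    return int(np.argmin([vq_distortion(frames, cb) for cb in codebooks]))

def identify_fuzzy(frames, codebooks, m=1.17):
    # Experiments 2-3: fuzzy nearest prototype decision; the largest membership wins.
    d = np.maximum([vq_distortion(frames, cb) for cb in codebooks], 1e-12)
    # Assumed fuzzy-c-means-style membership: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)).
    u = 1.0 / ((d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
    return int(np.argmax(u))

A test token is classified by extracting its MFCC frames with the front-end sketched earlier and calling identify_crisp or identify_fuzzy with the 16 speaker codebooks.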
The experimental results for the three codebook sizes are presented in Table 1:
Table 1 Speaker Identification Rate (%) for the 16 speakers with three codebook sizes
These experimental results show that the speaker identification system using the fuzzy nearest prototype classifier (Experiments 2 and 3) achieves a significant improvement over the system using the nearest prototype classifier (Experiment 1).