


ICSLP 1990: Kobe, Japan
- The First International Conference on Spoken Language Processing, ICSLP 1990, Kobe, Japan, November 18-22, 1990. ISCA 1990

Temporal Control in the Spoken Language
- Morio Kohno, Tomoko Tanioka:

The nature of timing control in language. 1-4 - Mary E. Beckman, Maria G. Swora, Jane Rauschenberg, Kenneth de Jong:

Stress shift, stress clash, and polysyllabic shortening in a prosodically annotated discourse. 5-8 - W. Nick Campbell:

Evidence for a syllable-based model of speech timing. 9-12 - Patti Price, Colin W. Wightman, Mari Ostendorf, John Bear:

The use of relative duration in syntactic disambiguation. 13-16 - Nobuyoshi Kaiki, Kazuya Takeda, Yoshinori Sagisaka:

Statistical analysis for segmental duration rules in Japanese speech synthesis. 17-20 - Florien J. Koopmans-van Beinum:

Spectro-temporal reduction and expansion in spontaneous speech and read text: the role of focus words. 21-24 - Yasuko Nagano-Madsen:

Perception of mora in the three dialects of Japanese. 25-28
Speech Analysis
- Shihua Wang, Erdal Paksoy, Allen Gersho:

Performance of nonlinear prediction of speech. 29-32 - Ren-Hua Wang, Quan fen Guan, Hiroya Fujisaki:

A method for robust GARMA analysis of speech. 33-36 - Keiichi Tokuda, Takao Kobayashi, Satoshi Imai:

Generalized cepstral analysis of speech - unified approach to LPC and cepstral method. 37-40 - Paul J. Dix, Gerrit Bloothooft, E. J. M. van Mierlo:

A geometrical argument for imposing an additional constraint on temporal decomposition. 41-44 - Keiichi Funaki, Yukio Mitome:

A speech analysis method based on a glottal source model. 45-48 - Ki Yong Lee, Inhyok Cha, Eckho Song, Souguil Ann:

An improved method for multipulse speech analysis. 49-52 - Lu Chang, M. M. Bayoumi:

New results on theory of hidden Markov models. 53-56
Voice Source Dynamics; Facts and Models
- Ronald C. Scherer, Chwen-geng Guo:

Laryngeal modeling: translaryngeal pressure for a model with many glottal shapes. 57-60 - Shigeru Kiritani, Hiroshi Imagawa, Hajime Hirose:

Vocal cord vibration and voice source characteristics - observations by a high-speed digital image recording -. 61-64 - Bert Cranen:

Interpretation of EGG and glottal flow by means of a parametrical glottal geometry model. 65-68 - Inger Karlsson:

Voice source dynamics for female speakers. 69-72 - Takuya Koizumi, Shuji Taniguchi:

A novel model of pathological vocal cords and its application to the diagnosis of vocal cord polyp. 73-76 - Hirohisa Iijima, Nobuhiro Miki, Nobuo Nagai:

Glottal flow analysis based on a finite element simulation of a two-dimensional unsteady viscous fluid. 77-80 - Hideki Kasuya, Yuji Ando, Jinlin Lu, Osamu Komuro:

A voice source model for synthesizing speech with various voice quality variations. 81-84 - Ailbhe Ní Chasaide, Christer Gobl:

Linguistic and paralinguistic variation in the voice source. 85-88
Speech Coding and Transmission
- Paavo Alku:

Glottal-LPC based coding of telephone band vowels with simple all-pole excitation. 89-92 - Suat Yeldener, Ahmet M. Kondoz, Barry G. Evans:

Sine wave excited linear predictive coding of speech. 93-96 - Toshiki Miyano, Kazunori Ozawa:

Improvement on 8 kb/s CELP using learned codebook: LCELP. 97-100 - Samir Saoudi, Jean-Marc Boucher, Alain Le Guyader:

Optimal scalar quantization of the LSP and the LAR for speech coding. 101-104 - Shinya Takahashi, Kunio Nakajima:

4.8 kbps speech coding using frame synchronous time domain compression (FS-TDC). 105-108 - Hirohisa Tasaki, Kunio Nakajima:

Time-domain flexible matrix quantization for very-low-rate speech coding. 109-112 - Tomohiko Taniguchi, Mark Johnson, Yasuji Ohta:

Multi-vector pitch-orthogonal LPC: quality speech with low complexity at rates between 4 and 8 kbps. 113-116 - Yair Shoham, Erik Ordentlich:

Low-delay code-excited linear-predictive coding of wideband speech at 32 kbps. 117-120 - Yoshihiro Unno, Makio Nakamura, Toshifumi Sato, Toshiki Miyano, Kazunori Ozawa:

11.2 kb/s LCELP speech codec for digital cellular radio. 121-124 - Tomoyuki Ohya, Hirohito Suda, Toshio Miki, Shinji Uebayashi, Takehiro Moriya:

Revised TC-WVQ speech coder for mobile communication system. 125-128
Extraction and Processing of Voice Individuality
- Hideki Noda, Masuzo Yanagida:

Extraction of phoneme-dependent individuality using HMM-based segmentation for text-independent speaker recognition. 129-132 - Julian P. Eatock, John S. D. Mason:

Automatically focusing on good discriminating speech segments in speaker recognition. 133-136 - Tomoko Matsui, Sadaoki Furui:

Text-independent speaker recognition using vocal tract and pitch information. 137-140 - Aaron E. Rosenberg, Chin-Hui Lee, Frank K. Soong, Maureen A. McGee:

Experiments in automatic talker verification using sub-word unit hidden Markov models. 141-144 - Myoung-Wan Koo, Chong Kwan Un, Hwang Soo Lee, Jun Mo Koo, H. R. Kim:

A comparative study of speaker adaptation methods for HMM-based speech recognition. 145-148 - Hiroaki Hattori, Satoshi Nakamura, Kiyohiro Shikano, Shigeki Sagayama:

Speaker weighted training of HMM using multiple reference speakers. 149-152 - Francis Kubala, Richard M. Schwartz:

Improved speaker adaptation using multiple reference speakers. 153-156 - Masanobu Abe, Shigeki Sagayama:

Statistical study on voice individuality conversion across different languages. 157-160 - Hiroshi Matsumoto, Hirowo Inoue:

A minimum distortion spectral mapping applied to voice quality conversion. 161-164
Voice Source Characteristics and Synthesis
- Anna M. Barney, Christine H. Shadle, David W. Thomas:

Airflow measurement in a dynamic mechanical model of the vocal folds. 165-168 - Jo Estill, Noriko Kobayashi, Kiyoshi Honda, Yuki Kakita:

A study on respiratory and glottal controls in six western singing qualities: airflow and intensity measurement of professional singing. 169-172 - Satoshi Imaizumi, Hiroshi Imagawa, Shigeru Kiritani:

A model of dynamic characteristics of the voice source and formant trajectories. 173-176 - Takayuki Nakajima, Hiroshi Ohmura:

Pole-zero structure based on two-source vocal tract model, PSE inspection of continuous speech vowel part. 177-180 - Gang Wang, Nobuhiro Miki, Nobuo Nagai:

Evaluation of speech synthesis using an ARMA estimation and excitation sources. 181-184 - Kazuhiko Iwata, Yukio Mitome, Jun Kametani, Minoru Akamatsu, Seimitsu Tomotake, Kazunori Ozawa, Takao Watanabe:

A rule-based speech synthesizer using pitch controlled residual wave excitation method. 185-188 - Kenzo Itoh, Hideyuki Mizuno, Tetsuya Nomura, Hirokazu Sato:

Phoneme segment concatenation and excitation control based on spectral distortion criterion for speech synthesis. 189-192 - Stephen D. Pearson, Hector R. Javkin, Kenji Matsui, Takahiro Kamai:

Text-to-speech synthesis using a natural voice source. 193-196 - Paavo Alku, Erkki Vilkman, Unto K. Laine:

A comparison of EGG and a new automatic inverse filtering method in phonation change from breathy to normal. 197-200
Speech Recognition and Enhancement
- Ki Chul Kim, Hyunsoo Yoon, Jung Wan Cho:

Enhanced parametric representation using binarized spectrum. 201-204 - Kiyoshi Asai, Shigeru Chiba:

Voiced-unvoiced classification using weighted distance measures. 205-208 - Kei Miki:

Phoneme recognition using a hierarchical time spectrum pattern. 209-212 - Susumu Sato, Takeshi Fukabayashi:

Recognition of plosive using mixed features by fisher's linear discriminant. 213-216 - Akio Ando, Kazuhiko Ozeki:

Clustering algorithms to minimize recognition error function and their applications to the vowel template learning. 217-220 - Changfu Wang, Hiroya Fujisaki, Keikichi Hirose:

Chinese four tone recognition based on the model for process of generating F0 contours of sentences. 221-224 - Nam Soo Kim, Chong Kwan Un:

Generalized training of hidden Markov model parameters for speech recognition. 225-228 - Tatsuya Kawahara, Toru Ogawa, Shigeyoshi Kitazawa, Shuji Doshita:

Phoneme recognition by combining Bayesian linear discriminations of selected pairs of classes. 229-232 - S. Atkins, P. E. Kenne, D. Landy, S. Nulsen, Mary O'Kane:

WAL - a speech recognition programming language. 233-236 - Mario Rossi:

Automatic segmentation: why and what segments? 237-240 - Shozo Makino, Akinori Ito, Mitsuru Endo, Ken'iti Kido:

A Japanese text dictation system based on phoneme recognition using a modified LVQ2 method. 241-244 - Shinobu Mizuta, Kunio Nakajima:

An optimal discriminative training method for continuous mixture density HMMs. 245-248 - Sekharjit Datta, M. Al-Zabibi:

Discrimination of words in a large vocabulary speech recognition system. 249-252 - Jun Mo Koo, Chong Kwan Un, Hwang Soo Lee, H. R. Kim, Myoung-Wan Koo:

A recognition time reduction algorithm for large-vocabulary speech recognition. 253-256 - Hyung Soon Kim, Chong Kwan Un:

Speech recognition method based on the dual processing nature of speech perception. 257-260 - Koichi Shinoda, Ken-ichi Iso, Takao Watanabe:

Speaker adaptation for demi-syllable based speech recognition using continuous HMM. 261-264 - Toby E. Skinner:

Speech signal processing on a neurocomputer. 265-268 - Shigeru Ono:

Syllable structure parsing for continuous speech recognition. 269-272 - Hiroyuki Tsuboi, Hiroshi Kanazawa, Yoichi Takebayashi:

An accelerator for high-speed spoken word-spotting and noise immunity learning system. 273-276 - Zainul Abidin Md. Sharrif, Masuri Othman, Mohammad Ibrahim A. K. B. Maiden:

Recognition of standard Malaysian language pronunciation. 277-280 - M. Djoudi, Jean Paul Haton:

The SAPHA acoustic-phonetic decoder system for standard Arabic. 281-284 - Markus Bodden:

A concept for a cocktail-party-processor. 285-288 - Tsuyoshi Usagawa, Yuji Morita, Masanao Ebata:

Remote control system using speech-reduction of known noise. 289-292 - Yumi Takizawa, Masahiro Hamada:

Lombard speech recognition by formant-frequency-shifted LPC cepstrum. 293-296 - Hiroshi Matsumoto, Hirokazu Mitsui:

A robust distance measure based on group delay difference weighted by power spectra. 297-300 - B. Yegnanarayana, Hema A. Murthy, V. R. Ramachandran:

Speech enhancement using group delay functions. 301-304 - Hong Wang, Fumitada Itakura:

Recovery of reverberated speech using multi-microphone sub-band envelope estimation. 305-308 - Alain Marchal, Marie-Hélène Casanova, P. Gavarry, M. Avon:

DISPE: a divers' speech data-base. 309-312
Synthesis of Spoken Language
- Rolf Carlson, Björn Granström, Sheri Hunnicutt:

Lexical components in rule-based speech systems. 313-316 - Ken Ceder, Bertil Lyberg:

The integration of linguistic levels in a text-to-speech conversion system. 317-320 - Tohru Shimizu, Norio Higuchi, Hisashi Kawai, Seiichi Yamamoto:

The linguistic processing module for Japanese text-to-speech system. 321-324 - Yukiko Yamaguchi, Tatsuro Matsumoto:

A neural network approach to multi-language text-to-speech system. 325-328 - Hiroya Fujisaki, Keikichi Hirose, Yasuharu Asano:

Proposal and evaluation of a new type of terminal analog speech synthesizer. 329-332 - Bathsheba J. Malsheen, Mariscela Amador-Hernandez:

The interrelationship of intelligibility and naturalness in text-to-speech. 333-336 - Tomohisa Hirokawa, Kazuo Hakoda:

Segment selection and pitch modification for high quality speech synthesis using waveform segments. 337-340 - Kazuya Takeda, Katsuo Abe, Yoshinori Sagisaka:

On the unit search criteria and algorithms for speech synthesis using non-uniform units. 341-344 - Katsuhiko Shirai, Y. Sato, Kazuo Hashimoto:

Speech synthesis using superposition of sinusoidal waves generated by synchronized oscillators. 345-348 - David Rainton, Steve J. Young:

Time-frequency spectral analysis of speech. 349-352 - Bert Van Coile:

Inductive learning of grapheme-to-phoneme rules. 765-768 - Yoichi Yamashita, Hiroyuki Fujiwara, Yasuo Nomura, Nobuyoshi Kaiki, Riichiro Mizoguchi:

A support environment based on rule interpreter for synthesis by rule. 769-772 - Jung-Chul Lee, Yong-Ju Lee, Hee-il Han, Eung-Bae Kim, Chang-Joo Kim, Kyung-Tae Kim:

Speech synthesis using demisyllables for Korean: a preliminary system. 773-776 - Seung-Kwon Ahn, Koeng-Mo Sung:

The rules in a Korean text-to-speech system. 777-780 - Chi-Shi Liu, Wern-Jun Wang, Shiow-Min Yu, Hsiao-Chuan Wang:

Mandarin speech synthesis by the unit of coarticulatory demi-syllable. 781-784 - Ryunen Teranishi:

A study on various prosody styles in Japanese speech synthesizable with the text-to-speech system. 785-788 - Hiroki Kamanaka, Takashi Yazu, Keiichi Chihara, Makoto Morito:

Japanese text-to-speech conversion system. 789-792 - Yasushi Ishikawa, Kunio Nakajima:

Neural network based concatenation method of synthesis units for synthesis by rule. 793-796 - Norio Higuchi, Hisashi Kawai, Tohru Shimizu, Seiichi Yamamoto:

Improvement of the synthetic speech quality of the formant-type speech synthesizer and its subjective evaluation. 797-800 - Thierry Galas, Xavier Rodet:

A parametric model of speech signals: application to high quality speech synthesis by spectral and prosodic modifications. 801-804 - Tomoki Hamagami, Shinichiro Hashimoto:

The improved source model for high-quality synthetic speech sound. 805-808 - Kazuo Hakoda, Shin'ya Nakajima, Tomohisa Hirokawa, Hideyuki Mizuno:

A new Japanese text-to-speech synthesizer based on COC synthesis method. 809-812 - G. M. Asher, K. Mervyn Curtis, J. R. Andrews, J. Burniston:

A parallel multialgorithmic approach for an accurate and fast English text to speech transcriber. 813-816 - K. Mervyn Curtis, G. M. Asher, S. E. Pack, J. R. Andrews:

A highly programmable formant speech synthesiser utilising parallel processors. 817-820 - Kris Maeda, Yasuki Yamashita, Yoichi Takebayashi:

Enhancement of human-computer interaction through the synthesis of nonverbal expressions. 821-824 - W. Nick Campbell, Stephen D. Isard, Alex I. C. Monaghan, Jo Verhoeven:

Duration, pitch and diphones in the CSTR TTS system. 825-828 - Sin-Horng Chen, Su-Min Lee, Saga Chang:

A Chinese fundamental frequency synthesizer based on a statistical model. 829-832 - Cinzia Avesani:

A contribution to the synthesis of Italian intonation. 833-836 - Kazuhiko Iwata, Yukio Mitome, Takao Watanabe:

Pause rule for Japanese text-to-speech conversion using pause insertion probability. 837-840 - Hiroya Fujisaki, Keikichi Hirose, Pierre A. Hallé, Haitao Lei:

Analysis and modeling of tonal features in polysyllabic words and sentences of the standard Chinese. 841-844 - Akira Yamamura, Hiroharu Kunizawa, Noboru Ueji, Hiroshi Itoyama, Osamu Kakusho:

Voice response unit embedded in factory automation systems. 845-848 - Klaus Wothke:

Tetos - a text-to-speech system for German. 849-852 - Michel Divay:

A written text processing expert system for text to phoneme conversion. 853-856 - Mikio Yamaguchi:

Trial production of a module for speech synthesis by rule. 857-860
Phoneme Recognition
- Katsuhiko Shirai, Naoki Hosaka, Eiichiro Kitagawa, T. Endo:

Speaker adaptable phoneme recognition selecting reliable acoustic features based on mutual information. 353-356 - Claude Montacié, Marie-José Caraty, Xavier Rodet:

Experiments in the use of an automatic learning system for acoustic-phonetic decoding. 357-360 - Shigeki Sagayama, Shigeru Honrna:

Estimation of unknown context using a phoneme environment clustering algorithm. 361-364 - Yves Laprie, Jean Paul Haton, Jean-Marie Pierrel:

Phonetic triplets in knowledge based approach of acoustic-phonetic decoding. 365-368 - Yasuo Ariki, Andrew M. Sutherland, Mervyn A. Jack:

Optimisation of English phoneme recognition based on HMM. 369-372 - Horacio Franco, António Joaquim Serralheiro:

A new discriminative training algorithm for hidden Markov models. 373-376 - Yoshimitsu Hirata, Seiichi Nakagawa:

Speaker adaptation of continuous parameter HMM. 377-380 - Tatsuya Hirahara, Hitoshi Iwamida:

Auditory spectrograms in HMM phoneme recognition. 381-384
Recent Progress in Speech Perception Research
- Sieb G. Nooteboom, P. Scharpff, Vincent J. van Heuven:

Effects of several pausing strategies on the recognizability of words in synthetic speech. 385-388 - Yoshinori Kitahara, Yoh'ichi Tohkura:

The role of temporal structure of speech in word perception and spoken language understanding. 389-392 - Judith C. Goodman, Howard C. Nusbaum, Lisa Lee, Kevin Broihier:

The effects of syntactic and discourse variables on the segmental intelligibility of speech. 393-396 - Shigeaki Amano:

Lexical and coarticulatory effects on phoneme monitoring before and after a word identification point in spoken Japanese words. 397-400 - David B. Pisoni, Ellen E. Garber:

Lexical memory in visual and auditory modalities: the case for a common mental lexicon. 401-404 - John J. Ohala, Elizabeth Shriberg:

Hypercorrection in speech perception. 405-408 - Howard C. Nusbaum:

The role of learning and attention in speech perception. 409-412 - Dominic W. Massaro, Michael M. Cohen:

The joint influence of stimulus information and context in speech perception. 413-416 - Hiroya Fujisaki, Keikichi Hirose, Sumio Ohno, Nobuaki Minematsu:

Influence of context and knowledge on the perception of continuous speech. 417-420
Speech Production, Prosody and Analysis
- Arne Kjell Foldvik, O. Husby, Jorn Kvaerness, I. C. Nordli, Peter A. Rinck:

MRI (magnetic resonance imaging) film of articulatory movements. 421-424 - Masafumi Matsumura, Atsushi Sugiura:

Modeling of 3-dimensional vocal tract shapes obtained by magnetic resonance imaging for speech synthesis. 425-428 - Tokihiko Kaburagi, Masaaki Honda:

Ultrasonic measurement of tongue motion. 429-432 - Kunitoshi Motoki, Nobuhiro Miki, Nobuo Nagai:

Measurement of sound wave characteristics in the vocal tract. 433-436 - Hisayoshi Suzuki, Takayoshi Nakai, Jianwu Dang, Chengxiang Lu:

Speech production model involving subglottal structure and oral-nasal coupling through closed velum. 437-440 - Yorinobu Sonoda, Keisuke Mori, Tetsuaki Kuriyama:

Articulatory characteristics of lip shape during the production of Japanese. 441-444 - Naoki Kusakawa, Kiyoshi Honda, Yuki Kakita:

Sequential control model of speech articulation in producing word utterance. 445-448 - Zyun'ici B. Simada, Satoshi Horiguchi, Seiji Niimi, Hajime Hirose:

Sternohyoid muscle activity and pitch control at the onset of utterances. 449-452 - Junichi Azuma, Yoshimasa Tsukuma:

Prosodic features marking the major syntactic boundary of Japanese: a study on syntactically ambiguous sentences of the Kinki dialect. 453-456 - Hai-Dong Wang, Gérard Bailly, Denis Tuffelli:

Automatic segmentation and alignment of continuous speech based on temporal decomposition model. 457-460 - Hee-Il Hahn, Minsoo Hahn:

Voiced/unvoiced/silence classification of spoken Korean. 461-464 - E. Angderi, M. Barsotti, L. Mazzei, L. Vttrano, R. Volpentesta:

Vocal pauses in teaching: statistical analysis and applications. 465-468 - Shubha Kadambe, Gloria Faye Boudreaux-Bartels:

A pitch detector based on event detection using the dyadic wavelet transform. 469-472 - Hiroya Fujisaki, Keikichi Hirose, Shigenobu Seto:

Proposal and evaluation of a new scheme for reliable pitch extraction of speech. 473-476 - Masahide Sugiyama:

Spectral interpolation using distortion geodesic lines. 477-480 - Hirofumi Yogo, Naoki Inagaki:

Adaptive speech processing using an accelerated stochastic approximation method. 481-484
The Role of Prosody in Production and Perception of Spoken Language
- Hiroya Fujisaki, Keikichi Hirose, Noboru Takahashi:

Manifestation of linguistic and para-linguistic information in the voice fundamental frequency contours of spoken Japanese. 485-488 - Gösta Bruce, Paul Touati:

Analysis and synthesis of dialogue prosody. 489-492 - Shoichi Takeda, Akira Ichikawa:

Analysis of prosodic features of prominence in spoken Japanese sentences. 493-496 - Nancy A. Daly, Victor W. Zue:

Acoustic, perceptual, and linguistic analyses of intonation contours in human/machine dialogues. 497-500 - Haruo Kubozono:

The role of the mora in speech production of Japanese. 501-504 - Yoshimasa Tsukuma, Junichi Azuma:

Prosodic features determining the comprehension of syntactically ambiguous sentences in Mandarin Chinese. 505-508 - Dieter Huber:

Prosodic transfer in spoken language interpretation. 509-512 - Miyoko Sugito:

On the role of pauses in production and perception of discourse. 513-516 - Kikuo Maekawa:

Production and perception of the accent in the consecutively devoiced syllables in Tokyo Japanese. 517-520
Word Recognition
- Fikret S. Gürgen, Shigeki Sagayama, Sadaoki Furui:

Line spectrum pair frequency - based distance measures for speech recognition. 521-524 - Hiroshi Shimodaira, Yoshio Horiuchi, Masayuki Kimura:

Speaker independent isolated word recognition using local and global structural features. 525-528 - Jorge A. Gurlekian, Horacio Franco, Miguel Santagada:

Speaker independent recognition of isolated Spanish digits. 529-532 - Nobuo Sugi, Jun'ichi Iwasaki, Hiroshi Matsu'ura, Tsuneo Nitta, Akira Fukumine, Akira Nakayama:

Speaker independent word recognition system based on the structured transition network of phonetic segments. 533-536 - Akihiro Imamura, Yoshitake Suzuki:

Speaker-independent word spotting and a transputer-based implementation. 537-540 - Jin Yul Kim, Yun-Seok Cho, Soon Young Yoon, Hwang Soo Lee, Chong Kwan Un:

An efficient viterbi scoring architecture for HMM-based isolated word recognition systems. 541-544 - Tatsuo Matsuoka:

Word spotting using context-dependent phoneme-based HMMs. 545-548 - V. Vittorelli, Gilles Adda, Roberto Billi, Lou Boves, Mervyn A. Jack, Enrico Vivalda:

POLYGLOT: multilingual speech recognition and synthesis. 549-552 - Satoshi Takahashi, Shoichi Matsunaga, Shigeki Sagayama:

Isolated word recognition using pitch pattern information. 553-556
Perception of Spoken Language
- Makio Kashino:

Distribution of perceptual cues for Japanese intervocalic stop consonants. 557-560 - Winfried Datscheweit:

Frication noise and formant-onset frequency as independent cues for the perception of /f/, /s/ and /ʃ/ in vowel-fricative-vowel stimuli. 561-564 - Minoru Tsuzaki, Jorge A. Gurlekian:

Effects of different standards on the within-category discrimination of synthesized /ABA/ sequences: comparison between Japanese and Spanish. 565-568 - Masato Akagi:

Contextual effect models and psychoacoustic evidence for the models. 569-572 - Sumi Shigeno:

Vowel-contingent anchoring effects on the perception of stop consonants. 573-576 - Dominic W. Massaro:

Process and connectionist models of speech perception. 577-580 - Anne Cutler, Dennis Norris, Brit van Ooyen:

Vowels as phoneme detection targets. 581-584 - Noriko Uosaki, Morio Kohno:

Perception of rhythm: a comparison between Americans and Japanese. 585-588 - Sotaro Sekimoto:

Perceptual frequency normalization of frequency compressed or expanded voiceless consonants. 589-592
Perception, Impairments/Aids, Phonetics in Language Teaching and Speech Coding
- Akiko Hayashi, Satoshi Imaizumi, Takehiko Harada, Hideaki Seki, Hiroshi Hosoi:

Effects of temporal factors on the speech perception of the hearing impaired. 593-596 - Shinobu Masaki, Itaru F. Tatsumi, Sumiko Sasanuma:

Analysis of temporal coordination between articulatory movements and pitch control in the realization of Japanese word accent by a patient with apraxia of speech. 597-600 - Brian C. J. Moore, Jeannette Seloover Johnson, Vincent Pluvinage, Teresa M. Clark:

Multiband dynamic range compression sound processing for hearing impaired patients: effect on intelligibility of speech in background noise. 601-604 - Takao Mizutani, Kiyoshi Hashimoto, Masahiko Wakumoto, Ken-ich Michi, Hareo Hamada, Tanetoshi Miura:

New graphical expression of the high-speed palatographic data in study of the articulatory behaviors of the tongue. 605-608 - Makoto Kariyasu, Kukiko Maruyama:

Aging in the rate and regularity of maximum syllable repetition under bite-block. 609-612 - Minje Zhi, Yong-Ju Lee:

Vowel quantity contrast in Korean: production and perception. 613-616 - Jan-Olof Svantesson:

Phonetic correlates of stress in Mongolian. 617-620 - Ray Iwata, Hajime Hirose, Seiji Niimi, Masayuki Sawashima, Satoshi Horiguchi:

Syllable final stops in East Asian languages: southern Chinese, Thai and Korean. 621-624 - Seiji Niimi, Qun Yan, Satoshi Horiguchi, Hajime Hirose:

An electromyographic study on laryngeal adjustment for production of the light tone in Mandarin Chinese. 625-628 - Jingxu Cui, Shuichi Itahashi:

A comparison of the articulation of the Chinese /i, l, l/ by Chinese and Japanese speakers. 629-632 - Hirotake Nakashima, Masao Yamaguchi:

The durations of Japanese long vowels and geminated consonants uttered by Indonesians. 633-636 - Izumi Saita:

On phrasing of Japanese language learners. 637-640 - PROTS (pronunciation training system) - Kawai Musical Instruments.

- Yair Shoham:

Constrained-stochastic excitation coding of speech at 4.8 kb/s. 645-648 - Fumie Hazu, Akihiko Sugiyama, Masahiro Iwadare, Takao Nishitani:

Adaptive transform coding with an adaptive block size using a modified DCT. 649-652 - Takehiro Moriya:

Medium-delay 8 kbit/s speech coder based on conditional pitch prediction. 653-656 - Sung Ro Lee, Hwang Soo Lee, Chong Kwan Un:

A low rate VQ speech coding algorithm with variable transmission frame length. 657-660
Neural Networks for Speech Processing I, II
- Ken-ichi Iso, Takao Watanabe:

Speech recognition using demi-syllable neural prediction model. 661-664 - Frédéric Bimbot, Gérard Chollet, Jean-Pierre Tubach:

Phonetic features extraction using time-delay neural networks. 665-668 - Masami Nakamura, Shinichi Tamura:

Vowel recognition by phoneme filter neural networks. 669-672 - Kari Torkkola, Mikko Kokkonen:

A comparison of two methods to transcribe speech into phonemes: a rule-based method vs. back-propagation. 673-676 - Jun-ichi Takami, Shigeki Sagayama:

Phoneme recognition by pairwise discriminant TDNNs. 677-680 - Yasuyuki Masai, Hiroshi Matsu'ura, Tsuneo Nitta:

Speaker independent speech recognition based on neural networks of each category with embedded eigenvectors. 681-684 - Kiyoaki Aikawa, Alexander H. Waibel:

Speech recognition using sub-phoneme recognition neural network. 685-688 - Li-Qun Xu, Tie-Cheng Yu, G. D. Tattersall:

Speech recognition based on the integration of FSVQ and neural network. 689-692 - Samir I. Sayegh:

Fast text-to-speech learning. 693-696 - Nelson Morgan, Chuck Wooters, Hervé Bourlard, Michael Cohen:

Continuous speech recognition on the resource management database using connectionist probability estimation. 1337-1340 - Eiichi Tsuboka, Yoshihiro Takada, Hisashi Wakita:

Neural predictive hidden Markov model. 1341-1344 - Yasuhiro Minami, Toshiyuki Hanazawa, Hitoshi Iwamida, Erik McDermott, Kiyohiro Shikano, Shigeru Katagiri, Masaona Kagawa:

On the robustness of HMM and ANN speech recognition algorithms. 1345-1348 - Hidefumi Sawai:

The TDNN-LR large-vocabulary and continuous speech recognition system. 1349-1352 - Rémy Bulot, Henri Meloni, Pascal Nocera:

Rule-driven neural networks for acoustic-phonetic decoding. 1353-1356 - Franck Poirier:

Knowledge-based segmentation and feature maps for speech recognition. 1357-1360 - Mark A. Fanty, Ronald A. Cole:

Speaker-independent English alphabet recognition: experiments with the e-set. 1361-1364 - Pinaki Poddar, P. V. S. Rao:

Neural network based segmentation of continuous speech. 1365-1368 - Tomio Takara, Motonori Tamaki:

A normalization of coarticulation of connected vowels using neural network. 1369-1372 - Tomio Watanabe, Masaki Kohda:

Lip-reading of Japanese vowels using neural networks. 1373-1376 - H. Lucke, Frank Fallside:

Application of the compositional representation to lexical access using neural networks. 1377-1380 - Abdul Mobin, Shyam S. Agrawal, Anil Kumar, K. D. Pavate:

A voice input-output system using isolated words. 1381-1384 - Tatiana Slama-Cazacu:

A psycholinguistic model of first and second language learning. 1385-1388
Continuous Speech Recognition
- Yunxin Zhao, Hisashi Wakita:

Experiments with a speaker-independent continuous speech recognition system on the TIMIT database. 697-700 - Walter Weigel:

Continuous speech recognition with vowel-context-independent hidden-Markov-models for demisyllables. 701-704 - Satoru Hayamizu, Kai-Fu Lee, Hsiao-Wuen Hon:

Description of acoustic variations by tree-based phone modeling. 705-708 - Frank K. Soong, Eng-Fong Huang:

A tree-trellis based fast search for finding the n best sentence hypotheses in continuous speech recognition. 709-712 - Fabio Gabrieli, A. Dimundo, Antonello Rizzi, G. Colangelit, A. Stagni:

Modeling vocabularies for a connected speech recognizer. 713-716 - Takeshi Kawabata, Toshiyuki Hanazawa, Katsunobu Itou, Kiyohiro Shikano:

Japanese phonetic typewriter using HMM phone units and syllable trigrams. 717-720 - Minoru Shigenaga, Yoshihiro Sekiguchi, Toshihiko Hanagata, Takehiro Yamaguchi, Ryouta Masuda:

A large vocabulary continuous speech recognition system with high prediction capability. 721-724 - Yutaka Kobayashi, Yasuhisa Niimi:

Evaluation of a speech understanding system - suskit-2. 725-728 - Patti Price, Victor Abrash, Douglas E. Appelt, John Bear, Jared Bernstein, Bridget Bly, John Butzberger, Michael Cohen, Eric Jackson, Robert C. Moore, Douglas B. Moran, Hy Murveit, Mitchel Weintraub:

Spoken language system integration and development. 729-732
Modeling of First and Second Language Acquisition
- Paula Menyuk:

Relationship between speech perception and production in language acquisition. 733-736 - Andrew N. Meltzoff, Alison Gopnik:

Relations between thought and language in infancy. 737-740 - Morio Kohno:

The role of rhythm in the first and second language acquisition. 741-744 - Patricia K. Kuhl:

Towards a new theory of the development of speech perception. 745-748 - Shozo Kojima:

Audition and speech perception in the chimpanzee. 749-752 - Pierre A. Hallé, Benedicte de Boysson-Bardies:

Prosodic and phonetic patterning of disyllables produced by Japanese versus French infants. 753-756 - Reiko Akahane-Yamada, Yoh'ichi Tohkura:

Perception and production of syllable-initial English /r/ and /l/ by native speakers of Japanese. 757-760 - Michiko Mochizuki-Sudo, Shigeru Kiritani:

The perception of inter-stress-intervals in Japanese speakers of English. 761-764
Application of Speech Recognition / Synthesis Technologies
- David A. Berkley, James L. Flanagan:

Integration of speech recognition, text-to-speech synthesis, and talker verification into a hands-free audio/image teleconferencing system (HuMaNet). 861-864 - G. Velius, Candace A. Kamm, Mary Jo Altom, T. C. Feustel, Marian J. Macchi, Murray F. Spiegel:

Bellcore efforts in applying speech technology to telephone network services. 865-868 - Fumihiro Yato, Kazuki Katagisi, Norio Higuchi:

Extension number guidance system. 869-872 - Hirokazu Sato:

Japanese text-to-speech equipment: current applications and trends. 873-876 - Mariscela Amador-Hernandez, Bathsheba J. Malsheen:

The synthesis of dialectal variation in English and Spanish. 877-880 - Hiroyoshi Saito, Motoshi Kurihara, Ken-ichiro Kobayashi, Yoshiyuki Hara, Naritoshi Saito:

A Japanese text-to-speech system for electronic mail. 881-884 - Tsuneo Nitta, Nobuo Sugi:

Issues concerning voice input applications. 885-888 - Toshiaki Tsuboi, Noboru Sugamura:

A prototype for a speech-to-text transcription system. 889-892 - Masahiro Hamada, Yumi Takizawa, Takeshi Norimatsu:

A noise robust speech recognition system. 893-896
Language Modeling
- A. Corazza, Renato de Mori, Roberto Gretter, Giorgio Satta:

Computation of probabilities for island-driven parsers. 897-900 - Keh-Yih Su, Tung-Hui Chiang, Yi-Chung Lin:

A unified probabilistic score function for integrating speech and language information in spoken language processing. 901-904 - Kenji Kita, Toshiyuki Takezawa, Junko Hosaka, Terumasa Ehara, Tsuyoshi Morimoto:

Continuous speech recognition using two-level LR parsing. 905-908 - Hiroaki Saito:

Gap-filling LR parsing for noisy spoken input: towards interactive speech recognition. 909-912 - S. Bornerand, Françoise D. Néel, Gérard Sabah:

Semantic weights derived from syntax-directed understanding in DTW-based spoken language processing. 913-916 - Hiroaki Kitano, Tetsuya Higuchi, Masaru Tomita:

Massively parallel spoken language processing using a parallel associative processor IXM2. 917-920 - Tsuyoshi Morimoto, Kiyohiro Shikano, Hitoshi Iida, Akira Kurematsu:

Integration of speech recognition and language processing in spoken language translation system (SL-TRANS). 921-924 - Toshiya Sakano, Tsuyoshi Morimoto:

Design principle of language model for speech recognition. 925-928 - Shoichi Matsunaga, Shigeki Sagayama:

Sentence speech recognition using semantic dependency analysis. 929-932
Phonetics and Phonology
- Leigh Lisker:

Distinctive, redundant, predictable, necessary, sufficient: accounting for English /bdg/-/ptk/. 933-936 - Rob Kassel, Victor W. Zue:

An information theoretic approach to the study of phoneme collocational constraints. 937-940 - Bruce L. Derwing, Terrance M. Nearey:

Real-time effects of some intrasyllabic collocational constraints in English. 941-944 - Paul Dalsgaard, William J. Barry:

Acoustic-phonetic features in the framework of neural-network multi-lingual label alignment. 945-948 - James Hieronymus:

Preliminary study of vowel coarticulation in British English. 949-952 - Caroline B. Huang:

Effects of context, stress, and speech style on American vowels. 953-956 - M. Djoudi, H. Aouizerat, Jean Paul Haton:

Phonetic study and recognition of standard Arabic emphatic consonants. 957-960 - Daniel Recasens, Edda Farnetani:

Articulatory and acoustic properties of different allophones of /l/ in American English, Catalan and Italian. 961-964 - Hiroshi Suzuki, Ghen Ohyama, Shigeru Kiritani:

In search of a method to improve the prosodic features of English spoken by Japanese. 965-968
Assessment / Human Factors, Database and Neural Networks
- Zinny S. Bond, Thomas J. Moore:

A note on loud and Lombard speech. 969-972 - Ute Jekosch:

A weighted intelligibility measure for speech assessment. 973-976 - Shinji Hayashi:

Improvements in binaural articulation score by simulated localization using head-related transfer functions. 977-980 - Kim E. A. Silverman, Sara Basson, Suzi Levas:

Evaluating synthesiser performance: is segmental intelligibility enough? 981-984 - Fumio Maehara, Masamichi Nakagawa, Kunio Nobori, Toshiyuki Maeda, Tsutomu Mori, Makoto Fujimoto:

Media conversion into language and voice for intelligent communication. 985-988 - Rolf Carlson, Björn Granström, Lennart Nord:

Segmental intelligibility of synthetic and natural speech in real and nonsense words. 989-992 - Chorkin Chan, Ren-Hua Wang:

The HKU-USTC speech corpus. 993-996 - Torbjørn Svendsen, Knut Kvale:

Automatic alignment of phonemic labels with continuous speech. 997-1000 - Denis Tuffelli, Hai-Dong Wang:

TELS: a speech time-expansion labelling system. 1001-1004 - Kazuhiro Arai, Yoichi Yamashita, Tadahiro Kitahashi, Riichiro Mizoguchi:

A speech labeling system based on knowledge processing. 1005-1008 - Hans G. Tillmann, Maximilian Hadersbeck, Hans Georg Piroth, Barbara Eisen:

Development and experimental use of PHONWORK, a new phonetic workbench. 1009-1012 - Hiroyuki Chimoto, Hideaki Shinchi, Hideki Hashimoto, Shinya Amano:

A speech recognition research environment based on large-scale word and concept dictionaries. 1013-1016 - Benjamin Chigier, Judith Spitz:

Are laboratory databases appropriate for training and testing telephone speech recognizers? 1017-1021 - Sven W. Danielsen:

Standardisation of speech input assessment within the SAM ESPRIT project. 1021-1024 - Hiroshi Irii, Kenzo Itoh, Nobuhiko Kitawaki:

Multilingual speech data base for evaluating quality of digitized speech. 1025-1028 - Lizhong Wu, Frank Fallside:

The optimal gain sequence for fastest learning in connectionist vector quantiser design. 1029-1032 - Tony Robinson, John Holdsworth, Roy D. Patterson, Frank Fallside:

A comparison of preprocessors for the Cambridge recurrent error propagation network speech recognition system. 1033-1036 - Robert B. Allen, Candace A. Kamm, S. B. James:

A recurrent neural network for word identification from phoneme sequences. 1037-1040 - Lieven Depuydt, Jean-Pierre Martens, Luc Van Immerseel, Nico Weymaere:

Improved broad phonetic classification and segmentation with a neural network and a new auditory model. 1041-1044 - Kazuaki Obara, Hideyuki Takagi:

Formant extraction model by neural networks and auditory model based on signal processing theory. 1045-1048 - Noboru Kanedera, Tetsuo Funada:

/b, d, g/ recognition with elliptic discrimination neural units. 1049-1052 - Helen M. Meng, Victor W. Zue:

A comparative study of acoustic representations of speech for vowel classification using multi-layer perceptrons. 1053-1056 - Yong Duk Cho, Ki Chul Kim, Hyunsoo Yoon, Seung Ryoul Maeng, Jung Wan Cho:

Extended Elman's recurrent neural network for syllable recognition. 1057-1060 - Hong C. Leung, James R. Glass, Michael S. Phillips, Victor W. Zue:

Detection and classification of phonemes using context-independent error back-propagation. 1061-1064 - Shigeru Chiba, Kiyoshi Asai:

A new method of consonant detection and classification using neural networks. 1065-1068 - Shigeyoshi Kitazawa, Masahiro Serizawa:

An artificial neural network for the burst point detection. 1069-1072 - Claude Lefèbvre, Dariusz A. Zwierzynski:

The use of discriminant neural networks in the integration of acoustic cues for voicing into a continuous-word recognition system. 1073-1076 - Kouichi Yamaguchi, Kenji Sakamoto, Toshio Akabane, Yoshiji Fujimoto:

A neural network for speaker-independent isolated word recognition. 1077-1080
Speech I/O Assessment and Database I, II
- Shuichi Itahashi:

Recent speech database projects in Japan. 1081-1084 - Joon-Hyuk Choi, Kyung-Tae Kim:

Construction of a large Korean speech database and its management system in ETRI. 1085-1088 - Yoshinori Sagisaka, Kazuya Takeda, M. Abel, Shigeru Katagiri, T. Umeda, Hisao Kuwabara:

A large-scale Japanese speech database. 1089-1092 - Terumasa Ehara, Kentaro Ogura, Tsuyoshi Morimoto:

ATR dialogue database. 1093-1096 - Jean-Luc Gauvain, Lori Lamel, Maxine Eskénazi:

Design considerations and text selection for BREF, a large French read-speech corpus. 1097-1100 - Kazuyo Tanaka, Satoru Hayamizu, Kozo Ohta:

The ETL speech database for speech analysis and recognition research. 1101-1104 - Michal Soclof, Victor W. Zue:

Collection and analysis of spontaneous and read corpora for spoken language system development. 1105-1108 - Shozo Makino, Toshihiko Shirokaze, Ken'iti Kido:

A distributed speech database with an automatic acquisition system of speech information. 1109-1112 - J. Bruce Millar, Phillip Dermody, Jonathan Harrington, Julie Vonwiller:

A national database of spoken language: concept, design, and implementation. 1281-1284 - Giuseppe Castagneri, Kyriaki Vagges:

The Italian national database for speech recognition. 1285-1288 - Louis C. W. Pols:

How useful are speech databases for rule synthesis development and assessment? 1289-1292 - William J. Hardcastle, Alain Marchal:

Eur-accor: a multi-lingual articulatory and acoustic database. 1293-1296
Speech Recognition in Noisy Environments
- Biing-Hwang Juang:

Recent developments in speech recognition under adverse conditions. 1113-1116 - Brian A. Hanson, Ted H. Applebaum:

Features for noise-robust speaker-independent word recognition. 1117-1120 - Alejandro Acero, Richard M. Stern:

Acoustical pre-processing for robust spoken language systems. 1121-1124 - John H. L. Hansen, Oscar N. Bria:

Lombard effect compensation for robust automatic speech recognition in noise. 1125-1128 - Tadashi Kitamura, Etsuro Hayahara, Yasuhiko Simazciki:

Speaker-independent word recognition in noisy environments using dynamic and averaged spectral features based on a two-dimensional mel-cepstrum. 1129-1132 - A. Noll:

Problems of speech recognition in mobile environments. 1133-1136 - Luciano Fissore, Pietro Laface, M. Codogno, Giovanni Venuti:

HMM modeling for voice-activated mobile-radio system. 1137-1140 - Yoshio Nakadai, Noboru Sugamura:

A speech recognition method for noise environments using dual inputs. 1141-1144 - Shuji Morii, Toshiyuki Morii, Masakatsu Hoshimi, Shoji Hiraoka, Taisuke Watanabe, Katsuyuki Niyada:

Noise robustness in speaker independent speech recognition. 1145-1148 - Kaoru Gyoutoku, Hidefumi Kobatake:

Maximum likelihood estimation of speech waveform under nonstationary noise environments. 1149-1152
Foreign Language Teaching
- William J. Hardcastle:

Electropalatography in phonetic research and in speech training. 1153-1156 - Michael Rost:

Teaching spoken language: a genre-based approach. 1157-1160 - Kazue Yoshida:

Interaction between native and nonnative speakers in team teaching. 1161-1164 - Ekaterini Nikolarea:

Contrastive phonetics of English, French and modern Greek in language teaching and interpreting. 1165-1168 - Keiko Nagano, Kazunori Ozawa:

English speech training using voice conversion. 1169-1172 - Namie Saeki:

Contrastive analysis of American English and Japanese pronunciation. 1173-1176 - Massoud Rahimpour:

Oral communicative approaches in spoken language processing. 1177-1180 - Hisako Murakawa:

Teaching English pronunciation to Japanese university students: the voiceless fricative /s/ sound. 1181-1184 - Jared Bernstein, Michael Cohen, Hy Murveit, Dimitry Rtischev, Mitchel Weintraub:

Automatic evaluation and training in English pronunciation. 1185-1188
Continuous Speech Recognition and Speaker Recognition
- Yoshiharu Abe, Kunio Nakajima:

Vocabulary independent phrase recognition with a linear phonetic context model. 1189-1192 - Yasuo Ariki, Mervyn A. Jack:

Phoneme probability presentation of continuous speech. 1193-1196 - Haïyan Ye, Jean Caelen:

Duration constraints for the speech input interface in the MULTIWORKS project. 1197-1200 - Zhi-ping Hu, Satoshi Imai:

Chinese continuous speech recognition system using the state transition models both of phonemes and words. 1201-1204 - Jade Goldstein, Akio Amano, Hideki Murayama, Mariko Izawa, Akira Ichikawa:

A new training method for multi-phone speech units for use in a hidden Markov model speech recognition system. 1205-1208 - Yoshio Ueda, Seiichi Nakagawa:

Diction for phoneme/syllable/word-category and identification of language using HMM. 1209-1212 - Takashi Otsuki, Shozo Makino, Toshio Sone, Ken'iti Kido:

Performance evaluation in speech recognition system using transition probability between linguistic units. 1213-1216 - Isao Murase, Seiichi Nakagawa:

Sentence recognition method using word cooccurrence probability and its evaluation. 1217-1220 - Yanghai Lu, Beiqian Dai:

A knowledge-based understanding system for the Chinese spoken language. 1221-1224 - Akio Komatsu, Eiji Oohira, Akira Ichikawa:

Conversational speech understanding based on cooperative problem solving. 1225-1228 - Michio Okada:

A one-pass search algorithm for continuous speech recognition directed by context-free phrase structure grammar. 1229-1232 - Andrea Di Carlo, Rino Falcone:

A blackboard architecture for a word hypothesizer and a chart parser interaction in an ASR system. 1233-1236 - P. Mousel, Jean-Marie Pierrel, Azim Roussanaly:

Heuristic search problems in a natural language task oriented spoken man-machine dialogue system. 1237-1240 - Hiroaki Kitano:

The making of a speech-to-speech translation system: some findings from the DMDIALOG project. 1241-1244 - Kyung-ho Loken-Kim, Yasuhiro Nara, Shinta Kimura:

Using high level knowledge sources as a means of recovering DLL-formed Japanese sentences distorted by ambient noise. 1245-1248 - Anders Baekgaard, Paul Dalsgaard:

Tools for designing dialogues in speech understanding interfaces. 1249-1252 - Osamu Takizawa, Masuzo Yanagida:

A method for expressing associative relations using fuzzy concepts -aiming at advanced speech recognition-. 1253-1256 - Jean-Pierre Tubach, Raymond Descout, Pierre Isabelle:

Bilingual speech interface for a bidirectional machine translation system. 1257-1260 - Yves Laprie:

Optimum spectral peak track interpretation in terms of formants. 1261-1264 -

A speech understanding system. 1265-1268 - Thierry Spriet:

Speaker verification based on multipulse excitation and LPC vocal-tract model. 1269-1272 - I-Chang Jou, Su-Ling Lee, Min-Tau Lin, Chih-Yuan Tseng, Shih-Shien You, Yuh-Juain Tsay:

A neural network based speaker verification system. 1273-1276 - Hujun Yin, Tong Zhou:

Speaker recognition using static and dynamic cepstral features by a learning neural network. 1277-1280
Dialogue Modeling and Processing
- Naotoshi Osaka:

Conversational turn-taking model using Petri net. 1297-1300 - Tetsuya Yamamoto, Yoshikazu Ohta, Yoichi Yamashita, Riichiro Mizoguchi:

Dialog management system mascots in speech understanding system. 1301-1304 - Sharon L. Oviatt, Philip R. Cohen, Ann Podlozny:

Spoken language in interpreted telephone dialogues. 1305-1308 - Tsuyoshi Morimoto, Toshiyuki Takezawa:

Linguistic knowledge for spoken dialogue processing. 1309-1312 - Harald Höge:

SPICOS II - a speech understanding dialogue system. 1313-1316 - Victor W. Zue, James R. Glass, Dave Goddeau, David Goodine, Hong C. Leung, Michael K. McCandless, Michael S. Phillips, Joseph Polifroni, Stephanie Seneff, Dave Whitney:

Recent progress on the MIT VOYAGER spoken language system. 1317-1320
Language Acquisition
- Florien J. Koopmans-van Beinum:

The source-filter model of speech production applied to early speech development. 1321-1324 - Ichiro Miura:

The acquisition of Japanese long consonants, syllabic nasals, and long vowels. 1325-1328 - Yoko Shimura, Satoshi Imaizumi, Kozue Saito, Tamiko Ichijama, Jan Gauffin, Pierre A. Hallé, Itsuro Yamanouchi:

Infants' vocalization observed in verbal communication: acoustic analysis. 1329-1332 - Yukie Masuko, Shigeru Kiritani:

Perception of mora sounds in Japanese by non-native speakers of Japanese. 1333-1336
Plenary Lectures
- Gunnar Fant:

The speech code: segmental and prosodic features. 1389-1398 - David B. Pisoni:

Effects of talker variability on speech perception: implications for current research and theory. 1399-1408 - Fumitada Itakura:

Early developments of LPC speech coding techniques. 1409-1410