Qiguang Lin
2020 – today
- 2024
  - [j5] Qiguang Lin, Chaojie Yan, Qiang Li, Yonggen Ling, Wangwei Lee, Yu Zheng, Zhaoliang Wan, Bidan Huang, Xiaofeng Liu:
    Tracking Object's Pose via Dynamic Tactile Interaction. Int. J. Humanoid Robotics 21(3): 2350021:1-2350021:22 (2024)
- 2023
  - [c25] Qiguang Lin, Chaojie Yan, Qiang Li, Yonggen Ling, Yu Zheng, Wangwei Lee, Zhaoliang Wan, Bidan Huang, Xiaofeng Liu:
    Tactile-Based Object Pose Estimation Employing Extended Kalman Filter. ICARM 2023: 118-123
- 2021
  - [c24] Jiahong Xu, Xiaofeng Liu, Xu Zhou, Huan Wang, Qiguang Lin:
    Planning strategy for intruder agent based on game theory and artificial potential field. RCAR 2021: 933-938
2010 – 2019
- 2019
  - [c23] Fuchun Liu, Yang Yang, Qiguang Lin:
    Sound Source Localization and Speech Enhancement Algorithm Based on Fixed Beamforming. CACRE 2019: 27:1-27:7
  - [c22] Ziyu Xiong, Qiguang Lin, Maolin Wang, Zhouyu Chen:
    The effect of focus on trisyllabic syllable duration in Mandarin. O-COCOSDA 2019: 1-5
- 2018
  - [c21] Yiwen Shao, Qiguang Lin:
    Use of Pitch Continuity for Robust Speech Activity Detection. ICASSP 2018: 5534-5538
  - [c20] Qiguang Lin, Yiwen Shao:
    A Novel Normalization Method for Autocorrelation Function for Pitch Detection and for Speech Activity Detection. INTERSPEECH 2018: 2097-2101
- 2016
  - [j4] Qiguang Lin, Zhiqiang Li:
    Book Notice. Phonetica 73(2): 141-143 (2016)
2000 – 2009
- 2005
  - [c19] Mario E. Munich, Qiguang Lin:
    Auditory image model features for automatic speech recognition. INTERSPEECH 2005: 3037-3040
- 2004
  - [c18] Mario E. Munich, Qiguang Lin:
    Explicit modelling of common acoustic features for character recognition. EUSIPCO 2004: 353-356
1990 – 1999
- 1999
  - [c17] Qiguang Lin, David M. Lubensky, Salim Roukos:
    Use of recursive mumble models for confidence measuring. EUROSPEECH 1999: 53-56
- 1998
  - [c16] Qiguang Lin, Subrata K. Das, David M. Lubensky, Michael Picheny:
    A new confidence measure based on rank-ordering subphone scores. ICSLP 1998
- 1997
  - [c15] Qiguang Lin, James L. Flanagan, ChiWei Che:
    Distant-Talking Speech Recognition with Microphone-Array Sound Pickup and NN/MLLR Environment Equalization. ICONIP (2) 1997: 1099-1102
  - [c14] Qiguang Lin, David M. Lubensky, Michael Picheny, P. Srinivasa Rao:
    Key-phrase spotting using an integrated language model of n-grams and finite-state grammar. EUROSPEECH 1997: 255-258
- 1996
  - [c13] John C. Pearson, Qiguang Lin, ChiWei Che, Dong-Suk Yuk, Limin Jin, Bert de Vries, James L. Flanagan:
    Robust distant-talking speech recognition. ICASSP 1996: 21-24
  - [c12] ChiWei Che, Qiguang Lin, Dong-Suk Yuk:
    An HMM approach to text-prompted speaker verification. ICASSP 1996: 673-676
  - [c11] Dong-Suk Yuk, ChiWei Che, Limin Jin, Qiguang Lin:
    Environment-independent continuous speech recognition using neural networks and hidden Markov models. ICASSP 1996: 3358-3361
  - [c10] Qiguang Lin, Ea-Ee Jan, ChiWei Che, Dong-Suk Yuk, James L. Flanagan:
    Selective use of the speech spectrum and a VQGMM method for speaker identification. ICSLP 1996: 2415-2418
- 1995
  - [j3] Qiguang Lin, ChiWei Che:
    Normalizing the vocal tract length for speaker independent speech recognition. IEEE Signal Process. Lett. 2(11): 201-203 (1995)
  - [j2] Qiguang Lin:
    A fast algorithm for computing the vocal-tract impulse response from the transfer function. IEEE Trans. Speech Audio Process. 3(6): 449-457 (1995)
  - [c9] ChiWei Che, Qiguang Lin:
    Speaker recognition using HMM with experiments on the YOHO database. EUROSPEECH 1995: 625-628
  - [c8] Gaël Richard, M. Liu, D. Snider, H. Duncan, Qiguang Lin, James L. Flanagan, Stephen E. Levinson, Donald Davis, Scott Slimon:
    Numerical simulations of fluid flow in the vocal tract. EUROSPEECH 1995: 1297-1300
- 1994
  - [j1] Qiguang Lin, Ea-Ee Jan, James L. Flanagan:
    Microphone arrays and speaker identification. IEEE Trans. Speech Audio Process. 2(4): 622-629 (1994)
  - [c7] Qiguang Lin, Ea-Ee Jan, ChiWei Che, Bert de Vries:
    System of microphone arrays and neural networks for robust speech recognition in multimedia environments. ICSLP 1994: 1247-1250
  - [c6] Qiguang Lin, ChiWei Che, Joe French:
    Description of the CAIP speech corpus. ICSLP 1994: 1823-1826
  - [c5] ChiWei Che, Qiguang Lin, John C. Pearson, Bert de Vries, James L. Flanagan:
    Microphone Arrays and Neural Networks for Robust Speech Recognition. HLT 1994
  - [c4] James L. Flanagan, Qiguang Lin, John C. Pearson, Bert de Vries:
    A Neural Network System for Large-Vocabulary Continuous Speech Recognition in Variable Acoustic Environments. HLT 1994
- 1992
  - [c3] Qiguang Lin, Gunnar Fant:
    An articulatory speech synthesizer based on a frequency-domain simulation of the vocal tract. ICASSP 1992: 57-60
1980 – 1989
- 1989
  - [c2] Rolf Carlson, Gunnar Fant, Christer Gobl, Björn Granström, Inger Karlsson, Qiguang Lin:
    Voice source rules for text-to-speech synthesis. ICASSP 1989: 223-226
  - [c1] Qiguang Lin, Gunnar Fant:
    Vocal-tract area-function parameters from formant frequencies. EUROSPEECH 1989: 2673-2676
last updated on 2024-10-01 21:38 CEST by the dblp team
all metadata released as open data under CC0 1.0 license