Ji Ming
Journal Articles
2023
- [j27] Andrew D. Moyes, Richard Gault, Kun Zhang, Ji Ming, Danny Crookes, Jing Wang: Multi-channel auto-encoders for learning domain invariant representations enabling superior classification of histopathology images. Medical Image Anal. 83: 102640 (2023)
2019
- [j26] Jack Gaston, Ji Ming, Danny Crookes: Matching Larger Image Areas for Unconstrained Face Identification. IEEE Trans. Cybern. 49(8): 3191-3202 (2019)
2017
- [j25] Ji Ming, Danny Crookes: Speech Enhancement Based on Full-Sentence Correlation and Clean Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 25(3): 531-543 (2017)
- [j24] Niall McLaughlin, Ji Ming, Danny Crookes: Largest Matching Areas for Illumination and Occlusion Robust Face Recognition. IEEE Trans. Cybern. 47(3): 796-808 (2017)
2014
- [j23] Ji Ming, Danny Crookes: An iterative longest matching segment approach to speech enhancement with additive noise and channel distortion. Comput. Speech Lang. 28(6): 1269-1286 (2014)
- [j22] Darryl Stewart, Rowan Seymour, Adrian Pass, Ji Ming: Robust Audio-Visual Speech Recognition Under Noisy Audio-Video Conditions. IEEE Trans. Cybern. 44(2): 175-184 (2014)
2013
- [j21] Ji Ming, Ramji Srinivasan, Danny Crookes, Ayeh Jafari: CLOSE - A Data-Driven Approach to Speech Separation. IEEE Trans. Speech Audio Process. 21(7): 1355-1368 (2013)
- [j20] Niall McLaughlin, Ji Ming, Danny Crookes: Robust Multimodal Person Identification With Limited Training Data. IEEE Trans. Hum. Mach. Syst. 43(2): 214-224 (2013)
2011
- [j19] Maria Husin, Darryl Stewart, Ji Ming, Francis Jack Smith: Creating a Spontaneous Conversational Speech Corpus. Data Sci. J. 10: 42-51 (2011)
- [j18] Ji Ming, Ramji Srinivasan, Danny Crookes: A Corpus-Based Approach to Speech Enhancement From Nonstationary Noise. IEEE Trans. Speech Audio Process. 19(4): 822-836 (2011)
2010
- [j17] Ji Ming, Timothy J. Hazen, James R. Glass: Combining missing-feature theory, speech enhancement, and speaker-dependent/-independent modeling for speech separation. Comput. Speech Lang. 24(1): 67-76 (2010)
2009
- [j16] Le Quan Ha, Philip Hanna, Ji Ming, Francis Jack Smith: Extending Zipf's law to n-grams for large corpora. Artif. Intell. Rev. 32(1-4): 101-113 (2009)
2008
- [j15] Rowan Seymour, Darryl Stewart, Ji Ming: Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos. EURASIP J. Image Video Process. 2008 (2008)
2007
- [j14] Ji Ming, Timothy J. Hazen, James R. Glass, Douglas A. Reynolds: Robust Speaker Recognition in Noisy Conditions. IEEE Trans. Speech Audio Process. 15(5): 1711-1723 (2007)
2006
- [j13] Ji Ming, Jie Lin, Francis Jack Smith: A Posterior Union Model with Applications to Robust Speech and Speaker Recognition. EURASIP J. Adv. Signal Process. 2006 (2006)
- [j12] Ji Ming: Noise compensation for speech recognition with arbitrary additive noise. IEEE Trans. Speech Audio Process. 14(3): 833-844 (2006)
2005
- [j11] James McAuley, Ji Ming, Darryl Stewart, Philip Hanna: Subband Correlation and Robust Speech Recognition. IEEE Trans. Speech Audio Process. 13(5-2): 956-964 (2005)
2003
- [j10] Ji Ming, Francis Jack Smith: Speech recognition with unknown partial feature corruption - a review of the union model. Comput. Speech Lang. 17(2-3): 287-305 (2003)
- [j9] Le Quan Ha, Elvira I. Sicilia-Garcia, Ji Ming, Francis Jack Smith: Extension of Zipf's Law to Word and Character N-grams for English and Chinese. Int. J. Comput. Linguistics Chin. Lang. Process. 8(1) (2003)
2002
- [j8] Ji Ming, Peter Jancovic, Francis Jack Smith: Robust speech recognition using probabilistic union models. IEEE Trans. Speech Audio Process. 10(6): 403-414 (2002)
2001
- [j7] Ji Ming, Francis Jack Smith: Union: a model for partial temporal corruption of speech. Comput. Speech Lang. 15(3): 217-231 (2001)
- [j6] Ji Ming, Francis Jack Smith: Union: A new approach for combining sub-band observations for noisy speech recognition. Speech Commun. 34(1-2): 41-55 (2001)
1999
- [j5] Ji Ming, Francis Jack Smith: A Bayesian triphone model. Comput. Speech Lang. 13(2): 195-206 (1999)
- [j4] Peter O'Boyle, Ji Ming, Marie Owens, Francis Jack Smith: Adaptive Parameter Training in an Interpolated N-gram Language Model. J. Quant. Linguistics 6(1): 10-28 (1999)
- [j3] Philip Hanna, Ji Ming, Francis Jack Smith: Inter-frame dependence arising from preceding and succeeding frames - Application to speech recognition. Speech Commun. 28(4): 301-312 (1999)
- [j2] Ji Ming, Peter O'Boyle, Marie Owens, Francis Jack Smith: A Bayesian approach for building triphone models for continuous speech recognition. IEEE Trans. Speech Audio Process. 7(6): 678-684 (1999)
1996
- [j1] Ji Ming, Francis Jack Smith: Modelling of the interframe dependence in an HMM using conditional Gaussian mixtures. Comput. Speech Lang. 10(4): 229-247 (1996)
Conference and Workshop Papers
2019
- [c65] Ji Ming, Danny Crookes: Full-Sentence Correlation: A Method to Handle Unpredictable Noise for Robust Speech Recognition. INTERSPEECH 2019: 436-440
2018
- [c64] David Nesbitt, Danny Crookes, Ji Ming: Speech Segment Clustering for Real-Time Exemplar-Based Speech Enhancement. ICASSP 2018: 5419-5423
2016
- [c63] Jack Gaston, Ji Ming, Danny Crookes: A largest matching area approach to image denoising. ICASSP 2016: 1194-1198
- [c62] Ji Ming, Danny Crookes: Wide matching - An approach to improving noise robustness for speech enhancement. ICASSP 2016: 5910-5914
- [c61] Gao Qiang, Ji Ming, Pang Lan, Wang Xiao-Tian, Li Jie, Wang Ma-Qiang: Research of Large-Scale Terrain Data Organization Method in Virtual Reality. ISCID (2) 2016: 108-111
- [c60] Chen Wei, Ji Ming, Zhu Lei, Guohua Jiao, Jiancheng Lv: Hysteresis compensation for piezoelectric laser scanner with open-loop control method. RCAR 2016: 22-26
2014
- [c59] Ji Ming, Danny Crookes: Speech enhancement from additive noise and channel distortion - a corpus-based approach. INTERSPEECH 2014: 2710-2714
2012
- [c58] Niall McLaughlin, Ji Ming, Danny Crookes: Illumination invariant facial recognition using a piecewise-constant lighting model. ICASSP 2012: 1537-1540
- [c57] Ramji Srinivasan, Ji Ming, Danny Crookes: Single-channel speaker-pair identification: A new approach based on automatic frame selection. ICASSP 2012: 4369-4372
- [c56] Ji Ming, Ramji Srinivasan, Danny Crookes: Unconstrained Speech Separation by Composition of Longest Segments. INTERSPEECH 2012: 1540-1543
2011
- [c55] Niall McLaughlin, Ji Ming, Danny Crookes: Speaker recognition in noisy conditions with limited training data. EUSIPCO 2011: 1294-1298
- [c54] Ayeh Jafari, Ramji Srinivasan, Danny Crookes, Ji Ming: Exploiting long-range temporal dynamics of speech for noise-robust speaker recognition. EUSIPCO 2011: 2123-2127
- [c53] Niall McLaughlin, Ji Ming, Danny Crookes: Robust Bimodal Person Identification Using Face and Speech with Limited Training Data and Corruption of Both Modalities. INTERSPEECH 2011: 585-588
- [c52] Ayeh Jafari, Ramji Srinivasan, Danny Crookes, Ji Ming: A Longest Matching Segment Approach with Baysian Adaptation - Application to Noise-Robust Speaker Recognition. INTERSPEECH 2011: 2749-2752
2010
- [c51] Jianhua Lu, Ji Ming, Roger F. Woods: Adapting noisy speech models - Extended uncertainty decoding. ICASSP 2010: 4322-4325
- [c50] Adrian Pass, Ji Ming, Philip Hanna, Jianguo Zhang, Darryl Stewart: Inter-frame contextual modelling for visual speech recognition. ICIP 2010: 93-96
- [c49] Ji Ming, Ramji Srinivasan, Danny Crookes: A corpus-based approach to speech enhancement from nonstationary noise. INTERSPEECH 2010: 1097-1100
- [c48] Ayeh Jafari, Ramji Srinivasan, Danny Crookes, Ji Ming: A longest matching segment approach for text-independent speaker recognition. INTERSPEECH 2010: 1469-1472
2009
- [c47] Jie Lin, Ji Ming, Danny Crookes: Robust face recognition with partially occluded images based on a single or a small number of training samples. ICASSP 2009: 881-884
- [c46] Ji Ming: Maximizing the continuity in segmentation - A new approach to model, segment and recognize speech. ICASSP 2009: 3849-3852
- [c45] Jianhua Lu, Ji Ming, Roger F. Woods: Replacing uncertainty decoding with subband re-estimation for large vocabulary speech recognition in noise. INTERSPEECH 2009: 2407-2410
2008
- [c44] Jie Lin, Ji Ming, Danny Crookes: A probabilistic union approach to robust face recognition with partial distortion and occlusion. ICASSP 2008: 993-996
- [c43] Ji Ming, Jie Lin: Modeling long-range dependencies in speech data for text-independent speaker recognition. ICASSP 2008: 4825-4828
- [c42] Jianhua Lu, Ji Ming, Roger F. Woods: Combining noise compensation and missing-feature decoding for large vocabulary speech recognition in noise. INTERSPEECH 2008: 1269-1272
2007
- [c41] Rowan Seymour, Darryl Stewart, Ji Ming: Audio-visual integration for robust speech recognition using maximum weighted stream posteriors. INTERSPEECH 2007: 654-657
2006
- [c40] Ji Ming, Timothy J. Hazen, James R. Glass: Speaker Verification Over Handheld Devices with Realistic Noisy Speech Data. ICASSP (1) 2006: 637-640
- [c39] Ji Ming, Timothy J. Hazen, James R. Glass: Combining missing-feature theory, speech enhancement and speaker-dependent/-independent modeling for speech separation. INTERSPEECH 2006
- [c38] Ji Ming, Timothy J. Hazen, James R. Glass: A Comparative Study of Methods for Handheld Speaker Verification in Realistic Noisy Conditions. Odyssey 2006: 1-8
2005
- [c37] Ji Ming, Darryl Stewart, Saeed Vaseghi: Speaker Identification in Unknown Noisy Conditions - A Universal Compensation Approach. ICASSP (1) 2005: 617-620
- [c36] Rowan Seymour, Ji Ming, Darryl Stewart: A new posterior based audio-visual integration method for robust speech recognition. INTERSPEECH 2005: 1229-1232
- [c35] Elvira I. Sicilia-Garcia, Ji Ming, Francis Jack Smith: A posteriori multiple word-domain language model. INTERSPEECH 2005: 1285-1288
- [c34] James McAuley, Ji Ming, Pat Corr: Speaker verification in noisy conditions using correlated subband features. INTERSPEECH 2005: 2001-2004
2004
- [c33] Ji Ming: Universal compensation - an approach to noisy speech recognition assuming no knowledge of noise. ICASSP (1) 2004: 961-964
- [c32] James McAuley, Ji Ming, Philip Hanna, Darryl Stewart: Modeling sub-band correlation for noise-robust speech recognition. ICASSP (1) 2004: 1017-1020
- [c31] Ji Ming, Baochun Hou: Evaluation of universal compensation on Aurora 2 and 3 and beyond. INTERSPEECH 2004: 97-100
2003
- [c30] Ji Ming, Francis Jack Smith: A posterior union model for improved robust speech recognition in nonstationary noise. ICASSP (1) 2003: 420-423
- [c29] Ji Ming, Darryl Stewart, Philip Hanna, Pat Corr, Francis Jack Smith, Saeed Vaseghi: Robust speaker identification using posterior union models. INTERSPEECH 2003: 2645-2648
- [c28] Wei Qi, Jin Hongzhang, Guo Jian, Ji Ming: Study on complex system based on the brittleness. SMC 2003: 3056-3061
2002
- [c27] Le Quan Ha, Elvira I. Sicilia-Garcia, Ji Ming, Francis Jack Smith: Extension of Zipf's Law to Words and Phrases. COLING 2002
- [c26] Peter Jancovic, Ji Ming: Combining the union model and missing feature method to improve noise robustness in ASR. ICASSP 2002: 69-72
- [c25] Elvira I. Sicilia-Garcia, Ji Ming, Francis Jack Smith: Individual word language models and the frequency approach. INTERSPEECH 2002: 897-900
2001
- [c24] Peter Jancovic, Ji Ming: A multi-band approach based on the probabilistic union model and frequency-filtering features for robust speech recognition. INTERSPEECH 2001: 579-582
- [c23] Elvira I. Sicilia-Garcia, Ji Ming, Francis Jack Smith: Triggering individual word domains in n-gram language models. INTERSPEECH 2001: 701-704
- [c22] Ji Ming, Peter Jancovic, Philip Hanna, Darryl Stewart: Modeling the mixtures of known noise and unknown unexpected noise for robust speech recognition. INTERSPEECH 2001: 1111-1114
2000
- [c21] Elvira I. Sicilia-Garcia, Ji Ming, Francis Jack Smith: A Dynamic Language Model Based on Individual Word Domains. COLING 2000: 789-794
- [c20] Ji Ming, Philip Hanna, Darryl Stewart, Peter Jancovic, Francis Jack Smith: Union: A model for speech recognition subjected to partial and temporal corruption with unknown, time-varying noise statistics. EUSIPCO 2000: 1-4
- [c19] Ji Ming, Francis Jack Smith: A probabilistic union model for sub-band based robust speech recognition. ICASSP 2000: 1787-1790
- [c18] Pat Corr, Darryl Stewart, Philip Hanna, Ji Ming, Francis Jack Smith: Discrete Chebyshev Transform - A Natural Modification of the DCT. ICPR 2000: 1142-1145
- [c17] Philip Hanna, Darryl Stewart, Ji Ming, Francis Jack Smith: Improved lexicon formation through removal of co-articulation and acoustic recognition errors. INTERSPEECH 2000: 50-53
- [c16] Ji Ming, Peter Jancovic, Philip Hanna, Darryl Stewart, Francis Jack Smith: Robust feature selection using probabilistic union models. INTERSPEECH 2000: 546-549
- [c15] Peter Jancovic, Ji Ming, Philip Hanna, Darryl Stewart, Francis Jack Smith: Combining Multi-band and Frequency-Filtering Techniques for Speech Recognition in Noisy Environments. TSD 2000: 265-270
1999
- [c14] Ji Ming, Philip Hanna, Darryl Stewart, Marie Owens, Francis Jack Smith: Improving speech recognition performance by using multi-model approaches. ICASSP 1999: 161-164
- [c13] Paul Gerard Donnelly, Francis Jack Smith, Elvira I. Sicilia-Garcia, Ji Ming: Language modelling with hierarchical domains. EUROSPEECH 1999
- [c12] Marie Owens, Anja Kürger, Paul Gerard Donnelly, Francis Jack Smith, Ji Ming: A missing-word test comparison of human and statistical language model performance. EUROSPEECH 1999: 145-148
- [c11] Philip Hanna, Darryl Stewart, Ji Ming: The application of an improved DP match for automatic lexicon generation. EUROSPEECH 1999: 475-478
1998
- [c10] Ji Ming, Marie Owens, Francis Jack Smith: A Bayesian triphone model with parameter tying. EUSIPCO 1998: 1-4
- [c9] Ji Ming, Francis Jack Smith: Improved phone recognition using Bayesian triphone models. ICASSP 1998: 409-412
- [c8] Ji Ming, Philip Hanna, Darryl Stewart, Saeed Vaseghi, Francis Jack Smith: Capturing discriminative information using multiple modeling techniques. ICSLP 1998
1997
- [c7] Philip Hanna, Ji Ming, Peter O'Boyle, Francis Jack Smith: Modelling inter-frame dependence with preceeding and succeeding frames. EUROSPEECH 1997: 1167-1170
- [c6] Peter O'Boyle, Ji Ming, Marie Owens, Francis Jack Smith: From phone identification to phone clustering using mutual information. EUROSPEECH 1997: 2391-2394
1996
- [c5] Peter O'Boyle, Ji Ming, John G. McMahon, Francis Jack Smith: Improving n-gram models by incorporating enhanced distributions. ICASSP 1996: 168-171
- [c4] Ji Ming, Peter O'Boyle, John G. McMahon, Francis Jack Smith: Speech recognition using a strong correlation assumption for the instantaneous spectra. ICSLP 1996: 1061-1064
1995
- [c3] Francis Jack Smith, Ji Ming, Peter O'Boyle, A. D. Irvine: A hidden Markov model with optimized inter-frame dependence. ICASSP 1995: 209-212
- [c2] Ji Ming, Peter O'Boyle, Francis Jack Smith: An HMM with optimized segment-dependent observations for speech recognition. EUROSPEECH 1995: 1475-1478
1990
- [c1] Ji Ming: The statistical information formulation for noisy speech recognition. ICPR (2) 1990: 237-239
Informal and Other Publications
2021
- [i1] Andrew D. Moyes, Richard Gault, Kun Zhang, Ji Ming, Danny Crookes, Jing Wang: Multi-Channel Auto-Encoders and a Novel Dataset for Learning Domain Invariant Representations of Histopathology Images. CoRR abs/2107.07271 (2021)
last updated on 2024-09-20 00:38 CEST by the dblp team
all metadata released as open data under CC0 1.0 license