


Roger K. Moore
Person information

- affiliation: University of Sheffield, England, UK
2020 – today
- 2022
- [j27] Matthew Marge, Carol Y. Espy-Wilson, Nigel G. Ward, Abeer Alwan, Yoav Artzi, Mohit Bansal, Gilmer L. Blankenship, Joyce Chai, Hal Daumé III, Debadeepta Dey, Mary P. Harper, Thomas Howard, Casey Kennington, Ivana Kruijff-Korbayová, Dinesh Manocha, Cynthia Matuszek, Ross Mead, Raymond J. Mooney, Roger K. Moore, Mari Ostendorf, Heather Pon-Barry, Alexander I. Rudnicky, Matthias Scheutz, Robert St. Amant, Tong Sun, Stefanie Tellex, David R. Traum, Zhou Yu: Spoken language interaction with robots: Recommendations for future research. Comput. Speech Lang. 71: 101255 (2022)
- [j26] Guanyu Huang, Roger K. Moore: Is honesty the best policy for mismatched partners? Aligning multi-modal affordances of a social robot: An opinion paper. Frontiers Virtual Real. 3 (2022)
- [i13] Roger K. Moore: Whither the Priors for (Vocal) Interactivity? CoRR abs/2203.08578 (2022)
- 2021
- [c97] Samuel J. Broughton, Md. Asif Jalal, Roger K. Moore: Investigating Deep Neural Structures and their Interpretability in the Domain of Voice Conversion. Interspeech 2021: 806-810
- [i12] Samuel J. Broughton, Md Asif Jalal, Roger K. Moore: Investigating Deep Neural Structures and their Interpretability in the Domain of Voice Conversion. CoRR abs/2102.11420 (2021)
- 2020
- [c96] Md Asif Jalal, Rosanna Milner, Thomas Hain, Roger K. Moore: Removing Bias with Residual Mixture of Multi-View Attention for Speech Emotion Recognition. INTERSPEECH 2020: 4084-4088
- [c95] Benjamin Hawker, Roger K. Moore: A Structural Approach to Dealing with High Dimensionality Parameter Search Spaces. TAROS 2020: 159-170
- [i11] Laurence Devillers, Tatsuya Kawahara, Roger K. Moore, Matthias Scheutz: Spoken Language Interaction with Virtual Agents and Robots (SLIVAR): Towards Effective and Ethical Interaction (Dagstuhl Seminar 20021). Dagstuhl Reports 10(1): 1-51 (2020)
2010 – 2019
- 2019
- [c94] Md Asif Jalal, Roger K. Moore, Thomas Hain: Spatio-Temporal Context Modelling for Speech Emotion Classification. ASRU 2019: 853-859
- [c93] Leigh Clark, Benjamin R. Cowan, Justin Edwards, Cosmin Munteanu, Christine Murad, Matthew P. Aylett, Roger K. Moore, Jens Edlund, Éva Székely, Patrick Healey, Naomi Harte, Ilaria Torre, Philip R. Doyle: Mapping Theoretical and Methodological Perspectives for Understanding Speech Interface Interactions. CHI Extended Abstracts 2019
- [c92] Md Asif Jalal, Waqas Aftab, Roger K. Moore, Lyudmila Mihaylova: Dual Stream Spatio-Temporal Motion Fusion With Self-Attention For Action Recognition. FUSION 2019: 1-7
- [c91] Md Asif Jalal, Lyudmila Mihaylova, Roger K. Moore: An End-to-End Deep Neural Network for Facial Emotion Classification. FUSION 2019: 1-7
- [c90] Md Asif Jalal, Erfan Loweimi, Roger K. Moore, Thomas Hain: Learning Temporal Clusters Using Capsule Routing for Speech Emotion Recognition. INTERSPEECH 2019: 1701-1705
- [c89] Lucy Skidmore, Roger K. Moore: Using Alexa for Flashcard-Based Learning. INTERSPEECH 2019: 1846-1850
- [c88] Roger K. Moore, Lucy Skidmore: On the Use/Misuse of the Term 'Phoneme'. INTERSPEECH 2019: 2340-2344
- [c87] Manal Linjawi, Roger K. Moore: Evaluating ToRCH Structure for Characterizing Robots. TAROS (2) 2019: 319-330
- [i10] Roger K. Moore, Lucy Skidmore: On the Use/Misuse of the Term 'Phoneme'. CoRR abs/1907.11640 (2019)
- [i9] Roger K. Moore: Vocal Interactivity in Crowds, Flocks and Swarms: Implications for Voice User Interfaces. CoRR abs/1907.11656 (2019)
- [i8] Roger K. Moore: A 'Canny' Approach to Spoken Language Interfaces. CoRR abs/1908.08131 (2019)
- [i7] Roger K. Moore: Talking with Robots: Opportunities and Challenges. CoRR abs/1912.00369 (2019)
- 2018
- [j25] David Cameron, Abigail Millings, Samuel Fernando, Emily C. Collins, Roger K. Moore, Amanda J. C. Sharkey, Vanessa Evers, Tony J. Prescott: The effects of robot facial emotional expressions and gender on child-robot interaction in a field study. Connect. Sci. 30(4): 343-361 (2018)
- [c86] Lam Aun Cheah, James M. Gilbert, José A. González, Phil D. Green, Stephen R. Ell, Roger K. Moore, Ed Holdsworth: A Wearable Silent Speech Interface based on Magnetic Sensors with Motion-Artefact Removal. BIODEVICES 2018: 56-62
- [c85] Mashael M. AlSaleh, Roger K. Moore, Heidi Christensen, Mahnaz Arvaneh: Discriminating Between Imagined Speech and Non-Speech Tasks Using EEG. EMBC 2018: 1952-1955
- [c84] Ruilong Chen, Md Asif Jalal, Lyudmila Mihaylova, Roger K. Moore: Learning Capsules for Vehicle Logo Recognition. FUSION 2018: 565-572
- [c83] Md Asif Jalal, Ruilong Chen, Roger K. Moore, Lyudmila Mihaylova: American Sign Language Posture Understanding with Deep Neural Networks. FUSION 2018: 573-579
- [c82] Mashael M. AlSaleh, Roger K. Moore, Heidi Christensen, Mahnaz Arvaneh: Examining Temporal Variations in Recognizing Unspoken Words Using EEG Signals. SMC 2018: 976-981
- [c81] Manal Linjawi, Roger K. Moore: Towards a Comprehensive Taxonomy for Characterizing Robots. TAROS 2018: 381-392
- 2017
- [j24] Roger K. Moore, Mauro Nicolao: Toward a Needs-Based Architecture for 'Intelligent' Communicative Agents: Speaking with Intention. Frontiers Robotics AI 4: 66 (2017)
- [j23] José A. González, Lam Aun Cheah, Angel M. Gomez, Phil D. Green, James M. Gilbert, Stephen R. Ell, Roger K. Moore, Ed Holdsworth: Direct Speech Reconstruction From Articulatory Sensor Data by Machine Learning. IEEE ACM Trans. Audio Speech Lang. Process. 25(12): 2362-2374 (2017)
- [c80] José A. González, Lam Aun Cheah, Phil D. Green, James M. Gilbert, Stephen R. Ell, Roger K. Moore, Ed Holdsworth: Restoring Speech Following Total Removal of the Larynx. AAATE Conf. 2017: 314-321
- [c79] Saeid Mokaram, Roger K. Moore: The Sheffield Search and Rescue corpus. ICASSP 2017: 5840-5844
- [c78] Roger K. Moore, Ben Mitchinson: Creating a Voice for MiRo, the World's First Commercial Biomimetic Robot. INTERSPEECH 2017: 3419-3420
- [c77] José A. González, Lam Aun Cheah, Phil D. Green, James M. Gilbert, Stephen R. Ell, Roger K. Moore, Ed Holdsworth: Evaluation of a Silent Speech Interface Based on Magnetic Sensing and Deep Learning for a Phonetically Rich Vocabulary. INTERSPEECH 2017: 3986-3990
- [c76] David Cameron, Samuel Fernando, Emily C. Collins, Abigail Millings, Michael Szollosy, Roger K. Moore, Amanda J. C. Sharkey, Tony J. Prescott: You Made Him Be Alive: Children's Perceptions of Animacy in a Humanoid Robot. Living Machines 2017: 73-85
- [c75] Roger K. Moore, Ben Mitchinson: A Biomimetic Vocalisation System for MiRo. Living Machines 2017: 363-374
- [c74] David Cameron, Samuel Fernando, Emily Cowles-Naja, Abigail Perkins, Emily C. Collins, Abigail Millings, Michael Szollosy, Roger K. Moore, Amanda J. C. Sharkey, Tony J. Prescott: Children's Age Influences Their Use of Biological and Mechanical Questions Towards a Humanoid. TAROS 2017: 290-299
- [i6] Roger K. Moore, Ben Mitchinson: A Biomimetic Vocalisation System for MiRo. CoRR abs/1705.05472 (2017)
- 2016
- [j22] José A. González, Lam Aun Cheah, James M. Gilbert, Jie Bai, Stephen R. Ell, Phil D. Green, Roger K. Moore: A silent speech system based on permanent magnet articulography and direct synthesis. Comput. Speech Lang. 39: 67-87 (2016)
- [j21] Roger K. Moore, Ricard Marxer, Serge Thill: Vocal Interactivity in-and-between Humans, Animals, and Robots. Frontiers Robotics AI 3: 61 (2016)
- [c73] Mashael M. AlSaleh, Mahnaz Arvaneh, Heidi Christensen, Roger K. Moore: Brain-computer interface technology for speech recognition: A review. APSIPA 2016: 1-5
- [c72] Lam Aun Cheah, James M. Gilbert, José A. González, Jie Bai, Stephen R. Ell, Phil D. Green, Roger K. Moore: Towards an Intraoral-Based Silent Speech Restoration System for Post-laryngectomy Voice Replacement. BIOSTEC (Selected Papers) 2016: 22-38
- [c71] José A. González, Lam Aun Cheah, James M. Gilbert, Jie Bai, Stephen R. Ell, Phil D. Green, Roger K. Moore: Direct Speech Generation for a Silent Speech Interface based on Permanent Magnet Articulography. BIOSIGNALS 2016: 96-105
- [c70] Lam Aun Cheah, Jie Bai, José A. González, James M. Gilbert, Stephen R. Ell, Phil D. Green, Roger K. Moore: Preliminary Evaluation of a Silent Speech Interface based on Intra-Oral Magnetic Sensing. BIODEVICES 2016: 108-116
- [c69] José A. González, Lam Aun Cheah, James M. Gilbert, Jie Bai, Stephen R. Ell, Phil D. Green, Roger K. Moore: Voice Restoration After Laryngectomy Based on Magnetic Sensing of Articulator Movement and Statistical Articulation-to-Speech Conversion. BIOSTEC (Selected Papers) 2016: 295-316
- [c68] Roger K. Moore: A Needs-Driven Cognitive Architecture for Future 'Intelligent' Communicative Agents. EUCognition 2016: 50-51
- [c67] Roger K. Moore: A Real-Time Parametric General-Purpose Mammalian Vocal Synthesiser. INTERSPEECH 2016: 2636-2640
- [c66] Roger K. Moore, Hui Li, Shih-Hao Liao: Progress and Prospects for Spoken Language Technology: What Ordinary People Think. INTERSPEECH 2016: 3007-3011
- [c65] Roger K. Moore, Ricard Marxer: Progress and Prospects for Spoken Language Technology: Results from Four Sexennial Surveys. INTERSPEECH 2016: 3012-3016
- [c64] Roger K. Moore: Is Spoken Language All-or-Nothing? Implications for Future Speech-Based Human-Machine Interaction. IWSDS 2016: 281-291
- [c63] Dennis Reidsma, Vicky Charisi, Daniel P. Davison, Frances Wijnen, Jan van der Meij, Vanessa Evers, David Cameron, Samuel Fernando, Roger K. Moore, Tony J. Prescott, Daniele Mazzei, Michael Pieroni, Lorenzo Cominelli, Roberto Garofalo, Danilo De Rossi, Vasiliki Vouloutsi, Riccardo Zucca, Klaudia Grechuta, Maria Blancas, Paul F. M. J. Verschure: The EASEL Project: Towards Educational Human-Robot Symbiotic Interaction. Living Machines 2016: 297-306
- [c62] Vasiliki Vouloutsi, Maria Blancas, Riccardo Zucca, Pedro Omedas, Dennis Reidsma, Daniel P. Davison, Vicky Charisi, Frances Wijnen, Jan van der Meij, Vanessa Evers, David Cameron, Samuel Fernando, Roger K. Moore, Tony J. Prescott, Daniele Mazzei, Michael Pieroni, Lorenzo Cominelli, Roberto Garofalo, Danilo De Rossi, Paul F. M. J. Verschure: Towards a Synthetic Tutor Assistant: The EASEL Project and its Architecture. Living Machines 2016: 353-364
- [c61] David Cameron, Samuel Fernando, Abigail Millings, Michael Szollosy, Emily C. Collins, Roger K. Moore, Amanda J. C. Sharkey, Tony J. Prescott: Designing Robot Personalities for Human-Robot Symbiotic Interaction in an Educational Context. Living Machines 2016: 413-417
- [c60] David Cameron, Samuel Fernando, Abigail Millings, Michael Szollosy, Emily C. Collins, Roger K. Moore, Amanda J. C. Sharkey, Tony J. Prescott: Congratulations, It's a Boy! Bench-Marking Children's Perceptions of the Robokind Zeno-R25. TAROS 2016: 33-39
- [i5] David Cameron, Samuel Fernando, Emily C. Collins, Abigail Millings, Roger K. Moore, Amanda J. C. Sharkey, Tony J. Prescott: Impact of robot responsiveness and adult involvement on children's social behaviours in human-robot interaction. CoRR abs/1606.06104 (2016)
- [i4] Roger K. Moore: Is spoken language all-or-nothing? Implications for future speech-based human-machine interaction. CoRR abs/1607.05174 (2016)
- [i3] Samuel Fernando, Roger K. Moore, David Cameron, Emily C. Collins, Abigail Millings, Amanda J. C. Sharkey, Tony J. Prescott: Automatic recognition of child speech for robotic applications in noisy environments. CoRR abs/1611.02695 (2016)
- [i2] Roger K. Moore: PCT and Beyond: Towards a Computational Framework for 'Intelligent' Communicative Systems. CoRR abs/1611.05379 (2016)
- [i1] Roger K. Moore, Serge Thill, Ricard Marxer: Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR) (Dagstuhl Seminar 16442). Dagstuhl Reports 6(10): 154-194 (2016)
- 2015
- [c59] Lam Aun Cheah, Jie Bai, José A. González, Stephen R. Ell, James M. Gilbert, Roger K. Moore, Phil D. Green: A User-centric Design of Permanent Magnetic Articulography based Assistive Speech Technology. BIOSIGNALS 2015: 109-116
- [c58] Lam Aun Cheah, James M. Gilbert, José A. González, Jie Bai, Stephen R. Ell, Michael J. Fagan, Roger K. Moore, Phil D. Green, Sergey I. Rybchenko: Integrating User-Centred Design in the Development of a Silent Speech Interface Based on Permanent Magnetic Articulography. BIOSTEC (Selected Papers) 2015: 324-337
- [c57] Saeid Mokaram, Roger K. Moore: Speech-based location estimation of first responders in a simulated search and rescue scenario. INTERSPEECH 2015: 2734-2738
- [c56] David Cameron, Samuel Fernando, Abigail Millings, Roger K. Moore, Amanda J. C. Sharkey, Tony J. Prescott: Children's Age Influences Their Perceptions of a Humanoid Robot as Being Like a Person or Machine. Living Machines 2015: 348-353
- 2014
- [j20] Timothy Kempton, Roger K. Moore: Discovering the phoneme inventory of an unwritten language: A machine-assisted approach. Speech Commun. 56: 152-166 (2014)
- [c55] José A. González, Lam Aun Cheah, Jie Bai, Stephen R. Ell, James M. Gilbert, Roger K. Moore, Phil D. Green: Analysis of phonetic similarity in a silent speech interface based on permanent magnetic articulography. INTERSPEECH 2014: 1018-1022
- [c54] Roger K. Moore: On the use of the 'pure data' programming language for teaching and public outreach in speech processing. INTERSPEECH 2014: 1498-1499
- [c53] Samuel Fernando, Emily C. Collins, Armin Duff, Roger K. Moore, Paul F. M. J. Verschure, Tony J. Prescott: Optimising Robot Personalities for Symbiotic Interaction. Living Machines 2014: 392-395
- [c52] Roger K. Moore: Spoken Language Processing: Time to Look Outside? SLSP 2014: 21-36
- [c51] Lianne F. S. Meah, Roger K. Moore: The Uncanny Valley: A Focus on Misaligned Cues. ICSR 2014: 256-265
- 2013
- [j19] Robin Hofe, Stephen R. Ell, Michael J. Fagan, James M. Gilbert, Phil D. Green, Roger K. Moore, Sergey I. Rybchenko: Small-vocabulary speech recognition using a silent speech interface based on magnetic sensing. Speech Commun. 55(1): 22-32 (2013)
- [c50] Robin Hofe, Jie Bai, Lam Aun Cheah, Stephen R. Ell, James M. Gilbert, Roger K. Moore, Phil D. Green: Performance of the MVOCA silent speech interface across multiple speakers. INTERSPEECH 2013: 1140-1143
- [c49] Roger K. Moore: Progress and prospects for speech technology: what ordinary people think. INTERSPEECH 2013: 4006
- [c48] Mauro Nicolao, Fabio Tesser, Roger K. Moore: A phonetic-contrast motivated adaptation to control the degree-of-articulation on Italian HMM-based synthetic voices. SSW 2013: 107-112
- [p1] Roger K. Moore: Spoken Language Processing: Where Do We Go from Here? Your Virtual Butler 2013: 119-133
- 2012
- [j18] Nigel T. Crook, Debora Field, Cameron G. Smith, Sue Harding, Stephen Pulman, Marc Cavazza, Daniel Charlton, Roger K. Moore, Johan Boye: Generating context-sensitive ECA responses to user barge-in interruptions. J. Multimodal User Interfaces 6(1-2): 13-25 (2012)
- [c47] Mauro Nicolao, Roger K. Moore: Establishing some principles of human speech production through two-dimensional computational models. SAPA@INTERSPEECH 2012: 5-10
- [c46] Mauro Nicolao, Javier Latorre, Roger K. Moore: C2H: A Computational Model of H&H-based Phonetic Contrast in Synthetic Speech. INTERSPEECH 2012: 987-990
- 2011
- [j17] Yorick Wilks, Roberta Catizone, Simon Worgan, Alexiei Dingli, Roger K. Moore, Debora Field, Weiwei Cheng: A prototype for a conversational companion for reminiscing about images. Comput. Speech Lang. 25(2): 140-157 (2011)
- [j16] Simon Worgan, Roger K. Moore: Towards the detection of social dominance in dialogue. Speech Commun. 53(9-10): 1104-1114 (2011)
- [c45] Roger K. Moore, Mauro Nicolao: Reactive Speech Synthesis: Actively Managing Phonetic Contrast along an H&H Continuum. ICPhS 2011: 1422-1425
- [c44] Roger K. Moore: Progress and Prospects for Speech Technology: Results from Three Sexennial Surveys. INTERSPEECH 2011: 1533-1536
- [c43] Robin Hofe, Stephen R. Ell, Michael J. Fagan, James M. Gilbert, Phil D. Green, Roger K. Moore, Sergey I. Rybchenko: Speech Synthesis Parameter Generation for the Assistive Silent Speech Interface MVOCA. INTERSPEECH 2011: 3009-3012
- [c42] Timothy Kempton, Roger K. Moore, Thomas Hain: Cross-Language Phone Recognition when the Target Language Phoneme Inventory is not Known. INTERSPEECH 2011: 3165-3168
- [c41] Roger K. Moore: Interacting with Purpose (and Feeling!): What Neuropsychology and the Performing Arts Can Tell Us About 'Real' Spoken Language Behaviour. IWSDS 2011: 5
- 2010
- [j15] Mark Elshaw, Roger K. Moore, Michael Klein: An attention-gating recurrent working memory architecture for emergent speech representation. Connect. Sci. 22(2): 157-175 (2010)
- [j14] Robert Kirchner, Roger K. Moore, Tsung-Ying Chen: Computing phonological generalization over real speech exemplars. J. Phonetics 38(4): 540-547 (2010)
- [c40] Robin Hofe, Stephen R. Ell, Michael J. Fagan, James M. Gilbert, Phil D. Green, Roger K. Moore, Sergey I. Rybchenko: Evaluation of a silent speech interface based on magnetic sensing. INTERSPEECH 2010: 246-249
- [c39] Guillaume Aimetti, Roger K. Moore, Louis ten Bosch: Discovering an optimal set of minimally contrasting acoustic speech units: a point of focus for whole-word pattern matching. INTERSPEECH 2010: 310-313
2000 – 2009
- 2009
- [j13] Louis ten Bosch, Lou Boves, Hugo Van hamme, Roger K. Moore: A Computational Model of Language Acquisition: the Emergence of Words. Fundam. Informaticae 90(3): 229-249 (2009)
- [c38] Guillaume Aimetti, Louis ten Bosch, Roger K. Moore: The emergence of words: Modelling early language acquisition with a dynamic systems perspective. EpiRob 2009
- [c37] Thomas M. Poulsen, Roger K. Moore: Evolving Spiking Neural Parameters for Behavioral Sequences. ICANN (2) 2009: 784-793
- [c36] Guillaume Aimetti, Roger K. Moore, Louis ten Bosch, Okko Johannes Räsänen, Unto Kalervo Laine: Discovering keywords from cross-modal input: ecological vs. engineering methods for enhancing acoustic repetitions. INTERSPEECH 2009: 1171-1174
- [c35] Timothy Kempton, Roger K. Moore: Finding allophones: an evaluation on consonants in the TIMIT corpus. INTERSPEECH 2009: 1651-1654
- [c34] Roger K. Moore, Louis ten Bosch: Modelling vocabulary growth from birth to young adulthood. INTERSPEECH 2009: 1727-1730
- [c33] Viktoria Maier, Roger K. Moore: The case for case-based automatic speech recognition. INTERSPEECH 2009: 3027-3030
- 2008
- [j12] Robin Hofe, Roger K. Moore: Towards an investigation of speech energetics using 'AnTon': an animatronic model of a human tongue and vocal tract. Connect. Sci. 20(4): 319-336 (2008)
- [c32] Robin Hofe, Roger K. Moore: Animatronic model of a human tongue. ALIFE 2008: 775
- [c31] Andrej Luneski, Roger K. Moore: Affective Computing and Collaborative Networks: Towards Emotion-Aware Interaction. Virtual Enterprises and Collaborative Networks 2008: 315-322
- [c30] Robin Hofe, Roger K. Moore: Anton: an animatronic model of a human tongue and vocal tract. INTERSPEECH 2008: 2647-2650
- [c29] Timothy Kempton, Roger K. Moore: Language identification: insights from the classification of hand annotated phone transcripts. Odyssey 2008: 14
- 2007
- [j11] François Mairesse, Marilyn A. Walker, Matthias R. Mehl, Roger K. Moore: Using Linguistic Cues for the Automatic Recognition of Personality in Conversation and Text. J. Artif. Intell. Res. 30: 457-500 (2007)
- [j10] Roger K. Moore: Spoken language processing: Piecing together the puzzle. Speech Commun. 49(5): 418-435 (2007)
- [j9] Odette Scharenborg, Vincent Wan, Roger K. Moore: Towards capturing fine phonetic variation in speech using articulatory features. Speech Commun. 49(10-11): 811-826 (2007)
- [j8] Roger K. Moore: PRESENCE: A Human-Inspired Architecture for Speech-Based Human-Machine Interaction. IEEE Trans. Computers 56(9): 1176-1188 (2007)
- [c28] Lou Boves, Louis ten Bosch, Roger K. Moore: ACORNS - towards computational modeling of communication and recognition skills. IEEE ICCI 2007: 349-356
- [c27] Thomas M. Poulsen, Roger K. Moore: Sound Localization Through Evolutionary Learning Applied to Spiking Neural Networks. FOCI 2007: 350-356
- [c26] Viktoria Maier, Roger K. Moore: Temporal episodic memory model: an evolution of minerva2. INTERSPEECH 2007: 866-869
- 2005
- [j7] Mazin Gilbert, Roger K. Moore, Geoffrey Zweig: Introduction to the Special Issue on Data Mining of Speech, Audio, and Dialog. IEEE Trans. Speech Audio Process. 13(5-1): 633-634 (2005)
- [c25] Roger K. Moore: Towards a unified theory of spoken language processing. IEEE ICCI 2005: 167-172
- [c24] Roger K. Moore: Results from a survey of attendees at ASRU 1997 and 2003. INTERSPEECH 2005: 117-120
- [c23] Mark S. Hawley, Phil D. Green, Pam Enderby, Stuart P. Cunningham, Roger K. Moore: Speech technology for e-inclusion of people with physical disabilities and disordered speech. INTERSPEECH 2005: 445-448
- [c22] Viktoria Maier, Roger K. Moore: An investigation into a simulation of episodic memory for automatic speech recognition. INTERSPEECH 2005: 1245-1248
- 2004
- [c21] Roger K. Moore: Modeling data entry rates for ASR and alternative input methods. INTERSPEECH 2004
- 2003
- [c20] Roger K. Moore: A comparison of the data requirements of automatic speech recognition systems and human listeners. INTERSPEECH 2003
- [c19] Roger K. Moore: Spoken language output: realising the vision. INTERSPEECH 2003
- 2000
- [c18]