Ralf Schlüter
Person information
- affiliation: RWTH Aachen University, Germany
2020 – today
- 2024
- [j17] Rohit Prabhavalkar, Takaaki Hori, Tara N. Sainath, Ralf Schlüter, Shinji Watanabe: End-to-End Speech Recognition: A Survey. IEEE ACM Trans. Audio Speech Lang. Process. 32: 325-351 (2024)
- [c226] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Chunked Attention-Based Encoder-Decoder Model for Streaming Speech Recognition. ICASSP 2024: 11331-11335
- [c225] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: On the Relation Between Internal Language Model and Sequence Discriminative Training for Neural Transducers. ICASSP 2024: 12627-12631
- [i64] Tina Raissi, Christoph Lüscher, Simon Berger, Ralf Schlüter, Hermann Ney: Investigating the Effect of Label Topology and Training Criterion on ASR Performance and Alignment Quality. CoRR abs/2407.11641 (2024)
- [i63] Nick Rossenbach, Benedikt Hilmes, Ralf Schlüter: On the Effect of Purely Synthetic Training Data for Different Automatic Speech Recognition Architectures. CoRR abs/2407.17997 (2024)
- [i62] Jingjing Xu, Wei Zhou, Zijian Yang, Eugen Beck, Ralf Schlüter: Dynamic Encoder Size Based on Data-Driven Layer-wise Pruning for Speech Recognition. CoRR abs/2407.18930 (2024)
- [i61] Nick Rossenbach, Ralf Schlüter, Sakriani Sakti: On the Problem of Text-To-Speech Model Selection for Synthetic Data Generation in Automatic Speech Recognition. CoRR abs/2407.21476 (2024)
- 2023
- [c224] Daniel Mann, Tina Raissi, Wilfried Michel, Ralf Schlüter, Hermann Ney: End-to-End Training of a Neural HMM with Label and Transition Probabilities. ASRU 2023: 1-8
- [c223] Nick Rossenbach, Benedikt Hilmes, Ralf Schlüter: On the Relevance of Phoneme Duration Variability of Synthesized Training Data for Automatic Speech Recognition. ASRU 2023: 1-8
- [c222] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: Investigating the Effect of Language Models in Sequence Discriminative Training for Neural Transducers. ASRU 2023: 1-8
- [c221] Peter Vieting, Christoph Lüscher, Julian Dierkes, Ralf Schlüter, Hermann Ney: Efficient Utilization of Large Pre-Trained Models for Low Resource ASR. ICASSP Workshops 2023: 1-5
- [c220] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: Lattice-Free Sequence Discriminative Training for Phoneme-Based Neural Transducers. ICASSP 2023: 1-5
- [c219] Wei Zhou, Haotian Wu, Jingjing Xu, Mohammad Zeineldeen, Christoph Lüscher, Ralf Schlüter, Hermann Ney: Enhancing and Adversarial: Improve ASR with Speaker Labels. ICASSP 2023: 1-5
- [c218] Simon Berger, Peter Vieting, Christoph Böddeker, Ralf Schlüter, Reinhold Haeb-Umbach: Mixture Encoder for Joint Speech Separation and Recognition. INTERSPEECH 2023: 3527-3531
- [c217] Wei Zhou, Eugen Beck, Simon Berger, Ralf Schlüter, Hermann Ney: RASR2: The RWTH ASR Toolkit for Generic Sequence-to-sequence Speech Recognition. INTERSPEECH 2023: 4094-4098
- [c216] Tina Raissi, Christoph Lüscher, Moritz Gunz, Ralf Schlüter, Hermann Ney: Competitive and Resource Efficient Factored Hybrid HMM Systems are Simpler Than You Think. INTERSPEECH 2023: 4938-4942
- [i60] Christoph Lüscher, Jingjing Xu, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney: Improving And Analyzing Neural Speaker Embeddings for ASR. CoRR abs/2301.04571 (2023)
- [i59] Rohit Prabhavalkar, Takaaki Hori, Tara N. Sainath, Ralf Schlüter, Shinji Watanabe: End-to-End Speech Recognition: A Survey. CoRR abs/2303.03329 (2023)
- [i58] Wei Zhou, Eugen Beck, Simon Berger, Ralf Schlüter, Hermann Ney: RASR2: The RWTH ASR Toolkit for Generic Sequence-to-sequence Speech Recognition. CoRR abs/2305.17782 (2023)
- [i57] Tina Raissi, Christoph Lüscher, Moritz Gunz, Ralf Schlüter, Hermann Ney: Competitive and Resource Efficient Factored Hybrid HMM Systems are Simpler Than You Think. CoRR abs/2306.09517 (2023)
- [i56] Simon Berger, Peter Vieting, Christoph Böddeker, Ralf Schlüter, Reinhold Haeb-Umbach: Mixture Encoder for Joint Speech Separation and Recognition. CoRR abs/2306.12173 (2023)
- [i55] Peter Vieting, Ralf Schlüter, Hermann Ney: Comparative Analysis of the wav2vec 2.0 Feature Extractor. CoRR abs/2308.04286 (2023)
- [i54] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Chunked Attention-based Encoder-Decoder Model for Streaming Speech Recognition. CoRR abs/2309.08436 (2023)
- [i53] Peter Vieting, Simon Berger, Thilo von Neumann, Christoph Böddeker, Ralf Schlüter, Reinhold Haeb-Umbach: Mixture Encoder Supporting Continuous Speech Separation for Meeting Recognition. CoRR abs/2309.08454 (2023)
- [i52] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: On the Relation between Internal Language Model and Sequence Discriminative Training for Neural Transducers. CoRR abs/2309.14130 (2023)
- [i51] Daniel Mann, Tina Raissi, Wilfried Michel, Ralf Schlüter, Hermann Ney: End-to-End Training of a Neural HMM with Label and Transition Probabilities. CoRR abs/2310.02724 (2023)
- [i50] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: Investigating the Effect of Language Models in Sequence Discriminative Training for Neural Transducers. CoRR abs/2310.07345 (2023)
- [i49] Nick Rossenbach, Benedikt Hilmes, Ralf Schlüter: On the Relevance of Phoneme Duration Variability of Synthesized Training Data for Automatic Speech Recognition. CoRR abs/2310.08132 (2023)
- 2022
- [c215] Mohammad Zeineldeen, Jingjing Xu, Christoph Lüscher, Wilfried Michel, Alexander Gerstenberger, Ralf Schlüter, Hermann Ney: Conformer-Based Hybrid ASR System for Switchboard Dataset. ICASSP 2022: 7437-7441
- [c214] Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney: Improving Factored Hybrid HMM Acoustic Modeling without State Tying. ICASSP 2022: 7442-7446
- [c213] Nils-Philipp Wynands, Wilfried Michel, Jan Rosendahl, Ralf Schlüter, Hermann Ney: Efficient Sequence Training of Attention Models Using Approximative Recombination. ICASSP 2022: 8002-8006
- [c212] Wei Zhou, Zuoyun Zheng, Ralf Schlüter, Hermann Ney: On Language Model Integration for RNN Transducer Based Speech Recognition. ICASSP 2022: 8407-8411
- [c211] Mohammad Zeineldeen, Jingjing Xu, Christoph Lüscher, Ralf Schlüter, Hermann Ney: Improving the Training Recipe for a Robust Conformer-based Hybrid Model. INTERSPEECH 2022: 1036-1040
- [c210] Wei Zhou, Wilfried Michel, Ralf Schlüter, Hermann Ney: Efficient Training of Neural Transducer for Speech Recognition. INTERSPEECH 2022: 2058-2062
- [c209] Zijian Yang, Yingbo Gao, Alexander Gerstenberger, Jintao Jiang, Ralf Schlüter, Hermann Ney: Self-Normalized Importance Sampling for Neural Language Modeling. INTERSPEECH 2022: 3909-3913
- [c208] Felix Meyer, Wilfried Michel, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney: Automatic Learning of Subword Dependent Model Scales. INTERSPEECH 2022: 4133-4136
- [c207] Michael Gansen, Jie Lou, Florian Freye, Tobias Gemmeke, Farhad Merchant, Albert Zeyer, Mohammad Zeineldeen, Ralf Schlüter, Xin Fan: Discrete Steps towards Approximate Computing. ISQED 2022: 1-6
- [c206] Albert Zeyer, Robin Schmitt, Wei Zhou, Ralf Schlüter, Hermann Ney: Monotonic Segmental Attention for Automatic Speech Recognition. SLT 2022: 229-236
- [c205] Tina Raissi, Wei Zhou, Simon Berger, Ralf Schlüter, Hermann Ney: HMM vs. CTC for Automatic Speech Recognition: Comparison Based on Full-Sum Training from Scratch. SLT 2022: 287-294
- [i48] Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney: Improving Factored Hybrid HMM Acoustic Modeling without State Tying. CoRR abs/2201.09692 (2022)
- [i47] Wei Zhou, Wilfried Michel, Ralf Schlüter, Hermann Ney: Efficient Training of Neural Transducer for Speech Recognition. CoRR abs/2204.10586 (2022)
- [i46] Mohammad Zeineldeen, Jingjing Xu, Christoph Lüscher, Ralf Schlüter, Hermann Ney: Improving the Training Recipe for a Robust Conformer-based Hybrid Model. CoRR abs/2206.12955 (2022)
- [i45] Tina Raissi, Wei Zhou, Simon Berger, Ralf Schlüter, Hermann Ney: HMM vs. CTC for Automatic Speech Recognition: Comparison Based on Full-Sum Training from Scratch. CoRR abs/2210.09951 (2022)
- [i44] Christoph Lüscher, Mohammad Zeineldeen, Zijian Yang, Peter Vieting, Khai Le-Duc, Weiyue Wang, Ralf Schlüter, Hermann Ney: Development of Hybrid ASR Systems for Low Resource Medical Domain Conversational Telephone Speech. CoRR abs/2210.13397 (2022)
- [i43] Albert Zeyer, Robin Schmitt, Wei Zhou, Ralf Schlüter, Hermann Ney: Monotonic segmental attention for automatic speech recognition. CoRR abs/2210.14742 (2022)
- [i42] Peter Vieting, Christoph Lüscher, Julian Dierkes, Ralf Schlüter, Hermann Ney: Efficient Use of Large Pre-Trained Models for Low Resource ASR. CoRR abs/2210.15445 (2022)
- [i41] Wei Zhou, Haotian Wu, Jingjing Xu, Mohammad Zeineldeen, Christoph Lüscher, Ralf Schlüter, Hermann Ney: Enhancing and Adversarial: Improve ASR with Speaker Labels. CoRR abs/2211.06369 (2022)
- [i40] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: Lattice-Free Sequence Discriminative Training for Phoneme-Based Neural Transducers. CoRR abs/2212.04325 (2022)
- 2021
- [c204] Peter Vieting, Christoph Lüscher, Wilfried Michel, Ralf Schlüter, Hermann Ney: On Architectures and Training for Raw Waveform Feature Extraction in ASR. ASRU 2021: 267-274
- [c203] Nick Rossenbach, Mohammad Zeineldeen, Benedikt Hilmes, Ralf Schlüter, Hermann Ney: Comparing the Benefit of Synthetic Training Data for Various Automatic Speech Recognition Architectures. ASRU 2021: 788-795
- [c202] Wei Zhou, Simon Berger, Ralf Schlüter, Hermann Ney: Phoneme Based Neural Transducer for Large Vocabulary Speech Recognition. ICASSP 2021: 5644-5648
- [c201] Yingbo Gao, David Thulke, Alexander Gerstenberger, Khoa Viet Tran, Ralf Schlüter, Hermann Ney: On Sampling-Based Training Criteria for Neural Language Modeling. Interspeech 2021: 1877-1881
- [c200] Albert Zeyer, André Merboldt, Wilfried Michel, Ralf Schlüter, Hermann Ney: Librispeech Transducer Model with Internal Language Model Prior Correction. Interspeech 2021: 2052-2056
- [c199] Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: Investigating Methods to Improve Language Model Integration for Attention-Based Encoder-Decoder ASR Models. Interspeech 2021: 2856-2860
- [c198] Wei Zhou, Mohammad Zeineldeen, Zuoyun Zheng, Ralf Schlüter, Hermann Ney: Acoustic Data-Driven Subword Modeling for End-to-End Speech Recognition. Interspeech 2021: 2886-2890
- [c197] Wei Zhou, Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept. Interspeech 2021: 2891-2895
- [c196] Yu Qiao, Wei Zhou, Elma Kerz, Ralf Schlüter: The Impact of ASR on the Automatic Analysis of Linguistic Complexity and Sophistication in Spontaneous L2 Speech. Interspeech 2021: 4453-4457
- [c195] Parnia Bahar, Tobias Bieschke, Ralf Schlüter, Hermann Ney: Tight Integrated End-to-End Training for Cascaded Speech Translation. SLT 2021: 950-957
- [i39] Albert Zeyer, Ralf Schlüter, Hermann Ney: A study of latent monotonic attention variants. CoRR abs/2103.16710 (2021)
- [i38] Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney: Towards Consistent Hybrid HMM Acoustic Modeling. CoRR abs/2104.02387 (2021)
- [i37] Albert Zeyer, André Merboldt, Wilfried Michel, Ralf Schlüter, Hermann Ney: Librispeech Transducer Model with Internal Language Model Prior Correction. CoRR abs/2104.03006 (2021)
- [i36] Peter Vieting, Christoph Lüscher, Wilfried Michel, Ralf Schlüter, Hermann Ney: Feature Replacement and Combination for Hybrid ASR Systems. CoRR abs/2104.04298 (2021)
- [i35] Nick Rossenbach, Mohammad Zeineldeen, Benedikt Hilmes, Ralf Schlüter, Hermann Ney: Comparing the Benefit of Synthetic Training Data for Various Automatic Speech Recognition Architectures. CoRR abs/2104.05379 (2021)
- [i34] Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: Investigating Methods to Improve Language Model Integration for Attention-based Encoder-Decoder ASR Models. CoRR abs/2104.05544 (2021)
- [i33] Wei Zhou, Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept. CoRR abs/2104.06104 (2021)
- [i32] Yu Qiao, Wei Zhou, Elma Kerz, Ralf Schlüter: The Impact of ASR on the Automatic Analysis of Linguistic Complexity and Sophistication in Spontaneous L2 Speech. CoRR abs/2104.08529 (2021)
- [i31] Wei Zhou, Mohammad Zeineldeen, Zuoyun Zheng, Ralf Schlüter, Hermann Ney: Acoustic Data-Driven Subword Modeling for End-to-End Speech Recognition. CoRR abs/2104.09106 (2021)
- [i30] Yingbo Gao, David Thulke, Alexander Gerstenberger, Khoa Viet Tran, Ralf Schlüter, Hermann Ney: On Sampling-Based Training Criteria for Neural Language Modeling. CoRR abs/2104.10507 (2021)
- [i29] Albert Zeyer, Ralf Schlüter, Hermann Ney: Why does CTC result in peaky behavior? CoRR abs/2105.14849 (2021)
- [i28] Wei Zhou, Zuoyun Zheng, Ralf Schlüter, Hermann Ney: On Language Model Integration for RNN Transducer based Speech Recognition. CoRR abs/2110.06841 (2021)
- [i27] Nils-Philipp Wynands, Wilfried Michel, Jan Rosendahl, Ralf Schlüter, Hermann Ney: Efficient Sequence Training of Attention Models using Approximative Recombination. CoRR abs/2110.09245 (2021)
- [i26] Felix Meyer, Wilfried Michel, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney: Automatic Learning of Subword Dependent Model Scales. CoRR abs/2110.09324 (2021)
- [i25] Mohammad Zeineldeen, Jingjing Xu, Christoph Lüscher, Wilfried Michel, Alexander Gerstenberger, Ralf Schlüter, Hermann Ney: Conformer-based Hybrid ASR System for Switchboard Dataset. CoRR abs/2111.03442 (2021)
- [i24] Zijian Yang, Yingbo Gao, Alexander Gerstenberger, Jintao Jiang, Ralf Schlüter, Hermann Ney: Self-Normalized Importance Sampling for Neural Language Modeling. CoRR abs/2111.06310 (2021)
- [i23] Yu Qiao, Sourabh Zanwar, Rishab Bhattacharyya, Daniel Wiechmann, Wei Zhou, Elma Kerz, Ralf Schlüter: Prediction of Listener Perception of Argumentative Speech in a Crowdsourced Dataset Using (Psycho-)Linguistic and Fluency Features. CoRR abs/2111.07130 (2021)
- 2020
- [c194] Kazuki Irie, Alexander Gerstenberger, Ralf Schlüter, Hermann Ney: How Much Self-Attention Do We Need? Trading Attention for Feed-Forward Layers. ICASSP 2020: 6154-6158
- [c193] Wilfried Michel, Ralf Schlüter, Hermann Ney: Frame-Level MMI as a Sequence Discriminative Training Criterion for LVCSR. ICASSP 2020: 6904-6908
- [c192] Nick Rossenbach, Albert Zeyer, Ralf Schlüter, Hermann Ney: Generating Synthetic Audio Data for Attention-Based Speech Recognition Systems. ICASSP 2020: 7069-7073
- [c191] Vitalii Bozheniuk, Albert Zeyer, Ralf Schlüter, Hermann Ney: A Comprehensive Study of Residual CNNs for Acoustic Modeling in ASR. ICASSP 2020: 7674-7678
- [c190] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Layer-Normalized LSTM for Hybrid-HMM and End-to-End ASR. ICASSP 2020: 7679-7683
- [c189] Wei Zhou, Ralf Schlüter, Hermann Ney: Full-Sum Decoding for Hybrid HMM Based Speech Recognition Using LSTM Language Model. ICASSP 2020: 7834-7838
- [c188] Wei Zhou, Wilfried Michel, Kazuki Irie, Markus Kitza, Ralf Schlüter, Hermann Ney: The RWTH ASR System for TED-LIUM Release 2: Improving Hybrid HMM with SpecAugment. ICASSP 2020: 7839-7843
- [c187] Parnia Bahar, Nikita Makarov, Albert Zeyer, Ralf Schlüter, Hermann Ney: Exploring a Zero-Order Direct HMM Based on Latent Attention for Automatic Speech Recognition. ICASSP 2020: 7854-7858
- [c186] Wei Zhou, Ralf Schlüter, Hermann Ney: Robust Beam Search for Encoder-Decoder Attention Based Speech Recognition Without Length Bias. INTERSPEECH 2020: 1768-1772
- [c185] Eugen Beck, Ralf Schlüter, Hermann Ney: LVCSR with Transformer Language Models. INTERSPEECH 2020: 1798-1802
- [c184] Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: A New Training Pipeline for an Improved Neural Transducer. INTERSPEECH 2020: 2812-2816
- [c183] Wilfried Michel, Ralf Schlüter, Hermann Ney: Early Stage LM Integration Using Local and Global Log-Linear Combination. INTERSPEECH 2020: 3605-3609
- [c182] Jingjing Huo, Yingbo Gao, Weiyue Wang, Ralf Schlüter, Hermann Ney: Investigation of Large-Margin Softmax in Neural Language Modeling. INTERSPEECH 2020: 3645-3649
- [c181] Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney: Context-Dependent Acoustic Modeling Without Explicit Phone Clustering. INTERSPEECH 2020: 4377-4381
- [i22] Wei Zhou, Wilfried Michel, Kazuki Irie, Markus Kitza, Ralf Schlüter, Hermann Ney: The RWTH ASR System for TED-LIUM Release 2: Improving Hybrid HMM with SpecAugment. CoRR abs/2004.00960 (2020)
- [i21] Wei Zhou, Ralf Schlüter, Hermann Ney: Full-Sum Decoding for Hybrid HMM based Speech Recognition using LSTM Language Model. CoRR abs/2004.00967 (2020)
- [i20] Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney: Context-Dependent Acoustic Modeling without Explicit Phone Clustering. CoRR abs/2005.07578 (2020)
- [i19] Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: A New Training Pipeline for an Improved Neural Transducer. CoRR abs/2005.09319 (2020)
- [i18] Albert Zeyer, Wei Zhou, Thomas Ng, Ralf Schlüter, Hermann Ney: Investigations on Phoneme-Based End-To-End Speech Recognition. CoRR abs/2005.09336 (2020)
- [i17] Wilfried Michel, Ralf Schlüter, Hermann Ney: Early Stage LM Integration Using Local and Global Log-Linear Combination. CoRR abs/2005.10049 (2020)
- [i16] Jingjing Huo, Yingbo Gao, Weiyue Wang, Ralf Schlüter, Hermann Ney: Investigation of Large-Margin Softmax in Neural Language Modeling. CoRR abs/2005.10089 (2020)
- [i15] Wei Zhou, Simon Berger, Ralf Schlüter, Hermann Ney: Phoneme Based Neural Transducer for Large Vocabulary Speech Recognition. CoRR abs/2010.16368 (2020)
- [i14] Parnia Bahar, Tobias Bieschke, Ralf Schlüter, Hermann Ney: Tight Integrated End-to-End Training for Cascaded Speech Translation. CoRR abs/2011.12167 (2020)
2010 – 2019
- 2019
- [j16] Ralf Schlüter, Eugen Beck, Hermann Ney: Upper and Lower Tight Error Bounds for Feature Omission with an Extension to Context Reduction. IEEE Trans. Pattern Anal. Mach. Intell. 41(2): 502-514 (2019)
- [j15] Muhammad Ali Tahir, Heyun Huang, Albert Zeyer, Ralf Schlüter, Hermann Ney: Training of reduced-rank linear transformations for multi-layer polynomial acoustic features for speech recognition. Speech Commun. 110: 56-63 (2019)
- [c180] Albert Zeyer, Parnia Bahar, Kazuki Irie, Ralf Schlüter, Hermann Ney: A Comparison of Transformer and LSTM Encoder Decoder Models for ASR. ASRU 2019: 8-15
- [c179] Kazuki Irie, Albert Zeyer, Ralf Schlüter, Hermann Ney: Training Language Models for Long-Span Cross-Sentence Evaluation. ASRU 2019: 419-426
- [c178] Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney: On Using 2D Sequence-to-sequence Models for Speech Recognition. ICASSP 2019: 5671-5675
- [c177] Tobias Menne, Ralf Schlüter, Hermann Ney: Investigation into Joint Optimization of Single Channel Speech Enhancement and Acoustic Modeling for Robust ASR. ICASSP 2019: 6660-6664
- [c176] Christoph Lüscher, Eugen Beck, Kazuki Irie, Markus Kitza, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: RWTH ASR Systems for LibriSpeech: Hybrid vs Attention. INTERSPEECH 2019: 231-235
- [c175] Markus Kitza, Pavel Golik, Ralf Schlüter, Hermann Ney: Cumulative Adaptation for BLSTM Acoustic Models. INTERSPEECH 2019: 754-758
- [c174] André Merboldt, Albert Zeyer, Ralf Schlüter, Hermann Ney: An Analysis of Local Monotonic Attention Variants. INTERSPEECH 2019: 1398-1402
- [c173] Wilfried Michel, Ralf Schlüter, Hermann Ney: Comparison of Lattice-Free and Lattice-Based Sequence Discriminative Training Criteria for LVCSR. INTERSPEECH 2019: 1601-1605
- [c172] Tobias Menne, Ilya Sklyar, Ralf Schlüter, Hermann Ney: Analysis of Deep Clustering as Preprocessing for Automatic Speech Recognition of Sparsely Overlapping Speech. INTERSPEECH 2019: 2638-2642
- [c171] Kazuki Irie, Albert Zeyer, Ralf Schlüter, Hermann Ney: Language Modeling with Deep Transformers. INTERSPEECH 2019: 3905-3909
- [c170] Anna Piunova, Eugen Beck, Ralf Schlüter, Hermann Ney: Rescoring Keyword Search Confidence Estimates with Graph-Based Re-Ranking Using Acoustic Word Embeddings. INTERSPEECH 2019: 4205-4209
- [c169] Ralf Schlüter: Survey Talk: Modeling in Automatic Speech Recognition: Beyond Hidden Markov Models. INTERSPEECH 2019
- [c168] Parnia Bahar, Albert Zeyer, Ralf Schlüter, Hermann Ney: On Using SpecAugment for End-to-End Speech Translation. IWSLT 2019
- [i13] Christoph Lüscher, Eugen Beck, Kazuki Irie, Markus Kitza, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: RWTH ASR Systems for LibriSpeech: Hybrid vs Attention - w/o Data Augmentation. CoRR abs/1905.03072 (2019)
- [i12] Tobias Menne, Ilya Sklyar, Ralf Schlüter, Hermann Ney: Analysis of Deep Clustering as Preprocessing for Automatic Speech Recognition of Sparsely Overlapping Speech. CoRR abs/1905.03500 (2019)
- [i11]