Hermann Ney
Person information
- affiliation: RWTH Aachen University, Germany
2020 – today
- 2024
- [j101] David Thulke, Nico Daheim, Christian Dugast, Hermann Ney: Task-Oriented Document-Grounded Dialog Systems by HLTPR@RWTH for DSTC9 and DSTC10. IEEE ACM Trans. Audio Speech Lang. Process. 32: 733-741 (2024)
- [c728] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Chunked Attention-Based Encoder-Decoder Model for Streaming Speech Recognition. ICASSP 2024: 11331-11335
- [c727] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: On the Relation Between Internal Language Model and Sequence Discriminative Training for Neural Transducers. ICASSP 2024: 12627-12631
- [i95] David Thulke, Yingbo Gao, Petrus Pelser, Rein Brune, Rricha Jalota, Floris Fok, Michael Ramos, Ian van Wyk, Abdallah Nasir, Hayden Goldstein, Taylor Tragemann, Katie Nguyen, Ariana Fowler, Andrew Stanco, Jon Gabriel, Jordan Taylor, Dean Moro, Evgenii Tsymbalov, Juliette de Waal, Evgeny Matusov, Mudar Yaghi, Mohammad Shihadah, Hermann Ney, Christian Dugast, Jonathan Dotan, Daniel Erasmus: ClimateGPT: Towards AI Synthesizing Interdisciplinary Research on Climate Change. CoRR abs/2401.09646 (2024)
- [i94] Tina Raissi, Christoph Lüscher, Simon Berger, Ralf Schlüter, Hermann Ney: Investigating the Effect of Label Topology and Training Criterion on ASR Performance and Alignment Quality. CoRR abs/2407.11641 (2024)
- 2023
- [c726] Christian Herold, Yingbo Gao, Mohammad Zeineldeen, Hermann Ney: Improving Language Model Integration for Neural Machine Translation. ACL (Findings) 2023: 7114-7123
- [c725] Christian Herold, Hermann Ney: On Search Strategies for Document-Level Neural Machine Translation. ACL (Findings) 2023: 12827-12836
- [c724] Daniel Mann, Tina Raissi, Wilfried Michel, Ralf Schlüter, Hermann Ney: End-To-End Training of a Neural HMM with Label and Transition Probabilities. ASRU 2023: 1-8
- [c723] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: Investigating The Effect of Language Models in Sequence Discriminative Training For Neural Transducers. ASRU 2023: 1-8
- [c722] Peter Vieting, Christoph Lüscher, Julian Dierkes, Ralf Schlüter, Hermann Ney: Efficient Utilization of Large Pre-Trained Models for Low Resource ASR. ICASSP Workshops 2023: 1-5
- [c721] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: Lattice-Free Sequence Discriminative Training for Phoneme-Based Neural Transducers. ICASSP 2023: 1-5
- [c720] Wei Zhou, Haotian Wu, Jingjing Xu, Mohammad Zeineldeen, Christoph Lüscher, Ralf Schlüter, Hermann Ney: Enhancing and Adversarial: Improve ASR with Speaker Labels. ICASSP 2023: 1-5
- [c719] Wei Zhou, Eugen Beck, Simon Berger, Ralf Schlüter, Hermann Ney: RASR2: The RWTH ASR Toolkit for Generic Sequence-to-sequence Speech Recognition. INTERSPEECH 2023: 4094-4098
- [c718] Tina Raissi, Christoph Lüscher, Moritz Gunz, Ralf Schlüter, Hermann Ney: Competitive and Resource Efficient Factored Hybrid HMM Systems are Simpler Than You Think. INTERSPEECH 2023: 4938-4942
- [c717] Frithjof Petrick, Christian Herold, Pavel Petrushkov, Shahram Khadivi, Hermann Ney: Document-Level Language Models for Machine Translation. WMT 2023: 375-391
- [i93] Christoph Lüscher, Jingjing Xu, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney: Improving And Analyzing Neural Speaker Embeddings for ASR. CoRR abs/2301.04571 (2023)
- [i92] David Thulke, Nico Daheim, Christian Dugast, Hermann Ney: Task-oriented Document-Grounded Dialog Systems by HLTPR@RWTH for DSTC9 and DSTC10. CoRR abs/2304.07101 (2023)
- [i91] Wei Zhou, Eugen Beck, Simon Berger, Ralf Schlüter, Hermann Ney: RASR2: The RWTH ASR Toolkit for Generic Sequence-to-sequence Speech Recognition. CoRR abs/2305.17782 (2023)
- [i90] Christian Herold, Yingbo Gao, Mohammad Zeineldeen, Hermann Ney: Improving Language Model Integration for Neural Machine Translation. CoRR abs/2306.05077 (2023)
- [i89] Christian Herold, Hermann Ney: On Search Strategies for Document-Level Neural Machine Translation. CoRR abs/2306.05116 (2023)
- [i88] Christian Herold, Hermann Ney: Improving Long Context Document-Level Machine Translation. CoRR abs/2306.05183 (2023)
- [i87] Tina Raissi, Christoph Lüscher, Moritz Gunz, Ralf Schlüter, Hermann Ney: Competitive and Resource Efficient Factored Hybrid HMM Systems are Simpler Than You Think. CoRR abs/2306.09517 (2023)
- [i86] Peter Vieting, Ralf Schlüter, Hermann Ney: Comparative Analysis of the wav2vec 2.0 Feature Extractor. CoRR abs/2308.04286 (2023)
- [i85] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Chunked Attention-based Encoder-Decoder Model for Streaming Speech Recognition. CoRR abs/2309.08436 (2023)
- [i84] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: On the Relation between Internal Language Model and Sequence Discriminative Training for Neural Transducers. CoRR abs/2309.14130 (2023)
- [i83] Daniel Mann, Tina Raissi, Wilfried Michel, Ralf Schlüter, Hermann Ney: End-to-End Training of a Neural HMM with Label and Transition Probabilities. CoRR abs/2310.02724 (2023)
- [i82] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: Investigating the Effect of Language Models in Sequence Discriminative Training for Neural Transducers. CoRR abs/2310.07345 (2023)
- [i81] Frithjof Petrick, Christian Herold, Pavel Petrushkov, Shahram Khadivi, Hermann Ney: Document-Level Language Models for Machine Translation. CoRR abs/2310.12303 (2023)
- 2022
- [c716] Christian Herold, Jan Rosendahl, Joris Vanvinckenroye, Hermann Ney: Detecting Various Types of Noise for Neural Machine Translation. ACL (Findings) 2022: 2542-2551
- [c715] Nico Daheim, David Thulke, Christian Dugast, Hermann Ney: Controllable Factuality in Document-Grounded Dialog Systems Using a Noisy Channel Model. EMNLP (Findings) 2022: 1365-1381
- [c714] Baohao Liao, David Thulke, Sanjika Hewavitharana, Hermann Ney, Christof Monz: Mask More and Mask Later: Efficient Pre-training of Masked Language Models by Disentangling the [MASK] Token. EMNLP (Findings) 2022: 1478-1492
- [c713] Viet Anh Khoa Tran, David Thulke, Yingbo Gao, Christian Herold, Hermann Ney: Does Joint Training Really Help Cascaded Speech Translation? EMNLP 2022: 4480-4487
- [c712] Mohammad Zeineldeen, Jingjing Xu, Christoph Lüscher, Wilfried Michel, Alexander Gerstenberger, Ralf Schlüter, Hermann Ney: Conformer-Based Hybrid ASR System For Switchboard Dataset. ICASSP 2022: 7437-7441
- [c711] Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney: Improving Factored Hybrid HMM Acoustic Modeling without State Tying. ICASSP 2022: 7442-7446
- [c710] Nils-Philipp Wynands, Wilfried Michel, Jan Rosendahl, Ralf Schlüter, Hermann Ney: Efficient Sequence Training of Attention Models Using Approximative Recombination. ICASSP 2022: 8002-8006
- [c709] Wei Zhou, Zuoyun Zheng, Ralf Schlüter, Hermann Ney: On Language Model Integration for RNN Transducer Based Speech Recognition. ICASSP 2022: 8407-8411
- [c708] Yingbo Gao, Christian Herold, Zijian Yang, Hermann Ney: Revisiting Checkpoint Averaging for Neural Machine Translation. AACL/IJCNLP (Findings) 2022: 188-196
- [c707] Yingbo Gao, Christian Herold, Zijian Yang, Hermann Ney: Is Encoder-Decoder Redundant for Neural Machine Translation? AACL/IJCNLP (1) 2022: 562-574
- [c706] Mohammad Zeineldeen, Jingjing Xu, Christoph Lüscher, Ralf Schlüter, Hermann Ney: Improving the Training Recipe for a Robust Conformer-based Hybrid Model. INTERSPEECH 2022: 1036-1040
- [c705] Wei Zhou, Wilfried Michel, Ralf Schlüter, Hermann Ney: Efficient Training of Neural Transducer for Speech Recognition. INTERSPEECH 2022: 2058-2062
- [c704] Zijian Yang, Yingbo Gao, Alexander Gerstenberger, Jintao Jiang, Ralf Schlüter, Hermann Ney: Self-Normalized Importance Sampling for Neural Language Modeling. INTERSPEECH 2022: 3909-3913
- [c703] Felix Meyer, Wilfried Michel, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney: Automatic Learning of Subword Dependent Model Scales. INTERSPEECH 2022: 4133-4136
- [c702] Frithjof Petrick, Jan Rosendahl, Christian Herold, Hermann Ney: Locality-Sensitive Hashing for Long Context Neural Machine Translation. IWSLT@ACL 2022: 32-42
- [c701] Albert Zeyer, Robin Schmitt, Wei Zhou, Ralf Schlüter, Hermann Ney: Monotonic Segmental Attention for Automatic Speech Recognition. SLT 2022: 229-236
- [c700] Tina Raissi, Wei Zhou, Simon Berger, Ralf Schlüter, Hermann Ney: HMM vs. CTC for Automatic Speech Recognition: Comparison Based on Full-Sum Training from Scratch. SLT 2022: 287-294
- [i80] Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney: Improving Factored Hybrid HMM Acoustic Modeling without State Tying. CoRR abs/2201.09692 (2022)
- [i79] Wei Zhou, Wilfried Michel, Ralf Schlüter, Hermann Ney: Efficient Training of Neural Transducer for Speech Recognition. CoRR abs/2204.10586 (2022)
- [i78] Mohammad Zeineldeen, Jingjing Xu, Christoph Lüscher, Ralf Schlüter, Hermann Ney: Improving the Training Recipe for a Robust Conformer-based Hybrid Model. CoRR abs/2206.12955 (2022)
- [i77] Tina Raissi, Wei Zhou, Simon Berger, Ralf Schlüter, Hermann Ney: HMM vs. CTC for Automatic Speech Recognition: Comparison Based on Full-Sum Training from Scratch. CoRR abs/2210.09951 (2022)
- [i76] Yingbo Gao, Christian Herold, Zijian Yang, Hermann Ney: Revisiting Checkpoint Averaging for Neural Machine Translation. CoRR abs/2210.11803 (2022)
- [i75] Yingbo Gao, Christian Herold, Zijian Yang, Hermann Ney: Is Encoder-Decoder Redundant for Neural Machine Translation? CoRR abs/2210.11807 (2022)
- [i74] Christoph Lüscher, Mohammad Zeineldeen, Zijian Yang, Peter Vieting, Khai Le-Duc, Weiyue Wang, Ralf Schlüter, Hermann Ney: Development of Hybrid ASR Systems for Low Resource Medical Domain Conversational Telephone Speech. CoRR abs/2210.13397 (2022)
- [i73] Viet Anh Khoa Tran, David Thulke, Yingbo Gao, Christian Herold, Hermann Ney: Does Joint Training Really Help Cascaded Speech Translation? CoRR abs/2210.13700 (2022)
- [i72] Albert Zeyer, Robin Schmitt, Wei Zhou, Ralf Schlüter, Hermann Ney: Monotonic segmental attention for automatic speech recognition. CoRR abs/2210.14742 (2022)
- [i71] Peter Vieting, Christoph Lüscher, Julian Dierkes, Ralf Schlüter, Hermann Ney: Efficient Use of Large Pre-Trained Models for Low Resource ASR. CoRR abs/2210.15445 (2022)
- [i70] Nico Daheim, David Thulke, Christian Dugast, Hermann Ney: Controllable Factuality in Document-Grounded Dialog Systems Using a Noisy Channel Model. CoRR abs/2210.17418 (2022)
- [i69] Baohao Liao, David Thulke, Sanjika Hewavitharana, Hermann Ney, Christof Monz: Mask More and Mask Later: Efficient Pre-training of Masked Language Models by Disentangling the [MASK] Token. CoRR abs/2211.04898 (2022)
- [i68] Wei Zhou, Haotian Wu, Jingjing Xu, Mohammad Zeineldeen, Christoph Lüscher, Ralf Schlüter, Hermann Ney: Enhancing and Adversarial: Improve ASR with Speaker Labels. CoRR abs/2211.06369 (2022)
- [i67] Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney: Lattice-Free Sequence Discriminative Training for Phoneme-Based Neural Transducers. CoRR abs/2212.04325 (2022)
- 2021
- [c699] Evgeniia Tokarchuk, David Thulke, Weiyue Wang, Christian Dugast, Hermann Ney: Investigation on Data Adaptation Techniques for Neural Named Entity Recognition. ACL (student) 2021: 1-15
- [c698] Weiyue Wang, Zijian Yang, Yingbo Gao, Hermann Ney: Transformer-Based Direct Hidden Markov Model for Machine Translation. ACL (student) 2021: 23-32
- [c697] Nico Daheim, David Thulke, Christian Dugast, Hermann Ney: Cascaded Span Extraction and Response Generation for Document-Grounded Dialog. DialDoc@ACL-IJCNLP 2021: 57-62
- [c696] Peter Vieting, Christoph Lüscher, Wilfried Michel, Ralf Schlüter, Hermann Ney: On Architectures and Training for Raw Waveform Feature Extraction in ASR. ASRU 2021: 267-274
- [c695] Nick Rossenbach, Mohammad Zeineldeen, Benedikt Hilmes, Ralf Schlüter, Hermann Ney: Comparing the Benefit of Synthetic Training Data for Various Automatic Speech Recognition Architectures. ASRU 2021: 788-795
- [c694] Wei Zhou, Simon Berger, Ralf Schlüter, Hermann Ney: Phoneme Based Neural Transducer for Large Vocabulary Speech Recognition. ICASSP 2021: 5644-5648
- [c693] Yingbo Gao, David Thulke, Alexander Gerstenberger, Khoa Viet Tran, Ralf Schlüter, Hermann Ney: On Sampling-Based Training Criteria for Neural Language Modeling. Interspeech 2021: 1877-1881
- [c692] Hermann Ney: Forty Years of Speech and Language Processing: From Bayes Decision Rule to Deep Learning. Interspeech 2021
- [c691] Albert Zeyer, André Merboldt, Wilfried Michel, Ralf Schlüter, Hermann Ney: Librispeech Transducer Model with Internal Language Model Prior Correction. Interspeech 2021: 2052-2056
- [c690] Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: Investigating Methods to Improve Language Model Integration for Attention-Based Encoder-Decoder ASR Models. Interspeech 2021: 2856-2860
- [c689] Wei Zhou, Mohammad Zeineldeen, Zuoyun Zheng, Ralf Schlüter, Hermann Ney: Acoustic Data-Driven Subword Modeling for End-to-End Speech Recognition. Interspeech 2021: 2886-2890
- [c688] Wei Zhou, Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept. Interspeech 2021: 2891-2895
- [c687] Evgeniia Tokarchuk, Jan Rosendahl, Weiyue Wang, Pavel Petrushkov, Tomer Lancewicki, Shahram Khadivi, Hermann Ney: Integrated Training for Sequence-to-Sequence Models Using Non-Autoregressive Transformer. IWSLT 2021: 276-286
- [c686] Christian Herold, Jan Rosendahl, Joris Vanvinckenroye, Hermann Ney: Data Filtering using Cross-Lingual Word Embeddings. NAACL-HLT 2021: 162-172
- [c685] Parnia Bahar, Tobias Bieschke, Ralf Schlüter, Hermann Ney: Tight Integrated End-to-End Training for Cascaded Speech Translation. SLT 2021: 950-957
- [c684] Parnia Bahar, Christopher Brix, Hermann Ney: Two-Way Neural Machine Translation: A Proof of Concept for Bidirectional Translation Modeling Using a Two-Dimensional Grid. SLT 2021: 1065-1070
- [i66] David Thulke, Nico Daheim, Christian Dugast, Hermann Ney: Efficient Retrieval Augmented Generation from Unstructured Knowledge for Task-Oriented Dialog. CoRR abs/2102.04643 (2021)
- [i65] Albert Zeyer, Ralf Schlüter, Hermann Ney: A study of latent monotonic attention variants. CoRR abs/2103.16710 (2021)
- [i64] Tina Raissi, Eugen Beck, Ralf Schlüter, Hermann Ney: Towards Consistent Hybrid HMM Acoustic Modeling. CoRR abs/2104.02387 (2021)
- [i63] Albert Zeyer, André Merboldt, Wilfried Michel, Ralf Schlüter, Hermann Ney: Librispeech Transducer Model with Internal Language Model Prior Correction. CoRR abs/2104.03006 (2021)
- [i62] Peter Vieting, Christoph Lüscher, Wilfried Michel, Ralf Schlüter, Hermann Ney: Feature Replacement and Combination for Hybrid ASR Systems. CoRR abs/2104.04298 (2021)
- [i61] Nick Rossenbach, Mohammad Zeineldeen, Benedikt Hilmes, Ralf Schlüter, Hermann Ney: Comparing the Benefit of Synthetic Training Data for Various Automatic Speech Recognition Architectures. CoRR abs/2104.05379 (2021)
- [i60] Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney: Investigating Methods to Improve Language Model Integration for Attention-based Encoder-Decoder ASR Models. CoRR abs/2104.05544 (2021)
- [i59] Wei Zhou, Albert Zeyer, André Merboldt, Ralf Schlüter, Hermann Ney: Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept. CoRR abs/2104.06104 (2021)
- [i58] Wei Zhou, Mohammad Zeineldeen, Zuoyun Zheng, Ralf Schlüter, Hermann Ney: Acoustic Data-Driven Subword Modeling for End-to-End Speech Recognition. CoRR abs/2104.09106 (2021)
- [i57] Yingbo Gao, David Thulke, Alexander Gerstenberger, Khoa Viet Tran, Ralf Schlüter, Hermann Ney: On Sampling-Based Training Criteria for Neural Language Modeling. CoRR abs/2104.10507 (2021)
- [i56] Albert Zeyer, Ralf Schlüter, Hermann Ney: Why does CTC result in peaky behavior? CoRR abs/2105.14849 (2021)
- [i55] Nico Daheim, David Thulke, Christian Dugast, Hermann Ney: Cascaded Span Extraction and Response Generation for Document-Grounded Dialog. CoRR abs/2106.07275 (2021)
- [i54] Evgeniia Tokarchuk, Jan Rosendahl, Weiyue Wang, Pavel Petrushkov, Tomer Lancewicki, Shahram Khadivi, Hermann Ney: Integrated Training for Sequence-to-Sequence Models Using Non-Autoregressive Transformer. CoRR abs/2109.12950 (2021)
- [i53] Evgeniia Tokarchuk, Jan Rosendahl, Weiyue Wang, Pavel Petrushkov, Tomer Lancewicki, Shahram Khadivi, Hermann Ney: Towards Reinforcement Learning for Pivot-based Neural Machine Translation with Non-autoregressive Transformer. CoRR abs/2109.13097 (2021)
- [i52] Evgeniia Tokarchuk, David Thulke, Weiyue Wang, Christian Dugast, Hermann Ney: Investigation on Data Adaptation Techniques for Neural Named Entity Recognition. CoRR abs/2110.05892 (2021)
- [i51] Wei Zhou, Zuoyun Zheng, Ralf Schlüter, Hermann Ney: On Language Model Integration for RNN Transducer based Speech Recognition. CoRR abs/2110.06841 (2021)
- [i50] Nils-Philipp Wynands, Wilfried Michel, Jan Rosendahl, Ralf Schlüter, Hermann Ney: Efficient Sequence Training of Attention Models using Approximative Recombination. CoRR abs/2110.09245 (2021)
- [i49] Felix Meyer, Wilfried Michel, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney: Automatic Learning of Subword Dependent Model Scales. CoRR abs/2110.09324 (2021)
- [i48] Mohammad Zeineldeen, Jingjing Xu, Christoph Lüscher, Wilfried Michel, Alexander Gerstenberger, Ralf Schlüter, Hermann Ney: Conformer-based Hybrid ASR System for Switchboard Dataset. CoRR abs/2111.03442 (2021)
- [i47] Zijian Yang, Yingbo Gao, Alexander Gerstenberger, Jintao Jiang, Ralf Schlüter, Hermann Ney: Self-Normalized Importance Sampling for Neural Language Modeling. CoRR abs/2111.06310 (2021)
- [i46] David Thulke, Nico Daheim, Christian Dugast, Hermann Ney: Adapting Document-Grounded Dialog Systems to Spoken Conversations using Data Augmentation and a Noisy Channel Model. CoRR abs/2112.08844 (2021)
- 2020
- [j100] Oscar Koller, Necati Cihan Camgöz, Hermann Ney, Richard Bowden: Weakly Supervised Learning with Multi-Stream CNN-LSTM-HMMs to Discover Sequential Parallelism in Sign Language Videos. IEEE Trans. Pattern Anal. Mach. Intell. 42(9): 2306-2320 (2020)
- [c683] Christopher Brix, Parnia Bahar, Hermann Ney: Successfully Applying the Stabilized Lottery Ticket Hypothesis to the Transformer Architecture. ACL 2020: 3909-3915
- [c682] Parnia Bahar, Nikita Makarov, Hermann Ney: Investigation of Transformer-based Latent Attention Models for Neural Machine Translation. AMTA 2020: 7-20
- [c681] Matthias Huck, Hermann Ney: Pivot Lightly-Supervised Training for Statistical Machine Translation. AMTA 2020
- [c680] Yingbo Gao, Baohao Liao, Hermann Ney: Unifying Input and Output Smoothing in Neural Machine Translation. COLING 2020: 4361-4372
- [c679] Zhihong Lei, Weiyue Wang, Christian Dugast, Hermann Ney: Neural Language Modeling for Named Entity Recognition. COLING 2020: 6937-6941
- [c678] Yunsu Kim, Miguel Graça, Hermann Ney: When and Why is Unsupervised Neural Machine Translation Useless? EAMT 2020: 35-44
- [c677] Baohao Liao, Yingbo Gao, Hermann Ney: Multi-Agent Mutual Learning at Sentence-Level and Token-Level for Neural Machine Translation. EMNLP (Findings) 2020: 1715-1724
- [c676] Kazuki Irie, Alexander Gerstenberger, Ralf Schlüter, Hermann Ney: How Much Self-Attention Do We Need? Trading Attention for Feed-Forward Layers. ICASSP 2020: 6154-6158
- [c675] Wilfried Michel, Ralf Schlüter, Hermann Ney: Frame-Level MMI as A Sequence Discriminative Training Criterion for LVCSR. ICASSP 2020: 6904-6908
- [c674] Nick Rossenbach, Albert Zeyer, Ralf Schlüter, Hermann Ney: Generating Synthetic Audio Data for Attention-Based Speech Recognition Systems. ICASSP 2020: 7069-7073
- [c673] Vitalii Bozheniuk, Albert Zeyer, Ralf Schlüter, Hermann Ney: A Comprehensive Study of Residual CNNS for Acoustic Modeling in ASR. ICASSP 2020: 7674-7678
- [c672] Mohammad Zeineldeen, Albert Zeyer, Ralf Schlüter, Hermann Ney: Layer-Normalized LSTM for Hybrid-Hmm and End-To-End ASR. ICASSP 2020: 7679-7683
- [c671] Wei Zhou, Ralf Schlüter, Hermann Ney: Full-Sum Decoding for Hybrid Hmm Based Speech Recognition Using LSTM Language Model. ICASSP 2020: 7834-7838
- [c670] Wei Zhou, Wilfried Michel, Kazuki Irie, Markus Kitza, Ralf Schlüter, Hermann Ney: The Rwth Asr System for Ted-Lium Release 2: Improving Hybrid Hmm With Specaugment. ICASSP 2020: 7839-7843
- [c669] Parnia Bahar, Nikita Makarov, Albert Zeyer, Ralf Schlüter, Hermann Ney: Exploring A Zero-Order Direct Hmm Based on Latent Attention for Automatic Speech Recognition. ICASSP 2020: 7854-7858
- [c668] Alexander Gerstenberger, Kazuki Irie, Pavel Golik, Eugen Beck, Hermann Ney: Domain Robust, Fast, and Compact Neural Language Models. ICASSP 2020: 7954-7958
- [c667] Yingbo Gao, Weiyue Wang, Christian Herold, Zijian Yang, Hermann Ney: Towards a Better Understanding of Label Smoothing in Neural Machine Translation. AACL/IJCNLP 2020: 212-223
- [c666] Zijian Yang, Yingbo Gao, Weiyue Wang, Hermann Ney: Predicting and Using Target Length in Neural Machine Translation.