Johannes Fürnkranz
Person information
- affiliation: Johannes Kepler University of Linz, Austria
- affiliation (former): TU Darmstadt, Germany
2020 – today
- 2024
- [j51] Martin Atzmueller, Johannes Fürnkranz, Tomás Kliegr, Ute Schmid: Explainable and interpretable machine learning and data mining. Data Min. Knowl. Discov. 38(5): 2571-2595 (2024)
- [j50] Stefan Heid, Jonas Hanselle, Johannes Fürnkranz, Eyke Hüllermeier: Learning decision catalogues for situated decision making: The case of scoring systems. Int. J. Approx. Reason. 171: 109190 (2024)
- [j49] Anna-Christina Glock, Florian Sobieczky, Johannes Fürnkranz, Peter Filzmoser, Martin Jech: Predictive change point detection for heterogeneous data. Neural Comput. Appl. 36(26): 16071-16096 (2024)
- [c136] Timo Bertram, Johannes Fürnkranz, Martin Müller: Learning With Generalised Card Representations for "Magic: The Gathering". CoG 2024: 1-8
- [c135] Anna-Christina Glock, Johannes Fürnkranz: Dynamic Time Warping for Phase Recognition in Tribological Sensor Data. DaWaK 2024: 245-250
- [c134] Timo Bertram, Johannes Fürnkranz, Martin Müller: Efficiently Training Neural Networks for Imperfect Information Games by Sampling Information Sets. KI 2024: 17-29
- [i43] Timo Bertram, Johannes Fürnkranz, Martin Müller: Neural Network-based Information Set Weighting for Playing Reconnaissance Blind Chess. CoRR abs/2407.05864 (2024)
- [i42] Timo Bertram, Johannes Fürnkranz, Martin Müller: Efficiently Training Neural Networks for Imperfect Information Games by Sampling Information Sets. CoRR abs/2407.05876 (2024)
- [i41] Timo Bertram, Johannes Fürnkranz, Martin Müller: Learning With Generalised Card Representations for "Magic: The Gathering". CoRR abs/2407.05879 (2024)
- [i40] Timo Bertram, Johannes Fürnkranz, Martin Müller: Contrastive Learning of Preferences with a Contextual InfoNCE Loss. CoRR abs/2407.05898 (2024)
- [i39] Jonas Hanselle, Stefan Heid, Johannes Fürnkranz, Eyke Hüllermeier: Probabilistic Scoring Lists for Interpretable Machine Learning. CoRR abs/2407.21535 (2024)
- 2023
- [j48] Phuong Huynh Van Quoc, Johannes Fürnkranz, Florian Beck: Efficient learning of large sets of locally optimal classification rules. Mach. Learn. 112(2): 571-610 (2023)
- [j47] Eneldo Loza Mencía, Moritz Kulessa, Simon Bohlender, Johannes Fürnkranz: Tree-based dynamic classifier chains. Mach. Learn. 112(11): 4129-4165 (2023)
- [c133] Timo Bertram, Johannes Fürnkranz, Martin Müller: Weighting Information Sets with Siamese Neural Networks in Reconnaissance Blind Chess. CoG 2023: 1-8
- [c132] Jonas Hanselle, Johannes Fürnkranz, Eyke Hüllermeier: Probabilistic Scoring Lists for Interpretable Machine Learning. DS 2023: 189-203
- [c131] Florian Beck, Johannes Fürnkranz, Phuong Huynh Van Quoc: Generalizing Conjunctive and Disjunctive Rule Learning to Learning m-of-n Concepts. ITAT 2023: 8-13
- [c130] Florian Beck, Johannes Fürnkranz, Phuong Huynh Van Quoc: Layerwise Learning of Mixed Conjunctive and Disjunctive Rule Sets. RuleML+RR 2023: 95-109
- [i38] Phuong Huynh Van Quoc, Johannes Fürnkranz, Florian Beck: Efficient learning of large sets of locally optimal classification rules. CoRR abs/2301.09936 (2023)
- [i37] Anna-Christina Glock, Florian Sobieczky, Johannes Fürnkranz, Peter Filzmoser, Martin Jech: Predictive change point detection for heterogeneous data. CoRR abs/2305.06630 (2023)
- 2022
- [j46] Antonella Plaia, Simona Buscemi, Johannes Fürnkranz, Eneldo Loza Mencía: Comparing Boosting and Bagging for Decision Trees of Rankings. J. Classif. 39(1): 78-99 (2022)
- [j45] Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencía, Johannes Fürnkranz, Michael Rapp: A flexible class of dependence-aware multi-label loss functions. Mach. Learn. 111(2): 713-737 (2022)
- [c129] Timo Bertram, Johannes Fürnkranz, Martin Müller: Supervised and Reinforcement Learning from Observations in Reconnaissance Blind Chess. CoG 2022: 608-611
- [c128] Phuong Huynh Van Quoc, Florian Beck, Johannes Fürnkranz: Incremental Update of Locally Optimal Classification Rules. DS 2022: 104-113
- [c127] Johannes Fürnkranz: Towards Deep and Interpretable Rule Learning (invited paper). ITAT 2022: 2
- [c126] Florian Beck, Johannes Fürnkranz, Phuong Huynh Van Quoc: On the Incremental Construction of Deep Rule Theories. ITAT 2022: 21-27
- [c125] Aïssatou Diallo, Johannes Fürnkranz: Unsupervised Alignment of Distributional Word Embeddings. KI 2022: 60-74
- [i36] Aïssatou Diallo, Johannes Fürnkranz: GausSetExpander: A Simple Approach for Entity Set Expansion. CoRR abs/2202.13649 (2022)
- [i35] Timo Bertram, Johannes Fürnkranz, Martin Müller: Quantity vs Quality: Investigating the Trade-Off between Sample Size and Label Reliability. CoRR abs/2204.09462 (2022)
- [i34] Timo Bertram, Johannes Fürnkranz, Martin Müller: Supervised and Reinforcement Learning from Observations in Reconnaissance Blind Chess. CoRR abs/2208.02029 (2022)
- 2021
- [j44] Tomás Kliegr, Stepán Bahník, Johannes Fürnkranz: A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artif. Intell. 295: 103458 (2021)
- [j43] Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz: A Unifying Framework and Comparative Evaluation of Statistical and Machine Learning Approaches to Non-Specific Syndromic Surveillance. Comput. 10(3): 32 (2021)
- [j42] Aïssatou Diallo, Johannes Fürnkranz: Learning Ordinal Embedding from Sets. Entropy 23(8): 964 (2021)
- [j41] Michael Rapp, Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz: Correlation-Based Discovery of Disease Patterns for Syndromic Surveillance. Frontiers Big Data 4: 784159 (2021)
- [j40] Florian Beck, Johannes Fürnkranz: An Empirical Investigation Into Deep and Shallow Rule Learning. Frontiers Artif. Intell. 4: 689398 (2021)
- [c124] Moritz Kulessa, Bennet Wittelsbach, Eneldo Loza Mencía, Johannes Fürnkranz: Sum-Product Networks for Early Outbreak Detection of Emerging Diseases. AIME 2021: 61-71
- [c123] Timo Bertram, Johannes Fürnkranz, Martin Müller: Predicting Human Card Selection in Magic: The Gathering with Contextual Preference Ranking. CoG 2021: 1-8
- [c122] Jessica Fritz, Johannes Fürnkranz: Some Chess-Specific Improvements for Perturbation-Based Saliency Maps. CoG 2021: 1-8
- [c121] Aïssatou Diallo, Johannes Fürnkranz: Elliptical Ordinal Embedding. DS 2021: 323-333
- [c120] Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz: Revisiting Non-specific Syndromic Surveillance. IDA 2021: 128-140
- [c119] Florian Beck, Johannes Fürnkranz: Beyond DNF: First Steps towards Deep Rule Learning. ITAT 2021: 61-68
- [c118] Ryan W. Gardner, Gino Perrotta, Anvay Shah, Shivaram Kalyanakrishnan, Kevin A. Wang, Gregory Clark, Timo Bertram, Johannes Fürnkranz, Martin Müller, Brady P. Garrison, Prithviraj Dasgupta, Saeid Rezaei: The Machine Reconnaissance Blind Chess Tournament of NeurIPS 2022. NeurIPS (Competition and Demos) 2021: 119-132
- [c117] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier: Gradient-Based Label Binning in Multi-label Classification. ECML/PKDD (3) 2021: 462-477
- [c116] Florian Beck, Johannes Fürnkranz, Phuong Huynh Van Quoc: Structuring Rule Sets Using Binary Decision Diagrams. RuleML+RR 2021: 48-61
- [i33] Tobias Joppen, Johannes Fürnkranz: Ordinal Monte Carlo Tree Search. CoRR abs/2101.10670 (2021)
- [i32] Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz: Revisiting Non-Specific Syndromic Surveillance. CoRR abs/2101.12246 (2021)
- [i31] Aïssatou Diallo, Johannes Fürnkranz: Elliptical Ordinal Embedding. CoRR abs/2105.10457 (2021)
- [i30] Timo Bertram, Johannes Fürnkranz, Martin Müller: Predicting Human Card Selection in Magic: The Gathering with Contextual Preference Ranking. CoRR abs/2105.11864 (2021)
- [i29] Florian Beck, Johannes Fürnkranz: An Investigation into Mini-Batch Rule Learning. CoRR abs/2106.10202 (2021)
- [i28] Florian Beck, Johannes Fürnkranz: An Empirical Investigation into Deep and Shallow Rule Learning. CoRR abs/2106.10254 (2021)
- [i27] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier: Gradient-based Label Binning in Multi-label Classification. CoRR abs/2106.11690 (2021)
- [i26] Timo Bertram, Johannes Fürnkranz, Martin Müller: A Comparison of Contextual and Non-Contextual Preference Ranking for Set Addition Problems. CoRR abs/2107.04438 (2021)
- [i25] Michael Rapp, Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz: Correlation-based Discovery of Disease Patterns for Syndromic Surveillance. CoRR abs/2110.09208 (2021)
- [i24] Eneldo Loza Mencía, Moritz Kulessa, Simon Bohlender, Johannes Fürnkranz: Tree-Based Dynamic Classifier Chains. CoRR abs/2112.06672 (2021)
- 2020
- [j39] Johannes Czech, Moritz Willig, Alena Beyer, Kristian Kersting, Johannes Fürnkranz: Learning to Play the Chess Variant Crazyhouse Above World Champion Level With Deep Neural Networks and Human Data. Frontiers Artif. Intell. 3: 24 (2020)
- [j38] Johannes Fürnkranz, Tomás Kliegr, Heiko Paulheim: On cognitive preferences and the plausibility of rule-based models. Mach. Learn. 109(4): 853-898 (2020)
- [c115] Vu-Linh Nguyen, Eyke Hüllermeier, Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz: On Aggregation in Ensembles of Multilabel Classifiers. DS 2020: 533-547
- [c114] Aïssatou Diallo, Markus Zopf, Johannes Fürnkranz: Permutation Learning via Lehmer Codes. ECAI 2020: 1095-1102
- [c113] Eyke Hüllermeier, Johannes Fürnkranz, Eneldo Loza Mencía: Conformal Rule-Based Multi-label Classification. KI 2020: 290-296
- [c112] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Vu-Linh Nguyen, Eyke Hüllermeier: Learning Gradient Boosted Multi-label Classification Rules. ECML/PKDD (3) 2020: 124-140
- [c111] Eyke Hüllermeier, Johannes Fürnkranz, Eneldo Loza Mencía, Vu-Linh Nguyen, Michael Rapp: Rule-Based Multi-label Classification: Challenges and Opportunities. RuleML+RR 2020: 3-19
- [p6] Christian Bauckhage, Johannes Fürnkranz, Gerhard Paaß: Vertrauenswürdiges, transparentes und robustes Maschinelles Lernen. Handbuch der Künstlichen Intelligenz 2020: 571-600
- [i23] Vu-Linh Nguyen, Eyke Hüllermeier, Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz: On Aggregation in Ensembles of Multilabel Classifiers. CoRR abs/2006.11916 (2020)
- [i22] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Vu-Linh Nguyen, Eyke Hüllermeier: Learning Gradient Boosted Multi-label Classification Rules. CoRR abs/2006.13346 (2020)
- [i21] Eyke Hüllermeier, Johannes Fürnkranz, Eneldo Loza Mencía: Conformal Rule-Based Multi-label Classification. CoRR abs/2007.08145 (2020)
- [i20] Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencía, Johannes Fürnkranz, Michael Rapp: A Flexible Class of Dependence-aware Multi-Label Loss Functions. CoRR abs/2011.00792 (2020)
- [i19] Johannes Fürnkranz, Eyke Hüllermeier, Eneldo Loza Mencía, Michael Rapp: Learning Structured Declarative Rule Sets - A Challenge for Deep Discrete Learning. CoRR abs/2012.04377 (2020)
2010 – 2019
- 2019
- [j37] Julian Schwehr, Stefan Luthardt, Hien Q. Dang, Maren Henzel, Hermann Winner, Jürgen Adamy, Johannes Fürnkranz, Volker Willert, Benedikt Lattke, Maximilian Höpfl, Christoph Wannemacher: The PRORETA 4 City Assistant System. Autom. 67(9): 783-798 (2019)
- [c110] Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz: Improving the Fusion of Outbreak Detection Methods with Supervised Learning. CIBB 2019: 55-66
- [c109] Tobias Joppen, Tilman Strübig, Johannes Fürnkranz: Ordinal Bucketing for Game Trees using Dynamic Quantile Approximation. CoG 2019: 1-8
- [c108] Aïssatou Diallo, Markus Zopf, Johannes Fürnkranz: Learning Analogy-Preserving Sentence Embeddings for Answer Selection. CoNLL 2019: 910-919
- [c107] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz: On the Trade-Off Between Consistency and Coverage in Multi-label Rule Learning Heuristics. DS 2019: 96-111
- [c106] Lukas Fleckenstein, Sebastian Kauschke, Johannes Fürnkranz: Beta Distribution Drift Detection for Adaptive Classifiers. ESANN 2019
- [c105] Jinseok Nam, Young-Bum Kim, Eneldo Loza Mencía, Sunghyun Park, Ruhi Sarikaya, Johannes Fürnkranz: Learning Context-dependent Label Permutations for Multi-label Classification. ICML 2019: 4733-4742
- [c104] Sebastian Kauschke, Lukas Fleckenstein, Johannes Fürnkranz: Mending is Better than Ending: Adapting Immutable Classifiers to Nonstationary Environments using Ensembles of Patches. IJCNN 2019: 1-8
- [c103] Sebastian Kauschke, David Hermann Lehmann, Johannes Fürnkranz: Patching Deep Neural Networks for Nonstationary Environments. IJCNN 2019: 1-8
- [c102] Hien Q. Dang, Johannes Fürnkranz: Driver Information Embedding with Siamese LSTM networks. IV 2019: 935-940
- [c101] Maryam Tavakol, Tobias Joppen, Ulf Brefeld, Johannes Fürnkranz: Personalized Transaction Kernels for Recommendation Using MCTS. KI 2019: 338-352
- [c100] Aïssatou Diallo, Markus Zopf, Johannes Fürnkranz: Improving Answer Selection with Analogy-Preserving Sentence Embeddings. LWDA 2019: 84-88
- [c99] Alexander Zap, Tobias Joppen, Johannes Fürnkranz: Deep Ordinal Reinforcement Learning. ECML/PKDD (3) 2019: 3-18
- [i18] Tobias Joppen, Johannes Fürnkranz: Ordinal Monte Carlo Tree Search. CoRR abs/1901.04274 (2019)
- [i17] Alexander Zap, Tobias Joppen, Johannes Fürnkranz: Deep Ordinal Reinforcement Learning. CoRR abs/1905.02005 (2019)
- [i16] Tobias Joppen, Tilman Strübig, Johannes Fürnkranz: Ordinal Bucketing for Game Trees using Dynamic Quantile Approximation. CoRR abs/1905.13449 (2019)
- [i15] Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz: Improving Outbreak Detection with Stacking of Statistical Surveillance Methods. CoRR abs/1907.07464 (2019)
- [i14] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz: On the Trade-off Between Consistency and Coverage in Multi-label Rule Learning Heuristics. CoRR abs/1908.03032 (2019)
- [i13] Johannes Czech, Moritz Willig, Alena Beyer, Kristian Kersting, Johannes Fürnkranz: Learning to play the Chess Variant Crazyhouse above World Champion Level with Deep Neural Networks and Human Data. CoRR abs/1908.06660 (2019)
- [i12] Aïssatou Diallo, Markus Zopf, Johannes Fürnkranz: Learning Analogy-Preserving Sentence Embeddings for Answer Selection. CoRR abs/1910.05315 (2019)
- [i11] Tomás Kliegr, Stepán Bahník, Johannes Fürnkranz: Advances in Machine Learning for the Behavioral Sciences. CoRR abs/1911.03249 (2019)
- [i10] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz: Simplifying Random Forests: On the Trade-off between Interpretability and Accuracy. CoRR abs/1911.04393 (2019)
- 2018
- [j36] Tobias Joppen, Miriam Ulrike Moneke, Nils Schröder, Christian Wirth, Johannes Fürnkranz: Informed Hybrid Game Tree Search for General Video Game Playing. IEEE Trans. Games 10(1): 78-90 (2018)
- [c98] Sebastian Kauschke, Johannes Fürnkranz: Batchwise Patching of Classifiers. AAAI 2018: 3374-3381
- [c97] Sebastian Kauschke, Max Mühlhäuser, Johannes Fürnkranz: Leveraging Reproduction-Error Representations for Multi-Instance Classification. DS 2018: 83-95
- [c96] Sebastian Kauschke, Max Mühlhäuser, Johannes Fürnkranz: Towards Semi-Supervised Classification of Event Streams via Denoising Autoencoders. ICMLA 2018: 131-136
- [c95] Johannes Fürnkranz, Tomás Kliegr: The Need for Interpretability Biases. IDA 2018: 15-27
- [c94] Hien Q. Dang, Johannes Fürnkranz: Using Past Maneuver Executions for Personalization of a Driver Model. ITSC 2018: 742-748
- [c93] Tobias Joppen, Christian Wirth, Johannes Fürnkranz: Preference-Based Monte Carlo Tree Search. KI 2018: 327-340
- [c92] Hien Q. Dang, Johannes Fürnkranz: Exploiting Maneuver Dependency for Personalization of a Driver Model. LWDA 2018: 93-97
- [c91] Markus Zopf, Eneldo Loza Mencía, Johannes Fürnkranz: Which Scores to Predict in Sentence Regression for Text Summarization? NAACL-HLT 2018: 1782-1791
- [c90] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz: Exploiting Anti-monotonicity of Multi-label Evaluation Measures for Inducing Multi-label Rules. PAKDD (1) 2018: 29-42
- [c89] Markus Zopf, Teresa Botschen, Tobias Falke, Benjamin Heinzerling, Ana Marasovic, Todor Mihaylov, Avinesh P. V. S., Eneldo Loza Mencía, Johannes Fürnkranz, Anette Frank: What's Important in a Text? An Extensive Evaluation of Linguistic Annotations for Summarization. SNAMS 2018: 272-277
- [i9] Johannes Fürnkranz, Tomás Kliegr, Heiko Paulheim: On Cognitive Preferences and the Interpretability of Rule-based Models. CoRR abs/1803.01316 (2018)
- [i8] Tomás Kliegr, Stepán Bahník, Johannes Fürnkranz: A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. CoRR abs/1804.02969 (2018)
- [i7] Tobias Joppen, Christian Wirth, Johannes Fürnkranz: Preference-Based Monte Carlo Tree Search. CoRR abs/1807.06286 (2018)
- [i6] Lukas Fleckenstein, Sebastian Kauschke, Johannes Fürnkranz: Beta Distribution Drift Detection for Adaptive Classifiers. CoRR abs/1811.10900 (2018)
- [i5] Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier, Michael Rapp: Learning Interpretable Rules for Multi-label Classification. CoRR abs/1812.00050 (2018)
- [i4] Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz: Exploiting Anti-monotonicity of Multi-label Evaluation Measures for Inducing Multi-label Rules. CoRR abs/1812.06833 (2018)
- 2017
- [j35] Anita Valmarska, Nada Lavrac, Johannes Fürnkranz, Marko Robnik-Sikonja: Refinement and selection heuristics in subgroup discovery and classification rule learning. Expert Syst. Appl. 81: 147-162 (2017)
- [j34] Christian Wirth, Riad Akrour, Gerhard Neumann, Johannes Fürnkranz: A Survey of Preference-Based Reinforcement Learning Methods. J. Mach. Learn. Res. 18: 136:1-136:46 (2017)
- [c88] Iryna Gurevych, Christian M. Meyer, Carsten Binnig, Johannes Fürnkranz, Kristian Kersting, Stefan Roth, Edwin Simpson: Interactive Data Analytics for the Humanities. CICLing (1) 2017: 527-549
- [c87] Andrei Tolstikov, Frederik Janssen, Johannes Fürnkranz: Evaluation of Different Heuristics for Accommodating Asymmetric Loss Functions in Regression. DS 2017: 67-81
- [c86] Camila González, Eneldo Loza Mencía, Johannes Fürnkranz: Re-training Deep Neural Networks to Facilitate Boolean Concept Extraction. DS 2017: 127-143
- [c85] Hien Q. Dang, Johannes Fürnkranz, Alexander Biedermann, Maximilian Höpfl: Time-to-lane-change prediction with deep learning. ITSC 2017: 1-7
- [c84] Jinseok Nam, Eneldo Loza Mencía, Hyunwoo J. Kim, Johannes Fürnkranz: Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification. NIPS 2017: 5413-5423
- [c83] Mohammed Arif Khan, Asif Ekbal, Eneldo Loza Mencía, Johannes Fürnkranz: Multi-objective Optimisation-Based Feature Selection for Multi-label Classification. NLDB 2017: 38-41
- [e7] Gabriele Kern-Isberner, Johannes Fürnkranz, Matthias Thimm: KI 2017: Advances in Artificial Intelligence - 40th Annual German Conference on AI, Dortmund, Germany, September 25-29, 2017, Proceedings. Lecture Notes in Computer Science 10505, Springer 2017, ISBN 978-3-319-67189-5 [contents]
- [r21] Johannes Fürnkranz: Class Binarization. Encyclopedia of Machine Learning and Data Mining 2017: 203-204
- [r20] Johannes Fürnkranz: Classification Rule. Encyclopedia of Machine Learning and Data Mining 2017: 209
- [r19] Johannes Fürnkranz: Covering Algorithm. Encyclopedia of Machine Learning and Data Mining 2017: 293-294
- [r18] Johannes Fürnkranz: Decision List. Encyclopedia of Machine Learning and Data Mining 2017: 328
- [r17]