Daniel Kudenko
2020 – today
- 2024
- [j32] Danila Valko, Daniel Kudenko: Reducing CO2 emissions in a peer-to-peer distributed payment network: Does geography matter in the lightning network? Comput. Networks 243: 110297 (2024)
- [c104] Eric Wete, Joel Greenyer, Daniel Kudenko, Wolfgang Nejdl: Multi-Robot Motion and Task Planning in Automotive Production Using Controller-based Safe Reinforcement Learning. AAMAS 2024: 1928-1937
- [c103] Petr Motlícek, Erinç Dikici, Srikanth R. Madikeri, Pradeep Rangappa, Miroslav Jánosík, Gerhard Backfried, Dorothea Thomas-Aniola, Maximilian Schürz, Johan Rohdin, Petr Schwarz, Marek Kovác, Kvetoslav Malý, Dominik Bobos, Mathias Leibiger, Costas Kalogiros, Andreas Alexopoulos, Daniel Kudenko, Zahra Ahmadi, Hoang H. Nguyen, Aravind Krishnan, Dawei Zhu, Dietrich Klakow, Maria Jofre, Francesco Calderoni, Denis Marraud, Nikolaos Koutras, Nikos Nikolau, Christiana Aposkiti, Panagiotis Douris, Konstantinos Gkountas, Eleni-Konstantina Sergidou, Wauter Bosma, Joshua Hughes, Hellenic Police Team: ROXSD: The ROXANNE Multimodal and Simulated Dataset for Advancing Criminal Investigations. Odyssey 2024: 17-24
- [i15] Dren Fazlija, Arkadij Orlov, Johanna Schrader, Monty-Maximilian Zühlke, Michael Rohs, Daniel Kudenko: How Real Is Real? A Human Evaluation Framework for Unrestricted Adversarial Examples. CoRR abs/2404.12653 (2024)
- [i14] Danila Valko, Daniel Kudenko: Sustainable broadcasting in Blockchain Network with Reinforcement Learning. CoRR abs/2407.15616 (2024)
- 2023
- [j31] Sajjad Kamali Siahroudi, Daniel Kudenko: An effective single-model learning for multi-label data. Expert Syst. Appl. 232: 120887 (2023)
- [j30] Zahra Ahmadi, Hoang H. Nguyen, Zijian Zhang, Dmytro Bozhkov, Daniel Kudenko, Maria Jofre, Francesco Calderoni, Noa Cohen, Yosef Solewicz: Inductive and transductive link prediction for criminal network analysis. J. Comput. Sci. 72: 102063 (2023)
- [c102] Robert Wardenga, Liubov Kovriguina, Dmitrii Pliukhin, Daniil Radyush, Ivan Smoliakov, Yuan Xue, Henrik Müller, Aleksei Pismerov, Dmitry Mouromtsev, Daniel Kudenko: Knowledge Graph Injection for Reinforcement Learning. DL4KG@ISWC 2023
- [c101] Eric Wete, Joel Greenyer, Andreas Wortmann, Daniel Kudenko, Wolfgang Nejdl: MDE and Learning for flexible Planning and optimized Execution of Multi-Robot Choreographies. ETFA 2023: 1-4
- [c100] Sajjad Kamali Siahroudi, Daniel Kudenko: Partial Multi-label Learning via Constraint Clustering. ICONIP (11) 2023: 453-469
- 2022
- [c99] Mark Ferguson, Sam Devlin, Daniel Kudenko, James Alfred Walker: Imitating Playstyle with Dynamic Time Warping Imitation. FDG 2022: 41:1-41:11
- [c98] Joshua Riley, Radu Calinescu, Colin Paterson, Daniel Kudenko, Alec Banks: Assured Multi-agent Reinforcement Learning with Robust Agent-Interaction Adaptability. KES-IDT 2022: 87-97
- [c97] Eric Wete, Joel Greenyer, Daniel Kudenko, Wolfgang Nejdl, Oliver Flegel, Dennes Eisner: A tool for the automation of efficient multi-robot choreography planning and execution. MoDELS (Companion) 2022: 37-41
- [i13] Amir Abolfazli, Gregory Palmer, Daniel Kudenko: Data Valuation for Offline Reinforcement Learning. CoRR abs/2205.09550 (2022)
- 2021
- [j29] Nourah A. ALRossais, Daniel Kudenko, Tommy Yuan: Improving cold-start recommendations using item-based stereotypes. User Model. User Adapt. Interact. 31(5): 867-905 (2021)
- [c96] Joshua Riley, Radu Calinescu, Colin Paterson, Daniel Kudenko, Alec Banks: Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems. ICAART (Revised Selected Papers) 2021: 158-180
- [c95] Joshua Riley, Radu Calinescu, Colin Paterson, Daniel Kudenko, Alec Banks: Reinforcement Learning with Quantitative Verification for Assured Multi-Agent Policies. ICAART (2) 2021: 237-245
- [c94] Joshua Riley, Radu Calinescu, Colin Paterson, Daniel Kudenko, Alec Banks: Utilising Assured Multi-Agent Reinforcement Learning within Safety-Critical Scenarios. KES 2021: 1061-1070
- [c93] Sajjad Kamali Siahroudi, Daniel Kudenko: An Online Learning Algorithm for Non-stationary Imbalanced Data by Extra-Charging Minority Class. PAKDD (1) 2021: 603-615
- 2020
- [j28] Zhuang Shao, Fengqi Si, Daniel Kudenko, Peng Wang, Xiaozhong Tong: Predictive scheduling of wet flue gas desulfurization system based on reinforcement learning. Comput. Chem. Eng. 141: 107000 (2020)
- [c92] John Burden, Daniel Kudenko: Uniform State Abstraction for Reinforcement Learning. ECAI 2020: 1031-1038
- [c91] Mark Ferguson, Sam Devlin, Daniel Kudenko, James Alfred Walker: Player Style Clustering without Game Variables. FDG 2020: 66:1-66:4
- [c90] Mark Ferguson, Sebastian Deterding, Andreas Lieberoth, Marc Malmdorf Andersen, Sam Devlin, Daniel Kudenko, James Alfred Walker: Automatic Similarity Detection in LEGO Ducks. ICCC 2020: 106-109
- [i12] John Burden, Daniel Kudenko: Uniform State Abstraction For Reinforcement Learning. CoRR abs/2004.02919 (2020)
- [i11] Vikram Waradpande, Daniel Kudenko, Megha Khosla: Deep Reinforcement Learning with Graph-based State Representations. CoRR abs/2004.13965 (2020)
- [i10] Andrea Bassich, Francesco Foglino, Matteo Leonetti, Daniel Kudenko: Curriculum Learning with a Progression Function. CoRR abs/2008.00511 (2020)
- [i9] Ivan Sosin, Daniel Kudenko, Aleksei Shpilman: Continuous Gesture Recognition from sEMG Sensor Data with Recurrent Neural Networks and Adversarial Domain Adaptation. CoRR abs/2012.08816 (2020)
- [i8] Anastasia Gaydashenko, Daniel Kudenko, Aleksei Shpilman: A comparative evaluation of machine learning methods for robot navigation through human crowds. CoRR abs/2012.08822 (2020)
- [i7] Aleksandra Malysheva, Daniel Kudenko, Aleksei Shpilman: Learning to Run with Potential-Based Reward Shaping and Demonstrations from Video Data. CoRR abs/2012.08824 (2020)
- [i6] Aleksandra Malysheva, Daniel Kudenko, Aleksei Shpilman: MAGNet: Multi-agent Graph Network for Deep Multi-agent Reinforcement Learning. CoRR abs/2012.09762 (2020)
- [i5] Aleksei Shpilman, Dmitry Boikiy, Marina Polyakova, Daniel Kudenko, Anton Burakov, Elena Nadezhdina: Deep Learning of Cell Classification using Microscope Images of Intracellular Microtubule Networks. CoRR abs/2012.12125 (2020)
2010 – 2019
- 2019
- [j27] Rui Paulo Rocha, Daniel Kudenko: Guest Editorial: Special Issue on Intelligent Robotics and Multi-Agent Systems. Cybern. Syst. 50(8): 657 (2019)
- [j26] Mao Li, Yi Wei, Daniel Kudenko: Two-level Q-learning: learning from conflict demonstrations. Knowl. Eng. Rev. 34: e14 (2019)
- [j25] Mao Li, Tim Brys, Daniel Kudenko: Introspective Q-learning and learning from demonstration. Knowl. Eng. Rev. 34: e8 (2019)
- [c89] Nourah A. ALRossais, Daniel Kudenko: Generating Stereotypes Automatically For Complex Categorical Features. KaRS@CIKM 2019: 8-14
- [c88] Sultan Alahmari, Tommy Yuan, Daniel Kudenko: Reinforcement Learning for Dialogue Game Based Argumentation. CMNA@PERSUASIVE 2019: 29-37
- [c87] Sultan Alahmari, Tommy Yuan, Daniel Kudenko: Reinforcement Learning of Dialogue Coherence and Relevance. CMNA@PERSUASIVE 2019: 38-48
- [c86] Aleksandra Malysheva, Daniel Kudenko, Aleksei Shpilman: MAGNet: Multi-agent Graph Network for Deep Multi-agent Reinforcement Learning. REDUNDANCY 2019: 171-176
- [i4] Lukasz Kidzinski, Carmichael F. Ong, Sharada Prasanna Mohanty, Jennifer L. Hicks, Sean F. Carroll, Bo Zhou, Hong-cheng Zeng, Fan Wang, Rongzhong Lian, Hao Tian, Wojciech Jaskowski, Garrett Andersen, Odd Rune Lykkebø, Nihat Engin Toklu, Pranav Shyam, Rupesh Kumar Srivastava, Sergey Kolesnikov, Oleksii Hrinchuk, Anton Pechenko, Mattias Ljungström, Zhen Wang, Xu Hu, Zehong Hu, Minghui Qiu, Jun Huang, Aleksei Shpilman, Ivan Sosin, Oleg Svidchenko, Aleksandra Malysheva, Daniel Kudenko, Lance Rane, Aditya Bhatt, Zhengfei Wang, Penghui Qi, Zeyang Yu, Peng Peng, Quan Yuan, Wenxin Li, Yunsheng Tian, Ruihan Yang, Pingchuan Ma, Shauharda Khadka, Somdeb Majumdar, Zach Dwiel, Yinyin Liu, Evren Tumer, Jeremy D. Watson, Marcel Salathé, Sergey Levine, Scott L. Delp: Artificial Intelligence for Prosthetics - challenge solutions. CoRR abs/1902.02441 (2019)
- [i3] Kleanthis Malialis, Sam Devlin, Daniel Kudenko: Resource Abstraction for Reinforcement Learning in Multiagent Congestion Problems. CoRR abs/1903.05431 (2019)
- [i2] Nourah A. ALRossais, Daniel Kudenko: Generating Stereotypes Automatically For Complex Categorical Features. CoRR abs/1911.11064 (2019)
- 2018
- [j24] David Zendle, Paul A. Cairns, Daniel Kudenko: No priming in video games. Comput. Hum. Behav. 78: 113-125 (2018)
- [j23] David Zendle, Daniel Kudenko, Paul A. Cairns: Behavioural realism and the activation of aggressive concepts in violent video games. Entertain. Comput. 24: 21-29 (2018)
- [c85] Timofey Bryksin, Alexey Shpilman, Daniel Kudenko: Automated Refactoring of Object-Oriented Code Using Clustering Ensembles. AAAI Workshops 2018: 754-757
- [c84] Mao Li, Tim Brys, Daniel Kudenko: Introspective Reinforcement Learning and Learning from Demonstration. AAMAS 2018: 1992-1994
- [c83] Aleksandra Malysheva, Daniel Kudenko, Aleksei Shpilman: Learning to Run with Potential-Based Reward Shaping and Demonstrations from Video Data. ICARCV 2018: 286-291
- [c82] Ivan Sosin, Daniel Kudenko, Aleksei Shpilman: Continuous Gesture Recognition from sEMG Sensor Data with Recurrent Neural Networks and Adversarial Domain Adaptation. ICARCV 2018: 1436-1441
- [c81] Anastasia Gaydashenko, Daniel Kudenko, Aleksei Shpilman: A Comparative Evaluation of Machine Learning Methods for Robot Navigation Through Human Crowds. ICMLA 2018: 553-557
- [c80] Hasanen Alyasiri, John A. Clark, Daniel Kudenko: Applying Cartesian Genetic Programming to Evolve Rules for Intrusion Detection System. IJCCI 2018: 176-183
- [c79] Nourah A. ALRossais, Daniel Kudenko: Evaluating Stereotype and Non-Stereotype Recommender Systems. KaRS@RecSys 2018: 23-28
- [c78] Hasanen Alyasiri, John A. Clark, Daniel Kudenko: Evolutionary Computation Algorithms for Detecting Known and Unknown Attacks. SecITC 2018: 170-184
- [c77] Nourah A. ALRossais, Daniel Kudenko: iSynchronizer: A Tool for Extracting, Integration and Analysis of MovieLens and IMDb Datasets. UMAP (Adjunct Publication) 2018: 103-107
- [i1] Aleksandra Malysheva, Tegg Tae Kyong Sung, Chae-Bong Sohn, Daniel Kudenko, Aleksei Shpilman: Deep Multi-Agent Reinforcement Learning with Relevance Graphs. CoRR abs/1811.12557 (2018)
- 2017
- [c76] Yi Wei, Daniel Kudenko, Shijun Liu, Li Pan, Lei Wu, Xiangxu Meng: A Reinforcement Learning Based Workflow Application Scheduling Approach in Dynamic Cloud Environment. CollaborateCom 2017: 120-131
- [c75] George Mason, Radu Calinescu, Daniel Kudenko, Alec Banks: Assured Reinforcement Learning with Formally Verified Abstract Policies. ICAART (2) 2017: 105-117
- [c74] Abdullah Fayez H. Algarni, Daniel Kudenko: Distribution Data Across Multiple Cloud Storage using Reinforcement Learning Method. ICAART (2) 2017: 431-438
- [c73] Sultan Alahmari, Tommy Yuan, Daniel Kudenko: Reinforcement Learning for Argumentation: Describing a PhD Research. CMNA@ICAIL 2017: 76-78
- [c72] Aleksei Shpilman, Dmitry Boikiy, Marina Polyakova, Daniel Kudenko, Anton Burakov, Elena Nadezhdina: Deep Learning of Cell Classification Using Microscope Images of Intracellular Microtubule Networks. ICMLA 2017: 1-6
- 2016
- [j22] Adam Eck, Leen-Kiat Soh, Sam Devlin, Daniel Kudenko: Potential-based reward shaping for finite horizon online POMDP planning. Auton. Agents Multi Agent Syst. 30(3): 403-445 (2016)
- [j21] Kyriakos Efthymiadis, Sam Devlin, Daniel Kudenko: Overcoming incorrect knowledge in plan-based reward shaping. Knowl. Eng. Rev. 31(1): 31-43 (2016)
- [j20] Sam Devlin, Daniel Kudenko: Plan-based reward shaping for multi-agent reinforcement learning. Knowl. Eng. Rev. 31(1): 44-58 (2016)
- [j19] Yann-Michaël De Hauwere, Sam Devlin, Daniel Kudenko, Ann Nowé: Context-sensitive reward shaping for sparse interaction multi-agent systems. Knowl. Eng. Rev. 31(1): 59-76 (2016)
- [c71] Kleanthis Malialis, Sam Devlin, Daniel Kudenko: Resource Abstraction for Reinforcement Learning in Multiagent Congestion Problems. AAMAS 2016: 503-511
- 2015
- [j18] Kleanthis Malialis, Sam Devlin, Daniel Kudenko: Distributed reinforcement learning for adaptive and robust network intrusion response. Connect. Sci. 27(3): 234-252 (2015)
- [j17] Kleanthis Malialis, Daniel Kudenko: Distributed response to network intrusions using multiagent reinforcement learning. Eng. Appl. Artif. Intell. 41: 270-284 (2015)
- [c70] Kyriakos Efthymiadis, Daniel Kudenko: Knowledge Revision for Reinforcement Learning with Abstract MDPs. AAMAS 2015: 763-770
- [c69] David Zendle, Paul A. Cairns, Daniel Kudenko: Higher Graphical Fidelity Decreases Players' Access to Aggressive Concepts in Violent Video Games. CHI PLAY 2015: 241-251
- [c68] Hanting Xie, Sam Devlin, Daniel Kudenko, Peter I. Cowling: Predicting player disengagement and first purchase with event-frequency based data representation. CIG 2015: 230-237
- 2014
- [j16] Kyriakos Efthymiadis, Daniel Kudenko: A comparison of plan-based and abstract MDP reward shaping. Connect. Sci. 26(1): 85-99 (2014)
- [j15] Daniel Kudenko: Special Issue on Transfer Learning. Künstliche Intell. 28(1): 5-6 (2014)
- [j14] Daniel Kudenko: Interview with Peter Stone and Matthew E. Taylor. Künstliche Intell. 28(1): 45-48 (2014)
- [c67] Tim Brys, Ann Nowé, Daniel Kudenko, Matthew E. Taylor: Combining Multiple Correlated Reward and Shaping Signals by Measuring Confidence. AAAI 2014: 1687-1693
- [c66] Sam Devlin, Logan Michael Yliniemi, Daniel Kudenko, Kagan Tumer: Potential-based difference rewards for multiagent reinforcement learning. AAMAS 2014: 165-172
- [c65] Kyriakos Efthymiadis, Sam Devlin, Daniel Kudenko: Knowledge revision for reinforcement learning with abstract MDPs. AAMAS 2014: 1535-1536
- [c64] Sam Devlin, Peter I. Cowling, Daniel Kudenko, Nikolaos Goumagias, Alberto Nucciarelli, Ignazio Cabras, Kiran Jude Fernandes, Feng Li: Game intelligence. CIG 2014: 1-8
- [c63] Hanting Xie, Daniel Kudenko, Sam Devlin, Peter I. Cowling: Predicting Player Disengagement in Online Games. CGW@ECAI 2014: 133-149
- [c62] Kleanthis Malialis, Sam Devlin, Daniel Kudenko: Coordinated Team Learning and Difference Rewards for Distributed Intrusion Response. ECAI 2014: 1063-1064
- [c61] Ali Abusnina, Daniel Kudenko, Rolf Roth: Improving Robustness of Gaussian Process-Based Inferential Control System Using Kernel Principle Component Analysis. ICMLA 2014: 99-104
- [c60] Nikolaos Goumagias, Ignazio Cabras, Kiran Jude Fernandes, Feng Li, Alberto Nucciarelli, Peter I. Cowling, Sam Devlin, Daniel Kudenko: A Phylogenetic Classification of the Video-Game Industry's Business Model Ecosystem. PRO-VE 2014: 285-294
- [c59] Tim Brys, Anna Harutyunyan, Peter Vrancx, Matthew E. Taylor, Daniel Kudenko, Ann Nowé: Multi-objectivization of reinforcement learning problems by reward shaping. IJCNN 2014: 2315-2322
- [c58] Ali Abusnina, Daniel Kudenko, Rolf Roth: Gaussian Process-Based Inferential Control System. SOCO-CISIS-ICEUTE 2014: 115-124
- 2013
- [c57] Adam Eck, Leen-Kiat Soh, Sam Devlin, Daniel Kudenko: Potential-based reward shaping for POMDPs. AAMAS 2013: 1123-1124
- [c56] Kyriakos Efthymiadis, Sam Devlin, Daniel Kudenko: Overcoming erroneous domain knowledge in plan-based reward shaping. AAMAS 2013: 1245-1246
- [c55] Kyriakos Efthymiadis, Daniel Kudenko: Using plan-based reward shaping to learn strategies in StarCraft: Broodwar. CIG 2013: 1-8
- [c54] Kleanthis Malialis, Daniel Kudenko: Multiagent Router Throttling: Decentralized Coordinated Response Against DDoS Attacks. IAAI 2013: 1551-1556
- 2012
- [c53] Sam Devlin, Daniel Kudenko: Dynamic potential-based reward shaping. AAMAS 2012: 433-440
- [p4] María Arinbjarnar, Daniel Kudenko: Actor Bots. Believable Bots 2012: 69-97
- 2011
- [j13] Sam Devlin, Daniel Kudenko, Marek Grzes: An Empirical Study of Potential-Based Reward Shaping and Advice in Complex, Multi-Agent Systems. Adv. Complex Syst. 14(2): 251-278 (2011)
- [j12] Rania A. Hodhod, Paul A. Cairns, Daniel Kudenko: Innovative Integrated Architecture for Educational Games: Challenges and Merits. Trans. Edutainment 5: 1-34 (2011)
- [c52] Sam Devlin, Daniel Kudenko: Theoretical considerations of potential-based reward shaping for multi-agent systems. AAMAS 2011: 225-232
- [c51] Sam Devlin, Marek Grzes, Daniel Kudenko: Multi-agent reward shaping for RoboCup KeepAway. AAMAS 2011: 1227-1228
- 2010
- [j11] Rania A. Hodhod, Daniel Kudenko, Paul A. Cairns: Adaptive Interactive Narrative Model to Teach Ethics. Int. J. Gaming Comput. Mediat. Simulations 2(4): 1-15 (2010)
- [j10] Maliang Zheng, Daniel Kudenko: Automated Event Recognition for Football Commentary Generation. Int. J. Gaming Comput. Mediat. Simulations 2(4): 67-84 (2010)
- [j9] Marek Grzes, Daniel Kudenko: Online learning of shaping rewards in reinforcement learning. Neural Networks 23(4): 541-550 (2010)
- [c50] Marek Grzes, Daniel Kudenko: PAC-MDP learning with knowledge-based admissible models. AAMAS 2010: 349-358
- [c49] María Arinbjarnar, Daniel Kudenko: Bayesian networks: Real-time applicable decision mechanisms for intelligent agents in interactive drama. CIG 2010: 427-434
- [c48] Rania A. Hodhod, Daniel Kudenko, Paul A. Cairns: Character Education Using Pedagogical Agents and Socratic Voice. FLAIRS 2010
- [p3] Enda Ridge, Daniel Kudenko: Tuning an Algorithm Using Design of Experiments. Experimental Methods for the Analysis of Optimization Algorithms 2010: 265-286
2000 – 2009
- 2009
- [j8] Marek Grzes, Daniel Kudenko: Reinforcement Learning with Reward Shaping and Mixed Resolution Function Approximation. Int. J. Agent Technol. Syst. 1(2): 36-54 (2009)
- [j7] I-Hsien Ting, Chris Kimble, Daniel Kudenko: Finding Unexpected Navigation Behaviour in Clickstream Data for Website Design Improvement. J. Web Eng. 8(1): 71-92 (2009)
- [j6] Heather Barber, Daniel Kudenko: Generation of Adaptive Dilemma-Based Interactive Narratives. IEEE Trans. Comput. Intell. AI Games 1(4): 309-326 (2009)
- [c47] Daniel Kudenko, Marek Grzes: Knowledge-Based Reinforcement Learning for Data Mining. ADMI 2009: 21-22
- [c46] Rania A. Hodhod, Daniel Kudenko, Paul A. Cairns: Educational Narrative and Student Modeling for Ill-Defined Domains. AIED 2009: 638-640
- [c45] Sam Devlin, Marek Grzes, Daniel Kudenko: Reinforcement Learning in RoboCup KeepAway with Partial Observability. IAT 2009: 201-208
- [c44] Marek Grzes, Daniel Kudenko: Improving Optimistic Exploration in Model-Free Reinforcement Learning. ICANNGA 2009: 360-369
- [c43] Marek Grzes, Daniel Kudenko: Theoretical and Empirical Analysis of Reward Shaping in Reinforcement Learning. ICMLA 2009: 337-344
- [c42] María Arinbjarnar, Daniel Kudenko: Duality of Actor and Character Goals in Virtual Drama. IVA 2009: 386-392
- 2008
- [c41] Marek Grzes, Daniel Kudenko: Robustness Analysis of SARSA(lambda): Different Models of Reward and Initialisation. AIMSA 2008: 144-156
- [c40] Arturo Servin, Daniel Kudenko: Multi-Agent Reinforcement Learning for Intrusion Detection: A case study and evaluation. ECAI 2008: 873-874
- [c39] Marek Grzes, Daniel Kudenko: An Empirical Analysis of the Impact of Prioritised Sweeping on the DynaQ's Performance. ICAISC 2008: 1041-1051
- [c38] Marek Grzes, Daniel Kudenko: Multigrid Reinforcement Learning with Reward Shaping. ICANN (1) 2008: 357-366
- [c37] María Arinbjarnar, Daniel Kudenko: Schemas in Directed Emergent Drama. ICIDS 2008: 180-185
- [c36] Heather Barber, Daniel Kudenko: Generation of Dilemma-Based Narratives: Method and Turing Test Evaluation. ICIDS 2008: 214-217
- [c35] Heather Barber, Daniel Kudenko: Generation of dilemma-based interactive narratives with a changeable story goal. INTETAIN 2008: 6
- [c34] Arturo Servin, Daniel Kudenko: Multi-Agent Reinforcement Learning for Intrusion Detection: A Case Study and Evaluation. MATES 2008: 159-170
- [p2] Enda Ridge, Daniel Kudenko: Determining Whether a Problem Characteristic Affects Heuristic Performance. Recent Advances in Evolutionary Computation for Combinatorial Optimization 2008: 21-35
- [e3] Karl Tuyls,