CIG 2009: Milano, Italy
Pier Luca Lanzi:
Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Games, CIG 2009, Milano, Italy, 7-10 September, 2009. IEEE 2009, ISBN 978-1-4244-4814-2
Anders Drachen, Alessandro Canossa, Georgios N. Yannakakis:
Player modeling using self-organization in Tomb Raider: Underworld. 1-8
Colin D. Ward, Peter I. Cowling:
Monte Carlo search applied to card selection in Magic: The Gathering. 9-16
Leo Galway, Darryl Charles, Michaela M. Black:
Improving Temporal Difference game agent control using a dynamic exploration during control learning. 38-45
Johan Hagelbäck, Stefan J. Johansson:
Measuring player experience on runtime dynamic difficulty scaling in an RTS game. 46-52
Peter Burrow, Simon M. Lucas:
Evolution versus Temporal Difference Learning for learning to play Ms. Pac-Man. 53-60
Luis Peña, Sascha Ossowski, José María Peña Sánchez:
vBattle: A new framework to simulate medium-scale battles in individual-per-individual basis. 61-68
Su-Hyung Jang, Jongwon Yoon, Sung-Bae Cho:
Optimal strategy selection of non-player character on real time strategy game using a speciated evolutionary algorithm. 75-79
Edgar Galván López, Michael O'Neill:
On the effects of locality in a permutation problem: The Sudoku Puzzle. 80-87
Klaus P. Jantke:
Dramaturgical Design of the Narrative in Digital Games: AI planning of conflicts in non-linear spaces of time. 88-95
Steven L. Tanimoto, Tyler Robison, Sandra B. Fan:
A game-building environment for research in collaborative design. 96-103
Marcin Grzegorz Szubert, Wojciech Jaskowski, Krzysztof Krawiec:
Coevolutionary Temporal Difference Learning for Othello. 104-111
Jacek Mandziuk, Krzysztof Mossakowski:
Neural networks compete with expert human players in solving the Double Dummy Bridge Problem. 117-124
Garrett Nicolai, Robert J. Hilderman:
No-Limit Texas Hold'em Poker agents created with evolutionary neural networks. 125-131
Chris Pedersen, Julian Togelius, Georgios N. Yannakakis:
Modeling player experience in Super Mario Bros. 132-139
Luigi Cardamone, Daniele Loiacono, Pier Luca Lanzi:
Learning drivers for TORCS through imitation using supervised methods. 148-155
Julian Togelius, Sergey Karakovskiy, Jan Koutník, Jürgen Schmidhuber:
Super Mario evolution. 156-161
Lori L. DeLooze, Wesley R. Viner:
Fuzzy Q-learning in a nondeterministic environment: developing an intelligent Ms. Pac-Man agent. 162-169
Ben Cowley, Darryl Charles, Michaela M. Black, Ray J. Hickey:
Analyzing player behavior in Pacman using feature-driven decision theoretic predictive modeling. 170-177
Maarten P. D. Schadd, Mark H. M. Winands, Jos W. H. M. Uiterwijk:
CHANCEPROBCUT: Forward pruning in chance nodes. 178-185
Nicola Basilico, Nicola Gatti, Thomas Rossi:
Capturing augmented sensing capabilities and intrusion delay in patrolling-intrusion games. 186-193
Michele Pace:
How a genetic algorithm learns to play Traveler's Dilemma by choosing dominated strategies to achieve greater payoffs. 194-200
Wojciech Jaskowski, Krzysztof Krawiec:
Formal analysis and algorithms for extracting coordinate systems of games. 201-208
Markus Kemmerling, Niels Ackermann, Nicola Beume, Mike Preuss, Sebastian Uellenbeck, Wolfgang Walz:
Is human-like and well playing contradictory for Diplomacy bots? 209-216
Raúl Arrabales Moreno, Agapito Ledezma, Araceli Sanchis:
Towards conscious-like behavior in computer game characters. 217-224
Manish Mehta, Andrea Corradini:
Evaluation of a domain independent approach to natural language processing for game-like user interfaces. 225-232
Francesco Bellotti, Riccardo Berta, Alessandro De Gloria, Ludovica Primavera:
A task annotation model for Sandbox Serious Games. 233-240
Erin J. Hastings, Ratan K. Guha, Kenneth O. Stanley:
Evolving content in the Galactic Arms Race video game. 241-248
Enrique Onieva, David A. Pelta, Javier Alonso, Vicente Milanés, Joshué Pérez:
A modular parametric architecture for the TORCS racing engine. 256-262
Diego Perez Liebana, Gustavo Recio, Yago Sáez, Pedro Isasi:
Evolving a fuzzy controller for a Car Racing Competition. 263-270
Matt Parker, Bobby D. Bryant:
Backpropagation without human supervision for visual control in Quake II. 287-293
Niels van Hoorn, Julian Togelius, Jürgen Schmidhuber:
Hierarchical controller learning in a First-Person Shooter. 294-301
Joost Westra, Frank Dignum:
Evolutionary neural networks for Non-Player Characters in Quake III. 302-309
Luca Galli, Daniele Loiacono, Pier Luca Lanzi:
Learning a context-aware weapon selection policy for Unreal Tournament III. 310-316
Martin V. Butz, Thies D. Lönneker:
Optimized sensory-motor couplings plus strategy extensions for the TORCS car racing challenge. 317-324
Tommy Thompson, John Levine:
Realtime execution of automated plans using evolutionary robotics. 333-340
Phillipa Avery, Sushil J. Louis, Benjamin Avery:
Evolving coordinated spatial tactics for autonomous entities using influence maps. 341-348
Attala Malik, Jörg Denzinger:
Improving testing of multi-unit computer players for unwanted behavior using coordination macros. 355-362
Tommy Thompson, Fraser Milne, Alastair Andrew, John Levine:
Improving control through subsumption in the EvoTanks domain. 363-370
David Keaveney, Colm O'Riordan:
Evolving robust strategies for an abstract real-time strategy game. 371-378
José Roberto Mercado Vega, Zvi Retchkiman Königsberg:
Modeling the game of Arimaa with Linguistic Geometry. 379-386