Luke Dicken is a specialist in Game AI, currently attached to the Strathclyde AI and Games Research Group at University of Strathclyde. He is also the founder of Robot Overlord Games, where he puts his knowledge into practice. In his abundant spare time, he is one of the principal organisers for the AltDev Conference series, sits on the Board of Directors for the IGDA and acts as Chapter Leader for IGDA Scotland and Chair for the IGDA's AI Special Interest Group. Luke was named as one of Develop's "30 Under 30" for 2013.
Posts by Luke Dicken
  1. The Spector of Game AI
  2. Call for Industry Speakers to Address AltDev Student Summit
  3. AltDevConf 2012 and 2013!
  4. AltDevConf Education - Ken Fee
  5. AltDevConf Education - Heather Decker Davis
  6. AltDevConf Education - Ian Schreiber
  7. Education Comes to AltDevConf
  8. What Shape is Your Game?
  9. A Turing Test for Bots
  10. What's in a Game?
  11. Separating Signal and Noise - Feature Selection in Game AI
  12. Students: Game AI vs Traditional AI
  13. The IGDA and the E3 Trip
  14. How Complex is Complex Enough?
  15. A Difficult Subject

Last time around I wrote about the StarCraft AI Competition, which involves creating AI players for RTS games. I've also talked previously about my work with Ms. Pac-Man and Poker.

In each case, the object of the work has been to make the strongest possible AI system, one capable (ideally) of winning under any circumstances. But as I've discussed previously, that's not what video games need. These systems take on the role of the player and attempt to win the game, when what is more relevant for industrial applications is to leave the playing to the human and provide a compelling experience.

So, continuing my recent trend of giving a snap-shot of current threads in Game AI research, today I want to talk about something with a completely different flavour.

The 2K Bot Prize.

Back in 2008 the fine folks at 2K Australia (now 2K Marin's Canberra studio) saw the opportunity to create a new kind of academic competition. In a field littered with game challenges aimed at producing the best-performing AI system possible, they created something truly unique - a Turing test for game AI. The fourth iteration of the competition has just taken place at the IEEE Conference on Computational Intelligence in Games in Seoul, so it seems a good time to talk about it.

The Turing test is probably fairly familiar to you. It was designed by Alan Turing in 1950 as a way of finding an answer to the question "Can machines think?". As a way of tackling the problem, Turing chose to refine the question, believing that in order to demonstrate "thought" a machine had to be able to act in a manner indistinguishable from a human. He designed a game in which one judge can communicate with a participant through a mechanical system. This is the only interaction allowed between the two and they are kept isolated from each other. The question then becomes whether or not the participant can be identified by the judge correctly as either a machine or a human. In its original form the test only envisaged a text-based communication between the two, but it isn't hard to see how this can be expanded to a range of other mechanisms, and thanks to the Bot Prize, that now includes no-holds-barred Unreal Tournament 2004 deathmatch.

How does the competition work?

Penny Arcade envisaged the Bot Prize back in 2002

In order to run a bot-based Turing Test, you need three things - judges, humans and bots. Since the competition runs at an academic game AI conference, finding people relatively well qualified to attempt to determine whether a player is human or an AI is easy enough. Human participants are somewhat trickier to organise, as you need a team of competent UT2k4 players. Fortunately, the conference has been held at a variety of different universities over the past few years, guaranteeing a decent crop of enthusiastic students on hand willing to step up as the human element of the test. Finally, of course, you need a new breed of bots entering.

Bot development is done using the GameBots2004 mod for Unreal Tournament, which allows a player within the game to be controlled by an external program hooked in over TCP/IP. This is a core part of the Pogamut project, which provides an IDE designed for exactly this kind of development, but there are no restrictions on language or tool choice provided your system is compatible with the GameBots2004 interface.
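To give a flavour of what "compatible with the GameBots2004 interface" means in practice: the protocol is line-oriented text over a TCP socket, with each message being a type keyword followed by `{Key Value}` fields. The sketch below parses messages of that general shape; the specific message type and field names shown are illustrative examples, not a definitive transcription of the protocol.

```python
import re

def parse_message(line):
    """Parse a GameBots2004-style text message, e.g.
    'SLF {Health 100} {Location 10.0,20.0,0.0}', into a
    (message_type, fields_dict) pair. The message and field
    names used here are illustrative, not the full spec."""
    msg_type, _, rest = line.strip().partition(" ")
    # Each field looks like {Name value...}; capture name and value.
    fields = dict(re.findall(r"\{(\w+)\s+([^}]*)\}", rest))
    return msg_type, fields

msg_type, fields = parse_message(
    "SLF {Health 100} {Location 10.0,20.0,0.0}")
```

A real bot would wrap this in a loop reading lines from the socket, updating its world model from each message, and writing command lines (move, shoot, and so on) back to the server in the same `{Key Value}` format.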

The competition takes the format of a 3-way deathmatch between a judge, a human and a bot on a standard Unreal Tournament 2004 map. The ruleset is changed slightly to better facilitate the competition's purpose, so for example chat is disabled to prevent competitors revealing their identities inadvertently. The 'Link Gun' has been modified in this year's competition to work as a "tagging" system, to allow the judge of each match to mark the other two players as either a bot (primary fire) or a human (secondary fire). As the match progresses, the judge can alter their verdict by retagging the opponents.

Results

It really is a nice trophy - Raúl Arrabales (left) and Jorge Muñoz (right), winners of the 2010 Bot Prize

The aim of the competition is to develop a bot that is indistinguishable from human players in the eyes of the judges. At a very basic level, this means no silly behaviour like running into walls, as this is easily detected. Equally it means not being too perfect a player, but there's obviously a lot more to it than this. The evaluation is based on the number of judges that each bot can fool into thinking it is human. The most convincing bot at the competition wins a prize of $2,000 (AU) as well as a very stylish trophy, and for any bot that can convince all of the judges it is human, there is a Grand Prize on offer that includes $7,000 (AU) and a trip to 2K Marin's Canberra studio. To date nobody has been able to claim the Grand Prize, despite the competition attracting a good number of entrants over the past four years.
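The evaluation described above boils down to a simple tally: for each bot, what fraction of judges ended the matches with it tagged as human? The sketch below is a simplified illustration of that idea, not the official scoring code; the bot names and tag values are made up.

```python
def humanness(final_tags):
    """Fraction of judges whose final verdict for this bot was
    'human'. Simplified illustration of the Bot Prize evaluation,
    not the official scoring implementation."""
    if not final_tags:
        return 0.0
    return sum(1 for tag in final_tags if tag == "human") / len(final_tags)

# Hypothetical final verdicts from five judges per bot.
scores = {
    "BotA": humanness(["human", "bot", "bot", "human", "bot"]),
    "BotB": humanness(["bot", "bot", "human", "bot", "bot"]),
}
winner = max(scores, key=scores.get)               # most convincing bot
swept = [b for b, s in scores.items() if s == 1.0]  # Grand Prize condition
```

Under this scheme the ordinary prize goes to the bot with the highest score, while the Grand Prize requires a perfect sweep - a score of 1.0 - which, as noted above, no entrant has yet achieved.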

This may be explained in part by the format of the competition - in 2009, I was asked to be one of the judges, and there seemed to be some noticeable biases within the test itself. For a start, the matches were always played with one of each player type - a judge, a bot and a human. This meant that the judge's job was not to determine how human-like each player was, but to work out which one was the bot - a very different question. Additionally, for the sake of expediency, each judge only encountered each bot once, and every judge/bot pairing was played with a different human. The upshot was that there was no baseline of performance against which to quantify the resulting stats. However, in the intervening years, there does seem to have been some improvement to the process.

Overall though, I hope I've explained what I think is one of the greatest competitions currently happening in academia. The Bot Prize focuses on generating human-like AI, which is of key interest to the game industry, as opposed to, say, the StarCraft competition I wrote about previously. As one of the more recent additions to the regular competition season, it also shows that academia is starting to recognise that the core challenge of Game AI isn't necessarily (or arguably, at all) contingent on the optimality of the algorithms used, but is more centred on player experience.

You can read more about the research that the Bot Prize is helping to drive here.