Brief History of Video Game AI
Birth of Game AI
Rudimentary computer and video game AI patterns started appearing in some of the first single-player titles, long before today's complex 3D games such as Halo, Crysis and Half-Life. Think as far back as the '60s and '70s, when Pong-mania reigned and games like Spacewar were considered innovative, and you won't find any traces of AI. Such titles were designed expressly to entertain two people, so it was more about the competitive spirit.
[Image caption: Size doesn't make this bozo smart.]
[Image caption: Mine for Spice, you lazy bastards!]
Logically enough, video game AI is rooted in long-forgotten single-player coin-ops, which first came into view in the 1970s and were mostly manufactured by Atari. This includes games like "Pursuit," which put players in the shoes of a World War I flying ace shooting down enemy planes. Atari's "Qwak" was another early title with basic AI at work. Of course, at the time most games were developed and played on old-fashioned mainframe computers. One of these was Star Trek, a scripted text-based game in which enemy movement depended on stored patterns. This changed with the introduction of microprocessors, whose improved computation allowed additional random elements to be overlaid on the stored movement patterns essential to the game itself.
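The pattern-plus-randomness idea described above can be sketched in a few lines. This is a minimal illustration only; the pattern, jitter rate and function names are invented for the example, not taken from any actual 1970s game:

```python
import random

# A fixed movement pattern, the way early arcade games stored them:
# each entry is a (dx, dy) step the enemy replays in a loop.
PATTERN = [(1, 0), (1, 0), (0, 1), (-1, 0), (-1, 0), (0, 1)]

def next_position(x, y, step, jitter=0.2, rng=random):
    """Advance an enemy along the stored pattern, occasionally
    overlaying a random sideways nudge (the microprocessor-era twist)."""
    dx, dy = PATTERN[step % len(PATTERN)]
    if rng.random() < jitter:   # random element overlaid on the base pattern
        dx += rng.choice((-1, 0, 1))
    return x + dx, y + dy
```

With `jitter=0` the enemy is fully predictable, exactly like the mainframe-era games; raising it is what made microprocessor-era enemies feel less mechanical.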
More games were manufactured relying on this type of simplistic design, eventually leading to the creation of industry cornerstones such as Space Invaders, a game that emerged in 1978 as a co-project between Taito and Midway. In this particular case, inspiration was drawn from various sources to come up with the appearance of the enemies in the game. Tomohiro Nishikado, the creator of Space Invaders, claims that the pixelated baddies were actually inspired by the aliens from H. G. Wells' The War of the Worlds, in which Earth's invaders are described as squid-like. Refusing to create human-like foes, which according to him would have been technically easier, Nishikado made enemies that were a cross between a squid and a crab. Still, the game itself presented rather basic patterns for what was one of the first steps towards better artificial intelligence in gaming. Such patterns were later enhanced and used in games like Pac-Man, which offered a different and new concept for game design. Yep, those cute little buggers that chase after old Pac-Man are, in fact, the fathers of game AI.
Finite State Machine AIs
During the '80s, more complex games surfaced, spanning streams of new and different genres. At the beginning of the '90s, the gaming scene gradually began to move away from the time-honored AI basics (basic, heavily scripted movement) that practically defined the Atari/C64/NES era. The next generation of gamers hailed the coming of more advanced gaming platforms, including the Amiga, new-generation PCs, the SNES and so on. Games were now developed using a new system for establishing in-game artificial intelligence: appropriate AI tools such as finite state machines (FSMs). Some of the first games built on this principle were real-time strategies such as Dune II. A game like Dune II demanded considerably more intricate AI than the average platformer or side-scrolling shooter that came before it. The enemy determines where to attack first, on top of setting certain priorities in its behaviour to ensure realistic battle tactics. Granted, since this was one of the very first efforts in the RTS genre, it exhibited slipups related, oddly enough, to the game's AI. The AI in Dune II was almost incapable of pulling off ambushes or flanking players when they least expected it; any trap or unexpected enemy attack was triggered by scripted AI movement. Also, rather than assembling a powerful army and waiting for the perfect time to strike, the enemy would simply send units into battle as soon as they were produced (a shortcoming later revised with third-party patches). Subsequent real-time strategies improved upon this.
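A finite state machine of the kind described above boils down to a handful of named states and rules for moving between them. Here is a minimal sketch; the states, thresholds and inputs are invented for illustration and are not Dune II's actual logic:

```python
class EnemyFSM:
    """Toy enemy AI: one current state, updated from simple observations."""

    def __init__(self):
        self.state = "idle"

    def update(self, player_visible, health):
        # Each branch encodes one state's outgoing transitions.
        if self.state == "idle":
            if player_visible:
                self.state = "attack"
        elif self.state == "attack":
            if health < 30:
                self.state = "retreat"
            elif not player_visible:
                self.state = "search"
        elif self.state == "search":
            self.state = "attack" if player_visible else "idle"
        elif self.state == "retreat":
            if health >= 30:
                self.state = "idle"
        return self.state
```

The appeal for early-'90s hardware is clear from the shape of the code: the AI's entire "mind" is one variable plus a few comparisons per frame, yet it already produces behavior like pressing an attack, breaking off when wounded, and hunting for a vanished player.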
Adapting to the Player
Out of all the tasks programmers can assign to the AI, adapting its movement and behavior to the player's actions is by far the greatest challenge. Why? Because, quite simply, the AI is composed of a series of pre-determined and largely scripted behavior patterns, and going beyond them is a burden on the CPU. Many games drew on new AI research to create more realistic enemy reactions, and more sophisticated forms of in-game artificial intelligence started to appear. These were built on nondeterministic AI techniques, such as neural networks and emergent behavior; both aim for a more life-like AI, one that evaluates your in-game actions and responds accordingly. These are all important factors in the design of any decent AI. With the appearance of games like Black & White, further attempts at true-to-life artificial intelligence were made. Although the game itself is regarded as one of the most disappointing and overhyped titles, the ambition of its creators (Lionhead Studios) certainly spurred other designers' efforts to perk up video game AI. B&W is a bold effort to let players interact with a virtual being and mold it as they see fit. The creature's appearance and conduct depend entirely on how players treat it throughout the game. Similar schemes appeared before Peter Molyneux's B&W in titles like "Creatures," an artificial-life program crafted by English computer scientist Steve Grand back in the mid '90s. It had nothing to do with making better AI-controlled killer soldiers, but it did offer a whole new dimension to the very notion of game artificial intelligence. From that point on, making better AI called for more imagination.
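One simple way to picture "adapting to the player" is an AI that tallies the tactics it observes and biases its response toward countering the most frequent one. This is a deliberately tiny sketch of the idea, not any shipped game's system; the tactic names and counter table are hypothetical:

```python
from collections import Counter

class AdaptiveAI:
    """Tracks observed player tactics and counters the favourite one."""

    # Hypothetical rock-paper-scissors-style counter table.
    COUNTERS = {"rush": "turtle", "turtle": "siege", "siege": "rush"}

    def __init__(self):
        self.observed = Counter()

    def observe(self, player_tactic):
        # Record one observed instance of a player tactic.
        self.observed[player_tactic] += 1

    def choose_response(self):
        if not self.observed:
            return "rush"  # default opening when nothing is known yet
        favourite = self.observed.most_common(1)[0][0]
        return self.COUNTERS[favourite]
```

Even this trivial tally escapes pure scripting: two players with different habits will face different opposition, which is the core of what "adaptive AI" promises.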
The rush of mainstream game developers largely disregarded this approach, focusing instead on the advancement of 3D first-person shooters. First-person action games evolved, and we started witnessing games like Half-Life, Doom 3, Far Cry and Halo, all of which have influenced AI development one way or another. Of course, as always, gamers got less than the developers initially promised. Still, step by step, we see more and more elaborate AI behavior patterns with each stage of video game evolution.
On the other hand, looking back at older games, programmers were always restricted by the system they were developing for. The original Killzone, for example, was constrained, as many games were, by the power of the PS2 console and its hardware. In this case, you could have as many as 14 AI soldiers acting independently, consuming approximately 12% of CPU time. Movement and actions are determined by placing invisible waypoints (about 4,000 per map in this game) that help the AI decide what to do and take alternative routes, again depending on the player's actions. This puts additional strain on the CPU. In Killzone, as in so many shooters, the AI uses terrain and objects in the environment to protect itself, while simultaneously trying to stop the player from finding cover. This paradigm applies to most high-profile 3D games we're used to nowadays, and no matter how hard programmers try to avoid it, it's the CPU that gets encumbered the most.
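Waypoint-driven cover selection of the kind described can be sketched roughly as follows. The waypoint data and the selection rule here are invented for illustration; this is not Killzone's actual code, just one plausible reading of "use cover points that keep distance from the player":

```python
import math

# Hypothetical waypoint list: (x, y, has_cover) tuples placed by a designer.
WAYPOINTS = [(0, 0, False), (3, 4, True), (10, 2, True), (6, 6, False)]

def pick_cover(enemy_pos, player_pos, waypoints=WAYPOINTS):
    """Pick the nearest cover waypoint that is at least as far from the
    player as the enemy currently is (so moving there doesn't close in)."""
    danger = math.dist(enemy_pos, player_pos)
    candidates = [
        (math.dist((x, y), enemy_pos), (x, y))
        for x, y, has_cover in waypoints
        if has_cover and math.dist((x, y), player_pos) >= danger
    ]
    return min(candidates)[1] if candidates else None
```

Every soldier re-evaluating a few thousand waypoints like this, every time the player moves, is exactly the kind of repeated distance math that eats the CPU budget the paragraph describes.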