I originally called this a tutorial, but that's misleading.. after reading this you won't have code to copy and paste that yields an unstoppable AI.. I only mean to introduce some concepts that people may find helpful
let's consider a simple game in which a player-controlled spaceship duels with an AI-controlled spaceship
the simplest aiming algorithm for the AI is to have it merely fire its guns toward the player's current location.. the problem is that the player can move
the natural extension to the above then is to give the AI some idea of physics and have it predict from the player's current velocity where it will be at time x, then aim there, expecting the player to continue at the same velocity
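that predictive aiming can be sketched in a few lines.. here's a minimal 2D version (the function name and tuple-based layout are my own invention, and it assumes the player holds a constant velocity, exactly the assumption the AI is making):

```python
import math

def intercept_point(shooter, target, target_vel, bullet_speed):
    """Find where a bullet fired now meets a target moving at constant
    velocity.  Positions/velocities are (x, y) tuples; returns None if
    the bullet can never catch the target."""
    # target position relative to the shooter
    rx, ry = target[0] - shooter[0], target[1] - shooter[1]
    vx, vy = target_vel
    # require |r + v*t| = bullet_speed * t, which is a quadratic in t
    a = vx * vx + vy * vy - bullet_speed * bullet_speed
    b = 2 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry
    if abs(a) < 1e-9:                       # speeds match: linear case
        t = -c / b if b else None
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                     # bullet too slow to intercept
        t1 = (-b - math.sqrt(disc)) / (2 * a)
        t2 = (-b + math.sqrt(disc)) / (2 * a)
        # take the earliest positive intercept time, if any
        t = min(x for x in (t1, t2) if x > 0) if max(t1, t2) > 0 else None
    if t is None or t <= 0:
        return None
    return (target[0] + vx * t, target[1] + vy * t)
```

the AI would aim at the returned point instead of the target's current position.. the quadratic has no positive root exactly when the player can outrun the bullets.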
the player's natural response would be to vary his velocity so the AI's predictions are off and its shots miss.. the AI would then need to alter its algorithm in turn
instead of having to code in a new explicit algorithm each time the player changes his tactics, we would prefer if the AI could simply learn the appropriate new response
in the machine learning literature there's a ton written on algorithms like dynamic programming, temporal-difference learning, Q-learning, Sarsa, Monte Carlo methods, etc.. all of which try to address the same problem:
we have an agent (the AI, in this case) who can sense some aspects of his environment, which we call states.. a state might be <AI velocity, player relative position to AI, player velocity>.. the agent also has a set of actions it can take at any time, say {change velocity, rotate left/right, rotate up/down, fire weapons straight ahead}
the various algorithms I mention all try to do the same thing: learn the optimal action to take given the current state.. the agent then executes that action, senses the environment again, and the process repeats
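that sense/act loop is the same regardless of which learning algorithm sits inside it.. a skeleton might look like this (the `env` and `agent` objects and their method names are hypothetical, just to show the shape of the loop):

```python
def run_episode(env, agent, steps):
    """One sense -> choose -> act -> learn loop.  `env.observe()` is
    assumed to return the current state, `env.step(action)` to apply
    the action and return a reward, and `agent.learn(...)` to update
    whatever values the chosen algorithm maintains."""
    state = env.observe()
    for _ in range(steps):
        action = agent.select(state)    # best known action for this state
        reward = env.step(action)       # act on the world
        next_state = env.observe()      # sense again
        agent.learn(state, action, reward, next_state)
        state = next_state
```

the different algorithms (Q-learning, Sarsa, etc.) differ only in what `agent.learn` does with that (state, action, reward, next state) tuple.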
example: if the state is <0, 5 units ahead, 0> (i.e., neither ship is moving and the player is straight ahead) then the optimal action may be to fire weapons.. if when the AI does this, the player always turns left and flies away, the algorithm will learn that the new optimal action is to turn right (so it can fire its weapons the next time it has to decide on an action)
I hope that all makes sense, so far
that basic idea is simple, but the implementation is where it gets tricky
for now I'll keep the details to a minimum, but the best place to look if you'd actually like to use this is http://www.cs.ualberta.ca/~sutton/book/the-book.html, a book by Sutton and Barto that explains the details very well with lots of examples (and math)
when there is a finite number of states and actions, these algorithms are very easy to apply: each combination of state and action is assigned its own value, and in a given state the AI simply selects the action with the highest value (how that value is set is the heart of each algorithm and is explained on every page that covers these, so I won't go into it here)
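to make the tabular case concrete, here's a minimal Q-learning sketch (the action names, learning rate, and discount factor are illustrative choices of mine; the update rule itself is the standard one from the Sutton and Barto book):

```python
import random
from collections import defaultdict

# the table: maps (state, action) pairs to a value, 0.0 by default
Q = defaultdict(float)
ACTIONS = ["change velocity", "rotate left", "rotate right", "fire"]

def select_action(state, epsilon=0.1):
    """Pick the highest-valued action for this state, but try a random
    one with probability epsilon so the AI keeps exploring."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One Q-learning step: nudge the stored value toward the reward
    plus the discounted value of the best action in the next state."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

each time the AI acts and sees the result, it calls `q_update`.. over many encounters the table converges toward the values that make `select_action` pick the right move, which is exactly the "learn the new response" behavior described above.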
the problem here is that there is an effectively infinite number of velocities and relative positions, so we can't explicitly assign a value to each combination of them.. even rounding them to one decimal place still produces a very large number of combinations, which would require a lot of memory
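the crudest workaround is to bucket the continuous values much more coarsely than one decimal place, so nearby situations share a table entry.. a sketch (the bucket size is a made-up tuning knob: too coarse and distinct situations blur together, too fine and the table explodes again; the real solutions in chapter 8 of the book are considerably smarter than this):

```python
def discretize(speed, rel_pos, bucket=2.0):
    """Collapse continuous sensor readings into a coarse tuple that
    can serve as a key into a finite value table.  `speed` is a
    scalar, `rel_pos` an (x, y) tuple of the player's position
    relative to the AI."""
    return (round(speed / bucket),
            round(rel_pos[0] / bucket),
            round(rel_pos[1] / bucket))
```

two situations that land in the same buckets are treated as the same state, so whatever the AI learns in one carries over to the other.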
solutions to these problems exist in the literature (for instance, chapter 8 of the book I linked to above), but I don't know them very well offhand.. someone interested in using this in a project would have a little work ahead of them to find the appropriate solution, but like I said.. I didn't set out to show the solution from start to finish, merely to let people know that these algorithms exist and give an idea of what they do in case someone wants to use them
spaceship AI: learning to aim
Have the AI fire where it predicts the bullets will hit based on direction and velocity, like you mentioned earlier. Just also have a small buffer area where it can also fire where hits could happen. As a very good fighter pilot in many multiplayer games, I can tell you my brain probably does very little of that advanced AI stuff, and I almost never lose. I have never seen a ship with enough maneuverability in speed or turning speed that could avoid a well-placed bullet based on its current speed/direction. Sure, maybe a few times, but not enough to matter. Plus, pilots simply can't fly if they change speed/direction too much. If, however, your AI enemies will be in some ubercraft, increase the firing area tolerance.
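That "buffer area" idea amounts to firing whenever the predicted intercept is within some angular tolerance of where the guns point, rather than requiring an exact lineup. A minimal sketch (the function name and the default tolerance are my own; angles are in radians, and the modular arithmetic handles wraparound at ±pi):

```python
import math

def should_fire(aim_angle, intercept_angle, tolerance=0.1):
    """Fire when the angle to the predicted intercept point is within
    `tolerance` radians of where the guns are aimed.  Widen the
    tolerance against more maneuverable craft, as suggested above."""
    diff = (intercept_angle - aim_angle + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= tolerance
```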
Although, now that I think about it, my brain does do more than that. For instance, a banking craft will be more likely to continue its path. If it decides to throw me a fast one and change direction, I follow and fire. It is noticeable and gives me a good shot on sudden opposite-direction changes. The best opponents I have found are ones with good aim and fast reactions that I can't shake off my tail. With good aim and unshakability, they will take me down eventually. Although that usually happens the other way around. Very few people escape me.
The AI gets more advanced when you are outmaneuvered by an enemy. You need to learn their patterns for getting sights on you, as well as the best ways to reacquire them. But taking someone down if you can stick on their tail is trivial.