Neural Network Evolution
Hello,
This is an app I made for my final year project at university. The little vehicles evolve to pick up the "food".
Download (both Linux and Windows binaries in each package):
http://xzist.org/temp/NN_Evolution.zip
http://xzist.org/temp/NN_Evolution.tar.gz
I'll release the source code after it's been assessed. Any bug reports or comments are welcome.
EDIT: Source released!
http://xzist.org/temp/NN_Evolution_source.zip
(requires Irrlicht 1.4.2)
Well, there's no great purpose to it except as a demonstration. It uses a genetic algorithm to evolve the weights in a neural network. After 20 generations or so they should learn to pick up the food, which they detect with their sensors (leave it in "fast forward" mode for a few minutes...). Tweaking the parameters can affect how good they become at it...
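For anyone wondering what "evolving the weights" actually boils down to, here is a minimal sketch in C++ (the flat weight vector, mutation rate and perturbation size are illustrative assumptions, not the app's actual code):

Code:

#include <cstdlib>
#include <vector>

// Minimal sketch: if the genome is just the network's weights laid out
// flat, "evolving" them means randomly perturbing some of them between
// generations. Parameter values are whatever you tune them to.
void mutate(std::vector<float>& weights, float rate, float maxPerturbation)
{
    for (std::size_t i = 0; i < weights.size(); ++i)
    {
        // with probability `rate`, nudge this weight by up to +/- maxPerturbation
        if ((float)std::rand() / RAND_MAX < rate)
            weights[i] += ((float)std::rand() / RAND_MAX * 2.f - 1.f) * maxPerturbation;
    }
}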
Often it seems going around in circles is the most efficient way to discover new food items.
Nice work, the camera transition is smooth and it runs without errors... and they seem to evolve too. I'm guessing they have a food sensor and a wall sensor, plus left and right motors?
It would be cool to have a "sex" goal, where they need to find a mate after eating enough, and have the unsuccessful ones die off naturally as they run out of energy.
If I put around 300 food items in, the bots learn that going backwards is the easiest way, although the results differ a lot.
With fewer, they become quite smart at avoiding blocks and going after food, it seems.
Nice work!
Nice project xDan!
What is the "generation"? And why does the whole environment seem to reset after some time? Is that the "generation" change?
Also, putting some limit on the circle radius would make the bots a bit more effective. If it is too small they have minimal chance of running into something; they just turn around on the spot (too small an area covered). But perhaps the radius is part of the learning process...
"Do you plan to release the source someday?"
Yep, in a few months, after it has been assessed.

"That could be translated well into future games, as 'bots' will try to outmanoeuvre you as they are 'learning' by watching the player..."
I think this is probably more suited to choosing higher-level goals for bots, though, rather than (as I have done.. :S) directly controlling them.
They have a forwards speed (which can be negative) and also a turning speed, so the turning radius is chosen by the neural network and is effectively evolved.
Their sensors detect the walls and food (giving a value of 0 to 1.0 for the distance along the sensor at which the intersection occurs). So that is 4 inputs to the neural network in total, and two outputs, for speed and turning.
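In code, such a controller could look something like the sketch below; the single layer and the tanh activation are my assumptions (the actual network layout isn't stated above), but the 4-in/2-out shape matches the description:

Code:

#include <cmath>
#include <vector>

// Sketch of the controller described above: 4 sensor inputs (wall and
// food intersection distances, each 0..1) fed through one layer of
// neurons to 2 outputs (forward speed and turning speed).
struct Layer
{
    // one weight vector per neuron; the last entry of each is the bias
    std::vector< std::vector<float> > weights;

    std::vector<float> feed(const std::vector<float>& in) const
    {
        std::vector<float> out;
        for (std::size_t n = 0; n < weights.size(); ++n)
        {
            float sum = weights[n].back(); // start from the bias
            for (std::size_t i = 0; i < in.size(); ++i)
                sum += weights[n][i] * in[i];
            // tanh keeps each output in -1..1, so the speed output can go
            // negative -- which is how a vehicle "chooses" to reverse
            out.push_back(std::tanh(sum));
        }
        return out;
    }
};

The two outputs would then just be scaled by the vehicle's maximum speed and turn rate before being applied each frame.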
Maybe after a lot of generations they will evolve to explore the environment more, rather than going in circles. But then again, since the food items are respawning all the time, more food may get spawned near the vehicle, so simply spinning around and only going to food when it's detected with the sensors might be pretty near the optimum solution. And also they have no real global knowledge; they can't see beyond the end of their sensors, so not a lot of deliberation seems to be going on. Hmm.
"What is the 'generation'? And why does the whole environment seem to reset after some time? Is that the 'generation' change?"
Yes. It should say "new generation" or something in the text log on screen, and there you should also see the best fitness in each generation (so that should improve over time). A generation is just the period of time the vehicles are left to run around the environment before they are killed and a new population is created from the "fittest" vehicles (with crossover - freaky vehicular sex - and all that...)
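As a rough outline (the Vehicle fields and the breeding scheme below are invented for illustration, not lifted from the app), one generation change might look like:

Code:

#include <algorithm>
#include <cstdlib>
#include <vector>

struct Vehicle
{
    std::vector<float> weights; // the genome: the neural net's weights
    float fitness;              // e.g. food collected during this run
};

bool fitterThan(const Vehicle& a, const Vehicle& b) { return a.fitness > b.fitness; }

// one-point crossover: splice the parents' weight vectors at a random cut
Vehicle crossover(const Vehicle& mum, const Vehicle& dad)
{
    Vehicle child;
    std::size_t cut = std::rand() % mum.weights.size();
    child.weights.assign(mum.weights.begin(), mum.weights.begin() + cut);
    child.weights.insert(child.weights.end(),
                         dad.weights.begin() + cut, dad.weights.end());
    child.fitness = 0.f;
    return child;
}

void nextGeneration(std::vector<Vehicle>& population)
{
    std::sort(population.begin(), population.end(), fitterThan); // best first

    std::vector<Vehicle> next;
    std::size_t parents = population.size() / 2; // breed from the fitter half
    while (next.size() < population.size())
    {
        const Vehicle& mum = population[std::rand() % parents];
        const Vehicle& dad = population[std::rand() % parents];
        next.push_back(crossover(mum, dad)); // freaky vehicular sex
    }
    population = next; // the old generation is "killed"
}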
But I think I'm done improving it for the time being; now I've got to write a damned report about it!
Well, without even having looked at more than your screenshots and reading the words "genetic algorithm", I feel like saying: GREAT WORK... I remember the long talks between level designers and the CEO of radonlabs (my old workplace) about bots' "intelligence".
Then again, I remember talking with an old school friend who studied AI in Osnabrück, Germany, who told me there is no such thing as AI until now. I don't really know what to believe; anyway, I know how potentially powerful the stuff you're playing with is... keep it up, please...
Wouldn't that be something for irrAI, JP?
EDIT: What you define as a valid goal in the evolution is strongly going to affect the outcome, so maybe turning speed shouldn't really be a valid goal. Judge them on collected food only, maybe? Well, the most impressive example I ever saw was a 2D top-down "drive around the block" thing... small cars with old-school physics (lots of rotation inertia, resulting in mass-pileup behaviour) trying to go around a square block. The only sensor was collision with a wall or another car, maybe later developing a ray test. I think it spawned 20-30 each generation. The first generation always hit the first wall, except for one or two which tried to go left or right, without even knowing if they went the right or wrong way... the second generation really all went the right way, since the only one worth judging by was the one that, by coincidence, actually went the right way the first time... the third generation was already going around the second corner; the fourth, fifth and sixth or so already made it around the block once. Going further, they learned to avoid each other and so on... after letting it run for 30 minutes or so they were godlike and better than any human Supercars player that I've ever seen playing... a perfect example, probably written by a professor... again... keep it up
EDIT2: Everyone here, I guess, somehow takes pathfinding (like Argorha pathfinding <- deserves more credit) for granted, even though you can probably find a lot of very good concepts in the source if you search and try to understand... maybe take a look?
EDIT3: Think Darwin... it's his year...
EDIT4: Looking at Supercars again, I feel like saying (and yes, I'm drunk): when will they ever teach them humour?
Like for example...
You can see it's lap 1 of 5, the race has just started, and the player (human) is motivated. While still adjusting to the new course (race track), the player is not performing too well. Up comes the car ramp (jumping obstacle), the player does well, goes around a few more curves and reaches the finish line... So that was the first lap; naturally the player has now adjusted quite well, having seen the whole course. Short-term memory still works, so lap 2 goes quite fluently, confidence kicks in, and long-term memory (RNS -> ZNS) begins to memorize. Lap 3 seems to be going well when, at about the middle of the lap, the player starts acting sloppy, hitting other cars and walls. There comes the jumping obstacle again, and the player somehow nearly crashes against the opposite wall by hitting another car in mid-air, just about makes it, at the same time saving the opponent car's life, which wouldn't have made the jump without the player's aiding velocity... There comes motivation again: the player tries to punish the undeserving dummy on purpose by hitting the opponent a few more times, making it crash against a wall. Now there is an example of difficult manoeuvring motivated purely by revenge... a perfectly human ability... Back to the beginning: I guess it will be hard to teach that kind of behaviour... try giving them dopamine, serotonin and adrenaline...
EDIT5: In other words, give them extra good rotation speed when "feeling" rewarded, happy or scared.
EDIT6: (after which I will watch 24 S07, Jack Bauer saving the world)
astound us...
xDan wrote: "and also there you should see the best fitness in each generation (so that should improve over time). generation is just the period of time they are left to run around the environment before they are killed and a new population created from the 'fittest' vehicles (with crossover - freaky vehicular sex - and all that..)"
Choose wisely what you consider fitness and what not; try to avoid unintelligent circling. Think route planning... I would love the concept of reverse thrust to brake in time (more Newton-based movement). Maybe think about centripetal force in sharp curves, and failure modes like bugs which roll onto their backs, unable to roll back. Maybe integrate irrNewt car physics, or irrPhysX generally.
astound us...
"Judge them on collected food only, maybe?"
Ya, that is all that is done.
Really, it's all about how well they achieve the goal... if, as in this case, the goal is just to collect food, then going in circles may well be the most efficient approach. The most "intelligent" approach, even!
On the other hand, if the goal is specifically to *appear* intelligent, then certainly, route planning could be used; they could be rewarded for travelling in a straight line, or for following chosen paths through the environment, etc. But that really seems to be against the point of a genetic algorithm. If we already know which behaviours are to be considered intelligent, maybe they can just be hard-coded, or a conventional pathfinding algorithm used.
I think the "magic" of genetic algorithms is that the method of choosing the fitness can be very simple. Like in a shooting game, perhaps simply a measure of the number of kills. Or in a chess game, the percentage of games won. And then evolution fills in the blanks.
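To make that concrete, the whole fitness function for each of those cases can be a one-liner (the struct and its fields are purely illustrative):

Code:

// The "magic" above in code form: the fitness measure stays trivial and
// evolution fills in the blanks.
struct Bot { int foodEaten; int kills; int gamesWon; int gamesPlayed; };

float foodFitness(const Bot& b)    { return (float)b.foodEaten; }
float shooterFitness(const Bot& b) { return (float)b.kills; }
float chessFitness(const Bot& b)   { return 100.f * b.gamesWon / (float)b.gamesPlayed; }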
But I think the solution in this case is to have a more complex environment, with more sensors (and longer sensors: an animal can usually see further than a few times its body length in front of it!) and more environmental stimuli... in its current state it's not a lot better than a wheeled robot with two whisker switches.
---
Another note: I discovered that the elitism parameter in the application is not all that useful (it's currently set by default to 4, I think, which allows the four best individuals to automatically go into the next population unmodified). But this causes the algorithm to "converge" too early sometimes, so it is better to set the elite count to zero. This prevents the vehicles from all evolving to travel backwards, as they are sometimes wont to do, and also gives more diverse behaviour in general...
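For reference, the elitism step being discussed is essentially just this (a sketch, again treating each genome as a flat weight vector):

Code:

#include <vector>

// Copy the eliteCount best genomes straight into the next generation,
// untouched. With eliteCount = 4 the same few individuals can dominate
// and the GA converges early; eliteCount = 0 skips this entirely, which
// is what's recommended above.
typedef std::vector<float> Genome;

void applyElitism(const std::vector<Genome>& ranked, // sorted best-first
                  std::vector<Genome>& next,
                  unsigned eliteCount)
{
    for (unsigned i = 0; i < eliteCount && i < ranked.size(); ++i)
        next.push_back(ranked[i]); // survives with no crossover or mutation
}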
Yup, they reach their peak after about 20 generations; there's little point running it for longer, it will just bounce around the optimum solution.
And a GA is randomness-based, and the food items and vehicles are positioned randomly, so there is going to be a large variance in fitness values. For example, that 35 could be where, by luck, a vehicle spawned in a dense patch of food.
The significant learning happens in the first few generations, where it goes from less than 10 fitness to 20+.