Anybody know of some practical tutorials for using peer-to-peer in RakNet, specifically peer discovery? I'm working on a non-graphical project that uses peer-to-peer networking, but whenever I look for tutorials/information online, all I find are papers or descriptions that are heavy on theory and light on implementation.
Any pointers would be appreciated.
I have read it (need to read it again, I guess...)
The problem is one of DISCOVERY. Once I know where peers are, I can tell RakNet to talk to 'em. But if my program drops into an unknown network, how does the program go about discovering other peers on the network without falling back on brute-force search (10.1.2.1? 10.1.2.2? 10.1.2.3? ...)? That's mostly what I'm trying to find.
Without knowing anything about RakNet, there should be only one method available: use a subnet broadcast, to which all clients reply. But this only works within a subnet, as broadcasts are not routed across subnet borders. Otherwise you will need client lists, either accessible from known servers or held as distributed lists by clients in your subnet (which you would look up as before).
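For what it's worth, here is a rough sketch of that broadcast approach using plain BSD sockets rather than RakNet's own API (RakNet may well have a ping/broadcast facility of its own; check the docs). The port number 60000 and the "MULTIVAC?" probe string are invented for illustration:

// Rough sketch: LAN peer discovery via subnet broadcast.
// Assumes peers are already listening on UDP port 60000 and reply
// to any "MULTIVAC?" probe; both are invented for this example.
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    // Allow sending to the broadcast address.
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    // Don't block forever waiting for replies.
    timeval tv = { 2, 0 };
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    sockaddr_in dest;
    std::memset(&dest, 0, sizeof(dest));
    dest.sin_family      = AF_INET;
    dest.sin_port        = htons(60000);            // discovery port (invented)
    dest.sin_addr.s_addr = htonl(INADDR_BROADCAST); // 255.255.255.255

    const char probe[] = "MULTIVAC?";
    sendto(sock, probe, sizeof(probe), 0, (sockaddr*)&dest, sizeof(dest));

    // Each peer that answers reveals its address via recvfrom();
    // the list of those addresses IS the discovery result.
    for (;;)
    {
        char reply[64];
        sockaddr_in from;
        socklen_t fromLen = sizeof(from);
        if (recvfrom(sock, reply, sizeof(reply), 0,
                     (sockaddr*)&from, &fromLen) <= 0)
            break; // timed out: done collecting replies
        std::printf("peer found at %s\n", inet_ntoa(from.sin_addr));
    }

    close(sock);
    return 0;
}

The listening side is just the mirror image: sit in recvfrom() on port 60000 and sendto() a reply to whatever address the probe came from.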
Have you looked at the server files??
I think this is where the information you need is. RakNet has a function for keeping track of its network. I read about it today in one of the files; I just don't remember which one. It might have been masterserver, but I really don't recall. I read all the docs available, but found actually reading the source files themselves did me more good.
Perhaps a quick intro to what I've been trying to do would help with this discussion.
I've been working on this idea since high school, and the idea is this: sharing processor power across an unreliable network. I know there are solutions out there (Linux MOSIX, Apple XGRID, and the Windows GPU, and others), but I have 2 problems with these implementations, and those like them:
1) They are single-platform (a program that utilizes XGRID can't get help from the GPU or a MOSIX cluster). This cuts you off from useful CPU cycles. (You are most likely to get shared power from the kind of people who use Linux, right?)
2) You are required (in XGRID and GPU) to TRUST the code being run on your computer, as the processing code is compiled to native machine language and run directly on your processor. MOSIX takes care of this by rerouting all system calls to the originating machine; I don't think XGRID or GPU have any such protection. You are just supposed to "trust" whoever wants to use your CPU.
2.5) At least in the GPU, you have to download DLLs that do the actual processing. If you don't have a specific DLL, you can't help with that type of job. And even then, you have to trust the DLL writer to handle your computer properly.
So, a distributed CPU-sharing system requires two things, in my opinion: PORTABILITY and TRUST. Both are essentially solved by a virtual machine.
In the system I've been hacking on (which I named Multivac, for no better reason than that there are MULTIPLE computers... and that I'm an Isaac Asimov fan!), jobs are built (hopefully using the GCC compiler chain) for a virtual machine named Luigi. The Luigi VM does what all good VMs do; i.e., it runs code as fast as possible, without allowing untrusted code access to anything but the CPU, a virtual memory space, and a virtual disk (so that the program can write files, which are returned).
Luigi VM would allow the same code written by a guy using a Mac to run on Windows, Linux, BeOS, even the GameCube, if anyone wrote a client for it.
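To make the trust half of that concrete, here is a toy dispatch loop in the spirit of what a Luigi interpreter would do. The opcodes are invented; the point is only that guest code can touch nothing but the arrays the interpreter hands it, so a hostile job can crash itself but never the host:

// Toy interpreter loop illustrating the sandbox idea: guest code can
// only reach vmMemory and vmDisk, never the host's address space.
// The opcodes are invented; Luigi's real instruction set would differ.
#include <vector>
#include <cstdint>
#include <cstddef>

enum Opcode : std::uint8_t { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

void run(const std::vector<std::uint8_t>& code,
         std::vector<std::uint8_t>& vmMemory, // the job's "RAM"
         std::vector<std::uint8_t>& vmDisk)   // the job's "disk", shipped back when done
{
    std::size_t pc = 0;
    std::uint8_t acc = 0;
    while (pc < code.size())
    {
        // .at() throws on any bad address or truncated operand,
        // aborting the job instead of touching host memory.
        switch (code.at(pc++))
        {
        case OP_LOAD:  acc  = vmMemory.at(code.at(pc++)); break;
        case OP_ADD:   acc += vmMemory.at(code.at(pc++)); break;
        case OP_STORE: vmDisk.at(code.at(pc++)) = acc;    break;
        case OP_HALT:  return;
        default:       return; // illegal opcode: kill the job, host unharmed
        }
    }
}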
The peer-to-peer model I've chosen is Employer-Employee. When a peer node gets a job from another program (say, Photoshop with a gigantomongous image being filtered, which has been broken into jobs), the computer with the jobs is the EMPLOYER, and the idle computer(s) are the EMPLOYEE(s). This is where my discovery confusion sets in.
Being that employers pop up randomly whenever there are jobs, and that employees are randomly available for work, I don't quite know how to set up a "job search." I had thought about having an employer or an employee broadcast a notice that they have work/are willing to work (MSG_HELP_WANTED/MSG_READY_FOR_WORK). However, this seems like it could be a great bandwidth waster; not only that, but it wouldn't work too well on the Internet.
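One way to keep that from eating bandwidth, at least on a LAN, is to throttle the announcement: an employer broadcasts MSG_HELP_WANTED once per interval and otherwise stays quiet, and idle employees answer with a unicast MSG_READY_FOR_WORK, so only one broadcast crosses the wire per interval. A sketch (the interval and the ID values are invented):

#include <ctime>

// Invented message IDs; RakNet reserves low IDs for its own packets,
// so user messages would start above its built-in range.
enum MessageID : unsigned char { MSG_HELP_WANTED = 100, MSG_READY_FOR_WORK = 101 };

const std::time_t ANNOUNCE_INTERVAL = 30; // seconds between broadcasts (invented)
std::time_t lastAnnounce = 0;

// Call this every tick; returns true when it is time to send one
// MSG_HELP_WANTED broadcast, then goes quiet until the next interval.
bool shouldAnnounce(bool haveUnassignedJobs)
{
    std::time_t now = std::time(0);
    if (haveUnassignedJobs && now - lastAnnounce >= ANNOUNCE_INTERVAL)
    {
        lastAnnounce = now;
        return true;
    }
    return false;
}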
I had also thought of setting up a server on the Internet (jobshare.autodmc.org) that employees could sign up with and post their credentials to (essentially speed metrics for various Luigi functions, so an employer could choose the best CPUs for the job). This is great for machines on the Internet, but what about situations like at my college, where the vast majority of the machines are behind a firewall and can't work with the server? I'd still like jobs to propagate around the school's network when they originate inside it!
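The record an employee posts to that server could be as simple as a struct of benchmark numbers; every field name here is invented for illustration. And for the firewalled machines, the subnet-broadcast trick from earlier in the thread could let peers on the school's network find each other locally even when they can't reach the central server:

#include <cstdint>

// Invented "credentials" record: speed metrics for Luigi primitives,
// so an employer can rank the available CPUs for a job.
struct WorkerCredentials
{
    char          name[32];        // host nickname
    std::uint32_t intOpsPerSec;    // Luigi integer throughput
    std::uint32_t floatOpsPerSec;  // Luigi floating-point throughput
    std::uint32_t diskBytesPerSec; // virtual-disk write speed
    std::uint16_t listenPort;      // where this employee accepts jobs
};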
I've done a lot of thinking about this and can't seem to come up with a solution. Any ideas?
(And I'd like to use RakNet, mainly because it made sense to me when I scanned the docs; that's much better than par for me with other networking systems.)