Next Generation of P2P

January 28th, 2003 | Posted by paul in Uncategorized

Here is some information I dug up about where I think P2P is going. Now, with the spectre of the federal pen hanging over file-sharers, it's only going to push the evolution of P2P to new heights.

The next generation of P2P is going to keep copyright lawyers up at night. Imagine a network that is not only immune to technological and legal attack because it's decentralized, but one in which even the people uploading files are immune. Additionally, this network will be immune to so-called junk files (fake, damaged, or even dangerous files put on the network) by implementing a highly sophisticated, decentralized web-of-trust reputation system.

Each file is checksummed, producing a unique ID that is practically impossible to forge. Then the peers “rate” the file. The weight of any peer’s rating is determined by its agreement with other peers. Thus you have a system that is nearly impossible to sabotage.
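A minimal sketch of that idea in Python. SHA-1 as the checksum and the particular trust-weighting rule (weight a peer by how often its past ratings matched consensus) are my assumptions for illustration, not anything an actual client specified:

```python
import hashlib

def file_id(data: bytes) -> str:
    """Content hash serves as a practically unforgeable file ID."""
    return hashlib.sha1(data).hexdigest()

def peer_weight(peer_ratings: dict, consensus: dict) -> float:
    """Weight a peer by how often its past ratings agreed with consensus."""
    if not peer_ratings:
        return 0.5  # unknown peers start out with a neutral weight
    agree = sum(1 for fid, r in peer_ratings.items() if consensus.get(fid) == r)
    return agree / len(peer_ratings)

def file_score(ratings: dict, history: dict, consensus: dict) -> float:
    """Combine the peers' ratings of a file, weighted by each peer's track record."""
    total = sum(peer_weight(history[p], consensus) * r for p, r in ratings.items())
    norm = sum(peer_weight(history[p], consensus) for p in ratings)
    return total / norm if norm else 0.0
```

The point of the weighting is that a saboteur who consistently disagrees with honest peers sees its influence shrink toward zero.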

The first part of this solution is to break up the content into separate blocks. Each block has its own hash. As you are downloading the content, each block can be checked. As soon as you encounter a corrupted block, you blacklist that node. This can be extended such that you download different blocks of a file from different nodes at the same time, thus getting the file sooner.
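The per-block verification and blacklisting might look like this sketch (the 64 KB block size and SHA-1 are illustrative assumptions):

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # assumed block size

def block_hashes(data: bytes) -> list[str]:
    """Split content into fixed-size blocks and hash each one."""
    return [hashlib.sha1(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

blacklist: set[str] = set()

def receive_block(node_id: str, block: bytes, expected_hash: str):
    """Accept a block only if it matches its trusted hash;
    blacklist any node that serves a corrupted block."""
    if hashlib.sha1(block).hexdigest() != expected_hash:
        blacklist.add(node_id)
        return None
    return block
```

Because every block can be checked independently, nothing stops you from requesting different blocks of the same file from different nodes in parallel.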

In fact, what would happen if no single node had a complete file? That alone might not absolve you of copyright infringement. But suppose that, in order to form each block of the file, you actually had to download multiple blocks by their hash numbers and XOR them together. Yes, it might take 3 times the bandwidth to download a file, but not necessarily 3 times as long in real time on a broadband connection. So if Joe offers block 0x2857389298371987578392, bytes that must be XOR’ed with two other blocks to produce the first block of the file, is Joe guilty of copyright infringement? That same block might also be needed to reconstruct The Constitution of the United States, or the Bible, or Moby Dick.
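The XOR trick can be sketched like this. The three-way split and the use of random padding blocks are my assumptions about one plausible construction; the key property is just that XOR is its own inverse:

```python
import os

def xor(*blocks: bytes) -> bytes:
    """XOR equal-length blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def split_block(plain: bytes, n: int = 3) -> list[bytes]:
    """Publish n blocks, none of which is the original content:
    n-1 random blocks, plus one block chosen so that XORing all n
    together reproduces the plaintext block."""
    randoms = [os.urandom(len(plain)) for _ in range(n - 1)]
    key = xor(plain, *randoms)
    return randoms + [key]
```

Each published block is indistinguishable from random noise on its own, and the same random block could in principle participate in the reconstruction of any number of unrelated files.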

The process of obtaining a file would be to first obtain a trusted list of the block numbers you need. Then you download those blocks over the P2P system; they may come from many different nodes. You just recombine them by mixing and adding water.
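Putting it together, the fetch step might look like this sketch, where `network` is a hypothetical stand-in for the P2P lookup layer (a mapping from block hash to block bytes):

```python
from functools import reduce

def xor_pair(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks together."""
    return bytes(x ^ y for x, y in zip(a, b))

def fetch_file(block_list: list[list[str]], network: dict) -> bytes:
    """Reassemble a file from a trusted block list.

    block_list gives, for each block of the file, the hashes of the
    published blocks that must be XOR'ed together to form it.
    """
    return b"".join(
        reduce(xor_pair, (network[h] for h in hashes))
        for hashes in block_list
    )
```

In a real client the dictionary lookup would be a network query, with each hash potentially served by a different node.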

So if the lawyers try to sue, whom are they going to sue? They can’t sue the individual, because no individual is sharing or uploading a copyrighted work, only small, uncopyrightable chunks of data. They can’t sue the network, because no one owns the network; it is totally decentralized and completely impersonal.

As Br00tus said on Slashdot:

Face it, we’re always going to be one step ahead of these people. I have been working on a Gnutella client, and am familiar with what it has implemented, and plans to implement. The idea of file CRC’s was thought of almost immediately after Gnutella hit the net, but it has been implemented in a preliminary form a few months ago in Gnutella, and tigertree hashes will improve on this when they are soon implemented. Plus we have web sites like Bitzi (which have an open database) so that one can verify files with their hashes. Of course, they can keep coming on and spewing junk out, trying to fake out Bitzi and whatnot, but I’m confident we’ll always keep one step ahead of them. And I think this kind of dialectic back-and-forth will eventually result in a p2p network very resistant to authority, and very dependent upon free association, which I think will be most awesome.

