- From: Mike Hearn <mike@plan99.net>
- Date: Thu, 21 Jan 2010 23:24:40 +0100
WebSockets doesn't let you open arbitrary ports and listen on them (the API is connect-only; see the sketch at the end of this message), so I don't think it can be used for what you want.

P2P in general is a lot more complicated than it sounds. It sort of works for things like large movies and programs because they aren't latency sensitive and chunk ordering doesn't matter (you can wait till the end and then reassemble). But it has problems:

- A naive P2P implementation won't provide good throughput or latency, because you might end up downloading files from a mobile phone on the other side of the world rather than a high-performance CDN node inside your local ISP. That sucks for users, and it also sucks for your ISP, who will probably find their transit links suddenly saturated and their nice cheap peering links with content providers sitting idle.

- That means unless you want to have your system throttled (or, in companies/universities, possibly banned), you need to respect network topology and have an intimate understanding of how it works. The YouTube/Akamai serving systems have this intelligence, for example, but whatever implementation you come up with won't.

- P2P is far more complicated than an HTTP download. I never use BitTorrent because it basically never worked well for me compared to a regular file download. You don't see it used much outside the pirate scene and distributing Linux ISOs these days, and I think that's the reason.

Your friend's problem has other possible solutions:

1) Harvesting the low-hanging fruit:

   1a) Making sure every static asset is indefinitely cacheable (use those ISP proxy caches!)

   1b) Ensuring content is being compressed as effectively as possible (a rough sketch of 1a and 1b follows at the end of this message)

   1c) Considering serving off a CDN like Akamai or Limelight. There is apparently a price war going on right now.

and of course the ultimate long-term solution:

2) Scaling revenues in line with traffic.
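To illustrate the first point: the browser's WebSocket API can only dial out to a server, never bind to a port and accept inbound connections, so a browser peer can't be connected to directly. A minimal sketch (the endpoint URL is hypothetical):

    // The WebSocket API is connect-only: you dial out to a server.
    // There is no bind()/listen() equivalent for accepting inbound peers.
    const ws = new WebSocket("ws://example.com/feed"); // hypothetical endpoint

    ws.onopen = () => ws.send("hello");                // send only after connecting out
    ws.onmessage = (event) => console.log(event.data); // data pushed by the server
    ws.onclose = () => console.log("connection closed");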
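And a rough sketch of what 1a and 1b look like at the HTTP level, written here as a small Node/TypeScript handler (the file path, port, and one-year max-age are illustrative assumptions, not anyone's real config):

    import * as fs from "fs";
    import * as http from "http";
    import * as zlib from "zlib";

    http.createServer((req, res) => {
      // 1a: a versioned static asset can be cached "indefinitely" by
      // browsers and ISP proxy caches; one year is the usual stand-in.
      res.setHeader("Cache-Control", "public, max-age=31536000");

      // 1b: compress on the wire when the client advertises support.
      const asset = fs.createReadStream("./static/app.js"); // hypothetical asset
      if (String(req.headers["accept-encoding"] || "").includes("gzip")) {
        res.setHeader("Content-Encoding", "gzip");
        asset.pipe(zlib.createGzip()).pipe(res);
      } else {
        asset.pipe(res);
      }
    }).listen(8080); // illustrative port

The caveat is that "indefinitely cacheable" only works if the asset's URL changes whenever its content does (e.g. a version number in the filename); otherwise clients will never pick up updates.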
Received on Thursday, 21 January 2010 14:24:40 UTC