- From: Ben Bucksch <ben.bucksch.news@beonex.com>
- Date: Mon, 31 Dec 2001 22:42:49 +0100
- To: Jeff Barber <jeff.barber@oracle.com>
- Cc: www-html@w3.org
Jeff Barber wrote:

> There are lots of methods of compressing images which help speed page
> display times; however, the main problem (apart from size) is network
> round trips.
>
> Each time an image is requested, the browser has to go back to the
> server.
>
> I would have thought that a simple solution to this would have been to
> combine all of the images into a single file.
>
> What say you?

Seems like it's an old FAQ. From <http://www.mozilla.org/docs/mozilla-faq.html>:

1.13) _I did all that, and people were still rude to me. Why?_

If you're proposing reworking something (like HTTP, HTML, etc.), you're expected to have a pretty good knowledge of it first. For example, if you were to propose compressing whole web pages before sending them and devising a new protocol to do so, you should research how HTTP works and how HTML works, and think about all the good *and* bad points of reworking things. To start understanding the issues with this example, you should dig up the RFCs for the relevant protocols, any documents written on the subject, etc. For this particular example, you would want to go look at the relevant topics:

* Stuart Cheshire's discussion on Latency versus Bandwidth
  <http://rescomp.stanford.edu/%7Echeshire/rants/Networkdynamics.html>
* Internet -- Under the Covers (by me)
  <http://junior.apk.net/%7Eqc/comp/protocols/>
* The RFC for HTTP
  <http://www.faqs.org/rfcs/rfc2068.html>
* The W3C's HTML 4.0 definition
  <http://www.w3.org/TR/REC-html40/>
* IETF recommendations for HTTP
  <http://www.ics.uci.edu/pub/ietf/http/>

Ben
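On the round-trip point it may help to be concrete. Below is a minimal Python sketch (the host and image paths are hypothetical placeholders, not from this thread) of what HTTP/1.1 already offers here: a persistent connection lets several image requests reuse one TCP connection, so each GET still costs a round trip, but there is no fresh connection setup per image.

    # Sketch: fetch several images over one persistent HTTP/1.1 connection.
    # Host and paths are made-up placeholders for illustration only.
    import http.client

    IMAGES = ["/img/logo.png", "/img/banner.png", "/img/icon.png"]

    conn = http.client.HTTPConnection("www.example.com")  # one TCP handshake
    for path in IMAGES:
        conn.request("GET", path)   # each GET still costs one round trip,
        resp = conn.getresponse()   # but no new connection setup per image
        data = resp.read()          # drain the body so the connection can be reused
        print(path, resp.status, len(data), "bytes")
    conn.close()

Note this is orthogonal to the quoted proposal of bundling all images into one file; that idea was already partly addressed at the time by data: URIs (RFC 2397), which inline image bytes directly into the HTML and avoid the extra requests entirely.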
Received on Monday, 31 December 2001 16:45:19 UTC