- From: Marc Fawzi <marc.fawzi@gmail.com>
- Date: Thu, 12 Feb 2015 08:13:08 -0800
- To: Aryeh Gregor <ayg@aryeh.name>
- Cc: Boris Zbarsky <bzbarsky@mit.edu>, public-webapps <public-webapps@w3.org>
- Message-ID: <CACioZisVRAYLss2ErsFeMj+PwRAz_phmGDSJ0+o6uNFybYfvPw@mail.gmail.com>
<< Legacy problems

Across the computing industry, we spend enormous amounts of money and effort on keeping older, "legacy" systems running. The examples range from huge and costly to small and merely annoying: planes circle around in holding patterns burning precious fuel because air traffic control can't keep up on systems that are less powerful than a smartphone; WiFi networks don't reach their top speeds because an original 802.11 (no letter), 2Mbps system *could* show up; you never know.

So when engineers dream, we dream of leaving all of yesterday's technology behind and starting from scratch. But such clean breaks are rarely possible. For instance, the original 10 megabit Ethernet specification allows for 1500-byte packets. Filling up 10Mbps takes about 830 of those 1500-byte packets per second. Then Fast Ethernet came along, which was 100Mbps, but the packet size remained the same so that 100Mbps Ethernet gear could be hooked up to 10Mbps Ethernet equipment without compatibility issues. Fast Ethernet needs 8300 packets per second to fill up the pipe. Gigabit Ethernet needs 83,000, and 10 Gigabit Ethernet needs *almost a million packets per second* (well, 830,000). For each faster Ethernet standard, the switch vendors need to pull out even more stops to process an increasingly outrageous number of packets per second, running the CAMs that store the forwarding tables at insane speeds that demand huge amounts of power.

The need to connect antique NE2000 cards meant sticking to 1500 bytes for Fast Ethernet, and then the need to talk to those rusty Fast Ethernet cards meant sticking to 1500 bytes for Gigabit Ethernet, and so on. At each point, the next step makes sense, but *the entire journey ends up looking irrational*. >>

Source: http://arstechnica.com/business/2010/09/there-is-no-plan-b-why-the-ipv4-to-ipv6-transition-will-be-ugly/

This guy here is bypassing the DOM and using WebGL for user interfaces: https://github.com/onejs/onejs

He even has a demo, with no event handling other than arrow keys at this point and, as the author admits, ugly graphics. But with projects like React-Canvas (forget the React part, focus on Canvas UIs) and attempts like these, it looks like the way of the future is to relegate the DOM to old, boring business apps and throw more creative energy at things like a WebGL UI toolkit (the idea that guy is pursuing). Quick sketches of the packet-rate arithmetic and of a hand-rolled canvas widget are at the bottom of this message.

On Thu, Feb 12, 2015 at 3:46 AM, Aryeh Gregor <ayg@aryeh.name> wrote:

> On Thu, Feb 12, 2015 at 4:45 AM, Marc Fawzi <marc.fawzi@gmail.com> wrote:
> > how long can this be sustained? forever? what is the point in time
> > where the business of retaining backward compatibility becomes a huge
> > nightmare?
>
> It already is, but there's no way out. This is true everywhere in
> computing. Look closely at almost any protocol, API, language, etc.
> that dates back 20 years or more and has evolved a lot since then, and
> you'll see tons of cruft that just causes headaches but can't be
> eliminated. Like the fact that Internet traffic is largely in
> 1500-byte packets because that's the maximum size you could have on
> ancient shared cables without ambiguity in the case of collision. Or
> that e-mail is mostly sent in plaintext, with no authentication of
> authorship, because that's what made sense in the 80s (or whatever).
> Or how almost all web traffic winds up going over TCP, which performs
> horribly on all kinds of modern usage patterns. For that matter, I'm
> typing this with a keyboard layout that was designed well over a
> century ago to meet the needs of mechanical typewriters, but it became
> standard, so now everyone uses it due to inertia.
>
> This is all horrible, but that's life.
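For what it's worth, the packet-rate figures in the quoted passage are easy to check: it's just the line rate divided by the frame size in bits. A rough sketch in TypeScript (this ignores the Ethernet preamble, headers, and inter-frame gap, which push the real maximum rates a bit lower):

    // Packets per second needed to saturate a link with 1500-byte frames.
    // Simplified: real Ethernet framing overhead lowers these numbers slightly.
    const FRAME_BITS = 1500 * 8; // 12,000 bits per frame

    const packetsPerSecond = (linkBps: number): number => linkBps / FRAME_BITS;

    console.log(packetsPerSecond(10e6));  // 10Mbps  -> ~833
    console.log(packetsPerSecond(100e6)); // 100Mbps -> ~8,333
    console.log(packetsPerSecond(1e9));   // 1Gbps   -> ~83,333
    console.log(packetsPerSecond(10e9));  // 10Gbps  -> ~833,333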
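And to make the "Canvas UI" idea concrete: the toolkit, not the browser, draws every widget and decides what a click hit. A minimal sketch of that pattern (made-up names, plain 2D canvas rather than WebGL, and not OneJS's or React-Canvas's actual API):

    // A hand-rolled "button" drawn onto a <canvas>, with manual hit testing.
    // This is the core trick that DOM-bypassing UI toolkits build on.
    interface Button {
      x: number; y: number; w: number; h: number;
      label: string;
      onClick: () => void;
    }

    const canvas = document.querySelector('canvas')!;
    const ctx = canvas.getContext('2d')!;

    const button: Button = {
      x: 20, y: 20, w: 120, h: 32,
      label: 'Click me',
      onClick: () => console.log('clicked'),
    };

    function draw(b: Button): void {
      ctx.fillStyle = '#ddd';
      ctx.fillRect(b.x, b.y, b.w, b.h);       // widget background
      ctx.fillStyle = '#000';
      ctx.font = '14px sans-serif';
      ctx.fillText(b.label, b.x + 10, b.y + 21); // widget label
    }

    // The toolkit does its own hit testing instead of relying on DOM events
    // targeting individual elements.
    canvas.addEventListener('click', (e) => {
      const r = canvas.getBoundingClientRect();
      const px = e.clientX - r.left;
      const py = e.clientY - r.top;
      if (px >= button.x && px <= button.x + button.w &&
          py >= button.y && py <= button.y + button.h) {
        button.onClick();
      }
    });

    draw(button);

Everything the DOM gives you for free (layout, focus, accessibility, text selection) has to be rebuilt by hand in this model, which is exactly why it's so much work and why these projects are still demos.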
Received on Thursday, 12 February 2015 16:14:21 UTC