From: Michael Heuberger <michael.heuberger@binarykitchen.com>
Date: Mon, 26 May 2014 11:52:42 +1200
To: David Bruant <bruant.d@gmail.com>, whatwg@lists.whatwg.org
Hi David,

>>>> Look at Angular, their templates reside on the client side. For
>>>> production, a grunt task can compress all files into one single,
>>>> huge JS file that is served to the client, then for any subsequent
>>>> pages no more resources are loaded from the server. It is a widely
>>>> used practice.
>>> Look at React.js, it allows to render templates on the server side
>>> and it's been a selling point for some people (it allows to generate
>>> the same page whether you are on the client or server side. It helps
>>> for SEO sometimes).
>> Yeah, each framework has its own way - nevertheless, Angular is very
>> popular these days.
> "these days" is not an argument. We're discussing an addition to the
> web platform and as we know, things are hard to remove after they've
> been added [1]. Frameworks come and go, the platform stays (so far at
> least)

Exactly. The platform stays. That's why I do not want to hack the HTTP
status code into the meta tags through my framework. What I meant with
Angular's popularity is that JavaScript and SPAs are popular.

>>>>> Serving different content based on different URLs (and status)
>>>>> actually does make a lot of sense when you want your user to see
>>>>> the proper content within the first HTTP round trip (which saves
>>>>> bandwidth). If you always serve generic content and figure it all
>>>>> out on the client side, then either you always need a second
>>>>> request to get the specific content or you're always sending
>>>>> useless data during the first generic response, which is also
>>>>> wasted bandwidth.
>>>> Good point. From that point of view I agree, but you forgot one
>>>> thing: the user experience. We want mobile apps to be very
>>>> responsive, below 300ms.
>>> Agreed (on UX and responsive applications)
>>>
>>>> Hence the two requests. The first one ensures the SPA is loaded and
>>>> the UI initialized. You'll see some animation, a text saying
>>>> "Fetching data" or whatever. Then the second request retrieves the
>>>> specific content.
>>> What I'm proposing is that all the relevant content is served within
>>> the *first* request. The URL is used by the client to express to the
>>> server (with arbitrary granularity, it depends on your app,
>>> obviously) what the user wants.
>>> What I'm proposing is not two requests to get the proper content,
>>> but only one. The user doesn't even have to wait with a useless
>>> "Fetching data" screen; the useful content is just there within the
>>> first request (hence server-side rendering with React or Moustache
>>> or else being useful).
>> Yeah, of course I could do that too. It is psychologically proven
>> that the subjective waiting time feels shorter when you see something
>> as soon as possible.
> Yes, and what I'm suggesting is providing actual content as soon as
> possible. The whole idea of the "critical rendering path" is exactly
> about engineering your webpage so useful content is provided to the
> user as soon as possible (which is as soon as you're currently capable
> of showing a "Fetching data" screen).
>
> If you're being serious about bandwidth and UX (including perceived
> performance), it's exactly what you should be doing, I believe.

I totally agree with you here. But again, I want to know about the 404
in one request, not in two requests (Ajax). Isn't that a performance
optimization, regardless of your application architecture and critical
rendering path? Either way, the ability to read the HTTP status code
from JavaScript would save me from resorting to "hacks".
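To make the "hack" concrete, here is a rough sketch of the meta-tag
workaround I mean. It assumes an Express-style Node server, and the
helpers videoExists() and showNotFoundView() are made up purely for
illustration - my real stack and code look different:

    // Server side (illustrative Express-style app, not my actual code):
    var express = require('express');
    var app = express();

    // Render the same SPA shell for every URL, but smuggle the HTTP
    // status code into a meta tag so the client can read it back later.
    app.get('*', function (req, res) {
      var status = videoExists(req.path) ? 200 : 404; // hypothetical lookup
      res.status(status).send(
        '<!DOCTYPE html><html><head>' +
        '<meta name="http-status" content="' + status + '">' +
        '</head><body><div id="app"></div>' +
        '<script src="/app.js"></script></body></html>'
      );
    });

    app.listen(3000);

    // Client side (inside the SPA): read the smuggled status back out.
    var meta = document.querySelector('meta[name="http-status"]');
    var status = meta ? parseInt(meta.getAttribute('content'), 10) : 200;
    if (status === 404) {
      showNotFoundView(); // hypothetical routine that renders the 404 view
    }

The only other workaround I know of is firing a second XMLHttpRequest
at the same URL just to inspect its status, which is exactly the extra
round trip I want to avoid. A read-only status property exposed to the
page's JavaScript would make both of these workarounds unnecessary.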
>>>>>> Furthermore you can convert a whole single page app into an
>>>>>> iPhone app with PhoneGap. All the HTML resides in the app, not on
>>>>>> the server. That's a very different approach and a good reason
>>>>>> why JavaScript has the right to know whether the HTTP request
>>>>>> resulted in a 200 or a 404.
>>>>> If all the HTML resides in the app, not on the server, then it
>>>>> wasn't served via HTTP, so there is no 200 or 404 to inform about
>>>>> (since no HTTP request occurred).
>>>> Ah, well spotted. PhoneGap comes with two options:
>>>> a) you can keep the whole HTML in the app, or
>>>> b) have it served from the server during the first HTTP request.
>>>>
>>>> Option a) saves bandwidth but you cannot update pages easily
>>>> (unlike option b).
>>>>
>>>> Option a) wouldn't need to know if it's a 200 or 404, you are
>>>> right. Still, option b) needs to know the status code.
>>> Option b) sounds like a bookmark, so it's a regular web page, so the
>>> arguments against stand (?)
>> Yes, like a bookmark. And that's a case where it would be great to
>> read the HTTP status code from JavaScript.
> Continuing the argument from above, the bookmark is to a specific URL
> and so, for the sake of bandwidth and UX, you should be serving
> different content for different URLs, so you don't need the HTTP
> status code

Huh? In some cases you might be right, but you have probably missed the
architecture of my case in my earlier emails. This won't work for me.

>>>> Let me ask you another question:
>>>> Is there a good reason NOT to give JavaScript a chance to find out
>>>> the HTTP status code of the current page?
>>> By that argument, an absurd amount of features should go in ;-)
>> Really? How many "absurd" feature requests do you guys get every day?
> http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2014-May/296902.html
> ? This suggestion is mostly harmless, but doesn't really add value in
> my opinion.
> I am less active on standards mailing lists these days, so I cannot
> count, but others will know better.

Heh, okay, I can agree that that one is absurd, especially when it
comes from a Hotmail account ;)

>>> From a more "social" standpoint, it bothers me, because it means
>>> people serve the exact same content for all URLs, which defeats the
>>> very purpose of why URLs were invented in the first place.
>> Ummm, I am not sure if I can follow you. What do you mean by "people
>> serve the exact same content for all URLs"?
> I meant "people write servers which serve the exact same content for
> all URLs".

Gotcha!

>> It is absolutely normal that URLs change or become invalid, hence the
>> need for status codes. You know ...
>>
>>> You want to serve the same content regardless of the URL and then
>>> have client-side code read the URL and change the page state based
>>> on the URL. We already have a standardized way to express a part of
>>> the URL that is only interpreted on the client side, which is the
>>> hash (everything after '#'). Format your URLs using # if that's your
>>> intention.
>> My single page app works without the # part and uses absolutely
>> normal-looking URLs to make it easier for search engines to spider
>> it.
> Then why serve the exact same content on every URL?

I don't do that :)

>>> Also, given that you always serve the same content and only figure
>>> things out on the client side, why does your server sometimes answer
>>> 404? Deciding whether the URL is erroneous should occur on the
>>> client side, no?
>>> Anyway, so far, what you're asking for seems like it's only
>>> encouraging misusage of existing technologies.
>> Okay, I have a huge sign language video library here for Deaf people.
>> Anyone can add / edit / delete stuff. Each video has a unique URL.
>> When I load the page of a deleted video, a 404 is returned with the
>> whole SPA code, and additional stuff is rendered to deal with 404s in
>> a nice way to improve usability. Here you have a real-world example,
>> and it is not a misusage of technologies.
> I still see a misuse of URLs.
> Why aren't you serving a different page for 404s? The perceived
> performance would be better for your users.
>
> Even if there is a way to read the HTTP status code, the user has to
> wait for:
> 1) the HTML + the SPA code to be downloaded
> 2) the SPA to read the HTTP status code and build the error page
> 3) the error page to be displayed
>
> If you serve different content on 404, the user has to wait for:
> 1) the HTML to be downloaded (which naturally displays the page)
> 2) (then, you can improve the experience with the JS code which
> downloads while the user is reading that they're on the wrong page)

Good summary! I thought about this a lot beforehand and tried it
myself. I decided against the latter method because I wanted to process
/ treat all pages the same way, without exceptions, to keep the code on
the server side as simple as possible. Otherwise it would become too
complicated and really messy: when the HTML is downloaded it is
rendered, and as long as the JS code is not ready you are in an
undefined state. Also, think of other things, e.g. navigation elements
which are entirely JS driven. I'd have to add more hacks, bend my
framework to make it work for this case, and so on. Just too
complicated, and it would cost more in the end. I want to treat all
pages the same way with "one" codebase. This would work if the HTTP
status code were readable from within JavaScript.

>> PS: I wonder what the WHATWG procedure looks like? Are you all
>> working for the WHATWG and deciding which features to implement?
> Others will correct the specifics, but the WHATWG is an informal (?)
> group of browser vendors. This mailing list is open to anyone (and
> obviously web browser engineers participate in this list).

Thanks! Is anyone here a web browser engineer? I'd like to read a
comment from them.

> In the end, browser vendors make the call on what they want to
> implement (regardless of any sort of "standardization" process, as EME
> showed again, thanks Microsoft and Google!). Browser vendors have a
> lot on their plates, so they'll implement something only if they see
> strong support or a strong need. I believe that's why you've been sent
> to this mailing list by Mozilla engineers ;-)

Right, I see that :) Is there a lot of politics involved? Tell me, do I
need lots of supporters to push my feature request through?

> I'm personally not attached to any browser vendor (but I contribute to
> MDN and a couple of Mozilla projects occasionally). I don't make any
> decisions. I'm a web developer, carrying a web developer point of view
> (which browser vendors make what they want of, but it is sometimes
> helpful for them to figure out what to implement and what not).
> Basically, I'm just making conversation :-)

Cool, thanks for sharing and conversing :)

Michael

--
Binary Kitchen
Michael Heuberger
4c Dunbar Road
Mt Eden
Auckland 1024 (New Zealand)

Mobile (text only) ... +64 21 261 89 81
Email ................ michael@binarykitchen.com
Website .............. http://www.binarykitchen.com
Received on Sunday, 25 May 2014 23:53:14 UTC