
Re: [cssom] Author-defined at-rules

From: Jonathan Rimmer <jon.rimmer@gmail.com>
Date: Tue, 02 Jul 2013 16:22:44 +0100
Message-ID: <51D2F044.3060100@gmail.com>
To: François REMY <francois.remy.dev@outlook.com>
CC: "Tab Atkins Jr." <jackalmage@gmail.com>, Simon Sapin <simon.sapin@exyr.org>, www-style list <www-style@w3.org>
On 02/07/2013 13:16, François REMY wrote:
>> > The point is: you'll have to build a parser anyway. If you're up
>> > to the trouble of making a parser, the cost of making a
>> > preprocessor out of it is close to 0.
>>
>> You need to build a parser for the syntax of your custom rule, but 
>> there's no reason to think that needs to be a full CSS parser, or 
>> even a CSS parser at all.
>
> By the time browsers all support this custom at-rule thing, all 
> existing preprocessors will have a plugin system that does that work 
> for you, and there will be tons of bootstraps available. I'm pretty 
> sure some already are.
>

I don't think any of us are able to make categorical statements about 
the future development of every existing preprocessor. I certainly hope 
they do introduce this functionality, but even if they do, the ability 
to do what I'm suggesting without the requirement of preprocessors will 
still make it easier to develop and use polyfills that require custom 
at-rules.

>
>> Placing a single reference to the polyfill is a lot simpler than 
>> installing a preprocessor, integrating it into a workflow, and 
>> configuring it to search every CSS, HTML, PHP, ASPX, etc. file in a 
>> project for instances of your custom syntax to generate the style JS 
>> file.
>
> You're not objective. You have to do this already to install your 
> polyfill. If you can do it for your polyfill, there's no reason you 
> can't do it for your customized build of the polyfill including the 
> json data.
>

No, I wouldn't. With the feature I've proposed, the polyfill would only 
have to register to receive notifications of the custom rules that it 
is interested in. The browser itself would take care of parsing the rest 
of the CSS for conditional rules, parsing the HTML for embedded <style> 
tags, recognising scoped stylesheets and determining the elements they 
apply to, registering the appropriate conditional handlers, handling 
styles in <template>s, web components, dynamically inserted stylesheets, 
etc., using the infrastructure already in place to do this. The polyfill 
would simply have to deal with the contents of the rule itself.
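To make the division of labour concrete, here is a minimal sketch of what such a registration API might look like. Every name in it (registerAtRule, the callback shape, the rule name) is hypothetical; the proposal in this thread does not fix a concrete API, and the "browser side" here is simulated with a crude regex rather than a real CSS tokenizer.

```javascript
// Stand-in for the browser side: collects handlers and, when a
// stylesheet is parsed, hands each unknown at-rule's raw contents to
// whichever polyfill registered for it. All names are hypothetical.
const handlers = new Map();

function registerAtRule(name, callback) {
  handlers.set(name, callback);
}

// Crude simulation of the browser encountering unknown at-rules while
// parsing a stylesheet. A real engine would do this with its own
// tokenizer, across <style> tags, scoped sheets, templates, etc.
function simulateParse(cssText) {
  const atRule = /@([-\w]+)\s*([^{]*)\{([^}]*)\}/g;
  let match;
  while ((match = atRule.exec(cssText)) !== null) {
    const [, name, prelude, body] = match;
    const handler = handlers.get(name);
    if (handler) handler({ prelude: prelude.trim(), body: body.trim() });
  }
}

// Polyfill side: it never sees the rest of the stylesheet, only the
// prelude and body of its own rule.
const seen = [];
registerAtRule('my-grid', rule => seen.push(rule));

simulateParse('@my-grid main { columns: 3 }');
console.log(seen);
```

The point of the sketch is the asymmetry: the dispatcher (the browser) owns all the discovery and parsing machinery, while the polyfill's parser only ever has to understand its own rule's contents.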

>
>> I preprocess on my computer for the project I'm working on, and it 
>> requires a Grunt workflow to minify, concatenate and otherwise mangle 
>> and process things in the right order as required by the various 
>> tools and frameworks I'm using. It took an age to get working 
>> properly and it's complicated and fragile, with the result that 
>> adding new tools is burdensome enough not to bother anymore, and I'm 
>> increasingly removed from the edit-refresh-test cycle that has always 
>> been a significant value proposition of web development.
>
> It probably has more to do with your needs & setup than with the 
> requirements of preprocessors. I never used such complex and fragile 
> architecture nor do I intend to.
>

Every developer's needs and setup are different, but the existence of 
sophisticated build workflow tools like Grunt is evidence that 
processing requirements are, for many, non-trivial. The more complex the 
web development workflow becomes, the higher the barrier to entry and 
the greater the difficulty of experimentation. This proposal makes it 
easier for developers to write polyfills that introduce new syntax, and 
easier for developers to use them, by simply dropping in a script tag 
and using the new syntax within their CSS. The trade-off is that some 
parsing happens in script on the client. I think that trade-off is worth 
it, but if you have a philosophical objection to doing this processing 
on the client you may not agree. However, you need to justify that 
objection with something other than the fact that you don't like it.
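For illustration, the "drop in a script tag" deployment might look like this. The filename and at-rule name are invented; unknown at-rules are simply ignored by today's native parsers, which is what makes this degrade safely.

```html
<!-- Hypothetical usage: my-grid-polyfill.js and @my-grid are
     invented names; the proposal does not define either. -->
<script src="my-grid-polyfill.js"></script>
<style>
  /* Ignored by the native parser, but under the proposal the
     browser would hand this rule's contents to the polyfill. */
  @my-grid main {
    columns: 3;
  }
</style>
```

No build step, no preprocessor configuration: the entire integration cost is the one script reference.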

>
>> > This is not to say I'm completely against it, but everything that
>> > could be preprocessed on the server should be preprocessed on the
>> > server. There's no reason in forcing two parsers to analyse the
>> > same information every time you reload the page. Also, there's no
>> > way to know when the CSS is done with parsing so when do you start
>> > looking for your custom at-rules?
>>
>> By that argument, we should remove all declarative APIs from the 
>> browser, HTML, CSS, SVG, etc., process everything into JavaScript 
>> calls, and just have the platform consist of imperative APIs.
>
> Not at all. Whether we speak about HTML, CSS or SVG, the declarative 
> syntax is a very effective way to transmit information & the browser 
> actually *understands* it (and much faster than a script could 
> communicate it).
>
> Using those declarative tools in a way where they just can't 
> understand the data you give them, however, is a waste.

The custom syntax itself needs to be processed, either on the server or 
on the client. Again, you may have a philosophical objection to 
performing this processing on the client. I don't. I think the gains in 
ease of experimentation, ease of deployment and ease of use are worth 
the "waste" of doing some parsing in script, and just saying "we 
shouldn't do this" is not going to convince me in the absence of any 
other evidence.

Jon Rimmer
Received on Tuesday, 2 July 2013 15:23:14 UTC
