Re: Shrinking existing libraries as a goal

A related TL;DR observation...

While we may get 5 things that really help shrink the current set of
problems, adding APIs inevitably introduces new ones.  In the
meantime, nothing stands still - lots of specs are introducing lots of new
APIs. Today's 'modern browsers' are the ones we'll all be swearing at a
year or two from now.

New APIs let people think about things in new ways.  Given new APIs,
new ideas will develop (either in popular existing libraries, or in whole
new ones).  Ideas spawn more ideas - offshoots, competitors, etc. In the
long term, changes like the ones being discussed will probably serve more
to mitigate libraries' otherwise inevitable continued growth than to
actually shrink them.

More interestingly though, to Tab's point - all of the things that he
explained will happen with all of those new APIs too.  New ideas will spawn
competitors and better APIs that are normalized by libraries, etc.  They
will compete and evolve until eventually it becomes self-evident, over
time, that the user community at large still much prefers something to
whatever is actually implemented in the browser.  It seems to me that
this is inevitable, happens with all software, and is actually kind of a
good thing...

I'm not exactly sure what value this observation has, other than maybe to
explain why I think that on this front libraries have a few important
advantages, and to wonder aloud whether there is somehow a way to change
the model/process to incorporate those advantages more directly.  In
particular, the advantages are real-world competition and less need to be
absolutely, positively, fully universal.

The advantages of the competition aspect, I think, cannot be overstated -
they play in at virtually every point along the whole lifecycle.  For all
of the intelligence on the committees and on these lists (and it's a lot),
it's actually a pretty small group of people ultimately proposing things
for the whole world.  By their very nature, committees (and the vendors
who are heavily involved) also have to consider the very fringe cases, and
browser vendors have to enter into things knowing that every change means
more potential problems that have to work without breaking anything
existing.

Libraries might have a small number of authors, but their user base starts
out small too. The fact that it is the author's choice to opt in to using
a library also means libraries are much freer to rev and version and say
"don't do that, instead do this" for some of the very fringe cases - or
even just consciously decide that that is not a use case they are
interested in supporting.

With the standards process, even when we get to vendor implementations,
features start out in test builds or require flags to enable.  While
that's "good", it's really more a test of uniform compliance and a preview
for/by a group of mavens.  It means that features/APIs cannot actually be
practically used in developing real pages/sites, and that is a huge
disadvantage that libraries generally don't have.  Often it isn't until
thousands and thousands of average developers have had significant time to
really live with something in the real world (actually delivering product)
that it becomes evident it is overly cumbersome or somehow falls short for
what turn out to be unexpectedly common cases.

Finally, the whole point of these committees is to arrive at standards,
not to compete.  In practice, though, they also commonly resolve
differences after the fact (the standard is revised to match what is
implemented and now can't change).  Libraries are usually inherently the
opposite - they want competition first and standardization only after
things have wide consensus.  These are the kinds of things that drive
innovation and the competition of ideas, which ultimately helps define and
evolve what the community at large sees as "good".

I'm not exactly sure how you would go about changing the model/process to
encourage/foster the sort of inverse relationship while simultaneously
focusing on standards... tricky.  Maybe some of the very smart people on
this list have some thoughts?

-Brian


On May 17, 2012 3:52 PM, "Rick Waldron" <waldron.rick@gmail.com> wrote:

>
>
> On Thu, May 17, 2012 at 3:21 PM, Brian Kardell <bkardell@gmail.com> wrote:
>
>> On Thu, May 17, 2012 at 2:47 PM, Rick Waldron <waldron.rick@gmail.com>
>> wrote:
>> >
>> >
>> > On Thu, May 17, 2012 at 2:35 PM, Brian Kardell <bkardell@gmail.com>
>> wrote:
>> >>
>> >> So, out of curiosity - do you have a list of things?  I'm wondering
>> >> where some efforts fall in all of this - whether they are good or bad
>> >> on this scale, etc... For example:  querySelectorAll - it has a few
>> >> significant differences from jQuery both in terms of what it will
>> >> return (jquery uses getElementById in the case that someone does #,
>> >> for example, but querySelectorAll doesn't do that if there are
>> >> multiple instances of the same id in the tree)
>> >
>> >
>> > Which is an abomination for developers to deal with, considering
>> the ID
>> > attribute value "must be unique amongst all the IDs in the element's
>> home
>> > subtree"[1] . qSA should've been spec'ed to enforce the definition of
>> an ID
>> > by only returning the first match for an ID selector - devs would've
>> learned
>> > quickly how that worked; since it doesn't and since getElementById is
>> > faster, jQuery must take on the additional code burden, via cover API,
>> in
>> > order to make a reasonably usable DOM querying interface. jQuery says
>> > "you're welcome".
>> >
>> >
>> >
>> >>
>> >> and performance (this
>> >> example illustrates both - since jQuery is doing the simpler thing in
>> >> all cases, it is actually able to be faster (though technically not
>> >> correct)
>> >
>> >
>> >
>> > I'd argue that qSA, in its own contradictory specification, is "not
>> > correct".
>>
>> It has been argued in the past - I'm taking no position here, just
>> noting.  For posterity (not you specifically, but for the benefit of
>> those who don't follow so closely), the HTML link also references DOM
>> Core, which has stated for some time that getElementById should return
>> the _first_  element with that ID in the document (implying that there
>> could be more than one) [a] and despite whatever CSS has said since
>> day one (ids are unique in a doc) [b] a quick check in your favorite
>> browser will show that CSS doesn't care, it will style all IDs that
>> match.  So basically - qSA matches CSS, which does kind of make sense
>> to me... I'd love to see it "corrected" in CSS too (first element with
>> that ID if there are more than one) but it has been argued that a lot
>> of stuff (more than we'd like to admit) would break.
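[Editor's note: a minimal sketch of the divergence described above,
simulating the two lookup semantics over a plain array rather than a real
DOM; the element shape and function names here are illustrative, not any
actual browser or jQuery API.]

```javascript
// Two elements share id="dup", as in the malformed-markup case discussed.
const elements = [
  { id: "dup", text: "first" },
  { id: "other", text: "middle" },
  { id: "dup", text: "second" },
];

// getElementById semantics per DOM Core: the *first* element with that ID.
function getElementById(id) {
  return elements.find((el) => el.id === id) || null;
}

// querySelectorAll("#id") semantics (matching CSS): *every* element with
// that ID, duplicates included.
function queryAllById(id) {
  return elements.filter((el) => el.id === id);
}

console.log(getElementById("dup").text); // "first"
console.log(queryAllById("dup").length); // 2
```

So a library that routes `#foo` selectors through getElementById sees one
element where qSA sees two - the behavioral gap being debated here.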
>>
>> >> in some very difficult ones. Previously, this was something
>> >> that the browser APIs just didn't offer at all -- now they offer them,
>> >> but jQuery has mitigation to do in order to use them effectively since
>> >> they do not have parity.
>> >
>> >
>> > Yes, we're trying to reduce the amount of mitigation that is required of
>> > libraries to implement reasonable apis. This is a multi-view discussion:
>> > short and long term.
>> >
>>
>> So can someone name specific items?   Would qSA / find been pretty
>> high on that list?  Is it "better" for jQuery (specifically) that we
>> have them in their current state or worse?  Just curious.
>>
>
> TBH, the current state can't get any worse, though I'm sure it will.
> Assuming you're referring to this:
> http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1454.html
>
> ... Yes, APIs like this would be improvements, especially considering the
> pace of implementation in modern browsers - hypothetically, this could be
> in wide implementation in less than a year; by then development of a sort
> of "jQuery 2.0" could happen -- same API, but perhaps modern browser only??
> This is hypothetical of course.
>
>
>
> Rick
>
>
>
>>
>> [a] -
>> http://dvcs.w3.org/hg/domcore/raw-file/tip/Overview.html#dom-document-getelementbyid
>> [b] - http://www.w3.org/TR/CSS1/#id-as-selector
>
>
>
>
>
>> >
>> > Rick
>> >
>> >
>> > [1] http://www.whatwg.org/specs/web-apps/current-work/#the-id-attribute
>> >
>> >
>> >>
>> >>
>> >> On Thu, May 17, 2012 at 2:16 PM, Yehuda Katz <wycats@gmail.com> wrote:
>> >> >
>> >> > Yehuda Katz
>> >> > (ph) 718.877.1325
>> >> >
>> >> >
>> >> > On Thu, May 17, 2012 at 10:37 AM, John J Barton
>> >> > <johnjbarton@johnjbarton.com> wrote:
>> >> >>
>> >> >> On Thu, May 17, 2012 at 10:10 AM, Tab Atkins Jr. <
>> jackalmage@gmail.com>
>> >> >> wrote:
>> >> >> > On Thu, May 17, 2012 at 9:56 AM, John J Barton
>> >> >> > <johnjbarton@johnjbarton.com> wrote:
>> >> >> >> On Thu, May 17, 2012 at 9:29 AM, Rick Waldron
>> >> >> >> <waldron.rick@gmail.com>
>> >> >> >> wrote:
>> >> >> >>> Consider the cowpath metaphor - web developers have made
>> highways
>> >> >> >>> out
>> >> >> >>> of
>> >> >> >>> sticks, grass and mud - what we need is someone to pour the
>> >> >> >>> concrete.
>> >> >> >>
>> >> >> >> I'm confused. Is the goal shorter load times (Yehuda) or better
>> >> >> >> developer ergonomics (Waldron)?
>> >> >> >>
>> >> >> >> Of course *some* choices may do both. Some may not.
>> >> >> >
>> >> >> > Libraries generally do three things: (1) patch over browser
>> >> >> > inconsistencies, (2) fix bad ergonomics in APIs, and (3) add new
>> >> >> > features*.
>> >> >> >
>> >> >> > #1 is just background noise; we can't do anything except write
>> good
>> >> >> > specs, patch our browsers, and migrate users.
>> >> >> >
>> >> >> > #3 is the normal mode of operations here.  I'm sure there are
>> plenty
>> >> >> > of features currently done purely in libraries that would benefit
>> >> >> > from
>> >> >> > being proposed here, like Promises, but I don't think we need to
>> push
>> >> >> > too hard on this case.  It'll open itself up on its own, more or
>> >> >> > less.
>> >> >> >  Still, something to pay attention to.
>> >> >> >
>> >> >> > #2 is the kicker, and I believe what Yehuda is mostly talking
>> about.
>> >> >> > There's a *lot* of code in libraries which offers no new features,
>> >> >> > only a vastly more convenient syntax for existing features.  This
>> is
>> >> >> > a
>> >> >> > large part of the reason why jQuery got so popular.  Fixing this
>> both
>> >> >> > makes the web easier to program for and reduces library weight.
>> >> >>
>> >> >> Yes! Fixing ergonomics of APIs has dramatically improved web
>> >> >> programming.  I'm convinced that concrete proposals vetted by major
>> >> >> library developers would be welcomed and have good traction. (Even
>> >> >> better would be a common shim library demonstrating the impact).
>> >> >>
>> >> >> Measuring these changes by the numbers of bytes removed from
>> downloads
>> >> >> seems 'nice to have' but should not be the goal IMO.
>> >> >
>> >> >
>> >> > We can use "bytes removed from downloads" as a proxy of developer
>> >> > ergonomics
>> >> > because it means that useful, ergonomics-enhancing features from
>> >> > libraries
>> >> > are now in the platform.
>> >> >
>> >> > Further, shrinking the size of libraries provides more headroom for
>> >> > higher
>> >> > level abstractions on resource-constrained devices, instead of
>> wasting
>> >> > the
>> >> > first 35k of downloading and executing on relatively low-level
>> >> > primitives
>> >> > provided by jQuery because the primitives provided by the platform
>> >> > itself
>> >> > are unwieldy.
>> >> >
>> >> >>
>> >> >>
>> >> >> jjb
>> >> >>
>> >> >> >
>> >> >> > * Yes, #3 is basically a subset of #2 since libraries aren't
>> >> >> > rewriting
>> >> >> > the JS engine, but there's a line you can draw between "here's an
>> >> >> > existing feature, but with better syntax" and "here's a
>> fundamentally
>> >> >> > new idea, which you could do before but only with extreme
>> >> >> > contortions".
>> >> >> >
>> >> >> > ~TJ
>> >> >
>> >> >
>> >
>> >
>>
>
>

Received on Friday, 18 May 2012 14:49:54 UTC