- From: Sylvain Pasche <sylvain.pasche@gmail.com>
- Date: Tue, 28 Jul 2009 17:18:12 +0200
On Tue, Jul 28, 2009 at 5:52 AM, Jonas Sicking <jonas at sicking.cc> wrote:
>> By the way, preserving duplicates shouldn't be much code complexity if
>> I'm not mistaken.
>
> I take it you mean *removing* duplicates here, right?

Oops, yes.

>> The only required code change would be to use a hashset when parsing
>> the attribute in order to only insert unique tokens in the token
>> vector. Then DOMTokenList.length would return the token vector length
>> and .item() would get the token by index. I don't think anything actually
>> depends on keeping duplicate tokens in the token vector. There
>> would be a small perf hit when parsing attributes with more than one
>> token.
>
> It's certainly doable to do this at the time when the token-list is
> parsed. However, given how extremely rare duplicated classnames are (I
> can't recall ever seeing one in a real page), I think any code spent on
> dealing with it is a waste.

Agreed.

>> The remove() algorithm is about 50 lines with whitespace and comments.
>> After all, that's not a big cost, and I guess that preserving
>> whitespace may be closer to what DOMTokenList API consumers would
>> expect.
>
> The code would be 7 lines if we didn't need to preserve whitespace:
>
> nsAttrValue newAttr(aAttr);
> newAttr->ResetMiscAtomOrString();
> nsCOMPtr<nsIAtom> atom = do_GetAtom(aToken);
> while (newAttr->GetAtomArrayValue().RemoveElement(atom));
> nsAutoString newValue;
> newAttr.ToString(newValue);
> mElement->SetAttr(...);
>
> If you spent a few more lines of code you could even avoid serializing
> the token-list and call SetAttrAndNotify instead of SetAttr.

That's an interesting comparison. It's less code and much more readable than my remove() implementation, I have to say.

Sylvain
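For reference, a minimal sketch of the hashset-at-parse-time idea discussed above, written with standard C++ containers rather than Gecko's nsAttrValue/nsIAtom machinery; the function name, the use of std::string, and the whitespace splitting are illustrative assumptions, not code from the actual patch:

    #include <sstream>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // Parse a space-separated token attribute (e.g. a class attribute),
    // keeping only the first occurrence of each token, in document order.
    // DOMTokenList.length would then map to tokens.size() and item(i) to a
    // plain index into the vector.
    std::vector<std::string> ParseUniqueTokens(const std::string& aValue) {
      std::vector<std::string> tokens;
      std::unordered_set<std::string> seen;  // hash set used only for dedup
      std::istringstream stream(aValue);
      std::string token;
      while (stream >> token) {
        if (seen.insert(token).second) {     // insert() is true for unseen tokens
          tokens.push_back(token);
        }
      }
      return tokens;
    }

The hash set is only consulted while parsing, which is where the small per-attribute cost mentioned above would show up; the token vector itself stays duplicate-free afterwards.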
Received on Tuesday, 28 July 2009 08:18:12 UTC