- From: Glenn Maynard <glenn@zewt.org>
- Date: Sat, 31 Mar 2012 20:13:42 -0500
On Wed, Mar 28, 2012 at 1:44 AM, Jonas Sicking <jonas at sicking.cc> wrote:

> Scanning over the buffer twice will cause a lot more memory IO and
> will definitely be slower.

That's what cache is for. But: benchmarks...

> We can argue whether it's meaningfully slower or harder. But it seems
> like we agree that it's slower and harder.

What? Are you really arguing that we should do something because of
*meaningless* differences?

> I still don't understand what the benefit you are seeing is. You
> hinted at some "more generic" argument, but I still don't understand
> it. So far the only reason that has been brought up is that it
> provides an API for simply finding null terminators, which could be
> useful if you are doing things other than decoding. Is that what you
> are talking about when you are saying that it's "more generic"?

Yes, I've said that repeatedly. It also avoids bloating the API with
something that's merely a helper for something you can do in a couple of
lines of code, and it allows you to tell how many bytes/words were consumed
(e.g. for packed string arrays).

It can always be added later, but it feels unnecessary.

-- Glenn Maynard
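(Editorial note: as an illustration of the "couple of lines of code" point above, here is a minimal sketch, assuming a TextDecoder-style decode() that accepts an ArrayBuffer view. The readCString helper and its names are hypothetical and not part of any proposed API; they only show the find-terminator-then-decode pattern and how it reports bytes consumed for packed string arrays.)

```typescript
// Hypothetical helper: decode a NUL-terminated UTF-8 string starting at
// `offset` in `bytes`, and report how many bytes were consumed.
function readCString(
  bytes: Uint8Array,
  offset = 0
): { value: string; bytesConsumed: number } {
  const nul = bytes.indexOf(0, offset);         // first pass: find the terminator
  const end = nul === -1 ? bytes.length : nul;  // no terminator: take the rest
  const value = new TextDecoder("utf-8").decode(bytes.subarray(offset, end)); // second pass: decode
  return { value, bytesConsumed: end - offset + (nul === -1 ? 0 : 1) };
}

// Usage: walking a packed array of NUL-terminated strings ("foo\0bar\0").
const packed = new Uint8Array([0x66, 0x6f, 0x6f, 0x00, 0x62, 0x61, 0x72, 0x00]);
const strings: string[] = [];
let pos = 0;
while (pos < packed.length) {
  const { value, bytesConsumed } = readCString(packed, pos);
  strings.push(value);
  pos += bytesConsumed;
}
// strings is now ["foo", "bar"]
```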
Received on Saturday, 31 March 2012 18:13:42 UTC