W3C home > Mailing lists > Public > public-script-coord@w3.org > July to September 2013

Re: Maybe we should think about Interface.isInterface functions again

From: Allen Wirfs-Brock <allen@wirfs-brock.com>
Date: Fri, 2 Aug 2013 14:33:23 -0700
Cc: Boris Zbarsky <bzbarsky@mit.edu>, Domenic Denicola <domenic@domenicdenicola.com>, Travis Leithead <travis.leithead@microsoft.com>, "public-script-coord@w3.org" <public-script-coord@w3.org>
Message-Id: <2A23F803-4797-47CF-8951-965ECB1F7980@wirfs-brock.com>
To: Jonas Sicking <jonas@sicking.cc>

On Aug 1, 2013, at 11:18 PM, Jonas Sicking wrote:

>> ...
> I don't really think this approach scales. Another example of where
> this occurs is when handling binary data. Right now browsers provide
> two ways of doing so: ArrayBuffers and Blobs. Blobs are more
> appropriate when handling large pieces of data since the data can be
> stored on disk, ArrayBuffer is better when handling small pieces of
> data since the synchronous reading makes it faster and easier to
> use.
> So I could see code that wants to support both types of data. For
> example the WebSocket interface supports both sending ArrayBuffers or
> Blobs. So I could imagine other libraries wanting to implement APIs
> that do the same.
> Does this mean that we should add ArrayBuffer.isBlob?

No. Read on about the second approach.

You are saying that both ArrayBuffer and Blob can be used as data sources or sinks and that most clients should be able to work interchangeably with either one.  This suggests that there should be DataSource and DataSink interfaces that both ArrayBuffer and Blob support, either by directly exposing the methods of those interfaces or by exposing methods for accessing derived DataSource or DataSink objects.

Or, looking at it another way: data that is stored "on disk" can't really be manipulated until it is "in core", and for the Web Platform ArrayBuffer is the preferred abstraction over arbitrary "in core" binary data.  In that case, websocket I/O with a Blob is really just websocket I/O with multiple ArrayBuffers.  You might model this (for output purposes, at least) with a DataSource interface that produces a sequence of ArrayBuffers.  A DataSource over an actual ArrayBuffer would always produce a single-buffer sequence, while a DataSource over a Blob could produce a sequence of multiple buffers.
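A minimal sketch of that idea (all names here are illustrative, not from any spec): both an ArrayBuffer and a Blob are adapted behind a common "DataSource" shape whose only job is to yield a sequence of ArrayBuffers. The Blob adapter relies only on `size`, `slice()`, and `arrayBuffer()`, which the real Blob interface does provide.

```javascript
// Wraps a single ArrayBuffer: the sequence always contains exactly one buffer.
function arrayBufferSource(buffer) {
  return {
    async *buffers() {
      yield buffer;
    }
  };
}

// Wraps a Blob-like object: the sequence may contain many buffers, read in
// fixed-size chunks via slice()/arrayBuffer().
function blobSource(blob, chunkSize = 64 * 1024) {
  return {
    async *buffers() {
      for (let offset = 0; offset < blob.size; offset += chunkSize) {
        yield await blob.slice(offset, offset + chunkSize).arrayBuffer();
      }
    }
  };
}

// A client such as a websocket sender can now treat both uniformly,
// without ever asking "is this a Blob or an ArrayBuffer?".
async function byteLength(source) {
  let total = 0;
  for await (const buf of source.buffers()) total += buf.byteLength;
  return total;
}
```

The point of the sketch is that the consumer (`byteLength`, or a websocket sender) is written once against the sequence-of-buffers contract; only the two small adapters know about the concrete types.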

>> //second approach
> ...
> This approach would partially solve iteration. It would let you
> iterate over all files and directories in the filesystem. But if you
> wanted to actually do something with, other than simply checking their
> names, you likely would want to know which ones are files and which
> ones aren't. So if you wanted to find all gif files in the filesystem,
> by checking for the gif file signature in the beginning of the file
> contents, you would still want to check which entries are files and
> which are directories.

You typically don't want to build knowledge of specific file formats into a filesystem.  That's why modern file systems typically allow metadata to be associated with individual filesystem entities. Because such metadata is open-ended in nature, it wouldn't be exposed as method/property names at this level of abstraction but instead as data values.
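Concretely, the difference might look like this (names are hypothetical): instead of per-attribute methods such as `isFile()` or `isGif()`, each item carries an open-ended metadata map, so new kinds of metadata can appear without the item interface ever changing.

```javascript
// Open-ended metadata as data values rather than fixed method names.
function makeItem(name, metadata) {
  const meta = new Map(Object.entries(metadata));
  return {
    name,
    // Generic queries replace hard-coded predicates like isFile()/isGif().
    getMeta: key => meta.get(key),
    hasMeta: key => meta.has(key)
  };
}

const photo = makeItem("cat.gif", { kind: "file", mediaType: "image/gif" });
const pics  = makeItem("pictures", { kind: "directory" });
```

A client looking for GIFs queries `getMeta("mediaType")` rather than requiring the filesystem API to grow a format-specific method.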

> With the current FileReader API you would have to try to read them all
> and handle the error that is produced when trying to read from a
> Directory.

Or, you might define Directories such that when read they act as zero-length files...

> I have made a proposal [1] for an improved API for reading from files.
> Using that API it would be possible to check for the existence of
> dirItem.readBinary and if that's there call that function with the
> assumption that it is a file rather than a directory. So I agree that
> with very careful design of APIs then we could solve this use case.
> I'm far from convinced that we'll be able to design all APIs that will
> though. Nor that all existing APIs are designed as well.
> It also requires expanding the definition of a File object. Currently
> a File object is used in many places outside of filesystems and is
> designed to be independent of it. So in all cases where File objects
> are used outside of filesystems the .itemsDo function makes no sense.

That's one reason I used the term "items" rather than "files".  Arguably, a FileDirectory API is more about managing and organizing the items in the directory hierarchy than about doing application-specific things with those items.  You might have a design with one set of objects for observing/organizing/managing "items" in a directory structure, but when it comes time to access a specific "file item" you get back a different kind of object that only exposes the interfaces needed to access the data stored within the item.

> Also, what does this mean that we should do for a Directory.get(name)
> function. Currently it is drafted to (asynchronously) return a File
> object or a Directory object. Should we replace that with separate
> Directory.getDirectory(name) and Directory.getFile(name) which produce
> an error if you use the "wrong" one.

If the interfaces for the two types of items are substantially different and unlikely to be extended, then I would use two different method names.  But how often do you actually even do this style of iterate/dispatch?  The alternative is to have a single Directory.get that is defined to return a DirectoryItem exposing the behavior that is common to both, from the perspective of a Directory client. That probably includes a metadata facility that can be queried to get more information about the item. Then, as I described above, you could convert an item object to more appropriate abstractions for actual I/O and processing.
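A sketch of that single-`get` alternative (again, all names are illustrative, not drawn from any draft spec): every item answers generic metadata queries, and only file items convert to a data-access object, so ordinary directory clients never branch on type.

```javascript
// A directory maps a name either to string contents (a file) or to a
// nested object (a subdirectory).
function makeDirectory(entries) {
  return {
    get(name) {
      const value = entries[name];
      if (value === undefined) return null;
      const isFile = typeof value === "string";
      return {
        name,
        // Open-ended metadata query, common to files and directories.
        getMeta: key =>
          key === "kind" ? (isFile ? "file" : "directory") : undefined,
        // Conversion step: file items yield a reader; directory items don't.
        asDataSource: () =>
          isFile ? { readText: async () => value } : null
      };
    }
  };
}

const root = makeDirectory({ "notes.txt": "hello", pictures: {} });
```

Clients that only organize the hierarchy use `name` and `getMeta`; only a client that actually needs the bytes calls `asDataSource()`, and the null result is the natural "this item has no data" answer rather than a thrown type error.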

> So I guess my general meta-point here is that I think type detection
> is needed. The fact that "instanceof" does see a fair amount of usage
> seems to indicate that authors feel the need for it.

And my point is that significant use of type detection is a smell that suggests an insufficiently abstracted design.  The fact that "instanceof" gets a fair amount of usage indicates that there are a lot of not very good designs around.
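The smell is easy to see in miniature (the shape types below are invented for illustration): the same operation written first in the type-detection style, then with the dispatch folded into the abstraction so clients never branch on type.

```javascript
class Circle { constructor(r) { this.r = r; } }
class Square { constructor(s) { this.s = s; } }

// Type-detection style: every client repeats this dispatch, and adding a
// new shape means finding and editing every such branch.
function areaByCheck(shape) {
  if (shape instanceof Circle) return Math.PI * shape.r ** 2;
  if (shape instanceof Square) return shape.s ** 2;
  throw new TypeError("unknown shape");
}

// Abstracted style: each type answers for itself; clients just call area(),
// and a new shape only has to implement the method.
Circle.prototype.area = function () { return Math.PI * this.r ** 2; };
Square.prototype.area = function () { return this.s ** 2; };
```

When client code is full of the first style, that is usually a sign the common interface in the second style was never designed.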

There is a point of tension here.  You will hear a lot of agile development methodologists telling people not to waste time on excessive design or abstraction mining, and they're right when they are talking to developers of small to moderately sized applications that will have short to medium lifespans. In those cases the extra effort required to create a good design often just isn't justified.  But if you are building a platform or framework that is going to be used by hundreds of millions of developers for decades to come, different economic considerations apply.  Such systems deserve our best possible efforts to create great designs.

Designing good software abstractions isn't easy, and even with significant effort the results are never perfect.  The web deserves our best effort.


> [1] http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0727.html
> / Jonas
Received on Friday, 2 August 2013 21:33:56 UTC
