
Re: [whatwg] Questions about the Fetch API

From: Domenic Denicola <domenic@domenicdenicola.com>
Date: Thu, 17 Jul 2014 19:34:28 +0000
To: William Chan (Dz) <willchan@chromium.org>
Message-ID: <1405625670688.53502@domenicdenicola.com>
Cc: "whatwg@lists.whatwg.org" <whatwg@lists.whatwg.org>, "Tab Atkins Jr." <jackalmage@gmail.com>, Juan Ignacio Dopazo <jdopazo@yahoo-inc.com>

Will and I hashed this out offline. Our tentative conclusion for streams is captured in https://github.com/whatwg/streams/issues/146.

In short, the issue he brings up is a potential issue for not just the fetch body stream, but for any writable stream. As such it needs to be addressed generically there, so that writable streams can have the "pull" behavior he describes, in the case where they want or need it.

With that taken care of, I still think it would be ideal for the (client) RequestBodyStream to be writable, not readable: it is something you write to, and it allows much better code than the alternative (see the example below). So let's turn back to how that might work.

Talking with Jake on IRC, I realized one of the major goals of the current RequestBodyStream is to be able to do stuff like this pass-through "service worker proxy":

```js
self.onfetch = ev => {
  fetch(ev.request).then(res => ev.respondWith(res));
};
```

i.e. to use "server" requests, incoming to the service worker, as "client" requests, outgoing through fetch.

This feels conceptually wrong to me, because I maintain a very strong mental divide between the two types of requests. But it sure is convenient. In Node.js, which enforces such a separation, the equivalent code is

```js
var http = require('http');
var url = require('url');

http.createServer((serverReq, serverRes) => {
  var clientReqOptions = url.parse(serverReq.url);
  clientReqOptions.headers = serverReq.headers;

  var clientReq = http.request(clientReqOptions);
  clientReq.on('response', clientRes => {
    serverRes.writeHead(clientRes.statusCode, clientRes.headers);
    clientRes.pipe(serverRes);
  });
  serverReq.pipe(clientReq); // forward the incoming body and end the outgoing request
  // Error handling omitted, since Node doesn't use promises ohgawdthepain
});
```

which obviously kind of sucks. It could be made slightly better with fetch/service worker (see e.g. https://gist.github.com/domenic/1bbec0f341ae3cfb3a8f), but in general I think that is a bad path to go down, especially compared to the current simplicity.

One thing that might work is for FetchBodyStream to become a pass-through stream, i.e. { WritableStream input, ReadableStream output } where the chunks are just passed directly through:

- When used as a server request body, the user will read from its output side, whereas the UA will be responsible for writing data into its input side.
- When used as a client request body, the user can write to its input side, whereas the UA will be responsible for reading data from its output side.

The benefit of this is it would allow code like the following, which uses a (hypothetical) file stream to demonstrate uploading a few large files to a server separated by some bookends:

```js
var clientReq = fetch("http://example.com/files", { method: "POST" });
clientReq.body.input.write("FILE INCOMING!");
var file1 = openFileStream("filesystem://...");
file1.pipeTo(clientReq.body.input, { close: false });
file1.closed.then(() => {
  clientReq.body.input.write("DONE WITH THE FILE!! ONE MORE COMING!");

  var file2 = openFileStream("filesystem://...");
  file2.pipeTo(clientReq.body.input, { close: false });
  file2.closed.then(() => {
    clientReq.body.input.write("ALL DONE!");
  });
});
```

With an approach where fetch just accepts a readable stream, by contrast, you have to manually construct a readable stream whose contents are first pulled from a string, then from file1, then from a string, then from file2, then from a string.