- From: <bugzilla@jessica.w3.org>
- Date: Fri, 22 Jan 2016 07:14:54 +0000
- To: public-script-coord@w3.org
https://www.w3.org/Bugs/Public/show_bug.cgi?id=29388

Domenic Denicola <d@domenic.me> changed: CC: added d@domenic.me

--- Comment #2 from Domenic Denicola <d@domenic.me> ---

Opt-in makes a lot of sense to me. It seems most natural to opt in on a per-argument basis (see the Web IDL sketch at the end of this comment):

- ArrayBuffer -> (ArrayBuffer or SharedArrayBuffer)
- Int8Array -> [AllowShared] Int8Array, etc.
- ArrayBufferView -> [AllowShared] ArrayBufferView

There should then be some sort of requirement that specs which opt in to this define their processing models for the typed array/array buffer argument more precisely than they do currently. Specs are generally not very precise about when, or whether, they make copies, transfers, moves, etc. IDL tries to enforce more precision with:

> At the specification prose level, IDL buffer source types are simply references to objects. To inspect or manipulate the bytes inside the buffer, specification prose MUST first either get a reference to the bytes held by the buffer source or get a copy of the bytes held by the buffer source. With a reference to the buffer source’s bytes, specification prose can get or set individual byte values using that reference.

But for APIs that accept a SharedArrayBuffer I'd expect extreme precision, possibly with branching paths depending on whether the buffer is shared or not (e.g. "get a reference to the bytes held by the buffer source" for SAB and "get a copy of the bytes held by the buffer source" for AB).
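For illustration, here is a minimal Web IDL sketch of what the per-argument opt-in above could look like. The interface and operation names are hypothetical, invented only to show the three argument forms side by side:

    // Hypothetical interface; only the argument types matter here.
    interface BufferConsumer {
      // Union opt-in: the operation accepts either kind of buffer.
      void process((ArrayBuffer or SharedArrayBuffer) input);

      // Typed-array opt-in via the proposed [AllowShared] extended attribute.
      void fill([AllowShared] Int8Array view);

      // Same attribute on the general view type.
      void copyInto([AllowShared] ArrayBufferView destination);
    };

A spec using signatures like these would then, per the comment above, need to state explicitly for each operation whether it gets a reference to or a copy of the bytes, possibly with separate branches for the shared and non-shared cases.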
Received on Friday, 22 January 2016 07:14:58 UTC