- From: makxdekkers via GitHub <sysbot+gh@w3.org>
- Date: Thu, 30 Aug 2018 21:48:56 +0000
- To: public-dxwg-wg@w3.org
@riccardoAlbertoni The issue of granularity/scale (whether the size is expressed in bytes, kilobytes, megabytes, etc.) is really a case of trying to be helpful to people at the expense of efficiency of the data. Creating a complex mechanism with an additional class to reduce the number of digits, e.g. from "1000000000000" (bytes) to "1" (terabyte), will actually increase the number of bytes on the wire: `dcat:byteSize "1000000000000"` is shorter than (inventing some properties) `dcat:scaledSize [dcat:scale "TB" ; dcat:number "1"]`. (Both alternatives are expanded into full Turtle below.)

The other issue is a potential requirement to express different _types_ of sizes, e.g. the number of observations, the number of rows in a spreadsheet, the number of articles in a legal text, etc. If there is a small number of such types, the VoID approach makes sense. If there are many types, a structured approach would be better, which is what Data Cube does with `sdmx-attribute:unitMeasure`.

To my mind, in DCAT we are just talking about byte size, so I don't see the need for a more complex approach.

--
GitHub Notification of comment by makxdekkers
Please view or discuss this issue at https://github.com/w3c/dxwg/issues/313#issuecomment-417478102 using your GitHub account
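For concreteness, here is a minimal Turtle sketch of the two alternatives compared above. Note that `dcat:scaledSize`, `dcat:scale`, and `dcat:number` are invented for illustration only (they are not part of DCAT), and the example byte count is arbitrary:

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/> .

# Option 1: the existing DCAT property, a single triple with one literal.
ex:distributionA dcat:byteSize "1000000000000"^^xsd:decimal .

# Option 2: the hypothetical structured alternative (invented properties,
# shown only to compare verbosity). It needs a blank node and two extra
# triples to say the same thing.
ex:distributionB dcat:scaledSize [
    dcat:scale  "TB" ;
    dcat:number 1
] .
```

Serialized, Option 2 carries more characters and more triples than Option 1, which is the point being made about efficiency on the wire.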
Received on Thursday, 30 August 2018 21:48:58 UTC