- From: Holger Knublauch <holger@topquadrant.com>
- Date: Fri, 21 Nov 2014 09:38:07 +1000
- To: public-data-shapes-wg@w3.org
Hi Eric,

I have a question on the User Story S33 that you added recently:

https://www.w3.org/2014/data-shapes/wiki/User_Stories#S33:_Normalizing_data_patterns_for_simple_query

You describe the requirement to normalize data - I guess automatically, to drop extra duplicate entries? Could you clarify how this would work in practice: is your assumption that if there are two identical blank nodes (as in your example), the system could delete one of them? What about cases where the two blank nodes have slight differences - would those also be covered, and how? Is this about automatically fixing constraint violations?

Thanks for the clarification,
Holger
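For concreteness, here is a minimal sketch (not from the original mail, and using plain Python tuples rather than any RDF library or SHACL machinery) of the kind of normalization the story seems to describe: two blank nodes count as duplicates when their sets of outgoing (predicate, object) pairs are identical, and one is merged into the other. All node labels and predicates below are hypothetical.

```python
# Sketch: deduplicate "identical" blank nodes in an RDF-like triple set.
# Triples are (subject, predicate, object) string tuples; blank nodes are
# labels starting with "_:". Two blank nodes are considered duplicates when
# their outgoing (predicate, object) sets are equal.

def normalize(triples):
    # Collect the outgoing (predicate, object) pairs of each blank node.
    outgoing = {}
    for s, p, o in triples:
        if s.startswith("_:"):
            outgoing.setdefault(s, set()).add((p, o))

    # Map every duplicate blank node to one canonical representative:
    # the first blank node seen with the same signature.
    canonical = {}
    seen = {}  # frozenset of (predicate, object) pairs -> representative
    for bnode, pairs in outgoing.items():
        sig = frozenset(pairs)
        canonical[bnode] = seen.setdefault(sig, bnode)

    # Rewrite triples through the mapping, dropping exact duplicates.
    result = []
    for s, p, o in triples:
        rewritten = (canonical.get(s, s), p, canonical.get(o, o))
        if rewritten not in result:
            result.append(rewritten)
    return result


# Example: two structurally identical phone blank nodes collapse to one.
triples = [
    ("ex:alice", "ex:phone", "_:b1"),
    ("_:b1", "ex:number", "555-1234"),
    ("ex:alice", "ex:phone", "_:b2"),
    ("_:b2", "ex:number", "555-1234"),
]
print(normalize(triples))
```

Note that this sketch only merges blank nodes whose outgoing pairs match exactly; blank nodes with slight differences, or with nested blank-node values, would not be merged, which is exactly the boundary the questions above ask about.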
Received on Thursday, 20 November 2014 23:38:42 UTC