Question on User Story S33: Normalizing data

Hi Eric,

I have a question about User Story S33 that you added recently:

You describe a requirement to normalize data - I assume this means 
automatically dropping redundant duplicate entries? Could you clarify 
how this would work in practice: is your assumption that if there are 
two identical blank nodes (as in your example), the system could 
delete one of them? What about cases where the two blank nodes have 
slight differences - would those also be covered, and how? Is this 
about automatically fixing constraint violations?
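To make the first case concrete, here is a minimal sketch of the kind of normalization I understand S33 to describe: if two blank nodes carry exactly the same property/value pairs, collapse them into one. The triple representation, the `_:` prefix convention, and the sample data are purely illustrative assumptions on my part, not anything from the story itself.

```python
def normalize(triples):
    """Merge blank nodes (ids starting with '_:') whose outgoing
    (predicate, object) sets are identical, keeping one representative,
    and drop the resulting exact-duplicate triples."""
    # Collect each blank node's outgoing property/value pairs.
    signatures = {}
    for s, p, o in triples:
        if s.startswith("_:"):
            signatures.setdefault(s, set()).add((p, o))
    # Map every blank node to the first node seen with the same signature.
    rep, seen = {}, {}
    for node, sig in signatures.items():
        rep[node] = seen.setdefault(frozenset(sig), node)
    # Rewrite subjects and objects through the mapping; keep unique triples.
    out = []
    for s, p, o in triples:
        t = (rep.get(s, s), p, rep.get(o, o))
        if t not in out:
            out.append(t)
    return out

triples = [
    ("ex:alice", "ex:address", "_:a1"),
    ("_:a1", "ex:city", "Berlin"),
    ("ex:alice", "ex:address", "_:a2"),  # _:a2 duplicates _:a1
    ("_:a2", "ex:city", "Berlin"),
]
```

On this reading, `normalize(triples)` would collapse `_:a2` into `_:a1`, leaving two triples instead of four. The "slight differences" case is exactly where this sketch breaks down, since the signatures would no longer match - hence my question.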

Thanks for clarifying

Received on Thursday, 20 November 2014 23:38:42 UTC