- From: Joshua Cornejo <josh@marketdata.md>
- Date: Mon, 09 Jun 2025 14:25:00 +0100
- To: Nicoletta Fornara <nicoletta.fornara@usi.ch>
- CC: "public-odrl@w3.org Group" <public-odrl@w3.org>
- Message-ID: <C04573D4-09E4-4F1D-802B-B64FC6AEC7FA@marketdata.md>
Hello,

I was obviously “cold” on my memory cache about the topic during the call, but here is the definition that matches my understanding (from Wikipedia):

“The closed-world assumption (CWA), in a formal system of logic used for knowledge representation, is the presumption that a statement that is true is also known to be true. Therefore, conversely, what is not currently known to be true, is false. The same name also refers to a logical formalization of this assumption by Raymond Reiter.[1] The opposite of the closed-world assumption is the open-world assumption (OWA), stating that lack of knowledge does not imply falsity.”

The summary for OWA/CWA from ChatGPT also matches my points:

Use CWA when:
- You control the data and know it's complete.
- False positives are worse than false negatives.
- You're designing systems like banking rules, logistics, etc.

Use OWA when:
- You're dealing with incomplete or distributed data.
- You need to reason under uncertainty (e.g., AI, law).
- You're building ontologies, knowledge graphs, or inference engines.

My thinking when raising the issue is that, *at evaluation*, we are not dealing with any of the 3 cases that would justify an OWA (we have a state of the world, which contains all the necessary information, and we are not building a KG), while the 3 CWA points do apply: we control the data, false positives are worse than false negatives, and rights management is exactly the kind of system that banking rules and logistics are.

Regards,

___________________________________
Joshua Cornejo
marketdata
smart authorisation management for the AI-era
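As a minimal sketch of the distinction at evaluation time (the facts and fact names below are hypothetical, not ODRL vocabulary): under CWA, a fact absent from the state of the world evaluates to false; under OWA, it evaluates to unknown.

```python
# Hypothetical illustration of CWA vs. OWA at evaluation time.
# Not an ODRL implementation; fact strings are made up for the example.

from typing import Optional

# The "state of the world" available at evaluation: assumed complete under CWA.
world_state = {
    "assignee:party42 memberOf group:subscribers": True,
    "asset:feed1 classifiedAs confidential": False,
}

def cwa_eval(fact: str) -> bool:
    """Closed world: a fact not present in the state is simply false."""
    return world_state.get(fact, False)

def owa_eval(fact: str) -> Optional[bool]:
    """Open world: a fact not present in the state is unknown (None)."""
    return world_state.get(fact)  # None when the fact is absent

fact = "assignee:party42 memberOf group:premium"
print(cwa_eval(fact))  # False -> the evaluator can deny outright
print(owa_eval(fact))  # None  -> the evaluator cannot decide either way
```

Under CWA the evaluator always reaches a grant/deny decision from the state it holds; under OWA it must also handle a third "unknown" outcome, which is the extra machinery I don't think evaluation needs here.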
Received on Monday, 9 June 2025 13:25:11 UTC