Re: [Specifications] Retracting operations (#246)

> @inf3rno
> 
> I think you understand the problem correctly.
> 
> The thing is, moving toward the former solution would break everything that currently exists and is based on Hydra - it was never supposed to work that way. That's why the latter solution is somewhat closer to my heart. There is a third solution - do nothing, let the client fail when invoking an operation and discover the issue at runtime (as it is now).
> 
> As for the statement that _Webpages just don't serve broken hyperlinks_ - why am I seeing dozens of broken links leading to 404s? Also, a bookmarked link may become obsolete over time - a smart client could discover that with proper tools.
> 
> I can also imagine a situation where a resource is of several types, causing some operations to be inoperable due to some logic. I know it's a long shot and multiple resource types are not easy to work with, but this is the RDF world after all and there is nothing holding you back in this - retracting operations may be the only solution for that scenario.

Okay, my wording wasn't the best there. What I meant is that serving broken links is normally not the intended behavior.

We could model this accurately with very small types, each representing a fragment of the resource state. Each of these types could have its own operations, and the resource state would be the composition of these types (see the sketch below). This works in the RDF world, but the abstractions we create are usually a lot vaguer than that, and making them accurate takes a lot of extra work and thinking, so I guess this kind of approach does not work for most developers.
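A minimal sketch of what I mean, assuming a Hydra `ApiDocumentation` and a hypothetical `vocab:` namespace (the order example and all `vocab:` identifiers are illustrative, not from the spec). Each small type carries the operations that apply to one fragment of the state:

```json
{
  "@context": [
    "http://www.w3.org/ns/hydra/context.jsonld",
    { "vocab": "/api/vocab#" }
  ],
  "@id": "/api/doc",
  "@type": "ApiDocumentation",
  "supportedClass": [
    {
      "@id": "vocab:PayableOrder",
      "supportedOperation": [
        { "@type": "Operation", "method": "POST", "title": "Pay the order" }
      ]
    },
    {
      "@id": "vocab:CancellableOrder",
      "supportedOperation": [
        { "@type": "Operation", "method": "DELETE", "title": "Cancel the order" }
      ]
    }
  ]
}
```

The resource then composes whichever fragments currently apply:

```json
{
  "@context": [
    "http://www.w3.org/ns/hydra/context.jsonld",
    { "vocab": "/api/vocab#" }
  ],
  "@id": "/orders/1",
  "@type": ["vocab:Order", "vocab:PayableOrder", "vocab:CancellableOrder"]
}
```

Once the order is paid, the server simply stops asserting `vocab:PayableOrder`, so the pay operation disappears together with the type and nothing has to be retracted explicitly.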

The SOA approach was to publish a WSDL documentation and let the client fail in such situations. For REST I don't think that is acceptable: the representation should contain the actual state of the resource, or something close to it, including this kind of detail.

Obsolete links and bookmarks should not be an issue; they are a sort of cache after all, and when they break, the client can refresh that cache with the same process it used to find the links in the first place. So there are two kinds of broken links here, with two different causes, and it is better not to mix them. As far as I know, the SOA kind can lead to far more failed calls than the cache kind, and every failed call wastes resources from the server's perspective.

Another thing: if you want to avoid SOA-style failed calls, you must copy the logic that leads to the failure into the client - for example the access-control check that only admins may use a certain link. That is exactly what we normally try to avoid, because it strongly couples the client to the access-control implementation, which is against REST principles. A link cache, on the other hand, is completely in line with REST principles (see the sketch below).
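To make that concrete, a hedged sketch (the `/users/42` resource and the `vocab:User` type are hypothetical): the server includes the inline `operation` only when the current user is actually allowed to invoke it, so a generic client checks for its presence instead of reimplementing the admin check:

```json
{
  "@context": [
    "http://www.w3.org/ns/hydra/context.jsonld",
    { "vocab": "/api/vocab#" }
  ],
  "@id": "/users/42",
  "@type": "vocab:User",
  "operation": [
    { "@type": "Operation", "method": "DELETE", "title": "Remove user" }
  ]
}
```

A non-admin would receive the same representation without the `operation` entry, so the client never attempts the call and no access-control logic leaks to the client side. And if a stale bookmark or cached link still points at something removed, the failed call itself is the signal to refetch the representation and rebuild the link cache.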

-- 
GitHub Notification of comment by inf3rno
Please view or discuss this issue at https://github.com/HydraCG/Specifications/pull/246#issuecomment-1373036606 using your GitHub account


-- 
Sent via github-notify-ml as configured in https://github.com/w3c/github-notify-ml-config

Received on Friday, 6 January 2023 01:56:35 UTC