Re: Regarding user interfaces and smart clients

On 15 March 2015 at 23:51, Tomasz Pluskiewicz <tomasz@t-code.pl> wrote:

>
> Hi Nathan
>
> That's an interesting read. I'll share some thoughts inline.
>
> > 2) To be purely HATEOAS/REST-compliant, I should be able to point my
> > client at any endpoint which offers data in a format I can support (in
> > my case, at least JSON-LD and Hydra) and have it be able to offer some
> > kind of user experience using only what is returned from that endpoint.
> > Any given request should include links so that a client can advertise
> > potential further actions to the user. The client should not make any
> > assumptions in advance about what will be returned, because those
> > assumptions will inevitably be wrong if anything about the endpoint
> > changes in the future.
>
> I'm not entirely with you about the assumptions the client should or
> shouldn't make. Of course that is true for 100% generic clients, but I think
> that below you mostly write about specialized clients. A specialized client
> would likely operate in a specific domain, and as such you can expect that
> certain data is displayed differently and maybe some properties are
> ignored. It could use a generic view for unexpected data.
>

I've been trying to get a sense of how I should be designing systems so
that they play nicely with the rest of the semantic web and don't violate
the principles behind HATEOAS and REST. To that end, it seems like an ideal
client should have generic capabilities but also be able to conform to
domain-specific requirements, which is probably why you're perceiving a
mismatch between my comments above and below.


> > There are no two ways about this. User interface is really, really
> > important, and I can't help but notice a lack of focus on this fact by
> > the semantic web community. Unless you're talking about completely
> > autonomous agents, which I believe is Ruben's area of interest, you
> > simply can't get away from thinking about the user interface, and how
> > you might offer a good user experience. UI/UX experts get paid because
> > they spend the time to consider the data and service in advance and
> > craft something that the user is going to love, and which is going to do
> > the best job possible in allowing the user to interact with a service
> > and its data. This usually involves careful analysis of what the service
> > provides, what is important, and the various workflows that the user is
> > expected to engage in.
>
> This is very important. From its inception the Semantic Web has mostly
> focused on autonomous or intelligent agents, while most of the web is
> designed on a per-app basis. And for good reason, as you point out. That may
> be the main reason why SW is largely ignored by the general public - it
> too often seems impractical and addresses issues that aren't paramount for
> most developers.
>

Agree fully.


> >
> > How do we get our client to fill this role adequately? In all honesty,
> > I don't think we can, given what is on the table (of specifications) at
> > this point. I've seen discussions on the list where it is argued that
> > attributes such as labels and so forth have no place in the data because
> > these are presentational attributes and should not be intrinsically
> > linked to the data. I understand the argument, but then something still
> > needs to be offered whereby we can advertise metadata with which a
> > decent user interface can be constructed.
>
> Well, the holy grail of SW would be that each property or type can be
> dereferenced so that a client can discover the metadata directly from the
> relevant vocabulary. That is however impossible for very practical reasons:
> performance (many and potentially slow requests), unstable or
> un-dereferenceable vocabularies, and incomplete metadata (think translations
> and various linguistic quirks). Also, one could argue that such metadata
> doesn't belong in the ontology/vocabulary itself anyway. In such a case a
> given client has to know that information up-front anyway, much like it is
> done without any SW technologies. Whether a server publishes such metadata,
> and how, is another topic. I do however agree that this must not be part of
> Hydra. The problem is that introducing yet another vocabulary/scheme for
> publishing that alongside Hydra will only strengthen the perception that
> the Semantic Web or Linked Data is difficult, because you need so many
> pieces: JSON-LD/RDF, Hydra, some metadata vocabulary.


If it's not to be part of Hydra, and not to be advertised via some other
vocabulary, and yet we want our clients to have generic capabilities and be
adaptable to whatever they find on the web, what is the alternative? I
guess the solution could be published as a library that is maintained for
the purpose of "knowing" a set of standard vocabularies, but that would of
course limit our ability to adapt to new data that wants to "suggest" an
optimal layout to whatever client is reading that data.
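
To make that concrete, such a "library" might be little more than a bundle
of vocabulary metadata shipped with the client. A minimal sketch (all of the
names below are hypothetical, not a real library):

// Hypothetical: vocabulary knowledge bundled with the client rather than
// dereferenced at runtime.
var knownVocabularies = {
  "http://schema.org/name":        { label: "Name", priority: 1 },
  "http://schema.org/description": { label: "Description", priority: 2 },
  "http://schema.org/telephone":   { label: "Phone", priority: 3 }
};

// Fall back to the raw IRI when the client has no prior knowledge of a term.
function labelFor(propertyIri) {
  var entry = knownVocabularies[propertyIri];
  return entry ? entry.label : propertyIri;
}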


> > (a) If I return a JSON-LD document in response to an HTTP request, the
> > general idea of hypermedia and linked data is that I should include
> > hypermedia controls to related data. Hypermedia controls seem to me to be
> > context-specific metadata and not always intrinsically-related to the
> > requested resource. Is this line of thinking correct?
>
> This one has baffled me. Consider a traditional web app, where on every
> page there is a login/logout link visible depending on the logon status.
> How does that correspond to a Linked Data resource? Do you include a
> relevant login/logout operation with every resource based on the request's
> authentication? I would assume you have to, so that all possible state
> transitions are given with a response.
>
>
I'm also still thinking about this, though I have a suspicion that the
answer somehow relates to graph structures, i.e. with one of the graph
nodes being the requested data. I haven't fully considered everything here
though, and I suspect that kind of solution has problems of its own.
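
As a purely hypothetical sketch of what I mean (the IRIs and the operation
are made up, and I'm not claiming Hydra prescribes this), the response could
carry the requested resource as one node of a @graph and the
session-dependent control as another:

{
  "@context": {
    "hydra": "http://www.w3.org/ns/hydra/core#",
    "schema": "http://schema.org/"
  },
  "@graph": [
    {
      "@id": "/users/nathan",
      "@type": "schema:Person",
      "schema:name": "Nathan"
    },
    {
      "@id": "/session",
      "hydra:operation": {
        "@type": "hydra:Operation",
        "hydra:method": "DELETE",
        "hydra:title": "Log out"
      }
    }
  ]
}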


> >
> > (b) As above, is it generally understood that the document and its
> > related hypermedia controls have a dynamic relationship? For example, if
> > I return information about a user, then perhaps a link to the user's
> > account history will be included, or perhaps it won't; maybe the person
> > making the request doesn't have the appropriate credentials and
> > shouldn't be offered certain links.
>
> Possibly but not necessarily. Again the HTML analogy. Consider pages which
> always include links to protected resources but redirect to the login page
> when required. I think it would be reasonable to offer such links/operations
> in
> a representation and return 401 status with a log-in operation if the user
> is unauthenticated.
>

Thinking further on my previous comment, perhaps returning information as a
graph is actually reasonable. After all, a graph simply represents a bunch
of data, and hypothetically our client would just inhale whatever it's fed.
If the query is initially "get this resource", and the response is a JSON-LD
document with a @graph structure inside, we would still walk that structure,
read in all of the data we've been given, and augment our local data cache
with what we found. That cache would then include the triples/JSON/whatever
we originally wanted, so the query would still be satisfied even though the
resource representation wasn't at the top level of the response.
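
A rough sketch of that "inhale whatever we're fed" step, assuming the
JSON-LD has already been parsed (the function names are mine, not from any
specification or library):

// Hypothetical: merge every node of a response into a local cache keyed by
// @id, then answer the original query from that cache.
var cache = {};

function ingest(jsonLdDocument) {
  var nodes = jsonLdDocument["@graph"] || [jsonLdDocument];
  nodes.forEach(function (node) {
    var id = node["@id"];
    if (!id) return;
    var existing = cache[id] || {};
    for (var key in node) {
      existing[key] = node[key];
    }
    cache[id] = existing;
  });
}

// The original query "get /users/nathan" is still satisfied even if that
// node wasn't the top-level object in the response.
function resolve(iri) {
  return cache[iri];
}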

> > (c) Building further on the above points: I know I also want to include
> > a link to a Hydra entry point, but is it appropriate to include a link
> > to that entry point from every document returned from my application?
> > One would assume that any route offered by my application may be called
> > as a first point of entry, and as such, the entry point would need to be
> > advertised so that the client is able to figure out what the API can do.
>
> This is similar to what I described above. For example consider a menu
> displayed on every page. Do we offer such "static" links with every
> response? If not, how would you build a "real" page from such a resource's
> representation, especially in a generic client? Unfortunately there are a
> number of similar issues. Think about hierarchies of resources. Usually
> pages include shared parts, which contain the parent's data. For example, a
> book chapter page displayed along with the chapter listing and the book's
> title, etc.
>

HTML is of course a valid hypermedia type, and as a single response it
contains both the data and the hyperlinks surrounding it. I suppose it
would be easy to get into a debate about the purity of the representation,
but if we remember that a JSON-LD document is really just a bundled format
of triples, then perhaps it's not invalid to couple the sitewide metadata
with the original document. The only question would be how a third party
might be able to distinguish which parts of the returned data relate
"purely" to the resource. What if we were to use @reverse in order to
bundle in links from our sitewide data back to the requested resource? This
might help us express the data in a way that removes this ambiguity.
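
Taking your book chapter example, something along these lines perhaps (only
a sketch, and schema:hasPart is just one relation that could work here):

{
  "@context": { "schema": "http://schema.org/" },
  "@id": "/books/moby-dick/chapters/1",
  "@type": "schema:CreativeWork",
  "schema:name": "Loomings",
  "@reverse": {
    "schema:hasPart": {
      "@id": "/books/moby-dick",
      "@type": "schema:Book",
      "schema:name": "Moby-Dick"
    }
  }
}

The requested chapter stays at the top level, while the "sitewide" book data
only enters the document as the subject of a reverse link pointing back at
it.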


> > So my implementation model that has been materialising in my head looks
> > something like this:
> >
> > 1) All general requests to a server are subject to content negotiation
> > with respect to the data requested and the specified Accept header. So
> > in most cases, if the client requests JSON-LD, that's what it gets. If
> > it requests HTML though, we need to somehow decide how to render the
> > [arbitrary] data using HTML. That HTML would therefore need to include
> > enough so that the full client can bootstrap itself and then display the
> > data that was originally requested.
>
> I'm not sure I understand the above.
>

It's harder to offer a valid representation of a requested resource in HTML
than in machine-readable formats such as JSON or XML, because HTML is
designed for presentation. We can still offer a valid response to an
"Accept" header of "text/html", though, by making sure that the HTML we
return includes a script tag containing the JSON-LD representation of the
requested resource, along with any client scripts or whatever else the
browser needs to render HTML that represents that resource. The actual
rendering of suitable HTML for displaying the resource could be done on
either the client or the server, but by including your smart client scripts
with the HTML (i.e. via <script src="...">), the browser will load those
scripts when it renders the page, and they can take over management of the
user interface, e.g. making further resource requests via AJAX calls.
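
A rough sketch of the kind of response I have in mind (the script name and
the resource are placeholders, not a real framework):

<!DOCTYPE html>
<html>
  <head>
    <!-- The JSON-LD representation of the requested resource, embedded so
         the HTML response still carries the data that was asked for. -->
    <script type="application/ld+json">
    {
      "@context": { "schema": "http://schema.org/" },
      "@id": "/users/nathan",
      "@type": "schema:Person",
      "schema:name": "Nathan"
    }
    </script>
    <!-- The smart client bootstraps itself, reads the embedded JSON-LD,
         renders the UI and makes further requests via AJAX. -->
    <script src="/scripts/smart-client.js"></script>
  </head>
  <body></body>
</html>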


> > 2) A client cannot truly be intelligent. If it receives an object of
> > @type schema:LocalBusiness, it isn't going to know that one attribute is
> > more important than another, or that certain attributes probably don't
> > need to be displayed initially, or that certain groups of attributes, if
> > appearing in tandem, would be better rendered using a specialised view,
> > rather than individually in a table of values. We need to teach our
> > client how to render a user interface in a way that both suits our
> > application's domain-specific requirements, and can also adapt to linked
> > data that is beyond the application's initial scope.
>
> By "teach" do you simply mean that specific page design/layout is prepared
> by developers for schema:LocalBusiness and unexpected data is handled in a
> generic way? I don't see a better way currently. Again, building more such
> "intelligent" features atop Linked Data so that it can be consumed will
> further block its adoption.


My use of the word "teach" was just a fancy way of saying that the client
would use some kind of plugin model to build out its functionality for how
to respond to and render data using different vocabularies. Such a model
could also include a domain-specific plugin: while the client behaves in a
generic way by default, it has functionality injected into it to handle
*your* data in very specific ways. That way you still offer a good user
experience tailored to the needs of your application, without losing the
ability to interoperate with other linked data that may have to be dealt
with after the application goes live. This seems important to me because
the general idea of this whole field is that you should be able to start
with a single endpoint and adapt to whatever you come across, especially
considering that we can't really account for how our application's data may
evolve and be augmented over time.
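
To sketch what I mean (everything below is hypothetical; none of these names
refer to an existing library), the plugin model might be little more than a
registry keyed by @type with a generic fallback:

// Hypothetical plugin registry: domain-specific renderers are registered by
// @type, with a generic view as the fallback for anything unexpected.
var renderers = {};

function registerRenderer(type, renderFn) {
  renderers[type] = renderFn;
}

function render(resource) {
  var renderFn = renderers[resource["@type"]] || renderGenericTable;
  return renderFn(resource);
}

// A domain-specific plugin overrides the default behaviour for the
// application's own data.
registerRenderer("schema:LocalBusiness", function (business) {
  return "<h1>" + business["schema:name"] + "</h1>";
});

// Baseline: anything the client hasn't been "taught" falls back to a plain
// property/value table.
function renderGenericTable(resource) {
  return "<table>...</table>";
}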


> > 3) All linked data ultimately boils down to literals, so our client
> > needs to understand how to render literals, which means, at the absolute
> > minimum, it can take a set of triples, either directly or from a
> > deconstructed JSON-LD document, and render them out to a simple table of
> > values.
>
> Yes, a simple table or some form of graph. Have you seen http://lodlive.it/
> by the way?
>

I had a look, although I didn't find it to be particularly usable...
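
That said, the baseline described in point 3 above doesn't need to be much
more than something like this (a hypothetical sketch, operating on a single
parsed JSON-LD node):

// Hypothetical: render any JSON-LD node as a plain table of property/value
// pairs, the absolute-minimum fallback view.
function renderGenericTable(node) {
  var rows = Object.keys(node)
    .filter(function (key) { return key !== "@id" && key !== "@type"; })
    .map(function (key) {
      return "<tr><td>" + key + "</td><td>" +
             JSON.stringify(node[key]) + "</td></tr>";
    });
  return "<table>" + rows.join("") + "</table>";
}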


>
> > 5) To make our client (paradoxically) both domain-specific and generic,
> > we accept that our API entry point is domain-specific by definition, so
> > we also teach our client how to render for our own needs. This means
> > that the method we use to teach our client rendering behaviours must
> > include functionality that overrides other behaviours defined earlier.
> > This may include overriding the rendering of specific data types, either
> > always, or only when certain conditions assert themselves. As a result
> > of this approach, our client starts with the baseline ability to render
> > any kind of linked data, but also becomes able to render a user
> > interface purpose-built for our application's domain-specific needs.
>
> Simply displaying in a customized way may be reasonably simple. But what
> about teaching a client to give a great UX for operations? For example, I've
> just recently implemented a table where each cell contains a checkbox.
> When clicked, a request is sent to the server. I have no idea how I would
> model that with any kind of hypermedia, let alone create a good UI
> dynamically based only on such a model.
>

I don't have a clear answer to this right now, but my intuition suggests
that you need to approach the problem from the other direction. Rather than
starting with a table of checkboxes and trying to retrofit that into a
linked data process, you'd think about the data to begin with and ask,
"what is it about this data that makes it best represented by a UI of this
type?". If that question can be answered in a logical, deterministic
fashion from the data and the context in which it is requested, then a
solution presents itself in the form of the presentational vocabulary idea
I suggested, or something else that achieves the same result.
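
To illustrate the direction I mean with your checkbox example (the
vocabulary and IRIs here are entirely made up): if the resource behind each
cell exposed a boolean property together with an operation that updates it,
a client could deterministically conclude that a checkbox is an appropriate
control:

{
  "@context": {
    "hydra": "http://www.w3.org/ns/hydra/core#",
    "ex": "http://example.org/vocab#"
  },
  "@id": "/tasks/42/assignments/alice",
  "ex:completed": false,
  "hydra:operation": {
    "@type": "hydra:Operation",
    "hydra:method": "PUT",
    "hydra:title": "Toggle completion"
  }
}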


> > Any comments, questions and/or criticisms are most welcome. Also, take a
> > look at a link Markus posted earlier regarding JSON-LD and web
> > components, as there are ideas there that may be relevant to a client
> > such as what I envisaged above.
> > http://updates.html5rocks.com/2015/03/creating-semantic-sites-with-web-components-and-jsonld
>
> The above article could be a step towards a solution to your last
> paragraphs. How about, instead of advertising layout, grouping, etc., a
> server would publish links to web components? Starting with the simplest
> example, consider schema:ImageObject, which optionally contains a thumbnail:
>
> {
>   "@type": "schema:ImageObject",
>   "contentUrl": "http://image/large",
>   "thumbnail": {
>     "contentUrl": "http://image/small"
>   }
> }
>
> A server would somehow advertise that it could be displayed with a custom
> web component <my-schema-image />, available at
> http://my.components.com/my-schema-image. The browser uses an HTML import to
> download the component, generate the appropriate tag and pass the
> schema:ImageObject as an attribute. The page could generate HTML like
>
> <link rel="import" href="http://my.components.com/my-schema-image" />
> <my-schema-image id="some-id" />
> <script>
> var image = { ... }; // the image above
> // set the "image" property on the element itself
> $('#some-id').prop('image', image);
> </script>
>
> From there the component takes over to render the image. If present,
> the thumbnail would be displayed with a link to the full-size version (a
> modal maybe?).
>

Web components, on the surface, appear to offer a solution, but I don't
think my suggested solution needs to go to that level. It's easy enough to
offer some form of composable solution using any number of basic web
technologies, and given that the client would be doing all the heavy
lifting anyway, the need for custom HTML elements, a shadow DOM and so
forth becomes unclear. All that is required is a model whereby some linked
data can be expressed as HTML. If a vocabulary solution is used, then such
a vocabulary could advertise web component sources, or it could offer
higher-level "hints" to assist a layout engine in prioritising what is
important, without specifically dictating the actual layout implementation,
which in our case would be HTML, but could just as easily be XAML or some
other format. I haven't got my heart set on this as a solution at this
point, but I haven't thought of a better one yet either.
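
Purely as an illustration of the "hints" idea (the ui: vocabulary below is
invented for the sake of the example; it doesn't exist), such a description
might sit somewhere between dictating layout and saying nothing at all:

{
  "@context": {
    "schema": "http://schema.org/",
    "ui": "http://example.org/ui-hints#",
    "ui:primaryProperties": { "@type": "@id" },
    "ui:suggestedComponent": { "@type": "@id" }
  },
  "@id": "http://schema.org/LocalBusiness",
  "ui:primaryProperties": ["schema:name", "schema:openingHours"],
  "ui:suggestedComponent": "http://my.components.com/my-local-business"
}

A layout engine could use the hints where it understands them and ignore
them where it doesn't, whether the output ends up being HTML, XAML or
something else.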


Nathan

Received on Tuesday, 17 March 2015 11:04:47 UTC