Re: Physical web project

Scott and all,

Thanks for sharing, it is good to have a top-level view of what you are
doing, and thanks to others for chipping in. Just to clarify, what we have
in the report are individual efforts, and we are hoping to be able to put
together some collaborative work towards some 'standard' for such interfaces.

You say:

> The primary goal is simple: Show beacons to the user as simply and easily
> as possible. Our current prototype uses the Android notification manager
> (with no sounds or vibrations) so seeing nearby beacons is just two taps.
> We expect other platforms to try different things.



From a semantic interface perspective, IMHO this simple goal can lead to
further design considerations, including privacy issues and other factors
that can be addressed at the interface design level.

How to interact with a beacon, what to do with it, how to do it, etc. is
something most users will need advice on, and hopefully we can help with
that.

So I suggest that, with the help of group members, we consider the Physical
Web project a working prototype to guide our work, in principle.

So we'll keep your work in mind when planning ahead for this group, and
hopefully you can give feedback on what we are doing (useful or not).

Also, if there is anything you need from us in the meantime, please shout.

The Semantic Web for everyone at last? :-)

PDM





On Wed, Oct 22, 2014 at 1:32 AM, Scott Jenson <scott@jenson.org> wrote:

> Interesting point Miguel. Right now we gather our metadata in a very
> primitive way: we scrape the target HTML page for TITLE, DESCRIPTION, and
> FAVICON. That clearly needs to improve. We have been looking at JSON-LD as
> well as RDFa mechanisms for web pages to offer up more information,
> cooperating with Schema.org. However, this is very much early days and
> we're just starting this exploration.
>
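The scraping step Scott describes could look roughly like the sketch below,
using only Python's standard-library HTML parser. This is an illustration of
the idea, not the actual Physical Web code; the function and class names are
made up for the example.

```python
# Rough sketch: pull TITLE, DESCRIPTION, and FAVICON out of an HTML page.
# Illustrative only, not the Physical Web implementation.
from html.parser import HTMLParser


class MetadataParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = None
        self.favicon = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content")
        elif tag == "link" and "icon" in attrs.get("rel", "").lower():
            self.favicon = attrs.get("href")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


def scrape_metadata(html):
    """Return the three fields a receiver might display for a beacon's URL."""
    parser = MetadataParser()
    parser.feed(html)
    return {"title": parser.title.strip(),
            "description": parser.description,
            "favicon": parser.favicon}
```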
> However, that being said, what we use for PhysicalWeb shouldn't be a
> limitation for your group. While there clearly is *some* interaction between
> our layers, we want to keep our layer as thin and focused as possible so we
> don't crimp any of your future ambitions.
>
> Scott
>
> On Tue, Oct 21, 2014 at 9:14 AM, Miguel <miguel.ceriani@gmail.com> wrote:
>
>> Dear Scott and all,
>> IMHO there is already something "semantic webby" in your approach.
>>
>> What I understood of your project is that a physical object broadcasts
>> a URL through which some related information can be gathered.
>> In a "Semantic Web of Physical Objects" view, that URL (or URI, for
>> that matter) could also actually identify the physical object. That
>> would make it possible to gather information related to that object from
>> different sources, not just from that single URL (e.g. independent
>> information on a product in a supermarket). Moreover, it would allow
>> the user to produce information related to the object (e.g. using an
>> annotation service).
>>
>> I think this is a very good reason for us to keep in touch with your work.
>> What I sketched is basically a possible "semantic interpretation" over
>> the "Physical Web" idea, not something that would necessarily add any
>> technical requirements to your project.
>>
>> To be concrete, there are potentially simple ways to make use of the
>> semantics of physical objects.
>> For example, if the URL broadcast by an object points to an HTML
>> page, RDFa can be used to embed metadata in the HTML code.
>> Some of the metadata could be gathered directly by the app and shown
>> to the user somehow (an icon for the type/category of the object, a
>> color for the time to expiration of a perishable item, ...).
>>
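As a rough illustration of Miguel's RDFa idea, an app could pull
property/value pairs out of the page's markup and pick out the ones it knows
how to display. The sketch below only handles the flat `@property`/`@content`
(or element-text) case; real RDFa processing also resolves prefixes, `@vocab`,
and nested subjects, and a real app would use a proper RDFa library.

```python
# Simplified sketch: collect RDFa property/value pairs from HTML.
# Handles only flat @property with @content or element text; real RDFa
# processing (prefixes, @vocab, nesting) is considerably richer.
from html.parser import HTMLParser


class RDFaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.properties = {}
        self._pending = None  # property whose value is the element's text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("property")
        if prop is None:
            return
        if "content" in attrs:
            # Value given explicitly, e.g. <meta property="..." content="...">
            self.properties[prop] = attrs["content"]
        else:
            # Value is the element's text, e.g. <span property="name">...</span>
            self._pending = prop

    def handle_data(self, data):
        if self._pending and data.strip():
            self.properties[self._pending] = data.strip()
            self._pending = None

    def handle_endtag(self, tag):
        self._pending = None


def extract_rdfa(html):
    parser = RDFaParser()
    parser.feed(html)
    return parser.properties
```

With this, the supermarket example could surface a product's name and
expiration date directly in the beacon list.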
>> Best,
>> Miguel Ceriani
>>
>> On Tue, Oct 21, 2014 at 4:13 PM, Scott Jenson <scott@jenson.org> wrote:
>> > On Tue, Oct 21, 2014 at 2:13 AM, Paola Di Maio <paola.dimaio@gmail.com>
>> > wrote:
>> >>
>> >> What would help us here is some idea of what your system looks like
>> >> (in design terms), so that we could, in principle, include any
>> >> requirements you may have in our work
>> >
>> > Not sure I understand but I'll give it a shot: our system is a series of
>> > hardware beacons that are broadcasting URIs using the BLE advertising
>> > packet. These URIs are expected to be URLs, but we are exploring other
>> > encodings (e.g. URNs, but that is a bit more speculative). This creates
>> > the 'senders' of our system. The 'receivers' (at this time) are phones
>> > running an app. However, that is just for prototyping purposes. We expect
>> > this to be built into the OS for most systems. The goal of these receivers
>> > is to collect the nearby beacons, display them to the user WHEN THEY ASK
>> > (no proactive beeping!), rank them in some way, and if the user taps on
>> > one, take them to that web page. The receivers, much like browsers today,
>> > can vary quite a bit (and even be proprietary). We don't expect to
>> > 'control' the receivers and hope there is a wide range of experiments
>> > here. What we do need to standardize, however, is the broadcasting packet
>> > so everything sends out a URI the same way.
>> >
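Because a BLE advertising packet has very little payload room, the
broadcast format Scott refers to compresses the URL before sending. The
sketch below follows the byte codes of the UriBeacon / Eddystone-URL style
encoding (a one-byte scheme prefix, plus one-byte codes for common domain
suffixes); treat it as illustrative rather than a normative implementation
of the Physical Web packet.

```python
# Sketch of Eddystone-URL/UriBeacon-style URL compression, so a URL fits
# in a single BLE advertising packet. Illustrative, not the normative spec.
SCHEMES = ["http://www.", "https://www.", "http://", "https://"]
EXPANSIONS = [".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/", ".gov/",
              ".com", ".org", ".edu", ".net", ".info", ".biz", ".gov"]


def encode_url(url):
    """Return the compressed byte form of a URL: a scheme-code byte,
    then literal characters with common suffixes replaced by code bytes."""
    for code, scheme in enumerate(SCHEMES):
        if url.startswith(scheme):
            out = bytearray([code])
            rest = url[len(scheme):]
            break
    else:
        raise ValueError("URL scheme not encodable")
    i = 0
    while i < len(rest):
        # The '/'-terminated suffixes come first so the longest match wins.
        for code, exp in enumerate(EXPANSIONS):
            if rest.startswith(exp, i):
                out.append(code)
                i += len(exp)
                break
        else:
            out.append(ord(rest[i]))
            i += 1
    return bytes(out)
```

So "http://www.example.com/" compresses to just nine bytes, leaving room
for the rest of the advertising frame.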
>> >>
>> >>
>> >> A question in return: is the Physical Web already thinking about what
>> >> kind of interface it is going to have, and would you benefit from input
>> >> from this community (bearing in mind that we are a collection of
>> >> individuals with different views on things)?
>> >
>> > Of course, that is why we released early, to get hard questions and
>> > experiments. The primary goal is simple: Show beacons to the user as
>> > simply and easily as possible. Our current prototype uses the Android
>> > notification manager (with no sounds or vibrations) so seeing nearby
>> > beacons is just two taps. We expect other platforms to try different
>> > things.
>> >
>> > Scott
>>
>
>

Received on Wednesday, 22 October 2014 05:24:55 UTC