Re: Physical web project

I have looked at the technical spec. Have you looked at other wireless technologies and their corresponding data packets, and at how we could create a generalization?
To deal with all components of the Internet, whether the Internet of Data, the Internet of Devices or the Internet of DNA, we need just two bits, since we only need to identify four kinds of entity: E = stationary living entities (coral reefs, forests, crop fields and other vegetative cover), A = animals, D = devices (or M = machines) and H = humans.
And only in the case of humans do we need to bother ourselves with privacy and general security issues.
Four categories, just two bits. I proposed we do this in HTML code several years ago in a post to the semweb and LOD lists of W3C.
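As a minimal sketch of what such a two-bit field could look like (the names and bit assignments below are illustrative assumptions, not a proposed standard):

```python
from enum import IntEnum

class EntityClass(IntEnum):
    """Hypothetical two-bit entity classes: E, A, D/M and H."""
    LIVING_STATIONARY = 0b00  # E: coral reefs, forests, crop fields
    ANIMAL            = 0b01  # A
    DEVICE            = 0b10  # D (or M, machine)
    HUMAN             = 0b11  # H: the only class needing privacy handling

def needs_privacy(bits: int) -> bool:
    """Only human entities require privacy/security handling."""
    return EntityClass(bits & 0b11) is EntityClass.HUMAN
```

The same two bits could ride in any carrier, e.g. as an HTML data attribute or two spare bits in a packet header.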

The proposed scheme would allow HTML to categorize all components and instantly open the way for the Semantic Web.

The data formats for Bluetooth, RFID, near-field communication and wearable-sensor data packets should be examined for such a generalization.
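By way of illustration of what such a generalization would have to fit into, here is a sketch of packing a URL into a small BLE advertising payload, loosely modeled on the UriBeacon/Eddystone-URL idea; the exact byte layout and prefix table are illustrative assumptions, not the real specification:

```python
# Sketch: compress a URL's scheme into one byte so it fits the
# very small payload of a BLE advertising packet.
SCHEME_PREFIXES = {
    "http://www.": 0x00,
    "https://www.": 0x01,
    "http://": 0x02,
    "https://": 0x03,
}

def encode_url(url: str) -> bytes:
    """Replace a known scheme prefix with a one-byte code."""
    # Try the longest prefixes first so "http://www." wins over "http://".
    for prefix, code in sorted(SCHEME_PREFIXES.items(), key=lambda kv: -len(kv[0])):
        if url.startswith(prefix):
            body = url[len(prefix):].encode("ascii")
            if len(body) > 18:  # advertising payloads are tiny
                raise ValueError("URL too long for a single advertisement")
            return bytes([code]) + body
    raise ValueError("unsupported scheme")

def decode_url(packet: bytes) -> str:
    """Invert encode_url: expand the code byte back to its prefix."""
    code_to_prefix = {v: k for k, v in SCHEME_PREFIXES.items()}
    return code_to_prefix[packet[0]] + packet[1:].decode("ascii")
```

A generalization across Bluetooth, RFID and NFC would have to leave room for something like this while staying carrier-neutral.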

I am currently looking at near-field and wearable sensors in health IT applications. Who else has looked at, e.g., agricultural, industrial and smart-vehicle applications using a similar approach?

Creating one universal interface may be very difficult, but creating a way of "sensing" what entity is being accessed and then, based on that information, choosing an interface engagement protocol may just work, and would require little standardization!
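The "sense first, then choose an engagement protocol" idea can be sketched as a simple dispatch table; the entity classes and protocol names below are placeholders, assuming some lower layer has already identified the entity:

```python
# Sketch: choose an interface engagement protocol from a sensed
# entity class. Class and protocol names are illustrative placeholders.
PROTOCOLS = {
    "device": "open-broadcast",       # no privacy constraints
    "animal": "read-only-telemetry",
    "plant":  "read-only-telemetry",
    "human":  "consent-gated",        # privacy/security rules apply
}

def choose_protocol(entity_class: str) -> str:
    """Pick an engagement protocol for a sensed entity class."""
    try:
        return PROTOCOLS[entity_class]
    except KeyError:
        raise ValueError(f"unknown entity class: {entity_class}")
```

Only the sensing step and the table would need standardizing; the protocols themselves could vary per application.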

 
Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: a structured approach to bringing the tools for sustainable development to all stakeholders worldwide by creating ICT tools for NGOs and providing online access to web sites and repositories of data and information for sustainable development

This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error, please notify the system manager. If you are not the named addressee, you should not disseminate, distribute or copy this e-mail.



On Wednesday, October 22, 2014 1:25 AM, Paola Di Maio <paola.dimaio@gmail.com> wrote:
 


Scott and all

thanks for sharing, good to have a top view of what you are doing, and others for chipping in.  Just to clarify, what we have in the report are individual efforts, and we are hoping to be able to put together some collaborative work towards some 'standard' for such interfaces.


You say


The primary goal is simple: Show beacons to the user as simply and easily as possible. Our current prototype uses the Android notification manager (with no sounds or vibrations) so seeing nearby beacons is just two taps. We expect other platforms to try different things.

From a semantic interface perspective, IMHO this simple goal can lead to further design considerations, including privacy issues and other factors that can be handled at the interface design level.

How to interact with a beacon, what to do with it, how to do it, etc. is something most users will need advice on, and hopefully we can help with that.

So what I suggest is that, with the help of group members, we consider the Physical Web project a working prototype to guide our work in principle.

So we'll keep your work in mind when making our plan ahead for this group, and hopefully you can give feedback on what we are doing (useful or not).

Also, whatever you may require in the meantime, please shout.

The semantic web for everyone at last ? :-)!!!

PDM






On Wed, Oct 22, 2014 at 1:32 AM, Scott Jenson <scott@jenson.org> wrote:

Interesting point, Miguel. Right now we gather our metadata in a very primitive way: we scrape the target HTML page for TITLE, DESCRIPTION and FAVICON. That clearly needs to improve. We have been looking at JSON-LD as well as RDFa mechanisms for web pages to offer up more information, cooperating with Schema.org. However, these are very much early days and we're just starting this exploration.
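Scraping of that primitive kind needs nothing more than the standard-library HTML parser; a minimal sketch (illustrative, not the actual prototype code):

```python
from html.parser import HTMLParser

class MetaScraper(HTMLParser):
    """Pull TITLE, DESCRIPTION and FAVICON out of an HTML page."""
    def __init__(self):
        super().__init__()
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta["description"] = attrs.get("content", "")
        elif tag == "link" and "icon" in attrs.get("rel", ""):
            self.meta["favicon"] = attrs.get("href", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.meta["title"] = self.meta.get("title", "") + data
```

Feeding a page's HTML to `MetaScraper.feed()` fills the `meta` dict with whichever of the three fields the page provides.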
>
>
>However, that being said, what we use for the Physical Web shouldn't be a limitation for your group. While there clearly is *some* interaction between our layers, we want to keep our layer as thin and focused as possible so we don't crimp any of your future ambitions.
>
>
>Scott
>
>
>On Tue, Oct 21, 2014 at 9:14 AM, Miguel <miguel.ceriani@gmail.com> wrote:
>
>Dear Scott and all,
>>IMHO there is already something "semantic webby" in your approach.
>>
>>What I understood of your project is that a physical object broadcasts
>>a URL through which some related information can be gathered.
>>In a "Semantic Web of Physical Objects" view, that URL (or URI, for
>>that matter) could also actually identify that physical object. That
>>would allow gathering information related to that object from
>>different sources and not just from that single URL (e.g. independent
>>information on a product in a supermarket). Moreover, it would allow
>>the user to produce information related to the object (e.g. using an
>>annotation service).
>>
>>I think this is a very good reason for us to keep in touch with your work.
>>What I sketched is basically a possible "semantic interpretation" over
>>the "Physical Web" idea, not something that would necessarily add any
>>technical requirements to your project.
>>
>>To be concrete, there are potentially simple ways to make use of the
>>semantics of physical objects.
>>For example, if the URL broadcast by an object points to an HTML
>>page, RDFa can be used to embed meta-data in HTML code.
>>Some of the meta-data could be gathered directly by the app and shown
>>to the user somehow (an icon for the type/category of the object, a
>>color for the time to expiration of a perishable item, ...).
>>
>>Best,
>>Miguel Ceriani
>>
>>
>>On Tue, Oct 21, 2014 at 4:13 PM, Scott Jenson <scott@jenson.org> wrote:
>>> On Tue, Oct 21, 2014 at 2:13 AM, Paola Di Maio <paola.dimaio@gmail.com>
>>> wrote:
>>>>
>>>> What would help us here is some idea of what your system looks like
>>>> (in design terms), so that we could, in principle, include any
>>>> requirements you may have in our work
>>>
>>> Not sure I understand, but I'll give it a shot: our system is a series of
>>> hardware beacons that broadcast URIs using the BLE advertising
>>> packet. These URIs are expected to be URLs, but we are exploring other
>>> encodings (e.g. URNs, though that is a bit more speculative). This creates the
>>> 'senders' of our system. The 'receivers' (at this time) are phones running
>>> an app. However, that is just for prototyping purposes. We expect this to be
>>> built into the OS for most systems. The goal of these receivers is to
>>> collect the nearby beacons, display them to the user WHEN THEY ASK (no
>>> proactive beeping!), rank them in some way, and, if the user taps on one, take
>>> them to that web page. The receivers, much like browsers today, can vary
>>> quite a bit (and even be proprietary). We don't expect to 'control' the
>>> receivers and hope there is a wide range of experiments here. What we do
>>> need to standardize, however, is the broadcasting packet, so everything sends
>>> out a URI the same way.
>>>
>>>>
>>>>
>>>> A question in return: is the Physical Web already thinking about what kind
>>>> of interface it is going to have, and would you benefit from input
>>>> from this community (bearing in mind that we are a collection of
>>>> individuals with different views on things)?
>>>
>>> Of course, that is why we released early to get hard questions and
>>> experiments. The primary goal is simple: Show beacons to the user as simply
>>> and easily as possible. Our current prototype uses the Android notification
>>> manager (with no sounds or vibrations) so seeing nearby beacons is just two
>>> taps. We expect other platforms to try different things.
>>>
>>> Scott
>>
>

Received on Wednesday, 22 October 2014 19:29:22 UTC