
FYI - a stream processing and data visualisation experiment

From: Dave Raggett <dsr@w3.org>
Date: Thu, 2 Jun 2016 10:36:43 +0100
Message-Id: <0EE44D30-CEFC-4B79-9F2A-40FEC70F6966@w3.org>
To: Public Web of Things IG <public-wot-ig@w3.org>
To ground the discussion on streams for the Web of Things, I am working on a demo based around the GY-521 board. It is built around the InvenSense MPU-6050 sensor, which combines a 3-axis accelerometer, a 3-axis gyroscope, a temperature sensor and a digital motion processor, along with a 1 KB FIFO buffer and an I2C interface. Not bad for just £2.89!
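For anyone working directly with the raw register values, the data sheet gives fixed scale factors for turning them into physical units. A minimal sketch of those conversions (assuming the default ±2g accelerometer full-scale setting, i.e. 16384 LSB/g, and the data sheet's temperature formula; the function names are my own):

```cpp
#include <cstdint>

// Sensitivity at the default +/-2g full-scale setting (16384 LSB per g).
constexpr double ACCEL_LSB_PER_G = 16384.0;

// Convert a raw 16-bit accelerometer register value to g.
double rawToG(int16_t raw) {
    return raw / ACCEL_LSB_PER_G;
}

// Convert the raw temperature register value to degrees Celsius,
// using the data sheet formula T = raw/340 + 36.53.
double rawToCelsius(int16_t raw) {
    return raw / 340.0 + 36.53;
}
```

The same pattern applies to the gyroscope registers, with the LSB-per-degree-per-second factor depending on the configured full-scale range.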

Looking to get started with it, I came across the Arduino introduction [1] and Janaka's matching data visualiser [2] based upon Processing [3]. Processing is a mature scripting language and environment for graphics visualisation that originated in the MIT Media Lab. Janaka has provided a Processing sketch that reads the MPU-6050 data from the Arduino via the serial port and displays it. The MPU-6050 data sheet is at [4]. You can set the chip to provide 250 to 2000 samples per second.

I am planning on combining the GY-521 board with an Arduino Uno and Ethernet Shield, and using TCP/IP to send the data. Processing also comes with a Network library, so it will be trivial to adapt [2] to read the data via a TCP client connection to the Arduino. The next step will be to use my C++ Web of Things gateway project along with HTTP and WebSockets for a web-browser-based human-machine interface. This will involve mapping the Processing sketch to JavaScript for the HTML5 Canvas2D.

The Thing description will describe:

- a stream of raw accelerometer data for the x, y and z axes
- the IoT server, in terms of its IP address and port
- the protocol, as a raw TCP/IP stream
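One possible shape for such a thing description, purely as an illustration; the field names and serialization here are my own assumptions, not any settled TD format:

```json
{
  "name": "gy521-accelerometer",
  "streams": {
    "accel": {
      "fields": ["x", "y", "z"],
      "type": "int16"
    }
  },
  "endpoint": { "host": "192.168.1.50", "port": 5000 },
  "protocol": "tcp-raw"
}
```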

TCP just happens to be really easy with the Arduino Ethernet Shield via the SPI bus. In principle, it would be relatively easy to use a UDP-based protocol, in which case the thing description also needs to declare how many samples are transferred in each packet. As far as I can see there would be little benefit from using CoAP, but I may be mistaken.
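Whichever transport is used, the receiving end has to unpack the byte stream into samples. A sketch of that decoding step, assuming (purely for illustration) that each sample is sent as three big-endian 16-bit integers for x, y and z:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Sample { int16_t x, y, z; };

// Decode a buffer of big-endian 16-bit x/y/z triples, as the Arduino
// might send them over a raw TCP stream. The wire layout here is an
// assumption for illustration, not something the thing description
// mandates yet. Any trailing partial sample is ignored.
std::vector<Sample> decodeSamples(const uint8_t* buf, size_t len) {
    std::vector<Sample> out;
    for (size_t i = 0; i + 6 <= len; i += 6) {
        auto be16 = [&](size_t off) {
            return static_cast<int16_t>((buf[off] << 8) | buf[off + 1]);
        };
        out.push_back({be16(i), be16(i + 2), be16(i + 4)});
    }
    return out;
}
```

With UDP, the same function would run over each datagram's payload, which is why the sample count per packet needs to be declared up front.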

When you look at the MPU-6050 data sheet, there is lots of information that could be expressed as a semantic model. The on chip digital signal processor can even be programmed for gesture recognition. However, I will leave that to future experiments.

It is clear to me that thing descriptions can be used to describe a very wide range of platforms and use cases,  and this includes devices that do not natively support the Web of Things. 

[1] http://playground.arduino.cc/Main/MPU-6050
[2] https://github.com/janaka/Gy521-Dev-Kit
[3] https://processing.org/overview/
[4] http://www.invensense.com/products/motion-tracking/6-axis/mpu-6050/

—
   Dave Raggett <dsr@w3.org>
Received on Thursday, 2 June 2016 09:36:52 UTC
