Monitoring on Pi Zero W with multiple RuuviTags

So you aim to use the limited processing power of the Rpi to handle Ruuvicollector and do the hard work at the receiving end?
Sounds sensible. :slight_smile:

Hey there @dgerman,

Thanks a lot. I'm not too interested in Grafana at this stage - haven't really worked with it before - probably need to look into it since everyone else is working with it :slight_smile:

Rgds,
Wzz

Hi @Scrin, thanks for the pointers!

I'll report back on this for others that might have the same line of thinking. (Although I won't call myself a HW dev at all, I can dig quite well and love to break things until they work :rofl:)

Rgds,
Wzz

Hey @Motobiman.

Yeah, that was my initial idea - thanks for affirming! I haven't checked which modules are available on the Ruuvi to allow some simple edge processing instead of harvesting all the data straight into a DB, though… Collecting is good up to the point your bank account breaks from all the cloud space you occupy…

What are your thoughts on the collection process? @dgerman @Scrin, you’re welcome to jump in here if you’d like…

Rgds,
Wzz

For my application, beehives, I need to be made aware of change rather than being able to look at days and weeks of unchanging data.

I would be happy if the whole system, however it is configured, worked in the background and was only powered up for a few minutes every quarter hour, ignored all temp, humidity, battery voltage and movement data within pre-defined limits, but raised a red flag for a tag when its data went outside those limits.

The comms have to be over cellular networking, as the collector site is remote, without WiFi or mains power.

Frankly, this idea of watching unchanging data from sensors a few yards away from me, however pretty the page, leaves me cold. If you miss a serious temp drop or movement in a hive, by the time you do notice it the damage is done and often unrecoverable.

Data change and an effective warning of it, say by a text message to my phone, is what I need.


PS
Having a camera on the Rpi and being able to view the image over cellular network might save me a drive to the apiary site too.

Regarding alternate processing: Scrin's description at 4/16/19 03:27 above, and his excellent idea of making the influxURL configurable, make routing to another web endpoint very easy.
I have had some ideas regarding the use of syslog to route the data; see http://mybeacons.info/jakoavain.html

Regarding monitoring changes: I have noticed that Grafana has alerts. As this is the best program I have seen in over 30 years, at least for processing data, it seems they have thought of that too, but I haven't investigated how to use it.

Regarding remote site: Motobiman and his bees have got me so interested that I am pursuing http://www.uugear.com/product/wittypi2/ as per https://f.ruuvi.com/t/is-the-swamp-too-deep/3401

Hey @Motobiman.

I get it. I have the same kind of objective here with comms and notification. So, I like your thoughts.

With some of my other controllers that are running Node, I would pull the limits you’re talking about from my api to the device and then edit the broadcast loop to adhere to those limits - i.e. only broadcast when sensor data is in error. Hence,

broadcast loop every 1000 ms => if temp outside the desired range then do an HTTP request. The algo can be much more complex than this, taking into account a sequential rise in temp…
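A minimal sketch of that loop in Python, just to make the idea concrete. The sensor source, limits and alert endpoint are all hypothetical placeholders (in a real setup the limits would come from the API mentioned above):

```python
import time
import urllib.request

# Hypothetical limits, as if pulled from an API; values are placeholders.
LIMITS = {"temp_min": 10.0, "temp_max": 35.0}
ALERT_URL = "http://example.com/api/alert"  # placeholder endpoint

def out_of_range(temp, limits):
    """True when the reading is outside the pre-defined limits."""
    return temp < limits["temp_min"] or temp > limits["temp_max"]

def broadcast_loop(read_temperature, interval_s=1.0):
    """Check once per interval; only do an HTTP request when data is in error."""
    while True:
        temp = read_temperature()
        if out_of_range(temp, LIMITS):
            # Fire the alert only on out-of-range data, so idle data
            # never leaves the device.
            urllib.request.urlopen(f"{ALERT_URL}?temp={temp}")
        time.sleep(interval_s)
```

A "sequential rise" check would extend `out_of_range` to look at a short history of readings instead of a single value.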

This all worked on some of the other stuff I managed to accomplish, but the Ruuvi habitat is a bit new and I still need to find my feet here…

As soon as I have my setup up and running I’ll send through some thoughts on how I approached it!

Rgds,
Wzz

That witty2 device looks good, especially with its apparently very low power consumption.
At the moment the 5 W solar panel I have can't cut it, but that's with the Rpi running 24/7 and without a cellular hat working too - that's still on order.
I think it's going to be the power consumption, and hence the battery/charger setup, that's going to be the limiting factor for my setup and use.

This is actually one of the reasons why I chose InfluxDB as my database. As of right now, I have over 70 million unique measurements (i.e. a packet from a tag, including all “extra” data such as tag name, MAC, calculated values, RSSI…) saved, which means a few billion values (a value is e.g. temperature or humidity), which consume less than 7 GB of storage space right now.

This is from 8+ tags collected in the past 2+ years with 5-10s transmission intervals. The efficient delta-compression algorithm (among other things) is what makes this kind of space efficiency possible in InfluxDB (and most other time-series databases).

However, on very large scale deployments (hundreds or thousands of tags, potentially transmitting even more frequently) even this may be too much, in which case it would be smart to create aggregations of the history data (e.g. data over a month old) and drop the individual measurements, keeping only e.g. “hourly averages” or whatever happens to be sufficient for the use-case.
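The aggregation idea can be sketched in plain Python, independent of any database. The measurement shape (timestamp, temperature pairs) is made up for illustration:

```python
from collections import defaultdict
from datetime import datetime

def hourly_averages(measurements):
    """Collapse raw (datetime, float) samples into per-hour means.

    This mirrors the idea of keeping only "hourly averages" for old
    data: many raw samples per hour become a single value.
    """
    buckets = defaultdict(list)
    for ts, temp in measurements:
        # Truncate the timestamp to the start of its hour.
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(temp)
    return {hour: sum(v) / len(v) for hour, v in buckets.items()}

# Example: three samples in the same hour collapse to one average.
raw = [
    (datetime(2019, 4, 16, 3, 0), 4.0),
    (datetime(2019, 4, 16, 3, 20), 6.0),
    (datetime(2019, 4, 16, 3, 40), 8.0),
]
print(hourly_averages(raw))  # one bucket with mean 6.0
```

In InfluxDB this would normally be done server-side rather than in application code.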

As my setup is this small, I don’t have a need for this and thus I still keep every single measurement as-is. :slight_smile:

Thanks for the contribution @Scrin. Learning new things every day!

On this subject, I see MongoDB has some examples on how to set up time-series schemas to store data more efficiently and economically…

I should’ve created a new topic on this instead of hijacking this thread - sorry everyone… :grin:

Sorry Scrin, I just don't get what I can possibly gain from being able to recall thousands of bits of data that seldom change, and when they do I only find out about it after the event.
Seems to me it's more a case of doing it because it's possible rather than useful.
If it was set up to monitor and only report changes outside preset limits, now that would be more useful IMHO.

Has anyone managed to get the Chronograf alert system in Kapacitor working?

Well, in my case it's more about avoiding unnecessary effort: I could save maybe a gigabyte per year or so by saving fewer values, but it's definitely not worth the time in my case. And like I said, in very large setups the savings may become worth it, in which case it's relatively easy to set different retention policies and automatic aggregation of historic data.

Also, in my personal opinion, there is no need to over-complicate things by optimizing storing of the values if the used space is not an issue with the current setup, kinda like the saying “if it ain’t broken, don’t fix it” :slight_smile:
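For reference, the retention-policy and aggregation setup mentioned above could look roughly like the following in InfluxDB 1.x InfluxQL. This is a sketch, not from the thread: the database name ("ruuvi"), the measurement name ("ruuvi_measurements") and the field names are hypothetical and would need to match your own schema:

```sql
-- Keep raw data for 30 days only (hypothetical database "ruuvi")
CREATE RETENTION POLICY "raw_30d" ON "ruuvi" DURATION 30d REPLICATION 1 DEFAULT

-- A second policy that keeps downsampled data indefinitely
CREATE RETENTION POLICY "forever" ON "ruuvi" DURATION INF REPLICATION 1

-- Continuously roll raw samples up into hourly averages
CREATE CONTINUOUS QUERY "hourly_avg" ON "ruuvi" BEGIN
  SELECT mean("temperature") AS "temperature", mean("humidity") AS "humidity"
  INTO "ruuvi"."forever"."ruuvi_measurements_hourly"
  FROM "ruuvi_measurements"
  GROUP BY time(1h), *
END
```

With this, raw measurements expire automatically after 30 days while the hourly averages remain.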

OK understood.

I have been working on getting the WittyPi hat running on a Rpi3B with dgerman's Collector/Influx/Chronograf image.
It works, and the on/off sequence can be chosen just by making a text file.
One thing that's odd though:
on installing I got the required files in the correct folder, but also others with a symbol and ‘35’ in front.
See below, anyone know what these are, before I rename them? :-/

What you're seeing is terminal screen attribute codes, not part of the name of the file.
Seems to be a configuration issue with LXTerminal.

Not just 35 but 1B 01;35m.
That's an escape character (0x1B) followed by a “Select Graphic Rendition” sequence (terminated by ‘m’) that sets magenta (35).
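If you want to see those bytes for yourself, here is a quick demonstration in plain Python (nothing Ruuvi-specific). The filename is a placeholder; `ls --color` wraps highlighted names in exactly this kind of sequence:

```python
# ESC [ 01;35 m turns on bold magenta; ESC [ 0 m resets the attributes.
# repr() makes the otherwise-invisible escape byte (0x1B) visible.
colored = "\x1b[01;35mfilename\x1b[0m"
print(repr(colored))  # prints '\x1b[01;35mfilename\x1b[0m'

# Strip the SGR prefix and reset suffix to recover the real name.
visible_text = colored[len("\x1b[01;35m"):-len("\x1b[0m")]
print(visible_text)   # prints filename
```

A terminal that understands ANSI codes renders only "filename" (in magenta); one that doesn't shows the raw codes, which is what the odd ‘35’ prefixes in the folder listing were.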

PS: using X.Org (i.e. a windowed desktop) takes more memory and makes the system slow to respond. I recommend ssh instead.

See http://real-world-systems.com/docs/ANSIcode.html

OK, so do I need to do anything about it?
That's on the Rpi Zero W using the terminal, as I find it easier to connect by wire than to use ssh from the Mac.
That Windows thing PuTTY is XXXX. Rude words.
It's quite a bit slower to display Chronograf on either the Mac or the PC than on the 3B.
Next thing is to try and get the alerts working, then it's the cellular hat.
Running on a Zero W and Witty, 15 min on and 15 min off seems to work.
Any less ‘on’ time is a bit of an issue, as the delay built into the Influx startup means it takes about 5 min from the Witty switching on until Chronograf updates.

I don't know why the ls command seems to be outputting an ANSI Select Graphic Rendition string for magenta that LXTerminal is not able to process.

Issue echo $LS_COLORS

see also http://linux-sxs.org/housekeeping/lscolors.html

You could try adding
alias ls='ls --color=never'
to your .profile
No worries. It's not a problem with Witty Pi, and not a problem with the Raspberry Pi.

I agree that it seems to take influx a really long time to get ready to accept transactions.

I wonder if there is a nice way to shutdown InfluxDB so that the next start is quicker.
+++++++++
I think that a more limiting factor regarding duty cycle will be the amount of light, aka battery charge, that can be received. I would be surprised if a 50-50 on/off split gives enough off (charging) time, but that depends on latitude and the size of the solar cell. I think that evaluating the current state of the battery and trying to predict how much time to spend running vs charging is way too complicated and imprecise to be worth the trouble, especially if the frequency of environmental changes of concern, as well as alert evaluation and, most of all in your case, the time necessary to respond, are taken into consideration. Trial and error and an excessive safety margin seem the best way to go.

I hope to get my equipment tomorrow.

OK, I have had little success with the Rpi Zero, but on a 3B the Collector/Influx/Chronograf image plus the WittyPi2 hat runs fine with two RuuviTags on my WiFi.
Times are 20 min on and 40 min off, set to start on the hour, so you always know when to log on via ssh to do any changes.
Next is to get wvmail and the 3/4G hat running.