(advanced feature) Filtering data sent to InfluxDB/Grafana

Several have asked questions about how to deal with abnormal data in Grafana. A single bad value can throw off your charts quite substantially. When this happens, your options are:

1. Leave it, and possibly set the scale of your visualization manually so that auto-scaling doesn't zoom out and make your graph illegible.
2. Delete the bad data from Influx.

Once the bad data has made it to Influx, it's out of the plugin's control. You are welcome to log into Influx yourself to perform #2. However, there are some things we can do to reduce how often this happens. The normal plugin configuration controls WHICH DEVICES and STATES are sent to Influx, but not WHICH VALUES of those devices and states are accepted. I've implemented data filtering as a separate menu option within the advanced options of the plugin. Here's how it works:

  • You may create filters on a per-state basis that apply to one, multiple, or all devices. Filters only apply to numerical values. If more than one filter matches a state/device combo, the value must pass ALL of those filters in order to be sent to InfluxDB. Once a value fails a filter, further filters are not processed. Only device filters are supported; variable filters are not.
  • The "order" of filters only affects the logging to the Event log. For example, you can have a filter rule that applies to a single device / state combo (for example, a common state such as SensorValue) that misbehaves regularly, and turn logging off. Then a second rule can apply to all devices, on the same state (SensorValue), with logging on. This way, you'll only see log messages for devices that are not expected to behave badly.

Here's an example of how you can use this:

Example 1: Say you have a device that you know is reporting bad values. Using the data filtering, create a new min/max rule for state "sensorValue" that applies to only that device. Insert it at a high priority (position 1) in the order, and set its logging to "off". Since you already know that device is reporting bad values, you probably don't want the filter rule filling up your Event Log. I have this happening for a multi-sensor that I mounted outside; its luminance sensor is now broken (it reports -32000 regularly).
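For illustration, here is what that min/max check boils down to. The bounds and sample readings below are made-up values, not the plugin's defaults.

[code]
# Hypothetical min/max bounds for a luminance reading.
MIN_VALUE, MAX_VALUE = 0, 200000

def min_max_passes(value, minimum=MIN_VALUE, maximum=MAX_VALUE):
    """Return True if the reading falls inside the configured bounds."""
    return minimum <= value <= maximum

print(min_max_passes(-32000))   # False: the broken sensor's reading is blocked (silently, since logging is off)
print(min_max_passes(450))      # True: a plausible reading is sent to Influx
[/code]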

Example 2: Say you occasionally have sensors that report bad temperature values. Using the data filtering, create a new maximum-percentage-changed rule for state "sensorValue" that says the most a temperature can change between readings is 25%. Depending on your update frequency for this device, you can scale this up or down appropriately. Select all the temperature devices in your Indigo config, and set logging to True. Also, set "lock minimum frequency updates after filter failure" to true as well (I honestly don't know why anyone would not turn this on, but I added the option to be safe). A small code sketch of this behavior follows the walkthrough below.

Here's how Example 2 would work:
  1. at 10:00: Your temperature sensor sends a normal value of 68 degrees. Assuming this is within range of the previous value for this device, the filter passes and the value is sent to Influx/Grafana.
  2. at 11:00: Your temperature sensor sends an odd value of 10 degrees. Since the previous value was 68, that is a change of 85%, well above the threshold you set in the rule of 25%. The filter rule would fail, a log item would be sent to the Event Log, and the data would not be sent to Influx.
  3. Any minimum frequency updates for this device would be skipped (NOTE: this includes ALL properties, not just sensorValue)

Now, one of two things would happen next.

Situation 1: The temperature really is 10 degrees!
  1. At 12:00, another reading comes through, and it's now 9 degrees! Since the new change is (10-9)/10 = 10%, the value now passes. 9 degrees is sent to Influx as it's now determined to be the right value.
  2. Minimum frequency updates will resume after the 12:00 update

Situation 2: It was a bad value, and it's really 65 degrees:
  1. At 12:00, another reading comes through, and it's now 65 degrees. The previous reported value was 10 degrees, even though it did not pass the filter, so the jump to 65 is a 550% change and this value is also rejected out of caution.
  2. At 13:00, another reading comes through and it's 65 degrees again. The value has not changed since the previous reading, so it passes the filter and moves on to Influx/Grafana.
  3. Minimum frequency updates will resume after the 13:00 update
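
Here's a small, hypothetical Python sketch that mirrors this timeline. It is not the plugin's code, but it captures the two details that matter: the comparison baseline is the last reported value (even when that value was blocked), and minimum-frequency updates stay locked until a reading passes again. The class and field names are made up.

[code]
class PercentChangeFilter:
    def __init__(self, max_change=0.25):
        self.max_change = max_change   # 25% maximum change between readings
        self.last_reported = None      # last value seen, whether or not it was sent
        self.locked = False            # "lock minimum frequency updates" state

    def submit(self, value):
        """Return True if the value should be sent to Influx."""
        if self.last_reported is None or self.last_reported == 0:
            passed = True              # nothing sensible to compare against yet
        else:
            change = abs(value - self.last_reported) / abs(self.last_reported)
            passed = change <= self.max_change
        self.last_reported = value     # the next comparison uses this reading
        self.locked = not passed       # a failure suspends minimum-frequency updates
        return passed


f = PercentChangeFilter(max_change=0.25)
print(f.submit(68))   # 10:00 - True: sent to Influx
print(f.submit(10))   # 11:00 - False: an 85% change, blocked and logged
print(f.submit(9))    # 12:00 - True: only a 10% change from 10, so 9 is sent
# Situation 2 plays out the same way: 65 right after 10 is a 550% change and
# is blocked, but the next 65 reading is a 0% change and passes.
[/code]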

To see what is happening with your data filters, you can use the Plugins -> Grafana Home Dashboard -> Print filter blocks to Event Log menu item.

This will show you the last 50 values that your filter rules blocked from going to Influx. Use it to see if the filters are behaving as you expect.

Example 3: Graphing your HVAC setpoints based on the season. The coolsetpoint and heatsetpoint are set when you are in the appropriate modes (cool, heat, or cool/heat); when you are in the off season, the values go to 0, which makes for an ugly graph. You can easily filter coolsetpoint and heatsetpoint so that they are only sent to Influx when they are greater than 0.
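For illustration, a tiny hypothetical sketch of that check; the state-name spellings follow the example above and may not match your devices exactly.

[code]
def setpoint_passes(state, value):
    """Block zero (off-season) setpoints; let everything else through."""
    if state in ("coolsetpoint", "heatsetpoint"):
        return value > 0
    return True

print(setpoint_passes("coolsetpoint", 0))    # False: off season, not sent
print(setpoint_passes("heatsetpoint", 68))   # True: heating season, sent to Influx
[/code]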

Mike
Attachments: Screen Shot 2018-05-17 at 3.45.19 PM.png
