[ANSWERED]: "Timed out" errors

Posted on
Thu Apr 30, 2015 8:06 pm
Perry The Cynic offline
Posts: 836
Joined: Apr 07, 2008

[ANSWERED]: "Timed out" errors

Occasionally, SQL Logger says this in the Indigo log:
Code:
  SQL Logger Error                exception trying to insert row into table device_history_670732672 ("LI-S1 Library Motion")
  SQL Logger Error                timed out
  SQL Logger Error                INSERT INTO device_history_670732672 ("onoffstate", "state", "state_alarm", "state_bypassed", "state_open", "state_preparing", "state_ready", "state_trouble", "state_unavailable") VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s);

My SQL Logger is configured to log to a fairly busy Postgres database, so I can imagine some delays. If this is a timeout set by the plugin, is there a way to extend it? Or is this an error coming from Postgres itself? (Or whatever shim layer you're using...?)

Most stores work fine. When these errors happen, they come in bursts (presumably because the database is committing and taking a wee bit longer).

Cheers
-- perry

Posted on
Thu Apr 30, 2015 9:12 pm
matt (support) offline
Site Admin
Posts: 21411
Joined: Jan 27, 2003
Location: Texas

Re: "Timed out" errors

Hi Perry,

I believe we have the timeout set to 8 seconds. In your scenario above, do you think it might be taking longer than that? I don't mind increasing it some, but if the timeout value I'm seeing (8 s) isn't really the issue, then we'll need to dig deeper into it.


Posted on
Thu Apr 30, 2015 10:47 pm
Perry The Cynic offline
Posts: 836
Joined: Apr 07, 2008

Re: "Timed out" errors

matt (support) wrote:
Hi Perry,

I believe we have the timeout set to 8 seconds. In your scenario above, do you think it might be taking longer than that? I don't mind increasing it some, but if the timeout value I'm seeing (8 s) isn't really the issue, then we'll need to dig deeper into it.

It's conceivable. I set this up in 2011, and ever since, SQL Logger has faithfully recorded all state transitions for all my (~250 now) devices. The device history tables for my motion sensor devices have >1 million records each. (If I ever want to know whether there was motion in my library on some day in 2012, it's right there. :-) And from a scan of my logs, it does seem that the ones failing are the ones with frequent data updates (motion sensors, virtual devices that update every minute).

I don't have an obvious way to time these requests. Individual insert commands execute quite quickly, but then most SQL Logger requests succeed as well. Can you tell me where in the plugin to hack the timeout? I'll report back on whether setting it to (say) 20s solves the problem for me.

Cheers
-- perry

Posted on
Fri May 01, 2015 5:03 pm
matt (support) offline
Site Admin
Posts: 21411
Joined: Jan 27, 2003
Location: Texas

Re: [ANSWERED]: "Timed out" errors

Try changing kDbConnectTimeout in indigologger/indigosqly.py.
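
For reference, the change would look something like this -- a sketch only, assuming the plugin hands the constant to a psycopg2 connect call; apart from kDbConnectTimeout and the file path, every name here is an assumption, not the plugin's actual source:

Code:
  # indigologger/indigosqly.py -- hypothetical sketch
  import psycopg2

  kDbConnectTimeout = 30  # seconds; the shipped value is 8

  def open_connection(host, dbname, user, password):
      # connect_timeout is a standard libpq/psycopg2 connection
      # parameter, given in seconds
      return psycopg2.connect(host=host, dbname=dbname, user=user,
                              password=password,
                              connect_timeout=kDbConnectTimeout)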


Posted on
Fri May 08, 2015 8:22 pm
Perry The Cynic offline
Posts: 836
Joined: Apr 07, 2008

Re: [ANSWERED]: "Timed out" errors

I kicked the timeout value up to 30 seconds and the problem went away. It looks like that PostgreSQL database really experienced 20-second hiccups; I'll be "adjusting" that shortly. No further investigation on your end is needed.

Consider making that timeout a configuration parameter.

Cheers
-- perry

Posted on
Wed May 13, 2015 11:08 pm
kw123 offline
Posts: 8333
Joined: May 12, 2013
Location: Dallas, TX

Re: [ANSWERED]: "Timed out" errors

My SQLite database was ~3 GB.
I could reduce it to ~700 MB without losing real info:

- Many devices create multiple records for one real change, e.g. RFXCOM devices: a device with 4 states that all get updated creates 4 entries in the same second, updating one state after the other. Only the last record is needed, and many of these devices update every 15 seconds with 4 states: >10,000 entries per day, >3 million per year, for ONE device! ==> reduced to ~700k records/year.

- Many variables do not need to be tracked, e.g. fast-changing temp variables used in triggers, scripts, etc. Pruning them to ~2 months of data is more than sufficient. ==> that removed ~30% of the data.


==> Export the data, run a selection script, and reimport/recreate the database (a sketch of that pruning follows below).
Check out the Python script at http://forums.indigodomo.com/viewtopic.php?f=168&t=13713
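
A minimal sketch of that pruning, done in place with Python's sqlite3; every path, table name, and column name here is hypothetical, so adapt them to your own schema:

Code:
  import sqlite3

  conn = sqlite3.connect("indigo_history.sqlite")  # hypothetical path

  # Collapse multi-state bursts: keep only the newest row per second in a
  # device history table (id = integer primary key, ts = timestamp column).
  conn.execute("""
      DELETE FROM device_history_123
      WHERE id NOT IN (
          SELECT MAX(id) FROM device_history_123
          GROUP BY strftime('%Y-%m-%d %H:%M:%S', ts)
      );
  """)

  # Prune fast-changing variables to the last ~2 months.
  conn.execute("""
      DELETE FROM variable_history_456
      WHERE ts < datetime('now', '-2 months');
  """)

  conn.commit()
  conn.execute("VACUUM")  # reclaim the freed pages on disk
  conn.close()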


Karl

Posted on
Thu May 14, 2015 7:09 am
RogueProeliator offline
Posts: 2501
Joined: Nov 13, 2012
Location: Baton Rouge, LA

Re: [ANSWERED]: "Timed out" errors

Perry The Cynic wrote:
I kicked the timeout value up to 30 seconds and the problem went away. It looks like that PostgreSQL database really experienced 20-second hiccups; I'll be "adjusting" that shortly. No further investigation on your end is needed.

Perry, have you tried rebuilding the indices and/or statistics on the database? We usually set up a maintenance plan at work to do those two routine tasks and, of course, it is super easy to do so in SQL Server. I'm thinking about setting those up for my PostgreSQL database as well -- looks like you would just do it through the standard SQL interface.
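
In PostgreSQL those two tasks boil down to ANALYZE for the statistics and REINDEX for the indices. A minimal sketch via psycopg2, with the connection string as an assumption and the table name borrowed from the log earlier in this thread:

Code:
  import psycopg2

  conn = psycopg2.connect("dbname=indigo_history")  # hypothetical DSN
  conn.autocommit = True  # VACUUM cannot run inside a transaction block
  with conn.cursor() as cur:
      cur.execute("VACUUM ANALYZE;")  # reclaim space, refresh planner statistics
      cur.execute("REINDEX TABLE device_history_670732672;")  # rebuild indices
  conn.close()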

Was just curious if you had tried those and if they had an effect - trying to prevent mine from going down the "getting slow" path. :-)

Adam

Posted on
Thu May 14, 2015 1:01 pm
Perry The Cynic offline
Posts: 836
Joined: Apr 07, 2008

Re: [ANSWERED]: "Timed out" errors

RogueProeliator wrote:
Perry, have you tried rebuilding the indices and/or statistics on the database? We usually set up a maintenance plan at work to do those two routine tasks and, of course, it is super easy to do so in SQL Server. I'm thinking about setting those up for my PostgreSQL database as well -- looks like you would just do it through the standard SQL interface.

Was just curious if you had tried those and if they had an effect - trying to prevent mine from going down the "getting slow" path. :-)

Adam

Mine turned out to be a mismatch between the filesystem and database block sizes that accidentally happened during an upgrade a few months ago. As block size mismatches are wont to do, this didn't actually break anything; it just made the worst-case behavior (much) worse. Now that I'm paying attention, I actually added indices and it's still doing just fine. PostgreSQL is usually pretty good about self-maintenance (as long as you don't screw around with its storage...)

[ObGeekyTech: I switched zfs implementations, and somewhere during the upgrade the filesystem record size got reset from 8K (good for PostgreSQL) to the default of adaptive 128K (not good at all). This is not a problem you'll have on HFS+. :-)]
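
The fix itself is a single dataset property. A sketch of checking and restoring it from Python, with the dataset name hypothetical; note that recordsize only applies to data written after the change, so existing tables need a dump/restore to be rewritten at the new size:

Code:
  import subprocess

  DATASET = "tank/pgdata"  # hypothetical dataset holding the PostgreSQL data

  # Inspect the current value.
  subprocess.run(["zfs", "get", "recordsize", DATASET], check=True)

  # Restore the PostgreSQL-friendly 8K record size.
  subprocess.run(["zfs", "set", "recordsize=8K", DATASET], check=True)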

Cheers
-- perry

Posted on
Thu May 14, 2015 1:56 pm
RogueProeliator offline
Posts: 2501
Joined: Nov 13, 2012
Location: Baton Rouge, LA

Re: [ANSWERED]: "Timed out" errors

Perry The Cynic wrote:
[ObGeekyTech: I switched zfs implementations, and somewhere during the upgrade the filesystem record size got reset from 8K (good for PostgreSQL) to the default of adaptive 128K (not good at all). This is not a problem you'll have on HFS+. :-)]

Out of curiosity, what sparked the desire for ZFS -- are you using it for snapshots, or did another feature steer you in that direction?

Posted on
Thu May 14, 2015 6:40 pm
Perry The Cynic offline
Posts: 836
Joined: Apr 07, 2008

Re: [ANSWERED]: "Timed out" errors

RogueProeliator wrote:
[ObGeekyTech: I switched zfs implementations, and somewhere during the upgrade the filesystem record size got reset from 8K (good for PostgreSQL) to the default of adaptive 128K (not good at all). This is not a problem you'll have on HFS+. :-)]

Out of curiosity, what sparked the desire for ZFS -- are you using it for snapshots, or did another feature steer you in that direction?

The killer feature, for me, is integrity - the file system never needs "checking". Every few weeks, zfs goes around and scrubs all the data, notices if any of it has gone bad, and fixes it (from a mirror disk). That file I stuck in there ten years ago will still be there, intact, when I ask for it - not probably, but certainly.

I regularly pull a disk off the mirror and send it offsite for backup. The previous disk comes back and re-integrates into the pool. Disaster recovery consists of installing zfs on some Mac and attaching the offsite disk; it mounts right up. Essentially, the backup format is the file system format; I could mount that disk on Linux or Solaris or anywhere else OpenZFS is available.

Snapshots are handy. Snapshot clones are way handy - I can clone an old view of a filesystem while the original is up and working, make separate changes, and then (if I choose) swap it out for the original in constant time. (And still have the original to put back if I screwed that up...)

ZFS can stream snapshot deltas between pools while the source pool is mounted and in use. Last time I upgraded my server hardware, I installed a new set of disks, streamed all the zfs content over, verified that the new server worked fine, and the final downtime was <1 hour to make the switch.
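
That streaming step is just zfs send piped into zfs receive. A sketch with hypothetical pool and snapshot names:

Code:
  import subprocess

  # Replicate a recursive snapshot of the old pool into the new one.
  send = subprocess.Popen(["zfs", "send", "-R", "oldtank@migrate"],
                          stdout=subprocess.PIPE)
  subprocess.run(["zfs", "receive", "-F", "newtank/migrated"],
                 stdin=send.stdout, check=True)
  send.stdout.close()
  send.wait()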

There are risks and drawbacks, to be sure - this is an open-source product, and Apple might screw it up one day. But then, my server still runs 10.8.5 and I don't feel like I'm missing anything terribly important, so...

Cheers
-- perry
