AWS connections

PostPosted: Sun Feb 18, 2018 8:32 am
by snowjay
Any idea what would cause the plugin to spawn (and seemingly orphan) all these AWS connections?

Code:
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  macmini.49531          ec2-54-174-104-1.9553  CLOSE_WAIT
tcp4      37      0  macmini.49530          ec2-35-168-98-11.https CLOSE_WAIT
tcp4       0      0  macmini.49529          ec2-54-174-104-1.9553  CLOSE_WAIT
tcp4      37      0  macmini.49528          ec2-35-168-98-11.https CLOSE_WAIT
tcp4       0      0  macmini.49519          ec2-54-174-104-1.9553  CLOSE_WAIT
tcp4      37      0  macmini.49518          ec2-35-168-98-11.https CLOSE_WAIT
tcp4       0      0  macmini.49517          ec2-54-174-104-1.9553  CLOSE_WAIT
tcp4      37      0  macmini.49516          ec2-35-168-98-11.https CLOSE_WAIT
tcp4       0      0  macmini.49514          ec2-54-174-104-1.9553  CLOSE_WAIT
tcp4       0      0  macmini.49509          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49508          ec2-52-87-80-95..https LAST_ACK
tcp4       0      0  macmini.49507          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49506          ec2-52-87-80-95..https LAST_ACK
tcp4       0      0  macmini.49501          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49500          ec2-52-87-80-95..https LAST_ACK
tcp4       0      0  macmini.49499          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49498          ec2-52-87-80-95..https LAST_ACK
tcp4       0      0  macmini.49496          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49495          ec2-52-87-80-95..https LAST_ACK
tcp4       0      0  macmini.49494          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49493          ec2-52-87-80-95..https LAST_ACK
tcp4       0      0  macmini.49490          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49489          ec2-52-87-80-95..https LAST_ACK
tcp4       0      0  macmini.49488          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49487          ec2-52-87-80-95..https LAST_ACK
tcp4       0      0  macmini.49479          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49478          ec2-35-168-98-11.https LAST_ACK
tcp4       0      0  macmini.49477          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49476          ec2-35-168-98-11.https LAST_ACK
tcp4       0      0  macmini.49474          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49473          ec2-52-70-30-52..https LAST_ACK
tcp4       0      0  macmini.49472          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49471          ec2-52-70-30-52..https LAST_ACK
tcp4       0      0  macmini.49469          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49468          ec2-52-70-30-52..https LAST_ACK
tcp4       0      0  macmini.49467          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49466          ec2-52-70-30-52..https LAST_ACK
tcp4       0      0  macmini.49464          ec2-54-174-104-1.9553  LAST_ACK
tcp4       0      0  macmini.49463          ec2-52-70-30-52..https LAST_ACK

Re: AWS connections

PostPosted: Sun Feb 18, 2018 8:52 am
by Colorado4Wheeler
I'm assuming they're API calls; the plugin makes a metric ton of them (so many that I had to change my plugins to avoid being overwhelmed by all the state changes). With all the calls it makes, I'm not surprised there are remnants. This may also be why there's such a massive memory leak in the plugin (I restarted mine yesterday as it sat at 900MB!). Since Chameleon is MIA (he was dealing with health issues the last I heard; hope he's OK), I've considered forking this project to try to plug the leak, but I'm neck-deep in alligators right now with my new plugin. It's on the roadmap ;).

Re: AWS connections

PostPosted: Sun Feb 18, 2018 10:08 am
by snowjay
Thanks for the update; I had no idea he was MIA, as I haven't been active here in a while myself. (My Indigo/Insteon just works!)

I only noticed the issue after I started sending my dd-wrt logs to Splunk and saw all sorts of outgoing traffic to Amazon getting dropped. Further investigation showed the Indigo server constantly sending FIN/ACKs that were getting dropped, which sent me down the rabbit hole of finding out what was causing it.

I may experiment with changing the update time (right now at 60 secs) to see if I can cut down on the number of orphaned connections; otherwise I'll probably disable it altogether, as the only thing I was using it for was home/away status.

If you ever tackle forking the project, keep me in the loop; I can provide logs for you.


Re: AWS connections

PostPosted: Sun Feb 18, 2018 4:39 pm
by snowjay
These are my drops:

Code:
Feb 18 17:09:59 Feb 18 17:11:18 kernel: DROP IN=br0 OUT=vlan2
SRC= DST= LEN=40 TOS=0x00 PREC=0x00 TTL=63 ID=51922 DF PROTO=TCP SPT=38568 DPT=80
SEQ=237400789 ACK=538878403 WINDOW=1403 RES=0x00 ACK FIN URGP=0

After turning off the plugin, I started seeing the same pattern of traffic from my Fire TVs to AWS. Interesting that it only seems to be AWS traffic that triggers it, but some research reveals it's a known issue with iptables: it seems that after the first FIN it tears down the connection and drops everything else, thinking it's invalid. So now I need to figure out how to allow that "invalid" traffic leaving my internal network so it's not clogging up my logs.
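In case it helps anyone else hitting this, one way I could stop the log spam without opening anything up would be to silently discard those late FIN/ACKs before the logging rule sees them. This is only a sketch; the chain, the interface name (vlan2, taken from the log above), and whether a given build wants -m conntrack or the older -m state all need to be checked against your own dd-wrt config:

```shell
# Sketch: silently drop conntrack-INVALID outbound FIN/ACKs so they never
# reach the LOG rule. "vlan2" is the WAN interface from the log above;
# adjust to your setup. Older builds may need "-m state --state INVALID".
iptables -I FORWARD -o vlan2 -p tcp --tcp-flags FIN,ACK FIN,ACK \
  -m conntrack --ctstate INVALID -j DROP
```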

Re: AWS connections

PostPosted: Sun Feb 18, 2018 6:27 pm
by snowjay
I realize I shouldn't do it, but right now I'm less concerned about letting FIN/ACKs out of my internal network than about having my logs be useless. Everywhere I research this, it seems to be a bug in iptables, and I doubt I'm going to change that overnight. ;) I've seen threads dating back to 2000 describing the issue.

Re: AWS connections

PostPosted: Sun Feb 18, 2018 6:51 pm
by snowjay
If I make the change, my intent isn't to allow all invalid traffic; it would be to allow only FIN/ACKs originating from my internal network.

Iptables isn't some obscure firewall, and maybe I'm just not finding the right solution to the issue.
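For the record, a rule along those lines might look like the following. Purely a sketch; the interface (br0) and subnet are placeholders for whatever your LAN actually uses:

```shell
# Sketch: accept conntrack-INVALID FIN/ACKs only when they originate from the
# internal network, leaving all other invalid traffic alone. br0 and
# 192.168.1.0/24 are assumptions -- substitute your LAN interface and subnet.
iptables -I FORWARD -i br0 -s 192.168.1.0/24 -p tcp \
  --tcp-flags FIN,ACK FIN,ACK -m conntrack --ctstate INVALID -j ACCEPT
```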