
IP address 'UNKNOWN' in logs? #45

Closed
Makeshift opened this issue Apr 29, 2021 · 4 comments

Comments

@Makeshift

Makeshift commented Apr 29, 2021

I was trying to implement fail2ban on my host box, and when browsing through the journalctl logs, I noticed that all connections to the bastion containers as logged by sshd appear to come from 'UNKNOWN' IPs:

sshd[32094]: Failed password for root from UNKNOWN port 65535 ssh2
sshd[41614]: Accepted publickey for cbell from UNKNOWN port 65535 ssh2: RSA SHA256:NAQ3eyAsG/Ixv8kAVVmvdStQQTr+6BfM7p/swY8G3UQ

Connections to the host show the IP as expected.

The socket unit appears to correctly name itself with the IP, so I could parse that with a custom fail2ban filter (though this is a bit awkward, as the unit name doesn't differentiate between successful and failed logins):

sshd_worker@91-172.26.0.109:22-88.xxx.xxx.66:12437.service: Main process exited, code=exited, status=255/EXCEPTION
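That unit-name parsing could be sketched roughly as follows. `extract_client_ip` is a hypothetical helper of mine, not something the project provides, and it assumes the instance format is `<conn#>-<local addr>:<port>-<remote addr>:<port>`:

```shell
#!/bin/bash
# Hypothetical helper (not part of the project): pull the client IP out of a
# per-connection unit name, assuming the instance format
#   <conn#>-<local addr>:<port>-<remote addr>:<port>
extract_client_ip() {
  echo "$1" | sed -E 's/^.*:[0-9]+-([0-9.]+):[0-9]+\.service$/\1/'
}

extract_client_ip "sshd_worker@91-172.26.0.109:22-203.0.113.5:12437.service"
# → 203.0.113.5
```

A fail2ban filter could then match on the same pattern in the journal, with the caveat noted above that it can't tell a failed login from a successful one.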

But your example in the README shows the IP address appearing as expected in the sshd logs. Any idea why this differs?

@joshuamkite
Owner

I've just tried this myself and saw the same behaviour as you: 'UNKNOWN' IP. Reviewing that line in the README, it was added on September 8th, 2018, so it would have come in with release 4.3. There have been many changes since. On a (non-AWS) local physical host I can see that the default sshd behaviour on Ubuntu 20.04 still shows the calling IP address in the systemd journal, so I suspect the container isn't seeing the caller's IP address for some reason. I don't know why.

@joshuamkite
Owner

I'm going to close this as it's been several months without progress. I'm sorry, but I don't know why this should be. This project is very much a 'spare time' project for me, and whilst a solution here would be nice, I don't think it's a major issue.

@Makeshift
Author

> I'm going to close this as it's been several months without progress. I'm sorry, but I don't know why this should be. This project is very much a 'spare time' project for me, and whilst a solution here would be nice, I don't think it's a major issue.

Entirely fair, cheers anyway!

The reason I raised this is that the sheer number of connections I was getting was killing the host by spinning up lots of containers (it's only a t3a.micro).

I wrote a very hacky and unpleasant workaround that does the job for my use-case: it blocks all IPs that a geo-lookup reports to be outside a provided list of country codes. I just patched /opt/start_worker.sh in systemd.tpl:

# Rendered via Terraform template: \$ is escaped so it survives into
# the generated script; unescaped references are interpolated at render time.
cat << EOF > /opt/start_worker.sh
#!/bin/bash
# %i instance name passed in by the systemd template unit
name=\$1
whitelistedCountryCodes="${whitelistedCountryCodes}"

function log { logger -s -t "vpc" -- "\$1"; }

# REMOTE_ADDR is set by systemd for per-connection socket-activated services
userCountryCode=\$(curl --silent "http://ip-api.com/json/\$REMOTE_ADDR" | jq -r '.countryCode')

if [[ \$whitelistedCountryCodes == *"\$userCountryCode"* ]]; then
  log "User \$REMOTE_ADDR country code \$userCountryCode is in list \$whitelistedCountryCodes, starting worker: ${bastion_host_name}_\$name"
  /usr/bin/docker run --rm -i --hostname ${bastion_host_name}_\$name -v /dev/log:/dev/log -v /opt/iam_helper:/opt:ro -v /etc/ssh:/etc/ssh:ro sshd_worker
else
  log "User \$REMOTE_ADDR country code \$userCountryCode is not in list \$whitelistedCountryCodes, not starting worker."
fi
EOF

chmod +x /opt/start_worker.sh

cat << EOF > /etc/systemd/system/sshd_worker@.service
[Unit]
Description=SSH Per-Connection docker ssh container

[Service]
Type=simple
ExecStart=/opt/start_worker.sh "%i"
StandardInput=socket
RuntimeMaxSec=43200
OOMPolicy=kill

[Install]
WantedBy=multi-user.target
EOF
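One detail of the substring check in that script seems worth calling out (a standalone sketch below, with example country codes of my own): if the geo-lookup fails, `jq` produces nothing, `userCountryCode` ends up empty, and an empty pattern inside `*"..."*` matches any list, so the check fails open.

```shell
#!/bin/bash
# Standalone sketch of the substring whitelist check used above;
# the country codes here are examples, not from the project.
whitelistedCountryCodes="GB IE"

is_whitelisted() {
  [[ "$whitelistedCountryCodes" == *"$1"* ]]
}

is_whitelisted "GB" && echo "GB allowed"    # → GB allowed
is_whitelisted "FR" || echo "FR blocked"    # → FR blocked
# Caveat: an empty code (failed lookup) matches any list, i.e. it fails open.
is_whitelisted ""   && echo "empty allowed" # → empty allowed
```

A stricter version would reject an empty or `null` country code before doing the match.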

All of my users happen to be in the UK, so this was a reasonable workaround for me. If anybody else comes across a better solution, I'd love to hear about it, but I understand this thread probably isn't the best place to discuss it, so drop me an email.

Thanks

@joshuamkite
Owner

> Thanks

I'll respond here as this may be useful to others: I've only ever employed this in production with some form of IP pass-listing in front of it. This is why the README has this section:

> It is essential to limit incoming service traffic to whitelisted ports. If you do not, then internet background noise will exhaust the host resources and/or lead to rate limiting from Amazon on the IAM identity calls, resulting in denial of service.
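For reference, that pass-listing can be as simple as a security group ingress rule in the plan. The following is only a sketch with hypothetical resource and variable names, not a resource from this project:

```hcl
# Hypothetical example: restrict the bastion's SSH port to a fixed
# pass-list of CIDRs (names here are illustrative only)
variable "allowed_cidrs" {
  type    = list(string)
  default = ["203.0.113.0/24"]
}

resource "aws_security_group_rule" "bastion_ssh_in" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = var.allowed_cidrs
  security_group_id = aws_security_group.bastion.id
}
```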

This would be true even with a larger instance. I can see what you're trying to do and why; it's an interesting approach. Other methods that I have seen work, beyond the basic pass-listing offered in this Terraform plan:

  • Run an uncontainerised EC2 bastion in front of this, like widdix have done, but heavily locked down: no sudo, no shell, etc.
  • Use a VPN

Both of these methods ensure that traffic can be restricted to a very specific IP address range, but unlike your solution they require an additional deployment.

Thanks and good luck!
