Injecting SSH keys into a container


I need to store keys in a Docker container so it can log in to remote hosts without prompts. This is proving difficult, and I’m not sure where I’m going wrong.

In a sane environment, keys are stored in the user’s ~/.ssh directory. This doesn’t seem to fully apply to the discourse-docker environment. I’ve tried manually copying keys into authorized_keys, but this isn’t working. Additionally, I’ve run a:


within the container for the host’s IP, which succeeds. The issue is that when I remove the known_hosts file, it will prompt (yes/no) as usual; however, when I remove the authorized_keys and id_rsa files, it will still log in as if the key still exists.

This leads me to suspect some hidden directory is at play which stores the key files. Is this correct? I need a solution here.
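(For reference, there is no hidden key directory by default — the confusing part is that the files involved live on different machines. A sketch of which side checks what, using OpenSSH default paths and a hypothetical host name:)

```shell
# Client side (inside the container): outbound logins offer the private key,
# and known_hosts records the remote server's host key -- deleting it is why
# the yes/no prompt comes back.
CLIENT_KEY="$HOME/.ssh/id_rsa"
CLIENT_KNOWN_HOSTS="$HOME/.ssh/known_hosts"

# The file that actually *authorizes* a login lives on the remote server,
# not on the client -- so deleting authorized_keys inside the container
# cannot break logins made *from* the container.
SERVER_AUTH_KEYS="~/.ssh/authorized_keys"   # on the remote host

if command -v ssh >/dev/null 2>&1; then
  # Print which identity files ssh would try for a host (no connection made):
  ssh -G remote.example.com | grep -i identityfile
fi
```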

Any suggestions?

(Dean Taylor) #2

Consider using ssh-import-id; it tends to keep things nice and simple:


Thanks for the reply. I will look into this option, but a cursory look seems to indicate it is lacking in security — correct? It seems anyone could fetch the key, or is this not so?

(Dean Taylor) #4

Anybody can see the “public” side of the key - the “private” side you keep private - only having the private side of the key will allow access.

Consider reading:


I read through it, but there’s something not making sense here. If I can use this to log in remotely without a prompt, how could it possibly be secure? I do understand public/private — you can only decrypt with the private key — but public/private seems irrelevant at that point. If that’s the case, then the above method would allow anyone to transfer to a server, but not receive? This isn’t making much sense yet.

(Kane York) #6

I have my SSH key on Launchpad. This means that anyone can run ssh-import-id kanepyork. However, all this gives them is N, which is the product of two secret primes p and q — go read about RSA.
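(To make that concrete, here is a toy RSA round trip with the textbook tiny primes p=61, q=53 — nothing like real key sizes, purely illustrative. The public side (n, e) lets anyone encrypt, but decrypting requires d, which can only be computed from p and q:)

```shell
# Toy RSA with textbook primes -- illustrative only, do not use for anything real.
p=61; q=53
n=$((p * q))                  # n = 3233: this is all the public key exposes
e=17                          # public exponent
phi=$(( (p - 1) * (q - 1) ))  # phi = 3120: computable only if you know p and q
d=2753                        # private exponent: e*d = 1 (mod phi)

# Modular exponentiation: powmod base exp mod
powmod() {
  local b=$(( $1 % $3 )) x=$2 m=$3 r=1
  while [ "$x" -gt 0 ]; do
    if [ $(( x % 2 )) -eq 1 ]; then r=$(( r * b % m )); fi
    x=$(( x / 2 ))
    b=$(( b * b % m ))
  done
  echo "$r"
}

m=65                          # the "message"
c=$(powmod "$m" "$e" "$n")    # anyone can encrypt with the public (n, e)
m2=$(powmod "$c" "$d" "$n")   # only the holder of d can decrypt
echo "cipher=$c plain=$m2"    # cipher=2790 plain=65
```

Knowing n = 3233 alone doesn’t help an attacker: recovering d means factoring n back into p and q, which is the hard problem RSA rests on (at real key sizes).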

Basically, anyone can let me log into their server. That’s not too useful for them though.

However, here, why aren’t you using ./launcher enter app?


Wow, I must not be explaining myself too clearly.

Here is the end result I’m trying to reach:

A customer pays for an instance, and the instance is created on a remote cluster… now the customer would like to restore from, or back up to, a remotely stored RAID array.

I need to provision instances with containers that can SSH into at least the local host they are contained within, but even better would be the remote host containing the RAID array.

In either direction, having a key which can be publicly obtained with the proper command is not okay.

Entering a container is not an issue, and swapping keys is not an issue, but setting this up via automation is proving difficult.


Solved: docker exec and docker cp, along with rsync -e ‘ssh xxxxxxxxxxxx’. That gives me execution within the container, and moving data to and from it without prompts.
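(For anyone automating the same thing, the pieces fit together roughly like this. The container name, key path, backup path, and remote host below are all hypothetical, and the block is guarded so it is a no-op where docker isn’t available:)

```shell
# Hypothetical names: container "app", key at ./keys/id_ed25519,
# remote RAID host backup@storage.example.com.
CID="app"
KEY="./keys/id_ed25519"
REMOTE="backup@storage.example.com:/raid/backups/"

if command -v docker >/dev/null 2>&1; then
  # Copy the private key into the container and lock down its permissions
  docker cp "$KEY" "$CID:/root/.ssh/id_ed25519"
  docker exec "$CID" chmod 600 /root/.ssh/id_ed25519

  # Run rsync inside the container, pointing ssh at that key;
  # accept-new records the host key on first contact so nothing prompts
  docker exec "$CID" rsync -a \
    -e "ssh -i /root/.ssh/id_ed25519 -o StrictHostKeyChecking=accept-new" \
    /var/www/discourse/backups/ "$REMOTE"
fi
```

Because the key is pushed in at provision time rather than published to a key service, nothing here is publicly fetchable.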

(Kane York) #8

Sounds like you want to go to :slight_smile:

(Jeff Atwood) #9

You really should be using enter and not SSH @pl3bs.


I’m working on it

docker exec $cid xxxxxxxxx

right now, almost there :wink:

backup and restore: complete :slight_smile: