Forum Discussion

sethraymond
New Contributor
3 months ago

Feature Request - Access vault from inside docker container

Background

At our work, we spawn a Docker container that contains the tooling required to build our Yocto-based OS image. We want to pull a key from our shared 1Password vault and inject it into our OS image. Developers all have the op plugin installed and are logged in to their accounts.

Problem

If we install the op CLI inside the Docker image, developers have to log in to their account again when they spawn the container to get access to the vault. This is cumbersome. Our workaround is to have developers run a script on the host before spawning the container; the script uses the op CLI to read the key from the shared vault and writes it to disk on the host, and we mount that file into the Docker container. Writing the key to disk is a potential security vulnerability that we'd like to avoid.

Proposed Solution

I'd like to install the op plugin into our builder Docker image and access the op CLI from the host. We can do that for things like SSH - we mount the SSH_AUTH_SOCK into the container and it just proxies requests back into the host. Could there be a domain socket for the op CLI to allow us to do something similar?
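
For reference, the SSH-agent pattern described above is just a bind-mounted Unix domain socket; a minimal sketch (the image name is a placeholder):

```shell
# Forward the host's ssh-agent into the container by bind-mounting its
# Unix domain socket and pointing SSH_AUTH_SOCK at the mounted path.
# "builder-image" is a hypothetical image name.
docker run --rm -it \
  -v "$SSH_AUTH_SOCK:/ssh-agent" \
  -e SSH_AUTH_SOCK=/ssh-agent \
  builder-image
```

The request is essentially for the op CLI to expose an equivalent socket that could be mounted the same way.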

Similar Requests

Feature Request: First-Class Support for Dev Containers and 'op' CLI | 1Password Community - but not just for devcontainers, more generic.

8 Replies

  • Hi sethraymond,

    Have you tried having a simple script that the devs can call on the host machine? The script then uses "op inject" to populate environment variables. I use this for my scripts; here's an example usage:

    env $(op inject -i ./.env.template | xargs) \
      python ./fetch_posts.py --write-output --output-file

    My .env.template file is simply a file that contains references to secrets in my 1Password account.

    username=op://[Vault]/[Item]/login
    password=op://[Vault]/[Item]/password

    Then I add this to a startup script that sets the environment variables. Once the script returns, the environment variables are no longer available to the end user.

    Thanks,
    Phil

    • sethraymond
      New Contributor

      Hi 1P_Phil,

      I suppose this would work to do something like

      env $(op inject -i ./.env.template | xargs) \
        sh -c 'docker run -e USERNAME="$username" ...'

      to support the use case I have (authenticating inside of a docker container with the op CLI installed). It's not the most robust mechanism because it requires all of our developers to have their 1Password credentials named exactly the same in their Employee vault. For a bit more clarification, our process is:

      1. Log in to the op CLI
      2. Clone our repository
      3. Run the enter_docker.sh script inside the repository
        1. This script sets environment variables and bind/volume mounts appropriately for the container

      ---

      I've got a working prototype where the enter_docker.sh script does an op read "op://Shared Vault/Service Account Token: builder/credential" and injects the result into the container as the OP_SERVICE_ACCOUNT_TOKEN environment variable. It doesn't really give us an appropriate audit trail to see who is consuming items out of the vaults (that's where the proposed socket mechanism would be nice), but it does work for the functionality we need.
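
      A minimal sketch of that prototype, assuming the vault layout above (the image name, mounts, and item ID are placeholders):

      ```shell
      #!/bin/sh
      # enter_docker.sh (sketch): read the shared service-account token on the
      # host and hand it to the container via the environment; nothing is
      # written to disk. The item is referenced by UUID here to sidestep the
      # ':' parsing issue mentioned below.
      set -eu

      OP_SERVICE_ACCOUNT_TOKEN="$(op read 'op://Shared Vault/<item-uuid>/credential')"
      export OP_SERVICE_ACCOUNT_TOKEN

      # `-e VAR` with no value copies the variable from the host environment,
      # so the token value never appears on the command line itself.
      docker run --rm -it \
        -e OP_SERVICE_ACCOUNT_TOKEN \
        -v "$PWD:/workspace" \
        yocto-builder:latest
      ```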

      ---

      One question about service account tokens: When I call

      op read "op://Employee/Service Account Token: builder/credential"

      I get the following error:

      [ERROR] 2025/09/16 17:14:33 could not read secret 'op://Employee/Service Account Token: builder/credential': invalid secret reference 'op://Employee/Service Account Token: builder/credential': invalid character in secret reference: ':'

      The ':' character after Token is causing op read to choke. Is there a known workaround for this, or is the default name for the service account auth tokens just not compatible with the CLI? I've had to use the item UUID as a reference instead, but that hurts readability.
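
      One possible workaround (my assumption, not an official fix) is to bypass the secret-reference parser and fetch the field with op item get, which takes the item name as an ordinary argument:

      ```shell
      # `op item get` parses the item name as a plain argument, so the ':' in
      # the default service-account item name causes no trouble.
      # --reveal is needed on recent CLI versions to print concealed fields.
      op item get "Service Account Token: builder" \
        --vault Employee \
        --fields label=credential \
        --reveal
      ```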

  • Hi sethraymond,

    Thanks for the request.

    I'm curious: has the team explored using Service Accounts?

    Service accounts let your container pull secrets directly from the vault in a non-interactive way. The CLI supports them out of the box, as do the recently introduced SDKs.

    Docs: https://developer.1password.com/docs/service-accounts/
    SDKs: https://developer.1password.com/docs/sdks (Go, TS & Python)

    Let me know if this works for you.

    Thanks!
    Phil & the 1Password team

    • sethraymond
      New Contributor

      Hi 1P_Phil, thanks for the quick reply! We do use a service account for our Jenkins integration. I'm not sure that a service account is appropriate for this use case, though. We'd have to give each of the developers either their own unique service account (not ideal), or we'd have to share the same service account token, which is also not ideal. Unless you're suggesting we build the service account token into our Docker image, which would be doable if we're extremely careful about doing that securely.

      My preference would be to just be able to have developers authenticate as themselves as they go and pull secrets from the vault, but if you have a clearer picture as to how a service account could solve this problem, I'm all ears. I can also try to clarify the problem a bit more if that helps.

      • Olen
        New Contributor

        Could you use a service account and add the token as a Docker secret?
        If you add the service-account token itself to 1Password and give your developers access to it, the Docker secret can be generated at build time: the developer doing the build fetches the token from 1Password, and it is added to the container on demand.
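
        As a hedged sketch of that idea using BuildKit build secrets (all names and IDs are placeholders; requires a recent Docker with BuildKit enabled):

        ```shell
        # The developer fetches the shared service-account token at build
        # time and passes it to BuildKit as a build secret; it is mounted
        # only for RUN steps that request it and is never baked into a layer.
        OP_TOKEN="$(op read 'op://Shared Vault/<item-uuid>/credential')" \
          docker build --secret id=op_token,env=OP_TOKEN -t yocto-builder .

        # A Dockerfile step consumes it like:
        #   RUN --mount=type=secret,id=op_token \
        #       OP_SERVICE_ACCOUNT_TOKEN="$(cat /run/secrets/op_token)" \
        #       op read 'op://Shared Vault/<item-uuid>/credential' > /out/key
        ```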