Former Member
4 years ago
Support for both versions in scripts
Hi,
Initial impression of the new version is great! The UI with grouping by category makes sense.
And I can't wait for the TouchID support. :)
Unfortunately the changed UI also breaks all exi...
Former Member
4 years ago
Yeah, I totally understand the spirit of the EA. Better to aim for the best UI, no matter how many changes it still takes.
I can't speak for my colleagues, as everyone uses direnv as they wish, or doesn't use it at all. But I have found it to be really, really great for setting the environment based on the directory. At the moment I mostly work with Terraform with a wide variety of providers, and while waiting for a real CD pipeline, Terraform is normally run on my own machine. The most common variables to set are e.g. `AWS_PROFILE` for selecting the role and account, and different API secrets (from 1Password). Sometimes also Ansible Vault passphrases and whatnot.
Running Terraform through `op run` doesn't feel optimal, as not all `terraform` runs need secrets, and the expiring 1Password session is annoying, especially with a long passphrase and without Touch ID ;)
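For context, the kind of invocation I mean looks roughly like this (the env file name and the reference are just made up):

```bash
# op resolves the op:// references from the env file and injects them
# only for the duration of this one command
op run --env-file=secrets.env -- terraform plan

# secrets.env would contain lines like:
#   TF_VAR_api_token=op://vault/item/field
```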
The variables set with direnv stay in the shell session (until changing directory out of the hierarchy). So far I've had something like this in an `.envrc` (requiring that the op session is valid):
```bash
source_up
export FOO="$(op get item ITEM --vault VAULT --fields FIELD)"
```
OK, now the fun part starts. :p
On the weekend I already made a first version of a direnv extension. For v1 I did indeed start with `op read`, but then thought that it might be more efficient to fetch all secrets with just one `op` call, as there seems to be significant latency.
So the next idea was to use `op run` like this:
```bash
direnv_load op run --env-file=/dev/stdin -- direnv dump <<OP
FOO=op://vault/item/field
# ...
OP
```
So `op` would resolve the secrets and add the variables to the env, which is then dumped and read by `direnv`. But there are (at least) two problems:
- `OP_SESSION_*` variables are removed from the run env, which leads `direnv` to unset them from the real shell session. But no problem, I can save them before the command and then export them back (see the sketch after this list).
- Existing env vars are not overridden with the `--env-file` vars (which is wanted in the real use case for the `op run` command). Unsetting all the variables would require parsing them, which I wanted to avoid (for v2).
Another smaller downside of `op run` in this case is that it needlessly scans through all the environment variables to find `op://` refs, even though I only want the `--env-file` ones processed.
So now I'm thinking of using `op inject` by feeding the variables through it. That would only process the needed ones, but it would require passing the variables in a certain structure and either evaluating the output or parsing and exporting the vars one by one.
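The 'evaluating the output' variant could look roughly like this (untested sketch, item names made up; it would break if a secret contains a double quote):

```bash
# feed a small export script through op inject, which resolves the
# op:// references on stdout, then eval the result to export the vars
eval "$(op inject <<'OP'
export FOO="op://vault/item/field"
export BAR="op://vault/other-item/password"
OP
)"
```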
I'm happy to hear your thoughts on this. :)
I'll try to publish the code soon. I work on this mostly on my own time, so it depends on life, too.