Forum Discussion
CLI Slow Performance
1P_Phil -
MacBook Pro 16" running macOS Sequoia 15.6.1, desktop client version 8.11.8 (81108040) on the PRODUCTION channel, and CLI version 2.32.0 installed via Homebrew.
Yes, I took a look at the SDK; it did not help. Every call incurred at least a 1.2-second delay involving a round trip to the cloud. That is pretty much a dealbreaker for a command-line tool that has a local data source (the 1P desktop client) available to it.
Regarding the native 1P CLI...
Without the `--cache` argument:
```
~$ time op items list > /dev/null

real	0m4.302s
user	0m0.122s
sys	0m0.057s
```

With the `--cache` argument:

```
$ time op --cache items list > /dev/null

real	0m0.986s
user	0m0.119s
sys	0m0.055s
```

Something simple like listing vaults (which should be instant):

```
$ time op vault list > /dev/null

real	0m1.022s
user	0m0.082s
sys	0m0.037s

$ time op --cache vault list > /dev/null

real	0m1.103s
user	0m0.083s
sys	0m0.039s
```

With all network disabled:

```
$ time op --cache items list > /dev/null
[ERROR] 2025/09/18 15:28:47 Get "https://my.1password.com/api/v2/account/keysets?__t=1758209327.444": dial tcp: lookup my.1password.com: no such host

real	0m0.184s
user	0m0.018s
sys	0m0.025s
```

As far as I can tell, a network round trip is required no matter what, which adds a full second to every CLI call.
It's not just one network round trip. Using https://www.mitmproxy.org/ you can see that each CLI call makes multiple network round trips, in series. For a cached `op vault list`, it's two:
```
HTTPS GET my.1password.com /api/v2/overview?__t=1758759373.782            200  …plication/json  3.1k  115ms
HTTPS GET my.1password.com /api/v3/account?__t=1758759374.203&attrs=tier  200  …plication/json  2.0k  125ms
```

For a cached `op read` it's four, all in series:

```
HTTPS GET my.1password.com /api/v2/overview?__t=1758759627.806            200  …plication/json  3.1k  118ms
HTTPS GET my.1password.com /api/v3/account?__t=1758759628.196&attrs=tier  200  …plication/json  2.0k  122ms
HTTPS GET my.1password.com /api/v3/account?__t=1758759628.354&attrs=tier  200  …plication/json  2.0k  110ms
HTTPS GET my.1password.com /api/v2/overview?__t=1758759628.490            200  …plication/json  3.1k  115ms
```

The last two requests are exact duplicates of the first two, meaning the CLI could save two round trips per call to `op item read` just by keeping the results of `/api/v3/account` and `/api/v2/overview` in memory. It could save another round-trip's worth of time by requesting `/api/v3/account` and `/api/v2/overview` in parallel rather than in series.
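The serial-vs-parallel point can be illustrated with a toy sketch. The `fetch` function below is hypothetical; it just simulates a ~100ms round trip, so two serial calls cost the sum while two parallel calls cost roughly the slowest one:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(path):
    """Hypothetical stand-in for one HTTPS round trip (~100ms)."""
    time.sleep(0.1)
    return path

paths = ["/api/v3/account", "/api/v2/overview"]

# In series: total time is the sum of the round trips.
start = time.monotonic()
serial = [fetch(p) for p in paths]
serial_elapsed = time.monotonic() - start

# In parallel: total time is roughly one round trip.
start = time.monotonic()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(fetch, paths))
parallel_elapsed = time.monotonic() - start

assert serial == parallel            # same results either way
assert parallel_elapsed < serial_elapsed
```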
I also experimented with scripting mitmproxy to cache all of these requests, which effectively eliminates the network delay. The good news is that this saves ~220-500ms per CLI call (a little over 100ms per round trip). The bad news is that CLI calls still take a little over 400ms, which is still slow for reading at most a few kilobytes of data that is sitting locally on my machine. I'm not sure what the remaining slowdown is. It could be network round trips made by the 1Password desktop app rather than the CLI, or it could be something else.
With network delay:
```
$ hyperfine 'op vault list'
Benchmark 1: op vault list
  Time (mean ± σ):     649.8 ms ±   9.1 ms    [User: 76.1 ms, System: 26.4 ms]
  Range (min … max):   636.9 ms … 664.8 ms    10 runs

$ hyperfine 'op read "op://Private/gmail.com/username"'
Benchmark 1: op read "op://Private/gmail.com/username"
  Time (mean ± σ):     929.9 ms ±  17.6 ms    [User: 159.6 ms, System: 49.6 ms]
  Range (min … max):   901.2 ms … 957.1 ms    10 runs
```

Without network delay:

```
$ HTTP_PROXY=http://localhost:8080 HTTPS_PROXY=http://localhost:8080 hyperfine 'op vault list'
Benchmark 1: op vault list
  Time (mean ± σ):     414.0 ms ±   6.9 ms    [User: 60.8 ms, System: 18.2 ms]
  Range (min … max):   397.4 ms … 423.2 ms    10 runs

$ HTTP_PROXY=http://localhost:8080 HTTPS_PROXY=http://localhost:8080 hyperfine 'op read "op://Private/gmail.com/username"'
Benchmark 1: op read "op://Private/gmail.com/username"
  Time (mean ± σ):     435.5 ms ±   7.6 ms    [User: 100.5 ms, System: 31.1 ms]
  Range (min … max):   422.1 ms … 449.8 ms    10 runs
```

If you're interested in reproducing the mitmproxy caching, here is the script I used:
```python
from mitmproxy import http
from urllib.parse import urlparse, parse_qs, urlencode


class OnePasswordCache:
    def __init__(self):
        self.cache = {}

    def get_cache_key(self, flow):
        """Generate cache key by removing the __t parameter."""
        url = urlparse(flow.request.pretty_url)
        params = parse_qs(url.query)
        # Remove __t parameter for cache key
        params.pop('__t', None)
        return (
            url.scheme, url.netloc, url.path, urlencode(params),
            flow.request.headers.get("x-agilebits-session-id"),
        )

    def request(self, flow: http.HTTPFlow):
        # Only cache GET requests to the 1Password API
        if (flow.request.method == "GET" and
                "1password.com/api/" in flow.request.pretty_url):
            cache_key = self.get_cache_key(flow)
            if cache_key in self.cache:
                # Serve from cache
                cached_response = self.cache[cache_key]
                flow.response = http.Response.make(
                    cached_response["status_code"],
                    cached_response["content"],
                    cached_response["headers"]
                )
                # Add header to indicate cache hit
                flow.response.headers["X-Cache"] = "HIT"

    def response(self, flow: http.HTTPFlow):
        # Cache successful 1Password API responses
        if (flow.request.method == "GET" and
                "1password.com/api/" in flow.request.pretty_url and
                flow.response.status_code == 200):
            cache_key = self.get_cache_key(flow)
            # Only cache if we didn't serve from cache
            if "X-Cache" not in flow.response.headers:
                self.cache[cache_key] = {
                    "status_code": flow.response.status_code,
                    "content": flow.response.content,
                    "headers": dict(flow.response.headers)
                }


addons = [OnePasswordCache()]
```

Save as `cache_1password.py`, then run `mitmproxy -s cache_1password.py -p 8080`.
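To sanity-check the key normalization in the script above, here is a standalone sketch of the same idea outside mitmproxy: two URLs that differ only in `__t` should map to the same cache key. The `session_id` argument stands in for the `x-agilebits-session-id` header, since there is no real flow object here:

```python
from urllib.parse import urlparse, parse_qs, urlencode

def cache_key(url, session_id):
    """Same normalization as the addon: drop __t, keep everything else."""
    parts = urlparse(url)
    params = parse_qs(parts.query)
    params.pop('__t', None)
    return (parts.scheme, parts.netloc, parts.path,
            urlencode(params, doseq=True), session_id)

a = cache_key("https://my.1password.com/api/v3/account?__t=1758759628.196&attrs=tier", "sess1")
b = cache_key("https://my.1password.com/api/v3/account?__t=1758759628.354&attrs=tier", "sess1")
c = cache_key("https://my.1password.com/api/v2/overview?__t=1758759628.490", "sess1")

assert a == b   # only __t differs -> same key, so the second request is a cache hit
assert a != c   # different path -> different key
```

Keying on the session ID as well means a stale cache from an expired session is never served to a new one.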
- chuckwolber · 2 months ago · New Contributor
jeremyschlatter - Thanks, that got some interesting results.

I can confirm what you are saying about cached and un-cached network round trips. Extending your idea, I found a way to get it working (mostly) offline, with all (cached) transactions landing in about 200ms. I also learned there may be some other options (e.g. a connect server) worth considering.

I tested with a basic pass-through `mitmproxy`, using `--cache=true` and `--cache=false`. Un-cached and cached `op vault list` make three and two HTTPS calls respectively. Un-cached and cached `op read` make seven and four HTTPS calls respectively.
```
$ export HTTP_PROXY="http://localhost:8080"
$ export HTTPS_PROXY="http://localhost:8080"

$ hyperfine --min-runs 10 'op vault list --cache=false'
Benchmark 1: op vault list --cache=false
  Time (mean ± σ):      1.055 s ±  0.039 s    [User: 0.078 s, System: 0.023 s]
  Range (min … max):    0.951 s …  1.093 s    10 runs

# Example log from each run.
HTTPS GET https://my.1password.com/api/v2/account/keysets?__t=1759013968.522 HTTP/2.0 200 application/json 14.1k 214ms
HTTPS GET https://my.1password.com/api/v1/vaults?__t=1759013969.160 HTTP/2.0 200 application/json 29.8k 275ms
HTTPS GET https://my.1password.com/api/v3/account?__t=1759013969.487&attrs=tier HTTP/2.0 200 application/json 2.0k 136ms

$ hyperfine --min-runs 10 'op vault list --cache=true'
Benchmark 1: op vault list --cache=true
  Time (mean ± σ):     780.6 ms ±  29.3 ms    [User: 67.4 ms, System: 19.4 ms]
  Range (min … max):   749.8 ms … 839.1 ms    10 runs

# Example log from each run.
HTTPS GET https://my.1password.com/api/v2/overview?__t=1759014609.432 HTTP/2.0 200 application/json 3.7k 159ms
HTTPS GET https://my.1password.com/api/v3/account?__t=1759014610.008&attrs=tier HTTP/2.0 200 application/json 2.0k 155ms

$ hyperfine --min-runs 10 'op read "op://Personal/Google/username" --cache=false'
Benchmark 1: op read "op://Personal/Google/username" --cache=false
  Time (mean ± σ):      1.896 s ±  0.036 s    [User: 0.125 s, System: 0.039 s]
  Range (min … max):    1.829 s …  1.938 s    10 runs

# Example log from each run.
HTTPS GET https://my.1password.com/api/v2/account/keysets?__t=1759015002.564 200 application/json 14.1k 214ms
HTTPS GET https://my.1password.com/api/v3/account?__t=1759015003.157&attrs=tier 200 application/json 2.0k 181ms
HTTPS GET https://my.1password.com/api/v1/vaults?__t=1759015003.341 200 application/json 29.8k 203ms
HTTPS GET https://my.1password.com/api/v3/account?__t=1759015003.595&attrs=tier 200 application/json 2.0k 137ms
HTTPS GET https://my.1password.com/api/v1/vault/<REDACTED1>/items/overviews?__t=1759015003.739 200 application/json 446k 408ms
HTTPS GET https://my.1password.com/api/v1/vault/<REDACTED1>/item/<REDACTED2>?__t=1759015004.172 200 application/json 3.2k 119ms
HTTPS GET https://my.1password.com/api/v2/overview?__t=1759015004.293 200 application/json 3.7k 232ms

$ hyperfine --min-runs 10 'op read "op://Personal/Google/username" --cache=true'
Benchmark 1: op read "op://Personal/Google/username" --cache=true
  Time (mean ± σ):      1.060 s ±  0.020 s    [User: 0.114 s, System: 0.036 s]
  Range (min … max):    1.033 s …  1.101 s    10 runs

HTTPS GET https://my.1password.com/api/v2/overview?__t=1759015576.630 200 application/json 3.7k 139ms
HTTPS GET https://my.1password.com/api/v3/account?__t=1759015577.151&attrs=tier 200 application/json 2.0k 152ms
HTTPS GET https://my.1password.com/api/v3/account?__t=1759015577.335&attrs=tier 200 application/json 2.0k 187ms
HTTPS GET https://my.1password.com/api/v2/overview?__t=1759015577.547 200 application/json 3.7k 137ms
```

Using your proxy implementation, the total time improved significantly, to about 500ms.
```
$ export HTTP_PROXY="http://localhost:8080"
$ export HTTPS_PROXY="http://localhost:8080"

$ hyperfine --min-runs 10 'op vault list --cache=false'
Benchmark 1: op vault list --cache=false
  Time (mean ± σ):     471.0 ms ±  12.0 ms    [User: 58.0 ms, System: 16.6 ms]
  Range (min … max):   455.9 ms … 487.0 ms    10 runs

$ hyperfine --min-runs 10 'op vault list --cache=true'
Benchmark 1: op vault list --cache=true
  Time (mean ± σ):     465.8 ms ±   9.1 ms    [User: 57.9 ms, System: 16.3 ms]
  Range (min … max):   452.2 ms … 479.2 ms    10 runs

$ hyperfine --min-runs 10 'op read "op://Personal/Google/username" --cache=false'
Benchmark 1: op read "op://Personal/Google/username" --cache=false
  Time (mean ± σ):     501.5 ms ±   6.5 ms    [User: 82.1 ms, System: 25.0 ms]
  Range (min … max):   493.3 ms … 513.1 ms    10 runs

$ hyperfine --min-runs 10 'op read "op://Personal/Google/username" --cache=true'
Benchmark 1: op read "op://Personal/Google/username" --cache=true
  Time (mean ± σ):     480.4 ms ±  19.0 ms    [User: 80.5 ms, System: 25.0 ms]
  Range (min … max):   457.3 ms … 517.5 ms    10 runs
```

However, I was still not able to get this to work offline, so I did some packet sniffing and realized that `mitmproxy` was still making a connection to 1P.com to get the upstream certificate. According to the `mitmproxy` documentation, `upstream_cert` defaults to `true` and `connection_strategy` defaults to `eager`.
There are good reasons for both of those defaults, but I do not think they matter here. When I added `--set upstream_cert=false --set connection_strategy=lazy` to the `mitmproxy` arguments, and after warming up the cache, everything landed in about 200ms with no requirement to be online.
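Combined with the caching addon from earlier in the thread, the full invocation would look something like this (assuming you saved the addon as `cache_1password.py` as described above):

```shell
# Start mitmproxy with the caching addon; skip the eager upstream
# connection and upstream-certificate fetch so cached replies work offline.
mitmproxy -s cache_1password.py -p 8080 \
    --set upstream_cert=false \
    --set connection_strategy=lazy

# Then, in the shell running op:
export HTTP_PROXY="http://localhost:8080"
export HTTPS_PROXY="http://localhost:8080"
op vault list   # run once while online to warm the cache
```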
```
$ export HTTP_PROXY="http://localhost:8080"
$ export HTTPS_PROXY="http://localhost:8080"

$ hyperfine --min-runs 10 'op vault list --cache=false'
Benchmark 1: op vault list --cache=false
  Time (mean ± σ):     173.5 ms ±   4.6 ms    [User: 46.6 ms, System: 13.7 ms]
  Range (min … max):   166.8 ms … 181.2 ms    16 runs

$ hyperfine --min-runs 10 'op vault list --cache=true'
Benchmark 1: op vault list --cache=true
  Time (mean ± σ):     171.0 ms ±   6.1 ms    [User: 45.4 ms, System: 13.3 ms]
  Range (min … max):   156.9 ms … 181.2 ms    17 runs

$ hyperfine --min-runs 10 'op read "op://Personal/Google/username" --cache=false'
Benchmark 1: op read "op://Personal/Google/username" --cache=false
  Time (mean ± σ):     220.4 ms ±   4.0 ms    [User: 77.6 ms, System: 23.5 ms]
  Range (min … max):   213.1 ms … 225.8 ms    13 runs

$ hyperfine --min-runs 10 'op read "op://Personal/Google/username" --cache=true'
Benchmark 1: op read "op://Personal/Google/username" --cache=true
  Time (mean ± σ):     189.1 ms ±   7.7 ms    [User: 69.5 ms, System: 21.9 ms]
  Range (min … max):   171.3 ms … 200.9 ms    15 runs
```

Looking at the packet responses from 1P, it seems the replies are JWE. I think these can be temporarily persisted locally, since the session token is required to decrypt them. Security-wise, that puts this on par with using `ssh-agent`. That should work for most desktop automation cases, but extracting the session key would be necessary to work reliably offline.
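If the JWE replies are indeed safe to persist (they are only decryptable with the live session token), the in-memory dict in the caching addon could be swapped for a disk-backed one, so the cache survives proxy restarts. A minimal sketch of that idea, independent of mitmproxy; the file path and key format here are my own choices, not anything the CLI or mitmproxy uses:

```python
import pickle
import tempfile
from pathlib import Path

class DiskCache:
    """Toy disk-backed dict: entries persist across process restarts."""

    def __init__(self, path):
        self.path = Path(path)
        self.data = pickle.loads(self.path.read_bytes()) if self.path.exists() else {}

    def __contains__(self, key):
        return key in self.data

    def __getitem__(self, key):
        return self.data[key]

    def __setitem__(self, key, value):
        self.data[key] = value
        # Rewrite the whole file on each store; fine for a handful of entries.
        self.path.write_bytes(pickle.dumps(self.data))

# Example: store an (already encrypted) response body, then reload it
# in a fresh instance as if the proxy had been restarted.
path = Path(tempfile.gettempdir()) / "op_cache_demo.pkl"
cache = DiskCache(path)
cache[("GET", "/api/v2/overview")] = b"...JWE ciphertext placeholder..."

reloaded = DiskCache(path)
assert reloaded[("GET", "/api/v2/overview")] == b"...JWE ciphertext placeholder..."
```

Since the key includes the session ID in the real addon, a restarted session would simply miss the stale entries rather than decrypt them.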
I went down the session token "rabbit hole" and discovered that the `--raw` argument, although prominently displayed in the documentation, no longer does what it says it does. The alternative appears to be to use a connect server, which I have not yet played with. But it does appear to be available to all 1P customers as of this year.