Forum Discussion
Autofill now immediately submits (enters) (feature becomes bug)
1P_Tommy No worries about me. I turned the feature off, and over the last few days the number of captchas I've had to fill has been decreasing pretty steadily. Google is a black box, so I couldn't confirm anything until I got the email that said outright that failed logins were having an effect, but the correlation was hard to miss after a few days of the feature going live. reCAPTCHA v2 is still prevalent, and the invisible v3 getting at least some adoption helps, but oddly Google's flagship product, search, still uses v2, which amplified the effect. As long as a site didn't happen to expire my session, and didn't offer a native mobile app that used Face ID or biometrics for login, I was fine.

I saw the pop-up box but wasn't really sure what behavior to expect. I guess I also assumed it would wait for any captcha to be filled, since the captcha response would be part of the POST request for the login, even if the process really involves an additional request to obtain the token. I suppose I had simply assumed that by "credentials" it meant the entire login flow: even if it happens under the hood, any anti-bot/credential-stuffing mechanism, however ineffective in practice, is part of the credentials in a sense. It tends to be a Chekhov's Gun for websites: if it's coded into the page, the login is likely going to use it, even if it makes little sense to do so.
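To make the failure mode concrete, here's a minimal sketch of why "fill and immediately press Enter" breaks a captcha-gated login. This is purely illustrative; `submit_login` and the token value are hypothetical, though `g-recaptcha-response` is the field name reCAPTCHA v2 actually adds to the form.

```python
# Hypothetical server-side check: names here are illustrative,
# not any real site's API.

def submit_login(form: dict) -> dict:
    """Reject the POST if the captcha token is missing."""
    if not form.get("g-recaptcha-response"):
        # No token yet: this counts as a failed, bot-like login attempt,
        # which is exactly what feeds the captcha-frequency heuristics.
        return {"status": 403, "error": "captcha token missing"}
    if form.get("username") and form.get("password"):
        return {"status": 200}
    return {"status": 401, "error": "bad credentials"}

# 1. Autofill fills the credentials and immediately "presses Enter",
#    before the page's captcha widget has fetched a token:
premature = {"username": "alice", "password": "hunter2"}
print(submit_login(premature))   # 403: token not obtained yet

# 2. The intended flow: the widget's extra request obtains a token
#    first, and only then is the form POSTed.
complete = dict(premature, **{"g-recaptcha-response": "tok_abc123"})
print(submit_login(complete))    # 200
```

The point is that the token comes from a separate request made after page interaction, so a submit fired at autofill time races it and loses.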
Actually, as a side note: the fact that one small change can cause all sorts of inconveniences, mostly from systems designed to provide "security" in broad terms, shows starkly how poor the heuristic methods really are and how much they rely on assumptions. I'm sure there are assistive technologies for the disabled that are just as able to trip up systems which can be quite costly to implement, yet in the end are both easily circumvented and trigger false positives by the ton.

My background is a legal one, and out of everyone in my relatively small law school graduating class I'm the only person I know who had a tech-centric skillset and made a conscious choice not to go into IP/in-house practice (mostly because I had moral objections to equating intellectual and real property rights; I went into criminal and public interest law instead, since those affect the liberty interest of my clients). It's pretty jarring how big a gulf exists between the two worlds, and how much reliance on tech illiteracy versus legal illiteracy there is in practice: attribution made either in bad faith or in ignorance as to whether an IP address can be directly linked to a specific individual, their actions, and their intent at a specific time, or an address on a blockchain to a single person, backed by "expert" testimony that was clearly vetted by attorneys either too cynical or too unfamiliar with the technology to properly cross-examine. Even as a law student on spring break, I found myself essentially picking a jury for a first-degree murder trial by simply looking at public Facebook profiles and picking out those who claimed no relationship to law enforcement when their social networks said otherwise. When it resulted in a hung jury, the prosecutor simply scheduled the retrial, on the same evidence, during the two weeks of my finals, and got a guilty verdict.
I'm no longer actively practicing (fortuitously, some stupid internet joke somehow enabled me to retire before most of my classmates have paid off their loans), but if nothing else that has only given me more time to look at how flimsy assumptions get treated as ironclad truths that lay the groundwork for precedent. Examples range from a series of cases where the court failed to note that jurisdiction was established based on geolocation of Cloudflare IPs (of course they appear to be purposefully availed to be located in the US; the word "Anycast" never appears in the docket) to the FBI taking a two-year delay on blockchain analysis, based on a single objection, to obscure the fact that the attribution data is effectively entirely crowdsourced, with both sides ending up arguing the wrong issue.

Bots can't be charged with a crime, of course, as code is not a person, but OFAC absolutely thought that a deployed smart contract was one, and it took five months for them to figure out a way to explain how they appeared to sanction a smart contract that is literally two metaphors and exists in sync around the world without an operator. I'm curious whether this case, where a small, seemingly insignificant alteration creates a self-perpetuating misidentification loop, ever becomes an example of how far from "beyond a reasonable doubt" some admitted evidence really is when understood correctly. Since the admission of evidence happens in front of a judge and not a jury, it comes down to whether the judge properly understands what the technology can and cannot ascertain with near certainty. Bad forensic science has certainly led to innocent people being executed (Cameron Todd Willingham is perhaps the best-known example), but even with lower stakes, illustrative examples help far more than merely technical, or technical-sounding, explainers.
Perhaps something helpful can come out of this at some point, since snake oil is so often treated as the genuine cure in cases of consequence whenever anything vaguely technical is part of the evidence at trial. Here's to hoping, although I can't say I'm holding my breath. The best trial attorney I worked for had me print out his emails for him to read as recently as 2014.
Cheers