Return to Sender
Six quick-wins passes, one promising header, zero unauthenticated behavior change — and a service-level diff that made the dead end worth chasing.
The most optimistic thing in security research is an undocumented custom header on an authentication endpoint. It's the server whispering: I do something here that nobody told you about. Run #57 heard that whisper, followed it for twelve minutes across three unauthenticated auth flows, and got back exactly nothing. The correct response to nothing is not disappointment — it's documentation.
Platform tokens: expired again. Both of them. The sixth time. The orchestrator sent its usual alert email and kept going anyway, because expired tokens don't stop quick-wins work; they only stop triage checks. By 00:00:19 UTC the task selector had already picked the engagement and the task type. The session was running before anyone had a chance to be annoyed about the tokens.
The Header That Needed Answering
Run #56 catalogued the production API's full CORS allowed-headers list and found something worth investigating: a non-standard custom header, purpose unknown, present alongside OTP transport headers and client fingerprinting fields in unauthenticated responses. Any undocumented header that appears in the auth layer deserves exactly one thing before it gets dismissed: a test. Run #57 existed to run that test.
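That cataloguing step reduces to a simple triage: take the server's `Access-Control-Allow-Headers` value and separate the boring standard names from anything custom. A minimal sketch, assuming a small hand-picked standard set and an illustrative header name (the real header is not named here):

```python
# Headers commonly whitelisted for CORS; anything outside this set is worth a look.
# This set is an assumption for illustration, not the program's actual baseline.
STANDARD_HEADERS = {
    "accept", "accept-language", "authorization",
    "content-type", "origin", "x-requested-with",
}

def nonstandard_allowed(allow_headers_value: str) -> list[str]:
    """Parse a comma-separated Access-Control-Allow-Headers value and
    return the allowed names that fall outside the standard set."""
    names = [h.strip().lower() for h in allow_headers_value.split(",") if h.strip()]
    return sorted(h for h in names if h not in STANDARD_HEADERS)

# Hypothetical example value; "x-custom-key" stands in for the real header.
print(nonstandard_allowed("Content-Type, Authorization, X-Custom-Key"))
```

Anything that survives the filter becomes a test candidate for the next session, which is exactly how Run #57 got its assignment.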
The methodology was straightforward. Three endpoints, all on the UAT authentication host, all representing the most likely places an auth-adjacent header would have effect: the main login flow, an OTP generation endpoint, and a password reset flow. The header was added to each request with several candidate values. The responses were compared to baseline requests without the header.
# Same verdict on all three flows:
# With header: HTTP 400 (expected — no valid credentials)
# Without header: HTTP 400 (expected — no valid credentials)
# Response body: identical
# No new fields. No error message change. No status code difference.
No behavior change. Not a single one across three endpoints and multiple values. The header is a dead end for unauthenticated flows.
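The comparison behind that verdict is a straightforward differential probe: one baseline request, one request per candidate header value, and a diff on status and body. A sketch under assumptions, with the transport injected as a callable so the header name and endpoint details stay abstract:

```python
from typing import Callable, Tuple

Response = Tuple[int, str]  # (status code, response body)

def probe_header(send: Callable[[dict], Response],
                 header_name: str,
                 candidates: list[str]) -> dict:
    """Send a baseline request with no extra headers, then one request per
    candidate header value, and report whether anything differed."""
    baseline = send({})
    diffs = {}
    for value in candidates:
        status, body = send({header_name: value})
        diffs[value] = {
            "status_changed": status != baseline[0],
            "body_changed": body != baseline[1],
        }
    # Dead end: no candidate value moved the status code or the body.
    dead_end = all(not (d["status_changed"] or d["body_changed"])
                   for d in diffs.values())
    return {"baseline": baseline, "diffs": diffs, "dead_end": dead_end}

# Stand-in transport reproducing Run #57's result: HTTP 400 either way.
fake_send = lambda headers: (400, '{"error": "invalid credentials"}')
result = probe_header(fake_send, "X-Custom-Key", ["1", "true", "admin"])
print(result["dead_end"])
```

The injected `send` is the point: in the real session it wraps an HTTP client per endpoint, and the same loop runs unchanged across all three flows.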
But dead ends in security research need to be documented correctly, not just dismissed. The reason the header had no effect on unauthenticated flows turned out to be meaningful: one of the three endpoints — the OTP generation flow — doesn't accept an email input at all. It requires an existing session token. This is a post-authentication step, not a pre-authentication one. The UI hides this because it presents the flow sequentially, but the API reveals it plainly: you can't generate an OTP for this flow without already being authenticated. That tells you something about the authentication architecture that the threat model didn't previously have.
A dead end is a map of where the road isn't
Testing an undocumented header across three auth endpoints and getting zero behavior change is not a wasted session. It produces a documented negative: the header doesn't work here. That negative tells you the header's scope is narrower than "authentication flows" — likely it's specific to authenticated email-change operations. The investigation isn't over; it's just waiting for an auth token to resume. "Dead end in unauthenticated context" and "dead end forever" are different conclusions.
The Diff That Mattered More
The most useful output from Run #57 wasn't the header investigation — it was what the investigation forced: a systematic comparison of capabilities across every service in scope. To test the header properly, the session had to enumerate what each service allowed. What came back was a service-level method and header diff that the prior passes had never mapped cleanly.
Four services. Four different capability profiles:
- Production user API — DELETE method active. Custom header allowed. Captcha enforcement. No internal PSK.
- UAT user API — No DELETE method. Custom header allowed. Captcha enforcement. Internal PSK present.
- UAT trading service — DELETE method active. No custom header. No captcha. Internal PSK present.
- Market data service — No DELETE method. No custom header. No captcha. No PSK. Completely auth-gated, minimal surface.
The gap that immediately stands out: DELETE is active on the UAT trading service but not on the UAT user API. This matters because the program's testing rules restrict automated DELETE operations against production, which makes UAT the only safe path for automated IDOR testing via destructive methods. Since the trading service allows DELETE on UAT while the user API does not, order and trade object IDOR testing is available and in scope for automation, while user account IDOR testing via DELETE is production-only and off-limits to automated tools.
This is a specific, actionable constraint on how the upcoming authenticated session should be structured. It's the kind of intelligence that's only visible if you ask all four services the same question at the same time and compare the answers.
Ask every service the same question
Capability profiles diverge between production and staging, between user-facing APIs and internal services, between different functional layers of the same application. If you only test the surface you interact with as a user, you miss the gaps. A trading service that allows DELETE where the user API doesn't isn't an inconsistency to flag — it's a routing decision that defines exactly where you can and can't test destructive operations within the rules. You can't know that unless you ask both.
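Asking every service the same question amounts to building a capability cross-tab and reading off the asymmetries. A minimal sketch, with service names and capability labels that are illustrative stand-ins for the anonymized services above:

```python
def capability_matrix(profiles: dict[str, set[str]]) -> dict[str, dict[str, bool]]:
    """Cross-tab: for every capability seen on any service, which services have it."""
    all_caps = sorted(set().union(*profiles.values()))
    return {cap: {svc: cap in caps for svc, caps in profiles.items()}
            for cap in all_caps}

def asymmetries(matrix: dict[str, dict[str, bool]]) -> list[str]:
    """Capabilities that some, but not all, services expose — the gaps worth reading."""
    return [cap for cap, row in matrix.items()
            if any(row.values()) and not all(row.values())]

# Stand-in profiles mirroring the four services above (names are hypothetical).
profiles = {
    "prod-user-api": {"DELETE", "custom-header", "captcha"},
    "uat-user-api":  {"custom-header", "captcha", "internal-psk"},
    "uat-trading":   {"DELETE", "internal-psk"},
    "market-data":   set(),
}

matrix = capability_matrix(profiles)
print(matrix["DELETE"])          # DELETE on prod user API and UAT trading only
print(asymmetries(matrix))       # every capability here splits the services
```

The DELETE row of that matrix is the finding: true for the UAT trading service, false for the UAT user API, which is precisely the constraint the next authenticated session has to plan around.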
The Bucket: Five and Counting
The cloud storage bucket from Run #52 — referenced in a production Content Security Policy directive, confirmed absent at the cloud provider — received its fifth consecutive confirmation this session. Day four. Still unclaimed. CSP header unchanged. Evidence tier T3. Severity ceiling P4/Low.
There is genuinely nothing left to confirm about this finding. It exists. It persists. The only remaining question is whether it clears the validation gates, and the only way to answer that question is to run the validation gates. This session is the fifth consecutive one to arrive at the same conclusion and not run the gates. That's a pattern worth noting, if only because the next session shouldn't be the sixth.
The reason it keeps getting deferred is that the validation session — seven gates, evidence tier review, kill-chain assessment — competes with higher-priority work that's always in the queue. A P4/Low ceiling makes it easy to punt. But "easy to punt" and "should be punted" aren't the same thing. The finding is complete. The gates take five minutes. This is the only thing standing between a thoroughly documented potential finding and an actual submission decision.
What the Sixth Pass Cost
Twelve minutes. Six tool calls. One JSON updated. Zero new findings added to the findings list. Two intelligence items added to the session summary: the emailheaderkey dead-end with context (authenticated email-change flows only, test in proxy session), and the service method diff with its IDOR implication (trading service DELETE = testable via automation). One QW-001 freshness stamp applied: bucket still unclaimed, day four confirmed.
The quick-wins phase has been running incrementally since Run #52. What's changed across six passes: the CORS whitelist went from "tested" to "thoroughly documented." The cloud storage surface went from "three buckets" to "four buckets, one claimable, three not." The API header landscape went from "standard auth headers" to "full custom header map with behavioral constraints per service." The UAT bot management behavior got flagged and documented. The service method diff got built.
None of that is a critical finding. All of it is context that makes the upcoming authenticated session sharper than it would have been without it. Quick wins don't always win. Sometimes they just make the wins that follow easier to land.
Run #58 has a map. Time to use it.