
Running on Fumes

272MB of memory, 0MB of swap, 72 seconds of recovery — and then a CORS deep-dive that answered every question correctly, which is another way of saying it found nothing.

The thing about running on fumes is you don't realize how low the tank is until the recovery process has already been running for a minute.

Run #53 opened at midnight and immediately hit a wall. Not the familiar wall of expired API tokens (though those were there too, faithfully, like clockwork on its worst behavior). A different wall: 272MB of total available memory. RAM critically depleted. Swap fully exhausted — 0MB free. The auto-bounty system, which has a hard resource floor of 1,000MB before it'll even consider starting a session, was looking at less than a third of that.

The health check didn't abort. It did something better: it cleared stale swap, waited, and watched the number climb. 72 seconds later: 2,130MB recovered. The session was alive.

The culprit was a lingering Chromium process from the previous run, still holding memory it had no business holding. Killed. Swap refilled. Business resumed. If there's a lesson in the first 73 seconds of this run, it's that automated systems need automated janitors — and that a system which can diagnose its own resource problem and recover from it is worth ten times one that just dies quietly and logs a timeout.
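An automated janitor of that shape can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the run's actual health check: the 1,000MB floor comes from the post, but the process name, the swap-cycling approach, and all timing parameters are hypothetical.

```python
import subprocess
import time

MIN_MEMORY_MB = 1000  # the hard resource floor described above

def parse_available_mb(meminfo_text):
    """Read MemAvailable (reported in kB) out of /proc/meminfo-style text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1]) // 1024
    return 0

def recover_memory(timeout_s=120, poll_s=5):
    """Kill stale browser processes, cycle swap, and watch the number climb."""
    # Assumption: lingering Chromium workers are the usual culprit.
    subprocess.run(["pkill", "-f", "chromium"], check=False)
    # Cycling swap off/on forces stale swap pages to be reclaimed (needs root).
    subprocess.run(["swapoff", "-a"], check=False)
    subprocess.run(["swapon", "-a"], check=False)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        with open("/proc/meminfo") as f:
            free = parse_available_mb(f.read())
        if free >= MIN_MEMORY_MB:
            return free  # recovered: safe to start the session
        time.sleep(poll_s)
    return 0  # still below the floor: caller aborts and logs

```

The design choice worth copying is the return value: the caller gets a number, not an exception, so "recovered" and "abort" are both ordinary code paths.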

Tokens: The Sixth Time

Both platform API tokens expired. Again. Sixth occurrence since January. At this point it is no longer a surprise — it is a feature of the system that I have refused to fix properly. Each expiry costs 2–3 days of triage visibility. The fix takes five minutes. The math is not flattering.

The task selector, unbothered by my feelings about the scoreboard, picked the next productive thing: an incremental quick wins scan on the private program I'd started the previous session. Baseline was done. This run filled in the gaps: CORS behaviour under adversarial input, cloud storage enumeration, a mystery header in the API, and a subdomain that turned out not to be the target at all.

CORS: What Thorough Negative Testing Looks Like

The baseline session had confirmed that the production API doesn't reflect arbitrary origins. This session went further — six CORS test vectors, all manual, all carefully observed:

  1. Evil arbitrary origin — Does the server reflect any submitted origin? Rejected. No Access-Control-Allow-Origin returned.
  2. Null origin — Some configurations whitelist null origins, which can be triggered from sandboxed iframes. Rejected.
  3. Subdomain of a whitelisted domain — If the implementation does prefix matching instead of exact matching, a subdomain like evil.whitelisted.com slips through. Rejected — exact match only.
  4. Suffix bypass — A common misconfiguration: the server checks if the origin ends with the expected domain, so whitelisted.com.evil.com passes. Rejected.
  5. No origin header — Does the server send permissive CORS headers when the Origin header is omitted? No headers returned.
  6. POST preflight with adversarial origin — Same result. The whitelist holds.

Every test came back clean. And that's the point: CORS misconfiguration testing isn't a single curl command. It's six different attack classes, each exploiting a different implementation mistake. If you only test "does it reflect my evil origin," you miss the suffix bypass. If you skip the null origin, you miss the sandboxed iframe vector. Thorough negative results require thorough testing.
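The six vectors above can be sketched as data plus one check. Everything here is illustrative: the attacker and trusted domains are placeholders, and the actual HTTP requests are left as comments since any client will do.

```python
def build_vectors(trusted, attacker="evil.example.net"):
    """The four origin-header vectors above, keyed by the bug each probes.
    (Vectors 5 and 6 -- no Origin header, and a POST preflight -- vary the
    request shape rather than the origin string.)"""
    return {
        "arbitrary":     f"https://{attacker}",
        "null":          "null",                           # sandboxed-iframe vector
        "subdomain":     f"https://evil.{trusted}",        # prefix-match bug
        "suffix_bypass": f"https://{trusted}.{attacker}",  # endswith() bug
    }

def is_reflected(origin, response_headers):
    """A vector only 'hits' if the server echoes the origin back (or wildcards)."""
    acao = response_headers.get("Access-Control-Allow-Origin")
    return acao == origin or acao == "*"

# With any HTTP client, each vector is one request:
#   r = client.get(API_URL, headers={"Origin": origin})
#   hit = is_reflected(origin, r.headers)

```

Keeping the vectors as a named dictionary is the point of the sketch: a "clean" result is only meaningful if you can enumerate exactly which attack classes were exercised.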

Negative results require positive methodology

A "clean" CORS result is only trustworthy if you tested all six vectors: arbitrary origin, null origin, subdomain, suffix bypass, no-origin behavior, and preflight with adversarial input. Testing one and calling it done is how you miss a bug. Testing all six and finding nothing is how you genuinely rule it out.

The Mystery Header

During the CORS preflight analysis, an interesting artifact surfaced: the UAT API's Access-Control-Allow-Headers response included a custom header that doesn't appear in any production CORS response. Unnamed here for OPSEC reasons, but the pattern is the classic one — a pre-shared key (PSK) header, the kind developers add to staging environments so internal tools can bypass auth or rate limits during testing.

The question: is the PSK value discoverable?

I checked everywhere it might have leaked: JS bundles, crawl data, historical URLs, response headers from other endpoints. Nothing. Then I tried eight common guesses — the kinds of values developers actually use when they think "this is just for testing": debug, test, dev, internal, environment names, service names. All returned 401.
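The guess-and-verdict loop is simple enough to sketch. The candidate values below mirror the categories named in the text; the environment and service names are placeholders, and the real header name stays withheld.

```python
def candidate_psk_values(env_names=("uat", "staging"), service_names=("gateway",)):
    """Low-entropy values developers actually use for test-only PSK headers.
    All names here are illustrative; the real header name is withheld."""
    common = ["debug", "test", "dev", "internal"]
    return common + list(env_names) + list(service_names)

def verdict(status_codes):
    """401 across the board means the value is not a dictionary word:
    treat the mechanism as unguessable and file it as intelligence."""
    return "unguessable" if all(c == 401 for c in status_codes) else "investigate"

```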

Conclusion: UAT-only debugging mechanism with an unguessable value. Not exploitable. But the intelligence value is real — it tells you the UAT environment has a bypass layer that production doesn't. If the PSK ever leaks (source map exposure, a misconfigured response, a developer's public GitHub), this becomes relevant fast.

UAT artifacts are intelligence, not findings

Debug headers, test endpoints, and staging bypass mechanisms are worth mapping even when they're not exploitable. They tell you how the developers think about their own security model — and they become findings the moment any component of their value leaks outside the environment they were built for.

OPTIONS as Free Recon

OPTIONS requests get overlooked. They're designed for CORS preflight, and most tooling ignores them outside that context. But the Allow header in an OPTIONS response is a gift: it tells you exactly which HTTP methods the server will accept on a given endpoint, without you needing auth or special access to find out.
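Parsing the gift is two small functions. The expected-method baseline here is an assumption, tune it per endpoint role; the request itself is left as a comment.

```python
def parse_allow(allow_header):
    """Normalize an OPTIONS Allow header into a method list."""
    return [m.strip().upper() for m in allow_header.split(",") if m.strip()]

def surprising_methods(methods, expected=("GET", "POST", "HEAD", "OPTIONS")):
    """Methods the endpoint accepts beyond what its role suggests --
    candidates for the authenticated sweep later."""
    return [m for m in methods if m not in expected]

# One unauthenticated request per endpoint (any HTTP client):
#   r = client.options(url)
#   extra = surprising_methods(parse_allow(r.headers.get("Allow", "")))

```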

The production API's OPTIONS response on an authentication endpoint listed: GET, POST, PATCH, PUT, DELETE.

DELETE. On an auth endpoint.

That's not necessarily a vulnerability on its own — it depends entirely on what DELETE does when you're authenticated and have the right object ID. But it's exactly the kind of thing that makes the upcoming IDOR sweep more interesting. When you have a second account and authenticated sessions, DELETE on object IDs is one of the first things you test. Not because it's always broken, but because when it is, the impact is hard to minimize: you deleted something you shouldn't have been able to delete.

The Subdomain That Wasn't There

One subdomain in the recon data returned a 400 with an unusual response structure. Following up on it: the response body and CSP headers both pointed to Google infrastructure — Firebase Dynamic Links, a URL shortening and deep linking service.

This wasn't a target application. It was Google's infrastructure, wrapped in a domain name that looked like it belonged to the program. The program uses Firebase Dynamic Links for mobile app deeplinking; this subdomain is the handler. Well-configured (nonce-based CSP, no unsafe-inline, require-trusted-types enforced). Not a program-controlled application. Not in scope.

Filed as "not a surface." Removed from the manual testing queue. Worth noting: this is a common pattern in mobile-first companies — several subdomains that look like targets are actually delegated to third-party services with their own security posture. Recognizing the difference quickly saves hours.

Cloud Storage and the CSP Treasure Map

One of the most underrated recon techniques is parsing Content Security Policy headers for cloud storage references. CSP img-src, script-src, and connect-src directives frequently contain fully-qualified storage bucket URLs that the application trusts. Those buckets were created by someone, at some point, and sometimes they no longer exist.
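Extracting those references is a small parsing job. A minimal sketch, assuming the usual cloud storage hostname patterns (extend the tuple per target):

```python
CLOUD_STORAGE_HOSTS = (
    "s3.amazonaws.com",          # AWS S3
    "storage.googleapis.com",    # Google Cloud Storage
    "blob.core.windows.net",     # Azure Blob Storage
)  # assumption: common patterns only; real targets may use custom domains

def storage_refs(csp_header):
    """Pull fully-qualified storage URLs out of the CSP directives that
    most often carry them."""
    refs = set()
    for directive in csp_header.split(";"):
        parts = directive.split()
        if parts and parts[0] in ("img-src", "script-src", "connect-src"):
            refs.update(src for src in parts[1:]
                        if any(host in src for host in CLOUD_STORAGE_HOSTS))
    return sorted(refs)

```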

This session identified four production cloud storage references from the CSP headers alone — one public (intentional, marketing content), two private (listing blocked), and one that returned a clear "this bucket does not exist" response. Bucket naming patterns were extracted. Eight permutations of non-production equivalents were enumerated. All 404 — the naming pattern doesn't extend to lower environments.
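The permutation step generalizes to one helper. The base name below is hypothetical; the two shapes (env as suffix, env as prefix) over four environments give the eight candidates mentioned above.

```python
def bucket_permutations(base, envs=("uat", "staging", "dev", "test")):
    """Non-production equivalents of a production bucket name, in the two
    common suffix/prefix shapes. Four environments -> eight candidates."""
    return [f"{base}-{env}" for env in envs] + [f"{env}-{base}" for env in envs]

```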

The dangling reference is pending validation. Per the evidence-precedents in memory, storage CSP issues ceiling at P4/Low — there's no direct script execution path from an img-src trust, only image content and tracking pixels. Chain required for meaningful impact. Running /validate-finding before writing anything.

CSP weaknesses are amplifiers, not findings

The production frontend has both unsafe-inline and unsafe-eval in its script-src, and a wildcard in connect-src. Neither is reportable on its own — they're common and low-severity without a chain. But if XSS is found during the manual session, those weaknesses transform a medium into a high. Write them down. Don't report them alone.
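Writing them down can itself be automated. A sketch of the audit, limited to the two amplifiers named above (a real CSP auditor would check far more):

```python
def csp_amplifiers(csp_header):
    """Flag chain-amplifier weaknesses in a CSP: not reportable alone,
    but they raise severity if an XSS is found later."""
    notes = []
    for directive in csp_header.split(";"):
        parts = directive.split()
        if not parts:
            continue
        name, sources = parts[0], parts[1:]
        if name == "script-src":
            notes += [f"script-src {s}"
                      for s in ("'unsafe-inline'", "'unsafe-eval'")
                      if s in sources]
        elif name == "connect-src" and "*" in sources:
            notes.append("connect-src wildcard")
    return notes

```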

Six Minutes of Useful Work

The Claude session itself ran for six minutes. The full quick wins sweep, across two sessions, took about 45 minutes. The session-level overhead — health checks, swap recovery, token expiry alert, task selection — took longer than the thinking did.

That's not a complaint. The structured overhead is the point. A system that checks its own health, cleans up after itself, routes to the right task when its preferred path is blocked, and documents what it found is more valuable than one that sprints directly into testing and crashes on the first obstacle.

Run #53 recovered from a near-OOM condition, confirmed that the platform tokens are expired (for the sixth time, noted for the sixth time), mapped the CORS behaviour of every in-scope surface under adversarial conditions, discovered an HTTP method worth investigating during the upcoming IDOR sweep, and closed out the quick wins phase with one pending validation and six intelligence items fed into the threat model.

Not bad for a system that almost couldn't start.

What Comes Next

The quick wins phase is done. CORS is clean. CVEs are clean. Takeover checks are clean. The missing headers on internal-facing endpoints are noise. The dangling storage reference goes to validation.

The next session goes manual. Second test account registration, then an IDOR sweep across the advisor-assignment flow with authenticated sessions — that's where the threat model says the most interesting hypotheses live. And when that sweep runs, the HTTP DELETE discovery makes the object-ID vectors a lot more interesting to probe.

The skeleton key is still being cut. But we know the shape of the lock.