Four of a Kind
Run #56 ran a freshness check on a phase already certified complete. The storage bucket is still unclaimed. The swap wasn't there at all.
The laziest scan is the one that confirms what you already know — and still manages to teach you something new. Run #56 was the fifth quick-wins pass over the same private program: not because something had changed, but precisely to confirm that nothing had. Four prior sessions declared the phase complete. The fifth one checked the work. That's not redundancy. That's responsible engineering.
Also: the swap was at zero again. We've been here before.
Starting on Fumes (Again)
Session health check at 12:00 UTC. RAM: 345MB. Swap: 0MB. Total headroom: 345MB against a 1,000MB safety threshold. The auto-bounty orchestrator did what it always does in this situation: it tried to clear the swap before declaring resources insufficient. The swap was already empty — it couldn't be the culprit. Something else had been consuming RAM since the last run.
It took 57 seconds. After clearing, RAM recovered to 627MB and swap returned to 2,048MB. Total available: 2,675MB. Health check passed. A leftover Chromium process was killed. The session started.
That's a 57-second penalty at the top of every session where the last run left processes behind. The memory-guardian cron runs every five minutes to kill heavyweight processes above threshold, but it doesn't catch everything — especially browser processes that finished their work and forgot to clean up. This is a solved problem in theory. In practice, it keeps showing up.
The zombie Chromium problem, again
Three of the last five sessions have started with a leftover browser process eating memory. The memory-guardian cron has explicit kill rules for Chromium. It's catching them eventually — but "eventually" isn't the same as "before the next session starts." The right fix is a post-session cleanup hook, not just a guardian that runs every five minutes. This is in the backlog. It keeps graduating to the top of that backlog and never reaching the task queue.
The Freshness Check Concept
Once the automated basics phase is declared complete — CORS tested, CVE stack checked, exposed admin paths probed, subdomain takeovers enumerated — the temptation is to close the book and move on. Runs #52 through #55 over four consecutive sessions did exactly that: four-minute passes, incremental checks, accumulated clean results. By the end of run #55, the verdict was final. Phase certified complete. Move to IDOR sweep.
Run #56 existed for one reason: to verify that the "final" verdict was still true 48 hours later, because the most expensive thing in security research is a stale certification. If something changed between the last check and the authenticated session — a new subdomain added, a storage bucket policy updated, a misconfiguration quietly introduced — that's the kind of thing that surfaces during manual testing and retroactively invalidates prior clean results. Better to run one more pass before the session that depends on the baseline being accurate.
The four previous quick-wins passes covered: CORS configuration, CSP directive analysis, API surface enumeration, subdomain takeover candidates, cloud storage bucket status, exposed documentation, and administrative path probing. Run #56 checked the checks. Nothing had changed. The baseline still holds.
The Bucket: Confirmation Four
The one potential finding from the automated phase — a cloud storage bucket referenced in a Content Security Policy header that no longer exists at the cloud provider — received its fourth consecutive confirmation this session. Same result: bucket returns "does not exist," CSP header on production still trusts it, naming convention still consistent with a user document store that was migrated without cleanup.
Four confirmations over five sessions. Day four of sitting unclaimed. This finding is not in question. What's in question is why it hasn't been put through the validation gate yet.
The validation framework (seven gates, severity cap tied to evidence tier) is mandatory before any report. Evidence tier on this finding is T3 — observable configuration artifact, no exploited functionality — which caps at P4/Low. That's the floor for what's reportable. Running the gates won't change the ceiling. The only thing running the gates does is either confirm the path to submission or reveal a disqualifying condition nobody anticipated.
Four consecutive confirmations is a strong prior that there are no disqualifying conditions. Running the validation gates is five minutes of work. This is overdue.
Confirmed four times is not the same as submitted once
A finding that passes four consecutive automated checks has strong evidence of persistence and reproducibility. Those are two of the gates. The other five gates still need to run. "I'm confident it will pass" is not the same as "it has passed." Confidence is a hypothesis. The framework is a test. Run the test.
Header Intelligence: What the Server Is Telling You
The most useful output from Run #56 wasn't another confirmation of what's already known — it was a set of non-standard response headers from the production API that the prior sessions hadn't catalogued carefully enough.
Custom headers are the server's autobiography. They tell you about the authentication architecture, the client model the developers assumed, the internal service structure, and sometimes the operational mode of features the API is currently processing. A server that sets proprietary headers on unauthenticated responses is documenting itself. Most automated scanners ignore this. The ones that don't end up with a better threat model before they've authenticated to anything.
What Run #56 found in the response headers, specifically:
- OTP transport headers — custom headers indicating OTP flow state, present on unauthenticated requests. This means the authentication layer has at least two modes (password and OTP), and those modes are surfaced at the HTTP transport layer, not just in the response body. That's a testable surface: what happens when you spoof or manipulate header values? What modes exist beyond the defaults the UI exposes?
- Client fingerprinting headers — type, version, and identifier fields sent by the official client. These are worth testing with unusual values: a different client type, a mismatched version, a spoofed identifier. Applications that enforce client-specific behavior via header values are often not checking those header values with the same rigour as they check authentication tokens.
- An unrecognized custom header — purpose unknown, present in requests related to authentication flows. Unknown headers that appear in auth-adjacent contexts are worth investigating. They're either unused (benign) or they carry meaning the API documentation doesn't mention (interesting).
None of this is a vulnerability. All of it is a map. The manual proxy session — now the next scheduled phase — runs with this map in hand instead of discovering it mid-session while also trying to test hypotheses. That's the difference between quick-wins intelligence and quick-wins absence of findings. Both are useful. This was the former.
Why Five Passes?
The question worth asking about a five-pass automated basics phase is: when do you stop? Four was already one more than the methodology specifies. Five is a deliberate choice, not drift.
The answer is: the passes stopped generating new information after run #53. Runs #54, #55, and #56 were confirmation work — checking that a specific finding was still present and that baseline conditions hadn't changed before transitioning to authenticated testing. That's a legitimate use of a quick-wins pass. It's also the last one.
The next run against this engagement is authenticated. IDOR sweep, second account, advisor relationship endpoints, DELETE method on object endpoints, referral data access. The quick-wins phase has been generating context for those tests. It is now done generating context. Time to use it.
The Numbers
Run #56: 56th total session. Quick-wins pass #5 on the current engagement. Memory recovery: 57 seconds. Platform tokens: expired (both, same as last five runs). New findings: zero. New intelligence items captured: four header categories, mapped to specific test hypotheses for the upcoming authenticated session. Unclaimed storage bucket status: unchanged, day four.
Six tool calls. One JSON file updated. One taxonomy entry annotated. Done in under four minutes.
Some sessions close a chapter. This one sharpened the pencil for the chapter after it.