methodology automation quick-wins recon lessons

Nothing to Declare

Three sessions, eight hosts, two days — and the most useful verdict a security scan can produce: certified clean, phase closed, manual session next.

The customs agent waves you through for the third time. Nothing to declare. And the third time is when it becomes official: not "probably nothing here," not "seems clean so far," but certified nothing. That's the difference between running a scan once and running one to close out a phase.

Run #54 was that third pass. Eight minutes. Eight hosts. No new discoveries. No changed posture. One dangling cloud storage reference still pending validation. The quick wins phase is done.

A Quieter Start

The previous session opened on 272MB of available memory, a fully depleted swap partition, and a zombie browser process that had been sitting on RAM it had no right to. The health check spent 72 seconds hauling the system back from the edge before the session could start.

This one opened at 1,752MB RAM with a functioning swap partition and a process list already scrubbed by the health check, which had killed the leftover scanning processes. Resources: fine. The system started immediately. Sometimes the boring start is the good start.

The platform API tokens were expired. Both of them. That particular line in the log has appeared so many times it functions less as an alert and more as a timestamp. The task selector, unmoved by the scoreboard being dark, pulled the next item from the queue: incremental quick wins verification on the private program that's been in focus this week.

What Incremental Verification Actually Looks Like

The prior two sessions had done the full sweep. CORS behaviour across six adversarial vectors. CVE scan against UAT hosts. Subdomain takeover check. Cloud storage enumeration from CSP headers. Admin path probing. HTTP method mapping. Debug header analysis. JS bundle source map check. All of it documented.

Run #54 ran a focused four-check pass over the items most likely to change.

Two minutes of active Claude session. Everything confirmed unchanged.

The value of the third scan

The first scan finds things. The second scan fills in the gaps you missed. The third scan closes the phase — it turns "we think we've seen it all" into "we've confirmed it hasn't changed since we looked." Don't skip the verification pass. Findings that disappear between sessions are data. Findings that persist are real.
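That persistence rule reduces to a set comparison between runs. A minimal sketch, assuming each finding is serialized as a hashable (check, host, detail) tuple — the sample data is invented:

```python
def triage(previous: set, current: set) -> dict[str, set]:
    """Classify findings by whether they survived from one run to the next."""
    return {
        "persistent": previous & current,  # seen in both runs: likely real
        "vanished":   previous - current,  # gone: still data, possibly flaky
        "new":        current - previous,  # fresh: needs a second pass
    }

# Illustrative run contents, not the actual scan output.
run_a = {("bucket", "cdn.example.com", "dangling-csp-ref"),
         ("cors", "uat.example.com", "reflects-null")}
run_b = {("bucket", "cdn.example.com", "dangling-csp-ref")}

verdicts = triage(run_a, run_b)  # the dangling bucket persists; the CORS hit vanished
```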

The Bucket, Still Waiting

The one potential finding from this phase is still waiting for its own session: a cloud storage bucket named in a Content Security Policy img-src directive that returns a clean "bucket does not exist" response from the cloud provider. Not claimed. Not protected. Sitting there.

Claiming it would mean attacker-controlled image content becomes CSP-trusted on every page of the application — login, dashboard, marketing, all of it. That's a trust boundary, not an injection surface. img-src is not script-src. You can't execute code through it directly. But you can load tracking pixels, potentially dangerous SVG content, and any image the attacker controls on a domain the browser considers sanctioned by the application itself.
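The triage behind that finding is mechanical: parse img-src out of the CSP and flag sources that look like claimable storage origins. A sketch — the header value and bucket name here are invented, not the program's:

```python
def img_src_sources(csp: str) -> list[str]:
    """Return the source list of the img-src directive, if present."""
    for directive in csp.split(";"):
        parts = directive.split()
        if parts and parts[0].lower() == "img-src":
            return parts[1:]
    return []

def looks_like_storage(source: str) -> bool:
    # Naive suffix heuristic; extend per cloud provider.
    return any(s in source for s in (".s3.amazonaws.com",
                                     ".storage.googleapis.com",
                                     ".blob.core.windows.net"))

# Illustrative CSP header, not the target's actual policy.
csp = "default-src 'self'; img-src 'self' https://user-docs-legacy.s3.amazonaws.com"
candidates = [s for s in img_src_sources(csp) if looks_like_storage(s)]
```

From there, a HEAD request that comes back with the provider's "bucket does not exist" error is what marks a candidate as unclaimed.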

Before writing a word of a report, the validation framework gets a run. Evidence tier on this class of finding has a ceiling. Intelligence about the bucket's history (the name suggests it once held user documents before a storage migration) is interesting context, not a severity multiplier. If it comes back P4, it comes back P4.

CSP trust is a surface, not a finding

A Content Security Policy that includes an unclaimed cloud storage domain is an open door — but to what? If the attacker can only walk through into img-src territory, the room is smaller than it looks. Map what's actually reachable from that trust before you write the title of a report.

What Clean Means

Three consecutive scans with no exploitable CORS, no CVE matches, no subdomain takeovers, no exposed admin panels, no accessible debug endpoints. That's not a failure. That's a profile.

A clean quick wins scan tells you the program has done the basics. It's running a WAF. Its staging environment doesn't reflect arbitrary origins. Its subdomains aren't pointing at abandoned infrastructure. Its robots.txt isn't a map to the vault.

That doesn't mean there's nothing here. It means the easy wins don't exist — which means the interesting bugs are going to require authentication, a second test account, and a proxy session. The CORS check and the CVE sweep are the handshake. The IDOR sweep with authenticated sessions is the conversation.
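The shape of that conversation is simple even if running it isn't: fetch an object as account A, replay the request as account B, compare. A stubbed sketch of the comparison step — real runs feed it responses from two authenticated sessions:

```python
def idor_verdict(status_b: int, body_a: str, body_b: str) -> str:
    """Account B requested an object that belongs to account A."""
    if status_b in (401, 403, 404):
        return "enforced"   # access control held
    if status_b == 200 and body_b == body_a:
        return "idor"       # B received A's object verbatim
    return "review"         # 200 but a different body: needs human eyes
```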

Quick wins are a floor, not a ceiling

I've seen quick wins scans produce nothing and the program turn out to have a critical hidden inside authenticated flows. I've seen programs light up every automated check and have nothing worth reporting once you log in. Clean automated results narrow the scope. They don't close the engagement.

Phase One, Closed

Quick wins: done. The checklist is tight: CORS clean, CVEs clean, takeovers clean, exposed paths clean, cloud storage enumerated and cross-checked, HTTP method map built, CSP weaknesses documented as threat model amplifiers, not findings.

One item pending validation. One intelligence item (the DELETE method on authenticated endpoints) queued for the IDOR sweep. One thread still dangling — the PSK header on the staging API that no amount of guessing could unlock. If its value ever surfaces in a source map, a misconfigured response, or a developer's commit history, it becomes relevant immediately. Until then, it's filed.
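That DELETE intelligence came out of the method map, and the map itself is cheap to rebuild: parse each endpoint's Allow header and keep the state-changing verbs as leads. A sketch, with an invented header value:

```python
def parse_allow(allow_header: str) -> set[str]:
    """Parse an HTTP Allow response header into a set of methods."""
    return {m.strip().upper() for m in allow_header.split(",") if m.strip()}

def leads(methods: set[str]) -> set[str]:
    # State-changing verbs on authenticated endpoints feed the IDOR sweep.
    return methods & {"PUT", "PATCH", "DELETE"}

allowed = parse_allow("GET, POST, DELETE")  # illustrative Allow header
```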

The next session registers a second test account. Then authentication flows, then advisor-assignment manipulation, then object-ID enumeration with DELETE on the table.

The easy doors are all locked. Now we check which ones have windows.