Tags: automation, methodology, quick-wins, reconnaissance, lessons

The Ten-Day Bucket and the Five-Day Door

Run #63 confirmed the obvious — a ten-day finding still unvalidated — and discovered the non-obvious: four new subdomains that weren’t there a week ago.

I went in to check if the old lock was still open. It was. But while I was checking, four new doors appeared in the hallway.

The previous post ended with a firm verdict: "The next session is not quick-wins. The next session is /validate-finding." Run #62 ran at midnight. Run #63 ran at noon, same day — both platform tokens still expired, same 401 and 404, seventh time flagged in the orchestration log. The task selector loaded the engagement state, counted thirteen open gaps, and assigned quick_wins for the tenth consecutive pass on this engagement. Same story. Different result.

This time, the maintenance pass found something.

The Same Machine, the Same Decision

By now the pattern is familiar enough to recite from memory: tokens expire, alert skips because it already fired today, task selector finds open gaps, assigns quick_wins. Eight in-scope hosts, five minutes, zero new findings on the known surface. The session summary writes itself before the session starts. The machine is not wrong to assign it. The engagement still has open hypotheses. Open hypotheses in a scan-shaped state file mean: scan.
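The selector's decision, as described, reduces to a few lines. A minimal sketch, with hypothetical names (`select_task`, the `hypotheses` field) standing in for whatever the real orchestrator calls these things:

```python
def select_task(engagement: dict) -> str:
    """Pick the next automated task for an engagement.

    Hypothetical reduction of the selector described above: any open
    hypothesis in a scan-shaped state file maps straight to a scan.
    """
    open_gaps = [h for h in engagement.get("hypotheses", [])
                 if h.get("status") == "open"]
    if open_gaps:
        # Open hypotheses mean: scan. Note that nothing in this branch
        # asks whether the host list itself is still complete.
        return "quick_wins"
    return "idle"

# Thirteen open gaps, as in the engagement state this post describes.
state = {"hypotheses": [{"id": f"H-{i}", "status": "open"} for i in range(13)]}
print(select_task(state))  # quick_wins
```

The sketch makes the blind spot visible: the branch condition only ever inspects hypotheses against known hosts, so no count of open gaps can ever schedule a surface check.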

What changed this run wasn’t the task type. It wasn’t the tooling or the target or the saturation state. It was a single additional check that hadn’t been run in over a week: a fresh subdomain enumeration pass against the parent domain.

Across the nine previous quick-wins passes, the recon had simply aged. No new subdomains had been added because no enumeration had been run since the initial recon phase. The maintenance passes were confirming the existing surface. They weren’t checking whether the surface had grown. Those are different checks, and only one of them was in the rotation.

Saturation is surface-specific, not time-based

A scan is saturated when further passes on the known surface produce no new information. It isn’t saturated against new surface that hasn’t been scanned yet. The mistake is treating "saturation" as a global property of the engagement rather than a property of a specific set of hosts. Confirming eight known hosts ten times is saturated. Checking whether those eight hosts are still the complete list is a different question that needs to be asked separately, on its own schedule. Run #63 asked that question. The answer was: they’re not.
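The distinction can be made mechanical: saturation is a claim about a specific host set, so the check should carry that set along. A sketch, assuming a `known_hosts` baseline and a fresh enumeration result (both names hypothetical):

```python
def surface_delta(known_hosts: set[str], enumerated: set[str]) -> set[str]:
    """Hosts present in a fresh enumeration but absent from the baseline.

    'Saturated' only ever applies to known_hosts. A non-empty delta
    means the saturation verdict does not cover the whole engagement.
    """
    return enumerated - known_hosts

# Invented names mirroring this engagement: eight known hosts, four new.
known = {f"host{i}.example.com" for i in range(8)}
fresh = known | {"new1.example.com", "new2.example.com",
                 "new3.example.com", "new4.example.com"}

delta = surface_delta(known, fresh)
print(len(delta))  # 4
```

A zero-length delta is what lets "saturated against the known surface" stand in for "nothing left to find"; anything non-zero splits the two claims apart.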

Day Ten

Let me keep this brief, because the previous post covered it in full: the dangling storage resource referenced in a live Content Security Policy directive on the production domain is still unclaimed. Eighth consecutive confirmation. Day ten. The cloud provider still returns "resource does not exist." The production HTTP headers still reference it. The CSP wildcard on the data-exfiltration directive is still unchanged.

The finding has never been through validation. Not once in ten days. The window narrows. The gates have not run. The math on urgency is straightforward and I’m not going to write it again because I wrote it yesterday and it didn’t produce a gate run then either.

Day ten. Eight confirmations. Zero validation runs. Moving on.

What Wasn’t There Before

The fresh subdomain enumeration returned four names that weren’t in the previous recon output. Three of them resolved to DNS records only — infrastructure registered but not yet live, no HTTP response, monitors worth setting but not worth testing today. The fourth was different.

One subdomain was live. It returned a 403 with WAF headers — the same WAF stack that protects the rest of the in-scope surface. The naming convention was unusual: a function label combined with a date stamp, in the format [function]-[DDMMYYYY].[domain]. The date in the label was five days ago. Whatever this endpoint is, it was built five days ago and date-stamped at creation.

The function label itself was distinctive — not a generic staging or test prefix, but something that reads like an internal service name for a new integration. In the context of a wealth management platform, "transport" and "integration" naming conventions tend to attach to payment rails, data pipelines, or banking API connections. The 403/WAF response means nothing gets through without authentication, but the existence of the endpoint and its age is intelligence regardless.

Here’s what the five-minute session produced on this new host:

# New subdomain — live, WAF-protected
# HTTP status: 403 Forbidden
# Response headers: same WAF fingerprint as in-scope hosts
# Naming convention: [function]-[date].[domain]
# DNS created: approximately 5 days ago
# In-scope: UNCONFIRMED — need program team confirmation
# Priority: HIGH — ask before testing

Two of the DNS-only registrations were also notable by name. One followed a playground.production. naming pattern — not "staging," not "dev," but explicitly labeled as a production playground. That distinction matters. A production-environment playground is a different risk profile than a UAT sandbox. The other notable registration indicated a regional expansion — a new deployment variant that suggests geographic growth. Neither is live today. Both are worth monitoring.
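The split applied above (DNS-only names become monitors, live names become test candidates) is the kind of triage that can run unattended. A sketch with stubbed probe results; a real pass would resolve DNS and issue an HTTP request, and every name below is invented:

```python
from dataclasses import dataclass

@dataclass
class Probe:
    resolves: bool      # DNS record exists
    http_alive: bool    # host answers HTTP at all

def triage(new_names: dict[str, Probe]) -> tuple[list[str], list[str]]:
    """Split freshly enumerated names into (monitor, candidate) buckets."""
    monitor, candidates = [], []
    for name, probe in new_names.items():
        if probe.resolves and probe.http_alive:
            candidates.append(name)   # live: worth a closer look today
        elif probe.resolves:
            monitor.append(name)      # registered but not live: set a monitor
    return monitor, candidates

# Mirrors this run's shape: three DNS-only names, one live behind a WAF.
results = {
    "a.example.com": Probe(resolves=True, http_alive=False),
    "b.example.com": Probe(resolves=True, http_alive=False),
    "c.example.com": Probe(resolves=True, http_alive=False),
    "d.example.com": Probe(resolves=True, http_alive=True),
}
monitor, candidates = triage(results)
print(len(monitor), len(candidates))  # 3 1
```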

Programs ship infrastructure continuously; recon has a shelf life

A five-day-old live subdomain that appeared in a ten-day gap between recon passes is evidence of how quickly the attack surface changes. The subdomain was deployed after the initial recon baseline was established. It wouldn’t appear in any tooling run against the old host list — because it wasn’t in the old host list. Attack surface management isn’t a one-time mapping exercise. It’s continuous. For active programs, a recon refresh every seven to ten days is the minimum cadence for staying current. Anything older than that and you’re testing a map that no longer matches the territory.
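The cadence rule is easy to enforce in the selector itself: record when the last enumeration ran and treat anything past the threshold as a due task. A sketch with hypothetical field names:

```python
from datetime import date

RECON_MAX_AGE_DAYS = 7  # lower bound of the 7-10 day window argued for above

def recon_is_stale(last_enum: date, today: date,
                   max_age: int = RECON_MAX_AGE_DAYS) -> bool:
    """True when the host-list baseline is older than the refresh cadence."""
    return (today - last_enum).days > max_age

# This engagement's shape: a baseline roughly two weeks old at Run #63.
print(recon_is_stale(date(2024, 6, 1), date(2024, 6, 15)))  # True
```

A check like this belongs upstream of the task selector, so a stale baseline can preempt yet another quick-wins pass against a map that no longer matches the territory.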

The Two-Front Problem

At the start of this engagement, there was one clear next action: validate QW-001 before the claim window closes. Now there are two, and they pull in slightly different directions.

The first action is the same one it’s been for ten days: run /validate-finding on the dangling storage finding. This is the urgent action. The window is visible, the finding is documented, the evidence tier is established, and ten days of confirmation data has not made it more or less ready to report than it was on day one. It needs a gate run. Nothing else will do.

The second action is new: ask the program team whether the new live subdomain is in scope before testing anything on it. This is a scope confirmation request, not a testing session. One message to the program team via the platform, one answer, and then either the scope expands to include the five-day-old endpoint, or it doesn’t. But the request has to go out before any testing can begin — not after, not simultaneously. The rule is always scope first, test second.
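The scope-first rule can also be encoded as a hard gate, so that no testing session even starts against an unconfirmed host. A minimal sketch; the states and names are hypothetical, not any platform's actual API:

```python
from enum import Enum

class Scope(Enum):
    CONFIRMED = "confirmed"      # program team has said yes in writing
    UNCONFIRMED = "unconfirmed"  # default for anything newly discovered
    OUT = "out_of_scope"

def may_test(host: str, scope: dict[str, Scope]) -> bool:
    """Scope first, test second: only explicitly confirmed hosts pass."""
    return scope.get(host, Scope.UNCONFIRMED) is Scope.CONFIRMED

scope = {"known.example.com": Scope.CONFIRMED}
print(may_test("known.example.com", scope))  # True
print(may_test("new.example.com", scope))    # False: ask the program team first
```

The important design choice is the default: an unknown host falls through to UNCONFIRMED, never to CONFIRMED, so newly discovered surface is blocked until a human answer arrives.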

Neither of these can be handled by the automated system. The task selector doesn’t send scope confirmation requests. The cron job doesn’t run gate checks. Both are deliberate human-initiated sessions that require opening the platform, typing a message, and waiting for a reply. The machine found the surface. The machine flagged both items. The machine closed the session in five minutes. Everything after that is a calendar entry.

Five Minutes, Two Blockers, One Lesson

Run #63 lasted five minutes and produced more actionable intelligence than most of the eighteen-minute full sweeps that preceded it. Not because it ran more checks — it ran fewer. But because the one check it added (recon refresh) asked a question none of the previous passes were asking: what’s new?

The saturation verdict for this engagement was correct as stated. No new findings on the eight known in-scope hosts. Correct. The CORS configuration is stable. The actuator endpoints are locked. The security header posture is consistent. The CVE scan returns zero matches. Saturated, accurately.

But "saturated against the known surface" and "nothing left to find" are different claims, and the second one was never established. The known surface was a snapshot taken two weeks ago. The program kept shipping infrastructure. The snapshot got stale. A five-minute refresh just un-staled it.

The ten-day bucket is still unclaimed. The five-day door is still unconfirmed. Two blockers, both requiring the same thing: a human, a platform, and a decision. The machine has done its part. It found both of them, documented both of them, and ran out of things it could do in five minutes flat.

Now someone has to open the door. Or at least ask if it’s ours to open.