Corporate Security Engineer with 10+ years of experience in IT and security, specializing in identity and access management, endpoint security, and security automation. Replaces costly vendor functionality with in-house automations, hardens fleets at scale, and tests every internal security tool personally before rollout — files bugs, gives feedback, breaks things on purpose.
Corporate Security Engineer — Trail of Bits 2023 – present · Seattle, WA (remote)
Planned and executed migration of a 150+ host fleet from SimpleMDM to Jamf.
Built identity lifecycle workflows (onboarding, offboarding, access auditing) in Bash, Python, and Slack.
Replaced a $50k/year SOC-as-a-service vendor with n8n automations, enriched Slack alerts, and one-click incident response.
Managed intelligence sharing between organizations targeted by ELUSIVE COMET; hardened endpoints against Zoom remote-control social-engineering attacks and authored the public blog post.
Maintained compliance frameworks for Microsoft SSPA, CMMC, UK Cyber Essentials, and OCP-SAFE.
Tested every internal security tool personally before fleet rollout — package security scanners, NIST 800-88 cryptographic erasure tools — through staged environments. Filed bugs, gave feedback, broke things on purpose.
Provided billable corporate IT and security consultancy directly to clients.
Used Terraform for internal infrastructure projects.
Associate Security Consultant — Leviathan Security Group 2022 – 2023 · Seattle, WA (remote)
Discovered and cataloged vulnerabilities in customer environments.
Prioritized vulnerabilities and provided mitigation instructions.
Met with clients to set expectations and present findings.
Created custom tooling to speed up engagement onboarding for other consultants.
Security Architect — RealSelf 2017 – 2022 · Seattle, WA (in-office through 2020, remote 2020–2022)
Owned the vendor vetting program and Risk Register.
Threat-modeled production users, internal employees, and third-party vendors.
Created a Security Ambassador program so non-technical and engineering teams could adopt secure practices without top-down mandates.
Led a team to build HaveIBeenPwned credential-checking into AWS Lambda via Terraform — from planning to production.
Hot-swapped the Zoom environment from Okta's pre-built integration to a custom SAML integration with zero downtime, zero complaints, and no lost data.
Planned, staged, and rolled out 802.1X + RADIUS using Entra ID for RBAC. Migrated 300 clients across 2 VLANs to a 12-VLAN environment, automated with PowerShell.
Built the Security Awareness Training program from scratch — including HIPAA-specific and executive-targeted curricula.
Deployed an AWS-based Wazuh SIEM with host agents for threat hunting plus open-source honeypots for intrusion detection.
Ran an internal "Hacktoberfest" security month with guest speakers, offensive training, and a company-wide CTF.
Migrated bug bounty program from HackerOne to Bugcrowd. Handled triage and management.
Moved asset management from a spreadsheet to an AWS-hosted Snipe-IT instance.
Level 3 Support Engineer — Commonwealth Financial Network 2013 – 2017 · San Diego, CA (in-office)
Final escalation point for 50+ Level 1 and Level 2 technicians in a FINRA/SEC-regulated environment.
Mitigated active incidents — Poweliks, Cryptolocker — under FINRA/SEC compliance, plus insider threats involving social engineering and unauthorized hardware.
Patched a Zoom RCE 0-day with a custom mitigation 8 hours before the vendor released their fix.
Solved an internal hardware theft case by correlating MAC address movement across Meraki access points with RADIUS logs, video feeds, and badge access logs.
Handled a rogue client who deployed keyloggers and used social engineering to obtain firewall credentials from a Level 1 technician.
Performed an emergency data exfiltration for a VIP whose beachfront Florida office was about to be destroyed by Hurricane Irma. Beat the storm.
photos/
the cats.
01 · fluff
02 · the kittens, day one
03 · missy, holding court
04 · olive on her perch
05 · eva, fully unbothered
mills@millsymills:~
mills@millsymills:~$
flags.exe
nothing to track here yet.
this app activates when there's something worth showing.
flags.exe
0 / 12 flags captured
🔒 view source, hacker — somewhere in the HTML, comments are still a thing · easy
🔒 devtools enjoyer — open devtools and have a look at what we logged for you · easy
🔒 mills, the password is... — a frequent and very common password used by lazy admins · medium
🔒 who else is on this network — try scanning the local /24 from the terminal · medium
🔒 ↑↑↓↓←→←→BA — old school cheat codes still work, even on the modern web · medium
🔒 the garbage file — rent is too damn high. dade. cereal. burn. · easy
🔒 agent-friendly — agents see a different view of this site. fetch what they see. · easy
🔒 please ignore me — disallowed paths are sometimes an invitation. · medium
🔒 command-K for the spirit — press the power-user shortcut. ask for the thing you should not need to ask for. · medium
🔒 decoder rings are cool again — agents read head tags. humans with devtools do too. ZmxhZ3s=... · easy
🔒 office space — click vigorously on the helpful one in the corner · easy
🔒 just an evocative text editor — open vscode.exe and read the project README · easy
projects.exe
MCP servers and site source. fork, install, break, tell me what's busted.
unraid-mcp
MCP server for Unraid — talk to your array from your LLM
Exposes an Unraid server (array status, docker containers, VMs, shares, parity, SMART) as tools to any MCP client. Built for homelab operators who want to debug or automate their box from a chat interface. Runs as a container on the Unraid host.
mcp
unraid
homelab
python
$ claude mcp add unraid --transport http http://<unraid-host>:8765/
Wraps the UniFi Controller API as MCP tools: list clients, inspect sites, kick a misbehaving device, pull event logs, toggle guest networks. Useful for anyone running UniFi at home or at a small org who wants an LLM-native way to poke at the network.
mcp
unifi
networking
python
$ claude mcp add unifi --transport http http://<controller-host>:8766/
MCP server for Proton Mail — addresses, domains, keys
Lets an MCP client manage a Proton Mail account: list/create/delete addresses, add and verify custom domains, edit mail and account settings, inspect encryption keys. Reads are always on; writes opt in via env flag. Built in Go on top of go-proton-api.
MCP server for Gandi — domains, DNS, email, certificates
Wraps the Gandi v5 API as 71 MCP tools across domains, LiveDNS, email, billing, organizations, and certificates. Three-tier safety model: readonly by default, opt in to writes, and a separate flag to expose tools that spend money. Defense-in-depth checks at both tool-visibility and runtime.
The source for the site you are looking at. Astro + Terraform + GitHub Actions OIDC. Released under MIT as a community template — fork it for your own Y2K-pink desktop portfolio.
the gear that survives selection pressure. updated when i replace something, not when i think about
replacing something.
ai-native cli stack
every tool here is chosen because agents and i consume the same interfaces — machine-parseable
output, deterministic behavior, per-project scoping. most look like "modern cli alternatives,"
but the real selection pressure is this works with ai pair programming.
terminal tip: run tools for the overview, or tools <name> (e.g.
tools ripgrep) for per-tool rationale + examples.
machine-parseable basics
ripgrep (rg) machine-parseable grep; respects .gitignore; 10-100x faster on trees
fd user-friendly find with sane defaults and predictable output
bat cat with syntax highlighting + git-diff markers
eza modern ls with git status + icons; --colour=never for diffable output
zoxide cd replacement trained on frecency; jump with `z <partial>`
agent-native clis
GitHub CLI (gh) json output for every subcommand; the agent-first GitHub client
jq json processor; the glue between every agent-native tool
fzf fuzzy finder with scriptable --filter mode for non-interactive use
atuin shell history in queryable SQLite; replaces Ctrl-R with a fuzzy TUI
deterministic environment
uv fast python package manager with lockfile-driven reproducibility
pnpm content-addressable node package manager; no node_modules duplication
direnv per-project .envrc auto-loaded on cd; security-conscious opt-in
ai coding
Claude Code primary AI pair programmer; this site was built with it
superpowers (plugin) skill pack: brainstorm → plan → TDD → review workflow for claude-code
best DX for static-first sites with sprinkles of vanilla-TS islands.
TypeScript + vanilla TS modules
no React/Vue runtime
window manager, terminal, flags, mobile shell are all hand-rolled — wanted control + zero framework bloat.
AWS S3 + CloudFront (OAC)
private bucket, REST endpoint, OAC signing
simple, durable, cheap, plays nicely with Terraform + OIDC.
CloudFront Function (cf-js-2.0)
directory URI rewriter
OAC + REST endpoint does not auto-resolve /path/ → /path/index.html, so a tiny viewer-request function does it.
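The deployed function is a few lines of cf-js, but the rule it implements fits in one function. A Python sketch of the same rewrite (function name is illustrative):

```python
def rewrite_directory_uri(uri: str) -> str:
    """Viewer-request rewrite: a directory request like /path/ maps to
    the /path/index.html object the OAC + REST origin actually stores."""
    if uri.endswith("/"):
        return uri + "index.html"
    return uri
```

Everything else passes through untouched, so hashed asset URLs are unaffected.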
Route53 + ACM
IPv4 + IPv6 alias records, us-east-1 cert for CloudFront
DNS + certs in the same provider as everything else.
GitHub Actions OIDC
no long-lived AWS credentials in the repo
short-lived role assumption — IAM trust policy pins the sub claim to the production environment AND the workflow_ref to main. tampered workflow file from another branch can't mint the deploy token.
Juice-Shop-style. submitted flags hit the digest table; konami / clippy / etc. capture by id from event listeners. canonical strings stay out of the bundle for most challenges.
cross-org intel sharing on an active campaign using zoom remote-control as a social-engineering primitive. hardened the endpoint fleet against it and coauthored the public writeup. the win was collective — sharing indicators and screenshots across several targeted orgs before the vendor shipped their own hardening.
high · zoom RCE 0-day — custom mitigation 8h before vendor
beat the vendor by eight hours. wrote + deployed a custom mitigation blocking the known exploit path fleet-wide before zoom shipped the official patch. FINRA/SEC-regulated environment — no room for lucky timing.
2017
info · hurricane irma — emergency data exfil
VIP had a beachfront florida office about to be taken out by irma. racing the eyewall, pulled everything to the cloud, clean shutdown, boarded up. beat the storm. not a security incident per se — an IT-ops one — but unforgettable.
2016
high · rogue client — keylogger + firewall creds via social eng
client deployed keyloggers on workstations, then socially engineered a level-1 technician into handing over firewall credentials. rotated everything, rebuilt the trust boundary, wrote up the incident, and locked down the escalation path so L1 couldn't hand out creds to callers claiming to be "from the home office."
2015
med · hardware theft solved via MAC correlation
laptop walked off. correlated MAC address movement across meraki access points with RADIUS logs, building badge readers, and camera feeds. caught it. fun bit of multi-source forensics in a FINRA/SEC-regulated shop.
2014
med · poweliks + cryptolocker wave
two of the era-defining commodity malware families, handled back-to-back under FINRA/SEC compliance. playbooks got sharper each round. reminder that "commodity" doesn't mean "cheap to respond to."
mail.exe
the contact form is a mailto: for now. swap to formspree / SES + lambda later.
this site does not track you. the rest of this page is a more specific statement of that fact, with citations into the repo so you can check the receipts.
what we collect
nothing. no analytics, no cookies, no fingerprinting, no tag managers, no third-party scripts. the site is static html + css + a little javascript, served from cloudfront, built from a public github repo. the cloudfront cache policy explicitly forwards zero cookies to the origin.
when you load a page: html, css, images, four self-hosted webfonts (Tahoma, Franklin Gothic ITC, Press Start 2P, VT323), and the javascript bundle for the desktop ui. that's it. zero third-party fetches. no google fonts, no cdn libraries, no analytics beacons. the content security policy pins `default-src`, `script-src`, `connect-src`, `img-src`, and `font-src` to `'self'`, so the browser refuses any cross-origin fetch even if one slipped past code review.
/mail/ runs a small client-side proof-of-work to decrypt mills' email address — keeps it out of the static html so casual scrapers don't get a free mailto. nothing leaves your browser; it's ~16K sha-256 hashes in a web worker (difficulty 14 bits, ~150-800ms on a modern laptop) and the decrypted result is never stored or transmitted.
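The work loop has the shape of a stock hashcash scheme. A Python sketch (function names and the challenge‖nonce layout are assumptions; only the hash count and difficulty come from the description above):

```python
import hashlib
import itertools

DIFFICULTY_BITS = 14  # ~2^14 ≈ 16K expected sha-256 hashes, per the page

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: bytes, difficulty: int = DIFFICULTY_BITS) -> int:
    """Find a nonce whose sha-256(challenge || nonce) has `difficulty`
    leading zero bits — the same shape of work the mail page runs."""
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
```

At 14 bits the expected cost is a few hundred milliseconds in a web worker; doubling the difficulty doubles the scraper's cost per address.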
a handful of keys keep your ui state between visits. everything is client-side, never sent anywhere. two storage types — `localStorage` persists across browser restarts, `sessionStorage` clears when you close the tab. a build-time lint fails CI if this list drifts from the keys the scripts actually write:
mills.desktop.v1 · local — open windows, positions, last-open app
mills.flags.v1 · local — captured CTF flags
mills.vscode.v1 · local — vscode.exe open tabs + active tab
mills.wallpaper.v1 · local — selected desktop wallpaper id
mills.theme.v1 · local — selected desktop theme id
mills.boot.played · session — "played boot sequence already" flag
mills.clippy.dismissed · local — clippy dismissed permanently ("don't come back")
mills.clippy.dismissed · session — clippy dismissed for this tab only
mills.passkey-demo.v1 · local — /demo/passkey credential id + display name (sandbox)
cloudfront keeps standard access logs (url, ip, user-agent, timestamp, status code) in a private s3 bucket we own. they auto-expire after 90 days as the current version, plus up to another 90 days as a noncurrent (recoverable) version, then they are gone. no further processing, no profile-building. the logs exist so outages are debuggable. the only other server-side data is browser-generated csp violation reports posted to `/api/csp-report` and kept for 30 days — those are debugging telemetry from the browser, not user content.
the site publishes `/robots.txt`, including the cloudflare `Content-Signal:` extension. the current signal is `search=yes, ai-input=yes, ai-train=yes` — indexing, summarising, and training on this site's content are all explicitly welcome. `/llms.txt` and `/llms-full.txt` are published as a fast path for agents.
controls shipped in service of "this site shouldn't be the easy target." every
claim cites the implementation — read the code, fork it, copy what you like.
web platform
HSTS (with preload) — shipped
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload on every response.
why
Once a browser has seen the header it refuses plain-HTTP for two years; the preload flag advertises eligibility for the browser-shipped HSTS preload list, which closes the first-visit TLS-stripping window.
tradeoffs
Submission to https://hstspreload.org/ is a separate manual step (#127). Header is shipping with preload already, so the eligibility check passes whenever you submit.
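A quick way to sanity-check the header against the preload requirements (the one-year minimum is the preload list's floor; this site ships two years):

```python
def hsts_ok(header: str, min_age: int = 31536000) -> bool:
    """Check an HSTS header for preload eligibility: max-age at or
    above the floor, plus includeSubDomains and preload tokens."""
    parts = {p.strip().lower() for p in header.split(";")}
    max_age = next((p for p in parts if p.startswith("max-age=")), None)
    return (
        max_age is not None
        and int(max_age.split("=", 1)[1]) >= min_age
        and "includesubdomains" in parts
        and "preload" in parts
    )
```

Running this against the shipped `max-age=63072000; includeSubDomains; preload` value passes with room to spare.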
CloudFront response-headers policy injects a strict CSP (default-src 'self', object-src 'none', upgrade-insecure-requests), X-Content-Type-Options: nosniff, Referrer-Policy: strict-origin-when-cross-origin, and SAMEORIGIN frame-ancestors on every response.
why
Defense-in-depth against XSS, MIME sniffing, leaky referrers, and clickjacking. The CSP allow-list is intentionally tight — no third-party origins anywhere.
tradeoffs
style-src 'self' 'unsafe-inline' is the one knowing concession to make Astro's scoped styles work; tightening to nonces is tracked as #129. Violation reporting is wired separately as csp-reporting below.
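Parsing the policy makes the "no third-party origins anywhere" claim mechanically checkable. A sketch (the keyword set is trimmed to what this policy uses; a real check would handle the full source-expression grammar):

```python
def csp_directives(policy: str) -> dict[str, list[str]]:
    """Split a CSP string into {directive: [source expressions]}."""
    out: dict[str, list[str]] = {}
    for directive in filter(None, (d.strip() for d in policy.split(";"))):
        name, *values = directive.split()
        out[name] = values
    return out

def has_third_party(policy: str) -> bool:
    """True if any source list names an origin beyond the keyword set."""
    keywords = {"'self'", "'none'", "'unsafe-inline'", "data:"}
    return any(
        v not in keywords
        for values in csp_directives(policy).values()
        for v in values
    )
```

A future PR that sneaks a CDN origin into `script-src` trips the check instead of quietly widening the allow-list.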
TLS 1.3-only with hybrid post-quantum key agreement — shipped
CloudFront security policy is TLSv1.3_2025, which floors viewer connections at TLS 1.3 and auto-negotiates X25519MLKEM768 / SecP256r1MLKEM768 hybrid post-quantum key agreement when the client offers it. PQC is enabled by AWS on every TLS 1.3 connection — flooring the protocol guarantees every viewer is eligible.
why
Harvest-now-decrypt-later: an adversary recording today's ciphertext could decrypt it once a cryptanalytically relevant quantum computer exists. Hybrid key agreement combines a classical curve (X25519) with a post-quantum KEM (ML-KEM-768) so the session key is safe as long as either remains unbroken — defense before the threat is operational, not after.
tradeoffs
TLS 1.3 floor excludes Chrome <70 (Oct 2018), Firefox <63 (Oct 2018), and Safari <14 (Sep 2020) — Safari 12 + 13 are out even though they shipped in the 2018–2020 window. PQC handshakes add ~1.6KB and ~80–150µs per connection; only viewers that already speak ML-KEM-768 actually negotiate it (most major browsers and OpenSSL 3.5+ in 2025–2026, still rolling out). Verify post-cutover with openssl s_client -connect millsymills.com:443 -groups X25519MLKEM768 2>/dev/null </dev/null | grep "Server Temp Key" (requires openssl ≥ 3.5); expected output is Server Temp Key: X25519MLKEM768.
Permissions-Policy (powerful features denied by default) — shipped
CloudFront response-headers policy ships a strict-deny Permissions-Policy that blocks 36 powerful features — camera, microphone, geolocation, USB / Serial / HID / MIDI / Bluetooth, clipboard read/write, payment, fullscreen, screen-wake-lock, WebAuthn publickey-credentials-*, FLoC/Topics, otp-credentials, attribution-reporting, window-management, local-fonts, unload, and the rest of the W3C catalog — for both top-level and embedded contexts. A CI lint rejects any directive that deviates from =() (deny) or =(self) (self-allow), so a future "fix" that flips to =* fails CI.
why
The site has zero JavaScript use of any powerful API (verified by grepping navigator.* in src/), so the strict-deny baseline ships without breaking anything visitors actually use. Closing every feature the site does not need turns silent permission requests into hard Permission denied failures, narrows the impact radius of a future XSS, and makes the inspector's self-grading honest — previously the site failed its own Permissions-Policy check.
tradeoffs
Future features that legitimately need a powerful API (e.g. the WebAuthn passkey demo #140, a theater-mode fullscreen) must extend this policy in the same PR; otherwise the API call no-ops silently. The policy does not rate-limit — it's a strict allow-list, not a runtime gate.
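The CI lint described above reduces to one regex over the header's comma-separated directives. A sketch (function name is illustrative):

```python
import re

# only the two blessed shapes pass: deny `=()` or self-allow `=(self)`
ALLOWED = re.compile(r"^[a-z-]+=\((?:self)?\)$")

def lint_permissions_policy(header: str) -> list[str]:
    """Return every directive that deviates from deny / self-allow —
    a future `=*` "fix" shows up here and fails the build."""
    return [
        d for d in (p.strip() for p in header.split(","))
        if not ALLOWED.match(d)
    ]
```

An empty return list means the policy still matches the strict-deny baseline.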
Cross-Origin-Opener-Policy: same-origin, Cross-Origin-Embedder-Policy: require-corp, Cross-Origin-Resource-Policy: same-origin on every document response. The /api/tls/* JSON endpoint uses a separate response-headers policy with Cross-Origin-Resource-Policy: cross-origin so allowlisted cross-origin callers can fetch it from a COEP-isolated document; the CORS allowlist in inspector_tls.mjs remains the access boundary.
why
Closes Spectre-class side channels and cross-origin window-reference leaks. The combination puts the document in a cross-origin isolated agent cluster, also unlocking precision-timer + SharedArrayBuffer features if we ever need them.
tradeoffs
COEP require-corp is the strict variant — every cross-origin subresource has to opt in via CORP/CORS. Site is fully self-hosted (no third-party scripts, fonts, images, or iframes; assert-fonts-csp.sh keeps it that way), so the strict variant ships without breaking anything. Same-origin CORP also blocks third-party hot-linking of static assets. The API-policy carve-out is intentional — JSON responses are not documents, so COOP/COEP do not apply, and CSP is ignored by browsers on application/json.
CI lint refuses to ship dist/ if any <script src>, stylesheet/preload <link href>, importmap entry, or CSS @import points at a non-allowlisted host without integrity + crossorigin (or, for importmap and @import which have no SRI surface, at all).
why
Site is fully self-hosted today — no third-party JS/CSS. The lint is forward-pressure: a future dependency that adds a CDN reference trips the build instead of silently undermining the "no third-party JS or CSS" posture.
tradeoffs
Same-origin assets are exempt — Astro's hashed bundles are already integrity-protected by URL versioning + the OAC pipeline. Astro 6 does not emit SRI hashes natively; if a cross-origin asset ever lands here, the integrity attribute has to be added by hand or by a postbuild pass.
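The core of such a lint is a short HTML walk. A reduced Python sketch covering the `<script src>` / `<link href>` cases (the real lint also covers importmap entries and CSS `@import`; the same-origin heuristic here is just "relative URL"):

```python
from html.parser import HTMLParser

class SriLint(HTMLParser):
    """Flag cross-origin script/link URLs lacking integrity + crossorigin."""

    SAME_ORIGIN_PREFIXES = ("/",)  # relative URLs are exempt

    def __init__(self) -> None:
        super().__init__()
        self.violations: list[str] = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        url = a.get("src") if tag == "script" else a.get("href") if tag == "link" else None
        if not url or url.startswith(self.SAME_ORIGIN_PREFIXES):
            return
        if "integrity" not in a or "crossorigin" not in a:
            self.violations.append(url)
```

Run it over every file in dist/ and fail the build on a non-empty `violations` list.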
CSP carries report-uri /api/csp-report; report-to csp and the response ships Reporting-Endpoints: csp="https://<domain>/api/csp-report". Browsers POST violation reports (both legacy application/csp-report and modern application/reports+json) to a Lambda Function URL behind CloudFront OAC; the handler validates Content-Type, caps body size at 16KB, and writes the report (plus a small envelope: receivedAt, userAgent, viewerCountry) to S3 as JSON. Reports auto-expire after 30 days via a bucket lifecycle rule.
why
A Report-Only rollout is only useful if reports go somewhere. Capturing violations becomes a prerequisite for tightening CSP without breaking the site — the strict-CSP-with-nonces rollout (#129) and Trusted Types enforcement (#130) both depend on this telemetry layer to surface regressions before flipping enforcement on.
tradeoffs
Cost cap is reserved_concurrent_executions = 5 on the Lambda — a flood of reports gets throttled rather than driving up the bill. The Lambda Function URL is locked to AWS_IAM auth + CloudFront OAC, so direct calls to the raw <id>.lambda-url.<region>.on.aws endpoint return 403; the only path is through the distribution. No dashboard yet — reports are queryable directly out of the S3 bucket via Athena or aws s3 cp until violation volume justifies more.
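The validation path of the report handler is small enough to sketch. Field names follow the Lambda Function URL event shape; the S3 write and envelope are stubbed out, so treat this as an illustration of the checks, not the deployed code:

```python
import json

MAX_BODY = 16 * 1024  # 16KB body cap, per the description above
ACCEPTED = {"application/csp-report", "application/reports+json"}

def handler(event, _context):
    """Validate a browser-posted CSP violation report."""
    headers = {k.lower(): v for k, v in event.get("headers", {}).items()}
    ctype = headers.get("content-type", "").split(";")[0].strip()
    body = event.get("body") or ""
    if ctype not in ACCEPTED:
        return {"statusCode": 415}
    if len(body.encode()) > MAX_BODY:
        return {"statusCode": 413}
    try:
        report = json.loads(body)
    except json.JSONDecodeError:
        return {"statusCode": 400}
    # real handler wraps `report` in an envelope (receivedAt, userAgent,
    # viewerCountry) and writes it to S3; reports expire after 30 days
    return {"statusCode": 204}
```

Rejecting early on Content-Type and size keeps the throttled Lambda cheap even under a report flood.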
The /inspector/ desktop app fetches the site's own response headers in-browser and grades them against the active CloudFront response-headers policy. A small Lambda behind CloudFront also exposes the negotiated TLS protocol, cipher, and SNI for the user→CloudFront connection at /api/tls/inspect. The Lambda Function URL is locked to AWS_IAM auth and a CloudFront Origin Access Control, so the only path to it is through the CloudFront distribution — direct calls to the raw <id>.lambda-url.<region>.on.aws endpoint return 403, preserving every CloudFront-applied security header on the response.
why
The /security page documents what *should* be deployed; the inspector lets a visitor verify what *is* deployed in real time. Drift between the two becomes immediately observable instead of silently aging.
tradeoffs
The TLS-inspector Lambda reads CloudFront-Viewer-TLS from the origin-request headers, so it reflects the user→CloudFront leg only — it cannot see anything about the CloudFront→origin leg. That's the leg the visitor cares about, but worth being explicit about. astro preview lacks CloudFront headers so all rows grade F locally; on prod every row should grade A.
KMS-backed key-signing key signs the Route53 zone; the chain to the parent (.com) closes once the DS record is published at the registrar.
why
Resolvers can verify that the answers they get for millsymills.com actually came from the authoritative servers, not a cache-poisoning attacker between you and them. Required prerequisite for DANE TLSA.
tradeoffs
Reversal is asymmetric — REMOVE the DS at the registrar FIRST, wait for parent TTL (.com is up to ~24h on the DS RRset), THEN disable Terraform signing. Doing it in the wrong order takes ~50% of validating resolvers offline until the cached DS expires. Terraform-level prevent_destroy guards on the KSK + KMS key (PR #207) are the machine guard for the documented protocol; PR #213 pre-stages the rotation block so the planned-rotation procedure is uncomment-and-apply, not invent-Terraform-mid-incident; PR #212 documents an emergency-response path that disables the suspected key immediately rather than dual-publishing.
Four 0 issue records covering AWS's documented CAA identifiers (amazon.com, amazontrust.com, awstrust.com, amazonaws.com) plus 0 issuewild ";" to forbid wildcards entirely. TTL 300s for fast misconfig recovery during cutover.
why
A CA that doesn't see itself in the CAA record is supposed to refuse to issue. Even if an attacker compromises a different CA, they can't mint a publicly-trusted cert for the domain. Four-domain coverage future-proofs against AWS rotating which identifier ACM publishes.
tradeoffs
Iodef reporting is best-effort — not every CA honors it. The caa_iodef_address variable defaults to security@<domain> so reports land in a real mailbox once Proton is live.
Null MX (RFC 7505) before Proton activation — shipped
Until ProtonMail is configured, MX 0 . is published — the explicit "this domain accepts no mail" record.
why
Hard-bounces any spoofing attempt at the SMTP layer instead of silently dropping. The DNS posture is unspoofable from day one, regardless of Proton timeline.
v=spf1 -all when Proton is off; v=spf1 include:_spf.protonmail.ch -all once activated.
why
Tells receivers exactly which SMTP origins are allowed to send mail as this domain. The -all (hard-fail) is intentional — soft-fail is a polite request, hard-fail is a refusal.
When Proton is active, three CNAMEs at <selector>._domainkey.<domain> (selectors protonmail, protonmail2, protonmail3) point at Proton-hosted DKIM keys. CNAMEs are gated on proton_enabled so an apply without the verification token tears them down alongside the MX/SPF flip — never orphaned.
why
Aligned DKIM is half of the DMARC strict-reject contract: receivers verify the message signature against a key Proton publishes, and the d= domain alignment prevents replay against unrelated senders. Three selectors give Proton room to rotate keys without breaking signing.
tradeoffs
CNAME targets carry the Proton tenant identifier in the public DNS — anyone correlating DKIM CNAMEs across domains can see they share a Proton account. Acceptable for a single-operator portfolio.
v=DMARC1; p=reject; sp=reject; rua=mailto:dmarc@<domain>; fo=1; adkim=s; aspf=s from day one.
why
Strict alignment + reject means any mail that fails SPF or DKIM (or doesn't have aligned identifiers) gets dropped, not quarantined. Proton is the only legitimate sender, so aligned DKIM/SPF should pass on day one — no p=quarantine training phase needed.
tradeoffs
Aggregate reports land at dmarc@<domain> — useless until that mailbox actually exists in Proton.
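The strict-reject posture is easy to verify mechanically by parsing the record's tag=value pairs (a sketch; the real mailbox domain replaces the placeholder):

```python
def parse_dmarc(record: str) -> dict[str, str]:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags: dict[str, str] = {}
    for part in filter(None, (p.strip() for p in record.split(";"))):
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

def strict_reject(record: str) -> bool:
    """True when the record matches the posture described above:
    reject for domain + subdomains, strict SPF and DKIM alignment.
    Note sp defaults to p and alignment tags default to relaxed."""
    t = parse_dmarc(record)
    return (
        t.get("v") == "DMARC1"
        and t.get("p") == "reject"
        and t.get("sp", t.get("p")) == "reject"
        and t.get("adkim", "r") == "s"
        and t.get("aspf", "r") == "s"
    )
```

The defaults matter: a record that omits adkim/aspf silently falls back to relaxed alignment, which is exactly the drift this check catches.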
Inbound SMTP TLS is anchored to DNSSEC, not the web PKI. Per RFC 7672, the TLSA record lives at _25._tcp.<MX-host> in the MX host's zone — for Proton MX, that's _25._tcp.mail.protonmail.ch and _25._tcp.mailsec.protonmail.ch, which Proton already publishes (3 1 1 … SPKI hashes). A sender resolves our DNSSEC-signed MX records, jumps to Proton's DNSSEC-signed zone for the TLSA, and refuses delivery if the negotiated cert doesn't match. End-to-end DANE works without records in our zone.
why
Removes the web PKI as a trust anchor for inbound SMTP. A compromised CA cannot issue a fake cert for mail.protonmail.ch and intercept inbound mail without also subverting DNSSEC for protonmail.ch AND for our domain — the two-zone chain is the bind.
tradeoffs
Operational only when MX records point at a Proton MX host whose zone publishes TLSA. Once Proton activation completes for a given stack, the property is automatic: we contribute the DNSSEC-signed MX RRset; Proton contributes the TLSA. Switching MX away from Proton to an MX host that doesn't publish TLSA would silently demote DANE-aware senders to opportunistic TLS — a hidden trust regression — so MX changes must verify TLSA presence on the new host before flipping. Proton also owns the TLSA rotation cadence; an unannounced re-key would temporarily break inbound delivery.
Tiny-PS SVG logo at /bimi/logo.svg + default._bimi.millsymills.com TXT record advertising it.
why
Surfaces the brand logo in supporting clients (Fastmail, Proton, some Apple Mail) for mail that already passed DMARC alignment. DMARC at p=reject clears the strong-policy precondition.
tradeoffs
No Verified Mark Certificate (VMC) — Gmail and Yahoo will not render the logo without one (~$1.5K/yr issuance cost). Proton and Fastmail render BIMI without a VMC, so the record still earns its keep on supporting clients. Record is published before Proton activation: with the null MX no mail flows, so BIMI is a no-op until the inbox goes live.
GitHub Actions assumes a per-stack IAM role via AssumeRoleWithWebIdentity. The trust policy pins repo:owner/name, branch (main), and the specific workflow file (deploy.yml / deploy-rehearsal.yml) via job_workflow_ref.
why
No long-lived AWS access keys ever touch GitHub. A different (or tampered) workflow on the same branch can't mint the deploy token; a different repo can't either.
tradeoffs
Adding a new deploy workflow requires a Terraform var bump + apply BEFORE pushing the workflow — ci-local.sh checks the referenced file exists so a typo fails locally rather than at AssumeRole time.
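The trust policy's shape looks roughly like this (account id, repo, and claim values are placeholders; depending on which condition keys IAM supports for the provider, the job_workflow_ref pin may need to be carried in a customized sub claim instead of matched directly):

```json
{
  "Effect": "Allow",
  "Principal": {
    "Federated": "arn:aws:iam::<account-id>:oidc-provider/token.actions.githubusercontent.com"
  },
  "Action": "sts:AssumeRoleWithWebIdentity",
  "Condition": {
    "StringEquals": {
      "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
      "token.actions.githubusercontent.com:sub": "repo:owner/name:ref:refs/heads/main",
      "token.actions.githubusercontent.com:job_workflow_ref": "owner/name/.github/workflows/deploy.yml@refs/heads/main"
    }
  }
}
```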
Every deploy publishes an SPDX SBOM at /.well-known/sbom.spdx.json via anchore/sbom-action. Regenerated on the monthly cron rebuild too.
why
Anyone (you, a downstream consumer, a security researcher) can curl the live SBOM and diff against vulnerability databases without having to clone the repo or trust a third-party scanner.
tradeoffs
Action is pinned to @v0, not a SHA — consistent with the rest of the workflow but worth a future supply-chain hardening sweep.
Each deploy publishes dist.tar.gz, a keyless cosign Sigstore bundle (dist.tar.gz.cosign.bundle — signature, Fulcio short-lived cert, and Rekor inclusion proof in one file), and a SLSA v1.0 build-L3 provenance attestation (dist.tar.gz.intoto.jsonl) under /.well-known/slsa/. The provenance is generated by the slsa-framework/slsa-github-generator reusable workflow — separate job_workflow_ref from the deploy workflow, which is exactly the trusted-builder requirement L3 wants.
why
Anyone can cryptographically verify that a given dist tarball came from this repo, this commit, and this workflow — without trusting anything beyond the GitHub OIDC issuer and the sigstore transparency log. Independent of S3 / CloudFront / DNS posture, which collectively decide what visitors fetch but not who built it.
tradeoffs
Verification command is documented in the workflow file, but visitors have to know the OIDC identity (the workflow file path on refs/heads/main) to run cosign verify-blob correctly. Reusable-workflow version is pinned to a tag (@v2.1.0), not a SHA — consistent with the rest of the workflow but worth a hardening sweep alongside the SBOM action pin.
Both aws_s3_bucket_policy.site and aws_s3_bucket_policy.logs carry an explicit Deny on aws:SecureTransport = false, alongside the existing CloudFront-OAC and log-delivery allows.
why
CloudFront OAC, S3 server access logging, and CloudFront log delivery already use TLS, so the realistic failure mode this guards against is a future IAM principal — compromised or overbroad — reaching the buckets over plain HTTP. Industry-baseline finding flagged by most AWS scanners.
tradeoffs
Same posture applies to the rehearsal stack — both tf.sh millsymills and tf.sh p41m0n plans must show the bucket-policy update before merging changes here.
Daily Lambda polls https://crt.sh for new certificates issued for millsymills.com and SAN-related names; anything not from an allow-listed issuer (default: Amazon) fires an SNS-email alert.
why
CAA stops most rogue issuance up front; CT monitoring catches what slipped through (cooperating CA, weak CAA enforcement, mis-issued cert). Belt and suspenders.
tradeoffs
Stateless 48h-lookback / 24h-schedule = max two alerts per cert. Allow-list is just substring matching on issuer name — narrow-scope by design.
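The allow-list filter is a few lines over crt.sh's JSON output (a sketch; `issuer_name` is the field crt.sh's JSON export uses, and the substring match mirrors the narrow-scope design above):

```python
def unexpected_issuers(certs: list[dict], allowed: tuple[str, ...] = ("Amazon",)) -> list[dict]:
    """Return certs whose issuer matches no allow-listed substring —
    each surviving entry fires an SNS alert."""
    return [
        c for c in certs
        if not any(a in c.get("issuer_name", "") for a in allowed)
    ]
```

Substring matching on the issuer's distinguished name is crude but cheap, and a false positive here costs one email, not an outage.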
The access-log bucket has versioning on, plus a noncurrent-version expiration matching the 90-day current-version retention. A second lifecycle rule sweeps the orphan delete markers that versioning leaves behind.
why
A compromised or overbroad IAM principal cannot silently destroy forensic evidence — overwrites preserve prior versions; deletes insert a delete marker rather than erasing bytes. Recoverable for the same 90-day window the current-version expiration already guarantees.
tradeoffs
Object Lock would be stronger but is only settable at bucket creation time; deferred until the bucket is replaced for another reason. No principal in infra/github_oidc.tf holds s3:DeleteObjectVersion on the logs bucket today, so the standard-compromise path is closed.
Standardised contact + encryption fields at /.well-known/security.txt, re-signed on a monthly scheduled rebuild so the Expires field never goes stale.
why
A researcher who finds a bug should be able to reach you in seconds, not by guessing emails. The monthly rebuild is the cron that keeps Expires: from silently expiring.
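For reference, a minimal security.txt shape per RFC 9116; the values below are illustrative, not the live signed file:

```text
Contact: mailto:security@millsymills.com
Encryption: https://millsymills.com/pgp.asc
Canonical: https://millsymills.com/.well-known/security.txt
Expires: 2026-01-01T00:00:00.000Z
```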
Armored PGP key at /pgp.asc and the WKD binary at /.well-known/openpgpkey/hu/<zbase32> so gpg --locate-keys mills@millsymills.com finds the right key without ever asking a key server.
why
Encrypted contact requires a discoverable key. WKD is the auto-discovery layer — keyservers are not. Both forms ship; consumers pick what their tooling supports.
tradeoffs
Drift between the armored key, the WKD binary, and the fingerprint declared in src/data/pgp.ts is caught by scripts/assert-pgp-consistency.sh in CI.
Branch protection on main requires a verified signature on every commit (required_signatures.enabled = true, enforce_admins = true); GitHub rejects an unsigned push with GH006: Protected branch update failed -- Commits must have verified signatures. The rule is codified in Terraform (github_branch_protection_v3.main) so terraform plan surfaces any silent UI toggle-off as drift. CONTRIBUTING.md documents the SSH signing setup contributors run once: reuse the GitHub auth key, git config gpg.format ssh, register a Signing Key in GitHub Settings, verify with git log --show-signature.
why
Branch protection bypassed via stolen credentials becomes visibly broken: pushes without a signature get rejected at the remote, and squash-merges through the GitHub UI use GitHub's own signing key. Provenance of every new change on main has a rooted chain to the signer's identity.
tradeoffs
Existing pre-rule history on main stays unsigned -- no force-push backfill. Direct CLI pushes to main (rare; PR squash-merge is the merge path) require the contributor's host to have signing wired up; squash-merges from the GitHub UI are auto-signed by GitHub regardless.
The mailbox address on /mail/ is decrypted client-side after a ~16K-iteration sha-256 PoW (~150–800ms in a web worker).
why
Keeps the address out of static HTML so casual scrapers don't get a free mailto. Real humans wait less than a second; bulk scrapers don't spend the CPU.
tradeoffs
Determined scrapers will eat the CPU cost; PoW raises cost, doesn't eliminate it. Address is also published in clear in security.txt and PGP UID anyway — by design, since researchers should be able to reach you.
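A toy version of the same scheme, with stated assumptions: 8 bits of difficulty (two leading hex zeros) instead of ~14, and a made-up preimage format:

```shell
# Brute-force a nonce until sha-256("mail:" + nonce) starts with two hex
# zeros. ~256 attempts on average; the real worker does ~16K at 14 bits.
nonce=0
while :; do
  h=$(printf 'mail:%d' "$nonce" | sha256sum | cut -c1-64)
  case "$h" in 00*) break ;; esac
  nonce=$((nonce + 1))
done
echo "nonce=$nonce"
```

The verifier's cost is one hash; the prover's cost scales exponentially with the difficulty bits, which is the whole asymmetry the mailto gate leans on.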
No analytics, no cookies, no fingerprinting, no tag managers, no third-party scripts. Self-hosted fonts. Static HTML + CSS + a little JavaScript served from CloudFront.
why
The privacy page can only make a "we don't track you" claim if there's nothing to track you with. Removing the surface is the strongest possible posture.
Two build-time CI lints: (a) every localStorage/sessionStorage key written by src/scripts/ must be documented on the privacy page; (b) every /fonts/<file> referenced in src/ must ship at dist/fonts/<file> and dist/ must contain zero fonts.googleapis.com / fonts.gstatic.com references.
why
The privacy page is a load-bearing claim of accuracy. A drift bug ("you said local, code uses session") or a stray Google Fonts link would silently make the page wrong. The lints turn the runtime invariant into a CI failure.
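The font half of lint (b) reduces to a recursive grep; the layout below is a hypothetical stand-in for dist/, not the repo's actual CI script:

```shell
# Build a throwaway dist/ with a compliant page, then run the check.
tmp=$(mktemp -d)
mkdir -p "$tmp/dist/fonts"
printf '<link rel="stylesheet" href="/fonts/vt323.css">' > "$tmp/dist/index.html"
touch "$tmp/dist/fonts/vt323.css"
if grep -rEq 'fonts\.(googleapis|gstatic)\.com' "$tmp/dist"; then
  echo "lint: third-party font reference found"
else
  echo "lint: clean"
fi
```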
CloudFront access logs (90-day TTL, no processing) · shipped
Standard CloudFront access logs (URL, IP, user-agent, timestamp, status code) land in a private S3 bucket and auto-expire after 90 days. No further processing, no profile-building.
why
Logs exist so outages are debuggable; nothing more. The lifecycle policy means there's no archive to subpoena, leak, or accidentally retain.
tracked but not yet live. ships when the prerequisite (proton activation, audit pass, etc.) clears.
MTA-STS (RFC 8461) · roadmap
Publishes _mta-sts.<domain> TXT "v=STSv1; id=…" and serves a policy at https://mta-sts.<domain>/.well-known/mta-sts.txt listing the Proton MX hosts as the only valid SMTP endpoints. Sending MTAs that respect MTA-STS upgrade opportunistic TLS to enforced TLS for inbound mail.
why
MTA-STS blocks passive downgrade attacks on inbound SMTP that DNSSEC + DANE alone don't cover for senders that don't implement DANE (most large providers ship MTA-STS; DANE adoption is narrower). Visible control that peer MTAs can observe via HTTPS, complementing the DNSSEC-rooted DANE chain.
blocker / tradeoff
Phase 1 ships mode: testing on the rehearsal stack (p41m0n.com) so senders log policy mismatches via TLS-RPT but still deliver; reversible. Phase 2 promotes to mode: enforce after 2–4 weeks of clean TLS-RPT reports showing policy-type: sts, and to millsymills.com after the cutover. Reversal in enforce mode is asymmetric: publish mode: none AND wait max_age BEFORE removing the discovery TXT, otherwise enforcing senders refuse delivery during the rollback window.
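The policy file itself is a handful of RFC 8461 key-value lines; values below are illustrative (Proton MX hosts per the DANE section, max_age a placeholder):

```text
version: STSv1
mode: testing
mx: mail.protonmail.ch
mx: mailsec.protonmail.ch
max_age: 604800
```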
Strict CSP with per-request nonces · roadmap
CloudFront Function injects a per-request nonce, replacing the 'unsafe-inline' concession in style-src (and any inline scripts) with 'nonce-XXX'.
why
Closes the remaining XSS-via-injected-style vector and removes the only weak link in the current CSP allow-list.
HSTS preload-list submission · roadmap
Submit millsymills.com to https://hstspreload.org/ for inclusion in the browser-shipped preload list.
why
Closes the first-visit TLS-stripping window for browsers that haven't yet seen the HSTS header.
blocker / tradeoff
Submission is a manual one-time step. The header is already shipping with preload, so the eligibility check passes.
inspector.exe
headers inspector
fetches the site's own response headers from your browser and grades
them against the controls listed in security.txt.
every check happens client-side; no third party sees your request.
target: /
security headers grading: header · grade · value
TLS connection
fetched from /api/tls/inspect — a Lambda behind
CloudFront that reads the cloudfront-viewer-tls
header and reports the protocol, cipher, and SNI negotiated
between your browser and CloudFront.
grades reflect the active CloudFront response-headers policy. drift
between this readout and security.txt is a
bug — file an issue.
Corporate Security Engineer with 10+ years of experience in IT and security, specializing in identity and access management, endpoint security, and security automation. Replaces costly vendor functionality with in-house automations, hardens fleets at scale, and tests every internal security tool personally before rollout — files bugs, gives feedback, breaks things on purpose.
Corporate Security Engineer — Trail of Bits 2023 – present · Seattle, WA (remote)
Planned and executed migration of 150+ host fleet from SimpleMDM to Jamf.
Built identity lifecycle workflows (onboarding, offboarding, access auditing) in Bash, Python, and Slack.
Replaced a $50k/year SOC-as-a-service vendor with n8n automations, enriched Slack alerts, and one-click incident response.
Managed intelligence sharing between organizations targeted by ELUSIVE COMET; hardened endpoints against Zoom remote-control social-engineering attacks and authored the public blog post.
Maintained compliance frameworks for Microsoft SSPA, CMMC, UK Cyber Essentials, and OCP-SAFE.
Tested every internal security tool personally before fleet rollout — package security scanners, NIST 800-88 cryptographic erasure tools — through staged environments. Filed bugs, gave feedback, broke things on purpose.
Provided billable corporate IT and security consultancy directly to clients.
Used Terraform for internal infrastructure projects.
Associate Security Consultant — Leviathan Security Group 2022 – 2023 · Seattle, WA (remote)
Discovered and cataloged vulnerabilities in customer environments.
Prioritized vulnerabilities and provided mitigation instructions.
Met with clients to set expectations and present findings.
Created custom tooling to speed up engagement onboarding for other consultants.
Security Architect — RealSelf 2017 – 2022 · Seattle, WA (in-office through 2020, remote 2020–2022)
Owned the vendor vetting program and Risk Register.
Threat-modeled production users, internal employees, and third-party vendors.
Created a Security Ambassador program so non-technical and engineering teams could adopt secure practices without top-down mandates.
Led a team to build HaveIBeenPwned credential-checking into AWS Lambda via Terraform — from planning to production.
Hot-swapped the Zoom environment from Okta's pre-built integration to a custom SAML integration with zero downtime, zero complaints, and no lost data.
Planned, staged, and rolled out 802.1X + RADIUS using Entra ID for RBAC. Migrated 300 clients across 2 VLANs to a 12-VLAN environment, automated with PowerShell.
Built the Security Awareness Training program from scratch — including HIPAA-specific and executive-targeted curricula.
Deployed an AWS-based Wazuh SIEM with host agents for threat hunting plus open-source honeypots for intrusion detection.
Ran an internal "Hacktoberfest" security month with guest speakers, offensive training, and a company-wide CTF.
Migrated bug bounty program from HackerOne to Bugcrowd. Handled triage and management.
Moved asset management from a spreadsheet to an AWS-hosted Snipe-IT instance.
Level 3 Support Engineer — Commonwealth Financial Network 2013 – 2017 · San Diego, CA (in-office)
Final escalation point for 50+ Level 1 and Level 2 technicians in a FINRA/SEC-regulated environment.
Mitigated active incidents — Poweliks, Cryptolocker — under FINRA/SEC compliance, plus insider threats involving social engineering and unauthorized hardware.
Patched a Zoom RCE 0-day with a custom mitigation 8 hours before the vendor released their fix.
Solved an internal hardware theft case by correlating MAC address movement across Meraki access points with RADIUS logs, video feeds, and badge access logs.
Handled a rogue client who deployed keyloggers and used social engineering to obtain firewall credentials from a Level 1 technician.
Performed an emergency data exfiltration for a VIP whose beachfront Florida office was about to be destroyed by Hurricane Irma. Beat the storm.
photos/
the cats.
01 · fluff
02 · the kittens, day one
03 · missy, holding court
04 · olive on her perch
05 · eva, fully unbothered
projects.exe
MCP servers and site source. fork, install, break, tell me what's busted.
unraid-mcp · mcp
MCP server for Unraid — talk to your array from your LLM
Exposes an Unraid server (array status, docker containers, VMs, shares, parity, SMART) as tools to any MCP client. Built for homelab operators who want to debug or automate their box from a chat interface. Runs as a container on the Unraid host.
mcp
unraid
homelab
python
$ claude mcp add unraid --transport http http://<unraid-host>:8765/
Wraps the UniFi Controller API as MCP tools: list clients, inspect sites, kick a misbehaving device, pull event logs, toggle guest networks. Useful for anyone running UniFi at home or at a small org who wants an LLM-native way to poke at the network.
mcp
unifi
networking
python
$ claude mcp add unifi --transport http http://<controller-host>:8766/
MCP server for Proton Mail — addresses, domains, keys
Lets an MCP client manage a Proton Mail account: list/create/delete addresses, add and verify custom domains, edit mail and account settings, inspect encryption keys. Reads are always on; writes opt in via env flag. Built in Go on top of go-proton-api.
MCP server for Gandi — domains, DNS, email, certificates
Wraps the Gandi v5 API as 71 MCP tools across domains, LiveDNS, email, billing, organizations, and certificates. Three-tier safety model: readonly by default, opt in to writes, and a separate flag to expose tools that spend money. Defense-in-depth checks at both tool-visibility and runtime.
The source for the site you are looking at. Astro + Terraform + GitHub Actions OIDC. Released under MIT as a community template — fork it for your own Y2K-pink desktop portfolio.
the gear that survives selection pressure. updated when i replace something, not when i think about
replacing something.
ai-native cli stack
every tool here is chosen because agents and i consume the same interfaces — machine-parseable
output, deterministic behavior, per-project scoping. most look like "modern cli alternatives,"
but the real selection pressure is this works with ai pair programming.
terminal tip: run tools for the overview, or tools <name> (e.g.
tools ripgrep) for per-tool rationale + examples.
machine-parseable basics
ripgrep (rg) machine-parseable grep; respects .gitignore; 10-100x faster on trees
fd user-friendly find with sane defaults and predictable output
bat cat with syntax highlighting + git-diff markers
eza modern ls with git status + icons; --colour=never for diffable output
zoxide cd replacement trained on frecency; jump with `z <partial>`
agent-native clis
GitHub CLI (gh) json output for every subcommand; the agent-first GitHub client
jq json processor; the glue between every agent-native tool
fzf fuzzy finder with scriptable --filter mode for non-interactive use
atuin shell history in queryable SQLite; replaces Ctrl-R with a fuzzy TUI
deterministic environment
uv fast python package manager with lockfile-driven reproducibility
pnpm content-addressable node package manager; no node_modules duplication
direnv per-project .envrc auto-loaded on cd; security-conscious opt-in
ai coding
Claude Code primary AI pair programmer; this site was built with it
superpowers (plugin) skill pack: brainstorm → plan → TDD → review workflow for claude-code
Astro
best DX for static-first sites with sprinkles of vanilla-TS islands.
TypeScript + vanilla TS modules
no React/Vue runtime
window manager, terminal, flags, mobile shell are all hand-rolled — wanted control + zero framework bloat.
AWS S3 + CloudFront (OAC)
private bucket, REST endpoint, OAC signing
simple, durable, cheap, plays nicely with Terraform + OIDC.
CloudFront Function (cf-js-2.0)
directory URI rewriter
OAC + REST endpoint does not auto-resolve /path/ → /path/index.html, so a tiny viewer-request function does it.
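A behavioral sketch in shell of what the viewer-request function does (the real one is JavaScript running in CloudFront):

```shell
# Map a trailing-slash URI to its index document; leave file requests alone.
rewrite_uri() {
  case "$1" in
    */) printf '%s\n' "${1}index.html" ;;
    *)  printf '%s\n' "$1" ;;
  esac
}
rewrite_uri /projects/      # -> /projects/index.html
rewrite_uri /pgp.asc        # -> /pgp.asc
```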
Route53 + ACM
IPv4 + IPv6 alias records, us-east-1 cert for CloudFront
DNS + certs in the same provider as everything else.
GitHub Actions OIDC
no long-lived AWS credentials in the repo
short-lived role assumption — IAM trust policy pins the sub claim to the production environment AND the workflow_ref to main. A tampered workflow file from another branch can't mint the deploy token.
Juice-Shop-style. submitted flags hit the digest table; konami / clippy / etc. capture by id from event listeners. canonical strings stay out of the bundle for most challenges.
cross-org intel sharing on an active campaign using zoom remote-control as a social-engineering primitive. hardened the endpoint fleet against it and coauthored the public writeup. the win was collective — sharing indicators and screenshots across several targeted orgs before the vendor shipped their own hardening.
high · zoom RCE 0-day — custom mitigation 8h before vendor
beat the vendor by eight hours. wrote + deployed a custom mitigation blocking the known exploit path fleet-wide before zoom shipped the official patch. FINRA/SEC-regulated environment — no room for lucky timing.
2017
info · hurricane irma — emergency data exfil
VIP had a beachfront florida office about to be taken out by irma. racing the eyewall, pulled everything to the cloud, clean shutdown, boarded up. beat the storm. not a security incident per se — an IT-ops one — but unforgettable.
2016
high · rogue client — keylogger + firewall creds via social eng
client deployed keyloggers on workstations, then socially engineered a level-1 technician into handing over firewall credentials. rotated everything, rebuilt the trust boundary, wrote up the incident, and locked down the escalation path so L1 couldn't hand out creds to callers claiming to be "from the home office."
2015
med · hardware theft solved via MAC correlation
laptop walked off. correlated MAC address movement across meraki access points with RADIUS logs, building badge readers, and camera feeds. caught it. fun bit of multi-source forensics in a FINRA/SEC-regulated shop.
2014
med · poweliks + cryptolocker wave
two of the era-defining commodity malware families, handled back-to-back under FINRA/SEC compliance. playbooks got sharper each round. reminder that "commodity" doesn't mean "cheap to respond to."
privacy.txt
this site does not track you. the rest of this page is a more specific statement of that fact, with citations into the repo so you can check the receipts.
what we collect
nothing. no analytics, no cookies, no fingerprinting, no tag managers, no third-party scripts. the site is static html + css + a little javascript, served from cloudfront, built from a public github repo. the cloudfront cache policy explicitly forwards zero cookies to the origin.
when you load a page: html, css, images, four self-hosted webfonts (Tahoma, Franklin Gothic ITC, Press Start 2P, VT323), and the javascript bundle for the desktop ui. that's it. zero third-party fetches. no google fonts, no cdn libraries, no analytics beacons. the content security policy pins `default-src`, `script-src`, `connect-src`, `img-src`, and `font-src` to `'self'`, so the browser refuses any cross-origin fetch even if one slipped past code review.
/mail/ runs a small client-side proof-of-work to decrypt mills' email address — keeps it out of the static html so casual scrapers don't get a free mailto. nothing leaves your browser; it's ~16K sha-256 hashes in a web worker (difficulty 14 bits, ~150-800ms on a modern laptop) and the decrypted result is never stored or transmitted.
a handful of keys keep your ui state between visits. everything is client-side, never sent anywhere. two storage types — `localStorage` persists across browser restarts, `sessionStorage` clears when you close the tab. a build-time lint fails CI if this list drifts from the keys the scripts actually write:
mills.desktop.v1 (local): open windows, positions, last-open app
mills.flags.v1 (local): captured CTF flags
mills.vscode.v1 (local): vscode.exe open tabs + active tab
mills.wallpaper.v1 (local): selected desktop wallpaper id
mills.theme.v1 (local): selected desktop theme id
mills.boot.played (session): "played boot sequence already" flag
mills.clippy.dismissed (local): clippy dismissed permanently ("don't come back")
mills.clippy.dismissed (session): clippy dismissed for this tab only
mills.passkey-demo.v1 (local): /demo/passkey credential id + display name (sandbox)
cloudfront keeps standard access logs (url, ip, user-agent, timestamp, status code) in a private s3 bucket we own. they auto-expire after 90 days as the current version, plus up to another 90 days as a noncurrent (recoverable) version, then they are gone. no further processing, no profile-building. the logs exist so outages are debuggable. the only other server-side data is browser-generated csp violation reports posted to `/api/csp-report` and kept for 30 days — those are debugging telemetry from the browser, not user content.
the site publishes `/robots.txt`, including the cloudflare `Content-Signal:` extension. the current signal is `search=yes, ai-input=yes, ai-train=yes` — indexing, summarising, and training on this site's content are all explicitly welcome. `/llms.txt` and `/llms-full.txt` are published as a fast path for agents.
controls shipped in service of "this site shouldn't be the easy target." every
claim cites the implementation — read the code, fork it, copy what you like.
web platform
HSTS (with preload) · shipped
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload on every response.
why
Once a browser has seen the header it refuses plain-HTTP for two years; the preload flag advertises eligibility for the browser-shipped HSTS preload list, which closes the first-visit TLS-stripping window.
tradeoffs
Submission to https://hstspreload.org/ is a separate manual step (#127). Header is shipping with preload already, so the eligibility check passes whenever you submit.
CloudFront response-headers policy injects a strict CSP (default-src 'self', object-src 'none', upgrade-insecure-requests), X-Content-Type-Options: nosniff, Referrer-Policy: strict-origin-when-cross-origin, and SAMEORIGIN frame-ancestors on every response.
why
Defense-in-depth against XSS, MIME sniffing, leaky referrers, and clickjacking. The CSP allow-list is intentionally tight — no third-party origins anywhere.
tradeoffs
style-src 'self' 'unsafe-inline' is the one knowing concession to make Astro's scoped styles work; tightening to nonces is tracked as #129. Violation reporting is wired separately as csp-reporting below.
TLS 1.3-only with hybrid post-quantum key agreement · shipped
CloudFront security policy is TLSv1.3_2025, which floors viewer connections at TLS 1.3 and auto-negotiates X25519MLKEM768 / SecP256r1MLKEM768 hybrid post-quantum key agreement when the client offers it. PQC is enabled by AWS on every TLS 1.3 connection — flooring the protocol guarantees every viewer is eligible.
why
Harvest-now-decrypt-later: an adversary recording today's ciphertext could decrypt it once a cryptanalytically relevant quantum computer exists. Hybrid key agreement combines a classical curve (X25519) with a post-quantum KEM (ML-KEM-768) so the session key is safe as long as either remains unbroken — defense before the threat is operational, not after.
tradeoffs
TLS 1.3 floor excludes Chrome <70 (Oct 2018), Firefox <63 (Oct 2018), and Safari <14 (Sep 2020) — Safari 12 + 13 are out even though they shipped in the 2018–2020 window. PQC handshakes add ~1.6KB and ~80–150µs per connection; only viewers that already speak ML-KEM-768 actually negotiate it (most major browsers and OpenSSL 3.5+ in 2025–2026, still rolling out). Verify post-cutover with openssl s_client -connect millsymills.com:443 -groups X25519MLKEM768 2>/dev/null </dev/null | grep "Server Temp Key" (requires openssl ≥ 3.5); expected output is Server Temp Key: X25519MLKEM768.
Permissions-Policy (powerful features denied by default) · shipped
CloudFront response-headers policy ships a strict-deny Permissions-Policy that blocks 36 powerful features — camera, microphone, geolocation, USB / Serial / HID / MIDI / Bluetooth, clipboard read/write, payment, fullscreen, screen-wake-lock, WebAuthn publickey-credentials-*, FLoC/Topics, otp-credentials, attribution-reporting, window-management, local-fonts, unload, and the rest of the W3C catalog — for both top-level and embedded contexts. A CI lint rejects any directive that deviates from =() (deny) or =(self) (self-allow), so a future "fix" that flips to =* fails CI.
why
The site has zero JavaScript use of any powerful API (verified by grepping navigator.* in src/), so the strict-deny baseline ships without breaking anything visitors actually use. Closing every feature the site does not need turns silent permission requests into hard Permission denied failures, narrows the impact radius of a future XSS, and makes the inspector's self-grading honest — previously the site failed its own Permissions-Policy check.
tradeoffs
Future features that legitimately need a powerful API (e.g. the WebAuthn passkey demo #140, a theater-mode fullscreen) must extend this policy in the same PR; otherwise the API call no-ops silently. The policy does not rate-limit — it's a strict allow-list, not a runtime gate.
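The lint rule, every directive exactly =() or =(self), fits in a grep; the policy string below is illustrative, not the shipped 36-feature list:

```shell
# Split the header on commas and flag any directive that is not a strict
# deny or self-allow. "usb=*" is the planted violation.
policy='camera=(), geolocation=(), fullscreen=(self), usb=*'
bad=$(printf '%s' "$policy" | tr ',' '\n' | sed 's/^ *//' \
      | grep -Ev '^[a-z-]+=\((self)?\)$' || true)
if [ -n "$bad" ]; then
  echo "lint fail: $bad"
else
  echo "lint pass"
fi
```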
Cross-Origin-Opener-Policy: same-origin, Cross-Origin-Embedder-Policy: require-corp, Cross-Origin-Resource-Policy: same-origin on every document response. The /api/tls/* JSON endpoint uses a separate response-headers policy with Cross-Origin-Resource-Policy: cross-origin so allowlisted cross-origin callers can fetch it from a COEP-isolated document; the CORS allowlist in inspector_tls.mjs remains the access boundary.
why
Closes Spectre-class side channels and cross-origin window-reference leaks. The combination puts the document in a cross-origin isolated agent cluster, also unlocking precision-timer + SharedArrayBuffer features if we ever need them.
tradeoffs
COEP require-corp is the strict variant — every cross-origin subresource has to opt in via CORP/CORS. Site is fully self-hosted (no third-party scripts, fonts, images, or iframes; assert-fonts-csp.sh keeps it that way), so the strict variant ships without breaking anything. Same-origin CORP also blocks third-party hot-linking of static assets. The API-policy carve-out is intentional — JSON responses are not documents, so COOP/COEP do not apply, and CSP is ignored by browsers on application/json.
CI lint refuses to ship dist/ if any <script src>, stylesheet/preload <link href>, importmap entry, or CSS @import points at a non-allowlisted host without integrity + crossorigin (or, for importmap and @import which have no SRI surface, at all).
why
Site is fully self-hosted today — no third-party JS/CSS. The lint is forward-pressure: a future dependency that adds a CDN reference trips the build instead of silently undermining the "no third-party JS or CSS" posture.
tradeoffs
Same-origin assets are exempt — Astro's hashed bundles are already integrity-protected by URL versioning + the OAC pipeline. Astro 6 does not emit SRI hashes natively; if a cross-origin asset ever lands here, the integrity attribute has to be added by hand or by a postbuild pass.
CSP carries report-uri /api/csp-report; report-to csp and the response ships Reporting-Endpoints: csp="https://<domain>/api/csp-report". Browsers POST violation reports (both legacy application/csp-report and modern application/reports+json) to a Lambda Function URL behind CloudFront OAC; the handler validates Content-Type, caps body size at 16KB, and writes the report (plus a small envelope: receivedAt, userAgent, viewerCountry) to S3 as JSON. Reports auto-expire after 30 days via a bucket lifecycle rule.
why
A Report-Only rollout is only useful if reports go somewhere. Capturing violations becomes a prerequisite for tightening CSP without breaking the site — the strict-CSP-with-nonces rollout (#129) and Trusted Types enforcement (#130) both depend on this telemetry layer to surface regressions before flipping enforcement on.
tradeoffs
Cost cap is reserved_concurrent_executions = 5 on the Lambda — a flood of reports gets throttled rather than driving up the bill. The Lambda Function URL is locked to AWS_IAM auth + CloudFront OAC, so direct calls to the raw <id>.lambda-url.<region>.on.aws endpoint return 403; the only path is through the distribution. No dashboard yet — reports are queryable directly out of the S3 bucket via Athena or aws s3 cp until violation volume justifies more.
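For reference, the legacy application/csp-report body a browser POSTs looks roughly like this (field names from the CSP report format; values illustrative):

```json
{
  "csp-report": {
    "document-uri": "https://millsymills.com/",
    "effective-directive": "script-src",
    "blocked-uri": "https://third-party.example/app.js",
    "disposition": "enforce",
    "status-code": 200
  }
}
```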
The /inspector/ desktop app fetches the site's own response headers in-browser and grades them against the active CloudFront response-headers policy. A small Lambda behind CloudFront also exposes the negotiated TLS protocol, cipher, and SNI for the user→CloudFront connection at /api/tls/inspect. The Lambda Function URL is locked to AWS_IAM auth and a CloudFront Origin Access Control, so the only path to it is through the CloudFront distribution — direct calls to the raw <id>.lambda-url.<region>.on.aws endpoint return 403, preserving every CloudFront-applied security header on the response.
why
The /security page documents what *should* be deployed; the inspector lets a visitor verify what *is* deployed in real time. Drift between the two becomes immediately observable instead of silently aging.
tradeoffs
The TLS-inspector Lambda reads CloudFront-Viewer-TLS from the origin-request headers, so it reflects the user→CloudFront leg only — it cannot see anything about the CloudFront→origin leg. That's the leg the visitor cares about, but worth being explicit about. astro preview lacks CloudFront headers so all rows grade F locally; on prod every row should grade A.
KMS-backed key-signing key signs the Route53 zone; the chain to the parent (.com) closes once the DS record is published at the registrar.
why
Resolvers can verify that the answers they get for millsymills.com actually came from the authoritative servers, not a cache-poisoning attacker between you and them. Required prerequisite for DANE TLSA.
tradeoffs
Reversal is asymmetric — REMOVE the DS at the registrar FIRST, wait for parent TTL (.com is up to ~24h on the DS RRset), THEN disable Terraform signing. Doing it in the wrong order takes ~50% of validating resolvers offline until the cached DS expires. Terraform-level prevent_destroy guards on the KSK + KMS key (PR #207) are the machine guard for the documented protocol; PR #213 pre-stages the rotation block so the planned-rotation procedure is uncomment-and-apply, not invent-Terraform-mid-incident; PR #212 documents an emergency-response path that disables the suspected key immediately rather than dual-publishing.
Four 0 issue records covering AWS's documented CAA identifiers (amazon.com, amazontrust.com, awstrust.com, amazonaws.com) plus 0 issuewild ";" to forbid wildcards entirely. TTL 300s for fast misconfig recovery during cutover.
why
A CA that doesn't see itself in the CAA record is supposed to refuse to issue. Even if an attacker compromises a different CA, they can't mint a publicly-trusted cert for the domain. Four-domain coverage future-proofs against AWS rotating which identifier ACM publishes.
tradeoffs
Iodef reporting is best-effort — not every CA honors it. The caa_iodef_address variable defaults to security@<domain> so reports land in a real mailbox once Proton is live.
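In zone-file presentation format the record set looks like this (the live records are Terraform-managed; the iodef address assumes the security@ default):

```text
millsymills.com. 300 IN CAA 0 issue "amazon.com"
millsymills.com. 300 IN CAA 0 issue "amazontrust.com"
millsymills.com. 300 IN CAA 0 issue "awstrust.com"
millsymills.com. 300 IN CAA 0 issue "amazonaws.com"
millsymills.com. 300 IN CAA 0 issuewild ";"
millsymills.com. 300 IN CAA 0 iodef "mailto:security@millsymills.com"
```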
Null MX (RFC 7505) before Proton activation · shipped
Until ProtonMail is configured, MX 0 . is published — the explicit "this domain accepts no mail" record.
why
Hard-bounces any spoofing attempt at the SMTP layer instead of silently dropping. The DNS posture is unspoofable from day one, regardless of Proton timeline.
v=spf1 -all when Proton is off; v=spf1 include:_spf.protonmail.ch -all once activated.
why
Tells receivers exactly which SMTP origins are allowed to send mail as this domain. The -all (hard-fail) is intentional — soft-fail is a polite request, hard-fail is a refusal.
When Proton is active, three CNAMEs at <selector>._domainkey.<domain> (selectors protonmail, protonmail2, protonmail3) point at Proton-hosted DKIM keys. CNAMEs are gated on proton_enabled so an apply without the verification token tears them down alongside the MX/SPF flip — never orphaned.
why
Aligned DKIM is half of the DMARC strict-reject contract: receivers verify the message signature against a key Proton publishes, and the d= domain alignment prevents replay against unrelated senders. Three selectors give Proton room to rotate keys without breaking signing.
tradeoffs
CNAME targets carry the Proton tenant identifier in the public DNS — anyone correlating DKIM CNAMEs across domains can see they share a Proton account. Acceptable for a single-operator portfolio.
v=DMARC1; p=reject; sp=reject; rua=mailto:dmarc@<domain>; fo=1; adkim=s; aspf=s from day one.
why
Strict alignment + reject means any mail that fails SPF or DKIM (or doesn't have aligned identifiers) gets dropped, not quarantined. Proton is the only legitimate sender, so aligned DKIM/SPF should pass on day one — no p=quarantine training phase needed.
tradeoffs
Aggregate reports land at dmarc@<domain> — useless until that mailbox actually exists in Proton.
Inbound SMTP TLS is anchored to DNSSEC, not the web PKI. Per RFC 7672, the TLSA record lives at _25._tcp.<MX-host> in the MX host's zone — for Proton MX, that's _25._tcp.mail.protonmail.ch and _25._tcp.mailsec.protonmail.ch, which Proton already publishes (3 1 1 … SPKI hashes). A sender resolves our DNSSEC-signed MX records, jumps to Proton's DNSSEC-signed zone for the TLSA, and refuses delivery if the negotiated cert doesn't match. End-to-end DANE works without records in our zone.
why
Removes the web PKI as a trust anchor for inbound SMTP. A compromised CA cannot issue a fake cert for mail.protonmail.ch and intercept inbound mail without also subverting DNSSEC for protonmail.ch AND for our domain — the two-zone chain is the bind.
tradeoffs
Operational only when MX records point at a Proton MX host whose zone publishes TLSA. Once Proton activation completes for a given stack, the property is automatic: we contribute the DNSSEC-signed MX RRset; Proton contributes the TLSA. Switching MX away from Proton to an MX host that doesn't publish TLSA would silently demote DANE-aware senders to opportunistic TLS — a hidden trust regression — so MX changes must verify TLSA presence on the new host before flipping. Proton also owns the TLSA rotation cadence; an unannounced re-key would temporarily break inbound delivery.
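The matching rule a DANE-aware sender applies reduces to a hash comparison. A minimal Python sketch — spki_der is the DER-encoded SubjectPublicKeyInfo pulled from the presented certificate (extracting it from a live TLS handshake is out of scope here):

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    # TLSA usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256):
    # the record data is simply the SHA-256 of the public key info
    return hashlib.sha256(spki_der).hexdigest()

def dane_match(record_hex: str, spki_der: bytes) -> bool:
    # a DANE-aware sender delivers only if the negotiated cert's
    # SPKI hashes to the DNSSEC-signed TLSA record data
    return tlsa_3_1_1(spki_der) == record_hex.lower()
```

The "3 1 1" prefix Proton publishes selects exactly these parameters; the DNSSEC chain, not the web PKI, vouches for the record itself.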
Tiny-PS SVG logo at /bimi/logo.svg + default._bimi.millsymills.com TXT record advertising it.
why
Surfaces the brand logo in supporting clients (Fastmail, Proton, some Apple Mail) for mail that already passed DMARC alignment. DMARC at p=reject clears the strong-policy precondition.
tradeoffs
No Verified Mark Certificate (VMC) — Gmail and Yahoo will not render the logo without one (~$1.5K/yr issuance cost). Proton and Fastmail render BIMI without a VMC, so the record still earns its keep on supporting clients. Record is published before Proton activation: with the null MX no mail flows, so BIMI is a no-op until the inbox goes live.
GitHub Actions assumes a per-stack IAM role via AssumeRoleWithWebIdentity. The trust policy pins repo:owner/name, branch (main), and the specific workflow file (deploy.yml / deploy-rehearsal.yml) via job_workflow_ref.
why
No long-lived AWS access keys ever touch GitHub. A different (or tampered) workflow on the same branch can't mint the deploy token; a different repo can't either.
tradeoffs
Adding a new deploy workflow requires a Terraform var bump + apply BEFORE pushing the workflow — ci-local.sh checks the referenced file exists so a typo fails locally rather than at AssumeRole time.
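A trimmed sketch of what the trust-policy condition looks like — owner/name is a placeholder, and whether the pin rides on sub or on the job_workflow_ref claim depends on how the OIDC provider's claims are configured:

```json
{
  "Condition": {
    "StringEquals": {
      "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
      "token.actions.githubusercontent.com:job_workflow_ref": "owner/name/.github/workflows/deploy.yml@refs/heads/main"
    }
  }
}
```

A workflow file at any other path, or the same file on another branch or repo, presents a different claim value and fails AssumeRoleWithWebIdentity.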
Every deploy publishes an SPDX SBOM at /.well-known/sbom.spdx.json via anchore/sbom-action. Regenerated on the monthly cron rebuild too.
why
Anyone (you, a downstream consumer, a security researcher) can curl the live SBOM and diff against vulnerability databases without having to clone the repo or trust a third-party scanner.
tradeoffs
Action is pinned to @v0, not a SHA — consistent with the rest of the workflow but worth a future supply-chain hardening sweep.
Each deploy publishes dist.tar.gz, a keyless cosign Sigstore bundle (dist.tar.gz.cosign.bundle — signature, Fulcio short-lived cert, and Rekor inclusion proof in one file), and a SLSA v1.0 build-L3 provenance attestation (dist.tar.gz.intoto.jsonl) under /.well-known/slsa/. The provenance is generated by the slsa-framework/slsa-github-generator reusable workflow — separate job_workflow_ref from the deploy workflow, which is exactly the trusted-builder requirement L3 wants.
why
Anyone can cryptographically verify that a given dist tarball came from this repo, this commit, and this workflow — without trusting anything beyond the GitHub OIDC issuer and the sigstore transparency log. Independent of S3 / CloudFront / DNS posture, which collectively decide what visitors fetch but not who built it.
tradeoffs
Verification command is documented in the workflow file, but visitors have to know the OIDC identity (the workflow file path on refs/heads/main) to run cosign verify-blob correctly. Reusable-workflow version is pinned to a tag (@v2.1.0), not a SHA — consistent with the rest of the workflow but worth a hardening sweep alongside the SBOM action pin.
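For reference, a verify invocation has roughly this shape (cosign v2 flags; owner/name and the workflow path are placeholders — the real identity is the workflow-file ref documented in the deploy workflow):

```
cosign verify-blob dist.tar.gz \
  --bundle dist.tar.gz.cosign.bundle \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  --certificate-identity https://github.com/owner/name/.github/workflows/deploy.yml@refs/heads/main
```

The bundle supplies the signature, the Fulcio cert, and the Rekor inclusion proof in one file, so no network round-trip to the transparency log is strictly required at verify time.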
Both aws_s3_bucket_policy.site and aws_s3_bucket_policy.logs carry an explicit Deny on aws:SecureTransport = false, alongside the existing CloudFront-OAC and log-delivery allows.
why
CloudFront OAC, S3 server access logging, and CloudFront log delivery already use TLS, so the realistic failure mode this guards against is a future IAM principal — compromised or overbroad — reaching the buckets over plain HTTP. Industry-baseline finding flagged by most AWS scanners.
tradeoffs
Same posture applies to the rehearsal stack — both tf.sh millsymills and tf.sh p41m0n plans must show the bucket-policy update before merging changes here.
Daily Lambda polls https://crt.sh for new certificates issued for millsymills.com and SAN-related names; anything not from an allow-listed issuer (default: Amazon) fires an SNS-email alert.
why
CAA stops most rogue issuance up front; CT monitoring catches what slipped through (cooperating CA, weak CAA enforcement, mis-issued cert). Belt and suspenders.
tradeoffs
Stateless 48h-lookback / 24h-schedule = max two alerts per cert. Allow-list is just substring matching on issuer name — narrow-scope by design.
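The allow-list check is a substring filter over crt.sh's JSON (entries carry an issuer_name field). A sketch, with sample data standing in for a real poll:

```python
def unexpected_issuers(entries, allowed=("Amazon",)):
    # flag any cert whose issuer name contains none of the
    # allow-listed substrings — everything else is alert-worthy
    return [e for e in entries if not any(a in e["issuer_name"] for a in allowed)]

sample = [
    {"issuer_name": "C=US, O=Amazon, CN=Amazon RSA 2048 M02"},
    {"issuer_name": "C=US, O=Let's Encrypt, CN=R11"},
]
flagged = unexpected_issuers(sample)
assert [e["issuer_name"] for e in flagged] == ["C=US, O=Let's Encrypt, CN=R11"]
```

Anything the filter returns becomes the body of the SNS email; an empty list means no alert fires.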
The access-log bucket has versioning on, plus a noncurrent-version expiration matching the 90-day current-version retention. A second lifecycle rule sweeps the orphan delete markers that versioning leaves behind.
why
A compromised or overbroad IAM principal cannot silently destroy forensic evidence — overwrites preserve prior versions; deletes insert a delete marker rather than erasing bytes. Recoverable for the same 90-day window the current-version expiration already guarantees.
tradeoffs
Object Lock would be stronger but is only settable at bucket creation time; deferred until the bucket is replaced for another reason. No principal in infra/github_oidc.tf holds s3:DeleteObjectVersion on the logs bucket today, so the standard-compromise path is closed.
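In aws_s3_bucket_lifecycle_configuration terms, the two rules look roughly like this (rule ids are illustrative, not the actual Terraform in infra/):

```hcl
rule {
  id     = "expire-noncurrent-versions"
  status = "Enabled"
  filter {}
  noncurrent_version_expiration {
    noncurrent_days = 90 # mirrors the current-version retention window
  }
}

rule {
  id     = "sweep-orphan-delete-markers"
  status = "Enabled"
  filter {}
  expiration {
    expired_object_delete_marker = true
  }
}
```

The second rule exists because expiring the last noncurrent version leaves a marker-only object behind; without the sweep, the bucket accumulates them indefinitely.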
Standardised contact + encryption fields at /.well-known/security.txt, re-signed on the monthly rebuild so the Expires field never goes stale.
why
A researcher who finds a bug should be able to reach you in seconds, not by guessing emails. The monthly rebuild is the cron that keeps Expires: from silently expiring.
Armored PGP key at /pgp.asc and the WKD binary at /.well-known/openpgpkey/hu/<zbase32> so gpg --locate-keys mills@millsymills.com finds the right key without ever asking a key server.
why
Encrypted contact requires a discoverable key. WKD is the auto-discovery layer — keyservers are not. Both forms ship; consumers pick what their tooling supports.
tradeoffs
Drift between the armored key, the WKD binary, and the fingerprint declared in src/data/pgp.ts is caught by scripts/assert-pgp-consistency.sh in CI.
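The <zbase32> path component is the z-base-32 encoding of the SHA-1 of the lowercased local part — a self-contained sketch of the mapping gpg performs during --locate-keys:

```python
import hashlib

# z-base-32 alphabet used by the Web Key Directory spec
ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def wkd_hash(local_part: str) -> str:
    # SHA-1 the lowercased local part, then emit the 160 bits
    # as 32 z-base-32 characters, most-significant bits first
    digest = hashlib.sha1(local_part.lower().encode("utf-8")).digest()
    bits = int.from_bytes(digest, "big")
    return "".join(ZBASE32[(bits >> (5 * i)) & 0x1F] for i in range(31, -1, -1))
```

The binary key for mills@millsymills.com then lives at /.well-known/openpgpkey/hu/ followed by wkd_hash("mills").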
Branch protection on main requires a verified signature on every commit (required_signatures.enabled = true, enforce_admins = true); GitHub rejects an unsigned push with GH006: Protected branch update failed -- Commits must have verified signatures. The rule is codified in Terraform (github_branch_protection_v3.main) so terraform plan surfaces any silent UI toggle-off as drift. CONTRIBUTING.md documents the SSH signing setup contributors run once: reuse the GitHub auth key, git config gpg.format ssh, register a Signing Key in GitHub Settings, verify with git log --show-signature.
why
Branch protection bypassed via stolen credentials becomes visibly broken: pushes without a signature get rejected at the remote, and squash-merges through the GitHub UI use GitHub's own signing key. Provenance of every new change on main has a rooted chain to the signer's identity.
tradeoffs
Existing pre-rule history on main stays unsigned; no force-push backfill. Direct CLI pushes to main (rare; PR squash-merge is the merge path) require the contributor's host to have signing wired up; squash-merges from the GitHub UI are auto-signed by GitHub regardless.
The mailbox address on /mail/ is decrypted client-side after a ~16K-iteration SHA-256 PoW (~150–800 ms in a web worker).
why
Keeps the address out of static HTML so casual scrapers don't get a free mailto. Real humans wait less than a second; bulk scrapers don't spend the CPU.
tradeoffs
Determined scrapers will eat the CPU cost; PoW raises cost, doesn't eliminate it. Address is also published in clear in security.txt and PGP UID anyway — by design, since researchers should be able to reach you.
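The site's actual client-side scheme isn't reproduced here; a minimal Python sketch of the iterated-hash idea — derive a key by ~16K chained SHA-256 rounds, then use it to unmask the stored ciphertext. XOR stands in for whatever cipher the page really uses, and the seed and address are made up:

```python
import hashlib

def pow_key(seed: bytes, iterations: int = 16_384) -> bytes:
    # chained SHA-256: near-free for one human visitor,
    # expensive when multiplied across a scraper's crawl
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

def xor_mask(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call masks and unmasks
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = pow_key(b"per-page-salt")
ciphertext = xor_mask(b"mail@example.com", key)  # what would ship in the HTML
assert xor_mask(ciphertext, key) == b"mail@example.com"
```

The page ships only the ciphertext and the salt; the mailto link materializes once the worker finishes the rounds.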
No analytics, no cookies, no fingerprinting, no tag managers, no third-party scripts. Self-hosted fonts. Static HTML + CSS + a little JavaScript served from CloudFront.
why
The privacy page can only make a "we don't track you" claim if there's nothing to track you with. Removing the surface is the strongest possible posture.
Two build-time CI lints: (a) every localStorage/sessionStorage key written by src/scripts/ must be documented on the privacy page; (b) every /fonts/<file> referenced in src/ must ship at dist/fonts/<file> and dist/ must contain zero fonts.googleapis.com / fonts.gstatic.com references.
why
The privacy page is a load-bearing claim of accuracy. A drift bug ("you said local, code uses session") or a stray Google Fonts link would silently make the page wrong. The lints turn the runtime invariant into a CI failure.
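Lint (a) is essentially a regex sweep over the script sources, diffed against the documented key list. A sketch with hypothetical key names (the real lint and its key inventory live in the repo's CI scripts):

```python
import re

# matches localStorage.setItem('key', ...) and the sessionStorage variant
SET_ITEM = re.compile(r"(?:localStorage|sessionStorage)\.setItem\(\s*['\"]([^'\"]+)['\"]")

def undocumented_keys(source: str, documented: set) -> set:
    # every key the scripts write must appear on the privacy page;
    # anything left over fails the build
    return set(SET_ITEM.findall(source)) - documented

src = "localStorage.setItem('theme', t); sessionStorage.setItem('powCache', c);"
assert undocumented_keys(src, {"theme"}) == {"powCache"}
```

Lint (b) is the same idea inverted: a grep for fonts.googleapis.com / fonts.gstatic.com in dist/ that must come back empty.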
CloudFront access logs (90-day TTL, no processing) — shipped
Standard CloudFront access logs (URL, IP, user-agent, timestamp, status code) land in a private S3 bucket and auto-expire after 90 days. No further processing, no profile-building.
why
Logs exist so outages are debuggable; nothing more. The lifecycle policy means there's no archive to subpoena, leak, or accidentally retain.
roadmap — tracked but not yet live; ships when the prerequisite (Proton activation, audit pass, etc.) clears.
MTA-STS (RFC 8461) — roadmap
Publishes _mta-sts.<domain> TXT "v=STSv1; id=…" and serves a policy at https://mta-sts.<domain>/.well-known/mta-sts.txt listing the Proton MX hosts as the only valid SMTP endpoints. Sending MTAs that respect MTA-STS upgrade opportunistic TLS to enforced TLS for inbound mail.
why
MTA-STS blocks passive downgrade attacks on inbound SMTP that DNSSEC + DANE alone don't cover for senders that don't implement DANE (most large providers ship MTA-STS; DANE adoption is narrower). Visible control that peer MTAs can observe via HTTPS, complementing the DNSSEC-rooted DANE chain.
blocker / tradeoff
Phase 1 ships mode: testing on the rehearsal stack (p41m0n.com) so senders log policy mismatches via TLS-RPT but still deliver; reversible. Phase 2 promotes to mode: enforce after 2–4 weeks of clean TLS-RPT reports showing policy-type: sts, and to millsymills.com after the cutover. Reversal in enforce mode is asymmetric: publish mode: none AND wait out max_age BEFORE removing the discovery TXT, otherwise enforcing senders refuse delivery during the rollback window.
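For reference, the Phase 1 policy body served at https://mta-sts.<domain>/.well-known/mta-sts.txt would look like this (max_age illustrative):

```
version: STSv1
mode: testing
mx: mail.protonmail.ch
mx: mailsec.protonmail.ch
max_age: 604800
```

Promoting to Phase 2 changes only the mode line to enforce plus a bump of the id in the _mta-sts TXT record so senders refetch the policy.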
Strict CSP with per-request nonces — roadmap
CloudFront Function injects a per-request nonce, replacing the 'unsafe-inline' concession in style-src (and any inline scripts) with 'nonce-XXX'.
why
Closes the remaining XSS-via-injected-style vector and removes the only weak link in the current CSP allow-list.
HSTS preload-list submission — roadmap
Submit millsymills.com to https://hstspreload.org/ for inclusion in the browser-shipped preload list.
why
Closes the first-visit TLS-stripping window for browsers that haven't yet seen the HSTS header.
blocker / tradeoff
Submission is a manual one-time step. The header is already shipping with preload, so the eligibility check passes.
headers inspector
fetches the site's own response headers from your browser and grades them against the controls listed in security.txt. every check happens client-side; no third party sees your request.
TLS connection
fetched from /api/tls/inspect — a Lambda behind CloudFront that reads the cloudfront-viewer-tls header and reports the protocol, cipher, and SNI negotiated between your browser and CloudFront.
grades reflect the active CloudFront response-headers policy. drift between this readout and security.txt is a bug — file an issue.