dispatch_log · DISPATCH_010 · PRODUCT · Filed Apr 13, 2026 · Operator: Sophie Laurent · 9 min read

Introducing Site Health Monitoring: catch SEO issues before they sink your rankings

Publishing great content is only half the battle — the other half is making sure Google can actually read what you ship. Today we're rolling out continuous SEO monitoring: scheduled sitemap scans, seven health rules running on every URL, a 0–100 health score, and email alerts when something breaks.

A few months ago, one of our earliest customers — a B2B SaaS company with about 340 blog posts — watched their organic traffic drop 22% over six weeks. Nothing in their analytics pointed to a clear culprit. No algorithm update. No big content changes. They were still publishing. The rankings just… slipped.

When we dug in, the story took ten minutes to unravel. Their marketing team had migrated their image CDN the previous quarter. The migration went smoothly — except for 47 blog posts that ended up with http:// image URLs instead of https://. Google flagged those pages as mixed content. Rankings decayed quietly. No one on their team noticed because everything looked fine from the browser.

That is exactly the kind of failure Site Health Monitoring is built to catch. Today we're rolling it out to every VectraSEO customer.

Why we built this

We started VectraSEO as a content generation tool. Write competitor-aware blog posts, schedule them, publish to whatever CMS you use. That part of the product works. But over the first year, the same conversation kept happening with customers on calls: "My new posts rank fine. It's the older stuff that's decaying, and I can't tell why."

The honest answer is that most SEO problems aren't about content quality. They're about plumbing. Redirects that chain four hops deep. Sitemaps that list pages marked noindex. 404s that went live on a Tuesday and stayed broken until the marketing director hit one on a Friday. None of this shows up in a content brief. None of it gets caught by a writer. And most teams only find out after the traffic drop shows up in a monthly dashboard, weeks after the damage started.

So we built the other half of the loop: continuous monitoring that runs while you sleep.

What it does, in plain English

You point a monitor at your sitemap. That's setup. From there, VectraSEO crawls up to 200 URLs per scan, runs seven SEO health rules on every one, and emits a list of issues tagged as critical, warning, or info. Every issue has a URL, a rule name, a human-readable message, and — where relevant — evidence (the broken redirect chain, the actual <meta name="robots"> value, the mixed-content asset URL).
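
Concretely, each issue is a small record. The field names below are illustrative rather than our exact schema, but the shape is the point: everything you need to reproduce and fix the problem travels with the issue.

    # One issue from a hypothetical scan; field names are illustrative, not our exact schema.
    issue = {
        "url": "https://example.com/blog/pricing-strategies",
        "rule": "redirect_chain",
        "severity": "warning",
        "message": "URL reaches its destination only after 3 redirect hops",
        "evidence": "/blog/pricing-strategies -> /blog/pricing -> /pricing -> /pricing/",
    }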

Scans run daily or weekly, on whatever schedule you pick. If a scan turns up new critical or warning issues that weren't in the previous scan, we email you. Not a dashboard. Not a Slack bot you'll mute in two weeks. An actual email that lands in the inbox of whoever owns SEO on your team, with the diff written out.

And we attach a single number — a 0–100 health score — that summarizes the monitor's state. Priya on our engineering team wrote a whole post on why that number is weighted the way it is, but the short version: critical issues cost you 15 points, warnings cost 5, info costs 1, and we clamp at zero. The math is deliberately simple so you don't have to trust us on the weights.
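
If you want that arithmetic spelled out, here is a minimal sketch of the scoring, using the weights above (the function itself is illustrative, not our production code):

    SEVERITY_WEIGHTS = {"critical": 15, "warning": 5, "info": 1}

    def health_score(severities: list[str]) -> int:
        """0-100 score: subtract 15 per critical, 5 per warning, 1 per info, floor at zero."""
        penalty = sum(SEVERITY_WEIGHTS[s] for s in severities)
        return max(0, 100 - penalty)

    # Two criticals, three warnings, and four info issues: 100 - 30 - 15 - 4 = 51.
    assert health_score(["critical"] * 2 + ["warning"] * 3 + ["info"] * 4) == 51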

The seven rules

We shipped seven rules at launch. Not seventy. Not a settings panel with fifty toggles. Seven rules, each of which has caught something expensive on a real customer site during our beta.

1. Broken status codes. Any URL in your sitemap that returns 4xx or 5xx. This is the most obvious rule and also the one that trips people up the most — it's shocking how many sitemaps in the wild list deleted pages.

2. Redirect chains. URLs that require more than one redirect hop to reach their destination. Google will follow two or three, but each hop dilutes link equity and makes you look sloppy. We flag anything with two or more redirects, and we show you the full chain.

3. Missing meta. No <title>. No <meta name="description">. Or a title that's 180 characters long because someone forgot the character limit exists. These are warnings, not critical — but an index full of them is a slow bleed.

4. Noindex in sitemap. If your sitemap lists a URL, but the page itself says <meta name="robots" content="noindex">, you are sending Google contradictory signals. Googlebot will obey the noindex, but every crawl of a page you've told it not to index is crawl budget spent for nothing. We flag this as critical.

5. Mixed content. HTTPS pages that load HTTP assets — images, scripts, stylesheets. Browsers block some mixed content outright and downgrade user trust signals on the rest. This was the rule that caught the 22% traffic drop in our opening anecdote.

6. Slow response. Server response time over 2 seconds. This isn't full-page load — we don't render with a headless browser (yet). It's time-to-first-byte from a warm origin. If your origin is slow on a vanilla crawler request, it's slow for Googlebot.

7. SPA placeholder. Content that looks like a JavaScript-rendered single-page app served a skeleton. We detect this by looking for telltale signs: empty <body>, absurdly low text-to-HTML ratio, content that doesn't match the title. If Googlebot is getting a <div id="app"></div>, your SEO is not happening.

Every rule emits a severity. Critical means "this is actively hurting you now." Warning means "this will hurt you as it accumulates." Info means "consider this; it's not urgent." We pick those severities deliberately and we'll revisit them as we learn more.
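
To make "rule" concrete: each one is roughly a small function that takes a fetched page and returns zero or more issues. Here is a deliberately minimal sketch of the mixed-content check; it shows the shape of the idea rather than our production code, and the severity choice in it is illustrative.

    import re

    # Matches src="http://..." attributes (images, scripts, iframes). A fuller check
    # would also cover <link rel="stylesheet" href=...> and url() references in CSS.
    _HTTP_SRC = re.compile(r"""\bsrc\s*=\s*["'](http://[^"']+)["']""", re.IGNORECASE)

    def check_mixed_content(page_url: str, html: str) -> list[dict]:
        """Return one issue per plain-http asset referenced from an https page."""
        if not page_url.startswith("https://"):
            return []
        return [
            {
                "url": page_url,
                "rule": "mixed_content",
                "severity": "critical",  # illustrative; pick the severity that fits your weights
                "message": "HTTPS page loads an insecure http:// asset",
                "evidence": asset_url,
            }
            for asset_url in _HTTP_SRC.findall(html)
        ]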

The architecture, briefly

Under the hood, monitors dispatch to the same SQS FIFO queue we use for content generation — just with a different job type. A Lambda worker picks up a monitor_scan job, fetches the sitemap, runs the seven rules over each URL with bounded concurrency, writes issues to DynamoDB, computes the health score, and — if there are new critical or warning issues compared to the previous scan — dispatches an email.
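
One piece of that flow made concrete, since "fetches the sitemap" hides a little work: a minimal, stdlib-only sketch of what pulling URLs out of a sitemap and applying the 200-URL cap can look like. A production crawler needs more than this (sitemap index files, retries, gzip handling), but the shape is the same.

    import urllib.request
    from xml.etree import ElementTree

    def fetch_sitemap_urls(sitemap_url: str, cap: int = 200) -> list[str]:
        """Fetch a sitemap and return up to `cap` <loc> URLs (plain urlset only)."""
        with urllib.request.urlopen(sitemap_url, timeout=10) as resp:
            root = ElementTree.fromstring(resp.read())
        locs = [el.text.strip() for el in root.iter() if el.tag.endswith("loc") and el.text]
        return locs[:cap]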

Priya made a decision early that I want to highlight: rules are pure functions of a URL and its response. They don't share state. They don't talk to each other. Each rule has about 60–100 lines of Python and can be tested in isolation with a recorded HTTP response. That made it easy to add a rule during the beta when a customer flagged a gap — mixed content, as it turned out, wasn't in the original set. It took half a day to add.
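
That testability claim is easy to show. Take the mixed-content sketch from earlier: a "recorded HTTP response" is just text checked into the test suite, so the whole test runs with no network and no AWS in sight.

    RECORDED_HTML = """
    <html><head><title>Pricing</title></head>
    <body><img src="http://cdn.example.com/hero.png"></body></html>
    """

    def test_mixed_content_flags_http_asset():
        issues = check_mixed_content("https://example.com/pricing", RECORDED_HTML)
        assert len(issues) == 1
        assert issues[0]["evidence"] == "http://cdn.example.com/hero.png"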

The boring architecture part matters because SEO monitoring is the kind of product that dies by a thousand flaky edge cases. We chose simple over clever. Reruns are idempotent. If a scan fails halfway through, you lose the partial scan, not the data. If you add a monitor on a sitemap with 10,000 URLs, we cap the scan at 200 and tell you that we capped it — we don't silently time out.

The alerting, specifically

Here is what I want to tell you about alerting, because every SEO tool ships alerts and most of them are useless: we only email you on new critical and warning issues. Not "your site has issues." Not a weekly digest. Not a dashboard reminder. If Scan #47 turns up an issue that wasn't in Scan #46, you get an email. If the same issue persists across scans, you don't get re-emailed. If an issue resolves, we note it in the next scan's diff but we don't email a victory message.
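
The diff itself is nothing exotic. Conceptually it's a set difference keyed on the issue's URL and rule; here is a sketch (the keying choice is the illustrative part, not a spec of our storage format):

    def new_issues(current: list[dict], previous: list[dict]) -> list[dict]:
        """Issues present in this scan but absent from the previous one, keyed by (url, rule)."""
        seen_before = {(i["url"], i["rule"]) for i in previous}
        return [i for i in current if (i["url"], i["rule"]) not in seen_before]

    def should_email(current: list[dict], previous: list[dict]) -> bool:
        """Alert only when something new is critical or warning, never on repeats."""
        return any(i["severity"] in ("critical", "warning")
                   for i in new_issues(current, previous))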

This was probably the biggest design argument we had internally. Marcus wanted weekly summaries. Alex wanted Slack integration. I pushed for the minimal version — new issues only, by email — because alert fatigue is how monitoring tools die, and we can always add more later. We'll see if I was right.

What it does not do (yet)

Being specific about limitations is part of shipping honestly, so:

  • No JavaScript rendering. We fetch HTML with a standard HTTP client. If your page is client-rendered, the SPA placeholder rule will flag it, but we won't crawl what the JS would have rendered. Headless rendering is on the roadmap.
  • No Core Web Vitals. That requires a different kind of infrastructure (real-user or lab measurements). Not in v1.
  • 200 URLs per scan cap. Enterprise-scale sites with 50K+ URLs in their sitemap need sampling, and sampling is a harder problem than we wanted to solve in v1. We'll get there.
  • No custom rules. Seven rules, chosen deliberately. A custom-rule framework is interesting but it makes the "what does my score mean" question much harder to answer.

How to turn it on

Go to your project. Click the Monitors tab. Paste your sitemap URL. Pick daily or weekly. That's the whole setup.

The first scan starts immediately. After that it runs on your chosen schedule. You can also kick off a manual scan at any time from the monitor page — useful if you just shipped something and want to confirm you didn't break anything.

If you're on the free tier, you get one monitor. Pro gets five. Enterprise gets unlimited. Scan history is retained for 90 days; issues persist until resolved.

A closing thought

The thing I keep coming back to with SEO tooling is that the industry has spent twenty years building reporting tools — dashboards, audits, spreadsheet exports, weekly PDFs. Reporting is valuable, but reporting is not action. By the time a monthly audit catches a mixed-content issue that's been live for six weeks, the damage is already in your rankings.

What we want VectraSEO to be is the thing that runs while you are doing something else. The watchman on the wall. The radar that sweeps at 3 a.m. and wakes someone up only when something has actually changed. Content generation was the first half of that. Monitoring is the second. Together they're the loop — write, publish, watch, fix — and we think that loop is what the industry has been missing.

Turn it on. Tell us what breaks. We're listening.

[ END_OF_DISPATCH ]
Sophie Laurent
Head of Product — VectraSEO

Field reports filed by operators who actually run the system. If something in this dispatch is wrong, tell us — dispatch@vectraseo.com.