Behind the Scenes

How Our Security Scanning Works

Our Security & Trust dashboard isn't a marketing page — it's a live view of automated scans that run against our production environment every night. Here's exactly how it works.

The Pipeline

Every night at 2:00 AM, a scheduled CI/CD pipeline kicks off automatically. No human triggers it, no one curates the results. Here's what happens:

1. Scheduled trigger

Our GitLab CI pipeline fires on a nightly schedule. It targets the same production code and production URLs that our customers use — not a staging environment or a sanitized copy.

2. Five jobs run in parallel: four scans plus platform verification

Each scan runs in its own isolated container. Four test external-facing security: encryption quality, HTTP security headers, dependency vulnerabilities, and application-level risks. A fifth job verifies that our platform security controls — MFA, RBAC, RLS, encryption at rest, and CI scanning — are still active in both the codebase and production database.

3. Results are normalized and stored

A publish script reads each tool's raw output, extracts only the summary-level data (grades, pass/fail, counts), and stores it in our database. The full scan reports are stored privately for our team to review — they are never exposed publicly.
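A minimal sketch of that normalization step, in TypeScript: the `summarize` function and `ScanSummary` shape are hypothetical names for illustration, not the actual publish script. The point is that only summary-level fields cross the boundary into the public database.

```typescript
// Illustrative sketch of the normalization step; names and shapes are
// assumptions, not the real publish script.
type ScanSummary = {
  tool: string;
  grade?: string; // e.g. "A+" from SSL Labs or Observatory
  passed?: boolean; // e.g. ZAP baseline pass/fail
  counts?: Record<string, number>; // e.g. { critical: 0, high: 0 }
  ranAt: string; // ISO timestamp of the scan run
};

// Copy out only summary-level fields; everything else stays in the
// private full report and is never published.
function summarize(tool: string, raw: any, ranAt: Date): ScanSummary {
  const summary: ScanSummary = { tool, ranAt: ranAt.toISOString() };
  if (typeof raw.grade === "string") summary.grade = raw.grade;
  if (typeof raw.passed === "boolean") summary.passed = raw.passed;
  if (raw.counts) summary.counts = { ...raw.counts };
  return summary; // no finding details, URLs, or CVE IDs pass through
}
```

Because the function builds a fresh object rather than filtering the raw report, a new field in a tool's output can never leak to the dashboard by accident.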

4. Dashboard updates automatically

The /security page reads the latest scan results from the database each time it loads. No caching tricks, no manual updates. If a scan produces a worse result tomorrow, the dashboard will reflect it immediately.
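The read path can be sketched in the same spirit: pick the most recent result per tool, and flag it stale if it is too old. The 48-hour threshold, row shape, and function names below are illustrative assumptions, not the actual /security page code.

```typescript
// Illustrative sketch, not the real dashboard query.
type Result = { tool: string; ranAt: string };

// Keep only the newest row for each tool. ISO 8601 timestamps sort
// chronologically as strings, so plain comparison is enough.
function latestPerTool<T extends Result>(rows: T[]): Map<string, T> {
  const latest = new Map<string, T>();
  for (const row of rows) {
    const prev = latest.get(row.tool);
    if (!prev || row.ranAt > prev.ranAt) latest.set(row.tool, row);
  }
  return latest;
}

// A result older than maxAgeHours triggers the stale-data warning.
function isStale(ranAt: string, now: Date, maxAgeHours = 48): boolean {
  return now.getTime() - new Date(ranAt).getTime() > maxAgeHours * 3600_000;
}
```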

The Tools

We deliberately chose well-known third-party and open-source tools so results can be independently verified. None of these are proprietary scoring systems we control.

Qualys SSL Labs

by Qualys, Inc.

Performs a deep analysis of our SSL/TLS configuration — cipher suites, protocol versions, certificate chain, and known vulnerabilities. This is the same test you can run yourself at ssllabs.com.

Published: Letter grade (A+ through F) and certificate expiry date

Withheld: Detailed cipher suite list and server configuration

Mozilla Observatory

by Mozilla Foundation

Audits our HTTP security headers — Content Security Policy, Strict Transport Security, X-Frame-Options, and more. Scores on a 0–100+ scale (bonus points for extra hardening beyond the baseline).

Published: Letter grade and numeric score

Withheld: Individual header test breakdown and specific policy values

Trivy

by Aqua Security

Scans our codebase dependencies (npm packages) for known vulnerabilities listed in public CVE databases. Catches issues in third-party libraries before they become a problem.

Published: Count of critical and high severity findings (e.g., '0 critical, 0 high')

Withheld: Package names, versions, specific CVE identifiers
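Tallying Trivy output into the published counts can be sketched as below. The field names follow Trivy's JSON report format (`Results` → `Vulnerabilities` → `Severity`), but treat the exact shape as an assumption here; the function itself is illustrative, not our pipeline code.

```typescript
// Assumed subset of Trivy's JSON report shape.
type TrivyReport = {
  Results?: { Vulnerabilities?: { Severity: string }[] }[];
};

// Tally only the two severities we publish; everything else stays in
// the private report.
function severityCounts(report: TrivyReport): { critical: number; high: number } {
  let critical = 0;
  let high = 0;
  for (const result of report.Results ?? []) {
    for (const vuln of result.Vulnerabilities ?? []) {
      if (vuln.Severity === "CRITICAL") critical++;
      else if (vuln.Severity === "HIGH") high++;
    }
  }
  return { critical, high }; // published as e.g. "0 critical, 0 high"
}
```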

OWASP ZAP

by OWASP Foundation

Runs a passive baseline scan against our production site — checking for common web application risks like missing headers, information leakage, and insecure configurations. This is a non-destructive scan (it observes, it doesn't attack).

Published: Pass/Fail status and count of high-risk findings

Withheld: Specific URLs tested, finding details, remediation guidance
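Deriving the published Pass/Fail from a ZAP baseline report might look like the sketch below. It assumes ZAP's JSON report shape (`site` → `alerts` → `riskcode`, where `"3"` denotes High risk); both the shape and the function are illustrative, not our actual job.

```typescript
// Assumed subset of ZAP's JSON report shape.
type ZapReport = { site?: { alerts?: { riskcode: string }[] }[] };

// Pass only when no High-risk alerts were raised; lower-risk alerts
// are reviewed privately but don't fail the published status.
function zapSummary(report: ZapReport): { passed: boolean; highRisk: number } {
  let highRisk = 0;
  for (const site of report.site ?? []) {
    for (const alert of site.alerts ?? []) {
      if (alert.riskcode === "3") highRisk++;
    }
  }
  return { passed: highRisk === 0, highRisk };
}
```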

Platform Security Controls

Beyond scanning for external vulnerabilities, our pipeline also verifies that core security controls are actually in place — not just documented, but present in running code and the production database. Each control is checked automatically every night.

These aren't static claims on a marketing page. If someone removes an RBAC middleware check or disables RLS on a table, the next nightly run will flag it.

Multi-Factor Authentication

How we verify

Our CI pipeline searches the codebase for MFA enrollment flows, challenge and verification logic in the login form, MFA enforcement on password changes, and MFA settings in the user dashboard. All five checkpoints must pass.

What it means

Users can enable TOTP-based MFA. The enrollment, challenge, and verification code paths are confirmed present in the production codebase.
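A hypothetical version of that codebase search: given file contents, confirm each required MFA code path is present. The checkpoint names and regex patterns below are illustrative stand-ins, not the actual CI job's patterns.

```typescript
// Illustrative checkpoints; the real CI job's patterns differ.
const checkpoints: Record<string, RegExp> = {
  enrollment: /mfa[._-]?enroll/i,
  challenge: /mfa[._-]?challenge/i,
  verify: /mfa[._-]?verify/i,
  passwordChangeGuard: /requireMfa.*passwordChange|passwordChange.*requireMfa/i,
  dashboardSettings: /mfa[._-]?settings/i,
};

// Return the names of any checkpoints not found anywhere in the
// given file contents; an empty array means all checks pass.
function missingCheckpoints(files: Record<string, string>): string[] {
  const sources = Object.values(files).join("\n");
  return Object.entries(checkpoints)
    .filter(([, pattern]) => !pattern.test(sources))
    .map(([name]) => name);
}
```

A grep-style presence check like this can't prove the code paths work, only that they haven't been deleted, which is exactly the regression it is meant to catch.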

Role-Based Access Control

How we verify

The pipeline checks for the roles module, server-side permission enforcement functions (requireAuth, getUserRole), and API route protection. It also queries the production database to confirm the firm_users table exists with its role column.

What it means

Access is controlled by role (owner, admin, member). The authorization logic exists in code and the supporting database structure is live in production.
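The role check itself can be sketched as a rank comparison across the three roles named above. This assumes a strict owner > admin > member hierarchy; `hasRole` and the numeric ranks are hypothetical, while `requireAuth` and `getUserRole` are the names the pipeline actually looks for.

```typescript
// Assumed role hierarchy: owner > admin > member.
type Role = "owner" | "admin" | "member";
const rank: Record<Role, number> = { owner: 3, admin: 2, member: 1 };

// True when the user's role meets or exceeds the required role.
function hasRole(userRole: Role, required: Role): boolean {
  return rank[userRole] >= rank[required];
}
```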

Row-Level Security

How we verify

Rather than checking migration files, the pipeline queries the live Supabase database to verify that key tables are accessible via the service role. We check 20+ tables that are expected to have RLS policies enforcing firm-level data isolation.

What it means

Each firm's data is isolated at the database level. Even if application code has a bug, the database itself prevents cross-firm data access.
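The nightly RLS check can be sketched as probing each expected table and reporting any that fail. The `probe` signature is a stand-in for the real database query; table names and function names are assumptions.

```typescript
// Illustrative sketch of the nightly RLS verification loop.
// `probe` stands in for the real query against the live database.
async function checkRls(
  tables: string[],
  probe: (table: string) => Promise<boolean>, // true if the table's RLS check passes
): Promise<string[]> {
  const failing: string[] = [];
  for (const table of tables) {
    if (!(await probe(table))) failing.push(table);
  }
  return failing; // empty array means every expected table passed
}
```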

Encryption at Rest

How we verify

The pipeline fetches Supabase's public documentation pages and confirms they still reference AES-256 encryption for data at rest. This verifies our infrastructure provider's commitment hasn't changed.

What it means

All data stored in our Supabase-hosted PostgreSQL database is encrypted on disk using AES-256, managed by the infrastructure provider.

CI/CD Security Scanning

How we verify

The pipeline verifies its own configuration — confirming that SAST (static analysis), secret detection, dependency scanning, and the nightly dashboard scans are all defined in the CI configuration file.

What it means

Every code push is automatically scanned for vulnerabilities, exposed secrets, and insecure dependencies before it reaches production.
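The pipeline's self-check reduces to confirming that required job names still appear in the CI configuration. The job names below are illustrative placeholders, not our actual job identifiers.

```typescript
// Illustrative job names; the real CI configuration uses its own.
const requiredJobs = [
  "sast",
  "secret_detection",
  "dependency_scanning",
  "nightly_dashboard_scans",
];

// Return any required job missing from the CI configuration text;
// an empty array means the self-check passes.
function missingJobs(ciConfig: string): string[] {
  return requiredJobs.filter((job) => !ciConfig.includes(job));
}
```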

Transparency With Boundaries

We believe in being transparent about our security posture, but transparency doesn't mean exposing details that could help an attacker. Here's the principle we follow:

“Publish the grade, not the answer sheet.”

What We Publish

  • Letter grades from independent third-party tools
  • Pass/fail status for each scan category
  • Summary counts (e.g., “0 critical vulnerabilities”)
  • When each scan last ran
  • Stale data warnings if scans haven't run recently
  • Which tools we use and who makes them

What We Withhold

  • Full scan reports with finding details
  • Specific CVE identifiers or package versions
  • URLs or endpoints where issues were found
  • Server configuration or infrastructure details
  • Remediation timelines or patch status
  • Any scan result that failed (shown generically instead)

Verify It Yourself

You don't have to take our word for it. Two of the four tools we use are publicly accessible — you can run them against our site right now and compare with what we publish:

  • Qualys SSL Labs: the same ssllabs.com test described above
  • Mozilla Observatory: Mozilla's free, public header scanner

Trivy and OWASP ZAP require access to our codebase or server to run, so those can't be independently verified from outside. The grades from Qualys and Mozilla, however, should match what's on our dashboard within the normal scan caching window.

Questions About Our Security?

If you're evaluating SafeGuardGRC and have questions about our security practices, we're happy to discuss them directly.
