AI Governance

How We Use AI Responsibly

SafeGuardGRC uses artificial intelligence to help CPA firms build and maintain their compliance programs. This page explains exactly how we use AI, what data is involved, and the safeguards we have in place.

  • Transparency: We tell you exactly what AI does and what data it sees
  • Human Oversight: AI assists; your Qualified Individual decides
  • Data Protection: Your data is never used to train AI models
  • Accountability: Every AI interaction is logged and auditable

1. Our AI Provider

SafeGuardGRC uses Anthropic’s Claude API as our AI provider. Anthropic is a U.S.-based AI safety company. We access Claude exclusively through their commercial API, which operates under fundamentally different terms than consumer AI products.

Key API Commitments from Anthropic

  • Zero training on API data: Anthropic does not use data submitted through their commercial API to train or improve their models
  • No data retention for model improvement: API inputs and outputs are not stored by Anthropic for the purpose of training future models
  • Commercial data protection: API traffic is governed by Anthropic’s commercial terms of service, which include enterprise-grade data protection commitments

For full details, see Anthropic’s Commercial Terms and Privacy Policy.

2. Where AI Is Used in SafeGuardGRC

AI is integrated into specific parts of the SafeGuardGRC platform to accelerate compliance work that would otherwise take hours of manual effort. Here is exactly where AI is involved:

Risk Assessment Analysis

What it does: Generates executive summaries and CISO-level reviews based on your assessment responses

Data involved: Assessment answers, firm profile (size, services, data handling practices)

Document Generation (WISP & IRP)

What it does: Creates customized Written Information Security Policies and Incident Response Plans

Data involved: Firm profile, team contacts, technology details, risk assessment findings

Evidence Evaluation

What it does: Reviews uploaded screenshots and documents to evaluate whether controls meet compliance requirements

Data involved: Uploaded evidence images, control test descriptions, evaluation criteria

Remediation Planning

What it does: Generates action plans with prioritized steps to address compliance gaps

Data involved: Assessment findings, current control status, firm context

Training Personalization

What it does: Adapts compliance training content to your firm’s specific profile and risk areas

Data involved: Firm profile, identified risk areas, training module context

Insurance Questionnaire Assistance

What it does: Helps complete cyber insurance applications using your existing compliance data

Data involved: Aggregated compliance posture, assessment responses, control status

3. What AI Does NOT Do

It is just as important to understand the boundaries of how we use AI:

  • AI does not make compliance decisions. All AI outputs are recommendations and starting points. Your Qualified Individual (QI) is responsible for reviewing, approving, and finalizing all compliance determinations.
  • AI does not access your data autonomously. AI processing is triggered only when you take a specific action in the platform (e.g., generating a document, running an assessment).
  • AI does not have access to your clients’ data. SafeGuardGRC processes your firm’s compliance posture, not the personal data of your firm’s clients.
  • AI does not share data between firms. Each firm’s data is completely isolated. One firm’s assessment data is never included in another firm’s AI processing.
  • AI does not store conversation history. Each AI request is independent — there is no persistent memory or context carried between separate AI operations.

4. How Your Data Is Protected

In Transit to AI

All data sent to Anthropic’s API is encrypted using TLS 1.2 or higher. Data travels directly from our servers to Anthropic’s API endpoints — it does not pass through intermediaries or third-party proxies.

Data Minimization

We send only the data necessary for each specific AI task. Before each API call, our system filters and trims the data to include only the fields relevant to that particular operation. Empty or non-applicable fields are excluded.
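
As a sketch only (the task names and field names below are invented for illustration, not SafeGuardGRC’s actual schema), a per-task allow-list filter of this kind might look like:

```typescript
// Hypothetical sketch of per-task payload trimming. Task and field names
// are illustrative, not SafeGuardGRC's actual schema.
type FirmProfile = Record<string, string | number | null | undefined>;

// Assumed allow-lists: the only fields each AI task may see.
const TASK_FIELDS: Record<string, string[]> = {
  riskSummary: ["firmSize", "services", "dataHandling"],
  documentGeneration: ["firmName", "teamContacts", "techStack"],
};

// Keep only the allow-listed, non-empty fields for the given task.
function minimizePayload(task: string, profile: FirmProfile): FirmProfile {
  const trimmed: FirmProfile = {};
  for (const key of TASK_FIELDS[task] ?? []) {
    const value = profile[key];
    if (value !== null && value !== undefined && value !== "") {
      trimmed[key] = value;
    }
  }
  return trimmed;
}
```

The allow-list approach means a new profile field is excluded from AI requests by default until it is explicitly added to a task’s list.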

Firm Isolation

Every AI request processes data from a single firm only. Row-level security at the database layer ensures that Firm A’s data can never be included in an AI request made on behalf of Firm B. This isolation is enforced at the infrastructure level, not just the application level.
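
To illustrate the invariant only: in practice, row-level security of this kind is configured in the database itself (e.g., Postgres policies), not in application code, but the effect is that every query is scoped to exactly one firm taken from the authenticated session. A hypothetical TypeScript sketch:

```typescript
// Illustrative only: the real enforcement happens at the database layer
// (row-level security), not in application code like this.
interface Session { firmId: string }
interface AssessmentRow { firmId: string; answers: string[] }

const allRows: AssessmentRow[] = [
  { firmId: "firm-a", answers: ["uses MFA"] },
  { firmId: "firm-b", answers: ["encrypts backups"] },
];

// The firm id comes from the authenticated session, never from request
// input, so an AI request can only ever see its own firm's rows.
function rowsForAiRequest(session: Session): AssessmentRow[] {
  return allRows.filter((row) => row.firmId === session.firmId);
}
```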

No Persistent AI Storage

AI-generated outputs (summaries, documents, evaluations) are stored in your SafeGuardGRC account within our encrypted database. They are not stored separately by Anthropic. The AI provider does not retain your prompts or responses after processing.

5. Data Processing Location

SafeGuardGRC’s infrastructure is hosted in the United States via Vercel and Supabase. AI processing through Anthropic’s API also takes place in the United States. All data involved in AI operations — from your SafeGuardGRC database to Anthropic’s API and back — remains within U.S.-based infrastructure.

6. AI Audit Trail

Every AI interaction in SafeGuardGRC is logged. Our audit trail records:

  • Which firm the request was made for
  • What operation was performed (e.g., assessment summary, document generation, evidence evaluation)
  • When the request occurred
  • Token usage and cost for billing transparency
  • Which AI model processed the request

These logs are maintained for operational integrity and billing purposes. They do not contain the full content of AI inputs or outputs — only metadata about each interaction.
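
A metadata-only record of this kind could be sketched as follows (field names are assumptions for illustration, not SafeGuardGRC’s actual schema):

```typescript
// Illustrative, metadata-only audit record. Note there is no field for
// prompt or response text.
interface AiAuditRecord {
  firmId: string;       // which firm the request was made for
  operation: string;    // e.g. "assessment_summary", "document_generation"
  occurredAt: string;   // when the request occurred (ISO 8601)
  model: string;        // which AI model processed the request
  inputTokens: number;  // token usage, for billing transparency
  outputTokens: number;
}

// Build the log entry from a completed request. The prompt and response
// content are deliberately never part of the record.
function buildAuditRecord(
  firmId: string,
  operation: string,
  model: string,
  usage: { inputTokens: number; outputTokens: number }
): AiAuditRecord {
  return {
    firmId,
    operation,
    occurredAt: new Date().toISOString(),
    model,
    inputTokens: usage.inputTokens,
    outputTokens: usage.outputTokens,
  };
}
```

Because the record type has no content fields, the audit trail can be retained and reviewed without re-exposing any compliance data.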

7. Human Oversight Model

SafeGuardGRC is built on the principle that AI should accelerate compliance work — not replace the judgment of qualified professionals. Our human oversight model ensures that:

  • AI outputs are always reviewable. Every document, summary, and evaluation generated by AI is presented to the user for review before it becomes part of the compliance record.
  • No autonomous actions. AI does not submit reports, send notifications, or modify your compliance status without explicit user action.
  • Professional responsibility remains with you. As stated in our Terms of Service, SafeGuardGRC provides personalized starting points for compliance documentation — not legal or professional advice. AI-generated content should be reviewed by your Qualified Individual.

8. AI Subprocessor

For the purposes of vendor risk assessments and data processing agreements, Anthropic is a subprocessor of SafeGuardGRC. Below is a summary:

  • Provider: Anthropic, PBC
  • Service: Claude API — AI-powered text analysis and generation
  • Data Processed: Firm compliance data, assessment responses, evidence images (as described in Section 2)
  • Processing Location: United States
  • Data Retention by Provider: No retention of API inputs/outputs for training
  • Training on Customer Data: No — explicitly excluded under commercial API terms

9. Your Rights Regarding AI Processing

You have control over how AI is used with your data:

  • Right to know: This page discloses all AI processing activities. If you have additional questions, contact us at any time.
  • Right to review: All AI-generated content is visible and editable within your SafeGuardGRC account before being finalized.
  • Right to deletion: Upon account closure, all your data — including AI-generated documents and assessment results — is deleted within 30 days per our Privacy Policy.
  • Right to data export: You can export your compliance documents and firm data at any time from your account settings.

10. Changes to Our AI Practices

If we make material changes to how AI is used in SafeGuardGRC — such as introducing new AI features, changing providers, or expanding the categories of data processed by AI — we will update this page and notify active subscribers via email at least 30 days before such changes take effect. The date at the bottom of this page indicates when this policy was last updated.

Questions?

If you have questions about our AI practices, need information for a vendor risk assessment, or want to discuss our AI governance in more detail, contact us at any time.

Last Updated: April 5, 2026
