Manual ITGC testing doesn't scale — this platform does
Spreadsheets don't enforce methodology. Ad-hoc tools can't reproduce results. Reviewers spend hours reverse-engineering how conclusions were reached. This platform exists because ITGC testing deserves real software, whether you run external audits for clients, internal SOX 404 testing, or contract-based IT audit projects.
Built for auditors, by auditors
The workflow, methodology, and documentation register come from people who have spent years inside ITGC engagements — testing controls, triaging exceptions, defending conclusions to reviewers. The platform reflects how audits actually run, not how a generic compliance tool models them.
The problem with manual ITGC testing
Most ITGC testing still runs on spreadsheets and ad-hoc tools. The result: inconsistent output, unreproducible sampling, and reviewers who can't tell how conclusions were reached.
- No enforced methodology — quality depends on who runs it
- No reproducibility — same inputs, different workpapers
- No audit trail — conclusions float without evidence links
- Hours lost to formatting — copy-paste errors, manual cross-references
What makes this platform different
Three architectural decisions that set it apart from spreadsheets and generic tools.
- Evidence-first — AI tests run only on samples with mapped evidence, and no control can be concluded without an evidence-grounded test result. Code-level pre-matching links evidence to samples before the AI evaluates anything, which reduces fabrication risk.
- AI transparency — every result stores the model version, input/output tokens, confidence score, match-quality tag, rationale, and quoted evidence. These outputs support your firm's AS 1215 documentation review obligations; your firm determines documentation sufficiency.
- Reproducible — SHA-256-seeded Fisher-Yates sampling: the same population and the same seed always produce the same sample set, with verification built into the platform (see the sketch below).
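A minimal sketch of what SHA-256-seeded Fisher-Yates sampling can look like, assuming a string seed phrase and Python's random.Random as the generator; the platform's actual seed derivation and PRNG are not specified here, and the function name is illustrative.

```python
import hashlib
import random

def seeded_sample(population: list[str], seed_phrase: str, sample_size: int) -> list[str]:
    """Deterministic sampling: a SHA-256 digest of the seed drives a Fisher-Yates shuffle."""
    # Derive a reproducible integer seed from the seed phrase.
    digest = hashlib.sha256(seed_phrase.encode("utf-8")).digest()
    rng = random.Random(int.from_bytes(digest, "big"))

    # Classic Fisher-Yates shuffle over a copy of the population.
    items = list(population)
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)  # uniform over [0, i]
        items[i], items[j] = items[j], items[i]

    # Same population + same seed = same sample set, every run.
    return items[:sample_size]

# Hypothetical usage: seeded_sample(["INV-001", "INV-002", "INV-003"], "FY25-AC-01", 2)
```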
Sampling is deterministic; AI outputs are probabilistic and require professional review before reliance.
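Given the transparency fields listed above, a stored AI result might be shaped roughly like this; every field name and value is illustrative rather than the platform's actual schema.

```python
# Illustrative per-result transparency record; all names and values are hypothetical.
ai_test_result = {
    "model_version": "model-2025-06",   # exact model that produced the result
    "input_tokens": 1842,
    "output_tokens": 311,
    "confidence_score": 0.93,           # probabilistic, hence reviewed before reliance
    "match_quality": "exact",           # tag from code-level evidence pre-matching
    "rationale": "Change ticket shows approval dated before deployment.",
    "evidence_quotes": [
        {"evidence_id": "EV-207", "excerpt": "Approved by IT manager on 2025-03-02"},
    ],
}
```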
Every control moves through an eight-step workflow:
1. Population
2. Sampling
3. Expectations
4. Evidence
5. Testing
6. Exceptions
7. Quality Review
8. Review
(Operations controls add a scoping step. Change controls replace Expectations with Traceability.)
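A compact sketch of what enforcing this step order could look like, assuming a simple linear state model; step names follow the list above, the advance function is hypothetical, and the operations and change-control variants are omitted.

```python
from enum import IntEnum

class Step(IntEnum):
    POPULATION = 1
    SAMPLING = 2
    EXPECTATIONS = 3   # replaced by Traceability for change controls
    EVIDENCE = 4
    TESTING = 5
    EXCEPTIONS = 6
    QUALITY_REVIEW = 7
    REVIEW = 8

def advance(current: Step, completed: set[Step]) -> Step:
    """Allow moving forward only when the current and all earlier steps are complete."""
    missing = [s.name for s in Step if s <= current and s not in completed]
    if missing:
        raise ValueError(f"cannot advance past {current.name}: incomplete {missing}")
    return Step(min(current + 1, Step.REVIEW))
```

An incomplete step blocks everything after it, which is the sense in which the workflow is enforced rather than merely documented.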
Data handling with clear boundaries
Evidence is stored in your dedicated tenant. A mutual NDA is available on request as part of scoping. You control your own export and deletion cadence; tenant-scoped content is deleted per MSA Schedule B after termination.
Trust principles
Four commitments that define how customer data is handled on the platform.
- Least data — only what's required for the tests you configure
- Tenant-isolated storage — evidence remains in your dedicated tenant
- Customer-controlled lifecycle — 30-day post-termination export window via the 13-section pack; the platform provider (Bonfleur s.r.o.) runs a revoke-and-delete cadence per MSA Schedule B; the platform is not a long-term archive on your firm's behalf
- Full traceability — conclusions tie back to evidence and criteria
Technical edge
Seeded SHA-256 sampling means the same population always produces the same sample set — verifiable, reproducible, defensible. Every AI test result stores the model version, confidence score, quoted evidence excerpts, and full rationale. Every conclusion traces back to specific evidence. Every action is logged in an append-only audit trail that cannot be modified after the fact. Sign-off requires all 11 validation gates to pass, among them: testing complete, QC acknowledged, AI results reviewed, exceptions resolved, evidence linked, and coverage thresholds met. The platform enforces methodology, not just documents it.
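A compressed sketch of how all-gates-must-pass sign-off could be modeled, using three of the gates named above; the gate names, predicates, and Engagement shape are assumptions, and the real platform checks 11 gates.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal illustrative engagement state; the real model is richer.
@dataclass
class Engagement:
    tests_concluded: bool
    open_exceptions: int
    unlinked_samples: int

@dataclass
class Gate:
    name: str
    check: Callable[[Engagement], bool]

# Three of the 11 sign-off gates; names and predicates are hypothetical.
GATES = [
    Gate("testing_complete", lambda e: e.tests_concluded),
    Gate("exceptions_resolved", lambda e: e.open_exceptions == 0),
    Gate("evidence_linked", lambda e: e.unlinked_samples == 0),
]

def can_sign_off(e: Engagement) -> tuple[bool, list[str]]:
    """Sign-off stays blocked until every gate passes; failed gates are listed by name."""
    failures = [g.name for g in GATES if not g.check(e)]
    return (len(failures) == 0, failures)
```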
See what the platform produces
Browse a real workpaper output — same methodology, same evidence chains, same format your reviewers will see.