Why A.8.33 Is Suddenly Critical for Game Maths and RNG

ISO 27001:2022 A.8.33 is critical for game maths and RNG because it treats test information as high‑risk, regulated data. The control expects you to decide exactly what goes into test environments, classify it and protect it through its full lifecycle. For studios and gambling operators, that means RTP tables, configuration packs and RNG internals in QA are no longer harmless working files; they become assets your ISMS must manage as carefully as production systems. Test information in gaming is no longer background noise: if maths, RTP values or RNG mappings leak from QA, attackers can model your games, competitors can copy your designs and regulators may question fairness. Regulators and test labs increasingly ask how you protect maths and RNG internals outside production, not just in the final certified build, so weak non‑production controls quickly become licensing, revenue and reputation risks.

Strong QA turns test information from a quiet liability into a visible strength.

Game maths and RNG: your real “test information”

In gambling and RNG‑driven games, the most sensitive and valuable test information is often the maths itself rather than player records. In practice, that means assets such as:

  • Pay tables, symbol weights and reel strips
  • RTP curves and volatility profiles
  • Progressive jackpot rules and seed values
  • RNG implementations, seeding strategies and mappings of random outputs to outcomes

Together these artefacts describe exactly how your games behave and how value flows through them, so they deserve the same level of control as any other crown‑jewel asset.

If these details leak from a test environment, attackers can model your games, regulators may challenge fairness and competitors can clone your designs. A.8.33 expects you to recognise these assets as test information and protect them accordingly, even when they only appear in non‑production systems.

Test environments have become the soft underbelly

Test and QA environments in gambling are attractive targets because they often combine rich maths and configuration data with weaker security controls. Many studios run several non‑production environments that lag behind production in patching, monitoring and access management. A.8.33 brings these environments formally into scope, so you treat QA as part of your security boundary rather than a convenient side channel where attackers or insiders can steal maths or influence fairness.

Modern studios and operators commonly run developer sandboxes, automated test rigs, staging, UAT, external certification labs and vendor test setups. These environments often:

  • Are patched and monitored less rigorously than production
  • Rely on shared accounts or broad database access
  • Contain copies of production data or configurations made “just for testing”

These weaknesses create exactly the sort of soft underbelly attackers look for when they cannot easily breach hardened production systems.

Attackers know that breaching a permissive QA cluster can be easier than breaching a live environment, yet still yields game maths, RTP profiles and test harness outputs. Treating those assets as in scope for A.8.33 helps you close that gap before someone else exploits it.

A quick disclaimer

Nothing here is legal or regulatory advice; it is practical guidance to help you understand A.8.33 and design better controls for your studio or operation. For decisions about standards, regulation or licences you should involve your legal, compliance and audit advisers, and align with any specific requirements from your regulators and test labs.



Where Test Information Really Lives in a Games Studio or Operator

A.8.33 is much easier to apply once you know exactly where test information appears across your studio or operation. In gambling this goes well beyond a single “test database” and includes design artefacts, configuration files, copied production samples, and logs or outputs from tools and labs. Mapping how these move between teams and environments shows where maths, RNG assets and quasi‑production data accumulate, so you can bring them formally into your ISMS and assign owners and protections. You cannot protect what you have not identified, so the first real task under A.8.33 is mapping test information. In gaming, that means going beyond broad labels and pinpointing exactly where maths, RNG‑related assets and near‑production data appear during QA; once you see the full picture, risky patterns and weak spots stop being invisible and start to become manageable.

Mapping assets across the QA lifecycle

Mapping test information across the QA lifecycle helps you see where maths, configurations and data are created, copied and stored. In practice, tracing one or two flagship titles from design through build, QA, external testing and certification reveals how often spreadsheets, configuration packs, data exports and logs move across tools and teams. Each hop creates new test information that falls within A.8.33’s scope and needs a defined owner, classification and protection level.

Work through one or two flagship titles and trace how information moves from design to certification:

  • Design and modelling:

Game design documents, spreadsheets, balancing tools and simulation outputs that often sit in shared drives or collaboration tools and are copied into test or lab packs.

  • Build and configuration:

Configuration files for RTP, paylines, symbol weights, jackpot parameters and bonus triggers that are bundled into builds, deployed to test servers or exported in plain text for debugging.

  • Data used in testing:

Player‑like datasets, transactional logs, telemetry samples and support dumps brought into QA “just for realism”, even when names and IDs are stripped.

  • Outputs of testing:

Logs, screenshots, crash dumps, RNG test harness outputs and certification reports that can contain seeds, sequences and internal state information.

Every time information crosses a boundary – from maths team to QA, from QA to an external lab, from support to developers – you create a new piece of test information that falls within A.8.33’s scope.

Typical leakage routes in QA

Identifying typical leakage routes in QA helps you focus on the handful of patterns that create most of the risk. Once you chart real projects, the same routes appear again and again, usually driven by time pressure or convenience. A.8.33 effectively asks you to spot these patterns, rate their confidentiality and integrity risk, and then treat them like any other ISMS risk rather than inevitable side‑effects of delivery.

When you map real projects, some common risk routes show up repeatedly:

  • Database snapshots taken from production and restored into QA with minimal masking
  • Verbose logging in test builds that prints internal odds, RNG outputs or configuration values
  • Spreadsheets with pay tables and balancing formulas shared in email threads or chat attachments
  • Copies of test packs left in cloud storage or on local laptops long after a project ends

Once you identify these patterns, you can start tackling them systematically rather than relying on ad‑hoc fixes after each scare.
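The verbose‑logging route above is often the cheapest to close. A minimal sketch of a log‑redaction filter follows; the field names (`rng_output`, `rtp`, `symbol_weight`) are illustrative assumptions, so swap in whatever your engine actually logs:

```python
import re

# Patterns for values that should never appear in test-build logs.
# The field names here are illustrative, not from any real engine.
SENSITIVE_PATTERNS = [
    re.compile(r"(rng_output\s*=\s*)\S+"),
    re.compile(r"(rtp\s*=\s*)\S+"),
    re.compile(r"(symbol_weight\s*=\s*)\S+"),
]

def redact(line: str) -> str:
    """Replace sensitive values in a log line with a fixed placeholder."""
    for pattern in SENSITIVE_PATTERNS:
        line = pattern.sub(r"\1[REDACTED]", line)
    return line

print(redact("spin id=42 rng_output=0.7731 rtp=96.5 symbol_weight=12"))
# spin id=42 rng_output=[REDACTED] rtp=[REDACTED] symbol_weight=[REDACTED]
```

Running a filter like this in the logging layer of test builds means a leaked log file no longer hands over internal odds or generator output.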

Turning your map into a risk view

Turning your map into a risk view allows you to show that QA is formally inside your management system. From an ISO 27001 perspective, the output should be more than a mental picture; you want traceable assets, owners and recorded risks so auditors and regulators can see how test information is handled. The more this evidence falls out of your normal way of working, the less painful audits and licence reviews become.

Useful outputs include:

  • An updated asset inventory listing key test information items, including maths and RNG artefacts
  • A risk register that explicitly recognises test environments and information as sources of confidentiality and integrity risk
  • Clear ownership: who is responsible for each category of test information, including selection, protection and disposal

If you prefer to keep this picture in one place rather than across scattered documents, a structured ISMS platform such as ISMS.online can help you maintain inventories, ownership and risks in a way that stays aligned with A.8.33 as your games and environments evolve.








Choosing Safe Test Information: Production, Masked, Synthetic and Maths

Choosing safe test information under A.8.33 starts with deliberate selection rather than copying whatever is quickest from production. In gaming organisations there are two main dimensions: whether you rely on real or synthetic data for players and transactions, and how much of your game maths and RNG internals you expose in each environment. Clear rules for both make later design, risk and audit conversations far easier. The first word in A.8.33’s requirement is “selected”: test information must be chosen deliberately, not inherited by accident, so you decide when synthetic data is sufficient, when tightly masked samples are justified and how far maths and RNG assets should travel beyond your core systems. When selection decisions are explicit, you can justify them to auditors and regulators instead of defending one‑off exceptions.

Principles for selecting player and transaction data

Good principles for selecting player and transaction data in QA help you move away from full production clones. Regulators and privacy frameworks increasingly treat non‑production use of personal data as risky, so you need to be able to explain what you used, why it was needed and how it was protected and removed. That does not make realistic QA impossible; it simply demands more care and documentation.

A sensible baseline for QA and test under A.8.33 is:

  • Prefer synthetic data:

Generate realistic but fictitious accounts, sessions and bet histories so test coverage reflects production patterns without using real customers.

  • Mask when you must copy:

When you need production‑derived data, remove direct identifiers and generalise quasi‑identifiers to reduce the chance of re‑identification.

  • Minimise the data footprint:

Pull only the fields and time windows you genuinely need for a given test instead of cloning whole databases.

  • Document justification:

Record why production‑derived data was used, who approved it, how it was masked and when it will be removed.

These practices align with A.8.33 and with privacy‑oriented regulations that treat non‑production use of personal data as an area requiring clear justification.
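As a sketch of the “synthetic data first” principle, the generator below produces fictitious but production‑shaped player records. The field names, value ranges and `SYN-` prefix are all illustrative assumptions rather than a schema from any real platform:

```python
import random

def synthetic_players(n: int, seed: int = 0) -> list:
    """Generate fictitious player records with a production-like shape.
    Field names and value ranges are illustrative only."""
    rng = random.Random(seed)  # fixed seed keeps test runs reproducible
    return [
        {
            "player_id": f"SYN-{i:06d}",  # prefix marks records as synthetic
            "country": rng.choice(["GB", "MT", "SE", "DE"]),
            "bets_placed": rng.randint(1, 500),
            "total_staked": round(rng.uniform(1.0, 5000.0), 2),
        }
        for i in range(n)
    ]

players = synthetic_players(1000)
print(players[0]["player_id"])  # SYN-000000
```

Seeding the generator deterministically means two QA runs see identical datasets, which supports reproducible bug reports without ever touching real customers.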

Treating game maths as a special class of test information

Game maths and RTP/RNG details behave more like cryptographic keys or trading algorithms than ordinary test data, so they warrant stricter rules. While privacy laws focus on individuals, gambling regulators and test labs focus on fairness and integrity, which depend directly on how these assets are handled. Your selection approach for maths and RNG should therefore be considerably more conservative than for generic player‑like data.

Game maths and RTP/RNG details deserve a more cautious stance:

  • Assume maths and RNG assets are crown‑jewel IP:

Keep them inside a tightly controlled core and avoid exposing raw values on end‑user devices or broadly accessible systems.

  • Expose behaviour, not implementation:

Let testers validate outcomes and distributions, for example through APIs that return expected RTP bands, rather than sharing underlying calculation sheets.

  • Use reduced‑fidelity maths in low‑risk environments:

Run lower‑tier QA environments with representative but not exact RTP and volatility, reserving true values for higher‑tier environments and certification labs.

  • Avoid casual exports:

Design tools and processes so people rarely need to export maths or RNG details into local files or spreadsheets.

Selecting test information in this way can feel like a culture shift, but once teams have practical patterns to follow it quickly becomes routine.
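The “expose behaviour, not implementation” pattern can be sketched as a thin QA‑facing facade. Everything below is hypothetical: the game ID, the ±0.5% tolerance band and the internal RTP value stand in for whatever your secure core actually holds:

```python
from dataclasses import dataclass

# Internal maths stays server-side; testers never see this value directly.
_INTERNAL_RTP = 0.965  # illustrative true RTP held in the secure core

@dataclass(frozen=True)
class RtpBand:
    lower: float
    upper: float

def expected_rtp_band(game_id: str) -> RtpBand:
    """What a QA-facing endpoint might return: a tolerance band for
    validating measured RTP, never the pay table or exact value.
    The +/-0.5% band is an illustrative tolerance, not a standard."""
    return RtpBand(lower=round(_INTERNAL_RTP - 0.005, 3),
                   upper=round(_INTERNAL_RTP + 0.005, 3))

band = expected_rtp_band("starfall-7s")
print(band.lower <= 0.9651 <= band.upper)  # True -- measured RTP is in band
```

Testers can assert that a simulated session’s measured RTP falls inside the band, which proves behaviour without ever exporting the underlying maths.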

Comparing common test‑data choices

Comparing common test‑data choices side by side helps teams understand why some options create far more risk than others even if they feel convenient. A simple view covering personal data and maths assets supports decisions such as using synthetic player data by default, masking narrowly when needed and treating maths or RNG assets as a separate high‑sensitivity category in your ISMS.

Test data type           | Contains real personal data?     | Main risk focus
Production clone         | Yes                              | Privacy and IP
Masked production data   | Partially                        | Re‑identification
Synthetic test data      | No                               | Coverage quality
Maths/RNG configurations | No players, but high IP content  | Fairness and game cloning

This comparison backs a more disciplined selection strategy without undermining realistic testing.




Designing QA Environments That Are Both Secure and Realistic

Designing QA environments that are both secure and realistic means mimicking production behaviour while enforcing clear security and data boundaries. A.8.33 does not require you to cripple QA; it requires you to make it deliberate, segmented and well controlled so that maths, RNG internals and any personal data are handled in predictable ways. Done well, this reassures internal stakeholders, test labs and regulators that fairness is protected throughout the lifecycle, not only in the final release. The challenge in gambling is to set up environments that catch real‑world issues without turning every non‑production system into “almost production” in risk terms; you want clear rules for what each environment may contain, how it is accessed and how logs, dumps and data copies are handled so regulators see a designed system rather than improvised patches.

Building on a DTAP‑style environment model

A DTAP‑style environment model gives you a simple language for embedding A.8.33 decisions into everyday practice. Everyone understands Development, Test, Acceptance and Production; the key is defining what levels of player data, maths fidelity and access controls are acceptable in each. That prevents slow drift, where every environment fills up with near‑production data and configurations “just for convenience”.

A common pattern in mature organisations is to adopt a DTAP lifecycle:

  • Development – individual sandboxes and feature branches
  • Testing – shared QA environments for integration and regression
  • Acceptance – pre‑production, used by business stakeholders and sometimes regulators
  • Production – live systems with real players and money

Under A.8.33 you should decide, for each level:

  • Which kinds of player data are allowed, such as synthetic only, masked samples or none at all
  • What level of maths and configuration fidelity is required to test effectively
  • Who may access the environment and through which identity and access mechanisms
  • How logs and dumps are retained, redacted and destroyed

Naming these decisions explicitly stops every environment gradually turning into “almost production” from a risk point of view and makes your approach much easier to explain during audits.
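One way to make those per‑tier decisions executable is a small policy table that a CI step can consult before any data or configuration lands in an environment. The tier names follow the DTAP model above; the data classes are illustrative labels, not a standard taxonomy:

```python
# Allowed test-information classes per DTAP tier -- an illustrative policy,
# not a prescription; tune the categories to your own classification scheme.
ALLOWED = {
    "development": {"synthetic"},
    "testing":     {"synthetic", "reduced_fidelity_maths"},
    "acceptance":  {"synthetic", "masked", "full_maths"},
    "production":  {"live"},
}

def check_placement(environment: str, data_class: str) -> bool:
    """Guard a CI or deployment step could call before moving data
    or configuration into an environment."""
    return data_class in ALLOWED.get(environment, set())

assert check_placement("testing", "synthetic")
assert not check_placement("development", "masked")  # masked data blocked in dev
```

Encoding the policy once and enforcing it in automation prevents the slow drift the text describes, because a “just for convenience” copy simply fails the gate.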

Separating sensitive logic from everyday testing

Separating sensitive logic from everyday testing lets QA validate behaviour without exposing the engine. In practice, this means hiding maths and RNG internals behind well‑designed services while exposing controlled test behaviours. A.8.33 becomes far easier to satisfy when testers work through stable interfaces rather than direct access to source code or raw tables.

A secure and realistic architecture for gambling QA usually involves:

  • Backend services for maths and RNG:

Game clients and test harnesses call services that encapsulate maths and random‑number generation, keeping internal details server‑side behind strong access control.

  • Test‑specific endpoints and toggles:

QA users trigger scenarios such as near‑miss bonuses, jackpot approaches or long losing streaks via controlled test interfaces rather than editing internal values.

  • Data pipelines with built‑in masking:

Any movement of production‑derived data into test passes through pipelines that automatically mask and filter fields according to defined rules.

  • Network and identity segmentation:

Test environments sit in separate networks with dedicated identity and access management, and access is granted per role and per environment.

With this design, testers still see everything they need to validate fairness, performance and game feel, but they do so through controlled lenses rather than raw internals.
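A masking stage in such a pipeline might look like the sketch below, assuming illustrative field names; a real pipeline would add generalisation of quasi‑identifiers and log each transformation:

```python
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "address"}  # dropped outright
PSEUDONYMISED = {"player_id"}                      # replaced with a stable hash

def mask_record(record: dict, salt: str) -> dict:
    """One stage of a prod-to-test pipeline: drop direct identifiers,
    pseudonymise linking keys, pass everything else through.
    Field names are illustrative."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # never copied into QA
        if field in PSEUDONYMISED:
            # salted one-way hash: stable within a test cycle, not reversible
            out[field] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
        else:
            out[field] = value
    return out

row = {"player_id": "P-991", "name": "Ann Example", "bets": 14}
masked = mask_record(row, salt="cycle-42")
print(sorted(masked))  # ['bets', 'player_id'] -- the name is gone
```

Rotating the salt per test cycle keeps pseudonyms consistent within a cycle (so joins still work) while preventing linkage across cycles.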








Protecting Proprietary Game Maths and RNG Logic in Practice

Protecting proprietary game maths and RNG logic in practice means handling them like other core security components rather than like ordinary test data. A.8.33 is particularly relevant here because these assets combine high commercial value with direct impact on fairness. The goal is to let people do their jobs without needing to handle more detail than their role genuinely requires. Once your environments are structured, you still need day‑to‑day guardrails about how much of the engine you expose. A.8.33 does not list game‑specific requirements, but its intent aligns closely with how you would protect any sensitive algorithm or cryptographic component, and if you can show that maths and RNG logic are controlled to a similar standard, auditors and regulators are far more likely to accept your approach.

Reducing how much testers need to know

Reducing how much testers and external partners need to know about your internals is one of the most effective ways to lower risk without reducing coverage. A.8.33 is much easier to satisfy if each role is consciously designed around what they must observe and control versus what they never need to see. That distinction directly limits what can be stolen or reconstructed if a device is lost or an account is misused.

A practical approach is to ask, for each role:

  • What do they need to observe? For example, outcomes, win rates and distributions.
  • What do they need to control? For example, test seeds, start states and feature toggles.
  • What do they never need to see? For example, source code, detailed tables and long‑term secrets.

You can then design:

  • Black‑box test suites: that specify expected behaviours and result ranges, not formulas
  • Controlled seed management: so QA can reproduce issues without knowing long‑term production seeds
  • Statistical validation tools: that compare outputs against expected distributions without exposing raw intermediate values

This mirrors common fairness‑testing practice: labs and regulators care more about whether the RNG is demonstrably fair and unpredictable than about having a copy of the full implementation.
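A statistical validation tool of this kind can be as simple as a Pearson chi‑square check on outcome counts, which tells QA whether draws look uniform without revealing anything about the generator’s internals. The symbol count, draw count and significance threshold below are illustrative:

```python
import random

def chi_square(observed, expected):
    """Pearson chi-square statistic comparing observed outcome counts
    per symbol against the expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative check: 10 equally weighted symbols over 100,000 draws.
rng = random.Random(2024)  # fixed seed so the check is reproducible
draws = 100_000
counts = [0] * 10
for _ in range(draws):
    counts[rng.randrange(10)] += 1

stat = chi_square(counts, [draws / 10] * 10)
# For 9 degrees of freedom, the 1% critical value is about 21.67;
# a fair generator should land below it in the vast majority of runs.
print(round(stat, 2))
```

The tester sees only outcome counts and a pass/fail statistic; pay tables, weights and seeding strategy stay behind the service boundary.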

Engineering controls for maths and RNG assets

Engineering controls make a “least‑knowledge” model stick under pressure and translate A.8.33 into concrete behaviour. By combining strict code and secret management with sensible monitoring, you can show that maths and RNG assets are handled with the same care as any other core security component. That is exactly the kind of story auditors and regulators expect to hear in a mature operation.

To protect maths and RNG assets in practice:

  • Keep maths libraries, RTP tables and RNG implementations in version‑controlled repositories with strict role‑based access
  • Store secrets and seeds in dedicated secret‑management systems, not in configuration files or source code
  • Ensure test builds for contractors and external labs do not contain debug switches that reveal internal state or allow arbitrary exports
  • Instrument services and repositories with monitoring so unusual read, export or clone patterns trigger review

In effect, you treat game maths and RNG logic like cryptographic keys: tightly limited access, strong segregation and good telemetry around their use. A.8.33 then becomes a natural extension of your general security design rather than a bolt‑on.




Working Safely With External Testers, Labs and Contractors

Working safely with external testers, labs and contractors under A.8.33 means extending your test‑information controls beyond your own walls. Many gaming organisations rely on third parties for QA, certification and specialist testing, and regulators want to know that maths, RNG internals and any personal data remain protected when that happens. Demonstrating that your controls travel with your information is now a core part of both security and licensing conversations. In practice, this means treating external access as part of your test‑information lifecycle rather than a special case: you still decide what information is selected, how it is protected and when it is removed; the only difference is that some of the work happens on someone else’s infrastructure. When those expectations are written down, enforced and reviewed, regulators and partners are far more comfortable.

Designing external‑facing test environments

Designing external‑facing test environments as deliberately constrained outer rings allows third parties to work effectively without seeing more than they need. Under A.8.33 you should aim to give external testers enough access to validate behaviour, performance and compliance, while preventing broad visibility of internal state or long‑term sensitive assets. That usually means dedicated environments, tightly scoped access profiles and carefully mediated interfaces.

When external parties are involved, a secure pattern typically includes:

  • Dedicated environments: for external access, separate from internal QA and from production
  • Strict roles: such as “external lab tester” or “external QA” that grant only the permissions and data needed for agreed tasks
  • Brokered access: to maths and RNG behaviour via APIs or controlled tools, not direct database or file access
  • Time‑bound accounts and approvals: so access automatically expires when projects or contracts end

This architecture keeps the relationship straightforward: external parties see and interact with the game as needed, but never gain broad visibility of internals or the ability to copy large volumes of test information.
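Time‑bound access can be sketched as a simple grant register with automatic expiry. In practice this logic would live in your IAM tooling rather than application code, and the party and environment names here are invented:

```python
from datetime import date

class AccessRegister:
    """Minimal register of external-party grants with automatic expiry.
    A sketch only -- real deployments would sit behind IAM tooling."""

    def __init__(self):
        self._grants = {}  # (party, environment) -> expiry date

    def grant(self, party: str, environment: str, expires: date) -> None:
        self._grants[(party, environment)] = expires

    def is_active(self, party: str, environment: str, today: date) -> bool:
        expiry = self._grants.get((party, environment))
        return expiry is not None and today <= expiry

reg = AccessRegister()
reg.grant("cert-lab-a", "external-uat", expires=date(2025, 6, 30))
print(reg.is_active("cert-lab-a", "external-uat", today=date(2025, 6, 1)))  # True
print(reg.is_active("cert-lab-a", "external-uat", today=date(2025, 7, 1)))  # False
```

Because every grant carries an expiry, the register doubles as the access record auditors ask for: who could reach which environment, and until when.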

Contracts, onboarding and ongoing assurance

Contracts, onboarding and ongoing assurance make sure your technical expectations are understood and followed by external partners. A.8.33 naturally overlaps with supplier‑management and outsourcing controls in ISO 27001, so you can reuse many of the same patterns you already apply for production services. The goal is to make expectations about test information explicit, monitored and revisited.

Helpful practices include:

  • Contracts and statements of work that spell out expectations for test information, including classification, handling rules, storage locations, retention and disposal
  • Onboarding for external testers that includes security and confidentiality briefings specific to game maths and RNG protection
  • A register showing which external parties have access to which environments and what kind of test information each receives
  • Periodic reviews or attestations confirming that partners still meet your standards and have not created uncontrolled copies of data or maths artefacts

Treating external QA and labs as extensions of your own control environment – rather than separate silos – makes it much easier to demonstrate conformity with A.8.33 during audits and licence renewals.




ISMS.online supports over 100 standards and regulations, giving you a single platform for all your compliance needs.





Proving to Auditors and Regulators That You Satisfy A.8.33

Proving to auditors and regulators that you satisfy A.8.33 is as important as designing good controls in the first place. ISO 27001 is about being able to show what you do, not just doing it, and A.8.33 is no exception. Auditors and regulators will look for coherent definitions, consistent processes and tangible evidence that test information is selected, protected and removed in line with policy. Good evidence turns difficult questions into short conversations; when you can calmly show how test information was chosen, masked, used and deleted for a real game, trust rises and audit stress falls, and the same artefacts also support fairness and integrity reviews for game maths and RNG even when regulators never mention the control number.

What auditors typically look for

Auditors assessing A.8.33 usually start with how you define test information and scope, then follow the trail into environments, processes and records. In gaming, they quickly focus on how you identify maths and RNG‑related assets as test information, what test environments contain and how any non‑production use of production‑derived data is justified. Clear answers, backed by artefacts, shorten conversations and build trust.

When assessing A.8.33 in a gaming context, internal or external auditors will usually want to see:

  • Policy and standards: that mention test information explicitly, including maths and RNG‑related assets
  • Environment diagrams: showing clear segregation between development, test, acceptance and production, with notes on what kinds of data and configurations each holds
  • Procedures: for test data selection, masking, approval and disposal
  • Access control records: indicating who can reach sensitive test information and how those rights are reviewed
  • Examples: of test cycles where you can trace the journey of data and maths from selection through to removal

If you also have regulatory obligations, the same evidence will support fairness and integrity reviews, demonstrating that your control over maths and RNG extends beyond production binaries.

Making evidence capture part of normal work

Making evidence capture part of everyday work is the most sustainable way to stay ready for ISO audits and regulatory reviews. If approvals, masking steps and access reviews are logged automatically as you work, you avoid the last‑minute scramble to reconstruct what happened. This approach also surfaces gaps earlier, when they are cheaper and less embarrassing to fix.

Practical approaches include:

  • Change tickets for creating or refreshing test environments that include data‑selection and masking steps
  • Pipelines for moving data between environments that log approvals and transformations
  • Access‑review activities conducted on test systems as well as production, with outputs stored centrally
  • Incidents and near‑misses related to test information that generate follow‑up actions and playbook updates
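A minimal sketch of evidence capture as a side effect of normal work: each pipeline step appends a structured record to an audit sink. The event name, fields and in‑memory sink are illustrative stand‑ins for your real ISMS or log platform:

```python
import json
from datetime import datetime, timezone

def log_evidence(event: str, details: dict, sink: list) -> dict:
    """Append a structured evidence record as part of normal pipeline work.
    In practice the sink would be an ISMS or log platform, not a list."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "details": details,
    }
    sink.append(json.dumps(record))
    return record

trail = []
log_evidence(
    "test_data_refresh",
    {"environment": "qa-2", "source": "masked_snapshot",
     "approved_by": "qa-lead"},  # illustrative role, not a real approver
    trail,
)
print(len(trail))  # 1
```

When refresh jobs, masking steps and approvals all emit records like this, the audit trail assembles itself and the pre‑audit scramble disappears.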

An ISMS platform such as ISMS.online can help by linking controls, risks, policies and evidence in one place. Instead of scrambling before each audit, you have an always‑on view of how A.8.33 is being met across your studio or operation and can show that to auditors and regulators whenever they ask.




Book a Demo With ISMS.online Today

ISMS.online helps you turn ISO 27001 A.8.33 from a potential liability into a demonstrable strength across your QA environments, game maths assets and test data. By pulling policies, risks, controls and evidence into one structured system, you gain a clear view of where test information lives, who owns it and how it is protected throughout its lifecycle. That makes it far easier to reassure auditors, regulators and B2B partners that fairness, integrity and privacy extend beyond production.

A structured way to map and control test information

The hardest part for many operators and studios is simply keeping track of where test information sits and which controls apply. ISMS.online gives you a single place to maintain your asset inventory, risk register and control set, including specific entries for game maths, RNG configurations and non‑production data flows that matter under A.8.33. You move away from scattered documents and spreadsheets towards a joined‑up picture of your test‑information landscape.

You can model your DTAP environments, link them to test‑data selection rules and access controls, and attach real evidence such as change tickets or masking logs. That makes it easier to explain your approach to auditors, regulators and demanding B2B partners, because the narrative and the proof live side by side rather than in separate silos.

Seeing your A.8.33 posture across studios and brands

If you operate multiple studios, platforms or brands, consistent QA and test‑information handling is vital for both security and licensing. ISMS.online lets you see how different teams and suppliers are meeting the same A.8.33 expectations without forcing everyone into identical workflows. You define shared policies and minimum controls, then let local teams implement them in ways that fit their delivery cadence and technology choices.

Over time, this creates a feedback loop: incidents, audits and near‑misses in one part of the business become improvements everywhere else, because they are captured in a shared ISMS rather than disappearing into project archives. That is when A.8.33 stops being a checkbox and starts to feel like a genuine part of your IP protection and fairness story.

Choose ISMS.online when you want A.8.33 to become an asset rather than a liability for your studio or operation; you will be in a stronger position to show regulators, auditors, partners and players that you take test information, game maths and RNG protection as seriously as your live games.

Book a demo



Frequently Asked Questions

How does ISO 27001 A.8.33 actually change day‑to‑day QA in a gaming studio?

ISO 27001:2022 A.8.33 turns QA from “copy production and test freely” into “design test information deliberately and keep it under control.”

In practical terms, it means your QA, maths, RNG and platform teams need a shared, written view of what counts as test information and how it is handled across environments. For a gaming studio, that includes everything from game maths and RNG to logs, screenshots and synthetic “players”.

What changes in how we define and handle test information?

You need to be able to explain, consistently and in plain language:

  • What test information is in your context:

Typical examples: maths configuration files, RNG parameters, jackpot logic, test player accounts, logs and dumps, screenshots, replay scripts, performance traces and synthetic datasets.

  • Where it lives:

Which repositories, environments and tools hold that information: development and test environments, CI systems, object storage, log platforms, QA tools, external lab environments.

  • Who owns it:

Named roles such as QA lead, maths owner, RNG owner, environment owner or data owner, not just “IT” or “dev”.

  • How it is protected:

Access controls, separation between environments, logging, masking, retention limits and disposal routes.

Most gaming organisations end up with a concise test‑information standard that:

  • Calls out game maths, RNG artefacts, jackpot logic and test datasets as in‑scope “test information”.
  • Sets a default of synthetic data first, with small, justified exceptions when masked production‑derived data is truly required.
  • Describes environment tiers (for example DTAP) and which types of test information are allowed in each.

How does this feel in day‑to‑day QA work?

Once these rules are built into your pipelines and runbooks:

  • Testers request new datasets or maths scenarios through a known flow instead of creating one‑off copies.
  • Environments are refreshed in predictable ways (for example, nightly synthetic loads, scheduled masked snapshots).
  • Screenshots, logs and dumps are created, tagged and disposed of under clear rules instead of living forever on shared drives.
  • When auditors, regulators or B2B clients ask how you handle test information, you show them how your lifecycle works rather than improvising answers.

If your information security management system lives in ISMS.online, you can link the test‑information standard, environment diagrams, data‑handling procedures and ownership matrix directly to A.8.33. That gives your QA, security and compliance teams a single place to maintain the story and makes it much easier to prove that test information is designed and controlled, not accidental.


How should we protect game maths and RNG in QA without slowing testing down?

You protect game maths and RNG by treating them as high‑sensitivity secrets while letting QA see everything they need in terms of behaviour and outcomes.

The goal is that testers can prove fairness, volatility and stability without routinely handling pay tables, RTP curves or seeding strategies in raw form.

Which maths and RNG artefacts should we treat as “crown jewels”?

In most gaming stacks, the particularly sensitive items include:

  • RTP tables and configuration sets.
  • Pay tables, reel strips, symbol weightings and return curves.
  • Jackpot, bonus and feature state machines.
  • RNG algorithms, seeding strategies and bias‑correction logic.
  • Any mapping between configuration files and player‑visible behaviour.

Those artefacts should sit in secured repositories or internal services, not on QA laptops or generic shared folders. In practice that usually means:

  • Tight role‑based access: a small, identified maths/RNG group rather than blanket access for “dev” or “everyone in QA”.
  • Encrypted storage and controlled export paths: no casual copies to removable media or personal cloud stores.
  • Change control tied to tickets and approvals: every material maths or RNG change is traceable from request to release.
  • Regular access reviews and log checks: so you can show who has read, cloned or exported sensitive assets.

Handled this way, your approach aligns both with ISO 27001 A.8.33 and typical gambling regulator expectations around maths and RNG secrecy.

How do we keep QA fast while shielding internals?

The pattern that tends to work best is encapsulation:

  • Maths and RNG sit behind internal services and test harnesses, not as editable spreadsheets in test environments.
  • QA drives simulations – spins, jackpots, bonus triggers and edge‑case scenarios – via APIs, harnesses or internal tools.
  • Tools surface aggregated results such as hit rates, RTP bands, error counts and edge‑case behaviour instead of raw tables or seed material.
  • Repeatability is delivered through short‑lived test seeds and scenario definitions under controlled access, not by handing out production seeds.

Builds that go to external labs or partners should be compiled without debug modes or hidden panels that expose internal configuration. Testers still explore realistic behaviour and can push the games hard; they are simply exercising a protected engine instead of inspecting the blueprints.

When those repositories, services and harnesses are registered in your ISMS and mapped to A.8.33, it becomes straightforward to show an auditor or regulator how you protect maths and RNG while still enabling thorough QA.


How can we keep QA environments realistic without breaching A.8.33 or privacy rules?

You keep QA realistic and compliant by mirroring production architecture and flows while deliberately reducing data sensitivity and visibility.

A.8.33 expects clarity on which environments can see which types of information and who is allowed to work within them. Privacy requirements add constraints on how player‑like data is created, transformed and viewed.

What does a sensible environment strategy look like for a games studio?

Many gaming organisations move towards a DTAP‑style model:

  • Development:

Local or shared instances; synthetic data only; simplified maths acceptable; shorter log retention.

  • Test / Integration:

Shared environments; synthetic player accounts; maths and RNG behind internal services; full logging; restricted access via corporate networks or VPN.

  • Acceptance / Certification:

Near‑final maths and configuration; carefully controlled use of masked production‑derived data only where justified; stricter access control and change approvals.

  • Production:

Live players and real money; complete protection stack; no direct reuse of production data in lower environments.

For each environment, write down:

  • Allowed data: synthetic only, synthetic plus masked extracts, or none (for pure simulations).
  • Access scope: permitted roles (dev, QA, maths, operations, external labs) and connection paths.
  • Visibility: whether user interfaces, admin tools or logs can expose anything that looks like a player identifier, payment reference or internal maths state.
  • Retention and disposal: how long logs and datasets are kept and how they are destroyed.
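To make retention and disposal concrete, a simple automated check can flag datasets that have outlived their environment's limit. The sketch below uses illustrative, assumed retention periods, not values from any standard:

```python
# Minimal sketch of a retention check for test datasets. The per-environment
# limits are hypothetical examples; a real policy would set its own values.
from datetime import date, timedelta

RETENTION_DAYS = {"development": 30, "test": 90, "acceptance": 180}

def disposal_due(environment: str, created: date, today: date) -> bool:
    """True once a dataset has exceeded its environment's retention limit."""
    return today > created + timedelta(days=RETENTION_DAYS[environment])

# A development dataset from January is overdue by March; an acceptance
# dataset with a 180-day limit is not.
assert disposal_due("development", date(2024, 1, 1), date(2024, 3, 1))
assert not disposal_due("acceptance", date(2024, 1, 1), date(2024, 3, 1))
```

Run on a schedule against a dataset register, a check like this turns “how long are logs and datasets kept” from a policy statement into evidence you can show an auditor.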

How do we embed these rules into pipelines?

To make these rules stick, connect them directly to your automation:

  • Data flowing “down” from production into test or certification must pass through approved masking pipelines with logging and approvals, rather than manual exports.
  • Configuration and maths changes moving “up” must follow your change management process, with clear separation of duties and rollback options.
  • New environments are built from standard templates that already include correct data‑handling controls.

If you capture systems, environments, data flows and these rules in ISMS.online and link them to A.8.33 and privacy‑related controls, you give new engineers, auditors and regulators a clear map of how realism and control coexist. It also gives you one place to update when you add new titles, platforms or regions.


When is it acceptable to use production‑derived data in test, and how do we keep that safe?

Using production‑derived data in test is only acceptable when less sensitive options genuinely cannot achieve the same result, and you can show that the use case is justified, transformed and temporary.

A.8.33 sits naturally alongside data‑protection and gambling rules here: start from minimisation, add transformation, and log every step.

Which situations usually justify live‑derived data in QA?

In gaming studios, the more defensible use cases tend to look like:

  • Rare performance or concurrency issues that only appear under very specific live traffic patterns, device mixes or networks.
  • Detailed complaint or dispute reconstruction, where a regulator or high‑value player expects you to replay an exact transaction sequence.
  • Settlement and reconciliation checking, where you need to confirm that end‑to‑end reporting handles real transaction flows correctly.

Even in those situations, it is worth asking whether synthetic patterns or fully anonymised historical aggregates would be sufficient. If so, they should take precedence over live‑derived data.

How should we handle production‑derived data when we genuinely need it?

A robust pattern for handling live‑derived data in test can include:

  • Tight scope: time‑limited and field‑limited extracts, never whole tables or broad ranges pulled “just in case”.
  • Strong transformation: pseudonymisation or tokenisation for identifiers and removal of non‑essential attributes such as marketing data or device fingerprints.
  • Repeatable pipelines: automated flows that always apply masking, logging and access controls; avoiding manual ad‑hoc exports from production.
  • Restricted access: dedicated roles and credentials, closer monitoring and shorter session durations for anyone working with the extracts.
  • Short retention with verifiable deletion: explicit expiry dates and evidence that the data was removed once the work finished.
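As a hedged sketch of the transformation step, the snippet below tokenises identifiers with an HMAC (repeatable within an extract, irreversible without the key) and drops non‑essential attributes. The schema, field names and key handling are illustrative assumptions, not a prescribed design:

```python
# Illustrative masking step: identifiers become HMAC-based tokens and
# only a tight, approved field set survives. Field names are hypothetical.
import hashlib
import hmac

MASKING_KEY = b"rotate-me-per-extract"  # would live in a secrets manager in practice
KEEP_FIELDS = {"player_id", "bet_amount", "timestamp"}  # field-limited scope

def tokenize(value: str) -> str:
    """Deterministic pseudonym: same input, same token, within one key's lifetime."""
    return hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_record(record: dict) -> dict:
    masked = {k: v for k, v in record.items() if k in KEEP_FIELDS}
    masked["player_id"] = tokenize(masked["player_id"])  # pseudonymise the identifier
    return masked

raw = {"player_id": "P-1001", "bet_amount": 2.50,
       "timestamp": "2024-05-01T12:00:00Z", "device_fingerprint": "abc123"}
safe = mask_record(raw)
assert "device_fingerprint" not in safe            # non-essential attribute removed
assert safe["player_id"] != "P-1001"               # identifier replaced with a token
assert mask_record(raw)["player_id"] == safe["player_id"]  # repeatable mapping
```

Because the mapping is deterministic per key, testers can still follow one player‑like entity through a transaction sequence; rotating the key per extract prevents tokens from being linked across extracts.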

You should be able to answer quickly: who requested the data, who approved it, how it was transformed, where it went, who accessed it, and when it was deleted.

Capturing these steps as part of your ISMS and mapping them to A.8.33 and data‑protection requirements helps auditors and regulators see that production‑derived data in QA is an exception handled carefully, not a permanent convenience.


How can we use external labs and contractors for certification without leaking RTP, RNG or player data?

You work with external labs and contractors safely by treating them as controlled participants in your test‑information lifecycle rather than as unmanaged islands.

A.8.33 continues to apply when test information leaves your core environment, so your technical design and contractual arrangements need to support one another.

What does a robust external testing model look like?

A pattern many studios adopt combines:

  • A dedicated external test environment

Accessible only from agreed IP ranges or VPN endpoints, with:

  • Narrow, role‑specific profiles such as “External Lab QA”.
  • No direct database or filesystem access; all interaction goes through approved clients, APIs or admin tools.

  • Outcome‑oriented tools for labs and partners

Instead of handing over maths spreadsheets or RNG code, you provide:

  • Harnesses that run large volumes of spins, jackpots and bonus triggers under defined scenarios.
  • Dashboards that present RTP bands, hit frequencies, volatility distributions and error metrics.
  • Logs tuned to certification questions around fairness, integrity and stability, not internal model detail.

  • Tightly curated artefacts leaving your organisation

To reduce leakage risk:

  • Builds compiled without debug menus or verbose logging that expose configuration or internal states.
  • Only synthetic or well‑masked datasets cross the boundary; live identifiers or financial detail stay in‑house.
  • Maths documentation limited to what regulators require (parameter ranges, theoretical RTP, constraints) rather than full implementation artefacts.

In this setup, external teams have what they need to certify fairness and stability, but do not receive enough information to reconstruct engines or compromise players.

How do contracts and governance keep this strong over time?

Contracts and internal governance should mirror your technical boundaries:

  • Statements of Work that define which information types are in scope, which are not, and how long labs may retain data.
  • Security and confidentiality terms covering storage, access, onward transfer and disposal of test information and artefacts.
  • Clear onboarding and offboarding materials explaining which environments and tools to use, how to report suspected issues, and how to request extra access properly.

Internally, maintaining an up‑to‑date register of external testing parties helps you stay on top of:

  • Which lab or contractor can access which environments and information types.
  • Contract dates, renewals and termination steps.
  • Any security attestations, questionnaires or certifications you rely on.

When that register, the backing documents and relevant procedures are part of your ISMS in ISMS.online and linked to A.8.33, supplier controls and privacy requirements, you can demonstrate that your obligations follow your maths, test data and builds across organisational boundaries.


How do we demonstrate A.8.33 compliance efficiently to auditors and regulators?

You demonstrate A.8.33 efficiently by building a small, coherent evidence set and keeping it current, so each audit or regulator session becomes a guided walkthrough of how you operate rather than a last‑minute search for documents.

The emphasis is on consistency rather than volume: if your documents, diagrams and real‑world examples all tell the same story, confidence rises quickly.

What belongs in a lean but convincing A.8.33 evidence pack?

For a gaming studio or platform, a focused evidence pack often includes:

  • A clear test‑information standard

One short document that:

  • Defines test information for your games and platforms, including maths, RNG and related artefacts.
  • Describes which types of test information are permitted in which environments.
  • Sets out defaults and exception handling for production‑derived data in QA.

  • Environment and data‑flow diagrams

Illustrations that show:

  • Your environment tiers (for example development, test, acceptance, production) with permitted data and configuration levels in each.
  • Controlled flows of data “down” with masking and of configuration “up” with approvals.

  • Operational procedures and work instructions

Practical guides describing:

  • How test data is generated, refreshed, masked and removed.
  • How maths, RNG and configuration are handled during QA and certification.
  • How external labs, certification bodies and contractors are onboarded, supported and offboarded.

  • Role and responsibility mapping

A simple matrix that shows who is accountable and responsible for maths, RNG, QA, environments, player data and supplier management.

  • A small number of real examples

For instance:

  • A recent investigation where you used masked data to reproduce a live issue, alongside evidence of subsequent deletion.
  • A certification cycle where a lab used your external environment and harnesses without receiving raw maths or live player data.

Auditors and regulators often focus on those examples because they reveal whether your standards hold up under pressure. When the cases match your documented approach, it supports the argument that A.8.33 is genuinely embedded.

How can an ISMS platform like ISMS.online simplify repeat audits?

Managing this evidence in ISMS.online helps you:

  • Link policies, diagrams, procedures, contracts and example records directly to A.8.33 and related controls, such as environment, access and privacy requirements.
  • Assign owners and review cycles so materials stay aligned with new titles, regions and technical changes.
  • Capture audit findings, regulator feedback, incidents and improvements against the same controls, turning each experience into part of your ongoing assurance record.

Then, when an ISO auditor, gambling regulator or major B2B client asks how you manage test information, you can guide them through a single, structured view where your definitions, architecture and real practice line up. That positions you as a studio that treats test information as deliberately as live play, and it makes each future review easier for your QA, maths, security and compliance teams to handle with confidence.



Mark Sharron

Mark Sharron leads Search & Generative AI Strategy at ISMS.online. His focus is communicating how ISO 27001, ISO 42001 and SOC 2 work in practice - tying risk to controls, policies and evidence with audit-ready traceability. Mark partners with product and customer teams so this logic is embedded in workflows and web content - helping organisations understand, prove security, privacy and AI governance with confidence.
