
Why player data you don’t delete has become a strategic liability

Player data you fail to delete on time quickly turns into a security, privacy and regulatory liability for your studio. When telemetry, chat logs and support histories never really go away, every breach or enquiry drags in more systems, more evidence and more work than necessary. Treating end‑of‑life data as a managed risk lets you cut incident impact, simplify investigations and reduce how much information can be misused.

Player data now sits at the crossroads of revenue, regulation and reputation, so unmanaged retention quietly magnifies your exposure. Annex A.8.10 of ISO 27001:2022 is explicit that information must be deleted when it is no longer required, in a way that prevents recovery and that respects legal, regulatory, contractual and internal requirements. That language speaks as much to privacy and data‑protection expectations as it does to classic information security.

Most studios already know how to protect live systems; far fewer can show, with confidence, what happens to data when it is “no longer required”. That gap is exactly where A.8.10 lives. It asks you to stop treating old player data and logs as harmless archives, and start treating them as assets that must be deliberately retired. Whether you are implementing ISO 27001 for the first time or strengthening a mature ISMS, this control is where retention schedules meet real deletion.

Data you never collected, or deleted on time, can never be stolen, subpoenaed or used against you.

The hidden cost of hoarding player data

Hoarded player data hides in more places than most teams expect, from old telemetry pipelines to forgotten test databases. Each extra copy extends the blast radius if you suffer a breach or face questions about how long data is kept, because you have more systems in scope and more evidence to review than you planned for.

If you are honest about where player data lives, you will usually find far more than account tables and payment records. Typical examples include:

  • Legacy telemetry pipelines that still receive events even though dashboards are unused.
  • Old crash dumps with raw device identifiers and stack traces.
  • Chat archives kept “just in case” for moderation, but never reviewed.
  • Copies of production databases in test environments and sandboxes.

Every one of these copies expands how much is in scope if something goes wrong. If an attacker lands in your analytics cluster, or a regulator asks about retention for minors’ data, you cannot simply point at your main account database and call it done. You have to account for all the places data has drifted to over years of launches, updates and experiments.

This is not only about security. Over‑retention also undermines the story you tell about data minimisation in privacy impact assessments, partner questionnaires and platform reviews. If policies say you retain logs for one year but systems quietly keep five, you have a credibility problem even before anything goes wrong. For teams that are formalising their ISMS, making this inventory visible is often the single biggest step in reducing risk.

How undeleted logs make incidents worse

Undeleted logs make incidents slower, more expensive and harder to explain, because they enlarge the pool of potentially exposed data and increase the effort needed to scope impact. When retention is not segmented by purpose, you end up keeping far more sensitive information for far longer than the risk really justifies.

When you respond to a breach, two things matter immediately: how fast you can scope what was exposed, and how confidently you can explain that scope to executives, partners and regulators. Long‑lived, poorly governed logs and telemetry pipelines cut against both goals, because they mix routine traces with highly sensitive information and keep everything for years.

It helps to distinguish between the different kinds of logs you hold:

  • Operational logs: for performance, stability and debugging.
  • Security logs: for access control, anomaly detection and incident response.
  • Fraud and anti‑cheat logs: for long‑term pattern analysis and enforcement.

Security, anti‑cheat and fraud teams often argue for lengthy retention, and in some cases they are right. The problem is that retention is rarely segmented. Routine authentication logs and highly sensitive fraud‑ring indicators end up treated the same, and both are kept indefinitely.

In practice, that means forensics teams must trawl through huge volumes of data to understand what was touched, legal teams must consider whether very old records are now in scope for disclosure, and operational teams must cope with the performance impact of bloated log stores. ISO 27001 A.8.10 forces you to bring discipline to this sprawl through explicit limits, automation and monitoring.

Why gaming studios are uniquely exposed

Game studios are unusually exposed because they collect deep, behavioural data about how people play, spend and interact, often including minors and vulnerable players. When this information is retained for longer than necessary, it becomes a sensitive liability rather than a useful asset and makes any incident or criticism far harder to manage.

Game companies collect some of the richest behavioural data in any consumer industry. You often track not just spend and login events, but second‑by‑second gameplay, chat, social graphs, device profiles, location hints and anti‑cheat signals. You may also handle minors' data, self‑excluded players, or individuals in territories with tight privacy rules.

All of that makes undeleted data more sensitive:

  • Match histories and chat logs can reveal play patterns, relationships and, in some cases, health or financial stress.
  • Monetisation data around loot boxes and microtransactions sits close to live debates about consumer protection.
  • Anti‑cheat and fraud systems may infer or store sensitive risk profiles about individuals.

Consider a simple example involving minors. A teenager plays under parental consent, chats about school and family, spends via a parent's card, then closes their account. Years later, if detailed match and chat logs still exist, you are holding an unnecessary, highly sensitive behavioural history for someone who is now an adult, with no clear purpose. The same applies to self‑excluded or vulnerable players whose data you have a duty to treat carefully.

When those records survive long after they are needed, you carry unnecessary privacy and reputational risk. Aligning with A.8.10 lets you shrink that risk in a controlled way, instead of waiting for a breach, complaint or regulator to force the issue. A platform such as ISMS.online can help you see this picture clearly by pulling policies, data inventories and controls into a single view, so you can decide what truly needs to live, what should be anonymised, and what must finally be deleted, and then show auditors how those decisions are enforced.



What ISO 27001:2022 A.8.10 really demands of game studios

ISO 27001:2022 Annex A.8.10 expects you to treat deletion as a normal part of the player‑data lifecycle, not an afterthought. You decide when each type of information is no longer required, pick a suitable deletion or anonymisation method and then prove those methods actually run across the systems that hold that data.

On paper, A.8.10 looks short, but it has deep implications. It requires you to delete information when it is no longer required, in a way that prevents recovery and that aligns with legal, regulatory, contractual and internal requirements. For a games business, that means designing deletion as a built‑in activity, not a one‑off script when someone remembers.

In practical terms, you are being asked to decide when each type of player data and log stops being needed, to choose deletion or anonymisation methods that are appropriate to the risk, and to be able to demonstrate that those methods really run. A.8.10 operates alongside Annex A.5.32 on retention and your Clause 6 risk‑treatment process: you decide what to keep, for how long, and which threats secure deletion helps you manage.

A plain‑language view of Annex A.8.10

You can understand A.8.10 by treating it as five plain questions about your data and your decisions. These questions are not about describing specific products; they are about being able to explain, in simple terms, what you keep, why you keep it and what you do when it is no longer needed.

The five questions are:

  1. What information are you talking about?
    Not just “personal data” in a privacy sense, but any information in systems, devices or media: account tables, gameplay events, fraud logs, telemetry, backups, exports and more.

  2. When is it no longer required?
    This is where A.8.10 meets A.5.32 on retention and your legal obligations. “No longer required” must be grounded in purpose and law, not just convenience.

  3. How will you delete or anonymise it?
    Logical deletes, cryptographic erasure, storage sanitisation, aggregation and anonymisation can all be valid, but they must be chosen deliberately.

  4. Who is responsible?
    Policies and procedures must assign responsibility for defining rules, operating deletion mechanisms and checking they work.

  5. How do you prove it?
    You need evidence: configuration, logs, tickets and internal audit results that show deletion or anonymisation really happens.

Seen that way, A.8.10 is less a “technology” control and more a bridge between your information governance – what you keep and why – and your technical implementation – how you make data disappear or become harmless.

How A.8.10 fits into your ISMS

A.8.10 only works if it is integrated into the rest of your information security management system. It relies on your risk assessments and retention decisions, and it provides concrete controls you can point to when describing how you reduce the impact of incidents, audits and privacy complaints.

If you already run an information security management system, A.8.10 should not sit in isolation. It connects to:

  • A.5.32 – Retention: defines how long information is kept; A.8.10 is the execution arm that determines what happens at the end of that time.
  • Clause 6 – Risk treatment: where you decide which threats are reduced through secure deletion, anonymisation or minimisation.
  • Controls on logging and monitoring: log retention rules and deletion jobs need to line up with security, fraud and privacy needs.
  • Cloud and supplier controls: your deletion story must cover infrastructure and services you do not directly operate.
  • Access control and encryption: effective deletion is easier if sensitive data is segregated and encrypted with well‑managed keys.

When you document your controls, it helps to show this linkage explicitly: for example, by referencing retention rules in your deletion procedures, and by recording in your risk treatment plan how A.8.10 mitigates specific threats such as data remanence or over‑retention.

The difference between ignoring deletion and aligning with A.8.10 is often stark:

  Without retention & deletion discipline   | Aligned with A.8.10
  Incident scope hard to define             | Scope based on known, mapped data stores
  Audits are reactive and painful           | Audits follow a documented lifecycle
  Privacy story feels inconsistent          | Retention rules and system behaviour clearly match
  Player and partner trust is fragile       | You can evidence minimisation and retention limits

An ISMS platform such as ISMS.online makes these linkages easier by letting you relate policies, risks, controls and evidence, so an auditor – and your own leadership – can follow a straight line from the high‑level requirement down to concrete actions in your systems.

What auditors actually look for

Auditors care about how you design, implement and operate deletion, not just about a policy sentence, because they need to trust that your assurances match reality. They want to see that retention rules exist, are technically enforced and are monitored when something fails, so they can rely on your statements about player data and logs.

Auditors will never be satisfied with a single policy sentence that says “we delete data when it is no longer needed”. They typically look for three layers of evidence:

  • Design: documented policies, standards and procedures that define retention periods, deletion methods and responsibilities for different data types.
  • Implementation: system configurations, automation jobs and process artefacts such as scheduled tasks, object‑store lifecycle rules or database routines that match what the documents promise.
  • Operation and monitoring: logs, tickets and internal audit checks showing that deletion or anonymisation has actually occurred, that failures are detected and corrected, and that exceptions are recorded and reviewed.

For player data and logs, this might mean showing them:

  • A retention and deletion matrix for the main data categories.
  • A procedure for handling player erasure requests.
  • Screens or exports from database, logging and storage systems where retention and deletion are configured.
  • A sample of deletion logs and internal audit findings.

If you can answer simple questions like “where would I go to see that chat logs older than eighteen months are removed or anonymised?” without scrambling, you are already a long way toward satisfying A.8.10 and making your next audit much less painful.








From ‘right to be forgotten’ to retention schedules: aligning A.8.10 with GDPR and global privacy

Secure deletion is not only a security topic; it is also how you prove to players and regulators that you respect privacy rights. Privacy laws such as the General Data Protection Regulation expect you to minimise what you hold, erase data that is no longer needed and honour rights such as data minimisation, storage limitation and the right to erasure. Those principles align closely with A.8.10, which gives you the practical, operational levers to enforce these expectations in real systems.

You do not have to become a privacy lawyer to design good deletion controls, but you do need to understand how legal duties translate into retention rules and technical behaviour across your systems.

Core privacy principles you must build on

Three widely recognised ideas determine how long you may hold player data and what you must do with it. They appear in many privacy frameworks and are common reference points for regulators when they assess your practices.

Those three ideas are:

  • Data minimisation: collect and process only what is adequate, relevant and necessary for your purposes. If you do not truly need detailed telemetry at player level, consider aggregate reporting instead.
  • Storage limitation: keep personal data in identifiable form only for as long as is necessary for those purposes. “We might want it one day” is not a lawful purpose.
  • Right to erasure: in many circumstances, players can ask you to erase their personal data, particularly when they withdraw consent or when the original purpose no longer applies.

For a games company, these principles apply to:

  • Account and profile data.
  • Payment and transaction records.
  • Chat and social data.
  • Telemetry and analytics.
  • Anti‑cheat and fraud logs.
  • Support tickets and dispute histories.

Each of these categories needs explicit decisions: how long you keep identifiable data, when you switch to anonymised or aggregated forms, and how you honour valid deletion requests. Tax and accounting laws often sit alongside these privacy principles and can override a player’s erasure request for specific records, so you must be able to explain those interactions clearly.

This information is general and does not constitute legal advice. You should always obtain specific legal guidance for your jurisdictions and product mix.

Turning principles and rights into concrete retention rules

Turning abstract privacy rights into clear rules is essential if you want engineers and operations teams to act consistently. They need to know, for each data category, what the purpose is, how long you keep it and what happens at the end of that period so they can implement the right behaviours.

Privacy and security teams often agree on the principles, but friction appears when engineers ask for specifics. They need numbers and behaviours, not abstract phrases. A practical approach is to build a retention and deletion schedule that, for each category of player data, lists:

  • Purpose: why you hold it, such as delivering the game, preventing fraud or complying with tax law.
  • Legal basis or obligation: consent, contract, legitimate interest, statutory requirement.
  • Standard retention period: how long you keep identifiable data under normal circumstances.
  • Exceptions: situations where you need to keep data longer, such as open disputes or legal holds.
  • End state: whether you delete, anonymise or aggregate at the end of the period.
  • Deletion method: the technical approach you use, such as row deletion, key destruction or anonymisation.

When a player invokes their right to erasure, you can then reason systematically:

  • Which categories are covered by the request?
  • Are there any legal obligations that require you to keep some records, for example tax or anti‑money‑laundering rules?
  • In which systems does the relevant data live?
  • What technical controls do you trigger to delete or anonymise it where allowed?

Your ISO 27001 documentation and your privacy impact assessments should point to this same schedule, so you do not try to maintain parallel sets of rules that inevitably drift and become harder to defend.
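The schedule fields above can be sketched as a small data structure that also drives erasure‑request reasoning. This is a minimal illustration, not a recommended schema; the category names, periods and the `legal_hold` flag are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RetentionRule:
    category: str          # e.g. "chat_logs"
    purpose: str           # why the data is held
    retention_days: int    # standard period for identifiable data
    end_state: str         # "delete", "anonymise" or "aggregate"
    legal_hold: bool       # statutory duty that can override erasure requests

# Illustrative schedule entries only; real periods come from your own
# legal and risk analysis.
SCHEDULE = [
    RetentionRule("chat_logs", "moderation", 365, "delete", False),
    RetentionRule("payment_records", "tax law", 7 * 365, "delete", True),
    RetentionRule("telemetry", "game tuning", 90, "aggregate", False),
]

def erasure_plan(requested_categories):
    """Split an erasure request into categories you can act on and
    records that must be retained under a legal obligation."""
    actionable, retained = [], []
    for rule in SCHEDULE:
        if rule.category in requested_categories:
            (retained if rule.legal_hold else actionable).append(rule.category)
    return actionable, retained
```

With a single source of truth like this, both the privacy team and the engineers triggering deletions read the same rules, which is exactly the drift the paragraph above warns against.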

Handling tricky categories: fraud, minors and disputes

Some of the hardest questions arise around data you use to protect the business and other players, because these categories raise privacy and fairness questions even as they justify longer retention. You may need extended retention for fraud and anti‑cheat, or to defend legal claims, yet you also want to minimise what you hold about individuals over time.

The main categories include:

  • Fraud and anti‑cheat logs: you may need longer retention to spot patterns and defend the integrity of the game.
  • Payment and tax data: financial laws often require you to keep certain records for a fixed number of years.
  • Dispute and support logs: you may need records until limitation periods for legal claims have expired.
  • Minors' data and self‑excluded players: you may have additional obligations to protect vulnerable groups or limit certain processing.

A sensible pattern is to set clear, documented rules for these cases rather than allowing ad‑hoc decisions. You can then design controls that recognise both the protective purpose and the privacy risk.

Step 1 – Document the tension

Write down why you need extended retention in specific areas, including references to legal, regulatory or platform expectations so the trade‑off is transparent.

Step 2 – Segregate high‑risk data

Keep high‑risk logs and profiles in clear, limited locations with strong access controls and distinct retention rules so they do not blend into general systems.

Step 3 – Reduce identifiability over time

Move from full identifiers to pseudonyms, and from pseudonyms to aggregated or fully anonymised data as soon as practical while still meeting your protective needs.

Step 4 – Review extended retention regularly

Build periodic review of these special cases into governance so “temporary” retention does not become permanent through neglect or convenience.

Concrete examples make these ideas easier to act on. Fraud logs might be stored in a dedicated database where only hashed identifiers are retained after a certain age, keeping patterns visible but people less exposed. Payment data might be split so that only the minimal transaction references and amounts required for tax rules are retained in a finance system, separate from gameplay profiles. Minors’ and self‑excluded players’ accounts might be flagged so that some safety‑related records are retained for defined periods, while marketing telemetry and profiling data are cut off much earlier.
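The hashed‑identifier idea above can be sketched in a few lines. This assumes a keyed HMAC with a secret pepper; the names, the 16‑character truncation and the cutoff are illustrative, and a real deployment would hold the pepper in a secrets manager, not in code.

```python
import hmac
import hashlib

# ASSUMPTION: in production this secret lives in a secrets manager or KMS.
PEPPER = b"rotate-me-via-a-secrets-manager"

def pseudonymise(player_id: str) -> str:
    # HMAC rather than a bare hash, so identifiers cannot be brute-forced
    # without the pepper; the same input always maps to the same token,
    # which keeps fraud patterns linkable across records.
    return hmac.new(PEPPER, player_id.encode(), hashlib.sha256).hexdigest()[:16]

def age_out(records, cutoff_days):
    """Replace raw identifiers on records older than the cutoff,
    leaving newer records untouched."""
    for rec in records:
        if rec["age_days"] > cutoff_days:
            rec["player_id"] = pseudonymise(rec["player_id"])
    return records
```

The design choice here is deterministic pseudonymisation: patterns stay visible across aged records, but re‑identifying a person requires the pepper, which can itself be destroyed later as a final erasure step.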

A.8.10 does not overrule your legal duties, and privacy law does not prevent you from keeping data you genuinely need for legal defence or compliance. The point is that any longer retention must be justified, documented and technically enforced, not just assumed, so that regulators and players can see you are acting fairly.




Mapping the player data and log lifecycle to A.8.10

To make A.8.10 work in practice, you need to think in terms of a lifecycle. Player data does not simply appear and vanish; it moves from collection to active use, then into different layers of storage before it is finally deleted or anonymised, and A.8.10 attaches controls to each stage of that journey. Secure deletion becomes much easier when you know, for each stage, where data sits and what should happen next, and when everyone in security, privacy, engineering and LiveOps shares the same map.

Many studios have informal mental models of this flow, but few have drawn it out in a way that different teams can rely on when they design systems, features and operational processes.

Visual: simple lifecycle diagram from collection → active use → warm archival → cold archival → deletion/anonymisation.

A typical lifecycle in modern games

Most modern game stacks follow a similar pattern, even if labels differ, because players generate events, you process those events to deliver experiences and then you slowly move older data into colder, cheaper or more restricted stores. Deletion and anonymisation decisions only work if they respect this real flow instead of pretending all data lives in one neat database.

Although every title differs, the broad stages are familiar:

  • Collection and ingestion: players sign up, authenticate, play matches, chat, spend, and you ingest events into backends, logs and analytics.
  • Active use: data is used to deliver the game, run LiveOps, power matchmaking, manage inventories and provide customer support.
  • Warm archival: older data moves to cheaper storage or lower‑priority tables but remains identifiable for some time, for example for account recovery or longer‑running investigations.
  • Cold archival: data is kept only for obligations such as tax, regulatory or serious fraud investigations, often in more restricted systems.
  • Deletion or anonymisation: data is removed or transformed so that it no longer relates to an identifiable player.

This lifecycle applies not only to account tables but also to observability and security logs, telemetry and data lakes, anti‑cheat and risk‑scoring systems, support and moderation tools, and third‑party integrations and exports. The more clearly you can show which systems and datasets sit at each stage, the easier it becomes to assign A.8.10 controls and explain them to an auditor or a sceptical stakeholder.

Attaching A.8.10 controls to each phase

Attaching A.8.10 to the lifecycle means defining what must be true each time data crosses a boundary, because those boundaries are where risk changes. You collect new data, move it into a new store or decide it is no longer required, and each transition is an opportunity to enforce deletion, minimisation or anonymisation.

One useful way to think about this is to treat A.8.10 as a checklist that fires at every stage boundary.

When data moves from collection to active use:

Check what you collect

Confirm that fields are limited to what is necessary for gameplay, operations and obligations, not simply everything you can capture for curiosity.

Separate identifiers from content

Structure schemas so that player identifiers can be removed or swapped without destroying all useful analytical content or business metrics.

When data moves from active use to warm archival:

Confirm the retention trigger

Set a clear time or event after which data moves out of active stores, and document how that trigger is implemented across relevant pipelines or services.

Reduce access and adjust controls

Tighten access to archived data and configure retention limits in line with your schedule so older records do not silently accumulate.

When data moves from warm to cold archival:

Justify what remains

Ensure only data genuinely needed for legal, regulatory or security purposes is carried forward into cold storage and that this justification is documented.

Apply stronger safeguards

Apply stricter access controls, monitoring and, where appropriate, encryption for cold archives so that less‑used data does not become an easy target.

When data moves from cold archival to deletion or anonymisation:

Automate the end state

Define an automated job or process that deletes or anonymises data when retention expires, rather than relying on ad‑hoc clean‑ups.

Capture evidence and failures

Log successful runs and exceptions so you can prove the control works, investigate failures and refine your approach over time.

At each boundary, you should be able to answer: “If we say data moves to this stage after X, how do we know it actually has, and what happens then?” Those answers become the backbone of your A.8.10 controls and help you show regulators and partners that you take the full lifecycle seriously.
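One way to make those boundary questions answerable is to encode each transition as a guarded step that emits evidence. A minimal sketch, with invented stage durations and record shape:

```python
from datetime import date

# Lifecycle stages mirror the text; the durations are placeholders,
# not recommendations.
STAGES = ["active", "warm", "cold", "deleted"]
STAGE_DAYS = {"active": 90, "warm": 275, "cold": 365 * 6}

def next_stage(record, today, evidence):
    """Advance a record one lifecycle stage if its retention trigger
    has fired, appending an evidence entry when it moves."""
    stage = record["stage"]
    if stage == "deleted":
        return record  # already at end state
    if (today - record["stage_entered"]).days >= STAGE_DAYS[stage]:
        record["stage"] = STAGES[STAGES.index(stage) + 1]
        record["stage_entered"] = today
        # The evidence trail is what answers "how do we know it moved?"
        evidence.append((record["id"], record["stage"], today.isoformat()))
    return record
```

Running this as a scheduled job, with the evidence list persisted, gives you both the automated end state and the audit trail that A.8.10 asks for at each boundary.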

Including backups, test data and dark corners

Backups, test environments and exports often sit outside day‑to‑day thinking about the lifecycle, yet they hold large volumes of player data that can quietly undermine your deletion story. You do not need to restate all of your backup design here, but you do need to bring these areas into the same map and then rely on your technical standards to cover how deletion actually happens.

It is easy to focus on primary systems and forget where data lingers. Backups and replicas deserve their own plan. If you use long‑lived backups, you may not be able to surgically delete single players’ data. In that case, you should:

  • Encrypt backups with strong, well‑managed keys.
  • Set retention periods and ensure expired sets are removed.
  • Ensure old backups are expired or rendered non‑restorable, for example by key destruction or media sanitisation.

Test and staging environments can hold large volumes of production data. If you seed them with live records, they must be in scope for your lifecycle and deletion rules, or you should anonymise data before use so that developers work with realistic but non‑identifiable information.

Exports and reports – CSV files, data extracts and screenshots used for analysis or reporting – must either be governed or avoided. Where exports are necessary, store them in controlled locations with clear retention rules, and prefer centralised reporting or dashboards when you can.

A simple table can help, with columns such as “Store or system”, “Lifecycle stage” and “Retention and deletion behaviour”, and no more than a handful of rows per title. Once this mapping exists, tools and platforms can be aligned to it. An integrated ISMS solution such as ISMS.online gives you a single place to hold the lifecycle, the policies that reference it, and the evidence that shows it is followed, so you can manage dark corners as deliberately as primary systems.








Technical patterns for secure deletion across databases, logs, backups and telemetry

Secure deletion only works if the underlying architecture makes it practical and safe for your teams to apply. You need a small set of standard patterns that engineers understand, that are cheap to operate and that auditors can follow, so you are not reinventing deletion for every game and service.

Even the best policies mean little if your architecture makes deletion hard or dangerous. The good news is that there are repeatable patterns you can apply across many technologies. The aim is not perfection on day one, but a small set of standard approaches that engineers understand, that scale with your stack and that auditors can follow.

A key design goal is to make deletion safer and cheaper than ignoring the problem. That typically means planning for deletion at schema and pipeline level instead of trying to bolt it on later.

Secure deletion in production databases and services

Secure deletion in live databases means removing or de‑identifying player data without breaking game functionality, while giving you confidence that records are not quietly lingering in forgotten tables. You have a few main patterns to choose from, and you should standardise on the ones that match your risk appetite and operational maturity.

For databases that hold player accounts, profiles, inventories and other core data, you have several options:

  • Physical row deletion: straightforward delete operations with appropriate cascading, followed by maintenance tasks such as vacuum or compaction to reclaim storage where relevant.
  • Soft delete plus periodic hard delete: marking records as deleted for a short period to support account restoration or operational safety, then hard‑deleting after a defined interval.
  • Partitioning by time or tenant: structuring tables or collections so that large volumes of aged data can be dropped or archived in bulk.

Whichever pattern you choose, you should:

  • Separate identifiers from less sensitive content where you can, so that deleting a small join table can effectively de‑identify large bodies of gameplay data.
  • Ensure application logic does not “resurrect” deleted data from caches, search indices or derived stores.
  • Implement idempotent deletion routines so that retrying a failed job does not break integrity or leave partial state.
  • Test cascade‑delete and referential‑integrity behaviour thoroughly in non‑production environments so cautious database administrators can see the impact before you touch live data.

Document these patterns as part of your technical standards for A.8.10, and link them back to the retention rules in your schedule. That way, when a new game or feature launches, engineers know which pattern to apply and how to prove it works.
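The soft‑delete‑plus‑periodic‑hard‑delete pattern might look like the following against SQLite. Table and column names are assumptions for the sketch, and a real schema would cascade deletes to related tables rather than touch a single table.

```python
import sqlite3
from datetime import datetime, timedelta

GRACE_DAYS = 30  # restoration window before hard deletion (illustrative)

def soft_delete(conn, player_id, now):
    """Mark a player as deleted without removing the row, so the
    account can still be restored during the grace period."""
    conn.execute(
        "UPDATE players SET deleted_at = ? WHERE id = ? AND deleted_at IS NULL",
        (now.isoformat(), player_id),
    )

def hard_delete_expired(conn, now):
    """Idempotent batch job: re-running it deletes nothing extra once
    the expired rows are gone. Returns the row count as evidence."""
    cutoff = (now - timedelta(days=GRACE_DAYS)).isoformat()
    cur = conn.execute(
        "DELETE FROM players WHERE deleted_at IS NOT NULL AND deleted_at <= ?",
        (cutoff,),
    )
    return cur.rowcount
```

Returning and logging the row count from each run is the kind of cheap, concrete evidence an auditor can sample when checking that the documented procedure actually operates.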

Designing retention‑sensitive logs and telemetry

Logs and telemetry are essential for running and improving games, but they are also one of the noisiest sources of personal data and a common source of over‑retention that quietly expands your risk. The aim is not to stop logging or turn systems off, but to capture only what you need, keep it for as long as it is useful and then either delete it or remove direct links to individuals, designing retention and deletion in from the start.

Useful principles include:

  • Classify logs by purpose: security, fraud, gameplay analytics and crash diagnostics may each justify different retention windows.
  • Avoid logging more than you need: do not include full identifiers or payloads if hashes, tokens or aggregated metrics would suffice.
  • Use built‑in retention controls: most logging and telemetry platforms let you set time‑based retention and automated deletion; configure these in line with your schedule.
  • Consider anonymisation: for older data used only in aggregate analysis, replace identifiers with tokens or remove them entirely after a certain period.

In practice, this might translate to keeping detailed security logs for a defined period, then retaining only coarser aggregates for trend analysis, or retaining fine‑grained gameplay events at player level for a short period to tune features, then rolling them up into anonymised cohorts. The key is that these behaviours are configured centrally and can be evidenced, not left to individual teams to decide ad hoc.
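Rolling player‑level events into anonymised aggregates after a retention window might look like this sketch; the event shape and the 30‑day window are assumptions for illustration.

```python
from collections import Counter
from datetime import date, timedelta

WINDOW_DAYS = 30  # illustrative retention window for player-level events

def roll_up(events, today):
    """Split events into recent player-level records and anonymised
    aggregates for everything older than the window. Aggregates count
    events per (day, event type) and carry no player identifiers."""
    cutoff = today - timedelta(days=WINDOW_DAYS)
    recent, aggregates = [], Counter()
    for ev in events:
        if ev["day"] >= cutoff:
            recent.append(ev)
        else:
            aggregates[(ev["day"].isoformat(), ev["type"])] += 1
    return recent, dict(aggregates)
```

The trend data survives in the aggregates, so analytics teams lose little, while the identifiable events genuinely disappear on schedule.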

Backups, archives and cryptographic erasure

Backups and archives are built to preserve data, so secure deletion here is about managing whole backup sets rather than trying to erase individual players, while still giving you a credible story about what happens when retention expires. You rely on encryption, time‑limited retention and controlled destruction of keys or media to show that expired data is no longer accessible in practice.

Backups present a special challenge because they are designed specifically to preserve data, and often in large, opaque blobs. You rarely have the ability to delete one player’s data from a decade of full backups. Instead, you manage deletion at the level of backup sets.

Practical steps include:

  • Encrypt backups and archives with strong keys managed separately from the data.
  • Define backup retention periods that match your risk appetite and legal obligations, and avoid keeping backups indefinitely.
  • Ensure old backups become non‑restorable by destroying the relevant keys or media in a controlled, documented way when retention expires.
  • Avoid using backups as archives by keeping long‑term records in purpose‑built, access‑controlled stores with clear retention rather than general recovery backups.

Cryptographic erasure – making data unreadable by deleting or revoking keys – is often the only practical way to satisfy A.8.10 for large‑scale backups and distributed object stores. It depends on robust key management; if keys are re‑used across many datasets or poorly protected, your assurances are weaker. Deployed carefully, however, cryptographic erasure lets you combine operational resilience with strong assurances that expired data really is gone, which protects both players and your studio when incidents occur.
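The key‑lifecycle logic behind cryptographic erasure can be modelled in a few lines. This is deliberately a toy: the `BackupVault` class, its XOR‑with‑SHA‑256‑keystream cipher and its in‑memory key store are stand‑ins for a real cipher (such as AES‑GCM) and a proper KMS or HSM, and must not be used as‑is.

```python
import hashlib
import secrets

class BackupVault:
    """Toy model of cryptographic erasure: retiring a backup set's key makes
    that set unreadable even though the ciphertext still exists on storage."""

    def __init__(self):
        self._keys = {}   # backup_set_id -> key; in reality held in a KMS/HSM
        self._blobs = {}  # backup_set_id -> ciphertext

    def _keystream(self, key: bytes, n: int) -> bytes:
        # Illustration only: derive a keystream from the key plus a counter.
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def store(self, set_id: str, data: bytes) -> None:
        key = secrets.token_bytes(32)  # one scoped key per backup set
        self._keys[set_id] = key
        ks = self._keystream(key, len(data))
        self._blobs[set_id] = bytes(a ^ b for a, b in zip(data, ks))

    def restore(self, set_id: str) -> bytes:
        key = self._keys[set_id]  # raises KeyError once the key is retired
        ct = self._blobs[set_id]
        ks = self._keystream(key, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

    def crypto_erase(self, set_id: str) -> None:
        del self._keys[set_id]  # the auditable "deletion event" for this set
```

The point of the sketch is the shape of the guarantee: after `crypto_erase`, the ciphertext may survive in object storage or on tape, but no reasonable effort recovers the plaintext, which is what a key‑management log can then evidence.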




Governance, roles and exceptions: making deletion work in a live games business

Secure deletion only sticks when everyone knows who decides what, who does the work and how exceptions are handled, because otherwise old player data quietly piles up as difficult conversations are postponed. Clear governance turns A.8.10 from a side project into a normal part of how your games and services run.

Deletion is not a one‑team exercise. Security cannot do it alone, engineering cannot do it alone, and neither can privacy or LiveOps. To make A.8.10 work without constant friction, you need clear governance: who makes which decisions, who implements them, who checks they work and how exceptions are handled.

Without that clarity, deletion becomes a series of uncomfortable conversations and stalled tickets, which in turn encourages people to avoid raising the topic at all. For teams just starting their ISO 27001 journey, putting these responsibilities on paper early prevents one or two people from quietly absorbing all the work.

Defining who owns what

Defining ownership for retention and deletion decisions avoids confusion and finger‑pointing, because everyone can see who is accountable and who is responsible. A simple RACI matrix that names who is responsible, accountable, consulted and informed makes it obvious who must sign off rules and who must keep the technical controls running.

A simple RACI (Responsible, Accountable, Consulted, Informed) matrix for deletion can eliminate much confusion. Typical patterns include:

  • Security or the CISO function: accountable for ensuring A.8.10 is implemented as part of the ISMS; consulted on risk impacts.
  • Privacy or the DPO: responsible for making sure retention and deletion align with laws and player rights.
  • Data and platform engineering: responsible for implementing and operating technical deletion or anonymisation.
  • LiveOps and product: consulted on the impact of retention and deletion on game operations and analytics.
  • Player support and community teams: responsible for handling player‑facing communications and routing complex cases.

Once these roles are agreed, you can build them into policy ownership sections, change‑management workflows and onboarding for new systems and vendors. That way, when someone asks “who decides how long to keep chat logs?” there is an answer other than “it depends who you talk to”, and deletion decisions can move at the same pace as game development.

Designing exceptions without losing control

Almost every studio will need exceptions to its standard retention rules for fraud, safety or legal reasons, but the danger is that these exceptions become permanent by habit. A light but disciplined exception process lets you hold on to important data when you must, for example during cheating investigations or regulatory inquiries, without quietly undermining your entire deletion strategy.

Almost every studio will need exceptions to its standard retention rules. Fraud, cheating, serious safety incidents and regulatory investigations all sometimes require you to keep data longer than usual. The risk is that exceptions accumulate informally and no‑one ever revisits them.

A robust approach is to:

  • Require a documented justification for any extended retention, including legal or regulatory references where applicable.
  • Set a review date or condition for each exception, such as “until investigation X closes plus two years”.
  • Limit access to the extended‑retention store to the smallest group who genuinely need it.
  • Review open exceptions at a regular governance forum and close them when no longer needed.

A good exception record might look like: “Fraud case F‑123 – retain related transaction, device and network logs until 31 December 2028; owner: Head of Fraud; review quarterly at risk committee.” That level of specificity keeps everyone aligned and gives you a clear audit trail, which supports both ISO 27001 audits and regulatory scrutiny.
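An exception record like that is easy to make machine‑checkable, so overdue reviews surface automatically at the governance forum rather than relying on memory. The field names below are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RetentionException:
    case_ref: str       # e.g. a fraud case identifier
    data_scope: str     # what is retained beyond the standard schedule
    justification: str  # legal or regulatory reference where applicable
    owner: str          # the named accountable person
    review_by: date     # hard review date; no open-ended exceptions

    def is_overdue(self, today: date) -> bool:
        """Flag exceptions whose review date has passed."""
        return today > self.review_by

def overdue(exceptions: list[RetentionException], today: date) -> list[str]:
    # Feed this list straight into the regular risk-committee agenda.
    return [e.case_ref for e in exceptions if e.is_overdue(today)]
```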

Training frontline teams and aligning LiveOps

Frontline teams translate your deletion policies into player‑facing promises, so if support and community teams describe “account deletion” differently from how your systems behave you create both trust and compliance problems. Aligning training, scripts and LiveOps planning with your A.8.10 controls prevents those gaps.

Players, parents and even partners will often engage first with frontline teams: support, community managers, LiveOps. If those teams cannot explain clearly what “account deletion” means, or worse, promise things that are not technically true, you create both trust and compliance problems.

You should therefore:

  • Provide simple explanations and internal FAQs that describe, in plain language, what is deleted, what may be retained for legal reasons and over what timescales.
  • Train staff to recognise when a request may trigger legal holds or complex exceptions, and how to escalate appropriately.
  • Align LiveOps planning with upcoming changes to retention or deletion, so that telemetry or segmentation strategies are adjusted in good time.

When everyone understands that secure deletion is there to protect players and the studio, not to block good ideas, you get fewer last‑minute fights between product and compliance – and more thoughtful designs that support both. That, in turn, reduces incident cost, limits regulatory exposure and builds long‑term player trust.




ISMS.online supports over 100 standards and regulations, giving you a single platform for all your compliance needs.




Cloud, vendors and shared responsibility for deletion

Modern games rely heavily on cloud and software‑as‑a‑service providers, but you still remain responsible for how player data is stored and deleted across your stack. A.8.10 therefore has to extend beyond your own systems into contracts, configurations and vendor‑risk assessments, so that data is not kept longer than necessary just because it lives in someone else’s platform.

Very little of a modern game stack lives solely in your own data centre. Identity, payments, analytics, marketing, community, support and even core backends may all rely on cloud and software‑as‑a‑service providers. ISO 27001 A.8.10 still applies; it just means your deletion story must span those providers too.

You cannot simply trust that “the vendor handles it”. You must understand, document and, where necessary, contractually define who deletes what, where and when. This is especially important when providers point to their own certifications; alignment with one framework does not guarantee that their retention schedules match yours or that they support your erasure timelines.

Understanding the shared responsibility model

Understanding the shared responsibility model helps you see which deletion levers you control and which sit with the provider, so you can design realistic controls rather than assumptions. You decide what player data flows into a service, how long it stays and when you request removal, while the provider owns how its own infrastructure is wiped or recycled.

Cloud providers commonly talk about shared responsibility: they secure the infrastructure, you secure how you use it. For deletion, this often splits roughly as:

  • Infrastructure‑as‑a‑service: you control operating systems, databases and application data; the provider controls physical media and low‑level sanitisation.
  • Platform‑as‑a‑service: you control your data and configurations within managed services; the provider handles backups and underlying systems.
  • Software‑as‑a‑service: you typically control configuration and usage patterns; the vendor controls almost everything else.

For each significant service, you should be able to answer:

  • What data about your players is stored here?
  • Who can configure retention and deletion?
  • What happens to data when you delete an account or a record?
  • How does the provider handle backups and end‑of‑contract data return or destruction?

Documenting these answers forms part of both A.8.10 and other ISO 27001 controls around cloud use, and it gives you a clearer basis for vendor selection and negotiation.

Making deletion contractual

Deletion is much more reliable when it is written into contracts rather than handled informally, because you have a clear basis to assert your expectations. Your data‑processing agreements and security schedules should spell out retention limits, support for erasure requests and how data will be treated at the end of the relationship.

Policies and good intentions are not enough when other organisations hold your data. Your contracts, data‑processing agreements and security schedules should address:

  • Maximum retention periods for data after it leaves your systems.
  • Obligations to assist with player erasure requests within agreed timeframes.
  • How backups, logs and archives are treated at the end of retention or at contract termination.
  • What evidence the provider will give you, such as deletion logs or certificates, and under what circumstances.

You should also ensure that your vendor‑risk assessments cover deletion explicitly. If a provider cannot describe its own data lifecycle and deletion practices, or if it relies solely on generic certifications without retention detail, that is an important signal to treat them with caution or negotiate stronger terms. Industry expectations increasingly favour clear, written deletion commitments in contracts.

Keeping third‑party exports under control

Third‑party exports often create unmanaged copies of player data that slip outside normal controls, which can quietly undermine even a well‑designed deletion strategy. Dashboards, CSV exports and synced datasets are convenient, but if you do not give them explicit retention rules they can linger unnoticed for years.

Even when core services behave well, data can leak into unmanaged corners through:

  • Manual exports from dashboards into spreadsheets.
  • Data synchronisation into business‑intelligence tools.
  • Attachments and files in ticketing or collaboration systems.

These copies are easy to forget and hard to delete in a targeted way. To reduce the risk:

  • Minimise ad‑hoc exports where possible and favour in‑place analytics tools.
  • Where exports are necessary, store them in governed locations with retention limits.
  • Include these patterns in your lifecycle mapping and staff training so they are not overlooked.

In many studios, simply making teams aware of the risk and providing a better alternative – such as centralised reports or dashboards – significantly reduces the problem. That, in turn, lowers the chance that an investigation or breach uncovers data no‑one knew still existed.




Book a Demo With ISMS.online Today

ISMS.online helps you turn ISO 27001 A.8.10 from a vague deletion rule into a clear, auditable set of controls that reduce risk from undeleted player data and retention‑sensitive logs across your titles and services. By centralising your policies, data inventories, retention schedules, technical standards and evidence, the platform gives you a single, reliable view of how information is governed across studios and vendors.

See your A.8.10 story in one place

When you manage your ISO 27001 work in a dedicated environment such as ISMS.online, you unlock several advantages:

  • Your retention and deletion matrices sit alongside risk assessments and statements of applicability, so you can show exactly how A.8.10 and A.5.32 work together to mitigate identified risks.
  • Lifecycle diagrams, system inventories and vendor records become living artefacts, updated as your games and stack evolve rather than locked in forgotten slide decks.
  • Evidence of deletion – from configuration exports to internal audit notes – can be linked directly to the controls they support, making audits less of a last‑minute scramble.

For teams coordinating across security, privacy, data, engineering and LiveOps, having this shared view turns deletion from a vague idea into a concrete, trackable programme of work. It also gives less experienced studios a structure to follow when they are formalising controls for the first time. An ISMS platform can also hold your lifecycle maps, retention schedules and supplier records in one place, so you are not trying to piece together the A.8.10 story from scattered documents and individual memories.

Next steps for your studio

If you recognise your own environment in the challenges described above, a short, focused conversation can be enough to see what better could look like. You might choose to:

  • Walk through one title's player‑data lifecycle and see where current deletion and retention controls fall short.
  • Review how your existing ISO 27001 documentation could be re‑used and strengthened to cover A.8.10 in more depth.
  • Explore how workflows, task assignments and dashboards could keep different teams aligned on who needs to do what, and when, to keep deletion on track.

Book a demo with ISMS.online and you will see how all the moving parts of Annex A.8.10 – from legal retention limits and lifecycle mapping to technical deletion patterns and audit evidence – can be brought together in a single, manageable system. That, in turn, lets you respect players' data, reduce the impact of incidents, satisfy auditors and regulators, and keep shipping great games with confidence.

Book a demo



Frequently Asked Questions

How should a game studio rethink player‑data deletion under ISO 27001 A.8.10?

You should treat player‑data deletion as a designed, evidenced stage in the lifecycle, not an ad‑hoc favour you perform via support tickets.

How does A.8.10 change everyday assumptions about “deletion”?

Under ISO 27001 A.8.10, “we delete accounts when players ask” becomes the bare minimum, not the operating model. The clause expects you to:

  • Treat each player‑data category (accounts, payments, chat, telemetry, anti‑cheat, support) as a managed asset with a purpose, an owner and a defined end‑state.
  • Decide in advance when each category moves from active use (needed to run or protect the game) to justified retention (tax, disputes, fraud, safety) and finally to removal or anonymisation.
  • Turn those decisions into repeatable technical patterns: fixed soft‑delete windows, scheduled hard deletes, anonymisation jobs and lifecycle rules in storage and log platforms.
  • Capture evidence that these patterns run: job logs, change records, configuration exports and sample checks that your ISMS can keep alongside the A.8.10 control.

The real shift is from improvisation to predictability. A studio that knows exactly where identifiable players still exist, where only anonymised cohorts remain, and what has aged out entirely has a smaller blast radius when something goes wrong and a cleaner story when explaining itself to auditors or platforms.

How does an ISMS make that mindset practical?

An information security management system gives you a single place to link policy, risk and implementation:

  • You keep data‑category inventories, retention rules and deletion standards in one workspace.
  • Each A.8.10 control links to specific risks, systems and operating procedures rather than sitting as abstract wording.
  • Internal audits, change approvals and incident reviews can all reference the same artefacts, so deletion becomes how you build and operate games, not a one‑off clean‑up before certification.

When you can walk an auditor calmly from risk register → retention rules → technical patterns → evidence, your studio looks like it understands long‑term player trust, not just short‑term feature delivery. An ISMS such as ISMS.online makes that walk‑through much easier by keeping controls, records and responsibilities tightly connected and always up to date.


How can we design player‑data retention schedules that protect fraud, security and analytics value?

You design retention around why you hold each category and what law or contract allows it, not around whichever database or dashboard feels most important.

How do we build a retention matrix that works across the whole games estate?

Most studios benefit from a single retention matrix that covers the full spread of player‑data types:

  • Account and identity (logins, contact details, age flags)
  • Payments and billing records
  • Chat and social interactions
  • Security and infrastructure logs
  • Anti‑cheat and fraud indicators
  • Gameplay telemetry and UX analytics
  • Support and community tickets

For each row you pin down four things:

  • Purpose: why you collect it (operate the game, bill players, maintain safety, fight fraud, improve design).
  • Legal / contractual basis: consumer law, tax rules, PCI DSS, platform terms, privacy law and so on.
  • Minimum retention: what you must keep to stay compliant (for example, tax records or chargeback windows).
  • Maximum retention for identifiable data: the point at which you delete or anonymise individuals, even if you retain aggregated patterns.
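A matrix with those four attributes per category is straightforward to hold as structured data and to lint for gaps. The categories and figures below are hypothetical examples for illustration, not recommended retention periods.

```python
# Hypothetical retention matrix: one row per player-data category.
RETENTION_MATRIX = {
    "payments":  {"purpose": "billing",          "basis": "tax law",
                  "min_days": 6 * 365, "max_identifiable_days": 7 * 365},
    "chat":      {"purpose": "safety",           "basis": "platform terms",
                  "min_days": 0,       "max_identifiable_days": 180},
    "telemetry": {"purpose": "game improvement", "basis": "legitimate interest",
                  "min_days": 0,       "max_identifiable_days": None},  # gap!
}

def missing_max_retention(matrix: dict) -> list[str]:
    """Categories with no cap on identifiable data are A.8.10 findings in waiting."""
    return [cat for cat, row in matrix.items()
            if row["max_identifiable_days"] is None]
```

A check like this, run whenever the matrix changes, catches the "keep everything, forever" default before an auditor does.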

Fraud and security are where teams often slip into “keep everything, forever”. A.8.10 does not prevent longer windows where risk genuinely warrants them, but it expects you to:

  • Give those categories explicit, reasoned durations (for instance, dispute period plus a defined buffer).
  • Separate raw, identifiable records from derived signals (risk scores, hashed identifiers, anonymised cohorts), so you can keep signals longer than identity.
  • Treat unusual investigations as time‑boxed exceptions, each with an owner and review date, instead of unstated permanent carve‑outs.

On the analytics side, most design decisions hinge on patterns rather than specific players. That lets you shorten retention for full‑fidelity telemetry and move older data into aggregated or anonymised views that product and data teams can still query. It also forces you to bring exports (BI extracts, CSV dumps, sandbox datasets) into the same lifecycle rather than leaving them as invisible long‑term copies.

Where should these rules live so they stay real?

Retention rules decay quickly if they hide in email threads or isolated spreadsheets. When you manage them in an ISMS:

  • Privacy, security, analytics and engineering can all sign off on a single, shared matrix.
  • Reviews can be tied into your risk register and management‑review cycle.
  • Evidence like configuration screenshots, policy acknowledgements and spot‑check results sits next to the rules, so you can show auditors both the decision and how it runs in practice.

If you want to turn A.8.10 from a worry into a design tool, centralising that matrix in a platform such as ISMS.online makes a big difference. You get one view of retention that aligns with ISO 27001, privacy obligations and your live‑ops reality.


What does secure deletion actually involve for game databases, logs, telemetry and backups?

Secure deletion means that, once data hits its defined end‑of‑life, it is no longer practically recoverable with reasonable effort, across live systems, analytics stacks and backups.

How should we handle live services and databases?

For core services that hold accounts, entitlements, inventories and similar player records, secure deletion usually combines:

  • Application‑level actions, such as deleting or anonymising row‑level records once a retention rule is met or an erasure request is validated.
  • Time‑based partitioning, so whole table partitions or shards (for example, by month or season) can be dropped, avoiding expensive row‑by‑row clean‑ups.
  • Routine storage maintenance – compaction or vacuuming – so “deleted” records do not sit indefinitely in unallocated space.

The key is to express these as house patterns, not individual developer choices. A simple internal standard such as “accounts use pattern A; transaction history uses pattern B; inventories use pattern C” makes behaviour predictable across titles and much easier to document against A.8.10.
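The partitioning pattern above might look like this in miniature: work out which monthly partitions have aged out, then drop them wholesale. The `events_YYYY_MM` naming convention and SQLite backend are assumptions for illustration; real deployments would use their database's native partition management.

```python
import sqlite3
from datetime import date

def expired_partitions(today: date, retain_months: int,
                       all_months: list[str]) -> list[str]:
    """Return 'YYYY_MM' partition suffixes older than the retention window."""
    cutoff = today.year * 12 + today.month - retain_months
    return [m for m in all_months
            if int(m[:4]) * 12 + int(m[5:7]) < cutoff]

def drop_partitions(conn: sqlite3.Connection, months: list[str]) -> None:
    for m in months:
        # Dropping the whole partition avoids a slow row-by-row clean-up.
        conn.execute(f"DROP TABLE IF EXISTS events_{m}")
```

Because the job is driven by a retention number rather than developer judgement, its behaviour can be documented as a house pattern and evidenced against A.8.10.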

What about logs, telemetry and long‑term storage?

Log streams and telemetry often contain richer player context than the primary game database. In those systems, secure deletion leans heavily on configuration:

  • Built‑in retention and rotation controls in logging and observability tools, tuned differently for gameplay, performance and security streams.
  • Early minimisation – hashing, truncating or tokenising identifiers near the source – so not every log line exposes full identity, followed by anonymisation or down‑sampling as data ages.
  • Lifecycle rules in object storage or data lakes that expire or archive datasets and coordinate with key management, letting you retire encryption keys when data should effectively disappear.
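Early minimisation near the source can be as simple as a keyed hash applied before the log line leaves the service. The pepper value, its name and the truncation length below are illustrative; in practice the pepper would live in a secret store and be rotated on a schedule, which also breaks long‑term linkage across periods.

```python
import hashlib
import hmac

LOG_PEPPER = b"rotate-me-each-period"  # hypothetical secret, held in a secret store

def pseudonymise(player_id: str) -> str:
    """Replace a raw player ID with a keyed hash before the log line ships.
    The same player still correlates within one pepper period, but the raw
    identifier never reaches the log platform."""
    return hmac.new(LOG_PEPPER, player_id.encode(), hashlib.sha256).hexdigest()[:16]
```

A plain unkeyed hash would be reversible by brute force over the player‑ID space, which is why the keyed (HMAC) form is the usual choice here.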

Backups are where physical wiping of every copy stops being realistic. Many mature studios adopt cryptographic erasure instead: encrypt discrete datasets with scoped keys and treat scheduled key retirement as the deletion event. Combined with lifecycle policies and key‑management logs, this is widely accepted by auditors and regulators as a practical way to stop retaining readable history.

The working test is straightforward: for each major store of player data you can answer three questions – what happens when retention ends, who triggers it, and how you prove it. An ISMS such as ISMS.online helps you keep those answers consistent and evidenced across databases, log platforms and backup regimes.


How can a games studio map the player‑data lifecycle so ISO 27001 A.8.10 makes sense to everyone?

You map the lifecycle so people see A.8.10 as a shared picture of how player data flows, not as a paragraph in a standard.

What should a practical lifecycle map look like?

For one flagship title, a lifecycle map that actually helps people might:

  • Start where data appears: account creation, sign‑in, purchases, gameplay events, chat, anti‑cheat probes, support contacts, marketing entry points.
  • Show where data lands next: account service, matchmaking, anti‑cheat, telemetry collectors, data warehouse, log platforms, CRM and community tooling.
  • Distinguish active systems, warm storage, archives and deletion/anonymisation stages.
  • Mark the events that start retention clocks (last activity, end of subscription, end of chargeback window) and the processes or jobs that act when those points are reached.
  • Include less obvious shadows: staging environments populated from production, data‑science sandboxes, CSV exports and local developer copies.

Once that view exists for one game, you can standardise the pattern and adapt it for other titles instead of designing retention from scratch every time. New systems or vendors then have to declare where they sit in the lifecycle and how they honour the same transitions.

How does this connect back to A.8.10 and your ISMS?

With the lifecycle artefact referenced in your ISMS, you can:

  • Link A.8.10 directly to named transition points: where data leaves active use, when timers start, and where deletion or anonymisation applies.
  • Assign responsibilities at each point so it is explicit who configures retention, who runs jobs and who reviews the evidence.
  • Reuse the map in design reviews, change approvals and vendor assessments, so security, privacy and engineering teams argue from the same diagram instead of competing assumptions.

Keeping that map, its supporting rules and the related procedures in ISMS.online means lifecycle thinking becomes part of your normal governance. Management reviews and internal audits can ask “where was this data in its lifecycle?” after incidents, which is exactly how A.8.10 starts to feel like part of good game design rather than just a check box.


Who should own retention and deletion decisions in a live games business, and how do we stop exceptions from spreading?

Retention and deletion become reliable when they have clear ownership, a simple decision loop and visible tracking of exceptions.

How do we assign roles without building a bureaucracy?

In practice, most live studios settle on a lightweight RACI‑style split:

  • A security or CISO function is accountable for meeting A.8.10 across titles and shared services.
  • A privacy or legal function is responsible for ensuring retention and deletion align with law, platform obligations and what you tell players.
  • Data and platform engineering teams are responsible for implementing and operating deletion and anonymisation patterns in code, infrastructure and data pipelines.
  • LiveOps, product and analytics are consulted so retention windows do not quietly undermine fraud controls, experiment design or player experience.
  • Support and community teams are responsible for handling player requests, managing expectations and flagging unusual cases that might trigger temporary extensions.

To stop exceptions from slowly eroding your model, you add a light governance loop rather than a new committee:

  • Any extended retention for investigations, fraud cases or safety reasons is logged with a reason, an owner and a review date.
  • Those records are reviewed on the same cadence as your other risk and compliance topics – for example, in quarterly ISMS management reviews.
  • A small set of A.8.10 metrics – such as on‑time completion of erasure requests, number of overdue exceptions, and systems still missing defined rules – appears in regular reporting.

When you manage this in an ISMS platform like ISMS.online, the same workflows that handle incidents, changes and risk can carry retention and deletion decisions. That keeps what you actually do with player data aligned with what you tell players, partners and regulators, even when the studio is in launch mode or firefighting.


How do cloud services and vendors change our approach to A.8.10, and what should we embed in contracts and configurations?

Cloud and SaaS services change where and how player data is stored and deleted, but they do not change the reality that your studio is still responsible for deciding what is kept, for how long and when it must be removed or anonymised.

What should we capture for each service that touches player data?

For every provider that holds player identifiers or behavioural data, your ISMS records should spell out:

  • Which player‑data categories it stores (IDs, contact details, payment tokens, chat logs, telemetry, support records) and for which titles, regions or platforms.
  • Which retention and deletion options you can control: log retention sliders, object‑store lifecycle rules, built‑in erasure tooling, account‑closure behaviour.
  • How deletion is triggered in practice – by configuration, scheduled processes, API calls or support tickets – and what that means for backups, replicas and analytics exports.
  • What evidence you can gather and keep: configuration exports, audit logs, SOC 2 or ISO 27001 reports, vendor statements on backup handling and end‑of‑contract media sanitisation.

Those details shape two key artefacts:

  • Your internal lifecycle and retention matrix, where third‑party stores appear alongside in‑house databases and log platforms.
  • Your contracts and data‑processing agreements, which should set expectations for maximum retention, erasure support, backup treatment and behaviour at termination or migration.

Vendor‑risk assessments should treat deletion and retention as questions on the same level as encryption and access control. If a provider cannot meet the lifecycle you have defined for your players’ data, that becomes a conscious risk decision for your security and privacy leads rather than an accidental compromise under release pressure.

When you manage these expectations, configurations and evidence inside ISMS.online, you maintain a consistent A.8.10 story even as your vendor mix evolves. You can show which services hold which categories of player data, how long they keep them, how you trigger deletion or anonymisation, and where you store proof that it happens – exactly the clarity platforms, regulators and players look for when deciding whether to trust a games studio over the long term.



Mark Sharron

Mark Sharron leads Search & Generative AI Strategy at ISMS.online. His focus is communicating how ISO 27001, ISO 42001 and SOC 2 work in practice - tying risk to controls, policies and evidence with audit-ready traceability. Mark partners with product and customer teams so this logic is embedded in workflows and web content - helping organisations understand, prove security, privacy and AI governance with confidence.
