Practical considerations beyond data compliance

Regulatory compliance is a key consideration for any organization operating today. At the same time, the reasoning behind it is often hand-waved: you must comply with data regulations because the law says so. It is therefore instructive to look at the practical considerations beyond data compliance:

Regulations and requirements

The regulatory sphere is steadily tightening, and organizations face a complex problem where multiple restrictions and regulations intersect – for example, the so-called Right to Be Forgotten under GDPR colliding with the retention requirements of eDiscovery.

The major regulations currently in place in the EU include:

  • General Data Protection Regulation (GDPR): Aims to strengthen individuals’ fundamental rights in the digital age and facilitate business by clarifying rules for companies and public bodies in the digital single market.
  • Data Act: Aims to ensure that users, both consumers and businesses, have greater control over the data generated by their connected devices.
  • Directive on Security of Network and Information Systems (NIS2): Aims to improve European resilience against both current and future cyberthreats.
  • Cyber Resilience Act (CRA): Regulates “the design, development, production and making available on the market of hardware and software products” and brings the resulting products under new CE marking requirements.

There are of course more, and more still once we leave the EU for the global scene – though compliance with EU regulations will usually meet or exceed expectations elsewhere.

But the regulations themselves are not what we are here to cover; beyond the legal expectations they lay out, what other push and pull factors drive organizations towards best data handling practices?

The cost of storage

When looking at the practical considerations beyond data compliance, one stands out: storage gets expensive. Organizations have historically expanded storage capacity rather than improving data management, because adding storage and keeping all data in perpetuity was simpler than addressing the underlying governance challenges.

However, the growth in AI adoption and the expanding range of systems that generate data have greatly increased the typical volume of data produced. This bloats storage – for some organizations into the petabytes – stressing systems and swelling storage-associated costs. What’s worse, the data produced by AI tools, IoT devices, chat systems and the like has an unpleasant tendency to be so-called unstructured data.

Unstructured data is difficult to keep track of or routinely audit; lifecycle rules are therefore not always applied to it, and certainly not as easily as they would be to more regular, structured data. This leads neatly into the next issue: liability.

Where security and legality meet

When large volumes of data remain ungoverned – in many cases, the organization may not even know the data exists, much less where it resides – the risk that arises beyond the immediate question of data bloat is data security and the resulting legal liability.

In one real-world case, a well-known organization experienced a data breach that resulted in multiple class action lawsuits. Internal audits revealed that production systems were retaining information far beyond the mandated 10-year limit, significantly increasing the volume of data exposed in the incident.

Such a breach can be badly damaging, both in direct costs from lawsuits and fines and in reputational damage. To combat this risk, we have to accept it as a reality and look for ways to mitigate it – which leads us right back to the very goal of data regulations: a more structured data management ecosystem with rules aimed at ensuring adherence to best practices.

Archive, backup and audit

The solution, as always, is to establish data management systems that adhere to best practices – ideally meeting all the different business needs and checking the requisite boxes:

  • Archive – long-term storage and retention for data, with deduplication features and lifecycle policies to help lower data volume, eDiscovery modules that automate data retention and deletion, and indexing and search to keep the data structure transparent (a minimal sketch of an automated retention rule follows this list).
  • Backup – short-term data copies for recovery from accidental deletion, deliberate alteration or criminal activity. Unlike an archive, a backup is not structured for long-term storage; it is a temporary copy of your systems.
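
To make the lifecycle idea concrete, the sketch below shows, in Python, what an automated retention rule can look like. It is a minimal illustration only: the ArchivedItem record, the ten-year retention period and the flag_expired_items helper are assumptions made for the example, not a description of any particular archive product.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical archive record; real archive systems track far richer metadata.
    @dataclass
    class ArchivedItem:
        item_id: str
        created: datetime
        legal_hold: bool = False  # e.g. frozen for eDiscovery, never auto-deleted

    # Illustrative retention period; the real limit depends on the regulation
    # and the data category.
    RETENTION = timedelta(days=365 * 10)

    def flag_expired_items(items: list[ArchivedItem], now: datetime) -> list[ArchivedItem]:
        """Return items past retention that are not on legal hold.

        A lifecycle policy would then delete or review these automatically,
        instead of relying on someone remembering to clean up."""
        return [
            item for item in items
            if not item.legal_hold and now - item.created > RETENTION
        ]

    if __name__ == "__main__":
        now = datetime.now(timezone.utc)
        demo = [
            ArchivedItem("invoice-2009-001", now - timedelta(days=365 * 15)),
            ArchivedItem("invoice-2024-042", now - timedelta(days=200)),
            ArchivedItem("litigation-2008-7", now - timedelta(days=365 * 16), legal_hold=True),
        ]
        for item in flag_expired_items(demo, now):
            print(f"Past retention, eligible for deletion: {item.item_id}")

The point is not the specific code but the principle: retention becomes a rule the system enforces rather than a task someone has to remember.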

Once these are in place, organizations are encouraged to regularly audit their data storage structures to ensure information is not falling through the cracks. The longer an issue is left unaddressed, the harder it will be to untangle once the can can no longer be kicked further down the road.
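
As a toy illustration of what such an audit can look for, the sketch below walks a file share and lists files that nobody has touched in years – likely candidates for archiving, deletion or at least review. The five-year threshold and the find_stale_files helper are illustrative assumptions; a real audit would also check ownership, classification and whether any retention rule covers each location.

    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    # Illustrative threshold: anything untouched for more than five years is
    # worth reviewing; real audit policies follow regulation and contract.
    STALE_AFTER = timedelta(days=365 * 5)

    def find_stale_files(root: str, now: datetime) -> list[Path]:
        """Walk a directory tree and list files whose last modification is
        older than the threshold."""
        stale = []
        for path in Path(root).rglob("*"):
            if path.is_file():
                modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
                if now - modified > STALE_AFTER:
                    stale.append(path)
        return stale

    if __name__ == "__main__":
        for path in find_stale_files(".", datetime.now(timezone.utc)):
            print(f"Review: {path}")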

 

Your Data In Your Hands – With TECH-ARROW

by Matúš Koronthály