
Security

Why HR professionals cannot afford to treat AI security as someone else's problem

RAIHR Framework Series · Dimension 4 of 7


The most sensitive data in the organization

Ask most HR professionals whether their function handles sensitive data, and they will say yes without hesitation. Compensation records. Performance assessments. Disciplinary histories. Medical accommodations. Immigration status. In many organizations, HR manages data that is more personally sensitive than anything held by finance or legal.

Now add AI to that picture.

AI-powered HR tools do not just store data; they process it, combine it, score it, and generate outputs that can follow an employee for years. A predictive attrition score. A leadership potential rating. A cultural fit assessment from a video interview. These outputs carry the aura of objectivity that AI-generated numbers tend to convey, yet they are derived from data that, if exposed or manipulated, could cause serious harm to real people.

Security, in this context, is not an IT problem that HR hands off after procurement. It is an HR governance responsibility from the first conversation with a vendor to the last day of the contract.


What security means in AI-assisted HR

Security, as a dimension of responsible AI in HR, covers two distinct risk categories.

The first is data security: protecting the personal data that HR AI systems collect, process, and store from unauthorized access, breach, or misuse. This includes data at rest, data in transit, and data held by third-party vendors and their sub-processors.
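
To make "at rest" and "in transit" concrete for non-specialists, here is a minimal Python sketch of at-rest encryption using the open-source cryptography library. The record is invented and the key handling is deliberately simplified; a production system would hold keys in a dedicated key management service, which is the point behind the "who holds the keys" question later in this article.

    # Sketch: what "encryption at rest" means in practice, using the
    # open-source cryptography library. The record is a made-up example.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # in production: held in a key management service
    fernet = Fernet(key)

    record = b'{"employee_id": "E-1042", "attrition_score": 0.71}'
    stored = fernet.encrypt(record)   # what lands on disk is ciphertext, not the record
    assert fernet.decrypt(stored) == record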

The second is a set of AI-specific security risks that go beyond traditional data protection:

  • Model poisoning: if an AI system learns from ongoing data inputs, adversarial manipulation of those inputs can degrade the model's performance or introduce systematic bias over time.
  • Adversarial inputs: candidates or employees who understand how a scoring system works may be able to game it in ways that undermine its validity — or, in the wrong hands, to manipulate who gets flagged or advanced.
  • Output integrity: AI-generated scores, recommendations, and reports need to be tamper-evident. If an attrition risk score or a performance rating can be altered between generation and use, the integrity of every decision downstream is compromised (see the sketch after this list).
  • Access control: who in the organization can see AI-generated assessments? Is there a risk that sensitive outputs — low performance predictions, flight risk scores, wellness indicators — are accessible to people who should not have them?
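
To make the output-integrity point concrete, the sketch below uses a standard technique: an HMAC signature computed when a score is generated and re-checked at the point of use, so that any alteration in between is detectable. It is a minimal illustration in Python; all names, keys, and values are hypothetical, not a reference to any specific platform.

    # Sketch: making an AI-generated score tamper-evident with an HMAC.
    # SIGNING_KEY and the payload fields are illustrative placeholders;
    # a real platform would manage signing keys centrally.
    import hashlib, hmac, json

    SIGNING_KEY = b"held-by-the-platform-not-by-report-viewers"

    def sign_output(payload: dict) -> str:
        message = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

    def is_untampered(payload: dict, signature: str) -> bool:
        return hmac.compare_digest(sign_output(payload), signature)

    score = {"employee_id": "E-1042", "attrition_risk": 0.71}
    signature = sign_output(score)

    score["attrition_risk"] = 0.12               # altered between generation and use...
    assert not is_untampered(score, signature)   # ...and the alteration is detected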

Security for HR AI is not just about keeping data safe from external attackers. It is about ensuring that the AI systems your organization relies on cannot be manipulated, that their outputs cannot be tampered with, and that sensitive information reaches only those with a legitimate need to see it.


Why HR is the last human checkpoint

In most organizations, IT owns the security architecture and legal owns the contractual risk. HR owns neither — and that is often where the accountability gap opens.

When an HR AI tool suffers a data breach, the investigation will ask: who approved this vendor? Who reviewed the security terms? Who was responsible for ensuring that sensitive candidate and employee data was adequately protected? Those questions land on HR even when HR never saw the security documentation.

More practically: HR is the function that communicates with candidates and employees. When a breach occurs and affected individuals need to be notified, when media coverage of an AI manipulation incident names your organization, when an employee discovers their wellness data was shared with their manager without authorization — HR manages the human consequences. That accountability requires commensurate involvement in the security decisions that precede it.

HRIS professionals and HR technology leaders in particular carry a specific security responsibility: the systems they configure, integrate, and maintain are the pathways through which sensitive data flows. An insecure integration between a performance management platform and a predictive analytics tool is an HR technology problem before it is an IT problem.


What security governance looks like in practice

Scenario 1: The vendor cannot answer basic security questions

During procurement of a new AI talent analytics platform, your HR technology team asks the vendor about their security certifications, data encryption standards, and penetration testing schedule. The vendor confirms they are SOC 2 compliant and offers to share their certificate.

A security-literate HR professional understands that SOC 2 is a baseline, not a complete answer. They ask follow-up questions: what is the scope of the SOC 2 audit — does it cover the specific systems that will process your data? What encryption standards are used for data at rest and in transit? Who are the sub-processors that will handle your data, and what security requirements apply to them? When was the most recent penetration test conducted, and by whom?

A certificate without answers to these questions provides limited assurance.

Scenario 2: Access controls are not aligned with need

Your organization deploys an AI-powered performance analytics tool that generates individual employee risk scores across several dimensions. Three months after deployment, you discover that line managers across the business can access the detailed score breakdowns for every employee in the organization — not just their direct reports.

A security-literate HR professional identifies this as an access control failure with direct people consequences. Individual employees' AI-generated risk scores are sensitive personal data. Access should be limited to those with a defined, legitimate need, typically direct line managers and relevant HR business partners. The issue needs to be remediated immediately, and a review of what access was actually exercised during the misconfiguration window is warranted.
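
As a rough illustration of that access rule, the sketch below grants score visibility only to an employee's direct manager or assigned HR business partner, and logs every check so that the "what access was exercised" question can be answered later. The organization data and identifiers are hypothetical placeholders, not any real system's API.

    # Sketch: need-based access to AI-generated scores, with an audit trail.
    # Employee records and IDs are invented for the example.
    AUDIT_LOG = []

    def can_view_scores(viewer_id: str, employee: dict) -> bool:
        allowed = viewer_id in (employee["manager_id"], employee["hrbp_id"])
        AUDIT_LOG.append({"viewer": viewer_id, "employee": employee["id"], "granted": allowed})
        return allowed

    employee = {"id": "E-1042", "manager_id": "M-007", "hrbp_id": "H-003"}
    assert can_view_scores("M-007", employee)        # direct manager: allowed
    assert not can_view_scores("M-999", employee)    # unrelated manager: denied, and logged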

Scenario 3: The contract ends but the data remains

Your organization switches AI vendors for candidate assessment. The old contract has expired. Six months later, someone on the HR team checks the old vendor's portal and finds that candidate profiles — including video recordings and assessment scores from the past two years — are still accessible.

A security-literate HR professional escalates immediately. Vendor data retention and deletion obligations should be documented in the contract and verified at contract end, not discovered by accident. They initiate formal deletion confirmation from the vendor, review whether the data residency terms were complied with, and update the vendor offboarding process to include explicit deletion verification for every future contract termination.
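
One way to make that offboarding step explicit is to record "contract ended" and "deletion confirmed in writing" as separate facts that must both be true before the file is closed. The sketch below is a hypothetical illustration of such a record; the vendor name and field names are invented for the example.

    # Sketch: vendor offboarding record where contract expiry and verified
    # deletion are tracked separately. All fields are illustrative.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class VendorOffboarding:
        vendor: str
        contract_end: date
        deletion_confirmed: date | None = None   # written confirmation from the vendor
        sub_processors_confirmed: bool = False   # deletion covers sub-processors too

        def is_closed(self) -> bool:
            return self.deletion_confirmed is not None and self.sub_processors_confirmed

    case = VendorOffboarding(vendor="OldAssessCo", contract_end=date(2024, 1, 31))
    assert not case.is_closed()   # expiry alone does not close the security obligation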


Key questions to ask

1. What security certifications does this vendor hold, and what is the specific scope of each — does it cover the systems and data centers that will process our data?
2. What encryption standards are applied to our data at rest and in transit? Who holds the encryption keys?
3. Who are the vendor's sub-processors that will have access to our candidate or employee data? What security requirements apply to those sub-processors?
4. How is access to AI-generated outputs — scores, assessments, risk flags — controlled within our organization? Who can see what, and is that access logged?
5. What is the vendor's data deletion process at contract termination, and how do we receive confirmation that deletion has been completed?
6. Has a security risk assessment been completed for the integrations between this AI tool and our existing HRIS, ATS, or other people systems?
7. What is the incident response process if this vendor experiences a data breach involving our candidates' or employees' data? What notification timeline are they contractually committed to?

Build the judgment to govern AI security

Security is one of seven dimensions in the RAIHR framework for responsible AI in HR. The ability to ask the right questions at procurement, recognize an access control problem when you see one, and hold vendors to deletion obligations at contract end — these are not IT skills. They are HR governance skills that every practitioner working with AI-powered tools needs to develop.

The RAIHR Certified Practitioner program tests security judgment through scenario-based examination grounded in real HR situations. You will be assessed not on your technical knowledge of security architecture, but on your ability to identify the right governance action and distinguish it from the plausible alternatives that fall short.

The certification is open to all HR professionals — regardless of seniority or technical background. No coding knowledge required. Open-book examination. 90 minutes. The question is not whether you can recall definitions. It is whether you can make the right call.

Ready to get certified? Register at raihr.org


RAIHR · Responsible AI in HR · raihr.org

This article is part of the RAIHR Framework Series covering all seven dimensions of the certification program.

Previous: Privacy · Next: Human Oversight
