Transparency
What HR professionals need to know — and demand — when AI makes decisions about people
RAIHR Framework Series · Dimension 2 of 7
The black box problem
A candidate applies for a position. The AI screening system reviews their application and scores them below the threshold. They never hear back.
The candidate does not know why. The recruiter does not know why. Even the HR director, if asked, could only say that the system ranked the candidate low based on its model.
This is the black box problem — and it is not a technical edge case. It is the default state of most AI-powered HR tools in use today.
When an AI system cannot explain its reasoning in terms that a human decision-maker can understand and evaluate, something important has been lost: the ability to verify that the decision was sound, the ability to catch errors before they cause harm, and the ability to respond when a candidate or employee asks why.
What transparency means in AI-assisted HR
Transparency, as a dimension of responsible AI in HR, has two distinct but related meanings.
The first is explainability: can the system — or the vendor — tell you, in meaningful terms, why it produced a particular output? Not just "the model scored this candidate 62 out of 100," but what factors drove that score, how those factors were weighted, and whether those weights reflect legitimate job-relevant criteria.
The second is disclosure: are the people affected by AI-assisted decisions informed that AI was involved? Do candidates know their application was screened by an algorithm? Do employees know their performance data is being analyzed by a predictive model? Do they have any recourse?
Both matter. A system can be explainable to HR without being disclosed to candidates. A system can disclose that AI is involved without providing any meaningful explanation of how. Genuine transparency requires both.
Transparency is not just about understanding what the AI did. It is about being able to tell the people affected by its decisions that it was used — and why the outcome was what it was.
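To make "explainability" concrete for non-technical readers: a transparent scoring tool is one where the factors and their weights can be enumerated, and where each candidate's score can be broken down into per-factor contributions. The sketch below is a deliberately simplified, hypothetical illustration; the factor names and weights are invented, and real AI assessment tools are far more complex (which is exactly why vendors must be pressed for equivalent explanations).

```python
# Hypothetical illustration of an explainable scoring model.
# Factor names and weights are invented for this example only --
# they do not describe any real vendor's tool.

FACTOR_WEIGHTS = {
    "years_relevant_experience": 0.40,
    "skills_match": 0.35,
    "certification_level": 0.25,
}

def score_candidate(factors):
    """Weighted sum of job-relevant factors, each rated 0-100."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

def explain_score(factors):
    """Per-factor contributions, largest first -- the kind of
    breakdown HR should be able to give a candidate who asks why."""
    contributions = [
        (name, FACTOR_WEIGHTS[name] * value) for name, value in factors.items()
    ]
    return sorted(contributions, key=lambda pair: pair[1], reverse=True)

candidate = {
    "years_relevant_experience": 70,
    "skills_match": 55,
    "certification_level": 40,
}
print(score_candidate(candidate))
print(explain_score(candidate))
```

The point of the sketch is not the arithmetic but the contract: for any score the system produces, there is an answer to "which factors drove it, and by how much." A vendor who cannot provide the equivalent of `explain_score` for their own model, at some meaningful level of abstraction, is asking you to take the output on faith.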
Why HR is the last human checkpoint
HR sits between the AI system and the people it affects. That position carries a specific transparency responsibility that no other function shares.
When a candidate is rejected, it is HR's name on the process — not the vendor's. When an employee is flagged by a predictive attrition model and subsequently passed over for a project, the harm lands on that person regardless of whether HR knew the AI was involved. When a regulator asks how hiring decisions were made, HR answers.
In a growing number of jurisdictions, this responsibility is also becoming a legal one. Regulations in the EU, New York City, and other regions now require disclosure to candidates when AI tools are used in hiring, and in some cases mandate independent bias audits. The regulatory direction is clear: AI in HR is moving toward required transparency, not optional transparency.
HR professionals who wait for their legal team to tell them when transparency is required are already behind. The governance standard for responsible practice is ahead of the current legal minimum — and closing fast.
What transparency governance looks like in practice
Scenario 1: A vendor cannot explain their scoring
Your team is evaluating an AI talent assessment tool. You ask the vendor to explain how the tool scores candidates and what factors it uses. The vendor tells you the model uses "hundreds of signals" and that the specific weighting is proprietary.
A transparency-literate HR professional recognizes this as insufficient. Proprietary methodology is a legitimate business concern — but it cannot override the HR team's need to understand, at a meaningful level, what the tool is actually measuring. If the vendor cannot tell you whether the model weights communication style, response speed, educational background, or behavioral patterns — and cannot tell you how each of those factors relates to job performance — you cannot evaluate whether the tool is appropriate for your use case.
The right response is not to walk away automatically, but to require a minimum level of explainability as a procurement condition. If the vendor cannot meet it, that is important information.
Scenario 2: Candidates are not informed AI is involved
Your organization uses an AI video interview platform that analyzes facial expressions, voice tone, and word choice to generate a candidate suitability score. Candidates are told they will be completing a video interview. They are not told the video will be analyzed by AI.
A transparency-literate HR professional identifies this as a disclosure gap — and increasingly, a legal risk. Candidates who later discover AI analysis was involved without their knowledge are likely to feel deceived, and in some jurisdictions that non-disclosure may violate emerging regulations.
The fix is straightforward: update candidate-facing communications to clearly state that AI tools are used in the assessment process, what data is collected, and how it is used. This is not just a compliance step — it is a trust-building one.
Scenario 3: An employee asks why they were passed over
A high-performing employee asks their HR business partner why they were not selected for a leadership development program. The HRBP knows the AI talent identification tool flagged the employee as "not ready," but cannot explain what criteria the tool used or why the employee scored below the threshold.
A transparency-literate HR professional understands that "the system said so" is not an acceptable answer to a legitimate employee question. They escalate to understand the tool's criteria, work with the vendor to get a meaningful explanation, and ensure that future communications about the program include clear information about how candidates are identified.
Key questions to ask
| # | Question |
|---|---|
| 1 | Can you explain, in terms a non-technical HR professional can understand, what factors your tool uses to score or rank candidates — and how those factors are weighted? |
| 2 | Is the scoring methodology fully proprietary, or can you provide a meaningful explanation of the model's decision logic without disclosing trade secrets? |
| 3 | What do candidates see or receive that informs them AI is being used in their assessment? Is this disclosed before they begin, or only in fine-print terms and conditions? |
| 4 | If a candidate or employee asks why they received a particular score or outcome, what can we tell them? Can we provide an individualized explanation? |
| 5 | In which jurisdictions does this tool operate, and have you assessed compliance with AI transparency requirements in each — including NYC Local Law 144 and the EU AI Act? |
| 6 | How does the tool handle cases where its output conflicts with human judgment? Is there a documented process for HR to override the AI recommendation? |
| 7 | What information is provided to internal stakeholders — hiring managers, HRBPs — about how to interpret and appropriately use the tool's outputs? |
Build the judgment to demand real transparency
Transparency is one of seven dimensions in the RAIHR framework for responsible AI in HR. The skills it requires — knowing what questions to ask vendors, understanding what disclosure obligations exist, being able to explain AI-assisted decisions to the people they affect — are not instinctive. They are learned.
The RAIHR Certified Practitioner program tests these skills through realistic scenario-based examination. You will encounter the situations described above — vendors with incomplete answers, candidates who deserve explanations, employees who were affected by decisions they did not know were AI-assisted — and be assessed on whether you can navigate them correctly.
The certification is open to all HR professionals — regardless of seniority or technical background. No coding knowledge required. Open-book examination. 90 minutes. The question is not whether you can recall definitions. It is whether you can make the right call.
Ready to get certified? Register at raihr.org
RAIHR · Responsible AI in HR · raihr.org
This article is part of the RAIHR Framework Series covering all seven dimensions of the certification program.