Key Person Dependency: The Risk No One Puts in the Board Paper

Every tax team has a Sarah. Key person dependency isn’t a personnel issue — it’s a structural issue. This article examines how operating models and technology choices either concentrate institutional knowledge in one person’s head or convert it into something the organisation owns.

Andrew Danckert
March 1, 2026
2 min read

Every tax team has a Sarah.

She’s the one who knows which GL accounts map to which return labels. She knows why the depreciation schedule doesn’t reconcile to the fixed asset register, and why it doesn’t matter. She knows that Entity 7 has a different year end, that the R&D claim requires a manual adjustment in Q3, and that the dividend frankings need to be traced through three intercompany steps before they hit the consolidated return.

She knows all of this because she built the process. She’s been running it for years. And none of it is written down. At least not in any way that someone else could follow.

Key person dependency isn’t a personnel issue. It’s a structural issue. It’s a measure of how much institutional knowledge lives in someone’s head versus in the systems and processes around them. And for most tax functions, that measure is dangerously high.

TACIT KNOWLEDGE VS EXPLICIT KNOWLEDGE

The distinction matters because it changes how you think about the problem.

Explicit knowledge is what’s captured in your systems, your documentation, your processes. It’s the data in the GL extract, the formulas in the return template, the policy in the governance framework. It exists independently of any individual. If Sarah leaves, explicit knowledge stays.

Tacit knowledge is everything else. It’s the judgement calls Sarah makes without thinking about them. It’s knowing that the property revaluation in Entity 3 needs a manual override because the automated feed doesn’t capture it correctly. It’s knowing that the finance controller in Melbourne sends the intercompany workpaper late every quarter and you need to chase it on day two, not day five. It’s knowing which numbers on the return the ATO always asks about, and where the supporting evidence lives.

Tacit knowledge is acquired through experience. It’s difficult to articulate, almost impossible to document comprehensively, and it walks out the door when the person who holds it leaves.

Every tax function runs on a mix of both. The ratio between them (how much of your compliance process is explicit and system-embedded versus tacit and person-dependent) is the single best measure of your operational resilience. And every decision you make about operating model and technology either converts tacit knowledge into explicit knowledge, or it doesn’t.

IT’S NOT ABOUT WHETHER SARAH LEAVES

The obvious risk is that Sarah resigns. Finds a better role. Gets poached by a firm. Retires.

But key person dependency doesn’t require Sarah to leave. It only requires Sarah to be unavailable. Annual leave during a filing deadline. Parental leave. Illness. A restructure that spreads her across two roles. A promotion that takes her out of the detail.

And it doesn’t just affect one person. In a team of five, key person dependency often means that three people can do the work, one person can review it, and one person actually understands it. Remove the person who understands it and the other four are following a process they can’t explain, making adjustments they can’t justify, and producing a return they can’t defend.

That’s the risk the board never sees. Not because it’s hidden, but because it doesn’t have a line item. It’s not in the risk register. It’s not in the audit committee paper. It’s just there, quietly compounding, until something forces it into the open.

OPERATING MODELS AND KEY PERSON DEPENDENCY

The operating model your tax function runs determines where tacit knowledge concentrates and how fast it compounds.

Fully manual, in-house. Maximum concentration. Every decision, every workaround, every undocumented adjustment lives with the people who do the work. The spreadsheets contain data. The people contain logic. If the team is small (and most Australian tax teams are), one or two people hold the entire institutional memory of how tax compliance actually works in the organisation.

Co-sourced. The dependency shifts but doesn’t shrink. The external adviser absorbs some of the tacit knowledge, but now it sits across the fence. The co-source partner’s senior manager who “knows your account” becomes the new Sarah. When they rotate off (and they will, because that’s how professional services firms work), the same problem reappears in someone else’s organisation, where you have even less visibility into it.

Outsourced. The dependency is transferred entirely. You no longer have a key person problem inside your tax team. You have one inside your provider. And you’ve added a new layer: someone internally still needs to understand enough to brief the provider, review their output, and answer questions the ATO directs at the company. That person becomes your Sarah, except with less context because they’re not the one doing the work.

Technology-enabled. This is the only model that structurally converts tacit knowledge into explicit knowledge. But only if the technology is designed for it. And most of it isn’t. Which brings us to the real problem.

THE TAX SOFTWARE THAT DOESN’T SOLVE IT

This is the part that surprises people: purpose-built tax compliance software, the tools specifically designed for corporate tax, often does very little to reduce key person dependency. In some cases, it makes it worse.

Not because the software doesn’t work. It does. These platforms calculate correctly. They produce returns. They handle multi-entity consolidations and electronic lodgement. The logic is maintained by the vendor, which genuinely removes one category of tacit knowledge from the equation.

But the interfaces these products present to the user create a new category of dependency that’s just as dangerous.

Most dedicated tax compliance platforms were built over decades, accumulating features, modules, and configuration options. The result is software that is powerful but opaque. Applications where the user needs deep training not just in tax, but in the product itself. Operating them effectively requires months of learning. Understanding what’s actually happening beneath the surface requires years.

Platforms not originally designed for tax (general-purpose tools configured for tax compliance) have the same problem in a different form. The person who built the model is the only one who understands it. But the dedicated tax products are where this matters most, because they’re the ones organisations adopt specifically to reduce risk.

Four specific problems:

Import opacity. What data came in from the GL and what was adjusted inside the platform? In many of these systems, the boundary between source data and user-created adjustments isn’t visible. The tax manager who configured the import knows which figures are untouched and which have been overridden. A new team member, or an ATO analyst, looking at the same screen sees numbers without context. The platform doesn’t make the distinction clear because the UI wasn’t designed for legibility. It was designed for the person who set it up.

Adjustment traceability. Where are tax adjustments being made, and how do they flow through to the return? In a complex tax platform, adjustments can happen in multiple places: worksheets, mapping tables, override fields, configuration layers, calculation modules. The person who built the process knows the path from GL to return. Everyone else is navigating a system where the logic is distributed across screens and modules that don’t surface it in any coherent way. The traceability exists, but it requires the kind of platform expertise that takes years to develop. That expertise is tacit knowledge about the software, layered on top of the tacit knowledge about the tax process.

Outcome legibility. How does a specific adjustment appear in the final return? Can a reviewer trace a number on a return label back through the system to the originating GL entry without being a power user? In most dedicated tax platforms, the answer is no. Not without significant navigation, not without knowing which module to look in, and not without understanding the platform’s internal data architecture. The audit trail is there in theory. In practice, it’s accessible only to the person who already knows where everything is.

Confidence in completeness. This is the question that haunts every tax team using a complex platform: how do I know I used it right? How do I know something hasn’t fallen through the cracks? When the tax effect journal is out of balance, is that a genuine tax issue, or did a mapping fail, an import miss a batch, an adjustment land in the wrong period? When the calculated balance doesn’t agree with what’s being posted to the GL via the journal page, where do you start looking?

In a complex platform, answering these questions requires deep product knowledge. The user has to mentally reconstruct the data flow (from import to classification to adjustment to calculation to journal to return) and identify where the break occurred. There’s no single view that shows it. No reconciliation screen that says “here’s what came in, here’s what changed, here’s what went out, and here’s where they don’t agree.” The diagnosis requires the same tacit knowledge that created the dependency in the first place.

This is the most corrosive form of key person dependency. It’s not just that Sarah knows how to use the platform. It’s that Sarah is the only one who can tell whether the platform produced the right answer. The system doesn’t validate itself. Sarah validates the system. And when Sarah isn’t there, nobody knows whether the output is correct. They just know it produced an output.

A system that requires expert knowledge to verify its own accuracy isn’t a control. It’s a risk wearing the uniform of a control.

Here’s the reframe: these platforms have replaced “Sarah knows the tax rules” with “Sarah knows how to use the software.” The tacit knowledge hasn’t been converted to explicit knowledge. It’s been relocated from the tax process to the application process. The dependency is the same. Only the subject has changed.

THE MISSING LAYER: ORGANISATIONAL PROCESS

There’s a further dimension that almost no tax technology addresses, and it’s where some of the most critical tacit knowledge lives.

Tax compliance doesn’t start when data enters the platform. It starts with a series of upstream activities that are entirely organisation-specific. Which report do you run from the ERP to get the right trial balance extract? Which version, because the standard report doesn’t include the reclass entries and you need the one the finance team modified in 2022. Who in the finance team owns the intercompany reconciliation, and when does it need to be completed before tax can use it? What workpaper does the financial controller need to provide for the foreign exchange adjustments, and in what format, and does it need to be signed off before it comes to you?

This process knowledge (the who does what, when, in what order, and where do I find it) almost always lives in Sarah’s head. Or in a set of emails she can search. Or in a checklist document on a shared drive that was last updated three years ago and doesn’t reflect the current process.

A platform that stores data and calculations but doesn’t store process is solving half the problem. When Sarah leaves, the new person inherits a system that can produce a tax return, but no guide to the forty steps required to get the right data into it. They spend their first three months not learning tax. They spend it learning how things work around here. That’s tacit knowledge that the technology was supposed to eliminate.

WHAT ACTUALLY REDUCES KEY PERSON DEPENDENCY

Converting tacit knowledge to explicit knowledge requires the system to absorb what currently lives in people’s heads. Not just data. Not just calculations. Logic, process, and context.

A rules-based system embeds the logic in the platform itself. The classification rules, the adjustment methodologies, the treatment of each account. These are defined once, applied consistently, and maintained as the rules change. When Sarah leaves, the logic stays. That’s tacit-to-explicit conversion in the tax domain.
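The idea can be sketched in a few lines. The account codes, names, and treatments below are hypothetical, not any vendor’s actual schema; the point is that the classification logic lives in a rule table anyone can read, not in an operator’s head.

```python
# A minimal sketch of rules-based classification. Account codes, names,
# and treatments are invented for illustration.

RULES = [
    # (predicate on the account, tax treatment to apply)
    (lambda a: a["code"].startswith("6150"), "non-deductible entertainment"),
    (lambda a: "depreciation" in a["name"].lower(), "substitute tax depreciation"),
    (lambda a: True, "deductible as booked"),  # default rule
]

def classify(account):
    """Apply the first matching rule; the logic lives here, not with a person."""
    for matches, treatment in RULES:
        if matches(account):
            return treatment

ledger = [
    {"code": "6150-01", "name": "Client entertainment"},
    {"code": "7200-00", "name": "Depreciation expense"},
    {"code": "5000-00", "name": "Salaries and wages"},
]
for account in ledger:
    print(account["name"], "->", classify(account))
```

When the rules change, the table changes, and the change is visible to the next person who opens it.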

A transparent interface makes the system legible to someone who wasn’t there when it was configured. Clear delineation between imported data and internal adjustments. Visible adjustment trails that don’t require platform expertise to follow. One-click traceability from the return label back to the originating GL line. Not hidden behind five navigation layers, but visible on the surface, by design. That’s tacit-to-explicit conversion in the application domain.

Built-in validation removes the “how do I know I used it right?” problem entirely. When the system can show you, in a single view, what was imported, what was adjusted, how those adjustments flow to the return, and where anything doesn’t reconcile, the confidence question is answered by the platform, not by the operator. The system validates itself. That’s the difference between a control and a tool.
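A toy version of that single view, using invented figures. A real platform would track this per account and per return label, but the shape of the answer is the same: what came in, what changed, what went out, and the gap.

```python
# A minimal sketch of a one-screen reconciliation view. Figures are made up.

def reconcile(imported, adjustments, reported):
    """Show what came in, what changed, what went out, and any gap."""
    total_in = sum(imported.values())
    total_adj = sum(adjustments.values())
    total_out = sum(reported.values())
    return {
        "imported": total_in,
        "adjusted": total_adj,
        "reported": total_out,
        "unreconciled": total_in + total_adj - total_out,
    }

view = reconcile(
    imported={"accounting profit": 1_000_000.0},
    adjustments={"entertainment add-back": 25_000.0,
                 "tax vs book depreciation": -40_000.0},
    reported={"taxable income": 985_000.0},
)
for line, amount in view.items():
    print(f"{line:>12}: {amount:,.0f}")
```

A non-zero “unreconciled” line is the platform telling the operator where to look, instead of the operator reconstructing the data flow from memory.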

An embedded process layer stores the organisational knowledge alongside the tax knowledge. The extraction instructions. The upstream dependencies. The review checklists. The contacts and deadlines and “here’s who to call when Entity 7’s data is late.” Not in a separate document, but in the system, attached to the workflow, visible to whoever picks up the work next. That’s tacit-to-explicit conversion in the operational domain.
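One way that process layer might be structured as data rather than as a stale checklist document. The step names, report name, owners, and timings below are invented; the design point is that dependencies are explicit, so the system can tell whoever picks up the work what is runnable next.

```python
# A rough sketch of process steps stored as data alongside the tax work.
# Step names, the report name, owners, and timings are all hypothetical.

process = [
    {"step": "Extract trial balance",
     "how": "ERP report ZTB_TAX (modified version that includes reclass entries)",
     "owner": "tax analyst", "due": "day 1 of close", "depends_on": []},
    {"step": "Intercompany reconciliation",
     "how": "Workpaper from finance; chase on day 2 if not received",
     "owner": "finance controller, Melbourne", "due": "day 3",
     "depends_on": ["Extract trial balance"]},
    {"step": "FX adjustment workpaper",
     "how": "Signed-off template from the financial controller",
     "owner": "financial controller", "due": "day 5",
     "depends_on": []},
]

def runnable_now(done):
    """Steps whose upstream dependencies are all complete."""
    return [s["step"] for s in process
            if s["step"] not in done
            and all(d in done for d in s["depends_on"])]

print(runnable_now(done={"Extract trial balance"}))
```

The “how” and “owner” fields are exactly the knowledge that otherwise lives in Sarah’s inbox.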

None of these features are technically exotic. But they require a design philosophy that prioritises legibility over flexibility, transparency over configurability, and institutional resilience over individual power-user capability.

THE BOARD PAPER TEST

Here’s a useful test: could you write a board paper that quantifies your key person dependency?

Not in abstract terms. In specific terms. If your senior tax manager left tomorrow, how many weeks would it take to reconstruct their tacit knowledge? How many return positions would be at risk of error? How many ATO queries could the remaining team answer without calling the person who left?

For most organisations, the honest answer is uncomfortable.

Run the simulation. Take your most recent income tax return and give it to someone on the team who wasn’t the primary preparer. Ask them to explain, without help, how three specific numbers were derived, from the GL source data through to the return label. Time how long it takes. Note where they get stuck. Note what they have to ask Sarah about. Every question they ask is tacit knowledge that hasn’t been converted.

Run the retrospective. Pull out the last set of Justified Trust questions your organisation responded to. If you haven’t been through a formal review yet, use the questions below as a simulation. They reflect the standard lines of enquiry the ATO pursues with Top 100 and Top 1000 taxpayers:

1. Can you demonstrate how a specific deduction was calculated, from the originating transaction to the return label? Not just the answer. The trail. Could someone other than the primary preparer walk the ATO through it without assistance?

2. Can you explain the basis for the tax treatment of a material adjustment? Not from memory. From what’s in the system. Is the rationale documented alongside the adjustment, or does it require someone to recall why it was done?

3. Can you show who reviewed and approved each material position on the return, and when? Is the approval trail in the platform, or was it an email that someone would need to search for?

4. Can you reconcile the tax expense in the financial statements back to the underlying tax calculations? And can someone other than your senior tax manager do it without guidance?

5. If the ATO queried a prior-year position, could your current team reconstruct the basis for it? Or does that depend on someone who may no longer be in the role?

6. Can you show how data moved from your source systems into your tax platform, and confirm that nothing was lost or altered in the process? Is the import reconciliation automatic and visible, or does it rely on someone manually checking that the numbers match?

7. Can you identify every manual adjustment made outside the system (in spreadsheets, workpapers, or email) and confirm it was captured in the final return? Or are there steps in the process that live entirely outside the platform?

8. If your tax team changed over entirely, could the new team produce next year’s return to the same standard using only what’s in the system and the documented process? This is the ultimate test. If the answer is no, the gap between that answer and yes is your key person dependency, measured in weeks and risk.

Score each one honestly: could anyone on the team answer it, or only Sarah? The ratio tells you exactly how exposed you are. More importantly, it tells you which categories of tacit knowledge your current systems and processes aren’t capturing.
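The scoring above can be made concrete in a few lines. The question labels are paraphrases of the eight questions, and the answers are illustrative, not a benchmark; the output is simply the ratio described above.

```python
# A back-of-the-envelope scorer for the eight questions. Labels are
# paraphrased and the sample answers are invented for illustration.

answers = {
    "trace a deduction to the return label": "anyone",
    "explain a material adjustment from the system": "only sarah",
    "show the approval trail": "anyone",
    "reconcile tax expense to the calculations": "only sarah",
    "reconstruct a prior-year position": "only sarah",
    "confirm the data import was complete": "anyone",
    "account for every off-system adjustment": "only sarah",
    "hand over to an entirely new team": "only sarah",
}

exposed = sum(1 for who in answers.values() if who == "only sarah")
ratio = exposed / len(answers)
print(f"Key person exposure: {exposed}/{len(answers)} ({ratio:.0%})")
```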

The pattern to watch for: if the answers to these questions depend on navigating to specific screens in your platform, knowing which module to check, or understanding configuration choices that aren’t self-evident in the interface, that’s application-layer tacit knowledge. Your platform is part of the dependency, not the solution to it.

The solution isn’t hiring a backup Sarah. It’s building a tax function where tacit knowledge is systematically converted into explicit knowledge. Embedded in the rules, visible in the interface, stored in the process, and accessible to whoever needs it next.

That’s not a technology preference. It’s an operational risk decision.
