JOH Partners
Perspective · Technology & Digital · Board Pulse

AI in the C-suite: the seat that doesn't know its name

Boards are creating AI roles before they know what the role does. Of the eight Chief AI Officer mandates JOH has run since 2024, three were re-scoped within twelve months. What the seat actually needs to be.

Oliver Helvin · Founding Partner
11 December 2025 · 15 min read

A tech-forward GCC conglomerate hired its first Chief AI Officer in early 2024. The appointment was deliberate, well-resourced and publicly announced. The candidate was credible: a senior operator with genuine technical depth, a track record of running large data programmes, and the personal authority to walk into a board meeting and hold the room. By the close of Q4 2024, less than ten months in, the role had been split. The original mandate had grown three legs that did not fit together: an AI strategy and investment thesis, a model governance and risk function, and an enterprise adoption programme that was, in practice, a global change-management job. The principal made the structural call: two roles, two reporting lines, two operating layers.

By Q2 2025, one of the two splits had been folded back into the CTO function. The model risk piece, which had been spun out as a Chief AI Risk Officer reporting to the COO, turned out to overlap so substantially with the CTO's existing model governance work that the duplication was producing more friction than coverage. The principal made the structural call again: collapse the risk role into CTO, keep the strategy and adoption role separate. By Q4 2025, eighteen months after the original announcement, the survivor of the original two appointments had been re-titled Chief Data and Platform Officer, with a substantially different scope from the role originally signed for. The other appointment had been reduced to a VP-level reporting line under the COO. Neither incumbent had failed. The institution had not failed. What had failed, with high precision, was the title.

This pattern is not unique to that conglomerate. Of the eight Chief AI Officer mandates JOH Partners has run between 2024 and Q1 2026, three were re-scoped or merged into adjacent roles within twelve months of the appointment. In each case, the re-scoping was the institution doing structural work that, with hindsight, should have been done before the role was first defined. The boards that created the role to signal seriousness about AI were, in most cases, signalling correctly. They were less correct about what the role actually did.

Why AI as a C-suite role keeps re-scoping

The pattern is consistent across the mandates. Boards in 2023 and 2024 recognised that AI was going to be a strategic capability the institution would have to build. The simplest way to signal that the institution was taking the capability seriously was to create a senior named role with the word AI in the title. The signalling worked: the institution announced the role, the markets responded positively, the senior-team headcount went up. The role then went to work.

The role discovered, in most cases within ninety days, that its mandate overlapped with three or four existing senior functions. The strategy and investment-thesis side of the AI mandate overlapped with the CEO's office and the corporate strategy function. The model risk and governance side overlapped with the CTO function, the CRO function, and in regulated entities the chief data officer. The enterprise adoption side overlapped with the COO function, the CHRO function and, in many cases, the chief transformation officer. The AI Officer was, in effect, asked to run a horizontal capability across an organisation whose vertical functions had not been redesigned to accommodate the horizontal.

In most of the eight mandates, the original appointee did serious work in the first six months. The strategic frameworks were drafted, the model inventories were built, the adoption pilots were launched. The work was, in itself, valuable. What broke was the operating model around the work. The strategic frameworks could not be approved without the CEO's office and the strategy function being substantively involved, which meant the AI Officer was running parallel strategy work to the strategy function rather than owning it. The model risk programme could not be implemented without the CTO and the CRO being substantively involved, which meant the AI Officer was running parallel governance work to the existing governance functions rather than owning it. The adoption programme could not land without the COO and the CHRO being substantively involved, which meant the AI Officer was running parallel change work to the existing change machinery.

By month nine, the friction was visible. By month twelve, the principal stakeholder, in most cases the CEO with chair-level support, made the structural call. The horizontal role was either folded back into one of the verticals (most often the CTO), split into two narrower roles with cleaner reporting lines, or, in one case in our mandate set, dissolved entirely with the work redistributed across the existing senior team. The role, in other words, was a transitional structure. It signalled the institution's seriousness, it produced the diagnostic work that surfaced the underlying capability gaps, and then it was reorganised.

The reorganisation was not a failure of the original incumbent. In all but one of the eight cases, the original appointee remained in the institution at a senior level after the re-scoping, often with a substantially upgraded scope under the new title. The reorganisation was a failure of the original mandate definition. The boards that signed off on the original mandate had, in most cases, defined the role on the assumption that AI was a single capability that could be held by a single senior leader. The institutions discovered, in operation, that AI was three different capabilities that the institution had to build at three different rhythms, and that holding all three under one senior title produced more conflict than coverage.

The role is not failing. The title is. The strategic-thesis seat, the risk-governance seat and the adoption seat are each a real C-suite job. Holding all three under one title is the structural mistake that produces the re-scoping a year later.
JOH Partners technology and digital practice, 2026

Three things the seat actually has to do

The work the role is asked to do, on careful inspection, decomposes into three substantively different jobs. Each is a real C-suite role in its own right. The structural error in the original Chief AI Officer construction was the assumption that one senior leader could hold all three. In our experience and in the mandate set, that assumption is wrong often enough that boards designing the role in 2026 should treat it as broken at the design stage.

Set the AI investment thesis

The first job is to set the AI investment thesis: what the institution will and will not do, where the strategic bets will be placed, what the time horizon and capital commitment will be, and how the AI investments will be governed against the rest of the institution's strategic portfolio. This is, in operation, a strategy job. The skill profile is closer to a chief strategy officer than to a chief technology officer. The candidate base sits in a few clearly identifiable places: senior strategy operators with credible technology depth, in some cases senior operators from the strategy consulting firms with extended technology practices, in some cases senior operators from the corporate development functions of large global technology groups. The role reports, naturally, to the CEO, and sits at the apex of the strategic portfolio rather than at the apex of the technology function.

The strategic-thesis job has been, in our mandate set, the job most often confused with the technology-leadership job in the original Chief AI Officer construction. The confusion is understandable: the strategic thesis depends on technical depth, and the strategic thesis is most credibly held by a senior leader who can have the technical conversation. But the job itself, when run cleanly, is a strategy job, not a technology job. The institutions that have separated the strategic thesis from the technology delivery have, in our experience, produced sharper strategic decisions and less internal friction.

Govern the AI risk surface

The second job is to govern the AI risk surface: the model risk, the data risk, the regulatory risk, the reputational risk that arises when AI systems make decisions that affect customers, employees, suppliers and the institution's own operating choices. This is, in operation, a risk job. The skill profile is closer to a chief risk officer than to a chief technology officer. The candidate base sits in the senior risk and compliance layer of regulated industries (financial services, healthcare, energy, telecommunications), in the senior model risk functions of large banks, and increasingly in the chief data officer layer that the regulated entities have built over the past decade.

The risk-governance job is the job most often underestimated by the institutions designing the role. The model risk surface is genuinely substantial: the institution running AI systems across customer-facing, operating and decision-support functions is exposed to risks that the existing risk machinery was not designed to detect or contain. The model risk job is closer to actuarial than to engineering: it is about understanding what the AI system can be expected to do under stress, where the failure modes are, what the regulatory exposure looks like, and how the institution's risk appetite should be expressed in the model design and the model deployment. In regulated entities the role is increasingly required by the regulator; in unregulated entities the role is, in most cases, the role the board did not realise it needed until the first material incident.

Drive enterprise adoption

The third job is to drive enterprise adoption: the change management, the workflow redesign, the upskilling, the operating-rhythm changes that turn an institution's AI investment into measurable productivity in the operating layers. This is, in operation, an operations and change job. The skill profile is closer to a chief operating officer or chief transformation officer than to a chief technology officer. The candidate base sits in the senior transformation and operations layer of large global groups, often with a track record of running multi-year change programmes that involved technology, workflow and workforce in combination.

The adoption job is the job that, in our mandate set, has most often been deferred or treated as a downstream consequence of the technology investment. The deferral is structural: the institutions that buy the technology and build the platform tend to assume the adoption will follow, when in practice the adoption is the substantially harder problem. The institutions that have run the adoption job seriously have, in our experience, produced the productivity returns that the AI investment was originally justified by. The institutions that have not run the adoption job seriously have, in most cases, produced AI capability that exists in the platform layer but does not show up in the operating numbers.

The argument is direct. Each of these three jobs is a real C-suite seat. None of them is the technology-delivery job, which is the CTO's. Holding all three under one Chief AI Officer title is the structural mistake that produces the re-scoping twelve months later. The institutions that have skipped the Chief AI Officer construction and gone directly to the three-job design have, in our experience, paid less and produced more.

The role is not failing. The title is.

3 of 8 Chief AI Officer mandates re-scoped or merged into adjacent roles within 12 months, 2024–2026.
Source · JOH Partners mandate data, Q1 2024 to Q1 2026, n=8.

What boards should ask before creating the role

The mandates that have run cleanly in our experience are the mandates where the board has worked through, before the search begins, the question of which of the three jobs the institution actually needs to hire for first. The five questions below are the litmus test we now run before accepting a Chief AI Officer mandate. If the board cannot answer four of the five clearly, the role is not yet ready to be hired and the search will, in most cases, produce a re-scoping inside twelve months.

Figure 01

Five-question litmus test for the AI seat

1. Which of the three jobs is the priority: strategic thesis, risk governance, or adoption?
   One of the three named, with a clear rationale tied to the institution's current AI maturity.

2. What is the relationship to the CTO function?
   A defined operating boundary, agreed by the chair and CEO, in writing.

3. Where does the role report?
   To the CEO if strategic thesis; to the CRO or COO if risk; to the COO if adoption.

4. What is the success metric at month twelve?
   A small number of measurable outcomes tied to the priority job, not a generic AI maturity score.

5. What is the principal stakeholder's personal time commitment?
   Weekly engagement on the priority job, not delegated to the COO or CTO.
Figure 01. The five questions JOH Partners now runs through with a board before accepting a Chief AI Officer mandate. If the board cannot answer four of the five clearly, the role is not yet ready to be hired.
Source · JOH Partners technology and digital practice, 2026

The five questions are not exhaustive. They are the threshold the board should be able to clear before the institution commits to creating the role. The question that boards most often cannot answer, in our experience, is the first: which of the three jobs is the priority. The institution that wants all three at once is, in operation, asking for a structure that will re-scope inside the first year. The institution that has chosen one as the priority, with a clear rationale, has the best chance of running the role cleanly and of getting the value from the appointment that the original board investment was justified by.

Where this is heading by 2027

The current Chief AI Officer construction is, in our view, a transitional title. By 2027, in our base case, most large groups will have settled on one of three structural patterns.

The first pattern is AI as a function under the CTO. This is the most common destination in our mandate set. The strategic-thesis work is held by the CEO's office or the corporate strategy function; the risk work is held by the CRO or the existing model risk team; the adoption work is held by the COO. The CTO function is upgraded to include the AI delivery and platform work, often with a senior VP-level reporting line specifically for AI engineering. This pattern is the structural answer for institutions where AI is, in operation, a technology capability that the institution wants to build into its existing technology function rather than as a separate horizontal.

The second pattern is AI as a business unit. This is the pattern for institutions where AI is, in operation, a product line or a revenue line in its own right, typically at large technology groups, telecommunications operators with a substantial enterprise software business, or financial services institutions with AI-driven product lines. The role has a P&L, runs an engineering team, manages partnerships and customer relationships, and reports to the CEO as a business-unit head. This is a clean structural answer when the underlying business has a coherent AI product or service offering.

The third pattern is AI as a governance role under the COO or CRO. This is the pattern for institutions where the priority is risk governance and adoption discipline rather than product or platform leadership, typically at regulated financial-services institutions, large healthcare groups, or institutions with a substantial AI-enabled operating model that needs to be governed at a senior level. The role does not own the technology delivery; it owns the operating standard, the risk appetite, and the adoption discipline. The reporting line is to the COO or the CRO, depending on the institutional configuration.

The Chief AI Officer title will, in our view, survive in tech-native firms where the underlying business and the AI capability are substantively the same thing. In most other institutions, the title will disappear into one of the three patterns above. The boards that get ahead of this now, by designing the role around one of the three structural patterns rather than around the current generic Chief AI Officer construction, will save themselves a rebuilding cycle that the institutions running the generic construction in 2024 and 2025 are now working through.

The boards that create the AI role to signal seriousness, without doing the underlying work to define which of the three jobs is the priority, will produce the same outcome the boards in our 2024 mandate set produced: a credible appointment, twelve months of useful diagnostic work, and a re-scoping that, in retrospect, the original mandate definition could have avoided.


JOH Partners runs senior leadership mandates across the technology and digital sector, with a particular focus on the structural questions that arise when an institution is building genuinely new senior roles. For confidential conversations on AI leadership and the Chief AI Officer brief, contact the partners directly.


Oliver Helvin

Founding Partner

Oliver Helvin is a founding partner at JOH Partners. He writes on the GCC executive market, leadership transitions in family-controlled businesses, and the discipline of senior search.
