Back in March, I wrote about the recent phenomenon of AI-driven software platforms entering the R&D Tax Credit advisory space.
I questioned whether these AI tools were ready to handle the complexity of real-world claims and the intricacies of tax legislation and contractual relationships, and whether they could manage HMRC compliance checks.
My article prompted some strong reactions, including a public response from Adam McCann of AI R&D claims platform, Claimer.
Adam said that I’d created a straw man argument by suggesting AI platforms were positioning themselves as a replacement for human advisors.
His view was that no such claim had been made. Yet the wording “no tax specialisation required” does appear on Claimer’s website, along with a reference to a team of tax specialists “available to assist when needed.”
These kinds of statements understandably invite scrutiny over what’s being promised and what support actually exists.
For instance, a quick look at Claimer’s employee list on LinkedIn reveals no tax specialists at all.
In his same response, Adam also reflected on the broader trajectory of AI, suggesting it could eventually replace not only R&D advisors, but all knowledge work.
Whether or not that vision plays out, when a platform today is promoting the idea that R&D claims can be prepared without specialist input, it’s reasonable to raise a few practical questions.
This isn’t just marketing language. Removing specialist oversight from an area HMRC has flagged as prone to error and fraud potentially puts non-experts in a position they’re not equipped to handle.
It’s a shift in responsibility, presented as innovation.
Can vertical AI platforms replace traditional R&D advisors?
Perhaps it will be, one day. But today’s vertical platforms are built on narrow rule sets, eligibility prompts and templated outputs that struggle to reflect the bespoke nature of actual R&D.
The idea that they’re meaningfully beyond conditional logic is unconvincing. In most cases, they still rely on structured form inputs and static workflows, not dynamic reasoning.
Responsibility is another area that’s often overlooked. If AI takes over parts of the advisory process and something goes wrong, it’s not going to be the platform that answers for it. It’s the advisor who submitted the claim, or the company that relied on the output. This creates risk that isn’t always obvious at the start.
As far as I can tell, there’s no clear evidence that Claimer performs reasoning, interprets contracts or navigates legislative nuance in any meaningful way. In practice, much of what it offers could likely be replicated using ChatGPT and a claim template.
Of course, this could change and Claimer might come to integrate compliance logic or case law analysis. But at the moment it’s not clear how it justifies its positioning as a significant step beyond general AI.
Which raises the obvious question: what advantage does a specialist R&D advisor or accountant gain by using Claimer? The platform doesn’t remove liability. It doesn’t help interpret legislation. And it doesn’t offer any protection if HMRC starts asking difficult questions.
I spoke with one senior ex-HMRC inspector with an interest in this area. They told me:
“Obviously if an AI system is subject to qualified human oversight then that is a safeguard. But that then turns it into a support tool rather than a claim preparer. There is no reason why a support tool could not be of assistance. However, the pitch [from Claimer] seems to be that you need no specialist tax knowledge. As the R&D claim requirements are derived from tax legislation then a qualified overseer would need that tax expertise”.
There are further concerns.
Claimer’s website promises “bulletproof” tech narratives that will “stand up to HMRC scrutiny”, and says the platform “provides insight into weak areas and how they could be strengthened.”
The ex-HMRC inspector continued:
“I suppose it is a question of whether they make things scrutiny-proof by in essence changing wording to conceal weak areas from HMRC, which would be questionable, or whether they just send the agent off to find out more facts, which would be an acceptable approach.
Given that AIs can rewrite text to read better there does seem a risk that they might sanitise a narrative by effectively hiding weaknesses”.
What exactly is being promised?
Claimer’s website contains a number of impressive-sounding statements, for example that it can:
“accurately assess eligibility”
“identify the strength of projects and their likelihood of passing an enquiry in seconds”
“build incredibly robust R&D narratives”
“produce a polished gem in minutes”
Take the claim that it can assess whether a project would survive an HMRC enquiry in seconds. In reality, HMRC enquiry outcomes depend on facts, uncertainty, documentation and how the claim fits within the wider context, none of which can be meaningfully weighed in seconds.
Or consider the idea that Claimer “deeply understands your client’s project”. As far as I can tell, Claimer prompts users for inputs and assembles a narrative from what they enter, which may be useful but is not the same as genuine understanding.
One experienced R&D Tax Credit advisor who received a demo of the Claimer platform told me:
“I was genuinely alarmed by how easily a technical report could be produced [by Claimer].
“With just a few boxes ticked and some keywords entered, the system automatically generated a narrative that appeared complete but with no meaningful review or oversight.
“It felt like almost anyone, regardless of tax or technical knowledge, could produce a claim using the system”.
Even within the AI R&D claims market, not everyone is convinced by Claimer.
Shilin Chen, Co-founder and CEO of SmartClaim, an AI-based platform that assists with the writing of technical narratives, told me:
“If AI is used by those without the relevant tax expertise, a critical question arises: who is actually providing the tax advice?
“Tax agents are legally and professionally responsible for the guidance they give. AI, by contrast, is not a regulated professional and cannot be held legally responsible for its errors.
“The question isn’t whether AI can work – because in most of the cases, it does. The question is: what happens when it errors? If a user lacks the expertise, how can they validate AI’s output? Could this be seen by HMRC as negligence, misrepresentation, or even fraud?
“This is why AI must be seen for what it is: a tool, and nothing more, which enhances – not replaces – the expertise of R&D tax professionals.
“The nuances of R&D tax, unlike many other areas of accounting, are not black and white. Determining what qualifies as eligible R&D often requires professional judgement, context, and experience. AI can support this process, but it cannot replace the informed decisions made by qualified specialists. Users of AI systems must be capable of critically reviewing its output to ensure both accuracy and compliance”.
What makes this quote so interesting is that it comes from someone building an AI tool in the same space. It reinforces the broader point: AI can be helpful, but it’s not a substitute for responsibility. If the output can’t be verified, or if users are told that verification isn’t necessary, then it’s probably not something that should be relied on.
Paul Rosser, a leading voice in the R&D claims advisory industry, raises the interesting possibility that HMRC may already be keeping an eye on how AI is being used in claim preparation and which tools are involved.
Paul commented:
“It’s also worth considering that we know HMRC are paying close attention to how advisors are using AI technology during their claim preparation process, and which technologies they are using.
“With no specific legal requirement for AI software vendors to ensure their technology isn’t used to create convincing sounding narratives for invalid projects, it stands to reason if HMRC start to see a lot of invalid claims which have been prepared using a particular AI technology, that they might start to target all users of that technology.”
Will general AI make vertical platforms redundant?
One point Adam raised in response to my earlier article was that I hadn’t clearly distinguished between general-purpose tools like ChatGPT and purpose-built platforms developed specifically for R&D Tax Credit claims.
It’s a fair point, but the gap between the two may not be as wide as it sometimes appears.
Some platforms may offer a more guided experience than ChatGPT, but in practice, many still operate closer to automation than to reasoning.
Their core proposition often centres on convenience and cost. But in a landscape where general AI is improving week by week, those advantages are becoming harder to maintain as genuine differentiators.
General-purpose AI is catching up fast. Many R&D advisors are now using open tools to draft narratives, reference tribunal cases and assess claim structure, often with more flexibility and control than dedicated platforms allow.
(For instance, ChatGPT confidently predicted that within 12 to 24 months, general AI models will be capable of doing 90% of what Claimer currently does).
It’s entirely possible that some of these tools will evolve with the technology. But unless they do, they may struggle to stay ahead of the very systems they were designed to improve upon.
To be clear, this article isn’t a blanket criticism of AI or of the platforms using it. When used well, AI has real potential to support R&D advisors, particularly in areas like templating, risk flagging and narrative consistency.
Some platforms may well be useful to certain firms. But the concern here is less about the technology itself, and more about how it’s being positioned, whether accountability is clearly defined and whether it risks replacing professional judgment with something that may sound convincing but fall short under scrutiny.
That is a legitimate concern, and one that is difficult to address without more transparency around where human expertise still fits in.
It is right to say that AI isn’t inherently risky. But in a regulated tax environment, it’s not enough to point to the user when something goes wrong.
If a platform plays a role in how a claim is prepared, then it should have a role in the risk that comes with it.
Article written by Rufus Meakin
Rufus Meakin works with tech companies to help ensure their R&D Tax Credit claims are accurate and defendable.
If you would like to discuss any aspect of your R&D Tax Credit claim then please feel free to book an exploratory call here: https://calendly.com/rufusmeakin-uk/r-d-tax-credits-exploratory-call