In recent years, several AI-driven platforms have emerged, positioning themselves as a replacement for human expertise in preparing R&D Tax Credit claims.

This has sparked heated debate, with strong opinions on both sides.

I’ve been particularly struck by the provocative statements from some platforms, such as “startups: stop wasting time with R&D consultants” and suggestions that traditional, consultancy-based R&D advisors are “stuck in the dark age” with processes that are “clunky, unnecessarily disruptive and archaic”.

They suggest that the R&D advisory market is filled with outdated consultants desperately resisting the tide of automation. Like Blockbuster and Kodak before them, these firms are accused of clinging to outdated business models and are more concerned with protecting the ‘sunk costs’ of their investments in manual processes, skilled personnel and expert training than embracing change.

It is true that many R&D advisory firms built their business models around manual claim preparation, so there is bound to be scepticism towards AI-driven disruption. However, this doesn’t mean concerns about AI’s role in R&D claims should be dismissed. Some bold assertions have been made about its capabilities and not all of them stand up to scrutiny.

I was writing about some of these issues in my LinkedIn newsletter even before OpenAI launched ChatGPT, then powered by GPT-3.5, in November 2022, but looking back now, my articles feel almost quaint given how much AI has advanced since then.

I couldn’t have predicted how quickly Generative AI would emerge to assist with tasks like researching technology baselines and text editing, but some of the claims about AI-driven R&D Tax Credit processes still feel overhyped and unconvincing.

While AI has proven useful in certain areas, such as automating routine tasks, the question of whether AI can truly handle the complexity involved in R&D Tax Credit claims remains uncertain.

The increasing use of AI-driven platforms raises important concerns about accuracy, compliance and accountability.

The risks of AI use in R&D Tax Credit claims

AI is well known for generating plausible-sounding but inaccurate content, so much so that the industry itself acknowledges the issue, with firms now promoting “anti-hallucination” technology.

Given that even the biggest tech companies in the world have yet to solve this problem, there is little reason to believe AI can handle the complexity of R&D claims with any real reliability.

Indeed, there is a risk that AI-generated claims may be based on exaggerated or inaccurate narratives.

As R&D Tax Credit expert Paul Rosser has highlighted, ChatGPT’s latest ‘Deep Research’ tool could be used to fabricate an R&D claim for a care home by presenting a simple menu change for residents, such as switching ham sandwiches for tuna sandwiches, as a clinical trial on how omega-3 fatty acids might prevent dementia in elderly patients.

Paul told me: “if AI technology that has not been specifically trained for preparing R&D claim reports can still generate realistic-sounding output, which might well fool HMRC into believing qualifying R&D has taken place, one can only imagine what an AI system built specifically for R&D technical reports would be capable of”.

If an AI-generated R&D narrative proves to be fictional, defending an HMRC enquiry could become an impossible task. HMRC is likely to take a sceptical view of any AI-driven submission and demand oversight from a qualified professional.

Key risks of AI-generated R&D claims include:

Unreliable training data: some AI platforms have claimed to be trained on datasets of fewer than a thousand R&D claims and a small number of HMRC enquiries. Given the diversity of R&D activities, legal complexities and contractual nuances, it is unclear whether this is a sufficiently broad base to cover the full spectrum of R&D activity and compliance scenarios. Without greater transparency on how these cases were selected and applied, there is no way to assess whether the system is learning from relevant and reliable data.

Potential for bias and misrepresentation: some AI-driven platforms boast exceptionally low rates of HMRC enquiries, yet their training datasets reportedly include cases with significantly higher enquiry rates. This raises questions about representativeness. Has the system been trained on claims that are riskier or more prone to errors? If the data is skewed in this way, the AI could be optimising for a different risk profile than the one its users expect. Without clearer disclosure, it is difficult to assess whether the AI is genuinely improving compliance or simply working from a biased sample.

AI-generated narratives may lack HMRC-required detail: AI appears to construct narratives by extracting key terms, searching the web for related content and merging these elements into a statement of technological advance. However, given that many R&D projects involve bespoke, unpublished developments, particularly in software, this approach is unlikely to produce accurate descriptions of genuine R&D. There is a strong possibility that such narratives will amount to little more than a compilation of generic, publicly available information. In this scenario, ensuring accuracy and compliance would become a significant challenge for any accountant reviewing the claim.

Lack of HMRC enquiry support: Some AI-driven R&D claim services focus on streamlining the application process but do not explicitly state whether they provide full support in the event of an HMRC enquiry. This leaves claimants completely exposed at the most critical stage of the process. Even if AI-driven firms introduce enquiry support in the future, the concern remains that claims generated with minimal human oversight may not be built to withstand HMRC scrutiny. At present, no reputable adviser would take on an AI-prepared claim without conducting their own review, which raises serious doubts about the platforms’ ability to truly reduce workload or risk.

Failure to account for legal and contractual complexity: R&D tax relief is a highly complex area requiring an understanding of evolving legislation, nuanced eligibility rules and contractual relationships. The new subcontracting rules introduce further complications, as claims now hinge on whether the principal intended R&D to take place. This requires human judgment to interpret relationships between parties who may have reached different understandings, something AI is not equipped to assess with accuracy.

The limitations of statistical modelling: There are too many judgment calls involved in preparing an R&D claim for a statistical probability model to generate fully compliant, defensible submissions. While AI excels in structured data analysis, R&D tax relief is far more ambiguous, requiring human interpretation of contractual agreements, technological uncertainties and evolving case law. Even if AI models improve over time, they are unlikely to replace the role of expert judgment in navigating HMRC compliance. The challenges posed by the new subcontracting rules, in particular, make it difficult to see how AI could reliably assess claim eligibility without human oversight.

At present, the most effective use of AI in this space is likely to be limited to templating, tracking progress and automating administrative tasks rather than constructing and defending claims.

Even in these areas, reports suggest that challenges remain in ensuring accuracy and compliance. The idea that AI can replace human expertise in preparing robust, defensible R&D claims is, at best, overstated and, at worst, a serious risk to claimants.

HMRC’s warning on AI-generated R&D claims

HMRC has made its position on AI-driven R&D claims clear: claimants remain fully responsible for their submissions, regardless of whether they use an accountant, a traditional R&D advisor or an AI platform.

I asked HMRC to comment for this article and a spokesperson told me:

“If you use a tax agent you must make sure you choose them carefully. This is because you remain responsible for your own affairs even if you use an agent. The same principle applies if you use any other intermediary, for instance a software or AI product to assist with your tax affairs.”

This statement is a direct warning to businesses relying on AI-generated claims. It reinforces the reality that if an AI-driven platform produces an inaccurate or misleading R&D claim, HMRC will not accept “the software said it was fine” as an excuse.

If the claim is challenged by HMRC, it is the R&D claimant, not the AI provider, that will be held accountable for any repayments, penalties or compliance failures.

HMRC’s concerns are not just theoretical. The HMRC spokesperson also pointed me to a recent tribunal case, Harber v HMRC, which illustrates the real dangers of relying on AI-generated content in tax matters.

In this case, a taxpayer appealed a penalty for failing to notify HMRC of a capital gains tax liability. As part of the appeal, they submitted nine First-tier Tribunal (FTT) case summaries which supposedly supported their position. However, the Tribunal found that none of these were genuine legal judgments. They had in fact been generated by an AI system such as ChatGPT. The taxpayer, unaware that the cases were fictitious, had relied on them in good faith, but the Tribunal dismissed the appeal and upheld the penalty.

The ruling emphasised the dangers of using AI to substantiate legal arguments, particularly given AI’s tendency to produce “hallucinations” (outputs that appear highly plausible but are completely false). The Tribunal referenced similar cases, including Mata v Avianca, where lawyers in the US were sanctioned for citing AI-generated, non-existent legal precedents. The judge also warned that submitting fabricated judgments wastes public resources, undermines judicial precedent and damages trust in legal proceedings.

While this case does not directly involve R&D Tax Credits, it sets a clear precedent: AI-generated content is not inherently reliable, and businesses cannot shift responsibility onto AI if their claims are found to be inaccurate.

With HMRC increasing its focus on fraudulent or exaggerated R&D claims, any AI-generated technical report or financial justification is likely to be subject to heightened scrutiny.

Businesses considering AI-driven R&D claim platforms should be careful. AI may assist with certain aspects of claim preparation, such as templating and automating data collection, but it cannot replace expert judgment, particularly in areas where subjective analysis and compliance with evolving legislation are critical.

As the Harber v HMRC case demonstrates, failing to verify AI-generated content can have serious consequences.

CIOT urges caution over AI use in R&D Tax Credit claims

One R&D Tax Credit platform recently promoted its service by suggesting that accountants could prepare claims with “no specialist tax knowledge required.”

This was a rather eye-catching statement so I asked Ellen Milner, Director of Public Policy at the Chartered Institute of Taxation (CIOT) for her view on the use of AI tools to prepare R&D claims.

Ellen told me:

“Tax advisers are working in a rapidly evolving space when using AI tools, with the potential for many benefits but also exposing them to ethical challenges.

“Whether in the R&D sector or otherwise, advisors remain accountable for any work produced, including AI generated content. For members of the CIOT, the Professional Conduct in Relation to Taxation (PCRT) applies and sets out the fundamental principles and tax planning standards which they must adhere to in their work.

“This means that as well as being competent in advising on R&D, they must also be competent in the use of tools such as AI when delivering services. Members must apply requisite skill and care to the work they do.

“It is imperative that advisors review work produced with the assistance of AI to ensure it is accurate, specific to a client’s facts and circumstances and compliant with the relevant laws and regulations.”

Ellen’s comments highlight the responsibility that tax professionals bear when using AI tools.

However, beyond ethical considerations, there are deeper concerns about whether AI-driven platforms can truly produce reliable, defensible R&D claims.

AI-generated R&D claims raise serious reliability concerns, particularly due to the risk of producing misleading or inaccurate narratives.

With complex tax rules and contractual nuances, AI does not have the judgment needed to construct and defend compliant claims.

While it may assist with admin tasks, its reliance on generic content and limited training data makes it vulnerable to scrutiny.

Without expert oversight, claims risk failing under HMRC challenge, especially with the new subcontracting rules requiring careful interpretation.


Article written by Rufus Meakin

Rufus Meakin works with tech companies to help ensure their R&D Tax Credit claims are accurate and defendable.

If you would like to discuss any aspect of your R&D Tax Credit claim then please feel free to book an exploratory call here