Guest Column | April 16, 2026

Is It Time To Replace RECIST — Or Just Add AI?

A conversation between Immunocore Chief Regulatory and Quality Officer Mark Moyer and Clinical Leader Executive Editor Abby Proch


The first thresholds for evaluating cancer treatment efficacy came in 1976, based somewhat crudely on how accurately researchers could measure simulated tumor masses sandwiched between a layer of foam and a mattress. It’s true.

In the 50 years since, the approach has evolved into what we now call Response Evaluation Criteria in Solid Tumors, or RECIST. However, although common to endpoint evaluation in oncology trials, RECIST can miss important immune-driven effects (even with the updated irRC/irRECIST/iRECIST approaches).

In this Q&A, Immunocore Chief Regulatory and Quality Officer Mark Moyer explains why new tools, including AI-based approaches, may better capture treatment response.

Clinical Leader: To start, what are the traditional endpoints in immunotherapy?

Mark Moyer: Endpoints have changed since my graduate school days, when I was an immunologist in oncology development at Roswell Park Cancer Institute. Overall survival was, and remains, the ultimate measure of clinical benefit. It's a straightforward endpoint, but other endpoints can identify effective therapies earlier.

Typically, that's been the RECIST criteria for tumor shrinkage and maintenance of that shrinkage over time, whether unconfirmed or confirmed by a CT scan or MRI. However, those criteria have become somewhat obsolete for identifying the early impact of immunotherapies in oncology.

For a while there were immune-response RECIST criteria, which were popular for trying to better understand the dynamics of the tumor relative to the immunotherapy, because immunotherapies often will not have the same response timeframe as a traditional chemotherapy. The natural immune system, which the immunotherapy is correcting, may take longer to produce an observable decrease in tumor volume, and demonstrating the durability of that response becomes problematic. irRECIST was a first attempt at improving that.

What are researchers missing when they use RECIST?

With irRECIST, you're looking for tumor shrinkage. For a partial response, you're looking for at least a 30% reduction in the tumor based on the measurement. Well, not all tumors are measurable. An example is mesothelioma, which presents in sheets that can't be measured well within the confines of a CT scan or an MRI. The other thing is that, often in immuno-oncology, you are infiltrating a tumor with T cells, and there can be pseudo-enlargement associated with that swelling. Depending on the timing of the scan, the tumor might appear to have grown even though the treatment is actually effective; the scan is just not timed right to catch the decrease.

The other issue can be a very deep response, such as a 95% decrease in tumor burden. The question is whether what remains is just necrotic tissue that hasn't been cleared by natural means or still active tumor.

The final limitation is that if you have a reduction in your tumor burden but there's one new lesion, that's considered a progression. That new tumor may be very small, with no clinical consequences, but technically it qualifies as progressive disease under RECIST's strict criteria.

How difficult is it for investigators to interpret the results uniformly under RECIST?

Investigators will typically select five target lesions. However, there might be many other lesions present, and there could be a selection bias associated with that.

Additionally, if a patient starts describing pain or some other effect, investigators may, as a matter of clinical practice, do an unscheduled scan to see what's going on. That interpretation and information can impact the trial. That's why health authorities will often ask for a blinded review. If a patient has an off-schedule scan and it shows progression, is that a true progression, or is it that the drug hasn't had time to affect that tumor? Investigators are focused on the patient's needs, appropriately so, but it can impact the study as well.

So, if RECIST can be problematic, how can it be improved upon or even replaced?

CT scan measurements are bidimensional, sometimes only unidimensional. AI could potentially make them three-dimensional, providing more accuracy on the volume of the overall tumor and determining shrinkage or stabilization of that tumor. Our desire is to impact overall survival, and that doesn't always require shrinkage of the tumor. If we stabilize a tumor or decrease the velocity of its growth, that could impact overall survival. An AI review could offer not just an association with overall survival but a statistical correlation with it.

Would that require changes to RECIST itself or just integrating a new AI-assisted review into your workflow?

As an industry, we need to establish that any AI tool accurately and consistently captures the dynamics of the tumor, and then ultimately correlate that statistically with overall survival. And that takes time. Some AI tools are not there yet; others are well established and moving forward. There are mammograms now with AI-assisted evaluation in addition to the radiologist's, and they're detecting tumors that weren't observed by the physician alone. We're already making progress, but there needs to be a better understanding of the complexities and nuances to make sure we're accurate and reliable.

As these AI-assisted technologies become available, there could be challenges with adoption or acceptance by regulators. Is that something that you're keeping tabs on?

Yes. Fortunately, there are also third parties that help. Friends of Cancer Research, for example, held several workshops on AI-assisted RECIST in February of this year. Often, these third-party groups develop noncompetitive consortia that share their data. Collectively, we can do more than any one company can do on its own.

How would you anticipate an AI-assisted technology for RECIST impacting the overall success of the trials, the timeline of a trial, even the budget?

I think it will end up decreasing the cost and improving the reliability of results. If we can get to a point where AI is reliable, then we're investing in that versus the blinded independent review, which takes more time. For an independent blinded review, you have to collect the scans and then provide them to third-party radiologists who review them in sequence. With AI, it's a matter of seconds. So the efficiency of a study could improve, and then you could get improvements in cost and even reliability: you're not relying solely upon human review, you're getting more consistency, and you can get more scans done.

About The Expert:

Mark Moyer joined Immunocore in 2018 to lead the regulatory science functions. Prior to Immunocore, Mark was vice president, global regulatory sciences – oncology at Bristol-Myers Squibb, where he led regulatory approval for oncology projects, including Opdivo, Empliciti, Yervoy, and Sprycel. Before joining BMS, Mark spent 22 years at Sanofi Aventis Pharmaceuticals where he oversaw the global regulatory oncology and anti-infectives group and U.S. regulatory development group of 70 professionals for all therapeutic areas. Mark earned his Master of Science degree in immunology and biochemistry from SUNY Medical School and his Bachelor of Science degree in biology/chemistry from Houghton University.