In the Interim...
Author: Berry
© 2025 Berry Consultants
Description
A podcast on statistical science and clinical trials.
Explore the intricacies of Bayesian statistics and adaptive clinical trials. Uncover methods that push beyond conventional paradigms, ushering in data-driven insights that enhance trial outcomes while ensuring safety and efficacy. Join us as we dive into complex medical challenges and regulatory landscapes, offering innovative solutions tailored for pharma pioneers. Featuring expertise from industry leaders, each episode is crafted to provide clarity, foster debate, and challenge mainstream perspectives, ensuring you remain at the forefront of clinical trial excellence.
50 Episodes
In this episode of "In the Interim…", Dr. Scott Berry and Dr. Lindsay Berry investigate the statistical foundations and clinical implications of analyzing ordinal endpoints, drawing on experience from major stroke and COVID-19 trials. Discussion centers on the Modified Rankin Scale, DAWN, MR CLEAN, and REMAP-CAP, demonstrating that methods such as proportional odds, dichotomization, and utility weighting all impose explicit or implicit clinical weights on the outcome categories. The episode presents direct mathematical derivations, exposes the equivalence between proportional odds models and value-weighted analysis, and uses real trial data to explore how statistical and clinical perspectives on endpoint weighting may diverge. Emphasis remains on transparency and the need for clinically relevant weight assignment in trial endpoints.

Key Highlights
- Structural overview and clinical significance of the Modified Rankin Scale scores.
- Illustration that proportional odds models and dichotomized analyses apply hidden, prevalence-driven or threshold-based weights.
- Utility weighting in DAWN, formulated from EQ-5D patient utilities and economic studies, with observed alignment.
- MR CLEAN investigators' critique of utility weighting; empirical data demonstrated relative consistency and challenged the claim that statistical approaches resolve variation across patients.
- REMAP-CAP platform trial: the Organ Support Free Days endpoint, analyzed with proportional odds, imposes weights on the scale from death to free of organ support.
- Extension of these arguments to win ratio/rank-based approaches, with caution that all methods encode clinical assumptions.

For more, visit us at https://www.berryconsultants.com/
In this episode of "In the Interim…", Dr. Scott Berry marks the podcast’s one-year anniversary, sharing listener metrics, watch data, and regional engagement. He then delivers a step-by-step analysis of the FDA meeting process, detailing the progression from initial sponsor meeting requests and question submission to briefing book preparation, feedback cycles, and in-person logistics for a Type C meeting at the White Oak facility. Drawing from more than 25 years of trial design and regulatory experience, Scott offers precise guidance on technical preparation, sponsor responsibilities, and common errors in sponsor-FDA dialog, emphasizing what works and what wastes time inside the one-hour meeting constraint. His practical approach focuses on clarity, respect for process, and actionable advice.

Key Highlights
- Slightly over 30,000 people tuned in during the first year across 45 episodes: about 10,000 via audio and 20,000 via video, with worldwide reach.
- FDA meeting workflow: request, submit four to eight questions, draft briefing book, receive written feedback, strict one-hour in-person discussion controlled by the sponsor.
- Advice on briefing book content, avoiding new materials at the meeting, and even what not to bring into the White Oak facility.
- Sponsor pitfalls: disingenuous patient advocacy, asking impossible questions, taking an adversarial stance in statistical discussion.

For more, visit us at https://www.berryconsultants.com/
Dr. Nathan O’Hara (University of Maryland), Dr. Gerard Slobogean (UC Irvine), and Dr. Sheila Sprague (McMaster University) describe the launch and design of the Musculoskeletal Adaptive Platform Trial (MAPT)—the first major adaptive platform trial in orthopaedic surgery. The discussion covers MAPT’s master protocol structure, patient-centered endpoint framework, and operational strategies for multinational implementation. Focus areas include the FASTER-HIP domain’s use of Bayesian modeling with a hierarchical clinical endpoint and the standards established for adaptation, data coordination, and future scalability. Listeners gain insight into a trial infrastructure designed to lower barriers to evidence generation and to support ongoing research in musculoskeletal trauma care.

Key Highlights
- MAPT as a scalable master protocol for orthopaedic intervention evaluation
- Hierarchical, patient-centered endpoint (survival, 4-level ambulation, days alive/out of hospital), analyzed with a Bayesian-modeled, non-parametric win ratio
- Domain-specific adaptation thresholds based on clinical differentiation
- Interim analyses after 100 patients, then every 50, informing early adaptation
- 40 sites across the US, Canada, and Europe, with centralized data management at McMaster
- A unified DSMB structure with capacity for domain-specific expertise as needed
- Tiered protocol access: open sharing, collaboration, direct integration
- Infrastructure enables rapid domain addition and multi-investigator participation

For more, visit us at https://www.berryconsultants.com/
In this episode of "In the Interim…", Dr. Scott Berry speaks with Dr. Michael Harhay, Associate Professor at the University of Pennsylvania and Director of the Center for Clinical Trials Innovation. The conversation explores Dr. Harhay’s progression through neuroscience, philosophy, epidemiology, and statistics, examining how this academic path shapes his work in clinical trial methodology. They discuss the Center’s role in addressing unresolved methodological questions arising from pragmatic, health system-based trials, including challenges with cluster and factorial randomized designs. The episode focuses on statistical and conceptual issues in endpoint selection for critical care, such as the analysis of informatively truncated outcomes, composite endpoints including organ support-free days, and the application of the win ratio. The increasing use of Bayesian methods in trial design is addressed.

Key Highlights
- Dr. Harhay’s academic background and transition into clinical trial methodology at Penn.
- The mission of the Center for Clinical Trials Innovation to support methodologic research and training, particularly among statisticians participating in multi-center health system trials.
- Discussion of hospital-level and provider-level randomization strategies in cluster and factorial designs within health systems.
- Ongoing challenges in the analysis of composite and informatively truncated endpoints, especially in critical care, exemplified by ventilator-free and organ support-free days.
- Evaluation of analytic strategies including survival average causal effect, composite endpoints, and the win ratio, with emphasis on the need for clinical rather than purely statistical weighting of outcomes.
- Consideration of the conceptual strengths of Bayesian methods and their integration into modern trial design and decision analysis.

For more, visit us at https://www.berryconsultants.com/
In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele deliver a quick reaction to the FDA’s draft guidance on Bayesian statistics for clinical trials of drugs and biologics. Their assessment addresses the structure, content, and impact of the document, emphasizing evidence-based requirements and guidance scope. The episode breaks down regulatory language, technical expectations, and workflow implications for clinical trial sponsors and statisticians.

Key Highlights
- Clear distinction between trials justified by Type 1 error control and trials justified by agreement on Bayesian priors and decision rules.
- Explanation of how informative priors can be created based on external or historical data.
- Technical explanation of dynamic discounting/borrowing, especially in Bayesian hierarchical models for rare populations, pediatric-adult extrapolation, related disease subgroups, and platform and basket trials (e.g., ROAR).
- In-depth look at the necessity of sensitivity and robustness checks for different priors, and the FDA’s design prior and analysis prior terminology.
- FDA’s requirements for accepting external data sources: data provenance, patient-level comparability, recency, and appropriate covariate adjustments.
- Comparison with ICH E20 on adaptive designs, providing context for ongoing regulatory harmonization and possible influence on international regulatory directions.
- Direct warning against attempts to misuse Bayesian methodology as a substitute for scientific rigor; legitimate uses must meet FDA standards and not simply serve to lower evidentiary bars.

Resource: FDA News Release: https://www.fda.gov/news-events/press-announcements/fda-issues-guidance-modernizing-statistical-methods-clinical-trials

For more, visit us at https://www.berryconsultants.com/
In this episode of "In the Interim…", Dr. Scott Berry is joined by Dr. Tanya Simuni, Arthur C. Nielsen Jr. Professor of Neurology and Director of the Parkinson’s Disease and Movement Disorders Center at Northwestern University, and Dr. Barbara Wendelberger, Senior Statistical Scientist at Berry Consultants. The conversation focuses on the Path to Prevention (P2P) platform trial—an international, multi-arm prevention study in Parkinson’s disease targeting participants defined by biological markers, specifically alpha-synuclein pathology, prior to clinical diagnosis. The discussion covers the PPMI cohort, trial operational and statistical structure, the rationale behind biomarker-driven inclusion, and the use of Bayesian platform trial design.

Key Highlights
- Parkinson’s disease pathobiology and risk: genotype-phenotype variability, multi-system involvement, and the central roles of age, environment, and genetics.
- Michael J. Fox Foundation’s PPMI cohort: 4,000+ participants, prospective longitudinal biomarker and clinical data, high participant retention, enabling study of early Parkinson’s.
- P2P platform structure: multi-arm design, two-stage randomization with shared placebo group, integration of the non-randomized PPMI cohort in Bayesian analysis for improved inference.
- Inclusion criteria: prodromal population biologically defined by CSF alpha-synuclein seed amplification and dopaminergic imaging (DAT-SPECT), highlighting regulatory nuances.
- Dual primary endpoints: biomarker (DAT-SPECT) and clinical (MDS-UPDRS Part III), with 24-36 months follow-up.
- Commitment to public data sharing in line with the Michael J. Fox Foundation’s open science philosophy.

For more, visit us at https://www.berryconsultants.com/
In this episode of “In the Interim…,” host Dr. Scott Berry examines the challenge of communicating complex statistical concepts to non-statistical audiences. Drawing from firsthand experiences in agriculture, professional golf, and clinical development, as well as examples involving historical and scientific figures, Scott reflects on why technical rigor alone often fails to influence. The discussion focuses on the consequences of mismatched language, the importance of empathy, and the utility of simulation when bridging the gap between analysis and stakeholder understanding.

Key Highlights
- Illustrated barriers to statistical communication using stories from farming, golf, and early career encounters.
- Examples involving John Glenn, Ada Lovelace, and Charles Babbage show how communication, not just science, determines impact.
- Insights from Alan Alda on empathy as a foundational tool for scientists presenting technical ideas.
- Clinical trial simulations revealed knowledge gaps, such as misunderstanding of power, when communicating with decision-makers.
- Emphasizes the necessity of translating analytic outputs into operational, financial, or clinical language for meaningful impact.

For more, visit us at https://www.berryconsultants.com/
In this episode of "In the Interim…", host Dr. Scott Berry and frequent co-host Dr. Kert Viele, Senior Statistical Scientist at Berry Consultants, analyze the potential shift in FDA regulatory policy from requiring two independent trials to accepting a single trial as sufficient for “substantial evidence” in drug approvals. Reflecting on the statutory and regulatory definitions originating with the 1962 Federal Food, Drug, and Cosmetic Act and 21 CFR 314.126, they dissect current and emerging interpretations, referencing recent statements by Dr. Martin Makary and coverage described in a STAT article. The conversation focuses on the scientific and statistical foundations of the two-trial threshold, challenges with dichotomous results, and how pooled evidence might increase efficiency and rigor. They discuss statistical implications including alpha thresholds, sample size effects, program power, and the consequences for clinical labeling. The episode also introduces Bayesian approaches as a method for integrating totality of evidence. Attention is given to both population breadth and the possible risks of a narrowed evidentiary base under a single-trial standard.

Key Highlights
- Regulatory and historical context of “substantial evidence” since 1962 and current FDA directives.
- Industry practice: simultaneous Phase III trials, statistical power, and evidentiary replication.
- Criticism of binary, trial-level significance thresholds; merits of pooling or meta-analysis.
- Potential efficiency gains and tradeoffs with a more stringent alpha requirement for single trials.
- Strategic and operational effects on trial design, sample size, and label indications.
- Bayesian statistical approaches for full evidence integration, discussed as an analytical viewpoint.
In this episode of "In the Interim…", Dr. Jenny Devenport, Global Head of Methods, Collaboration, and Outreach at Roche, joins Dr. Scott Berry for a detailed discussion on career evolution, statistical culture, and communication in the pharmaceutical industry. Dr. Devenport describes her transition from psychology in New Mexico to statistical leadership in Basel, emphasizing the formative role of early academic mentors and her experience working across the US and Europe. She outlines her current functions in methods development, internal collaboration, and industry outreach, highlighting active engagement with academic and regulatory communities. The episode scrutinizes differences in workplace culture, such as the emphasis on debate and long-term collaboration in Europe, and differences in educational backgrounds among statisticians. The conversation covers the practical barriers behind the slow adoption of Bayesian methods, the role of communication in the acceptance of futility analyses in pharma, the importance of scale in problem-solving, and the emergence of AI as a tool for statisticians. Dr. Devenport provides pragmatic strategies for statisticians to improve their influence through tailored, audience-specific communication.

Key Highlights
- Dr. Devenport’s academic and geographic move from the US to Europe
- Responsibilities in methods development, collaboration, and outreach at Roche
- Contrasts in US and European pharmaceutical statistics cultures
- Measured perspective on AI’s effect on statisticians’ responsibilities
- Practical guidance for statisticians on communication and influence
In this episode of "In the Interim…", Dr. Scott Berry delivers a metaphoric critique of single-question trial infrastructure through the sports arena analogy, illustrating the cost, patient burden, and data inefficiency of conventional clinical trials. He provides a methodical comparison of traditional trial models and the platform trial approach, clarifying distinctions between platform, basket, and master protocol structures. Through examples from HEALEY ALS, I-SPY 2, PALM (Ebola), REMAP-CAP, RECOVERY, EPAD, GBM AGILE, and Precision Promise, Scott outlines the measurable efficiencies of platform trials: shared control arms, flexible arm addition and removal, reduced placebo exposure, accelerated timelines, and improved statistical inferences. The episode further examines platform trial performance during the COVID-19 pandemic, highlighting trial adaptability and the rapid generation of actionable evidence. Scott also addresses failure scenarios, focusing on EPAD Alzheimer’s as a cautionary case in platform sustainability, cost allocation, and initial funding barriers. Listeners will gain a perspective on the operational and statistical design choices governing today’s most innovative clinical studies.

Key Highlights
- Arena analogy applied to delineate clinical research inefficiency.
- Operational, statistical, and patient-focused efficiencies in platform versus single-question trials.
- Precision in terminology: platform, basket, and master protocol definitions.
- Effects of platform trials on speed and scientific rigor.
- Factors underlying both platform trial successes and failures.

For more, visit us at https://www.berryconsultants.com/
In episode 40 of "In the Interim…", Dr. Scott Berry examines the statistical, operational, and behavioral challenges of using interim analyses as triggers for funding in adaptive and seamless Phase II/III clinical trials. The episode presents a typical hypothetical scenario for rare disease drug development, contrasting conventional two-stage development with a seamless design and highlighting efficiency gains in sample size, patient allocation, and trial duration. Scott details the construction of administrative (financial) interim analyses, underscoring their distinction from futility analyses and their role in funding decisions when complete funding is not secured upfront. He addresses FDA operational bias concerns, emphasizing blinding and limiting information sharing to protect trial integrity. Finally, the episode focuses on developing objective interim funding criteria, using Bayesian predictive probability and assurance, and on leveraging illustrative simulation outputs and sample datasets to bridge the “I’ll know it when I see it” divide between scientists and funders. The discussion is practical, empirical, and tailored to real funding barriers in clinical research.

Key Highlights
- Statistical structure and efficiency of seamless Phase II/III trial designs
- Administrative (financial) interim analyses as funding decision triggers, distinct from futility analyses
- FDA operational bias guidance and requirements for trial blinding
- Predictive probability and assurance as objective interim criteria
- Sample data and simulation outputs to facilitate stakeholder alignment

For more, visit us at https://www.berryconsultants.com/
In this episode of "In the Interim...", Dr. Scott Berry interviews Dr. Kaspar Rufibach, Co-Head of Advanced Biostatistical Sciences at Merck. The conversation tracks Rufibach’s evolution from academic training in actuarial and mathematical statistics through cancer research collaborations, postdoctoral work, and academic consulting, leading to applied roles at Roche and Merck. Discussion centers on methodological rigor, pragmatic approaches to assurance and predictive probability, and real-world experience in drug development. Rufibach examines the organizational integration of quantitative disciplines at Merck—incorporating pharmacology, real-world data, statistics, programming, and data science—while remaining candid on the role and boundaries of AI in current pharmaceutical practice.

Key Highlights
- Statistical education in Switzerland, bridging theory and early applied cancer trial experience
- Move from academic consulting to a trial statistician role at Roche, emphasizing structured problem-solving in drug development
- Approach to predictive probability and assurance, balancing Bayesian and frequentist tools with strict emphasis on practicality
- Formation of professional special interest groups with EFSPI and PSI, stepping in to address unmet community needs rather than seeking formal leadership
- Perspective on Merck’s unified quantitative department, designed to remove silos and leverage interdisciplinary expertise
- Cautious view of AI as a complement to specific tasks, but not yet a replacement for nuanced clinical trial design or regulatory-facing strategies
- Current focus on expanding causal inference methods and multi-state modeling for improved trial efficiency and evidence synthesis

For more, visit us at https://www.berryconsultants.com/
In this episode of "In the Interim…", guest host Cooper Berry moderates a detailed discussion on the evolution and practice of Bayesian methodology in clinical trials with fellow family members Dr. Don Berry, Dr. Scott Berry, Dr. Lindsay Berry, and Dr. Nick Berry. The panel outlines the foundational principles of Bayesian decision-making in medical research, ethical debates informed by historical reports like the Belmont Report, and the shift in regulatory acceptance. Computational developments such as Markov Chain Monte Carlo (MCMC) are examined for their role in enabling applied Bayesian models. Panelists give practical accounts of implementing adaptive and platform trials, including I-SPY 2 and REMAP-CAP, and analyze challenges faced during the COVID-19 pandemic. The implications of Bayesian statistics in artificial intelligence and contemporary clinical decision-making are explored, highlighting ongoing shifts in trial design and evidence synthesis. Each discussion is grounded in direct experience and technical rigor, providing insight into both the operational realities and future trajectory of Bayesian-driven methods in clinical research.

Key Highlights
- Historical development of Bayesian clinical trial design and foundational influence from Leonard J. Savage to current methods
- Ethical tension in trial conduct, referencing the Belmont Report and equipoise
- Advances in computation and Markov Chain Monte Carlo (MCMC)
- Regulatory frameworks for Bayesian adaptive trials, including FDA guidance
- Implementation details from I-SPY 2 and REMAP-CAP platform trials
- Bayesian methodology in the context of artificial intelligence, precision medicine, and future data integration

For more, visit us at https://www.berryconsultants.com/
In episode 37 of "In the Interim…", Dr. Jeff Saver, Director of the UCLA Comprehensive Stroke and Vascular Neurology Program, details his shift from behavioral neurology to clinical stroke research after early engagement with multicenter trials like TOAST. The discussion covers the biology of acute ischemic stroke, quantifying neuronal loss, and the scientific underpinnings of “time is brain.” Dr. Saver outlines the evolution of endovascular therapy, from early device challenges to current reperfusion success rates exceeding 85%. Key methodological issues in stroke trial analyses are presented, including debate over endpoint selection—dichotomous versus ordinal approaches and the limitations therein. Special focus is placed on the utility-weighted modified Rankin Scale, which assigns empirically derived, patient-centered health values to each disability state, providing a comprehensive measure that captures both benefit and harm. The episode explores regulatory hesitancy, differing analytic preferences within the field, and the design prospects for neuroprotectant interventions. Heterogeneity in patient outcomes and implications for public health and trial methodology are addressed. The episode provides an empirical account of clinical trial endpoint selection, interpretation, and future directions in cerebrovascular research.

Key Highlights
- Early career influences and pivotal trial participation.
- Pathophysiology and quantification of acute stroke injury.
- Endovascular device development and clinical impact.
- Comparative analysis of endpoint methods: dichotomous, ordinal, and utility-weighted approaches.
- Technical derivation and application of utility-weighted mRS.
- Ongoing regulatory and methodological debate.
- Heterogeneity in ischemic vulnerability and future trial directions.

For more, visit us at https://www.berryconsultants.com/
In Episode 36 of "In the Interim…", Dr. Scott Berry and Dr. Don Berry analyze the Phase II trial of Lecanemab (BAN2401) in Alzheimer’s disease, focusing on the application of adaptive Bayesian methods following persistent failures in Alzheimer’s drug development. The conversation covers the specific design features of five active arms, response adaptive randomization, and a longitudinal Bayesian model driving interim decisions, as well as direct operational and statistical challenges encountered during the trial. The hosts address regulatory proceedings, critique from "experts" regarding adaptive methods on noisy cognitive endpoints, and the direct alignment of the trial’s Bayesian 18-month efficacy estimates with the subsequent Phase III results and regulatory approvals.

Key Highlights
- Alzheimer’s drug development context: Widespread Phase III failures prompted a retreat from conventional trial designs and a demand for greater rigor and adaptability.
- Lecanemab Phase II methodology: Five active arms, two dosing schedules, response adaptive randomization, and adaptive interim analyses every 50 patients enabled real-time adjustment and efficient dose evaluation.
- Bayesian modeling and imputation: Use of a longitudinal model to address missing data, forecast 12- and 18-month outcomes, and inform both allocation and stopping criteria.
- Operational adaptations: The design accommodated unplanned safety restrictions, such as stratified randomization for APOE4-positive participants after ARIA signals.
- Expert skepticism: Addressed Paul Aisen’s concerns about adapting to noisy interim cognitive data, emphasizing safeguards against erroneous stopping or success.
- Regulatory outcome: The 18-month efficacy estimates from Bayesian modeling during Phase II matched Phase III findings; FDA granted accelerated approval based on amyloid reduction and later full approval after Phase III confirmation.

For more, visit us at https://www.berryconsultants.com/
On this episode of “In the Interim…”, which is co-sponsored by the Journal of Statistics and Data Science Education, Dr. Scott Berry talks with Dr. Jim Albert, Professor Emeritus at Bowling Green State University, whose extensive work encompasses Bayesian statistics and computation, sports analytics, and decades of exemplary teaching. Dr. Albert shares insights on integrating sports into statistics education and discusses his transition from academic roots to consulting for the Houston Astros. This episode highlights the evolution of sports statistics—from manual data collection to sophisticated analytics—and critiques traditional metrics in favor of advanced systems. The dialogue explores career opportunities in sports statistics as well as the need for open research avenues in sports analytics, facilitating broader access and distribution of statistical insights.

Key Highlights
- Use of sports to contextualize statistical concepts, providing practical illustrations over abstract textbook issues
- Exposing misconceptions about randomness, streakiness, and “clutch ability” perpetuated by both public myths and sports simulations
- Analytical evolution from traditional metrics like batting average to advanced assessments like OPS and on-base percentage
- Regression-to-the-mean explained with sports scenarios and its analogous application in clinical trial progression
- Challenges in adopting a unified approach to teaching statistics given students’ diverse cultural and sports familiarity
- Barriers in publishing sports analytics research, prompting initiatives for accessible, open publications

For more, visit: https://www.berryconsultants.com/
In this episode of "In the Interim…", Dr. Scott Berry examines the concept of “digital twins” in clinical trials. He details how simulation of clinical trials is a direct analog of digital twin methodology, allowing for the in-silico modeling of the physical trial conduct, enrollment, dropouts, and patient outcomes under varied assumptions. Scott discusses model-based patient prediction and highlights scenarios where prediction of counterfactual outcomes can increase efficiency, particularly in rare disease or limited-data settings. He provides a systematic comparison of Unlearn’s PROCOVA neural network approach with traditional covariate adjustment, noting that proprietary models must demonstrate clear improvement over standard methods, which is unlikely. He sees great potential in simulating many digital twins for a patient as an augmentation of, or substitute for, controls.

Key Highlights
- Defines digital twins using NASA history and Wikipedia.
- Describes clinical trial simulation as a digital twin methodology.
- Examines patient-level model-based prediction and covariate adjustment.
- Compares Unlearn’s PROCOVA with traditional approaches.
- Highlights transparency and reproducibility concerns with proprietary algorithms.
- Asserts that future trial efficiency demands integration of predictive modeling with randomization and large external datasets.

For more, visit: https://www.berryconsultants.com/
In this episode of "In the Interim…", Dr. Scott Berry interviews Dr. Andrew Thomson, owner and lead consultant of Regnitio. Thomson discusses his academic progression from mathematics at Cambridge to a Master’s at Southampton and advanced study with Prof. Sylvia Richardson at Imperial College, followed by doctoral work in cluster randomized trials at the London School of Hygiene and Tropical Medicine. He recounts the realities of regulatory roles, including contemplative study of data, working within multidisciplinary teams, and delivering regulatory assessments to senior committees. The episode contrasts EMA’s collaborative cross-country structure against the more centralized FDA process and explores methodological challenges faced by both. Scott and Andrew discuss regulatory expectations for interim analyses, the definition and metrics of trial complexity, and differing approaches to Type I error control across agencies. The conversation also covers the rapid adoption and adaptation of platform trials during COVID-19, and the impact on trial evaluation frameworks. Concluding, Thomson explains the motivation for launching Regnitio, emphasizing how regulatory perspective and multidisciplinary insight can support informed decision-making throughout clinical development.

Key Highlights
- Academic and professional pathway: Cambridge, Southampton, Imperial College, London School of Hygiene and Tropical Medicine
- Roles as a statistical assessor: analysis, collaborative review, expert panel presentations
- EMA vs. FDA: consensus-driven versus centralized approaches, harmonization challenges
- Trial complexity, interim analyses, and diversity in regulatory interpretations
- Adoption and practicalities of platform trials during the COVID-19 response
- Consulting goals: integrating regulatory perspective and broad expertise for drug development decisions

For more, visit: https://www.berryconsultants.com/
In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele analyze how regulatory, editorial, and science community standards often impose additional, inconsistent requirements on novel methods in clinical trial design that are rarely applied to standard approaches. Examples from oncology, enrichment trials, platform studies, and endpoint analysis illustrate how adaptive and Bayesian designs are frequently subject to higher scrutiny, shifting metrics, or distinct evidentiary demands. The episode covers technical and regulatory issues, such as the selective application of Type 1 error controls, evolving multiplicity guidance, and challenges in ethical reasoning with adaptive allocation. Scott and Kert frame the discussion with empirical comparisons and advocate for the use of clinical trial simulation to ensure fair, metric-driven evaluation of both novel and legacy designs.

Key Highlights
- Oncology combination therapy trial with Bayesian borrowing facing heightened regulatory caution versus single-arm historical controls.
- Hierarchical versus pooled analysis in enrichment/basket trials, with focus on error definitions and subgroup effects that have always existed.
- ICH E20 guidance potentially discourages use of enrichment by imposing new subgroup comparison burdens absent from standard trials.
- Platform trial multiplicity rules contrasted with parallel single-arm trials; regulatory stance continues to evolve.
- Ethical debate on adaptive allocation: the rationale for adaptive randomization is questioned as ethically challenging, while fixed allocation goes unquestioned despite the same interim data.
- Critical review of explicit utility weighting in the DAWN trial, despite alternative methods having the same issues.

For more, visit: https://www.berryconsultants.com/
In this episode of "In the Interim…", Dr. Scott Berry examines the mathematical foundations and efficiency claims of the promising zone design for adaptive sample size in clinical trials. Scott unpacks the conditional power thresholds that trigger sample size increases without the need to adjust alpha, as originally presented by Mehta & Pocock. He systematically demonstrates, via simulation, that the promising zone rarely provides meaningful efficiency gains over fixed designs and is consistently outperformed by group sequential designs that allocate alpha across multiple analyses. Using a driving-route analogy, Scott highlights the practical flaw in making pivotal trial decisions earlier than necessary, based on arbitrary statistical rules rather than the current data. He underlines that at Berry Consultants, simulation efforts have yet to reveal a scenario where the promising zone design is more efficient than a thoughtfully constructed group sequential or Goldilocks trial. The episode urges trialists to simulate, compare, and optimize—not to accept appealing mathematical tricks without rigorous evaluation.

Key Highlights
- Explanation of the promising zone’s conditional power mechanism and alpha control.
- Simulation-based comparison of power and average sample size across design types.
- Direct comparison of group sequential vs. promising zone designs.
- Discussion of futility rules and their impact on design choice.
- Commentary on Goldilocks designs for incomplete data.

For more, visit: https://www.berryconsultants.com/



