
December 6 and 7

NO TRAVEL NEEDED

GUEST EXPERTS

BREAKOUT SESSIONS

LEARN & NETWORK

Welcome

The organising committee warmly invite you to join us at our first Australian Trials Methodology Conference, which will take place fully online over the 6th and 7th of December 2021.
The meeting program will showcase cutting-edge trial design and analysis approaches in a way that is thought-provoking yet still accessible to biostatisticians, methodologists and trialists (across any clinical discipline) alike.
We are delighted to announce our keynote speakers Stephen Senn (Consultant Statistician, Edinburgh) and Marcel Wolbers (Roche, Switzerland), who will open the conference by discussing design and analysis issues in COVID trials and the key differences between pharmaceutical and academic trials; we look forward to the engaging discussion that will follow. Please see the program below for details on our invited sessions covering novel trial design, cluster randomised trials, and trial analysis. We also invite attendees to submit abstracts for contributed talks.
There will be ample opportunity for attendees to engage with speakers and fellow delegates following talks and in our networking lounge. While our in-person meeting will have to wait, we look forward to networking with and learning from our colleagues and invited speakers in this digital space.

We hope you will join us in December.

Rory Wolfe

Lead Investigator, AusTriM CRE

Robert Mahar

Chair, Conference Local Organising Committee

On behalf of the Local Organising and Scientific Program Committees

About the event

This event is being convened by the Australian Trials Methodology (AusTriM) Research Network, an NHMRC Centre of Research Excellence. You can learn more about our network on our website, and follow us on Twitter and LinkedIn.

For any queries please get in touch via email: austrim@monash.edu

Program


Please note the session times below are all listed in Australian Eastern Daylight Time (AEDT), GMT +11.


Monday 6th December

SESSION 1: Opening plenary
  9-9:05

Introduction and Welcome to Country

Chair

  9:05-9:45

What can academia learn from industry randomized clinical trials (and vice versa)?
This presentation provides a selective personal perspective on the design, conduct, and analysis of academic and industry trials, respectively, from a biostatistician who spent half of his career to date in each area. Two specific topics which I plan to include are the relevance of regulatory guidance documents such as the ICH E9 estimands addendum to academic trials (with an application to trials disrupted by the COVID-19 pandemic) and the role of adaptive and other innovative clinical trial designs.

Marcel Wolbers, Roche, Switzerland

Marcel Wolbers obtained a PhD in mathematical statistics from ETH Zurich in 2002. Since then, he has spent half of his career as an academic biostatistician and the other half as a statistician in the pharmaceutical industry. Between 2009 and 2016, he was the head of biostatistics at the Oxford University Clinical Research Unit in Ho Chi Minh City, Vietnam. Since 2016, he has worked as an expert statistical scientist in the methods, collaboration, and outreach group of Roche's data sciences and statistics department. His research interests include the design and analysis of innovative clinical trials, estimands, prognostic models, and competing risks.

  9:45-10:30

Design and analysis of COVID vaccine trials
The challenge of providing vaccines to help manage the COVID-19 pandemic has been met by a number of pharmaceutical companies and other organisations. Most of what has been important in meeting that challenge has been done by the virologists, vaccinologists, biochemists and physicians who have developed a variety of traditional and mRNA vaccines. Nevertheless, proving efficacy of the vaccines has also been important and this raises statistical issues of design and analysis, including fundamental ones regarding inference, that I shall consider in this lecture by looking at five of the vaccine programmes.

Stephen Senn, Consultant Statistician, UK

Stephen Senn has worked as a statistician but also as an academic in various positions in Switzerland, Scotland, England and Luxembourg. From 2011 to 2018 he was head of the Competence Center for Methodology and Statistics at the Luxembourg Institute of Health. He is the author of Cross-over Trials in Clinical Research (1993, 2002), Statistical Issues in Drug Development (1997, 2007, 2021) and Dicing with Death (2003). In 2009 he was awarded the Bradford Hill Medal of the Royal Statistical Society. In 2017 he gave the Fisher Memorial Lecture. He is an honorary life member of PSI and ISCB.
http://www.senns.uk/
https://twitter.com/stephensenn
http://www.senns.uk/Blogs.html

Tea break #1

SESSION 2: Novel Trial Designs
 10:45-11:15

Assessing Personalization in Digital Health
Reinforcement Learning provides an attractive suite of online learning methods for personalizing interventions in Digital Health. However, after a reinforcement learning algorithm has been run in a clinical study, how do we assess whether personalization occurred? We might find users for whom it appears that the algorithm has indeed learned in which contexts the user is more responsive to a particular intervention. But could this have happened completely by chance? We discuss some first approaches to addressing these questions.

Susan Murphy, Harvard University, USA

Susan Murphy’s research focuses on improving sequential, individualized, decision making in health, in particular, clinical trial design and data analysis to inform the development of just-in-time adaptive interventions in digital health. Her lab works on online learning algorithms for developing personalized mobile health interventions. She developed the micro-randomized trial for use in constructing digital health interventions; this trial design is in use across a broad range of health-related areas. She is a former President of the Institute of Mathematical Statistics and the Bernoulli Society, a 2013 MacArthur Fellow, a member of the National Academy of Sciences and the National Academy of Medicine, both of the US National Academies.

 11:15-11:45

Registry-embedded randomised clinical trials (RRCTs) - sustaining innovation
Randomised controlled trials, properly implemented, provide the highest quality evidence for evaluating the efficacy or effectiveness of new and established therapies. However, the infrastructure needed to support a randomised trial is increasingly expensive. Clinical registries collect uniform baseline and outcome data on populations defined by, for example, a particular disease or diagnosis and are potentially reusable platforms for enacting many of the key functions of a randomised trial, in particular, data collection, at substantially lower cost compared to the traditional randomised trial. These registry-based randomised controlled trials (RRCTs) are a novel approach to trial conduct with the potential for enormous cost savings to the extent that a trial can use existing data collected by the registry. In this presentation I discuss RRCTs as sustaining innovation (doing better what we already do) and contrast the ideal maximal cost-saving RRCT with more functional hybrids that may better serve the scientific objectives of randomised trials.

Elaine Pascoe, University of Queensland, AUS

Elaine Pascoe is an applied statistician in the Faculty of Medicine at the University of Queensland and Head of Biostatistics for the Australasian Kidney Trials Network (AKTN). For the past 10 years she has provided leadership in data management and statistical analysis for multi-centre clinical trials coordinated by the AKTN. Before joining the AKTN in 2011, she was a biostatistician for 10 years at Princess Margaret Hospital for Children and a research consultant for 6 years at Edith Cowan University in Perth, Western Australia. She is an inaugural member of the group leadership of the Australian Clinical Trials Alliance Statistics in Trials Interest Group (ACTA STInG) and a long-time member and current secretary of the International Society for Clinical Biostatistics.

 11:45-12:15

Trial design from a decision-maker’s perspective: putting the horse before the cart

Tom Snelling, University of Sydney, AUS

Tea break #2

SESSION 3: Contributed talks

Monday A: Adaptive trial design

6th December

12:30-2pm AEDT

 12:30-12:45

A biomarker guided Bayesian response-adaptive phase II platform trial for metastatic melanoma: the PIP-Trial
Anti-PD1-based immunotherapies have been approved for many cancer types, and are now front-line treatment for metastatic melanoma. Despite this, about 50% of metastatic melanoma patients fail to respond to therapy. It is therefore critical to identify patients with a low likelihood of responding to treatment and to be able to investigate alternative effective therapy options. In this framework, we designed the phase II Personalised Immunotherapy Platform Trial (PIP-Trial), an investigator-initiated clinical trial evaluating biomarker-driven treatment selection of 5 novel agents as: 1) first-line therapy in metastatic melanoma for patients predicted to be resistant to the Pharmaceutical Benefits Scheme (PBS) subsidised therapy (Part A), and 2) second-line immunotherapy (after PBS standard therapies) (Part B). Part A is a Bayesian adaptive multi-arm multi-stage design using response-adaptive randomisation after a burn-in period in which patients are randomised to the existing arms with equal probability. From then on, interim analyses will be carried out with the objective of either dropping poorly performing arms or continuing. Part B is an open platform without a control arm that combines a selection and an expansion phase to identify which novel agent(s) work(s) best as second-line therapy. An arm is dropped when the posterior probability of observing a clinically significant effect on the primary outcome (i.e. 6-month RECIST objective response rate) is too low. The operating characteristics of the design were investigated through simulations considering various scenarios. They show good performance of the design and a better allocation of resources for a reasonable maximum sample size of 216. All simulations were conducted using the upcoming R package BATS.

Serigne Lo
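For readers unfamiliar with this style of dropping rule, a minimal sketch of a posterior-probability check of the kind described in the abstract is given below. It assumes a Beta(1, 1) prior on the objective response rate and invented interim numbers; it is not the PIP-Trial's actual decision rule or thresholds.

# Illustrative only: posterior probability that an arm's response rate
# exceeds a clinically significant threshold, under a Beta-Binomial model.
from scipy.stats import beta

def prob_exceeds(responses, n, threshold, a0=1.0, b0=1.0):
    """P(response rate > threshold | data) with a Beta(a0, b0) prior."""
    posterior = beta(a0 + responses, b0 + n - responses)
    return posterior.sf(threshold)  # survival function = 1 - CDF

# Hypothetical interim data: 4 responses in 20 patients, 30% threshold.
p = prob_exceeds(responses=4, n=20, threshold=0.30)
print(f"P(ORR > 0.30 | data) = {p:.3f}")  # an arm would be dropped if this is too low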

 12:45-1:00

BATS: A fast modular package for the simulation of Bayesian adaptive designs
The growth of Bayesian adaptive designs has been hampered by the lack of software readily available to statisticians. Part of the problem is due to the computational burden generated by Markov chain Monte Carlo (MCMC) methods typically used to compute posterior distributions. In this work, we follow a different approach based on the Laplace approximation to circumvent MCMC. The main aim of this project is to provide a flexible structure for the fast simulation of Bayesian adaptive designs. As a first step, we focus our attention on multi-arm multi-stage (MAMS) designs. We will illustrate how the BATS package (Bayesian Adaptive Trials Simulator) can be used to determine the operating characteristics of a Bayesian adaptive design for different types of endpoints given the most common adaptations: stopping arms for efficacy or futility and fixed or response-adaptive randomisation, based on user-defined rules. Other important features include parallel processing, customisability, use on a cluster computer or PC/Mac, and adjustment for covariates. BATS has been successfully used for recent rounds of MRFF and NHMRC grant applications.

Dominique-Laurent Couturier
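As a rough illustration of what simulating operating characteristics of a MAMS design involves, the sketch below simulates a three-arm design with one interim futility look, using Beta-Binomial posteriors and Monte Carlo draws rather than the Laplace approximation the abstract describes. It does not use the BATS package, and all thresholds, rates and sample sizes are invented for illustration.

# Minimal operating-characteristics simulation for a 3-arm design (control + 2
# experimental arms) with one interim futility look.  All settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_rates = [0.30, 0.30, 0.45]          # control and two experimental arms
n_interim, n_final, n_sims = 30, 60, 2000
futility_cut, success_cut = 0.10, 0.975

def prob_beats_control(x_arm, n_arm, x_ctl, n_ctl, draws=4000):
    # Monte Carlo estimate of P(p_arm > p_ctl | data) under Beta(1, 1) priors
    pa = rng.beta(1 + x_arm, 1 + n_arm - x_arm, draws)
    pc = rng.beta(1 + x_ctl, 1 + n_ctl - x_ctl, draws)
    return np.mean(pa > pc)

wins = np.zeros(2)
for _ in range(n_sims):
    x_int = [rng.binomial(n_interim, p) for p in true_rates]
    active = [a for a in (1, 2)
              if prob_beats_control(x_int[a], n_interim, x_int[0], n_interim) > futility_cut]
    x_fin = [xi + rng.binomial(n_final - n_interim, p)
             for xi, p in zip(x_int, true_rates)]
    for a in active:
        if prob_beats_control(x_fin[a], n_final, x_fin[0], n_final) > success_cut:
            wins[a - 1] += 1

print("P(declare success):", wins / n_sims)  # arm 1 ~ type I error, arm 2 ~ power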

 1:00-1:15

Making SMART decisions in prophylaxis and treatment studies
The ‘COVID-19 prevention and treatment in cancer: a sequential multiple assignment randomised trial (SMART)’ is an innovative multi-stage design that randomises high-risk cancer patients to prophylaxis and, if they develop COVID-19, re-randomises them to an experimental treatment conditional on their disease severity (NCT04534725). SMARTs can be used to identify personalised treatment sequences (also known as ‘dynamic treatment regimens’), but typically only once a trial is complete. Identifying and implementing an efficacious COVID-19 prophylaxis and treatment regimen for cancer patients is an immediate priority. Response-adaptive randomisation is one approach that could increase the chance that patients are randomised to the most promising treatment and enable rapid clinical implementation.
When applied to a SMART, response-adaptive randomisation algorithms that ignore dynamic treatment effects may erroneously favour suboptimal dynamic treatment regimens. Q-learning, a dynamic programming method that can be used to analyse SMART data, is one of the few algorithms that have been proposed for SMART response-adaptive randomisation [1, 2]. Q-learning uses stage-wise statistical models and backward induction to incorporate later-stage ‘payoffs’ (i.e., clinical outcomes) into early-stage ‘actions’ (i.e., treatments). We propose a Bayesian decision-theoretic Q-learning method to perform response-adaptive randomisation. This approach allows dynamic treatment regimens with distinct binary endpoints at each stage to be evaluated, addressing a current limitation of the standard Q-learning method [3].
Our simulation study, motivated by the COVID-19 trial, aims to explore whether the Bayesian decision-theoretic Q-learning method can expedite treatment optimisation and improve the outcomes of prophylaxis and treatment trial participants compared to simpler approaches.

Robert Mahar
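A generic two-stage Q-learning sketch is shown below for readers unfamiliar with backward induction on SMART data: ordinary least squares at each stage, a single binary treatment per stage, and simulated data. It illustrates the standard frequentist Q-learning step the abstract builds on, not the Bayesian decision-theoretic extension the authors propose.

# Generic two-stage Q-learning by backward induction on simulated SMART data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                          # baseline covariate
a1 = rng.integers(0, 2, n)                       # stage-1 treatment (randomised)
x2 = 0.5 * x1 + 0.5 * a1 + rng.normal(size=n)    # intermediate response
a2 = rng.integers(0, 2, n)                       # stage-2 treatment (randomised)
y = x2 + a2 * (1.0 - x2) + rng.normal(size=n)    # final outcome

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 2: model Q2(x2, a2) = b0 + b1*x2 + b2*a2 + b3*a2*x2
X2 = np.column_stack([np.ones(n), x2, a2, a2 * x2])
b2 = ols(X2, y)

# Pseudo-outcome: value of the best stage-2 action for each patient
q2 = lambda a: b2[0] + b2[1] * x2 + a * (b2[2] + b2[3] * x2)
v2 = np.maximum(q2(0), q2(1))

# Stage 1: regress the pseudo-outcome on stage-1 history
X1 = np.column_stack([np.ones(n), x1, a1, a1 * x1])
b1 = ols(X1, v2)
print("Stage-1 Q-model coefficients:", np.round(b1, 2))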

 1:15-1:30

Causal discovery using an adaptive design
Clinical trials are often conceptually divided into those which are explanatory and those which are pragmatic, or in simple terms those answering the questions “how does it work?” and “which works better?”, respectively. We believe that a pragmatic trial generates large amounts of high quality mechanistic information which is not commonly used, either in the course of the trial to inform safe delivery of the intervention, or in the design of future trials. We propose for the first time a method to combine two powerful inferential concepts via network theory: response-adaptive randomisation within a clinical trial, and causal network production. We believe that this may allow pragmatic trials to clarify the mechanisms at work, either to explain the observed results within the explanatory paradigm, or to modify treatments during the course of the trial while still allowing reliable inference.

Lewis Campbell

 1:30-1:45

Valid inference when using non-concurrent controls in platform trials
Platform trials aim to evaluate the efficacy of several experimental treatments, usually compared to a shared control group. The number of experimental arms is not fixed in advance, as arms may be added or removed as the trial progresses. Compared to separate trials with their own controls, this increases the statistical power and requires fewer patients. Shared controls in platform trials include concurrent and non-concurrent control data. Non-concurrent controls for a given experimental arm refer to data from patients allocated to the control arm before the arm enters the trial. Using non-concurrent controls is appealing because it may improve the trial’s efficiency while decreasing the sample size. However, since arms are added in a sequential manner, randomization occurs at different times. This lack of true randomization over time might introduce biases due to time trends. The challenge is to discern when and how to use non-concurrent controls to increase the trial’s efficiency without introducing bias.
In this talk, we review methods to incorporate non-concurrent controls in treatment-control comparisons allowing for time trends. We focus mainly on frequentist approaches that model the time trend and Bayesian strategies that limit the borrowing level depending on the heterogeneity between concurrent and non-concurrent controls. We examine the impact of time trends on the operating characteristics of treatment effect estimators for each method under different time trends patterns. We outline under which conditions the methods lead to unbiased estimators and discuss the gain in power compared to trials only using concurrent controls.

Marta Bofill Roig
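The bias mechanism described above can be seen in a toy simulation: an arm that enters the platform late is compared against a pooled control group that includes earlier, lower-risk periods. The sketch below (illustrative parameters only, a linear time trend, simple OLS) contrasts an unadjusted analysis with one adjusted for time period.

# Simulated platform trial: arm B enters at period 2; outcomes drift upward over time.
import numpy as np

rng = np.random.default_rng(42)
rows = []
for per in (1, 2):
    arms = ["control", "A"] if per == 1 else ["control", "A", "B"]
    for arm_name in arms:
        for _ in range(100):
            out = 0.5 * per + (0.3 if arm_name == "B" else 0.0) + rng.normal()
            rows.append((arm_name, per, out))

arm = np.array([r[0] for r in rows])
period = np.array([r[1] for r in rows], dtype=float)
y = np.array([r[2] for r in rows])

def fit(X):  # OLS coefficients
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = (arm == "A").astype(float)
b = (arm == "B").astype(float)
unadjusted = fit(np.column_stack([np.ones(len(y)), a, b]))[2]
adjusted = fit(np.column_stack([np.ones(len(y)), a, b, period]))[2]
print(f"arm B effect, unadjusted: {unadjusted:.2f}; period-adjusted: {adjusted:.2f}")
# The unadjusted estimate is biased upward because arm B is compared against
# controls that include the earlier, lower-risk period.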

 1:45-2:00

Panel discussion

Monday B: Issues in analysis of trials 1

6th December

12:30-2pm AEDT

 12:30-12:45

Comparison of Analysis Methods for Addressing Stratification Errors

Background: Many trials use stratified randomisation with permuted blocks, where participants are randomised within strata defined by one or more baseline variables. While it is important to adjust for the stratification variables when estimating treatment effects, the appropriate method of adjustment is unclear when stratification errors occur and hence some participants are randomised in the incorrect stratum.

Methods: A simulation study was conducted to compare performance measures for different methods of analysis when stratification errors occur. Continuous outcomes were generated under a range of scenarios and analysed using linear regression models. Analysis methods included no adjustment, adjustment for the strata used during randomisation (the randomisation strata), adjustment for the strata after any errors are corrected (the actual strata) and adjustment for the strata after a subset of errors are identified and corrected (the observed strata).

Results: The unadjusted model performed poorly in all settings, producing standard errors that were too large. There was little difference between adjusting for the randomisation strata or the actual strata when stratification errors were rare. When errors were common, adjusting for the actual strata was more powerful. If only some errors were identified and corrected, adjusting for the observed strata led to bias in certain settings.

Conclusions: Adjusting for stratification variables is important even when stratification errors occur. We generally recommend adjusting for the actual strata in the primary analysis, however caution may be needed in specific settings. The method for dealing with stratification errors should be pre-specified in the statistical analysis plan.

Lisa Yelland
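A toy version of the comparison described above is sketched below: the stratum affects the outcome, a fraction of participants are recorded in the wrong stratum at randomisation, and the treatment effect is estimated adjusting either for the randomisation strata or the actual strata. Parameters are illustrative and the randomisation is simplified (no permuted blocks).

# Stratification-error simulation: adjust for randomisation vs actual strata.
import numpy as np

rng = np.random.default_rng(7)
n, error_rate, effect = 400, 0.10, 0.5
actual = rng.integers(0, 2, n)                        # true stratum
errors = rng.random(n) < error_rate
rand_stratum = np.where(errors, 1 - actual, actual)   # stratum recorded at randomisation
trt = rng.integers(0, 2, n)                           # treatment (simplified randomisation)
y = effect * trt + 1.0 * actual + rng.normal(size=n)

def trt_coef(covariate):
    X = np.column_stack([np.ones(n), trt, covariate])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("adjusted for randomisation strata:", round(trt_coef(rand_stratum), 3))
print("adjusted for actual strata:       ", round(trt_coef(actual), 3))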

 12:45-1:00

Between-centre differences in overall patient outcomes and in trial treatment effects in multicentre perioperative trials

Background: In multicentre trials, characteristics of randomised patients can differ between centres. This may lead to variation in the outcome or treatment effect on the outcome between centres. Not accounting for this heterogeneity in the analysis of multicentre trials may lead to biased estimates and standard errors.

Aims: Our aim was to examine whether there are differences in the outcome or treatment effect on the outcome between centres and whether heterogeneity affects the overall treatment effect estimate and its precision. We used data from six large multicentre randomised controlled trials in anaesthesia, analysing the primary trial-specific endpoint and hospital length of stay (LOS), which was recorded in all six trials.

Methods: Mixed-effects logistic (primary endpoint) and Weibull (hospital LOS) regression models were fitted to estimate the between-centre and treatment effect heterogeneity. In addition, a non-inferential visual aid by Schou and Marschner [1] was used to explore treatment effect heterogeneity. The overall treatment effect estimated from the mixed-effects regression, which incorporates between-centre heterogeneity, was compared with that estimated from a fixed-effects model.

Results: There were between-centre differences in both outcomes. There was no treatment effect heterogeneity for the primary endpoint, but heterogeneity in the treatment effect between centres for hospital LOS ranged from moderate to substantial. In general, the treatment effects were similar when accounting for centre effects, albeit with wider confidence intervals for hospital LOS in the presence of heterogeneity.

Conclusions: In six multicentre trials in anaesthesia, analysis accounting for between-centre heterogeneity may decrease bias in treatment effect estimates and standard errors.

Vanessa Pac Soo

 1:00-1:15

Guidelines to reduce, handle and report missing data in palliative care trials co-produced using a multi-stakeholder nominal group technique
Missing data can introduce bias and reduce the power, precision and generalisability of study findings. Guidelines on how to address missing data are limited in scope and detail and poorly implemented.
Aim: To develop guidelines on how best to reduce, handle and report missing data in palliative care clinical trials.
Method: Modified nominal group technique with patient/public research partners, clinicians, trialists, methodologists and statisticians. This involved five steps: (i) evidence summary, (ii) silent generation of ideas, (iii) contributing and developing ideas by structured groups, (iv) voting, (v) guideline writing.
Results: The top five of 28 main recommendations were: (i) train all research staff on missing data, (ii) prepare for missing data at the trial design stage, (iii) address how missing data will be handled in the statistical analysis plan, (iv) collect the reasons for missing data to inform strategies to reduce and handle missing data and (v) report descriptive statistics comparing the baseline characteristics of those with missing and observed data to enable an assessment of the risk of bias. Preparing for and understanding the reasons for missing data were greater priorities for stakeholders than how to deal with missing data once they had occurred.
Conclusion: The first co-produced comprehensive guidelines on how to address missing data recommend that internationally, trialists designing and conducting studies in palliative care should prioritise preparing for and understanding the reasons for missing data, so missing data are prevented in the first place. Guideline implementation will require the endorsement of research funders and research journals.

Jamilla Hussain

 1:15-1:30

Handling missing disease information due to death in trial endpoints that need two visits to diagnose

In many trials, outcomes can be assessed at a single time point. But some endpoints require two consecutive visits to prospectively define their occurrence, for example consecutive positive test results for identifying persistence or chronicity of a condition. The occurrence of such endpoints is subject to the competing risk of death and disruption of visit follow-up. In addition, the interval censored nature of the outcome must be considered in the analysis.
In practice, the most common approaches to address these challenges are to censor participants at death or to exclude participants who died [1]. Another approach is to use a full-likelihood based illness-death model (IDM) [2,3]. This model considers both the multi-state nature of the data (health, disease, and death) and the interval-censored nature of the disease onset. However, no studies have investigated the performance of these methods in diseases that require two visits to diagnose.

Using simulated data and a real trial, we investigate the bias in risk factor effect estimation of the following censoring approaches for both the IDM and the Cox proportional hazards model: i) Censoring at the last visit, ii) Censoring at the second-last visit, iii) Censoring at the last visit if the last test is negative, and at the second-last visit if the last test is positive, and iv) Censoring at death. We use the 19,114 participants from the ASPirin in Reducing Events in the Elderly trial to evaluate the association of the risk factor diabetes with onset of persistent physical disability.

Thao Le

 1:30-1:45

Handling missing data and drop out in hospice/palliative care trials through the estimand framework
Context
Missing data are common in hospice/palliative care trials due to high drop-out, a consequence of the population of interest. Missing data can reduce statistical power and introduce bias.
Objectives
Recently the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) released updated guidance on statistical principles for clinical trials introducing the estimand framework to align trial objectives, trial conduct, statistical analysis and interpretation of results. Our objective is to present how the estimand framework can be used to guide the handling of missing data in palliative care trials.
Methods
We outline the estimand framework by highlighting the five elements of an estimand (treatment, population, variable, summary measure and intercurrent event handling), listing common intercurrent events in palliative care trials and presenting strategies to deal with intercurrent events based on the five strategies for handling them outlined in the ICH guidance.
Results
We list intercurrent events anticipated in a palliative care trial, and discuss and justify which analytic strategies could be followed for each intercurrent event. We provide an example using a palliative care trial comparing two opioids for pain relief in participants with cancer pain.
Conclusion
When planning a trial, the estimand should be explicitly stated, including how intercurrent events will be handled in the analysis. This should be informed by the scientific objectives of the trial. The estimand guides the handling of missing data during the conduct and analysis of the trial. Defining an estimand is not a statistical activity, but a multi-disciplinary process involving all stakeholders.

Anneke Grobler

 1:45-2:00

Panel discussion

Tuesday 7th December

SESSION 1: Trial Analysis
 9-9:30

Using Joint Models to Disentangle the Treatment Effect in an Alzheimer Clinical Trial
In many clinical trials, patients are repeatedly measured for several longitudinal outcomes. The patient follow-up can be stopped due to an outcome-dependent event, such as clinical diagnosis, death, or dropout. Joint modeling is a popular choice for the analysis of this type of data. Motivated by a prodromal Alzheimer’s disease trial, we propose a new type of multivariate joint model in which longitudinal brain imaging outcomes and memory impairment ratings are associated with time to open-label medication and dropout but also affect each other directly. Existing joint models for multivariate longitudinal outcomes account for the correlation between the longitudinal outcomes through the random effects, often by assuming a multivariate normal distribution. However, for these models, it is difficult to interpret how the longitudinal outcomes affect each other. We model the dependence between the longitudinal outcomes differently so that a first longitudinal outcome affects a second one. Specifically, for each longitudinal outcome, we use a linear mixed-effects model to estimate its trajectory, where, for the second longitudinal outcome, we include the linear predictor of the first outcome as a time-varying covariate. This facilitates an easy and direct interpretation of the association between the longitudinal outcomes and provides a framework for assessing mediation to understand the underlying biological processes.
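In rough notation (an illustrative simplification that ignores the survival submodels for open-label medication and dropout), the linkage described above can be written as

\begin{aligned}
y_{1i}(t) &= \eta_{1i}(t) + \varepsilon_{1i}(t), \qquad
\eta_{1i}(t) = \mathbf{x}_{1i}(t)^\top \boldsymbol{\beta}_1 + \mathbf{z}_{1i}(t)^\top \mathbf{b}_{1i},\\
y_{2i}(t) &= \mathbf{x}_{2i}(t)^\top \boldsymbol{\beta}_2 + \mathbf{z}_{2i}(t)^\top \mathbf{b}_{2i}
            + \alpha\,\eta_{1i}(t) + \varepsilon_{2i}(t),
\end{aligned}

so that the coefficient alpha directly quantifies how the current (error-free) level of the first longitudinal outcome shifts the trajectory of the second, which is the direct interpretation the abstract refers to.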

Dimitris Rizopoulos, Erasmus University Medical Center, NLD

Dimitris Rizopoulos is a Professor in Biostatistics at the Erasmus University Medical Center. He received an M.Sc. in statistics (2003) from the Athens University of Economics and Business and a Ph.D. in Biostatistics (2008) from the Katholieke Universiteit Leuven. Dr. Rizopoulos wrote his dissertation and many methodological and applied articles on various aspects of models for survival and longitudinal data analysis. He is the author of a book on the topic of joint models for longitudinal and time-to-event data. He has also written three freely available packages to fit such models in R under maximum likelihood (i.e., package JM) and the Bayesian approach (i.e., packages JMbayes and JMbayes2). He currently serves as co-Editor for Biostatistics.

 9:30-10

Handling unplanned disruptions in randomised trials using missing data methods: a four-step strategy
The coronavirus pandemic (Covid-19) presents a variety of challenges for ongoing clinical trials, including unplanned treatment disruptions, participant infections and an inevitably higher rate of missing outcome data, with non-standard reasons for missingness. This presentation explores a four-step strategy for handling unplanned disruptions in the analysis of randomised trials that are ongoing during a pandemic, using missing data methods. We discuss and highlight controlled multiple imputation as an accessible tool for conducting sensitivity analyses. The framework is consistent with the statistical principles outlined in the ICH-E9(R1) addendum on estimands and sensitivity analysis in clinical trials. Following an outline of the main issues raised by a pandemic, we describe each step of the strategy in turn, illustrated using an ophthalmic trial ongoing during Covid-19. Scenarios where treatment effects for a ‘pandemic-free world’ and a ‘world including a pandemic’ are of interest are considered.

Suzie Cro, Imperial College London, UK

Suzie Cro is an advanced research fellow at Imperial Clinical Trials Unit. She has been a statistician in clinical trials for over 10 years and has a broad range of experience in the design and analysis of clinical trials from phase I to phase IV, across therapeutic areas. She currently holds a personal NIHR advanced research fellowship to develop statistical methods for estimating treatment estimands in randomised controlled trials where clinical outcomes have been affected by post-randomisation events. She also conducts statistical research into relevant, accessible methods for handling missing data, estimands and transparency in the statistical analysis of clinical trials.

 10-10:30

Missing data in randomised trials: how to avoid multiple imputation

Ian White, University College London, UK

Ian is a medical statistician with an interest in developing new methodology for design and analysis of clinical trials, meta-analysis and observational studies. Ian is particularly interested in methods for design of non-inferiority and other trials, including the non-inferiority frontier for non-inferiority trials with an uncertain control event rate, the personalised randomised controlled trial (PRACTical) for settings without an accepted standard of care, and the combination of factorial and multi-arm multi-stage trials. Other interests in trials include causal inference. He has worked for many years in missing data, where he has contributed to the widespread use of multiple imputation and is now developing extensions for missing-not-at-random data. He is also particularly interested in meta-analysis and network meta-analysis, where he has developed methods for assessing and testing inconsistency. He co-wrote a tutorial on simulation studies, and produces a range of Stata software.

Tea break #1

SESSION 2: Cluster Randomised Trials
 10:45-11:15

Community Randomized Trials with Rare Events: Negative Binomial Regression vs. Traditional Marginal Modeling Approaches
Community randomized trials (CRTs) randomize entire communities of subjects to different trial arms. For example, the HEALing (Helping to End Addiction Long-term) Communities Study (HCS) is a multi-site (Kentucky, Massachusetts, New York and Ohio), parallel-group study in the United States in which 67 communities are randomized to either an intervention or a wait-list control arm. The goal of the intervention is to reduce opioid-related overdose fatalities, which are expected to be rare events. Traditional marginal modeling approaches in the CRT literature include the use of generalized estimating equations with an exchangeable correlation structure when utilizing subject-level data, or analogously quasi-likelihood based on an over-dispersed binomial variance when utilizing community-level data as is the case in the HCS. In this talk, we demonstrate that negative binomial regression is an alternative modeling approach that can be employed and that may have utility over traditional approaches. Specific modeling examples, as well as analyses of data from communities participating in the HCS, are used to demonstrate concepts.
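As a point of reference for the modelling approach named above, the sketch below fits a negative binomial regression to simulated community-level counts with a population offset. The community numbers, rates and dispersion are invented, the dispersion parameter is fixed rather than estimated, and this is not the HCS analysis.

# Community-level negative binomial regression for rare-event counts (illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
k = 67                                         # number of communities
pop = rng.integers(20_000, 200_000, k)         # community populations
trt = rng.integers(0, 2, k)                    # intervention vs wait-list
base_rate = 30 / 100_000                       # invented baseline fatality rate
mu = pop * base_rate * np.exp(-0.25 * trt)     # ~22% lower rate in the intervention arm
counts = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))  # overdispersed counts

X = sm.add_constant(trt.astype(float))
nb = sm.GLM(counts, X,
            family=sm.families.NegativeBinomial(alpha=0.5),  # dispersion fixed for simplicity
            exposure=pop).fit()
print(f"estimated rate ratio: {np.exp(nb.params[1]):.2f}")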

Philip Westgate, University of Kentucky, USA

Dr. Philip Westgate is an Associate Professor in the Department of Biostatistics at the University of Kentucky. His research interests include cluster randomized trials (CRTs) and longitudinal studies. He has first-authored publications addressing methods for the analysis of data arising from such studies. Furthermore, he is a co-investigator on multiple ongoing CRTs, and serves as the University of Kentucky’s lead biostatistician for the HEALing Communities Study.

 11:15-11:45

Updating the Ottawa Statement: Identifying new ethical issues in cluster randomized trials
The Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials (2012) provided the first international ethical guidance for cluster trials. Since its publication, novel cluster randomized designs and increased use of cluster trials for pragmatic evaluation of individual-level interventions have highlighted ongoing ethical challenges. The authors of the Ottawa Statement plan to update the document in 2022. As a first step, we undertook a citation analysis of the Ottawa Statement and relevant background documents. In this paper, we discuss gaps and new issues identified in the research ethics literature. Our review highlighted the need for further guidance for stepped wedge trials, cluster trials conducted in low-resource settings, the use of alternative methods of obtaining informed consent (such as integrated or verbal consent), and the identification and protection of vulnerable participants.

Charles Weijer, Western University, CAN

Charles Weijer is Professor of Medicine, Epidemiology & Biostatistics, and Philosophy at Western University in London, Canada. He is a leading expert in the ethics of randomized controlled trials. From 2008 to 2013 Charles co-led a collaboration that produced the first international ethics guidelines for cluster randomized trials. His current work explores ethical issues in pragmatic randomized controlled trials that evaluate health interventions in real-world conditions to better inform patients, health providers and health systems managers. Charles led the writing team for the World Health Organization guidance on “Ethical Considerations for Health Policy and Systems Research,” published in 2019. In 2020, he served on the WHO Working Group for Guidance on Human Challenge Studies in COVID-19. Charles held the Canada Research Chair in Bioethics from 2005 to 2019, and, in 2016, he was elected to the Royal Society of Canada.

 11:45-12:15

Power analysis for cluster randomized trials with multiple continuous co-primary endpoints
Pragmatic trials evaluating health care interventions often consider the cluster randomized design due to administrative or logistical considerations. In recent pragmatic cluster randomized trials (CRTs), the intervention may be tested on multiple co-primary endpoints to demonstrate its effectiveness on equally important outcomes. While methods for power analysis based on K (K≥2) binary co-primary endpoints were previously studied in CRTs, methods for designing CRTs with multiple continuous co-primary outcomes remain unavailable. Assuming a multivariate linear mixed model that accounts for multiple types of intraclass correlation coefficients among the observations in each cluster, we derive the closed-form joint distribution of K treatment effect estimators to facilitate sample size and power determination with two different types of null hypotheses. We characterize the relationship between the power of each test and the correlation parameters. To further allow for more pragmatic design calculations, we relax the equal cluster size assumption and approximate the joint distribution of the K treatment effect estimators through the mean and coefficient of variation of cluster sizes. Our simulation studies with a finite number of clusters indicate that the predicted power by our method agrees well with the empirical power by simulation, when the parameters in the multivariate linear mixed model are estimated via a bias-corrected expectation-maximization algorithm. An application to a real CRT is presented to illustrate the proposed method.
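A simplified sketch of this kind of calculation for K = 2 continuous co-primary endpoints with equal cluster sizes is given below: each endpoint's variance is inflated by the usual design effect, and joint power is computed by requiring both tests to be significant, using an assumed correlation between the two test statistics. All inputs are illustrative; the method in the abstract handles unequal cluster sizes, general K and estimated correlation parameters.

# Approximate power for two continuous co-primary endpoints in a parallel CRT
# (both endpoints must be significant).
import numpy as np
from scipy.stats import norm, multivariate_normal

m, clusters_per_arm = 20, 15            # cluster size, clusters per arm
icc = 0.05                              # intraclass correlation (assumed equal for both endpoints)
deff = 1 + (m - 1) * icc                # design effect
delta = np.array([0.30, 0.25])          # standardised effect sizes
rho_test = 0.40                         # assumed correlation between the two test statistics
alpha = 0.05

n_eff = m * clusters_per_arm / deff     # effective sample size per arm
se = np.sqrt(2.0 / n_eff)               # SE of a standardised mean difference
z_crit = norm.ppf(1 - alpha / 2)
ncp = delta / se                        # means of the two (shifted) test statistics

# P(Z1 > z_crit and Z2 > z_crit), ignoring the negligible lower rejection tails
joint_below = multivariate_normal(mean=ncp,
                                  cov=[[1.0, rho_test], [rho_test, 1.0]]).cdf([z_crit, z_crit])
power = 1 - norm.cdf(z_crit - ncp[0]) - norm.cdf(z_crit - ncp[1]) + joint_below
print(f"approximate joint power: {power:.2f}")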

Fan Li, Yale University, USA

Fan Li is an Assistant Professor in the Department of Biostatistics at the Yale School of Public Health. His research interests include methodology for the design, monitoring, and analysis of parallel-arm, crossover, and stepped-wedge cluster randomized trials, and more broadly, methodology for clustered data. He has also been developing causal inference methods for designing and analyzing observational studies. His methodological research efforts are currently supported by the Patient-Centered Outcome Research Institute.

Tea break #2

SESSION 3: Contributed talks

Tuesday A: Innovations in clinical trial design

7th December

12:30-2pm AEDT

 12:30-12:45

How are progression decisions made following external randomised pilot trials? A qualitative interview study and framework analysis

Background: External randomised pilot trials help researchers decide whether, and how, to do a future definitive randomised trial. Progression criteria are often prespecified to inform the interpretation of pilot trial findings and subsequent progression decision-making. We aimed to explore and understand the perspectives and experiences of key stakeholders when making progression decisions following external pilot trials.

Methods: Thirty-five remote semi-structured interviews with external randomised pilot trial team members including Chief Investigators, Trial Managers, Statisticians and Patient and Public Involvement representatives. Questions focussed on experiences and perceptions of pilot trial progression decisions and whether, and how, progression criteria informed this decision. Data were analysed using the Framework Method approach to thematic analysis.

Results: Interviews were conducted between December 2020 and July 2021. Six descriptive themes were developed to capture the experiences and perspectives of participants (Figure 1). These themes were underpinned by an overarching interpretative theme, ‘a one size approach to progression does not fit all’, to describe the highly nuanced and complex decision-making process that occurs following external randomised pilot trials. Progression criteria are rarely the only consideration informing the decision to progress to future research.

Conclusions: One size does not fit all when it comes to progression criteria and pilot trial progression. Progression criteria are only one of many considerations researchers weigh when deciding whether a pilot trial is feasible. External pilot trial progression is not guaranteed even when a pilot trial is considered feasible (based on progression criteria and/or other considerations), indicating inefficiency and potential research waste.

Katie Mellor, Oxford Clinical Trials Research Unit / Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, England

 12:45-1:00

Changing the Primary Endpoint of an RCT in Response to a Low Event Rate
A lower-than-expected event rate on the primary endpoint of a randomised controlled trial (RCT) can reduce statistical power.

Objective: To elucidate the issues associated with changing the primary endpoint in response to low event rates.

Method: Simulation study using 8,939 records from the LIPID trial, an RCT of statin therapy in participants with cardiovascular disease. Two endpoint choices were considered: coronary heart disease death (CHDD) (original primary endpoint in LIPID with 585 events); and revascularisations (RVSC) (1,217 events). 100,000 simulated trials were constructed using bootstrapping under scenarios defined by the choice of hazard ratio (HR) imposed on each endpoint. HRs of 1, 0.75, or 0.85 were used (nine scenarios). Analyses were performed using competing-risk survival regression under two strategies (A and B). For Strategy A, CHDD was retained as the pre-specified primary endpoint as planned. For Strategy B, a switch from CHDD to RVSC was made when the observed number of CHDD events was below 585.

Results: The distributions of the test statistics (z-values; see Figure) were virtually identical for both strategies under the null hypothesis (H0: HR_RVSC = HR_CHDD = 1). Strategy B correctly rejected H0 more often than Strategy A when 1 > HR_RVSC = HR_CHDD (because RVSC events > CHDD events), and when HR_RVSC < HR_CHDD. The converse was true when HR_RVSC > HR_CHDD.

Conclusions: Switching the primary endpoint in response to a low event rate does not inflate type I error under H0 but may not improve statistical power if the treatment effect is weaker on the endpoint to which the analysis is switched.

Xanthi Coskinas, The National Health and Medical Research Council Clinical Trials Centre, University of Sydney, Sydney, Australia

 1:00-1:15

EMR-Embedded Randomised Controlled Trial Design in the Australian Paediatric Hospital Setting

Background: Embedded trials are randomised trials embedded into electronic medical record (EMR) systems. These trials utilise patient data already existing within the EMR. They have high translational value because information routinely collected within the EMR reflects real-world, clinically relevant evidence [1]. They are inexpensive, they require no extra data collection, and consent can be obtained by the clinical team. Embedded trials using the Epic EMR are being piloted by Murdoch Children’s Research Institute (MCRI) at The Royal Children’s Hospital (RCH).

Lessons learned: The EMR can be used for patient identification, randomisation, treatment allocation, and outcome data extraction. Comparative effectiveness trials that don’t require complex consent discussions are ideal for embedding. An important design aspect is having broad but clearly defined eligibility criteria [2]; this enables the EMR to accurately identify potential trial patients. The clinical flow of the trial population needs to ensure that adequate numbers are expected to be consented and randomised. Currently, only simple randomisation is available within the Epic EMR, therefore other software linked to the EMR is being utilised to perform block randomisation.

Conclusion: The feasibility of embedding trials is dependent on the trial design and the constraints of the EMR and its users. Embedded trials have minimal ongoing costs but require substantial upfront work and expertise to develop in the EMR. Only individually randomised trials are currently being explored at RCH. There is scope in the future to extend this to cluster randomised designs. Embedded trials with limited research contact with patients offer an exciting, cost-effective and COVID-safe trial design.

Alannah Rudkin, Centre for Health Analytics, Melbourne Children’s, Australia; Melbourne Children’s Trials Centre, Murdoch Children’s Research Institute, Australia

 1:15-1:30

N-of-1 Trials: Novel Personalised Trial Designs

Background: N-of-1 trials are individualised randomised controlled trials using patients as their own control. They represent a powerful trial methodology because they identify how an individual patient responds to a treatment. The “gold standard” randomised controlled trial (RCT) focuses on the “average” patient. Pooled N-of-1 trials, where data from a series of N-of-1 trials are statistically aggregated, can provide population-based estimates of treatment effectiveness akin to RCTs but with fewer participants. This reduces the time, cost and recruitment issues associated with RCTs.

Objectives: The key opportunities (efficiency, precision, patient-centredness, cost-effectiveness) and challenges (limited awareness of the method, statistical analysis methods) associated with N-of-1 trials will be presented, followed by an introduction to the ‘International Collaborative Network for N-of-1 Trials and Single-Case Designs (ICN)’, a global network of ~500 members using these methods in 33 countries. The ICN aims to further the science of N-of-1 research and encourage discussion and collaboration on a global scale.

Conclusion(s): N-of-1 trials play an important role in the movement towards personalised medicine, digital health, shared clinical decision-making and patient-centred healthcare. Rapid advances in digital technology can drive adoption of this powerful, personalised treatment methodology. Digital N-of-1 trials harness the power of individual patient data, facilitating collection of ‘real world data’ that can be translated into ‘real world evidence’, which healthcare providers can use to make informed decisions about patient care. Limited awareness about the possibility of using N-of-1 trials to obtain individual and population estimates of treatment effectiveness is a barrier to wider adoption.

Suzanne McDonald, UQ Centre for Clinical Research, The University of Queensland, Australia

 1:30-1:45

Sham Control Methods in Physical, Psychological and Self-Management Intervention Trials for Pain: A Systematic Review and Meta-Analysis

Blinding is challenging in randomised controlled trials (RCTs) of physical, psychological, and self-management therapies (PPSTs) for pain. To develop standards for the design, implementation, and reporting of sham controls, a systematic overview of current sham interventions was required.

Twelve databases were searched for placebo or sham controlled RCTs of PPSTs in clinical pain populations. Two reviewers extracted general trial features, sham control methods, and outcome data (protocol: CRD42020206590). The similarity of sham controls to experimental treatments was rated across 25 features. Meta-regression analyses assessed putative links between employed sham control methods, observed effect sizes in pain-related outcomes, attrition, and blinding success.

The review included 177 control interventions, published between 2008 and 2020. Most trials studied people with chronic pain, and more than half were manual therapy trials. Sham interventions ranged from those clearly modelled on the active treatment to largely dissimilar shams. Similarity between sham and active interventions was more frequent for certain aspects (e.g., treatment duration and frequency) than others (e.g., physical treatment procedures and patient sensory experiences). Resemblance between sham controls and active interventions predicted variability in pain-related outcomes, attrition, and blinding effectiveness. Group differences in the number of treatment sessions and in treatment environments were particularly influential.

A comprehensive picture of prevalent blinding methods is provided. The results support the supposed link between blinding methods and effect sizes. Challenges to effective blinding are complex, and often difficult to discern from trial reports. Nonetheless, these insights have the potential to change trial design, conduct, and reporting and will inform guideline development.

David Hohenschurz-Schmidt, Pain Research, Dept. Surgery & Cancer, Faculty of Medicine, Imperial College, Chelsea & Westminster Hospital campus, London

 1:45-2:00

Panel discussion

Tuesday B: Issues in Analysis of Trials 2

7th December

12:30-2pm AEDT

 12:30-12:45

Analysis of adaptive platform trials using a network approach

Treatment comparisons from adaptive platform trials may be subject to confounding if there are underlying time trends in the population risk level. There are two common approaches to dealing with this confounding, which we refer to as adjustment and stratification. Adjustment, which was used for example in the REMAP-CAP trial, incorporates a time epoch adjustment into a statistical model that permits comparisons between treatment groups that were not necessarily randomized during the same time periods. Stratification, which was used for example in the STAMPEDE trial, uses only comparisons between treatment groups randomized during the same time period, and does not permit comparisons of non-concurrent randomizations. We present a novel method that embeds these two approaches into a common analysis framework using the principles of network meta-analysis, with the purpose of exploring sensitivity to the use of non-concurrent comparisons. The cohorts of randomizations between adjacent adaptation timepoints are treated like separate fixed design randomized trials. These fixed design cohorts produce a network of direct and indirect treatment comparisons which may be aggregated using network meta-analysis principles. This allows a transparent decomposition of the overall information from a platform trial into direct randomized evidence and indirect non-randomized evidence. Restricting the analysis to direct comparisons is equivalent to the stratified analysis approach while use of both direct and indirect comparisons is equivalent to the adjusted approach. Our network approach provides a natural framework for comparing the two. Simulations will be presented as well as a re-analysis of data from the STAMPEDE trial.

Ian Marschner, NHMRC Clinical Trials Centre, University of Sydney, Australia

 12:45-1:00

Conditional Logistic Modelling for Adaptive Trials

There is increasing interest in applying Bayesian approaches to clinical trials. I will introduce a conditional logistic model and demonstrate its utility for the design of Bayesian adaptive trials when the time to the endpoint is long relative to recruitment. The model is applicable for adaptive trial designs where interim analyses are conducted for a binary endpoint that has not yet been observed in those individuals with incomplete follow up. For example, the endpoint may be based on disease status at six months, with an interim analysis conducted while some participants are disease-free but have not yet completed follow up. We detail how these data can be analysed and how to assess adaptive trial decision rules. Typically, this issue has been addressed by either excluding those with incomplete follow up, or by imputing their future observations. These options either discard available information or rely on predictive distributions, and so are potentially sub-optimal. The conditional logistic model handles such data by modelling posterior distributions for each follow up time point and thus incorporates all available information. I will present the results from a comparison of these different approaches using simulation.

Michael Dymock, Telethon Kids Institute, Nedlands, Australia

 1:00-1:15

The Twice-Generalized Odds Ratio: A method for performing dose-response and prognostic variable analysis with complex, multifaceted outcome data

There is a growing acknowledgement in medical research that patient outcomes are often complex and multifaceted, and statistical methods are needed that can handle this complexity. One method that has received recent attention is the Win Ratio (Pocock et al. 2012), which uses arbitrary statements of outcome preference to identify improved outcomes on two-group data.

While the Win Ratio enables two-group analyses of complex patient outcomes, it is unable to consider dose-response relationships with more than two groups, nor can it be used to explore the relationship of common prognostic variables (e.g. age, injury severity) with patient outcomes.

We propose the Twice-Generalised Odds Ratio statistic as an extension of the Win Ratio approach to well-ordered explanatory variables, thus enabling multiple-group dose-response analysis and the investigation of prognostic variables in the context of complex and multifaceted patient outcomes. This statistic is a further generalisation of Agresti’s Generalized Odds Ratio statistic (Agresti 1980).

We illustrate the twice-generalized odds ratio using data from the EXTEND-IA TNK hyper-acute stroke trial. We also use computational experiments to compare this method to the original Win Ratio in the two-group case, showing the two methods have extremely strong levels of agreement. We demonstrate that the Twice-Generalized Odds Ratio is an order of magnitude faster to calculate, providing a valuable practical contribution for trial simulation.

Hannah Johns, Melbourne Medical School, University of Melbourne, Australia
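For orientation, a minimal sketch of Agresti's generalised odds ratio, the quantity the proposed statistic generalises, is shown below for an ordered exposure and an ordered outcome ranking: it is simply the ratio of concordant to discordant pairs across subjects from different exposure levels. The function and data are generic illustrations, not the authors' implementation, and ties are ignored.

# Agresti-style generalised odds ratio: concordant / discordant pairs across
# all pairs of subjects from different (ordered) exposure groups.
import itertools
import numpy as np

def generalized_odds_ratio(group, outcome):
    """group: ordered exposure level; outcome: ordered outcome rank (higher = better)."""
    concordant = discordant = 0
    for (g1, y1), (g2, y2) in itertools.combinations(zip(group, outcome), 2):
        if g1 == g2 or y1 == y2:
            continue                 # ties contribute to neither count
        if (g1 < g2) == (y1 < y2):
            concordant += 1          # higher dose paired with better outcome
        else:
            discordant += 1
    return float("inf") if discordant == 0 else concordant / discordant

# Hypothetical data: three dose levels, outcome ranks from a composite ordering.
dose = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
rank = np.array([1, 2, 3, 2, 3, 4, 2, 4, 5])
print(round(generalized_odds_ratio(dose, rank), 2))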

 1:15-1:30

Use of information criteria for selecting a correlation structure for longitudinal cluster randomised trials

When designing and analysing longitudinal cluster randomised trials such as the stepped wedge, the similarity of outcomes from the same cluster must be accounted for through the choice of the within-cluster correlation structure. Several choices for this structure are commonly considered for application within the linear mixed model paradigm. The first assumes a constant correlation for all pairs of outcomes from the same cluster (the exchangeable/Hussey and Hughes model); the second assumes that correlations of outcomes measured in the same period are higher than outcomes measured in different periods (the block exchangeable model); and the third is the discrete-time decay model, which allows the correlation between pairs of outcomes to decay over time. Currently, there is limited guidance on how to select the most appropriate structure. We present the results of a simulation study to determine the effectiveness of the Akaike and Bayesian Information Criteria (AIC and BIC) for selecting the appropriate model. Both AIC and BIC perform well at correctly identifying the exchangeable model. However, depending on the values of the model parameters, they can require much more data to reliably identify the more complex models. In practice, we recommend that researchers conduct supplementary analyses under alternative correlation structures to gauge sensitivity to the original choice, and that AIC and BIC values be reported along with correlation parameter estimates.

Rhys Bowden, School of Public Health and Preventive Medicine, Monash University, Australia
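For readers less familiar with the three structures named above, they can be written (in illustrative notation for a cross-sectional design, with distinct individuals k and k' in cluster i and periods j and j') roughly as

\mathrm{Corr}\!\left(y_{ijk},\, y_{ij'k'}\right) =
\begin{cases}
\rho & \text{exchangeable (Hussey and Hughes), for all } j, j' \\
\rho_1 \ (j = j'), \quad \rho_2 \ (j \neq j') & \text{block exchangeable} \\
\rho\, r^{|j - j'|} & \text{discrete-time decay}
\end{cases}

where r controls how quickly the within-cluster correlation decays across periods.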

 1:30-1:45

Exact Confidence Limits Compatible with the Result of a Group Sequential Trial

Sequential (or adaptive) designs are common in acceptance sampling and pharmaceutical trials. This is because they can achieve the same type I and type II error rates with fewer subjects on average than fixed sample trials. After the trial is completed and the test result decided, we need full inference on the main parameter Δ. In this paper, we are interested in exact one-sided lower and upper limits.

Unlike standard trials, for sequential trials there need not be an explicit test statistic, nor even a p-value. This motivates the more general approach of defining an ordering on the sample space and using the construction of Buehler (1957). This is guaranteed to produce exact limits; however, there is no guarantee that the limits will agree with the test. For instance, we might reject Δ ≤ Δ0 at level α but have a lower 1−α limit that is less than Δ0. This paper gives a very simple condition to ensure that this unfortunate feature does not occur. When the condition fails, the ordering is easily modified to ensure compliance.

Chris Lloyd, Melbourne Business School, University of Melbourne

 1:45-2:00

Panel discussion