
| Registration type | Price |
|---|---|
| Standard | $200 |
| Student | $100 |
| Residents of developing countries* | $100 |
*Please contact us via email (austrim@monash.edu) to arrange a reduced-cost ticket.
For a full list of eligible countries, please refer to the DFAT Listing.

Welcome
The organising committee warmly invite you to join us at our first Australian Trials Methodology Conference, which will take place fully online over the 6th and 7th of December 2021.
The meeting program will showcase cutting-edge trial design and analysis approaches in a way that is thought-provoking yet still accessible to biostatisticians, methodologists and trialists (across any clinical discipline) alike.
We are delighted to announce our keynote speakers Stephen Senn (Consultant Statistician, Edinburgh) and Marcel Wolbers (Roche, Switzerland), who will open the conference by discussing design and analysis issues in COVID trials and the key differences between pharmaceutical and academic trials; we look forward to the engaging discussion that will follow. Please see the program below for details on our invited sessions covering novel trial design, cluster randomised trials, and trial analysis. We also invite attendees to submit abstracts for contributed talks.
There will be ample opportunity for attendees to engage with speakers and fellow delegates following talks and in our networking lounge. While our in-person meeting will have to wait, we look forward to networking with and learning from our colleagues and invited speakers in this digital space.
We hope you will join us in December.
Rory Wolfe
Lead Investigator, AusTriM CRE
Robert Mahar
Chair, Conference Local Organising Committee
On behalf of the Local Organising and Scientific Program Committees
About the event
This event is being convened by the Australian Trials Methodology (AusTriM) Research Network, an NHMRC Centre of Research Excellence. You can learn more about our network on our website, and follow us on Twitter and LinkedIn.
For any queries please get in touch via email: austrim@monash.edu

Program
Please note the session times below are all listed in Australian Eastern Daylight Time (AEDT), GMT +11.
Monday 6th December
SESSION 1: Opening plenary

Introduction and Welcome to Country
Chair

What can academia learn from industry randomized clinical trials (and vice versa)?

Marcel Wolbers, Roche, Switzerland

Design and analysis of COVID vaccine trials

Stephen Senn, Consultant Statistician, UK
http://www.senns.uk/
https://twitter.com/stephensenn
http://www.senns.uk/Blogs.html
Tea break #1
SESSION 2: Novel Trial Designs

Assessing Personalization in Digital Health

Susan Murphy, Harvard University, USA

Registry-embedded randomised clinical trials (RRCTs) - sustaining innovation

Elaine Pascoe, University of Queensland, AUS

Trial design from a decision-maker’s perspective: putting the horse before the cart

Tom Snelling, University of Sydney, AUS
Tea break #2
SESSION 3: Contributed talks
Monday A: Adaptive trial design
6th December
12:30-2pm AEDT

A biomarker-guided Bayesian response-adaptive phase II platform trial for metastatic melanoma: the PIP-Trial

Serigne Lo

BATS: A fast modulable package for the simulation of Bayesian adaptive designs

Dominique-Laurent Couturier

Making SMART decisions in prophylaxis and treatment studies
When applied to a SMART, response-adaptive randomisation algorithms that ignore dynamic treatment effects may erroneously favour suboptimal dynamic treatment regimens. Q-learning, a dynamic programming method that can be used to analyse SMART data, is one of the few algorithms that have been proposed for SMART response-adaptive randomisation [1, 2]. Q-learning uses stage-wise statistical models and backward induction to incorporate later-stage ‘payoffs’ (i.e., clinical outcomes) into early-stage ‘actions’ (i.e., treatments). We propose a Bayesian decision-theoretic Q-learning method to perform response-adaptive randomisation. This approach allows dynamic treatment regimens with distinct binary endpoints at each stage to be evaluated, addressing a current limitation of the standard Q-learning method [3].
Our simulation study, motivated by the COVID-19 trial, aims to explore whether the Bayesian decision-theoretic Q-learning method can expedite treatment optimisation and improve the outcomes of prophylaxis and treatment trial participants compared to simpler approaches.
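Since the backward-induction step is the crux of Q-learning, a minimal sketch may help. The following Python sketch uses wholly simulated data for a hypothetical two-stage SMART with binary outcomes; it illustrates only generic stage-wise estimation and backward induction (empirical cell means standing in for the stage-wise models), not the authors' Bayesian decision-theoretic extension.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulate a hypothetical two-stage SMART with binary outcomes.
a1 = rng.integers(0, 2, n)                           # stage-1 treatment
resp = rng.binomial(1, np.where(a1 == 1, 0.5, 0.3))  # interim response
a2 = rng.integers(0, 2, n)                           # stage-2 treatment
# final binary payoff depends on stage-1 treatment, response, and stage-2 treatment
y = rng.binomial(1, 0.2 + 0.15 * a1 + 0.2 * resp + 0.15 * a2 * resp)

# Stage 2: Q2(state, action) = mean payoff in each (response, a2) cell.
def q2(s, a):
    return y[(resp == s) & (a2 == a)].mean()

# Optimal stage-2 continuation value for each interim state.
v2 = {s: max(q2(s, 0), q2(s, 1)) for s in (0, 1)}

# Stage 1: back up the optimal continuation value (the 'payoff') into
# stage-1 'actions' via a pseudo-outcome, then pick the best action.
pseudo = np.array([v2[int(s)] for s in resp])
q1 = {a: pseudo[a1 == a].mean() for a in (0, 1)}
print("Q1 values:", q1, "-> optimal stage-1 treatment:", max(q1, key=q1.get))
```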

Robert Mahar

Causal discovery using an adaptive design
the intervention, or in the design of future trials. We propose for the first time a method to combine two powerful inferential concepts via network theory: response-adaptive randomisation within a clinical trial, and causal network production. We believe that this may allow pragmatic trials to clarify the mechanisms at work, either to explain the observed results within the explanatory paradigm, or to modify treatments during the course of the trial while still allowing reliable inference.

Lewis Campbell

Valid inference when using non-concurrent controls in platform trials
In this talk, we review methods to incorporate non-concurrent controls in treatment-control comparisons allowing for time trends. We focus mainly on frequentist approaches that model the time trend and Bayesian strategies that limit the borrowing level depending on the heterogeneity between concurrent and non-concurrent controls. We examine the impact of time trends on the operating characteristics of treatment effect estimators for each method under different time trend patterns. We outline under which conditions the methods lead to unbiased estimators and discuss the gain in power compared to trials using only concurrent controls.
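To make the frequentist time-trend-modelling approach concrete, here is a hedged sketch with simulated data (not from any trial discussed in the talk): arm B joins a platform trial in period 2, and a logistic regression with a period fixed effect allows B to borrow period-1 controls while the epoch term absorbs the time trend.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical two-period platform trial: arm B joins in period 2, so
# period-1 controls are non-concurrent for B. A time trend raises the
# control event rate from 0.30 to 0.45.
n = 500
rows = []
for per, arms, p_ctrl in [(1, ["C", "A"], 0.30), (2, ["C", "A", "B"], 0.45)]:
    for a in arms:
        p = p_ctrl - (0.10 if a != "C" else 0.0)   # both treatments cut risk
        for yi in rng.binomial(1, p, n):
            rows.append((per, a, yi))
period, arm, y = map(np.array, zip(*rows))

# Logistic regression with a period fixed effect ('adjustment' approach).
X = np.column_stack([np.ones(len(y)),
                     (arm == "A").astype(float),
                     (arm == "B").astype(float),
                     (period == "2").astype(float) if period.dtype.kind == "U"
                     else (period == 2).astype(float)])  # drop for naive pooling
fit = sm.GLM(y.astype(float), X, family=sm.families.Binomial()).fit()
print(fit.params[2])   # time-trend-adjusted log-odds ratio, B vs control
```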

Marta Bofill Roig

Panel discussion
Monday B: Issues in analysis of trials 1
6th December
12:30-2pm AEDT

Comparison of Analysis Methods for Addressing Stratification Errors
Background: Many trials use stratified randomisation with permuted blocks, where participants are randomised within strata defined by one or more baseline variables. While it is important to adjust for the stratification variables when estimating treatment effects, the appropriate method of adjustment is unclear when stratification errors occur and some participants are hence randomised in the incorrect stratum.
Methods: A simulation study was conducted to compare performance measures for different methods of analysis when stratification errors occur. Continuous outcomes were generated under a range of scenarios and analysed using linear regression models. Analysis methods included no adjustment, adjustment for the strata used during randomisation (the randomisation strata), adjustment for the strata after any errors are corrected (the actual strata) and adjustment for the strata after a subset of errors are identified and corrected (the observed strata).
Results: The unadjusted model performed poorly in all settings, producing standard errors that were too large. There was little difference between adjusting for the randomisation strata or the actual strata when stratification errors were rare. When errors were common, adjusting for the actual strata was more powerful. If only some errors were identified and corrected, adjusting for the observed strata led to bias in certain settings.
Conclusions: Adjusting for stratification variables is important even when stratification errors occur. We generally recommend adjusting for the actual strata in the primary analysis, however caution may be needed in specific settings. The method for dealing with stratification errors should be pre-specified in the statistical analysis plan.
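A stripped-down version of such a simulation can be sketched as follows. For brevity it uses simple rather than stratified permuted-block randomisation and compares only three of the adjustment strategies, so it illustrates the shape of the comparison rather than reproducing the authors' study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

def one_trial(n=200, p_error=0.2, effect=0.5):
    actual = rng.integers(0, 2, n)                    # true stratum
    err = rng.binomial(1, p_error, n).astype(bool)    # stratification errors
    rand_stratum = np.where(err, 1 - actual, actual)  # stratum used at randomisation
    treat = rng.integers(0, 2, n)        # simple randomisation, for brevity
    y = effect * treat + 1.0 * actual + rng.normal(0, 1, n)
    out = {}
    for name, stratum in [("unadjusted", None),
                          ("randomisation strata", rand_stratum),
                          ("actual strata", actual)]:
        cols = [np.ones(n), treat] + ([] if stratum is None else [stratum])
        fit = sm.OLS(y, np.column_stack(cols)).fit()
        out[name] = (fit.params[1], fit.bse[1])       # estimate, model SE
    return out

reps = [one_trial() for _ in range(500)]
for name in reps[0]:
    est = np.array([r[name][0] for r in reps])
    se = np.array([r[name][1] for r in reps])
    print(f"{name:>20}: bias={est.mean() - 0.5:+.3f}  "
          f"model SE={se.mean():.3f}  empirical SE={est.std():.3f}")
```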

Lisa Yelland

Between-centre differences in overall patient outcomes and in trial treatment effects in multicentre perioperative trials
Background: In multicentre trials, characteristics of randomised patients can differ between centres. This may lead to variation in the outcome or treatment effect on the outcome between centres. Not accounting for this heterogeneity in the analysis of multicentre trials may lead to biased estimates and standard errors.
Aims: Our aim was to examine whether there are differences between centres in the outcome, or in the treatment effect on the outcome, and whether this heterogeneity affects the overall treatment effect estimate and its precision. We used data from six large multicentre randomised controlled trials in anaesthesia, examining the primary trial-specific endpoint and hospital length of stay (LOS), which was recorded in all six trials.
Methods: Mixed-effects logistic (primary endpoint) and Weibull (hospital LOS) regressions were performed to estimate the between-centre and treatment effect heterogeneity. In addition, a non-inferential visual aid by Schou and Marschner [1] was used to explore treatment effect heterogeneity. The overall treatment effect estimated from the mixed-effects regression, which incorporates between-centre heterogeneity, was compared with that estimated from a fixed-effects model.
Results: There were between-centre differences in both outcomes. There was no treatment effect heterogeneity for the primary endpoint, but heterogeneity in the treatment effect between centres ranged from moderate to substantial for hospital LOS. In general, the treatment effects were similar when accounting for centre effects, albeit with wider confidence intervals for hospital LOS in the presence of heterogeneity.
Conclusions: In six multicentre trials in anaesthesia, analyses accounting for between-centre heterogeneity may decrease bias in treatment effect estimates and their standard errors.
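As a rough illustration of the modelling approach, the sketch below fits a mixed model with a random centre intercept and a random centre-specific treatment slope to simulated data. It uses log LOS with a normal error as a stand-in for the authors' Weibull model, since Weibull mixed models are not available in statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulate a hypothetical multicentre trial: centres differ in baseline
# log length of stay (intercepts) and in their treatment effects (slopes).
rows = []
for centre in range(20):
    u = rng.normal(0, 0.3)        # between-centre outcome difference
    b = rng.normal(-0.2, 0.1)     # centre-specific treatment effect
    for _ in range(50):
        t = int(rng.integers(0, 2))
        rows.append((centre, t, 1.5 + u + b * t + rng.normal(0, 0.5)))
df = pd.DataFrame(rows, columns=["centre", "treat", "log_los"])

# Random centre intercept plus random treatment slope: the fixed 'treat'
# coefficient is the overall effect; the random-effect variances quantify
# between-centre and treatment-effect heterogeneity respectively.
fit = smf.mixedlm("log_los ~ treat", df, groups="centre",
                  re_formula="~treat").fit(reml=True)
print(fit.summary())
```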

Vanessa Pac Soo

Guidelines to reduce, handle and report missing data in palliative care trials co-produced using a multi-stakeholder nominal group technique
Aim: To develop guidelines on how best to reduce, handle and report missing data in palliative care clinical trials.
Method: Modified nominal group technique with patient/public research partners, clinicians, trialists, methodologists and statisticians. This involved five steps: (i) evidence summary, (ii) silent generation of ideas, (iii) contributing and developing ideas by structured groups, (iv) voting, (v) guideline writing.
Results: The top five of 28 main recommendations were: (i) train all research staff on missing data, (ii) prepare for missing data at the trial design stage, (iii) address how missing data will be handled in the statistical analysis plan, (iv) collect the reasons for missing data to inform strategies to reduce and handle missing data and (v) report descriptive statistics comparing the baseline characteristics of those with missing and observed data to enable an assessment of the risk of bias. Preparing for and understanding the reasons for missing data were greater priorities for stakeholders than how to deal with missing data once they had occurred.
Conclusion: The first co-produced comprehensive guidelines on how to address missing data recommend that internationally, trialists designing and conducting studies in palliative care should prioritise preparing for and understanding the reasons for missing data, so missing data are prevented in the first place. Guideline implementation will require the endorsement of research funders and research journals.

Jamilla Hussain

Handling missing disease information due to death in trial endpoints that need two visits to diagnose
In many trials, outcomes can be assessed at a single time point. But some endpoints require two consecutive visits to prospectively define their occurrence, for example consecutive positive test results for identifying persistence or chronicity of a condition. The occurrence of such endpoints is subject to the competing risk of death and disruption of visit follow-up. In addition, the interval censored nature of the outcome must be considered in the analysis.
In practice, the most common approaches to address these challenges are to censor participants at death or to exclude participants who died [1]. Another approach is to use a full-likelihood based illness-death model (IDM) [2,3]. This model considers both the multi-state nature of the data (health, disease, and death) and the interval-censored nature of the disease onset. However, no studies have investigated the performance of these methods in diseases that require two visits to diagnose.
Using simulated data and a real trial, we investigate the bias in risk factor effect estimation of the following censoring approaches for both the IDM and the Cox proportional hazards model: i) Censoring at the last visit, ii) Censoring at the second-last visit, iii) Censoring at the last visit if the last test is negative, and at the second-last visit if the last test is positive, and iv) Censoring at death. We use the 19,114 participants from the ASPirin in Reducing Events in the Elderly trial to evaluate the association of the risk factor diabetes with onset of persistent physical disability.
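The four censoring rules translate directly into code. Below is a minimal sketch with hypothetical visit data, where each visit is a (time, test-positive) pair; under rule iv, deaths would simply be censored at the death time.

```python
# The endpoint occurs at the second of two consecutive positive tests.
def classify(visits):
    for (t0, pos0), (t1, pos1) in zip(visits, visits[1:]):
        if pos0 and pos1:
            return {"event_time": t1}
    last_t, last_pos = visits[-1]
    second_last_t = visits[-2][0] if len(visits) > 1 else 0.0
    return {
        "event_time": None,
        "i_censor_last_visit": last_t,
        "ii_censor_second_last_visit": second_last_t,
        # iii: last visit if final test negative, second-last if positive
        "iii_censor_mixed": second_last_t if last_pos else last_t,
    }

print(classify([(1, False), (2, True), (3, True)]))  # event at t = 3
print(classify([(1, False), (2, True)]))             # censored; rules differ
```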

Thao Le

Handling missing data and drop out in hospice/palliative care trials through the estimand framework
Missing data are common in hospice/palliative care trials due to high rates of drop-out in the population of interest. Missing data can reduce statistical power and introduce bias.
Objectives
Recently, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) released updated guidance on statistical principles for clinical trials, introducing the estimand framework to align trial objectives, trial conduct, statistical analysis and interpretation of results. Our objective is to present how the estimand framework can be used to guide the handling of missing data in palliative care trials.
Methods
We outline the estimand framework by highlighting the five elements of an estimand (treatment, population, variable, summary measure and intercurrent event handling), listing common intercurrent events in palliative care trials and presenting strategies to deal with intercurrent events based on the five strategies for handling them outlined in the ICH guidance.
Results
We list intercurrent events anticipated in a palliative care trial, and discuss and justify the analytic strategies that could be followed for each intercurrent event. We provide an example using a palliative care trial comparing two opioids for pain relief in participants with cancer pain.
Conclusion
When planning a trial, the estimand should be explicitly stated, including how intercurrent events will be handled in the analysis. This should be informed by the scientific objectives of the trial. The estimand guides the handling of missing data during the conduct and analysis of the trial. Defining an estimand is not a statistical activity, but a multi-disciplinary process involving all stakeholders.

Anneke Grobler

Panel discussion
Tuesday 7th December
SESSION 1: Trial Analysis

Using Joint Models to Disentangle the Treatment Effect in an Alzheimer Clinical Trial

Dimitris Rizopoulos, Erasmus University Medical Center, NLD

Handling unplanned disruptions in randomised trials using missing data methods: a four-step strategy

Suzie Cro, Imperial College London, UK

Missing data in randomised trials: how to avoid multiple imputation

Ian White, University College London, UK
Tea break #1
SESSION 2: Cluster Randomised Trials

Community Randomized Trials with Rare Events: Negative Binomial Regression vs. Traditional Marginal Modeling Approaches

Philip Westgate, University of Kentucky, USA

Updating the Ottawa Statement: Identifying new ethical issues in cluster randomized trials

Charles Weijer, Western University, CAN

Power analysis for cluster randomized trials with multiple continuous co-primary endpoints

Fan Li, Yale University, USA
Tea break #2
SESSION 3: Contributed talks
Tuesday A: Innovations in clinical trial design
7th December
12:30-2pm AEDT

How are progression decisions made following external randomised pilot trials? A qualitative interview study and framework analysis
Background: External randomised pilot trials help researchers decide whether, and how, to do a future definitive randomised trial. Progression criteria are often prespecified to inform the interpretation of pilot trial findings and subsequent progression decision-making. We aimed to explore and understand the perspectives and experiences of key stakeholders when making progression decisions following external pilot trials.
Methods: Thirty-five remote semi-structured interviews with external randomised pilot trial team members including Chief Investigators, Trial Managers, Statisticians and Patient and Public Involvement representatives. Questions focussed on experiences and perceptions of pilot trial progression decisions and whether, and how, progression criteria informed this decision. Data were analysed using the Framework Method approach to thematic analysis.
Results: Interviews were conducted between December 2020 and July 2021. Six descriptive themes were developed to capture the experiences and perspectives of participants (Figure 1). These themes were underpinned by an overarching interpretative theme, ‘a one size approach to progression does not fit all’, describing the highly nuanced and complex decision-making process that occurs following external randomised pilot trials. Progression criteria are rarely the only consideration informing the decision to progress to future research.
Conclusions: One size does not fit all when it comes to progression criteria and pilot trial progression. Progression criteria are only one of many considerations researchers weigh when deciding whether a pilot trial is feasible. External pilot trial progression is not guaranteed even when a pilot trial is considered feasible (based on progression criteria and/or other considerations), indicating inefficiency and potential research waste.

Katie Mellor, Oxford Clinical Trials Research Unit / Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, England

Changing the Primary Endpoint of a RCT in Response to a Low Event Rate

Xanthi Coskinas, The National Health and Medical Research Council Clinical Trials Centre, University of Sydney, Sydney, Australia

EMR Embedded Randomised Control Trials Design in the Australian Paediatric Hospital Setting
Background: Embedded trials are randomised trials embedded into electronic medical record (EMR) systems. These trials utilise patient data already existing within the EMR. They have high translational value as information routinely collected within the EMR reflects real-world, clinically relevant evidence [1]. They are inexpensive, they require no extra data collection, and consent can be gained by the clinical team. Embedded trials using Epic EMR are being piloted by Murdoch Children’s Research Institute (MCRI) at The Royal Children’s Hospital (RCH).
Lessons learned: The EMR can be used for patient identification, randomisation, treatment allocation, and outcome data extraction. Comparative effectiveness trials that don’t require complex consent discussions are ideal for embedding. An important design aspect is having broad but clearly defined eligibility criteria [2]; this enables the EMR to accurately identify potential trial patients. The clinical flow of the trial population needs to ensure adequate numbers are expected to be consented and randomised. Currently, only simple randomisation is available within Epic EMR, so other software linked to the EMR is being utilised to perform block randomisation.
Conclusion: The feasibility of embedding trials is dependent on the trial design and the constraints of the EMR and its users. Embedded trials have minimal ongoing costs but require substantial upfront work and expertise to develop in the EMR. Only individually randomised trials are currently being explored at RCH. There is scope in the future to extend this to cluster randomised designs. Embedded trials with limited research contact with patients offer an exciting cost-effective and COVID safe trial design.
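For readers unfamiliar with the distinction drawn here, the sketch below shows what permuted-block allocation does; it is a generic illustration, not the RCH implementation or the Epic-linked software.

```python
import random

def permuted_blocks(n, block_size=4, arms=("A", "B"), seed=42):
    """Generate a 1:1 allocation sequence using permuted blocks."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)        # each block is balanced; order is random
        sequence.extend(block)
    return sequence[:n]

print(permuted_blocks(10))   # balanced within every block of four
```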

Alannah Rudkin, Centre for Health Analytics, Melbourne Children’s, Australia Melbourne Children’s Trials Centre, Murdoch Children’s Research Institute, Australia

N-of-1 Trials: Novel Personalised Trial Designs
Background: N-of-1 trials are individualised randomised controlled trials using patients as their own control. They represent a powerful trial methodology because they identify how an individual patient responds to a treatment. The “gold standard” randomised controlled trial (RCT) focuses on the “average” patient. Pooled N-of-1 trials, where data from a series of N-of-1 trials are statistically aggregated, can provide population-based estimates of treatment effectiveness akin to RCTs but with fewer participants. This reduces the time, cost and recruitment issues associated with RCTs.
Objectives: The key opportunities (efficiency, precision, patient-centredness, cost-effectiveness) and challenges (limited awareness of the method, statistical analysis methods) associated with N-of-1 trials will be presented, followed by an introduction to the ‘International Collaborative Network for N-of-1 Trials and Single-Case Designs (ICN)’, a global network of ~500 members using these methods in 33 countries. The ICN aims to further the science of N-of-1 research and encourage discussion and collaboration on a global scale.
Conclusion(s): N-of-1 trials play an important role in the movement towards personalised medicine, digital health, shared clinical decision-making and patient-centred healthcare. Rapid advances in digital technology can drive adoption of this powerful, personalised treatment methodology. Digital N-of-1 trials harness the power of individual patient data, facilitating collection of ‘real world data’ that can be translated into ‘real world evidence’, which healthcare providers can use to make informed decisions about patient care. Limited awareness about the possibility of using N-of-1 trials to obtain individual and population estimates of treatment effectiveness is a barrier to wider adoption.

Suzanne McDonald, UQ Centre for Clinical Research, The University of Queensland, Australia

Sham Control Methods in Physical, Psychological and Self-Management Intervention Trials for Pain: A Systematic Review and Meta-Analysis
Blinding is challenging in randomised controlled trials (RCTs) of physical, psychological, and self-management therapies (PPSTs) for pain. To develop standards for the design, implementation, and reporting of sham controls, a systematic overview of current sham interventions was required.
Twelve databases were searched for placebo or sham controlled RCTs of PPSTs in clinical pain populations. Two reviewers extracted general trial features, sham control methods, and outcome data (protocol: CRD42020206590). The similarity of sham controls to experimental treatments was rated across 25 features. Meta-regression analyses assessed putative links between employed sham control methods, observed effect sizes in pain-related outcomes, attrition, and blinding success.
The review included 177 control interventions, published between 2008 and 2020. Most trials studied people with chronic pain, and more than half were manual therapy trials. Sham interventions ranged from those clearly modelled on the active treatment to largely dissimilar shams. Similarity between sham and active interventions was more frequent for certain aspects (e.g., treatment duration and frequency) than others (e.g., physical treatment procedures and patient sensory experiences). Resemblance between sham controls and active interventions predicted variability in pain-related outcomes, attrition, and blinding effectiveness. Group differences in the number of treatment sessions and in treatment environments were particularly influential.
A comprehensive picture of prevalent blinding methods is provided. The results support the supposed link between blinding methods and effect sizes. Challenges to effective blinding are complex, and often difficult to discern from trial reports. Nonetheless, these insights have the potential to change trial design, conduct, and reporting and will inform guideline development.

David Hohenschurz-Schmidt, Pain Research, Dept. Surgery & Cancer, Faculty of Medicine, Imperial College, Chelsea & Westminster Hospital campus, London

Panel discussion
Tuesday B: Issues in analysis of trials 2
7th December
12:30-2pm AEDT

Analysis of adaptive platform trials using a network approach
Treatment comparisons from adaptive platform trials may be subject to confounding if there are underlying time trends in the population risk level. There are two common approaches to dealing with this confounding, which we refer to as adjustment and stratification. Adjustment, which was used for example in the REMAP-CAP trial, incorporates a time epoch adjustment into a statistical model that permits comparisons between treatment groups that were not necessarily randomized during the same time periods. Stratification, which was used for example in the STAMPEDE trial, uses only comparisons between treatment groups randomized during the same time period, and does not permit comparisons of non-concurrent randomizations. We present a novel method that embeds these two approaches into a common analysis framework using the principles of network meta-analysis, with the purpose of exploring sensitivity to the use of non-concurrent comparisons. The cohorts of randomizations between adjacent adaptation timepoints are treated like separate fixed design randomized trials. These fixed design cohorts produce a network of direct and indirect treatment comparisons which may be aggregated using network meta-analysis principles. This allows a transparent decomposition of the overall information from a platform trial into direct randomized evidence and indirect non-randomized evidence. Restricting the analysis to direct comparisons is equivalent to the stratified analysis approach while use of both direct and indirect comparisons is equivalent to the adjusted approach. Our network approach provides a natural framework for comparing the two. Simulations will be presented as well as a re-analysis of data from the STAMPEDE trial.
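A toy numerical sketch of this decomposition is given below, using hypothetical summary data. It combines the per-cohort estimates by simple inverse-variance weighting; a full network meta-analysis would additionally account for the correlation between paths that share an arm.

```python
import numpy as np

# Hypothetical summary data. Each cohort between adaptation timepoints is
# treated as a separate fixed-design trial; arm B is absent in cohort 1,
# so B vs C there is available only indirectly (B vs A, then A vs C).
cohorts = [
    {"C": (30, 100), "A": (20, 100)},
    {"C": (45, 100), "A": (32, 100), "B": (28, 100)},
]

def log_or(e1, n1, e0, n0):
    """Log odds ratio and its (Woolf) variance from 2x2 counts."""
    a, b, c, d = e1, n1 - e1, e0, n0 - e0
    return np.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

# Direct B vs C comparisons use concurrent randomisations only
# (the 'stratification' approach).
direct = [log_or(*coh["B"], *coh["C"]) for coh in cohorts if "B" in coh]
est, var = map(np.array, zip(*direct))
w = 1 / var
print("stratified (direct only):", (w * est).sum() / w.sum())

# Indirect B vs C path: (B vs A, cohort 2) + (A vs C, cohort 1);
# variances add along the path. Adding it in mimics the 'adjusted' analysis.
ba = log_or(*cohorts[1]["B"], *cohorts[1]["A"])
ac = log_or(*cohorts[0]["A"], *cohorts[0]["C"])
est = np.append(est, ba[0] + ac[0])
w = np.append(w, 1 / (ba[1] + ac[1]))
print("direct + indirect:", (w * est).sum() / w.sum())
```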

Ian Marschner, NHMRC Clinical Trials Centre, University of Sydney, Australia

Conditional Logistic Modelling for Adaptive Trials
There is increasing interest in applying Bayesian approaches to clinical trials. I will introduce a conditional logistic model and demonstrate its utility for the design of Bayesian adaptive trials when the time to the endpoint is long relative to recruitment. The model is applicable to adaptive trial designs where interim analyses are conducted for a binary endpoint that has not yet been observed in individuals with incomplete follow-up, for example, when the endpoint is based on disease status at six months and an interim analysis is conducted when some participants are disease-free but have not yet completed follow-up. We detail how these data can be analysed and how to assess adaptive trial decision rules. Typically, this issue has been addressed by either excluding those with incomplete follow-up or by imputing their future observations. These options either discard available information or rely on predictive distributions, so are potentially sub-optimal. The conditional logistic model handles such data by modelling posterior distributions for each follow-up time point and thus incorporates all available information. I will present the results from a comparison of these different approaches using simulation.
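One way to see how conditioning on follow-up time points uses all available information is the hedged sketch below. It is not necessarily the author's exact model: it decomposes the six-month event probability into conditional pieces, places a Beta posterior on each, and lets participants with only three-month data contribute to the first piece.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical interim data with a 3-month and a 6-month assessment.
# Decomposition: P(event by 6m) = P(event by 3m)
#                               + P(no event by 3m) * P(event in (3m, 6m] | none by 3m)
n3, e3 = 200, 40   # assessed at 3 months; events by 3 months
n6, e6 = 120, 18   # event-free at 3 months AND assessed at 6 months; later events

# Beta(1, 1) priors on each conditional probability; participants with only
# 3-month data still inform the first piece, so no information is discarded.
p3 = rng.beta(1 + e3, 1 + n3 - e3, 10_000)
p6 = rng.beta(1 + e6, 1 + n6 - e6, 10_000)
p_total = p3 + (1 - p3) * p6   # posterior draws for P(event by 6 months)

print("posterior mean:", round(float(p_total.mean()), 3))
print("95% credible interval:", np.percentile(p_total, [2.5, 97.5]).round(3))
```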

Michael Dymock, Telethon Kids Institute, Nedlands, Australia

The Twice-Generalized Odds Ratio: A method for performing dose-response and prognostic variable analysis with complex, multifaceted outcome data
There is a growing acknowledgement in medical research that patient outcomes are often complex and multifaceted, and statistical methods are needed that can handle this complexity. One method that has received recent attention is the Win Ratio (Pocock et al. 2012), which uses arbitrary statements of outcome preference to identify improved outcomes in two-group data.
While the Win Ratio enables two-group analyses of complex patient outcomes, it is unable to consider dose-response relationships with more than two groups, nor can it be used to explore the relationship of common prognostic variables (e.g. age, injury severity) with patient outcomes.
We propose the Twice-Generalized Odds Ratio statistic as an extension of the Win Ratio approach to well-ordered explanatory variables, thus enabling multiple-group dose-response analysis and the investigation of prognostic variables in the context of complex and multifaceted patient outcomes. This statistic is a further generalisation of Agresti’s Generalized Odds Ratio statistic (Agresti 1980).
We illustrate the Twice-Generalized Odds Ratio using data from the EXTEND-IA TNK hyper-acute stroke trial. We also use computational experiments to compare this method to the original Win Ratio in the two-group case, showing the two methods have extremely strong levels of agreement. We demonstrate that the Twice-Generalized Odds Ratio is an order of magnitude faster to calculate, providing a valuable practical contribution for trial simulation.
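For intuition, the sketch below shows the concordant/discordant-pair calculation underlying Agresti's (1980) generalized odds ratio, extended naively over several ordered groups with hypothetical data. It does not reproduce the authors' Twice-Generalized statistic or its fast computation.

```python
from itertools import combinations

def generalized_odds_ratio(groups):
    """Concordant/discordant-pair ratio over ordered groups.

    groups: outcome lists ordered from lowest to highest dose. A pair is
    concordant when the higher-dose observation has the higher (better)
    outcome, discordant when lower; ties are ignored (Agresti 1980).
    """
    conc = disc = 0
    for (_, lower), (_, higher) in combinations(enumerate(groups), 2):
        for x in lower:
            for y in higher:
                conc += y > x
                disc += y < x
    return conc / disc

# Hypothetical ordinal outcomes (higher = better) for three dose groups.
print(generalized_odds_ratio([[1, 2, 2, 3], [2, 3, 3, 4], [3, 3, 4, 5]]))
# With exactly two groups this reduces to a Win-Ratio-style wins/losses ratio.
```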

Hannah Johns, Melbourne Medical School, University of Melbourne, Australia

Use of information criteria for selecting a correlation structure for longitudinal cluster randomised trials
When designing and analysing longitudinal cluster randomised trials such as the stepped wedge, the similarity of outcomes from the same cluster must be accounted for through the choice of the within-cluster correlation structure. Several choices for this structure are commonly considered for application within the linear mixed model paradigm. The first assumes a constant correlation for all pairs of outcomes from the same cluster (the exchangeable/Hussey and Hughes model); the second assumes that correlations of outcomes measured in the same period are higher than outcomes measured in different periods (the block exchangeable model); and the third is the discrete-time decay model, which allows the correlation between pairs of outcomes to decay over time. Currently, there is limited guidance on how to select the most appropriate structure. We present the results of a simulation study to determine the effectiveness of the Akaike and Bayesian Information Criteria (AIC and BIC) for selecting the appropriate model. Both AIC and BIC perform well at correctly identifying the exchangeable model. However, depending on the values of the model parameters, they can require much more data to reliably identify the more complex models. In practice, we recommend that researchers conduct supplementary analyses under alternative correlation structures to gauge sensitivity to the original choice, and that AIC and BIC values be reported along with correlation parameter estimates.
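A hedged sketch of the model-comparison workflow appears below, using simulated stepped-wedge-style data. The models are fitted by maximum likelihood (rather than REML) so the information criteria are comparable, and AIC and BIC are computed by hand from the log-likelihood.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Simulate stepped-wedge-style data under a block-exchangeable truth:
# a cluster random effect plus a cluster-period random effect.
clusters, periods, m = 12, 4, 25
rows = []
for c in range(clusters):
    u_c = rng.normal(0, 0.4)                    # cluster effect
    for p in range(periods):
        u_cp = rng.normal(0, 0.25)              # cluster-period effect
        treat = int(p >= c % periods)           # crude rollout
        for _ in range(m):
            rows.append((c, p, treat,
                         u_c + u_cp + 0.3 * treat + 0.1 * p + rng.normal(0, 1)))
df = pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])

def ml_fit(vc=None):
    # ML (not REML) so log-likelihoods and information criteria are comparable
    return smf.mixedlm("y ~ C(period) + treat", df, groups="cluster",
                       vc_formula=vc).fit(reml=False)

def aic_bic(res, k):
    return -2 * res.llf + 2 * k, -2 * res.llf + np.log(len(df)) * k

exch = ml_fit()                             # exchangeable: cluster RE only
block = ml_fit({"cp": "0 + C(period)"})     # adds cluster-period RE
for name, res, k in [("exchangeable", exch, periods + 3),
                     ("block exchangeable", block, periods + 4)]:
    print(name, "AIC, BIC =", [round(v, 1) for v in aic_bic(res, k)])
```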

Rhys Bowden, School of Public Health and Preventive Medicine, Monash University, Australia

Exact Confidence Limits Compatible with the Result of a Group Sequential Trial
Sequential (or adaptive) designs are common in acceptance sampling and pharmaceutical trials. This is because they can achieve the same type 1 and type 2 error rates with fewer subjects on average than fixed sample trials. After the trial is completed and the test result decided, we need full inference on the main parameter Δ. In this paper, we are interested in exact one-sided lower and upper limits.
Unlike standard trials, for sequential trials there need not be an explicit test statistic, nor even a p-value. This motivates the more general approach of defining an ordering on the sample space and using the construction of Buehler (1957). This is guaranteed to produce exact limits; however, there is no guarantee that the limits will agree with the test. For instance, we might reject Δ≤Δ0 at level α but obtain a lower 1−α limit less than Δ0. This paper gives a very simple condition to ensure that this unfortunate feature does not occur. When the condition fails, the ordering is easily modified to ensure compliance.
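For orientation, one standard form of the Buehler lower limit is sketched below; this is background on the construction only, and the paper's compatibility condition is not reproduced. Given an ordering ⪰ on the sample space under which larger observations favour larger Δ, the exact 1−α lower limit is

```latex
% Hedged background sketch of the Buehler (1957) construction, not the
% paper's result: the exact lower limit induced by an ordering \succeq.
\[
  \underline{\Delta}_{1-\alpha}(t_{\mathrm{obs}})
    \;=\; \inf\bigl\{ \Delta : \Pr_{\Delta}\!\bigl(T \succeq t_{\mathrm{obs}}\bigr) > \alpha \bigr\}.
\]
```

Compatibility with the test then requires that this limit not fall below Δ0 whenever the level-α test rejects Δ≤Δ0.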

Chris Lloyd, Melbourne Business School, University of Melbourne
