Conference workshop program – Monday 15 September

>>> DOWNLOAD A PRINTABLE VERSION (WITH PRICING DETAILS)

View the Tuesday conference workshop program.

The following categories will help you select sessions best suited to your interests: Foundation – Intermediate – Advanced

8–9am REGISTRATION
9am–12:30pm WORKSHOP PROGRAM

Full day workshop:
An introduction to theory-based randomised controlled trials

Presented by Andi Fugard (KEYNOTE) 

FOUNDATION / INTERMEDIATE 

> Details

Full day workshop:
Mixed methods mastery: Practical approaches for evaluators

Presented by Brad Astbury, Andrew Hawkins

FOUNDATION / INTERMEDIATE 

> Details

Full day workshop:
Evaluating place-based and systems-wide approaches

Presented by Jess Dart

INTERMEDIATE 

> Details

Full day workshop:
Navigating AI in evaluation, from basics to advanced applications

Presented by Gerard Atkinson, Tuli Keidar

FOUNDATION / INTERMEDIATE

> Details

Full day workshop:
Introduction to evaluation: Core concepts and methods

Presented by Charlie Tulloch, Arun Jyothi Callapilli

FOUNDATION 

> Details

 

Full day workshop:
Getting below the surface: Qualitative methods in the evaluation context

Presented by Joan Young

FOUNDATION / INTERMEDIATE

> Details

 
12:30–1:30pm LUNCH
1:30–5pm WORKSHOP PROGRAM

Fugard workshop continued

Astbury, Hawkins workshop continued

Dart workshop continued

Atkinson, Keidar workshop continued

Tulloch, Jyothi Callapilli workshop continued

Young workshop continued


Workshop details


Full day workshop 

An introduction to theory-based randomised controlled trials

presented by Andi Fugard  |  FOUNDATION / INTERMEDIATE

Workshop description

Randomised controlled trials (RCTs) are often referred to as the ‘gold standard’ of evaluation and placed at the top of evidence hierarchies. A distinguishing feature of RCTs is that participants (or areas or other ‘units’) are randomly assigned to the programme under evaluation and one or more other conditions such as usual practice. RCTs can make it easier than other approaches to infer that a social programme was causally responsible for outcomes; however, random assignment makes them more challenging to implement.

This workshop will introduce RCTs, highlighting the similarities and differences with other approaches that they build on, and key issues to consider when designing and running them. We will explore these issues together, with the overall aim of deepening participants’ understanding of RCTs and of when they can and cannot be used.

Workshop content

  1. How do RCTs relate to other approaches and why are they so revered and often misunderstood?
  2. The ethics of running RCTs, including unique issues and issues in common with other approaches
  3. The ‘fundamental problem of causal inference’ and how randomisation helps
  4. Key steps in designing and running RCTs, from idea and preregistration to final analysis and report
  5. Common methods of randomisation and the challenges they present, including individual and cluster randomisation
  6. Key issues in choosing a sample size
  7. The role of mixed methods implementation and process evaluation in RCTs
  8. Best practice in analysing data from RCTs and reporting analyses (including common mistakes)

By the end of the workshop, participants will be able to:

  1. Critically appraise the ethics of RCT designs
  2. Understand what randomising people to intervention conditions does and does not achieve and how RCTs relate to neighbouring approaches such as quasi-experiments
  3. Understand and explain the key steps in conducting an RCT, from initial idea to final report
  4. Understand the central importance of theories of change and in what sense an RCT should be theory-based
  5. Understand the role of implementation and process evaluation
  6. Make a robust case against RCTs when necessary, particularly when attempting to run one would be unethical or unfeasible
  7. Critically appraise RCT designs and reports

This workshop aligns with Domains 1, 2 and 4 of the AES Evaluators’ Professional Learning Competency Framework.

Who should attend?

Foundational – Foundational sessions assume no previous knowledge from the audience – they are beginner-friendly. The topic is presented at an introductory level using accessible language that can be understood by non-expert audiences.

About the facilitator

Andi Fugard, keynote speaker at aes25, is a Senior Director in Verian UK’s Evaluation Practice. They have experience designing, project managing, and analysing data from large-scale randomised controlled trials and quasi-experiments across a range of policy areas including mental health, education, and housing. Before joining Verian, Andi co-directed the Evaluation team at the UK’s National Centre for Social Research and prior to that they were an academic, teaching quantitative research methods at University College London and Birkbeck, University of London. Andi is also a member of the UK Evaluation and Trial Advice Panel, which provides advice to government on evaluation methodology.

> back to overview


FULL DAY WORKSHOP  

Mixed methods mastery: Practical approaches for evaluators 

presented by Brad Astbury, Andrew Hawkins   |  FOUNDATION / INTERMEDIATE     

Workshop description

This workshop is designed for evaluators and commissioners of evaluation seeking practical guidance on integrating qualitative and quantitative methods in evaluation. Mixed methods evaluation is increasingly used to enhance the depth, breadth, and validity of findings, particularly in complex policy and program contexts. Participants will gain insight into different mixed methods designs, strategies for integrating data, and practical considerations when planning and conducting mixed methods evaluations.

The workshop will also cover real-world challenges and solutions, using case studies and creative interactive activities to illustrate key concepts.

This applied workshop will provide participants with:

  • an overview of mixed methods evaluation and when to use it
  • common mixed methods designs and integration strategies
  • techniques for collecting and analysing qualitative and quantitative data in tandem
  • strategies for managing and presenting mixed methods findings
  • space for creative reflection on the extent of ‘mixing’ within mixed methods
  • practical case studies demonstrating mixed methods in action

Participants will have the opportunity to engage in discussions, apply concepts through exercises, and reflect on their own evaluation practice. Cross-disciplinary creativity will be encouraged as participants reflect on the different ways by which methods may be mixed, and what that means for an evaluation.

By the end of this workshop, participants will:

  • understand the value of mixed methods evaluation and how it enhances evidence-based decision-making
  • be familiar with key mixed methods designs and how to select an appropriate approach
  • learn practical techniques for data collection, analysis, and integration
  • develop strategies for overcoming common challenges in mixed methods evaluation
  • be enthused to explore new ways to mix data

This workshop is suited to evaluators and commissioners of evaluation who are:

  • new to mixed methods evaluation and want to build foundational skills
  • experienced in either qualitative or quantitative methods and looking to integrate both approaches more effectively
  • seeking practical tools and frameworks for applying mixed methods evaluation in real-world settings

This workshop aligns with Domains 1, 2, 3 and 4 of the AES Evaluators’ Professional Learning Competency Framework.

About the facilitators

Brad Astbury has more than 20 years’ experience in evaluation, particularly in evaluation theory, mixed methods and impact evaluation. He was part of the AES Best Study awards team in 2007. His publications include ‘Program Theory’ in the Research Handbook on Program Evaluation, and the co-authored ‘Evaluating the implementation of a digital coordination centre in an Australian hospital setting: A mixed method study protocol’ in BMJ Health & Care Informatics.

Andrew Hawkins has worked at ARTD for almost 18 years, leading hundreds of evaluation projects across the public policy spectrum. Andrew's technical expertise spans experimental, quasi-experimental, realist, and systems-based evaluation approaches and methods. With a deep respect for ethics, the philosophy of science, and both quantitative and qualitative data, he maintains a pragmatic focus on how evidence can inform real-world decisions. His interdisciplinary background in public administration, psychology, philosophy, statistics, and administrative law equips him to help clients navigate uncertainty and complexity in diverse policy environments. This experience led him to develop Propositional Evaluation as a pragmatic, cost-effective approach to evaluation focused on preventing failure rather than measuring it. It sets out a cooperative form of inquiry to develop logical propositions for action and a structured approach for adaptive management in complex systems: a theory for evaluation that reflects how many practising evaluators work and what their clients need, centred on developing plans that make sense and then adapting them to emerging conditions. Brad and Andrew have co-delivered many professional development workshops together, including at AES conferences.

> back to overview


FULL DAY WORKSHOP  

Evaluating place-based and systems-wide approaches

presented by Jess Dart   |  INTERMEDIATE    

Workshop description

The world in which ‘program evaluation’ was born and crafted has shifted. To address ‘wicked’ challenges such as entrenched place-based disadvantage or climate change, no one person, organisation, sector, or discipline can hope to achieve lasting change. There is a call to move beyond traditional programmatic and sectoral approaches. There is increasing recognition that we need to place community and people with lived experience at the centre of this work, and to address the way local efforts are connected to one another and to the wider systems that create the conditions for these local efforts to be successful. ‘Systems-wide’ and ‘place-based’ initiatives are gaining considerable philanthropic and government attention and funding.

Unsurprisingly, these non-programmatic approaches do not lend themselves to being evaluated using traditional program evaluation.

The purpose of this full day workshop is to learn about place-based and systems-wide approaches, and practical frameworks and tools to evaluate them.

To kick this workshop off, we spend time getting clear on what we mean by systems-wide and place-based approaches and then invite participants to explore the evaluation challenges for this type of work. Drawing on both international literature and practice, we explore practical and tested frameworks and tools that work in these settings. We will look at a typical arc of evaluation across a 9-year initiative, from developmental through to shared results. We will explore the messy middle of systems change and how to do ‘patch evaluation’ and ‘systems shift monitoring’. We share our favourite tools for collective learning.

The learning outcomes are:

  • identify the key characteristics of systems-wide and place-based approaches, distinguishing them from traditional programmatic interventions
  • understand the unique challenges presented when assessing these approaches
  • select relevant tools for evaluating these approaches

This workshop is pitched at the intermediate level, but all are welcome. This workshop aligns with Domains 3 and 7 of the AES Evaluators’ Professional Learning Competency Framework.

About the facilitator

Jess Dart, recipient of the 2018 Award for Outstanding Contribution to Evaluation, is a Fellow of the AES and a recognised thought leader and ‘evaluation entrepreneur’. With a PhD in evaluation, she has a wide range of interests in evaluation, with a speciality in systems-wide approaches. She is a highly experienced evaluator and workshop facilitator with over 25 years of experience. She has presented over ten pre-conference workshops and over 30 conference sessions, including keynote addresses, and received extremely positive feedback. She is trained in adult learning techniques and is an evaluation educator. She is the author of the ‘Evaluation Framework for place-based approaches’, launched by the Federal Minister for Families and Social Services in 2020.

> back to overview


FULL DAY WORKSHOP  

Navigating AI in evaluation, from basics to advanced applications

presented by Gerard Atkinson, Tuli Keidar   |  FOUNDATION / INTERMEDIATE 

Workshop description

In the ever-evolving landscape of policy and program evaluation, this workshop aims to equip evaluation professionals with a comprehensive understanding of Artificial Intelligence (AI) and its strategic integration into the evaluation process. The workshop comprises five distinct subsessions, each addressing crucial aspects of AI in evaluation.

The purpose of the workshop is to demystify AI through hands-on learning, offering participants: 

  • a foundational knowledge base in AI models and approaches
  • an appreciation of ethical considerations in applying AI
  • insights into the range of AI tools available to evaluators
  • practical experience in applying AI tools, including prompt engineering for evaluation contexts 
  • resources to enable effective evaluation that leverages AI technologies

The instructional methods will include a blend of presentations with recap quizzes, case discussions to stimulate critical thinking, and hands-on practical exercises applying OpenAI's ChatGPT and Anthropic's Claude to a simulated evaluation. This diverse approach ensures an engaging learning experience that caters to the different learning styles of participants.

The target group for this workshop is intermediate-level professionals in the field of policy and program evaluation. While no specific prerequisites are mandated, participants with a basic understanding of evaluation concepts will benefit most from the workshop. No prior experience of AI is required, though participants will need access to a computer, tablet or phone to complete exercises. 

This workshop aligns with Domains 4 and 5 of the AES Evaluators’ Professional Learning Competency Framework. It caters to professionals seeking to enhance their evaluation practices by incorporating cutting-edge AI techniques.

About the facilitators

Gerard Atkinson is Managing Director of Iris Ethics. Prior to taking on this role he was a Director and Chair of the Learning and Development committee at ARTD Consultants. He has worked with big data and AI approaches for over 20 years, originally as a physicist then as a strategy consultant and evaluator. He has an MBA in Business Analytics focusing on the applications of machine learning to operational data. Gerard has previously presented at AES conferences on big data (2018) and on experimental tests of AI applications in evaluation (2023, 2024), and has delivered this workshop in 2024 and 2025.

Co-facilitator Tuli Keidar is a member of the Innovation committee at ARTD Consultants and is working with ARTD founder Michael Brooks to develop AI-driven methods for evaluation. Prior to working in evaluation, Tuli worked as a company manager, operations manager and educator in the specialty coffee sector. Drawing on a modular education model, Tuli launched a barista and coffee roaster training school, developed the course structures and content, and managed its launch and marketing. 

> back to overview


FULL DAY WORKSHOP  

Introduction to evaluation: Core concepts and methods 

presented by Charlie Tulloch, Arun Jyothi Callapilli   |  FOUNDATION 

Workshop description

The aim of the workshop is to familiarise participants with the main aspects of evaluation practice and to share resources for further development. It will help people who are attending the conference for the first time to feel comfortable engaging with diverse topics over the coming days, within a framework that is clear and easy to understand.

The workshop will position evaluation within different policy and organisational contexts. It will define key terms and discuss what evaluators do and why and when evaluations happen (or do not), with practical tips for evaluation commissioners and for personal practice development over time.

The most substantive element of the training is stepping through a seven-stage process for planning and conducting an evaluation via a case study approach. This allows participants to explore the different ways that evaluation can be done.

Specific techniques to take away include: logic modelling, designing interview questions, forming value judgements and selecting evaluation approaches/methods. 

The workshop is targeted at evaluators or interested non-evaluators who wish to reflect on, or build, familiarity with key aspects of evaluation practice.

Teaching and learning strategy, plus resources: The workshop will be participatory throughout. Initially, we will undertake facilitated discussion. This will move into case-study based small group exercises as we move through the seven stages of evaluation. There will be various periods for open questions and for tackling emerging dilemmas that participants are facing.

All attendees will receive a workbook with templates for their use at the workshop and afterwards. 

The workshop touches on all areas of AES Evaluators’ Professional Learning Competency Framework, but is focused primarily on Domains 1, 3 and 4.

About the facilitators

Charlie Tulloch has been an evaluation consultant since 2008, with seven years running his own business. He has delivered a variety of evaluations, ranging from small desktop studies through to multi-year, national longitudinal studies. Charlie is a generalist evaluator, with deeper expertise in impact evaluation, program logic, question design, defining M&E plans, analysing data, writing for government and data visualisation. Since 2016, Charlie has delivered introductory evaluation training to over 1,000 learners via the AES and direct to client organisations. Charlie has presented at the India Urban Space Conference (Mumbai, 2007) on Torrens Land Title and at various AES conferences (Launceston, Sydney, Canberra, Adelaide, Melbourne). Charlie has also provided AES online training to over 20 cohorts.

Arun Jyothi Callapilli works with Charlie and is an emerging evaluator. She has a Masters in Statistics and has worked in State Government (Telangana, India) and the community sector (Jameel Poverty Action Lab).    

> back to overview


FULL DAY WORKSHOP  

Getting below the surface: Qualitative methods in the evaluation context

presented by Joan Young  |  FOUNDATION / INTERMEDIATE 

Workshop description

The purpose of the workshop is to develop knowledge and understanding that will assist participants to identify when qualitative methods would add value, to appreciate the range of techniques and practices available, and to build competency in undertaking qualitative research in an evaluation context.

The training objective is to increase knowledge and skills to conduct qualitative data collection and analysis in an evaluation context. The workshop will use theory, case-studies and practical exercises to cover the what, when, why and how of qualitative data collection and analysis. 

It aims to deliver the following learning outcomes:

  • Increased knowledge about approaches to qualitative data collection and analysis and ability to select the most appropriate to meet specific evaluation objectives
  • Increased competency in developing qualitative research questions and techniques to address key evaluation questions including the use of open-ended questions, probing and projective techniques

Participants will be asked to share their learning objectives and to reflect on progress during each session to ensure the workshop is focused on supporting each participant to achieve their learning outcomes. Knowledge, techniques, and practices covered include:

  • developing a program logic, theory of change and evaluation framework 
  • how theory (e.g. the Transtheoretical Model and COM-B) can guide evaluation design, data collection and analysis
  • the value of segmentation in understanding expectations and experiences, and how to incorporate it into data collection and analysis
  • interviewing and focus group moderation skills 
  • limitations of qualitative methods 

The workshop targets evaluators, policy makers and program managers who wish to build their capacity to use and get the most from the inclusion of qualitative methods in their evaluation practice. 

The workshop will be conducted in alignment with the specific competencies within Domains 1 and 4 of the AES Evaluators’ Professional Learning Competency Framework.

About the facilitator

Joan Young has over 30 years’ evaluation experience in New Zealand, MENA, Africa and Australia. She has dedicated her career to working with government and not-for-profit organisations to develop and evaluate social policy, programmes, services and communications. Joan was made a Fellow of The Research Society in Australia in 2021 and has been a member of the Australian Evaluation Society since 2000.

Joan's research and evaluation work has contributed to strategies addressing gender and financial inequality, domestic and family violence, homelessness, child protection, alcohol and drug consumption, road and sea safety, community violence, recidivism, positive parenting, literacy and early childhood education, environmental sustainability, health, employment, education, taxation, voting and workplace safety.  

A published example of Joan's work is the Mornington Island Restorative Justice Project Evaluation, which integrated qualitative, quantitative and secondary data sources to generate insights that were translated into actionable outcomes for policy and community impact.

Joan’s research and evaluation work has been extensively published. She has received numerous awards and acknowledgements, has presented at AES conferences in New Zealand, Australia, and internationally, and recently facilitated two qualitative research workshops for the AES.

> back to overview

Australian Evaluation Society
425 Smith Street
Fitzroy Vic 3065 Australia
Phone +61 3 8685 9906

© Copyright 2024–2025 Australian Evaluation Society Limited. ABN 13 886 280 969 

We acknowledge the Australian Aboriginal and Torres Strait Islander peoples of this nation. We acknowledge the Traditional Custodians of the lands in which we conduct our 2025 conference, the Ngunnawal and Ngambri peoples. We pay our respects to the ancestors and Elders, past and present, of all Australia’s Indigenous peoples. We are committed to honouring Australian Aboriginal and Torres Strait Islander peoples’ unique cultural and spiritual relationships to the land, waters and seas and their rich contribution to society.

Conference logo design: Keisha Leon, Cause/Affect  | Site design: Ingrid Ciotti, Studio 673