The concept of programmatic assessment has come to the fore in recent times. Mentioned in both the recent TEQSA (2023) paper “Assessment reform in the age of artificial intelligence” and in the draft MQ advantage education strategy, it has been touted as a possible approach to addressing and adapting to the rise of generative artificial intelligence in higher education.

Let's have a look at Programmatic Assessment.

The term "programmatic assessment" was coined in its contemporary form in a seminal paper by van der Vleuten & Schuwirth (2005) and has since been developed in the medical education context, where it has risen to prominence. At MQ, the Doctor of Medicine (Macquarie MD) has adopted a form of programmatic assessment in which each student's development in four key competence areas is tracked over the duration of their studies via the Macquarie Assessment Portfolio (MAP). See Dean (2024) for a detailed review.

It is important to note that programmatic assessment is not just about making degree (programme) level learning outcomes more prominent; rather, it is about creating a cohesive and holistic "programme of assessment" (hence the name).

A programme of assessment aims to go beyond unit-level assessment grades; instead, it strives to create a holistic and detailed picture of each student's knowledge, reasoning, problem solving and skills throughout their entire learning journey.

Features of Programmatic Assessment

The following key features provide some clarification of what programmatic assessment is and what it is not.

Triangulation and constructive alignment

Assessment of student competence across a programme of study means triangulating evidence from multiple sources and assessment events. Building a learning journey and a comprehensive view of competence requires curriculum mapping and clear constructive alignment of the course (degree) learning outcomes down to unit learning outcomes then assessment tasks, rubrics, learning activities and supporting resources in each unit.

Coaching and evaluative judgement

Programmatic assessment is inherently a student-centred approach in which coaching and mentoring play a key role. The learning programme is a journey focused on each student's growth, encouraging them to become self-directed, lifelong learners. It is important to develop each student's evaluative judgement through assessment transparency and actionable feedback via learning loops. In this context, competence is best represented as rich qualitative descriptions rather than just numbers. This benefits students in that descriptions convey more meaning than numbers alone, and it also enables staff mentors to more effectively advise students on progress and learning strategies across the whole programme of study.

Recognising that a single assessment task cannot do all things, a balance of the competing elements of authenticity, integrity and scalability is needed across the programme of study. Similarly, the mix of assessment as, for and of learning will need to be adjusted, with a greater emphasis on assessment as learning, given the need to provide insight into student learning processes rather than just products while enabling feedback loops to occur. Active, authentic and integrative assessments will be used more frequently than in traditional programme structures, with a greater focus on how students apply knowledge and skills in professional practice settings.

Examples of assessments that may be utilised in programmatic assessment include:

  • Portfolios and learning logs
  • Interactive oral assessments or group presentations
  • Written tasks such as essays, reports and case studies
  • Invigilated assessments
  • Scenarios and role plays
  • Workplace-based assessments
  • Literature reviews and research projects
  • Practical and skills assessments

A set of principles for programmatic assessment

Heeneman (2021) outlines a set of principles for programmatic assessment:

  1. Every (part of an) assessment is but a data-point.
  2. Every data-point is optimised for learning by giving meaningful feedback to the learner.
  3. Pass/fail decisions are not given on a single data-point.
  4. There is a mix of methods used for assessment.
  5. The method chosen should depend on the educational justification for using that method.
  6. The distinction between summative and formative is replaced by a continuum of stakes.
  7. Decision-making on learner progress is proportionally related to the stakes.
  8. Assessment information is triangulated across data-points towards an appropriate framework.
  9. High-stakes decisions (promotion, graduation) are made in a credible and transparent manner, using a holistic approach.
  10. Intermediate review is made to discuss and decide with the learner on their progression.
  11. Learners have recurrent learning meetings with educators using a self-analysis of all assessment data.
  12. Programmatic assessment seeks to gradually increase the learner’s agency and accountability for their own learning through the learning being tailored to support individual learning priorities.

Challenges and Opportunities

Programmatic assessment presents both challenges and opportunities, requiring a rethink of the way we do higher education, and particularly assessment, in light of the arrival of generative AI tools.

To date, programmatic assessment has largely been applied in tightly defined, vertically integrated degrees with relatively small cohort sizes. It remains to be seen whether the same approach can be implemented in large, generalist degrees where students have much more choice in their programme of study.

There are challenges to overcome that stem from structural elements of the current higher education sector. These include existing administrative structures, staff-to-student ratios, a highly casualised workforce, regulation around student progression, commonly used uniform unit sizes, and the current workload patterns of both staff and students. It is likely that new programme-level support systems and greater stability of staffing will be needed to enable the teaching-team cohesion required for the consistent mentoring of students across varying areas of competence and over the duration of their programme of study.

Contemporary programmatic assessment

Schuwirth (2022) presented on the application of programmatic assessment in contemporary universities and its roots in medical education [1 hour recording embedded below].


A one page summary of this post is available as an L&T quick guide:

Post updated 9 Jan 2024 to add a link to a newly published paper, Dean (2024).

Posted by Mathew Hillier

Mathew has been engaged by Macquarie University as an e-Assessment Academic in Residence and is available to answer questions from MQ staff. Mathew specialises in digital assessment (e-assessment) in higher education. He has held positions as an advisor and academic developer at the University of New South Wales, University of Queensland, Monash University and University of Adelaide. He has also held academic teaching roles in areas such as business information systems, multimedia arts and engineering project management. Mathew recently led a half-million-dollar Federal government funded grant on e-Exams across ten university partners and is co-chair of the international 'Transforming Assessment' webinar series, run as the e-Assessment special interest group of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE). He is also an honorary academic at the University of Canberra.
