Students will shortly be asked to complete a Learner Experience of Unit (LEU) survey for each unit they are undertaking. This article provides an overview of how reliability and validity are considered, the minimum response criteria used for reporting results, and what is included in each LEU results report.

Evaluating the student experience

Formal evaluation surveys provide insight into students’ experience of the unit and of teaching practices, helping to inform quality assurance, enhancement and improvement. The Higher Education Standards Framework also requires the University to provide students with opportunities to give feedback on their educational experiences.

The Learner Experience of Unit (LEU) survey is administered in every offering of all coursework units in each study period.

All academic staff are required to formally evaluate their teaching once a year and the Learner Experience of Teaching (LET) survey can be used for that purpose.

How are the surveys administered?

The Teaching Evaluation for Development Service (TEDS) team coordinates the survey process. Unit Convenors will already have received an email detailing the process for this session.

Each session, students will receive an email at the beginning of Week 10 advising them when the survey opens and inviting them to participate. Students will have until the end of Week 12 to complete the survey. Students access the survey via their iLearn home page.

Improving survey response rates

The better the student response rate to the survey, the better the reliability of the survey results. Two things you can do to improve response rates are:

  1. Give students the opportunity to complete the survey in class time. This is the single most effective step you can take towards receiving sufficient responses. Note: this is now mandated in the Student Survey Policy.
  2. Talk to your students about the survey. Let your students know now, and remind them closer to the survey start date, that a survey is coming and that you value their input.

Reliability and Validity

There is no magic number indicating when a survey becomes reliable – it is a sliding scale based on response rates and sample sizes that in turn translates into an error margin. Macquarie uses the related concepts of validity and reliability as follows.


Validity refers to the instrument being used. It answers the question “Is the survey instrument able to measure what we intend to measure?” Evaluation surveys completed by students are a reflection of the students’ experience of the teaching session. These are not a measure of teaching quality per se and do not measure student learning. The survey questions are based on a considerable body of research (see the TEDS website for sources) and aim to provide actionable insights for teachers.


Reliability relates to the stability of responses over time. It answers the question “Can I reliably make inferences about changes in results from one administration to the next?” A response rate of 70% or above indicates a representative sample, but this is not common.

A margin of error can be estimated for samples above a minimum size. In Table 1 below, read the cell where the sample size (top row) intersects the broad agreement value (left column). BA% is the percentage of Broad Agreement, and each cell gives the +/- percentage point error margin for that sample size at a 90% confidence interval. Note: “n/a” indicates that the conditions for calculation are not met. For example, a sample of 30 with broad agreement (BA) of 70% has an error margin of +/- 14 percentage points. This means we can be 90% confident that the true value lies between 56% and 84%.

Table 1: Estimated error margins for broad agreement
BA %     | Percentage point error margin for sample size…
         | n=10 | n=15 | n=20 | n=30 | n=40 | n=50 | n=75 | n=100 | n=150 | n=200+
50       | +/-26 | +/-20 | +/-18 | +/-15 | +/-13 | +/-12 | +/-9 | +/-8 | +/-7 | < +/-6
45 or 55 | n/a   | +/-21 | +/-18 | +/-15 | +/-13 | +/-12 | +/-9 | +/-8 | +/-7 | < +/-6
40 or 60 | n/a   | +/-21 | +/-18 | +/-15 | +/-13 | +/-11 | +/-9 | +/-8 | +/-7 | < +/-6
35 or 65 | n/a   | +/-20 | +/-18 | +/-14 | +/-12 | +/-11 | +/-9 | +/-8 | +/-6 | < +/-6
30 or 70 | n/a   | n/a   | +/-17 | +/-14 | +/-12 | +/-11 | +/-9 | +/-8 | +/-6 | < +/-5
25 or 75 | n/a   | n/a   | +/-16 | +/-13 | +/-11 | +/-10 | +/-8 | +/-7 | +/-6 | < +/-5
20 or 80 | n/a   | n/a   | n/a   | +/-12 | +/-10 | +/-9  | +/-8 | +/-7 | +/-5 | < +/-5
15 or 85 | n/a   | n/a   | n/a   | n/a   | +/-9  | +/-8  | +/-7 | +/-6 | +/-5 | < +/-4
10 or 90 | n/a   | n/a   | n/a   | n/a   | n/a   | +/-7  | +/-6 | +/-5 | +/-4 | < +/-3
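The margins in Table 1 follow from the standard error of a proportion at a 90% confidence interval. As a quick check, the worked example above (BA of 70% with a sample of 30) can be reproduced with a short script; the function name here is our own illustration, not part of any TEDS tooling:

```python
import math

def ba_error_margin(ba_percent, n, z=1.645):
    """Percentage-point error margin for a broad agreement (BA) value,
    using the normal approximation at a 90% confidence interval (z = 1.645)."""
    p = ba_percent / 100
    return z * math.sqrt(p * (1 - p) / n) * 100

# BA = 70% with a sample of 30 gives roughly +/- 14 percentage points,
# matching the worked example in the text.
print(round(ba_error_margin(70, 30)))  # 14
```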

The margin of error for the mean is the standard deviation divided by the square root of the sample size, multiplied by 1.645 (for a 90% confidence interval). In 2019 the overall standard deviation for LEUs was 1, so a table of estimated error margins for the mean across a range of sample sizes can be produced – see Table 2 below.

Table 2: Estimated error margins for LEU mean scores
Sample size      | 10       | 15       | 20       | 30       | 40       | 50       | 75       | 100      | 150      | 200+
90% error margin | +/- 0.52 | +/- 0.42 | +/- 0.37 | +/- 0.30 | +/- 0.26 | +/- 0.23 | +/- 0.19 | +/- 0.16 | +/- 0.13 | +/- 0.12

For example, with a sample size of 10 and a mean score of 4, we can be 90% confident that the true mean is between 3.48 and 4.52.
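The same formula can be applied directly; this sketch (with an illustrative function name of our own) reproduces the Table 2 value and the worked example for a mean of 4:

```python
import math

def mean_error_margin(sd, n, z=1.645):
    """Error margin for a mean score at a 90% confidence interval (z = 1.645)."""
    return z * sd / math.sqrt(n)

# With the 2019 LEU standard deviation of 1 and a sample of 10:
margin = mean_error_margin(1, 10)        # about 0.52, as in Table 2
low, high = 4 - margin, 4 + margin       # 90% interval around a mean score of 4
print(round(margin, 2), round(low, 2), round(high, 2))  # 0.52 3.48 4.52
```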

LEU results reports

Reports of LEU and LET survey results are available for teaching staff. Guidelines for reporting responses based on statistical theory are shown in Table 3 below.

Table 3: Reliability criteria for TEDS reporting
N = number of responses; RR = response rate.

Sample size | Reliability criteria  | Report inclusion criteria:
            |                       | Convenor              | Department or Discipline | Combined offerings
<= 20       | N >= 10 or RR >= 70%  | N >= 5 (see Note)     | N >= 10                  | Available on request
21-50       | N >= 20 or RR >= 50%  | N >= 20 or RR >= 50%  | N >= 10                  | Available on request
51-100      | N >= 30 or RR >= 40%  | N >= 30 or RR >= 40%  | N >= 10                  | Available on request
101-200     | N >= 30               | N >= 30               | N >= 10                  | Available on request
> 200       | N >= 50               | N >= 50               | N >= 10                  | Available on request

Note: To maintain privacy, no report is issued if a response count is less than five.

Timing of survey results release

Full survey reports are released immediately following the release of final grades to students. Unit Convenors will receive an email directly from the TEDS Team.

LEU headline data will be available on the Unit Monitoring and Grade Ratification dashboard to coincide with the moderation and grade ratification process.

LEU report content

Reports will contain the following:

Details of the population and sample: unit/teacher, session/year, number of responses and the response rate.

Scale questions: A histogram is displayed for each scale question (see Figure 1 for an example). The histogram helps surface the spread of opinion, e.g. a bimodal ‘love-hate’ split, a roughly normal distribution, or responses closely clustered together.

A great result is a mean close to 5 with a standard deviation of less than 1, i.e. most respondents strongly agreed the experience was good.

Descriptive statistics:

  • n = count of responses
  • av = average (the mean)
  • md = median (the middle value)
  • dev = standard deviation (the spread or variation in responses; a value above 1 indicates a large variation in opinions)
  • cf = broad agreement (BA combines the values for 3 ‘neutral’, 4 ‘agree’ and 5 ‘strongly agree’; this allows comparison with other surveys such as the SES)

Figure 1: Scale item statistics and histogram
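As an illustration, the descriptive statistics listed above can be computed from a set of raw responses. The response values here are hypothetical, not drawn from any actual report:

```python
import statistics

# Hypothetical responses on the 1-5 agreement scale
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

n = len(responses)                       # count of responses
av = statistics.mean(responses)          # average (the mean)
md = statistics.median(responses)        # median (the middle value)
dev = statistics.stdev(responses)        # standard deviation of responses
ba = 100 * sum(r >= 3 for r in responses) / n  # broad agreement: 3, 4 or 5

print(n, av, md, round(dev, 2), ba)  # 10 3.9 4.0 0.99 90.0
```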

Other items may be single response (Yes/No) or multiple response (either ‘pick one’ or ‘pick any that apply’). Each choice is treated as binary. The percentage and number of responses for each choice will be shown. See Figure 2 for an example.

Figure 2: Other items – binary choice example

Demographics are provided on LET reports only. These include gender, program of study, first language, and status as international, Indigenous or neither. They allow you to make judgements about how representative the response group is of the students in the unit as a whole.

Comments from students as typed text: These can provide great insights as to why the students responded the way they did.

Explore this topic further

This article is available as a one-page LT Quick Guide:

The Teaching and Unit Evaluation website has further advice on interpreting and actioning TEDS reports.

The MQ Wiki page on Monitoring and Grade Ratification Dashboard includes information on how to use and understand the dashboard plus a link to access the Power BI Dashboard.

The MQ Wiki page on LEU Headline data includes information on what data is included on the LEU Headline page.

Key policies:

Other Learning and Teaching Quick Guides:

Acknowledgement: Information prepared in conjunction with TEDS Team.

Posted by Mathew Hillier

Mathew has been engaged by Macquarie University as an e-Assessment Academic in residence and is available to answer questions from MQ staff. Mathew specialises in digital assessment (e-assessment) in higher education. He has held positions as an advisor and academic developer at the University of New South Wales, University of Queensland, Monash University and University of Adelaide. He has also held academic teaching roles in areas such as business information systems, multimedia arts and engineering project management. Mathew recently led a half-million-dollar Federal government funded grant on e-Exams across ten university partners and is co-chair of the international 'Transforming Assessment' webinar series, run as the e-assessment special interest group of the Australasian Society for Computers in Learning in Tertiary Education. He is also an honorary academic at the University of Canberra.
