Students will shortly be asked to complete a Learner Experience of Unit (LEU) survey for each unit they are undertaking. This article provides an overview of how reliability and validity are considered, the minimum-response criteria used for reporting results, and what is included in each LEU results report.
Evaluating the student experience
Formal evaluation surveys provide insight into the students’ experience of the unit and teaching practices, helping to inform quality assurance, enhancement and improvement. The Higher Education Standards Framework also requires the University to provide students with opportunities to give feedback on their educational experience.
The Learner Experience of Unit (LEU) survey is administered in every offering of all coursework units in each study period.
All academic staff are required to formally evaluate their teaching once a year and the Learner Experience of Teaching (LET) survey can be used for that purpose.
How are the surveys administered?
The Teaching Evaluation for Development Service (TEDS) team coordinates the survey process. Unit Convenors will already have received an email detailing the process for this session.
Each session, students will receive an email at the beginning of Week 10 (in S2, 2022 on the 10th of October) advising them when the survey opens and inviting them to participate. Students will have until the end of Week 12 (30th October) to complete the survey. Students access the survey via their iLearn home page.
Improving survey response rates
The higher the student response rate, the more reliable the survey results. Two things you can do to improve response rates are:
- Give students the opportunity to complete the survey in class time. This is the single most effective step you can take towards receiving sufficient responses. Note: this is now mandated in the Student Survey Policy.
- Talk to your students about the survey. Let your students know now, and remind them closer to the survey start date, that a survey is coming and that you value their input.
Reliability and Validity
There is no magic number indicating when a survey becomes reliable – it is a sliding scale based on response rates and sample sizes that in turn translates into an error margin. Macquarie uses the related concepts of validity and reliability as follows.
Validity refers to the instrument being used. It answers the question “Is the survey instrument able to measure what we intend to measure?” Evaluation surveys completed by students are a reflection of the students’ experience of the teaching session. These are not a measure of teaching quality per se and do not measure student learning. The survey questions are based on a considerable body of research (see the TEDS website for sources) and aim to provide actionable insights for teachers.
Reliability relates to the stability of responses over time. It answers the question “Can I reliably make inferences about changes in results from one administration to the next?” A response rate of 70% or above indicates a representative sample, but this is not common.
A margin of error can be estimated for samples above a minimum size. In Table 1 below, read the cell at the intersection of the sample size (top row) and the broad agreement value (left column). BA% is the percentage of Broad Agreement, and each cell gives the +/- percentage point error margin for that sample size at a 90% confidence interval. Note: “n/a” indicates that the conditions for calculation are not met. For example, a sample of 30 with a broad agreement (BA) of 70% has a margin of error of +/- 14 percentage points, meaning we can be 90% confident that the true value lies between 56% and 84%.
Table 1: Percentage point error margin for broad agreement, by sample size (90% confidence interval).

| BA % | n = 10 | n = 15 | n = 20 | n = 30 | n = 40 | n = 50 | n = 75 | n = 100 | n = 150 | n = 200 |
|---|---|---|---|---|---|---|---|---|---|---|
| 50 | +/- 26 | +/- 21 | +/- 18 | +/- 15 | +/- 13 | +/- 12 | +/- 9 | +/- 8 | +/- 7 | < +/- 6 |
| 45 or 55 | n/a | +/- 21 | +/- 18 | +/- 15 | +/- 13 | +/- 12 | +/- 9 | +/- 8 | +/- 7 | < +/- 6 |
| 40 or 60 | n/a | +/- 21 | +/- 18 | +/- 15 | +/- 13 | +/- 11 | +/- 9 | +/- 8 | +/- 7 | < +/- 6 |
| 35 or 65 | n/a | +/- 20 | +/- 18 | +/- 14 | +/- 12 | +/- 11 | +/- 9 | +/- 8 | +/- 6 | < +/- 6 |
| 30 or 70 | n/a | n/a | +/- 17 | +/- 14 | +/- 12 | +/- 11 | +/- 9 | +/- 8 | +/- 6 | < +/- 5 |
| 25 or 75 | n/a | n/a | +/- 16 | +/- 13 | +/- 11 | +/- 10 | +/- 8 | +/- 7 | +/- 6 | < +/- 5 |
| 20 or 80 | n/a | n/a | n/a | +/- 12 | +/- 10 | +/- 9 | +/- 8 | +/- 7 | +/- 5 | < +/- 5 |
| 15 or 85 | n/a | n/a | n/a | n/a | +/- 9 | +/- 8 | +/- 7 | +/- 6 | +/- 5 | < +/- 4 |
| 10 or 90 | n/a | n/a | n/a | n/a | n/a | +/- 7 | +/- 6 | +/- 5 | +/- 4 | < +/- 3 |
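The table values are consistent with the standard normal-approximation margin of error for a proportion, 1.645 × √(p(1 − p)/n) at 90% confidence. The source does not state the exact formula, so treat this as an assumption, but it reproduces the worked example (BA 70%, n = 30, +/- 14). A minimal sketch:

```python
import math

def ba_margin_of_error(ba_percent: float, n: int, z: float = 1.645) -> float:
    """Percentage-point margin of error for a broad-agreement proportion.

    Uses the normal approximation z * sqrt(p * (1 - p) / n);
    z = 1.645 corresponds to a 90% confidence interval.
    """
    p = ba_percent / 100.0
    return z * math.sqrt(p * (1.0 - p) / n) * 100.0

# Worked example from the text: BA = 70%, n = 30 -> about +/- 14 points.
margin = round(ba_margin_of_error(70, 30))
print(margin)                    # 14
print(70 - margin, 70 + margin)  # 56 84
```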
The margin of error for the mean is the standard deviation divided by the square root of the sample size, multiplied by 1.645 (for a 90% confidence interval). In 2019 the overall standard deviation for LEUs was 1, so a table of estimated error margins for the mean across a range of sample sizes can be produced – see Table 2 below.
Table 2: Estimated error margin for the mean (SD = 1, 90% confidence interval).

| Sample size | 10 | 15 | 20 | 30 | 40 | 50 | 75 | 100 | 150 | 200 |
|---|---|---|---|---|---|---|---|---|---|---|
| Error margin | +/- 0.52 | +/- 0.42 | +/- 0.37 | +/- 0.30 | +/- 0.26 | +/- 0.23 | +/- 0.19 | +/- 0.16 | +/- 0.13 | +/- 0.12 |
For example, with a sample size of 10 and a mean score of 4, we can be 90% confident that the true mean is between 3.48 and 4.52.
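The formula above can be checked in a few lines of Python (assuming, as in the text, SD = 1 and a 90% confidence multiplier of 1.645):

```python
import math

def mean_margin_of_error(sd: float, n: int, z: float = 1.645) -> float:
    """Margin of error for the mean: z * sd / sqrt(n) (z = 1.645 for 90% CI)."""
    return z * sd / math.sqrt(n)

# Worked example from the text: n = 10, SD = 1, mean = 4.
margin = mean_margin_of_error(1.0, 10)
print(round(margin, 2))                            # 0.52
print(round(4 - margin, 2), round(4 + margin, 2))  # 3.48 4.52
```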
LEU results reports
Reports of LEU and LET survey results are available for teaching staff. Guidelines for reporting responses based on statistical theory are shown in Table 3 below.
Table 3: Report inclusion criteria (N = number of responses; RR = response rate).

| Sample size | Reliability criteria | Convenor | Department or Discipline | Combined offerings |
|---|---|---|---|---|
| ≤ 20 | N ≥ 10 or RR ≥ 70% | N ≥ 5 (see Note below) | N ≥ 10 | Available on request |
| 21–50 | N ≥ 20 or RR ≥ 50% | N ≥ 20 or RR ≥ 50% | | |
| 51–100 | N ≥ 30 or RR ≥ 40% | N ≥ 30 or RR ≥ 40% | | |
| 101–200 | N ≥ 30 | N ≥ 30 | | |
| > 200 | N ≥ 50 | N ≥ 50 | | |
Note: To maintain privacy, no report is issued if a response count is less than five.
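One reading of the Table 3 thresholds for the convenor report can be sketched as a small decision function. This is illustrative only (the function name and the assumption that the n ≥ 5 privacy floor applies throughout are mine, not the policy's); check the TEDS guidelines for the authoritative wording.

```python
def convenor_report_issued(class_size: int, n: int, rr: float) -> bool:
    """Sketch of the Table 3 inclusion criteria for a convenor report.

    class_size: students enrolled; n: responses; rr: response rate in percent.
    Assumes the privacy floor (n >= 5) applies to all class sizes.
    """
    if n < 5:                # privacy: no report under five responses
        return False
    if class_size <= 20:
        return True          # n >= 5 suffices for small classes
    if class_size <= 50:
        return n >= 20 or rr >= 50
    if class_size <= 100:
        return n >= 30 or rr >= 40
    if class_size <= 200:
        return n >= 30
    return n >= 50

print(convenor_report_issued(18, 6, 33.3))    # True
print(convenor_report_issued(120, 25, 20.8))  # False (needs n >= 30)
```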
Timing of survey results release
Full survey reports are released immediately following the release of final grades to students. Unit Convenors will receive an email directly from the TEDS Team.
LEU report content
Reports will contain the following:
Details of the population and sample: unit/teacher, session/year, number of responses and the response rate.
Scale questions: A histogram is displayed for each scale question (see Figure 1 for an example). The histogram helps surface the shape of the spread in opinion, e.g. a bimodal ‘love-hate’ split, a normal distribution, or a tight cluster of responses.
A strong result is a mean close to 5 with a standard deviation below 1, i.e. most respondents strongly agreed.
- n = count of responses
- av = average (the mean)
- md = median (the middle value)
- dev = standard deviation (the spread or variation in responses; a value above 1 indicates a large variation in opinions)
- cf = broad agreement (BA combines the values for 3 ‘neutral’, 4 ‘agree’ and 5 ‘strongly agree’; this allows comparison with other surveys such as the SES)
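The scale-question statistics above can be computed from raw 1–5 responses; a minimal sketch (function and variable names are illustrative, not the report's internal field names):

```python
from statistics import mean, median, stdev

def scale_summary(responses: list[int]) -> dict:
    """Summary statistics for one 1-5 scale question, mirroring the report fields."""
    n = len(responses)
    # Broad agreement: percentage of responses that are 3, 4 or 5.
    broad_agreement = 100 * sum(1 for r in responses if r >= 3) / n
    return {
        "n": n,
        "av": round(mean(responses), 2),    # average (mean)
        "md": median(responses),            # middle value
        "dev": round(stdev(responses), 2),  # spread of responses
        "cf": round(broad_agreement, 1),    # broad agreement
    }

print(scale_summary([5, 4, 4, 3, 5, 2, 4, 5]))
```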
Other items may be single response (Yes/No) or multiple response (either ‘pick one’ or ‘pick any that apply’). Each choice is treated as binary. The percentage and number of responses for each choice will be shown. See Figure 2 for an example.
Demographics are provided on LET reports only. These include gender, program of study, first language, and status as international, Indigenous or neither. They allow you to make judgements about the representativeness of the response group versus the students in the unit as a whole.
Comments from students as typed text: These can provide great insights as to why the students responded the way they did.
Explore this topic further
This article is available as a one-page LT Quick Guide.
The Teaching and Unit Evaluation website has further advice on interpreting and actioning TEDS reports.
The MQ Wiki page on Monitoring and Grade Ratification Dashboard includes information on how to use and understand the dashboard plus a link to access the Power BI Dashboard.
The MQ Wiki page on LEU Headline data includes information on what data is included on the LEU Headline page.
Other Learning and Teaching Quick Guides:
- Actioning Evaluation (interpreting survey results and planning improvements)
- Informal Evaluation (evaluating units beyond formal surveys)
- Link to more Learning and Teaching Quick Guides
Acknowledgement: Information prepared in conjunction with TEDS Team.