A key part of a lecturer's work is providing feedback to students on their work. Feedback can be given while students perform a task (e.g. the lecturer observing a student carrying out an experiment), on marked assignments such as reports and essays, and on students' answers in class tests and exams. In computing, students can build artefacts such as computer programs and databases that are evaluated to provide feedback. Performing tasks and building artefacts allow for authentic assessment but unfortunately do not scale easily in terms of providing feedback.
This poster shows how we automated evaluating the servers that students had configured during labs. Students were asked to perform a number of typical real-world tasks on a Windows Server computer. When students were finished, they ran a custom script that generated a snapshot of the system and stored it in a simple text file. The students then uploaded their system snapshots into an assignment submission tool (in this case QOL).
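The snapshot step described above could be sketched as follows. This is a minimal illustration only: the actual custom script, its collectors, and its file format are not described in the poster, so every name and field here is an assumption.

```python
# Hypothetical sketch of a snapshot script; the real custom script's
# collectors and output format are assumptions, not documented here.
import datetime
import platform


def take_snapshot(collectors):
    """Run each named collector and record its result as a key: value line."""
    lines = [f"timestamp: {datetime.datetime.now().isoformat()}"]
    for name, collect in collectors.items():
        lines.append(f"{name}: {collect()}")
    return "\n".join(lines)


# Assumed collectors; the real script would query server configuration
# (volumes, shares, users, etc.) rather than basic platform details.
collectors = {
    "hostname": platform.node,
    "os": platform.system,
}

snapshot = take_snapshot(collectors)

# Store the snapshot in a simple text file, as in the poster.
with open("snapshot.txt", "w") as f:
    f.write(snapshot)
```

The key design point is that the snapshot is plain text, which keeps the upload step trivial and lets each marking script parse only the lines relevant to its task.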
The text files were downloaded by the lecturer and processed by a number of scripts, one for each task. The scripts examined the snapshot and produced an individualised report on the student's performance on the task (such as what they did and how it differed from the expected result). For example, a task could be "create a RAID-1 volume that can store 128 MB of data". Where a student created a RAID-0 (striped) volume with 256 MB capacity, the report would inform the student that "You created the wrong type of volume (a striped volume is RAID-0) but you did use the correct amount from the two hard drives (128 MB each)". These reports were then returned to the students.
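A per-task checking script along the lines described above might look like this. The snapshot line format, field names, and feedback wording here are all assumptions for illustration; only the RAID-1/128 MB task itself comes from the poster.

```python
# Hypothetical checker for the "create a RAID-1 volume storing 128 MB" task.
# The snapshot line format (e.g. "volume: type=RAID-1 capacity=128MB") is an
# assumption, not the actual script's format.
import re

EXPECTED_RAID = "RAID-1"
EXPECTED_CAPACITY_MB = 128


def check_raid_task(snapshot_text):
    """Return an individualised feedback string for the RAID-volume task."""
    m = re.search(r"volume:\s*type=(RAID-\d)\s+capacity=(\d+)MB", snapshot_text)
    if not m:
        return "No new volume was found in your snapshot."
    raid, capacity = m.group(1), int(m.group(2))

    feedback = []
    if raid == EXPECTED_RAID:
        feedback.append("Correct volume type (RAID-1).")
    else:
        feedback.append(f"You created the wrong type of volume ({raid}).")
    if capacity == EXPECTED_CAPACITY_MB:
        feedback.append("The volume stores the correct amount of data (128 MB).")
    else:
        feedback.append(f"The volume stores {capacity} MB rather than 128 MB.")
    return " ".join(feedback)


# A striped volume of the wrong size triggers both pieces of feedback.
print(check_raid_task("volume: type=RAID-0 capacity=256MB"))
```

Because each task has its own small script, adding a new lab task only requires writing one new checker against the same snapshot file.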
Future work includes automating the downloading of student submissions on a regular basis (e.g. overnight) to enable students to get timely feedback on their performance.
Publication status: Published - 12 Apr 2017