Assess

"Assessment is today's means of modifying tomorrow's instruction."

Carol Ann Tomlinson



Assessment design is fundamental to our goal of providing meaningful and engaging learning experiences for our students.

When assessment is authentic, well designed and aligned with our teaching and learning activities, the result is students who are prepared for their future careers through critical engagement and deep learning.

CTL provides support and resources to assist with all stages of assessment design, development, integration and reporting.

If you would like CTL to run a workshop on assessment, please contact us.

Programmatic Assessment in the Southern Cross Model

Programmatic assessment at Southern Cross University (SCU) is a deliberately structured and systemic approach to evaluating student learning across a course. This method of assessment involves the careful selection and combination of various assessment tasks to gather comprehensive, triangulated information about a student’s progress and proficiency attainment across the course. By integrating multiple assessment points (known as data points), programmatic assessment provides a holistic view of student development, ensuring that evaluations are cumulative, developmental and continuous, guiding student learning and reflecting progress made across the entire learning journey.

The SCM Assessment Principles underpin programmatic assessment. The table below lists each principle in the first column; the second column details how programmatic assessment aligns with it.

Professor Ruth Greenaway, Director, Centre for Teaching and Learning, delivers an update on Programmatic Assessment in the SCM.

Watch the Programmatic Assessment in the SCM update video

Assessment Principle

Details

Assessment Principle 1: Assessment is, normally, designed for delivery in the Southern Cross Model.

Our programmatic assessment:

  1. Employs the tenets of focused, guided and active learning;
  2. Is mindful of student cognitive load, and purposefully aligned to the intended learning outcomes;
  3. Is designed around six-week terms with a seventh week used for study, review and assessment.

Assessment Principle 2: Assessment is designed on a whole of course basis for the attainment of learning outcomes.

Our programmatic assessment design:

  1. Takes a systematic whole-of-course approach aligned with the discipline and award;
  2. Aligns with course and unit learning outcomes, study content and learning activities, in accordance with the Curriculum Policy;
  3. Is appropriately scaffolded and interlinked across both units and courses;
  4. Encourages the appropriate use of technologies, tools and resources; and
  5. Is aligned with professional accreditation and expectations, where appropriate.

Assessment Principle 3: Assessment is designed for student learning, engagement and success.

Our programmatic assessment:

  1. Engages and motivates students’ learning across the course progression;
  2. Facilitates students’ induction into higher education and the principles of academic integrity;
  3. Evidences the process of learning over time and in context;
  4. Builds a future-focused foundation for lifelong learning; and
  5. Is appropriate in volume and cognitive load.

Assessment Principle 4: Assessment is authentic, innovative and relevant to students.

Our programmatic assessment is:

  1. Fit for purpose in the Southern Cross Model, relevant to students and their educational and/or professional goals;
  2. Designed specifically for the teaching periods in which units are offered;
  3. Evaluated using multiple means of assessment across a unit and course;
  4. Designed using practice-based and/or authentic tasks, and relevant to students’ career or further study aspirations; and
  5. Enhanced through the use of technology, and where appropriate, generative artificial intelligence, enabling students to engage with innovative tools, while maintaining academic integrity. Refer to GenAI Tool Descriptors - Feb 2025 for Students and Staff.

Assessment Principle 5: Assessment is inclusive, fair, transparent and equitable.

Our programmatic assessment is:

  1. Inclusive of all students, irrespective of educational background, entry pathway, mode or place of study;
  2. Cognisant of student diversity, learners’ needs and multiple ways of knowing;
  3. Evaluated on the basis of students’ achievement against clear criteria, rubrics and standards;
  4. Designed to minimise complexity and confusion; and
  5. Written in simple, clear and plain English.

Assessment Principle 6: Assessment incorporates timely, clear and constructive feedback to help improve student learning and performance.

Our programmatic assessment feedback is:

  1. Timely, clear, and directly relevant to the assessment task, criteria and purpose;
  2. Based on structured rubrics and clear grading standards, guiding students to accurately appreciate the quality of their learning; and
  3. Constructive, formative and respectful, taking a strengths-based approach focused on current and future learning.

Assessment Principle 7: Assessment maintains academic and professional standards, assuring quality and demonstrable learning informed by scholarship.

Our programmatic assessment is:

  1. Designed by academics with contemporary skills and expertise in curriculum design, informed by scholarship and disciplinary knowledge, and industry or professional skills;
  2. Peer reviewed in line with the Assessment Principles, to ensure quality in design, implementation and against academic standards;
  3. Developed against academic, quality and grading standards, and is continuously improved and updated;
  4. Benchmarked to ensure consistency with the level of qualification awarded; and
  5. Built on a foundation of professional learning and contemporary assessment practice and drawing from a range of assessment methods.

Getting started with programmatic assessment

Programmatic assessment takes a longitudinal view of learning in relation to learning outcomes, graduate capabilities and when relevant, professional accreditation requirements. The longitudinal view of assessments monitors learner growth and development and empowers the student to receive detailed feedback that enables them to take responsibility for their own learning. Mentoring students is built into this growth and development.

Programmatic Assessment at SCU is defined as a consciously designed systemic program of assessment in which the outcomes of purposefully selected assessment tasks are collated and combined to obtain triangulated information about a student’s progress.

Feedback is the cornerstone of programmatic assessment. Feedback loops are built into every task to foster and progress learning. Feedback is from teachers, mentors, workplace supervisors and peers, and learners are required to critically reflect on feedback and implement learning in future tasks. Self-directed learning and learner agency are deliberately promoted through a dialogue with learners in these trusted relationships.

Programmatic assessment is designed to gather data in multiple ways, through different types of assessment. Assessment tasks are deliberately selected to (a) triangulate evidence; and (b) evidence developing capabilities and knowledge. Therefore, programmatic assessment is a purposeful quantitative and qualitative inquiry into the developing capability of a learner.

Why

Programmatic assessment is an integrated assessment design that optimises assessment at the course level. It is focused on the learner and their learning.

The purpose of programmatic assessment is to evidence learning over time through a combination of assessments.

  • Its goal is to develop learner capability in a systematic and scaffolded way to achieve more complex learning.
  • By sampling across time, methods of assessment and assessors, assessment data is triangulated into a meaningful and trustworthy interpretation.
  • At graduation, programmatic assessment has assured that our graduates are ready to practise within their respective professions.

How

Some initial considerations

Programmatic assessment privileges both the learning journey and the methods used to capture its attainment. This is a more holistic and meaningful approach to developing an assessment system for learning.

Consider the necessary conditions for learning to thrive for a specific course of study.

  • Decide how best to define competencies and capabilities for each course; and
  • Determine the essential conditions for enabling and sustaining assessment transformation to optimise learning.

The assessment master plan or blueprint is the overarching structure that includes course learning outcomes, graduate attributes and the competencies of the relevant professional accreditation body.

  • Consider a single assessment moment as a single data point. Build in several assessment moments through various assessment formats to provide evidence for coherent interpretation of achievement.
    • Recognise that a single individual assessment is one data point only, with limited information. (Consider a data point to be like a pixel of an image.)
    • High-stakes decisions require abundant data and should be based on many data points. (Consider that many pixels will provide a clearer image.)
  • Redefine formative and summative assessments as a continuum of stakes, ranging from low- to high-stakes decisions, through a series of scaffolded assessments. Decouple individual assessment moments from pass/fail decisions.
  • Consider a holistic assessment design with a combination of a range of scaffolded assessments. Map assessment moments (the data points) to this overarching structure.
  • Select each assessment method and its content based on a clear educational justification for why it is used in the curriculum at that particular point. Then justify and clearly link its contribution to the master plan.
  • Identify areas that require longitudinal development, such as complex behavioural skills (e.g. collaboration, communication, professionalism).
  • Draw data from many contexts and assessors to reduce bias in subjective, non-standard assessments (e.g. direct observation in real-life settings, under unstandardised conditions, by professional experts).
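The master-plan ideas above can be sketched as a simple data structure. This is a hypothetical illustration only: the unit codes, outcome labels and the three-data-point threshold are invented, and the mapping logic is one assumption about how data points might be tracked against outcomes, not an SCU tool.

```python
# Hypothetical sketch of an assessment master plan as data.
# All names (MNG101, CLO1, etc.) are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    task: str            # assessment task name
    unit: str            # unit in which the task sits
    method: str          # e.g. "portfolio", "observation", "viva"
    outcomes: list       # course learning outcomes the task evidences
    stakes: str          # "low" or "high"

@dataclass
class MasterPlan:
    data_points: list = field(default_factory=list)

    def evidence_for(self, outcome):
        """All data points that contribute evidence for one outcome."""
        return [dp for dp in self.data_points if outcome in dp.outcomes]

    def check_high_stakes(self, outcome, minimum=3):
        """A high-stakes decision should rest on many data points,
        sampled across units, methods and assessors."""
        return len(self.evidence_for(outcome)) >= minimum

plan = MasterPlan([
    DataPoint("Business plan draft", "MNG101", "portfolio", ["CLO1"], "low"),
    DataPoint("Peer-reviewed pitch", "MNG102", "observation", ["CLO1", "CLO2"], "low"),
    DataPoint("Final viva", "MNG104", "viva", ["CLO1", "CLO2"], "high"),
])

print(plan.check_high_stakes("CLO1"))  # True: three data points evidence CLO1
print(plan.check_high_stakes("CLO2"))  # False: only two data points so far
```

Interrogating the plan this way makes gaps visible: an outcome with too few data points, or evidence drawn from only one method, signals a blueprint that needs rebalancing.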

Every assessment (data point) must optimise feedback to the learner about the quality of their learning. Pass/Fail decisions should not be made on the basis of a single assessment.

  • Design assessments that require students to follow up on feedback;
  • Avoid summative assessments such as high-stakes exams, because learners often focus only on passing the exam and ignore feedback; and
  • Design individual assessments that are low stakes, so learning from each assessment can be built upon in the next assessment and in assessments in subsequent units.

High-quality feedback should be the focus of every assessment. Although good quality feedback is time consuming and resource intensive, programmatic assessment is entirely dependent on quality, frequent feedback.

  • Build in opportunities for rich, meaningful feedback in many tasks;
  • Design assessments so that a numeric score is supported by narrative feedback;
  • Design feedback opportunities from peers, teachers and mentors;
  • Consider how longitudinal overviews of progress and feedback can be provided to learners;
  • Enhance feedback for complex skills through narrative feedback;
  • Guide learners using ongoing verbal feedback; and
  • Avoid reducing unstandardised assessments to Pass/Fail decisions, marks, quantitative information or rating scales alone, as these are limited in the feedback they provide.

Design assessments that facilitate decision-making. These assessments should:

  • Provide diagnostic information (how is the learner doing?);
  • Provide remedial information (what should the learner do to improve further?); and
  • Provide predictive information (what might happen to the learner if their current development continues to the point of the high-stakes decision?).

In programmatic assessment, individual excellence is promoted through a mentoring process. Reflection on feedback, and follow-up on feedback are essential for learning and expertise development. Connect learners with a non-judgemental mentor, usually a staff member with curriculum knowledge (e.g. tutor), who creates a safe learning environment, and draws the best from the learner.

  • Close the feedback loop through reflective learning and follow-up on feedback;
  • Design opportunities to discuss learning from feedback, whether individually or in a group or class setting;
  • Engage learners in reflective dialogues to stimulate and follow up on feedback; mentoring is an effective way to create such a dialogue; and
  • Design for meaningful, reflective dialogues with a mentor or trusted professional to validate ideas and plan remedial and follow-up activities.

There will be a large amount of information. Organise a system for longitudinal information gathering, ensuring that the information can be handled conveniently and flexibly and that all stakeholders find it user-friendly and accessible. Collect data to:

  • allow periodic analyses of the learner’s developing knowledge and competencies in relation to learning outcomes; and
  • develop a repository of formal and informal assessment feedback and other learning results (i.e. assessment feedback, activity reports, learning outcome products and reflective reports).
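A longitudinal repository of this kind can be sketched minimally as follows. The field names, tasks and dates are invented for illustration; this is an assumption about the shape such a record might take, not a description of any SCU system.

```python
# Hypothetical sketch of a longitudinal learner record; all names are invented.
from datetime import date

record = []   # one repository per learner, one entry per data point

def log_entry(task, outcome, kind, note, when):
    """Store formal and informal evidence (feedback, reports, reflections)."""
    record.append({"task": task, "outcome": outcome, "kind": kind,
                   "note": note, "date": when})

def progress_toward(outcome):
    """Periodic analysis: all feedback notes relating to one learning outcome,
    in chronological order, so growth over time is visible."""
    entries = sorted((e for e in record if e["outcome"] == outcome),
                     key=lambda e: e["date"])
    return [e["note"] for e in entries]

log_entry("Draft report", "CLO2", "assessment feedback",
          "Argument clear; referencing needs work", date(2025, 3, 4))
log_entry("Reflective journal", "CLO2", "reflective report",
          "Applied referencing feedback to next draft", date(2025, 4, 1))
print(progress_toward("CLO2"))
```

The chronological view is the point: the learner and their mentor can read the entries for one outcome as a narrative of development rather than as isolated marks.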

Once the master plan is created, use it to guide decisions about the course and its curriculum. Interrogate the master plan by asking the following questions:

  • Are the learning outcomes, graduate attributes and professional skills addressed and assessed effectively across the course?
  • Are there any units that are not addressing these or addressing too many?
  • Are there adequate opportunities for students to develop knowledge, skills or abilities?
  • What are the overlaps or redundancies that can be refined within the course?
  • Is the overall design a trustworthy decision-making strategy?
  • Are high-stakes decisions based on many data points? Are those data points drawn from broad sampling across contexts, methods and assessors?
  • Does it enable rich information from both quantitative and qualitative data? Can this information be aggregated through expert professional judgement?

 

Programmatic assessment draws on all elements of the Southern Cross Model as is shown in Figure 1.

Figure 1 illustrates Programmatic Assessment in the Southern Cross Model, emphasising the holistic, integrated, and feedback-driven approach to student learning at both the course and unit levels. It represents how the various elements of the SCM contribute to a whole of course approach to student learning.


The diagram is structured into three primary layers, all centred on Student Learning. Student Learning is the central focus of the model, emphasising that all assessment, feedback and instructional practices aim to support and enhance student learning. The unit level focuses on the specific skills and knowledge developed within individual units.

The course level ensures that learning is cumulative and mapped to broader learning outcomes.

The unit level represents direct learning experiences where students engage in learning tasks, self-access materials, and classroom activities. The Unit Learning Outcomes (ULOs) define the specific skills and knowledge students will learn and practice within each unit. The model highlights both the independent study and structured classroom activities of the SCM, tutorials and workshops, as integral components of learning. Continuous feedback at the unit level helps students refine their understanding and improve performance before summative assessments. Instead of focusing on individual, isolated assessments, the diagram illustrates how assessment is structured across both unit and course levels. Programmatic assessment involves continuous evaluation, rather than a single high-stakes assessment.

The course level represents broader competencies and knowledge areas achieved by integrating multiple unit- level learnings over time. The programmatic assessment approach ensures that learning is not measured in isolation but progressively evaluated throughout the course. Assurance of Learning ensures that students meet graduate attributes and learning outcomes across the entire course and are on track to meet the course learning outcomes. Feedback loops ensure students have multiple opportunities to improve before final evaluations. Feedback provided at course level is essential for long-term learning, guiding students through progressive assessments and the learning process.

Programmatic Assessment and Considerations for the SCM

The key structural consideration when designing programmatic assessment involves careful attention to

Examples of Programmatic Assessment in the SCM

Term 1
  Grading scale: Satisfy requirements / Did not satisfy requirements (SR / DNSR)
  Assessment: Multiple small formative assessments that contribute to the portfolio, e.g. key elements of a business plan
  Feedback: Extensive feed-forward building knowledge and skills for the portfolio

Term 2
  Grading scale: SR / DNSR
  Assessment: Formative assessment that contributes to the portfolio, e.g. target audience, marketing, strategy
  Feedback: Extensive feed-forward building knowledge and skills for the portfolio

Term 3
  Grading scale: SR / DNSR
  Assessment: One formative assessment that contributes to the portfolio, e.g. financial implications
  Feedback: Extensive feed-forward building knowledge and skills for the portfolio

Term 4
  Grading scale: Standard graded unit
  Assessment: Portfolio 60% (e.g. business plan); Presentation/Viva 40%
  Feedback: Marks and grades for the portfolio and viva

 

Programmatic Assessment Framework for the Graduate Certificate in Small Business Management. The framework emphasises formative portfolio activities and a final integrative assessment.

Assessment Structure

  1. Unit 1: Small Business Entrepreneurship and Innovation
    • Two formative portfolio activities, with an Incomplete grade recorded until the final Integrative Assessment;
    • Commence the Integrative Assessment and receive preliminary feedback.
  2. Units 2, 3, and 4:
    • Each unit has three formative portfolio activities;
    • Graded as Satisfactory (S) or Not Satisfactory (NS).
  3. Final Integrative Assessment:
    • Conducted after the completion of Unit 4;
    • The result determines the final grade for Unit 1.

Grading Rules

  • Units 2, 3 and 4 require a Satisfactory result to pass;
  • The final Integrative Assessment must be passed to complete the course; and
  • Students may attempt the final Integrative Assessment again within six months.

Acknowledgement: Dr Owen Hogan
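As an illustration only, the grading rules above can be expressed as a small decision function. The grade labels and the function itself are assumptions made for this sketch, not a description of SCU grading systems.

```python
# Illustrative sketch of the grading rules above; labels are assumptions.
from typing import Optional

def course_outcome(unit_results: dict, integrative_grade: Optional[str]) -> str:
    """unit_results maps Units 2-4 to 'S' (Satisfactory) or 'NS';
    integrative_grade is the final Integrative Assessment result,
    or None if it has not yet been passed."""
    if any(result != "S" for result in unit_results.values()):
        return "Incomplete: Units 2-4 each require a Satisfactory result"
    if integrative_grade is None:
        return "Incomplete: final Integrative Assessment outstanding"
    # The Integrative Assessment result becomes the final grade for Unit 1.
    return "Course complete: Unit 1 graded " + integrative_grade

print(course_outcome({"Unit 2": "S", "Unit 3": "S", "Unit 4": "S"}, "Distinction"))
# prints "Course complete: Unit 1 graded Distinction"
```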

 

Year 1
  Term 1: Ungraded; Ungraded
  Term 2: Ungraded; Graded
  Term 3: Ungraded; Graded
  Term 4: Ungraded; Graded

Year 2
  Term 1: Ungraded
  Term 2: Ungraded; Graded (external accreditation requirements)
  Term 3: Ungraded; Graded (external accreditation requirements)
  Term 4: Ungraded; Graded (external accreditation requirements); Graded (external accreditation requirements)

Year 3
  Term 1: Ungraded
  Term 2: Ungraded; Graded (external accreditation requirements)
  Term 3: Ungraded; Graded (external accreditation requirements)
  Term 4: Ungraded; Graded (external accreditation requirements); Graded (external accreditation requirements)

Assessment examples:

Graded: invigilated assessment, OSCE, viva voce, portfolio of ungraded assessments.

Ungraded: opportunities to practise and to learn without the pressure of grades, e.g. portfolio components, skills practice, report components, briefs and submissions.

  • Programmatic assessment: A consciously designed systemic program of assessment in which the outcomes of purposefully selected assessment tasks are collated and combined to obtain triangulated information about a student’s progress.
  • Summative assessment: Assessment tasks used to cumulatively evaluate what students have learned over a certain time period or across a set number of units. It is a formal evaluation of student learning, typically conducted at the end of a unit, course or program, to determine how well students have met specific learning objectives and to assign grades or provide a final evaluation of their performance.
  • Formative assessment: These tasks and activities are designed to monitor student learning and provide ongoing feedback to students. The focus is to identify strengths and areas for improvement during the learning process. It is used throughout the learning process, not just at the end. These are generally low-stakes and have little or no impact on a student’s final grade. While formative assessments are used to inform instruction and improve learning during the process, summative assessments are used to evaluate the final outcome of learning.
  • Data point: Each assessment is considered a data point that provides information on learner performance. It is a discrete piece of information or evidence about a student’s learning. Each data point must be optimised for learning, not purely for decision making. Therefore, a data point must be designed with an embedded learning strategy that provides meaningful feedback and impels the student to use feedback for learning. In programmatic assessment, an assessment strategy is consciously designed with multiple data points, allowing data to be gathered from various assessment activities. This variety contributes to a broader understanding of students’ progress towards program-level outcomes across a course. Across the various data points, the longitudinal development of the student is visible to both the student and the academics.
  • Low-stakes assessment: These have no significant impact on the student’s final grade. Their focus is to provide feedback and facilitate learning without the pressure of a high-stakes assessment. These are mostly formative assessments, quick checks for understanding, practice activities and ungraded assessments. Students are encouraged to take risks and learn from mistakes without fear of negatively impacting their grade. Low-stakes does not mean low value: these assessments give a snapshot of where the student is at currently.
  • Triangulated information on student progress: In programmatic assessment, multiple data points gather and combine evidence from multiple sources and assessment events to create a holistic view of a student’s learning and progress towards program-level outcomes. Rather than relying on a single assessment, the multiple assessments serve to triangulate evidence of learning. Triangulation provides a holistic view and helps to avoid a narrow or incomplete view of a student’s learning by considering multiple perspectives and assessment methods. Both quantitative and qualitative data, including feedback, reflections, observations, exams and presentations, are used for triangulation, resulting in a richer understanding of student learning. The multiple sources of evidence used for triangulation ensure that assessment decisions are based on a more accurate and comprehensive view of student learning.

Van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2014). Twelve tips for programmatic assessment. Medical Teacher, 37(7), 641–646. https://doi.org/10.3109/0142159X.2014.973388

Policy – https://policies.scu.edu.au/document/view-current.php?id=66

Procedures – https://policies.scu.edu.au/document/view-current.php?id=255


Recorded Workshop 1 of 2: Design efficient and effective rubrics

In this workshop you will learn how to design efficient and effective rubrics, which will help reduce marking time and provide clearer feedback to students about their performance. Learn how to develop appropriate criteria statements and write clear descriptions of student performance. (This workshop was recorded 1 pm – 1:50 pm, 19 May 2021.)

Watch design rubrics recording

Recorded Workshop 2 of 2: How to build and upload a Turnitin rubric

This workshop will show you how to build and upload a Turnitin rubric and integrate this with the Grade Centre. We will demonstrate how to use QuickMarks for more efficient marking and enhance the consistency of feedback to students. You will be supported in hands-on practice at using these skills. (This workshop was recorded 2 pm – 2:50 pm, 19 May 2021.)

Watch Turnitin rubric recording