Thread: Stakes in education: Medium or rare?

‘The most pervasive and damaging myth, however, is that all children are motivated by competition’ … High-stakes testing in schools is based on the premise that if the student competes with others, they will do better (Richard Lavoie).

A term rarely left out of accountability discussions is ‘stakes’. The term is relative and depends on who is reporting on the stake. That is, what is ‘at stake’ says more about the person’s judgment of the context than about the stake itself. The Year 12 (exit) results in Australia hold significant consequences for students’ career trajectories and arguably provide fodder for a school’s marketing opportunities. However, the literature says little about these consequences amounting to a high-stakes exercise. In Australia, external tests cloak some school systems like old cardigans (the NSW HSC exam has close to a 40-year history), with high-stakes tests being the cloth worn as a symbol of a normalised culture (Ayres, Sawyer & Dinham, 2004). That is, the stakes associated with the HSC have become normalised to the point where the literature rarely describes them as high stakes. HSC designers and implementers (educators) espouse its credibility as a rigorous and arguably superior instrument compared with other exit credentials from other Australian jurisdictions and programs (e.g. the Diploma of the International Baccalaureate). NAPLAN, on the other hand, has been tarred with accusations of damaging schools, cheating, teaching to the test and developing a ‘naplanish pseudo curriculum’.

Since the introduction of NAPLAN testing and the public disclosure of its results, multiple references in the Australian literature have claimed that NAPLAN testing, and the consequences of this disclosure, is now a high-stakes accountability exercise (Klenowski & Wyatt-Smith, 2012; Lewis & Hardy, 2015; Lingard, Thompson, & Sellar, 2015; Smeed, Spiller, & Kimber, 2009). This is ironic, as the Year 12 exit credentials across Australia carry more at stake for the individual student than the disclosure of NAPLAN results, which is limited to aggregated student results telling a certain story about the school, not the student. The research arising from NAPLAN and the disclosure of results through MySchool is a hundred-fold greater than the one or two sources about Year 12 exit exams. One possible reason is that a school’s reputation is considered more important, or a ‘higher stake’, than an individual student’s post-school options. Another is that policy makers remain steadfast in their view that holding schools publicly to account (inevitably high stakes) will drive school improvement.

While there are few definitions of the terms low or high stakes in the research literature, where they are mentioned the term stake is used interchangeably with ‘consequence’ (Jacob, 2005; Stobart, 2008). If we apply this understanding of a stake to this study’s definition of performative accountability, then the consequence (stake) is the outcome that follows from how favourably the performance results from the external test are judged. While consequences in accountability regimes are often described in terms of low or high stakes, the determination of the stake, like the performance results themselves, is often relative and subjective. Nevertheless, in low-stakes environments it stands to reason that there are few or no consequences from the regulation of performance results (Klinger & Rogers, 2011). One such low stake is accounting to the school community, in general terms, for annual learning goals in the form of the Annual Report (NSW Government, 2020). In other contexts, however, the consequences are classified as high (or extremely high) stakes (Stobart, 2008).

Some Australian educational scholars, such as Klenowski and Wyatt-Smith (2012), Smeed et al. (2009), Reid (2011) and Hardy (2015), describe the consequences of the public disclosure of student performance results from NAPLAN testing as high stakes with detrimental consequences. For Australian education, consequences such as low staff and student morale and loss of enrolments would suggest high stakes. However, some US jurisdictions have experienced greater consequences than these from the public disclosure of students’ performances, such as loss of enrolments (and therefore funding), school closures and loss of employment (Perryman, 2006; Shipps & White, 2009; Stobart, 2008). Hence, the interpretation of stakes in the literature appears relative to the experiences of those reporting them. To date in Australia, unlike among our global peers, there have been no high-stakes consequences such as performance pay, school closures or loss of employment resulting from the nation-wide test results.

In the early years of NAPLAN testing in Australia, following the first public disclosure of results (2010), distinct differences were observed across school sectors in educators’ reactions to, and descriptions of, NAPLAN testing and its consequences. These differences were marked in the first inquiry into NAPLAN testing (Senate References Committee on Education, 2010), with primary school principals by far the most disaffected group as a result of the initial testing and the subsequent public disclosure of results. Secondary school principals featured less in the initial research (Klenowski & Wyatt-Smith, 2012), but they were included in the research from an ethical leadership perspective by Ehrich, Harris, Klenowski, Smeed, and Spina (2015). It is reasonable to propose that secondary school principals had become normalised to external testing and the disclosure of results. NAPLAN results themselves hold fewer consequences in their minds for enrolments; the greater stake appears to be the Year 12 results. The public disclosure of these Year 12 results is high stakes for both the students and the school: the students’ post-school options and the school’s marketing opportunities, externally and internally. Of importance here is the influence that an external test has on the minds of educational leaders and their actions. Notably, the higher the stake, the greater the need to resolve the ensuing issues. Irrespective of the relativity of the stakes, the evidence is strong that high-stakes consequences are likely to present complications for school communities. International studies have shown that educational accountability systems that regulate outcomes through performance-based mechanisms (PBMs) with high-stakes consequences have undesirable outcomes for students, teachers and leaders (Stobart, 2008; Darling-Hammond, 2010).

References

Ayres, P., Sawyer, W., & Dinham, S. (2004). Effective teaching in the context of a grade 12 high-stakes external examination in New South Wales, Australia. British Educational Research Journal, 30(1), 141-165.

Ehrich, L. C., Harris, J., Klenowski, V., Smeed, J., & Spina, N. (2015). The centrality of ethical leadership. Journal of Educational Administration, 53(2), 197-214.

Jacob, B. A. (2005). Accountability, incentives and behavior: The impact of high-stakes testing in the Chicago Public Schools. Journal of Public Economics, 89(5-6), 761-796.

Klenowski, V., & Wyatt-Smith, C. (2012). The impact of high stakes testing: The Australian story. Assessment in Education: Principles, Policy & Practice, 19(1), 65-79. doi:10.1080/0969594X.2011.592972

Klinger, D. A., & Rogers, T. (2011). Teachers’ perceptions of large-scale assessment programs within low-stakes accountability frameworks. International Journal of Testing, 11(2), 122-143. doi:10.1080/15305058.2011.552748

Lewis, S., & Hardy, I. (2015). Funding, reputation and targets: The discursive logics of high-stakes testing. Cambridge Journal of Education, 45(2), 245-264.

Lingard, B., Thompson, G., & Sellar, S. (2015). National testing from an Australian perspective. National Testing in Schools: An Australian Assessment, 1.

NSW Government. (2020). Annual Reports. Retrieved from https://education.nsw.gov.au/about-us/strategies-and-reports/annual-reports

Perryman, J. (2006). Panoptic performativity and school inspection regimes: disciplinary mechanisms and life under special measures. Journal of Education Policy, 21(2), 147-161. doi:10.1080/02680930500500138

Reid, A. (2011). The NAPLAN Debate. QTU Professional Magazine (November, 2010).

Senate References Committee on Education, Employment and Workplace Relations. (2010). Administration and reporting of NAPLAN testing. Canberra: Senate Printing Unit.

Shipps, D., & White, M. (2009). A new politics of the principalship? Accountability-driven change in New York City. Peabody Journal of Education, 84(3), 350-373. doi:10.1080/01619560902973563

Smeed, J., Spiller, K., & Kimber, M. (2009). Issues for principals in high-stakes testing. Principal Matters, 81, 32-34.

Stobart, G. (2008). Testing times: The uses and abuses of assessment. New York: Routledge.