The Pressure Cooker: Australian Education


Introduction from forthcoming publication: The Principal’s Scorecard: Friend, Foe or Frenemy

At both a macro and a micro level, the Australian educational landscape is a pressure cooker of comparison and competition. At the macro level, Australian policy makers compare student performance results with international jurisdictions through testing instruments implemented by the Organisation for Economic Co-operation and Development (Gorur & Wu, 2014; Baroutsis & Lingard, 2016). Policy makers analyse the ‘high performing’ jurisdictions, searching for their magical ingredient (Sellar & Lingard, 2013). As experienced in Australian school systems, that ingredient may in turn be adopted and mandated (Harrison, 2008). State and territory governments also create their own regulated recipes for their school systems. In Australian jurisdictions, performance results (from the National Assessment Program – Literacy and Numeracy, NAPLAN) along with exit results can be publicly compared and ranked, whereby schools are judged as worthy or unworthy (Thompson & Tomaz, 2011). At the micro level, to sustain funding by securing enrolments, principals and their school systems market these results. Within this cooker of comparing, marketing and commodifying young people’s performance results, principals need to make decisions, often of an ethical nature.

Making decisions is fraught in these comparative cultures of performativity for any educational leader (Perryman, 2006, 2009). There are challenges for Australian principals when these external demands continue to drive for improved student performance (Duignan, Burford, d’Arbon, Ikin, & Walsh, 2003; Ehrich, Harris, Klenowski, Smeed, & Spina, 2015). The leadership challenge positions principals at the centre of reforms (Gawlik, 2015; Volante, 2012), which calls for an analysis of leaders’ internal processes (Brezicha et al., 2015; Clifford et al., 2012), notably how they reconcile external demands with their internal school commitments (Louis & Robinson, 2012). More than at any time in Australian educational history, a principal’s agency in producing successful student performances is public and, understandably, highly accountable (Boies & Fiset, 2018; Robinson, 2011; Sun & Leithwood, 2017).

EdTech Evangelism: Where are Coding Evangelists Leading Us?

@acueducationandarts @acuedle #edle682 interesting discourse here Masterclass. Keen to hear your responses to these super questions

Educhatter

“Wow!,” “Fantastic,” and “Inspirational” were words that filled the Twitter feed coming out of the latest Halifax Regional Centre for Education (HRCE) Innovation in Teaching Day (#HRCEID2018), held November 2 and 3, 2018. The primary cause of the frenzied excitement was a keynote talk by Brian Aspinall, author of the edtech best-seller Code Breaker, a teacher’s guide to training up a class of “coder ninjas.” The former Ontario Grade 7 and 8 teacher from Essex County honed his presentation skills at TEDx Talks in Chatham and Kitchener and is now the hottest speaker on the Canadian edtech professional development circuit.

Mr. Aspinall, the #CodeBreaker, is a very passionate, motivational speaker with a truly amazing social media following. He built his first website in the 1990s before graduating from Harrow District High School, earned his B.Sc. and B.Ed. at the University of Windsor, and learned the teaching craft in the…


EdtechPosium 2018: Oct 29 and 30

 

Design. Develop. Play. I am pretty excited to be presenting a short session about our Masterclass. The focus of the session is building communities of practice in the online mode of teaching and learning; notably, the subset is increasing learner engagement. I am utilising my current Masterclass #EDLE685 @acueduandarts. Their final piece was to blog here about the evaluative tools in their mini research project on leading evidence-based learning. Lots of layers here! Let’s see how many have engaged with this blog by the end of the weekend. I have 5 so far. Please blog; I need some data, and evidence! Let me know if bribes work.

Masterclass: Evidence-based Leading for Learning: Tools for evaluating an evidence-based project

During the course Evidence-based Leading for Learning @ACUeducandarts #EDLE685, students have been invited to design a problem-based project. The problem or concern is situated in their learning communities. Their first step is to identify a problem and provide evidence that it is a problem! The problem needs to relate to student learning. Given this is a leadership unit, students also need to frame the problem from a leadership (L) perspective. The L perspective is in the planning, implementation and evaluation of the project. In this blog, students are invited to share the evaluation techniques they will employ.

Students: In the comment section, post your framework/model/figure (FMF) to demonstrate how you will evaluate your evidence-based project. Name the FMF, and you are also invited to write a few sentences about it. Remember to acknowledge your sources.

Evidence-based Leading for Learning

In the Master of Educational Leadership at Australian Catholic University we have a unit titled Evidence-based Leading for Learning – within the specialisation of Leadership for Learning.

Concerning terms

At the risk of confusing students, I need to place a few thoughts down here, in case students have been thinking about these concerns as well.

While this is my first time teaching this unit, I have encountered several concerns in gaining some conceptual understanding of several of its terms.

Settling on terms: Evidence-based leadership / evidence-based learning?

The first concern is understanding the meaning behind the title of this unit. The title Evidence-based Leading for Learning could have three interpretations: (a) it could mean the evidence we know about learning and the leadership that springs from this; (b) it could mean the evidence we know about ‘the leadership’ needed to lead learning; or (c) it could mean both the evidence about learning and the evidence about the leadership of that learning (leadership which is itself evidence-based, that is, grounded in what we know through evidence of what learning is, or is not). For our unit purposes of interrogating the evidence areas of both leadership and learning, and honouring that this is an educational leadership unit (as opposed to an education unit), the third interpretation is the preferred understanding. That is, we will be looking at both the evidence of leadership and the evidence around learning in that leadership. For a further understanding of leadership, visit this blog on ‘Leadership as a Verb’.

Distinguishing data from evidence

The second concern is the distinction between data and evidence. However, a lightning bolt about how to distinguish between the two concepts, for the purposes of the unit, woke me last night. I propose that once we have our data (drowning in it, some would say) and, through analysis of the data, arrive at our conclusions (or findings), we have our evidence – which we as educators may act upon or not. An easy idea for me to remember is that data do not speak for themselves – but evidence does.

I have taken several of Clarke’s (2013) ideas, which I think align with the above thoughts and yet expand them further. First, he argues from the medical (parent) discipline that evidence and data are distinguished from one another and, as Clarke argues, must be.

Second, he sources the science discipline, where data become evidence when they stand in a particular testing relationship, such as to a hypothesis.

His third source is the philosophy discipline, where the distinction between evidence and data holds that evidence is an intelligible concept and data are not (that is, they are just that: ‘data’).

Brendan Clarke.
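To make the data-to-evidence move concrete, here is a minimal sketch. It is my own toy illustration, not drawn from Clarke (2013) or the unit materials: the scores, cohort names and the 0.05 threshold are all invented assumptions. The raw scores are ‘data’; only once they are placed in a testing relationship with a hypothesis does the resulting finding function as ‘evidence’ we may act upon or not.

```python
# Toy illustration (hypothetical data): raw scores are "data"; the finding
# produced by testing them against a hypothesis is the "evidence".
from scipy import stats

# Invented reading scores for two hypothetical teaching approaches.
cohort_a = [512, 498, 530, 487, 505, 521, 494, 509]
cohort_b = [538, 545, 520, 551, 533, 542, 527, 549]

# Hypothesis: the two cohorts perform differently.
# The t-test is the "testing relationship" that turns data into a finding.
t_stat, p_value = stats.ttest_ind(cohort_a, cohort_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:  # conventional threshold, chosen here for illustration
    print("Finding: the difference is unlikely to be chance alone; "
          "this conclusion, not the raw scores, is the evidence.")
else:
    print("Finding: no reliable difference detected in these data.")
```

The point of the sketch is simply that the lists of numbers say nothing by themselves; the hypothesis and the test give them a voice.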

From metaphors to mantras – principals making sense of and integrating accountability expectations: a grounded theoretical model

System-wide assessment programs such as NAPLAN are introduced to determine the value added by schools towards student learning. This study investigates how secondary school principals in NSW accommodate testing-based accountability within their views of learning. The findings indicate that these principals accept the notion of school accountability, though they reject reducing learning to a single score; they do not believe that test scores are adequate measures of student learning. The study offers deep insight into the thinking of these principals as they reconcile their beliefs about learning with the demands of assessment regimes.

Judith Norris, EdD (conferred April 2018)

Global Influences on Australian Educational Policy

While it is unclear what influence international organisations, such as the OECD, have on governing education, there is a growing interest in such influences (Morgan & Shahjahan, 2014), especially empirical comparisons with particular countries and jurisdictions, such as Finland, Singapore and Shanghai (Morgan & Shahjahan, 2014). The OECD has built on past successes and continues to ‘gain authority as an expert and resource for evidence based education policy’ (Morgan & Shahjahan, 2014, p. 194). The OECD, as described by Woodward (2009), operates through soft power[1] and through ‘cognitive’ and ‘normative’ governance. Cognitive governance asserts its function through the agreed values of the member nations. Normative governance, described as peer pressure, is perceived as vague (Woodward, 2009), yet it may hold the most influence because it ‘challenges and changes the mindsets’ of the member people (Sellar & Lingard, 2013, p. 715). This is important because of the influence the OECD may hold over the mindsets of federal policy makers, who may in turn influence school system leaders, the key regulating authorities for teachers and students in Australian schools.

The OECD uses the reports of the data from the Programme for International Student Assessment[2] (PISA) to make recommendations to countries and jurisdictions, with certain effects on their policy directions (Breakspear, 2012). By 2015, more than 70 countries had taken part in the PISA survey, which has allowed the OECD to track progress and examine three areas: public policy issues in preparing young people for life; literacy in the ways that students apply their knowledge and skills in key learning areas; and lifelong learning, with students measured not only on their reading, mathematics and science literacy but also asked about their self-beliefs (retrieved from https://www.oecd.org/pisa/pisafaq/). Importantly, the paper ‘Beyond PISA 2015: A longer-term strategy of PISA’ (Schleicher, 2013) explains that the PISA assessment is a tool to enable governments to review their education systems. Of importance for this study is that our national and state policy makers are fuelled by initiatives such as the OECD’s PISA data to compare and contrast Australia with other countries (Lingard & Sellar, 2016). These comparisons and contrasts are likely to influence the directions of school systems and jurisdictions in Australia, as is occurring elsewhere.

Empirical research studies have compared international curriculum systems, using OECD reports from various jurisdictions (Creese, Gonzalez, & Isaacs, 2016). There is a recognition that international organisations contribute to the construction and continuation of evidence-based cultures, which, as Pereyra, Kotthoff, and Cowen (2011) assert, legitimises comparative data being employed as a tool to govern education. In essence, national policy makers now draw heavily on these comparative data to guide their educational directions (Breakspear, 2012; Morgan & Shahjahan, 2014). It is possible that using evidence-based cultures as a governing tool has become normalised, and this possibility was considered in this inquiry.

Interestingly, a 2013 OECD publication on the evaluation of school leaders advised on the ways head teachers (principals) should be appraised in terms of ‘fostering pedagogical leadership in schools’ (OECD, 2013). The priorities that school system leaders give to certain areas of leadership, such as pedagogical leadership and evidence-based leadership, can result in principals being evaluated on their students’ performance results, which may affect their ongoing tenure.

Table 1.1 provides an overview of the largest OECD-based studies in education. The table illustrates that most age groups were assessed in some form by these studies.

Table 1.1
‘Beyond PISA 2015: A longer-term Strategy of PISA’ (Adapted from Schleicher, 2013, p. 8)

Study: OECD PISA
Age: 15
Subject areas: Reading; Mathematics; Science; Collaborative problem solving (2015); Problem solving (2012); Financial literacy
Sources of context information[3]: Students; Parents (optional); Teachers (optional); School principals
Frequency: Every 3 years since 2000
Global coverage: OECD countries: 34; non-OECD participants: 40 (PISA 2009)

Study: OECD PIAAC[4]
Age: 16–65
Subject areas: Literacy; Numeracy; Reading components; Problem solving in technology-rich environments
Sources of context information: The individuals who are assessed
Frequency: To be decided
Global coverage: OECD countries: 24; non-OECD participants: 2 (PIAAC 2011)

Study: OECD TALIS[5]
Age: Teachers of lower secondary education
Subject areas: Focuses on the learning environment and working conditions of teachers
Sources of context information: Teachers; School principals
Frequency: 5 years between the first 2 cycles
Global coverage: OECD countries: 16; non-OECD participants: 7 (TALIS 2008)

Study: OECD AHELO[6]
Age: University students at the end of their B.A. program
Subject areas: Generic skills common to all university students (e.g., critical thinking); Skills specific to economics and engineering
Sources of context information: Students; Faculties; Institutions
Frequency: Feasibility study carried out in 2012
Global coverage: Institutions from 17 countries participated in the feasibility study

 

Some assert that the PISA program has helped to normalise the use of comparative data in education on the global stage (Sahlberg, 2011). Countries seek to understand why students in the top-performing education systems, such as Finland and Shanghai, perform so well in PISA testing. Others assert that bodies such as the OECD fuel national educational reforms, which are kept in the public eye by the media across the OECD’s triennial cycle (Bagshaw & Smith, 2016). These media assertions often disregard the incompatibility between the NAPLAN test (skills) and the PISA survey (applications of skills) (Lingard & Sellar, 2013), as well as other system impact factors on PISA data (Sellar & Lingard, 2013, p. 723), such as a multicultural student demographic. As global studies increase both in number and in sectors, they are often adopted by policy makers as benchmarks for comparative rankings, although at times the validity of such comparisons may be questioned because of the impact factors at play.

The implication is that when national and state policy makers compare data with other countries, the trickle-down effect from national policy makers to school system leaders is likely to form part of principals’ experiences of regulated, assessment-focused accountability.

 

References

Bagshaw, E., & Smith, A. (2016, March 25). Education policy not adding up: OECD asks what’s wrong with Australia’s schools? Sydney Morning Herald. Retrieved from http://www.smh.com.au/national/education/education-policy-not-adding-up-oecd-asks-whats-wrong-with-australias-schools-20160323-gnpno9

Breakspear, S. (2012). The policy impact of PISA: An exploration of the normative effects of international benchmarking in school system performance (OECD Education Working Papers, No. 71). Paris: OECD Publishing.

Creese, B., Gonzalez, A., & Isaacs, T. (2016). Comparing international curriculum systems: the international instructional systems study. The Curriculum Journal, 27(1), 5-23.

Lingard, B., & Sellar, S. (2013). ‘Catalyst data’: Perverse systemic effects of audit and accountability in Australian schooling. Journal of Education Policy, 28(5), 634-656.

Lingard, B., & Sellar, S. (2016). The changing organizational and global significance of the OECD’s education work. In Handbook of Global Education Policy (p. 357).

Morgan, C., & Shahjahan, R. A. (2014). The legitimation of OECD’s global educational governance: examining PISA and AHELO test production. Comparative Education, 50(2), 192-205.

Nye, J. (2012). China’s soft power deficit: To catch up, its politics must unleash the many talents of its civil society. The Wall Street Journal.

OECD. (2013). Synergies for better learning: An international perspective on evaluation and assessment. Paris: OECD. Retrieved 29 October 2016, from http://www.oecd.org/edu/school/Synergies%20for%20Better%20Learning_Summary.pdf

OECD. (2016). Better policies for better lives. Retrieved 8 April 2016, from http://www.oecd.org/about/

Pereyra, M. A., Kotthoff, H., & Cowen, R. (2011). PISA under examination. Springer.

Sahlberg, P. (2011). Finnish lessons: What can the world learn from educational change in Finland? New York: Teachers College Press.

Schleicher, A. (2013). Beyond PISA 2015: A longer-term strategy of PISA. Paris: OECD. Retrieved from http://www.oecd.org/callsfortenders/ANNEX.

Sellar, S., & Lingard, B. (2013). The OECD and global governance in education. Journal of Education Policy, 28(5), 710-725.

Woodward, R. (2009). The Organisation for Economic Co-operation and Development (OECD). London: Routledge.

 

[1] Joseph Nye of Harvard University developed this concept to describe a way to ‘attract and co-opt’, rather than use force (hard power) (Nye, 2012).

[2] More than 70 countries and economies take part in PISA, which tests 15-year-olds’ skills and knowledge (OECD, 2016).

[3] ‘Sources of context information’ refers to who is assessed and/or where the assessment takes place.

[4] Programme for the International Assessment of Adult Competencies

[5] Teaching and Learning International Survey

[6] Assessment of Higher Education Learning Outcomes

Public Purposes of Education

Helpful for EDLE682 Leading Learning and Teaching, Master of Educational Leadership @ Australian Catholic University

Leadership Threads


The public purposes of education across the Western world are based on common ideologies of social justice, liberty and equity (Wiseman, 2010). However, certain purposes gain dominance because of political processes that reflect the climate of that time in history (Gunzenhauser, 2003; Reid, Cranston, Keating, & Mulford, 2011). There is evidence that certain accountability arrangements, such as high-stakes testing in particular jurisdictions, have failed when accounting for learning (Siegel, 2004). In Australia, the lack of alignment between the purposes of education and the federal and state arrangements of educational accountability affects the way some educational leaders perceive their leadership responsibilities, notably their accountability for learning (Cranston, Reid, Mulford, & Keating, 2011). Pertinent to this blog are educational leaders’ ideologies about the purposes of education and the ways these could affect their interpretations of leading learning.
