System-wide assessment programs such as NAPLAN were introduced to determine the value added by schools to student learning. This study investigates how secondary school principals in NSW accommodate testing-based accountability within their views of learning. The findings indicate that these principals accept the notion of school accountability, though they reject reducing learning to a single score; they do not believe that test scores are adequate measures of student learning. The study offers deep insight into the thinking of these principals as they mediate between their beliefs about learning and the demands of assessment regimes.
@acueducationandarts @acuedle #edle682 interesting discourse here Masterclass. Keen to hear your responses to these super questions
“Wow!”, “Fantastic”, and “Inspirational” were words that filled the Twitter feed coming out of the latest Halifax Regional Centre for Education (HRCE) Innovation in Teaching Day (#HRCEID2018), held November 2 and 3, 2018. The primary cause of the frenzied excitement was a keynote talk by Brian Aspinall, author of the edtech best-seller Code Breaker, a teacher’s guide to training-up a class of “coder ninjas.” The former Ontario Grade 7 and 8 teacher from Essex County honed his presentation skills at TEDx Talks in Chatham and Kitchener and is now the hottest speaker on the Canadian edtech professional development circuit.
Mr. Aspinall, the #CodeBreaker, is a very passionate, motivational speaker with a truly amazing social media following. He built his first website in the 1990s before graduating from Harrow District High School, earned his B.Sc. and B.Ed. at the University of Windsor, and learned the teaching craft in the…
Design. Develop. Play. I am pretty excited to be presenting a short session about our Masterclass. The focus of the session is building communities of practice in the online mode of teaching and learning; notably, the subset is increasing learner engagement. I am utilising my current Masterclass #EDLE685 @acueduandarts. Their final piece was to blog here about the evaluative tools in their mini research project on leading evidence-based learning. Lots of layers here! Let’s see how many have engaged with this blog by the end of the weekend. I have 5 so far. Please blog, I need some data, and evidence! Let me know if bribes work.
During the course of Evidence-based Leading for Learning @ACUeducandarts #EDLE685, students have been invited to design a problem-based project. The problem or concern is situated in their learning communities. Their first step is to identify a problem and provide evidence that it is a problem! The problem needs to relate to student learning. Given this is a leadership unit, students also need to frame the problem from a leadership (L) perspective. The L perspective lies in the planning, implementation and evaluation of the project. In this blog, students are invited to share the evaluation techniques that they will employ.
Students: In the comment section, post your framework/model/figure (FMF) to demonstrate how you will evaluate your evidence-based project. Name the FMF; you are also invited to write a few sentences about it. Remember to acknowledge sources.
In the Master of Educational Leadership at Australian Catholic University we have a unit titled Evidence-based Leading for Learning – within the specialisation of Leadership for Learning.
At the risk of confusing students, I need to place a few thoughts down here, in case students have been thinking about these concerns as well.
While this is my first time teaching this unit, I have encountered several concerns in reaching a conceptual understanding of several terms.
Settling on terms: Evidence-based leadership / evidence-based learning?
The first concern is understanding the meaning behind the title of this Unit. The title Evidence-based Leading for Learning could have three interpretations: a. it could mean the evidence we know about learning and the leadership that springs from this; b. it could mean the evidence we know about ‘the leadership’ to lead learning; or c. it could mean both the evidence about learning and the evidence about the leadership of that learning (which is also evidence based – that is, what we know through evidence of what learning is, or is not). For our Unit’s purposes of interrogating the evidence areas of leadership and learning, and honouring that this is an educational leadership unit (as opposed to an education unit), the third interpretation is the preferred understanding. That is, we will be looking at both – the evidence of leadership and the evidence around learning in that leadership. For a further understanding of leadership visit this blog on ‘Leadership as a Verb’.
Distinguishing data from evidence
The second concern is the distinction between data and evidence. However, a lightning bolt about how to distinguish between the two concepts for the purposes of this unit woke me last night. I propose that once we have our data (drowning in it, some would say) and, through analysis of the data, arrive at our conclusions (or findings), we have our evidence – which we as educators may act upon or not. An easy idea for me to remember is that data do not speak for themselves – but evidence does.
I have taken several ideas from Clarke (2013) which I think align with the above thoughts and yet expand them further. First, he argues from the medicine (parent) discipline that evidence and data are distinguished from one another and, as Clarke argues, they must be.
Second, he sources the science discipline, where data become evidence when they stand in a particular testing relationship, such as to a hypothesis.
His third source is the philosophy discipline, where the distinction between evidence and data purports that evidence is an intelligible concept and data are not (that is, they are just that, ‘data’).
For Masters students @ACUeducationandarts in #edle682 and #edle685 (Leading Learning and Teaching and Leading Evidence Based Learning): check out Slaven’s commentaries on drawing upon empirical studies to inform practice. The blog leads to many interesting topics other than ‘Hattie is Wrong’. The comments by the medical researcher also had an interesting twist.
Our leaders should be held accountable! Our results should be better – if only teachers would be more accountable! These are common cries as the soft power of the OECD creeps into our psyche. However, as educators we are not necessarily clear about what we mean when we speak of accountability.
Stobart (2008) posits that ‘we are so familiar with accountability in many spheres of life that it is hardly defined’ (p. 117). Reviewing Herman and Haertel’s Uses and misuse of data for educational accountability and improvement (2005), he found no single formal definition of accountability, leaving us ‘assuming we know what it is’ (Stobart, 2008, p. 193).
This blog goes some way towards bringing clarity to the term. It settles on a definition of performance-based accountability in the context of Australian education. Speaking plainly, this is the accountability for performance results from external assessment programs such as NAPLAN and Year 12 exit instruments. The definition presented here is deliberately specific, avoiding general terms such as educational accountability. In Australia, educational accountability may include a multitude of accountabilities, such as bureaucratic and market accountabilities or work, health and safety accountabilities. Given that a readily formed definition of educational accountability is obscure (see inset: Stobart, 2008), definitions from other spheres are explored. For this blog, several definitions of accountability are analysed and synthesised into five key concepts, to form a usable definition which can be framed in educational terms.
The economic sector defines accountability as the obligation to provide information so that people can make informed judgements about the performance and financial position of an organisation (Halligan, 2007). In a corporate governance context, Huse (2005) defines accountability as defending one’s reasons for actions and supplying normative grounds by which they may be justified. Within a legal context, Bovens (2007) defines accountability as a ‘relationship between the actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement and the actor may face consequences’ (p. 450).
Gray’s (2002) definition in a social context, similar to that of Bovens (2007), is clear about the persons involved and the consequences that are to be faced. Accountability is explained in terms of individuals and organisations presenting an account of the actions for which society holds them responsible (Gray, 2002). Kuchapski’s (2001) definition, from a political context, specifically identified consequences as ‘redress’, defining accountability as those in office providing information, justifying and explaining and providing redress to the people. Coghill et al. (2006), in the economic context, similar to Gray (2002), included the notion of relationship in the sense of ‘direct authority’, defining accountability as the ‘direct authority relationship within which one party accounts to a person or body for the performance of tasks or functions conferred, or able to be conferred, by that person or body’ (Coghill et al., 2006, p. 457). In the educational context, the definitions are less specific, explaining accountability as the processes involved in meeting goals (Leithwood & Earl, 2000) or as regulations for measuring educational outputs (Rowe, 2005).
Five key ideas from this initial sample of definitions that were deemed useful for this study were drawn from Bovens (2007) and Kuchapski (2001): (a) disclosure, making information known (Kuchapski, 2001); (b) transparency, providing clarity about the disclosed information and ensuring that this information makes sense to those receiving it (Kuchapski, 2001); (c) consequences from the information disclosed, with some form of redress or appropriate action able to be taken from the disclosed information (Bovens, 2007; Kuchapski, 2001); (d) being obliged to explain and justify the information (Bovens, 2007); and (e) the notion of relationship between the person being held accountable and their constituency (Bovens, 2007).
These five understandings underpin the following definition of accountability, which works as a platform for settling on a definition:
Accountability is a relationship between a person who is held responsible for the delivery of certain outcomes (the actor) and the individuals and organisations from whom they receive their mandate for those outcomes. This relationship requires that the actor behaves transparently and discloses, explains and justifies their conduct and its outcomes in the area of the mandate, with the expectation that there will be consequences contingent on these.
The following definition reads the above definition through an educational lens. It situates the principal in the role of the actor, with school system leaders as the mandating authorities on behalf of the government.
Performance-based accountability is a relationship between the principal (person) who is held responsible for the delivery of favourable performance results from external assessment programs (certain outcomes) and the School system (individuals or organisations) from whom they receive their expectations (outcomes). This relationship requires that the principal (actor) behaves transparently and discloses, explains and justifies their ways of accounting for the performance results (conduct and its outcomes) in the area of the mandate, with the expectation that there will be consequences contingent on these.
Figure 1: Key elements of performance-based accountability
The principal, with educators, is the person who acts with responsibility to another. In other words, the accountability relationship is a responsibility relationship. The principal’s and educators’ role in the accountability relationship can be defined as being obliged to be transparent and to disclose and justify their performance results to their communities (parents, students, and state and federal authorities). In their accountability relationship, principals (as in the economic and political sectors) account in different directions: to the government or school system, to parents and students, and to teachers. Essential in the relationship is the element of redress. If there is no evidence of redress, even where redress was intended, it may be interpreted that there has been ‘no accountability’.
Bovens, M. (2007). Analysing and assessing accountability: A conceptual framework. European Law Journal, 13(4), 447–468.
Coghill, K., Crawford, D., Cunliffe, I., Grant, B., Hodge, G., Hughes, & Zifcak, S. (2006). Why accountability must be renewed: Discussion paper on reform of Government accountability in Australia [Workshop of Parliamentary Scholars and Parliamentarians (7th: 2006: Wroxton College, Oxfordshire).]. Australasian Parliamentary Review, 21(2), 10–48.
Gray, R. (2002). Thirty years of social accounting, reporting and auditing: What (if anything) have we learnt? Business Ethics: A European Review, 10(1), 9–15.
Halligan, J. (2007). Accountability in Australia: Control, paradox, and complexity. Public Administration Quarterly, 31(4), 453–479.
Huse, M. (2005). Accountability and creating accountability: A framework for exploring behavioural perspectives of corporate governance. British Journal of Management, 16, S65–S79.
Kuchapski, R. (2001). Conceptualizing accountability in education (SSTA Research Centre report). Saskatoon: University of Saskatchewan.
Leithwood, K., & Earl, L. (2000). Educational accountability effects: An international perspective. Peabody Journal of Education, 75(4), 1–18.
Organisation for Economic Co-operation and Development. (2016). OECD better policies for better lives. Retrieved from http://www.oecd.org/about/
Rowe, K. (2005, August). Evidence for the kinds of feedback data that support both student and teacher learning. Paper presented at the Australian Council for Educational Research 2005 Conference, Melbourne, VIC.
Stobart, G. (2008). Testing times: The uses and abuses of assessment. New York, NY: Routledge.
As this blog is situated in principals’ understandings of accountability, the meaning of ‘favourable’ is determined by the principal. Hence, favourability is a more useful term for describing performance results than high, mid or low results.
This post points to ways educational leaders, or the leadership activity[i], may release some of the external pressures in the pursuit of favourable performance results from Australian school systems. Since the introduction of the public disclosure of NAPLAN results (MySchool) in 2010, various empirical studies (Klenowski & Wyatt-Smith, 2011; McGuire, 2013) have demonstrated that school life for Australian students, parents and educators has changed. For some school communities this change has felt like a squeeze. Australian educators have moved from being accountable for inputs of resources to outputs of student performance results, as predicted by Rowe (2005). The impacts of high-stakes testing are often construed as consistent with those experienced by our global peers (‘Staking Australian educational accountability’), with negative impacts mutating our learning communities. However, few empirical studies reveal how educational leaders are part of this change process. While my research (2017) confirms some of the same negative impacts of a high-stakes testing regime as those reported by our global peers, such as over-preparation for tests, setting performance targets for teachers and students and ‘cherry picking’ students, I found an overwhelming number of principals who could meet high levels of accountability for student results while at the same time enacting their beliefs about learning.
A plethora of empirical and theoretical studies draw direct (Dinham, 2005, 2008; Robinson, 2011) and indirect parallels (Hattie, 2015; Le Fevre & Robinson, 2014; Sun & Leithwood, 2015) with regard to educational leaders’ ways of enabling learning and influencing teaching (Dinham et al., 2013). In my study I also found direct parallels with regard to principals’ styles and expressions of leadership, such as instructional leadership (Bendikson, Robinson, & Hattie, 2012; Brown & Chai, 2012), accountable leadership (Elmore, 2005), pedagogical leadership (Male & Palaiologou, 2012) and data-informed leadership (Pettit, 2010), in managing their external expectations for results and their fundamental beliefs about learning. Notably, Pettit’s research and ongoing work, set in NSW/ACT Australia, opened a research stream on the topic of educational leaders’ use of data from external assessment programs. The expectation of Catholic NSW school systems is that their principals are ‘Leaders of Learning’. Integral to this function is the expectation that principals will be accountable for learning and demonstrate how they utilise data to inform their practices. However, Pettit’s (2010) findings suggested that principals did not meet these expectations.
The findings by Norris (2017) offer an extension of Pettit’s work and demonstrate insights by drawing out significant acts of leadership through comparing and contrasting them with the extant literature.
There were several ways the educational leaders, principals in this case, met their accountability for results and at the same time felt enabled to enact their own beliefs about learning. They were:
- Positioning learning in the centre;
- Targeting their work with teachers about learning; and
- Implementing accountable processes as part of the learning cycle
1. Positioning learning in the centre
- Articulating a vision for learning
In this study, the principals who revealed that articulating a vision for learning was essential in their enactments of leading learning spoke more often about their knowledge of learning, different curriculum designs, working closely with teachers on learning projects and the learning process. In Hersey and Blanchard’s (1988) leadership framework, selling is a behavioural task. In this current study, ‘selling’ a vision for learning and informing people about it were often couched in persuasive terms and as a ‘selling’ task. These same principals reported negative impacts of external expectations less frequently, which may suggest that being able to articulate a vision of learning is also one step in influencing teachers. Most principals in this study were committed to ongoing professional learning about the learning cycle.
- Pursuing knowledge of learning
The principals in this study revealed that they pursued their knowledge of learning and teaching through formal postgraduate study (including doctoral studies), analysing empirical research about learning, reviewing other curriculum designs, schools and school systems (national and international) and engaging in peer leader conversations through meetings and conferences. While there appeared to be no definitive pattern regarding the impact of these activities on principals, those who sought out specific professional readings on learning outside of any formal study program were more likely to enact these learnings in their relationships with teachers, with reported influence: ‘When it comes down to it, it’s about learning. You can’t argue with that’ (Graham). These same principals noted that students’ results in external testing were not the full representation of learning.
- Building levels of self-efficacy in the knowledge of learning
Investigating the principals’ levels of self-efficacy was not an immediate goal of this study. However, principals’ self-efficacy levels seemed to influence their ways of leading learning. In turn, the findings suggested that the principals’ levels of self-efficacy in leading were influenced by their confidence levels with regard to understanding learning. George disclosed, ‘I am concerned about the results but I really don’t know enough about learning’. Self-efficacy is an important determinant of behaviour in educational leadership. A study by Tschannen-Moran and Gareis (2004) found that self-efficacy influences principals’ efforts, persistence and resilience in managing demands and expectations. McCollum and Kajs (2007), for example, found that the self-efficacy construct was relevant in a broad sense to principals’ abilities to lead schools. In comparing general literature on self-efficacy, this study could describe principals’ levels of self-efficacy as a self-referent construct (Ajzen, 1991) and describe self-efficacy as a leader’s confidence in their knowledge and skills (Schwarzer, 2014). While Lovell’s (2009) study found some relationship between leading and leaders’ levels of self-efficacy, he suggested that further research was needed to examine the relationship between principals’ sense of efficacy for instructional leadership and their sense of efficacy in enacting it. However, no studies were found in the literature about the influential relationship between principals’ processes of meeting accountability expectations and their self-efficacy.
In this study, the greatest influence on the principal’s agency in leading learning was found to be the principal’s perceived knowledge and skill regarding the teaching and learning processes. Graham reflected on his past: ‘Look, when I first came into the job I was told that [learning and teaching] was an area I needed to develop—and I did’. Graham and the leadership team relentlessly pursued learning ideas in various contexts, from e-pedagogies to agile learning spaces. In this study there appeared to be a dependent relationship between principals’ comfort or confidence in being accountable for results and leading learning, on the one hand, and their knowledge regarding learning and teaching on the other. Several studies point to similar dependent relationships in principals’ leadership. McCollum and Kajs (2007) found that the self-efficacy of principals was related to their confidence in their knowledge base and skill. Likewise, Nelson and Sassi (2005) and Stein and Nelson (2003) found that a barrier to more effective instructional leadership is the adequacy of leaders’ knowledge of teaching and learning processes. They found that leaders who demonstrated a lack of confidence were likely to be reluctant to observe teachers and give them feedback. Moreover, Spillane and Seashore Louis (2002) found that if leaders do not demonstrate knowledge and confidence, their chances of being influential with teachers are not high. If principals are to be influential in leading assessment-focused accountability (or, for want of a better term, ‘leading accountable learning’), then they need to be confident and convinced in enacting their own knowledge and skills regarding teaching and learning.
- Prioritising learning
When tasked with leading learning and meeting accountability expectations, educational leaders need to balance competing priorities (Leithwood, 2005), reorder goals (Seashore Louis & Mintrop, 2012) and be creative in integrating information (Thiel et al., 2012). Many studies on the topic of leader effectiveness in student learning outcomes offered insights for this current study (Hattie et al., 2015; Le Fevre & Robinson, 2014; Robinson et al., 2008). However, as noted earlier, there are few studies on the influence of educational leaders’ agency on teaching and learning while they are at the same time accountable for results. In this study, building credibility with teachers was seen as essential to leading learning in the principals’ accountability contexts (see Section b), and their self-efficacy levels were affected by their knowledge of learning. Therefore, these findings, alongside the literature, clearly suggest that if educational leaders are to have influence in leading assessment-focused accountability, they need to feel confident and convinced in enacting their own knowledge and skills in teaching and learning.
2. Targeting their work with teachers about teaching and learning
- Building credibility with teachers – knowing what they do and how they do it
Finding ways to influence teachers and their teaching requires educational leaders to build credibility with their teaching team. Educational leaders need to develop a deep understanding of what teachers do and how they do it. The principals in my study reported that to meet the accountability expectations for learning (both external and internal) they needed to be able to influence teachers and their teaching. Principals may have carried out this role either directly or indirectly. Raymond in the study, for example, delegated this function to others. This delegation is not unusual, as secondary school principals regularly devolve or distribute the tasks before them (Jäppinen & Maunonen-Eskelinen, 2011; Spillane, 2006). There are some studies that describe how educational leaders influence (or do not influence) teachers and teaching, as well as students and student learning (Hattie, Masters, & Birch, 2015; Le Fevre & Robinson, 2014; Robinson, Lloyd, & Rowe, 2008).
Building credibility and being credible are two essential leadership acts that influence teachers’ thinking and actions. I describe building credibility as the degree to which educational leaders, or the leadership activity, may influence teachers’ thinking and actions through the leader’s (or leadership activity’s) way of doing. Building credibility was reported (Norris, 2017) as being essential in influencing and persuading teachers in their thinking and acting, to meet the external accountability expectations yet remain true to the school’s internal learning goals. The level of principals’ credibility was reported in terms of the benefit teachers could see in meeting the expectations for favourable student performance results while at the same time pursuing their own commitments regarding teaching and learning. As such, the degree of credibility could be said to be determined by the teacher yet influenced by the views and actions of the leader(s) or leadership. The importance of building credibility with teachers and their teaching was mentioned often by early career leaders and those new to their school communities: ‘They don’t know me so I am not sure of my creds [credibility] yet’.
Some educational leaders build credibility by understanding their teachers’ work. Working beside teachers provides opportunities not only to be in close working relationships but also to know and remain current about the teachers’ work. Charmaine, in this study, described herself as ‘a hands-on leader’ and attended staff professional studies days, along with the teachers, as a part of the team. In this way, Charmaine was building and maintaining relationships through common tasks with equal power relationships. Educational leaders working beside teachers aligns with Hersey and Blanchard’s (1988) behavioural task of participating, which is described as shared decision making with regard to task accomplishment and fewer requests for a task to be completed, while maintaining high relationship behaviour. Similarly, Hargreaves (2015) asserted the importance of working together to remain strong for a common purpose. Franken, Penney, and Branson (2015) found that teachers were more likely to be influenced when they perceived that their middle leaders understood their aspirations and needs. Importantly, Robinson’s (2011) study found that the characteristic of being close to teachers and their learning resulted in better student outcomes. This research on working beside and being close to teachers, participating in the team and knowing teachers’ aspirations and needs suggests possible transference for educational leaders when tasked with being accountable for learning.
- Using data to inform learning and teaching plans
The majority of principals in my study were expansive about what data could offer, such as leverage for persuasion and, notably, the way data informed teaching and learning practices. Empirical research was also considered data. Most of the principals described the importance of using and personalising data (Kaufman, Graham, Picciano, Popham, & Wiley, 2014; Sharratt & Fullan, 2012), acknowledging the influence of educational research on their understandings of learning and the impact of teaching on student learning (Bendikson et al., 2012; Hattie & Timperley, 2007; Richmond, 2007; Timperley, 2007). A minority of principals used data only to inform performance target setting. Koretz’s (2008) study found that in regimes with higher-stakes consequences, accountability in driving for performance results gradually superseded the ‘diagnosis of the strengths and weaknesses of individual students’ learning’ (p. 47). In this current study, the principals who employed data for the purpose of performance target setting were also the principals who believed that the external expectations were a tool for judging and measuring them as principals. This was an important finding and supports Koretz’s study. The higher stake in this case was the principal thinking that their competency was being judged according to the students’ results in external testing. In this instance, these principals set targets for students’ performance results and grades. At the same time, these same principals displayed higher levels of anxiety and disclosed that they experienced symptoms of burnout and tiredness.
Principals in this study reported that students’ performance results were part of their learning goals, but only a small part. Joseph ‘tried a whole school approach—it’s a great scaffold for writing—so it will help in all subject areas but also should improve our results’. There were many studies about data informing leadership practices to address results, from No Child Left Behind (Anderson, Leithwood, & Strauss, 2010; Stobart, 2008) and Ofsted (Earl & Fullan, 2003) to NAPLAN (Carter, 2015; Harris et al., 2013; Klenowski & Wyatt-Smith, 2011), and there was one study about the HSC (DeCourcy, 2005). While the principals who were in the National Partnership school program verbalised the processes that they used to measure their performance growth, they also reported more esteem for the incidental learning that occurred and spoke more about the difficulties that arose when students’ performances were the only targets. This finding suggested that targets other than students’ performance results were needed. Other principals in the study carried out their evaluations according to their own learning goals, rather than basing them on improvements in students’ performance results. This represented an increase in data-informed practice, with the principals needing to present evidence not only of their implementation plans but also of the evaluations and outcomes of those plans. This magnified the level and specificity of data and accountability. Being capable, and confident in their capability, of utilising data to inform leadership practices and evaluate accordingly appeared to be linked to principals’ levels of self-efficacy.
- Teacher benefit and leaders working collectively with teachers
The teacher needs to see a benefit for their students in changing a teaching practice or using data from performance results (Dinham, 2008). Teachers are more likely to see a benefit if implementation plans pertaining to accountability for learning have been established collectively and agreed by community members. Principals in this study who described their pursuits in solo (I and me) terms were more likely to express frustration and anger about their attempts to persuade or influence teachers, or conversely to shield, buffer or ignore system expectations. One principal reported their solo pursuit as exhausting: ‘I have been going in this job now for [XX] years I don’t know how I will continue’ (Damien). Finally, students need to know clearly where the expectations for performance results sit in their learning cycle. This clarity can be achieved when teachers agree on the expectations and there is a unified approach from all teachers across all subject areas. Such collective agreements with a teaching team require effective leadership skills, fuelled by a vision and knowledge of learning and teaching.
3. Implementing accountable processes as part of the learning cycle
To release the hold of the accountability squeeze, educational leaders need to align accountable processes with their internal learning goals. Elmore (2005) found that educational leaders who were less inclined to pressure their teachers to teach to the test or to develop pseudo-curriculum were more likely to have already established internal accountable processes as part of the learning cycle. Elmore’s finding is congruent with my findings, where principals integrated the external demands into their existing school practices. In particular, they emphasised the importance of managing external school system expectations and internal learning commitments simultaneously. One such example was holding accountable conversations with teachers as part of their professional learning plans.
- Managing the external accountability expectations simultaneously with leading learning
Some educational leaders seamlessly integrate external expectations (favourable performance results) with their internal school learning goals. Participating principals in this research noted the importance of gaining collective agreements from teachers while simultaneously being responsive to the current sets of data and their analytical tools. This finding indicates that leaders need to develop the capability to build coherence between what is being asked of them and their own existing school commitments. In the study, most principals resisted the pressure to set performance results as their target, replacing it with broader learning goals. Leithwood and Riehl (2005) found that principals manage external and internal expectations by attending to some concerns and disregarding others, thus balancing competing demands and making choices. They may also reorder their goals (Seashore Louis, Knapp, & Feldman, 2012). In my study, the principals appeared to have developed sophisticated integration skills in the face of increasing accountability demands and expectations to implement school system-imposed programs, notably by prioritising certain expectations over others.
To avoid the accountability squeeze, leaders need not only to prioritise expectations but also to be competent in understanding information and integrating it into the current situation. Thiel et al. (2012) assert that one essential strategy for leaders making decisions is the capacity to integrate information from their environments. A certain amount of shrewdness on the part of leaders bolsters and enables an agile culture. Koyama (2014) found that principals negotiate and appropriate external accountability in innovative, ‘clever and savvy’ ways to meet multiple demands. One shrewd, clever and savvy strategy is for leaders to understand what is being asked of them yet suspend their expectations. Such suspension allows for adequate information integration, enabling educational leaders to be considered, yet creative, in their approaches. These creative and sensible approaches can occur when accountable conversations form part of the professional learning plan between teacher and ‘leader’.
- Holding accountable conversations
In the body of literature about educational accountability, little has been written, and less researched, about the conversations that occur between the teacher and the leader about students’ performance results. This paucity is surprising given the plethora of research about testing regimes and school systems’ expectations for favourable performance results.
Knowing teachers’ needs and motivations creates opportunities for leaders, or the leadership, to influence teachers and their teaching. Conversations based on inquiry help build leaders’ understanding. However, Le Fevre and Robinson (2014) found that principals demonstrated low to moderate capacity to hold conversations about performance; they were more skilled in advocating their own viewpoints than in inquiring into and checking their understanding of teachers’ views.
In this current study, DeCourcy data were esteemed by the participating principals, possibly because they could be accessed easily, were reported as uncomplicated and had few items to analyse. Additionally, the DeCourcy guides to the data were employed to help hold the accountable conversation, a function valued particularly by the cohort of principals who had been working with the tool for at least 10 years. The majority of principals in this study used DeCourcy data not only for a different analysis of the HSC results but also as a guide to the questions asked during review conversations with teachers. Only one principal in this study reported holding conversations focused on teacher developmental issues (outside of DeCourcy data) with regard to unfavourable performance results; Graham revealed that making the time and a structure for these conversations, even when they were difficult, resulted in positive outcomes. Le Fevre and Robinson (2014) found that one reason for educational leaders’ reluctance to address poor performance was their tendency to avoid negative emotions. However, addressing issues of performance is important. The implications of not doing so were noted by Bryk and Schneider (2002), who found that teachers’ (and parents’) trust in leaders is diminished when leaders avoid dealing with poor teacher performance or deal with it inadequately (resulting in no change). Hence, and possibly ironically, if principals avoid holding their teachers to account, this may decrease the community’s trust in their leadership. Until now there has been little research on the DeCourcy tool’s function as a guide for leaders’ questions, albeit here as a sidebar finding. Given its widespread adoption by NSW Catholic secondary schools, this could be a future research area, particularly the impact of the DeCourcy guide questions.
This post aimed to present ways that leaders may release the accountability squeeze. It demonstrated ways that educational leaders may integrate the external expectation of favourable performance results with their own ideas about learning and teaching. While these are not detailed strategies for releasing the squeeze, the literature, along with my study, underscores the importance of leaders actively working closely with the teaching team; knowing and understanding teachers’ work, needs and motivations; making sense of the external expectations; and integrating those expectations into the school’s current processes. Building credibility with teachers is an essential leverage point for those engaged in the leadership relationship. Positioning learning at the centre of the educational leader’s work is also essential as a platform for collectively engaging teachers and their teaching.
Ajzen, I. (1991). The Theory of Planned Behaviour. Organizational Behaviour and Human Decision Processes, 50, 179-211.
Bendikson, L., Robinson, V., & Hattie, J. (2012). Principal instructional leadership and secondary school performance.
Branson, C. M., Franken, M., & Penney, D. (2015). Middle leadership in higher education: A relational analysis. Educational Management Administration & Leadership, 1741143214558575.
Brown, G. T. L., & Chai, C. (2012). Assessing instructional leadership: a longitudinal study of new principals. [Article]. Journal of Educational Administration, 50(6), 753-772. doi: 10.1108/09578231211264676
Carter, M. G. (2015). A multiple case study of NAPLAN numeracy testing of Year 9 students in three Queensland secondary schools.
Dinham, S. (2008). The effects of quality teaching. Sydney: ACER.
Earl, L., & Fullan, M. (2003). Using data in leadership for learning. Cambridge Journal of Education, 33(3), 383-394.
Franken, M., Penney, D., & Branson, C. M. (2015). Middle leaders’ learning in a university context. Journal of Higher Education Policy and Management, 37(2), 190-203.
Harris, P., Chinnappan, M., Castleton, G., Carter, J., de Courcy, M., & Barnett, J. (2013). Impact and consequence of Australia’s National Assessment Program-Literacy and Numeracy (NAPLAN)-using research evidence to inform improvement. TESOL in Context, 23(1/2), 30.
Hattie, J. (2015). High-impact leadership. Educational Leadership, 72(5), 36-40.
Hattie, J., Masters, D., & Birch, K. (2015). Visible learning into action: International case studies of impact: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of educational research, 77(1), 81-112.
Hersey, P., & Blanchard, K. H. (1988). Management of organizational behavior: Utilizing human resources. Prentice-Hall.
Jäppinen, A.-K., & Maunonen-Eskelinen, I. (2011). Organisational transition challenges in the Finnish vocational education – perspective of distributed pedagogical leadership. Educational Studies, 38(1), 39-50. doi: 10.1080/03055698.2011.567024
Kaufman, T. E., Graham, C. R., Picciano, A. G., Popham, J. A., & Wiley, D. (2014). Data-Driven Decision Making in the K-12 Classroom Handbook of Research on Educational Communications and Technology (pp. 337-346): Springer.
Klenowski, V., & Wyatt-Smith, C. (2011). The impact of high stakes testing: the Australian story. Assessment in Education: Principles, Policy & Practice, 19(1), 65-79. doi: 10.1080/0969594x.2011.592972
Koyama, J. (2014). Principals as Bricoleurs Making Sense and Making Do in an Era of Accountability. Educational Administration Quarterly, 50(2), 279-304.
Le Fevre, D. M., & Robinson, V. M. (2014). The Interpersonal Challenges of Instructional Leadership Principals’ Effectiveness in Conversations About Performance Issues. Educational Administration Quarterly, 0013161X13518218.
Leithwood, K. (2005). Understanding successful principal leadership: progress on a broken front. Journal of Educational Administration, 43(6), 619-629.
Leithwood, K., & Riehl, C. (2005). What we know about successful school leadership. In W. Firestone & C. Riehl (Eds.), A new agenda: Directions for research on educational leadership. New York: Teachers College Press.
Lovell, C. W. (2009). Principal Efficacy: An Investigation of School Principals’ Sense of Efficacy and Indicators of School Effectiveness: ERIC.
Male, T., & Palaiologou, I. (2012). Learning-centred leadership or pedagogical leadership? An alternative approach to leadership in education contexts. International Journal of Leadership in Education, 15(1), 107-118.
McCollum, D. L., & Kajs, L. T. (2007). School administrator efficacy: Assessment of beliefs about knowledge and skills for successful school leadership. Advances in educational administration, 10, 131-148.
McGuire, R. (2013a). Australian government school principals respond to My School. Australian Educational Leader, 35(1), 12–16.
Nelson, B. S., & Sassi, A. (2005). The effective principal: Instructional leadership for high-quality learning: Teachers College Press.
Norris, J. (2017). Making Sense and Enacting Accountability Environments: A Grounded Theoretical Model of Principals’ Experiences of Accountability. (for EdD), Australian Catholic University, Unpublished.
Pettit, P. (2010). From Data-informed to data-led? School leadership within the context of external testing. Leading & Managing, 16(2), 90-107.
Richmond, C. (2007). Teach more, manage less: A minimalist approach to behaviour management: Scholastic.
Robinson, V. (2011). Student-Centred Leadership. San Fancisco: Jossey-Bass.
Robinson, V., Lloyd, C., & Rowe, K. (2008). The impact of leadership on student outcomes: An analysis of the differential effects of leadership types. Educational Administration Quarterly, 44(5), 635-674.
Schwarzer, R. (2014). Self-efficacy: Thought control of action: Taylor & Francis.
Seashore Louis, K., Knapp, M. S., & Feldman, S. B. (2012). Managing the intersection of internal and external accountability: Challenge for urban school leadership in the United States. Journal of Educational Administration, 50(5), 666-694.
Seashore Louis, K., & Mintrop, H. (2012). Bridging accountability obligations, professional values and (perceived) student needs with integrity. Journal of Educational Administration, 50(5), 695-726.
Sharratt, L., & Fullan, M. (2012). Putting FACES on the Data: What Great Leaders Do! : Corwin Press.
Spillane, J. P. (2006). Distributed leadership.
Stein, M. K., & Nelson, B. S. (2003). Leadership content knowledge. Educational evaluation and policy analysis, 25(4), 423-448.
Sun, J., & Leithwood, K. (2015). Leadership Effects on Student Learning Mediated by Teacher Emotions. Societies, 5(3), 566-582.
Thiel, C., Bagdasarov, Z., Harkrider, L., Johnson, J. F., & Mumford, M. (2012). Leader Ethical Decision-Making in Organizations: Strategies for Sensemaking. Journal of Business Ethics, 107, 49-64. doi: 10.1007/s10551-012-1299-1
Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2007). Teacher professional learning and development: Best evidence synthesis iteration. Wellington: Ministry of Education.
Tschannen-Moran, M., & Gareis, C. R. (2004). Principals’ sense of efficacy: Assessing a promising construct. Journal of Educational Administration, 42(5), 573-585.
[i] Leaders, leadership and leading are employed synonymously in this blog. I prefer to conceptualise leadership as a verb, a way of ‘doing’: projects or initiatives may be sparked by a ‘leadership team’ but, encouragingly, they start to take on a life of their own with teachers. On the other hand, leadership may be about the leader and their way of ‘being’. For more ideas around the conceptualisation of leadership click here (hyperlink)
While it is unclear what influence international organisations such as the OECD have on governing education, there is a growing interest in such influences, especially empirical comparisons with particular countries and jurisdictions, such as Finland, Singapore and Shanghai (Morgan & Shahjahan, 2014). The OECD has built on past successes and continues to ‘gain authority as an expert and resource for evidence based education policy’ (Morgan & Shahjahan, 2014, p. 194). The OECD, as described by Woodward (2009), operates through soft power and through ‘cognitive’ and ‘normative’ governance. Cognitive governance functions through the agreed values of the member nations. Normative governance, described as peer pressure, is perceived as vague (Woodward, 2009), yet it may hold the most influence because it ‘challenges and changes the mindsets’ of the member people (Sellar & Lingard, 2013, p. 715). This matters because of the influence the OECD may hold over the mindsets of federal policy makers, who may in turn influence school system leaders, the key regulating authorities for teachers and students in Australian schools.
The OECD uses the reports of the data from the Programme for International Student Assessment (PISA) to make recommendations to countries and jurisdictions, with certain effects on their policy directions (Breakspear, 2012). By 2015, more than 70 countries had taken part in the PISA survey, which has allowed the OECD to track progress and examine three areas: public policy issues in preparing young people for life; literacy in the ways that students apply their knowledge and skills in key learning areas; and lifelong learning, with students measured not only in their reading, mathematics and science literacy but also asked about their self-beliefs (retrieved from https://www.oecd.org/pisa/pisafaq/). Importantly, the paper ‘Beyond PISA 2015: A longer-term strategy of PISA’ (OECD, 2016) explains that the PISA assessment is a tool to enable governments to review their education systems. Of importance for this study is that our national and state policy makers are fuelled by initiatives such as the OECD’s PISA data to compare and contrast Australia with other countries (Lingard & Sellar, 2016). These comparisons and contrasts are likely to influence the directions of school systems and jurisdictions in Australia, as is occurring elsewhere.
Empirical research studies have compared international curriculum systems, using OECD reports from various jurisdictions (Creese, Gonzalez, & Isaacs, 2016). There is a recognition that international organisations contribute to the construction and continuation of evidence-based cultures, which, as Pereyra, Kotthoff, and Cowen (2011) assert, legitimises comparative data being employed as a tool to govern education. In essence, national policy makers now rely heavily on this comparative data to guide their educational directions (Breakspear, 2012; Morgan & Shahjahan, 2014). It is possible that using evidence-based cultures as a governing tool has become normalised, and this possibility was considered in this inquiry.
Interestingly, an OECD 2013 publication on the evaluation of school leaders advised on the ways head teachers (principals) should be appraised in terms of ‘fostering pedagogical leadership in schools’ (OECD, 2013). The priorities that school system leaders give to certain areas of leadership, such as pedagogical leadership and evidence-based leadership, can result in principals being evaluated on their students’ performance results, which may affect their ongoing tenure.
Table 1.1 provides an overview of the largest OECD-based studies in education. The table illustrates that most age groups were assessed in some form by these studies.
| Study | Age group | Subject areas | Sources of context information | Frequency | Global coverage |
|---|---|---|---|---|---|
| OECD PISA | 15 | Reading; collaborative problem solving (2015); problem solving (2012); financial literacy | Parents (optional); teachers (optional); school principals | Every 3 years since 2000 | OECD countries: 34; non-OECD participants: 40 |
| OECD PIAAC | 16–65 | Literacy; reading components; problem solving in technology-rich environments | The individuals who are assessed | Frequency to be decided | OECD countries: 24; non-OECD participants: 2 |
| OECD TALIS | Teachers of lower secondary education | The learning environment and working conditions of teachers | Teachers; school principals | 5 years between first 2 cycles | OECD countries: 16; non-OECD participants: 7 |
| OECD AHELO | University students at the end of their B.A. program | Generic skills common to all university students (e.g., critical thinking); skills specific to economics and engineering | — | Feasibility study carried out in 2012 | Institutions from 17 countries participated in the feasibility study |
Some commentators assert that the PISA program has helped to normalise the use of comparative data in education on the global stage (Sahlberg, 2011). Countries seek to understand why students in the top-performing education systems, such as Finland and Shanghai, perform so well in PISA testing. Others assert that bodies such as the OECD fuel national educational reforms, which the media keep in the public eye over the OECD triennial cycle (Bagshaw & Smith, 2016). These media assertions often disregard the incompatibility between the NAPLAN test (skills) and the PISA survey (applications of skills) (Lingard & Sellar, 2013) and other system impact factors on PISA data (Sellar & Lingard, 2013, p. 723), such as multicultural student demographics. As global studies increase in number and across sectors, they are often adopted by policy makers as benchmarks for comparative rankings, although the validity of such comparisons may at times be questioned owing to the impact factors at play.
The implication is that when national and state policy makers compare data with other countries, the trickle-down effect from national policy makers to school system leaders is likely to form part of the principals’ experiences of regulated assessment-focused accountability.
Bagshaw, E., & Smith, A. (2016, March 25, 2016). Education policy not adding up: OECD asks what’s wrong with Australia’s schools?, Sydney Morning Herald. Retrieved from http://www.smh.com.au/national/education/education-policy-not-adding-up-oecd-asks-whats-wrong-with-australias-schools-20160323-gnpno9
Breakspear, S. (2012). The Policy Impact of PISA: An Exploration of the Normative Effects of International Benchmarking in School System Performance. OECD Education Working Papers, No. 71. OECD Publishing (NJ1).
Creese, B., Gonzalez, A., & Isaacs, T. (2016). Comparing international curriculum systems: the international instructional systems study. The Curriculum Journal, 27(1), 5-23.
Lingard, B., & Sellar, S. (2013). ‘Catalyst data’: Perverse systemic effects of audit and accountability in Australian schooling. Journal of Education Policy, 28(5), 634-656.
Lingard, B., & Sellar, S. (2016). The Changing Organizational and Global Significance of the OECD’s Education Work. Handbook of Global Education Policy, 357.
Morgan, C., & Shahjahan, R. A. (2014). The legitimation of OECD’s global educational governance: examining PISA and AHELO test production. Comparative Education, 50(2), 192-205.
Nye, J. (2012). China’s soft power deficit: To catch up, its politics must unleash the many talents of its civil society. The Wall Street Journal.
OECD. (2013). Synergies for better learning: An international perspective on evaluation and assessment. Paris: OECD. Retrieved 29 October 2016, from http://www.oecd.org/edu/school/Synergies%20for%20Better%20Learning_Summary.pdf
OECD. (2016). Better policies for better lives. Retrieved 8 April 2016, from http://www.oecd.org/about/
Pereyra, M. A., Kotthoff, H., & Cowen, R. (2011). PISA under examination: Springer.
Sahlberg, P. (2011). Finnish lessons: Teachers College Press.
Schleicher, A. (2013). Beyond PISA 2015: A longer-term strategy of PISA. Paris: OECD. Retrieved from http://www.oecd.org/callsfortenders/ANNEX.
Sellar, S., & Lingard, B. (2013). The OECD and global governance in education. Journal of Education Policy, 28(5), 710-725.
Woodward, R. (2009). The Organisation for Economic Co-operation and Development (OECD). London: Routledge.
 Joseph Nye of Harvard University developed this concept to describe a way to ‘attract and co-opt’, rather than use force (hard power) (Nye, 2012).
 70 member countries of the OECD test 15-year-olds’ skills and knowledge (OECD, 2016).
 Sources of context information: refers to who is assessed and/or where it is assessed
 Program for the International Assessment of Adult Competencies
 Teaching and Learning International Survey
 Assessment of Higher Education Learning Outcomes