How To Hire Teachers Remotely with Virtual Interviews

Every summer, schools hire thousands of teachers to fill last-minute vacancies. This year, in the midst of the COVID-19 pandemic, we may see additional vacancies as teachers choose to retire or find other work rather than return to the classroom.

In most areas, meeting in person is still an iffy prospect. How can schools hire high-quality teachers without conducting face-to-face interviews?

And how can we conduct the process inclusively and efficiently, given the large number of people involved in filling multiple positions?

While conducting interviews remotely via videoconference is an obvious solution, we can improve the entire hiring process—from screening, to interviewing, to making decisions—with a bit of forethought.

Live Video Interviews: Keeping It Simple with Zoom

A basic approach is to establish a single Zoom meeting for a group of back-to-back interviews, and share the link with each candidate.

When it's time for a candidate to interview, simply admit them to the Zoom meeting, and remove them at the end of the interview.

Your interview team can remain in the Zoom meeting for discussion and coordination before and after each interview.

This approach is much simpler than creating a separate Zoom link for every candidate—an arrangement that would leave your team scrambling to find the right link at the right time.

Note: if you're using a platform other than Zoom, make sure it has a “waiting room” feature so candidates don't accidentally join during each other's interviews.

But live video interviews happen toward the end of the process. How can we make every stage of hiring more efficient, inclusive, and evidence-based?

Application Screening: Add Video for Better Decisions

If you have a large number of candidates, you'll first need to narrow down your list based on the application materials they've submitted.

The application screening process traditionally relies on a written application, résumé, and cover letter, which provide basic facts but very little sense of what a candidate is like in person.

How can you give your hiring team a richer impression of the applicants before deciding who to invite for an interview?

If you currently ask screening questions, you can easily convert one or more of them from written response to video response.

A great way to manage these responses—and organize your entire hiring process—is in a Sibme Assessment Huddle.

While most schools use Sibme for video-based coaching and professional growth, it works great as a hiring platform, too.

Simply create a new Assessment Huddle with the application instructions:

Not a Sibme user? Sign up for a free trial »

Then, invite applicants to log in and submit their materials:

Applicants can upload videos and documents to their private workspace, then submit them via the Assessment Huddle:

As each candidate submits their materials, you'll be able to view and comment on them.

Each member of your hiring team can leave their own comments, so you can collaborate asynchronously—no need to have everyone gather at the same time to review applications.

Comments on videos are timestamped, allowing you to make evidence-based decisions focused on the substance of each candidate's responses.

You can even create your own rubric, with a custom scale and criteria:

You can then have each member of your hiring team use this rubric to tag specific evidence from applicants' videos, and rate the candidate according to the rubric.

This holds your hiring team accountable for making evidence-based decisions, and makes the basis for ratings clear.

Finally, you can review all of the evidence and apply an overall rating for each candidate, so you can decide who will be invited to interview for the position.

These features were designed to meet the needs of university-based teacher preparation programs, and they work perfectly for remote hiring.

Live Interviews: Getting More Input from More Staff

When it's time to conduct interviews, you can use a live videoconference tool like Zoom, but how can you get input from as many people as possible?

Over the summer, it's usually fairly difficult to convene a large interview team that's representative of your school community.

Some people may be able to provide input, but not join in real time.

With video-based interviews, people don't have to be available at a certain time to be part of the decision-making process.

You can use a small team to greet candidates and ask questions, then upload the recording to a Sibme huddle for broader input from your hiring team.

Simply create a new Collaboration Huddle in Sibme and invite the members of your hiring team:


Then, copy each candidate's application materials and interview videos to the Collaboration Huddle so your team can provide input.

You can even clip out specific sections of a longer video, right within the Sibme app. For example, if you recorded all of your interviews in a single Zoom file, you can clip out one candidate's interview, or just a single response.

No need for special editing software or multiple files on your computer—simply upload the video once, and crop it into as many separate clips as you need.

Then, your hiring team members can log in on their own schedule and leave timestamped comments to provide their input—again, using the rubric you've created to make the process transparent and fair.

Establish Buy-In From The Start with Shared Criteria

Hiring over the summer can be contentious because people want to have input, but aren't always available, and vacancies can arise without much warning.

When no one has had the chance to think about or discuss what they're looking for in a new member of the staff, it can be hard to agree on the best candidate—especially when you have several strong applicants.

To prevent this problem, start the hiring process by asking your staff for input on the characteristics of the ideal candidate.

You can post a discussion in your Sibme Collaboration Huddle, and staff can leave nested comments to discuss the hiring process without the need for a meeting.

Email notifications can keep everyone in the loop, and you can easily reply from the Sibme mobile app.

You can even use Discussions to get input as you develop a custom rubric for evaluating applicants at each stage of the hiring process.

Transparency Leads to Better Hires

Giving staff more visibility and input into the hiring process can pay huge dividends in both trust and decision quality.

Simply put, you'll make better hires, with more buy-in from staff, if you have a transparent, inclusive hiring process during the summer months.

Too often, summer hiring is conducted by the principal and a skeleton crew of other staff—who may have been picked for no reason other than their availability on short notice.

Especially if you have concerns about bias—for example, if one of the applicants is a friend of an existing staff member—it's critical to establish clear criteria in advance, and to make transparent, evidence-based decisions at each stage of the process.

If you aren't already using Sibme to manage your hiring process, request a free trial here.

Low Walls for Protecting Time for Classroom Visits

Are you on track to get into classrooms 500 times a year?

Making 3 visits a day is all it takes—and also happens to be the sweet spot:

  • If you have 30 teachers, 3/day gets you around to everyone every 10 days
  • You'll never go more than 2 weeks between visits to the same classroom
  • You'll see each teacher 18 times a year, instead of just a handful of times
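The arithmetic behind those numbers is easy to verify. Here's a quick sketch, assuming the 180-day school year that the "18 times a year" figure implies:

```python
# Quick check of the classroom-visit math (illustrative numbers).
SCHOOL_DAYS = 180      # assumed length of the school year
TEACHERS = 30          # staff size from the example above
VISITS_PER_DAY = 3

cycle_days = TEACHERS / VISITS_PER_DAY          # days to reach every teacher once
visits_per_teacher = SCHOOL_DAYS / cycle_days   # visits each teacher gets per year
total_visits = SCHOOL_DAYS * VISITS_PER_DAY     # total visits per year

print(cycle_days)           # 10.0 -> everyone, every 10 school days
print(visits_per_teacher)   # 18.0 -> 18 visits per teacher per year
print(total_visits)         # 540  -> comfortably past 500
```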

Imagine the impact it would have.

Imagine the feedback you could give.

You'd know so much more about your school, and you'd be known so much better by students and staff.

If you're on track for 500, leave a comment below and let me know!

If not, why not? Let me guess…

Whenever I ask administrators “What's keeping you from getting into classrooms?” the answer is always the same set of “external” challenges:

  • Lack of time
  • Interruptions and emergencies
  • Discipline issues and parent needs
  • Overwhelming levels of email and paperwork

We all know if we want to get into classrooms consistently…we'll never find time. It's all spoken for.

We'll have to make time.

All it takes is 10-15 minutes, three times a day, to visit classrooms 500x/year.

It's absolutely doable—if we develop systems for handling the predictable issues we tend to face.

We know email and paperwork can wait a few minutes, but we're often afraid to make other people wait.

We're afraid things might fall apart if we make ourselves unavailable:

  • Emergencies and student discipline issues won't get handled
  • Teachers and office staff will feel unsupported
  • We'll seem aloof and out of touch with staff and parents

And those are all reasonable fears. 

It's not a great idea to say “No one can ever interrupt me, under any circumstances, when I'm in classrooms.”

Try that on the fire department next time they show up 🙂

So here's what I recommend instead…

Rather than protecting your time with impenetrable “castle wall” systems…

The kind of rigid policies that keep everyone out, no matter what, like a shark-filled moat…

…we need a gentler, more flexible approach to protecting our time in classrooms.

We need systems for preventing every issue from becoming an emergency…while ensuring that everything still gets handled.

Think of these systems as low walls with gates, like you might find on a farm. 

A stone-walled pasture is no fortress, but it's pretty good for keeping everything where it's supposed to be. 

We can protect our time for classroom visits with systems that work the same way:

  1. Anticipate the interruption
  2. Have a designated person (often the office staff) who can triage the interruption, and either…
  3. Decide it can wait until you return, or…
  4. Interrupt you if necessary

For example, you don't need to be totally unreachable by phone…but your office staff should be able to intercept 80-90% of the callers who ask for you, so your classroom visits don't get interrupted by non-emergencies. 

Perhaps the entire staff has your cell number right now, so you're getting emergency and non-emergency calls while you're trying to visit classrooms. 

You might need to tell people you'll only answer calls from the office—so everyone else should just call the office if they need you. 

Good “low wall” systems combine people and policies to minimize interruptions. 

The “walls” don't need to be especially high—just high enough to direct most people to the correct “gate.”

Real emergencies can always go over the “wall” and interrupt you.

For example, think about discipline situations.

Sometimes, they're true emergencies—when someone is out of control, or someone is getting hurt. 

In those cases, I want my classroom visits to be interrupted, because dealing with the emergency is more important. 

But a lot of discipline situations are NOT true emergencies…

Yes, you need to deal with them, but it's no big deal if the students have to wait 5 extra minutes to see you.

(In fact, those 5 extra minutes might give them a chance to calm down!)

So you can finish up your classroom visit, without getting interrupted, because you have a “low wall” system to handle the situation in the meantime.

What if the situation escalates? Can people still leap over the wall and reach you faster?

Yes, that’s always an option—and that’s why there’s no real risk of becoming aloof or unresponsive with systems like this.

If the kid waiting in the office is starting to cause problems for your office staff, they can always call you to come down faster.

The policies should also protect the people who are protecting your time. 

With low walls—modest barriers protecting our time and attention—we can see what’s happening on the other side, and we ourselves can leap over to lend a hand when necessary.

Make sense?

So if you're finding it hard to get into classrooms—even under ideal circumstances—know this: you are 100% normal.

If you feel too busy and too overwhelmed to get into classrooms more, I'm here to give you two things:

  1. Reassurance that nothing is wrong with you. You're doing great. This is a tough job.
  2. Hope that you can make a change—that you can choose to get into classrooms far more. 

You absolutely CAN get into classrooms three times a day, every day.

I know you can, because I get emails like this all the time:

“Hey Justin, our school year is 1/3 over (around 60 days) and my classroom visit count is at 163, and I've got 3 more to do today.  My count's a little low, but it's made a huge difference in my relationship with the students already.  Now when I walk in it's like I'm just another member of the class, it's no big deal.
Thanks for the challenge!” —Kelly

How wild is this: Kelly wasn't on track to hit 500 visits/year…just 489 :). 
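The 489 comes from a simple projection of Kelly's pace—a back-of-the-envelope sketch, assuming the 180-day year implied by "1/3 over (around 60 days)":

```python
# Project Kelly's year-end visit count from the pace so far (illustrative).
visits_so_far = 163
days_so_far = 60
school_days = 180  # assumption: "1/3 over (around 60 days)"

projected = visits_so_far / days_so_far * school_days
print(round(projected))  # 489
```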

You can make a commitment to yourself, your teachers, and your students, to finally start getting into classrooms…

  • Not just once in a while…
  • Not just “as time allows” (whatever that means)… 
  • …but consistently—3 times a day, 500 times a year. 

I know…that might sound far-fetched. Perhaps it sounds impossible from where you sit. 

But I promise you this—someone:

  • Who isn't as smart or hard-working as you
  • Who has a harder job than you
  • Working with fewer resources

…is getting into classrooms more than you are. 

How is that possible?

Simple: They have a good plan. They're following a proven model.  

See, it's not a lack of talent or expertise that's keeping you from getting into classrooms every day.

You are more than enough.

It simply comes down to following a plan—a system—that's already working in thousands of schools.

So, use the “low walls” approach above to start dealing with some of those external barriers—the interruptions that are pulling you away from classrooms—and let me know how it goes.

The Evidence-Driven Leadership Manifesto

K-12 leaders must abandon the delusion of “data-driven” decision-making, and instead embark on a serious evidence-driven overhaul of learning, teaching, and leadership. 

With learning in particular, we have replaced too much content with “skills” that aren’t really skills. We’ve tried to get students working on higher-order cognitive tasks without giving them the knowledge they need to do those tasks.

As a result, we don’t really have much evidence of learning. And without evidence of learning, we have a limited ability to connect teacher practice to its impact on student learning. When we don’t know how we’re impacting teacher practice, we don’t have evidence of improvement. We have only data—and data can’t tell us very much. 

The Data Delusion

Beginning in earnest with the passage of No Child Left Behind, our profession embarked on a decades-long crusade to make K-12 education “data-driven,” a shift that had already been underway in other fields such as business and public health. 

To be sure, bringing in data—especially disaggregated data that helps us see beyond averages that mask inequities—was an overdue and helpful step. But we went too far in suggesting that data should actually drive decisions about policy and practice. Should data inform our decisions? Absolutely. But the idea that data should drive them is absurd.

Imagine making family decisions “driven” by data. Telling your spouse “We need to make a data-driven decision about which grandparents to visit for the holidays” is both unworkable on its face, and an approach that misses the point of the decision. Data might play a role—how much will plane tickets cost? How long of a drive is it? How many days do we have off from school?—but letting data drive the decision would be wrong. 

And we’d never advise our own teen, “Honey, just make a spreadsheet to decide who to take to prom.” I’m sure we all know someone who did that, but it’s not a great way to capture what really matters. 

We know intuitively that we make informed judgments holistically, based on far more than mere data. Yet it’s a truth we seem to forget every time we advocate for data-driven decision-making in K-12 education. 

Knowing Where We’re Going: The Curriculum Gap

The idea of collecting data becomes even more ridiculous when we consider the yawning gap in many schools: a lack of a guaranteed and viable curriculum. When we aren’t clear on what teachers are supposed to be teaching—and what students are supposed to be learning—teaching is reduced to abstract skills that can supposedly be assessed by anyone with a clipboard. 

No less a figure than Robert Marzano has stated unequivocally: “The number one factor affecting student achievement is a guaranteed and viable curriculum” (What Works in Schools, 2003). Historically, having a guaranteed and viable curriculum has meant that educators within a school would generally have a shared understanding of what content students would be taught and expected to learn. 

For some reason, though, the idea of content has fallen out of fashion. We’ve started to view teaching as a skill that involves teaching skills to students, rather than as a body of professional knowledge that involves teaching students a body of knowledge—and layering higher-order intellectual work on top of that foundation. 

I attribute this fad to a popular misconception about Bloom’s Taxonomy (or if you prefer, Webb’s Depth of Knowledge): the idea that higher-order cognitive tasks are actually better, and shouldn’t be just the logical extension of more foundational tasks like knowing and comprehending, but should actually replace them. 

This misconception has spread like wildfire through the education profession because next-generation assessments—like those developed by the PARCC and SBAC consortia to help states assess learning according to the Common Core State Standards—require students to do precisely this type of higher-order intellectual work. 

There’s nothing wrong with requiring students to do higher-order thinking—after all, if high-stakes tests don’t require it, it’s likely to get swept aside in favor of whatever the tests do require (as we’ve seen with science and social studies, which have been de-emphasized in favor of math and reading). 

The problem is that we’re no longer clear about what knowledge we want students to do their higher-order thinking on—largely because the tests themselves aim to be content-neutral when assessing these higher-order skills. 

Starting in the 1950s, Benjamin Bloom convened several panels of experts to develop the first version of his eponymous taxonomy:

It’s no accident that Bloom’s model is often depicted as a pyramid, with the higher levels resting on the foundation provided by the lower levels. Each layer of the taxonomy provides the raw material for the cognitive operation performed at the next level. 

“Reading comprehension” is not a skill that can be exercised in the abstract, because one must have knowledge to comprehend; you can’t comprehend nothing. That’s why, as Daniel Willingham notes, reading tests are really “knowledge tests in disguise” (Wexler, 2019, The Knowledge Gap, p. 55).

The preference for “skills” over knowledge is explored in depth in one of the best books I’ve read this year, Natalie Wexler’s The Knowledge Gap: The Hidden Cause of America's Broken Education System—and How to Fix It. She explains:

[S]kipping the step of building knowledge doesn’t work. The ability to think critically—like the ability to understand what you read—can’t be taught directly and in the abstract. It’s inextricably linked to how much knowledge you have about the situation at hand. 

p. 39

Wexler argues that we’ve started to treat as “skills” things that are actually knowledge, and as a result, we’re teaching unproven “strategies”—in the name of building students’ skills—rather than actually teaching the content we want students to master. Wexler isn’t arguing for direct instruction, but rather the intentional teaching of specific content—using a variety of effective methods—rather than attempting to teach “skills” that aren’t really skills. 

For example, most educators over the age of 30 mastered the “skill” of reading comprehension by learning vocabulary and, well, reading increasingly sophisticated texts—with virtually no “skill-and-strategy” instruction like we see in today’s classrooms. Somehow, the idea that we should explicitly teach students words they’ll need to know has become unpalatable, even regressive in some circles. 

In a recent Facebook discussion, one administrator wondered in a principals’ group “Why are students still asked to write their spelling words 5 times each during seat work??” Dozens of replies poured in, criticizing this practice as archaic at best—if not outright malpractice. Clearly, learning the correct spelling of common words is a lower-level cognitive task, but it’s one that is absolutely foundational to literacy and success with higher-order tasks, like constructing a persuasive argument. 

This aversion to purposefully teaching students what we want them to know is driven by fads among educators, not actual research. Wexler writes:

[T]here’s no evidence at all behind most of the “skills” teachers spend time on. While teachers generally use the terms skills and strategies interchangeably, reading researchers define skills as the kinds of things that students practice in an effort to render them automatic: find the main idea of this passage, identify the supporting details, etc. But strategies are techniques that students will always use consciously, to make themselves aware of their own thinking and therefore better able to control it: asking questions about a text, pausing periodically to summarize what they’ve read, generally monitoring their comprehension.

Instruction in reading skills has been around since the 1950s, but—according to one reading expert—it’s useless, “like pushing the elevator button twice. It makes you feel better, perhaps, but the elevator doesn’t come any more quickly.” And even researchers who endorse strategy instruction don’t advocate putting it in the foreground, as most teachers and textbook publishers do. The focus should be on the content of the text.

pp. 56-57

Part of the problem may be that the Common Core State Standards in English Language Arts mainly emphasize skills, while remaining agnostic about the specific content used to teach those skills. This gives teachers flexibility in, say, which specific novels they use in 10th grade English, so it isn’t necessarily a flaw—unless we make the mistake of omitting content entirely, in favor of teaching content-free skills. 

(The Common Core Math Standards, in contrast, make no attempt to separate content from skills, and it’s obvious from reading the Standards that the vocabulary and concepts are inseparable from the skills.)

Yet separating content and skills is precisely what we’ve done in far too many schools—and not just in language arts. Seeking to mirror the standardized test items students will face at the end of the year, we’ve replaced a substantive, content-rich curriculum with out-of-context, skill-and-strategy exercises that contain virtually no content. We once derided these exercises as drill-and-kill test prep, yet somehow they’ve replaced actual content.

Even more perversely, teaching actual content has become unfashionable to the point that content itself has become the target of the “drill-and-kill” epithet.

As a result of these fads, many schools today simply lack a guaranteed and viable curriculum in most subjects, with the notable exception of math. 

Is Teaching A Skill?

For administrators, the view that students should be taught skills rather than content is paralleled by a growing belief that teaching is a set of “skills” that can be assessed through brief observations. 

This hypothesis was put to the test by the Gates-funded Measures of Effective Teaching project, which spent $45 million over a period of three years recording some 20,000 lessons in approximately 3,000 classrooms. Nice-looking reports and upbeat press releases have been written to mask the glaring fact that the project was an abject failure—we are no closer to being able to conduct valid, stable assessments of teacher skill than before. 

Why did MET fail to yield great insights about teaching? Because it misconstrued teaching as a set of abstract skills rather than a body of professional practice that produces context-specific accomplishments. Every principal knows that there’s an integral relationship between the teacher, the students, and the content that “data” (such as state test scores) fail to capture. 

We cannot “measure” teaching as an abstract skill, because it’s not an abstract skill. Teachers always teach specific content to specific students—and the specifics are everything. Yes, there are “best practices,” but best practices must be used on specific content, with specific students—just as reading comprehension strategies must be used on a specific text, using one’s knowledge of vocabulary, along with other background knowledge about the subject matter. 

Teaching is not an abstract skill in the sense that, say, the high dive is a skill. It can’t be rated with a single score the way a high dive can. Involving more “judges” doesn’t improve the quality of any such ratings we might want to create. 

A given teacher’s teaching doesn’t always look the same from one day to the next, or from one class to the next, and it can’t be assessed as if there existed a “platonic ideal” of a lesson. 

To understand the root of the “guaranteed and viable curriculum” problem as well as the teacher appraisal problem, we don’t have to dig very far—Bloom’s Taxonomy provides a robust explanation. 

Bloom’s Taxonomy and the “Data-Driven Decision-Making” Problem

Neither Bloom’s Taxonomy nor Wexler’s Knowledge Gap focuses specifically on teacher evaluation, but the parallels are clear. Principals who regularly spend time in classrooms, building rich, firsthand knowledge of teacher practice, are in a far better position to do the higher-order instructional leadership work that follows. Knowledge—the foundation of the pyramid—that has been comprehended can then be applied to different situations, and principals who repeatedly discuss and analyze instruction in post-conferences with teachers will be far more prepared to make sound evaluation decisions at the end of the year. 

On the other hand, it’s impossible to fairly analyze and evaluate a teacher’s practice based on just one or two observations or video clips, because such a limited foundation of knowledge affords observers very little opportunity to truly comprehend a teacher’s practice.

Using Bloom’s Taxonomy to understand the failure of the MET project is straightforward, because the resulting diagram is decidedly non-pyramidal: an enormous amount of effort went into the analysis, synthesis, and evaluation of a very small amount of knowledge of teacher practice, with very few efforts to comprehend or apply insights about the specific instructional situation of each filmed lesson. It’s more of a mushroom than a pyramid. 

By treating teaching as an abstract skill that can be filmed and evaluated—apart from even the most basic awareness of the purpose of the lesson, its place within the broader curriculum, students’ prior knowledge and formative assessment results, and their unique learning needs—the MET project perpetuated the myth that education can be “data-driven.”

It’s time to call an end to the “data-driven” delusion. It’s time to take seriously our duty to ground professional practice in evidence, not just data. It’s time to ensure that all students have equitable access to a guaranteed and viable curriculum. It’s time to treat student learning and teacher practice as the primary forms of evidence about whether a school is improving—and reduce standardized tests to their proper role as merely a data-provider, and not a “driver” of education. 

As leaders, we need clear, shared expectations for student learning and teacher practice. We need direct, firsthand evidence. Only then can we make the right decisions on behalf of students. 

Rubrics as Growth Pathways for Instructional Practice

Who's the best person to decide what instructional practices to use in a lesson?

Obviously, the teacher who planned the lesson, and who is responsible for teaching it and ensuring that students learn what they're supposed to learn. 

Yet too often, we second-guess our teachers. 

We do it to be helpful—to provide feedback to help teachers grow—but I'd suggest it's often not the best way to help teachers grow.

Over the past couple of days, I've been arguing that we're facing a crisis of credibility in our profession.

Too often, we adopt reductive definitions of teacher practice, because so much of teacher practice can't be seen in a brief observation. 

It's either beneath the surface—the invisible thinking and decision-making that teachers do—or it takes place over too long a span of time. 

We've been calling these two issues “visibility” and “zoom.”

Sometimes, when we second-guess teachers, we tell them they should have used other practices:

“Did you think about doing a jigsaw?”

“Did you think about using small groups for that part of the lesson?”

And hey, this can be helpful. Every day, administrators are giving teachers thousands of good ideas.

But sometimes we're making these suggestions without a clear sense of the teacher's instructional purpose.

The practices must match the purpose, and a quick visit may not give us enough information to make truly useful suggestions.

The remedy to most of this is simply to have a conversation with the teacher—to treat feedback as a two-way street rather than a one-way transfer of ideas from leader to teacher. 

But we shouldn't enter into these conversations alone. 

There aren't just two parties involved when a leader speaks with a teacher.

The third party in every conversation should be the instructional framework—the set of shared expectations for practice. 

Why? 

Because a framework serves as an objective standard—an arbiter. 

It turns a conversation from a clash of opinions into a process of triangulation.

A more formal definition:

An instructional framework is a set of shared expectations serving as the basis for conversations about professional practice.

The best frameworks aren't just descriptions—they're leveled descriptions…

Or what we typically call rubrics. 

When you have a rubric, you have a growth pathway.

When teachers can see where their practice currently is—on a rubric, based on evidence—they can get a clear next step.

How?

By simply looking at the next level in the rubric.

If you're at a 3, look at level 4. 

If you're at a 1, look at level 2.

Now, we usually have rubrics for our evaluation criteria.

But what about the instructional practices that teachers are using every day?

Do we have leveled rubrics describing those practices?

Often, we don't bother creating them, because they're so specific to each subject and grade. 

They don't apply to all teachers in all departments, and we prefer to focus on things that we can use with our entire staff. 

So we miss out on one of the highest-leverage opportunities we have in our profession:

The opportunity to create clear descriptions of instructional practice, with subject-specific details that provide every teacher with pathways for growth. 

We can do it. In fact, teachers can do it mostly on their own, with just a bit of guidance. 

So let me ask you: 

What areas of instructional practice could your teachers focus on?

Where would it be helpful to have them develop leveled rubrics?

I'm sure it's specific to your school, and you wouldn't want to just download a rubric from the internet. You'd want teachers to have ownership. 

So what would it be?

Visibility & Zoom: the Evidence of Practice Grid

Is teacher practice always something we can actually see in an observation?

Sometimes, the answer is clearly yes. But as I've argued over the past few emails, it's not always so simple. 

I thought it might be helpful to plot this visually, along two axes. Let's call this the Evidence of Practice Grid:

If a teaching practice falls in the top-left quadrant, it's probably something you can directly observe, in the moment. 

There's still an “observer effect”—teachers can easily put on a song and dance to show you what you want to see—but at least the practice itself is fundamentally see-able.

If it's in the top-right quadrant, a practice may be visible, but not on the time scale of a typical classroom visit. It might take weeks or months for the practice to play out—for example, building relationships with students. 

The bottom two quadrants include what Charlotte Danielson calls the “cognitive” work of teaching—the thinking and decision-making that depend on teachers' professional judgment. 

These “beneath the surface” aspects of practice are huge, but we can't observe them directly. We must talk with teachers to get at them. 

So, for any given practice, we can figure out how visible it is, and how long it takes to play out, using this grid. 

That's the Evidence of Practice Grid.

The horizontal axis in our diagram is zoom—the “grain size” or time scale of the practice.

The vertical axis in our diagram is visibility—how directly observable the practice is.

So how can this grid be useful?

If you're focusing on an area of practice that's on the bottom or to the right, the grid can help you realize that it's something that's hard to directly observe. 

With this knowledge, you can stop yourself and say “Wait…did I actually see conclusive evidence for this practice, or just one brief moment that may or may not be part of a pattern?”

Conversely, when you know you're looking at a tight-zoom, highly visible practice, you don't have to shy away from giving immediate feedback. 

And in all cases, if you want to know more than observation alone can tell you…

You can ask. You can get the teacher talking. 

Conversation makes the invisible visible—and therefore, useful for growth and evaluation. 

Hope this is helpful!

As you gather evidence of teacher practice, and use it to provide feedback or make evaluation decisions…

Make sure you're aware of the zoom level and visibility of the practice you're focusing on.

Make sense?

Give it a try now:

Plot a given practice on this grid—mentally—and think about its visibility and zoom level. 

Where does it fall?

What comes up when you try to observe for or give feedback on this area of teacher practice?

Instructional Purpose: The Right Practice for the Right Circumstances

When should teachers use any given instructional practice?

If we're going to give feedback about teachers' instructional practices, it's worth asking:

When is it appropriate to use a given practice? Under what circumstances?

I'm using “practice” to mean professional practice—as in, exercising professional judgment and skill—as well as to mean teaching technique.

So some practices are in use all the time—for example, monitoring student comprehension as you teach, or maintaining a learning-focused classroom environment. 

Other practices are more specific to a particular instructional purpose.

For example, if a teacher is trying to help students think critically about a historical event, she might use higher-order questioning techniques, with plenty of wait time. 

If a teacher is trying to review factual information to prepare students for a test, he might pepper them with lower-level questions, with less wait time. 

If we're going to use instructional practices for the right instructional purposes, we have to be OK with not seeing them on command.

If we insist on seeing the practices we want to see, when we want to see them…we'll get what we want.

But it won't be what we really want. It'll be what I call hoop-jumping.

Have you ever seen a dog jumping through a hoop?

My 5-year-old saw one at a high school talent show the other day, and it blew her mind. 

The human holds up the hoop, and the dog knows what to do.

Cute, but a terrible metaphor for instructional leadership, right?

Teachers aren't trained animals doing tricks. 

Yet too often, we treat them that way.

“Hey everyone, this week I'm going to be visiting classrooms and giving feedback on rigor. I'll be looking for higher-order questions, which—as we learned in our last PD session—are more rigorous.”

We show up, ready to “inspect what we expect.”

Only, if we haven't thought deeply enough about what it is that we expect, or whether it's appropriate for that moment and the teacher's instructional purpose, or whether it's even observable…

Teaching is reduced to jumping through a hoop.

Dutifully, most teachers will do it. 

We'll show up, and teachers will see the hoop.

They'll know they need to ask some higher-order questions while we're in the room, because that's how we've (reductively) defined rigor. 

They know what we're hoping to see, so they'll use our pet strategy (see what I did there?). 

We'll have something to write down and give feedback on, and we'll go away happy—satisfied that we've instructional-leaded* for the day. 

Yet in reality, we've made things worse. 

We've wasted teachers' time playing a dumb game—a game in which we pretend to give feedback, and teachers pretend to value it, and we all pretend it's beneficial for student learning. 

*And no, “instructional-leaded” is not a grammatically correct term. I really hope it doesn't catch on.

But when I see dumb practices masquerading as instructional leadership, I feel compelled to give them a conspicuously dumb label. I'm not grumpy—I'm just passionate about this 🙂 

All of this foolishness is avoidable, if we're willing to think a little harder. 

Last week, I shared some thoughts on observability bias—the idea that instructional leaders tend to oversimplify what teachers are really doing, in order to make it easier to observe and document. 

We adopt reductive definitions of teacher practice in order to make our lives easier, even if it means giving bad feedback, like “You shouldn't ask so many lower-level questions, because higher-order questions are more rigorous.”

So far, we've identified a couple of different factors to consider when observing a practice in the classroom:

1. Zoom—is it something you can observe in a moment, or does it play out over days, weeks, or the entire year?

2. Visibility—is it an observable behavior you can see, or is it really invisible thinking and decision-making?

Together, these two axes form what we're calling the Evidence of Practice Grid.

And now we can add a third factor:

3. Instructional Purpose—under what circumstances is the practice relevant?

If we ignore instructional purpose, and just expect teachers to use a practice every time we visit because we value it, we'll see “hoop-jumping” behavior.

We'll walk into a classroom and immediately see the practice we're focusing on—not because it fits the instructional purpose, but because teachers know we want to see it. 

So if you're seeing this kind of behavior, it's worth asking yourself—when should teachers be using this practice, under what circumstances, and what would be good evidence*** that they're using it appropriately and well?

***P.S. And if you're thinking “Well, I'd really have to talk with the teacher to know” then I think we're on the same page 🙂

Lecturing from the Back of the Room: The Data Conspiracy

Earlier this week, I asked for examples of oversimplified expectations—when administrators reduce teaching to whatever is easiest to observe and document…

…even if that means lower-quality instruction for students…

…and downright absurd expectations for teachers. 

And wow, did people deliver. My favorite example so far:

The main push this year is “where is the teacher standing?” (with the implication that “at the front” = bad).

teachers now lecture from the back of the room (with the projection up front), which is resulting in a diminished learning environment for the students, even while earning more “points” for the teacher from the roaming administrators.

Students have even complained that they have to turn around to even listen well…

…the teachers miss out on many interactions with the students because they can't see the students' faces and reactions to the (poor) lectures.

You can't make this stuff up!

But here's the kicker: at least this school is trying!

The administrators are getting into classrooms, and emphasizing something they think will be better for students. 

That's more than most schools are doing! But we can do better.

Having clear expectations is great.

Getting into classrooms to support those expectations is great. 

Giving teachers feedback on how they're doing relative to shared expectations is great. 

But the “how” matters. It matters enormously. 

So why are schools taking such a reductive, dumbed-down approach to shared expectations? 

I have a one-word answer and explanation: data.

I blame the desire for data. 

To collect data, you MUST define whatever you're measuring reductively. 

If your goal is to have a rich, nuanced conversation, you don't have to resort to crude oversimplifications.

If you talk with teachers in depth about lecturing less and getting around the classroom more as you teach, the possibilities are endless.

But if your goal is to fill out a form or a spreadsheet—well, then you have to be reductive.

In order to produce a check mark or score from the complex realities of teaching and learning…oversimplifying is the only option. 

So here's my question—and I'd love to have your thoughts on this:

What if we stopped trying to collect data?

What if we said, as a profession, that it's not our job as instructional leaders to collect data?

As a principal and teacher in Seattle Public Schools, I interacted with many university-trained researchers who visited schools to collect data. 

I myself was trained in both qualitative and quantitative research methods as part of my PhD program as well as earlier graduate training. 

I knew how to collect data about classroom practice…

But as a principal, I realized that I was the worst person in the world to actually do this data collection in my school.

Why? Because of what scholars have identified as one of the biggest threats to quality data collection:

Observer effects.

When the principal shows up, teachers behave differently.

When teachers know what the observer wants to see, the song-and-dance commences. 

You want to see students talking with each other? OK, I'll have them “turn and talk” every time you walk into the room, Justin. Write that down on your little clipboard.

You don't want me to lecture from the Smartboard all day? OK, I'll stand at the back, and lecture from there, Colleague.

The late, great Rick DuFour—godfather of Professional Learning Communities—used to tell the story of how he'd prepare his students for formal observations when he was a teacher.

I'm paraphrasing, but it went something like this:

OK, kids—the principal is coming for my observation today, so whenever I ask a question, you all have to raise your hands.

If you know the answer, raise your right hand. If you don't know the answer, raise your left hand, and I won't call on you.

The principal needed “data” on whether students were engaged and understanding the lesson…so the teacher and students obliged with their song-and-dance routine.

Across our profession, in tens of thousands of schools, we're engaged in a conspiracy to manufacture data about classroom practice.

It's not a sinister conspiracy. No one is trying to do anything bad. 

We're all behaving rationally and ethically:

—We've been told we need data about teacher practice
—We have a limited number of chances to collect that data from classroom visits
—Teachers know they'll be judged by the data we collect

So they show us what we want to see…

…even if it results in absurd practices like lecturing from the back of the room. 

So here's my suggestion: let's stop collecting data from classroom visits.

We already get plenty of quantitative data from assessments, surveys, and other administrative sources. 

We already have enough hats to wear as instructional leaders. We don't need to be clipboard-toting researchers on top of everything else. 

Instead, let's focus on understanding what's happening in classrooms. 

Let's gather evidence in the form of rich, descriptive notes, not oversimplified marks on a form.

Let's talk with teachers about what they're doing, and why, and how it's working. 

Let's stop trying to reduce it all to a score or a check mark. 

The Observability Bias: A Crisis in Instructional Leadership

Our profession is facing a crisis of credibility:

We often don't know good practice when we see it.

Two observers can see the same lesson, and draw very different conclusions. Yet we mischaracterize the nature of the problem.

We think this is a problem of inter-rater reliability. We define it as a calibration issue.

But it's not.

Calibration training—getting administrators to rate the same video clip the same way—won't fix this problem. The crisis runs deeper, because it's a fundamental misunderstanding of the nature of teaching. 

See, we have an “observability bias” crisis in our profession.

I don't mean that observers are biased. I mean that we've warped our understanding of teacher practice, so that we pay a great deal of attention to those aspects of teaching that are easily observed and assessed…

…while undervaluing and overlooking the harder-to-observe aspects of teacher practice, like exercising professional judgment. 

We pay a great deal of attention to surface-level features of teaching, like whether the objective is written on the board…Yet we don't even bother to ask deeper questions, like “How is this lesson based on what the teacher discovered from students' work yesterday?”

The Danielson Framework is easily the best rubric for understanding teacher practice, because it avoids this bias toward the observable, and doesn't shy away from prioritizing hard-to-observe aspects of practice. 

Charlotte Danielson writes:


“Teaching entails expertise; like other professions, professionalism in teaching requires complex decision making in conditions of uncertainty.

If one acknowledges, as one must, the cognitive nature of teaching, then conversations about teaching must be about the cognition.”


Talk About Teaching, pp. 6-7, emphasis in original 

When we forget that teaching is, fundamentally, cognition—not a song and dance at the front of the room—we can distort teaching by emphasizing the wrong “look-fors” in our instructional leadership work. 

It's exceptionally easy to see this problem in the case of questioning strategies, vis-à-vis Bloom's Taxonomy and Webb's Depth of Knowledge (DoK). 

I like Bloom's Taxonomy and DoK. They're great ways to think about the variety of questions we're asking, and to make sure we're asking students to do the right type of intellectual work given our instructional purpose. 

But the pervasive bias toward the easily observable has resulted in what we might call “Rigor for Dummies.”

Rigor for Dummies works like this:

If you're asking higher-order questions, you're providing rigorous instruction.
If you're asking factual recall or other lower-level questions, that's not rigorous. 


Now, to some readers, this will sound too stupid to be true, but I promise, this is what administrators are telling teachers.

Observability bias at work. It's happening every day, all around the US: Administrators are giving teachers feedback that they need to make their questioning more “rigorous” by asking more higher-order questions, and avoiding DoK-1 questions. 

Never mind that neither Bloom nor Webb ever said we should avoid factual-level questions. Never mind that no rigor expert believes factual knowledge is unimportant. 

We want rigor, so we ask ourselves “What does rigor look like?” Then, we come up with the most reductive, oversimplified definition of rigor, so we can assess it without ever talking to the teacher. 

My friend, this will never work. 

We simply cannot understand a teacher's practice without talking with the teacher. Observation alone can't give us true insight into teacher practice.

Why?

Back to Danielson: Because teaching is cognitive work.

It's not just behavior.

It can't be reduced to “look-fors” that you can assess in a drive-by observation and check off on a feedback form. 

The Danielson Framework gives us another great example.

Domain 1, Component 1c, is “Setting Instructional Outcomes.”

(This is a teacher evaluation criterion for at least 40% of teachers in the US.)

How well a teacher sets instructional outcomes is fairly hard to assess based on a single direct observation. 

Danielson describes “Proficient” practice in this area as follows:

“Most outcomes represent rigorous and important learning in the discipline and are clear, are written in the form of student learning, and suggest viable methods of assessment. Outcomes reflect several different types of learning and opportunities for coordination, and they are differentiated, in whatever way is needed, for different groups of students.” (Danielson, Framework for Teaching, 2013)

Is that a great definition? Yes!

But it's hard to observe, so we reduce it to something that's easier to document. We reduce it to “Is the learning target written on the board?”

(And if we're really serious, we might also ask that the teacher cite the standards the lesson addresses, and word the objective in student-friendly “I can…” or “We will…” language.)

Don't get me wrong—clarity is great. Letting teachers know exactly what good practice looks like is incredibly helpful—especially if they're struggling.

And for solid teachers to move from good to great, they need a clearly defined growth pathway, describing the next level of excellence.

But let's not be reductive. Let's not squeeze out all the critical cognitive aspects of teaching, just because they're harder for us to observe. 

Let's embrace the fact that teaching is complex intellectual work.

Let's accept the reality that to give teachers useful feedback, we can't just observe and fill out a form.

We must have a conversation. We must listen. We must inquire about teachers' invisible thinking, not just their observable behavior.  

What do you think?

Are you seeing the same reductive “observability bias” at work in instructional leadership practice?

In what areas of teacher practice? Leave a comment and let me know.

How To Organize Your Experience On Your Résumé So You Get In The “YES” Pile

How should you list your work history, experience, and skills on your résumé, so you get in the “YES” pile and land an interview?

Watch the video for my key recommendations:

I see a lot of résumés that are a jumble of confusion, because people are trying to put their best qualifications at the top of the page, even if they don't belong there.

The other day, I saw a résumé that had “SKILLS” at the top, followed by “LEADERSHIP EXPERIENCE” followed by “TEACHING EXPERIENCE.” It took a lot of effort to figure out the person's actual work history.

The facts? They'd done an admin internship a few years ago, and were currently a classroom teacher.

Is that bad? No! But it's confusing if you don't present it clearly.

When a screener is reviewing your résumé, they're looking for the facts—your work history. When your résumé is organized in a confusing way, the reviewer can't find what they're looking for.

When the reviewer is confused, they put your application in the “NO” pile.

So how should you list your leadership experience on your résumé—especially if your best leadership experience isn't your most recent?

  • What if you served a term on the leadership committee, but now it's someone else's turn, and you have zero leadership responsibilities at the moment?
  • What if you did a great internship a few years ago, but now you're back in a non-leadership classroom role?

If you list it reverse-chronologically, with the newest roles at the top, your best experience may not be at the top of the page…and that's OK.

The first goal of a résumé should be clarity about the basic facts of your work history.

Once you've given the reader what they're looking for—clarity—you can add the good stuff that will make you stand out.

Learn more about how to organize your résumé so you land in the “YES” pile—by downloading The Résumé Blueprint.

How To Respond When Someone Asks You For A Reference or Recommendation Letter

What should you do when someone you work with asks you for a recommendation letter or reference for an educational leadership role?

Reference checks are essential to the hiring process, because they vastly increase the amount of information available to the hiring team. In interviews and application materials, candidates have full control over what they share. If there's something a hiring team should know about a candidate's past job performance, good or bad, only references can provide a third-party perspective and convey this information. 

Being asked to provide a reference catches many educators off-guard, so it's important to anticipate your own feelings and possible reactions in order to take the most appropriate course of action.

Here are three common reactions leaders face when asked for references and letters of recommendation:

  • Feelings of betrayal
  • Fear of losing a great person
  • Being unsure about whether you can, in good conscience, recommend someone for a new role

In each section, you'll find detailed guidance on how to react to your own feelings, and how to act ethically in complex situations.

“They're Being Disloyal!”

People typically need references because they're planning to move to a new position in another school or district. When someone signals their intent to leave, this can create strong feelings about loyalty—or rather, disloyalty.

Is someone being disloyal when they seek out new opportunities? Are they betraying you and your students?

Some leaders mistakenly believe that the educators they hire should be loyal to their particular school or district forever. While the school year and annual contract are important tools for creating stability for students, it's a mistake to expect individual educators to be loyal to a single organization for their entire careers.

Instead, professional loyalty is to the profession. Our students and colleagues will always change from year to year, so it's not as if there's any real sense that “we're all in this together, and always will be.”

Change is inevitable, and people's growth and development are a good thing. Just as we don't want our students to stick around forever—we want them to progress to the next grade level, and leave when it's time—we don't want staff to stick around longer than they should out of a misguided sense of loyalty.

This is a tough one for many of us, though, because we see plenty of examples of great educators who never move on—who continue to grow as professionals while remaining in the same position. 

But it's important to realize that some people must leave and seek new opportunities elsewhere if they're to fulfill their calling as educators. There are simply not always enough opportunities within a given school or district. 

Educators owe their loyalty to the profession, not to any one organization—and it's a two-way street. So if you're asked to provide a reference or letter of recommendation, don't see it as an act of disloyalty on either person's part—see it as part of the inevitable and necessary movement of people to the opportunities where they can best serve students and fulfill their professional calling. 

“But I want to keep them!”

I was taken aback the first time I heard this from a candidate, but I continue to hear it on a regular basis:

“My boss said she won't write me a letter of recommendation, because she doesn't want to have to replace me.”

At first, I thought it was a joke. “Ha ha, yeah, I'm sure you'll leave big shoes to fill” was my reply—but the candidate was completely serious:

“No, she really does not want me to leave, and she told me she won't give me a good recommendation.”

Let me be clear: this type of sabotage is deeply unethical. 

If you withhold a well-deserved recommendation, simply to prevent someone from leaving and to save yourself the trouble of replacing them, you are committing a type of professional fraud.

If you believe someone is making a difference for kids, you don't get to hog them. Support them in pursuing their dreams and maximizing their impact. 

Are you creating more work for yourself? Potentially, but you're also opening your school to the possibility of an even more amazing opportunity to bring in the right person for your current needs.  It may be hard to imagine anyone else being as good in the role—but you'll also have a chance to re-envision the role and the impact it can have on students.

Leaders who withhold references are acting in a petty and shortsighted manner that doesn't even serve their own students. Educators who want to move on, but can't, are unlikely to be at their best after being rebuffed. And they're likely to leave anyway, even if it means going without the benefit of a good reference.

“I'm Ambivalent About Recommending This Person”

What if you're not sure whether you can, in good conscience, provide a glowing recommendation?

It's simple: speak the truth. Don't say someone is great when they're merely good, and don't say someone is good if you're really looking to dump them on someone else. 

But don't save your honesty for a confidential reference check or year-end recommendation letter. Give feedback directly to the person as soon as it occurs to you, or as soon as you're asked for feedback.

Rising stars in our profession will often ask directly for feedback:

  • What opportunities should I be taking on?
  • What are my blind spots?
  • What could I be doing better?

If you see that someone has ambitions that might take them beyond their current role, and you anticipate feeling some reluctance, get curious and ask yourself: “What would I need to see this year in order to give this person my best, most glowing, no-hesitation recommendation?”

Now, this is where it gets tricky, because if you remain in the educational leadership profession for any length of time, you'll inevitably come across aspiring leaders who are moving up faster than you did. It's natural to think “Whoa, they really need to slow down and get more experience.”

We all tend to think that our career trajectory was right for us, so a similar path must be the best course for everyone else, right?

Wrong.

Every educator is on their own journey, and every situation is different. Sure, most 2nd-year teachers are not ready to become principals, but the reasons they're not ready—and the next steps they should take to become ready—are unique to each individual.

If your only feedback is “Keep doing what you're doing, for a longer period of time,” you're not thinking about what skills and experience the person actually needs to be ready for the next level.

Giving Feedback While It's Still Useful

If you don't feel comfortable giving someone a strong reference, that's a clear signal that they deserve more specific feedback, while there's still time to act on it and address any shortcomings. Don't wait until you're called for a reference check—give specific feedback now, while it can still benefit your students. 

It is unlikely that simply gaining additional years of experience, doing the same work in the same role, will have much value for an educator's future work at a higher level of leadership. 

Think about a 2nd-year teacher who has expressed interest in becoming a principal. Personally, I was always annoyed at people who seemed too eager to move on to a new challenge too soon.

But let's interrogate this sense of annoyance a bit: what's wrong with a 2nd-year teacher aspiring to the principalship?

Let's first be clear that “It took me longer” and “I had to put in my time and wait my turn” are not good arguments. Many of us had to wait longer than we wanted due to circumstances we wouldn't wish on anyone. 

But there's a legitimate reason to want someone to gain more experience before you recommend them for a promotion: skills and experience.

In most cases, 2nd-year teachers aren't very good yet. This is a profession with a steep learning curve. 

But teachers deserve useful feedback whether they're planning a career move or not. They deserve the specific feedback that will help them grow so they can serve their students more effectively. 

So if you feel that someone doesn't yet have the skills or experience they need to move to the next level, don't just tell them to hang around longer. Putting in more time has no magical power—and we've all seen teachers who get a little better in their 2nd year, only to stagnate at that level for years afterward. 

Give people the feedback they need—now—to earn your enthusiastic endorsement in the future. You'll be doing your current students a favor, and you'll be making a long-term impact on the profession.

If you know someone who aspires to a higher level of leadership, you can share this link where they can download my 52 practice interview questions for school leadership candidates.
