Instructional Frameworks

If you've ever walked out of a classroom visit thinking "that lesson needed more rigor" and then realized you couldn't explain exactly what that means, you've identified the problem that instructional frameworks solve.

Most schools have evaluation rubrics — broad tools designed to assess overall teaching performance across many domains. Those rubrics answer the question "How is this teacher doing overall?" But they can't answer the question that actually drives improvement: "What does growth look like in this specific area of practice, and where is this teacher on that path?"

An instructional framework zooms in. It takes one specific area — formative assessment, classroom discussion, guided mathematics, or whatever your school is focused on — and describes what professional practice looks like from the inside, at multiple levels of development. Not a checklist of things to do, but a detailed description of what it feels like to do the work well and what changes as you get better at it.

That distinction between checklists and frameworks is critical. A checklist tells you whether the objective is posted and whether students are in groups. A framework tells you whether the teacher is making purposeful decisions about grouping based on formative data — something no checklist can capture, because it's happening inside the teacher's head. Most observation tools are systematically biased toward what you can see from the outside and blind to the professional judgment that makes the visible actions effective. I call this observability bias, and it's one of the biggest obstacles to genuine instructional improvement.

Frameworks are designed to capture the insider's view of practice. They describe the thinking and decision-making that drive what a skilled practitioner does — not just the surface behaviors that an observer might check off. When your school has that kind of shared language, post-visit conversations become richer, teacher self-assessment becomes meaningful, and professional development can target the actual skills that matter rather than the visible behaviors that are easiest to measure.

Building a framework is collaborative work. It starts with classroom observations, conversations with your most skilled practitioners, and a willingness to describe what quality looks like with enough specificity that everyone can see the growth path ahead.

Frequently Asked Questions

What is an instructional framework and how is it different from a teacher evaluation rubric?

An instructional framework is a detailed description of what professional practice looks like in a specific area — such as formative assessment, guided mathematics, or classroom management — broken into its essential dimensions and described at multiple levels of development. It's a growth tool, not an evaluation tool.

A teacher evaluation rubric, like the Danielson Framework for Teaching, is designed to assess overall teaching performance across many domains. It's broad by design — it has to cover everything from lesson planning to professionalism. An instructional framework zooms in on one specific area and describes it in far more detail than any evaluation rubric can.

The two serve different purposes. Evaluation rubrics answer the question "How is this teacher performing overall?" Instructional frameworks answer the question "What does growth look like in this specific area of practice, and where is this teacher on that path?" Both are useful, but only one gives teachers a detailed map for getting better at something specific.

Why do schools need instructional frameworks if they already have evaluation rubrics?

Because evaluation rubrics are too broad to guide improvement in any specific area. When your school improvement plan says "improve formative assessment practices," an evaluation rubric might tell you that a teacher is "developing" in that domain — but it won't tell either of you what the next concrete step looks like.

Instructional frameworks fill that gap. They describe the specific dimensions of a practice and what it looks like to move from beginning to fluent in each one. That level of detail gives teachers a clear picture of where they are and what growth actually means — not just "do more of this" but "here's what qualitatively different practice looks like."

Think of it this way: an evaluation rubric is like a map of an entire continent. An instructional framework is like a trail map for a specific hike. Both are maps. Only one tells you where to put your feet.

Why aren't checklists enough for improving teaching?

Checklists describe what you can see from the outside: Is the objective posted? Are students in groups? Is the teacher circulating? They're easy to create and easy to use, which is why schools love them. But they fundamentally miss what matters most about teaching — the thinking and decision-making that drive what a teacher does.

A teacher can post an objective, put students in groups, and circulate around the room while doing all of it poorly. The objective might be copied from a textbook with no connection to the lesson. The groups might be random, with no purpose. The circulating might be aimless. A checklist gives the teacher full marks. An instructional framework would reveal that the teacher is at a beginning level of practice despite checking every box.

The deeper problem is that checklists focus on compliance rather than judgment. They tell teachers what to do but not how to think about it. Improving teaching requires improving the professional judgment that drives thousands of daily decisions. Checklists can't touch that.

What is observability bias and why does it matter for instructional improvement?

Observability bias is the tendency to focus on what you can see a teacher doing and ignore the invisible thinking that makes the visible actions effective. It's like judging an iceberg by the 10% above water.

When a skilled teacher asks a well-timed question that redirects a struggling student, what you see is the question. What you don't see is the teacher noticing the confusion, recalling what this student struggled with yesterday, choosing from several possible interventions, and deciding this was the right moment to act. That invisible decision-making is the skill. The visible question is just the output.

Most observation tools — checklists, walkthrough forms, data collection instruments — are designed around observable behaviors. That makes them systematically biased toward the surface of teaching and blind to the substance. When you build improvement tools around observable behaviors, you inadvertently train teachers to perform the visible actions without developing the underlying judgment.

Instructional frameworks are deliberately designed to capture the insider's view — what it's like to practice, not just what it looks like to observe.

What are levels of fluency and how many should I use?

Levels of fluency describe qualitatively different stages of development within a practice. They're not about doing something more often or more consistently — they're about doing it differently as your professional judgment develops.

Four levels work well for most frameworks. At the beginning level, the practice is new or underdeveloped. At the developing level, intentional effort is underway but not yet integrated. At the fluent level, the practice is smooth and effective. And at the exemplary level, the practitioner goes beyond fluency in ways that contribute to the broader professional community.

The reason four works better than three or five is practical. Three levels don't provide enough differentiation — most teachers end up in the middle. Five or more levels create distinctions that are too fine to be useful. Four levels give you enough range to describe a meaningful growth trajectory without splitting hairs.

How do I write level descriptors that describe qualitative differences instead of just frequency?

This is the most common challenge in framework development, and the most important one to get right. The temptation is to differentiate levels by how often something happens: "seldom," "sometimes," "often," "consistently." That approach fails for two reasons. First, you can't reliably measure frequency from occasional classroom visits. Second, frequency usually isn't the real growth path — doing something more often doesn't mean doing it better.

Instead, describe what practice actually looks and feels like at each level. At a beginning level, a teacher might plan a single assessment at the end of a unit. At a developing level, they might include checks for understanding along the way but not yet use the results to adjust instruction. At a fluent level, formative data routinely informs the next day's lesson. The progression isn't about frequency — it's about sophistication.

The test is simple: could two observers independently read your descriptors and agree on which level they're seeing? If the descriptors rely on vague frequency words, the answer is usually no. If they describe distinct practices, the answer is usually yes.

How do I build a shared instructional framework if my district doesn't provide one?

You already have one — it's just scattered. Your teacher evaluation rubric, your curriculum guides, your professional development priorities, and your school improvement plan all contain expectations for instruction. Taken together, they form a framework. The work isn't creating something from nothing — it's assembling and organizing what already exists.

Start by gathering every document that describes what good teaching should look like in your school. Then look for gaps: areas where you have strong expectations but no shared language, or areas where the language is so broad that everyone interprets it differently. Those gaps are where additional specificity is needed.

For targeted improvement in a specific practice area, you may want to develop a more detailed framework — one that describes the practice at multiple levels of development, from beginning to fluent. That's a collaborative process best done with teachers, starting from classroom observations and professional conversations about what the work actually involves.

Featured Episodes — Principal Center Radio

Episode 734: Heather Bell-Williams & Justin Baeder, "Mapping Professional Practice"
Episode 427: Justin Baeder, "Mapping Professional Practice"
Episode 412: James Stronge, "Qualities of Effective Teachers"
Episode 416: James Stronge, "Qualities of Effective Principals"


Go Deeper

Members of the Instructional Leadership Association get live weekly coaching sessions, weekly video episodes, on-demand courses, community support, the full Ascend career toolkit, and implementation tools for putting these ideas into practice. Learn more about ILA →

Start Your Free Trial →
