When Do We Blindly Trust Technology?

In this video, Dr. Justin Baeder discusses the 'deference threshold' — the point at which people stop questioning technology's output and accept it uncritically.

Key Takeaways

  • We defer to technology too easily: when data comes from a computer, people tend to accept it without question
  • The deference threshold is low in education: schools adopt tech products with minimal scrutiny of their claims
  • Question everything: EdTech products deserve the same skepticism as any other vendor making promises

Transcript

How do we decide how much to trust AI or any other technology?

We long ago passed the point of double-checking a calculator's output, right?

If you punch numbers into a calculator and the answer is wrong, it's probably because you punched them in wrong.

We trust 100% that the calculator is going to give us the correct answer.

And similarly with spreadsheets, right?

If the spreadsheet is wrong, it's probably because I designed it wrong, not because there's an error in the programming of the software itself.

We trust some technologies completely, to the point that we don't manually check them, right? If you use gradebook software, you almost certainly do not double-check its output. You may want to verify that it's configured correctly and working the way you think it is, but it's almost certainly going to compute correctly.

With generative AI, I think we have to be very careful, because there is always the possibility of hallucination: the possibility that the output is simply not something we would professionally agree to or want to put our names on.

And yet the temptation is constant now.

Have you noticed in every AI tool, there's a helpful offer to do just about anything that you want to do?

Can I draft this email for you?

Can I summarize this email for you?

You know, it's entirely possible that there are whole companies out there where AI is emailing employees and AI is reading and responding to those emails. Are the humans even involved here?

I think as professionals, we have to stay involved in our work, and we have to be very mindful of what I'm calling the deference threshold: the threshold beyond which we just blindly trust the output of a system.

I don't think we're at that point with generative AI for most purposes yet.

Because we still need to be the ones who do our jobs, right?

I do not want to be held responsible for the output of AI that I didn't look at.

So I want to make sure that if I am using AI, which I do sparingly, then I am definitely checking the output and making sure that I'm proud to sign my name to it.

And these temptations are constant. If we're not careful, the line is going to keep moving closer to us, where, simply because of the massive time savings, we allow that deference threshold to creep closer and closer.

So I just wanted to introduce that concept of a deference threshold: the point where we blindly trust the output of a system.

We've got to be very careful about that with AI. Even though we're very comfortable with calculators, spreadsheets, and gradebooks, this is a different ballgame.

And I'm curious what you're seeing in issues with using AI in your work.

Let me know.

Tags: edtech, evidence-based practice

Want to go deeper?

ILA members get weekly video episodes, on-demand video courses, and the full Ascend career toolkit — including AI coaching to help you build your portfolio and nail your next interview.

Start Your Free Trial →