Making Errors by Design

Due to an unfortunate series of events, my local school district has had to close for a few days over the last month or so. One of the consequences of this is that students go without school lunch. For a good portion of the district, school lunch was the most reliable meal they received on a daily basis. One of the things that's popped up in response is a community refrigerator. People contribute food and meals to the community fridge so that others can grab what they need without cost. This is a form of mutual aid: the community takes when in need and provides when able. The thing about mutual aid is that it is often set up without strict guidelines, so it is easy to access.

On the Facebook group that helps manage this community fridge, someone mentioned that they saw a person come in and take "a huge box of nearly everything" that seemed like far more than one person could need. The poster asked the group, "How can we prevent this?" This kicked off a really interesting discussion in the group that highlights a challenge we run into in our organizations and our culture.

Two factions emerged in the discussion: the first agreed with the poster's sentiment and wanted to devise a way to make sure that no one took more than they needed. The second faction urged people not to make assumptions about what others need and suggested that the goal of the community fridge was that people leave it not wanting for more, even if we as observers think they took too much.

As I read this, I started seeing the conversation through the lens of research. One of the things I still remember from my research methods courses in grad school is the challenge of Type 1 and Type 2 errors. When doing research, there is a chance that we are wrong about our hypothesis. Heck, when you are a grad student there is a BIG chance you are wrong about your hypothesis. But in the social sciences, how you are wrong is worth looking at.

Errors in research are not "I thought X and the answer is Y." That's just a failure to find an effect. An error is when the research gives you an answer that differs from the true answer. There are two ways you can be wrong: either your work found X true when X was false, or your work found X false when X was true. The first is a false positive, a Type 1 error. The second is a false negative, a Type 2 error.
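
If it helps to see all the combinations at once (it always helped me), here is the standard two-by-two grid from hypothesis testing:

                           X is actually true          X is actually false
  Research finds X true    correct                     false positive (Type 1)
  Research finds X false   false negative (Type 2)     correct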

In my experience, each of us, and each of our organizations, has a preference for one type of error over the other. And these kinds of binary situations aren't just in hypothesis testing. We can find ourselves putting in effort for something we could never get (working toward a false positive) or not doing enough because we thought the outcome was impossible (earning that false negative).

The case of the mutual aid refrigerator is another choice where we can't know what's right: finding out whether each person taking something needs precisely that amount would be a huge burden on the community. I think it is valuable for those involved to know whether they are okay with someone who doesn't need the food taking it (a Type 1 error) or with someone in need being denied access (a Type 2 error).

I want to say that I don't think working toward eliminating all false positives or all false negatives is universally a good thing. I firmly believe that each problem is different and calls for its own approach.

What I confidently think is that we will make errors. The best way to handle that is to understand which type of error we most want to avoid and design to minimize those errors. Because then, when we inevitably mess up, it will be in a direction that, while not ideal, we can at least live with.