In my last post, I committed to practicing being a data-informed decision maker. Upon further reflection, I realized that I did not fully understand the current situation – I did not have data on how often I, or other leaders I work with, actually use data to learn and make decisions. So the first thing to do was to collect some data, and I have been observing how decisions get made. I don’t yet have enough data to share what I am learning, but I can share what I am sure many of you already know: the act of observation can easily change the subject being observed. I found myself pausing to consider whether I had data to inform my decisions. I spent time with members of my team capturing their experience of the current situation, then facilitated root cause analysis and the development of hypotheses to test – in short, practicing Lean leadership as coach/facilitator rather than decision maker.
Does this make the data I collected on myself invalid? Perhaps. Does that mean I should try to be more objective and not change my own behavior? Not necessarily. It depends on why I am observing myself. If I am doing it to gain knowledge about how I show up as a leader, then yes, the data I collected over the past week will not reflect how I acted the week before. The act of self-observation changed the subject. But if the ultimate purpose is to create change, then the act of measurement should perhaps be embraced as an effective change lever.
I am reminded of the ongoing debate within the social sector about the best method of impact evaluation. One camp strongly advocates for randomized controlled trials as the only way to know for sure whether an intervention works. Another camp argues that such trials are inapplicable to the real world, take a long time to show results, and cost a lot of money. Most people in non-profits probably fall somewhere in between, or just don’t think about the issue that much. If you are curious to learn more about this debate, you can read a two-part article by Peter York on the Markets for Good site on controlled trials and observational cohort studies. As far as I can tell, this debate does not take place within the for-profit business world, where there seems to be less angst about the “right” method of measuring. There is a common belief within the social sector that market feedback serves as the evaluation function for business. That may be true. But successful businesses don’t wait for market feedback; they actively create feedback loops so that they can learn and adjust. This is critical for success, especially in a larger organization where market feedback can be muted.
I think it comes down to wanting to know for sure what made a difference. In both non-profits and for-profits, people seem to want certainty about what to scale up or continue doing. That desire rests on the assumption that there is enough consistency over time or across situations for replication to produce the same results. I don’t think that is realistic – we live in a dynamic, complex world that is always changing in ways that are hard to anticipate, in for-profit markets and non-profit human systems alike. Our own interventions contribute to that change (at least, we hope so). The only intervention I think can be replicated is the method of learning itself: collect data, develop a hypothesis, test, and adjust. And if the step of collecting data itself leads to change, I am ok with that. Being certain about what happened in the past is not so helpful – using what we learned about the past to develop a hypothesis to test in the present is. But I don’t expect things to play out the same way they did before, so I always need to be ready to study and adjust.
I would love to hear others’ thoughts – please share your reflections through comments.