For the past few years our friend Vu Le of Rainier Valley Corps has been publishing a terrific blog called Nonprofit with Balls. If you don’t already know Vu, the title gives you a clue about his provocative ideas. A seasoned nonprofit leader, Vu has an unorthodox take on how the nonprofit world actually works—and lots of disruptive (in a good way) ideas about how it could work better.
Vu recently posted a blog entry that got our attention here at TrueBearing: Weaponized data: How the obsession with data has been hurting marginalized communities. It’s a thought-provoking read for anyone involved in the nonprofit, public, or grantmaking sectors, so all you unicorns out there go ahead and click on the link to read his post. I guarantee you’ll chuckle at least twice, and you’ll get the reference to unicorns. I’ll wait.
Back already? OK. For those of you who didn’t bother to click the link, here is a 30,000-foot overview of Vu’s post:
“Data can be used for good or for evil.” While acknowledging the power of skillfully used data and its benefits to both nonprofits and grantmakers, Vu nails ten distinct ways in which data can be—and too often has been—used to obscure rather than to illuminate, to diminish the richness of our understanding of nonprofit performance, and to maintain the power status quo in a way that marginalizes and sometimes even pathologizes entire communities.
Vu believes it is time for an honest conversation about the many roles data play in the relationship between nonprofits and grantmakers. For example, when a grantmaker requires thorough needs-assessment and performance data from a small, low-resource nonprofit before it can even think about applying for funding, it places that nonprofit in an unrealistic double bind. Vu calls this the Data-Resource Paradox: “If an organization does not have resources to collect data, then it does not have the data to collect resources.” This paradox perpetuates a systemic mismatch between grantmakers and many nonprofits that serve marginalized communities.
Turning to another form of data weaponization, Vu cites our mutual colleague Dr. Jondou Chen (of the University of Washington), who describes in four steps a troubling tendency in the way evaluative data are sometimes interpreted: 1) a difference between groups on an outcome indicator is found, 2) that difference is perceived as a de facto problem, 3) responsibility or blame for the problem is assigned, and 4) an entire group is pathologized. I have also witnessed this tendency to “get too far out over your skis” and assign causation and even tacit blame for program failures that the data do not actually support.
Vu’s diagnosis is compelling and offers any engaged and ethical leader in the nonprofit and grantmaker sectors serious food for thought. And Vu is not alone in his concerns—other observers of the nonprofit scene (such as Lisa Ranghelli and Yna Moore writing in the Nonprofit Quarterly) also make provocative points about the structural barriers to honest communication that exist between nonprofits and their funders, and the need to re-examine entrenched assumptions.
Speaking from my own experience as an evaluator of nonprofits and consultant to grantmakers, I want to offer three reflections on Vu’s post that may be useful additions to the conversation:
1. “No matter what the problem is, it’s always a people problem.” One of my heroes, Gerald Weinberg, said this first and best. As a therapist as well as a consultant-evaluator, I see the truth of this statement nearly every day. Human beings love to think that external factors are to blame even when a more uncomfortable reality may exist. It’s just easier that way.
- “If only she weren’t so frustrating, I wouldn’t have to get so angry,” or
- “We thought about introducing best practice X into our nonprofit (or grantmaking institution), but our stakeholders would never understand it,” or
- “Our program actually works; there just seemed to be something wrong with the participants.”
Is there ever any truth in those explanations/rationalizations? Sure. But the whole truth rarely lets us off the hook so easily, either in our personal lives or in an organizational and institutional context.
A fuller truth is that if you really want to understand a chronic problem that exists between people or groups, at some point you have to take an honest look beyond abstract concepts and consider the actual motivations, interests and power relationships that exist among them. You have to find a way to talk about how things look from both sides, such as the fear of giving up power or resources, or of disappointing stakeholders. (BTW, to see an innovative strategy for surfacing these issues, check out the National Committee for Responsive Philanthropy’s Philamplify project).
Far from scapegoating, this is simply a reminder that chronic problems are chronic for a reason—in some unspoken way, they represent a “vector of competing interests.” In other words, problems persist because they are actually ways to manage or to avoid other more difficult problems.
This process of force-fitting information into an interpretive framework that is convenient for some groups can simultaneously be damaging to others. That outcome may be conscious and deliberate, or unconscious and inadvertent, or somewhere in between. However, I believe that the more unconscious, inadvertent, and undiscussed this process is, the more likely data will serve in effect as a weapon, and perpetuate a pernicious and self-defeating status quo.
In other words, Weinberg’s “people problem” principle applies to the notion of data weaponization. The issue Vu and others raise is not really a problem with data. At bottom, weaponized data is a people problem: it comes down to those pesky critters who actually pay for, create, and use the stuff. I believe I am merely underscoring Vu’s argument, since much of his post is actually aimed at choices made about data by humans (and I’ll go out on a limb here and assume that nearly all grantmakers, evaluators, and nonprofit leaders are, in fact, humans).
The upshot: Data doesn’t weaponize itself!
Why is this an important point? While evaluators have at their fingertips a number of technical strategies that in theory can improve the situation Vu describes, technical solutions can never resolve underlying people problems. And if those people problems are not addressed, they will persist and undermine any tool or rational strategy we can throw at them.
2. Assuming grantmakers, nonprofits, and evaluators recognize the people problems involved, the use of evidence-based decision making (EBDM) tools offers a powerful corrective.
The EBDM movement has been gathering steam over the past decade, partly in response to problems like weaponized data. EBDM strategies and techniques are adaptable; they can be incorporated into the design and execution of most evaluation projects. Here at the Current we’re in the midst of a series on EBDM, so if you are interested in this topic, that’s a good place to start. For this post, let’s just touch on a few key ideas about EBDM.
In his book Thinking, Fast and Slow, Daniel Kahneman proposed that the brain has two distinct cognitive systems. System 1 is quick, instinctive—and frequently wrong. System 2 is logical, slow, and effortful, yet it is capable of producing markedly better results. For a quick and fun intro to these systems, check out this cool video by ASAP Science.
The bottom line:
- System 1 thinking is where fear and those other nasty dynamics that underlie most people problems live.
- EBDM not only provides a way to bypass System 1 thinking, but it can also enhance System 2 thinking.
- The way EBDM works for decision making is analogous to the way eyeglasses work to correct poor eyesight. In other words, EBDM can sharpen your insight, but you have to actually use it for it to be effective.
When used properly, EBDM nudges users – nonprofits, grantmakers and even evaluators – to more consistently:
- articulate, question, and operationalize the key assumptions that underlie programs and their evaluation;
- develop clear sets of formative and/or summative outcome/impact criteria that are based on explicit and testable theories of change;
- draw upon multiple sources and types of data;
- consider multiple convergent methods of attacking key research questions;
- engage multiple interpreters of data at all stages of evaluation, especially the communities that are the subjects or targets of programs and their evaluations; and
- identify the “people problems” discussed above and explicitly seek to minimize their effect on the evaluation process.
3. If it’s true that evaluators have participated with nonprofits and grantmakers in weaponizing data, then we also have a duty to join in the search for solutions.
As a final point in reflecting on Vu Le’s post, Weaponized data: How the obsession with data has been hurting marginalized communities, I’d like to widen the frame by pointing out that alongside nonprofits and grantmakers, a third party is involved in the status quo that Vu and others describe. This is a group that could and should participate in the discussion and collaborate on solutions.
I’m looking at us, fellow evaluators.
Yes, that’s right: by and large, evaluators are the ones who actually capture and massage the data that nonprofits and grantmakers use to frame their relationship. And if the data we generate are weaponized, then as evaluators we have a professional ethical obligation to consider our role in this pattern and to join in the search for viable solutions. I have seen some perceptive professional evaluators (such as Lisa Ranghelli and Yna Moore writing in the Nonprofit Quarterly) discussing the chronic relationship problems that exist between nonprofits and grantmakers. But I haven’t come across many who are willing to point out that this is actually a three-way dynamic.
Evaluators are every bit as vulnerable as nonprofits and grantmakers to the people problems we have discussed in this post. Most professional evaluators are trained to manage our personal anxieties about the unintended effects of our work and to keep the influence of those emotions to a minimum, but it is still demanding to remain as disinterested as the role requires.
Delving into the larger dynamics that exist between nonprofits and grantmakers is usually seen as “out of scope” for an evaluator. But it is time for evaluators to join the conversation in earnest. Even a brief reading of the American Evaluation Association’s Guiding Principles shows that we have a clear duty to consider the larger effects of our work, both intended and unintended, especially where they support a harmful status quo for marginalized communities.
Can we beat our data-swords into plowshares? I certainly hope so. My colleagues and I will be returning to this topic in the Current, and we’ll also be keeping a weather eye on Vu, the Nonprofit Quarterly, the National Committee for Responsive Philanthropy, and others who have had the courage to speak out on this disruptive topic.