Evaluation


Re-visioning funder impact

February 8th, 2016 | Evaluation, Best Practices, Strategic Planning, Uncategorized

When it’s hard to see the forest for the trees, it turns out to be not so good for the trees – or the forest. Jennifer Teunon, Executive Director of the Medina Foundation, has recently written a thoughtful piece on the Philanthropy Northwest blog (reposted from the National Center for Family Philanthropy) about the need for grantmakers to rethink their approach to funding nonprofit organizations. She does a great job of describing the corrosive real-world effects on grantees of the program-specific funding approach favored by most funders, in contrast to a general operating support approach. When funders look only at the specific programs a nonprofit operates rather than at its work as a whole (i.e., the infrastructure required to support the whole package), valuable staff time is bled off to respond to ever more granular grantwriting demands. Beyond that added stress, program-specific funding also fragments organizations and promotes a silo mentality, sapping a nonprofit’s vitality. We appreciate Jennifer’s shout-out to nonprofit leaders who are calling attention to this problem, such as our friend Vu Le of Rainier Valley Corps, whose blog often takes on the topic with insight and unicorn jokes. We also appreciate Jennifer’s recognition that an underlying factor in grantmakers’ narrow focus on programmatic outcomes is each grantmaker’s need to demonstrate its own impact. As she puts it, “I believe [a grantmaker’s tendency toward a granular focus on programs] is primarily because foundations want to understand and quantify their own impact. By earmarking dollars to a specific program, many foundations hope to draw a line from the dollars they give to the outcomes nonprofits achieve.” And that cuts to the heart of the problem… […]


The weaponization of data: With great power comes great responsibility

July 6th, 2015 | Evaluation, Data and Statistics, Cultural Competency

For the past few years, our friend Vu Le of Rainier Valley Corps has been publishing a terrific blog called Nonprofit with Balls. If you don’t already know Vu, the title gives you a clue about his provocative ideas. A seasoned nonprofit leader, Vu has an unorthodox take on how the nonprofit world actually works – and lots of disruptive (in a good way) ideas about how it could work better. Vu recently posted an entry that got our attention here at TrueBearing: Weaponized data: How the obsession with data has been hurting marginalized communities. It’s a thought-provoking read for anyone involved in the nonprofit, public, or grantmaking sectors, so all you unicorns out there, go ahead and click the link to read his post. I guarantee you’ll chuckle at least twice – and you’ll get the reference to unicorns. I’ll wait. Back already? OK. For those of you who didn’t bother to click, here is a 30,000-foot overview of Vu’s post: “Data can be used for good or for evil.” While acknowledging the power of skillfully used data and its benefits to both nonprofits and grantmakers, Vu nails ten distinct ways in which data can be – and too often has been – used to obscure rather than illuminate, to diminish the richness of our understanding of nonprofit performance, and to maintain the power status quo in a way that marginalizes and sometimes even pathologizes entire communities. […]


EBDM On The Road

May 15th, 2015 | Useful Stuff, Evaluation, Evidence-Based Decision Making

We’re on the road with Moneyball for Nonprofits at Washington Nonprofits’ Shaping the Conversation conference in Bellevue, WA. There’s lots of energy among the folks attending our workshop on using an evidence-based decision-making approach in strategic planning, action, and evaluation. The full presentation deck has plenty of supplementary information and practical resources – take a look here, and start your organization on a path to better decisions! […]


Moneyball- coming soon to a boardroom near you!

April 7th, 2015 | Evaluation, Evidence-Based Decision Making, Best Practices

In case you somehow missed it, Moneyball is a 2011 movie based on the book by Michael Lewis. Drawn from a true story, it depicts a baseball team, the 2002 Oakland Athletics, that found itself unable to compete with teams whose payrolls were three times its own. Facing the collapse of his club, general manager Billy Beane realized that relying on the traditional insights of scouts simply wouldn’t produce a competitive team on the budget the A’s had available. Instead, he turned to the most unlikely of advisors: a statistician who had never played baseball but who had a deep understanding of two things: 1) what measurable events best predict wins (primarily runs scored), and 2) what individual performance statistics predict those runs. Using the power of data, Beane could identify low-cost, high-impact players the scouts overlooked. The result (spoiler alert!): this radical “Moneyball” approach rocketed the underdog Oakland A’s into the playoffs – at a fraction of the salary of the teams they competed against. It’s an inspiring story, with potential application in many fields (no pun intended!). The Moneyball approach is taking hold in several sectors: you may have heard of evidence-based policy (government) or evidence-based practice (medicine and mental health). We prefer “evidence-based decision making”: a potent strategy for making effective decisions through the use of data. The ideas in Moneyball relate directly to key decisions that leaders face. […]


Has the time come for Moneyball for government?

January 2nd, 2015 | Evaluation, Data and Statistics, Evidence-Based Decision Making, Best Practices

Tying funding for social programs to their effectiveness seems like a no-brainer. Sadly, however, genuine evidence-based decision making in policy and budget priority-setting for federal social spending is all too rare. Evaluations are often required in federally funded social programs, but the standards of evidence have often been unclear or lacking altogether (sorry, but using client satisfaction scales as your sole measure of success is a poor way to gauge effectiveness!). On top of that, since performance is rarely considered in funding decisions, there is little incentive for programs to change and improve in response to evaluative feedback. An effort to rectify this situation began in the Bush II years – but even then, less than 0.2 percent of all nonmilitary discretionary programs were held to rigorous evaluation standards. Kind of takes your breath away, doesn’t it? That’s a particularly disturbing figure when you consider that, according to Ron Haskins in the NYT, “75 percent of programs or practices that are intended to help people do better at school or at work have little or no effect.” One way of interpreting this shocking figure: in the absence of evidence-based decision making, a massive amount of money is tied up in supporting ineffective programs when it could be invested in promising alternatives. […]


Surveying the future of survey techniques

February 7th, 2013 | Evaluation, Data and Statistics, Data Visualization

I’m still sifting through the aftermath of the Presidential election and the controversial accusations of bias in polling and predictions leveled against pollsters on all sides of the political spectrum. For data geeks, the question of bias in polling is a source of endless fascination. As someone whose profession involves mostly non-political surveys, however, I zeroed in on a basic methodological question: regardless of polling firm, which polling method showed the least bias in predicting the election? According to a detailed post by Nate Silver, the answer is clear, and it should make anyone who relies on survey or polling data sit up and take notice: all things being equal, online surveys showed 40 percent less bias than live telephone interviews, and an astonishing 72 percent less bias than automated telephone “robopolls.” […]


Data Scientist: The Sexiest Job of the 21st Century

October 5th, 2012 | Evaluation, Data and Statistics

An article from the Harvard Business Review we can get behind: Data Scientist: The Sexiest Job of the 21st Century


The 17,000,001st time’s a charm

December 20th, 2011 | Leadership, Evaluation, Best Practices

Technorati estimates that the Internet hosted no fewer than 112.8 million blogs in 2011. Wow. Let that number sink in. That’s a whole lot of bloggery! It pencils out to one blog for every 63 people on the face of the planet. […]
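For the curious, that back-of-the-envelope ratio is easy to check. A minimal sketch in Python, assuming a 2011 world population of roughly 7.1 billion (the population figure is our assumption, not part of Technorati’s estimate):

```python
# Back-of-the-envelope check: people per blog in 2011.
blogs = 112.8e6       # Technorati's 2011 estimate of blogs on the Internet
population = 7.1e9    # assumed 2011 world population (approximate)

people_per_blog = population / blogs
print(round(people_per_blog))  # roughly one blog for every 63 people
```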