Evidence-Based Decision Making


Has the time come for Moneyball for government?

January 2nd, 2015 | Evaluation, Data and Statistics, Evidence-Based Decision Making, Best Practices

Tying funding for social programs to their effectiveness seems like a no-brainer. Sadly, however, genuine evidence-based decision making in policy and budget priority-setting in federal social spending is all too rare. Evaluations are often required in federally funded social programs; however, the standards of evidence have often been unclear or lacking altogether. (Sorry, but using client satisfaction scales as your sole measure of success is a poor way to measure effectiveness!) On top of that, since performance is rarely considered in funding decisions, programs have little incentive to change and improve in response to evaluative feedback. An effort to rectify this situation began in the Bush II years, but even then, less than 0.2 percent of all nonmilitary discretionary programs were held to rigorous evaluation standards. Kind of takes your breath away, doesn’t it? That’s a particularly disturbing figure when you consider that, according to Ron Haskins in the NYT, “75 percent of programs or practices that are intended to help people do better at school or at work have little or no effect.” One way of interpreting this shocking figure: in the absence of evidence-based decision making, a massive amount of money is tied up in ineffective programs when it could be invested in promising alternatives. […]


Even cows need good data

February 25th, 2015 | Data and Statistics, Evidence-Based Decision Making

We came across this article the other day and have to admit, it gave us a chuckle. Of course, we’re fans of data in all its many forms, but being based in downtown Seattle, we don’t get many chances to think about it in the context of cows – er, make that calving management. But the article provides fodder for a crunchy little morsel of usefulness. The author explains that there is an existing method for collecting data on dystocia (difficult labor in delivering calves). It’s been around since 1978 and involves a five-point scale: 1 – no problems in delivery, 2 – slight problem, 3 – assistance needed, 4 – considerable force required (we’re picturing an episode of James Herriot’s All Creatures Great and Small), and 5 – extreme difficulty or surgical intervention. The controversy involves a newly proposed scale with just three points: 1 – no problems, 2 – one-person pull, and 3 – severe traction or surgery. On an intuitive level, this sounds nice and simple, right? Who needs those extra points on the scale anyway? Well, dairies do. Reducing the spectrum from five points down to three means the resulting data will be less sensitive to what’s really going on, and that makes it less reliable for decision making. Moral of the story? Pay attention not only to the data being collected, but also to the metrics used to collect it. Simpler is not always better. […]
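To make the information loss concrete, here’s a minimal sketch in Python. The recoding map between the two scales and the sample records are our own assumptions for illustration; they aren’t from the article.

```python
from collections import Counter

# Assumed mapping from the 1978 five-point dystocia scale to the
# proposed three-point scale (the exact correspondence is our guess).
FIVE_TO_THREE = {
    1: 1,  # no problems in delivery     -> no problems
    2: 2,  # slight problem              -> one-person pull
    3: 2,  # assistance needed           -> one-person pull
    4: 3,  # considerable force required -> severe traction or surgery
    5: 3,  # extreme difficulty/surgery  -> severe traction or surgery
}

# Made-up calving records scored on the old five-point scale.
old_scores = [1, 1, 2, 3, 3, 4, 2, 5, 1, 3]
new_scores = [FIVE_TO_THREE[s] for s in old_scores]

print(Counter(old_scores))  # all five outcomes remain distinguishable
print(Counter(new_scores))  # "slight problem" and "assistance needed" are now merged
```

Once the records are recoded, no analysis can tell a herd with a rash of “assistance needed” deliveries from one with merely “slight problem” deliveries – exactly the sensitivity the dairies stand to lose.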


Save the Census data!

February 21st, 2015 | Data and Statistics, Evidence-Based Decision Making

Good data is necessary – but not sufficient in itself – for evidence-based decision making. After all, you can’t make an evidence-based decision without evidence, right? That’s why it’s dismaying that the folks at the Census Bureau are considering removing a crucial set of items when the next Census comes around in 2020. […]


Strategic Planning is Dead

January 13th, 2014 | Evidence-Based Decision Making, Strategic Planning

A recent article in the Stanford Social Innovation Review succinctly makes the case for adaptive strategic planning: a planning process that doesn’t attempt to predict the future, but rather encourages a culture of experimentation, learning, and adaptation. Since at least World War II, the prevailing approach to strategic planning in the business, governmental, and nonprofit worlds has been rooted in traditional military thinking and culture. Based on centuries of hard-won experience, this approach assumes that the past is always the best predictor of the future; that good data is hard to come by, so new information should be greeted with skepticism; and that lines of communication are generally unreliable. Therefore a small number of clear directives, not often changed, are essential in order to coordinate the far-flung elements of the operation. […]


Digesting your data

November 25th, 2013 | Evidence-Based Decision Making, Data Visualization

Before the tryptophan sets in, you’ll want to take a look at this fun data visualization from Powerhouse. Want to eat that extra helping of sweet potatoes? Love that pecan pie? […]


The road ahead

January 20th, 2012 | Evidence-Based Decision Making

Back to the question with which I opened the first post: does the world need another evaluation blog? As one of my professors used to say when one of his assertions met an annoying challenge: “Well now, that is an empirical question.” As evaluators we are hardcore empiricists, so that works for us. We’re less interested in abstraction and idealism than in what works. Evaluators get to be up-close-and-personal witnesses to some of the best and worst practices in management. Too often, it is not a pretty picture. So we see a need for a viewpoint that integrates evaluation with sound, realistic leadership principles. You see, all those books and blogs on leadership abound with advice. Some of it is useful, and some of it is frankly appalling. But in nearly every case, contemporary approaches to leadership zero in on one – and only one – of three basic premises: […]