09 Oct 2012
- clearly relevant to the key actions, decisions, and thinking of those the evaluation needs to inform;
- going right to the heart of what is really important, without getting lost in the details;
- favoring approximate answers to important questions over accuracy to four decimal places on trivia;
- resisting being lured into a focus on the outcomes that are most easily measured;
- presenting findings in a way that is simple, but not simplistic;
- useful — at both strategic and practical (or operational) levels;
- influencing and clarifying thinking, action, and decision-making; and
- providing insights that help people figure out what actions to take.
- a clear purpose for the evaluation;
- the right stakeholder engagement strategy;
- important, big picture evaluation questions to guide the whole evaluation;
- well-reasoned answers to the big picture questions, backed by a convincing mix of evidence;
- succinct, straight to the point reporting that doesn’t get lost in the details; and
- answers and insights that are actionable, that we can do something with.
Whereas some readers of this minibook will assert that they already do actionable evaluation, and that nothing new is presented here, I would argue that few evaluation studies I have read would qualify as actionable evaluation, for two main reasons: (1) evaluators measure what they can measure rather than risk finding "approximate" answers to the right questions, and (2) evaluators generate only evidence, not evaluative conclusions, telling us "what's so" (e.g., what the outcomes are) but not "so what" (how good, valuable, or worthwhile those outcomes are). Make your evaluation truly a measure of worth, merit, or value, and more actionable (i.e., usable), by reading this short publication and attempting the simple methodologies presented. It may be the best $3.99 you ever spend!
15 Jan 2012
I read a lot of different blogs as part of my personal and professional learning and last year began reading Seth Godin’s blog. An entrepreneur, marketer, and author, he wrote the following in April 2011:
28 Sep 2011
Okay – not a straight read – but another go-to book: Scriven, M. (1991). Evaluation thesaurus (4th Edition). Thousand Oaks, CA: Sage Publications, Inc.
Very useful for helping me understand the history of evaluation: Alkin, M. (2004). Evaluation roots: Tracing theorists’ views and influences. Thousand Oaks, CA: Sage Publications, Inc.
- Clients who seek to learn, change, and improve;
- Internal “evaluation champions” within client organizations who will support the conduct of evaluation studies and use of results;
- Effective feedback mechanisms from evaluators to clients so evaluation information is understood and used; and
- Trust between evaluators and client organizations.
19 Apr 2011
Having recently taken up knitting, I was disappointed at how hard it is to find yak yarn. I decided to investigate, and here is what I found: "A Hairy Yak is a Happy Yak"
14 Apr 2011
01 Apr 2011
Recently there was a discussion on AEA's listserv EvalTalk about what it means to think evaluatively. Many people weighed in on this topic, but it surprised me how few seemed to grasp the distinction between simply using evidence and using evidence to make a judgment of merit, worth, or value. In essence, it's the same conundrum I've found in purported evaluation reports that detail "what is" without going one step further to address "what is the value (of what is)."
However, a few people did, it seems to me, accurately identify what is meant by thinking evaluatively. One example I really liked was written by Eileen Stryker: "It seems to me that thinking evaluatively is about how we arrive at or account for judgments about value or quality. Evaluative thinking means being tuned in to value judgments people make (e.g., listening for such language as: that's good, he's doing a good job, the program is working, they're really getting better, and words like effective, quality, good, bad, better, improving, etc.), and questioning how those judgments were arrived at and what evidence may exist to substantiate the value claim."
Eric Weir strongly agreed with Eileen's definition and added: "[Evaluative thinking] is either using evidence to support value judgments or assessing the extent to which value judgments are supported by evidence… Evidence does not support or undermine judgments by itself. Arguments are needed to connect the evidence to the judgment."
Lastly, Bob Williams summed it up nicely as "evaluative thinking is the informed judgment of value, merit, or worth."
This was an incredibly interesting discussion and made me wonder a few things:
- Why do we, as evaluators, so often seem to feel insecure about evaluatively assessing outcomes?
- What does this say about evaluation training and the need for more emphasis on learning to think evaluatively?
- Are there differences in how evaluatively people think based on training, field of evaluation, years of experience, etc.?
I would love to know what others think about these questions. Feel free to comment!