
Search Results

24 items found

Blog Posts (12)

  • Qual versus quant impact evaluation. Same ballpark, different ballpark?

    [Image: people counting beans, generated by Canva's AI]

    Soft versus hard impact evaluation approaches? Quant versus qual? Is there an essential difference? Summary: quant and qual impact evaluation approaches are different ballparks, because quant approaches attempt to estimate the strength of causal effects, whereas qual approaches either don't use numbers at all or only do calculations about the evidence for the effects, not about the effects themselves; in particular, they don't estimate the strength of effects.

    Here's the question: how can we distinguish "soft" approaches to impact evaluation like Outcome Harvesting, QCA, causal mapping, Process Tracing, Most Significant Change, Realist Evaluation and so on from statistics-based causal inference (SEM, DAGs, RCTs etc.)?

    Here are two bad answers. First, we can't distinguish our "soft" approach(es) by saying that we attempt to assess causal contribution and answer questions about for whom, in what contexts and so on, because quantitative approaches attempt all of that too. Second, we can say that we are focused on complex contexts, but there's nothing to stop someone using, say, OH in a non-complex context either, is there? In any case, whether a context is complex or not is also a matter of how you frame it, and there is in fact no shortage of examples where quant approaches have been used in complex contexts.

    Here's a better answer: these "soft" methods are qualitative in the sense that where we involve numbers at all, our arithmetic is essentially an arithmetic of evidence for causal effects: is there any evidence for one pathway, how much, how much compared with another? For example, Process Tracing sometimes does calculations about the relative certainty of different causal hypotheses, and QCA counts up configurations. Quant causal analysis, by contrast, involves estimating the *strength* of causal effects (as well as having clever ways to reduce the bias of those estimates).
    As far as I know, qualitative approaches never attempt this (calculating the strength of a causal effect). We might conclude that the evidence suggests a particular effect is strong, for example because we have collected and verified evidence for a strong connection. But we don't, say, combine this with another set of evidence for a very weak connection and conclude that the strength of the effect was only moderate (we don't do maths on the strengths). It's true that qual approaches also do causal inference in the sense of making the jump from evidence for a causal effect to judging that the effect is real. Quant approaches (and, to be fair, some qual approaches) suggest that using their special method gives you a free ticket to make this leap. And indeed, different methods include different ways to reduce different kinds of bias, which means you can be more confident in making the leap. But I'd say there are no free tickets: no way for an evaluator to get out of the responsibility of making the final evaluative judgement, however clever and appropriate the method. (You could argue that Fuzzy Cognitive Mapping and System Dynamics do arithmetic on the strengths of connections; perhaps that makes them quant methods.) Seen this way, qual and quant impact evaluation are, in essence, not alternatives or competitors. They are different ways to do different things.
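    The contrast can be sketched in a few lines of Python. This is only a toy illustration with made-up data (the claims, hours and income figures are invented for the example, and this is not a feature of any of the tools mentioned): a qualitative "arithmetic of evidence" counts pieces of evidence per causal claim, while a quantitative analysis estimates the strength of an effect, for instance as a regression slope.

```python
from collections import Counter

# Hypothetical coded interview fragments: (source, cause, effect) claims.
claims = [
    ("interview_01", "training", "income"),
    ("interview_02", "training", "income"),
    ("interview_02", "training", "confidence"),
    ("interview_03", "confidence", "income"),
]

# "Arithmetic of evidence": count how much evidence supports each link,
# without ever estimating how strong the causal effect itself is.
evidence = Counter((cause, effect) for _, cause, effect in claims)
print(evidence[("training", "income")])   # 2 pieces of evidence

# A quant approach instead estimates effect *strength*, e.g. a slope
# from (hypothetical) paired observations:
xs = [0, 1, 2, 3]        # hours of training
ys = [10, 12, 14, 16]    # change in income
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(slope)             # estimated effect strength: 2.0
```

    The point of the sketch is that the two calculations answer different questions: the first says how much evidence there is for a link, the second how big the effect is estimated to be.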

  • Hope you get to where you want to go in 2023!

    Best wishes from the Causal Map team (Steve, Fiona, Hannah and Samuel).

  • The elephant in the room

    Responding to our one-page description of causal mapping, Julian King says the elephant in the room with causal mapping is: can causal mapping really help you get from causal opinions to causal inference? The short answer is: sure, it can help you, the evaluator, make that leap. But causal mapping does not give out free passes. In more detail, here are four responses to the elephant.

    1: Causal mapping ignores the elephant. On its own, causal mapping doesn't even try to warrant that kind of step: it humbly assembles and organises heaps of assorted evidence so that the actual evaluator can make the final evaluative judgement. Unlike evidence from an interview, or a conclusion from process tracing or from a randomised experiment, causal mapping evidence isn't a kind of evidence; it's an assemblage of those other kinds of evidence. It certainly isn't a shortcut to get cheap answers to high-stakes questions by conducting a few interviews with bystanders. If you have to answer high-stakes causal questions like "did X cause Y" and "how much did X contribute to Y" and have just a handful of pieces of evidence, there isn't much point using causal mapping. Causal mapping is most useful for larger heaps of evidence, especially from mixed sources and of mixed quality; it gives you a whole range of ways of sorting and summarising that information, on which you can base your evaluative judgements. What it doesn't give you is a free pass to any evaluation conclusions, and especially not the high-stakes ones which occupy so much of our attention when we think and write about evaluation.

    2: In most actual causal mapping studies, the elephant doesn't even enter the room. Usually, we aren't dealing with monolithic, high-stakes questions. Most causal mappers are looking for (and finding) answers to questions like these:
    - In which districts was our intervention mentioned most often?
    - Do children see things differently?
    - How much evidence is there linking our intervention to this outcome?
    - Does our project plan see the world in the same way as our stakeholders?
    All of these are relevant questions for evaluations. Some of them might feed into judgements about relevance, or about effectiveness or impact, and so on. We might notice, for example, that there is some evidence for a direct link from an intervention to an outcome, and much more indirect evidence, and that some of those paths remain even when we remove less reliable sources. We can even compare the quantity of evidence for one causal pathway with the quantity of evidence for a different pathway. We can ask how many sources mention the entirety of a particular pathway, or we can ask which pathways have to be constructed out of evidence from different sources. (On the other hand, we don't, for example, make the mistake of inferring from the fact that there is a lot of evidence for a particular causal link that the link is a strong one.) All of this is bread and butter for evaluators, even though it doesn't answer those elephant questions.

    3: Causal mapping pushes back against the elephant. In every evaluation, the evaluator assembles some evidence and makes an evaluative judgement on the basis of it. All evaluation involves causal mapping in this sense. Occasionally there is, or seems to be, only a single piece of evidence in the heap; perhaps evidence from a controlled experiment. But the final judgement is the evaluator's responsibility, and (perhaps implicitly) must take into account other factors: "this is a controlled experiment, it was carried out by a reputable team ... but wait, their most recent study was criticised for allowing too much contamination ... but wait, the effect sizes were calculated with the latest method, and controlled experiments seem to be a good way of reaching causal conclusions ...", and so on. An essential part of the evaluative process is also careful consideration of how exactly to formulate a conclusion, bearing in mind the context and the audience and how it will be generalised and applied. So, in practice, there is always a heap of factors to consider, often involving different parts of more than one causal pathway, even when the heap seems to be dominated by one or two elephants.

    4: Causal mapping embraces the elephant. In most causal mapping studies, we do not simply assemble the evidence we already have; we actively gather it systematically. A good example is QuIP, the Qualitative Impact Assessment Protocol. The evidence is "only" the considered opinions of carefully selected individual stakeholders, but it is gathered using blindfolding techniques to minimise bias so that, once assembled and organised with causal mapping, the evaluative leap from opinions about causality to conclusions about causality can be made with more confidence, transparently, and with appropriate caveats. Still, it's not the causal mapping itself which makes or warrants the leap; it's the evaluator, using evaluative judgement.
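    The kinds of sorting and summarising described above can be sketched in Python. This is a toy illustration under invented assumptions (the sources, factor names and reliability tags are made up; this is not the Causal Map app's own code): a causal map is treated as a list of coded claims, and we count evidence per link, check whether a pathway survives when less reliable sources are dropped, and ask which sources mention a whole pathway themselves.

```python
from collections import Counter

# Hypothetical causal map: one row per coded causal claim,
# recording who said it, the link, and a reliability tag.
claims = [
    {"source": "farmer_A",  "cause": "intervention", "effect": "new skills",   "reliable": True},
    {"source": "farmer_B",  "cause": "intervention", "effect": "new skills",   "reliable": True},
    {"source": "farmer_B",  "cause": "new skills",   "effect": "higher yield", "reliable": True},
    {"source": "bystander", "cause": "intervention", "effect": "higher yield", "reliable": False},
]

# How much evidence is there for each link?
evidence = Counter((c["cause"], c["effect"]) for c in claims)
print(evidence[("intervention", "new skills")])  # 2 pieces of evidence

# Does the indirect pathway survive when we drop less reliable sources?
reliable = [c for c in claims if c["reliable"]]
links = {(c["cause"], c["effect"]) for c in reliable}
pathway = [("intervention", "new skills"), ("new skills", "higher yield")]
print(all(link in links for link in pathway))    # True

# Which sources mention the *entire* pathway themselves, and which
# pathways have to be assembled from different sources?
by_source = {}
for c in reliable:
    by_source.setdefault(c["source"], set()).add((c["cause"], c["effect"]))
whole_pathway = [s for s, ls in by_source.items()
                 if all(link in ls for link in pathway)]
print(whole_pathway)                             # ['farmer_B']
```

    Note that, as in the text, none of these counts says anything about how strong the links are; they only summarise how much evidence there is and where it comes from.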


Other Pages (12)

  • HOME | Causal Map

    Identify and visualise causal connections in speech and writing. Causal Map is a new online research tool: a way to code, analyse and visualise fragments of information about what causes what. Use it to make sense of what interviewees tell you in social science research. Use it to visualise stakeholders' experiences of how a programme or intervention is working and create collective empirical 'theories of change'.

    Get started here: simply sign up with your email address for a free account. Find out more about subscription options, more about the app, or read the Guide.

    - Highlight connections: identify the links made between causal factors, according to your source text.
    - View causal maps: see causal maps build up live as you code causal connections between factors.
    - Filter and analyse: view and compare maps and connections by any theme or characteristic to help your analysis.

    When people talk and write, they often give us rich information about what they think causes what. How then to extract this rich information from texts, whether that's survey responses, interview transcripts or monitoring reports? Causal Map is designed to help you identify and highlight information about what causes what within the text, then use powerful filtering and queries to help you aggregate, visualise and present how your sources perceive change happens. You may also be interested in Causal Map's little sister, StorySurvey, a web-based app to help you collect causal stories from your stakeholders.

    Check out these additional services:
    - Restructuring and importing your data: if you don't have time to prepare your data for Causal Map, we can do it for you! Contact us and we can quote you for a professional data cleaning and preparation service.
    - Qualitative data analysis: we have trained qualitative analysts who can code data in Causal Map, whether for large-scale surveys or in-depth interviews. Contact us to discuss your needs and for a quote.
    - Consultancy and support: we can offer advice on designing questionnaires, including using StorySurvey, and provide additional support to new or advanced users of Causal Map. We charge by the day for consultancy work, or in smaller increments for 1:1 support.
    Please contact us about any of the above to receive a quote tailored to your needs. Causal Map has been used by, and applied in reports by Bath SDR for, a range of organisations. Connect with us! Get in touch at hello@causalmap.app

  • Resources | Causal Map

    RESOURCES
    - What is Causal Mapping? A one-page explanation.
    - The Causal Map app on two pages.
    - Some recent projects using Causal Map: a searchable list.
    - The Guide to Causal Mapping: if you are new to Causal Map you will love the Guide, which tells you everything you need to know!
    - Take the quiz to test your skills and learn more about key features; this is the home to many helpful videos.
    - Causal Map Functions: download our open-source R package to see how we make sense of causal maps.
    - Forthcoming book chapter comparing an empirical Theory of Change from a causal mapping process with the project's original theory: Powell, S., Larquemin, A., Copestake, J., Remnant, F., & Avard, R. (2023). Does our theory match your theory? Theories of change and causal maps in Ghana. In L. Simeone, D. Drabble, N. Morelli, & A. de Götzen (Eds.), Strategic Thinking, Design and the Theory of Change: A Framework for Designing Impactful and Transformational Social Interventions. Edward Elgar.
    - Forthcoming journal paper "Causal mapping for evaluators".
    - StorySurvey: a free app for making surveys which ask respondents to create their own causal connections based on questions devised by you. Suitable for sending out to large numbers of people; let the app do the work of aggregating everyone's maps, then analyse this in Causal Map! Read more about this app, which is still in development, and get in touch if you are interested.
    - StorySurvey overview (including StorySurvey face2face for use offline).
    - StorySurvey report on evaluation failures.
