
Search Results 


  • ChatGPT is changing how we do evaluation. The view from Causal Map.

    Causal mapping – the process of identifying and synthesising causal claims within documents – is about to become much more accessible to evaluators. At Causal Map Ltd, we use causal mapping to solve evaluation problems, for example to create “empirical theories of change” or to trace evidence of the impact of inputs on outcomes. The first part of causal mapping has involved human analysts doing “causal QDA”: reading interviews and reports in depth and highlighting sections where causal claims are made. This can be a rewarding but very time-consuming process. Natural Language Processing (NLP) models like ChatGPT[1] can now do causal mapping pretty well, causally coding documents in seconds rather than days. And they are going to get much better in the coming months.

    👄 More voices: It is now possible to identify causal claims within dozens of documents, hundreds of interviews or thousands of questionnaire answers. We can involve far more stakeholders in key evaluation questions about what impacts what, and it is possible to work in several natural languages simultaneously.

    🔁 More reproducibility: To be clear, humans are still the best at causal coding, in particular at picking up on nuance and half-completed thoughts in texts. But NLP is good at reliably recognising explicit information in a way which is less subject to interpretation.

    🍒 More bites at the cherry: With NLP we can also do things that were practically impossible before, like saying “that’s great, but let’s now recode the entire dataset using a different codebook, say from a gender perspective”.

    ❓ Solving more evaluation questions: We hope to be able to compare causal datasets more systematically across time and between subgroups (region, gender, etc.).

    🤯 New challenges: We’re hard at work addressing the new challenges which NLP is bringing to causal coding:
    - Processing many large documents simultaneously.
    - Using existing pre-coded datasets to train models which are specialised for causal coding and/or for specific subject areas.
    - Developing a common grammar for causal coding, building on our existing work. For example, what to do when some claims are about an increase in income and others are about a decrease in income?
    - Optimising the prompts we give to the NLP models (this is not only a technical challenge but also has a substantive element: we have to explain to the machine in ordinary language what we actually mean by a causal claim or a causal link – see the rough sketch at the end of this post).
    - Grouping, labelling and aggregating similar causal factors.
    - After examining a coded dataset and further developing the “causal codebook”, telling the NLP to completely recode the same dataset with the new codebook – something which has been prohibitively time-consuming up to now.
    - Developing human/NLP workflows. For example, a human codes a sample of the text and tells the NLP to “continue like this”.
    - Monitoring bias against specific groups and guarding against possible blind spots in identifying causal information.

    What we already offer at Causal Map: We have developed a grammar and vocabulary for causal mapping, and a set of open-source algorithms for processing and visualising causal map databases. We help evaluators do things like this:
    - Trace the evidence for different causal pathways from one or more interventions to one or more outcomes. How many individual sources mentioned one or more of these paths?
    - Consolidate causal factors into a causal hierarchy.
    - Examine and display differences between causal maps for different groups or different time points.

    We see a lot of potential (as well as risks and pitfalls) in leveraging this functionality to help evaluators get more out of data which is currently more difficult to analyse – and we’d be interested in sharing ideas and collaborating with others interested in exploring where we go next.

    ---
    [1] Actually we use the related model GPT-3 via its API, as ChatGPT does not yet have its own API.
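
    (For the technically curious, here is a rough illustration of what prompting a language model for causal coding can look like. It is a minimal sketch, not our production pipeline: the prompt wording, the stand-in call_gpt3() function and the "cause -> effect" output format are all assumptions made for this example; in real use the stub would be replaced by an actual call to the GPT-3 API.)

```python
# Illustrative sketch of "causal QDA" with an instruction-following language
# model. The prompt wording, the call_gpt3() stub and the "cause -> effect"
# output format are assumptions made for this example only.

PROMPT_TEMPLATE = """You are coding a document for causal claims.
List every causal claim in the text below, one per line,
in the form: cause -> effect

Text:
{text}

Causal claims:
"""


def call_gpt3(prompt: str) -> str:
    """Stand-in for a real GPT-3 API call; returns a canned reply here."""
    return ("attending the training -> better farming techniques\n"
            "better farming techniques -> higher farm production")


def extract_causal_links(text: str) -> list[tuple[str, str]]:
    """Send the text to the model and parse the reply into (cause, effect) pairs."""
    reply = call_gpt3(PROMPT_TEMPLATE.format(text=text))
    links = []
    for line in reply.splitlines():
        if "->" in line:
            cause, effect = (part.strip() for part in line.split("->", 1))
            links.append((cause, effect))
    return links


if __name__ == "__main__":
    passage = ("After attending the training we learned better farming "
               "techniques, and because of that our production went up.")
    for cause, effect in extract_causal_links(passage):
        print(f"{cause}  -->  {effect}")
```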

  • ChatGPT - causal, of course

    We can thank Judea Pearl for promoting the insight that if you want to thrive in this world, you have to understand causality natively. We humans make causal connections from an early age. We wouldn't survive long if we didn't. ChatGPT has been a hit recently for several reasons, but one of them is that (like other recent, related models such as davinci) it is much better than previous models at understanding causal connections within text.

    Our understanding of the world is drenched with causal understanding: information and hypotheses about how things work (mostly accurate enough, sometimes not). It's really hard for us to not think causally: the concept of correlation is much harder to understand than the concept of causation. So, all the stuff we write on the internet (which is what ChatGPT sucks in to understand the world) is similarly drenched with causal claims. And ChatGPT is now really good at understanding this information. That means you can ask it to extract the causal links within documents and interviews -- a process we call "causal QDA". It's pretty good at it. This ability is going to make causal mapping much easier and cheaper and therefore of renewed interest for evaluators, amongst others. At Causal Map we're hard at work harnessing this ability to help automate, or semi-automate, the process of extracting causal maps from medium and large quantities of text data in a useful way. Watch this space!

    So, ChatGPT is good at extracting causal information, but does it also have explicit knowledge about causation (meta-cognition), and can it explain it? Here's a chat I had this morning. ChatGPT can't actually draw yet, but it knows a range of syntaxes for drawing graphs. So when you paste the code into Mermaid Live, it looks like this. Not bad for a robot. (Not sure you could say the sun causes the earth's rotation, though.)
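
    (The graph code from the chat isn't reproduced here, but to give a flavour of the kind of syntax involved, here is a small, illustrative Python sketch – with invented causal links, not the ones from the chat – that turns (cause, effect) pairs into Mermaid flowchart code you can paste into Mermaid Live.)

```python
# Illustrative only: turn a list of (cause, effect) pairs into Mermaid
# flowchart syntax, the kind of output you can paste into Mermaid Live.
# The example links are invented, not the ones from the chat shown above.

links = [
    ("The sun", "Daylight"),
    ("Daylight", "Plants grow"),
    ("Plants grow", "Food for animals"),
]


def to_mermaid(links):
    """Build a left-to-right Mermaid flowchart from (cause, effect) pairs."""
    node_ids = {}
    lines = ["graph LR"]
    for cause, effect in links:
        for label in (cause, effect):
            if label not in node_ids:
                node_ids[label] = f"n{len(node_ids)}"
                lines.append(f'    {node_ids[label]}["{label}"]')
        lines.append(f"    {node_ids[cause]} --> {node_ids[effect]}")
    return "\n".join(lines)


print(to_mermaid(links))
```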

  • Qual versus quant impact evaluation. Same ballpark, different ballpark?

    [Image: people counting beans, generated by Canva's AI]

    Soft versus hard impact evaluation approaches? Quant versus qual? Is there an essential difference?

    Summary: quant and qual impact evaluation approaches are different ballparks because quant approaches attempt to estimate the strength of causal effects, whereas qual approaches either don't use numbers at all or only do calculations about the evidence for the effects, not about the effects themselves – in particular, they don't estimate the strength of effects.

    Here's the question: how can we distinguish "soft" approaches to impact evaluation like Outcome Harvesting, QCA, causal mapping, Process Tracing, Most Significant Change, Realist Evaluation and so on from statistics-based causal inference (SEM, DAGs, RCTs etc.)?

    Here are two bad answers: We can't distinguish our "soft" approach(es) by saying that we attempt to assess causal contribution and answer questions about for whom and in what contexts etc., because quantitative approaches attempt all of that too. We can say that we are focused on complex contexts, but there's nothing to stop someone using, say, OH in a non-complex context, is there? In any case, whether a context is complex or not is also a matter of how you frame it. And there is in fact no shortage of examples where quant approaches have been used in complex contexts.

    Here's a better answer: these "soft" methods are qualitative, in the sense that where we involve numbers at all, our arithmetic is essentially an arithmetic of evidence for causal effects: is there any evidence for one pathway, how much, how much compared with another? For example, Process Tracing sometimes does calculations about the relative certainty of different causal hypotheses. QCA counts up configurations. Quant causal analysis, by contrast, involves estimating the *strength* of causal effects (as well as having clever ways to reduce the bias of those estimates). As far as I know, qualitative approaches never attempt this (calculating the strength of a causal effect). We might conclude that the evidence suggests a particular effect is strong, for example because we have collected and verified evidence for a strong connection. But we don't, say, combine this with another set of evidence for a very weak connection and conclude that the strength of the effect was only moderate (we don't do maths on the strengths).

    It's true that qual approaches also do causal inference in the sense of making the jump from evidence for a causal effect to judging that the effect is real. Quant approaches (and, to be fair, some qual approaches) suggest that using their special method gives you a free ticket to make this leap. And indeed different methods include different ways to reduce different kinds of bias, which means you can be more confident in making the leap. But I'd say there are no free tickets – no way for an evaluator to get out of the responsibility of making the final evaluative judgement, however clever and appropriate the method. (You could argue that FCM and System Dynamics do arithmetic on the strengths of connections. Perhaps that makes them quant methods.)

    Seen this way, in essence qual and quant impact evaluation are not alternatives or competitors. They are different ways to do different things.
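
    (To make the "arithmetic of evidence" idea concrete, here is a small, hypothetical worked example in the spirit of Process Tracing: a Bayesian update of our confidence that a causal hypothesis is true, given two pieces of evidence. The numbers are invented, and the point is that the calculation is entirely about the evidence for the effect – nowhere do we compute or combine effect strengths.)

```python
# Hypothetical worked example: "arithmetic of evidence", Process-Tracing style.
# We update our confidence that the hypothesis "the training caused the change"
# is true, given two pieces of evidence. All numbers are invented for
# illustration; nothing here estimates the *strength* of the effect.

prior = 0.5  # initial confidence that the hypothesis is true

# For each piece of evidence: P(evidence | hypothesis true), P(evidence | hypothesis false)
evidence = [
    ("several independent interviewees describe the mechanism", 0.8, 0.3),
    ("the change also appears where there was no training",      0.2, 0.6),
]

posterior = prior
for description, p_if_true, p_if_false in evidence:
    numerator = p_if_true * posterior
    posterior = numerator / (numerator + p_if_false * (1 - posterior))
    print(f"After '{description}': confidence = {posterior:.2f}")
```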

  • Hope you get to where you want to go in 2023!

    Best wishes from the Causal Map team (Steve, Fiona, Hannah and Samuel).

  • The elephant in the room

    Responding to our one-page description of causal mapping, Julian King says the elephant in the room with causal mapping is: can causal mapping really help you get from causal opinions to causal inference? The short answer is: sure, it can help you, the evaluator, make that leap – but causal mapping does not give out free passes. In more detail, here are four more responses to the elephant.

    1: Causal mapping ignores the elephant. On its own, causal mapping doesn’t even try to warrant that kind of step: it humbly assembles and organises heaps of assorted evidence in order for the actual evaluator to make the final evaluative judgement. Unlike evidence from an interview, or a conclusion from process tracing or from a randomised experiment, causal mapping evidence isn’t a kind of evidence, it’s an assemblage of those other kinds of evidence. It certainly isn’t a shortcut to get cheap answers to high-stakes questions by conducting a few interviews with bystanders. If you have to answer high-stakes causal questions like “did X cause Y” and “how much did X contribute to Y” and have just a handful of pieces of evidence, there isn’t much point using causal mapping. Causal mapping is most useful for larger heaps of evidence, especially from mixed sources and of mixed quality; it gives you a whole range of ways of sorting and summarising that information, on which you can base your evaluative judgements. What it doesn’t give you is a free pass to any evaluation conclusions, and especially not the high-stakes ones which occupy so much of our attention when we think and write about evaluation.

    2: In most actual causal mapping studies, the elephant usually doesn’t even enter the room. Usually, we aren’t dealing with monolithic, high-stakes questions. Most causal mappers are looking for (and finding) answers to questions like these:
    - In which districts was our intervention mentioned most often?
    - Do children see things differently?
    - How much evidence is there linking our intervention to this outcome?
    - Does our project plan see the world in the same way as our stakeholders?
    All of these are relevant questions for evaluations. Some of them might feed into judgements about relevance, or about effectiveness or impact, and so on. We might notice, for example, that there is some evidence for a direct link from an intervention to an outcome, and much more indirect evidence, and that some of those paths remain even when we remove less reliable sources. We can even compare the quantity of evidence for one causal pathway with the quantity of evidence for a different pathway. We can ask how many sources mention the entirety of a particular pathway, or we can ask which pathways have to be constructed out of evidence from different sources (see the sketch at the end of this post). (On the other hand we don’t, for example, make the mistake of inferring from the fact that there is a lot of evidence for a particular causal link that the link is a strong one.) All of this is bread and butter for evaluators, even though it doesn’t answer those elephant questions.

    3: Causal mapping pushes back against the elephant. In every evaluation, the evaluator assembles some evidence and makes an evaluative judgement on the basis of it. All evaluation involves causal mapping in this sense. Occasionally there is, or seems to be, only a single piece of evidence in the heap – perhaps evidence from a controlled experiment. But the final judgement is the evaluator’s responsibility, and (perhaps implicitly) must take into account other factors: “this is a controlled experiment, it was carried out by a reputable team … but wait, their most recent study was criticised for allowing too much contamination … but wait, the effect sizes were calculated with the latest method, and controlled experiments seem to be a good way of reaching causal conclusions …”, and so on. An essential part of the evaluative process is also careful consideration of how exactly to formulate a conclusion, bearing in mind the context and the audience and how it will be generalised and applied. So, in practice, there is always a heap of factors to consider, often involving different parts of more than one causal pathway, even when the heap seems to be dominated by one or two elephants.

    4: Causal mapping embraces the elephant. In most causal mapping studies, we do not in fact simply assemble the evidence we already have but actively gather it systematically. A good example is QuIP, the Qualitative Impact Assessment Protocol. The evidence is “only” the considered opinions of carefully selected individual stakeholders, but it is gathered using blindfolding techniques to minimise bias so that, once assembled and organised with causal mapping, the evaluative leap from opinions about causality to conclusions about causality can be made with more confidence, transparently and with appropriate caveats. Still, it’s not the causal mapping itself which makes or warrants the leap; it’s the evaluator, using evaluative judgement.
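
    (As a footnote for the technically minded, here is a small sketch of the kind of counting described in response 2, with invented example data – an illustration of the idea, not the Causal Map app's internal implementation.)

```python
# Illustrative sketch with invented data: counting evidence for a causal
# pathway. Each coded link is (source_id, cause, effect). This is not the
# Causal Map app's internals, just the idea behind the queries described above.

links = [
    ("interview_01", "intervention", "new skills"),
    ("interview_01", "new skills", "higher income"),
    ("interview_02", "intervention", "new skills"),
    ("interview_03", "new skills", "higher income"),
    ("interview_04", "intervention", "higher income"),  # direct link
]

pathway = [("intervention", "new skills"), ("new skills", "higher income")]

# How many sources provide evidence for each individual link on the pathway?
for cause, effect in pathway:
    sources = {s for s, c, e in links if (c, e) == (cause, effect)}
    print(f"{cause} -> {effect}: {len(sources)} source(s)")

# How many sources mention the *entire* pathway themselves (as opposed to
# pathways that have to be assembled from evidence spread across sources)?
whole_pathway_sources = [
    s for s in {s for s, _, _ in links}
    if all((s, c, e) in links for c, e in pathway)
]
print("Sources mentioning the whole pathway:", whole_pathway_sources)
```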

  • What could possibly go wrong? Looking at evaluation failures with StorySurvey.

    What goes wrong in evaluations? If we ask evaluators to tell stories about evaluation failures, what common features will emerge? We used StorySurvey – our new app for carrying out qualitative causal mapping research really quickly – to find out. Like SurveyMonkey, but for stories. Evaluators from around the world filled in a StorySurvey, and here are the results! (You can click on each slide to enlarge it.) If you're interested in applying StorySurvey within your work, get in touch: hello@causalmap.app.

  • Causal mapping demonstration at #eval22

    Following our half-day workshop on Wednesday, today we were very pleased to deliver a demonstration of Causal Map and causal mapping to around 150 people at AEA's #eval22 in New Orleans. We promised some links for those attending, and here they are:
    - The app itself at https://causalmap.shinyapps.io/tokyo
    - Our Guide to causal mapping and the Causal Map app
    - BathSDR, the home of the QuIP
    - Some information about StorySurvey
    - Our StorySurvey on evaluation failures
    - Sign up to our waiting list for trying out StorySurvey
    - Sign up for news updates about Causal Map

  • Are you a postgraduate student doing qualitative research to answer causal questions?

    We are offering small software grants, free support and a £500 prize to Masters and PhD students who use causal mapping in their research! Causal mapping is a great way to do qualitative social research. Causal mapping (Narayanan 2005; Eden et al. 1992; Laukkanen and Wang 2016; Axelrod 2015) aims to directly understand and collate the causal claims which people make in narrative (and other) data, rather than trying to deduce causal connections using statistical inference. It is commonly used to find out how groups of people (for example, stakeholders in an intervention) think the world works, but also to synthesise expert documentary evidence in order to draw conclusions about how the world really does work.

    The Causal Map app (causalmap.app) is a great way to do causal mapping. It’s a bit like NVivo or Dedoose, but it’s causal from the ground up. The app is already used by researchers and evaluation practitioners around the world and recently won the 2022 SAGE Concept Grant – an award for new software that supports social scientists during the research process. You can try the app out for free, but subscriptions for larger datasets usually start from £39/month (+VAT)*. We know that can be a lot on a student budget, and we are keen to work with more postgraduate researchers, so Causal Map Ltd is offering 20 individual subscriptions for up to 18 months completely free, and a £500 prize for the best proposal! We will also offer free group support sessions to help you learn how to use the app, alongside written guidance.

    To be in with a chance, all you have to do is send us the following, by 31st December 2022, at hello@causalmap.app:
    - a short, succinct proposal (less than 500 words) saying how you would like to include causal mapping in your PhD or Masters research (causal mapping does not have to be your main research method)
    - a one-paragraph email from your supervisor affirming that you are enrolled on a Masters or PhD course and that you have their approval to use causal mapping as part of your research

    Applications will be considered on a first-come, first-served basis; the £500 prize for the best proposal will be announced in December 2022. Applications will be judged on the following criteria:
    - Originality
    - Feasibility
    - Use of causal mapping
    - Social / environmental value
    - Contribution to science

    *We also offer student discounts – please get in touch: hello@causalmap.app!

  • Collaborative working on Causal Map App

    The feature we’ve been asked for most often on Causal Map is live collaborative coding: if only two people could edit the same file at the same time! Well, we’ve now made that possible. Up until now, you and your colleagues have been able to log in to the same or different maps from the same or different accounts*, including from multiple browser tabs – so you can have two maps open at once – but you couldn’t both edit the file at the same time. That’s what we have now enabled. (You can even have the same file open yourself in two different browser tabs if you want.) So, for example, if you have a large file with a lot of transcripts, you could start coding, say, statements 1-200, and your colleague can start coding statements 201-400. The factors you create will be available to them and vice versa. This will probably later become a premium feature for subscribers to the Unlimited package only, but right now it is free for everyone (on the Tokyo server only) – let us know (hello@causalmap.app) if you have any questions or comments.

  • Smart-zooming

    There have been so many amazing updates in Causal Map this past month! The ability to collaborate has been added, and there is a new dashboard and a new reports tab. However, hands-down my favourite feature to use has been smart-zooming! It has been such a useful function when presenting my findings.

    I always use hierarchical coding, as it gives me the option of presenting the larger overview or the finer details. It lets me create labels like Increased knowledge; Farming method/practice which can also be grouped together with other factors like Increased knowledge; Markets and displayed just as Increased knowledge when “zooming out”. However, until smart-zooming came along, this presented some difficult decisions when creating my maps. Zooming out to show the larger picture is appropriate for overview maps and is a great way to show my audience the big stories of change. But when looking in more detail at specific drivers of change, it sometimes led to over-simplifying my maps – especially in cases where, although the parent factor was the same, some of the lower-level factors were really important (and frequently mentioned) in their own right, and it seemed misleading to roll them all up together. Before the smart-zooming feature, leaving them ‘zoomed in’ was the only other option, and this sometimes led to displaying too much detail. For example, in the map above I want to keep my audience’s focus on how the improved/new farming techniques led to increased farm production, so details such as whether the farm was more protected from animals or wildfire are unnecessary and distract from the main story. Here is that same map zoomed out to level one; the underlined factor labels are the factors that have been ‘rolled up’. It is easier to read, but I have lost a lot of the detail, such as what type of knowledge drove these changes.

    Enter smart-zooming! With smart-zooming I can enter a source count (the number of sources who mentioned a particular link), and any link with a count lower than that number gets rolled up, whilst the rest are preserved. In the map above, the factor labels underlined in purple have been rolled up and those underlined in pink have been preserved. I am able to present the main stories of change without distracting people with the links with small counts. In the above map I have set this number to 10, which is the default. As you can see, I still have all the detail I wanted to present, such as the type of knowledge respondents had gained, but none of the distracting little links. Lower-level factors ‘use of fertiliser’ and ‘use of water pump’ have been rolled up into ‘Improved/new farming techniques (other)’, so it is clear there is variety within this factor, but the unnecessary detail is not displayed, as fewer than 10 sources mentioned each link. If I had simply zoomed out, my audience would not have known that cultivating in lines was the most common improved/new farming technique driving change. Using smart-zooming I can clearly present the most important stories of change.

    Find the smart-zooming filter in the add-ons section of the dashboard. Give it a go and let us know in the comments below how it helps you out!
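
    (For those who like to see the rule spelled out, here is a simplified sketch of my reading of the smart-zooming roll-up, with invented factor labels and counts rather than the app's actual code: any link mentioned by fewer sources than the threshold has its factors rolled up to "parent (other)", while the rest keep their detailed labels.)

```python
# Simplified sketch of the smart-zooming roll-up rule as described above,
# with invented labels and counts (not the Causal Map app's actual code).
# Hierarchical factor labels use ";" to separate levels, e.g.
# "Improved/new farming techniques; cultivating in lines".

SOURCE_COUNT_THRESHOLD = 10  # the default mentioned in the post

link_counts = {
    ("Increased knowledge; Farming method/practice",
     "Improved/new farming techniques; cultivating in lines"): 23,
    ("Increased knowledge; Farming method/practice",
     "Improved/new farming techniques; use of fertiliser"): 4,
    ("Increased knowledge; Farming method/practice",
     "Improved/new farming techniques; use of water pump"): 3,
}


def smart_zoom_label(label: str, count: int, threshold: int) -> str:
    """Roll a low-count factor up to its parent, keeping high-count detail."""
    if count >= threshold or ";" not in label:
        return label
    parent = label.split(";")[0].strip()
    return f"{parent} (other)"


for (cause, effect), count in link_counts.items():
    cause_label = smart_zoom_label(cause, count, SOURCE_COUNT_THRESHOLD)
    effect_label = smart_zoom_label(effect, count, SOURCE_COUNT_THRESHOLD)
    print(f"{cause_label}  ->  {effect_label}  ({count} sources)")
```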

  • Monthly subscription options now available

    You’d like to use Causal Map but don’t want to pay for an annual subscription? Don’t forget we now have monthly options, starting from £39. And as before, anyone can try Causal Map for free, forever! Use is limited to 50 causal links, great for a demo. And we have annual subscriptions too which can save you up to 50%!

  • Exploring StorySurvey

    Last autumn I carried out an evaluation of a professional development programme for aspiring social enterprise leaders in the UK that had been running for a decade. Early in my discussions with the client about the project, it became clear that understanding causal links and pathways was going to be crucial to this work. Some core aims of the evaluation were to understand:

    1. Long-term outcomes. We wanted to understand any longer-term outcomes (both positive and negative) experienced by beneficiaries since they left the programme. For some people this meant asking them to reflect back over a period of 7 years, so we knew that it would take time and space to allow them to explore their own perceptions of change over this time.

    2. The ‘highs and lows’ of the professional development journey. The programme was a 3-year commitment to intense study and action learning which people completed alongside their existing work and personal lives. This presented challenges for many beneficiaries. A straightforward focus on end goals or final outcomes would have failed to fully explore the richness and depth of growth that many beneficiaries experienced over their time in the programme. For many beneficiaries, the ‘lows’ along the way – those periods of growing self-awareness, self-doubt and low confidence – were essential elements of their overall journey.

    3. The full story. We wanted to create enough space to explore people’s experiences without the constraints of a narrow focus on the programme’s theory of change. The client wanted to develop a very honest understanding of a range of beneficiary journeys in all their glorious ‘greyness’ in order to inform future design and funding of the programme.

    Having established the centrality of understanding causal pathways to this piece of work, I reached out to the team at Bath SDR and Causal Map for some help! And that help came in the form of StorySurvey.

    StorySurvey is an app for gathering causal stories. It is a qualitative, causal survey tool. You simply send a link to a question which prompts people to make connections, for example ‘what factors have influenced your professional development in the last 3 years?’ StorySurvey then asks people for the reasons behind their answers, and the reasons for the reasons. Or the other way round: the outcomes of the outcomes. The app combines the information from different respondents into an overview map which helps visualise people’s explanations: where they agree, and where they disagree.

    We decided StorySurvey would be a great way to support this evaluation for several reasons:

    1. To increase reach. Contacting people remotely, I was able to reach more people than through one-to-one interviews. There are other online survey tools out there but, unlike StorySurvey, they would not have been able to capture the causal links and pathways that we were so interested in.

    2. To give structure to interviews. I conducted some one-to-one interviews and used StorySurvey during these to guide and support the conversations. This not only helped focus discussions but also saved time when it came to capturing and processing data!

    3. The reflective nature of StorySurvey was familiar to participants. As part of their professional development studies the beneficiaries had spent a lot of time honing their self-reflection skills, which meant they felt comfortable completing this kind of exercise.

    4. The joy of visualisations! When I had done the work that I love – the human engagement and exploration through dialogue – I could use the data captured by the tool to slice, dice and present it. The team at Causal Map supported me with my management of the data and the tools, and the end results were fantastic.

    The only limitation was the usual one – no one tool is right for every single purpose or audience. I often work in high-crisis environments such as prisons, homeless hostels and domestic abuse refuges, where simply providing a link to an online survey would not be the right approach. Using the software in interviews, however, was a great way to address this challenge – and a fantastic experience for a qualitative evaluator who regularly drowns in vast and complex research outputs!

    Overall the experience was great and the evaluation much enhanced by the ability to collate and aggregate causal pathway data. From my point of view, the exploration and visualisation of pathways and links really helped encourage the client to engage with the data rather than passively consume it. The client certainly valued the visual presentations of the findings, and during a sensemaking workshop we were able to use the maps to discuss those findings in a way that broke down the idea that the data itself would provide ‘all the answers’. I look forward to using StorySurvey again very soon and growing my understanding of its application in different environments.
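
    (As a rough illustration of what StorySurvey's "reasons for the reasons" chains become once combined – with invented respondents and factor names, not StorySurvey's actual code – here is a small sketch that turns each respondent's chain of answers into causal links and counts where respondents agree.)

```python
# Illustrative sketch (invented data, not StorySurvey's implementation):
# turning "reasons for the reasons" chains from different respondents into
# one combined set of causal links, the raw material for an overview map.

from collections import Counter

# Each respondent's chain runs from an outcome back through successive "why?" answers.
respondent_chains = {
    "respondent_A": ["greater confidence", "peer support", "action learning sets"],
    "respondent_B": ["greater confidence", "peer support", "residential weekends"],
    "respondent_C": ["new job", "greater confidence", "peer support"],
}

link_counts = Counter()
for chain in respondent_chains.values():
    # Each answer is explained by the next "reason behind it" in the chain,
    # so the later item is the cause and the earlier item is the effect.
    for effect, cause in zip(chain, chain[1:]):
        link_counts[(cause, effect)] += 1

# The combined counts show where respondents agree and where they diverge.
for (cause, effect), count in link_counts.most_common():
    print(f"{cause} -> {effect}  (mentioned by {count} respondent(s))")
```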
