ChatGPT - causal, of course
We can thank Judea Pearl for promoting the insight that if you want to thrive in this world, you have to understand causality natively. We humans make causal connections from an early age. We wouldn't survive long if we didn't.
ChatGPT has been a hit recently for several reasons, but one of them is that, like other recent related models such as davinci, it is much better than previous models at understanding causal connections within text.
Our understanding of the world is drenched with causal understanding: information and hypotheses about how things work (mostly accurate enough, sometimes not). It's really hard for us not to think causally: the concept of correlation is much harder to grasp than the concept of causation.
So, all the stuff we write on the internet (which is what ChatGPT sucks in to understand the world) is similarly drenched with causal claims. And ChatGPT is now really good at understanding this information.
That means you can ask it to extract the causal links within documents and interviews -- a process we call "causal QDA". It's pretty good at it. This ability is going to make causal mapping much easier and cheaper and therefore of renewed interest for evaluators, amongst others.
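To make the idea concrete, here is a hypothetical illustration (the sentence and the links are invented for this example, not taken from a real interview): given a passage like "Because rainfall was poor, harvests failed and household incomes fell", a causal QDA pass might extract links that render as a small causal map in Mermaid syntax:

```
graph LR
    A[Poor rainfall] --> B[Failed harvests]
    B --> C[Lower household incomes]
```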
At Causal Map we're hard at work harnessing this ability to help automate, or semi-automate, the process of extracting causal maps from medium and large quantities of text data in a useful way. Watch this space!
So, ChatGPT is good at extracting causal information, but does it also have explicit knowledge about causation (a kind of meta-cognition), and can it explain it? Here's a chat I had this morning.
ChatGPT can't actually draw yet but it knows a range of syntaxes for drawing graphs. So when you paste the code into Mermaid Live, it looks like this. Not bad for a robot. (Not sure you could say the sun causes the earth's rotation, though.)
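For readers who haven't tried this: the code ChatGPT produces is roughly of the following shape (this snippet is an illustrative sketch, not the actual output from the chat above — though it does include the dubious sun-causes-rotation link mentioned):

```
graph TD
    Sun[The sun] --> Rotation[Earth's rotation]
    Rotation --> DayNight[Day and night]
```

Pasting code like this into Mermaid Live renders it as a directed graph, which is why a text-only model can still "draw".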