Hello! I have been going over readings and videos on causal inference and its relation to experimentation. While the concept of causal inference can get very philosophical, I have found that I follow the formalized ideas better when my learning takes place away from the philosophical explanations. This may just be that I lack sufficient training in philosophy (obviously), but I also find that the esoteric, highly abstract nature of the philosophical explanations can impede learning.
In this blog post, I plan to share my fascination with some concepts I have recently been learning about. The concepts are not necessarily intertwined, in that explaining one does not require knowledge of the other (I may be wrong, so I would like to know what you think). In my early reading of Making Things Happen: A Theory of Causal Explanation by James Woodward, I have come to appreciate that our desire to understand causality is really about our need for control over, and manipulability of, processes, natural or otherwise. Woodward encapsulates this idea of control and manipulability by asserting that a causal explanation needs to answer the question "what if things had been different?" While this is true, I believe it is just one understanding of causal inference. I will admit I still have much more of the book to read, but it is this one idea that prompted this post. To investigate hypotheses framed by the question above, we have well-studied experimentation standards and methods. In some cases our desire for control and manipulability is granted and we are able to intervene; in other cases we are not.
In the cases where we are able to intervene, it becomes interesting how we then think about the effect of that intervention on our experimental results. It becomes even more interesting when we try to answer the question of the causal mechanism between nodes. I promise to write more about this in the coming months, specifically about the ways scientists today think about the effects of intervention on causality.
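To make the observe-versus-intervene contrast concrete, here is a minimal simulation sketch. The structural model (a confounder Z driving both a cause X and an outcome Y) and every number in it are my own illustration, not something from Woodward's book: merely observing units where X happened to be 1 gives a different answer than forcing X to 1 (Pearl's do-operator), because observation leaks information about the confounder.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy structural causal model (illustrative assumption, not from the book):
# confounder Z drives both treatment X and outcome Y; the true effect of X on Y is 2.
z = rng.normal(0, 1, n)
x = z + rng.normal(0, 1, n)
y = 2 * x + 3 * z + rng.normal(0, 1, n)

# Observing: average Y among units whose X happens to fall near 1.
# Biased upward, because X near 1 suggests Z was probably positive too.
observed = y[np.abs(x - 1) < 0.05].mean()

# Intervening: set X = 1 for everyone (cutting the Z -> X arrow),
# then regenerate Y from the same structural equation.
y_do = 2 * 1.0 + 3 * z + rng.normal(0, 1, n)
intervened = y_do.mean()

print(observed, intervened)  # observed drifts toward 3.5; intervened sits near 2.0
```

The gap between the two averages is exactly the "what if things had been different" question: only the intervention recovers the causal effect of X.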
Another idea I wanted to write about is generative models. I first learned of this concept through Sean Taylor's blog post, found here. In the post, Sean writes about the possibility that experiments such as A/B tests may really just be finding local optima and not necessarily adding significant value to a business or product. He then presents generative models as a possible mechanism through which we can derive significant value from the work that has already been done in experimentation and causality. These models would (as I understand it) learn all that we know as humans and, using that data, suggest novel ideas about what we might like or want. When I learned about this, I naturally had a lot of questions. Would we use A/B tests to test the hypotheses and possible solutions generated by the generative models?
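For a concrete picture of the kind of "local" win an A/B test surfaces, here is a sketch of a standard two-proportion z-test. The conversion rates, sample size, and the half-point lift are all made up for illustration; the point is that a statistically detectable improvement can still be a tiny step near the current optimum rather than a genuinely novel idea.

```python
import math
import random

random.seed(1)

# Hypothetical A/B test data: control converts at ~10.0%, variant at ~10.5%.
# These rates are invented purely for illustration.
n = 20_000
control = sum(random.random() < 0.100 for _ in range(n))
variant = sum(random.random() < 0.105 for _ in range(n))

p_a, p_b = control / n, variant / n

# Pooled two-proportion z-test: is the lift distinguishable from noise?
pooled = (control + variant) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p_b - p_a) / se

print(p_a, p_b, z)  # a small absolute lift, whether or not z clears ~1.96
```

Even when such a test "wins", it only tells us this small tweak beat the baseline; it says nothing about whether a very different product idea would have done far better, which is the gap Sean's generative-model suggestion is aimed at.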
I also thought that building such generative models would demand a stronger understanding and development of AI tools than we have now. More sophisticated tools, because if a model is to make predictions and suggestions about completely novel products and ideas, then I believe (perhaps wrongly) that it would need to learn or develop something like "human consciousness". There is an interesting ongoing debate on how we could make machines smarter through learning; you can find a piece I found quite illuminating on this debate here. Concerning the generative models themselves, pertinent questions include what data we would need for training (assuming that will be our approach), how we would teach the model, and how we would get it to explain an idea to us.