Last week I had an argument about the statistical interpretation of chance and predetermination. Are they attributes of the processes and phenomena we model, or are they a product of the way we model a process? I believe the latter.
Stochasticity reflects our level of understanding of a phenomenon. If we had complete information about a system and its states (Laplace’s demon), there would be no need for probabilistic models; everything would be truly deterministic. Think of the moon’s changing phases, solar eclipses and planetary constellations. All of these are governed by well-known rules, and by applying these rules we can predict them perfectly without referring to chance. But this is not because these phenomena are deterministic in themselves; it is because we understand them on a deterministic level. Completely ignorant of the mechanics of our solar system, we could still try to predict solar eclipses probabilistically: looking at past data, we would infer the probability of observing a solar eclipse within the next 10 years. This is the field on which social science plays.
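To make the eclipse example concrete, here is a minimal sketch of what such an ignorant, purely data-driven model might look like. The historical record is simulated with made-up numbers (there is no real eclipse data here), and years are treated as independent Bernoulli trials, which is itself a modeling assumption:

```python
import random

random.seed(42)

# Hypothetical record: for each of 200 past years, whether a solar eclipse
# was visible from our imaginary observer's location (simulated data, not
# real astronomy -- the 0.05 base rate is invented for illustration).
history = [random.random() < 0.05 for _ in range(200)]

# Frequentist estimate of the per-year eclipse probability.
p_year = sum(history) / len(history)

# Probability of at least one eclipse in the next 10 years, assuming
# independent years -- an assumption the mechanics of the solar system
# would actually violate, which is exactly the point.
p_decade = 1 - (1 - p_year) ** 10

print(f"P(eclipse in a given year) = {p_year:.3f}")
print(f"P(at least one eclipse in 10 years) = {p_decade:.3f}")
```

A Laplacean demon would need none of this: it would compute the next eclipse date exactly. The probabilities live in the model, not in the sky.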
To explain the difference between deterministic and stochastic components of a phenomenon, I was given the following example: “The event of winning the lottery is produced by my decision to play the lottery (deterministic) vs. the outcome of the draw (stochastic).” OK, from an individual’s point of view this distinction makes sense. But as the scientist who models lottery outcomes, we might or might not know whether someone will play the lottery, or how regularly. Depending on our level of knowledge, we might use probability distributions to model our uncertainty, and therefore treat the decision to play the lottery as a stochastic phenomenon.
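The shift in perspective can be written out in a few lines. The jackpot odds for a 6-of-49 draw are a known mechanism (1 in 13,983,816 combinations); the participation rate of 30% is an invented figure standing in for the modeler's incomplete knowledge:

```python
from math import comb

# Known mechanism: probability of the jackpot given that one ticket is played.
p_win_given_play = 1 / comb(49, 6)  # 1 / 13,983,816 for a 6-of-49 lottery

# Observer A knows the person plays every week: the decision is deterministic.
p_play_known = 1.0

# Observer B only knows a (hypothetical) population rate: the same decision
# becomes a stochastic component of the model.
p_play_unknown = 0.3

# Same draw, different models -- the difference sits entirely in the observer.
p_win_A = p_play_known * p_win_given_play
p_win_B = p_play_unknown * p_win_given_play

print(f"Observer A: P(win) = {p_win_A:.3e}")
print(f"Observer B: P(win) = {p_win_B:.3e}")
```

Nothing about the lottery changed between the two observers; only their information did.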
Stochastic processes versus deterministic processes. It’s not them, it’s us.