Superstition is a form of overfitting in human learning.
It tends to happen when a person has no clue, or no good clue, about what is going on. In that situation, all kinds of noisy signals start to look important for predicting what will happen next.
Similarly, a machine learning model is likely to get superstitious when it is given no informative features: it latches onto noise instead. This is probably the most severe form of overfitting, and the model can end up performing worse than a random baseline or even a constant guess.
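A quick sketch of this effect (a toy illustration, not my actual experiment): train a 1-nearest-neighbour classifier, the extreme "memorize everything" model, on features that are pure noise, with imbalanced labels. Always guessing the majority class scores about 70%, while the noise-fitting model lands well below that.

```python
import random

random.seed(0)

def make_data(n):
    # Features are pure noise and carry no signal about the label.
    # Labels are imbalanced: class 0 occurs about 70% of the time.
    xs = [[random.random() for _ in range(5)] for _ in range(n)]
    ys = [0 if random.random() < 0.7 else 1 for _ in range(n)]
    return xs, ys

def nn_predict(train_x, train_y, x):
    # 1-nearest-neighbour: predict the label of the closest training point.
    dists = [sum((a - b) ** 2 for a, b in zip(tx, x)) for tx in train_x]
    return train_y[min(range(len(dists)), key=dists.__getitem__)]

train_x, train_y = make_data(1000)
test_x, test_y = make_data(1000)

nn_acc = sum(nn_predict(train_x, train_y, x) == y
             for x, y in zip(test_x, test_y)) / len(test_y)
const_acc = sum(y == 0 for y in test_y) / len(test_y)  # always guess class 0

print(f"1-NN on noise features:  {nn_acc:.2f}")
print(f"constant majority guess: {const_acc:.2f}")
```

Memorizing noise effectively shuffles the predictions, so test accuracy drops toward 0.7² + 0.3² ≈ 0.58, below the 0.70 of the constant guess.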
Unfortunately, this is happening to my research experiment right now. I know that good features are out there, but they are much more expensive to compute, and I don’t want to have to use them.
I guess the only way out is to see whether I can find some features that are simple but still reasonably informative.