I’ve been running into this issue a lot lately: I get the same output from a DNN/ANN no matter what input I pass. I’ve solved it before, but I keep forgetting what I did. I hit it again today and spent around 2-3 hours figuring out what was going wrong. Now that it’s fixed, here I am writing this post for my future self.

Here’s what may help:

  • Normalize inputs - Very often this happens when some input features are huge compared to others. In that case, normalize the inputs (e.g. standardize each feature to zero mean and unit variance); see the first sketch after this list.

  • Reduce the learning rate - If the learning rate is too high, the loss can take a huge jump and land in an irrecoverable trench, after which the network keeps producing the same output.

  • Reduce the batch size - A huge batch size coupled with a high learning rate can push the weights straight toward a single output that fits the whole batch at once, which is exactly the constant-output behaviour. Both knobs appear in the second sketch below.

  • Reduce the depth or number of parameters in your model - This was what worked for me today. The parameters in a freshly initialized model are usually pretty small (on the order of 1e-1 to 1e-3). If the inputs are small too, then as they pass through each feed-forward layer they keep getting smaller and contribute less and less to the final output. Reducing the model’s depth reduces the number of times the signal gets diminished, and that fixed the issue; see the last sketch below.
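
To make the normalization point concrete, here’s a minimal sketch in NumPy. The array names (`X_train`, `X_test`) are placeholders of my own, not anything from a specific library; the key idea is to compute the statistics on the training split and reuse them on any other split.

```python
import numpy as np

def standardize(X_train, X_test):
    """Scale each feature to zero mean and unit variance using train-set statistics."""
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0) + 1e-8      # guard against constant (zero-variance) features
    return (X_train - mean) / std, (X_test - mean) / std
```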
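
For the learning rate and batch size, here’s a toy PyTorch sketch showing where the two knobs live. The data, layer sizes, and the specific values (batch size 32, lr 1e-4) are made up for illustration, not a recommendation.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy data and model, purely illustrative.
X = torch.randn(1024, 16)
y = torch.randn(1024, 1)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

# Smaller batches give noisier gradients, so the weights are less likely to be
# yanked toward one "average" output that fits the whole batch at once.
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# Lower learning rate: e.g. drop from 1e-2 to 1e-4 and check whether the
# outputs start varying with the input again.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```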
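
And for the depth fix that worked for me, here’s a rough sketch of what “fewer diminishing steps” means, again with made-up layer sizes:

```python
from torch import nn

# Many small layers in a row can shrink already-small activations at every
# step, until the final output barely depends on the input.
deep_model = nn.Sequential(
    *[m for _ in range(8) for m in (nn.Linear(16, 16), nn.Tanh())],
    nn.Linear(16, 1),
)

# A shallower model passes the signal through far fewer of those steps.
shallow_model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
```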