Thijs-vW OP t1_iw6ryeq wrote

Thanks for the advice. Unfortunately, I do not think transfer learning is the best option for me, considering:

>if you train only on the new data, that's all it will know how to predict.

Anyhow,

>If retraining the entire model on the complete data set is possible with nominal cost in less than a few days, do that.

This is indeed the case. However, if I retrain my entire model, it is very likely that the new model will make entirely different predictions, since its weight matrix will not be identical to the old one. This is the problem I would like to avoid. Do you have any advice on that?
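To make the concern concrete: two training runs that differ only in their random initialization can end up at different weights, and therefore make different predictions on the same inputs. A minimal numpy sketch with a toy underdetermined linear model (the data, sizes, and learning rate are made up for illustration):

```python
import numpy as np

def train(seed, X, y, lr=0.01, epochs=500):
    """Fit a bias-free linear model y = X @ w by gradient descent,
    starting from a seed-dependent random initialization."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(X.shape[1], 1))
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))   # more parameters than samples
y = rng.normal(size=(5, 1))

w_a = train(seed=1, X=X, y=y)
w_b = train(seed=2, X=X, y=y)

# Both runs fit the training data, but their weight vectors differ,
# so they disagree on new inputs.
print(np.allclose(w_a, w_b))  # → False
```

Because the problem is underdetermined, many weight vectors fit the training data equally well, and which one gradient descent reaches depends on the initialization; the same effect is why a retrained network rarely reproduces the old model's predictions exactly.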

1

Thijs-vW OP t1_it82bgp wrote

I looked into the embedding layer in Keras, but I was not impressed. It is merely a fancy lookup table. That is nice when you want to encode sentences or the like, but I have a variable with only 51 categories. In this case, a dense layer applied to the one-hot encoded variable would achieve the same result, if I am not mistaken.
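That equivalence is easy to check numerically: an embedding lookup on integer IDs gives the same result as a bias-free dense layer applied to the one-hot encoding of those IDs. A minimal numpy sketch (the embedding dimension and batch are made up; only the 51 categories come from the discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

n_categories, dim = 51, 8
W = rng.normal(size=(n_categories, dim))  # shared weight matrix

ids = np.array([3, 17, 50])  # a batch of category indices

# Embedding layer: a row lookup into the weight matrix.
emb = W[ids]

# Dense layer without bias on the one-hot encoding: a matrix product.
one_hot = np.eye(n_categories)[ids]
dense = one_hot @ W

print(np.allclose(emb, dense))  # → True
```

The practical difference is efficiency, not expressiveness: the lookup skips materializing the one-hot matrix and the full matrix product, which matters for large vocabularies but hardly at all for 51 categories.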

−6