Thanks for the clarification. So if I one-hot encode my categorical variable and feed it through a dense layer, I would achieve the same thing as with an embedding layer?
I looked into the embedding layer in Keras, but I was not impressed: it is merely a fancy lookup table. That is fine when you want to encode sentences or the like, but my variable has only 51 categories. In that case, a dense layer applied to the one-hot encoded variable would achieve the same thing, if I am not mistaken.
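For what it's worth, the equivalence holds exactly for a bias-free dense layer: looking up row `i` of a weight matrix (what an embedding layer does) gives the same result as multiplying a one-hot vector by that matrix. A minimal NumPy sketch, with made-up sizes (51 categories, an assumed embedding dimension of 8):

```python
import numpy as np

rng = np.random.default_rng(0)

n_categories, embed_dim = 51, 8  # 51 categories; embed_dim is an arbitrary choice
W = rng.normal(size=(n_categories, embed_dim))  # shared weight matrix

ids = np.array([3, 17, 50])  # a batch of category indices

# Embedding layer: a plain row lookup into W
emb = W[ids]

# One-hot encoding followed by a bias-free dense layer: one_hot @ W
one_hot = np.eye(n_categories)[ids]
dense = one_hot @ W

assert np.allclose(emb, dense)  # identical outputs
```

The practical difference is only efficiency: the lookup skips materializing the one-hot vectors and the matrix multiply, which matters for large vocabularies but hardly at all for 51 categories.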
Thijs-vW OP t1_iw6ryeq wrote
Reply to comment by BugSlayerJohn in Update an already trained neural network on new data by Thijs-vW
Thanks for the advice. Unfortunately I do not think transfer learning is the best thing for me to do, considering:
>if you train only on the new data, that's all it will know how to predict.
Anyhow,
>If retraining the entire model on the complete data set is possible with nominal cost in less than a few days, do that.
This is indeed the case. However, if I retrain the entire model from scratch, the new model will very likely make noticeably different predictions, simply because its weights will not be identical. That is the problem I would like to avoid. Do you have any advice on that?
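One common way to limit that drift is to warm-start the retraining: initialize the new training run from the deployed model's weights (in Keras, `new_model.set_weights(old_model.get_weights())`) instead of a fresh random initialization, and fix the random seeds. An illustrative NumPy sketch on a toy linear model with synthetic data (all names and sizes here are made up for the illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def sgd(w, X, y, lr=0.1, steps=200):
    """A few full-batch gradient steps on mean-squared error."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Old data and the currently deployed model's weights
X_old = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y_old = X_old @ w_true + 0.1 * rng.normal(size=200)
w_deployed = sgd(rng.normal(size=5), X_old, y_old, steps=200)

# New data arrives; retrain on the combined set two ways
X_new = rng.normal(size=(50, 5))
y_new = X_new @ w_true + 0.1 * rng.normal(size=50)
X_all = np.vstack([X_old, X_new])
y_all = np.concatenate([y_old, y_new])

w_warm = sgd(w_deployed.copy(), X_all, y_all, steps=10)   # warm start
w_fresh = sgd(rng.normal(size=5), X_all, y_all, steps=10)  # fresh random init

# Compare both retrained models' predictions to the deployed model's
X_test = rng.normal(size=(100, 5))
drift_warm = np.linalg.norm(X_test @ w_warm - X_test @ w_deployed)
drift_fresh = np.linalg.norm(X_test @ w_fresh - X_test @ w_deployed)
assert drift_warm < drift_fresh  # warm start stays closer to old predictions
```

The warm-started model starts in the old model's basin and only moves as far as the new data pulls it, whereas a fresh initialization can land in an entirely different solution even on identical data. For a real network, saving/loading the old weights plus seeding (`tf.keras.utils.set_random_seed`, if you are on Keras) plays the same role.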