vedrano- t1_iv06kdh wrote
Reply to [R] Keras image classification high loss by mikef0x
Blaze is right, CNNs are the way to go. You might even get away with a fully connected model, but you would need billions of images and quite a large model to reach results that a much smaller CNN can achieve with a much smaller dataset.
Btw, the val loss is oscillating during the last epochs, meaning the learning rate is too large at that point.
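One way to handle that is to shrink the learning rate automatically once val loss stops improving. A minimal sketch with Keras' `ReduceLROnPlateau` callback (the `factor`/`patience` values are just illustrative assumptions, and the `model`, `x_train`, `y_train` names are placeholders for your own objects):

```python
# Sketch: reduce the learning rate when val_loss plateaus or oscillates,
# which damps the oscillation seen in the last epochs.
from tensorflow import keras

reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",  # watch the oscillating quantity
    factor=0.5,          # halve the learning rate on plateau
    patience=2,          # wait 2 epochs without improvement first
    min_lr=1e-6,         # never go below this learning rate
)

# Then pass it to training, e.g.:
# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=50, callbacks=[reduce_lr])
```

You could also just lower the initial learning rate, but the callback only kicks in once training actually stalls.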
vedrano- t1_ittm7at wrote
Depends on how sophisticated a model you would like to build, but the simplest way would be to flatten that data and concatenate it (extending the 1-D array) with the flattened CNN output, just before the last fully connected layers of the CNN model.
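A minimal sketch of that idea with the Keras functional API. The input shapes and layer sizes are made-up assumptions; the point is the `Concatenate` joining the flattened conv features with the extra 1-D input:

```python
# Fuse extra tabular features with a CNN: flatten the conv output,
# concatenate the 1-D side input, then feed the joined vector into
# the final fully connected layers.
from tensorflow import keras
from tensorflow.keras import layers

image_in = keras.Input(shape=(64, 64, 3), name="image")  # image branch
extra_in = keras.Input(shape=(10,), name="extra")        # additional 1-D data

x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Extend the flattened CNN features with the extra data
joined = layers.Concatenate()([x, extra_in])

out = layers.Dense(64, activation="relu")(joined)
out = layers.Dense(1, activation="sigmoid")(out)

model = keras.Model(inputs=[image_in, extra_in], outputs=out)
```

A fancier option would be to pass the extra data through its own small dense branch before concatenating, but the above is the simplest version.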
vedrano- t1_itk61c7 wrote
Reply to comment by BarcaStranger in [D] Newbie question about training and epoch by BarcaStranger
That should not change the concept of continued training.
vedrano- t1_itjzfy3 wrote
Yes, it should be okay.
Moreover, some APIs (e.g. Keras) have a parameter in the fit() function, initial_epoch, that sets the epoch from which to continue training, so subsequent models can be saved under the right names.
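A runnable sketch of resuming with `initial_epoch` (the tiny model and random data here are just stand-ins so the example is self-contained):

```python
# Sketch: continue training so epoch numbering picks up where the
# first run stopped; callbacks that embed the epoch number in
# checkpoint filenames then stay consistent across runs.
import numpy as np
from tensorflow import keras

# Stand-in data and model (illustrative only)
x_train = np.random.rand(32, 4)
y_train = np.random.randint(0, 2, size=(32, 1))

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# First run: epochs 0..2
model.fit(x_train, y_train, epochs=3, verbose=0)

# Resumed run: counting continues from epoch 3 up to (not including) 5
history = model.fit(x_train, y_train, epochs=5, initial_epoch=3, verbose=0)
print(history.epoch)  # → [3, 4]
```

In a real workflow you would load the saved model from disk before the second fit() call instead of reusing the in-memory one.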
vedrano- t1_iv125dq wrote
Reply to comment by Think_Olive_1000 in [R] Keras image classification high loss by mikef0x
If it were trapped in a local minimum (vanishing gradient), the loss would not change by such a large margin.