Constant-Cranberry29
Constant-Cranberry29 OP t1_jbiyw8d wrote
Reply to comment by neuralbeans in Can feature engineering avoid overfitting? by Constant-Cranberry29
Because I've read in some papers that feature selection (FS) and feature engineering (FE) are different things.
Constant-Cranberry29 OP t1_jbixwf1 wrote
Reply to comment by neuralbeans in Can feature engineering avoid overfitting? by Constant-Cranberry29
I think feature selection and feature engineering are different
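A minimal sketch of the distinction I mean, using toy made-up columns (the names and values are illustrative only): feature selection keeps a subset of existing columns, while feature engineering derives new columns from existing ones.

```python
import pandas as pd

# Toy data frame with two informative columns and one noise column.
df = pd.DataFrame({
    "height": [1.6, 1.8],
    "weight": [60.0, 80.0],
    "noise": [0.1, 0.2],
})

# Feature selection: keep a subset of the existing columns.
selected = df[["height", "weight"]]

# Feature engineering: derive a new column from existing ones.
engineered = df.assign(bmi=df["weight"] / df["height"] ** 2)
```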
Constant-Cranberry29 OP t1_jbiutcs wrote
Reply to comment by neuralbeans in Can feature engineering avoid overfitting? by Constant-Cranberry29
Can you provide a reference that states that feature engineering can address overfitting?
Submitted by Constant-Cranberry29 t3_11mokqu in deeplearning
Constant-Cranberry29 OP t1_iwoc176 wrote
Reply to comment by Hamster729 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
It's still the same: even after I drop the abs(), drop normalization, and change the last layer to model.add(Dense(1, activation=None, use_bias=False)), it doesn't work.
Constant-Cranberry29 OP t1_iwo71pb wrote
Reply to comment by Lexa_21 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
it doesn't work
Constant-Cranberry29 OP t1_iwo6vm1 wrote
Reply to comment by Hamster729 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
from tensorflow.keras.callbacks import LearningRateScheduler

initial_learning_rate = 0.02
epochs = 50
decay = initial_learning_rate / epochs

# Time-based decay: shrink the learning rate as training progresses.
def lr_time_based_decay(epoch, lr):
    return lr / (1 + decay * epoch)

history = model.fit(
    x_train,
    y_train,
    epochs=50,
    validation_split=0.2,
    batch_size=64,
    callbacks=[LearningRateScheduler(lr_time_based_decay, verbose=2)],
)
Constant-Cranberry29 OP t1_iwo6ukg wrote
Reply to comment by Hamster729 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
>Okay. So, as I understand, your labels are usually either zero (before normalization), or negative, and, very rarely, they are positive.
>
>With the abs, it's easy for the model to reproduce the "baseline" level, because it's still zero after normalization, and as long as the last Dense produces a large negative number, sigmoid turns that number into zero.
>
>I think it would work even better if, instead of abs, you set all positive labels to zero, then normalize. (After normalization, the "baseline" level will become 1, also easy to reproduce).
>
>In both cases, it will work for data points that originally had negative or zero labels, but it won't work for data points with originally positive labels.
>
>You have a problem without normalization, because the "baseline" level is no longer 0 or 1 and your model needs to converge on that number. I think it would get there eventually, but you'll need more training, and probably learning rate decay (replace the constant learning rate with a tf.keras.optimizers.schedules.LearningRateSchedule object, and play with its settings.)
>
>The question is, do you want, and do you expect to be able to, reproduce positive labels? Or are they just random noise? If you don't need to reproduce them, just set them to zero. If they are valid and you need to reproduce them, do more training.
I have tried using a tf.keras.optimizers.schedules.LearningRateSchedule object; it still doesn't work.
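Roughly what I tried, for reference. ExponentialDecay is one of the built-in LearningRateSchedule classes; the decay_steps and decay_rate values below are illustrative guesses, not tuned settings.

```python
import tensorflow as tf

# A built-in LearningRateSchedule: the rate decays smoothly as
# lr = initial_learning_rate * decay_rate ** (step / decay_steps).
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.02,
    decay_steps=1000,
    decay_rate=0.9,
)

# Pass the schedule object in place of a constant learning rate.
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```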
Constant-Cranberry29 OP t1_iwnz3zj wrote
Reply to comment by Hamster729 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
I have edited the pictures to include the normalized data.
Constant-Cranberry29 OP t1_iwnyhip wrote
Reply to comment by Hamster729 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv('1113_Rwalk40s1.csv', low_memory=False)
columns = ['Fx']
selected_df = df[columns]
FCDatas = selected_df[:2050]

SmartInsole = np.array(SIData[:2050])
FCData = np.array(FCDatas)

# Concatenate the 89 insole input channels with the force label column.
Dataset = np.concatenate((SmartInsole, FCData), axis=1)

scaler_in = MinMaxScaler(feature_range=(0, 1))
scaler_out = MinMaxScaler(feature_range=(0, 1))
data_scaled_in = scaler_in.fit_transform(Dataset[:, 0:89])
data_scaled_out = scaler_out.fit_transform(Dataset[:, 89:90])
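To check the scaling behavior itself: MinMaxScaler maps a column containing both negative and positive values linearly onto [0, 1] regardless of sign. A toy standalone example (the values are made up):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy label column mixing negative, zero, and positive values.
y = np.array([[-5.0], [0.0], [3.0], [10.0]])

scaler = MinMaxScaler(feature_range=(0, 1))
y_scaled = scaler.fit_transform(y)  # minimum -> 0.0, maximum -> 1.0
```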
Constant-Cranberry29 OP t1_iwnxdah wrote
Reply to comment by sqweeeeeeeeeeeeeeeps in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
I want to reduce the shifted predictions that appear when I don't use abs().
Constant-Cranberry29 OP t1_iwnx4ze wrote
Reply to comment by sqweeeeeeeeeeeeeeeps in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
So what should I do to solve this problem?
Constant-Cranberry29 OP t1_iwnx2yp wrote
Reply to comment by Hamster729 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
If you're looking at the numbers and wondering why they aren't in the 0-1 range: before plotting, I already transformed the values back to their original scale.
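For context, this is roughly what that back-transformation looks like with a fitted MinMaxScaler's inverse_transform (toy standalone data, not my actual values):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy labels with mixed signs (illustrative only).
labels = np.array([[-3.0], [0.0], [6.0]])

scaler_out = MinMaxScaler(feature_range=(0, 1))
scaled = scaler_out.fit_transform(labels)

# Map values in scaled space (here: the scaled labels themselves)
# back to the original units before plotting.
restored = scaler_out.inverse_transform(scaled)
```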
Constant-Cranberry29 OP t1_iwlwjy8 wrote
Reply to comment by pornthrowaway42069l in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
I have tried that, and it doesn't work.
Submitted by Constant-Cranberry29 t3_ywu5zb in deeplearning
Constant-Cranberry29 OP t1_is8w1w2 wrote
Reply to comment by WildConsideration783 in how to find out the problem when want to do testing the model? by Constant-Cranberry29
I already added dropout but it's still the same, and judging from the loss curves the model is not overfitting.
Constant-Cranberry29 OP t1_irreagz wrote
Reply to comment by _Arsenie_Boca_ in how to find out the problem when want to do testing the model? by Constant-Cranberry29
Do you mean I need to size up this part?
def ResNet50Regression():
    Res_input = layers.Input(shape=(178,))
    width = 128

    x = dens_block(Res_input, width)
    x = identity_block(x, width)
    x = identity_block(x, width)

    x = dens_block(x, width)
    x = identity_block(x, width)
    x = identity_block(x, width)

    x = dens_block(x, width)
    x = identity_block(x, width)
    x = identity_block(x, width)

    x = layers.BatchNormalization()(x)
    x = layers.Dense(1, activation='linear')(x)

    model = models.Model(inputs=Res_input, outputs=x)
    return model
Constant-Cranberry29 OP t1_irra20x wrote
Reply to comment by _Arsenie_Boca_ in how to find out the problem when want to do testing the model? by Constant-Cranberry29
Then do you have any suggestions so that the model I built can predict properly?
Constant-Cranberry29 OP t1_irr8eo4 wrote
Reply to comment by _Arsenie_Boca_ in how to find out the problem when want to do testing the model? by Constant-Cranberry29
>I guess this is timeseries forecasting. You should think about the lookahead. Probably, during training, the model only has to predict the next point, while during testing, it has to predict many values autoregressively
What should I do? Should I change the model structure or use a different model?
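If I understand the lookahead point above, the difference is that at test time each prediction becomes the input for the next step, so errors compound. A rough sketch of that autoregressive loop, where model_fn is a hypothetical stand-in for any one-step-ahead predictor:

```python
import numpy as np

def autoregressive_forecast(model_fn, history, steps):
    """Multi-step forecast by feeding each prediction back as input.

    model_fn: hypothetical one-step predictor mapping a window of
    past values (np.ndarray) to the next scalar value.
    """
    window = list(history)
    preds = []
    for _ in range(steps):
        nxt = model_fn(np.array(window))
        preds.append(nxt)
        window = window[1:] + [nxt]  # slide the window forward
    return preds
```

With a toy predictor like `lambda w: float(w[-1]) + 1.0` the loop just extends the trend, which shows how any per-step bias accumulates over the horizon.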
Submitted by Constant-Cranberry29 t3_y0deqh in deeplearning
Constant-Cranberry29 OP t1_jbjt7m4 wrote
Reply to comment by trajo123 in Can feature engineering avoid overfitting? by Constant-Cranberry29
Yes, you can see my problem here: https://stackoverflow.com/questions/75672909/why-by-adding-additional-information-as-number-of-sequence-on-dataset-can-avoid