
Constant-Cranberry29 OP t1_iwo6vm1 wrote

```python
from tensorflow.keras.callbacks import LearningRateScheduler

initial_learning_rate = 0.02
epochs = 50
decay = initial_learning_rate / epochs

def lr_time_based_decay(epoch, lr):
    return lr * 1 / (1 + decay * epoch)

history = model.fit(
    x_train,
    y_train,
    epochs=50,
    validation_split=0.2,
    batch_size=64,
    callbacks=[LearningRateScheduler(lr_time_based_decay, verbose=2)],
)
```
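For reference, here is a minimal pure-Python sketch of what the callback above does to the learning rate over training (same assumed values: initial rate 0.02, 50 epochs), with no Keras dependency:

```python
# Sketch of the time-based decay callback above, run outside Keras to
# show how the rate shrinks per epoch.
initial_learning_rate = 0.02
epochs = 50
decay = initial_learning_rate / epochs  # 0.0004

def lr_time_based_decay(epoch, lr):
    # Keras calls this once per epoch with the current learning rate.
    return lr * 1.0 / (1.0 + decay * epoch)

lr = initial_learning_rate
schedule = []
for epoch in range(epochs):
    lr = lr_time_based_decay(epoch, lr)
    schedule.append(lr)

print(schedule[0], schedule[-1])  # rate is unchanged at epoch 0, then shrinks
```

Note the decay compounds: each epoch multiplies the previous rate by 1 / (1 + decay * epoch).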


Constant-Cranberry29 OP t1_iwo6ukg wrote

>Okay. So, as I understand, your labels are usually either zero (before normalization), or negative, and, very rarely, they are positive.
>
>With the abs, it's easy for the model to reproduce the "baseline" level, because it's still zero after normalization, and as long as the last Dense produces a large negative number, sigmoid turns that number into zero.
>
>I think it would work even better if, instead of abs, you set all positive labels to zero, then normalize. (After normalization, the "baseline" level will become 1, also easy to reproduce).
>
>In both cases, this will work for data points that originally had negative or zero labels, but it won't work for data points with originally positive labels.
>
>You have a problem without normalization, because the "baseline" level is no longer 0 or 1 and your model needs to converge on that number. I think it would get there eventually, but you'll need more training, and probably learning rate decay (replace the constant learning rate with a tf.keras.optimizers.schedules.LearningRateSchedule object, and play with its settings.)
>
>The question is, do you want, and do you expect to be able to, reproduce positive labels? Or are they just random noise? If you don't need to reproduce them, just set them to zero. If they are valid and you need to reproduce them, do more training.
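The "set positive labels to zero, then normalize" suggestion from the quote can be sketched with NumPy (the label values here are hypothetical, chosen to match the described distribution; the manual min-max step mirrors what MinMaxScaler(feature_range=(0, 1)) computes):

```python
import numpy as np

# Hypothetical labels: mostly zero, sometimes negative, rarely positive,
# matching the description in the quote above.
y = np.array([0.0, 0.0, -3.0, 0.0, 2.0, -1.0, 0.0])

# Set positive labels to zero, as suggested.
y_clipped = np.minimum(y, 0.0)

# Min-max normalize to [0, 1], as MinMaxScaler(feature_range=(0, 1)) would.
y_scaled = (y_clipped - y_clipped.min()) / (y_clipped.max() - y_clipped.min())

print(y_scaled)  # the original zero "baseline" now maps to 1
```

After this transform the baseline sits at 1, which a sigmoid output can reproduce by saturating, exactly as the quote argues.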

I have tried using a tf.keras.optimizers.schedules.LearningRateSchedule object, and it still doesn't work.
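For anyone following along, the formula behind one such schedule object, tf.keras.optimizers.schedules.ExponentialDecay, is lr = initial_lr * decay_rate ** (step / decay_steps). A plain-Python sketch (the initial rate matches the callback earlier in the thread; decay_steps and decay_rate are hypothetical tuning knobs):

```python
# Plain-Python replica of the ExponentialDecay formula, so the schedule's
# behavior can be inspected without TensorFlow.
initial_lr = 0.02    # assumed, matching the earlier callback
decay_steps = 1000   # hypothetical
decay_rate = 0.9     # hypothetical

def exponential_decay(step):
    return initial_lr * decay_rate ** (step / decay_steps)

# In Keras, the schedule object itself is passed to the optimizer, e.g.:
#   optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
print(exponential_decay(0), exponential_decay(1000))
```

If the schedule "doesn't work", plotting this curve for your chosen settings is a quick way to check whether the rate is decaying far too fast or too slowly.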


Constant-Cranberry29 OP t1_iwnyhip wrote

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv('1113_Rwalk40s1.csv', low_memory=False)
columns = ['Fx']
selected_df = df[columns]
FCDatas = selected_df[:2050]

SmartInsole = np.array(SIData[:2050])  # SIData: insole data loaded earlier
FCData = np.array(FCDatas)

Dataset = np.concatenate((SmartInsole, FCData), axis=1)

scaler_in = MinMaxScaler(feature_range=(0, 1))
scaler_out = MinMaxScaler(feature_range=(0, 1))
data_scaled_in = scaler_in.fit_transform(Dataset[:, 0:89])
data_scaled_out = scaler_out.fit_transform(Dataset[:, 89:90])
```
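Since the target column is scaled with its own scaler, predictions come out in [0, 1] and need to be mapped back to the original Fx units with inverse_transform. A small self-contained sketch (the values here are hypothetical stand-ins for the real Fx column):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical stand-in for the Fx output column.
y = np.array([[0.0], [-4.0], [-2.0], [0.0]])

scaler_out = MinMaxScaler(feature_range=(0, 1))
y_scaled = scaler_out.fit_transform(y)

# After the model predicts in [0, 1], map back to the original units:
y_pred_scaled = np.array([[1.0], [0.5], [0.0]])
y_pred = scaler_out.inverse_transform(y_pred_scaled)
print(y_pred.ravel())
```

Keeping separate scaler_in and scaler_out objects, as above, is what makes this inverse mapping possible for the target alone.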


Constant-Cranberry29 OP t1_irreagz wrote

Do you mean I need to size up in this part?

```python
# assumes layers and models from tensorflow.keras, and that dens_block /
# identity_block are defined elsewhere in the script
def ResNet50Regression():
    Res_input = layers.Input(shape=(178,))
    width = 128

    x = dens_block(Res_input, width)
    x = identity_block(x, width)
    x = identity_block(x, width)

    x = dens_block(x, width)
    x = identity_block(x, width)
    x = identity_block(x, width)

    x = dens_block(x, width)
    x = identity_block(x, width)
    x = identity_block(x, width)

    x = layers.BatchNormalization()(x)
    x = layers.Dense(1, activation='linear')(x)
    model = models.Model(inputs=Res_input, outputs=x)

    return model
```

Constant-Cranberry29 OP t1_irr8eo4 wrote

>I guess this is timeseries forecasting. You should think about the lookahead. Probably, during training, the model only has to predict the next point, while during testing, it has to predict many values autoregressively

What should I do? Should I change the model structure or use another model?
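The autoregressive lookahead issue from the quote can be sketched like this: at test time each prediction is appended to the input window and fed back in, so errors compound in a way training never sees. Here model_predict is a dummy stand-in for a real one-step forecaster:

```python
import numpy as np

def model_predict(window):
    # Dummy one-step forecaster standing in for model.predict.
    return window.mean()

history = [1.0, 2.0, 3.0, 4.0]   # last observed values (window size 4)
window = list(history)
forecast = []
for _ in range(3):               # predict 3 steps ahead autoregressively
    next_val = model_predict(np.array(window[-4:]))
    forecast.append(next_val)
    window.append(next_val)      # the prediction becomes the next input

print(forecast)
```

Because each step consumes the previous step's output, any one-step bias accumulates over the horizon; that mismatch, not the model architecture itself, is often what breaks at test time.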
