Emotional-Fox-4285 OP t1_iw74o6z wrote on November 13, 2022 at 1:58 PM (reply to crisischris96): Yes... but the NN still doesn't work. Could you check my code for me?
Emotional-Fox-4285 OP t1_ivjbz4s wrote on November 8, 2022 at 11:52 AM (reply to HowdThatGoIn): I've sent you the link to my notebook. I'm a beginner, so I lack the knowledge to figure this out myself. I'd be grateful if you took a look at my notebook; feel free to suggest any changes. https://drive.google.com/file/d/1S5s5d6x0iwFOYk9SimiZt2U_6dLNierP/view?usp=sharing
Emotional-Fox-4285 OP t1_ivepyvf wrote on November 7, 2022 at 12:57 PM (reply to JabbaTheWhat01): I thought vanishing gradients happen when sigmoid is used in every layer. Here I use ReLU in the first two layers.
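For context, the symptom described in the post, where every ReLU activation collapses to zero and stops updating, is usually called "dying ReLU" and is distinct from the vanishing-gradient problem associated with sigmoid. A minimal NumPy sketch (hypothetical, not taken from the OP's notebook) of leaky ReLU, a common mitigation that keeps a small gradient flowing for negative inputs:

```python
import numpy as np

def relu(z):
    # Standard ReLU: the gradient is exactly 0 wherever z < 0, so a unit
    # whose pre-activations are all negative stops learning entirely.
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # Leaky ReLU keeps a small slope alpha for z < 0, so some gradient
    # always flows and a "dead" unit can recover.
    return np.where(z > 0, z, alpha * z)

def leaky_relu_grad(z, alpha=0.01):
    # Derivative used in backprop: 1 for positive z, alpha otherwise.
    return np.where(z > 0, 1.0, alpha)
```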
Emotional-Fox-4285 OP t1_ive060v wrote on November 7, 2022 at 7:16 AM (reply to elbiot): If you don't mind, I could share my code with you so you can see.
Emotional-Fox-4285 OP t1_ivdu736 wrote on November 7, 2022 at 5:59 AM (reply to elbiot): I have tried learning rates from 0.001 to 0.1. Yes, this is implemented from scratch.
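In a from-scratch implementation, a common cause of this collapse besides the learning rate is naive weight initialization combined with ReLU. A minimal sketch of He initialization, assuming NumPy; the layer sizes are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    # He initialization: weight variance 2 / fan_in keeps the scale of
    # pre-activations roughly constant through ReLU layers, making it
    # less likely that a single gradient step drives them all negative.
    W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
    b = np.zeros((fan_out, 1))
    return W, b

# Hypothetical 3-layer sizes for illustration.
W1, b1 = he_init(784, 64)
W2, b2 = he_init(64, 32)
W3, b3 = he_init(32, 10)
```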
In my deep NN with 3 layers, on the second iteration of gradient descent, the activations of Layer 1 and Layer 2 are all 0 due to ReLU, as every pre-activation is negative, and Layer 3 outputs a value of very large floating-point magnitude, the opposite of the first forward propagation. Is this how it should work? Submitted by Emotional-Fox-4285 t3_yoauod on November 7, 2022 at 2:59 AM in deeplearning (12 comments)
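A minimal sketch of the failure mode the title describes, assuming a NumPy setup similar to the OP's (all shapes here are hypothetical): one oversized update, simulated below by a large negative bias shift, pushes every pre-activation below zero, after which the ReLU layer outputs, and back-propagates, exactly zero:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))        # 4 features x 8 samples (hypothetical shapes)
W = 0.1 * rng.normal(size=(5, 4))  # small random weights for a 5-unit layer
b = np.zeros((5, 1))

def relu(z):
    return np.maximum(0.0, z)

a1 = relu(W @ X + b)
print("first pass, fraction of zero activations:", np.mean(a1 == 0.0))

# Simulate the effect of one oversized gradient step: a large negative
# bias shift drives every pre-activation below zero.
b -= 5.0
a1 = relu(W @ X + b)
print("second pass, fraction of zero activations:", np.mean(a1 == 0.0))  # effectively 1.0
```

Once a layer is in this state, its gradient through ReLU is zero everywhere, so plain gradient descent cannot pull it back out; that is why the network appears stuck from the second iteration onward.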