eternal-abyss-77
eternal-abyss-77 OP t1_iygwbbr wrote
Reply to comment by bacon_boat in [D] Is 20 single layer Neural networks equals to a single 20 layer neural network? by eternal-abyss-77
Got it bro, thanks
eternal-abyss-77 OP t1_iwkd5k0 wrote
Reply to comment by arhetorical in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
I would like to show you my implementation of this paper and how it acts on images, so I can ask you more precisely what my issue is.
eternal-abyss-77 OP t1_iwkd1u3 wrote
Reply to comment by arhetorical in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
So how is rotation explained in the paper? Which equations should I look into?
eternal-abyss-77 OP t1_iwjvpds wrote
Reply to comment by onkus in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
So I - Hl will be:
[ 1 0 0 0 0 -1 0 ]
[ 0 1 0 0 0 0 -1 ]
[ 0 0 1 0 0 0 0 ]
[ 0 0 0 1 0 0 0 ]
[ 0 0 0 0 1 0 0 ]
[ -1 0 0 0 0 1 0 ]
[ 0 -1 0 0 0 0 1 ]
This. And the same goes for I - Vl.
Fine.
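To make this concrete, here is a small NumPy sketch of how I would build Hl and I - Hl for l = 2, N = 7, following onkus's description that H(1, 3) = 1 starts a diagonal and no row or column is all zeros. The circular wrap-around is my own assumption:

```python
import numpy as np

N, l = 7, 2

# Circular shift matrix: H[i, (i + l) % N] = 1, so every row and
# column contains exactly one 1 (no all-zero rows or columns)
H = np.roll(np.eye(N, dtype=int), l, axis=1)

D = np.eye(N, dtype=int) - H  # I - Hl

print(D)
```

With this construction the diagonal of I - Hl is all ones, and each row also has a -1 at the shifted position.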
What do they mean by those rotations?
eternal-abyss-77 OP t1_iwg7mjm wrote
Reply to comment by onkus in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
I am asking: what is I - H?
I get the same H as you say, but what is the matrix we get after I - H? Is it a mirror of H? In the paper, they wrote
[ I -I ]
[ -I I ]
So, is the I in I - H the normal identity matrix, with ones on the main diagonal, or is it a mirror of H?
eternal-abyss-77 OP t1_iwg5fm0 wrote
Reply to comment by onkus in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
> Yes, I is the identity matrix.
> The shift matrix, H, will not have a row or column with only zeros in it. If l is 2 and N is 7 then H(1, 3) (1 based?) will be a 1 and the start of a diagonal.
> You have similarly misunderstood equation 4. There will not be a row or column with only 0s in it.
Hl is this
[ 0 0 0 0 0 1 0 ]
[ 0 0 0 0 0 0 1 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 1 0 0 0 0 0 0 ]
[ 0 1 0 0 0 0 0 ]
Is I - Hl this:
[ 1 0 0 0 0 -1 0 ]
[ 0 1 0 0 0 0 -1 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ -1 0 0 0 0 1 0 ]
[ 0 -1 0 0 0 0 1 ]
Or
[ 1 0 0 0 0 -1 0 ]
[ 0 1 0 0 0 0 -1 ]
[ 0 0 1 0 0 0 0 ]
[ 0 0 0 1 0 0 0 ]
[ 0 0 0 0 1 0 0 ]
[ -1 0 0 0 0 1 0 ]
[ 0 -1 0 0 0 0 1 ]
?
Please show me how the matrix is written, and elaborate on this:
> The authors do not mention rotation at all in this paper. They do mention that gradients are computed along those directions by the pixel differences.
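To check my own understanding of that last point, here is a sketch showing that multiplying a signal by I - Hl (with Hl as a circular shift by l, which is my assumption) produces exactly the pixel differences x[i] - x[(i + l) % N]:

```python
import numpy as np

N, l = 7, 2
x = np.arange(1, N + 1, dtype=float)  # a toy 1-D "row of pixels"

H = np.roll(np.eye(N), l, axis=1)     # circular shift by l
diffs = (np.eye(N) - H) @ x           # (I - Hl) x

# Each entry is the difference between a pixel and its
# neighbour l steps away (with wrap-around)
expected = x - np.roll(x, -l)
print(diffs)
```

If this is right, then "gradients computed by the pixel differences" is literally what I - Hl does when applied to the image.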
eternal-abyss-77 OP t1_iwg0e31 wrote
Reply to comment by onkus in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
Please see dm
eternal-abyss-77 OP t1_iwfxk5b wrote
Reply to comment by onkus in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
OK, I'll ask you exactly what I don't understand.
In equations 2 and 4 there is I, which represents the identity matrix, right?
So let's say l = 2 and N = 7.
Will the shift matrix then be:
[ 0 0 0 0 0 1 0 ]
[ 0 0 0 0 0 0 1 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 1 0 0 0 0 0 0 ]
[ 0 1 0 0 0 0 0 ]
If yes, then
[ I -I ]
[ -I I ]
should be of the form:
[ 1 0 0 0 0 -1 0 ]
[ 0 1 0 0 0 0 -1 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ -1 0 0 0 0 1 0 ]
[ 0 -1 0 0 0 0 1 ]
Or
[ 1 0 0 0 0 -1 0 ]
[ 0 1 0 0 0 0 -1 ]
[ 0 0 1 0 0 0 0 ]
[ 0 0 0 1 0 0 0 ]
[ 0 0 0 0 1 0 0 ]
[ -1 0 0 0 0 1 0 ]
[ 0 -1 0 0 0 0 1 ]
?
Be it horizontal shift or vertical shift.
And what do they mean by these rotations: 0°, 45°, 90°, 135°?
I'm extending this idea, so I'm asking for the community's help, perspectives, and opinions, to make sure I'm not misunderstanding the math.
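For the directions, my current reading (which may be wrong, hence this post) is that 0°, 45°, 90°, and 135° just pick which neighbouring pixel the difference is taken against. A sketch of that interpretation, where the offsets are my own assumption:

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 image

# Neighbour offsets (dy, dx) for the four directions, as I understand them
offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

# Difference between each pixel and its neighbour in the given direction
# (np.roll gives circular boundary handling)
grads = {
    angle: img - np.roll(img, shift=(-dy, -dx), axis=(0, 1))
    for angle, (dy, dx) in offsets.items()
}
```

So 0° would be the horizontal difference, 90° the vertical one, and 45°/135° the two diagonals.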
eternal-abyss-77 OP t1_iwfx6f4 wrote
Reply to Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
OK, I'll ask you exactly what I don't understand.
In equations 2 and 4 there is I, which represents the identity matrix, right?
So let's say l = 2 and N = 7.
Will the shift matrix then be:
[ 0 0 0 0 0 1 0 ]
[ 0 0 0 0 0 0 1 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 1 0 0 0 0 0 0 ]
[ 0 1 0 0 0 0 0 ]
If yes, then
[ I -I ]
[ -I I ]
should be of the form:
[ 1 0 0 0 0 -1 0 ]
[ 0 1 0 0 0 0 -1 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ -1 0 0 0 0 1 0 ]
[ 0 -1 0 0 0 0 1 ]
Or
[ 1 0 0 0 0 -1 0 ]
[ 0 1 0 0 0 0 -1 ]
[ 0 0 1 0 0 0 0 ]
[ 0 0 0 1 0 0 0 ]
[ 0 0 0 0 1 0 0 ]
[ -1 0 0 0 0 1 0 ]
[ 0 -1 0 0 0 0 1 ]
?
Be it horizontal shift or vertical shift.
And what do they mean by these rotations: 0°, 45°, 90°, 135°?
I'm extending this idea, so I'm asking for the community's help, perspectives, and opinions, to make sure I'm not misunderstanding the math.
eternal-abyss-77 OP t1_iwfu277 wrote
Reply to comment by onkus in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
Sir, firstly, thanks for responding.
I have already implemented this as a working program. But now I am enhancing it, and I have a feeling that I am missing something from the paper and not understanding it properly.
For example:
Equations 2, 4, 6, 8, 10, and 15-18 on pages 3, 4, and 5;
the training of the model on the generated features with linear LSE, mentioned on pages 6-7;
and Section B, "Local pixel difference descriptor", paragraph 2, regarding directions, and its related figures, Figure 3(a, b).
If you can explain these things, I can follow your explanation effectively and ask my doubts with respect to my present work on this paper, with code.
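On the linear LSE part, here is roughly what I assume "training with linear LSE" means: solving an ordinary least-squares problem for the weights on the generated features. The feature matrix and targets below are made-up placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((100, 8))  # generated features, one row per sample (placeholder)
y = rng.standard_normal(100)       # training targets (placeholder)

# Linear least-squares estimate: w = argmin_w ||F w - y||^2
w, *_ = np.linalg.lstsq(F, y, rcond=None)
```

If the paper means something other than this ordinary least-squares fit, that is exactly the part I want explained.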
eternal-abyss-77 OP t1_iwfrja8 wrote
Reply to comment by arhetorical in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
Please explain equations 2 and 4, sir.
How do I design the identity matrices?
eternal-abyss-77 OP t1_iwfpjtm wrote
Reply to comment by sEi_ in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
Reply to edit 2: that's why I sent you the previous link, which is from sci-hub.se.
eternal-abyss-77 OP t1_iwfpfz7 wrote
Reply to comment by arhetorical in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
Sir, firstly, thanks for responding.
I have already implemented this as a working program. But now I am enhancing it, and I have a feeling that I am missing something from the paper and not understanding it properly.
For example:
Equations 2, 4, 6, 8, 10, and 15-18 on pages 3, 4, and 5;
the training of the model on the generated features with linear LSE, mentioned on pages 6-7;
and Section B, "Local pixel difference descriptor", paragraph 2, regarding directions, and its related figures, Figure 3(a, b).
If you can explain these things, I can follow your explanation effectively and ask my doubts with respect to my present work on this paper, with code.
eternal-abyss-77 OP t1_iwfn0qt wrote
Reply to comment by sEi_ in Can someone explain me the math behind this paper and tell me whether the way I have understood this paper is right or not? by eternal-abyss-77
Check now.
eternal-abyss-77 OP t1_iygx65a wrote
Reply to comment by bacon_boat in [D] Is 20 single layer Neural networks equals to a single 20 layer neural network? by eternal-abyss-77
Bro, let me ask you one more question; please bear with me.
If result[ f(x)+f(x)+f(x)+f(x) ] >= result[ f(f(f(f(x)))) ]
(where "result" means the feature map, i.e. the features retained or extracted),
can I conclude that both are the same?