Lopsided-Factor-780 t1_j7743wv wrote
Reply to [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
Question from a noob:
When they say H_Fuse is fed into the decoder model, such that Y = Decoder(H_Fuse), how is it fed in? Is it fed in like the encoder output in an encoder-decoder transformer with cross-attention? Or something else?
Also, if there is a separate encoder and decoder component, are they trained together or separately?
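For what it's worth, here is a minimal NumPy sketch of the cross-attention pattern the first question describes: the decoder's own states supply the queries, while the fused representation (H_Fuse standing in for the encoder output) supplies the keys and values. This is only an illustration of the general mechanism, not the paper's actual code; the function name, weight matrices, and toy shapes are all made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(decoder_states, h_fuse, W_q, W_k, W_v):
    """Single-head cross-attention sketch.

    Queries come from the decoder's hidden states; keys and values
    come from the fused multimodal representation, exactly as in a
    standard encoder-decoder transformer where H_Fuse plays the role
    of the encoder output.
    """
    Q = decoder_states @ W_q          # (T_dec, d)
    K = h_fuse @ W_k                  # (T_fuse, d)
    V = h_fuse @ W_v                  # (T_fuse, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (T_dec, T_fuse)
    return softmax(scores) @ V        # (T_dec, d)

# Toy example: 4 decoder positions attending over 6 fused positions.
rng = np.random.default_rng(0)
d = 8
dec = rng.normal(size=(4, d))
h_fuse = rng.normal(size=(6, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = cross_attention(dec, h_fuse, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

If the model does use this standard layout, the decoder output at each step is a mixture of the fused features weighted by the attention scores, so encoder and decoder gradients flow through the same loss and the two parts are naturally trained jointly end to end.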