12. Supplementary Notes (Asides)
• What does the Feed Forward layer actually do?
– It functions as a key-value neural memory mechanism (see the code sketch after this list)
– “Transformer Feed-Forward Layers Are Key-Value Memories”, 2020 (arXiv)
• Residual connections and the Feed Forward layer are essential
– “Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth”, 2021 (arXiv)
• Do the fine details matter? (e.g., the choice of activation function)
– “Do Transformer Modifications Transfer Across Implementations and Applications?”, 2021 (arXiv)
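As a rough illustration of the key-value memory view above, here is a minimal sketch (assuming PyTorch; names such as W_keys and W_values are illustrative, not from Geva et al.'s code). It shows that the standard FFN, W2·relu(W1·x), can be read as matching the input against stored key patterns and returning a weighted sum of the paired value vectors:

```python
import torch

# Minimal sketch of the key-value memory reading of a Transformer FFN.
# d_model, d_ff, W_keys, W_values are illustrative names.
d_model, d_ff = 512, 2048
W_keys = torch.randn(d_ff, d_model)    # row i: the i-th "key" pattern
W_values = torch.randn(d_ff, d_model)  # row i: the paired "value" vector

def ffn_as_memory(x: torch.Tensor) -> torch.Tensor:
    # 1) Match the input against every key -> one score per memory slot.
    scores = x @ W_keys.T          # (..., d_ff)
    # 2) The activation gates which memory slots fire.
    coeffs = torch.relu(scores)
    # 3) Return the coefficient-weighted sum of value vectors. This is
    #    exactly the standard FFN, W2 @ relu(W1 @ x), biases omitted.
    return coeffs @ W_values       # (..., d_model)

x = torch.randn(3, d_model)            # a batch of 3 token vectors
print(ffn_as_memory(x).shape)          # torch.Size([3, 512])
```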
18. Convolution and Local Attention
Convolution: applies weights that do not depend on the observed values
Local Attention: applies weights that depend on the observed values
(see the code sketch below)
※ For the theoretical relationship, see “On the Relationship between Self-Attention and Convolutional Layers”, etc.
(with relative positional encodings, Self-Attention can approximate any convolution)
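A minimal sketch of the contrast above (assuming PyTorch on a 1-D token sequence; all names and sizes are illustrative): the convolution applies the same learned kernel identically at every position, while local attention recomputes its mixing weights from the tokens themselves.

```python
import torch
import torch.nn.functional as F

d, window = 64, 3                      # channel dim, neighborhood width
x = torch.randn(10, d)                 # 10 tokens

# Convolution: one learned kernel, applied identically at every position,
# regardless of what the token values are.
kernel = torch.randn(d, d, window)     # (out_ch, in_ch, width)
conv_out = F.conv1d(x.T.unsqueeze(0), kernel, padding=window // 2)[0].T

# Local attention (single head, Q = K = V = x for brevity): the mixing
# weights are recomputed from the tokens themselves at every position.
pad = window // 2
xp = F.pad(x, (0, 0, pad, pad))        # zero-pad along the time axis
neigh = xp.unfold(0, window, 1)        # (10, d, window) neighborhoods
neigh = neigh.permute(0, 2, 1)         # (10, window, d)
scores = (neigh @ x.unsqueeze(-1)).squeeze(-1) / d ** 0.5  # (10, window)
attn = scores.softmax(dim=-1)          # input-dependent weights
attn_out = (attn.unsqueeze(1) @ neigh).squeeze(1)          # (10, d)
```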
50. Main references: Self-Attention and Transformers in general
• “Attention is All You Need”, NeurIPS2017
• “Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth”, 2021 (arXiv)
• “Do Transformer Modifications Transfer Across Implementations and Applications?”, 2021 (arXiv)
• “Transformer Architecture: The Positional Encoding”, blog post
• “Visual Guide to Transformer Neural Networks - (Episode 1) Position Embeddings”, YouTube
51. Main references: Self-Attention for images
Pure-Attention approaches
(SASA) “Stand-Alone Self-Attention in Vision Models”, NeurIPS2019
(SANs) “Exploring Self-attention for Image Recognition”, CVPR2020
(axial attention) “Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation”, ECCV2020
(iGPT) “Generative Pretraining From Pixels”, ICML2020
(ViT) “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale”, ICLR2021
(DeiT) “Training data-efficient image transformers & distillation through attention”, 2021 (arXiv)
(T2T) “Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet”, 2021 (arXiv)
(Survey) “A Survey on Visual Transformer”, 2021 (arXiv)
Others
(GANsformer) “Generative Adversarial Transformers”, 2021 (arXiv)
(Bottleneck Transformer) “Bottleneck Transformers for Visual Recognition”, 2021 (arXiv)
(GLOM) “How to represent part-whole hierarchies in a neural network”, 2021 (arXiv)