DEEP LEARNING JP
[DL Papers]
"Spectral Norm Regularization for Improving the Generalizability of Deep Learning" and "Spectral Normalization for Generative Adversarial Networks"
Reiji Hatsugai
http://deeplearning.jp/
Bibliographic information
• Spectral Norm Regularization
  – arXiv, May 31 (NIPS 2017?)
  – Yoshida (NII) and Miyato (PFN)
• Spectral Normalization for GANs
  – ICML 2017 Workshop on Implicit Models (https://drive.google.com/file/d/0B8HZ50DPgR3eSVV6YlF3XzQxSjQ/view?usp=sharing)
  – Miyato (PFN) et al.
2
Motivation
• What is a neural network?
  – A network of artificial neurons
  – Modeled on the brain
  – Recognition accuracy surpassing humans
  – Outperforms humans at Atari and Go
• In the end, though, it is just matrix operations
  – We want to build learning algorithms that take the properties of those matrices into account
  – Here we focus on the spectral norm, the norm that arises naturally when a matrix is viewed as an operator
  – It looks promising for improving generalization, robustness, and training stability
3
Review of linear algebra
• Before the papers, we review a few pieces of linear algebra
  – Singular value decomposition
  – Spectral norm
4
Singular value decomposition
• What is the singular value decomposition (SVD)?
  – A matrix factorization
  – The target matrix W is decomposed into the product of orthogonal matrices U, V and a diagonal matrix Σ
    • The column (row) vectors of U and V form orthonormal bases
  – The entries of the diagonal matrix are called singular values
5
$U^T U = I, \quad V V^T = I, \quad \Sigma = \mathrm{diag}(\sigma_1, \dots, \sigma_n)$
$W = U \Sigma V^T$
Singular values: $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_n$, where n is the rank of W
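As a concrete illustration (my own sketch, not from the slides), the decomposition and the properties listed on the next slide can be checked with NumPy; `np.linalg.svd` returns U, the singular values, and V^T directly:

```python
import numpy as np

# A small random matrix W, decomposed as W = U diag(sigma) V^T.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))

U, sigma, Vt = np.linalg.svd(W, full_matrices=False)

# Singular values come back sorted: sigma_1 >= sigma_2 >= ... >= sigma_n.
print(sigma)

# U and V have orthonormal columns, and the product reconstructs W.
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(3))
assert np.allclose(U @ np.diag(sigma) @ Vt, W)

# The singular values are the square roots of the eigenvalues of W^T W.
assert np.allclose(np.sort(sigma ** 2), np.sort(np.linalg.eigvalsh(W.T @ W)))
```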
Properties of the SVD
• W can be decomposed into a sum of rank-1 matrices
• The singular values equal the square roots of the eigenvalues of $W^T W$
• When W is a square matrix, the SVD is equivalent to the eigendecomposition
6
Decomposition into a sum of rank-1 matrices
• Let $u_1, u_2, \dots, u_n$ be the column vectors of U
• and $v_1, v_2, \dots, v_n$ be the column vectors of V
7
$W = U \Sigma V^T = \begin{pmatrix} u_1 & \cdots & u_n \end{pmatrix} \begin{pmatrix} \sigma_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_n \end{pmatrix} \begin{pmatrix} v_1^T \\ \vdots \\ v_n^T \end{pmatrix} = \sum_{i=1}^{n} \sigma_i u_i v_i^T$   (each term $u_i v_i^T$ has rank 1)
$W^T u_i = \sigma_i v_i, \quad W v_i = \sigma_i u_i$   (an eigenvalue-like property)
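A minimal numerical check of the two facts above (again my own sketch): W equals the sum of the rank-1 terms $\sigma_i u_i v_i^T$, and each pair $(u_i, v_i)$ satisfies $W v_i = \sigma_i u_i$ and $W^T u_i = \sigma_i v_i$. Note that row i of the returned V^T is $v_i^T$.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 4))
U, sigma, Vt = np.linalg.svd(W, full_matrices=False)

# W as a sum of rank-1 matrices sigma_i * u_i * v_i^T.
W_sum = sum(sigma[i] * np.outer(U[:, i], Vt[i]) for i in range(len(sigma)))
assert np.allclose(W_sum, W)

# "Eigenvalue-like" property: W v_i = sigma_i u_i and W^T u_i = sigma_i v_i.
for i in range(len(sigma)):
    assert np.allclose(W @ Vt[i], sigma[i] * U[:, i])
    assert np.allclose(W.T @ U[:, i], sigma[i] * Vt[i])
```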
Geometric meaning of the SVD
• With $W \in \mathbb{R}^{n \times m}$, view W as a map from the space spanned by the column vectors of V to the space spanned by the column vectors of U:
• each $v_i$ is mapped to the corresponding $u_i$, and its length is scaled by $\sigma_i$
8
[Figure: the basis vectors $v_1, v_2, v_3$ are mapped by W to $u_1, u_2, u_3$, scaled by $\sigma_1, \sigma_2, \sigma_3$]
Spectral norm
• The supremum, over vectors in the input space, of the ratio between the norm of a vector's image under the matrix and the norm of the vector itself
• It equals the largest singular value of W!
9
$x \mapsto Wx, \qquad \sup_{x} \frac{\|Wx\|}{\|x\|} = \sup_{\|x\| = 1} \|Wx\|$
$\|Wx\|^2 = x^T W^T W x = x^T V \Sigma U^T U \Sigma V^T x = x^T V \Sigma^2 V^T x$
Writing $x = \sum_i \alpha_i v_i$,
$\|Wx\|^2 = \sum_i \alpha_i^2 \sigma_i^2$
Since each $\alpha_i^2 \ge 0$ and $\sum_i \alpha_i^2 = 1$, the maximum value $\sigma_1$ of $\|Wx\|$ is attained at $\alpha_1 = 1$ with all other $\alpha_i = 0$.
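To make the claim concrete, a short sketch (mine, using the Euclidean norm) comparing the largest singular value with the empirical supremum of $\|Wx\|/\|x\|$ over random directions and with NumPy's operator 2-norm:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((6, 4))

# Largest singular value sigma_1.
sigma_1 = np.linalg.svd(W, compute_uv=False)[0]

# Empirical sup of ||Wx|| / ||x|| over many random directions: never exceeds
# sigma_1 and gets close to it.
xs = rng.standard_normal((4, 100_000))
ratios = np.linalg.norm(W @ xs, axis=0) / np.linalg.norm(xs, axis=0)
print(sigma_1, ratios.max())  # ratios.max() <= sigma_1, approximately equal

# NumPy's operator 2-norm is exactly this spectral norm.
assert np.isclose(np.linalg.norm(W, 2), sigma_1)
```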
Spectral Norm Regularization
• Now we finally get to the paper
• Paper summary
  – By adding a regularizer on the spectral norm of each layer, the authors train a NN that is robust to perturbations of the input data, which improves generalization performance
10
Motivation of the paper
• One hypothesis about generalization
  – If the model's output is sensitive to small noise (perturbations) of the input data, the learner's performance degrades
• So we want to evaluate how much the output changes for a small change in the input
• This is the operator norm (spectral norm) of the whole NN viewed as a single operator
11
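The quantity of interest can be probed directly; a toy sketch (the network and all names here are my own, not the paper's) that measures how much a small ReLU network's output moves for small random input perturbations:

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda z: np.maximum(z, 0.0)

# A toy 2-layer network f(x) = W2 relu(W1 x).
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((10, 64))
f = lambda x: W2 @ relu(W1 @ x)

# Ratio ||f(x + eps) - f(x)|| / ||eps|| for small random perturbations eps:
# the "sensitivity" of the output to input noise around x.
x = rng.standard_normal(32)
eps = 1e-3 * rng.standard_normal((32, 1000))
num = np.linalg.norm(f(x[:, None] + eps) - f(x)[:, None], axis=0)
den = np.linalg.norm(eps, axis=0)
print((num / den).max())
```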
Main idea
• A multi-layer NN is a cascade of matrix operations and nonlinear functions
• Evaluate the output for an input with small noise added
• If the noise is small, the elementwise nonlinear transformation in each layer can be treated as a linear map, so the whole network can be written as a simple affine transformation
• The (product of the per-layer) spectral norms gives an upper bound!
12
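A sketch of the bound (my own, under the slide's implicit assumption that the elementwise nonlinearity is 1-Lipschitz, e.g. ReLU): the measured perturbation ratio never exceeds the product of the per-layer spectral norms.

```python
import numpy as np

rng = np.random.default_rng(4)
relu = lambda z: np.maximum(z, 0.0)

W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((10, 64))
f = lambda x: W2 @ relu(W1 @ x)

# Product of per-layer spectral norms: upper bound on ||f(x+eps)-f(x)|| / ||eps||.
bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

x = rng.standard_normal(32)
eps = 1e-3 * rng.standard_normal((32, 2000))
ratio = np.linalg.norm(f(x[:, None] + eps) - f(x)[:, None], axis=0) / np.linalg.norm(eps, axis=0)
print(ratio.max(), "<=", bound)
assert ratio.max() <= bound + 1e-9
```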
Loss function and gradient computation
• The loss is the usual loss function plus a spectral norm regularization term
• Training uses gradient descent, so we need the gradient of this term
13
$\dfrac{\partial\, \sigma(W)^2}{\partial W} = 2\sigma(W)\,\dfrac{\partial\, \sigma(W)}{\partial W} = 2\sigma(W)\, u_1 v_1^T$
(I am not entirely sure about this part; what follows is my own speculation.)
From the SVD, the gradient direction of the largest singular value is $u_1 v_1^T$:
$W = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \dots + \sigma_n u_n v_n^T$
so adding $-\lambda u_1 v_1^T$ to W decreases $\sigma_1$ by that amount.
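A sketch of how this gradient could be used in an update (my own, not the authors' code; I assume a regularizer of the form $(\lambda/2)\,\sigma(W)^2$, so by the formula above its gradient is $\lambda\,\sigma(W)\,u_1 v_1^T$):

```python
import numpy as np

def spectral_regularizer_grad(W):
    """Gradient of sigma(W)^2 / 2 with respect to W, i.e. sigma_1 * u_1 v_1^T."""
    U, sigma, Vt = np.linalg.svd(W, full_matrices=False)
    return sigma[0] * np.outer(U[:, 0], Vt[0])

# Hypothetical gradient-descent step (task-loss gradient omitted):
#   W <- W - lr * (dL/dW + lam * spectral_regularizer_grad(W))
rng = np.random.default_rng(5)
W = rng.standard_normal((8, 6))
lam, lr = 0.1, 0.5

before = np.linalg.norm(W, 2)
W = W - lr * lam * spectral_regularizer_grad(W)
after = np.linalg.norm(W, 2)
print(before, "->", after)  # the largest singular value shrinks
```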
Computing the spectral norm
• Running a full SVD every time the spectral norm is needed is computationally expensive
• Use the power iteration method to compute it quickly (in practice, u is kept across iterations and only one power iteration is run per step)
14
$x_0 = \sum_i \alpha_i u_i$
$y_0 = W^T x_0 = \sum_i \alpha_i \sigma_i v_i, \qquad x_1 = W y_0 = \sum_i \alpha_i \sigma_i^2 u_i$
$\vdots$
$x_n = \sum_i \alpha_i \sigma_i^{2n} u_i, \qquad x_{n+1} = \sum_i \alpha_i \sigma_i^{2n+2} u_i$
$\dfrac{x_n}{\sigma_1^{2n}} = \sum_i \alpha_i \left(\dfrac{\sigma_i}{\sigma_1}\right)^{2n} u_i \approx \alpha_1 u_1, \qquad \dfrac{x_{n+1}}{\sigma_1^{2n+2}} = \sum_i \alpha_i \left(\dfrac{\sigma_i}{\sigma_1}\right)^{2n+2} u_i \approx \alpha_1 u_1$
$\dfrac{\|x_{n+1}\|}{\|x_n\|} \approx \sigma_1^2, \qquad u_1 \approx \dfrac{x_n}{\|x_n\|}, \qquad v_1 \approx \dfrac{y_n}{\|y_n\|}$
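A minimal power-iteration sketch along these lines (my own implementation, not the authors' code); u is reused across calls, so in training a single $W^T$/$W$ round trip per step is enough for the estimate to track $\sigma_1$:

```python
import numpy as np

def power_iteration_step(W, u):
    """One round trip of power iteration; returns (sigma_estimate, new_u, new_v)."""
    v = W.T @ u
    v /= np.linalg.norm(v)
    u = W @ v
    sigma = np.linalg.norm(u)
    u /= sigma
    return sigma, u, v

rng = np.random.default_rng(6)
W = rng.standard_normal((20, 30))
u = rng.standard_normal(20)
u /= np.linalg.norm(u)

# Reuse u across steps, as the slide suggests doing across training iterations.
for _ in range(5):
    sigma, u, v = power_iteration_step(W, u)

print(sigma, np.linalg.norm(W, 2))  # estimate vs. true largest singular value
```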
Results 1
15
Results 2
16
Conclusions
• We confirmed that regularizing the spectral norm improves generalization performance
• In MAP estimation, assuming a Gaussian prior over the parameters yields L2 regularization. Is there a similar structure behind spectral norm regularization, and if so, what would the corresponding prior be?
17
Spectral Normalization for GANs
• Paper summary
  – Recent work increasingly suggests that Lipschitz continuity is important for stabilizing GAN training. Spectral normalization controls the Lipschitz constant of the discriminator and improves training stability.
18
What is Lipschitz continuity?
• Transform any two points by an operator f and take the ratio of the distance after the transformation to the distance before it; if K is the smallest constant bounding this ratio, then f is K-Lipschitz
• Why the Lipschitz property matters for GANs is omitted here; see WGAN, LSGAN, b-GAN, and related work
19
f is K-Lipschitz $\iff \forall x, x': \ \dfrac{\|f(x) - f(x')\|}{\|x - x'\|} \le K$
Lipschitz continuity and the spectral norm
• If the Lipschitz constant of the nonlinearity is 1, the Lipschitz constant of the whole NN is bounded by the product of the per-layer Lipschitz constants
• The product of the per-layer Lipschitz constants can be expressed as the product of the per-layer spectral norms
20
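A quick numerical check of this product bound (a sketch assuming ReLU, which is 1-Lipschitz): the Lipschitz ratio from the definition on the previous slide, measured over random pairs (x, x'), stays below the product of the layer spectral norms.

```python
import numpy as np

rng = np.random.default_rng(7)
relu = lambda z: np.maximum(z, 0.0)

# Discriminator-like stack of three linear layers with ReLU in between.
Ws = [rng.standard_normal((64, 32)), rng.standard_normal((64, 64)), rng.standard_normal((1, 64))]

def f(x):
    h = x
    for W in Ws[:-1]:
        h = relu(W @ h)
    return Ws[-1] @ h

# Product of per-layer spectral norms bounds the Lipschitz constant of f.
lipschitz_bound = np.prod([np.linalg.norm(W, 2) for W in Ws])

# Lipschitz ratio from the definition, over random pairs (x, x').
x, xp = rng.standard_normal((32, 5000)), rng.standard_normal((32, 5000))
ratio = np.linalg.norm(f(x) - f(xp), axis=0) / np.linalg.norm(x - xp, axis=0)
print(ratio.max(), "<=", lipschitz_bound)
```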
Spectral Normalization
• Feed-forward with weights divided by their spectral norm
• This bounds the Lipschitz constant of the whole NN by 1
21
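A sketch of the normalized forward pass (mine, with exact spectral norms for clarity; in practice the one-step power-iteration estimate above would be used, and the activations are assumed 1-Lipschitz): dividing every weight by its spectral norm makes each layer's spectral norm 1, so by the previous slide the whole network is at most 1-Lipschitz.

```python
import numpy as np

rng = np.random.default_rng(8)
relu = lambda z: np.maximum(z, 0.0)

Ws = [rng.standard_normal((64, 32)), rng.standard_normal((64, 64)), rng.standard_normal((1, 64))]

def forward_spectral_norm(x, Ws):
    """Feed-forward using W_bar = W / sigma(W) in every layer."""
    h = x
    for i, W in enumerate(Ws):
        W_bar = W / np.linalg.norm(W, 2)  # spectral normalization
        h = W_bar @ h
        if i < len(Ws) - 1:
            h = relu(h)
    return h

# With every layer's spectral norm equal to 1, the whole network is 1-Lipschitz:
x, xp = rng.standard_normal((32, 5000)), rng.standard_normal((32, 5000))
ratio = (np.linalg.norm(forward_spectral_norm(x, Ws) - forward_spectral_norm(xp, Ws), axis=0)
         / np.linalg.norm(x - xp, axis=0))
print(ratio.max())  # <= 1
```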
Results
22
Inception Score
Robustness
23
Conclusions
• Spectral normalization improved training stability
• The Inception Score was better than or comparable to weight normalization
24
Impressions
• The spectral norm seems likely to be useful in domains beyond images as well
• It feels closely related to currently hot topics such as training stability, adversarial examples, and generalization
• I wonder whether it could be used for reinforcement learning
25
26