DEEP LEARNING JP
[DL Papers] Manipulation-Independent Representations (MIR) for
Successful Cross Embodiment Visual Imitation
XIN ZHANG, Matsuo Lab
http://deeplearning.jp/
Table of Contents
1. Bibliographic Information
2. Introduction
3. Related Work
4. Manipulator-Independent Representations for Visual Imitation
5. Experiments and Evaluation
6. Discussion
Bibliographic Information
● Title:
○ Manipulator-Independent Representations for Visual Imitation
● Authors
○ Yuxiang Zhou, Yusuf Aytar, Konstantinos Bousmalis
● Affiliation: DeepMind
● Submitted: 2021/3/18 (arXiv)
● Summary
○ The goal is to learn representations that work well for cross-embodiment visual imitation.
○ Such representations should (1) be independent of the domain/environment, (2) incorporate a notion of space and time, and (3) be actionable (mappable to actions).
○ The paper proposes Manipulator-Independent Representations (MIR) that satisfy these conditions.
Imitation Learning: an easy way to teach a new skill to a robot
[Table: comparison of demonstration interfaces and their trade-offs (simple; remote and easy, but an interface must be developed; easy, but joints must be mapped from a different kind of robot or from a human), from "Recent Advances in Robot Learning from Demonstration" (Ravichandar et al., 2020)]
Introduction: Imitation Learning for Manipulation
- Viewed from another angle, imitation learning can be broadly divided into two types:
  1. Learn a generalized policy from a sufficient amount of demonstration data
  2. Imitate a specific motion from a previously unseen demonstration
- Type 1: learn a generalized policy
  - Behavioral Cloning
  - Inverse Reinforcement Learning
- Type 2: specify the task with a trajectory and imitate it
  - One-shot Imitation Learning
  - Trajectory tracking
Related Work
1. Manipulator-Independent Representations (learning the MIR representation)
  a. Cross-Domain Alignment
    i. so that the representation can cope with the domain gap
  b. Temporal Smoothness
    i. so that spatio-temporal structure is incorporated properly
  c. Actionable Representations
    i. so that the representation is actionable and usable for downstream policy learning
2. Cross-Embodiment Imitation via RL (imitation learning using the MIR representation)
Proposed Method: MIR
- To handle the domain gap, domain-randomized simulated environments are used.
- Done naively, the representation space and the policy become tightly coupled:
  - the manipulator's body, its action space, and the task at hand all leak into the representation.
- MIR instead aims to learn a representation space usable with any manipulator:
  - a representation space that contains no information about the manipulator or the specific task.
- In other words, we want representations with two properties:
  1. unaffected by the domain
  2. able to capture an understanding of the environment
1.a: Cross-Domain Alignment
- Rethinking what imitation means, there are two possible views:
  1. mimic the movements of the manipulator (trajectory-focused)
  2. replicate the effect of the manipulator on the environment (outcome-focused)
- Case study: while carrying an object to the goal, the robot drops it. What should it do?
- Claim: what matters is reproducing the change toward the goal state, not imitating the trajectory itself.
1.a: Cross-Domain Alignment
(Top): to encode information about the manipulator
(Bottom): to capture changes in the environment
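To make this concrete, below is a minimal sketch (my own illustration, not the paper's exact loss or architecture) of cross-domain alignment: a placeholder CNN encoder is trained so that time-aligned frames of the same scene, manipulated by different embodiments in domain-randomized simulation, map to nearby embeddings, pushing manipulator appearance out of the representation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Placeholder CNN encoder; the paper's actual architecture is not reproduced here.
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):  # x: (B, 3, H, W) image batch
        return self.net(x)

def cross_domain_alignment_loss(encoder, obs_a, obs_b):
    # obs_a / obs_b: (B, 3, H, W) time-aligned frames of the same scene,
    # observed in two different domains (e.g. two different manipulators).
    z_a = encoder(obs_a)
    z_b = encoder(obs_b)
    return ((z_a - z_b) ** 2).sum(dim=1).mean()

# Usage (hypothetical tensors): loss = cross_domain_alignment_loss(Encoder(), frames_robot_a, frames_robot_b)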
1.b: Temporal Smoothness
TCN: enforces temporal smoothness within a single domain
TSCN: generalizes TCN to different domains (a minimal loss sketch follows below)
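Below is a minimal sketch of a TCN-style time-contrastive triplet loss (an assumed, simplified form): an anchor frame is pulled toward a temporally close frame and pushed away from a temporally distant one. TSCN is described as extending this idea across domains, e.g. by drawing positives from a paired trajectory in another domain; that variant is not shown here.

import torch
import torch.nn.functional as F

def time_contrastive_loss(encoder, frames, pos_window=2, margin=1.0):
    # frames: (T, 3, H, W), observations from a single trajectory (T assumed > pos_window).
    T = frames.shape[0]
    anchor_t = torch.randint(0, T - pos_window, (1,)).item()
    pos_t = anchor_t + pos_window          # temporally close frame -> positive
    neg_t = (anchor_t + T // 2) % T        # temporally distant frame -> negative
    z = encoder(frames[[anchor_t, pos_t, neg_t]])
    anchor, positive, negative = z[0:1], z[1:2], z[2:3]
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)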
1.c: Actionable Representations
Goal-Conditioned Policy (GCP): within a single domain
Cross-Domain Goal-Conditioned Policies (CD-GCP): extended to different domains (sketch below)
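As a rough illustration (an assumed architecture, not the paper's), a goal-conditioned policy on top of MIR takes the embedding of the current observation together with the embedding of a goal image and outputs an action; in the cross-domain variant (CD-GCP) the goal image may come from a different domain than the observation, which is exactly why the embeddings must be manipulator-independent.

import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    # Hypothetical MLP policy head operating on MIR embeddings.
    def __init__(self, emb_dim=128, action_dim=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # actions squashed to [-1, 1]
        )

    def forward(self, z_obs, z_goal):
        # z_obs, z_goal: (B, emb_dim) embeddings of the current observation
        # and of a goal image (possibly from another domain for CD-GCP).
        return self.mlp(torch.cat([z_obs, z_goal], dim=-1))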
2. Cross-Embodiment Imitation via RL
The actual imitation process:
- A demonstration is given and embedded with the trained MIR encoder.
- A goal is sampled N steps ahead in the demonstration.
- A reward is computed for reaching that goal from the current observation o (see the sketch below).
- Maximum a Posteriori Policy Optimization (MPO) is used as the RL algorithm.
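The slide does not reproduce the exact reward; below is a rough sketch under the assumption that the reward measures closeness, in MIR embedding space, between the current observation and the demonstration frame N steps ahead. The resulting reward can then be fed to an off-policy RL algorithm such as MPO.

import torch

N_STEPS_AHEAD = 10  # hypothetical horizon for sampling the next goal

def embed_demonstration(encoder, demo_frames):
    # demo_frames: (T, 3, H, W) demonstration observations; the MIR encoder is frozen.
    with torch.no_grad():
        return encoder(demo_frames)  # (T, emb_dim)

def goal_reaching_reward(encoder, obs, demo_emb, progress_idx):
    # obs: (3, H, W) current observation from the imitator's domain.
    # demo_emb: precomputed demonstration embeddings.
    # progress_idx: index of the demonstration frame currently being tracked.
    goal_idx = min(progress_idx + N_STEPS_AHEAD, demo_emb.shape[0] - 1)
    with torch.no_grad():
        z_obs = encoder(obs.unsqueeze(0))[0]
    distance = torch.norm(z_obs - demo_emb[goal_idx])
    return -distance.item()  # smaller distance to the goal frame -> higher reward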
Experiments and Data
- 7,194 demonstrations were collected, varying only the domain.
- Evaluation uses two tasks:
  - taking down a stacked object (unstacking)
  - stacking an object on top of another object
Evaluation: Comparison of Imitation Performance
Methods that do not use trajectories paired across domains (top two):
- Naive Goal-Conditioned Policies (GCP)
- Temporal Distance Classification (TDC)
Methods that use trajectories paired across domains (bottom three):
- Time-Contrastive Networks (TCN)
- Cross-Modal Distance Classification (CMC)
- MIR
Evaluation: Ablation Study
MIR is composed of CD-GCP and TSCN.

Conclusion
- Proposed MIR for cross-embodiment visual imitation.
- It generalized to a manipulator with an unseen morphology (the Jaco hand).
- It also succeeded to some extent with human-hand demonstrations.
- Three elements make MIR robust to domain shift:
  - focus on the change of object configurations (environment setup)
  - temporal smoothness (TSCN)
  - actionability (CD-GCPs)
- Presenter's comments
  - A study that focuses on changes of state in manipulation.
  - The experimental setup is good and validates the idea, but follow-up work seems likely.
  - The approach might apply not only to manipulation but also to generalization of behavior more broadly.
Appendix