4. Oral Session 4A
O-4A-01: Group Normalization
Yuxin Wu, Facebook; Kaiming He*, Facebook Inc., USA
O-4A-02: Deep Expander Networks: Efficient Deep Networks from Graph Theory
Ameya Prabhu*, IIIT Hyderabad; Girish Varma, IIIT Hyderabad; Anoop Namboodiri, IIIT Hyderabad
O-4A-03: Towards Realistic Predictors
Pei Wang*, UC San Diego; Nuno Vasconcelos, UC San Diego
O-4A-04: Learning SO(3) Equivariant Representations with Spherical CNNs
Carlos Esteves*, University of Pennsylvania; Kostas Daniilidis, University of Pennsylvania; Ameesh Makadia, Google Research; Christine Allen-Blanchette, University of Pennsylvania
23. Oral Session 4B
O-4B-01: CornerNet: Detecting Objects as Paired Keypoints
Hei Law*, University of Michigan; Jia Deng, University of Michigan
O-4B-02: RelocNet: Continuous Metric Learning Relocalisation using Neural Nets
Vassileios Balntas*, University of Oxford; Victor Prisacariu, University of Oxford; Shuda Li, University of Oxford
O-4B-03: The Contextual Loss for Image Transformation with Non-Aligned Data
Roey Mechrez*, Technion; Itamar Talmi, Technion; Lihi Zelnik-Manor, Technion
O-4B-04: Acquisition of Localization Confidence for Accurate Object Detection
Borui Jiang*, Peking University; Ruixuan Luo, Peking University; Jiayuan Mao, Tsinghua University; Tete Xiao, Peking University; Yuning Jiang, Megvii (Face++) Inc.
O-4B-05: Deep Model-Based 6D Pose Refinement in RGB
Fabian Manhardt*, TU Munich; Wadim Kehl, Toyota Research Institute; Nassir Navab, Technische Universität München, Germany; Federico Tombari, Technical University of Munich, Germany
28. Deep Model-Based 6D Pose Refinement in RGB
• Deep-learning-based 6D pose estimation from RGB alone.
• Papers of this kind keep getting Orals. Is this just ECCV's taste???
• Features:
• RGB-only, ambiguity-free (works to some extent even on unseen objects)
• Precise
• There was one more point, but it didn't make it into my photo (oops).
Oral
29. Oral Session 4C
O-4C-01: DeepTAM: Deep Tracking and Mapping
Huizhong Zhou*, University of Freiburg; Benjamin Ummenhofer, University of Freiburg; Thomas Brox, University of Freiburg
O-4C-02: ContextVP: Fully Context-Aware Video Prediction
Wonmin Byeon*, NVIDIA; Qin Wang, ETH Zurich; Rupesh Kumar Srivastava, NNAISENSE; Petros Koumoutsakos, ETH Zurich
O-4C-03: Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics
Matthias Kümmerer*, University of Tübingen; Thomas Wallis, University of Tübingen; Matthias Bethge, University of Tübingen
O-4C-04: Museum Exhibit Identification Challenge for the Supervised Domain Adaptation
Piotr Koniusz*, Data61/CSIRO, ANU; Yusuf Tas, Data61; Hongguang Zhang, Australian National University; Mehrtash Harandi, Monash University; Fatih Porikli, ANU; Rui Zhang, University of Canberra
O-4C-05: Multi-Attention Multi-Class Constraint for Fine-grained Image Recognition
Ming Sun, Baidu; Yuchen Yuan, Baidu Inc.; Feng Zhou*, Baidu Research; Errui Ding, Baidu Inc.
30. DeepTAM: Deep Tracking and Mapping
• Camera localization.
• It does SfM, but I think it also estimates depth to some extent from a single frame? I wasn't listening very carefully.
• My waning interest in 3D localization topics is probably obvious by now...
Oral
37. Domain transfer through deep activation matching
• When doing domain transfer, the idea seems to be to use not only an adversarial loss on the final-layer output, but also losses that align the outputs of each intermediate layer.
• How does this differ from distillation?
Poster
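The per-layer alignment idea above can be sketched as a simple feature-matching penalty: compare the intermediate activations the two domains produce at corresponding layers and sum the differences. This is a minimal illustration, not the paper's exact formulation; the helper name, the mean-squared distance, and the optional layer weights are all my own assumptions.

```python
import numpy as np

def activation_matching_loss(acts_a, acts_b, layer_weights=None):
    """Hypothetical helper: weighted sum of mean-squared differences
    between corresponding intermediate activations from two domains.
    The adversarial loss on the final output would be added separately."""
    if layer_weights is None:
        layer_weights = [1.0] * len(acts_a)
    total = 0.0
    for w, a, b in zip(layer_weights, acts_a, acts_b):
        total += w * float(np.mean((a - b) ** 2))
    return total

# Toy example: three "layers" of activations from source and target nets.
rng = np.random.default_rng(0)
acts_src = [rng.standard_normal((4, 8)) for _ in range(3)]
acts_tgt = [a + 0.1 for a in acts_src]  # almost-aligned activations
loss = activation_matching_loss(acts_src, acts_tgt)
```

Unlike distillation, where a fixed teacher supervises a student, here both feature extractors are typically trained so the two domains' activations converge toward each other.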
38. Visual Coreference Resolution in Visual Dialog using Neural Module Networks
• Resolves references in a dialogue: which earlier phrase does "it" or "the boat" correspond to?
• The same object can be referred to in many different ways:
• Dragon-head boat
• Dragon Head Boat
• The boat
• it
• The dragon
• I was so focused on the problem setting that I missed how they actually solve it (oops).
Poster
40. Look Before You Leap: Bridging Model-Free and Model-Based Reinforcement Learning for Planned-Ahead Vision-and-Language Navigation
• I took a photo because I remembered Prof. Ushiku introducing a Vision-and-Language Navigation paper at a previous Kanto CV study group.
Poster