8. Motion-driven Concatenative Synthesis of
Cloth Sounds
“We present a practical data-driven method for automatically
synthesizing plausible soundtracks for physics-based cloth
animations running at graphics rates. Given a cloth animation,
we analyze the deformations and use motion events to drive
crumpling and friction sound models estimated from cloth
measurements.” (from Abstract)
13. Wearable Telepresence System Based on Multimodal
Communication for Effective Teleoperation with a Humanoid
What is it?
What's novel compared to prior work?
What is the key technique or method?
How was it validated?
Any open discussion?
What to read next?
A new controller for giving commands to a telepresence robot.
Focuses on ease of movement and wearability, which such research tends to neglect.
Everything is housed in the device the operator wears, enabling intuitive operation.
Performed dynamics calculations and verified the actual movements of the operator
and the robot, confirming the effectiveness of the motions.
How can the precision and other performance aspects that unavoidably had to be
compromised be made up for?
Interactive multi-modal robot programming
The system side also deserves attention.
Yong-Ho SEO, Hum-Young PARK, Taewoo HAN, and Hyun Seung YANG
14. 動画URL:https://www.youtube.com/watch?v=-oN96cucBr4
論文URL: http://chrisharrison.net/projects/tapsense/tapsense.pdf
TapSense: Enhancing Finger Interaction on Touch Surfaces
From the sound of impact with the screen, it can detect which of four
finger parts (nail, pad, knuckle, or tip) touched, with 95% accuracy.
Chris Harrison, Julia Schwarz, Scott E. Hudson
Human-Computer Interaction Institute and Heinz College Center for the Future of Work
Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA 15213
What is it?
Input using parts of the finger, with no stylus or special attachments required;
relatively inexpensive.
What's novel compared to prior work?
Users do not need to attach any additional device.
What is the key technique or method?
Acoustic-based input: the acoustic features of the impact sound are classified
(see the sketch below).
How was it validated?
Technical validation across several applications: a frosted-glass tabletop,
and text entry and drawing on a small smartphone screen.
Any open discussion?
Combining a pen and a finger reaches 99% accuracy, while classifying the
four finger parts reaches 95%.
What to read next?
Scratch Input: Creating
Large, Inexpensive, Unpowered and Mobile Finger Input
Surfaces. In Proc.
among others
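A minimal sketch of the key technique above, not the authors' pipeline: classify which finger part struck the surface from spectral features of the impact sound. The feature set, sample rate, and SVM classifier are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC

PARTS = ["nail", "pad", "knuckle", "tip"]

def impact_features(audio, rate=44100):
    """Spectral features of one short tap recording."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1.0 / rate)
    centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)  # "brightness" of the tap
    rolloff = freqs[np.searchsorted(spectrum.cumsum(), 0.85 * spectrum.sum())]
    energy = float((audio ** 2).mean())
    return np.array([centroid, rolloff, energy])

# Train on labeled taps, then predict the finger part for a new tap:
clf = SVC(kernel="rbf")
# clf.fit(np.stack([impact_features(a) for a in taps]), labels)
# part = PARTS[int(clf.predict(impact_features(new_tap)[None, :])[0])]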
15. 論文URL: https://www.sonycsl.co.jp/person/rekimoto/papers/uist97holo.pdf
HoloWall: Designing a Finger, Hand, Body,
and Object Sensitive Wall
An infrared camera placed behind the wall detects fingers, hands, and bodies.
Abstract
This TechNote reports on our initial results of realizing a
computer-augmented wall called the HoloWall.
Using an infrared camera located behind the wall,
the system allows a user to interact with the computerized wall
using fingers, hands, the body, or even a physical object such
as a document folder.
Nobuyuki Matsushita
Department of Computer Science, Keio University
Jun Rekimoto
Sony Computer Science Laboratory Inc.
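A minimal sketch of HoloWall-style sensing, assuming objects near the wall appear as bright blobs in the behind-the-wall IR image; the threshold and frame format are illustrative, not from the paper.

import numpy as np
from scipy import ndimage

def detect_touches(ir_frame, threshold=200):
    """Centroids and areas of bright blobs in an 8-bit IR frame."""
    mask = ir_frame > threshold                  # close objects reflect more IR
    labels, n = ndimage.label(mask)              # connected-component analysis
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    areas = ndimage.sum(mask, labels, range(1, n + 1))
    # small blobs ~ fingertips; large blobs ~ hands, bodies, or objects
    return list(zip(centroids, areas))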
An armband that enables input by analyzing where a tap occurred
from vibrations propagating through the skin.
Skinput: Appropriating the Body
as an Input Surface
ABSTRACT
We present Skinput, a technology that appropriates the human body for acoustic transmission, allowing the
skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand
by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel
array of sensors worn as an armband. This approach provides an always available, naturally portable,
and on-body finger input system. We assess the capabilities, accuracy and limitations of our technique through a
two-part, twenty-participant user study. To further illustrate the utility of our approach, we conclude with
several proof-of-concept applications we developed.
Chris Harrison1,2, Desney Tan2, Dan Morris2
1 Human-Computer Interaction Institute
2 Microsoft Research
動画URL: https://www.youtube.com/watch?v=g3XPUdW9Ryg
論文URL: http://www.kevinli.net/courses/mobilehci_w2013/papers/skinput.pdf
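A minimal sketch of the armband idea, not Skinput's actual pipeline: derive cheap cues about which sensor in the worn array is nearest a tap from received energy and arrival order. Channel count and window length are illustrative.

import numpy as np

def crude_tap_cues(window):
    """window: (n_sensors, n_samples) vibration snapshot around one tap."""
    energy = (window ** 2).sum(axis=1)         # per-sensor received energy
    arrival = np.abs(window).argmax(axis=1)    # per-sensor peak time, in samples
    return {
        "strongest_sensor": int(energy.argmax()),  # strongest pickup ~ nearest sensor
        "earliest_sensor": int(arrival.argmin()),  # earliest peak ~ nearest sensor
    }

# In the paper, per-channel features along these lines feed a trained
# classifier that resolves taps to specific locations on the arm and hand.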
16. 動画URL: https://www.youtube.com/watch?v=2E8vsQB4pug
論文URL: http://www.chrisharrison.net/projects/scratchinput/Harrison_122.pdf
Scratch Input: Creating Large, Inexpensive,
Unpowered and Mobile Finger Input Surfaces
Classifies six kinds of input patterns, drawn at arbitrary
locations, by their sound.
ABSTRACT
We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced
when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint.
We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables,
turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small
that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be
appropriated as a gestural input surface. Several example applications were developed to demonstrate
possible interactions. We conclude with a study that shows users can perform six Scratch Input
gestures at about 90% accuracy with less than five minutes of training and on a wide variety of surfaces.
Chris Harrison, Scott E. Hudson
Human-Computer Interaction Institute
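A minimal sketch in the spirit of Scratch Input, not the paper's detector: count peaks in the smoothed amplitude envelope of the scratch sound, which separates gestures with different stroke counts. Sample rate, smoothing width, and thresholds are illustrative.

import numpy as np
from scipy.signal import find_peaks

def count_strokes(audio, rate=44100):
    env = np.abs(audio)
    k = int(0.01 * rate)                              # ~10 ms moving average
    env = np.convolve(env, np.ones(k) / k, mode="same")
    peaks, _ = find_peaks(env, height=0.3 * env.max(),
                          distance=int(0.05 * rate))  # strokes >= 50 ms apart
    return len(peaks)                                 # e.g. a double scratch -> 2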
Proposes input methods combining motion gestures with touch,
e.g., zooming by holding the thumb on the screen while tilting the device.
Sensor Synaesthesia: Touch in Motion,
and Motion in Touch
ABSTRACT
We explore techniques for hand-held devices that leverage the multimodal combination of touch and motion.
Hybrid touch + motion gestures exhibit interaction properties that
combine the strengths of multi-touch with those of motion sensing. This affords touch-enhanced motion
gestures, such as one-handed zooming by holding one's thumb on the screen while tilting a device. We also
consider the reverse perspective, that of motion-enhanced touch, which uses motion sensors to probe what
happens underneath the surface of touch. Touching the screen induces secondary accelerations and angular
velocities in the sensors. For example, our prototype uses motion sensors to distinguish gently swiping a
finger on the screen from drags with a hard onset to enable more expressive touch interactions.
Ken Hinckley1, Hyunyoung Song1,2
1Microsoft Research
2University of Maryland
動画URL: https://www.youtube.com/watch?v=Zuu7ZnyWrJA
論文URL: http://research.microsoft.com/en-us/um/people
/kenh/papers/touch-motion-camera-ready-final.pdf
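A minimal sketch of the motion-enhanced-touch example from the abstract, assuming a 100 Hz accelerometer trace in g and an illustrative jolt threshold; the paper's detector is more involved.

import numpy as np

def touch_onset_type(accel, touch_idx, rate=100, jolt_threshold=0.5):
    """accel: (n, 3) accelerometer samples; touch_idx: touch-down sample index."""
    w = int(0.05 * rate)                               # 50 ms around touch-down
    window = accel[max(0, touch_idx - w): touch_idx + w]
    mag = np.linalg.norm(window, axis=1)
    jolt = np.abs(np.diff(mag)).max()                  # largest sample-to-sample jump
    return "hard" if jolt > jolt_threshold else "gentle"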
17. 動画URL: https://vimeo.com/30574433
論文URL: http://www.olwal.com/projects/research/surfacefusion/olwal_surfacefusion_gi_2008.pdf
SurfaceFusion: Unobtrusive Tracking of Everyday
Objects in Tangible User Interfaces
Position detection and identification of tagged objects within a given area, using RFID and a camera.
ABSTRACT
Interactive surfaces and related tangible user interfaces often involve everyday objects that are
identified, tracked, and augmented with digital information. Traditional approaches for recognizing
these objects typically rely on complex pattern recognition techniques, or the addition of active
electronics or fiducials that alter the visual qualities of those objects, making them less practical for
real-world use. Radio Frequency Identification (RFID) technology provides an unobtrusive method of
sensing the presence of and identifying tagged nearby objects but has no inherent means of
determining the position of tagged objects. Computer vision, on the other hand, is an established
approach to track objects with a camera. While shapes and movement on an interactive surface can be
determined from classic image processing techniques, object recognition tends to be complex,
computationally expensive and sensitive to environmental conditions. We present a set of techniques in
which movement and shape information from the computer vision system is fused with RFID events
that identify what objects are in the image. By synchronizing these two complementary sensing
modalities, we can associate changes in the image with events in the RFID data, in order to recover
position, shape and identification of the objects on the surface, while avoiding complex computer vision
processes and exotic RFID solutions.
Alex Olwal
School of Computer Science and Communication, KTH
Andrew D. Wilson
Microsoft Research
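A minimal sketch of the fusion idea, assuming simple timestamped event lists (formats and the matching window are illustrative): pair each RFID "tag appeared" event with the vision blob that appeared nearest in time, so the blob's position inherits the tag's identity.

def fuse(rfid_events, blob_events, max_dt=0.5):
    """rfid_events: [(t, tag_id)]; blob_events: [(t, (x, y))] -> {tag_id: (x, y)}."""
    identified, unmatched = {}, list(blob_events)
    for t_tag, tag_id in rfid_events:
        best = min(unmatched, key=lambda b: abs(b[0] - t_tag), default=None)
        if best is not None and abs(best[0] - t_tag) <= max_dt:
            identified[tag_id] = best[1]   # identity from RFID, position from vision
            unmatched.remove(best)
    return identified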
18. Learning to be a Depth Camera
Sean Ryan Fanello, Cem Keskin, Shahram Izadi, Pushmeet Kohli,
David Kim, David Sweeney, Antonio Criminisi, Jamie Shotton,
Sing Bing Kang, and Tim Paek
24. 次に読むべきもの
Lanman, D. and Taubin, G.
Build your own 3D scanner: 3D photography for
beginners.
ACM SIGGRAPH 2009
25. Graffiti Fur: Turning Your Carpet into a Computer
Display
Yuta Sugiura, Koki Toda, Takayuki Hoshi, Youichi Kamiyama,
Takeo Igarashi and Masahiko Inami
59. Iterative Design of Seamless Collaboration Media
‘Smart Clothing’: Wearable Multimedia Computing and
‘Personal Imaging’ to Restore the Technological Balance
Between People and Their Environments
Hiroshi Ishii, Minoru Kobayashi, and Kazuho Arita
ClearBoard: lets partners share a workspace and interact while making
eye contact with each other in real time.
A personal, wearable multimedia computer equipped with a head-mounted
display, camera, sensors, and more.
Steve Mann
MIT Media Lab
60. Bricks: Laying the Foundations for Graspable User Interfaces / The Computer for the 21st Century
A more advanced GUI in which electronic, virtual objects
can be controlled directly through physical handles.
From now on, rather than humans entering the machine,
machines will adapt to the human environment.
As a result, we will come to use computers as freshly and
naturally as taking a walk in the woods.
George W. Fitzmaurice, Hiroshi Ishii, William Buxton / Mark Weiser
61. Living in Augmented Reality: Ubiquitous Media
and Reactive Environments.
Participate in meetings, share offices, and keep in touch
face to face through remote communication.
Designs to be incorporated into ubiquitous media.
William A.S. Buxton, Computer Systems Research Institute, University of Toronto &
Alias | Wavefront Inc., Toronto
78. From 3D to VR and farther to Telexistence
Surveys some 30 years of progress from 3D displays through VR to telexistence,
with an outlook toward 2020.
Telexistence is distinguished from the related notion of telepresence.
Introduces TELESAR, a telexistence master-slave system in which the
operator's motions drive the slave robot.
S. Tachi : From 3D to VR and farther to Telexistence
Proceedings of the 23rd International Conference on Artificial Reality and Telexistence (ICAT 2013).
[Figures: Fig.1 3D-VR, Fig.2 Telexistence, Fig.3 TELESAR]
79. Telexistence Cockpit
for Humanoid Robot Control
A cockpit-style master station for the telexistence master-slave system:
the operator receives sensory feedback through an HMD (head-mounted display)
and teleoperates the slave humanoid robot.
S. Tachi, K. Komoriya, K. Sawada, T. Nishiyama, T. Itoko, M. Kobayashi, and
K. Inoue: Telexistence Cockpit for Humanoid Robot Control, Advanced
Robotics, vol.17, no.3, pp.199-217, 2003.
TORSO: Development of a Telexistence
Visual System Using a 6-d.o.f. Robot Head
K. Watanabe, I. Kawabuchi, N. Kawakami, T. Maeda, and S. Tachi: TORSO:
Development of a Telexistence Visual System using a 6-d.o.f. Robot Head: Advanced
Robotics, vol.22, pp.1053- 1073, 2008.
D.O.F. = degree of freedom; the robot head provides six,
reproducing the operator's head motion for the telexistence visual system.
80. TELEsarPHONE: Mutual Telexistence Master-Slave
Communication System Based on Retrore-flective
Projection Technology
S. Tachi, K. Watanabe, and K. Minamizawa: TELEsarPHONE: Mutual Telexistence Master
Slave Communication System based on Retro-Reflective Projection Technology, SICE
Journal of Control, Measurement, and System Integration, vol.1, no.5, pp.1-10, 2008.
TELEsarPHONE realizes mutual telexistence:
retroreflective projection displays the operator's live appearance
on the slave robot, so the remote party can see whom they are talking with.
Design of TELESAR V for Transferring
Bodily Consciousness in Telexistence
TELESAR V is a telexistence master-slave system for transferring bodily
consciousness; its torso, head, arms, and hands together provide a large
number of degrees of freedom (D.O.F.).
[Fig. 1: TELESAR V]
C. L. Fernando and S. Tachi: Design of TELESAR V for Transferring Bodily
Consciousness in Telexistence, Proceedings of IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS2012), pp.5112-5118,
Vilamoura, Algarve, Portugal, 2012.
81. Mutual Hand Representation for Telexistence Robots
using Projected Virtual Hands
MHD Yamen Saraiji, Charith Lasantha Fernando, Kouta Minamizawa, and Susumu
Tachi : Mutual hand representation for telexistence robots using projected virtual
hands. In Proceedings of the 6th Augmented Human International Conference (AH
'15), Singapore, pp.221-222 (2015.3) [demonstration]
101. Focus 3D: Compressive Accommodation Display
ANDREW MAIMONE
University of North Carolina at Chapel Hill
GORDON WETZSTEIN, MATTHEW HIRSCH, DOUGLAS LANMAN and RAMESH RASKAR
MIT Media Lab
and
HENRY FUCHS
University of North Carolina at Chapel Hill
106. Discussion
• 3D content displays correctly only when the viewer is positioned perpendicular to the display.
What to read next?
• A comparison with the architecture of conventional 3D displays seems necessary
• [Sullivan 2003]
• [Putilin et al. 2001; Gotoda 2010;
Wetzstein et al. 2011; Lanman et al.
2011; Wetzstein et al. 2012]
• [Chu et al. 2005; Chien and Shieh 2006;
Brott and Schultz 2010]
• [Toyooka et al. 2001; Mather et al. 2009;
Kwon and Choi 2012]
• I think these and similar papers should be read.
107. CICHOCKI, A., ZDUNEK, R., PHAN,
A. H., AND AMARI, S.-I. 2009.
Nonnegative Matrix and Tensor
Factorizations. Wiley.
Its vector (matrix) factorization algorithms are used in the image processing (see the NMF sketch below).
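A minimal sketch of nonnegative matrix factorization with multiplicative updates, the kind of algorithm the book covers; dimensions and iteration count are illustrative.

import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Factor nonnegative V (m x n) into W (m x rank) @ H (rank x n)."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # updates keep both factors nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H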
BROTT, R. AND SCHULTZ, J. 2010.
Directional backlight lightguide considerations for full resolution
autostereoscopic 3D displays. SID Digest,
218–221.
Uses its theory for achieving full-resolution display on autostereoscopic 3D displays.
108. AKELEY, K., WATT, S. J., GIRSHICK, A. R.,
AND BANKS, M. S. 2004. A stereo display
prototype with multiple focal distances. ACM Trans.
Graph. (SIGGRAPH) 23, 804–813.
• Used to obtain accurate focal distances.
LANMAN, D., HIRSCH, M., KIM, Y., AND
RASKAR, R. 2010. Content-adaptive
parallax barriers: Optimizing dual-layer 3D
displays using low-rank light field
factorization. ACM Trans. Graph.
(SIGGRAPH Asia) 29, 163:1–163:10.
• Used for the optimized presentation of content on 3D displays.
109. MARWAH, K., WETZSTEIN, G., BANDO,
Y., AND RASKAR, R. 2013. Compressive
Light Field Photography using Overcomplete
Dictionaries and Optimized Projections. ACM
Trans. Graph. (Proc. SIGGRAPH).
An efficient compression technique for light fields.