This is my 5-minute presentation at the Doctoral Consortium of the 18th Artificial Intelligence in Education (AIED) conference, held on 30 June 2017 in Wuhan, China.
Multimodal Tutor - Adaptive feedback from multimodal experience capturing
1. AIED17
30 June 2017 - Wuhan, China
Multimodal Tutor
Daniele DI MITRI
Advisors: Hendrik DRACHSLER, Marcus SPECHT
Adaptive feedback from multimodal
experience capturing.
3. Multimodality for humans
Humans encode messages using multiple modalities, such as textual, linguistic and spatial ones (Kress, 2003).
Humans decode messages by capturing them through the senses and reasoning about them (Paivio, 1971).
4. Multimodality for computers
They encode messages through displays or AR.
They decode inputs through sensors.
5. Observability Line
INPUT SPACE: observable dimensions, which can be tracked with sensors.
OUTPUT SPACE: unobservable dimensions, which require human interpretation (assessment).
Di Mitri, D., Drachsler, H., & Specht, M. (2017). From signals to knowledge: a conceptual model for multimodal learning analytics. In press.
6. Machine Learning approach
• Train machines to look beyond the observability line
• "Train machines" = fit ML models
• Use historical pairings of:
– multimodal data "X" (the input space)
– learning performances "y" (the output space)
y = f(X)
[Diagram: Multimodal Data → ML Model → Learning Performance]
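The y = f(X) idea above can be sketched as a standard supervised-learning loop: fit a model on historical (X, y) pairs, then predict performance for unseen sensor windows. This is a minimal illustration with synthetic data; the feature layout and model choice are hypothetical stand-ins, not the deck's actual pipeline.

```python
# Sketch: learn f in y = f(X) from historical pairs of multimodal data
# and learning-performance labels, then score on held-out windows.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical input space: one row per time window, columns are
# sensor-derived features (heart rate, gaze, posture, ...) -- synthetic here.
X = rng.normal(size=(200, 5))
# Hypothetical output space: a learning-performance score per window,
# generated as a noisy linear function of the features.
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)  # y = f(X)
r2 = model.score(X_test, y_test)  # R^2 on held-out windows
print(r2)
```

Any regressor would do here; the point is only the pairing of observable input-space features with output-space labels that otherwise require human assessment.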
7. Research Tasks in my PhD
• T1 – Preliminary experiment: Learning Pulse ✓
• T2 – Literature review: multimodal data for learning ✓
• T3 – Technology prototype: Multimodal Prototype
• T4 – Main experiment: Multimodal Health Tutor
8. Task 1 – Learning Pulse (LAK17)
Flow prediction (Csikszentmihalyi, 1972)
Di Mitri, D., Scheffel, M., Drachsler, H., Börner, D., & Specht, M. (2017). Learning Pulse: a machine learning approach for predicting performance in self-regulated learning using multimodal data.
9. Task 2 – Review of Multimodal Data in Learning
Di Mitri, D., Drachsler, H., & Specht, M. From signals to knowledge: a conceptual model for multimodal learning analytics.
10. Task 3 – Multimodal Prototype
WEKIT prototype:
• Microsoft HoloLens + external sensors
• Multimedia annotations for task explanation
• Multimodal data capturing
WEKIT project
Industry 4.0
wekit.eu
11. Task 4 – Multimodal Health Tutor
Can we go beyond GPS-like tutoring and build a skills-sensitive Intelligent Tutoring System?
= can we predict confidence vs. hesitation from multimodal data?
Learning setting: healthcare simulation
2nd WEKIT pilot
wekit.eu
12. Task 4 – Multimodal Health Tutor (2)
IDEA
• Input space: motoric & physiological data
• Output space: self-reported confidence/hesitation level
• Model: linear mixed-effects model
• Feedback: prompted according to predicted confidence
Multimodal data:
• Physiological: EEG/focus, heart rate, sweat
• Motoric: gaze direction, head position, hand movement, EMG
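A minimal sketch of the proposed analysis: a linear mixed-effects model predicting self-reported confidence from sensor features, with a random intercept per learner. The data, feature names, and the use of statsmodels are all assumptions for illustration, not the experiment's actual setup.

```python
# Sketch: linear mixed-effects model of confidence ~ sensor features,
# grouped by learner (random intercept per learner). Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "learner": rng.integers(0, 10, size=n),       # grouping factor
    "heart_rate": rng.normal(75, 8, size=n),      # physiological feature
    "head_motion": rng.normal(0, 1, size=n),      # motoric feature
})
# Synthetic confidence score: more head motion -> lower confidence, plus noise.
df["confidence"] = (0.05 * df["heart_rate"] - 0.5 * df["head_motion"]
                    + rng.normal(scale=0.3, size=n))

# Fixed effects for the sensor features, random intercept per learner.
result = smf.mixedlm("confidence ~ heart_rate + head_motion",
                     df, groups=df["learner"]).fit()
print(result.params["head_motion"])  # estimated fixed effect of head motion
```

The grouping structure is what motivates a mixed model here: repeated measures from the same learner are correlated, so per-learner random effects keep the fixed-effect estimates honest.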
13. Q&A at the table
Thanks for listening!
Daniele Di Mitri
ddm@ou.nl
@dimstudi0