Slides from my presentation at the Awareness and Reflection in Technology-Enhanced Learning Workshop at the EC-TEL 2012 Conference. For more information about the workshop and the presentation please visit http://teleurope.eu/artel12.
Comparing Automatically Detected Reflective Texts with Human Judgements
1. Comparing Automatically Detected Reflective Texts with Human Judgements
Thomas Daniel Ullmann, Fridolin Wild, Peter Scott
KMi - The Open University
2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning
18 September 2012
2. Traditional methods
• Questionnaires
– Groningen Reflection Ability Scale (GRAS)
– Reflective Dialogue Rating Scale
• Manual content analysis
– For an overview see: Dyment, J. E., & O’Connell, T. S. (2011).
• Drawbacks: time consuming, delayed feedback, personal nature of reflection
=> Automated detection of reflection
4. Theory: Elements of Reflection
Reflection comprises:
• Description of an experience
• Personal
• Critical analysis
• Frame-of-reference
• Outcome
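As a rough sketch, the elements above can be captured as a simple enumeration. The category names come from the slide; the representation itself is an assumption for illustration, not the authors' model.

```python
from enum import Enum

class ReflectionElement(Enum):
    """Elements of reflection, as listed on the slide."""
    DESCRIPTION_OF_EXPERIENCE = "Description of an experience"
    PERSONAL = "Personal"
    CRITICAL_ANALYSIS = "Critical analysis"
    FRAME_OF_REFERENCE = "Frame-of-reference"
    OUTCOME = "Outcome"

# A detector could tag text spans with these categories:
print([e.value for e in ReflectionElement])
```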
5. The Architecture
Ullmann, T. D. (2011): An architecture for the automated detection of textual indicators of reflection.
http://ceur-ws.org/Vol-790/
6. Benefits
• Allows the mapping from low-level annotations to high-level constructs
• Knowledge driven
• Explanation of inferences
7. Example rule
FOR ALL sentences of the document:
IF the sentence contains a nominal subject
AND IF that subject is a self-referential pronoun
AND IF the governor of this subject is contained
in the vocabulary of reflective verbs
THEN add fact "Sentence is of type: personal
use of reflective vocabulary"
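The rule above can be sketched in Python over a pre-parsed sentence, here represented as (token, dependency relation, governor) triples. The pronoun and verb lists are illustrative assumptions, not the architecture's actual vocabularies or implementation.

```python
# Assumed self-referential pronouns and reflective-verb vocabulary
# (illustrative only; the real vocabularies are not given here).
SELF_PRONOUNS = {"i", "me", "myself", "we"}
REFLECTIVE_VERBS = {"realised", "learned", "understood", "reflected"}

def is_personal_reflective(parsed_sentence):
    """Apply the rule: the sentence has a nominal subject (nsubj)
    that is a self-referential pronoun, and the subject's governor
    is contained in the reflective-verbs vocabulary."""
    for token, dep, governor in parsed_sentence:
        if (dep == "nsubj"
                and token.lower() in SELF_PRONOUNS
                and governor.lower() in REFLECTIVE_VERBS):
            return True
    return False

# "I realised my mistake" -> nsubj(realised, I)
sentence = [("I", "nsubj", "realised"),
            ("my", "poss", "mistake"),
            ("mistake", "dobj", "realised")]
print(is_personal_reflective(sentence))  # True
```

In practice the triples would come from a dependency parser; here they are written out by hand so the rule logic stands alone.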
8. The experiment
• Overarching goal:
– Evaluating the boundaries of the automated detection of reflection
• Focus of the paper:
– How does the automated detection of reflection relate to human judgements of reflection?
– What are reasonable weights to parameterise the reflection detector?
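One plausible reading of the second question: the detector combines evidence from individual indicators through weights and a threshold. A minimal sketch, assuming a weighted-sum combination; the indicator names, weights, and threshold are invented for illustration and are not the paper's values.

```python
# Illustrative weights for per-indicator evidence (assumed, not the
# paper's parameterisation).
INDICATOR_WEIGHTS = {
    "personal_reflective_vocabulary": 0.4,
    "critical_analysis": 0.3,
    "frame_of_reference": 0.2,
    "outcome": 0.1,
}

def reflection_score(indicator_counts, weights=INDICATOR_WEIGHTS):
    """Weighted sum of the counts of detected indicators."""
    return sum(weights.get(name, 0.0) * count
               for name, count in indicator_counts.items())

def is_reflective(indicator_counts, threshold=1.0):
    """Classify a text as reflective if its score reaches the threshold."""
    return reflection_score(indicator_counts) >= threshold

counts = {"personal_reflective_vocabulary": 2, "outcome": 1}
print(reflection_score(counts))  # 0.9
print(is_reflective(counts))     # False
```

Comparing such classifications against human judgements is then a way to search for reasonable weight settings.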
10. Text corpus
• Text corpus: “The Blog Authorship Corpus”
• Experiment based on a subset of 5,176 blog posts
• 4,842,295 annotations
• 178,504 inferences
• Detection: 95 posts classified as reflective; 54 as not reflective