+
Doing More with Less:
Student Modeling and Performance
Prediction with Reduced Content Models
Yun Huang, University of Pittsburgh
Yanbo Xu, Carnegie Mellon University
Peter Brusilovsky, University of Pittsburgh
+
This talk…
• What? More effective student modeling and performance prediction
• How? A simple, novel framework that reduces the content model without loss of quality
• Why? Better and cheaper
  • Content models reduced to 10%-20% of their original size while maintaining or improving performance (up to 8% better AUC)
  • Beats expert-based reduction
+
Outline
• Motivation
• Content Model Reduction
• Experiments and Results
• Conclusion and Future Work
+
Motivation
• In some domains, and for some types of learning content, each problem (item) is related to a large number of domain concepts (Knowledge Components, KCs)
• This complicates modeling by increasing noise and decreasing efficiency
• We argue that we only need a subset of the most important KCs!
+
Content model
• The focus of this study: Java
  • Each problem involves a complete program and relates to many concepts
• Original content model
  • Each problem is indexed by a set of Java concepts from an ontology
  • In our study, the number of concepts per problem ranges from 9 to 55!
+
An example of an original content model
1. class definition
2. static method
3. public class
4. public method
5. void method
6. String array
7. int type variable declaration
8. int type variable initialization
9. for statement
10. assignment
11. increment
12. multiplication
13. less or equal
14. nested loop
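To make the index concrete: a content model is simply a mapping from each item to its list of concepts. Below is a minimal Python sketch; the item ID is hypothetical and the entry is truncated.

content_model = {
    "nested_loop_template": [  # hypothetical question-template ID
        "class definition", "static method", "public class",
        "int type variable declaration", "int type variable initialization",
        "for statement", "assignment", "increment",
        "less or equal", "nested loop",
    ],
    # ... one entry per item; in this study, each item has 9 to 55 concepts
}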
+
Challenges
• Select the best concepts to model each problem
• Traditional feature selection focuses on selecting a subset of features for all data points (i.e., for a whole domain)
• We need selection at the item level, not the domain level
+
Our intuitions behind the reduction methods
• Three types of methods, based on different information sources and intuitions:
• Intuition 1 (content): "for statement" appears 2 times in this problem -- it should be important for this problem! "assignment" appears in a lot of problems -- it should be trivial for this problem!
• Intuition 2 (response): When "nested loops" appears, students always get it wrong -- it should be important for this problem!
• Intuition 3 (expert): Experts labeled "assignment" and "less than" as prerequisite concepts, and "nested loops" and "for statement" as outcome concepts -- the outcome concepts should be the important ones for the current problem!
+
Reduction Methods
• Content-based methods
  • A problem = a document, a KC = a word
  • Use the IDF and TF-IDF keyword-weighting approaches to compute KC importance scores (see the sketch below)
• Response-based method
  • Train a logistic regression (PFA) to predict student responses
  • Use the coefficient representing the initial easiness of a KC (EASINESS-COEF)
• Expert-based method
  • Use only the OUTCOME concepts as the KCs for an item
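A minimal sketch of the content-based scoring, treating each item as a document and each KC as a word. It assumes a KC is listed once per occurrence in an item, so its within-item count plays the role of term frequency; the paper's exact weighting may differ.

import math
from collections import Counter

def idf_scores(content_model):
    """IDF per KC: a KC that indexes fewer items gets a higher score."""
    n_items = len(content_model)
    doc_freq = Counter(kc for kcs in content_model.values() for kc in set(kcs))
    return {kc: math.log(n_items / df) for kc, df in doc_freq.items()}

def tfidf_scores(item_kcs, idf):
    """TF-IDF within one item: a KC that occurs often in this item but
    rarely across items gets a higher score."""
    tf = Counter(item_kcs)
    return {kc: count * idf[kc] for kc, count in tf.items()}

For the response-based method, the analogous SCORE is read off a fitted PFA model: the per-KC initial-easiness coefficient (EASINESS-COEF) serves as the score.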
+
Item-level ranking of KC importance
• For each method, we define a SCORE function that assigns a score to each KC in an item
  • The higher the score, the more important the KC is in that item
• Then we rank at the item level: a KC's importance can be differentiated
  • by different score values, and/or
  • by different ranking positions in different items
+
Reduction Sizes
• What is the best number of KCs for each method to reduce to? (see the sketch below)
  • Reducing non-adaptively across items (TopX): select the x KCs per item with the highest importance scores
  • Reducing adaptively to each item (TopX%): select the top x% of KCs per item
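A minimal sketch of both reduction modes, given any of the SCORE functions above; the tie-breaking and rounding here are our own choices, not necessarily the paper's.

def reduce_item(item_kcs, score, x=None, pct=None):
    """Keep only the highest-scoring KCs of one item.
    x   -- TopX: keep a fixed number of KCs per item (non-adaptive).
    pct -- TopX%: keep a fraction of the item's own KCs (adaptive)."""
    ranked = sorted(set(item_kcs), key=lambda kc: score.get(kc, 0.0), reverse=True)
    if pct is not None:
        x = max(1, round(len(ranked) * pct))
    return ranked[:x]

# e.g., keep the top 20% of each item's KCs by TF-IDF:
# idf = idf_scores(content_model)
# reduced = {item: reduce_item(kcs, tfidf_scores(kcs, idf), pct=0.2)
#            for item, kcs in content_model.items()}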
+
Evaluating Reduction on PFA and KT
• We evaluate reduction by the prediction performance of two popular student modeling and performance prediction models (sketched below):
  • Performance Factors Analysis (PFA): a logistic regression model predicting student responses
  • Knowledge Tracing (KT): a Hidden Markov Model predicting student responses and inferring the student's knowledge level
*We select a variant that can handle multiple KCs.
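To make the two models concrete, here are minimal single-prediction sketches. The PFA form with per-KC easiness, success, and failure coefficients is the standard one; the multi-KC KT prediction keyed to the "weakest" KC is one reading of the variant we use (see the appendix slide on RANDOM), and all parameter names are illustrative.

import math

def pfa_predict(kcs, beta, gamma, rho, s, f):
    """PFA: logit(p) = sum over the item's KCs of beta_k (initial easiness,
    the EASINESS-COEF used for reduction) + gamma_k * s_k (prior successes)
    + rho_k * f_k (prior failures)."""
    z = sum(beta[k] + gamma[k] * s[k] + rho[k] * f[k] for k in kcs)
    return 1.0 / (1.0 + math.exp(-z))

def kt_step(p_know, correct, slip, guess, learn):
    """Standard single-KC Knowledge Tracing update after one response."""
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return post + (1 - post) * learn  # chance to learn after the attempt

def kt_predict(p_know_by_kc, slip, guess):
    """Multi-KC prediction driven by the least-known ("weakest") KC."""
    p = min(p_know_by_kc.values())
    return p * (1 - slip) + (1 - p) * guess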
+
Outline
• Motivation
• Content Model Reduction
• Experiments and Results
• Conclusion and Future Work
+
Tutoring System
• Data were collected from JavaGuide, a tutor for learning Java programming.
• Each question is generated from a template, and students can make multiple attempts.
• Given the question's Java code, students enter the value of a variable or the program's output.
+
Experimental Setup
• Dataset
  • 19,809 observations, about 69.3% correct
  • 132 students on 94 question templates (items)
  • Each problem is indexed by 9 to 55 KCs; 124 KCs in total
• Classification metric: Area Under the ROC Curve (AUC)
  • 1: perfect classifier; 0.5: random classifier
• Cross-validation: two runs of 5-fold CV, where in each run 80% of the users are in the train set and the remaining users are in the test set (see the sketch below)
• We report the mean AUC on the test sets across the 10 folds and use the Wilcoxon signed-rank test (alpha = 0.05) to test the significance of AUC comparisons
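A minimal sketch of the evaluation protocol, using scikit-learn and SciPy with a plain logistic regression standing in for PFA/KT; X, y, and user_ids stand for the assembled feature matrix, correctness labels, and per-observation student IDs, all hypothetical names.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from scipy.stats import wilcoxon

def user_level_cv_auc(X, y, user_ids, n_splits=5, seed=0):
    """5-fold CV over students: all observations of a student share a fold,
    so each fold trains on ~80% of users and tests on the rest."""
    users = np.random.RandomState(seed).permutation(np.unique(user_ids))
    fold_of = {u: i % n_splits for i, u in enumerate(users)}
    folds = np.array([fold_of[u] for u in user_ids])
    aucs = []
    for k in range(n_splits):
        train, test = folds != k, folds == k
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
    return aucs

# Two runs (two seeds) yield 10 test AUCs per condition; compare conditions
# with a paired Wilcoxon signed-rank test at alpha = 0.05:
# stat, p = wilcoxon(aucs_reduced, aucs_original)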
+
Reduction vs. original on PFA
• Curves are flat (or roughly bell-shaped) with fluctuations
• Reduction to a moderate size can provide comparable or even better prediction than the original content models
• Reduction can hurt if the size goes too small (e.g., < 5), possibly because PFA was designed to fit items with multiple KCs
+
Reduction vs. original on KT
• Reduction provides gains over a much bigger span and scale!
• KT achieves its best performance when the reduction size is small: it may be more sensitive to the size than PFA!
• Our reduction methods have selected promising KCs -- the ones that matter for KT's predictions!
+
Automatic vs. expert-based (OUTCOME) reduction
• IDF and TFIDF can be comparable to, or outperform, the OUTCOME method!
• EASINESS-COEF (E-COEF) provides more gain on KT than on PFA, suggesting that PFA coefficients can provide useful extra information for reducing KT content models.
(+/−: significantly better/worse than OUTCOME; the marked values are the optimal mean AUCs)
+
Outline
 Motivation
 Content Model Reduction
 Experiments and Results
 Conclusion and Future Work
+
“Everything should be
made as simple as
possible, but not simpler.”
-- Albert Einstein
+
Conclusion
• "The content model should be made as simple as possible, but not simpler."
• Given the proper reduction size, reduction enables better prediction performance!
• Different models react to reduction differently!
  • KT is more sensitive to reduction than PFA
  • Different models achieve the best balance between model complexity and model fit in different ranges
• We are the first to explore reduction extensively!
• Future work:
  • More ideas for selecting important KCs?
  • Larger datasets?
  • Other domains?
+
Acknowledgements
• Advanced Distributed Learning Initiative (http://www.adlnet.gov/)
• LearnLab 2013 Summer School at CMU (Dr. Kenneth R. Koedinger, Dr. Jose P. Gonzalez-Brenes, and Dr. Zachary A. Pardos, for advising and initiating the project)
+
Thank you for listening!
+
Look at the original content model of
our Java learning system…
+
Why can RANDOM occasionally be good?
• When the remaining size is relatively large (e.g., > 4 or > 20%), RANDOM can by chance hit one or a subset of the important KCs, and then
  • it takes advantage of PFA's logistic regression to adjust the coefficients of the other, non-important KCs, or
  • it takes advantage of KT, which picks out the most important KC in the set by identifying the "weakest" KC.
• When the remaining number of KCs is relatively small, the proposed methods become better than RANDOM more significantly.
• Our proposed methods are not perfect…
(+/−: significantly better/worse than RANDOM; the marked values are the optimal mean AUCs)
More Related Content

What's hot (13)
• Bt0081, software engineering
• Adaptive Multilevel Clustering Model for the Prediction of Academic Risk
• Machine Learning: Foundations Course Number 0368403401 (butest)
• H transformer-1d paper review!!
• Benchmarking transfer learning approaches for NLP
• 1.9 cấu trúc câu đề phải có sự nhất quán
• Nova Press Brochure (Jeff Kolby)
• Benchmarking 1
• Replication of Recommender Systems Research
• Basics of Machine Learning (butest)
• Solving a business problem through text and sentiment mining
• Vecc day 1
• Aied 2013
Similar to Umap v1 (20)
• 2014UMAP Student Modeling with Reduced Content Models
• Thesis_Rehan_Aziz
• Ensemble Learning Featuring the Netflix Prize Competition and ... (butest)
• Creativity vs Best Practices
• Chounta@paws
• 2015EDM: A Framework for Multifaceted Evaluation of Student Models (Polygon)
• Exposé Ontology
• Prototype System for Recommending Academic Subjects for Students' Self Design...
• JAVA 2013 IEEE DATAMINING PROJECT Comparable entity mining from comparative q...
• Comparable entity mining from comparative questions
• Introduction (butest)
• Introduction (butest)
• Introduction (butest)
• PPT SLIDES (butest)
• PPT SLIDES (butest)
• M08 BiasVarianceTradeoff
• Machine Learning for Everyone
• Introduction to Data Mining
• MCQ test item analysis
• ODSC East: Effective Transfer Learning for NLP
More from Peter Brusilovsky (20)
• SANN: Programming Code Representation Using Attention Neural Network with Opt...
• Computer Science Education: Tools and Data
• Personalized Learning: Expanding the Social Impact of AI
• Action Sequence Mining and Behavior Pattern Analysis for User Modeling
• User Control in Adaptive Information Access
• Human-Centered AI in AI-ED - Keynote at AAAI 2022 AI for Education workshop
• User Control in AIED (Artificial Intelligence in Education)
• The Return of Intelligent Textbooks - ITS 2021 keynote talk
• Data-Driven Education 2020: Using Big Educational Data to Improve Teaching an...
• Two Brains are Better than One: User Control in Adaptive Information Access
• An Infrastructure for Sustainable Innovation and Research in Computer Scienc...
• Personalized Online Practice Systems for Learning Programming
• Human Interfaces to Artificial Intelligence in Education
• Interfaces for User-Controlled and Transparent Recommendations
• UMAP 2019 talk Evaluating Visual Explanations for Similarity-Based Recommenda...
• Course-Adaptive Content Recommender for Course Authoring
• The User Side of Personalization: How Personalization Affects the Users
• The Power of Known Peers: A Study in Two Domains
• Data driveneducationicwl2016
• From Expert-Driven to Data-Driven Adaptive Learning
Editor's Notes
1. 15 min talk + 5 min Q&A
2. Find a good figure
3. Content-based methods: use KC frequency characteristics of the original content model. Response-based method: use KC easiness (difficulty) inferred from student responses. Expert-based method: use expert-annotated prerequisite and outcome concepts. An important KC for an item should mainly appear in that item, and should appear more times in that item.
4. IDF (Inverse Document Frequency); TFIDF (Term Frequency - Inverse Document Frequency)
5. TopX: select the x KCs per item with the highest importance scores. TopX%: select the top x% of KCs per item with the highest importance scores.
6. The skills are defined by experts aided by a Java programming language ontology and a parser [10]. Each item uses exactly one skill and may use 1 to 8 different fine-grained subskills.
7. Same as note 6.
8. RANDOM:
9. RANDOM
10. On the IRT confound: it arises from the order in which items are presented to students. Specifically, if the items are presented in a relatively deterministic order, an item's position in the sequence of trials is confounded with the item's identity. IRT can exploit such a confound to implicitly infer performance levels as a function of experience, and would therefore have the same capabilities as the combined model, which performs explicit inference of the student's knowledge state.
11. Our study shows that reduction can, in fact, help PFA and KT achieve significantly higher predictive performance than the original content model, given the proper scale of reduction.
12. Same as note 11.
13. Same as note 11.
14. Same as note 10.