Multiple Classifier Systems

This presentation is about Multiple Classifier Systems (ensembles of classifiers). It first presents the general idea of decision making, then addresses the reasons and rationale for using multiple classifier systems, and finally concentrates on designing such a system: 1. creating an ensemble, and 2. combining classifiers.


  1. Multiple Classifier System – Farzad Vasheghani Farahani, Machine Learning
  2. Outline  Introduction  Decision Making  General Idea  Brief History  Reasons & Rationale  Statistical  Large Volumes of Data  Too Little Data  Divide and Conquer  Data Fusion  Designing a Multiple Classifier System  Diversity  Creating an Ensemble  Combining Classifiers  Example  Conclusions  References
  3. Ensemble-Based Systems in Decision Making  For many tasks, we seek a second opinion before making a decision, sometimes many more:  Consulting different doctors before a major surgery  Reading reviews before buying a product  Requesting references before hiring someone  We consider the decisions of multiple experts in our daily lives  Why not follow the same strategy in automated decision making?  Multiple classifier systems, committees of classifiers, mixtures of experts, ensemble-based systems
  4. Ensemble-Based Classifiers  Two design questions: (i) how to generate the individual components of the ensemble (the base classifiers), and (ii) how to combine the outputs of the individual classifiers?
  5. Brief History of Ensemble Systems  Dasarathy and Sheela (1979) partitioned the feature space using two or more classifiers  Schapire (1990) proved that a strong classifier can be generated by combining weak classifiers through boosting; the predecessor of the AdaBoost algorithm  Two types of combination:  classifier selection  classifier fusion
  6. Why Ensemble-Based Systems?
  7. Why Ensemble-Based Systems? 1. Statistical reasons  A set of classifiers with similar training performances may have different generalization performances  Combining the outputs of several classifiers reduces the risk of selecting a poorly performing classifier  Example: suppose there are 25 base classifiers, each with error rate $\varepsilon = 0.35$, and their errors are independent. The majority-vote ensemble is wrong only when at least 13 of them err, so the probability that the ensemble makes a wrong prediction is $\sum_{i=13}^{25} \binom{25}{i}\,\varepsilon^{i}(1-\varepsilon)^{25-i} \approx 0.06$
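The 0.06 figure can be checked by evaluating the binomial sum directly; a minimal sketch in Python (not part of the original slides):

```python
# Verify the slide's figure: 25 independent base classifiers, each with
# error rate eps = 0.35; the majority-vote ensemble errs only when 13 or
# more of its members err.
from math import comb

eps, n = 0.35, 25
p_error = sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
              for i in range(13, n + 1))  # i >= ceil(n/2): majority is wrong
print(f"{p_error:.3f}")  # ~0.060, far below the individual rate of 0.35
```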
  8. Why Ensemble-Based Systems? 2. Large volumes of data  If the amount of data to be analyzed is too large, a single classifier may not be able to handle it; train different classifiers on different partitions of the data
  9. Why Ensemble-Based Systems? 3. Too little data  Ensemble systems can also be used when there is too little data; resampling techniques
  10. Why Ensemble-Based Systems? 4. Divide and Conquer  Divide the data space into smaller & easier-to-learn partitions; each classifier learns only one of the simpler partitions
  11. Why Ensemble-Based Systems? 5. Data Fusion  Given several sets of data from various sources, where the nature of the features differs (heterogeneous features), training a single classifier may not be appropriate (e.g., MRI data, EEG recordings, blood tests, ...)
  12. Designing a Multiple Classifier System
  13. Major Steps  All ensemble systems must have two key components:  A method for generating the component classifiers of the ensemble  A method for combining the classifier outputs
  14. “Diversity” of the Ensemble  Objective: create many classifiers and combine their outputs so as to improve on the performance of a single classifier  Intuition: if each classifier makes different errors, then their strategic combination can reduce the total error!  We need base classifiers whose decision boundaries are adequately different from those of the others  Such a set of classifiers is said to be “diverse”
  15. How to achieve classifier diversity? A. Use different training sets to train the individual classifiers B. Use different training parameters for a classifier C. Combine different types of classifiers (MLPs, decision trees, NN classifiers, SVMs) for added diversity D. Use random feature subsets, called the random subspace method
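As an illustration of option D, here is a minimal sketch of the random subspace method; scikit-learn and a synthetic dataset are assumptions, since the slides do not prescribe a library or data:

```python
# Random subspace method: each ensemble member trains on a random subset
# of the features, which decorrelates the members' decision boundaries.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=40, random_state=0)

subspace_ensemble = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=25,
    max_features=0.5,  # each member sees a random 50% of the features...
    bootstrap=False,   # ...but all of the samples: subspaces, not bagging
    random_state=0,
)
print(cross_val_score(subspace_ensemble, X, y, cv=5).mean())
```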
  16. Creating an Ensemble (Coverage Optimization)
  17. Creating an Ensemble  Two questions: 1. How will the individual classifiers be generated? 2. How will they differ from each other?
  18. Ensemble Creation Methods 1. Subsample approach (data level)  Bagging  Random forest  Boosting  AdaBoost  Wagging  Rotation forest  RotBoost  Mixture of experts 2. Subspace approach (feature level)  Random-based  Feature reduction  Performance-based 3. Classifier-level approach
  19. Bagging
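The figure on this slide is not reproduced in the transcript; as a substitute, a minimal from-scratch sketch of the bagging idea, with illustrative names and sizes that are not from the slides:

```python
# Bagging: train each base classifier on a bootstrap sample (drawn with
# replacement) and combine by plurality vote. Labels are assumed to be
# integer-coded (0, 1, 2, ...).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X_train, y_train, X_test, n_estimators=25, seed=0):
    X_train, y_train = np.asarray(X_train), np.asarray(y_train)
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)  # bootstrap sample of the training set
        clf = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        votes.append(clf.predict(X_test))
    votes = np.asarray(votes, dtype=int)
    # plurality vote: the most frequent label in each column wins
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```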
  20. Boosting
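Similarly, a hedged sketch of boosting via AdaBoost, where each new weak learner is fit to a reweighted training set that emphasizes previously misclassified examples; scikit-learn and synthetic data are assumptions here:

```python
# AdaBoost with decision "stumps" (depth-1 trees) as the weak learners.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

booster = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=50, random_state=1)
print(booster.fit(X_tr, y_tr).score(X_te, y_te))
```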
  21. Combining Classifiers (Decision Optimization)
  22. Two Important Concepts (i)  (i) Trainable vs. non-trainable combination rules  Trainable rules: the parameters of the combiner, called “weights”, are determined through a separate training algorithm  Non-trainable rules: the combination parameters are available as the classifiers are generated; weighted majority voting is an example
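A minimal sketch of weighted majority voting, the non-trainable rule named above; the log((1 - ε)/ε) weighting, available from each classifier's own training error, is one common choice (as in AdaBoost) and is an assumption here:

```python
# Weighted majority voting: each classifier's vote counts with a weight
# derived from its known error rate, so no separate combiner training
# is required (a non-trainable rule).
import numpy as np

def weighted_majority_vote(predictions, error_rates, n_classes):
    """predictions: (n_classifiers, n_samples) integer class labels.
    error_rates: per-classifier error rates known at generation time."""
    predictions = np.asarray(predictions)
    errs = np.asarray(error_rates, dtype=float)
    weights = np.log((1.0 - errs) / errs)
    scores = np.zeros((n_classes, predictions.shape[1]))
    for preds, w in zip(predictions, weights):
        for c in range(n_classes):
            scores[c] += w * (preds == c)
    return scores.argmax(axis=0)

# Three classifiers, two test points; the more accurate voters dominate:
preds = [[0, 1], [0, 1], [1, 0]]
print(weighted_majority_vote(preds, [0.1, 0.2, 0.4], n_classes=2))  # [0 1]
```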
  23. Two Important Concepts (ii)  (ii) Type of classifier output, which determines the applicable combination rules:  Absolute output: Majority Voting, Naïve Bayes, Behavior Knowledge Space  Ranked output: Borda Counting, Maximum Ranking  Continuous output: Algebraic Methods, Fuzzy Integral, Decision Templates
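As one concrete ranked-output combiner, a small sketch of Borda counting, with made-up rankings for illustration:

```python
# Borda counting: each classifier ranks all classes; a class at rank r
# (0 = best) receives (n_classes - 1 - r) points, and the class with the
# highest total wins.
import numpy as np

def borda_count(rankings, n_classes):
    """rankings: list of length-n_classes sequences, where rankings[k][r]
    is the class that classifier k placed at rank r."""
    points = np.zeros(n_classes)
    for ranking in rankings:
        for rank, cls in enumerate(ranking):
            points[cls] += n_classes - 1 - rank
    return int(points.argmax())

# Three classifiers ranking classes {0, 1, 2}:
print(borda_count([[0, 1, 2], [1, 0, 2], [1, 2, 0]], 3))  # -> 1
```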
  24. Example (“Zoo” UCI Data Set) 1. animal name: unique for each instance 2. hair: Boolean 3. feathers: Boolean 4. eggs: Boolean 5. milk: Boolean 6. airborne: Boolean 7. aquatic: Boolean 8. predator: Boolean 9. toothed: Boolean 10. backbone: Boolean 11. breathes: Boolean 12. venomous: Boolean 13. fins: Boolean 14. legs: numeric (set of values: {0,2,4,5,6,8}) 15. tail: Boolean 16. domestic: Boolean 17. catsize: Boolean 18. type: numeric (integer values in the range [1,7])
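A hedged sketch of how such an experiment might be run: bagged decision trees on the Zoo data, assuming a local 'zoo.data' file downloaded from the UCI Machine Learning Repository (the file name and column names are assumptions based on the attribute list above):

```python
# Bagged decision trees on the UCI "Zoo" data.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

cols = ["animal_name", "hair", "feathers", "eggs", "milk", "airborne",
        "aquatic", "predator", "toothed", "backbone", "breathes",
        "venomous", "fins", "legs", "tail", "domestic", "catsize", "type"]
zoo = pd.read_csv("zoo.data", header=None, names=cols)

X = zoo.drop(columns=["animal_name", "type"])  # the name is an ID, not a feature
y = zoo["type"]

ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                             random_state=0)
# cv=3 because the rarest of the 7 classes has only a handful of instances
print(cross_val_score(ensemble, X, y, cv=3).mean())
```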
  25. Conclusions  Ensemble systems are useful in practice  Diversity of the base classifiers is important  Ensemble generation techniques: bagging, AdaBoost, mixture of experts  Classifier combination strategies: algebraic combiners, voting methods, and decision templates  No single ensemble generation algorithm or combination rule is universally better than the others  Effectiveness on real-world data depends on classifier diversity and the characteristics of the data
  26. References  [1] Polikar R., “Ensemble Based Systems in Decision Making,” IEEE Circuits and Systems Magazine, vol. 6, no. 3, pp. 21-45, 2006.  [2] Polikar R., “Bootstrap Inspired Techniques in Computational Intelligence,” IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 56-72, 2007.  [3] Polikar R., “Ensemble Learning,” Scholarpedia, 2008.  [4] Kuncheva L. I., Combining Pattern Classifiers: Methods and Algorithms. New York, NY: Wiley, 2004.
