2011 IEEE Int’l Conf. on Systems, Man, and Cybernetics (SMC2011)
Special Session on Machine Learning, 9-12/10/2011, Anchorage, Alaska


Design of robust classifiers for adversarial environments

Battista Biggio, Giorgio Fumera, Fabio Roli

PRAgroup - Pattern Recognition and Applications Group
Department of Electrical and Electronic Engineering (DIEE)
University of Cagliari, Italy
Outline

• Adversarial classification
      – Pattern classifiers under attack


• Our approach
      – Modelling attacks to improve classifier security


• Application examples
      – Biometric identity verification
      – Spam filtering


• Conclusions and future work

Oct. 10, 2011   Design of robust classifiers for adversarial environments - F. Roli - SMC2011   2
Adversarial classification
•   Pattern recognition in security applications
       – spam filtering, intrusion detection, biometrics

•   Malicious adversaries aim to evade the system


[Figure: two-dimensional feature space (x1, x2) with decision boundary f(x) separating legitimate from malicious samples; the spam "Buy viagra!" is moved across the boundary by rewriting it as "Buy vi4gr@!".]
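The evasion shown in the figure can be sketched with a toy linear classifier: obfuscating a single keyword zeroes its feature and moves the message across the decision boundary. The weights and threshold below are purely illustrative, not taken from the slides.

```python
import numpy as np

# Hypothetical linear classifier f(x) = w.x + b; positive score -> malicious
w = np.array([2.0, 1.0])   # weight on the "viagra" feature, weight on another spam cue
b = -1.5

def f(x):
    return float(w @ x) + b

spam    = np.array([1.0, 1.0])   # "Buy viagra!": both features active
evasion = np.array([0.0, 1.0])   # "Buy vi4gr@!": obfuscation clears the keyword bit

# f(spam) is positive (detected), f(evasion) is negative (evades the filter)
```

The attack changes nothing about the classifier itself; it only moves the sample in feature space.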
Open issues

1. Vulnerability identification

2. Security evaluation of pattern classifiers

3. Design of secure pattern classifiers




Our approach
• Rationale
       – to improve classifier security (robustness) by modelling the
         data distribution under attack


• Modelling potential attacks at testing time
       – Probabilistic model of data distribution under attack


• Exploiting the data model for designing more
  robust classifiers




Modelling attacks at test time

[Diagram: generative model Y → X, with the attack acting on the data distribution.]

Two-class problem: X is the feature vector; Y is the class label, legitimate (L) or malicious (M).

P(X, Y) = P(Y) P(X | Y)



In adversarial scenarios, attacks can influence X and Y



Manipulation attacks against anti-spam filters
• Text classifiers in spam filtering
     – binary features (presence / absence of keywords)
• Common attacks
     – bad word obfuscation (BWO) and good word insertion (GWI)



     Buy viagra!                                  Buy vi4gr4!

                                                  Did you ever play that game
                                                  when you were a kid where the
                                                  little plastic hippo tries to
                                                  gobble up all your marbles?


  x = [0 0 1 0 0 0 0 0 …]                         x’ = [0 0 0 0 1 0 0 1 …]

                                     x' = A(x)
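A minimal sketch of A(x) on binary keyword features, with hypothetical word indices chosen to reproduce the vectors above: BWO clears the bit of an obfuscated spam word, GWI sets the bits of inserted good words.

```python
def attack(x, bwo_idx, gwi_idx):
    """A(x): bad word obfuscation clears spam-word bits,
    good word insertion sets good-word bits."""
    xp = list(x)
    for i in bwo_idx:
        xp[i] = 0   # "viagra" -> "vi4gr4": the keyword is no longer matched
    for i in gwi_idx:
        xp[i] = 1   # inserted legitimate words turn their features on
    return xp

x  = [0, 0, 1, 0, 0, 0, 0, 0]                  # original spam
xp = attack(x, bwo_idx=[2], gwi_idx=[4, 7])    # xp == [0, 0, 0, 0, 1, 0, 0, 1]
```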

Modelling attacks at test time

[Diagram: generative model Y → X, with the attack acting on the data distribution.]

P(X, Y) = P(Y) P(X | Y)

In adversarial scenarios, attacks can influence X and Y

We must model this influence to design robust classifiers




Modelling attacks at test time


[Diagram: graphical model with arcs A → Y and (A, Y) → X.]

P(X, Y, A) = P(A) P(Y | A) P(X | Y, A)


• A is a r.v. which indicates whether the sample is
  an attack (True) or not (False)
• Y is the class label: legitimate (L), malicious (M)
• X is the feature vector
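The factorisation can be checked numerically on a toy discrete instance. All probability values below are illustrative; the only constraint taken from the model (see the next slides) is P(Y=M | A=T) = 1.

```python
from itertools import product

P_A = {True: 0.3, False: 0.7}   # P(A): assumed fraction of attack samples
P_Y_given_A = {('M', True): 1.0, ('L', True): 0.0,   # attacks are always malicious
               ('M', False): 0.4, ('L', False): 0.6}
# P(X | Y, A) over a single binary feature (illustrative numbers):
P_X_given_YA = {(1, 'M', True): 0.2, (0, 'M', True): 0.8,
                (1, 'M', False): 0.9, (0, 'M', False): 0.1,
                (1, 'L', True): 0.5, (0, 'L', True): 0.5,
                (1, 'L', False): 0.1, (0, 'L', False): 0.9}

def joint(x, y, a):
    """P(X=x, Y=y, A=a) = P(A) P(Y | A) P(X | Y, A)."""
    return P_A[a] * P_Y_given_A[(y, a)] * P_X_given_YA[(x, y, a)]

total = sum(joint(x, y, a)
            for x, y, a in product((0, 1), ('L', 'M'), (True, False)))
# total is 1 (up to floating point): the factorisation defines a proper joint
```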


Modelling attacks at test time
Training time: the classifier is trained on Ptr(X, Y=L) and Ptr(X, Y=M).

Testing time: the malicious class also contains attacks which were not present at the training phase:

Pts(X, Y=M) = Pts(X, Y=M | A=T) P(A=T) + Pts(X, Y=M, A=F)
Modelling attacks at testing time
•   Attack distribution
     – P(X,Y=M, A=T) = P(X|Y=M,A=T)P(Y=M|A=T)P(A=T)

•   Choice of P(Y=M|A=T)
     – We set it to 1, since we assume the adversary has control
       only over malicious samples

•   P(A=T) is thus the fraction of attacks among malicious
    samples
     – It is a parameter which tunes the security/accuracy trade-
       off
     – The more attacks are simulated during the training phase,
       the more robust (but less accurate in the absence of attacks)
       the classifier is expected to be at testing time
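Under these assumptions the malicious class model at testing time becomes a two-component mixture weighted by P(A=T). A sketch of that mixture; the helper names and toy densities are ours, not from the slides:

```python
def p_malicious_ts(x, p_tr_malicious, p_attack_dist, p_attack_prior):
    """Testing-time malicious model, mixing the trained density with
    the simulated attack density:
        (1 - P(A=T)) * Ptr(x | M)  +  P(A=T) * Pts(x | M, A=T)
    p_attack_prior = P(A=T) is the security/accuracy trade-off knob."""
    return ((1.0 - p_attack_prior) * p_tr_malicious(x)
            + p_attack_prior * p_attack_dist(x))

# Illustrative 1-D densities: training mass on x=1, attack mass on x=0
p_tr  = lambda x: 0.9 if x == 1 else 0.1
p_att = lambda x: 0.2 if x == 1 else 0.8

# With P(A=T) = 0 the model is unchanged; raising P(A=T)
# shifts probability mass toward the attack region
p_malicious_ts(0, p_tr, p_att, 0.0)   # ~0.1
p_malicious_ts(0, p_tr, p_att, 0.5)   # ~0.45
```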



Modelling attacks at testing time

Key issue: modelling Pts(X, Y=M | A=T)

[Figure: at testing time, Pts(X, Y=L) and Pts(X, Y=M, A=F) are known from training data, while the attack component Pts(X, Y=M | A=T) is not.]
Modelling attacks at testing time
•     Choice of Pts(X, Y=M | A=T)
      – Requires application-specific knowledge
      – Even if knowledge about the attack is available, it is still
        difficult to model analytically
      – An agnostic choice is the uniform distribution
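For d binary features the uniform choice is simply Pts(x | Y=M, A=T) = 2^-d, a constant that does not depend on x. A sketch of the resulting robust malicious model (the function name is ours):

```python
def p_malicious_uniform(x, p_tr_malicious, p_attack_prior, d):
    """Malicious class model with a uniform attack component over
    d binary features: the attack term is the constant 2**-d."""
    return ((1.0 - p_attack_prior) * p_tr_malicious(x)
            + p_attack_prior * 2.0 ** -d)
```

Plugged into a likelihood-ratio decision rule, this is the idea behind the "Uniform LLR" used in the experiments below.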


[Figure: at testing time, the uniform attack component adds a flat term to
Pts(X, Y=M) = Pts(X, Y=M | A=T) P(A=T) + Pts(X, Y=M, A=F).]
Experiments
Spoofing attacks against biometric systems
• Multi-modal biometric verification systems
      – Spoofing attacks


[Diagram: spoofing attacks (fake fingerprints, photo attack) against a multi-modal verification system: the face matcher and fingerprint matcher produce scores s1 and s2 for the claimed identity, combined by a fusion module into a genuine/impostor decision.]
Experiments
  Multi-modal biometric identity verification

[Diagram: sensors feed the face and fingerprint matchers; their scores s1 and s2 are combined by the score fusion rule s = f(s1, s2) and compared with a threshold s*: genuine if s ≥ s*, impostor otherwise.]
• Data set
      – NIST Biometric Score Set 1 (publicly available)
• Fusion rules
      – Likelihood ratio (LLR): s = p(s1 | G) p(s2 | G) / (p(s1 | I) p(s2 | I))
      – Extended LLR
          [Rodrigues et al., Robustness of multimodal biometric fusion methods against spoof attacks, JVLC 2009]

      – Our approach (Uniform LLR)
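A sketch of the standard LLR fusion rule; the Gaussian score models and their parameters below are invented for illustration (the NIST score set provides empirical score distributions instead):

```python
import math

def gauss(s, mu, sigma):
    """Univariate Gaussian density, used as a stand-in score model."""
    return math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical genuine (G) / impostor (I) score models per matcher:
models = {'face':        {'G': (0.8, 0.1), 'I': (0.3, 0.1)},
          'fingerprint': {'G': (0.7, 0.1), 'I': (0.2, 0.1)}}

def llr(s1, s2):
    """s = p(s1 | G) p(s2 | G) / (p(s1 | I) p(s2 | I)); accept if s >= s*."""
    num = gauss(s1, *models['face']['G']) * gauss(s2, *models['fingerprint']['G'])
    den = gauss(s1, *models['face']['I']) * gauss(s2, *models['fingerprint']['I'])
    return num / den

# Genuine-looking scores give a large ratio, impostor-looking scores a small one
```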

Remarks on experiments
• The Extended LLR [Rodrigues et al., 2009] used for
  comparison assumes that the attack distribution is
  equal to the distribution of legitimate patterns:

      Pts(X, Y=L) = Pts(X, Y=M | A=T)

• Our rule, the Uniform LLR, assumes a uniform attack distribution

Experiments are done assuming that attack patterns
are exact replicas of legitimate patterns (worst case)
Experiments
  Multi-modal biometric identity verification
• Uniform LLR under fingerprint spoofing attacks
      – Security (FAR) vs accuracy (GAR) for different P(A=T)
        values
      – No attack (solid) / under attack (dashed)




Experiments
  Multi-modal biometric identity verification
• Uniform vs Extended LLR under fingerprint spoofing




Experiments
                                 Spam filtering
• Similar results obtained in spam filtering
      – TREC 2007 public data set
      – Naive Bayes text classifier
      – GWI/BWO attacks with nMAX modified words per spam
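The GWI/BWO attack with a budget of nMAX modified words can be simulated as at most nMAX bit flips per spam. A simple greedy sketch; the index sets are hypothetical:

```python
def attack_nmax(x, spam_idx, good_idx, n_max):
    """Modify at most n_max word features per spam: first obfuscate
    present spam words (BWO), then insert absent good words (GWI)."""
    x = list(x)
    budget = n_max
    for i in spam_idx:
        if budget and x[i] == 1:
            x[i] = 0           # bad word obfuscation
            budget -= 1
    for i in good_idx:
        if budget and x[i] == 0:
            x[i] = 1           # good word insertion
            budget -= 1
    return x

x = [1, 1, 0, 0, 0, 0]
attack_nmax(x, spam_idx=[0, 1], good_idx=[4, 5], n_max=3)
# two obfuscations plus one insertion: 3 changes in total
```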




[Figure: ROC curves (TP rate vs FP rate in [0, 0.1]); performance measured as AUC10%.]
Conclusions and future work
• We presented a general generative approach
  for designing robust classifiers against attacks at
  test time

• Reported results show that our approach allows us
  to increase the robustness (i.e., the security) of
  classifiers

• Future work
      – To test Uniform LLR against more realistic spoof attacks
            • Preliminary result: worst-case assumption is too pessimistic!
                Biggio, Akhtar, Fumera, Marcialis, Roli, “Robustness of multimodal biometric
                systems under realistic spoof attacks”, IJCB 2011




Oct. 10, 2011      Design of robust classifiers for adversarial environments - F. Roli - SMC2011   20

More Related Content

More from Pluribus One

On Security and Sparsity of Linear Classifiers for Adversarial Settings
On Security and Sparsity of Linear Classifiers for Adversarial SettingsOn Security and Sparsity of Linear Classifiers for Adversarial Settings
On Security and Sparsity of Linear Classifiers for Adversarial SettingsPluribus One
 
Secure Kernel Machines against Evasion Attacks
Secure Kernel Machines against Evasion AttacksSecure Kernel Machines against Evasion Attacks
Secure Kernel Machines against Evasion AttacksPluribus One
 
Machine Learning under Attack: Vulnerability Exploitation and Security Measures
Machine Learning under Attack: Vulnerability Exploitation and Security MeasuresMachine Learning under Attack: Vulnerability Exploitation and Security Measures
Machine Learning under Attack: Vulnerability Exploitation and Security MeasuresPluribus One
 
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...Pluribus One
 
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...Pluribus One
 
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...Pluribus One
 
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...Pluribus One
 
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware ClusteringBattista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware ClusteringPluribus One
 
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...Pluribus One
 
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...Pluribus One
 
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"Pluribus One
 
Zahid Akhtar - Ph.D. Defense Slides
Zahid Akhtar - Ph.D. Defense SlidesZahid Akhtar - Ph.D. Defense Slides
Zahid Akhtar - Ph.D. Defense SlidesPluribus One
 
Robustness of multimodal biometric verification systems under realistic spoof...
Robustness of multimodal biometric verification systems under realistic spoof...Robustness of multimodal biometric verification systems under realistic spoof...
Robustness of multimodal biometric verification systems under realistic spoof...Pluribus One
 
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...Pluribus One
 
Amilab IJCB 2011 Poster
Amilab IJCB 2011 PosterAmilab IJCB 2011 Poster
Amilab IJCB 2011 PosterPluribus One
 
Ariu - Workshop on Artificial Intelligence and Security - 2011
Ariu - Workshop on Artificial Intelligence and Security - 2011Ariu - Workshop on Artificial Intelligence and Security - 2011
Ariu - Workshop on Artificial Intelligence and Security - 2011Pluribus One
 
Ariu - Workshop on Applications of Pattern Analysis 2010 - Poster
Ariu - Workshop on Applications of Pattern Analysis 2010 - PosterAriu - Workshop on Applications of Pattern Analysis 2010 - Poster
Ariu - Workshop on Applications of Pattern Analysis 2010 - PosterPluribus One
 
Ariu - Workshop on Multiple Classifier Systems - 2011
Ariu - Workshop on Multiple Classifier Systems - 2011Ariu - Workshop on Multiple Classifier Systems - 2011
Ariu - Workshop on Multiple Classifier Systems - 2011Pluribus One
 
Ariu - Workshop on Applications of Pattern Analysis
Ariu - Workshop on Applications of Pattern AnalysisAriu - Workshop on Applications of Pattern Analysis
Ariu - Workshop on Applications of Pattern AnalysisPluribus One
 
Ariu - Workshop on Multiple Classifier Systems 2011
Ariu - Workshop on Multiple Classifier Systems 2011Ariu - Workshop on Multiple Classifier Systems 2011
Ariu - Workshop on Multiple Classifier Systems 2011Pluribus One
 

More from Pluribus One (20)

On Security and Sparsity of Linear Classifiers for Adversarial Settings
On Security and Sparsity of Linear Classifiers for Adversarial SettingsOn Security and Sparsity of Linear Classifiers for Adversarial Settings
On Security and Sparsity of Linear Classifiers for Adversarial Settings
 
Secure Kernel Machines against Evasion Attacks
Secure Kernel Machines against Evasion AttacksSecure Kernel Machines against Evasion Attacks
Secure Kernel Machines against Evasion Attacks
 
Machine Learning under Attack: Vulnerability Exploitation and Security Measures
Machine Learning under Attack: Vulnerability Exploitation and Security MeasuresMachine Learning under Attack: Vulnerability Exploitation and Security Measures
Machine Learning under Attack: Vulnerability Exploitation and Security Measures
 
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
 
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
 
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
 
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
 
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware ClusteringBattista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
 
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
 
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
 
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
 
Zahid Akhtar - Ph.D. Defense Slides
Zahid Akhtar - Ph.D. Defense SlidesZahid Akhtar - Ph.D. Defense Slides
Zahid Akhtar - Ph.D. Defense Slides
 
Robustness of multimodal biometric verification systems under realistic spoof...
Robustness of multimodal biometric verification systems under realistic spoof...Robustness of multimodal biometric verification systems under realistic spoof...
Robustness of multimodal biometric verification systems under realistic spoof...
 
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
 
Amilab IJCB 2011 Poster
Amilab IJCB 2011 PosterAmilab IJCB 2011 Poster
Amilab IJCB 2011 Poster
 
Ariu - Workshop on Artificial Intelligence and Security - 2011
Ariu - Workshop on Artificial Intelligence and Security - 2011Ariu - Workshop on Artificial Intelligence and Security - 2011
Ariu - Workshop on Artificial Intelligence and Security - 2011
 
Ariu - Workshop on Applications of Pattern Analysis 2010 - Poster
Ariu - Workshop on Applications of Pattern Analysis 2010 - PosterAriu - Workshop on Applications of Pattern Analysis 2010 - Poster
Ariu - Workshop on Applications of Pattern Analysis 2010 - Poster
 
Ariu - Workshop on Multiple Classifier Systems - 2011
Ariu - Workshop on Multiple Classifier Systems - 2011Ariu - Workshop on Multiple Classifier Systems - 2011
Ariu - Workshop on Multiple Classifier Systems - 2011
 
Ariu - Workshop on Applications of Pattern Analysis
Ariu - Workshop on Applications of Pattern AnalysisAriu - Workshop on Applications of Pattern Analysis
Ariu - Workshop on Applications of Pattern Analysis
 
Ariu - Workshop on Multiple Classifier Systems 2011
Ariu - Workshop on Multiple Classifier Systems 2011Ariu - Workshop on Multiple Classifier Systems 2011
Ariu - Workshop on Multiple Classifier Systems 2011
 

Recently uploaded

Scientific Writing :Research Discourse
Scientific  Writing :Research  DiscourseScientific  Writing :Research  Discourse
Scientific Writing :Research DiscourseAnita GoswamiGiri
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management SystemChristalin Nelson
 
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...DhatriParmar
 
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdfGrade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdfJemuel Francisco
 
How to Make a Duplicate of Your Odoo 17 Database
How to Make a Duplicate of Your Odoo 17 DatabaseHow to Make a Duplicate of Your Odoo 17 Database
How to Make a Duplicate of Your Odoo 17 DatabaseCeline George
 
Congestive Cardiac Failure..presentation
Congestive Cardiac Failure..presentationCongestive Cardiac Failure..presentation
Congestive Cardiac Failure..presentationdeepaannamalai16
 
week 1 cookery 8 fourth - quarter .pptx
week 1 cookery 8  fourth  -  quarter .pptxweek 1 cookery 8  fourth  -  quarter .pptx
week 1 cookery 8 fourth - quarter .pptxJonalynLegaspi2
 
Oppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmOppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmStan Meyer
 
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Association for Project Management
 
ICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfVanessa Camilleri
 
ROLES IN A STAGE PRODUCTION in arts.pptx
ROLES IN A STAGE PRODUCTION in arts.pptxROLES IN A STAGE PRODUCTION in arts.pptx
ROLES IN A STAGE PRODUCTION in arts.pptxVanesaIglesias10
 
Mythology Quiz-4th April 2024, Quiz Club NITW
Mythology Quiz-4th April 2024, Quiz Club NITWMythology Quiz-4th April 2024, Quiz Club NITW
Mythology Quiz-4th April 2024, Quiz Club NITWQuiz Club NITW
 
Active Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfActive Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfPatidar M
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptxmary850239
 
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptx
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptxMan or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptx
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptxDhatriParmar
 
Multi Domain Alias In the Odoo 17 ERP Module
Multi Domain Alias In the Odoo 17 ERP ModuleMulti Domain Alias In the Odoo 17 ERP Module
Multi Domain Alias In the Odoo 17 ERP ModuleCeline George
 
ESP 4-EDITED.pdfmmcncncncmcmmnmnmncnmncmnnjvnnv
ESP 4-EDITED.pdfmmcncncncmcmmnmnmncnmncmnnjvnnvESP 4-EDITED.pdfmmcncncncmcmmnmnmncnmncmnnjvnnv
ESP 4-EDITED.pdfmmcncncncmcmmnmnmncnmncmnnjvnnvRicaMaeCastro1
 
ClimART Action | eTwinning Project
ClimART Action    |    eTwinning ProjectClimART Action    |    eTwinning Project
ClimART Action | eTwinning Projectjordimapav
 

Recently uploaded (20)

Scientific Writing :Research Discourse
Scientific  Writing :Research  DiscourseScientific  Writing :Research  Discourse
Scientific Writing :Research Discourse
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management System
 
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
 
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdfGrade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
 
How to Make a Duplicate of Your Odoo 17 Database
How to Make a Duplicate of Your Odoo 17 DatabaseHow to Make a Duplicate of Your Odoo 17 Database
How to Make a Duplicate of Your Odoo 17 Database
 
Congestive Cardiac Failure..presentation
Congestive Cardiac Failure..presentationCongestive Cardiac Failure..presentation
Congestive Cardiac Failure..presentation
 
week 1 cookery 8 fourth - quarter .pptx
week 1 cookery 8  fourth  -  quarter .pptxweek 1 cookery 8  fourth  -  quarter .pptx
week 1 cookery 8 fourth - quarter .pptx
 
Oppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmOppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and Film
 
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
 
Mattingly "AI & Prompt Design: Large Language Models"
Mattingly "AI & Prompt Design: Large Language Models"Mattingly "AI & Prompt Design: Large Language Models"
Mattingly "AI & Prompt Design: Large Language Models"
 
ICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdf
 
ROLES IN A STAGE PRODUCTION in arts.pptx
ROLES IN A STAGE PRODUCTION in arts.pptxROLES IN A STAGE PRODUCTION in arts.pptx
ROLES IN A STAGE PRODUCTION in arts.pptx
 
Mythology Quiz-4th April 2024, Quiz Club NITW
Mythology Quiz-4th April 2024, Quiz Club NITWMythology Quiz-4th April 2024, Quiz Club NITW
Mythology Quiz-4th April 2024, Quiz Club NITW
 
Active Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfActive Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdf
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx
 
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptx
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptxMan or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptx
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptx
 
Multi Domain Alias In the Odoo 17 ERP Module
Multi Domain Alias In the Odoo 17 ERP ModuleMulti Domain Alias In the Odoo 17 ERP Module
Multi Domain Alias In the Odoo 17 ERP Module
 
ESP 4-EDITED.pdfmmcncncncmcmmnmnmncnmncmnnjvnnv
ESP 4-EDITED.pdfmmcncncncmcmmnmnmncnmncmnnjvnnvESP 4-EDITED.pdfmmcncncncmcmmnmnmncnmncmnnjvnnv
ESP 4-EDITED.pdfmmcncncncmcmmnmnmncnmncmnnjvnnv
 
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptxINCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
 
ClimART Action | eTwinning Project
ClimART Action    |    eTwinning ProjectClimART Action    |    eTwinning Project
ClimART Action | eTwinning Project
 

Design of robust classifiers for adversarial environments - Systems, Man, and Cybernetics (SMC), 2011 IEEE International Conference on

• Exploiting the data model for designing more robust classifiers
Modelling attacks at test time

• Two-class problem
      – X is the feature vector
      – Y is the class label: legitimate (L) or malicious (M)

                P(X, Y) = P(Y) P(X | Y)

• In adversarial scenarios, attacks can influence both X and Y
Manipulation attacks against anti-spam filters

• Text classifiers in spam filtering
      – binary features (presence/absence of keywords)
• Common attacks
      – bad word obfuscation (BWO) and good word insertion (GWI)

       Buy viagra!   →   Buy vi4gr4! Did you ever play that game when you were a kid
                         where the little plastic hippo tries to gobble up all your marbles?

       x = [0 0 1 0 0 0 0 0 …]   →   x' = [0 0 0 0 1 0 0 1 …],   i.e. x' = A(x)
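To make the feature-vector manipulation above concrete, here is a minimal sketch of GWI/BWO over binary keyword features; the vocabulary and the `attack` function are illustrative assumptions, not taken from the paper:

```python
import re

# Sketch of how BWO/GWI change a binary keyword feature vector x into x' = A(x).
# The vocabulary and the attack function below are illustrative assumptions.
VOCAB = ["cheap", "free", "viagra", "meeting", "hippo", "report", "game", "marbles"]

def features(text):
    """Binary features: presence (1) / absence (0) of each vocabulary keyword."""
    words = set(re.findall(r"[a-z0-9@]+", text.lower()))
    return [1 if w in words else 0 for w in VOCAB]

def attack(text):
    """A(x): obfuscate a bad word (BWO) and insert good words (GWI)."""
    obfuscated = text.replace("viagra", "vi4gr4")
    return obfuscated + " hippo game marbles"

x = features("Buy viagra!")
x_attacked = features(attack("Buy viagra!"))
print(x)           # the "viagra" feature is set
print(x_attacked)  # "viagra" is gone; innocuous words appear instead
```

After the attack the classifier sees a different point in feature space, which is exactly why a training-time estimate of P(X | Y = M) no longer matches the test-time distribution.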
Modelling attacks at test time

                P(X, Y) = P(Y) P(X | Y)

• In adversarial scenarios, attacks can influence X and Y
• We must model this influence to design robust classifiers
Modelling attacks at test time

                P(X, Y, A) = P(A) P(Y | A) P(X | Y, A)

• A is a random variable indicating whether the sample is an attack (True) or not (False)
• Y is the class label: legitimate (L) or malicious (M)
• X is the feature vector
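The factorisation can be read as an ancestral sampling recipe: draw A, then Y given A, then X given (Y, A). A toy sketch follows, where the priors and the class-conditional Gaussians are made-up values; only P(Y = M | A = T) = 1 reflects the model described later in the talk:

```python
import random

random.seed(0)

# Ancestral sampling from P(X, Y, A) = P(A) P(Y | A) P(X | Y, A).
# All numeric values are illustrative; P(Y=M | A=T) = 1 follows the model.
P_ATTACK = 0.3  # P(A = T)

def sample():
    a = random.random() < P_ATTACK                 # A ~ P(A)
    if a:
        y = "M"                                    # P(Y = M | A = T) = 1
        x = random.gauss(0.0, 1.0)                 # attacks mimic legitimate data
    else:
        y = "M" if random.random() < 0.5 else "L"  # P(Y | A = F)
        x = random.gauss(3.0, 1.0) if y == "M" else random.gauss(0.0, 1.0)
    return x, y, a

samples = [sample() for _ in range(10000)]
attack_fraction = sum(a for _, _, a in samples) / len(samples)
print(round(attack_fraction, 2))  # close to P_ATTACK
```

Every sampled attack is labelled malicious by construction, which is the modelling assumption made explicit on a later slide.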
Modelling attacks at test time

• Training time: Ptr(X, Y = L), Ptr(X, Y = M)
• Testing time:

                Pts(X, Y = M) = Pts(X, Y = M | A = T) P(A = T) + Pts(X, Y = M, A = F)

• The testing distribution includes attacks which were not present in the training phase!
Modelling attacks at testing time

• Attack distribution
      – P(X, Y = M, A = T) = P(X | Y = M, A = T) P(Y = M | A = T) P(A = T)
• Choice of P(Y = M | A = T)
      – We set it to 1, since we assume the adversary has control only over malicious samples
• P(A = T) is thus the fraction of attacks among malicious samples
      – It is a parameter which tunes the security/accuracy trade-off
      – The more attacks are simulated during the training phase, the more robust (but less accurate when there are no attacks) the classifier is expected to be at testing time
Modelling attacks at testing time

• Key issue: modelling Pts(X, Y = M | A = T)
Modelling attacks at testing time

• Choice of Pts(X, Y = M | A = T)
      – Requires application-specific knowledge
      – Even when knowledge about the attack is available, it is still difficult to model analytically
      – An agnostic choice is the uniform distribution

                Pts(X, Y = M) = Pts(X, Y = M | A = T) P(A = T) + Pts(X, Y = M, A = F)
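A one-dimensional sketch of this mixture, with a uniform attack component over a bounded feature range; the Gaussian training density, the bounds, P(Y = M | A = F), and P(A = T) = 0.3 are all illustrative assumptions:

```python
import math

LO, HI = -5.0, 10.0   # assumed bounded feature range for the uniform attack model
P_ATTACK = 0.3        # P(A = T): the security/accuracy trade-off knob
P_Y_M = 0.5           # P(Y = M | A = F), assumed

def p_malicious_train(x, mean=3.0, std=1.0):
    """Ptr(x | Y = M): malicious density seen at training time (illustrative)."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def p_uniform(x):
    """Agnostic attack model: uniform density over [LO, HI]."""
    return 1.0 / (HI - LO) if LO <= x <= HI else 0.0

def p_malicious_test(x):
    """Pts(x, Y=M) = Pts(x, Y=M | A=T) P(A=T) + Pts(x, Y=M, A=F)."""
    return p_uniform(x) * P_ATTACK + p_malicious_train(x) * P_Y_M * (1 - P_ATTACK)
```

Increasing `P_ATTACK` shifts probability mass from the training density to the uniform component, which is the security/accuracy trade-off controlled by P(A = T).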
Experiments
Spoofing attacks against biometric systems

• Multi-modal biometric verification systems
      – Spoofing attacks: fake fingerprints, photo attacks
      – A face matcher and a fingerprint matcher output scores s1 and s2 for the claimed identity; a fusion module combines them to decide Genuine / Impostor
Experiments
Multi-modal biometric identity verification

• Setup: each sensor feeds a matcher (face → s1, fingerprint → s2); a score fusion rule s = f(s1, s2) is compared to a threshold to decide genuine vs. impostor
• Data set
      – NIST Biometric Score Set 1 (publicly available)
• Fusion rules
      – Likelihood ratio (LLR):  s = p(s1 | G) p(s2 | G) / ( p(s1 | I) p(s2 | I) )
      – Extended LLR [Rodrigues et al., Robustness of multimodal biometric fusion methods against spoof attacks, JVLC 2009]
      – Our approach (Uniform LLR)
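The baseline LLR fusion rule can be sketched by estimating genuine/impostor score densities and thresholding their ratio. The Gaussian densities and their parameters below are hypothetical stand-ins for densities estimated on the actual NIST scores:

```python
import math

# Likelihood-ratio (LLR) score fusion for two matchers, assuming independent
# scores. Gaussian densities and parameters are illustrative assumptions,
# not estimates from NIST Biometric Score Set 1.

def gauss(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# (mean, std) of each matcher's score under Genuine (G) and Impostor (I)
GEN = [(0.8, 0.1), (0.7, 0.15)]   # face, fingerprint under G
IMP = [(0.3, 0.1), (0.2, 0.15)]   # face, fingerprint under I

def llr(s1, s2):
    """s = p(s1|G) p(s2|G) / (p(s1|I) p(s2|I))."""
    num = gauss(s1, *GEN[0]) * gauss(s2, *GEN[1])
    den = gauss(s1, *IMP[0]) * gauss(s2, *IMP[1])
    return num / den

def decide(s1, s2, threshold=1.0):
    return "genuine" if llr(s1, s2) > threshold else "impostor"

print(decide(0.8, 0.7))  # high scores -> genuine
print(decide(0.3, 0.2))  # low scores -> impostor
```

The Extended and Uniform LLR variants differ only in how the impostor (malicious) densities are modified to account for the spoofing attack component.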
Remarks on experiments

• The Extended LLR [Rodrigues et al., 2009] used for comparison assumes that the attack distribution is equal to the distribution of legitimate patterns:

                Pts(X, Y = M | A = T) = Pts(X, Y = L)

• Our rule, the Uniform LLR, assumes a uniform attack distribution
• Experiments are done assuming that attack patterns are exact replicas of legitimate patterns (worst case)
Experiments
Multi-modal biometric identity verification

• Uniform LLR under fingerprint spoofing attacks
      – Security (FAR) vs accuracy (GAR) for different P(A = T) values
      – No attack (solid) / under attack (dashed)
Experiments
Multi-modal biometric identity verification

• Uniform vs Extended LLR under fingerprint spoofing
Experiments
Spam filtering

• Similar results obtained in spam filtering
      – TREC 2007 public data set
      – Naive Bayes text classifier
      – GWI/BWO attacks with nMAX modified words per spam
      – Performance measured as AUC10%, the area under the ROC curve (TP vs FP) restricted to FP ≤ 0.1
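The AUC10% metric (area under the ROC curve up to a false-positive rate of 0.1, normalised) can be computed from classifier scores with a short sketch; the scores and labels below are synthetic, and the trapezoidal integration is one reasonable implementation choice:

```python
# Partial AUC up to a false-positive rate of 0.1 ("AUC10%"), computed by
# trapezoidal integration of the ROC curve. Scores and labels are synthetic.

def roc_points(scores, labels):
    """ROC as (FP rate, TP rate) points, sweeping the decision threshold."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc_at(points, max_fp=0.1):
    """Area under the ROC curve for FP rate <= max_fp, normalised to [0, 1]."""
    area = 0.0
    px, py = points[0]
    for x, y in points[1:]:
        xc = min(x, max_fp)
        if xc > px:
            # interpolate the TP rate at xc, then add the trapezoid
            yc = py + (y - py) * (xc - px) / (x - px) if x > px else y
            area += (xc - px) * (py + yc) / 2
        px, py = min(x, max_fp), y
        if x >= max_fp:
            break
    return area / max_fp

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]   # 1 = spam, 0 = ham
print(round(auc_at(roc_points(scores, labels), 0.1), 3))  # → 0.75
```

Restricting the area to low FP rates matters in spam filtering because operating points with many false positives (legitimate mail marked as spam) are unusable in practice.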
Conclusions and future works

• We presented a general generative approach for designing robust classifiers against attacks at test time
• Reported results show that our approach allows increasing the robustness (i.e., the security) of classifiers
• Future work
      – To test the Uniform LLR against more realistic spoofing attacks
      – Preliminary result: the worst-case assumption is too pessimistic!
        [Biggio, Akhtar, Fumera, Marcialis, Roli, “Robustness of multimodal biometric systems under realistic spoof attacks”, IJCB 2011]