Linking data without common identifiers Semantic Web Meetup, 2011-08-23 Lars Marius Garshol, <larsga@bouvet.no> http://twitter.com/larsga
About me Lars Marius Garshol, Bouvet consultant Worked with semantic technologies since 1999 mostly Topic Maps Before that I worked with XML Also background from Opera Software (Unicode support) open source development (Java/Python)
Agenda Linking data – the problem Record linkage theory Duke A real-world example Usage at Hafslund
The problem Data sets from different sources generally don't share identifiers, and names tend to be spelled every which way. So how can they be linked?
A real-world example
A difficult problem It requires n² comparisons for n records: a million comparisons for 1000 records, 100 million for 10,000, ... Exact string comparison is not enough: must handle misspellings, name variants, etc. Interpreting the data can be difficult even for a human being: is the address different because there are two different people, or because the person moved? ...
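The quadratic blow-up is easy to verify: comparing every record against every other means n(n-1)/2 distinct pairs, half the n² figure but still quadratic. A quick sketch:

```python
def pair_count(n):
    """Number of distinct record pairs to compare: n choose 2."""
    return n * (n - 1) // 2

print(pair_count(1_000))    # 499,500 pairs for 1,000 records
print(pair_count(10_000))   # 49,995,000 pairs for 10,000 records
```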
Help from an unexpected quarter Statisticians to the rescue!
Record linkage Statisticians very often must connect data sets from different sources. They call it "record linkage": the term was coined in 1946 [1], the mathematical foundations were laid in 1959 [2], and it was formalized in 1969 as the "Fellegi-Sunter" model [3]. A whole subject area has been developed with well-known techniques, methods, and tools; these can of course be applied outside of statistics. [1] http://ajph.aphapublications.org/cgi/reprint/36/12/1412 [2] http://www.sciencemag.org/content/130/3381/954.citation [3] http://www.jstor.org/pss/2286061
Other terms for the same thing Has been independently invented many times Under many names entity resolution identity resolution merge/purge deduplication ... This makes Googling for information an absolute nightmare
Application areas Statistics (obviously) Data cleaning Data integration Conversion Fraud detection / intelligence / surveillance
Mathematical model
Model, simplified We work with records, each of which has fields with values. Before doing record linkage we've processed the data so that they have the same set of fields (fields which exist in only one of the sources get discarded, as they are no help to us). We compare values for the same fields one by one, then estimate the probability that the records represent the same entity given these values; the probability depends on which field and what values, and estimation can be very complex. Finally, we can compute an overall probability.
Example
String comparisons Must handle spelling errors and name variants: must estimate the probability that values belong to the same entity despite differences. Examples: John Smith ≈ Jonh Smith, J. Random Hacker ≈ James Random Hacker, ... Many well-known algorithms have been developed; no one best choice, unfortunately: some are eager to merge, others less so. Many are computationally expensive: O(n²) where n is the length of the string is common, and this gets very costly when the number of record pairs to compare is already high.
Examples of algorithms Levenshtein edit distance, i.e. the number of edits needed to change string 1 into string 2; not so eager to merge. Jaro-Winkler: specifically for short person names; very promiscuous. Soundex: essentially a phonetic hashing algorithm; very fast, but also very crude. Longest common subsequence / q-grams: haven't tried these yet. Monge-Elkan: one of the best, but computationally expensive. TFIDF: based on term frequency; studies often find this performs best, but it's expensive.
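As an illustration of the first algorithm on the list, here is a minimal Levenshtein edit distance in the classic dynamic-programming formulation (Duke's own implementation is in Java; this Python sketch is just for exposition):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: minimum insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("John Smith", "Jonh Smith"))  # 2 (the h/n swap)
```

Note that a plain Levenshtein counts a transposition as two substitutions; variants like Damerau-Levenshtein treat it as one edit.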
Existing record linkage tools Commercial tools big, sophisticated, and expensive have found little information on what they actually do presumably also effective Open source tools generally made by and for statisticians nice user interfaces and rich configurability architecture often not as flexible as it could be
Standard algorithm n² comparisons for n records is unacceptable: must reduce the number of direct comparisons. Solution: produce a key from field values, sort the records by the key, and for each record, compare with the n nearest neighbours; sometimes several different keys are produced, to increase the chances of finding matches. Downsides: requires coming up with a key, difficult to apply incrementally, and sorting is expensive.
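The key-and-sort idea above (often called the sorted-neighbourhood method) can be sketched as follows; the blocking key here (the first six characters of a "surname first" name) is a made-up example, and picking a good key is exactly the hard part the slide mentions:

```python
def sorted_neighborhood(records, key, window=3):
    """Sort records by a blocking key, then compare each record
    only with its `window` nearest neighbours in sort order."""
    ordered = sorted(records, key=key)
    pairs = []
    for i, rec in enumerate(ordered):
        for j in range(i + 1, min(i + 1 + window, len(ordered))):
            pairs.append((rec, ordered[j]))
    return pairs

people = ["smith john", "smyth john", "garshol lars", "garshol l m"]
# key: first six characters of the name
pairs = sorted_neighborhood(people, key=lambda r: r[:6], window=1)
for a, b in pairs:
    print(a, "<->", b)
```

With window=1 this produces only 3 candidate pairs instead of all 6, but note that "smith john" / "smyth john" only end up adjacent because the misspelling comes late in the key.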
Good research papers Threat and Fraud Intelligence, Las Vegas Style, Jeff Jonas http://jeffjonas.typepad.com/IEEE.Identity.Resolution.pdf Real-world data is dirty: Data Cleansing and the Merge/Purge Problem, Hernandez & Stolfo http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.30.3496&rep=rep1&type=pdf Swoosh: a generic approach to entity resolution, Benjelloun, Garcia-Molina et al http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.122.5696&rep=rep1&type=pdf
DUplicate KillEr Duke
Context Doing a project for Hafslund where we integrate data from many sources. Entities are duplicated both inside systems and across systems. (Diagram: suppliers, customers, and companies spread across the CRM, Billing, and ERP systems.)
Requirements Must be flexible and configurable: no way to know in advance exactly what data we will need to deduplicate. Must scale to large data sets: CRM alone has 1.4 million customer records; that's 2 trillion comparisons with the naïve approach. Must have an API: the project uses the SDshare protocol everywhere, so we must be able to build connectors. Must be able to work incrementally: process data as it changes and update conclusions on the fly.
Reviewed existing tools... ...but didn't find anything matching the criteria Therefore...
Duke Java deduplication engine released as open source http://code.google.com/p/duke/ Does not use the key approach; instead indexes the data with Lucene and does Lucene searches to find potential matches. Still a work in progress, but: high performance (1M records in 10 minutes), several data sources and comparators, already used in real projects, flexible architecture.
How it works XML configuration file defines the setup: lists data sources, properties, probabilities, etc. Data sources provide streams of records. Steps: normalize values so they are ready for comparison; collect a batch of records, then index it with Lucene; compare records one by one against the Lucene index; do detailed comparisons against the search results. Pairs with probability above a configurable threshold are considered matches. For now, only API and command-line.
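A much-simplified sketch of the candidate-retrieval step, with a toy in-memory inverted index standing in for Lucene (the record layout and the NAME field are made up for illustration; Duke itself does real Lucene searches):

```python
from collections import defaultdict

def normalize(value):
    """Cleaning step: lower-case and trim before comparison."""
    return value.lower().strip()

def build_index(records, fields):
    """Toy inverted index (token -> record ids), standing in for Lucene."""
    index = defaultdict(set)
    for rid, rec in records.items():
        for field in fields:
            for token in normalize(rec[field]).split():
                index[token].add(rid)
    return index

def candidates(index, record, fields):
    """Find records sharing at least one token -- the cheap search that
    limits how many detailed comparisons must be done."""
    found = set()
    for field in fields:
        for token in normalize(record[field]).split():
            found |= index[token]
    return found

records = {1: {"NAME": "Norway"},
           2: {"NAME": "Kingdom of Norway"},
           3: {"NAME": "Sweden"}}
index = build_index(records, ["NAME"])
print(candidates(index, {"NAME": "norway"}, ["NAME"]))  # {1, 2}
```

Only the candidates found this way go on to the detailed, expensive field-by-field comparison.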
Record comparisons in detail Comparators compare field values and return a number in the range 0.0 – 1.0. The probability for the field is the high probability if the number is 1.0, the low probability if it is 0.0, and otherwise in between the two. The probability for the entire record is computed using Bayes's formula, an approach known as "naive Bayes" in the research literature.
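The pairwise application of Bayes's formula can be sketched like this, starting from a neutral 0.5 prior and folding in one field probability at a time (the exact bookkeeping in Duke may differ; the probabilities below are just example values):

```python
from functools import reduce

def combine(prob_so_far, field_prob):
    """Fold one field's probability into the running estimate with
    Bayes's formula, assuming fields are independent (naive Bayes)."""
    return (prob_so_far * field_prob) / (
        prob_so_far * field_prob + (1 - prob_so_far) * (1 - field_prob))

# e.g. two fields match strongly (0.88) and one weakly (0.6)
overall = reduce(combine, [0.88, 0.6, 0.88], 0.5)
print(round(overall, 4))  # 0.9878
```

Each field above 0.5 pushes the overall probability up, each field below 0.5 pulls it down, and a 0.5 leaves it unchanged.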
Components Data sources CSV JDBC Sparql NTriples <plug in your own> Comparators ExactComparator NumericComparator SoundexComparator TokenizedComparator Levenshtein JaroWinkler Dice coefficient
Features Fairly complete command-line tool with debugging support API for embedding the tool Two modes: deduplication (all records matched against each other) linking (only matching across groups of records) Pluggable data sources, comparators, and cleaners Framework for composing cleaners Incremental processing High performance
A real-world example Matching Mondial and DBpedia
Finding properties to match Need properties providing identity evidence Matching on the properties in bold below Extracted data to CSV for ease of use
Configuration – data sources

  <group>
    <csv>
      <param name="input-file" value="dbpedia.csv"/>
      <param name="header-line" value="false"/>
      <column name="1" property="ID"/>
      <column name="2"
              cleaner="no.priv...CountryNameCleaner"
              property="NAME"/>
      <column name="3" property="AREA"/>
      <column name="4"
              cleaner="no.priv...CapitalCleaner"
              property="CAPITAL"/>
    </csv>
  </group>

  <group>
    <csv>
      <param name="input-file" value="mondial.csv"/>
      <column name="id" property="ID"/>
      <column name="country"
              cleaner="no.priv...examples.CountryNameCleaner"
              property="NAME"/>
      <column name="capital"
              cleaner="no.priv...LowerCaseNormalizeCleaner"
              property="CAPITAL"/>
      <column name="area" property="AREA"/>
    </csv>
  </group>
Configuration – matching

  <schema>
    <threshold>0.65</threshold>
    <property type="id">
      <name>ID</name>
    </property>
    <property>
      <name>NAME</name>
      <comparator>no.priv.garshol.duke.Levenshtein</comparator>
      <low>0.3</low>
      <high>0.88</high>
    </property>
    <property>
      <name>AREA</name>
      <comparator>AreaComparator</comparator>
      <low>0.2</low>
      <high>0.6</high>
    </property>
    <property>
      <name>CAPITAL</name>
      <comparator>no.priv.garshol.duke.Levenshtein</comparator>
      <low>0.4</low>
      <high>0.88</high>
    </property>
  </schema>

  <object class="no.priv.garshol.duke.NumericComparator"
          name="AreaComparator">
    <param name="min-ratio" value="0.7"/>
  </object>

Duke analyzes this setup and decides only NAME and CAPITAL need to be searched on in Lucene.
Result Correct links found: 206 / 217 (94.9%) Wrong links found: 0 / 12 (0.0%) Unknown links found: 0 Percent of links correct 100.0%, wrong 0.0%, unknown 0.0% Records with no link: 25 Precision 100.0%, recall 94.9%, f-number 0.974
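The reported figures follow directly from the standard definitions, computed from the counts on the slide:

```python
correct, wrong, total_true = 206, 0, 217

precision = correct / (correct + wrong)   # 206/206 = 100.0%
recall    = correct / total_true          # 206/217 = 94.9%
f_number  = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.1%} recall={recall:.1%} f-number={f_number:.3f}")
# precision=100.0% recall=94.9% f-number=0.974
```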
Examples
Choosing the right match
An example of failure Duke doesn't find this match: no tokens match exactly, so the Lucene search finds nothing. The detailed comparison gives the correct result, so the only problem is the Lucene search. Lucene does have Levenshtein search, but in Lucene 3.x it's very slow and therefore not enabled for now; thinking of adding an option to enable it where needed. Lucene 4.x will fix the performance problem.
The effects of value matching
Duke in real life Usage at Hafslund
The SESAM project Building a new archival system does automatic tagging of documents based on knowledge about the data knowledge extracted from backend systems Search engine-based frontend using Recommind for this Very flexible architecture extracted data stored in triple store (Virtuoso) all data transfer based on SDshare protocol data extracted from RDBMSs with Ontopia's DB2TM
The big picture (Architecture diagram: the ERP, CRM, Billing, and 360 systems, which contain duplicates, feed Virtuoso over SDshare; Duke and Recommind consume SDshare feeds; the store contains owl:sameAs and haf:possiblySameAs links.)
Experiences so far Incremental processing works fine: links are added and retracted as the data changes. Performance is not an issue at all, but then only ~50,000 records so far... Matching works well, but not perfectly: the data are very noisy and messy; also, matching is hard.
Duke roadmap 0.3 clean up the public API and document properly maybe some more comparators support for writing owl:sameAs to Sparql endpoint 0.4 add a web service interface 0.5 and onwards more comparators maybe some parallelism
Slides will appear on http://slideshare.net/larsga Comments/questions?

More Related Content

What's hot

Natural Language Search with Knowledge Graphs (Haystack 2019)
Natural Language Search with Knowledge Graphs (Haystack 2019)Natural Language Search with Knowledge Graphs (Haystack 2019)
Natural Language Search with Knowledge Graphs (Haystack 2019)
Trey Grainger
Ā 
Real-time personalized recommendations using product embeddings
Real-time personalized recommendations using product embeddingsReal-time personalized recommendations using product embeddings
Real-time personalized recommendations using product embeddings
Jakub Macina
Ā 
William Vambenepe ā€“ Google Cloud Dataflow and Flink , Stream Processing by De...
William Vambenepe ā€“ Google Cloud Dataflow and Flink , Stream Processing by De...William Vambenepe ā€“ Google Cloud Dataflow and Flink , Stream Processing by De...
William Vambenepe ā€“ Google Cloud Dataflow and Flink , Stream Processing by De...
Flink Forward
Ā 

What's hot (20)

Data democratization the key to future proofing data culture
Data democratization the key to future proofing data cultureData democratization the key to future proofing data culture
Data democratization the key to future proofing data culture
Ā 
Introduction to Data Analytics
Introduction to Data AnalyticsIntroduction to Data Analytics
Introduction to Data Analytics
Ā 
Algorithmic Music Recommendations at Spotify
Algorithmic Music Recommendations at SpotifyAlgorithmic Music Recommendations at Spotify
Algorithmic Music Recommendations at Spotify
Ā 
Collaborative Filtering with Spark
Collaborative Filtering with SparkCollaborative Filtering with Spark
Collaborative Filtering with Spark
Ā 
From Idea to Execution: Spotify's Discover Weekly
From Idea to Execution: Spotify's Discover WeeklyFrom Idea to Execution: Spotify's Discover Weekly
From Idea to Execution: Spotify's Discover Weekly
Ā 
Accelerating Data Ingestion with Databricks Autoloader
Accelerating Data Ingestion with Databricks AutoloaderAccelerating Data Ingestion with Databricks Autoloader
Accelerating Data Ingestion with Databricks Autoloader
Ā 
Modeling data and best practices for the Azure Cosmos DB.
Modeling data and best practices for the Azure Cosmos DB.Modeling data and best practices for the Azure Cosmos DB.
Modeling data and best practices for the Azure Cosmos DB.
Ā 
Building Identity Graphs over Heterogeneous Data
Building Identity Graphs over Heterogeneous DataBuilding Identity Graphs over Heterogeneous Data
Building Identity Graphs over Heterogeneous Data
Ā 
The Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization OpportunitiesThe Parquet Format and Performance Optimization Opportunities
The Parquet Format and Performance Optimization Opportunities
Ā 
Restaurant recommender
Restaurant recommenderRestaurant recommender
Restaurant recommender
Ā 
Tools for Solving Performance Issues
Tools for Solving Performance IssuesTools for Solving Performance Issues
Tools for Solving Performance Issues
Ā 
Natural Language Search with Knowledge Graphs (Haystack 2019)
Natural Language Search with Knowledge Graphs (Haystack 2019)Natural Language Search with Knowledge Graphs (Haystack 2019)
Natural Language Search with Knowledge Graphs (Haystack 2019)
Ā 
Extracting Insights from Data at Twitter
Extracting Insights from Data at TwitterExtracting Insights from Data at Twitter
Extracting Insights from Data at Twitter
Ā 
Observability for Data Pipelines With OpenLineage
Observability for Data Pipelines With OpenLineageObservability for Data Pipelines With OpenLineage
Observability for Data Pipelines With OpenLineage
Ā 
Real-time personalized recommendations using product embeddings
Real-time personalized recommendations using product embeddingsReal-time personalized recommendations using product embeddings
Real-time personalized recommendations using product embeddings
Ā 
Data/AI driven product development: from video streaming to telehealth
Data/AI driven product development: from video streaming to telehealthData/AI driven product development: from video streaming to telehealth
Data/AI driven product development: from video streaming to telehealth
Ā 
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and HudiHow to build a streaming Lakehouse with Flink, Kafka, and Hudi
How to build a streaming Lakehouse with Flink, Kafka, and Hudi
Ā 
William Vambenepe ā€“ Google Cloud Dataflow and Flink , Stream Processing by De...
William Vambenepe ā€“ Google Cloud Dataflow and Flink , Stream Processing by De...William Vambenepe ā€“ Google Cloud Dataflow and Flink , Stream Processing by De...
William Vambenepe ā€“ Google Cloud Dataflow and Flink , Stream Processing by De...
Ā 
The evolution of the big data platform @ Netflix (OSCON 2015)
The evolution of the big data platform @ Netflix (OSCON 2015)The evolution of the big data platform @ Netflix (OSCON 2015)
The evolution of the big data platform @ Netflix (OSCON 2015)
Ā 
Knowledge Graphs are Worthless, Knowledge Graph Use Cases are Priceless
Knowledge Graphs are Worthless, Knowledge Graph Use Cases are PricelessKnowledge Graphs are Worthless, Knowledge Graph Use Cases are Priceless
Knowledge Graphs are Worthless, Knowledge Graph Use Cases are Priceless
Ā 

Viewers also liked

Progressive Texture
Progressive TextureProgressive Texture
Progressive Texture
Dr Rupesh Shet
Ā 
An adaptive algorithm for detection of duplicate records
An adaptive algorithm for detection of duplicate recordsAn adaptive algorithm for detection of duplicate records
An adaptive algorithm for detection of duplicate records
Likan Patra
Ā 
online Record Linkage
online Record Linkageonline Record Linkage
online Record Linkage
Priya Pandian
Ā 
novel and efficient approch for detection of duplicate pages in web crawling
novel and efficient approch for detection of duplicate pages in web crawlingnovel and efficient approch for detection of duplicate pages in web crawling
novel and efficient approch for detection of duplicate pages in web crawling
Vipin Kp
Ā 
Prescription event monitoring and record linkage system
Prescription event monitoring and record linkage systemPrescription event monitoring and record linkage system
Prescription event monitoring and record linkage system
Vineetha Menon
Ā 
Duplicate detection
Duplicate detectionDuplicate detection
Duplicate detection
jonecx
Ā 
Spontaneous reporting
Spontaneous reporting Spontaneous reporting
Spontaneous reporting
Madhuri Miriyala
Ā 

Viewers also liked (20)

Prescription event monitorig
Prescription event monitorigPrescription event monitorig
Prescription event monitorig
Ā 
Progressive Texture
Progressive TextureProgressive Texture
Progressive Texture
Ā 
Record matching over query results from Web Databases
Record matching over query results from Web DatabasesRecord matching over query results from Web Databases
Record matching over query results from Web Databases
Ā 
An adaptive algorithm for detection of duplicate records
An adaptive algorithm for detection of duplicate recordsAn adaptive algorithm for detection of duplicate records
An adaptive algorithm for detection of duplicate records
Ā 
online Record Linkage
online Record Linkageonline Record Linkage
online Record Linkage
Ā 
The Duplicitous Duplicate
The Duplicitous DuplicateThe Duplicitous Duplicate
The Duplicitous Duplicate
Ā 
novel and efficient approch for detection of duplicate pages in web crawling
novel and efficient approch for detection of duplicate pages in web crawlingnovel and efficient approch for detection of duplicate pages in web crawling
novel and efficient approch for detection of duplicate pages in web crawling
Ā 
Indexing Techniques for Scalable Record Linkage and Deduplication
Indexing Techniques for Scalable Record Linkage and DeduplicationIndexing Techniques for Scalable Record Linkage and Deduplication
Indexing Techniques for Scalable Record Linkage and Deduplication
Ā 
A Case Study in Record Linkage_PVER Conf_May2011
A Case Study in Record Linkage_PVER Conf_May2011A Case Study in Record Linkage_PVER Conf_May2011
A Case Study in Record Linkage_PVER Conf_May2011
Ā 
Prescription event monitoring and record linkage system
Prescription event monitoring and record linkage systemPrescription event monitoring and record linkage system
Prescription event monitoring and record linkage system
Ā 
Duplicate detection
Duplicate detectionDuplicate detection
Duplicate detection
Ā 
A study and survey on various progressive duplicate detection mechanisms
A study and survey on various progressive duplicate detection mechanismsA study and survey on various progressive duplicate detection mechanisms
A study and survey on various progressive duplicate detection mechanisms
Ā 
Open Source Data Deduplication
Open Source Data DeduplicationOpen Source Data Deduplication
Open Source Data Deduplication
Ā 
Tutorial 4 (duplicate detection)
Tutorial 4 (duplicate detection)Tutorial 4 (duplicate detection)
Tutorial 4 (duplicate detection)
Ā 
Progressive duplicate detection
Progressive duplicate detectionProgressive duplicate detection
Progressive duplicate detection
Ā 
Efficient Duplicate Detection Over Massive Data Sets
Efficient Duplicate Detection Over Massive Data SetsEfficient Duplicate Detection Over Massive Data Sets
Efficient Duplicate Detection Over Massive Data Sets
Ā 
Spontaneous reporting
Spontaneous reporting Spontaneous reporting
Spontaneous reporting
Ā 
Record linkage methods applied to population data deduplication
Record linkage methods applied to population data deduplicationRecord linkage methods applied to population data deduplication
Record linkage methods applied to population data deduplication
Ā 
Visualising Multi Dimensional Data
Visualising Multi Dimensional DataVisualising Multi Dimensional Data
Visualising Multi Dimensional Data
Ā 
Data vs. information
Data vs. informationData vs. information
Data vs. information
Ā 

Similar to Linking data without common identifiers

Swertz Molgenis Bosc2009
Swertz Molgenis Bosc2009Swertz Molgenis Bosc2009
Swertz Molgenis Bosc2009
bosc
Ā 
Classification
ClassificationClassification
Classification
guest9099eb
Ā 
Comparing DOM XSS Tools On Real World Bug
Comparing DOM XSS Tools On Real World BugComparing DOM XSS Tools On Real World Bug
Comparing DOM XSS Tools On Real World Bug
Stefano Di Paola
Ā 

Similar to Linking data without common identifiers (20)

Swertz Molgenis Bosc2009
Swertz Molgenis Bosc2009Swertz Molgenis Bosc2009
Swertz Molgenis Bosc2009
Ā 
Defense Against the Dark Arts: Protecting Your Data from ORMs
Defense Against the Dark Arts: Protecting Your Data from ORMsDefense Against the Dark Arts: Protecting Your Data from ORMs
Defense Against the Dark Arts: Protecting Your Data from ORMs
Ā 
New Directions in Metadata
New Directions in MetadataNew Directions in Metadata
New Directions in Metadata
Ā 
Sweo talk
Sweo talkSweo talk
Sweo talk
Ā 
Classification
ClassificationClassification
Classification
Ā 
Advance Data Mining Project Report
Advance Data Mining Project ReportAdvance Data Mining Project Report
Advance Data Mining Project Report
Ā 
Biomart Update
Biomart UpdateBiomart Update
Biomart Update
Ā 
Javascript2839
Javascript2839Javascript2839
Javascript2839
Ā 
Json
JsonJson
Json
Ā 
Xml
XmlXml
Xml
Ā 
Struts2
Struts2Struts2
Struts2
Ā 
Lucene And Solr Intro
Lucene And Solr IntroLucene And Solr Intro
Lucene And Solr Intro
Ā 
Introduction To Programming
Introduction To ProgrammingIntroduction To Programming
Introduction To Programming
Ā 
NHibernate
NHibernateNHibernate
NHibernate
Ā 
Comparing DOM XSS Tools On Real World Bug
Comparing DOM XSS Tools On Real World BugComparing DOM XSS Tools On Real World Bug
Comparing DOM XSS Tools On Real World Bug
Ā 
Automatic Metadata Generation using Associative Networks
Automatic Metadata Generation using Associative NetworksAutomatic Metadata Generation using Associative Networks
Automatic Metadata Generation using Associative Networks
Ā 
Language-agnostic data analysis workflows and reproducible research
Language-agnostic data analysis workflows and reproducible researchLanguage-agnostic data analysis workflows and reproducible research
Language-agnostic data analysis workflows and reproducible research
Ā 
Introduction to Data Science With R Notes
Introduction to Data Science With R NotesIntroduction to Data Science With R Notes
Introduction to Data Science With R Notes
Ā 
Eurolan 2005 Pedersen
Eurolan 2005 PedersenEurolan 2005 Pedersen
Eurolan 2005 Pedersen
Ā 
Nuts and bolts
Nuts and boltsNuts and bolts
Nuts and bolts
Ā 

More from Lars Marius Garshol

Archive integration with RDF
Archive integration with RDFArchive integration with RDF
Archive integration with RDF
Lars Marius Garshol
Ā 
Hafslund SESAM - Semantic integration in practice
Hafslund SESAM - Semantic integration in practiceHafslund SESAM - Semantic integration in practice
Hafslund SESAM - Semantic integration in practice
Lars Marius Garshol
Ā 

More from Lars Marius Garshol (20)

JSLT: JSON querying and transformation
JSLT: JSON querying and transformationJSLT: JSON querying and transformation
JSLT: JSON querying and transformation
Ā 
Data collection in AWS at Schibsted
Data collection in AWS at SchibstedData collection in AWS at Schibsted
Data collection in AWS at Schibsted
Ā 
Kveik - what is it?
Kveik - what is it?Kveik - what is it?
Kveik - what is it?
Ā 
Nature-inspired algorithms
Nature-inspired algorithmsNature-inspired algorithms
Nature-inspired algorithms
Ā 
Collecting 600M events/day
Collecting 600M events/dayCollecting 600M events/day
Collecting 600M events/day
Ā 
History of writing
History of writingHistory of writing
History of writing
Ā 
NoSQL and Einstein's theory of relativity
NoSQL and Einstein's theory of relativityNoSQL and Einstein's theory of relativity
NoSQL and Einstein's theory of relativity
Ā 
Norwegian farmhouse ale
Norwegian farmhouse aleNorwegian farmhouse ale
Norwegian farmhouse ale
Ā 
Archive integration with RDF
Archive integration with RDFArchive integration with RDF
Archive integration with RDF
Ā 
The Euro crisis in 10 minutes
The Euro crisis in 10 minutesThe Euro crisis in 10 minutes
The Euro crisis in 10 minutes
Ā 
Using the search engine as recommendation engine
Using the search engine as recommendation engineUsing the search engine as recommendation engine
Using the search engine as recommendation engine
Ā 
Linked Open Data for the Cultural Sector
Linked Open Data for the Cultural SectorLinked Open Data for the Cultural Sector
Linked Open Data for the Cultural Sector
Ā 
NoSQL databases, the CAP theorem, and the theory of relativity
NoSQL databases, the CAP theorem, and the theory of relativityNoSQL databases, the CAP theorem, and the theory of relativity
NoSQL databases, the CAP theorem, and the theory of relativity
Ā 
Bitcoin - digital gold
Bitcoin - digital goldBitcoin - digital gold
Bitcoin - digital gold
Ā 
Introduction to Big Data/Machine Learning
Introduction to Big Data/Machine LearningIntroduction to Big Data/Machine Learning
Introduction to Big Data/Machine Learning
Ā 
Hops - the green gold
Hops - the green goldHops - the green gold
Hops - the green gold
Ā 
Big data 101
Big data 101Big data 101
Big data 101
Ā 
Linked Open Data
Linked Open DataLinked Open Data
Linked Open Data
Ā 
Hafslund SESAM - Semantic integration in practice
Hafslund SESAM - Semantic integration in practiceHafslund SESAM - Semantic integration in practice
Hafslund SESAM - Semantic integration in practice
Ā 
Approximate string comparators
Approximate string comparatorsApproximate string comparators
Approximate string comparators
Ā 

Recently uploaded

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(ā˜Žļø+971_581248768%)**%*]'#abortion pills for sale in dubai@
Ā 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
Ā 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
Ā 
Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..
Christopher Logan Kennedy
Ā 

Recently uploaded (20)

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
Ā 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Ā 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Ā 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Ā 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
Ā 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Ā 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
Ā 
Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..
Ā 

Linking data without common identifiers

  • 1. Linking data without common identifiers Semantic Web Meetup, 2011-08-23 Lars Marius Garshol, <larsga@bouvet.no> http://twitter.com/larsga
  • 2. About me Lars Marius Garshol, Bouvet consultant Worked with semantic technologies since 1999 mostly Topic Maps Before that I worked with XML Also background from Opera Software (Unicode support) open source development (Java/Python)
  • 3. Agenda Linking data – the problem Record linkage theory Duke A real-world example Usage at Hafslund
  • 4. The problem Data sets from different sources generally don’t share identifiers names tend to be spelled every which way So how can they be linked?
  • 6. A difficult problem It requires n² comparisons for n records a million comparisons for 1,000 records, 100 million for 10,000, ... Exact string comparison is not enough must handle misspellings, name variants, etc. Interpreting the data can be difficult even for a human being is the address different because there are two different people, or because the person moved? ...
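The quadratic growth is easy to see in code. A minimal sketch (not from Duke) that generates every candidate pair; if each pair is compared only once the count is n(n−1)/2, which still grows quadratically:

```python
from itertools import combinations

def naive_pairs(records):
    """Yield every unordered pair of records -- the brute-force approach."""
    yield from combinations(records, 2)

# n*(n-1)/2 distinct pairs for n records -- still quadratic growth:
print(sum(1 for _ in naive_pairs(range(1000))))  # 499500
```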
  • 7. Help from an unexpected quarter Statisticians to the rescue!
  • 8. Record linkage Statisticians very often must connect data sets from different sources They call it “record linkage” term coined in 1946 [1] mathematical foundations laid in 1959 [2] formalized in 1969 as the “Fellegi-Sunter” model [3] A whole subject area has been developed with well-known techniques, methods, and tools these can of course be applied outside of statistics [1] http://ajph.aphapublications.org/cgi/reprint/36/12/1412 [2] http://www.sciencemag.org/content/130/3381/954.citation [3] http://www.jstor.org/pss/2286061
  • 9. Other terms for the same thing Has been independently invented many times Under many names entity resolution identity resolution merge/purge deduplication ... This makes Googling for information an absolute nightmare
  • 10. Application areas Statistics (obviously) Data cleaning Data integration Conversion Fraud detection / intelligence / surveillance
  • 12. Model, simplified We work with records, each of which has fields with values before doing record linkage we’ve processed the data so that they have the same set of fields (fields which exist in only one of the sources get discarded, as they are no help to us) We compare values for the same fields one by one, then estimate the probability that records represent the same entity given these values probability depends on which field and what values estimation can be very complex Finally, we can compute overall probability
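The simplified model can be sketched in a few lines. Everything here (dict-based records, the comparator functions) is illustrative, not Duke's API:

```python
# Records are plain dicts sharing the same set of fields; each field is
# compared separately with its own comparator (illustrative sketch only).

def compare_fields(rec1, rec2, comparators):
    """Return a per-field similarity score in [0.0, 1.0]."""
    return {field: cmp(rec1[field], rec2[field])
            for field, cmp in comparators.items()}

exact = lambda a, b: 1.0 if a == b else 0.0  # the crudest comparator

r1 = {"NAME": "John Smith", "CAPITAL": "oslo"}
r2 = {"NAME": "Jonh Smith", "CAPITAL": "oslo"}
print(compare_fields(r1, r2, {"NAME": exact, "CAPITAL": exact}))
# {'NAME': 0.0, 'CAPITAL': 1.0}
```

The per-field scores are then turned into probabilities and combined into one overall probability, as slide 25 describes.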
  • 14. String comparisons Must handle spelling errors and name variants must estimate probability that values belong to same entity despite differences Examples John Smith ≈ Jonh Smith J. Random Hacker ≈ James Random Hacker ... Many well-known algorithms have been developed no one best choice, unfortunately some are eager to merge, others less so Many are computationally expensive O(n²) where n is length of string is common this gets very costly when the number of record pairs to compare is already high
  • 15. Examples of algorithms Levenshtein edit distance i.e. the number of edits needed to change string 1 into string 2 not so eager to merge Jaro-Winkler specifically for short person names very promiscuous Soundex essentially a phonetic hashing algorithm very fast, but also very crude Longest common subsequence / q-grams haven’t tried these yet Monge-Elkan one of the best, but computationally expensive TFIDF based on term frequency studies often find this performs best, but it’s expensive
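Of these, Levenshtein is the simplest to show. A plain dynamic-programming version (a sketch, not Duke's implementation; a similarity score can be derived as 1 − distance/max-length):

```python
def levenshtein(s, t):
    """Edit distance: insertions, deletions, and substitutions needed
    to turn s into t, computed one table row at a time."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                # delete cs
                           cur[j - 1] + 1,             # insert ct
                           prev[j - 1] + (cs != ct)))  # substitute
        prev = cur
    return prev[-1]

print(levenshtein("John Smith", "Jonh Smith"))  # 2
```

The nested loops visit every (i, j) cell, which is the O(n²) cost in string length mentioned on the previous slide.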
  • 16. Existing record linkage tools Commercial tools big, sophisticated, and expensive have found little information on what they actually do presumably also effective Open source tools generally made by and for statisticians nice user interfaces and rich configurability architecture often not as flexible as it could be
  • 17. Standard algorithm n² comparisons for n records is unacceptable must reduce number of direct comparisons Solution produce a key from field values, sort records by the key, for each record, compare with n nearest neighbours sometimes several different keys are produced, to increase chances of finding matches Downsides requires coming up with a key difficult to apply incrementally sorting is expensive
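The key-based (sorted-neighbourhood) idea above, sketched with a made-up key function — real keys would be designed per data set:

```python
def make_key(rec):
    """A hypothetical blocking key: first three letters of the name."""
    return rec["NAME"][:3].lower()

def sorted_neighbourhood_pairs(records, window=3):
    """Sort by key, then compare each record only with the next
    `window` records instead of all n-1 others."""
    ordered = sorted(records, key=make_key)
    for i, rec in enumerate(ordered):
        for other in ordered[i + 1 : i + 1 + window]:
            yield rec, other

people = [{"NAME": n} for n in
          ["John Smith", "Jonh Smith", "Jane Doe", "J. Random Hacker"]]
print(sum(1 for _ in sorted_neighbourhood_pairs(people, window=1)))  # 3
```

This yields roughly n·w pairs instead of n²/2, but misses any match whose keys sort far apart — which is why several different keys are often produced.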
  • 18. Good research papers Threat and Fraud Intelligence, Las Vegas Style, Jeff Jonas http://jeffjonas.typepad.com/IEEE.Identity.Resolution.pdf Real-world data is dirty: Data Cleansing and the Merge/Purge Problem, Hernandez & Stolfo http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.30.3496&rep=rep1&type=pdf Swoosh: a generic approach to entity resolution, Benjelloun, Garcia-Molina et al http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.122.5696&rep=rep1&type=pdf
  • 20. Context Doing a project for Hafslund where we integrate data from many sources Entities are duplicated both inside systems and across systems (diagram: suppliers, customers, and companies in the CRM, Billing, and ERP systems)
  • 21. Requirements Must be flexible and configurable no way to know in advance exactly what data we will need to deduplicate Must scale to large data sets CRM alone has 1.4 million customer records that’s 2 trillion comparisons with the naïve approach Must have an API project uses SDshare protocol everywhere must therefore be able to build connectors Must be able to work incrementally process data as it changes and update conclusions on the fly
  • 22. Reviewed existing tools... ...but didn’t find anything matching the criteria Therefore...
  • 23. Duke Java deduplication engine released as open source http://code.google.com/p/duke/ Does not use the key approach instead indexes data with Lucene does Lucene searches to find potential matches Still a work in progress, but high performance (1M records in 10 minutes), several data sources and comparators, already used in real projects, flexible architecture
  • 24. How it works XML configuration file defines setup lists data sources, properties, probabilities, etc Data sources provide streams of records Steps: normalize values so they are ready for comparison collect a batch of records, then index it with Lucene compare records one by one against Lucene index do detailed comparisons against search results pairs with probability above configurable threshold are considered matches For now, only API and command-line
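The candidate-retrieval step can be illustrated with a toy token index standing in for Lucene — this is not Duke's code, just the shape of the batch-index-then-search loop:

```python
from collections import defaultdict

def index_batch(records):
    """Toy inverted index: token -> set of record ids (Lucene stand-in)."""
    index = defaultdict(set)
    for i, rec in enumerate(records):
        for token in rec["NAME"].lower().split():
            index[token].add(i)
    return index

def candidates(index, rec):
    """Records sharing at least one token -- only these get the
    detailed (expensive) comparison."""
    hits = set()
    for token in rec["NAME"].lower().split():
        hits |= index.get(token, set())
    return hits

records = [{"NAME": "John Smith"}, {"NAME": "Jane Doe"}, {"NAME": "Jonh Smith"}]
idx = index_batch(records)
print(sorted(candidates(idx, {"NAME": "Jonh Smith"})))  # [0, 2]
```

Note the failure mode slide 35 describes: a record that shares no exactly matching token produces no candidates at all, however similar it is.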
  • 25. Record comparisons in detail Comparator compares field values returns a number in the range 0.0 – 1.0 the probability for the field is the configured high probability if the number is 1.0, the low probability if it is 0.0, and in between the two otherwise Probability for the entire record is computed using Bayes’s formula approach known as “naïve Bayes” in the research literature
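This scheme can be written down directly. The interpolation and pairwise Bayes update below follow the description on this slide and the &lt;low&gt;/&lt;high&gt; settings in the configuration slides, but treat the exact formula as an assumption about Duke's internals:

```python
def field_probability(score, low, high):
    """Map a comparator score in [0, 1] onto the configured [low, high]."""
    return low + score * (high - low)

def combine(p1, p2):
    """Naive-Bayes update of two independent match probabilities."""
    return (p1 * p2) / (p1 * p2 + (1 - p1) * (1 - p2))

def record_probability(field_probs, prior=0.5):
    """Fold the per-field probabilities into one overall probability.
    A 0.5 prior is neutral, since combine(0.5, p) == p."""
    p = prior
    for fp in field_probs:
        p = combine(p, fp)
    return p

# an exact NAME match (high = 0.88) plus a decent AREA match (0.6):
print(round(record_probability([0.88, 0.6]), 3))  # 0.917
```

With the 0.65 threshold from the configuration slide, this pair would be accepted as a match.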
  • 26. Components Data sources CSV JDBC Sparql NTriples <plug in your own> Comparators ExactComparator NumericComparator SoundexComparator TokenizedComparator Levenshtein JaroWinkler Dice coefficient
  • 27. Features Fairly complete command-line tool with debugging support API for embedding the tool Two modes: deduplication (all records matched against each other) linking (only matching across groups of records) Pluggable data sources, comparators, and cleaners Framework for composing cleaners Incremental processing High performance
  • 28. A real-world example Matching Mondial and DBpedia
  • 29. Finding properties to match Need properties providing identity evidence Matching on the properties in bold below Extracted data to CSV for ease of use
  • 30. Configuration – data sources
    <group>
      <csv>
        <param name="input-file" value="dbpedia.csv"/>
        <param name="header-line" value="false"/>
        <column name="1" property="ID"/>
        <column name="2" cleaner="no.priv...CountryNameCleaner" property="NAME"/>
        <column name="3" property="AREA"/>
        <column name="4" cleaner="no.priv...CapitalCleaner" property="CAPITAL"/>
      </csv>
    </group>
    <group>
      <csv>
        <param name="input-file" value="mondial.csv"/>
        <column name="id" property="ID"/>
        <column name="country" cleaner="no.priv...examples.CountryNameCleaner" property="NAME"/>
        <column name="capital" cleaner="no.priv...LowerCaseNormalizeCleaner" property="CAPITAL"/>
        <column name="area" property="AREA"/>
      </csv>
    </group>
  • 31. Configuration – matching
    <schema>
      <threshold>0.65</threshold>
      <property type="id">
        <name>ID</name>
      </property>
      <property>
        <name>NAME</name>
        <comparator>no.priv.garshol.duke.Levenshtein</comparator>
        <low>0.3</low>
        <high>0.88</high>
      </property>
      <property>
        <name>AREA</name>
        <comparator>AreaComparator</comparator>
        <low>0.2</low>
        <high>0.6</high>
      </property>
      <property>
        <name>CAPITAL</name>
        <comparator>no.priv.garshol.duke.Levenshtein</comparator>
        <low>0.4</low>
        <high>0.88</high>
      </property>
    </schema>
    <object class="no.priv.garshol.duke.NumericComparator" name="AreaComparator">
      <param name="min-ratio" value="0.7"/>
    </object>
    Duke analyzes this setup and decides only NAME and CAPITAL need to be searched on in Lucene.
  • 32. Result Correct links found: 206 / 217 (94.9%) Wrong links found: 0 / 12 (0.0%) Unknown links found: 0 Percent of links correct 100.0%, wrong 0.0%, unknown 0.0% Records with no link: 25 Precision 100.0%, recall 94.9%, f-number 0.974
  • 35. An example of failure Duke doesn’t find this match no tokens matching exactly Lucene search finds nothing Detailed comparison gives correct result so the only problem is the Lucene search Lucene does have Levenshtein search, but in Lucene 3.x it’s very slow therefore not enabled now thinking of adding an option to enable it where needed Lucene 4.x will fix the performance problem
  • 36. The effects of value matching
  • 37. Duke in real life Usage at Hafslund
  • 38. The SESAM project Building a new archival system does automatic tagging of documents based on knowledge about the data knowledge extracted from backend systems Search engine-based frontend using Recommind for this Very flexible architecture extracted data stored in triple store (Virtuoso) all data transfer based on SDshare protocol data extracted from RDBMSs with Ontopia’s DB2TM
  • 39. The big picture (architecture diagram: 360, ERP, CRM, and Billing connected to Virtuoso, Recommind, and Duke via SDshare feeds; the source systems contain duplicates; the Duke feed contains owl:sameAs and haf:possiblySameAs)
  • 40. Experiences so far Incremental processing works fine links added and retracted as data changes Performance not an issue at all but then only ~50,000 records so far... Matching works well, but not perfect data are very noisy and messy also, matching is hard
  • 41. Duke roadmap 0.3 clean up the public API and document properly maybe some more comparators support for writing owl:sameAs to Sparql endpoint 0.4 add a web service interface 0.5 and onwards more comparators maybe some parallelism
  • 42. Slides will appear on http://slideshare.net/larsga Comments/questions?