Background: I have a list that contains 13,000 records of human names. Some of them are duplicates, and I want to find the similar ones so I can do the deduplication manually.
For an array like:
["jeff","Jeff","mandy","king","queen"]
What would be an efficient way to get:
[["jeff","Jeff"]]
Explanation: ["jeff","Jeff"] are grouped because their Levenshtein distance is 1 (the threshold could be made variable, e.g. 3).
/*
Working but slow solution
*/
function extractSimilarNames(uniqueNames) {
  let similarNamesGroup = [];
  for (let i = 0; i < uniqueNames.length; i++) {
    // compare with the rest of the array
    const currentName = uniqueNames[i];
    let suspiciousNames = [];
    for (let j = i + 1; j < uniqueNames.length; j++) {
      const matchingName = uniqueNames[j];
      if (isInLevenshteinRange(currentName, matchingName, 1)) {
        suspiciousNames.push(matchingName);
        removeElementFromArray(uniqueNames, matchingName);
        removeElementFromArray(uniqueNames, currentName);
        i--;
        j--;
      }
    }
    if (suspiciousNames.length > 0) {
      suspiciousNames.push(currentName);
      similarNamesGroup.push(suspiciousNames); // was missing: groups were never collected
    }
  }
  return similarNamesGroup;
}
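The `isInLevenshteinRange` helper is not shown in the question. A minimal stand-in (a plain dynamic-programming Levenshtein distance, not the fast implementation the question refers to) could look like this:

```javascript
// Classic dynamic-programming Levenshtein distance.
// dp[i][j] = edit distance between a[0..i) and b[0..j).
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Stand-in for the helper used in the question's code.
function isInLevenshteinRange(a, b, range) {
  return levenshtein(a, b) <= range;
}

console.log(levenshtein("jeff", "Jeff")); // → 1 (case-sensitive substitution)
```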
I want to find similarity via Levenshtein distance, not only lower/uppercase similarity.
I already found one of the fastest Levenshtein implementations, but it still takes 35 minutes to get the result for the 13,000-item list.
Asked Apr 23, 2019 by Jeff Chung; edited Apr 24, 2019.

Comments:
- Possible duplicate of "Sort an array by the Levenshtein Distance with best performance in Javascript" – Kevin Kopf, Apr 23, 2019
- @alex I don't think that a Levenshtein implementation that compares two strings would help here. – Jonas Wilms, Apr 23, 2019
- I suspect that the removeElementFromArray function is killing your performance because it mutates the array that you're traversing. Remove the 4 lines after suspiciousNames.push(matchingName); and test the performance using console.time and console.timeEnd, preferably on a smaller array to begin with. – Aadit M Shah, Apr 23, 2019
- Related: cs.stackexchange.com/questions/53299/… – גלעד ברקן, Apr 23, 2019
- What is the expected output for ["Jeff", "eff", "effl"]? Also, are you only interested in a Levenshtein distance of 1 or could it be variable? – גלעד ברקן, Apr 23, 2019
5 Answers

Your problem is not the speed of the Levenshtein distance implementation. Your problem is that you have to compare each word with every other word. This means you make 13,000² comparisons (and each time calculate the Levenshtein distance).
So my approach would be to try to reduce the number of comparisons.
Here are some ideas:
- Words are only similar if their lengths differ by less than 20% (just my estimation) → we can group by length and only compare words with other words of length ±20%.
- Words are only similar if they share a lot of letters → we can create a list of e.g. 3-grams (all lower case) that refer to the words they are part of, and only compare (e.g. with Levenshtein distance) a word with other words that have several 3-grams in common with it.
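The 3-gram idea above can be sketched like this (a rough sketch, not this answer's exact method; `trigrams` and `candidatePairs` are hypothetical names, and pairs sharing a 3-gram still need a real Levenshtein check afterwards):

```javascript
// Collect the lowercase 3-grams of a word.
// Note: words shorter than 3 characters produce no 3-grams and are never paired here.
function trigrams(word) {
  const w = word.toLowerCase();
  const grams = [];
  for (let i = 0; i <= w.length - 3; i++) grams.push(w.slice(i, i + 3));
  return grams;
}

// Bucket word indices by 3-gram, then emit only pairs that share a bucket.
// These are *candidates* — the expensive Levenshtein check runs only on them.
function candidatePairs(words) {
  const buckets = new Map(); // 3-gram -> indices of words containing it
  words.forEach((word, idx) => {
    for (const g of new Set(trigrams(word))) {
      if (!buckets.has(g)) buckets.set(g, []);
      buckets.get(g).push(idx);
    }
  });
  const pairs = new Set();
  for (const indices of buckets.values()) {
    for (let a = 0; a < indices.length; a++)
      for (let b = a + 1; b < indices.length; b++)
        pairs.add(indices[a] + "," + indices[b]);
  }
  return [...pairs].map(p => p.split(",").map(Number));
}

console.log(candidatePairs(["jeff", "Jeff", "mandy", "king", "queen"])); // → [[0, 1]]
```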
Approaches to remove similar names:
- Use a phonetic representation of the words (cmudict; it works with Python NLTK). You can find which names are close to each other phonetically.
- Try different forms of stemming or simplification. I would try the most aggressive stemmers, like the Porter stemmer.
- Levenshtein trie. You can create a trie data structure that helps find the word with minimum distance to a searched item; this is used for full-text search in some search engines. As far as I know it's already implemented in Java. In your case, you need to search for one item and then add it to the structure at every step, making sure the item you search for is not in the structure yet.
- Manual naive approach. Find all suitable representations of every word/name, put all representations into a map, and find representations that have more than one word. If you have around 15 different representations of one word, you will need only 280K iterations to generate this object (much faster than comparing each word to another, which requires around 80M comparisons with 13K names).
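The "manual naive approach" could be sketched as follows. The particular representations used here (lowercase form, consonant skeleton, sorted letters) are only illustrative choices, and `representations`/`groupByRepresentation` are hypothetical helper names:

```javascript
// A few cheap "representations" of a name; similar names tend to collide on at least one.
function representations(name) {
  const lower = name.toLowerCase();
  return [
    lower,                           // case-insensitive form
    lower.replace(/[aeiou]/g, ""),   // consonant skeleton
    lower.split("").sort().join("")  // sorted letters (anagram key)
  ];
}

// representation -> set of names producing it; keep groups with more than one name.
function groupByRepresentation(names) {
  const map = new Map();
  for (const name of names) {
    for (const rep of new Set(representations(name))) {
      if (!map.has(rep)) map.set(rep, new Set());
      map.get(rep).add(name);
    }
  }
  return [...map.values()].filter(s => s.size > 1).map(s => [...s]);
}
```

This is one pass over names × representations, so roughly 13K × a handful of keys instead of 13K² comparisons; collisions would still be confirmed with a proper distance check.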
-- Edit --
If there is a choice, I would use something like Python or Java instead of JS for this. It's only my opinion: I don't know all the requirements, but it's common to use Java/Python for natural language processing, and the task looks more like heavy data processing than front end.
As your working code uses a Levenshtein distance of 1 only, I will assume no other distances need to be found.
I will propose a solution similar to the one Jonas Wilms posted, with these differences:
- No need to call an isInLevenshteinRange function
- Produces only unique pairs
- Each pair is lexically ordered
// Sample data with lots of similar names
const names = ["Adela","Adelaida","Adelaide","Adele","Adelia","AdeLina","Adeline",
"Adell","AdellA","Adelle","Ardelia","Ardell","Ardella","Ardelle",
"Ardis","Madeline","Odelia","ODELL","Odessa","Odette"];
const map = {};
const pairs = new Set();
for (const name of names) {
  for (const i in name + "_") { // additional iteration to NOT delete a character
    const key = (name.slice(0, i) + name.slice(+i + 1, name.length)).toLowerCase();
    // Group words together where the removal from the same index leads to the same key
    if (!map[key]) map[key] = Array.from({ length: key.length + 1 }, () => new Set());
    // If NO character was removed, put the word in EACH group
    for (const set of (+i < name.length ? [map[key][i]] : map[key])) {
      if (set.has(name)) continue;
      for (const similar of set) pairs.add(JSON.stringify([similar, name].sort()));
      set.add(name);
    }
  }
}
const result = [...pairs].sort().map(JSON.parse); // sort is optional
console.log(result);
I tested this on a set of 13000 names, including at least 4000 different names, and it produced 8000 pairs in about 0.3 seconds.
If we remove one character from "Jeff" at different positions, we end up with "eff", "Jff", "Jef" and "Jef". If we do the same with "jeff", we get "eff", "jff", "jef" and "jef". Now if you look closely, you'll see that both strings produce "eff" as a result, which means that we could create a Map of those combinations to their original versions, then for each string generate all combinations and look them up in the Map. Through the lookup, you'll get results that are similar, e.g. "abc" and "cab", but they do not necessarily have a Levenshtein distance of 1, so we have to check that afterwards.
Now why is that better?
Well, iterating all names is O(n) (n being the number of words), creating all combinations is O(m) (m being the average number of characters in a word), and looking up in a Map is O(1), therefore this runs in O(n * m), whereas your algorithm is O(n * n * m), which means for 10,000 words, mine is 10,000 times faster (or my calculation is wrong :))
// A "OneToMany" Map
class MultiMap extends Map {
  set(k, v) {
    if (super.has(k)) {
      super.get(k).push(v);
    } else super.set(k, [v]);
  }
  get(k) {
    return super.get(k) || [];
  }
}

function* oneShorter(word) {
  for (let pos = 0; pos < word.length; pos++)
    yield word.substr(0, pos) + word.substr(pos + 1);
}

function findDuplicates(names) {
  const combos = new MultiMap();
  const duplicates = [];
  const check = (name, combo) => {
    for (const dupe of combos.get(combo)) {
      if (isInLevenshteinRange(name, dupe, 1)) // verify candidates found via the map
        duplicates.push([name, dupe]);
    }
    combos.set(combo, name);
  };
  for (const name of names) {
    check(name, name);
    for (const combo of oneShorter(name)) {
      check(name, combo);
    }
  }
  return duplicates;
}
I have a completely different way of approaching this problem, which I believe is pretty fast (though debatable as to how correct/incorrect it is). My approach is to map the strings to numeric values, sort those values once, and then run through that list once, comparing neighboring values to each other. Like this:
// Test strings (provided by OP) with some additions
var strs = ["Jeff","mandy","jeff","king","queen","joff", "Queen", "jff", "tim", "Timmo", "Tom", "Rob", "Bob"]
// Function to convert a string into a numeric representation
// to aid with string similarity comparison
function atoi(str, maxLen){
var i = 0;
for( var j = 0; j < maxLen; j++ ){
if( str[j] != null ){
i += str.toLowerCase().charCodeAt(j)*Math.pow(64,maxLen-j) - 'a'.charCodeAt(0)*Math.pow(64,maxLen-j)
} else {
// Normalize the string with a pad char
// up to the maxLen (update the value, but don't actually
// update the string...)
i += '-'.charCodeAt(0)*Math.pow(64,maxLen-j) - 'a'.charCodeAt(0)*Math.pow(64,maxLen-j)
}
}
valMap.push({
str,
i
})
return i;
}
Number.prototype.inRange = function(min, max){ return(this >= min && this <= max) }
var valMap = []; // Array of string-value pairs
var maxLen = Math.max(...strs.map((s) => s.length)) // max length of all strings (default .sort() is lexicographic, so Math.max is safer)
console.log('maxLen', maxLen)
strs.forEach((s) => atoi(s, maxLen)) // Map strings to values
var similars = [];
var subArr = []
var margin = 0.05;
valMap.sort((a,b) => a.i > b.i ? 1 : -1) // Sort the map...
valMap.forEach((entry, idx) => {
if( idx > 0 ){
var closeness = Math.abs(entry.i / valMap[idx-1].i);
if( closeness.inRange( 1 - margin, 1 + margin ) ){
if( subArr.length == 0 ) subArr.push(valMap[idx-1].str)
subArr.push(entry.str)
if( idx == valMap.length - 1){
similars.push(subArr)
}
} else {
if( subArr.length > 0 ) similars.push(subArr)
subArr = []
}
}
})
console.log('similars', similars)
I'm treating each string as if it were a base-64 number, where each "digit" can take on alphanumeric values, with 'a' representing 0. Then I sort the list once. Then, if a value is similar to the previous one (i.e., if the ratio of the two is near 1), I deduce that I have similar strings.
The other thing I do is check for the max string length, and normalize all the strings to that length in the calculation of the base-64 value.
--- EDIT: even more stress testing --- Here is some additional testing, which pulls a large list of names and performs the processing rather quickly (~50ms on 20k+ names, albeit with lots of false positives). Regardless, this snippet should make it easier to troubleshoot:
var valMap = []; // Array of string-value pairs
/* Extensions */
Number.prototype.inRange = function(min, max){ return(this >= min && this <= max) }
/* Methods */
// Function to convert a string into a numeric representation
// to aid with string similarity comparison
function atoi(str, maxLen){
var i = 0;
for( var j = 0; j < maxLen; j++ ){
if( str[j] != null ){
i += str.toLowerCase().charCodeAt(j)*Math.pow(64,maxLen-j) - 'a'.charCodeAt(0)*Math.pow(64,maxLen-j)
} else {
// Normalize the string with a pad char
// up to the maxLen (update the value, but don't actually
// update the string...)
i += '-'.charCodeAt(0)*Math.pow(64,maxLen-j) - 'a'.charCodeAt(0)*Math.pow(64,maxLen-j)
}
}
valMap.push({ str, i })
return i;
}
function findSimilars(strs){
var maxLen = Math.max(...strs.map((s) => s.length)) // max length of all strings (default .sort() is lexicographic, so Math.max is safer)
console.log('maxLen', maxLen)
strs.forEach((s) => atoi(s, maxLen)) // Map strings to values
var similars = [];
var subArr = []
var margin = 0.05;
valMap.sort((a,b) => a.i > b.i ? 1 : -1) // Sort the map...
valMap.forEach((entry, idx) => {
if( idx > 0 ){
var closeness = Math.abs(entry.i / valMap[idx-1].i);
if( closeness.inRange( 1 - margin, 1 + margin ) ){
if( subArr.length == 0 ) subArr.push(valMap[idx-1].str)
subArr.push(entry.str)
if( idx == valMap.length - 1){
similars.push(subArr)
}
} else {
if( subArr.length > 0 ) similars.push(subArr)
subArr = []
}
}
})
console.log('similars', similars)
}
// Stress test with 20k+ names
$.get('https://raw.githubusercontent.com/dominictarr/random-name/master/names.json')
  .then((resp) => {
    var strs = JSON.parse(resp);
    console.time('processing')
    findSimilars(strs)
    console.timeEnd('processing')
  })
  .catch((err) => { console.error('Err retrieving JSON'); })
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
(For some reason, when I run this in JSFiddle, it runs in ~50ms, but in the Stack Overflow snippet, it's closer to 1000ms.)
Title: algorithm - How to efficiently find similar strings in a unique string in JavaScript? - Stack Overflow