Drake and Hougardy give a simple approximation algorithm for the maximum weighted matching problem. I think understanding the academic paper is beyond my capabilities, so I'm looking for an easy implementation, preferably in PHP, C, or JavaScript.
asked Mar 5, 2011 at 12:45 by Cybercartel; edited Oct 17, 2013 at 23:14
- The Drake-Hougardy algorithm is an approximation; it gives a solution that is good but perhaps not the best. Do you want an explanation of both algorithms? – Beta Commented Mar 5, 2011 at 17:13
- @epitaph: The subject line asks for "minimum weight perfect match", but the Drake-Hougardy algorithm only promises a maximum weight approximation. Are you able to work around matchings that are not perfect/complete? One can redefine the objective of minimum weight matching to maximum weight matching if the size of the matching is predetermined, however I saw nothing in their paper about this (or about forcing perfect matchings, as conceivably the maximum matching is attained without being perfect). – hardmath Commented Apr 11, 2011 at 15:46
- @epitaph: You are the expert about what you want/need; I'm just trying to figure out how/if I can help. The Drake-Hougardy algorithm deals with a linear time approximation to the maximum weight matching problem. What other algorithm are you including in "both"? – hardmath Commented Apr 11, 2011 at 16:33
- @epitaph: Okay, I think what Beta meant was the earlier and later papers/algorithms proposed by Drake and Hougardy. The first one is much simpler and guaranteed to find (in linear time) a matching with at least half the weight of the maximum weight matching, and the second one is more complicated but gives results that provide nearly 2/3rds of the maximum (still in linear time). I'll write up the first one. I still have a quibble about the word "perfect" in your subject line, but I will tackle it in the Answer as well, so I can define terms. – hardmath Commented Apr 13, 2011 at 0:43
- Perhaps I'm speaking out of turn, but is this a good candidate for math.stackexchange.com? – Tass Commented Apr 15, 2011 at 17:33
1 Answer
Problem Definition and References
Given a simple graph (undirected, no self-edges, no multi-edges) a matching is a subset of edges such that no two of them are incident to the same vertex.
A perfect matching is one in which all vertices are incident to an edge of the matching, something not possible if there are an odd number of vertices. More generally we can ask for a maximum matching (largest possible number of edges in a matching) or for a maximal matching (a matching to which no more edges can be added).
If positive real "weights" are assigned to the edges, we can generalize the problem to ask for a maximum-weighted matching, one that maximizes the sum of edges' weights. The exact maximum-weighted matching problem can be solved in O(nm log(n)) time, where n is the number of vertices and m the number of edges.
Note that a maximum-weighted matching need not be a perfect matching. For example:
*--1--*--3--*--1--*
has only one perfect matching, whose total weight is 2, and a maximum weighted matching with total weight 3.
Discussion and further references for exact and approximate solutions of these, and of the minimum-weighted perfect matching problem, may be found in these papers:
"A Simple Approximation Algorithm for the Weighted Matching Problem", Drake, Doratha E. and Hougardy, Stefan (2002)
"Implementation of O(nm log n) Weighted Matchings: The Power of Data Structures", Mehlhorn, Kurt and Schäfer, Guido (2000)
"Computing Minimum-Weight Perfect Matchings", Cook, William and Rohe, André (1997)
"Approximating Maximum Weight Matching in Near-linear Time", Duan, Ran and Pettie, Seth (2010)
Drake and Hougardy's Simple Approximation Algorithm
The first approximation algorithm of Drake-Hougardy uses the idea of growing paths using the locally heaviest edge at each vertex met. It has a "performance ratio" of 1/2 like the greedy algorithm, but linear time complexity in the number of edges (the greedy algorithm uses a globally heaviest edge and incurs greater time complexity to find that).
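For contrast, the greedy algorithm mentioned here can be sketched in a few lines of C. This is an illustration of the greedy 1/2-approximation only, not of Drake-Hougardy; the "greater time complexity" comes from sorting all m edges globally, O(m log m):

```c
#include <stdlib.h>

struct gedge { int u, v; double w; };

static int by_weight_desc(const void *a, const void *b)
{
    double wa = ((const struct gedge *)a)->w;
    double wb = ((const struct gedge *)b)->w;
    return (wa < wb) - (wa > wb);   /* heaviest edge first */
}

/* Greedy 1/2-approximation: sort the m edges once, then accept each
 * edge whose endpoints are both still unmatched.  n = vertex count.
 * Returns the total weight of the greedy matching. */
double greedy_matching(struct gedge *edges, int m, int n)
{
    char *matched = calloc(n, 1);
    double total = 0.0;
    qsort(edges, m, sizeof *edges, by_weight_desc);
    for (int i = 0; i < m; i++)
        if (!matched[edges[i].u] && !matched[edges[i].v]) {
            matched[edges[i].u] = matched[edges[i].v] = 1;
            total += edges[i].w;
        }
    free(matched);
    return total;
}
```

On the path example above it takes the weight-3 middle edge first, which blocks both weight-1 edges, giving total weight 3.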
The main implementation task is to identify data structures that support the steps of their algorithm efficiently.
The idea of the PathGrowing algorithm:
Given: a simple undirected graph G with weighted edges
(0) Define two sets of edges L and R, initially empty.
(1) While the set of edges of G is not empty, do:
(2) Choose arbitrary vertex v to which an edge is incident.
(3) While v has incident edges, do:
(4) Choose heaviest edge {u,v} incident to v.
(5) Add edge {u,v} to L or R in alternating fashion.
(6) Remove vertex v (and its incident edges) from G.
(7) Let u take the role of v.
(8) Repeat 3.
(9) Repeat 1.
Return L or R, whichever has the greater total weight.
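The steps above can be sketched directly in C. This is a hedged, minimal sketch: it represents the graph as an adjacency matrix (0.0 meaning "no edge"), so the heaviest-incident-edge scan in step (4) costs O(n) and the whole routine O(n^2); the paper's linear-time bound requires the list structures discussed in the next section. All names are illustrative.

```c
#include <string.h>

#define N 4                     /* vertices in the sample graph */

/* PathGrowing sketch.  Returns the weight of the heavier of the two
 * matchings L and R; on return, match_out[i] is the partner of vertex
 * i in the winning matching, or -1 if i is unmatched. */
double path_growing(double w[N][N], int match_out[N])
{
    double g[N][N];             /* working copy; edges deleted here */
    int    match[2][N];         /* the two matchings L (0) and R (1) */
    double weight[2] = { 0.0, 0.0 };

    memcpy(g, w, sizeof g);
    for (int s = 0; s < 2; s++)
        for (int i = 0; i < N; i++)
            match[s][i] = -1;

    for (int start = 0; start < N; start++) {   /* step (2) */
        int v = start, side = 0;
        for (;;) {                              /* steps (3)-(8) */
            int best = -1;                      /* step (4) */
            for (int u = 0; u < N; u++)
                if (g[v][u] > 0.0 && (best < 0 || g[v][u] > g[v][best]))
                    best = u;
            if (best < 0)
                break;                          /* v has no edges left */
            match[side][v] = best;              /* step (5) */
            match[side][best] = v;
            weight[side] += g[v][best];
            for (int u = 0; u < N; u++)         /* step (6): remove v */
                g[v][u] = g[u][v] = 0.0;
            v = best;                           /* step (7) */
            side = 1 - side;                    /* alternate L and R */
        }
    }
    int win = weight[1] > weight[0];            /* heavier of L and R */
    memcpy(match_out, match[win], sizeof match[win]);
    return weight[win];
}
```

On the path example above (*--1--*--3--*--1--*), growing from the left end puts the two weight-1 edges in L and the weight-3 edge in R, so R wins with total weight 3, which is exactly the maximum weighted matching in this case.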
Data structures to represent the graph and the output
As a "set" is not in any immediate sense a data structure of C, we need to decide what kinds of container for edges and vertices will suit this algorithm. The critical operations are removing vertices and incident edges in a way that allows us to find if any edges are left and to compare weights of the remaining edges incident to a given vertex.
The edges need to be searchable, but only to see if any is still left. One thinks first of a simple linked list of edges, without any special ordering. But this list also needs to be maintained through essentially random deletions. This suggests a doubly-linked list (back links as well as forward at each node), so that deletion of an edge may be done by fixing up the links to skip over any "removed" node. Edge weights can also be stored in this same structure.
Further we need the ability to scan all (remaining) edges incident to a given vertex. We can do this by creating a linked list for each vertex of (pointers to) incident edges. I will assume that the vertices have been preprocessed to ordinal values that can be used as an index into an array of pointers to these linked lists.
Finally we need to represent the edge sets L and R, one of which is to be returned as the approximate maximum matching. Our requirements are to be able to add edges to either set, and to be able to total the edge weights for both of them. Linked lists with dynamically allocated nodes can serve this purpose, perhaps storing pointers to the edge nodes in the original doubly-linked lists as the weight attribute will still persist there even after an edge becomes "removed" by link manipulation.
Such linked and doubly-linked lists can be created in time proportional to the number of edges, since the doubly-linked list entries may be allocated to vertex-specific links on input. With such a design in mind we can analyze the effort required by each step of the algorithm.
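One hypothetical C layout for this design might look as follows (all names are illustrative, not from the paper): each edge lives once in a global doubly-linked list, each vertex owns a singly linked list of pointers into it, and "removal" is O(1) link fix-up that leaves the node's weight readable.

```c
#include <stddef.h>

struct edge {
    int u, v;                   /* endpoint indices */
    double weight;
    struct edge *prev, *next;   /* global doubly-linked edge list */
};

struct incidence {              /* per-vertex list of incident edges */
    struct edge *e;
    struct incidence *next;
};

/* Unlink an edge from the global list in O(1).  The node itself stays
 * allocated, so a pointer to it stored in L or R can still read its
 * weight after the edge is "removed" from the graph. */
void unlink_edge(struct edge *e, struct edge **head)
{
    if (e->prev) e->prev->next = e->next;
    else         *head = e->next;
    if (e->next) e->next->prev = e->prev;
    e->prev = e->next = NULL;
}
```

A per-vertex `incidence` list would mark or skip entries whose edge has been unlinked; making that skip cheap enough to preserve the linear overall bound is the main remaining design question.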
(to be continued)
php - A good approximation algorithm for the maximum weight perfect match in non-bipartite graphs? - Stack Overflow