
Talk:Approximation algorithm


Rough draft of research to cite and incorporate, thanks


In computer science and operations research, approximation algorithms are used to find approximate solutions to optimization problems. Approximation algorithms are often associated with NP-hard problems; since efficient polynomial-time exact algorithms for NP-hard problems are unlikely to exist, one settles for polynomial-time sub-optimal solutions. Unlike heuristics, which usually only find reasonably good solutions reasonably fast, one wants provable solution quality and provable run-time bounds. Ideally, the approximation is optimal up to a small constant factor (for instance within 5% of the optimal solution). Approximation algorithms are also used for problems where an exact polynomial-time algorithm is known but is too expensive for the input size.

A typical example of an approximation algorithm is the one for vertex cover in graphs: find an uncovered edge and add both endpoints to the vertex cover, until no uncovered edge remains (see the sketch after this overview). It is clear that the resulting cover is at most twice as large as the optimal one. This is a constant-factor approximation algorithm with a factor of two.

NP-hard problems vary greatly in their approximability; some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is often called a polynomial-time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial, factor unless P = NP, such as the maximum clique problem.

NP-hard problems can often be expressed as integer programs (IP) and solved exactly in exponential time. Many approximation algorithms emerge from the linear programming relaxation of the integer program.

Not all approximation algorithms are suitable for practical applications. They often use IP/LP/semidefinite solvers, complex data structures, or sophisticated algorithmic techniques, which leads to difficult implementation problems. Also, some approximation algorithms have impractical running times even though they are polynomial, for example O(n^2000). Yet the study of even expensive algorithms is a completely practical pursuit, as it yields valuable insights. A classic example is the initial PTAS for Euclidean TSP due to Sanjeev Arora, which had prohibitive running time; yet within a year, Arora refined the ideas into a linear-time algorithm. Such algorithms are also worthwhile in applications where the running time and cost can be justified, e.g. computational biology, financial engineering, transportation planning, and inventory management. In these scenarios approximation algorithms must compete with the corresponding direct IP formulations.

A limitation of the approach is that it applies only to optimization problems and not to "pure" decision problems like satisfiability, though it is often possible to conceive optimization versions of such problems, for instance the maximum satisfiability problem (Max SAT).

Inapproximability has been a fruitful area of research in computational complexity theory since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set. Now the time has come to embrace the potential gains in approximability that these new theories offer, which we discuss here. Looking back, after Arora et al. proved the PCP theorem in 1991, it was shown that Johnson's 1974 approximation algorithms for Max SAT, Set Cover, Independent Set and Coloring all achieve the optimal approximation ratio, assuming P != NP.

Consider these two items: performance guarantees and epsilon terms.
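As a quick, hedged illustration of the greedy vertex cover routine just described, here is a minimal Python sketch; the function name and edge-list encoding are chosen for illustration only and come from no cited source.

   # Greedy 2-approximation for vertex cover: repeatedly take an uncovered
   # edge and add both of its endpoints to the cover.
   def greedy_vertex_cover(edges):
       cover = set()
       for u, v in edges:
           if u not in cover and v not in cover:
               # (u, v) is uncovered; taking both endpoints means the chosen
               # edges form a matching, and any cover needs one endpoint of
               # each matching edge, hence |cover| <= 2 * OPT.
               cover.add(u)
               cover.add(v)
       return cover

   # Example: a 4-cycle 1-2-3-4 plus the pendant edge 4-5.
   print(greedy_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1), (4, 5)]))

The factor-two bound follows because an optimal cover must contain at least one endpoint of every edge the loop selects, and those selected edges are pairwise disjoint.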
Performance guarantees for some approximation algorithms make it possible to prove certain properties about the approximation of the optimum result. For example, a ρ-approximation algorithm A is one for which it has been proven that the value/cost, f(x), of the approximate solution A(x) to an instance x will not be more (or less, depending on the situation) than a factor ρ times the value, OPT, of an optimum solution:

   \begin{cases}\mathrm{OPT} \leq f(x) \leq \rho \mathrm{OPT},\qquad\mbox{if } \rho > 1; \\ \rho \mathrm{OPT} \leq f(x) \leq \mathrm{OPT},\qquad\mbox{if } \rho < 1.\end{cases}

The factor ρ is called the relative performance guarantee. An approximation algorithm has an absolute performance guarantee or bounded error c if it has been proven for every instance x that

   (\mathrm{OPT} - c) \leq f(x) \leq (\mathrm{OPT} + c).
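A standard concrete instance of an absolute guarantee (my example, not taken from this draft's sources) is edge coloring: the constructive proof of Vizing's theorem colors any simple graph with at most Δ + 1 colors in polynomial time, while the optimum is Δ or Δ + 1, so

   \mathrm{OPT} \leq f(x) \leq \mathrm{OPT} + 1,

that is, an absolute error of c = 1.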

Similarly, the performance guarantee, R(x,y), of a solution y to an instance x is defined as

   R(x,y) = \max \left ( \frac{OPT}{f(y)}, \frac{f(y)}{OPT} \right )

where f(y) is the value/cost of the solution y for the instance x. Clearly, the performance guarantee is greater than or equal to 1, and equal to 1 if and only if y is an optimal solution. If an algorithm A guarantees to return solutions with a performance guarantee of at most r(n), then A is said to be an r(n)-approximation algorithm and has an approximation ratio of r(n). Likewise, a problem with an r(n)-approximation algorithm is said to be r(n)-approximable or to have an approximation ratio of r(n).[1][2] For minimization problems, the two different guarantees provide the same result, while for maximization problems a relative performance guarantee of ρ is equivalent to a performance guarantee of r = ρ^−1. In the literature both definitions are common, but it is always clear which one is meant since, for maximization problems, ρ ≤ 1 while r ≥ 1. The absolute performance guarantee Ρ_A of an approximation algorithm A, where x refers to an instance of a problem and R_A(x) is the performance guarantee of A on x (i.e. ρ for problem instance x), is:

   \Rho_A = \inf \{ r \geq 1 \,|\, R_A(x) \leq r, \forall x \}

That is, Ρ_A is the largest bound on the approximation ratio, r, seen over all possible instances of the problem. Likewise, the asymptotic performance ratio R_A^\infty is:

   R_A^\infty = \inf \{ r \geq 1 \,|\, \exists n \in \mathbb{Z}^+, R_A(x) \leq r, \forall x, |x| \geq n\}

That is, it is the same as the absolute performance ratio, but with a lower bound n on the size of problem instances. These two types of ratio are used because there exist algorithms for which the difference between the two is significant. The guarantees can be summarized as follows:

   Performance guarantee   r-approx[1][2]   ρ-approx   rel. error[2]   rel. error[1]     norm. rel. error[1][2]     abs. error[1][2]
   max: f(x) ≥             r^−1 OPT         ρ OPT      (1 − c) OPT     (1 − c) OPT       (1 − c) OPT − c BEST       OPT − c
   min: f(x) ≤             r OPT            ρ OPT      (1 + c) OPT     (1 − c)^−1 OPT    (1 − c)^−1 OPT − c BEST    OPT + c

Epsilon terms

In the literature, an approximation ratio for a maximization (minimization) problem of c − ϵ (min: c + ϵ) means that the algorithm has an approximation ratio of c ∓ ϵ for arbitrary ϵ > 0, but that the ratio has not (or cannot) been shown for ϵ = 0. An example of this is the optimal inapproximability (inexistence of approximation) ratio of 7/8 + ϵ for satisfiable MAX-3SAT instances due to Johan Håstad.[3] As mentioned previously, when c = 1 the problem is said to have a polynomial-time approximation scheme.

An ϵ-term may appear when an approximation algorithm introduces a multiplicative error and a constant error while the minimum optimum of instances of size n goes to infinity as n does. In this case, the approximation ratio is c ∓ k/OPT = c ∓ o(1) for some constants c and k. Given arbitrary ϵ > 0, one can choose a large enough N such that the term k/OPT < ϵ for every n ≥ N. For every fixed ϵ, instances of size n < N can be solved by brute force, thereby showing an approximation ratio (the existence of approximation algorithms with a guarantee) of c ∓ ϵ for every ϵ > 0.

See also domination analysis, which considers guarantees in terms of the rank of the computed solution.
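To make these ratio definitions concrete, here is a hedged sketch that measures the performance guarantee R(x, y) = max(OPT/f(y), f(y)/OPT) of the greedy cover from the earlier sketch against a brute-force optimum; the helper names are invented for illustration and the exhaustive search is only feasible on toy instances.

   from itertools import combinations

   def greedy_vertex_cover(edges):
       # Same greedy routine as in the earlier sketch.
       cover = set()
       for u, v in edges:
           if u not in cover and v not in cover:
               cover.update((u, v))
       return cover

   def brute_force_opt(vertices, edges):
       # Smallest cover size found by exhaustive search (exponential time).
       for k in range(len(vertices) + 1):
           for subset in combinations(vertices, k):
               chosen = set(subset)
               if all(u in chosen or v in chosen for u, v in edges):
                   return k

   edges = [(1, 2), (2, 3), (3, 4), (4, 1), (4, 5)]
   opt = brute_force_opt({1, 2, 3, 4, 5}, edges)
   f = len(greedy_vertex_cover(edges))
   ratio = max(opt / f, f / opt)   # performance guarantee R(x, y)
   print(opt, f, ratio)            # prints 2 4 2.0, within the proven factor of 2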
References
   ^ a b c d e G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi (1999). Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties.
   ^ a b c d e Viggo Kann (1992). On the Approximability of NP-complete Optimization Problems.
   ^ Johan Håstad (1999). "Some Optimal Inapproximability Results". Journal of the ACM.
   Vazirani, Vijay V. (2003). Approximation Algorithms. Berlin: Springer. ISBN 3540653678.
   Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 35: Approximation Algorithms, pp. 1022–1056.
   Dorit H. Hochbaum, ed. Approximation Algorithms for NP-Hard problems, PWS Publishing Company, 1997. ISBN 0-534-94968-1. Chapter 9: Various Notions of Approximations: Good, Better, Best, and More

External links

   Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski and Gerhard Woeginger, A compendium of NP optimization problems.

173.164.197.30 (talk) 05:06, 25 March 2011 (UTC)

There was an error on this page, specifically that the "dominance number" is made to sound as if it is a "replacement" for some other measure, whereas the reference I found calls it an orthogonal measure. There should probably still be a section about other measurements of approx. algorithms. The paper [1] I found that confirmed this error had some interesting content that perhaps someone should write up. --NotQuiteEXPComplete 16:50, 22 August 2006 (UTC)

An obvious problem: R_a is used to define R_a; whassup? Gritzko 06:30, 11 June 2007 (UTC)

One is a function, the other is a set. You are right though, it'd probably be less confusing to use another variable. --NotQuiteEXPComplete 09:26, 20 June 2007 (UTC)

References for Approximation Guarantees


Seeing how there are many different and varying definitions used for approximation guarantees, I would prefer to see inline sources. The article currently uses ρ-approximation algorithms (relative performance guarantee), and I strongly believe that the definition is taken from the Handbook of Approximation Algorithms and Metaheuristics, but I cannot verify this myself. The definition used also agrees with the "factor δ approximation algorithms" of Vazirani's book, but it does not agree with the usual performance ratio, which in Cormen et al. uses the symbol ρ. —Preceding unsigned comment added by C. lorenz (talk · contribs) 18:23, 24 April 2009 (UTC)