Geonho Yun

On Reductions

2025-01-14

A theoretical computer scientist wanted to join the local volunteer fire department. During his try-out, the senior officer hands him an extinguisher and a burning paper bin and asks him to solve the problem. The computer scientist skillfully puts out the fire. Impressed, the officer compliments him on solving the task, then asks what he will do with the remaining bin. After thinking for a while, he gathers fresh paper, sets the bin on fire again and proudly tells the officer: "I have now reduced the current problem to the previous one; I'll leave the rest as an exercise."

This opening anecdote from the professor used to be the source of the very last bursts of laughter before an hour of sheer confusion during the lecture on reductions.

Now, the main idea is quite simple. When facing a new Problem A, a reduction is an attempt to prove that the "new" problem is, in fact, not new, but just a variant of an old, previously known Problem B. By applying a reduction, we're proving that the entire Problem A is "actually" just a subset of B.

All we have to know is what a "problem" means and how we can prove the "actually..." part.

Entscheidungsproblem

A problem in the computer science sense means a set of formally structured problem instances and their corresponding answers. In the problem of e.g. multiplication of two numbers, $(7 \cdot 6, 42)$ would be in the problem, while $(7 \cdot 6, 41)$ wouldn't. They all have some form of uniform mathematical structure, encoded with some kind of an alphabet $\Sigma$.

We now focus on a special variant called an Entscheidungsproblem (Entscheidung: decision). In an Entscheidungsproblem, the problem instances' answers can only be true or false. In other words, they're all yes-or-no questions!

Note that we can convert any problem into an Entscheidungsproblem by concatenating each problem instance with its answer, turning the pairs in the relation into a new set of positive instances as below:

$(7 \cdot 6, 42) \rightarrow (7 \cdot 6 = 42, 1)$.

The conventional way to define a problem in the case of Entscheidungsproblems is to gather all problem instances whose answer is "yes". We call this the language of the Entscheidungsproblem, denoted as $L$. If a word $w \in \Sigma^*$ over a chosen alphabet $\Sigma$ is a positive instance, $w$ lies in $L$; if $w$ is a negative instance or a pure sequence of nonsense, i.e. not an encoding of a problem at all, $w$ is in $\overline{L}$ (the complement of $L$). In our case, encoded over the alphabet

$\Sigma = \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, \cdot, =\}$,

$7 \cdot 6 = 42 \in L$, while $7 \cdot 6 = 41, 7 \cdot 5 = 42 \in \overline{L}$ and $9 == \cdot 12 \cdot \in \overline{L}$.
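To make the language view concrete, here is a minimal sketch in Python (the encoding and the function name `in_L` are my own choices, not fixed by anything above) of a decider for the multiplication language $L$:

```python
# A tiny decider for the language L of true multiplication equations
# over the alphabet {0, ..., 9, ·, =}. Words that are not of the form
# "a·b=c" are nonsense and therefore land in the complement of L.
import re

def in_L(word: str) -> bool:
    match = re.fullmatch(r"(\d+)·(\d+)=(\d+)", word)
    if match is None:           # not an encoding of a problem at all
        return False
    a, b, c = (int(g) for g in match.groups())
    return a * b == c           # positive instance iff the equation holds

print(in_L("7·6=42"))   # True:  7·6=42 is in L
print(in_L("7·6=41"))   # False: a negative instance, in the complement
print(in_L("9==·12·"))  # False: pure nonsense, also in the complement
```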

The "actually" part

A reduction is formally defined as follows:

Definition (Reduction).

Let $A, B$ be two languages over an alphabet $\Sigma$. $A$ is reducible to $B$ ($A \leq B$) if there exists a computable function $f: \Sigma^* \rightarrow \Sigma^*$ such that $x \in A \Leftrightarrow f(x) \in B$ for all $x \in \Sigma^*$.

One way to visualize this is to imagine that $\Sigma^*$ is some kind of discrete metric space with the languages $A, B$ corresponding to shapes defined in $\Sigma^*$. The reduction $f$ is a transformation, warping the entire space so that $A$ fits inside $B$ and $\overline{A}$ inside $\overline{B}$. The following animation might help.

[Animation: the space $\Sigma^*$ being warped so that $A$ lands inside $B$]

What confuses many while proving a reduction is that the transformation isn't necessarily surjective, meaning we don't need to ask "But what is the preimage of xyz in this case?" for every word xyz. We solely need to ensure that every positive instance is mapped to a positive instance and every negative instance is mapped to a negative instance.
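As a toy example of my own (not from the lecture), take $A$ = decimal encodings of even numbers and $B$ = decimal encodings of odd numbers. The map below is a computable reduction $A \leq B$, and it is clearly not surjective (no word is ever mapped to the empty word), which is perfectly fine:

```python
# Toy reduction A <= B with A = "even numbers" and B = "odd numbers",
# both written as decimal strings. The defining property is
#   x in A  <=>  f(x) in B   for every word x.

def f(x: str) -> str:
    if not x.isdigit():       # nonsense word, not an encoding of a number:
        return "0"            # map it to a fixed negative instance of B
    return str(int(x) + 1)    # even n -> odd n+1, odd n -> even n+1

def in_A(x: str) -> bool:     # decider for A (even numbers)
    return x.isdigit() and int(x) % 2 == 0

def in_B(x: str) -> bool:     # decider for B (odd numbers)
    return x.isdigit() and int(x) % 2 == 1

# Spot-check the defining property, nonsense words included:
for x in ["4", "7", "120", "banana"]:
    assert in_A(x) == in_B(f(x))
```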

Also, being able to reduce A to B doesn't necessarily make B harder than A, since you can reduce a problem to an identical problem and still call it a reduction: if faced with a second burning bin, the computer scientist can calmly walk away, knowing that this can be reduced to the previous burning bin problem. This, however, doesn't mean that the previous burning bin was a harder problem to solve.

Reducing A to B has a couple of implications for the relative difficulty of A and B. The following points should be intuitively clear:

  1. If we can solve B, we can surely solve A as well.
  2. If we know that we can't solve A, we can say that B is impossible to solve too.
  3. If B is easy to solve, A is easy to solve.
  4. If A is really difficult, we surely know B won't be easier than A.

Points 1 and 2 refer to the relative difficulty of the two problems regarding their computability or decidability. You can swap the word "solve" for words like "compute" or "decide" and the statements still hold. Points 3 and 4 are statements about their complexity. "Easy to solve" in the complexity context normally means that the problem can be solved in polynomial time, whereas "really difficult" means that the problem is NP-hard.
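Point 1 (and its polynomial-time version, point 3) is really just function composition: feed the reduction's output into any solver for B. A minimal sketch, reusing my toy even/odd reduction from above:

```python
# Why point 1 holds: a decider for B plus a reduction f with
#   x in A  <=>  f(x) in B
# immediately yields a decider for A. If f and decide_B both run in
# polynomial time, so does the composition, which is what the
# complexity statements in points 3 and 4 rely on.

def decide_via_reduction(x, f, decide_B):
    return decide_B(f(x))    # correct by the defining property of f

# The toy even/odd reduction again, inlined to keep this self-contained:
f = lambda x: str(int(x) + 1) if x.isdigit() else "0"
in_B = lambda x: x.isdigit() and int(x) % 2 == 1   # B = odd numbers

print(decide_via_reduction("120", f, in_B))     # True:  "120" is in A (even)
print(decide_via_reduction("banana", f, in_B))  # False: nonsense is not in A
```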

If you are applying a reduction in the complexity context, you further need to show that your reduction function is computable in polynomial time. The reduction then bears the name of polynomial reduction, with the notation $A \leq_p B$.

Moment of Proof

Looking at how it actually works would help you understand it a bit better.

Definition (Dominating Set).

Let $G = (V, E)$ be an undirected graph. $D \subseteq V$ is a Dominating Set of $G$ if for every node $v \in V$: $v \in D \lor \exists u \in D: \{u, v\} \in E$.

Given $G = (V, E)$ and $k \in \mathbb{N}$, does $G$ contain a Dominating Set of size $\leq k$?

Definition (Hitting Set).

Given a set $S$, $\mathcal{C} \subseteq \text{Pot}(S)$, and $k \in \mathbb{N}$, is there an $S' \subseteq S$ with $\lvert S' \rvert \leq k$ such that $S' \cap C \neq \varnothing$ for all $C \in \mathcal{C}$?

We will show that Dominating Set $\leq$ Hitting Set step by step.
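Before the step-by-step argument, here is a minimal sketch (in Python, my own rendering of the standard closed-neighborhood construction) of what such a reduction map can look like: every node contributes the set consisting of itself and its neighbors, and a set of nodes dominates $G$ exactly when it hits every one of these sets.

```python
# Sketch of a reduction map Dominating Set <= Hitting Set via closed
# neighborhoods: D ⊆ V dominates G exactly when D hits N[v] = {v} ∪ N(v)
# for every node v, so (G, k) is mapped to the Hitting Set instance
# (S, C, k) with S = V and C = { N[v] : v ∈ V }.

def dominating_to_hitting(nodes, edges, k):
    neighborhood = {v: {v} for v in nodes}   # start with N[v] = {v}
    for u, v in edges:                       # add both endpoints as neighbors
        neighborhood[u].add(v)
        neighborhood[v].add(u)
    S = set(nodes)
    C = [frozenset(neighborhood[v]) for v in nodes]
    return S, C, k                           # same budget k, clearly polynomial

# Tiny example: the path 1 - 2 - 3. Node 2 alone dominates the graph,
# and {2} indeed hits every closed neighborhood.
S, C, k = dominating_to_hitting([1, 2, 3], [(1, 2), (2, 3)], 1)
print(all({2} & c for c in C))   # True
```

Whether this particular map really satisfies the definition of a reduction is exactly what the following steps have to establish.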