Geonho Yun

What Is a Turing Machine?

2024-11-10

Around a century ago, when world wars did not yet need to be numbered and the discipline of computer science was yet to be conceived, a great question in mathematics, known as the Entscheidungsproblem, asked whether every mathematically formulated problem could be answered using a system of formally defined steps. In other words: is there a problem impossible to solve with any algorithm?

During his attempt to solve the Entscheidungsproblem, Alan Turing designed a rather peculiar machine: not a machine in the physical sense, but a mathematical automaton, known today as the Turing machine. It is defined as follows:

$M = (Q, \Sigma, \Gamma, B, q_0, \overline{q}, \delta),$
where
  • $Q$ is the state space,
  • $\Sigma$ is the input alphabet,
  • $\Gamma$ is the tape alphabet,
  • $B$ is the blank symbol,
  • $q_0$ is the start state,
  • $\overline{q}$ is the end state,
  • $\delta : (Q \setminus \{ \overline{q} \}) \times \Gamma \rightarrow Q \times \Gamma \times \{L, N, R\}$ is the state transition function.

What the fuck? What is that supposed to represent?

I'll show you.

The Turing Machine

[Interactive visualization: a tape of cells numbered 0–99, the machine's current state (initially q0), and a transition table over the alphabet {0, 1, B}.]

Imagine a person trying to solve (compute) a mathematical problem. Equipped with a pencil and paper, he additionally keeps a book of instructions at his side. Every page except the very last contains a row of instructions, which always ends with "Go to page XXX". When handed a problem, he opens his book of instructions, reads the very first page, and follows the written instructions rigorously, writing down his results whenever necessary. Occasionally, he might also erase some of his previous writings and even rewrite them. Once he reaches the last page, he closes the book and hands over the paper. Whatever is written on the paper is interpreted as the result of our computation.

Here, the "head" of the Turing machine is the abstracted form of our computing person. When faced with a page (state $q_i \in Q$) of the book of instructions (the state transition function), he executes the instruction written on the page: "Write down a symbol ($\gamma \in \Gamma$), move your hand position ($\{L, N, R\}$), and go to page XXX (next state $q_j \in Q$)". You might as well interpret the state transition function as the brain of the computing person, or whatever suits your understanding best.
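To make the analogy concrete, here is a minimal sketch of a Turing machine simulator in TypeScript. The names (`TuringMachine`, `Transition`, `run`, and the bit-flipping example `flip`) are my own inventions for illustration; only the components (states, tape alphabet, blank symbol, transition function) come from the formal 7-tuple defined above.

```typescript
// A minimal Turing machine simulator, mirroring the 7-tuple above.

type Move = "L" | "N" | "R";

interface Transition {
  write: string; // symbol from Γ to write
  move: Move;    // head movement
  next: string;  // next state q_j ∈ Q
}

interface TuringMachine {
  start: string;  // q_0
  accept: string; // the end state
  blank: string;  // B
  // δ: (Q \ {end state}) × Γ → Q × Γ × {L, N, R}
  delta: Record<string, Record<string, Transition>>;
}

function run(m: TuringMachine, input: string, maxSteps = 10_000): string {
  const tape = new Map<number, string>(); // sparse map: an unbounded tape
  [...input].forEach((c, i) => tape.set(i, c));
  let head = 0;
  let state = m.start;

  for (let steps = 0; state !== m.accept && steps < maxSteps; steps++) {
    const read = tape.get(head) ?? m.blank; // unwritten cells hold the blank
    const t = m.delta[state]?.[read];
    if (!t) throw new Error(`no transition for (${state}, ${read})`);
    tape.set(head, t.write);
    head += t.move === "R" ? 1 : t.move === "L" ? -1 : 0;
    state = t.next;
  }

  // Whatever is on the tape (trailing blanks trimmed) is the result.
  return [...tape.entries()]
    .sort(([a], [b]) => a - b)
    .map(([, c]) => c)
    .join("")
    .replace(new RegExp(`${m.blank}+$`), "");
}

// Example: a machine that flips every bit and halts at the first blank.
const flip: TuringMachine = {
  start: "q0",
  accept: "qf",
  blank: "B",
  delta: {
    q0: {
      "0": { write: "1", move: "R", next: "q0" },
      "1": { write: "0", move: "R", next: "q0" },
      B: { write: "B", move: "N", next: "qf" },
    },
  },
};

run(flip, "0110"); // → "1001"
```

The `Map` keyed by cell index stands in for the infinite tape: any cell that was never written to simply reads as the blank symbol, just as in the formal model.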

The concept of Turing machines might be hard to grasp at first. Without fully understanding them, you will typically lag behind, especially because they are the logical foundation for lectures on undecidability, Turing reductions (🇩🇪: Unterprogrammtechnik), many-one reductions (🇩🇪: Reduktion), and so on.

Why Am I Learning This?

You've been explaining what a Turing machine is. I want to know why we're learning about them though.

Simply put, a Turing machine is a conceptual abstraction of an algorithm. The entire field of computer science is built on the belief that every algorithm, every program, every procedure, every function — call it whatever you want — can be rewritten as a Turing machine. Your operating system. Every browser extension. Every Python function you have written, and will ever write. Everything is just a Turing machine with limited memory. Every "intuitively computable" algorithm, i.e., every problem-solving procedure whose individual, finite steps can be explained by a human being (the word "finite" is a bit redundant here; if the steps weren't finite, they couldn't be explained by a human, could they?), can be solved using a Turing machine.

Is there a proof for that?

No. That's what we presume, at least.

Alright, so why does this matter?

This has several implications. Pretty huge implications.

If we succeeded in proving that a problem cannot be solved by a Turing machine, it would mean that no matter how hard we try and how rapidly our computers improve, we will never be able to construct an algorithm that solves it! The proof would set a limit on computability, dictating that regardless of how many instances of an uncomputable problem a program is able to solve, there will always be at least one problem instance for which the program fails to compute the correct answer. Take the problem of adding two decimal numbers as an example. If this problem were uncomputable, then even if an algorithm correctly handled almost all inputs, there would be at least one input on which it failed.

Thankfully, the addition of two numbers is computable, so there's no need to doubt your faithful calculator. Other problems in the world, however, are not. In fact, mathematically, computable problems make up a teeny tiny fraction of the problems of the universe.

While visualizing helps in understanding how Turing machines work internally, I found it pretty helpful to simply think of Turing machines as functions in the programming world. They take an input, do some calculations, and return an output. The only difference is that unlike in the real world, your function now has unlimited memory space and time to compute the output. In other words, the notation of algorithms can stay the same, as long as you assume that your function might never end. I will use TypeScript syntax throughout my articles, but you can use your own favorite.
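As a first taste of that view, here is a tiny sketch. The type alias `TuringComputable`, the `add` function, and the input encoding `"a+b"` are illustrative choices of my own, not standard notation:

```typescript
// A Turing machine, viewed as a function: it maps an input string to an
// output string, with the caveat that the call might never return.
type TuringComputable = (input: string) => string;

// Addition of two decimal numbers is computable, so it fits this shape.
const add: TuringComputable = (input) => {
  const [a, b] = input.split("+").map(Number);
  return String(a + b);
};

add("17+25"); // → "42"
```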