CMPT 364 - Automata and Formal Languages
Topic 10: Turing Machines
Ian McQuillan
[email protected]
Department of Computer Science
University of Saskatchewan
April 2, 2020
1 / 52
Turing Machine
• In 1936, Alan Turing invented the Turing Machine.
• It was a model intended for any possible computation.
• Despite being called a “machine”, it is just a model.
• It sounds more like a computer than an algorithm.
2 / 52
Turing Machine
• The idea is simple now that we know DFA’s.
• We have our input, and we can scan it, and at the end we either accept the
word or we do not.
• Except, our input is encoded on a “tape”, which extends in both directions
past the input.
• And we can put any symbol in each position of the tape.
3 / 52
Turing Machine
• This is sometimes described as having infinite storage.
• It isn’t so much infinite as arbitrarily large.
• Whenever you need an extra spot, you can get it (just like with
pushdown automata).
4 / 52
Turing Machine
[Figure: a finite control connected to a read/write head that scans one cell of a
two-way infinite tape holding a string of a’s and b’s.]
5 / 52
Turing Machine
• This is akin to a linked list.
• We can keep adding nodes to extend the list.
6 / 52
Turing Machine
• Technically, this isn’t directly realistic if we have a finite number of
molecules.
• But effectively, it is.
• When we compute, even if we run out of space, we can start using
extra storage.
7 / 52
Precise
• If we wanted to be really precise, then we could take all the
molecules in the universe that we could store information on, and
encode them in a finite automaton.
• But this wouldn’t capture the interesting parts of a computation.
• The abstraction to infinite is useful.
8 / 52
Turing Machine
• We still have the “finite control”.
• This consists of the finite set of states, and the method that we go
about making moves (the program).
• The Turing Machine tape is divided into cells, and is always
scanning one cell at any particular time.
9 / 52
Turing Machines
A move of a Turing Machine consists of the following:
based on:
• the current state,
• the current symbol being scanned,
it does:
• changes to another state,
• overwrites the current cell with a tape symbol (could be the same),
• moves either left or right one cell on the tape.
10 / 52
Formally,
Definition
A Turing Machine is denoted by a 7-tuple
M = (Q,Σ, Γ, δ, q0,B,F ),
where
Q: the finite set of states,
Σ: the finite input alphabet,
Γ: the finite tape alphabet, with Σ ⊆ Γ,
δ: the transition function, a partial function from Q × Γ to
Q × Γ × {L,R}, where L and R are the directions left and right,
q0: the start state,
B: the blank symbol, in Γ − Σ,
F: the final state set.
11 / 52
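A minimal sketch (not part of the original slides) of the 7-tuple written down as plain Python data: δ becomes a dictionary keyed by (state, scanned symbol). The names and the tiny example machine, which just scans right over a’s and accepts at the first blank, are assumptions made purely for illustration.

```python
# A hypothetical encoding of a Turing Machine M = (Q, Sigma, Gamma, delta, q0, B, F).
BLANK = "B"

tm = {
    "Q":     {"q0", "qf"},
    "Sigma": {"a"},
    "Gamma": {"a", BLANK},
    "delta": {                              # partial function: missing keys mean "halt"
        ("q0", "a"):   ("q0", "a", "R"),    # skip right over a's
        ("q0", BLANK): ("qf", BLANK, "L"),  # reach the blank after the input: accept
    },
    "q0": "q0",
    "B":  BLANK,
    "F":  {"qf"},
}
```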
Instantaneous descriptions
• Similarly to PDA’s, we would like to be able to describe a Turing
Machine at any given point in time.
• We need to describe the state, and the current contents of the tape,
and the position of the read/write head.
• At any given point, we have only used a finite amount of tape so far.
12 / 52
Instantaneous description
Definition
Let M = (Q,Σ, Γ, δ, q0,B,F ) be a Turing Machine. An instantaneous
description of M is a string
X1X2 · · ·Xi−1qXiXi+1 · · ·Xn,
where X1, . . . , Xn ∈ Γ and q ∈ Q,
• the tape head is currently scanning the cell holding Xi ,
• the cell symbols X1 through Xn are the contents of the tape between
the first non-blank and the last non-blank, unless
• i = 1, in which case some prefix of X1 · · ·Xn may consist of blanks, or
• i = n, in which case some suffix of X1 · · ·Xn may consist of blanks.
For example, if the non-blank portion of the tape is abba and the head is scanning
the second b in state q1, the instantaneous description is ab q1 ba.
13 / 52
Moves
• Graphically, an instantaneous description
X1X2 · · ·Xi−1qXiXi+1 · · ·Xn
can be thought of as follows:
[Figure: tape cells X1 X2 · · · Xi−1 Xi Xi+1 · · · Xn, with the head, in state q,
positioned on the cell holding Xi.]
14 / 52
Moves
• Roughly, moves dictate how one instantaneous description changes
into another.
• A move must follow from a transition.
15 / 52
Moves
Definition
Let M = (Q,Σ, Γ, δ, q0,B,F ) be a Turing Machine.
• Let δ(q,Xi) = (p,Y, L) be a transition. Then
qX1X2 · · ·Xn ⊢M pBYX2 · · ·Xn, if i = 1,
X1X2 · · ·Xn−1qXn ⊢M X1X2 · · ·Xn−2pXn−1, if i = n and Y = B,
X1X2 · · ·Xi−1qXiXi+1 · · ·Xn ⊢M X1X2 · · ·Xi−2pXi−1YXi+1 · · ·Xn, otherwise.
• Let δ(q,Xi) = (p,Y,R) be a transition. Then
qX1X2 · · ·Xn ⊢M pX2 · · ·Xn, if i = 1 and Y = B,
X1X2 · · ·Xn−1qXn ⊢M X1X2 · · ·Xn−1YpB, if i = n,
X1X2 · · ·Xi−1qXiXi+1 · · ·Xn ⊢M X1X2 · · ·Xi−1YpXi+1 · · ·Xn, otherwise.
16 / 52
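A sketch (again, not from the slides) of one move ⊢M in code. It assumes the tape is kept as a dictionary from cell index to symbol, with absent cells treated as blank, so the trimming of leading and trailing blanks in the definition above never has to be done explicitly.

```python
BLANK = "B"

def step(delta, state, tape, head):
    """Apply one move of the TM, mirroring the ⊢M relation above.

    tape is a dict {cell index: symbol}; cells not present hold the blank.
    Returns the new (state, tape, head), or None if delta is undefined
    (the machine halts).
    """
    scanned = tape.get(head, BLANK)
    if (state, scanned) not in delta:
        return None                       # no transition: halt
    new_state, write, direction = delta[(state, scanned)]
    if write == BLANK:
        tape.pop(head, None)              # keep only non-blank cells stored
    else:
        tape[head] = write
    head += 1 if direction == "R" else -1
    return new_state, tape, head
```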
Derivations
Definition
Let M = (Q,Σ, Γ, δ, q0,B,F ) be a Turing Machine. Then
• u ⊢∗M u, where u is an instantaneous description of M, and
• u ⊢∗M v , where u ⊢∗M x and x ⊢M v for some instantaneous description x.
17 / 52
Language acceptance
Definition
Let M = (Q,Σ, Γ, δ, q0,B,F ) be a Turing Machine. Then the language
accepted by M is the set
{w | w ∈ Σ∗, q0w ⊢∗M αpβ for some α, β ∈ Γ∗ and p ∈ F}.
18 / 52
Examples
Example
How do we accept the language {a^i b^j | i, j ≥ 0}?
19 / 52
Examples
Example
How do we accept {a^i b^i | i ≥ 0}? Let
M = (Q, {a, b}, {a, b, B, c, d}, δ, q0, B, {qf}), where δ is defined as
follows (the machine repeatedly marks the leftmost unmarked a as c and the
leftmost unmarked b as d, and accepts once every symbol has been marked and
the blank is reached):
δ(q0, B) = (qf, B, R)
δ(q0, a) = (q1, c, R)
δ(q1, a) = (q1, a, R)
δ(q1, d) = (q1, d, R)
δ(q1, b) = (q2, d, L)
δ(q2, d) = (q2, d, L)
δ(q2, a) = (q3, a, L)
δ(q2, c) = (q4, c, R)
δ(q3, a) = (q3, a, L)
δ(q3, c) = (q0, c, R)
δ(q0, d) = (q4, d, R)
δ(q4, d) = (q4, d, R)
δ(q4, B) = (qf, B, L)
20 / 52
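A sketch that plugs the table above into a small simulation loop and runs it. The helper names and the step bound are assumptions made for illustration; the slides only give δ.

```python
BLANK = "B"

delta = {
    ("q0", BLANK): ("qf", BLANK, "R"),
    ("q0", "a"):   ("q1", "c", "R"),
    ("q1", "a"):   ("q1", "a", "R"),
    ("q1", "d"):   ("q1", "d", "R"),
    ("q1", "b"):   ("q2", "d", "L"),
    ("q2", "d"):   ("q2", "d", "L"),
    ("q2", "a"):   ("q3", "a", "L"),
    ("q2", "c"):   ("q4", "c", "R"),
    ("q3", "a"):   ("q3", "a", "L"),
    ("q3", "c"):   ("q0", "c", "R"),
    ("q0", "d"):   ("q4", "d", "R"),
    ("q4", "d"):   ("q4", "d", "R"),
    ("q4", BLANK): ("qf", BLANK, "L"),
}

def accepts(word, max_steps=10_000):
    tape = dict(enumerate(word))          # input written on cells 0..n-1
    state, head = "q0", 0
    for _ in range(max_steps):
        if state == "qf":
            return True
        key = (state, tape.get(head, BLANK))
        if key not in delta:
            return False                  # machine halts in a non-final state
        state, write, direction = delta[key]
        tape[head] = write
        head += 1 if direction == "R" else -1
    return False                          # safety cutoff (this machine always halts)

assert accepts("") and accepts("aaabbb")
assert not accepts("aab") and not accepts("ba")
```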
Examples
[Animation (slide 21): a run on input aaabbb. The head bounces back and forth,
marking one a as c and one b as d per round (states q0–q3); once the tape holds
cccddd, state q4 scans right to the first blank and the machine accepts in qf.]
21 / 52
Strategies?
• What about {$w$w | w ∈ {a, b}∗}?
• What about {ww | w ∈ {a, b}∗}?
22 / 52
Complexity
• When we accepted a^i b^i, how many moves did we have to apply?
• How hard is it to do in C?
• SO PAINFUL!
23 / 52
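For contrast, a sketch (in Python rather than C) of the same check in a conventional language: a single linear pass, with none of the back-and-forth head movement the Turing Machine needs.

```python
def is_ai_bi(word):
    i = len(word) - len(word.lstrip("a"))   # length of the leading run of a's
    return word == "a" * i + "b" * i

assert is_ai_bi("") and is_ai_bi("aaabbb")
assert not is_ai_bi("aab") and not is_ai_bi("ba")
```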
Extensions
• One problem is that we had to use the same tape for input and to
store interim computations.
• We couldn’t “store” the i directly when reading the a’s.
• What if we allowed an extra tape for extra storage beyond the input
— for “scratch work”?
• We could store the i on the scratch tape as we read the a’s, then
use it to check the number of b’s.
24 / 52
Multitape Turing Machine
• Multitape Turing Machines are just like regular Turing Machines, but
• there is more than one tape,
• the input is on the first tape at the start, and all other tapes contain only
blanks,
• we can control each tape independently.
25 / 52
Multitape Turing Machine
[Figure: a k-tape Turing Machine in its start state q0; tape 1 holds the input
a1 a2 · · · an, the other tapes are entirely blank, and each tape has its own head.]
26 / 52
Formally,
Definition
A k-tape Turing Machine is denoted by a 7-tuple
M = (Q,Σ, Γ, δ, q0,B,F ),
where
Q: the finite set of states,
Σ: the finite input alphabet,
Γ: the finite tape alphabet, with Σ ⊆ Γ,
δ: the transition function, a partial function from Q × Γ^k to
Q × Γ^k × {L,R,S}^k, where L, R, S are the directions left, right, and stay,
q0: the start state,
B: the blank symbol, in Γ − Σ,
F: the final state set.
This means that each transition maps a state and the k scanned tape symbols to a
new state, k symbols to write, and a direction for each of the k tapes.
27 / 52
Examples
Example
How do we accept {a^i b^i c^i | i ≥ 0}?
Let M = (Q, {a, b, c}, {a, b, c, B}, δ, q0, B, {qf}) be a 2-tape Turing
Machine, where δ is defined as follows (tape 2 collects one a for each a read on
tape 1; the b’s then consume it right-to-left and the c’s re-check it left-to-right):
δ(q0, (B, B)) = (qf, (B, B), (R, R))
δ(q0, (a, B)) = (q0, (a, a), (R, R))
δ(q0, (b, B)) = (q1, (b, B), (S, L))
δ(q1, (b, a)) = (q1, (b, a), (R, L))
δ(q1, (c, B)) = (q2, (c, B), (S, R))
δ(q2, (c, a)) = (q2, (c, a), (R, R))
δ(q2, (B, B)) = (qf, (B, B), (S, S))
How many transitions were applied on an input of size n?
28 / 52
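A sketch of a 2-tape simulation loop running the machine above; as before, the representation (dictionaries for tapes, a step bound) is an assumption made for illustration.

```python
BLANK = "B"
MOVE = {"L": -1, "R": +1, "S": 0}

delta = {
    ("q0", (BLANK, BLANK)): ("qf", (BLANK, BLANK), ("R", "R")),
    ("q0", ("a", BLANK)):   ("q0", ("a", "a"),     ("R", "R")),
    ("q0", ("b", BLANK)):   ("q1", ("b", BLANK),   ("S", "L")),
    ("q1", ("b", "a")):     ("q1", ("b", "a"),     ("R", "L")),
    ("q1", ("c", BLANK)):   ("q2", ("c", BLANK),   ("S", "R")),
    ("q2", ("c", "a")):     ("q2", ("c", "a"),     ("R", "R")),
    ("q2", (BLANK, BLANK)): ("qf", (BLANK, BLANK), ("S", "S")),
}

def accepts(word, max_steps=10_000):
    tapes = [dict(enumerate(word)), {}]       # tape 1 holds the input, tape 2 is blank
    heads = [0, 0]
    state = "q0"
    for _ in range(max_steps):
        if state == "qf":
            return True
        scanned = tuple(t.get(h, BLANK) for t, h in zip(tapes, heads))
        if (state, scanned) not in delta:
            return False                      # no applicable transition: reject
        state, writes, dirs = delta[(state, scanned)]
        for k in range(2):
            tapes[k][heads[k]] = writes[k]
            heads[k] += MOVE[dirs[k]]
    return False

assert accepts("") and accepts("aabbcc")
assert not accepts("aabbc") and not accepts("abcabc")
```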
Examples
[Animation (slide 29): a run on input aabbcc. Tape 2 accumulates aa while the a’s
are read, its head walks back left over those a’s while the b’s are read, and walks
right over them again while the c’s are read; both heads then reach blanks and the
machine accepts in qf.]
29 / 52
Multitape vs. regular
• Multitape Turing Machines can operate faster than regular Turing
Machines.
• But can they accept more?
• If they did accept the same languages, how could we show it?
30 / 52
Multitape vs. regular
• We have shown many automata, grammars and regular expressions
can describe the same languages.
• How did we show equivalence?
• We need to build an algorithm that takes an arbitrary multitape
Turing Machine as input, and outputs a single tape Turing Machine
that accepts the same language.
31 / 52
Multitape vs. regular
• How do we “simulate” a multi-tape TM with a regular one?
• First trick - adding multiple tracks.
• Since we can change the tape alphabet, we can have each cell hold l
symbols at once (where l is a constant).
• For example, if our original tape alphabet is {a, b, c}, to hold 3
symbols, we make a new tape alphabet which includes
(a, a, a), (a, a, b), (a, a, c), (a, b, a), (a, b, b), (a, b, c),
(a, c, a), (a, c, b), (a, c, c), (b, a, a), (b, a, b), (b, a, c), (b, b, a),
(b, b, b), (b, b, c), (b, c, a), (b, c, b), (b, c, c), (c, a, a), (c, a, b),
(c, a, c), (c, b, a), (c, b, b), (c, b, c), (c, c, a), (c, c, b), (c, c, c).
• This is a little like having multiple tapes, except with only one head.
32 / 52
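A small sketch of the trick: a composite symbol is just a tuple of track symbols, so the enlarged alphabet is the l-fold product of the original one. The function name is my own.

```python
from itertools import product

def track_alphabet(gamma, tracks):
    """All composite symbols for a tape whose cells each hold `tracks` symbols."""
    return set(product(gamma, repeat=tracks))

# For Gamma = {a, b, c} and 3 tracks this is exactly the 27 triples listed above.
assert len(track_alphabet({"a", "b", "c"}, 3)) == 27

# One cell of a 3-track tape is then just one of these tuples, e.g.:
cell = ("a", "c", "b")   # track 1 holds a, track 2 holds c, track 3 holds b
```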
Multitape vs. regular
• If we have a k-tape TM, then we make a single tape TM with 2k
“tracks”.
• The contents of the k tapes will be stored in tracks 2, 4, 6, . . . , 2k.
• Tracks 1, 3, 5, . . . , 2k − 1 will each store a single symbol “pointing to” the
head position of the corresponding tape.
33 / 52
Multitape vs. regular
[Figure (slide 34): the single tape drawn with its 2k tracks; each odd track holds a
single $ marking the simulated head position of the corresponding tape.]
34 / 52
Multitape vs. regular
• The original multitape TM starts with the input a1 · · · an on its first tape.
[Figure: the k-tape machine in state q0 with a1 a2 · · · an on tape 1 and all other
tapes blank.]
• The new single-tape TM starts with the same input a1 · · · an on its one tape.
35 / 52
Multitape vs. regular
• The new TM first rewrites each cell as a 2k-track symbol: ai goes in the
second track, all other even tracks are blank, and each odd track holds a $
in its first cell (marking a head position) and blanks elsewhere.
• That is, it changes the input a1 · · · an to
($, a1, $, B, . . . , $, B)(B, a2, B, B, . . . , B, B) · · · (B, an, B, B, . . . , B, B).
• This is similar to the multitape TM except with “tracks” instead of
tapes.
[Figure: the tape holding a1 a2 · · · an is rewritten in place into its 2k-track
encoding, with the $ markers of all tracks initially in the leftmost cell.]
36 / 52
States
For every move of the multitape TM
δ(q, (X1, . . . ,Xk)) = (p, (Y1, . . . ,Yk), (D1, . . . ,Dk))
[Figure: the single tape with its 2k tracks; each $ marker sits above the symbol Xt
currently scanned on simulated tape t, and the single head is in state q.]
1. by scanning from left to right, remember what is stored “under the $’s”
(the simulated tape heads),
2. see which transition of the multitape TM would apply,
3. then, from left to right, apply the appropriate changes to each track,
4. go back to the beginning and repeat for the next move (see the sketch
after this slide).
37 / 52
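A sketch of one simulated move of the multitape machine, following the four steps above. It assumes the 2k-track tape is kept as a dictionary of 2k-tuples, with each marker track holding a single $. All names are my own; this illustrates the idea rather than reproducing the textbook construction verbatim.

```python
BLANK, MARK, UNMARK = "B", "$", "B"
MOVE = {"L": -1, "R": +1, "S": 0}

def blank_cell(k):
    # one cell of the single tape: (marker, content) for each of the k simulated tapes
    return (UNMARK, BLANK) * k

def macro_step(tape, state, delta, k):
    used = sorted(tape)
    # step 1: scan left to right, remembering what sits under each $ marker
    heads = [next(i for i in used if tape[i][2 * t] == MARK) for t in range(k)]
    scanned = tuple(tape[heads[t]][2 * t + 1] for t in range(k))
    # step 2: see which transition of the multitape TM applies (halt if none does)
    if (state, scanned) not in delta:
        return None
    new_state, writes, dirs = delta[(state, scanned)]
    # step 3: apply the changes to each track: write the new symbol and move the marker
    for t in range(k):
        i = heads[t]
        cell = list(tape[i])
        cell[2 * t], cell[2 * t + 1] = UNMARK, writes[t]
        tape[i] = tuple(cell)
        j = i + MOVE[dirs[t]]
        cell = list(tape.get(j, blank_cell(k)))
        cell[2 * t] = MARK
        tape[j] = tuple(cell)
    # step 4: return to the caller, which repeats this for the next simulated move
    return new_state
```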
Multitape vs. regular
• Thus, single-tape Turing Machines can simulate multitape Turing
Machines.
• They accept the same languages.
• Although multitape Turing Machines can be faster (by a
polynomial).
• The time taken by a one-tape TM to simulate n moves of a
multitape TM is O(n^2).
38 / 52
Random access machines
• “Regular silicon computers” can simulate TM’s.
• How?
39 / 52
Random access machines
• Can Turing Machines simulate random access machines?
• We won’t go over the formal construction (see text).
• We use a multitape TM.
40 / 52
Random access machines
• On the first tape we write out the memory contents, indexed by
address.
$0∗w0#1∗w1#10∗w2#11∗w3#100∗w4 · · ·
(addresses are written in binary, each followed by ∗ and the word stored there;
entries are separated by #).
• Programs and instructions are stored in some of the memory.
• We have a tape dedicated to the address of the next instruction.
• We have a tape that holds the instruction stored at that address.
• The Turing Machine simulates each instruction of the random
access machine one at a time.
41 / 52
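A small sketch (my own helper, not from the text) that produces the memory layout above for a list of memory words, with addresses written in binary.

```python
def encode_memory(words):
    """Lay out memory words on the tape as $addr*word entries separated by '#'."""
    return "$" + "#".join(f"{addr:b}*{w}" for addr, w in enumerate(words))

# e.g. three memory words w0, w1, w2:
assert encode_memory(["110", "001", "101"]) == "$0*110#1*001#10*101"
```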
Power
• Deterministic Turing Machines have the same computational power
as silicon computers.
• Deterministic Turing Machines can simulate random access
machines, although there is a polynomial time slowdown.
Theorem
(from the text) A computer of the type described [in the text] can be
simulated for n steps by a one-tape Turing Machine using at most O(n^6)
steps of the Turing Machine.
42 / 52
Power hierarchy
[Figure: nested language families — Regular Languages ⊂ DCFL ⊂ CFL ⊂ languages
accepted by deterministic TMs.]
• For finite automata, nondeterminism did not increase what
languages we could accept.
• For pushdown automata, it did increase.
43 / 52
Nondeterministic Turing Machines
• A nondeterministic Turing Machine is just like the deterministic
Turing Machine, but the transition function δ associates with each state
and tape symbol a set of triples, each consisting of a state, a tape symbol,
and a direction.
• That is, it is a function from Q × Γ to 2^(Q × Γ × {L,R}).
44 / 52
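In a program-level representation, the only change is that δ now maps to a set of options. A tiny sketch with made-up transitions:

```python
# Hypothetical NTM transitions: delta maps (state, symbol) to a *set* of
# (state, symbol, direction) triples.  In q0 on an 'a', the machine may either
# keep scanning right or guess that this is the spot to turn around.
delta = {
    ("q0", "a"): {("q0", "a", "R"), ("q1", "a", "L")},
    ("q1", "a"): {("q1", "a", "L")},
}
# Missing keys correspond to the empty set: no move is possible there.
```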
Nondeterministic Turing Machines
Instantaneous descriptions are exactly as with deterministic Turing Machines.
Definition
Let M = (Q,Σ, Γ, δ, q0,B,F ) be a nondeterministic Turing Machine.
• Let (p,Y, L) ∈ δ(q,Xi) be a transition. Then
qX1X2 · · ·Xn ⊢M pBYX2 · · ·Xn, if i = 1,
X1X2 · · ·Xn−1qXn ⊢M X1X2 · · ·Xn−2pXn−1, if i = n and Y = B,
X1X2 · · ·Xi−1qXiXi+1 · · ·Xn ⊢M X1X2 · · ·Xi−2pXi−1YXi+1 · · ·Xn, otherwise.
• Let (p,Y,R) ∈ δ(q,Xi) be a transition. Then
qX1X2 · · ·Xn ⊢M pX2 · · ·Xn, if i = 1 and Y = B,
X1X2 · · ·Xn−1qXn ⊢M X1X2 · · ·Xn−1YpB, if i = n,
X1X2 · · ·Xi−1qXiXi+1 · · ·Xn ⊢M X1X2 · · ·Xi−1YpXi+1 · · ·Xn, otherwise.
Moves are the same as well.
45 / 52
Language acceptance for NTM’s
Definition
Let M = (Q,Σ, Γ, δ, q0,B,F ) be a nondeterministic Turing Machine.
Then the language accepted by M is the set
{w | w ∈ Σ∗, q0w ⊢∗M αpβ for some α, β ∈ Γ∗ and p ∈ F}.
It accepts if there is some sequence of transitions that can be applied
leading to acceptance.
Do nondeterministic Turing Machines accept more languages than
deterministic Turing Machines?
46 / 52
Nondeterminism with determinism
• We are going to simulate a nondeterministic Turing Machine with a
deterministic one.
• We start by using a 2-tape deterministic Turing Machine.
• We start with the input w on the first tape.
47 / 52
Nondeterminism with determinism
• We are going to have a sequence of instantaneous descriptions
placed on the first tape.
• We start by replacing the input w with an instantaneous description #q0w.
• Throughout the computation, tape 1 will contain:
∗ ID1 ∗ ID2 ∗ ID3 ∗ · · · ,
• We will always be examining one instantaneous description at a
time, and we will change the ∗ before that one to #.
• Then on tape 2, we write all instantaneous descriptions that can
follow from the current one by one transition.
• Then we copy all ID’s from tape 2 to the end of tape 1, and
continue from the next instantaneous description of tape 1.
• If we hit an ID with a final state, we accept.
48 / 52
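A sketch of the same breadth-first idea in code. It works directly on (state, tape, head) triples rather than on the ∗/# bookkeeping of tape 1, and it caps the number of explored IDs only so the sketch terminates; the construction on the slide simply keeps going.

```python
from collections import deque

BLANK = "B"
MOVE = {"L": -1, "R": +1}

def ntm_accepts(delta, start, finals, word, max_ids=100_000):
    """Deterministic breadth-first search over the NTM's instantaneous descriptions."""
    start_id = (start, tuple(enumerate(word)), 0)   # (state, frozen tape, head)
    queue, seen = deque([start_id]), {start_id}
    while queue and len(seen) <= max_ids:
        state, tape_items, head = queue.popleft()
        if state in finals:
            return True                             # some branch accepts
        tape = dict(tape_items)
        for new_state, write, direction in delta.get((state, tape.get(head, BLANK)), ()):
            new_tape = dict(tape)
            new_tape[head] = write
            new_id = (new_state, tuple(sorted(new_tape.items())), head + MOVE[direction])
            if new_id not in seen:
                seen.add(new_id)
                queue.append(new_id)
    return False   # no accepting ID found within the explored portion
```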
Nondeterminism with determinism
• If there is some sequence of transitions leading the nondeterministic
TM to acceptance, then eventually, the deterministic one will accept
as well.
• It is essentially doing a breadth first traversal across all possible
instantaneous descriptions.
• Nondeterministic Turing Machines accept the same family of
languages as deterministic Turing Machines.
49 / 52
Multitape-nondeterministic TM’s
• And we can allow multi-tape Turing Machines to be
nondeterministic as well.
50 / 52
Example
Build a multitape nondeterministic Turing Machine accepting
{w^i | w ∈ {a, b}∗, i ≥ 2}.
The idea: nondeterministically copy a prefix w of the input to tape 2 (state q0),
then repeatedly rewind tape 2 (q1) and compare it against the next block of the
input (q2), accepting when both heads reach blanks together.
δ(q0, (a, B)) = {(q0, (a, a), (R, R)), (q1, (a, B), (S, L))}
δ(q0, (b, B)) = {(q0, (b, b), (R, R)), (q1, (b, B), (S, L))}
δ(q1, (a, a)) = {(q1, (a, a), (S, L))}
δ(q1, (a, b)) = {(q1, (a, b), (S, L))}
δ(q1, (b, a)) = {(q1, (b, a), (S, L))}
δ(q1, (b, b)) = {(q1, (b, b), (S, L))}
δ(q1, (a, B)) = {(q2, (a, B), (S, R))}
δ(q1, (b, B)) = {(q2, (b, B), (S, R))}
δ(q2, (a, a)) = {(q2, (a, a), (R, R))}
δ(q2, (b, b)) = {(q2, (b, b), (R, R))}
δ(q2, (a, B)) = {(q1, (a, B), (S, L))}
δ(q2, (b, B)) = {(q1, (b, B), (S, L))}
δ(q2, (B, B)) = {(qf, (B, B), (S, S))}
51 / 52
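This is not a simulation of the two-tape machine, but a sketch of the same guess-and-verify idea in ordinary code: try every possible |w| (the point where q0 could stop copying to tape 2) and check that the whole input consists of repetitions of that prefix.

```python
def is_repetition(x):
    """Is x = w^i for some w and i >= 2?  Each candidate length of w plays the
    role of one nondeterministic guess of where the machine stops copying."""
    n = len(x)
    for length in range(1, n // 2 + 1):
        if n % length == 0 and x == x[:length] * (n // length):
            return True
    return False

assert is_repetition("abab") and is_repetition("aaa")
assert not is_repetition("aba") and not is_repetition("a")
```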
Recursively Enumerable Languages
Definition
The family of recursively enumerable languages is the family of languages
which can be accepted by Turing Machines.
52 / 52
