MCS-031 2018-19 Solved Assignment

Title: MCS-031 2018-19
Author: Sabnam Khatoon
Course: Design and Analysis of Algorithms
Institution: Indira Gandhi National Open University

Course Code: MCS-031
Course Title: Design and Analysis of Algorithms
Assignment Number: MCA(III)-031/Assignment/2018-19
Maximum Marks: 100
Weightage: 25%
Last Date of Submission: 15th October, 2018 (for the July session); 15th April, 2019 (for the January session)


Ans. 3


Ans. 5

Our first example of dynamic programming is an algorithm that solves the problem of matrix-chain multiplication. We are given a sequence (chain) A1, A2, ..., An of n matrices to be multiplied, and we wish to compute the product A1A2...An. We can evaluate this product using the standard algorithm for multiplying pairs of matrices as a subroutine, once we have parenthesized it to resolve all ambiguities in how the matrices are multiplied together. A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses. Matrix multiplication is associative, so all parenthesizations yield the same product. For example, if the chain of matrices is A1, A2, A3, A4, the product A1A2A3A4 can be fully parenthesized in five distinct ways:

(A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), (((A1A2)A3)A4).

The way we parenthesize a chain of matrices can have a dramatic impact on the cost of evaluating the product. Consider first the cost of multiplying two matrices. The standard algorithm is given by the following pseudocode. The attributes rows and columns are the numbers of rows and columns in a matrix.

MATRIX-MULTIPLY(A, B)
1  if columns[A] ≠ rows[B]
2     then error "incompatible dimensions"
3     else for i ← 1 to rows[A]
4              do for j ← 1 to columns[B]
5                     do C[i, j] ← 0
6                        for k ← 1 to columns[A]
7                            do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
8  return C

We can multiply two matrices A and B only if the number of columns of A equals the number of rows of B. If A is a p × q matrix and B is a q × r matrix, the resulting matrix C is a p × r matrix. The time to compute C is dominated by the number of scalar multiplications in line 7, which is pqr. In what follows, we shall express running times in terms of the number of scalar multiplications.

To illustrate the different costs incurred by different parenthesizations of a matrix product, consider a chain A1, A2, A3 of three matrices whose dimensions are 10 × 100, 100 × 5, and 5 × 50, respectively. If we multiply according to the parenthesization ((A1A2)A3), we perform 10 · 100 · 5 = 5000 scalar multiplications to compute the 10 × 5 matrix product A1A2, plus another 10 · 5 · 50 = 2500 scalar multiplications to multiply this matrix by A3, for a total of 7500 scalar multiplications. If instead we multiply according to the parenthesization (A1(A2A3)), we perform 100 · 5 · 50 = 25,000 scalar multiplications to compute the 100 × 50 matrix product A2A3, plus another 10 · 100 · 50 = 50,000 scalar multiplications to multiply A1 by this matrix, for a total of 75,000 scalar multiplications. Thus, computing the product according to the first parenthesization is 10 times faster.

The matrix-chain multiplication problem can be stated as follows: given a chain A1, A2, ..., An of n matrices, where for i = 1, 2, ..., n matrix Ai has dimension p(i-1) × p(i), fully parenthesize the product A1A2...An in a way that minimizes the number of scalar multiplications.
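To make the dynamic programming solution concrete, here is a minimal Python sketch of the classic matrix-chain recurrence (MATRIX-CHAIN-ORDER in the standard texts); the function name, table layout, and final print are illustrative choices rather than part of the assignment text.

    # A minimal sketch of the matrix-chain dynamic program; p and the helper
    # name are illustrative, not from the assignment.
    def matrix_chain_order(p):
        """p[i-1] x p[i] is the dimension of matrix A_i, for i = 1..n."""
        n = len(p) - 1                      # number of matrices in the chain
        # m[i][j] = minimum scalar multiplications to compute A_i..A_j (1-indexed)
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j] = optimal split point
        for length in range(2, n + 1):      # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = float("inf")
                for k in range(i, j):       # try every split A_i..A_k | A_k+1..A_j
                    cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if cost < m[i][j]:
                        m[i][j], s[i][j] = cost, k
        return m, s

    # The three-matrix example from the text: 10 x 100, 100 x 5, 5 x 50.
    m, s = matrix_chain_order([10, 100, 5, 50])
    print(m[1][3])   # 7500 -- the cost of the parenthesization ((A1A2)A3)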

Ans. 6 Optimality Principle

The influence of Richard Bellman is seen in algorithms throughout the computer science literature. Bellman's principle of optimality has been used to develop highly efficient dynamic programming solutions to many important and difficult problems, and the paradigm is now well entrenched as one of the most successful algorithm design tools employed by computer scientists. Bellman [23] gave the optimality principle a broad and general statement, making it applicable to problems of diverse types. Since computer programs are often employed to implement solutions based on the principle of optimality, Bellman's impact on computing in general has been immense. Here we focus in particular on the influence of Bellman's work on the area of computer science known as algorithm design and analysis. A primary goal of algorithm design and analysis is to discover theoretical properties of classes of algorithms (e.g., how efficient they are, when they are applicable) and thus learn how to better apply the algorithms to new problems.

From the perspective of algorithm design and analysis, combinatorial optimization problems form the class of problems on which the principle of optimality has had its greatest impact. Problem decomposition is a basic technique for attacking problems of this type: the solution to a large problem is obtained by combining solutions to smaller subproblems. The trick of this approach, of course, is to define an efficient decomposition procedure which ensures that combining optimal solutions to subproblems will result in an optimal solution to the larger problem. As a standard course of action, computer scientists attempt to define a decomposition based on Bellman's principle of optimality. Problem decompositions based on the principle of optimality are not only at the heart of dynamic programming algorithms, but are also integral parts of the strategies of other important classes of algorithms, such as branch and bound.

(A worked matrix-multiplication cost example for the chain A1 (30 × 35), A2 (35 × 15), A3 (15 × 5) followed here; its table is not reproduced.)
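Bellman's own setting, shortest paths, makes the decomposition concrete: an optimal route to v must consist of an optimal route to some predecessor u plus the edge (u, v). Below is a minimal Bellman-Ford sketch; the example graph and names are illustrative assumptions, not from the text.

    # A minimal sketch of Bellman-Ford relaxation, which applies the optimality
    # principle to shortest paths. The example graph is illustrative only.
    def bellman_ford(edges, n, source):
        """edges: list of (u, v, w); vertices are 0..n-1."""
        INF = float("inf")
        dist = [INF] * n
        dist[source] = 0
        for _ in range(n - 1):              # n-1 rounds of relaxation suffice
            for u, v, w in edges:
                if dist[u] + w < dist[v]:   # combine optimal subsolution dist[u]
                    dist[v] = dist[u] + w   # with the last decision (edge u->v)
        return dist

    edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
    print(bellman_ford(edges, 4, 0))        # [0, 3, 1, 4]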

Ans. 7 According to Noam Chomsky, there are four types of grammars: Type 0, Type 1, Type 2, and Type 3. The following table shows how they differ from each other:

Grammar Type | Grammar Accepted           | Language Accepted               | Automaton
Type 0       | Unrestricted grammar       | Recursively enumerable language | Turing machine
Type 1       | Context-sensitive grammar  | Context-sensitive language      | Linear-bounded automaton
Type 2       | Context-free grammar       | Context-free language           | Pushdown automaton
Type 3       | Regular grammar            | Regular language                | Finite state automaton

The scope of each type nests inside the one above it: regular ⊂ context-free ⊂ context-sensitive ⊂ recursively enumerable. (The containment diagram in the original is not reproduced here.)


Type-3 Grammar

Type-3 grammars generate regular languages. They must have a single non-terminal on the left-hand side, and a right-hand side consisting of a single terminal or a single terminal followed by a single non-terminal. The productions must be of the form X → a or X → aY, where X, Y ∈ N (non-terminals) and a ∈ T (a terminal). The rule S → ε is allowed if S does not appear on the right side of any rule.

Example:
X → ε
X → a | aY
Y → b

Type-2 Grammar

Type-2 grammars generate context-free languages. The productions must be of the form A → γ, where A ∈ N (a non-terminal) and γ ∈ (T ∪ N)* (a string of terminals and non-terminals). The languages generated by these grammars are recognized by non-deterministic pushdown automata.

Example:
S → Xa
X → a
X → aX
X → abc
X → ε
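The Type-3 example above can be run directly: the standard construction turns a right-linear grammar into a finite automaton with one state per non-terminal plus a final state. A minimal sketch follows; the state names and test strings are illustrative assumptions.

    # X -> epsilon | a | aY ; Y -> b. One NFA state per non-terminal plus a final F.
    TRANSITIONS = {
        ("X", "a"): {"Y", "F"},   # X -> aY gives X --a--> Y; X -> a gives X --a--> F
        ("Y", "b"): {"F"},        # Y -> b gives Y --b--> F
    }
    ACCEPTING = {"X", "F"}        # X accepts because of X -> epsilon

    def accepts(word):
        states = {"X"}            # start in the state for the start symbol
        for ch in word:
            states = set().union(*(TRANSITIONS.get((s, ch), set()) for s in states))
        return bool(states & ACCEPTING)

    for w in ["", "a", "ab", "b", "aab"]:
        print(repr(w), accepts(w))   # only "", "a", "ab" are accepted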


Type-1 Grammar

Type-1 grammars generate context-sensitive languages. The productions must be of the form αAβ → αγβ, where A ∈ N (a non-terminal) and α, β, γ ∈ (T ∪ N)* (strings of terminals and non-terminals). The strings α and β may be empty, but γ must be non-empty. The rule S → ε is allowed if S does not appear on the right side of any rule. The languages generated by these grammars are recognized by linear bounded automata.

Example:
AB → AbBc
A → bcA
B → b

Type-0 Grammar

Type-0 grammars generate recursively enumerable languages. The productions have no restrictions: this class consists of all phrase-structure grammars, i.e., all formal grammars. The productions can be of the form α → β, where α is a string of terminals and non-terminals containing at least one non-terminal (α cannot be null) and β is any string of terminals and non-terminals. The languages they generate are recognized by a Turing machine.

Example:
S → ACaB
Bc → acB
CB → DB
aD → Db

Ans. 8 If a context-free grammar G has more than one derivation tree for some string w ∈ L(G), it is called an ambiguous grammar: there exist multiple right-most or left-most derivations for some string generated from that grammar.

Problem: Check whether the grammar G with production rules X → X+X | X*X | X | a is ambiguous or not.

Solution: Let us find the derivation trees for the string "a+a*a". It has two leftmost derivations.

Derivation 1: X → X+X → a+X → a+X*X → a+a*X → a+a*a

(Parse tree 1 is not reproduced here.)


Derivation 2: X → X*X → X+X*X → a+X*X → a+a*X → a+a*a

(Parse tree 2 is not reproduced here.)

Since there are two parse trees for a single string "a+a*a", the grammar G is ambiguous.
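The two derivations can be checked mechanically. The sketch below (not part of the assignment) encodes the two parse trees as nested tuples and confirms that they yield the same string while being different trees:

    tree1 = ("+", "a", ("*", "a", "a"))   # X -> X+X, then the right X -> X*X
    tree2 = ("*", ("+", "a", "a"), "a")   # X -> X*X, then the left X -> X+X

    def fringe(t):
        """Concatenate the terminals of a tuple-encoded parse tree, left to right."""
        if isinstance(t, str):
            return t
        op, left, right = t
        return fringe(left) + op + fringe(right)

    print(fringe(tree1), fringe(tree2))   # both print a+a*a
    print(tree1 == tree2)                 # False: two distinct trees, so G is ambiguous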

Ans. 9 If L1 and L2 are context-free languages, then each of them has a context-free grammar; call the grammars G1 and G2. Our proof requires that the grammars have no non-terminals in common, so we subscript all of G1's non-terminals with a 1 and all of G2's non-terminals with a 2. Now we combine the two grammars into one grammar that will generate the union of the two languages. To do this, we add one new non-terminal, S, and two new productions:

S → S1 | S2

S is the starting non-terminal for the new union grammar and can be replaced either by the starting non-terminal of G1 or by that of G2, thereby generating either a string from L1 or a string from L2. Since the non-terminals of the two original grammars are completely distinct, once we begin using one of the original grammars we must complete the derivation using only rules from that grammar. Note that there is no need for the alphabets of the two languages to be the same. Therefore, if L1 and L2 are context-free languages, then L1 ∪ L2 is also a context-free language.
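The construction in this proof is mechanical enough to write down. A minimal sketch, assuming grammars are encoded as dicts mapping each non-terminal to a list of productions (an illustrative encoding, not from the text):

    def union_grammar(g1, start1, g2, start2):
        """Rename non-terminals apart, then add S -> S_1 | S_2."""
        def rename(g, tag):
            nts = set(g)
            return {
                nt + tag: [[sym + tag if sym in nts else sym for sym in prod]
                           for prod in prods]
                for nt, prods in g.items()
            }
        g = {**rename(g1, "_1"), **rename(g2, "_2")}
        g["S"] = [[start1 + "_1"], [start2 + "_2"]]   # the two new productions
        return g

    # G1 generates a^n b^n, G2 generates c^n d^n.
    g1 = {"S": [["a", "S", "b"], []]}     # [] is the empty production S -> epsilon
    g2 = {"S": [["c", "S", "d"], []]}
    print(union_grammar(g1, "S", g2, "S"))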

Ans. 10 A pushdown automaton is a way to implement a context-free grammar, similar to the way we design a DFA for a regular grammar. A DFA can remember only a finite amount of information, but a PDA can remember an infinite amount of information. Basically, a pushdown automaton is: "finite state machine" + "a stack".

A pushdown automaton has three components:
• an input tape,
• a control unit, and
• a stack with infinite size.

The stack head scans the top symbol of the stack. A stack does two operations:
• Push − a new symbol is added at the top.
• Pop − the top symbol is read and removed.

A PDA may or may not read an input symbol, but it has to read the top of the stack in every transition.


A PDA can be formally described as a 7-tuple (Q, Σ, S, δ, q0, I, F), where:
• Q is the finite set of states
• Σ is the input alphabet
• S is the set of stack symbols
• δ is the transition function: Q × (Σ ∪ {ε}) × S → Q × S*
• q0 is the initial state (q0 ∈ Q)
• I is the initial stack-top symbol (I ∈ S)
• F is the set of accepting states (F ⊆ Q)

A transition in a PDA from a state q1 to a state q2 labeled a, b → c means: on reading input a with b on top of the stack, pop b, push c, and move to q2. (The transition diagram in the original is not reproduced here.)
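As an illustration of these definitions (not part of the assignment), here is a sketch of a PDA for the classic context-free language { a^n b^n : n ≥ 0 }, with the stack kept as a Python list; the state and symbol names are assumptions:

    def accepts(word):
        stack = ["Z"]                 # I = Z, the initial stack-top symbol
        state = "q_read_a"
        for ch in word:
            top = stack[-1]
            if state == "q_read_a" and ch == "a":
                stack.append("A")     # push one A per a
            elif state in ("q_read_a", "q_read_b") and ch == "b" and top == "A":
                stack.pop()           # pop one A per b
                state = "q_read_b"    # after the first b, only b's are allowed
            else:
                return False          # no transition defined: reject
        return stack == ["Z"]         # accept iff the stack is back to just Z

    for w in ["", "ab", "aabb", "aab", "abab"]:
        print(repr(w), accepts(w))    # True for "", "ab", "aabb"; False otherwise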

Ans. 11 A problem is said to be decidable if we can always construct a corresponding algorithm that answers the problem correctly. We can understand decidable problems intuitively through a simple example. Suppose we are asked to compute all the prime numbers in the range 1000 to 2000: we can easily devise an algorithm that enumerates all the prime numbers in this range. In terms of a Turing machine, a problem is decidable if there exists a corresponding Turing machine which halts on every input with an answer, yes or no. Such problems are also termed Turing decidable, since the Turing machine always halts on every input, accepting or rejecting it.

Semi-Decidable Problems − Semi-decidable problems are those for which a Turing machine halts on the inputs it accepts, but may either halt or loop forever on the inputs it rejects. Such problems are termed Turing recognisable.
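The prime-enumeration example is itself a small decidability argument: the algorithm below always halts, so the question is decidable. Trial division is an illustrative choice, not from the text.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:             # trial division up to sqrt(n): always terminates
            if n % d == 0:
                return False
            d += 1
        return True

    primes = [n for n in range(1000, 2001) if is_prime(n)]
    print(len(primes), primes[:5])    # 135 primes; the first few are 1009, 1013, ...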


Undecidable Problems − The problems for which we cannot construct an algorithm that answers the problem correctly in finite time are termed undecidable problems. These problems are not even partially decidable, and can lead a Turing machine to loop forever without ever answering. We can understand this intuitively by considering Fermat's Theorem, which states that no three positive integers a, b and c can satisfy the equation a^n + b^n = c^n for any n >= 2. If we feed this problem to a Turing machine that searches for a counterexample, the machine might run forever trying values of n, a, b and c, and we are never sure whether a counterexample exists; hence the text treats this search as an undecidable problem.

Ans. 12 A Turing machine (TM) is a mathematical model which consists of an infinite-length tape divided into cells, on which the input is given. A head reads the input tape, and a state register stores the state of the Turing machine. After reading an input symbol, the symbol is replaced with another symbol, the internal state is changed, and the head moves one cell to the right or left. If the TM reaches a final state, the input string is accepted; otherwise it is rejected.

Construct a Turing machine which will accept the set of strings over Σ = {a, b} beginning with an 'a' and ending with a 'b'. Though this set can be accepted by a finite state automaton, we give a TM accepting it.

M = (K, Σ, Γ, δ, q0, F), where
K = {q0, q1, q2, q3}
F = {q3}
Σ = {a, b}
Γ = {a, b, X, B}   (B denotes the blank symbol)

δ is defined as follows:
δ(q0, a) = (q1, X, R)
δ(q1, a) = (q1, X, R)
δ(q1, b) = (q2, X, R)
δ(q2, a) = (q1, X, R)
δ(q2, b) = (q2, X, R)
δ(q2, B) = (q3, halt)

Let us see how the machine accepts aab:
q0 aab ⊢ X q1 ab ⊢ XX q1 b ⊢ XXX q2 B ⊢ XXX q3 B (halt, accept).
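A small simulator (not from the assignment) makes it easy to check the transition table above; "B" again stands for the blank symbol, and the tuple encoding is an illustrative choice.

    DELTA = {
        ("q0", "a"): ("q1", "X", "R"),
        ("q1", "a"): ("q1", "X", "R"),
        ("q1", "b"): ("q2", "X", "R"),
        ("q2", "a"): ("q1", "X", "R"),
        ("q2", "b"): ("q2", "X", "R"),
        ("q2", "B"): ("q3", "halt", "halt"),
    }

    def run(word):
        tape = list(word) + ["B"]     # input followed by a blank
        state, head = "q0", 0
        while (state, tape[head]) in DELTA:
            state, write, move = DELTA[(state, tape[head])]
            if move == "halt":
                return state == "q3"  # q3 is the only accepting state
            tape[head] = write
            head += 1                 # this machine only moves right
        return False                  # no transition defined: reject

    for w in ["aab", "ab", "ba", "a", "abab"]:
        print(w, run(w))              # aab, ab, abab accepted; ba, a rejected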


Ans. 13 Asymptotic analysis of an algorithm refers to defining mathematical bounds on its run-time performance. Using asymptotic analysis, we can characterize the best-case, average-case, and worst-case behaviour of an algorithm. Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded to work in constant time; other than the input, all other factors are considered constant.

Asymptotic analysis refers to computing the running time of an operation in mathematical units of computation. For example, the running time of one operation may be computed as f(n) and that of another as g(n^2). This means the running time of the first operation grows linearly with n, while that of the second grows quadratically; when n is small, the two running times may be nearly the same.

Usually, the time required by an algorithm falls under three types:
• Best Case − Minimum time required for program execution.
• Average Case − Average time required for program execution.
• Worst Case − Maximum time required for program execution.

Asymptotic Notations

The following are the commonly used asymptotic notations for describing the running-time complexity of an algorithm:
• Ο notation
• Ω notation
• θ notation

Big Oh Notation, Ο

The notation Ο(f(n)) is the formal way to express an upper bound on an algorithm's running time. It captures the worst-case time complexity, i.e., the longest time the algorithm can possibly take to complete. For a function f(n),

Ο(f(n)) = { g(n) : there exist constants c > 0 and n0 such that g(n) ≤ c·f(n) for all n > n0 }

Omega Notation, Ω

The notation Ω(f(n)) is the formal way to express a lower bound on an algorithm's running time. It captures the best-case time complexity, i.e., the least time the algorithm can take to complete.


For a function f(n),

Ω(f(n)) = { g(n) : there exist constants c > 0 and n0 such that g(n) ≥ c·f(n) for all n > n0 }

Theta Notation, θ

The notation θ(f(n)) is the formal way to express both a lower and an upper bound on an algorithm's running time. It is represented as follows:

θ(f(n)) = { g(n) : there exist constants c1, c2 > 0 and n0 such that c1·f(n) ≤ g(n) ≤ c2·f(n) for all n > n0 }
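These definitions can be checked numerically for a concrete pair of functions. For instance, g(n) = 3n + 5 is in Ο(n), witnessed by c = 4 and n0 = 5 (an illustrative example, not from the text):

    def g(n):
        return 3 * n + 5

    c, n0 = 4, 5
    ok = all(g(n) <= c * n for n in range(n0 + 1, 10001))
    print(ok)   # True: g(n) <= 4n for every n > 5, since 3n + 5 <= 4n iff n >= 5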

Ans. 14 NP-Complete Problems (NPC)

Over the years many problems in NP have been proved to be in P (like primality testing). Still, there are many problems in NP not proved to be in P, i.e., the question remains whether P = NP (that is, whether all NP problems are actually P problems). NP-complete problems help in addressing this question. They are a subset of the NP problems, with the property that every other NP problem can be reduced to any of them in polynomial time; so, in terms of running time, they are the hardest problems in NP. If it can be shown that any NPC problem is in P, then all problems in NP will be in P (by the definition of NPC), and hence P = NP = NPC. All NPC problems are in NP (again, by the NPC definition). Examples of NPC problems include Boolean satisfiability (SAT), vertex cover, and the travelling-salesman decision problem.

NP-Hard Problems (NPH)

These problems need not have any bound on their running time. If any NPC problem is polynomial-time reducible to a problem X, then X belongs to the NP-hard class. Hence, all NP-complete problems are also NP-hard. In other words, if an NPH problem is nondeterministic polynomial-time solvable, it is an NPC problem. An example of an NP-hard problem that is not NP-complete is the Halting Problem.

NPC problems are thus the hardest problems in NP while being the simplest problems in NPH: NP ∩ NPH = NPC. (The class diagram in the original is not reproduced here.)
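Membership in NP can be made concrete: a certificate for a proposed solution must be checkable in polynomial time. Below is an illustrative certificate checker (not from the assignment) for SAT, the canonical NPC problem, with formulas encoded as lists of integer literals:

    def check(cnf, assignment):
        """cnf: list of clauses; each clause is a list of ints, -k meaning NOT x_k."""
        return all(
            any((lit > 0) == assignment[abs(lit)] for lit in clause)
            for clause in cnf
        )

    # (x1 OR NOT x2) AND (x2 OR x3)
    cnf = [[1, -2], [2, 3]]
    print(check(cnf, {1: True, 2: False, 3: True}))    # True: a satisfying certificate
    print(check(cnf, {1: False, 2: False, 3: False}))  # False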

Ans. 15 T(n) = 2T(n/2) + sqrt(n), where T(1) = 1 and n = 2^k.

Expanding the recurrence:
T(n) = 2[2T(n/4) + sqrt(n/2)] + sqrt(n) = 2^2 T(n/4) + 2 sqrt(n/2) + sqrt(n)
T(n) = 2^2[2T(n/8) + sqrt(n/4)] + 2 sqrt(n/2) + sqrt(n) = 2^3 T(n/8) + 2^2 sqrt(n/4) + 2 sqrt(n/2) + sqrt(n)

In general,
T(n) = 2^k T(1) + 2^(k-1) sqrt(2^1) + 2^(k-2) sqrt(2^2) + ... + 2^1 sqrt(2^(k-1)) + sqrt(2^k)
     = 2^k + sum over i = 1..k of 2^(k-i) · 2^(i/2)
     = 2^k · (1 + sum over i = 1..k of (1/sqrt(2))^i).

Since the geometric series sum of (1/sqrt(2))^i converges to a constant, T(n) = Θ(2^k) = Θ(n).
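The closed form can be sanity-checked numerically: if T(n) = Θ(n), the ratio T(n)/n should approach a constant (here 2 + sqrt(2) ≈ 3.414). A quick sketch, assuming the recurrence exactly as stated:

    from math import sqrt

    def T(n):
        if n == 1:
            return 1
        return 2 * T(n // 2) + sqrt(n)

    for k in (1, 6, 11, 16):
        n = 2 ** k
        print(n, T(n) / n)   # the ratio settles near 2 + sqrt(2), about 3.414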


Ans. 16


When quicksort always has the most unbalanced partitions possible, the original call takes cn time for some constant c, the recursive call on n−1 elements takes c(n−1) time, the recursive call on n−2 elements takes c(n−2) time, and so on. Adding up the costs of all the levels gives c(n + (n−1) + (n−2) + ... + 2 + 1) = c · n(n+1)/2, so the worst-case running time of quicksort is Θ(n^2).
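This worst case is easy to observe experimentally. The sketch below (an illustration, not from the assignment) uses a last-element pivot, so an already-sorted input produces maximally unbalanced partitions and about n(n−1)/2 comparisons:

    import sys

    def quicksort(a, lo, hi, counter):
        if lo >= hi:
            return
        pivot, i = a[hi], lo
        for j in range(lo, hi):            # partition around the last element
            counter[0] += 1                # one comparison per element
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        quicksort(a, lo, i - 1, counter)
        quicksort(a, i + 1, hi, counter)

    sys.setrecursionlimit(10000)           # the worst case recurses n levels deep
    for n in (100, 200, 400):
        counter = [0]
        quicksort(list(range(n)), 0, n - 1, counter)   # sorted input: worst case
        print(n, counter[0])   # 4950, 19900, 79800 -- exactly n(n-1)/2 comparisons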

