
Top Down Parsing, Predictive Parsing




  1. GROUP MEMBERS: HIRA SHAHZAD, JAVERIA KHALID, TANZEELA HUSSAIN. PRESENTED TO: MS. SANIA BATOOL
  2. COMPILER CONSTRUCTION
  3. PARSING
     • The term parsing comes from the Latin pars, meaning "part".
     • Parsing is the process that constructs a syntactic structure (i.e. a parse tree) from the stream of tokens.
     • Parsing is the process of determining whether a string of tokens can be generated by a grammar.
     • For any context-free grammar there is a parser that takes at most O(n³) time to parse a string of n tokens.
     • Parsing a string with a CFG means finding a derivation of the string consistent with the grammar; the derivation gives us a PARSE TREE.
  4. PARSING TECHNIQUES
     • Syntax analyzers follow production rules defined by means of a context-free grammar. The way the production rules are applied (the derivation) divides parsing into two types: top-down parsing and bottom-up parsing.
  5. TYPES OF PARSING
     • Top-down parsing
       – Recursive descent parsing
       – Predictive parsing
         – Recursive predictive parsing
         – Non-recursive predictive parsing
     • Bottom-up parsing
       – Shift-reduce parsing
       – Operator precedence parsing
       – SLR parsing
       – Canonical LR parsing
       – LALR parsing
  6. TOP-DOWN PARSING
     • Top-down parsers build parse trees from the top (root) to the bottom (leaves).
     • A top-down parse corresponds to a preorder traversal of the parse tree.
     • A leftmost derivation is applied at each derivation step.
     • Top-down parsing may need backtracking.
     • Top-down parsing is further sub-divided into the following categories:
       – Predictive parsing
       – Recursive descent parsing
  7. EXAMPLE: Consider the following grammar:
       <program> → begin <stmts> end $
       <stmts> → SimpleStmt ; <stmts>
       <stmts> → begin <stmts> end ; <stmts>
       <stmts> → ε
     Input: begin SimpleStmt ; SimpleStmt ; end $
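The grammar on slide 7 is simple enough to recognize with one small procedure per nonterminal. The following is a minimal sketch, assuming a pre-tokenized input; the class and function names (Parser, match, stmts, and so on) are our own illustration, not taken from the slides.

```python
# Minimal recursive-descent recognizer for the grammar on slide 7.
#   <program> -> begin <stmts> end $
#   <stmts>   -> SimpleStmt ; <stmts> | begin <stmts> end ; <stmts> | epsilon

class ParseError(Exception):
    pass

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def match(self, expected):
        if self.peek() == expected:
            self.pos += 1
        else:
            raise ParseError(f"expected {expected!r}, found {self.peek()!r}")

    def program(self):                     # <program> -> begin <stmts> end $
        self.match("begin")
        self.stmts()
        self.match("end")
        self.match("$")

    def stmts(self):
        if self.peek() == "SimpleStmt":    # <stmts> -> SimpleStmt ; <stmts>
            self.match("SimpleStmt")
            self.match(";")
            self.stmts()
        elif self.peek() == "begin":       # <stmts> -> begin <stmts> end ; <stmts>
            self.match("begin")
            self.stmts()
            self.match("end")
            self.match(";")
            self.stmts()
        # otherwise <stmts> -> epsilon: produce nothing

tokens = ["begin", "SimpleStmt", ";", "SimpleStmt", ";", "end", "$"]
Parser(tokens).program()                   # raises ParseError on bad input
print("accepted")
```

Note that each alternative of <stmts> starts with a different token, so the parser can choose a production by looking at one input symbol only; no backtracking is needed for this grammar.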
  8. RECURSIVE DESCENT PARSER
     • This parsing technique recursively parses the input to build a parse tree.
     • A recursive-descent parser consists of several small procedures, one for each nonterminal in the grammar.
     • Recursive descent parsing (in its general form) involves backtracking.
     • Example, for the input string "read":
       S → rXd
       X → oa
       X → ea
     A sketch of a backtracking recognizer for this grammar follows below.
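To see where the backtracking comes in, here is a minimal sketch for the grammar S → rXd, X → oa | ea. The generator-based helpers (match_S, match_X, accepts) are our own illustration, not code from the presentation. On the input read, the alternative X → oa fails at e, the parser backs up, and X → ea succeeds.

```python
# Backtracking recursive-descent recognizer for the grammar on slide 8.

def match_S(s, i):
    # S -> r X d
    if i < len(s) and s[i] == "r":
        for j in match_X(s, i + 1):          # try every way X can match
            if j < len(s) and s[j] == "d":
                yield j + 1                  # position after consuming S
    # no alternative matched: yield nothing

def match_X(s, i):
    # X -> o a   (first alternative; if it fails the caller backtracks here)
    if s[i:i + 2] == "oa":
        yield i + 2
    # X -> e a   (second alternative, tried on backtracking)
    if s[i:i + 2] == "ea":
        yield i + 2

def accepts(s):
    # the string is accepted if some parse of S consumes all of it
    return any(end == len(s) for end in match_S(s, 0))

print(accepts("read"))   # True  (X -> oa fails on 'e', then X -> ea succeeds)
print(accepts("road"))   # True  (X -> oa succeeds directly)
print(accepts("red"))    # False
```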
  9. PREDICTIVE PARSING
     • A predictive parser can predict which production is to be used to expand the current nonterminal.
     • The predictive parser does not suffer from backtracking.
     • The predictive parser uses a look-ahead pointer, which points to the next input symbol.
     • To make the parser backtracking-free, predictive parsing puts some constraints on the grammar.
     • It accepts only the class of grammars known as LL(k) grammars.
     • With one symbol of look-ahead, the predictive parser is also known as an LL(1) parser.
  10. LL(1) PARSER
     • An LL(1) parser accepts an LL(1) grammar.
     • LL(1) grammars are a subset of the context-free grammars, restricted so that parsing stays simple.
     • In LL(1), the first L means the input is scanned from left to right, the second L stands for leftmost derivation, and the 1 means one input symbol of look-ahead.
  11. CONSTRUCTING A PREDICTIVE PARSER
     Following are the steps for constructing a predictive parser:
     o Removing unreachable productions
     o Removing ambiguity from the grammar
     o Eliminating left recursion
     o Left factoring the grammar
     o Computing FIRST and FOLLOW sets (see the sketch after this list)
     o Constructing a parse table
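The slides list FIRST and FOLLOW as a step but do not show the computation, so the following is a standard fixed-point sketch under our own representation (a dict mapping each nonterminal to its productions, with "eps" standing for the empty string); the grammar used is the expression grammar that appears on the later slides.

```python
# Hedged sketch: FIRST and FOLLOW computation for the LL(1) expression grammar.

GRAMMAR = {                       # nonterminal -> list of productions
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], ["eps"]],
    "T":  [["F", "T'"]],
    "T'": [["*", "F", "T'"], ["eps"]],
    "F":  [["(", "E", ")"], ["id"]],
}
START = "E"

def compute_first(grammar):
    first = {A: set() for A in grammar}
    changed = True
    while changed:                            # iterate until a fixed point
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                for X in prod:
                    if X not in grammar:      # terminal (or "eps"): add it and stop
                        if X not in first[A]:
                            first[A].add(X)
                            changed = True
                        break
                    before = len(first[A])
                    first[A] |= first[X] - {"eps"}
                    changed |= len(first[A]) != before
                    if "eps" not in first[X]: # X cannot vanish, stop here
                        break
                else:                         # every symbol of the production can vanish
                    if "eps" not in first[A]:
                        first[A].add("eps")
                        changed = True
    return first

def first_of_seq(seq, first, grammar):
    out = set()
    for X in seq:
        f = first[X] if X in grammar else {X}
        out |= f - {"eps"}
        if "eps" not in f:
            return out
    out.add("eps")                            # the whole sequence can vanish
    return out

def compute_follow(grammar, first, start):
    follow = {A: set() for A in grammar}
    follow[start].add("$")                    # the end marker follows the start symbol
    changed = True
    while changed:
        changed = False
        for A, prods in grammar.items():
            for prod in prods:
                for i, X in enumerate(prod):
                    if X not in grammar:
                        continue
                    rest = first_of_seq(prod[i + 1:], first, grammar)
                    before = len(follow[X])
                    follow[X] |= rest - {"eps"}
                    if "eps" in rest:         # X can be last: FOLLOW(A) flows into FOLLOW(X)
                        follow[X] |= follow[A]
                    changed |= len(follow[X]) != before
    return follow

first = compute_first(GRAMMAR)
follow = compute_follow(GRAMMAR, first, START)
print(sorted(first["E"]))    # ['(', 'id']
print(sorted(follow["E'"]))  # ['$', ')']
```

These two sets are exactly what the parse-table construction step consumes: a production A → α is placed in M[A, a] for each a in FIRST(α), and additionally in M[A, b] for each b in FOLLOW(A) when α can derive the empty string.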
  12. REMOVING UNREACHABLE PRODUCTIONS
     An unreachable production is one that cannot possibly appear in a parse tree rooted at the start symbol. For example, in the following grammar:
       S → A   (1)
       A → a   (2)
       B → b   (3)
     Production (3) is unreachable because the nonterminal B does not appear on the right-hand side of any production. A nonterminal can be unreachable either because it does not appear on the right-hand side of any production reachable from the start symbol, or because it appears only on the right-hand side of productions of unreachable nonterminals.
  13. ALGORITHM TO REMOVE UNREACHABLE PRODUCTIONS
     Data structures:
     • A stack
     • A list of reachable nonterminals
     Method (initially both the stack and the list are empty):
     Step 1: Add the start symbol to the list of reachable nonterminals and push it onto the stack.
     Step 2: While the stack is not empty:
               P = pop one item off the stack
               for each nonterminal X on the right-hand side of a production of P:
                 if X is not in the list of reachable nonterminals:
                   push X onto the stack and add X to the list of reachable nonterminals
     Step 3: Remove from the grammar all productions whose left-hand side is not in the list of reachable nonterminals.
     A Python sketch of this algorithm follows below.
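A minimal Python sketch of this worklist algorithm. The grammar encoding (a dict from nonterminal to its right-hand-side strings, with uppercase letters as nonterminals and "" as the empty production) is our own choice, while the steps follow the slide. Run on the grammar of slide 14, it keeps exactly S, A and B.

```python
# Worklist algorithm from slide 13: collect reachable nonterminals, then drop the rest.

def remove_unreachable(grammar, start):
    reachable = {start}            # the "list" of reachable nonterminals
    stack = [start]                # the stack of nonterminals still to process
    while stack:                   # Step 2
        p = stack.pop()
        for rhs in grammar.get(p, []):
            for x in rhs:
                if x in grammar and x not in reachable:   # x is an unseen nonterminal
                    reachable.add(x)
                    stack.append(x)
    # Step 3: keep only productions whose left-hand side is reachable
    return {a: rhss for a, rhss in grammar.items() if a in reachable}

# Grammar from slide 14 ("" stands for the empty production).
g = {
    "S": ["aB", "bA"],
    "A": ["a", "bAA", "aS"],
    "B": ["b", "aBB", "bS"],
    "C": ["aD", "bS", ""],
    "D": ["bD", ""],
}
print(remove_unreachable(g, "S"))
# {'S': ['aB', 'bA'], 'A': ['a', 'bAA', 'aS'], 'B': ['b', 'aBB', 'bS']}
```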
  14. Grammar:
       S → aB | bA
       A → a | bAA | aS
       B → b | aBB | bS
       C → aD | bS | ε
       D → bD | ε
     After removing unreachable productions we have:
       S → aB | bA
       A → a | bAA | aS
       B → b | aBB | bS
  15. ELIMINATING AMBIGUITY
     A grammar that produces more than one parse tree for some sentence (input string) is said to be ambiguous. Ambiguity can be removed only by constructing a new grammar.
     Note:
     • For a left-associative operator, replace the right nonterminal.
     • For a right-associative operator, replace the left nonterminal.
     • If a grammar contains more than one operator, ambiguity is removed first from the production involving the operator with the lowest precedence.
  16. EXAMPLE:
     S → S+S | S-S | S*S | S/S | NUM
     The operators with the lowest precedence are dealt with first:
       S  → S+S' | S-S' | S'
       S' → S | S*S | S/S | NUM
     Replace S by S' on the right-hand side:
       S' → S' | S'*S' | S'/S' | NUM
     After eliminating the redundant production S' → S', we get:
       S' → S'*S' | S'/S' | NUM
     Then:
       S'  → S'*S'' | S'/S'' | S''
       S'' → S' | NUM
     Replace S' by S'' on the right-hand side:
       S'' → S'' | NUM
     After eliminating the redundant production S'' → S'', we get:
       S'' → NUM
  17. Unambiguous Grammar:
       S   → S+S' | S-S' | S'
       S'  → S'*S'' | S'/S'' | S''
       S'' → NUM
  18. TRANSITION DIAGRAMS
     • Transition diagrams can describe predictive parsers, just as they can describe lexical analyzers, but the diagrams are slightly different.
       – For a predictive parser: there is one diagram for every nonterminal, and the edges are labeled with terminals and nonterminals.
       – For a lexical analyzer: there is one diagram for the entire language construct, and the edges are labeled only with terminals.
  19. CONSTRUCTION
     1. Eliminate left recursion from G.
     2. Left factor G.
     3. For each nonterminal A:
        – Create an initial and a final (return) state.
        – For each production A → X1 X2 … Xn, create a path from the initial to the final state with edges labeled X1, X2, …, Xn.
     4. Simplify the transition diagrams, if possible.
  20. EXAMPLE OF TRANSITION DIAGRAMS
     • An expression grammar with left recursion and ambiguity removed:
         E  → T E'
         E' → + T E' | ε
         T  → F T'
         T' → * F T' | ε
         F  → ( E ) | id
     • The corresponding transition diagrams are shown on the slide (one diagram per nonterminal).
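Each transition diagram corresponds naturally to one procedure, so an equivalent recursive predictive parser for this grammar can be sketched as follows. The tokenizer and the names (ExprParser, look, match) are our own assumptions, not part of the slides.

```python
# Recursive predictive parser: one procedure per nonterminal (and per diagram).
import re

class ParseError(Exception):
    pass

def tokenize(text):
    # identifiers become the token 'id'; operators and parentheses pass through; $ ends the input
    tokens = re.findall(r"[A-Za-z_]\w*|[+*()]", text)
    return ["id" if t[0].isalpha() else t for t in tokens] + ["$"]

class ExprParser:
    def __init__(self, tokens):
        self.tokens, self.i = tokens, 0

    def look(self):
        return self.tokens[self.i]

    def match(self, t):
        if self.look() != t:
            raise ParseError(f"expected {t!r}, got {self.look()!r}")
        self.i += 1

    def E(self):            # E -> T E'
        self.T(); self.Eprime()

    def Eprime(self):       # E' -> + T E' | ε
        if self.look() == "+":
            self.match("+"); self.T(); self.Eprime()
        # ε: do nothing

    def T(self):            # T -> F T'
        self.F(); self.Tprime()

    def Tprime(self):       # T' -> * F T' | ε
        if self.look() == "*":
            self.match("*"); self.F(); self.Tprime()

    def F(self):            # F -> ( E ) | id
        if self.look() == "(":
            self.match("("); self.E(); self.match(")")
        else:
            self.match("id")

p = ExprParser(tokenize("id + id * id"))
p.E(); p.match("$")
print("accepted")
```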
  21. TYPES OF PREDICTIVE PARSING
     • Following are the two types of predictive parsing:
       – Recursive predictive parsing
       – Non-recursive predictive parsing
     Here, we discuss the non-recursive predictive parsing technique in detail.
  22. NON-RECURSIVE PREDICTIVE PARSING
     • A non-recursive predictive parser is an efficient way of implementing predictive parsing: instead of implicit recursive calls, it maintains the stack of activation records explicitly.
  23. CONT..
     • The predictive parser has an input, a stack, a parsing table, and an output.
     • The input contains the string to be parsed, followed by $, the right end marker.
     • The stack contains a sequence of grammar symbols on top of $, the bottom-of-stack marker.
     • Initially the stack contains the start symbol of the grammar on top of $.
     • The parsing table is a two-dimensional array M[A, a], where A is a nonterminal and a is a terminal or the symbol $.
     • The parser is controlled by a program that behaves as follows: the program determines X, the symbol on top of the stack, and a, the current input symbol. These two symbols determine the action of the parser.
  24. CONT..
     There are three possibilities:
     o If X = a = $, the parser halts and announces successful completion of parsing.
     o If X = a ≠ $, the parser pops X off the stack and advances the input pointer to the next input symbol.
     o If X is a nonterminal, the program consults entry M[X, a] of the parsing table M. This entry will be either an X-production of the grammar or an error entry.
       – If M[X, a] = {X → UVW}, the parser replaces X on top of the stack by WVU (with U on top).
       – If M[X, a] = error, the parser calls an error recovery routine.
  25. PREDICTIVE PARSING ALGORITHM
     repeat begin
       let X be the top stack symbol and a the next input symbol;
       if X is a terminal or $ then
         if X = a then
           pop X from the stack and remove a from the input
         else ERROR()
       else /* X is a nonterminal */
         if M[X, a] = X → Y1 Y2 … Yk then begin
           pop X from the stack;
           push Yk, Yk-1, … , Y1 onto the stack, with Y1 on top
         end
         else ERROR()
     end
     until X = $ /* stack has emptied */
  26. EXAMPLE:
     • Use the table-driven predictive parser to parse id + id * id.
     • Assume the parsing table has been constructed (from the FIRST and FOLLOW sets).
     • The initial stack is $E; the initial input is id + id * id $.
     Grammar:
       E  → T E'
       E' → + T E' | ε
       T  → F T'
       T' → * F T' | ε
       F  → ( E ) | id
     A sketch of the table-driven parse appears below.
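A minimal sketch of the algorithm on slide 25 applied to this example. The slide only says the parsing table is assumed, so the table below is the usual textbook LL(1) table for this grammar, derived from its FIRST and FOLLOW sets; the function and variable names are our own.

```python
# Table-driven LL(1) parse of "id + id * id" for the expression grammar above.

TABLE = {   # M[nonterminal, lookahead] -> right-hand side to push ([] means ε)
    ("E",  "id"): ["T", "E'"],   ("E",  "("): ["T", "E'"],
    ("E'", "+"):  ["+", "T", "E'"],
    ("E'", ")"):  [],            ("E'", "$"): [],
    ("T",  "id"): ["F", "T'"],   ("T",  "("): ["F", "T'"],
    ("T'", "*"):  ["*", "F", "T'"],
    ("T'", "+"):  [],            ("T'", ")"): [],  ("T'", "$"): [],
    ("F",  "id"): ["id"],        ("F",  "("): ["(", "E", ")"],
}
NONTERMINALS = {"E", "E'", "T", "T'", "F"}

def ll1_parse(tokens):
    stack = ["$", "E"]                    # bottom-of-stack marker, then the start symbol
    tokens = tokens + ["$"]
    i = 0
    while True:
        X, a = stack[-1], tokens[i]
        print(f"stack: {' '.join(stack):<15} input: {' '.join(tokens[i:])}")
        if X == "$" and a == "$":
            return True                   # success: stack and input both exhausted
        if X not in NONTERMINALS:         # terminal on top: it must match the input
            if X != a:
                raise SyntaxError(f"expected {X!r}, got {a!r}")
            stack.pop()
            i += 1
        else:
            rhs = TABLE.get((X, a))
            if rhs is None:
                raise SyntaxError(f"no table entry for M[{X}, {a}]")
            stack.pop()
            stack.extend(reversed(rhs))   # push the production right-to-left, first symbol on top

print(ll1_parse(["id", "+", "id", "*", "id"]))
```

The printed stack/input pairs reproduce the usual parse trace for this example, ending with both the stack and the input reduced to $.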
  27. ERROR RECOVERY IN PREDICTIVE PARSING
     An error is detected during predictive parsing when the terminal on top of the stack does not match the input symbol, or when nonterminal A is on top of the stack, a is the next input symbol, and the parsing table entry M[A, a] is empty.
     The following two error recovery strategies are used in a predictive parser.
     Panic-mode error recovery:
     • It is based on the idea of skipping symbols on the input until a token in a selected set of synchronizing tokens appears.
     • Its effectiveness depends on the choice of the synchronizing set. The sets should be chosen so that the parser recovers quickly from errors that are likely to occur in practice.
     A small panic-mode sketch follows below.
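A minimal sketch of panic-mode recovery bolted onto the table-driven parser above, assuming FOLLOW sets as the synchronizing sets (a common textbook choice); the table, the sets, and the names are ours, not from the slides.

```python
# Panic-mode recovery: on an empty table entry, skip input until a synchronizing
# token for the offending nonterminal appears, then pop that nonterminal and go on.

TABLE = {
    ("E", "id"): ["T", "E'"], ("E", "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T", "id"): ["F", "T'"], ("T", "("): ["F", "T'"],
    ("T'", "*"): ["*", "F", "T'"], ("T'", "+"): [], ("T'", ")"): [], ("T'", "$"): [],
    ("F", "id"): ["id"], ("F", "("): ["(", "E", ")"],
}
SYNC = {  # synchronizing tokens per nonterminal (here simply the FOLLOW sets)
    "E": {")", "$"}, "E'": {")", "$"},
    "T": {"+", ")", "$"}, "T'": {"+", ")", "$"},
    "F": {"*", "+", ")", "$"},
}

def parse_with_recovery(tokens):
    stack, tokens, i, errors = ["$", "E"], tokens + ["$"], 0, 0
    while stack[-1] != "$":
        X, a = stack[-1], tokens[i]
        if X not in SYNC:                     # X is a terminal
            if X == a:
                stack.pop()
                i += 1
            else:                             # mismatched terminal: report it, pop it, continue
                print(f"error: missing {X!r} before {a!r}")
                stack.pop()
                errors += 1
        elif (X, a) in TABLE:                 # normal expansion
            stack.pop()
            stack.extend(reversed(TABLE[(X, a)]))
        else:                                 # empty table entry: panic mode
            print(f"error: unexpected {a!r} while expanding {X}")
            errors += 1
            while tokens[i] not in SYNC[X]:   # skip input up to a synchronizing token
                i += 1
            stack.pop()                       # give up on X and continue parsing
    return errors == 0

print(parse_with_recovery(["id", "+", "+", "id"]))   # reports one error, returns False
```

Because $ belongs to every synchronizing set, the skipping loop always terminates, which is exactly the "no infinite loop" requirement mentioned on the next slide.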
  28. CONT..
     Phrase-level recovery:
     • It is implemented by filling in the blank entries of the predictive parsing table with pointers to error routines.
     • These routines may change, insert, or delete symbols on the input and issue appropriate error messages.
     • They may also pop symbols from the stack.
     • In any event, we must be sure that there is no possibility of an infinite loop.
     • Checking that any recovery action eventually results in an input symbol being consumed is a good way to protect against such loops.
  29. DIFFERENCE BETWEEN PREDICTIVE PARSER AND RECURSIVE DESCENT PARSER
     Predictive parsing:
     • It is a non-recursive (table-driven) predictive parser, also known as an LL(1) parser.
     • No backtracking is needed.
     • It needs a special form of grammar (LL(1) grammars) and is widely used.
     • It is an efficient technique.
     • Predictive parsers operate in linear time.
     Recursive-descent parsing:
     • It has a set of recursive procedures to process the input.
     • Backtracking is needed.
     • It is a general parsing technique, but not widely used.
     • It is not an efficient technique.
     • It operates in exponential time (in the worst case, with backtracking).
  30. BOTTOM-UP PARSING
     Bottom-up parsing starts with the input symbols and tries to construct the parse tree up to the start symbol.
     Example input string: a + b * c
     Production rules:
       S → E
       E → E + T
       E → E * T
       E → T
       T → id
  31. EXAMPLE
     Let us start bottom-up parsing of a + b * c. Read the input and check whether any production matches, reducing step by step:
       a + b * c
       T + b * c
       E + b * c
       E + T * c
       E * c
       E * T
       E
       S
     A toy shift-reduce sketch that reproduces this trace follows below.
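As a rough illustration of the reduction sequence above, here is a toy shift-reduce recognizer that always reduces greedily and prefers longer right-hand sides, treating a, b and c as instances of id. Real bottom-up parsers consult an LR table to decide between shifting and reducing, so this is only a sketch that happens to reproduce the slide's trace for this grammar.

```python
# Toy shift-reduce recognizer for the grammar on slide 30 (a, b, c as id tokens).

RULES = [                        # (right-hand side, left-hand side), longer RHS first
    (["E", "+", "T"], "E"),
    (["E", "*", "T"], "E"),
    (["T"], "E"),
    (["a"], "T"), (["b"], "T"), (["c"], "T"),
]

def shift_reduce(tokens):
    stack, rest = [], list(tokens)
    print(" ".join(rest))                 # a + b * c
    while True:
        reduced = True
        while reduced:                    # reduce while some RHS matches the stack top
            reduced = False
            for rhs, lhs in RULES:
                if stack and stack[-len(rhs):] == rhs:
                    stack[-len(rhs):] = [lhs]
                    print(" ".join(stack + rest))
                    reduced = True
                    break
        if not rest:
            break
        stack.append(rest.pop(0))         # shift the next input symbol
    return stack == ["E"]                 # the final reduction S -> E accepts

print(shift_reduce(list("a+b*c")))        # prints the reduction sequence, then True
```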
