BYTE# literals not properly handled in CASE statement #34

Closed · engineerjoe440 opened this issue May 11, 2022 · 3 comments
Labels: bug, grammar

@engineerjoe440 (Collaborator)

Continuing to work through testing this project against a number of libraries I contribute to, I discovered a failure related to CASE statements that use multiple subranges:

Pytest output:
source_filename = '/home/joestan/Documents/blark/blark/tests/source/commas_in_case.st'

    def test_parsing_source(source_filename):
        """Test plain source 61131 files."""
        try:
            with open(source_filename, "r", encoding="utf-8") as src:
                content = src.read()
>           result = parse_source_code(content)

blark/tests/test_parsing.py:49: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
blark/parse.py:109: in parse_source_code
    tree = parser.parse(processed_source)
../../.local/lib/python3.8/site-packages/lark/lark.py:625: in parse
    return self.parser.parse(text, start=start, on_error=on_error)
../../.local/lib/python3.8/site-packages/lark/parser_frontends.py:96: in parse
    return self.parser.parse(stream, chosen_start, **kw)
../../.local/lib/python3.8/site-packages/lark/parsers/earley.py:266: in parse
    to_scan = self._parse(lexer, columns, to_scan, start_symbol)
../../.local/lib/python3.8/site-packages/lark/parsers/xearley.py:146: in _parse
    to_scan = scan(i, to_scan)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

i = 528
to_scan = {__xor_expression_star_33 ::= * LOGICAL_XOR and_expression (528), __equality_expression_star_36 ::= * COMPARE_OP add_e...LEVATED_BY unary_expression (528), __assignment_expression_star_30 ::= * LOGICAL_OR_ELSE or_else_expression (528), ...}

    def scan(i, to_scan):
        """The core Earley Scanner.
    
        This is a custom implementation of the scanner that uses the
        Lark lexer to match tokens. The scan list is built by the
        Earley predictor, based on the previously completed tokens.
        This ensures that at each phase of the parse we have a custom
        lexer context, allowing for more complex ambiguities."""
    
        node_cache = {}
    
        # 1) Loop the expectations and ask the lexer to match.
        # Since regexp is forward looking on the input stream, and we only
        # want to process tokens when we hit the point in the stream at which
        # they complete, we push all tokens into a buffer (delayed_matches), to
        # be held possibly for a later parse step when we reach the point in the
        # input stream at which they complete.
        for item in set(to_scan):
            m = match(item.expect, stream, i)
            if m:
                t = Token(item.expect.name, m.group(0), i, text_line, text_column)
                delayed_matches[m.end()].append( (item, i, t) )
    
                if self.complete_lex:
                    s = m.group(0)
                    for j in range(1, len(s)):
                        m = match(item.expect, s[:-j])
                        if m:
                            t = Token(item.expect.name, m.group(0), i, text_line, text_column)
                            delayed_matches[i+m.end()].append( (item, i, t) )
    
                # XXX The following 3 lines were commented out for causing a bug. See issue #768
                # # Remove any items that successfully matched in this pass from the to_scan buffer.
                # # This ensures we don't carry over tokens that already matched, if we're ignoring below.
                # to_scan.remove(item)
    
        # 3) Process any ignores. This is typically used for e.g. whitespace.
        # We carry over any unmatched items from the to_scan buffer to be matched again after
        # the ignore. This should allow us to use ignored symbols in non-terminals to implement
        # e.g. mandatory spacing.
        for x in self.ignore:
            m = match(x, stream, i)
            if m:
                # Carry over any items still in the scan buffer, to past the end of the ignored items.
                delayed_matches[m.end()].extend([(item, i, None) for item in to_scan ])
    
                # If we're ignoring up to the end of the file, # carry over the start symbol if it already completed.
                delayed_matches[m.end()].extend([(item, i, None) for item in columns[i] if item.is_complete and item.s == start_symbol])
    
        next_to_scan = set()
        next_set = set()
        columns.append(next_set)
        transitives.append({})
    
        ## 4) Process Tokens from delayed_matches.
        # This is the core of the Earley scanner. Create an SPPF node for each Token,
        # and create the symbol node in the SPPF tree. Advance the item that completed,
        # and add the resulting new item to either the Earley set (for processing by the
        # completer/predictor) or the to_scan buffer for the next parse step.
        for item, start, token in delayed_matches[i+1]:
            if token is not None:
                token.end_line = text_line
                token.end_column = text_column + 1
                token.end_pos = i + 1
    
                new_item = item.advance()
                label = (new_item.s, new_item.start, i)
                token_node = TokenNode(token, terminals[token.type])
                new_item.node = node_cache[label] if label in node_cache else node_cache.setdefault(label, SymbolNode(*label))
                new_item.node.add_family(new_item.s, item.rule, new_item.start, item.node, token_node)
            else:
                new_item = item
    
            if new_item.expect in self.TERMINALS:
                # add (B ::= Aai+1.B, h, y) to Q'
                next_to_scan.add(new_item)
            else:
                # add (B ::= Aa+1.B, h, y) to Ei+1
                next_set.add(new_item)
    
        del delayed_matches[i+1]    # No longer needed, so unburden memory
    
        if not next_set and not delayed_matches and not next_to_scan:
            considered_rules = list(sorted(to_scan, key=lambda key: key.rule.origin.name))
>           raise UnexpectedCharacters(stream, i, text_line, text_column, {item.expect.name for item in to_scan},
                                       set(to_scan), state=frozenset(i.s for i in to_scan),
                                       considered_rules=considered_rules
                                       )
E           lark.exceptions.UnexpectedCharacters: No terminal matches ',' in the current parser context, at line 17 col 29
E           
E               BYTE#9..BYTE#10, BYTE#13, BYTE#28..BYTE#126:
E                                       ^
E           Expected one of: 
E               * EQUALS_OP
E               * __ANON_7
E               * LOGICAL_XOR
E               * LOGICAL_OR
E               * ASSIGNMENT
E               * ADD_OPERATOR
E               * COMPARE_OP
E               * LOGICAL_AND_THEN
E               * ELEVATED_BY
E               * MULTIPLY_OPERATOR
E               * LOGICAL_AND
E               * LOGICAL_OR_ELSE

../../.local/lib/python3.8/site-packages/lark/parsers/xearley.py:119: UnexpectedCharacters

I've created a branch in my local fork containing a sample of this sort of logic, which is valid (at least in CODESYS) in a CASE statement (my sample fork is here). This seems like it should be supported, since both of the following options are supported.

[screenshot: the two supported CASE variants]
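For reference, a minimal example of the failing construct looks like the sketch below. The CASE line is taken verbatim from the error output above; the function block scaffolding and variable names are just illustrative:

    FUNCTION_BLOCK fb_CaseRepro
    VAR
        bChar      : BYTE;  (* selector under test *)
        xPrintable : BOOL;
    END_VAR

    CASE bChar OF
        (* ranges copied from the failing source line above *)
        BYTE#9..BYTE#10, BYTE#13, BYTE#28..BYTE#126:
            xPrintable := TRUE;
    ELSE
        xPrintable := FALSE;
    END_CASE
    END_FUNCTION_BLOCK

Feeding this through parse_source_code reproduces the UnexpectedCharacters error above.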

@engineerjoe440 (Collaborator, Author)

Actually...

After further analysis, I realize that this is not a failure due to the comma in the CASE statement; rather, it's an inability to recognize the BYTE# literal.

I'm wondering if it would be most appropriate to:

  1. Adjust the integer_literal to support BIT types (BYTE, WORD, etc.)
     [screenshot of the integer_literal grammar rule]

  2. Adjust the case_list_element to support bit_string_literal
     [screenshot of the case_list_element grammar rule]

I'm leaning towards that second option the more I think about it (sorry, this is turning into a stream of consciousness). I'll make that change in my fork and open a PR, but let's discuss if you think there might be a better approach.
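Concretely, the second option would be along the lines of the sketch below. This is only a hedged guess at the shape of the rule; the alternative names are taken from this discussion, and the actual production in blark's iec.lark may list different or additional alternatives:

    // hedged sketch of option 2, not the actual iec.lark contents
    case_list_element: subrange
                     | integer_literal
                     | bit_string_literal   // <-- the newly added alternative

Option 1, by contrast, would widen integer_literal itself, which would ripple into every context that uses integer literals rather than just CASE labels.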

I should note, I also saw boolean_literal and wonder if it might also be applicable in this way. It seems reasonable, but uncommon? I'm going to leave it out for now.
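For completeness, "applicable in this way" would mean a hypothetical CASE along these lines (the selector and variable names are invented; whether tools even accept a BOOL selector here is part of why it seems uncommon):

    CASE xFlag OF   (* hypothetical BOOL selector *)
        TRUE:
            nCount := nCount + 1;
        FALSE:
            nCount := 0;
    END_CASE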

engineerjoe440 changed the title from "Comma-Separated Groups of Subranges Failing in CASE Statement" to "BYTE# literals not properly handled in CASE statement" on May 11, 2022
@klauer (Owner) commented May 12, 2022

I saw the PR and then the issue, but I think you made the right call here 👍

I also appreciate the level of effort you put into documenting this and walking through the possibilities.

klauer added the bug and grammar labels on May 12, 2022
@engineerjoe440 (Collaborator, Author)

Oops! Should've closed this when the PR merged!

Closed by: #35
