
Update test_convert_to_markdown.py #1090

Merged: 2 commits merged into main from mrT23-patch-10 on Aug 2, 2024
Conversation

mrT23 (Collaborator) commented on Aug 2, 2024

PR Type

Tests


Description

  • Updated the expected_output in test_convert_to_markdown.py to use PRReviewHeader.REGULAR.value instead of PRReviewHeader.REGULAR.
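As context for this one-line change, the difference between interpolating an Enum member and its .value in an f-string can be sketched as follows. The header string used here is a hypothetical stand-in, not the actual value from pr_agent:

```python
from enum import Enum

# Hypothetical stand-in for pr_agent's PRReviewHeader enum; the real
# header string in the project may differ.
class PRReviewHeader(Enum):
    REGULAR = "## PR Reviewer Guide"

# Interpolating the member itself renders its qualified name, not the string:
header_member = f"{PRReviewHeader.REGULAR}"       # "PRReviewHeader.REGULAR"

# Interpolating .value renders the underlying string, which is what a
# markdown expected_output should actually contain:
header_value = f"{PRReviewHeader.REGULAR.value}"  # "## PR Reviewer Guide"
```

This is why comparing the rendered markdown against an expected string built from the bare member fails: the member formats as its name, not its value.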

Changes walkthrough 📝

Relevant files

Tests
  tests/unittest/test_convert_to_markdown.py (+1/-1)
  Update expected output to use enum value in test
  • Updated the expected_output to use PRReviewHeader.REGULAR.value
    instead of PRReviewHeader.REGULAR.

    💡 PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    Contributor

    PR Reviewer Guide 🔍

    ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪
    🏅 Score: 95
    🧪 No relevant tests
    🔒 No security concerns identified
    🔀 No multiple PR themes
    ⚡ No key issues to review

    Contributor

    PR Code Suggestions ✨

    Category: Maintainability
    Suggestion: Improve variable naming for clarity and maintainability

    Consider using a more descriptive variable name for the expected_output to reflect
    its specific purpose or content. This can improve code readability and
    maintainability.

    tests/unittest/test_convert_to_markdown.py [56]

    -expected_output = f'{PRReviewHeader.REGULAR.value} 🔍\n\n<table>\n<tr><td>⏱️&nbsp;<strong>Estimated effort to review</strong>: 1 🔵⚪⚪⚪⚪</td></tr>\n<tr><td>🧪&nbsp;<strong>No relevant tests</strong></td></tr>\n<tr><td>⚡&nbsp;<strong>Possible issues</strong>: No\n</td></tr>\n<tr><td>🔒&nbsp;<strong>No security concerns identified</strong></td></tr>\n</table>\n\n\n<details><summary> <strong>Code feedback:</strong></summary>\n\n<hr><table><tr><td>relevant file</td><td>pr_agent/git_providers/git_provider.py\n</td></tr><tr><td>suggestion &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</td><td>\n\n<strong>\n\nConsider raising an exception or logging a warning when \'pr_url\' attribute is not found. This can help in debugging issues related to the absence of \'pr_url\' in instances where it\'s expected. [important]\n\n</strong>\n</td></tr><tr><td>relevant line</td><td><a href=\'https://github.com/Codium-ai/pr-agent-pro/pull/102/files#diff-52d45f12b836f77ed1aef86e972e65404634ea4e2a6083fb71a9b0f9bb9e062fR199\'>return ""</a></td></tr></table><hr>\n\n</details>'
    +detailed_markdown_output = f'{PRReviewHeader.REGULAR.value} 🔍\n\n<table>\n<tr><td>⏱️&nbsp;<strong>Estimated effort to review</strong>: 1 🔵⚪⚪⚪⚪</td></tr>\n<tr><td>🧪&nbsp;<strong>No relevant tests</strong></td></tr>\n<tr><td>⚡&nbsp;<strong>Possible issues</strong>: No\n</td></tr>\n<tr><td>🔒&nbsp;<strong>No security concerns identified</strong></td></tr>\n</table>\n\n\n<details><summary> <strong>Code feedback:</strong></summary>\n\n<hr><table><tr><td>relevant file</td><td>pr_agent/git_providers/git_provider.py\n</td></tr><tr><td>suggestion &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</td><td>\n\n<strong>\n\nConsider raising an exception or logging a warning when \'pr_url\' attribute is not found. This can help in debugging issues related to the absence of \'pr_url\' in instances where it\'s expected. [important]\n\n</strong>\n</td></tr><tr><td>relevant line</td><td><a href=\'https://github.com/Codium-ai/pr-agent-pro/pull/102/files#diff-52d45f12b836f77ed1aef86e972e65404634ea4e2a6083fb71a9b0f9bb9e062fR199\'>return ""</a></td></tr></table><hr>\n\n</details>'
     
    Suggestion importance [1-10]: 5

    Why: The suggestion to use a more descriptive variable name is valid and can improve code readability and maintainability. However, it is a minor improvement and not crucial for the functionality of the code.


    Contributor

    CI Failure Feedback 🧐

    Action: build-and-test

    Failed stage: Test dev docker [❌]

    Failed test name: test_load_invalid_yaml2

    Failure summary:

    The action failed because the test test_load_invalid_yaml2 in the file
    tests/unittest/test_load_yaml.py failed.

  • The test expected the function load_yaml to return a specific output when given an invalid YAML
    string.
  • The actual output differed from the expected output due to a minor discrepancy in the suggestion
    content field.
  • Specifically, the expected output had a trailing space after the colon in the string
    "if __name__ ==: ", which was not present in the actual output.
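The failure mode can be reproduced with plain strings, independent of the YAML parser: two strings that differ only in trailing whitespace compare unequal, so the test's equality assertion fails.

```python
# Minimal illustration of the assertion failure: the expected suggestion
# string ends with a trailing space, while the actual parsed string does not.
expected = "The print statement is outside inside the if __name__ ==: "
actual = "The print statement is outside inside the if __name__ ==:"

assert expected != actual            # unequal: one trailing space differs
assert expected.rstrip() == actual   # identical once trailing whitespace is stripped
```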

  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    1510:  tests/unittest/test_find_line_number_of_relevant_line_in_file.py::TestFindLineNumberOfRelevantLineInFile::test_relevant_line_found_but_deleted PASSED [ 58%]
    1511:  tests/unittest/test_fix_output.py::TestTryFixJson::test_incomplete_code_suggestions PASSED [ 59%]
    1512:  tests/unittest/test_fix_output.py::TestTryFixJson::test_incomplete_code_suggestions_new_line PASSED [ 61%]
    1513:  tests/unittest/test_fix_output.py::TestTryFixJson::test_incomplete_code_suggestions_many_close_brackets PASSED [ 62%]
    1514:  tests/unittest/test_fix_output.py::TestTryFixJson::test_incomplete_code_suggestions_relevant_file PASSED [ 63%]
    1515:  tests/unittest/test_github_action_output.py::TestGitHubOutput::test_github_action_output_enabled PASSED [ 64%]
    1516:  tests/unittest/test_github_action_output.py::TestGitHubOutput::test_github_action_output_disabled PASSED [ 66%]
    1517:  tests/unittest/test_github_action_output.py::TestGitHubOutput::test_github_action_output_notset PASSED [ 67%]
    1518:  tests/unittest/test_github_action_output.py::TestGitHubOutput::test_github_action_output_error_case PASSED [ 68%]
    ...
    
    1523:  tests/unittest/test_language_handler.py::TestSortFilesByMainLanguages::test_happy_path_sort_files_by_main_languages PASSED [ 75%]
    1524:  tests/unittest/test_language_handler.py::TestSortFilesByMainLanguages::test_edge_case_empty_languages PASSED [ 76%]
    1525:  tests/unittest/test_language_handler.py::TestSortFilesByMainLanguages::test_edge_case_empty_files PASSED [ 77%]
    1526:  tests/unittest/test_language_handler.py::TestSortFilesByMainLanguages::test_edge_case_languages_with_no_extensions PASSED [ 79%]
    1527:  tests/unittest/test_language_handler.py::TestSortFilesByMainLanguages::test_edge_case_files_with_bad_extensions_only PASSED [ 80%]
    1528:  tests/unittest/test_language_handler.py::TestSortFilesByMainLanguages::test_general_behaviour_sort_files_by_main_languages PASSED [ 81%]
    1529:  tests/unittest/test_load_yaml.py::TestLoadYaml::test_load_valid_yaml PASSED [ 83%]
    1530:  tests/unittest/test_load_yaml.py::TestLoadYaml::test_load_invalid_yaml1 PASSED [ 84%]
    1531:  tests/unittest/test_load_yaml.py::TestLoadYaml::test_load_invalid_yaml2 FAILED [ 85%]
    ...
    
    1535:  tests/unittest/test_parse_code_suggestion.py::TestParseCodeSuggestion::test_with_code_example_key PASSED [ 90%]
    1536:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_valid_yaml PASSED [ 92%]
    1537:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_add_relevant_line PASSED [ 93%]
    1538:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_extract_snippet PASSED [ 94%]
    1539:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_remove_last_line PASSED [ 96%]
    1540:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_empty_yaml_fixed PASSED [ 97%]
    1541:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_no_initial_yaml PASSED [ 98%]
    1542:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_with_initial_yaml PASSED [100%]
    1543:  =================================== FAILURES ===================================
    1544:  _____________________ TestLoadYaml.test_load_invalid_yaml2 _____________________
    1545:  self = <test_load_yaml.TestLoadYaml object at 0x7f2316985d20>
    1546:  def test_load_invalid_yaml2(self):
    1547:  yaml_str = '''\
    1548:  - relevant file: src/app.py:
    1549:  suggestion content: The print statement is outside inside the if __name__ ==: \
    1550:  '''
    1551:  with pytest.raises(ScannerError):
    1552:  yaml.safe_load(yaml_str)
    1553:  expected_output = [{'relevant file': 'src/app.py:', 'suggestion content': 'The print statement is outside inside the if __name__ ==: '}]
    1554:  >           assert load_yaml(yaml_str) == expected_output
    1555:  E           AssertionError: assert [{'relevant f..._name__ ==:'}] == [{'relevant f...name__ ==: '}]
    ...
    
    1557:  E             Full diff:
    1558:  E               [
    1559:  E                {'relevant file': 'src/app.py:',
    1560:  E                 'suggestion content': 'The print statement is outside inside the if __name__ '
    1561:  E             -                         '==: '},
    1562:  E             ?                             -
    1563:  E             +                         '==:'},
    1564:  E               ]
    1565:  tests/unittest/test_load_yaml.py:49: AssertionError
    1566:  ----------------------------- Captured stderr call -----------------------------
    1567:  2024-08-02 18:45:17.264 | ERROR    | pr_agent.algo.utils:load_yaml:564 - Failed to parse AI prediction: mapping values are not allowed here
    1568:  in "<unicode string>", line 1, column 28:
    1569:  - relevant file: src/app.py:
    1570:  ^
    1571:  2024-08-02 18:45:17.264 | INFO     | pr_agent.algo.utils:try_fix_yaml:587 - Successfully parsed AI prediction after adding |-
    1572:  =============================== warnings summary ===============================
    1573:  ../usr/local/lib/python3.10/site-packages/pydantic/_internal/_config.py:291
    1574:  /usr/local/lib/python3.10/site-packages/pydantic/_internal/_config.py:291: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
    ...
    
    1576:  tests/unittest/test_file_filter.py:44
    1577:  /app/tests/unittest/test_file_filter.py:44: DeprecationWarning: invalid escape sequence '\.'
    1578:  monkeypatch.setattr(global_settings.ignore, 'regex', ['^file[2-4]\..*$'])
    1579:  tests/unittest/test_file_filter.py:65
    1580:  /app/tests/unittest/test_file_filter.py:65: DeprecationWarning: invalid escape sequence '\.'
    1581:  monkeypatch.setattr(global_settings.ignore, 'regex', ['(((||', '^file[2-4]\..*$'])
    1582:  -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
    1583:  =========================== short test summary info ============================
    1584:  FAILED tests/unittest/test_load_yaml.py::TestLoadYaml::test_load_invalid_yaml2
    1585:  =================== 1 failed, 76 passed, 3 warnings in 3.91s ===================
    1586:  ##[error]Process completed with exit code 1.
    

    ✨ CI feedback usage guide:

    The CI feedback tool (/checks) automatically triggers when a PR has a failed check.
    The tool analyzes the failed checks and provides several feedbacks:

    • Failed stage
    • Failed test name
    • Failure summary
    • Relevant error logs

    In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

    /checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
    

    where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

    Configuration options

    • enable_auto_checks_feedback - if set to true, the tool will automatically provide feedback when a check fails. Default is true.
    • excluded_checks_list - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
    • enable_help_text - if set to true, the tool will provide a help message with the feedback. Default is true.
    • persistent_comment - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
    • final_update_message - if persistent_comment is true and updating a previous checks message, the tool will also create a new message: "Persistent checks updated to latest commit". Default is true.
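Assuming these options live in PR-Agent's TOML configuration under a [checks] section (the section name is inferred from the tool name and not confirmed by this page), the defaults listed above might be written as:

```toml
[checks]
enable_auto_checks_feedback = true
excluded_checks_list = []       # e.g. ["check1", "check2"]
enable_help_text = true
persistent_comment = true
final_update_message = true
```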

    See more information about the checks tool in the docs.

    mrT23 merged commit 3756b54 into main on Aug 2, 2024 (1 check passed)
    mrT23 deleted the mrT23-patch-10 branch on August 2, 2024 at 18:54