Test script for updateRecommendation API #836

Merged: 12 commits merged into kruize:mvp_demo on Aug 22, 2023

Conversation

@msvinaykumar (Contributor) commented May 29, 2023:

Added the updateRecommendation API to the scale test and enabled simultaneous uploading of results and updating of recommendations. Additionally, extra flags have been introduced to selectively post results only or update recommendations only. It is also now possible to specify a start date.

Example syntax:

python3 -u quickTestScalability.py --ip master-0.kruizevin.lab.psi.pnq2.redhat.com --port 30792 --count 1,1440 --minutesjump=15 --generaterecommendation --parallelresultcount 1 --startdate=2023-01-01T00:00:00.000Z --name temp24Hrs3 --postresults

The above example creates an experiment named temp24Hrs3_1 with 15 days of result data and also triggers the updateRecommendations API.
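A rough sketch of how these flags might be wired up with argparse (flag names are taken from the command above; types, defaults, and help text are assumptions, not the script's actual values):

import argparse

parser = argparse.ArgumentParser(description="Scale test for updateResults / updateRecommendations")
parser.add_argument("--ip", required=True, help="Kruize service hostname or IP")
parser.add_argument("--port", type=int, required=True, help="Kruize service port")
parser.add_argument("--name", required=True, help="experiment name prefix, e.g. temp24Hrs3")
parser.add_argument("--count", default="1,1440", help="experiments,results per experiment; 1440 x 15 min = 15 days")
parser.add_argument("--minutesjump", type=int, default=15, help="length of each result interval in minutes")
parser.add_argument("--startdate", default="2023-01-01T00:00:00.000Z", help="interval_start_time of the first result")
parser.add_argument("--parallelresultcount", type=int, default=1, help="number of results posted in parallel")
parser.add_argument("--postresults", action="store_true", help="post results (omit to skip posting results)")
parser.add_argument("--generaterecommendation", action="store_true", help="invoke updateRecommendations while posting results")
args = parser.parse_args()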

Signed-off-by: msvinaykumar <vinakuma@redhat.com>

@msvinaykumar msvinaykumar self-assigned this May 29, 2023
@msvinaykumar msvinaykumar added this to the Kruize 0.0.16_rm Release milestone May 29, 2023
@msvinaykumar msvinaykumar changed the title from "Included updateRecommendation API" to "Test script for updateRecommendation API" Jun 5, 2023
@chandrams (Contributor):
@msvinaykumar - Please include functional tests for updateRecommendations. I have listed a few scenarios below; add anything that is missing.

Test scenarios for updateRecommendations (invoke updateRecommendations as shown below; a sketch of one negative case follows the list):

Post a valid experiment name & valid end time and validate recommendations are as expected (What is the valid range for end time?)
Post a valid experiment name, valid end time and valid start time (valid range), validate recommendations are as expected
Post the same experiment name & end time twice, validate recommendations - Should be same or within acceptable threshold values?

Validate status code & expected error messages for the below:
Post without any parameters
Post an invalid experiment name
Post only a valid interval end time
Post a valid experiment name but an invalid end time
Post a valid experiment name and an invalid end time format
Post a valid experiment name and a valid end time format but an invalid start time format
Post a valid experiment name, valid end time and invalid start time (future date or past date beyond accepted valid range)
Test for boundary values for the timestamps if there are any ranges
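A minimal sketch of one such negative case (invalid end time format), assuming the update_recommendations helper and the status constants already used by this test suite (their imports are omitted here); the exact expected message is left open:

import pytest

@pytest.mark.negative
def test_update_recommendations_invalid_end_time_format(cluster_type):
    """updateRecommendations with a malformed interval_end_time should be rejected."""
    experiment_name = "temp24Hrs3_1"   # assumes this experiment was created and has results posted
    end_time = "2023-01-02 00:00:00"   # missing the 'T', milliseconds and 'Z' of the expected ISO-8601 format
    response = update_recommendations(experiment_name, None, end_time)
    data = response.json()
    assert response.status_code == ERROR_STATUS_CODE
    assert data['status'] == ERROR_STATUS
    # the message text should match whatever monitoringAPI.md documents for an invalid time format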

assert notification["type"] != "error"

response = update_recommendations(experiment_name, interval_start_time, end_time)
data = response.json()
Contributor:

@msvinaykumar - Please include validation of the json response of update_recommendations for all the success cases
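One possible shape for that success-path check, reusing constants from this suite; which fields the updateRecommendations response body actually carries is an assumption here:

response = update_recommendations(experiment_name, interval_start_time, end_time)
data = response.json()
assert response.status_code == SUCCESS_STATUS_CODE   # constant used by the other success-path asserts
assert len(data) > 0                                  # body is expected to carry the updated recommendations
# then walk the notifications in the returned recommendations and assert none has type "error",
# mirroring the existing notification["type"] != "error" assert above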

Contributor:

I think we should add a jsonschema validation for the output of the updateRecommendations JSON too. Please include this in all tests where the updateRecommendations API is invoked.
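A minimal sketch of such a schema check, modeled on the validate_list_reco_json pattern quoted later in this thread; the helper name and the update_reco_json_schema object are illustrative, not existing code:

from jsonschema import FormatChecker, validate
from jsonschema.exceptions import ValidationError

def validate_update_reco_json(update_reco_json, schema):
    """Return '' when the updateRecommendations response matches the schema, else the validation error."""
    try:
        validate(instance=update_reco_json, schema=schema, format_checker=FormatChecker())
        return ""
    except ValidationError as err:
        return err.message

# usage inside a test, right after the API call:
# errorMsg = validate_update_reco_json(data, update_reco_json_schema)
# assert errorMsg == ""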

@msvinaykumar (Author):

done


@pytest.mark.sanity
def test_update_valid_recommendations_after_results_after_create_exp(cluster_type):
input_json_file = "../json_files/create_exp.json"
Contributor:

Include a description for all the tests, along with the expected behavior.
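For example, the test quoted above could state its intent and expected behavior in a docstring along these lines (wording illustrative):

@pytest.mark.sanity
def test_update_valid_recommendations_after_results_after_create_exp(cluster_type):
    """
    Create an experiment, post results for it, then invoke updateRecommendations with a
    valid experiment name and interval end time, and validate that recommendations are
    generated without any error notifications.
    """
    input_json_file = "../json_files/create_exp.json"
    ...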

@msvinaykumar (Author):

done

# Sleep for 1 sec to get recommendations
time.sleep(1)

response = update_recommendations(experiment_name, interval_start_time, end_time)
Contributor:

Please update all the tests in test_list_recommendations.py to invoke update_recommendations

@msvinaykumar (Author):

done

response = update_recommendations(None, None, None)
data = response.json()
assert response.status_code == ERROR_STATUS_CODE
assert data['message'] == UPDATE_RECOMMENDATIONS_MANDATORY_DEFAULT_MESSAGE
Contributor:

Shouldn't the error messages returned for negative tests match what is mentioned in monitoringAPI.md? (#854)

@msvinaykumar (Author):

done

response = update_recommendations(experiment_name, None, None)
data = response.json()
assert response.status_code == ERROR_STATUS_CODE
assert data['message'] == UPDATE_RECOMMENDATIONS_MANDATORY_INTERVAL_END_DATE
Contributor:

Here the experiment_name doesn't exist, I think. Will it still return an error message related to the interval end date, or will it return that the experiment_name is invalid?

@msvinaykumar (Author):

Updated accordingly. In general it will display 'data not found'.

response = update_recommendations(experiment_name, start_time, end_time)
data = response.json()
assert response.status_code == ERROR_STATUS_CODE
assert data['message'] == UPDATE_RECOMMENDATIONS_START_TIME_PRECEDE_END_TIME
Contributor:

Please update the API doc with this error message

Contributor:

Can you please share the results for all these tests?

Contributor:

Include these tests too (a sketch of the boundary-value case follows this list):

  • Test with boundary values for the timestamps; I see 15 days as the acceptable range in the md file
  • Include tests with invalid time format for start and end time
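A minimal sketch of the boundary-value case, assuming the 15-day limit mentioned above and the helpers and constants already used in this suite; the expected message text is an assumption:

@pytest.mark.negative
def test_update_recommendations_beyond_15_day_range(cluster_type):
    """A start/end interval wider than the documented 15-day range should be rejected."""
    experiment_name = "temp24Hrs3_1"          # assumes results were already posted for this experiment
    start_time = "2022-12-31T23:45:00.000Z"   # 15 days plus one 15-minute interval before end_time
    end_time = "2023-01-16T00:00:00.000Z"
    response = update_recommendations(experiment_name, start_time, end_time)
    data = response.json()
    assert response.status_code == ERROR_STATUS_CODE
    assert "15 days" in data['message']       # exact wording should follow the API doc once updated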

@msvinaykumar (Author):

Done

@msvinaykumar (Author):

"Can you please share the results for all these tests?"

https://drive.google.com/drive/folders/17e1KE3eZKi8Waf3o7uvlMQ1SqJC8cpfW?usp=sharing

@msvinaykumar msvinaykumar added the labels analyzer (Analyzer module related issues) and API (Requires API Changes) Jul 25, 2023

@@ -165,12 +185,13 @@ def test_list_recommendations_invalid_exp(cluster_type):
response = delete_experiment(input_json_file)
print("delete exp = ", response.status_code)


@pytest.mark.sanity
def test_list_recommendations_without_results(cluster_type):
Contributor:

Invoke updateRecommendations API in this test too to check behavior without results

@@ -112,7 +131,7 @@ def test_list_recommendations_without_parameters(cluster_type):
# Validate the json values
create_exp_json = read_json_data_from_file(input_json_file)
update_results_json = []
update_results_json.append(result_json_arr[len(result_json_arr)-1])
update_results_json.append(result_json_arr[len(result_json_arr) - 1])
Contributor:

Does validate_reco_json work with this update_results_json?

assert data['message'] == "Bulk entries are currently unsupported!"
assert response.status_code == SUCCESS_STATUS_CODE
assert data['status'] == SUCCESS_STATUS
assert data['message'] == UPDATE_RESULTS_SUCCESS_MSG
Contributor:

Also update the code below in this test to invoke the updateRecommendations API and validate recommendations using the update results JSON instead of None (see the sketch after the quoted code).

    # Uncomment the below lines when bulk entries are allowed
    # update_results_json = read_json_data_from_file(result_json_file)

    # Since bulk entries are not supported passing None for update results json
    update_results_json = None
    validate_reco_json(create_exp_json[0], update_results_json, list_reco_json[0])
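A sketch of the requested change, reusing names from the quoted code and the last-posted-result handling shown earlier in this review; whether validate_reco_json accepts this shape is the open question raised above:

    # Invoke updateRecommendations for the posted interval before listing recommendations
    response = update_recommendations(experiment_name, interval_start_time, end_time)
    assert response.status_code == SUCCESS_STATUS_CODE   # constant used elsewhere in this suite

    # Validate against the last posted result instead of passing None
    update_results_json = []
    update_results_json.append(result_json_arr[len(result_json_arr) - 1])
    validate_reco_json(create_exp_json[0], update_results_json, list_reco_json[0])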


@@ -453,17 +471,23 @@ def test_update_results_with_same_result(cluster_type):
response = update_results(result_json_file)

data = response.json()
assert response.status_code == ERROR_409_STATUS_CODE
assert response.status_code == ERROR_STATUS_CODE
Contributor:

Have we changed the response code to 400 now when a result with an existing timestamp is posted again? We will need to discuss this with Suraj too, as they might have some logic based on the 409 error code. Also, if the error code has changed, we will need to update UpdateResults.md.

assert response.status_code == ERROR_STATUS_CODE
assert data['status'] == ERROR_STATUS
assert data['message'] == 'Out of a total of 1 records, 1 failed to save'
assert data['data'][0]['errorReasons'][0] == UPDATE_RESULTS_DATE_PRECEDE_ERROR_MSG
Contributor:

UpdateResults.md needs to be updated with the errorReasons and error messages

@@ -18,359 +18,511 @@
import datetime
import requests
import argparse
import multiprocessing
Contributor:

@msvinaykumar - As discussed, please raise a separate PR for this on top of PR 832

@chandrams (Contributor):
@msvinaykumar - The tests did not run completely; the extended and negative suites were not run due to the error below. Did it work for you? Looking into this.

Cmd used:

./test_autotune.sh -c minikube -i quay.io/vinakuma/autotune_operator:ros4 --testsuite=remote_monitoring_tests --resultsdir=/home/csubrama/kruize-scalability-res

Error:

FAILED test_create_experiment.py::test_create_exp_with_both_deployment_name_selector
FAILED test_create_experiment.py::test_create_exp_with_invalid_header - asser...
=========== 2 failed, 37 passed, 216 deselected in 82.78s (0:01:22) ============

/home/csubrama/github/vinay/autotune/tests/scripts/remote_monitoring_tests/remote_monitoring_tests.sh: line 127: 0 + 2
3
3
3
3
3
3
1
1
1
1: syntax error in expression (error token is "3
3
3
3
3
3
1
1
1
1")

The grep at line 124 seems to be matching these prints in the test output ("Out of a total of 1 records, 1 failed to save").
Please modify it to match the failed results count.

@@ -651,6 +708,7 @@ def test_list_recommendations_multiple_exps_with_missing_metrics(cluster_type):
response = delete_experiment(create_exp_json_file)
print("delete exp = ", response.status_code)


@pytest.mark.extended
@pytest.mark.parametrize("latest", ["true", "false"])
def test_list_recommendations_with_only_latest(latest, cluster_type):
Contributor:

@msvinaykumar - rest_apis/test_list_recommendations.py::test_list_recommendations_with_only_latest[false] failed with the error below; can you please check?

      # Validate the json against the json schema
        errorMsg = validate_list_reco_json(list_reco_json, list_reco_json_schema)
>       assert errorMsg == ""
E       assert [<ValidationError: "'monitoring_start_time' is a required property">, <ValidationError: "'monitoring_end_time' is a re...Error: "'variation' is a required property">, <ValidationError: "'monitoring_start_time' is a required property">, ...] == ''

test_list_recommendations.py:805: AssertionError

@dinogun (Contributor) commented Aug 12, 2023:

@msvinaykumar does this need a rebase as well?

…ous uploading of results and recommendations. Additionally, we have introduced extra flags to selectively post results or update recommendations only. Moreover, it is now possible to specify a start date as well.

Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
@chandrams (Contributor):

@msvinaykumar - Please check the timestamp format in the error message

{"message":"Out of a total of 1 records, 1 failed to save","httpcode":400,"documentationLink":"","status":"ERROR","data":[{"interval_start_time":"Aug 17, 2023, 4:35:46 PM","interval_end_time":"Aug 17, 2023, 4:50:46 PM",

Signed-off-by: msvinaykumar <vinakuma@redhat.com>
Signed-off-by: msvinaykumar <vinakuma@redhat.com>
@dinogun (Contributor) left a review comment:

LGTM

@dinogun dinogun merged commit cc988f5 into kruize:mvp_demo Aug 22, 2023
1 of 2 checks passed
Labels: analyzer (Analyzer module related issues), API (Requires API Changes)
Projects: Status: Done
Development

Successfully merging this pull request may close these issues.

New API 'UpdateRecommendations' to support non-chronological order of data
4 participants