Initial test results #33 (Closed)

tylergu commented Mar 11, 2022
I ran Acto for 3 hours. It ran 65 tests and produced 25 alarms.

All of the alarms come from our system state oracle:

For 19 out of 25 alarms, Acto didn't find any field in the system state deltas matching the input delta.
For 6 out of 25 alarms, Acto found matching fields, but the value changes differ.


1 true alarm

See here: #39

18 false alarms caused by no matching field in system state deltas

3 are caused by changing a complex object: when we change a complex object (e.g., from null to a populated object), the change shows up at a lower level in the system state.
Concretely, consider the following example where we changed the secretBackend from null to a new object:

"root['spec']['secretBackend']": {
        "prev": null,
        "curr": {
            "vault": {
                    "annotations": {
                        "key": "random"
                    }
            }
        }
}

Then we have the following system state delta:

"root['test-cluster-server']['spec']['template']['metadata']['annotations']['key']": {
    "prev": null,
    "curr": "random"
}
...

Acto tries to find a matching field based on the input delta's path ['spec']['secretBackend'], but the system state delta sits at a lower level: its path ends in ...['annotations']['key']. To match these two fields, we need to flatten the dict in the input delta before field matching; see the sketch below.
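
A minimal sketch of the flattening idea (the helper name and the suffix-matching comment are assumptions for illustration, not Acto's actual implementation):

def flatten(value, path=()):
    """Recursively flatten a nested dict into {path-tuple: leaf-value} pairs."""
    if isinstance(value, dict):
        items = {}
        for key, child in value.items():
            items.update(flatten(child, path + (key,)))
        return items
    return {path: value}

# Input delta changed spec.secretBackend from null to a nested object.
input_curr = {"vault": {"annotations": {"key": "random"}}}

# Flattening yields {('spec', 'secretBackend', 'vault', 'annotations', 'key'): 'random'},
# so the leaf path suffix ('annotations', 'key') can be matched against the system
# state delta path ...['annotations']['key'] instead of requiring an exact match on
# ['spec']['secretBackend'].
print(flatten(input_curr, ("spec", "secretBackend")))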

10 are caused by changes that the operator rejected: scaling down, shrinking a volume, and a key-value delimiter not being found.

1 is caused by changing a field from its default value to null. This is effectively no change, but we were not aware of the default value; Acto needs to know about default values.

1 is caused by a field that does not affect the application's state (it configures the operator itself).

2 are caused by a bug in our input generation

1 needs further inspection.


6 false alarms caused by value mismatch

3 are caused by a lack of canonicalization when comparing dictionaries: the input delta uses camelCase keys while the system state uses snake_case, e.g. requiredDuringSchedulingIgnoredDuringExecution vs. required_during_scheduling_ignored_during_execution. This is easy to fix by canonicalizing field names before comparing dicts; see the sketch after the example below.

"root['spec']['affinity']['podAntiAffinity']": {
        "prev": {
            "requiredDuringSchedulingIgnoredDuringExecution": [
                    {
                        "labelSelector": {
                                "matchExpressions":...
                        }
                    }
            ]
        },
        "curr": null
}
"root['test-cluster-server-0']['spec']['affinity']['pod_anti_affinity']": {
    "prev": {
            "required_during_scheduling_ignored_during_execution": [
                {
                        "label_selector": {
                            "match_expressions":...
                        }
                }
            ]
    },
    "curr": null
}
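
A minimal sketch of the canonicalization fix; the regex-based camelCase-to-snake_case conversion is one possible approach, not necessarily what Acto will end up using:

import re

def canonicalize(name: str) -> str:
    """Convert a camelCase field name to snake_case so that keys from the CR
    and keys from the Kubernetes Python client compare equal."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

assert canonicalize("requiredDuringSchedulingIgnoredDuringExecution") \
    == "required_during_scheduling_ignored_during_execution"
assert canonicalize("labelSelector") == "label_selector"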

2 are caused by comparing null to a default value: we need to be aware of default values when comparing with null (see the sketch after the example below). For instance, the input delta

"root['spec']['image']": {
      "prev": null,
      "curr": "random"
}

resulted in the following system state delta:

"root['test-cluster-server']['spec']['template']['spec']['containers'][0]['image']": {
      "prev": "rabbitmq:3.8.21-management",
      "curr": "random"
}
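
A minimal sketch of how default-value awareness could look; the defaults table here is hypothetical and would have to come from the CRD schema or the operator's documentation:

# Hypothetical table of known defaults, keyed by the input delta's field path.
KNOWN_DEFAULTS = {
    ("spec", "image"): "rabbitmq:3.8.21-management",
}

def values_match(field_path, input_prev, system_prev):
    """Treat a null previous value in the input as equal to the field's
    known default when comparing against the system state's previous value."""
    if input_prev is None:
        input_prev = KNOWN_DEFAULTS.get(tuple(field_path))
    return input_prev == system_prev

# The input delta's prev is null, the system state's prev is the default image,
# so this comparison should not raise an alarm.
print(values_match(["spec", "image"], None, "rabbitmq:3.8.21-management"))  # True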

1 is caused by an integer 0 being compared to the string '0': easy to fix (see the sketch below).
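
A minimal sketch of the value canonicalization; coercing both sides to their textual form before comparing is one simple option (an assumption, not necessarily the final fix):

def loosely_equal(a, b) -> bool:
    """Compare scalar values after normalizing their textual form, so that the
    integer 0 and the string '0' are treated as the same value."""
    return str(a) == str(b)

assert loosely_equal(0, "0")
assert not loosely_equal(0, "1")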
