Assert return codes specified in API in functional tests #331
Comments
Preference is to stress inputs from functional tests (vs. mock/stub coverage tests) and verify handling/responses. @dmknutsen assigned to check compliance against the unit test standard and we'll evaluate approaches to resolve.
Should we have a coding standard where all unit test assertions must check "return == something" (no use of "!=")?
I don't think we can make this sort of generalization. It really depends on whether or not there is a specific requirement for a certain return code. If there is, then yes, a functional test should check for it. But in the majority of cases (particularly with PSP and OSAL) there are no documented requirements at that API level. On the other hand, where there isn't a specific documented error code requirement, it is more future-proof to simply check that the return code is not OS_SUCCESS.

The reality is that we should discourage applications from checking for only certain specific error return codes. By relying too heavily on a specific set of return codes, we open the possibility that app developers check only for those return codes, and then we are backed into an API corner (so to speak) if we need to expand those in the future as the libraries evolve. The better pattern, and the one that we should encourage for real apps, is to check for success vs. non-success.
We don't have any OSAL or PSP requirements, so relying on requirements to verify against doesn't work. Historically, and in this round, certification is based on the spec/APIs. Since the APIs do currently document at least some of the return codes, I'd expect those to be explicitly checked (if it's in the spec, the code should match). Then the question becomes: should we remove the documentation of specific error codes for each API (do we fix the document or the code)? We could generalize the documentation as "OS_SUCCESS or appropriate error code" for everything that applies.

I think the development trade here is whether we want to support a concept where apps could handle certain errors in a unique manner, vs. just reporting the error code. If we want apps to be able to take a unique action based on the error, I'd think we'd need to provide consistent behavior across OSALs. Basically, the functional test should enforce a specific return where one is defined in the API. On the other hand, if apps should only ever have at most two responses (OS_SUCCESS or any error case), then I'd think the specific error checks may not need to be enforced (depends on the certification expectations, though).

These are just developer concerns though; maybe @tngo67 can help with certification expectations? @acudmore - any inputs from your perspective? Enforce no specific error codes, a subset, or all possible?

*EDIT - tagged wrong person
Tagged for discussion. In the short term @dmknutsen is auditing the current functional tests for compliance with our processes to get a sense of how well we currently comply with GSFC expectations.
@ejtimmon do you know if any GSFC apps have unique responses to different error codes? Or is the error response typically just an event and reporting of the error code?
There are a few error conditions which applications do handle specially, mainly timeouts, and possibly queue full/empty. Most error codes actually deal with things that the application/caller could have and should have checked on their own (i.e. name too long, bad pointer, bad id, etc.). It is highly unlikely an application would handle these explicitly, because a properly-written application would never generate them. These are purely tools/hints for the developer to aid in finding the bugs in the code. Once debugged, they don't occur. On the other hand, the timeouts and queue-full errors deal with the state of the system at the time of the call and therefore could occur in a properly written and operating system. Applications that call functions that are subject to system state like this do need to check for these codes.
Might be worth a scrub of the API documentation relating to error codes from that context, as it's our contract with the user. It doesn't need to limit additional error codes for unique situations as development progresses, but it should explicitly list those guaranteed (or that should be enforced) by the functional or coverage tests. Maybe something like: "\return Execution status. Success and error codes below are verified by test, but users should not assume the error code list is exhaustive. Precedence is not enforced; calls with multiple errors may return any one of the related error codes." Then define the codes we want to enforce by test. Typically I'd think most standard ones are probably a good idea, and this doesn't limit future development?
I like that wording - the main thing is to be clear that the list of errors is not exhaustive/all-inclusive, and that precedence is not defined. That is, if you pass in both a long name and a bad pointer, all that the API guarantees is that the return code is not OS_SUCCESS. Might even want to add a note in the list of error codes that the specific error code definitions may be extended or refined in future versions of the software. Warn users/developers against only checking for specific error values aside from OS_SUCCESS.
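Pulling both suggestions together, the per-function doxygen block could look something like the following sketch (the wording and the particular `\retval` codes shown are illustrative, not the final text):

```
/**
 * \return Execution status.
 *
 * \retval #OS_SUCCESS            on successful completion
 * \retval #OS_INVALID_POINTER    if a pointer argument is NULL
 * \retval #OS_ERR_NAME_TOO_LONG  if the name exceeds the maximum length
 *
 * \note Success and error codes listed here are verified by test, but
 *       users should not assume the error code list is exhaustive;
 *       specific error code definitions may be extended or refined in
 *       future versions of the software. Precedence is not enforced:
 *       a call with multiple errors may return any one of the related
 *       error codes. Aside from state-dependent codes (e.g. timeouts),
 *       callers should generally check only for success vs. non-success.
 */
```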
I don't know; to me, any return code should have an associated coverage (and probably functional) unit test case, so producing/maintaining an exhaustive list of return codes should be concurrent with the development of the unit tests.
Coverage tests shall absolutely have explicit checks; they are white box and need to relate to the current exhaustive list for each implementation. Functional is API based, and that is what I have been referring to. It needs to enforce the standard set, but I'm suggesting functional tests don't have to limit development or unique error codes across implementations where an explicit error code isn't required (per the API documentation).
I second that -- coverage tests are written against a specific/unique implementation, whereas functional tests are written against an API. We need to keep these tests separate and distinct. The former (coverage) will be updated with each and every code change and there is no separate documentation for it (or in other words, the implementation itself is the documentation of what the coverage tests need to do). The latter (functional) does have separate documentation, and should only need to change if/when there is an actual API change.
Gotcha, and I think I understand more fully: coverage tests cover all error return values, and you're suggesting functional tests only check specific return values if necessary, but generally can just check success or error and not worry about which error unless it's particularly important.
See #399 to capture documentation updates to keep this issue focused on the tests. |
I second that. We do have such a coding standard that we have to comply with.
I third that! :-)
CCB 2020-04-01 - Approved implementation
Reopening since there are remaining items
Not sure if it matters, but I concur with the end results of this discussion. I just noticed the notification email.

We maintain a single set of functional tests that test our OSALs, since the functionality should be identical between all the OSALs. We maintain a separate set of coverage tests for each of our OSALs, since the internal code is different and may require differing inputs to fully exercise statement and branch coverage for each OSAL.

When there is a requirement that drives a specific return code, our functional tests are designed to verify it. When there is a requirement that drives a success or non-success return code, our functional tests are required to only verify success or non-success. In the latter case, the specific non-success return code is not required to be tested, but coverage tests may still exercise it in the course of capturing required statement and branch coverage.
Here is an updated report of where there are differences between the doxygen documentation and the return values validated by the unit-tests (Formatted as a "to do" list): https://gist.github.com/jphickey/2797f70ab0465a7d088e3281c17f2138 |
From the API/test traceability audit, the following "rules" stood out as not being consistently followed. Needed "Standard" tests based on input parameter type:
Other comments
I noticed that osal-core-test.c only tests if the function returns OS_SUCCESS or does NOT return OS_SUCCESS. It does not actually test that the return code is correct. For example, the test to ensure that the OSAL does not create a queue with a name that already exists does not actually test for OS_ERR_NAME_TAKEN. It merely passes if the return is not OS_SUCCESS. A quick spot check indicates this design pattern is systemic.
These unit tests were used to certify (per NPR7150A) the ARINC 653 version of CFS. This requires that requirements have tracing back to tests. Testing the actual expected return code is a better pattern, but it is only required if there is an actual requirement defining the return codes.