
Port JavaRosa Scenario test suites #110

Merged · 140 commits · May 30, 2024

Conversation

@eyelidlessness (Member) commented May 9, 2024

Closes #25.

What's in this PR?

This PR effectively ports all applicable (more on this below) integration tests from JavaRosa, as of getodk/javarosa@5ae6894 (the current commit at the time I started work on this branch).

Why?

We have placed a high priority on alignment and consistency with ODK Collect. JavaRosa implements the bulk of the behaviors associated with Collect's XForms engine. We expect that, to the extent possible, sharing the JavaRosa test suite will have immense benefits in achieving those goals of alignment and consistency.

A note on **PORTING NOTES**

First and foremost: copious notes were taken throughout the porting process. As much as possible, these are included as JSDoc blocks alongside the affected tests and/or supporting structures, each with the heading **PORTING NOTES**. These notes are more detailed than anything written below, and include:

  • Insights into the likely causes of specific known test failures
  • Discussion of various implications of the tests themselves, or their respective functionality under test
  • Thoughts on refinements, additive or alternate approaches to testing specific features or aspects of behavior

... and more than I could possibly recall in a PR writeup. Hopefully the notes generally speak for themselves.

Porting approach

As we began exploration of the ODK Web Forms project, we considered a few different approaches to alignment, including a few variations of the work in this PR. Ultimately we decided to:

  1. Focus on porting integration tests which utilize JavaRosa's Scenario test API
  2. Share as much portable test logic as possible, where any differences are largely in syntax or can be reasonably adapted to platform equivalents
  3. Avoid excessive test logic involving and/or exercising JavaRosa implementation details and internals

We did a preliminary spike into this toward the end of 2023, largely to get some insight into its feasibility, and to exercise some initial repeat functionality in the process. Following that spike, we decided to refine the approach, favoring a more direct port to the Vitest assertion style common throughout most of the rest of the Web Forms project.

Furthermore, the porting process evolved somewhat as this branch progressed. Earlier commits tended to focus on retaining high fidelity with JavaRosa's test code. As the scope of JavaRosa's API surface area became clearer, later commits tended to be more aggressive in identifying implementation details and internals. In a number of cases, proposed APIs are introduced to the Scenario class (with a proposed_ prefix).

The overarching idea with those later proposed APIs is: many JavaRosa tests exercise logic that we care about, but the test code itself is more concerned with JavaRosa internal APIs than we'd prefer, so let's imagine how the tests would be implemented if the core Scenario API provided appropriate test-focused abstractions. If I had a time machine, I'd go back and apply this more aggressive approach throughout. This effort has already run long, though, so my instinct is that we may want to make further changes in that vein gradually, as those tests are surfaced in ongoing feature development and bug fixes.

Importantly: to the extent possible, porting actual test logic was done by:

  1. Literally copying and pasting the test body from JavaRosa
  2. Making the necessary platform-specific adjustments (e.g. form init is async)
  3. Fixing any other differences in syntax
  4. Adapting assertions to their Vitest equivalents and/or our extensions thereof
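
For illustration, a minimal sketch of what a ported test tends to look like after those steps (the fixture name, import path, and `getValue()` accessor are illustrative assumptions rather than exact code from this PR):

```typescript
import { expect, it } from 'vitest';
import { Scenario } from '../src/jr/Scenario.ts';

it('updates a calculated value when its dependency changes', async () => {
  // Step 2: JavaRosa's `Scenario.init(...)` is synchronous; form init is
  // async in Web Forms, so the ported call is awaited.
  const scenario = await Scenario.init('calculate-simple.xml');

  scenario.answer('/data/a', '2');

  // Step 4: a Hamcrest assertion like `assertThat(..., is(...))` is
  // adapted to its Vitest equivalent.
  expect(scenario.answerOf('/data/b').getValue()).toBe('4');
});
```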

Another thing that should hopefully be apparent in review: there are a handful of edge cases which prevent tests from meaningfully running without slight modification. Many of these involve areas where Web Forms is slightly stricter on init than JavaRosa, where it doesn't yet handle certain (uncommon) details of form structure, or where the test logic itself might be inconsistent. These cases are all called out, and generally they're parameterized to demonstrate the failure as-ported-directly alongside either passing-with-minor-adjustments or a more-meaningful-failure, as sketched below.
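
A sketch of that parameterization pattern (the names and option shape are illustrative):

```typescript
import { describe, it } from 'vitest';

describe.each([
  { testName: 'as ported directly', adjustInit: false },
  { testName: 'with adjusted init', adjustInit: true },
])('blank value casting ($testName)', ({ adjustInit }) => {
  // The direct port is expected to fail; the adjusted variant to pass.
  const testFn = adjustInit ? it : it.fails;

  testFn('casts a blank value to 0', async () => {
    // ... shared test body, branching on `adjustInit` where the direct
    // port diverges from Web Forms' stricter init behavior
  });
});
```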

Quick note on terminology

There are many references, in this text as well as several comment blocks, to "bags" or "vats". These are terms that @lognaturel used to describe the filesystem-level structure of JavaRosa's test suites. It's an amusing way to describe a somewhat amorphous concept, so it stuck. Apologies if it confuses anyone!

Scenario and company

For the most part, all pertinent (i.e. those called by ported tests) aspects of the Scenario class have been ported to be roughly semantically equivalent to their JavaRosa counterparts. As mentioned above, some additive APIs are proposed to supplement testing logic which exercises apparent internals of JavaRosa.

A few methods were either:

  • not ported, as they're not exercised
  • ported with some minor semantic modification to minimize the number of similar concepts that must be ported along with them (generally around various different kinds of references to the same aspects of form/question/node state)
  • ported, but isolated to a particular "vat" (where certain awkward Scenario APIs were apparently specific to a single file)

Along with Scenario, a concept that was nominally ported is that of "events". Specifically, these are ported as the PositionalEvent base class and its subclasses. These are conceptually similar to their JavaRosa counterparts, but they internally diverge in a variety of ways so they can act as a common semantic intermediary between Scenario and the Web Forms xforms-engine client-facing API (which itself differs substantially, in many respects, from its JavaRosa equivalents).

This PositionalEvent concept is so named because it also serves as a sort of stand-in for the Web Forms engine's lack of a concept of "controller", especially as it deals with stateful progress through a form. This is largely considered a client responsibility; the scenario package is also considered a client of the Web Forms engine, and so it takes on the management of positional state.
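
A rough sketch of that intermediary's shape (the event and member names here are illustrative, not the exact ported implementation):

```typescript
// The scenario package, acting as an engine client, owns positional
// state: a flat sequence of events over the form, plus a cursor into it.
abstract class PositionalEvent<EventType extends string> {
  abstract readonly eventType: EventType;
}

class BeginningOfFormEvent extends PositionalEvent<'BEGINNING_OF_FORM'> {
  readonly eventType = 'BEGINNING_OF_FORM';
}

class QuestionEvent extends PositionalEvent<'QUESTION'> {
  readonly eventType = 'QUESTION';
}

class EndOfFormEvent extends PositionalEvent<'END_OF_FORM'> {
  readonly eventType = 'END_OF_FORM';
}

type AnyPositionalEvent = BeginningOfFormEvent | QuestionEvent | EndOfFormEvent;

// Scenario methods advance and query this cursor; the engine itself has
// no notion of a "current position" in the form.
class PositionalStateCursor {
  private position = 0;

  constructor(private readonly events: readonly AnyPositionalEvent[]) {}

  current(): AnyPositionalEvent {
    return this.events[this.position];
  }

  next(): AnyPositionalEvent {
    this.position += 1;
    return this.current();
  }
}
```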

For the purposes of positional state, I believe all of these are true in this PR, to the extent of current Web Forms feature support:

  1. Scenario will produce an equivalent sequence of PositionalEvent instances corresponding to what JavaRosa loosely refers to as "questions", and the nodes that those questions represent.
  2. The ordering of PositionalEvent instances and their corresponding "questions"/nodes are consistent with JavaRosa.
  3. Some adjustments will need to be made to omit specific events as we introduce specific features. (Top of mind is skipping a "repeat prompt" for repeats with jr:count or jr:noAddRemove.) These could theoretically have been addressed now, but not without reaching deeper into unstable/not-intentionally-public aspects of the Web Forms engine's API.

Where do we stand on completeness/correctness?

As noted in #25, we knew going into this that some ported tests will initially fail. They'll pass in CI because they're explicitly marked it.fails. We'll update those tests as we make them pass, by introducing missing features and fixing bugs.

Unfortunately, Vitest's reporting doesn't distinguish between it.fails and tests that are actually passing. I'd love to automate a report making this distinction, but for now what I've got is a quick and dirty local reporter that dumps the information to my terminal (and we can refine that if we think it'll be useful in the semi-near term). What I see now is:

  • Failing: 193 (was 192; see footnote 1)
  • Passing: 120 (was 121)
  • TODO: 28
  • Skipped: 29

("TODO" and "Skipped" distinguish between tests that are stubbed or partially ported, versus tests that are referenced but unlikely to be ported.)

Considerations for review

There's a lot here. Not quite as much as the line count suggests (lots of fixtures!). My hope is that the most substantive aspects of the PR are:

  • Custom assertion logic
  • Scenario & co.
  • Comments with **PORTING NOTES**

Except where noted in those porting notes, and notwithstanding obvious mistakes, the test logic is intended to be as close to identical to JavaRosa's as possible.

I expect that the Web Forms team will be interested in both the correctness and the long-term suitability of the approach to custom assertions.

I expect that those most familiar with JavaRosa will be particularly interested in other aspects of the portability approach, and in many details in the **PORTING NOTES**.


Test "vats" not ported

In general, determining which tests to port began with the question: "does the test create a Scenario instance, and make assertions about it?" Several test "vats" answer this question in the negative, and were essentially skipped without any further consideration. However, it feels appropriate to wrap up this porting process by giving each of those "vats" a quick review, to see if they might be worth revisiting in a future pass. I'll group these remaining "vats" into categories by the types of tests they contain.

Worth another look

We may benefit from testing some or all of the logic under test in these "vats",
though the porting story would be less clear.

Text (<label> and <hint>; jr:constraintMsg and jr:requiredMsg)

I'd previously noted that we don't appear to get as much coverage of text
behavior (i.e. <label> and <hint>) as I think we want. These "vats" may
offer some guidance in that direction.

  • core/form/api/test/TextFormTests.java
  • core/model/condition/RecalculateTest.java
  • core/model/utils/test/LocalizerTest.java
  • form/api/ConstraintTextTest.java

Data types, non-XML instances

These tests may be worth a look for insights into expectations around (a) data
types, and casting behavior associated with those data types; and/or (b)
external secondary instances with non-XML content types. To some extent, they
may have overlap with functionality already under test in the xpath package.

  • core/model/data/helper/AnswerDataUtilTest.java
  • core/model/data/test/DateDataTests.java
  • core/model/data/test/GeoPointDataTests.java
  • core/model/data/test/IntegerDataTests.java
  • core/model/data/test/MultipleItemsDataTests.java
  • core/model/data/test/SelectOneDataTests.java
  • core/model/data/test/StringDataTests.java
  • core/model/data/test/TimeDataLimitationsTest.java
  • core/model/data/test/TimeDataTests.java
  • core/model/instance/geojson/GeoJsonExternalInstanceTest.java
  • core/model/instance/geojson/GeojsonGeometryTest.java
  • core/model/instance/CsvExternalInstanceTest.java
  • core/util/GeoUtilsTest.java
  • Likely overlaps, at least partially, with coverage in xpath?

    • core/model/utils/test/DateUtilsFormatLocalizationTests.java
    • core/model/utils/test/DateUtilsFormatSanityCheckTests.java
    • core/model/utils/test/DateUtilsFormatTests.java
    • core/model/utils/test/DateUtilsGetXmlStringValueTest.java
    • core/model/utils/test/DateUtilsParseDateTimeTests.java
    • core/model/utils/test/DateUtilsParseTimeTests.java
    • core/model/utils/test/DateUtilsSCTOTests.java
    • xpath/expr/ToDateTest.java

Additional XPath behavior coverage

While XPath evaluation probably has the best coverage in the Web Forms project
(even after this huge JavaRosa test port effort), we may find there are cases
covered in these "vats" for which we don't have equivalents.

  • xform/parse/FisherYatesExamplesTest.java
  • xform/parse/FisherYatesTest.java
  • xform/parse/ParkMillerTest.java
  • xform/parse/RandomizeHelperTest.java
  • xpath/expr/RandomizeTest.java

This "vat" may contain some tests we should check for alignment/discrepancies;
it's also notable for highlighting a few cases that are not supported by
JavaRosa, but are supported by Web Forms (and in the case of support for
variables, apparently vice versa!):

  • xpath/test/XPathEvalTest.java

Worth adding to the parser corpus (or any other smoke tests we'll have if we
consider parsing tech other than tree-sitter):

  • xpath/test/XPathParseTest.java

XPath behavior: (almost) definitely redundant

  • xpath/expr/EncodingDecodeTest.java
  • xpath/expr/EncodingEncodeTest.java
  • xpath/expr/RelativeRefTest.java
  • xpath/expr/XPathBinaryOpExprTest.java
  • xpath/expr/XPathFilterExprTest.java
  • xpath/expr/XPathFuncExprTest.java
  • xpath/expr/XPathPathExprTest.java
  • xpath/expr/XPathUnaryOpExprTest.java

Form structure: attribute bindings

  • xform/parse/AttributesTestCase.java
  • xform/parse/BindAttributeProcessorTest.java

Scenario.getRef, XPath reference logic

Some of these tests may provide insights into certain expectations around how
XPath references are expanded, comparable or related to one another, etc. To the
extent they may be pertinent, they'd likely be most applicable in whatever
dependency analysis we perform.

  • core/model/condition/EvaluationContextExpandReferenceTest.java
  • core/model/instance/TreeReferenceAnchorHarnessTest.java
  • core/model/instance/TreeReferenceAnchorTest.java
  • core/model/instance/TreeReferenceContextualizeTest.java
  • core/model/instance/TreeReferenceEqualsTest.java
  • core/model/instance/TreeReferenceGenericizeTest.java
  • core/model/instance/TreeReferenceIsAncestorOfTest.java
  • core/model/instance/TreeReferenceParentTest.java

Submission

  • model/xform/CompactSerializingVisitorTest.java
  • xform/parse/SubmissionParserTest.java

Serialization/deserialization (serde; non-parsing/non-submission)

While it was noted in the porting process that serde in JavaRosa is mostly a
performance consideration, we may find there are some insights pertinent to
other future Web Forms efforts (e.g. offline).

  • core/model/test/FormIndexSerializationTest.java
  • core/util/test/ExternalizableTest.java
  • core/util/test/NumericEncodingTest.java
  • core/util/test/SampleExtz.java
  • xform/util/test/XFormAnswerDataSerializerTest.java

Custom assertions

JavaRosa has them. We now have some too. Some of these may have been briefly
consulted, but Vitest's mechanisms for custom assertions differ greatly from
JavaRosa's use of Hamcrest.

  • core/test/AnswerDataMatchers.java
  • core/test/FormDefMatchers.java
  • core/test/QuestionDefMatchers.java
  • core/test/SelectChoiceMatchers.java
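
For contrast, Vitest custom assertions are registered with `expect.extend`. A minimal sketch (the matcher name and the exact `ComparableAnswer` shape are illustrative):

```typescript
import { expect } from 'vitest';

interface ComparableAnswer {
  readonly stringValue: string;
}

expect.extend({
  toEqualAnswer(received: ComparableAnswer, expected: ComparableAnswer) {
    const pass = received.stringValue === expected.stringValue;

    return {
      pass,
      message: () =>
        `expected answer "${received.stringValue}"${pass ? ' not' : ''} to equal "${expected.stringValue}"`,
    };
  },
});
```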

No @Test

These were skipped because they contain no @Test annotations. Some of them are clearly helpers; it's possible others have logic invoked as tests by some other mechanism.

  • core/model/actions/CapturingRecordAudioActionListener.java
  • core/model/test/utils/FormDefConstructionUtils.java
  • core/model/test/DummyFormEntryPrompt.java
  • core/model/test/DummyReference.java
  • core/reference/ReferenceManagerTestUtils.java
  • core/test/FormParseInit.java
  • test/utils/ResourcePathHelper.java
  • test/utils/SystemHelper.java
  • xform/parse/FormParserHelper.java
  • xpath/test/IFunctionHandlerHelpers.java

Reference

Per @lognaturel,

Nobody likes reference. Don't look. 😄

However, there may be insights around jr: URLs.

  • core/reference/ReferenceManagerTest.java

Likely testing only internals

Controller-specific and/or likely more pertinent for clients (if at all)

  • form/api/FormEntryControllerTest.java
  • form/api/FormNavigationTestCase.java

Model-specific: likely closer to unit tests than we intend to port

  • core/model/instance/test/QuestionDataElementTests.java
  • core/model/instance/test/QuestionDataGroupTests.java
  • core/model/instance/test/TreeElementTests.java

Other sub-form structures/grab bag

  • core/io/test/BufferedInputStreamTests.java
  • core/model/DataTypeClassesTest.java
  • core/model/FormIndexTest.java
  • core/model/test/QuestionDefTest.java
  • xform/parse/ChildProcessingTest.java
  • xpath/test/StatefulFunc.java
  • core/model/CompareToNodeExpressionTest.java
  • xform/parse/TreeElementNameComparatorTest.java
  • xform/parse/XPathProcessorTest.java
  • xml/TreeElementParserTest.java
  • xpath/XPathConditionalTriggersTest.java
  • xpath/XPathNodesetShuffleTest.java

Footnotes

  1. While this PR has been open, its branch has been rebased to incorporate work from several other PRs that landed ahead of it; one new failure was introduced by Fix: client reactivity of translated select item labels #114. While this was technically a regression, I do not think it affects either the web-forms (Vue UI) client or the now-internal/experimental ui-solid client.

@seadowg (Member) commented May 14, 2024

@eyelidlessness I'm confused by this sentence:

Unfortunately, Vitest's reporting doesn't distinguish between it.fails and tests that are actually passing.

Do you mean "actually passing" or "actually failing" here? The latter would make more sense to me (it.fails marks the test as failing).

@eyelidlessness (Member, Author):

Do you mean "actually passing" or "actually failing" here? The latter would make more sense to me (it.fails marks the test as failing).

I mean "actually passing". So if you have...

it('does boolean logic', () => {
  // Passes, and is reported as passed.
  expect(true).toBe(true);
});

it.fails('does arithmetic', () => {
  // 0.1 + 0.2 === 0.30000000000000004, so this assertion throws. The
  // failure satisfies `it.fails`, so this test is also reported as passed.
  expect(0.1 + 0.2).toBe(0.3);
});

it.todo('compares strings');
it.skip('compares arrays');

Vitest's built-in reporters will produce:

  • 2 passed
  • 1 todo
  • 1 skipped

This is "correct" from a standpoint that the test run reflects the expected reality (and is the expected behavior in CI, where "passed" = does not cause an error exit code). But it doesn't provide enough fidelity to produce up-to-date reporting on our progress against the known-failing tests we want to eventually make pass.

@sadiqkhoja (Contributor) left a comment:

I haven't looked at this PR fully. Since I'm away for the next two days, I thought I should post my findings.

  • This is wonderful work. I really liked how form initialization is wrapped in the Scenario class 🎉
  • [updates relevant state] relevance updates in who-va.test.ts fail intermittently on my computer due to a timeout; maybe we should increase the timeout for smoke tests.
  • I wish the test & file organization matched JavaRosa's. For example, JavaRosa has a TriggerableDagTest.java file that contains 51 tests; here, those tests are spread across 6 different files. A 1-1 mapping would have made it easier to see what is ported and what is not.
  • I have added a few inline questions and comments. None of those are major.

I will continue reviewing this next week, but I don't want to be a blocker, so feel free to merge if it's approved by other reviewers.

Resolved review threads:
  • packages/scenario/src/jr/Scenario.ts
  • packages/scenario/src/client/init.ts (two threads)
  • packages/scenario/src/client/traversal.ts
  • packages/scenario/src/client/answerOf.ts
@eyelidlessness (Member, Author):

I haven't looked at this PR fully. Since I'm away for the next two days, I thought I should post my findings.

  • This is wonderful work. I really liked how form initialization is wrapped in the Scenario class 🎉

Aw, thanks! 🎉 indeed!

  • [updates relevant state] relevance updates in who-va.test.ts fail intermittently on my computer due to a timeout; maybe we should increase the timeout for smoke tests.

This is concerning. It's also concerning to me that the tests are slow in general, seemingly slower than interacting with the form directly.

Unfortunately... I'm not sure I can address the intermittent failure with much confidence. I can't produce a failure-by-timeout at all, so I couldn't say whether any particular approach to overriding the timeout would even work. If you find a change locally where the timeout flakiness is addressed, you're welcome to suggest a specific change that actually works for you (or even just push it to the branch if you feel comfortable with that).

  • I wish the test & file organization matched JavaRosa's. For example, JavaRosa has a TriggerableDagTest.java file that contains 51 tests; here, those tests are spread across 6 different files. A 1-1 mapping would have made it easier to see what is ported and what is not.

Yeah, so that was my instinct as well. @lognaturel expressed not being particularly thrilled with the organization of tests in JavaRosa, and suggested organizing the tests in a way that makes sense here, with the possibility of updating the organization in JavaRosa after the fact. In hindsight, I wish I'd remembered to mention this in the opening PR notes!

  • I have added a few inline questions and comments. None of those are major.

I will continue reviewing this next week, but I don't want to be a blocker, so feel free to merge if it's approved by other reviewers.

I think that since @seadowg and I are going to chat on Monday, it's pretty unlikely we're going to merge this week. Either way, I'd like to give it enough time for you to take a thorough look at any non-test code (i.e. the stuff that isn't mostly copypasta), as well as discuss whatever else you might want to talk about in the various **PORTING NOTES**.

Anyway, thanks for a first pass! Have a great rest of the week.

@eyelidlessness force-pushed the jr-scenario-ports/kitchen-sink branch from cc678b2 to 9a2c98a on May 16, 2024 21:27
@seadowg removed their request for review on May 20, 2024 15:08
Resolved review thread: packages/scenario/src/answer/ExpectedDisplayTextAnswer.ts
Comment on lines +36 to +39
* They only failed because the initial comparison was order-sensitive. In
* hindsight, unless there is some nuance in either the ODK XForms spec, or in
* expected Collect behavior, it seems to me that `<select>` values should
* always be set-like, and explicitly order-agnostic.
sadiqkhoja (Contributor):

👍 I agree


eyelidlessness (Member, Author):

There are two assertions (one test) that fail without the sorting discussed here.

  • Both are asserting answer values. (I hope this is clear also from the class name, but it seems worth prefacing the clarification with this.)

  • The assertions, as copied directly from JavaRosa, specify the expected answers in the order that they were entered by the user (as represented by the client, via Scenario.answer).

  • Those answers are entered in a different order than the options are defined in the form.

  • The answer values produced by xforms-engine are effectively the selected subset of the option values. They are produced in form definition order, regardless of their user-selection order.

It only occurs to me now that there might be an expectation that user selection order would be preserved as part of the submission value. If that's the case, I think the sorting should be removed and the test marked failing for now. If that's not the case, I would be happy to update this JSDoc to include any aspect of this clarification that seems valuable to either/both of you.
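
For clarity, a minimal sketch of the order-agnostic (set-like) comparison under discussion:

```typescript
import { expect } from 'vitest';

// Sort both sides before comparing, so user-selection order versus
// form-definition order cannot affect the result.
const sortedCopy = (values: readonly string[]): string[] => [...values].sort();

const expectedValues = ['cherry', 'apple']; // user-selection order (as in the JavaRosa assertion)
const actualValues = ['apple', 'cherry']; // form-definition order (as produced by xforms-engine)

expect(sortedCopy(actualValues)).toEqual(sortedCopy(expectedValues));
```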

@lognaturel (Member) commented May 28, 2024:

Ahhh, I got to thinking about the question order for some reason and then the next paragraph seemed to say what I had said. But yes, the class name is clear.

The biggest value of preserving user-selected order that I've seen is that it can have slightly less-bad behavior when using parallel repeats with indexes: enketo/enketo#23 (comment). This is an extremely useful but extremely error-prone form structure. I would much prefer recommending using relevance to only show the repeat instances that matter, but that hasn't been practical in Enketo because of enketo/enketo#1028. I've added a note at #87.

I don't think that's really compelling, though.

It has always been maintained in Collect, so we suspect that some people use it in their analysis. We have no proof of that, though. Maybe it's time to change it; alternatively, I think it's OK for Web Forms to deviate in this way (as Enketo does). We can explicitly document Collect's behavior as being wrong but maintained for backwards compatibility for now.

eyelidlessness (Member, Author):

I missed this on my pass yesterday, but caught it today. What I'm taking away here is that there's a Collect behavior, of unclear value, where part of that lack of clarity intersects with prior Enketo challenges. And there's theoretical value, with unknown practical real-world application, in just capturing order of selection as an aspect of preserving and understanding user intent/action.

My instinct is: keep the discrepancy as-is for now; keep the commentary intact as a bit of extra context; file a quick issue (or tack it on to an existing one if there's one that feels thematically related, perhaps #57) so we can revisit if/when there's an appetite for it?

I do think it's probably too strong to say that Collect is wrong. And I also think this is in the huge category of things that would probably be way quicker and cheaper to align on than to think much harder about.

Resolved review thread: packages/scenario/src/answer/UntypedAnswer.ts
@@ -0,0 +1,83 @@
<h:html xmlns:h="http://www.w3.org/1999/xhtml"
sadiqkhoja (Contributor):

💭 I know the extension of this file is .xhtml in JavaRosa; I was wondering, wouldn't it be nice to change its extension to .xml here?

eyelidlessness (Member, Author):

I thought this was super weird, and I agree we should change it. If it's okay, I'd like to hold off until we're close to wrapping up review. If there are other similar changes like this, it might be nice to have a single commit of "here's some stuff we changed that isn't otherwise very visible, but JavaRosa may want to adopt the change too".

@@ -0,0 +1,25 @@
<configuration>
sadiqkhoja (Contributor):

❓ Are we using this file anywhere?

@eyelidlessness (Member, Author) commented May 20, 2024:

🤷‍♂️

... I mean, no I don't think it is. In JavaRosa I see it referenced in ResourcePathHelper.java. The equivalent module here doesn't reference it.

The whole resources directory is basically copied as-is (apart from a few companion fixtures added after the fact to support alternate/supplemental tests). In hindsight:

  • I kind of wish I'd copied each fixture piecemeal, as each ported test required.
  • I'd still want to copy any unused files, albeit maybe into a different place. Maybe we can also do a final pass to isolate any "resources"/fixtures that aren't currently referenced in any of the ported tests.

eyelidlessness (Member, Author):

I didn't want to forget to come back to this. Here's a list of resources that I don't see referenced in any test, or from any other fixture:

  • bad-feature-not-feature.geojson
  • bad-features-not-array.geojson
  • bad-futures-collection.geojson
  • calculate-now.xml
  • constraint-message-error.xml
  • external-secondary-comma-complex.csv
  • external-secondary-csv-bom.csv
  • external-secondary-instance-large.xml
  • external-secondary-instance.xml
  • external-secondary-semicolon-complex.csv
  • external-select-xml-dummy-nodes.xml
  • external-select-xml.xml
  • feature-collection-extra-feature-toplevel.geojson
  • feature-collection-extra-toplevel.geojson
  • feature-collection-id-toplevel.geojson
  • feature-collection-id-twice.geojson
  • feature-collection-integer-id.geojson
  • feature-collection-no-properties.geojson
  • feature-collection-toplevel-order.geojson
  • feature-collection-with-null.geojson
  • feature-collection-with-unsupported-type.geojson
  • feature-collection.geojson
  • form_with_additional_attributes.xml
  • form_with_bind_attributes.xml
  • formindex-serialization.xml
  • hhplotdetails.csv
  • ImageSelectTester.xml
  • invalid-type.geojson
  • logback-test.xml.example
  • nigeria_wards_external_combined.xml
  • nigeria_wards_external.xml
  • not-object.geojson
  • populate-nodes-attributes-instance.xml
  • populate-nodes-attributes.xml
  • primary-instance-with-id.xml
  • randomize.xml
  • repeatGroupWithQuestionAndRegularGroupInside.xml
  • repeatGroupWithTwoQuestions.xml
  • simpleFormWithThreeQuestions.xml
  • sms_form.xml
  • twoNestedRegularGroups.xml
  • twoNestedRepeatGroups.xml
  • unused-secondary-instance.xml

Quite a few of these are in deeply nested subdirectories. I'm less inclined now to move them anywhere, just because it'll be fussy to do now, and might be fussy to track them back if they ever become pertinent in future work. But this list can be a point of reference of unused files for now.
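
For future reference, a list like this can be produced with a rough script of the following shape (the paths are assumptions about the package layout, and matching on base names is deliberately coarse):

```typescript
import { readdirSync, readFileSync } from 'node:fs';
import { basename, join } from 'node:path';

// Recursively collect file paths under a directory.
const walk = (dir: string): string[] =>
  readdirSync(dir, { withFileTypes: true }).flatMap((entry) =>
    entry.isDirectory() ? walk(join(dir, entry.name)) : [join(dir, entry.name)]
  );

const fixtures = walk('packages/scenario/resources');
const sources = [...walk('packages/scenario/src'), ...walk('packages/scenario/test'), ...fixtures];
const corpus = sources.map((path) => readFileSync(path, 'utf-8')).join('\n');

for (const fixture of fixtures) {
  if (!corpus.includes(basename(fixture))) {
    console.log(fixture); // candidate unused fixture
  }
}
```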

@sadiqkhoja (Contributor) left a comment:

I have looked at all the test setup stuff and I don't have any additional comments. This is amazing 🎉

I have glanced over some of the ported tests and notes; they are readable and well documented 👍. There is some commented-out code; I'm assuming we will uncomment it once the functionality is there, or remove it in future PRs if we don't want to support it.

readonly QUESTION: SelectNode | StringInputNode;
readonly GROUP: GroupNode;
readonly REPEAT: RepeatInstanceNode;
readonly REPEAT_JUNCTURE: never; // per @lognaturel: this can be ignored
lognaturel (Member):

To elaborate a little bit: as far as I know, there was at some point a way to specify a different UI for adding repeats that used this event type. I think it was for something like a chat interface.

eyelidlessness (Member, Author):

I might be misremembering, but I think you had also said that this corresponded to potentially inserting repeat instances at arbitrary indexes. I definitely remember having that discussion around something in JavaRosa, and I think it was this. At any rate, that conversation is largely the reason the engine-client API supports the possibility now.

@lognaturel (Member) commented May 28, 2024:

I think that's possible but my memory is fuzzy on that. Inserting repeat instances at arbitrary indexes is not currently exposed in Collect. It does get requested now and then and could certainly be valuable in a more table-like display of repeat instances. That would be closer to the original W3C XForms repeat concept and what Orbeon and Fore expose.

Supporting the possibility sounds good, even if it's not a priority. I think the big challenge would be representing these more complex scenarios in XLSForm.

@seadowg (Member) commented May 22, 2024

This is "correct" from a standpoint that the test run reflects the expected reality (and is the expected behavior in CI, where "passed" = does not cause an error exit code). But it doesn't provide enough fidelity to produce up-to-date reporting on our progress against the known-failing tests we want to eventually make pass.

I guess what confuses me about using it.fails is why not just use todo?

@lognaturel (Member):

I guess what confuses me about using it.fails is why not just use todo?

I think the biggest advantage of using it.fails is that if the test conditions start passing, then the test will fail because it won't satisfy the fails directive. 🤪 I don't know how to say that. But basically, as the engine changes, some of the tests that aren't being explicitly targeted will likely start to pass and it's useful to have a mechanism to detect that.

@eyelidlessness did I get that right?

@lognaturel (Member) left a comment:

I've been going through commit by commit and need to pause at ce8d575 for today. It'd be helpful to get a sense of whether the reactions/answers I've given are at the right level. I've been keeping track of most JR todos at getodk/javarosa#759 and should be able to do those relatively quickly.

Resolved review threads:
  • packages/scenario/test/choice-name.test.ts
  • packages/scenario/src/jr/select/SelectChoiceList.ts
  • packages/scenario/test/select.test.ts (three threads)
@lognaturel (Member) left a comment:

That's my comments! Hopefully there are some helpful nuggets in there. I imagine @eyelidlessness that you'll want to do another pass incorporating some of the feedback that can be applied immediately.

I filed a handful of issues for functionality areas I think we could lose track of and that I'd like to be able to prioritize easily. I'm thinking we can prioritize and address a wave of functionality and then file more issues as needed. That's all very subjective so happy to consider alternatives and I encourage everyone to file enhancement issues as things come up.

🚀 🚀 🚀

Resolved review thread: packages/scenario/test/extensibility.test.ts
/**
* **PORTING NOTES**
*
* - Direct port fails due to trailing semicolon in `/data/polygon` value,
lognaturel (Member):

Collect either currently or historically included a trailing semicolon (no handling of the fencepost problem), so it would be important to be able to handle that. IIRC the JR implementation trims semicolons.

eyelidlessness (Member, Author):

Makes sense. It's trivial to address/fix (and I have a fix in a stash somewhere).
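
The fix is roughly of this shape (a sketch, not the stashed change itself):

```typescript
// Tolerate a trailing semicolon (Collect's fencepost artifact) when
// splitting a geotrace/polygon value into its constituent points.
const splitGeoPoints = (value: string): string[] =>
  value
    .split(';')
    .map((point) => point.trim())
    .filter((point) => point.length > 0);

splitGeoPoints('38.25 21.73 0 0; 38.25 21.73 0 0;');
// → ['38.25 21.73 0 0', '38.25 21.73 0 0']
```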

Resolved review thread: packages/scenario/test/data-types/geo.test.ts
*
* If either of these patterns are in scope (or may be in the future), I'd
* like to spend more time getting familiar with this fixture (and/or others
* known to have similar aspects of form design).
lognaturel (Member):

This form predates XLSForm and was manually written XML. I don't see any good reason for the different model and body structure. At first I thought maybe the same questions were going to be shown in different sections which could kind of be a cool feature but that's not the case. Sometimes the data analysis organization needs are different from the data entry needs but it doesn't seem worth the extra complexity. I think we should ignore this for now and likely forever. Maybe we can build/find an IMCI XLSForm and rewrite the test around that.

* one has been noted for its curious behavior with the unicode input text
* producing "garbage" code points. I'd like to consider tests exercising
* non-text input, for instance some of the binary data associated with
* formats we presently support in itext media.
lognaturel (Member):

The current description is that the encoded value is expected to represent a UTF-8 encoded string. What would the use case be for another kind of encoded value and how could it be used?

eyelidlessness (Member, Author):

Yeah, I see that now. I think I was moving really quickly and my brain autofilled some blanks without really looking too closely at the spec (other than a quick, sort of anxious look, thinking I'd somehow missed it in the original xpath pass).

My first thought was that I'd use it for embedding arbitrary form attachments, sidestepping anything to do with jr: URLs, uploads e.g. in Central, etc. Roughly analogous to using data: URLs across a wide variety of web technologies. This was my frame of reference, because they're commonly base64 encoded (and combined with a MIME type).

* formats we presently support in itext media.
*
* - I would like to consider whether the spec should include a parameter to
* specify the content type of the encoded input.
lognaturel (Member):

IIRC we decided to layer that on if there's a user request for it.

eyelidlessness (Member, Author):

If I remember, I'll probably request it as soon as I want to build a form with small attachments.

If we pursue this, it’ll come after tests which are more pressing. If we decide not to, we can roll back this commit separately. Putting it in front because benchmarks were the first thing I noticed peeking back into JR for the first time in so long.
This change is being made now, as a response to repeatedly interrupting the porting effort as it became challenging to reason about which aspects of the `scenario` package come from JavaRosa (and should have stable semantics) versus aspects we’ve introduced to achieve those semantics (and should be more free to change as we move to support more of the JR test suite).

Following this, we’ll break out many aspects of `Scenario`. Ideally, that class will only contain logic directly from JR or direct calls into code which is clearly **not** derived from JR.
Extends the built-in `expect` assertion factory to act on values with a `ComparableAnswer` interface. These interfaces will also be used for other assertion types beyond equality checks.
Despite the odd naming, I expect to find it much less fussy porting the remaining tests. And now, I believe we’re actually well prepared to start doing that!
This is a non-ideal solution (and loses a little bit of error messaging with better specificity), but it achieves the important goal **for now**: when a (synchronous) error is thrown during form initialization (and not caught anywhere else), the initialization Promise itself will reject.

Otherwise, what happens is that the initialization Promise resolves, and then whatever error should have rejected it will be unhandled.

This change (along with presumed future changes around conveying errors more broadly) will make it easier to reason about form load errors **happening**. It also makes it possible to properly mark a ported JavaRosa test as failing. It will otherwise report an out of band unhandled error, which is distinct from the expected failure.
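
A sketch of the general pattern described in that commit (the function names are hypothetical stand-ins for the engine's internals):

```typescript
// Hypothetical stand-in for the engine's synchronous init logic:
declare const initializeFormSync: (formXML: string) => unknown;

// Because this function is `async`, a synchronous throw from
// `initializeFormSync` becomes a rejection of the returned Promise,
// rather than escaping as an out-of-band unhandled error.
const initializeForm = async (formXML: string): Promise<unknown> => {
  return initializeFormSync(formXML);
};

// A ported test marked `it.fails` can then observe the failure, e.g.:
// await expect(initializeForm(badFormXML)).rejects.toThrow();
```
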
eyelidlessness added a commit that referenced this pull request May 28, 2024

Automated testing is pending. Some manual validation has been done to verify that this likely works as intended. The intent is to make this first pass available for client iteration as quickly as possible.

While it’s possible to include unit tests for `TokenListParser`, it seems more likely we’ll want to add integration tests in the `scenario` package. Given there’s a ton of unaddressed feedback in #110, it seems most prudent to get this into draft first, and bring in integration tests once that lands.
@eyelidlessness force-pushed the jr-scenario-ports/kitchen-sink branch from 1ec7793 to ca5afef on May 29, 2024 21:35
@lognaturel (Member):

Looks great, thanks! I've left a few comments unresolved because I want to think a little bit more about them and/or resolve once we take action in JR.

@eyelidlessness merged commit 11f18d1 into main on May 30, 2024
86 checks passed
eyelidlessness added a commit that referenced this pull request May 30, 2024

Automated testing is pending. Some manual validation has been done to verify that this likely works as intended. The intent is to make this first pass available for client iteration as quickly as possible.

While it’s possible to include unit tests for `TokenListParser`, it seems more likely we’ll want to add integration tests in the `scenario` package. Given there’s a ton of unaddressed feedback in #110, it seems most prudent to get this into draft first, and bring in integration tests once that lands.
sadiqkhoja pushed a commit that referenced this pull request Jun 6, 2024
* Normalize repeats by *unwrapping* <group ref><repeat nodeset> pairs

Note that this introduces an in-memory `<label form-definition-source="repeat-group">` to distinguish outer/inner labels for such structures.

Example: for this form definition structure…

```xml
<group ref="/root/rep">
  <label>Repeat/group label</label>
  <repeat nodeset="/root/rep">
    <label>Repeat label</label>
  </repeat>
</group>
```

… this would be the normalized structure:

```xml
<repeat nodeset="/root/rep">
  <label form-definition-source="repeat-group">Repeat/group label</label>
  <label>Repeat label</label>
</repeat>
```

* Eliminate concept of “repeat group” in parsed definition

Note that `RepeatGroupDefinition` previously had two responsibilities:

1. To provide access to its `repeat`
2. To provide access to a label defined in the repeat’s containing group

The first is no longer necessary because the repeat is accessed directly.

The second is accommodated by defining the `RepeatElementDefinition`’s label from the special-case `<label form-definition-source="repeat-group">` as produced in the previous commit.

Also note that this still does not deal with the unhandled repeat labeling engine responsibility: <repeat><group><label/></group></repeat>. A TODO is added specifically so it can be traced back to this commit, as it will likely help shorten the path to reabsorbing the pertinent code/implementation details back into active brain memory.

* Consistent repeat terminology: “sequence” -> “range”

* Initial support for appearances (and body classes)

Automated testing is pending. Some manual validation has been done to verify that this likely works as intended. The intent is to make this first pass available for client iteration as quickly as possible.

While it’s possible to include unit tests for `TokenListParser`, it seems more likely we’ll want to add integration tests in the `scenario` package. Given there’s a ton of unaddressed feedback in #110, it seems most prudent to get this into draft first, and bring in integration tests once that lands.

* Add scenario integration tests for appearances and body classes

* Add changeset

---------

Co-authored-by: Hélène Martin <lognaturel@gmail.com>