Add messages implementation for python #165
base: main
Conversation
Force-pushed from 18b7d09 to 6a26520
This addresses #162.
At a glance this doesn't follow the pattern used by the other language implementations in quite a few ways. Please follow the directions from #162 around code generation.
I also don't understand the purpose of the samples directory.
You can use Pydantic if you can make it fit into the
Consider narrowing this down to a few representative examples. Currently it is hard to see the forest for the trees.
If you're going to copy lots of the CCK, it would be better to fetch the data using some form of call rather than copy-paste, as the CCK is currently being rapidly updated.
Force-pushed from ee63f2a to 358b36b
What purpose do these tests serve? They'll be a hassle to update if/when the schema changes.
Hi @elchupanebrej - Just checking in to see where you're up to with this. Is this something you're still working on?
Hi @luke-hill, sorry for the slow response; I haven't had time to work on the project. I'll try to create another merge request that conforms to the build process.
Force-pushed from 3256104 to effdd2b
The PR was updated with a Makefile. The model is stable, so the generated code is identical to the version generated on the first try. @mpkorstanje I kindly ask you to review the code and handle the release part; I didn't get into all the dependencies and relations between the release tools.
python/tests/test_model_load.py
Outdated
def compatibility_kit_repo(tmpdir):
    repo_path = Path(tmpdir) / "compatibility-kit"
    repo = Repo.clone_from(
        "https://github.com/cucumber/compatibility-kit.git",
Messages should not use the compatibility kit as this creates a circular dependency. Rather you'll want to write some targeted tests for serialization and deserialisation.
The Java implementation would be a good example, PHP less so.
@luke-hill the above comment also applies to you.
Thanks! The small suite of tests will be copied here.
The Java tests probably can't be ported directly, because this model is generated straight from the schemas; many of those tests would just be testing the generator itself (which has a much wider test suite of its own).
Indeed. Most of the tests in Java are for serialization rather than the shape of the messages.
For example, enums must be serialized by name, null fields must be omitted, optional types are elided, etc. This will depend a bit on what Python offers out of the box.
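Those properties can be pinned down with stdlib-only checks. A minimal sketch, assuming a hypothetical `TestStepResult`/`Status` pair standing in for generated message types (not the real model):

```python
import json
from dataclasses import asdict, dataclass
from enum import Enum
from typing import Optional

class Status(Enum):  # hypothetical enum; real values come from the schema
    PASSED = "PASSED"
    FAILED = "FAILED"

@dataclass
class TestStepResult:  # hypothetical message; real classes are generated
    status: Status
    message: Optional[str] = None

def to_json(model) -> str:
    # Write enum members by their value and drop None fields, as the protocol expects.
    def clean(value):
        if isinstance(value, Enum):
            return value.value
        if isinstance(value, dict):
            return {k: clean(v) for k, v in value.items() if v is not None}
        if isinstance(value, list):
            return [clean(v) for v in value]
        return value
    return json.dumps(clean(asdict(model)))

assert to_json(TestStepResult(status=Status.PASSED)) == '{"status": "PASSED"}'
```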
It's worth pointing out here @elchupanebrej that the way Rien is describing things is that we should/can use the CCK to test the generation, but we shouldn't have the generation depending on the CCK. Hope that makes sense / apologies if I'm repeating something already understood.
i.e. for ruby here - https://github.com/cucumber/messages/blob/main/ruby/cucumber-messages.gemspec we have no direct dependencies, but we use the CCK as a development dependency (i.e. to test the generation has worked).
Apologies if this doesn't make sense.
PyPI releases don't allow dependencies on GitHub repositories, so I can't add the resources directly. If you go through the commits you will see an example of tests that download CCK data directly. If you have better ideas on how to integrate it, please share your thoughts.
Surely this isn't a problem if it's listed as a dev dependency as it won't be in the pypi package? (I could be wrong!)
@luke-hill, this implementation isn't dependent on CCK for generation, it was dependent on CCK for test purposes only
Left a few quick remarks, will have to take a deeper look later.
@@ -0,0 +1,12 @@
{"meta":{"ci":{"buildNumber":"154666429","git":{"remote":"https://github.com/cucumber-ltd/shouty.rb.git","revision":"99684bcacf01d95875834d87903dcb072306c9ad"},"name":"GitHub Actions","url":"https://github.com/cucumber-ltd/shouty.rb/actions/runs/154666429"},"cpu":{"name":"x64"},"implementation":{"name":"fake-cucumber","version":"16.3.0"},"os":{"name":"darwin","version":"22.4.0"},"protocolVersion":"22.0.0","runtime":{"name":"node.js","version":"19.7.0"}}}
I think what you've done here is a good process. Just commenting for documentation.
I think as/when you have gotten this all working, it would be good to migrate this and others to the CCK proper. WDYT? (Maybe something for 2025?)
migrate this and others to the CCK proper
It must work with the CCK now in all possible cases. If it doesn't, let's write tests and fix it.
@elchupanebrej As a test I'm not happy with a "sample test". As said before, this creates a circular dependency between the code that generates the samples and messages.
Tests for messages can be limited to testing whether the code was generated and serialization works correctly. This does not test those things specifically while still testing many other, less relevant, things.
@luke-hill what exactly do you mean by "migrating this and others to the cck"?
@mpkorstanje sorry for bothering you; it seems I can't quite catch the point.
Samples of messages are stored in the CCK repository as examples. Every tool that uses messages has to handle them (at least serialize when an event is emitted, and deserialize when a message reaches a reporter). So I took the full suite of test data from the CCK repo and checked that the generated models parsed those messages and then serialized them back to exactly the same JSON. Could you please describe more precisely what kind of tests would be OK: would it be enough if a model of each kind of message were created, serialized, and deserialized back to exactly the same model?
The CCK uses the messages to generate the output of a canonical cucumber execution. For this it needs the messages. The value the CCK adds isn't that it generates a sample of each message, but rather the collection of messages as a whole. So it can, for example, express relationships between messages.
This dependency also means that it can't be used as test data in messages. That would result in a circular dependency.
Now for messages the exact testing strategy depends on the framework and language used.
For example, for JavaScript the object and its JSON representation are almost identical, so there is little to test at all. And because the code is generated, it doesn't seem necessary to test every message either.
So you can see we do a round-trip test of one moderately complex message and not much more.
https://github.com/cucumber/messages/blob/main/javascript/test/messagesTest.ts
For Java, serialization is more complicated. It does not have a concept of `undefined`, so we have tests to check for that.
Now I don't know enough about Python to tell you exactly what to test. I can't tell you about pitfalls I don't know about. But I imagine if a third-party code generator is used, a simple round trip should be enough.
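A Python round-trip test in that spirit might look like this stdlib-only sketch, where `Location` is a hypothetical stand-in for whichever generated class gets tested:

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:  # stand-in for a generated message class
    line: int
    column: Optional[int] = None

def round_trips(model: Location) -> bool:
    # Serialize with None fields omitted, then rebuild the model and compare.
    payload = json.dumps({k: v for k, v in model.__dict__.items() if v is not None})
    return Location(**json.loads(payload)) == model

assert round_trips(Location(line=3, column=7))
assert round_trips(Location(line=3))  # an omitted optional survives the trip
```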
Force-pushed from ae519d7 to 99e72d6
There seems to have been a misunderstanding.
So just to clarify.
Either:
- Source is generated by the Ruby codegen script
- Generated source is checked in
Or:
- Source is generated by the python build process.
- Generated source is not checked in.
- Make targets print a message that code gen is handled by Python.
Which option are you going for now?
import { Given } from '@cucumber/fake-cucumber'

Given('I have {int} cukes in my belly', function (cukeCount: number) {
  assert(cukeCount)
This file seems unused in any tests.
python/src/messages.py
Outdated
@@ -0,0 +1,3 @@
from _messages import *

ExpressionType = Type1
I don't understand what this file does. Can you explain?
We have two entities in the original model named Type (a design bug, from my perspective). This module is a simple adapter, so the end user imports Type and ExpressionType rather than Type and Type1. In the serialized model they are both named Type, as in the original model.
Would it be possible to fix that in the code generator instead?
And if it is not possible, an explanatory comment would be useful.
]
dependencies = [
    "importlib_resources",
    "pydantic>=2.0.3"
Is it really necessary to add pydantic as a dependency?
Many people are still on pydantic v1, and this would require pytest-bdd users to upgrade to pydantic v2, since pytest-bdd will soon depend on gherkin.
Aren't stdlib dataclasses enough?
importlib_resources is also, from what I can see, only used for tests, which I'm not sure is needed either.
@youtux Yes, this is technically possible, but such an implementation would depend on some library like https://github.com/lidatong/dataclasses-json (the best option for now), which is not as well supported as pydantic.
From another perspective, testing utilities are selected at the start of a project, so if the messages package is used somewhere, it would most probably depend on the new version of Pydantic anyway.
But there are many projects that have been using pytest-bdd for years, and this would be an issue for them.
We can do without pydantic in a very simple way. We can use dataclasses, then when we need to serialise to JSON we call `asdict(model)`. If we need custom encoders (e.g. for datetimes) we can implement a simple JSONEncoder and pass it to `json.dumps(asdict(model), cls=…)`.
Or just implement a custom serialiser for each object.
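For illustration, a sketch of that suggestion. The `Timestamp` class and its field are hypothetical; note that `json.dumps` takes the encoder class via its `cls` parameter:

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from enum import Enum

class MessageEncoder(json.JSONEncoder):
    # Handles the types asdict() leaves untouched, e.g. enums and datetimes.
    def default(self, o):
        if isinstance(o, Enum):
            return o.value
        if isinstance(o, datetime):
            return o.isoformat()
        return super().default(o)

@dataclass
class Timestamp:  # hypothetical model, for illustration only
    recorded_at: datetime

model = Timestamp(recorded_at=datetime(2024, 1, 1, tzinfo=timezone.utc))
payload = json.dumps(asdict(model), cls=MessageEncoder)
assert payload == '{"recorded_at": "2024-01-01T00:00:00+00:00"}'
```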
In that case we have to implement a dict_factory for dataclasses.asdict that takes Enums into account, or there will be an issue with serialization to JSON. Deserialisation back into the dataclass will also be an issue (Enums again).
Pydantic covers both of these issues.
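The dict_factory variant mentioned here could look roughly like this; `Diagnostic` and `Severity` are made-up examples, not part of the real model:

```python
import json
from dataclasses import asdict, dataclass
from enum import Enum

class Severity(Enum):  # made-up enum for illustration
    WARN = "WARN"
    ERROR = "ERROR"

@dataclass
class Diagnostic:  # made-up model for illustration
    text: str
    severity: Severity

def enum_safe(pairs):
    # dict_factory that unwraps Enum members so json.dumps can serialize the result.
    return {k: (v.value if isinstance(v, Enum) else v) for k, v in pairs}

d = asdict(Diagnostic(text="boom", severity=Severity.ERROR), dict_factory=enum_safe)
assert json.dumps(d) == '{"text": "boom", "severity": "ERROR"}'
```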
Minimizing the number of dependencies avoids a potential conflict with the system under test. And it seems to me that any effort saved by using Pydantic in Cucumber will be meaningless if Cucumber can't be used because of it.
But I'm not in the Python ecosystem so I'd like to see a consensus on this problem from those who are.
I really think we should not bring in a big dependency like pydantic here, especially since it made a big API change in v2, and I can see it making it difficult for users to adopt this library if it conflicts with their pydantic v1 requirement.
What's the use of pydantic here? I don't see it being used for serialisation / deserialisation here.
What's the API of this library going to look like?
The messages library is used primarily for serialization/deserialization, for example:
- A test runner must produce messages in the ndjson format, so it uses the model from the messages lib to represent outcomes; the messages lib serializes them and (indirectly) validates against the JSON schema.
- A test reporter consumes the ndjson stream of messages and uses the messages library to deserialize and validate the inputs.
So the messages lib is a bridge between test runner and test reporter (potentially from different language ecosystems).
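That bridge can be sketched with a two-field stand-in for the envelope; the real Envelope in the schema has one optional field per message type, so this is a deliberate simplification:

```python
import json
from dataclasses import asdict, dataclass
from typing import Iterator, List, Optional

@dataclass
class Envelope:  # hypothetical two-field stand-in for the generated Envelope
    testRunStarted: Optional[dict] = None
    testRunFinished: Optional[dict] = None

def write_ndjson(envelopes: List[Envelope]) -> str:
    # The runner side: one JSON object per line, None fields omitted.
    return "\n".join(
        json.dumps({k: v for k, v in asdict(e).items() if v is not None})
        for e in envelopes
    )

def read_ndjson(stream: str) -> Iterator[Envelope]:
    # The reporter side: deserialize each line back into the model.
    for line in stream.splitlines():
        yield Envelope(**json.loads(line))

messages = [Envelope(testRunStarted={"timestamp": {"seconds": 0, "nanos": 0}}),
            Envelope(testRunFinished={"success": True})]
assert list(read_ndjson(write_ndjson(messages))) == messages
```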
OK, but what is the API of this lib supposed to look like?
from cucumber_messages import ???
???
@youtux, please check the python/tests/test_model_load.py test in this PR (I'll rework the tests later).
For example, reporting in pytest-bdd-ng uses this particular model:
https://github.com/elchupanebrej/pytest-bdd-ng/blob/default/src/pytest_bdd/message_plugin.py
Thanks for the great review; I'll return later this week and update everything accordingly.
Force-pushed from c127cc1 to f6ecf72, 5d79e6f to 11a8b21, 11a8b21 to 5052824, and 5052824 to 614d55c
Welcome to Codecov! Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. Thanks for integrating Codecov - we've got you covered!
I'm sorry to see that the misunderstanding I highlighted in the last review persists. While some aspects have been addressed, they have not been addressed in full. Let me know if we need to schedule a call and talk this through.
Furthermore, the current test set contains many incidental details while also not testing for anything specific. This will make the tests break whenever small changes to the schema are made. Given that this repository currently hosts 9 languages, keeping tests up to date becomes tedious quickly.
Finally I'd like to see a consensus on the use of pydantic. And it might be useful to do that first as it will significantly impact the shape of this pull request.
echo "Skipping code generation - code is generated by Python"

generate-real: require install-deps
	datamodel-codegen \
Would it be possible to move this into Python's build process?
--target-python-version=3.8

require: ## Check requirements for the code generation (python is required)
@python --version >/dev/null 2>&1 || (echo "ERROR: python is required."; exit 1) |
Python should not be required. This can be a stub too.
from pydantic import BaseModel, ConfigDict, Field


class ContentEncoding(Enum):
If the code is generated by Python, then I would not expect this file to be checked in.
- python-version: "3.10"
  os: windows-latest
- python-version: "3.11"
  os: windows-latest
There has to be a more efficient way to run all versions on ubuntu and exclude osx and windows.
I would use an include matrix personally @elchupanebrej
build:

  runs-on: ${{ matrix.os }}
  timeout-minutes: 20
This looks unnecessary.
with (resource_path / "message_samples/minimal/minimal.feature.ndjson").open(mode="r") as ast_file:
    model_data = [*map(json.loads, ast_file)]
    oracle_models = [
This oracle is overly detailed and at the same time does not specify what property is being tested.
I reckon the important things to check are
- Are null values omitted from the output
- Enums are written by name
- Something simple can round trip.
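Those three checks could translate into a single stdlib-only test along these lines. `Result`, `Status`, and the `dump`/`load` helpers are hypothetical; the real implementation may differ:

```python
import json
from dataclasses import asdict, dataclass
from enum import Enum
from typing import Optional

class Status(Enum):  # hypothetical enum
    PASSED = "PASSED"

@dataclass
class Result:  # hypothetical message
    status: Status
    message: Optional[str] = None

def dump(model) -> str:
    return json.dumps({k: (v.value if isinstance(v, Enum) else v)
                       for k, v in asdict(model).items() if v is not None})

def load(data: str) -> Result:
    raw = json.loads(data)
    return Result(status=Status(raw["status"]), message=raw.get("message"))

def test_serialization_contract():
    model = Result(status=Status.PASSED)
    text = dump(model)
    assert "message" not in text   # null values are omitted from the output
    assert '"PASSED"' in text      # enums are written by name
    assert load(text) == model     # something simple can round trip

test_serialization_contract()
```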
Hello @elchupanebrej! I wondered if you had had a chance to get back to this?
To me it's still not quite clear how this implementation (but also the others already present in the repo) is supposed to be used.
If we are doing these implementations just to define the classes, I think we should use an automated tool that converts from JSON Schema to Python models. A good one seems to be https://github.com/koxudaxi/datamodel-code-generator; I managed to create dataclasses from the jsonspec like this:
Generated `model.py` using dataclass generator:
# generated by datamodel-codegen:
# filename: GherkinDocument.json
# timestamp: 2024-10-25T21:41:54+00:00
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional
@dataclass
class Location:
line: int
column: Optional[int] = None
@dataclass
class Comment:
location: Location
text: str
@dataclass
class DocString:
location: Location
content: str
delimiter: str
mediaType: Optional[str] = None
class KeywordType(Enum):
Unknown = 'Unknown'
Context = 'Context'
Action = 'Action'
Outcome = 'Outcome'
Conjunction = 'Conjunction'
@dataclass
class TableCell:
location: Location
value: str
@dataclass
class TableRow:
location: Location
cells: List[TableCell]
id: str
@dataclass
class Tag:
location: Location
name: str
id: str
@dataclass
class DataTable:
location: Location
rows: List[TableRow]
@dataclass
class Examples:
location: Location
tags: List[Tag]
keyword: str
name: str
description: str
tableBody: List[TableRow]
id: str
tableHeader: Optional[TableRow] = None
@dataclass
class Step:
location: Location
keyword: str
text: str
id: str
keywordType: Optional[KeywordType] = None
docString: Optional[DocString] = None
dataTable: Optional[DataTable] = None
@dataclass
class Background:
location: Location
keyword: str
name: str
description: str
steps: List[Step]
id: str
@dataclass
class Scenario:
location: Location
tags: List[Tag]
keyword: str
name: str
description: str
steps: List[Step]
examples: List[Examples]
id: str
@dataclass
class RuleChild:
background: Optional[Background] = None
scenario: Optional[Scenario] = None
@dataclass
class Rule:
location: Location
tags: List[Tag]
keyword: str
name: str
description: str
children: List[RuleChild]
id: str
@dataclass
class FeatureChild:
rule: Optional[Rule] = None
background: Optional[Background] = None
scenario: Optional[Scenario] = None
@dataclass
class Feature:
location: Location
tags: List[Tag]
language: str
keyword: str
name: str
description: str
children: List[FeatureChild]
@dataclass
class Model:
comments: List[Comment]
uri: Optional[str] = None
feature: Optional[Feature] = None

Generated `model.py` using pydantic v2 generator:
# generated by datamodel-codegen:
# filename: GherkinDocument.json
# timestamp: 2024-10-25T21:49:23+00:00
from __future__ import annotations
from enum import Enum
from typing import List, Optional
from pydantic import BaseModel, ConfigDict, Field
class Location(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
line: int
column: Optional[int] = None
class Comment(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(..., description='The location of the comment')
text: str = Field(..., description='The text of the comment')
class DocString(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location
mediaType: Optional[str] = None
content: str
delimiter: str
class KeywordType(Enum):
Unknown = 'Unknown'
Context = 'Context'
Action = 'Action'
Outcome = 'Outcome'
Conjunction = 'Conjunction'
class TableCell(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(..., description='The location of the cell')
value: str = Field(..., description='The value of the cell')
class TableRow(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(
..., description='The location of the first cell in the row'
)
cells: List[TableCell] = Field(..., description='Cells in the row')
id: str
class Tag(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(..., description='Location of the tag')
name: str = Field(
..., description='The name of the tag (including the leading `@`)'
)
id: str = Field(
..., description='Unique ID to be able to reference the Tag from PickleTag'
)
class DataTable(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location
rows: List[TableRow]
class Examples(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(
..., description='The location of the `Examples` keyword'
)
tags: List[Tag]
keyword: str
name: str
description: str
tableHeader: Optional[TableRow] = None
tableBody: List[TableRow]
id: str
class Step(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(..., description="The location of the steps' `keyword`")
keyword: str = Field(
..., description='The actual keyword as it appeared in the source.'
)
keywordType: Optional[KeywordType] = Field(
None,
description="The test phase signalled by the keyword: Context definition (Given), Action performance (When), Outcome assertion (Then). Other keywords signal Continuation (And and But) from a prior keyword. Please note that all translations which a dialect maps to multiple keywords (`*` is in this category for all dialects), map to 'Unknown'.",
)
text: str
docString: Optional[DocString] = None
dataTable: Optional[DataTable] = None
id: str = Field(
..., description='Unique ID to be able to reference the Step from PickleStep'
)
class Background(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(
..., description='The location of the `Background` keyword'
)
keyword: str
name: str
description: str
steps: List[Step]
id: str
class Scenario(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(
..., description='The location of the `Scenario` keyword'
)
tags: List[Tag]
keyword: str
name: str
description: str
steps: List[Step]
examples: List[Examples]
id: str
class RuleChild(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
background: Optional[Background] = None
scenario: Optional[Scenario] = None
class Rule(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(..., description='The location of the `Rule` keyword')
tags: List[Tag] = Field(
..., description='All the tags placed above the `Rule` keyword'
)
keyword: str
name: str
description: str
children: List[RuleChild]
id: str
class FeatureChild(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
rule: Optional[Rule] = None
background: Optional[Background] = None
scenario: Optional[Scenario] = None
class Feature(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
location: Location = Field(..., description='The location of the `Feature` keyword')
tags: List[Tag] = Field(
..., description='All the tags placed above the `Feature` keyword'
)
language: str = Field(
...,
description='The [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) language code of the Gherkin document',
)
keyword: str = Field(
...,
description='The text of the `Feature` keyword (in the language specified by `language`)',
)
name: str = Field(
..., description='The name of the feature (the text following the `keyword`)'
)
description: str = Field(
...,
description='The line(s) underneath the line with the `keyword` that are used as description',
)
children: List[FeatureChild] = Field(..., description='Zero or more children')
class Model(BaseModel):
model_config = ConfigDict(
extra='forbid',
)
uri: Optional[str] = Field(
None,
description='*\n The [URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier)\n of the source, typically a file path relative to the root directory',
)
feature: Optional[Feature] = None
comments: List[Comment] = Field(
..., description='All the comments in the Gherkin document'
)

It also supports pydantic v1/v2, and other libs, if one needs that.
If anything, we could maintain the generated files in this repo and make sure that every time the JSON schema is updated these files are regenerated, so that downstream users can use whatever flavour of models they want.
@youtux, this exact approach and tool are used here! Please check the Makefile.
A schema definition isn't any good without data objects to go with it, and their use is mostly to provide a type-safe representation of the message. And while in theory each library could generate DTOs based on the schema, that isn't practical once libraries start calling each other. So having a shared implementation of the data objects is essential.
Got it. Then I'd propose for the python impl to provide at least the dataclasses version, since it's the most compatible one, and possibly also the pydantic version under a different module, so that downstream users can choose what to use.
Is there anything I can help with? I'd love to try and get messages over the line to support our work with gherkin.
I'm going to step away from this now, I think, as there are enough voices commenting on it. I don't quite understand why we've not gone down the route that the other 8 languages have taken with the codegen tool, given I spent a while refactoring it so now it's a tiny class you need to write.
What's changed?
Add python implementation
What kind of change is this?
Checklist:
This text was originally generated from a template, then edited by hand.