fix: dry run queries with DB API cursor #128
Conversation
I'm not as familiar with the DB-API abstraction, but this seems like an odd pattern. I'd expect something like an empty row iterator and some form of side channel for getting metadata, but scanning https://www.python.org/dev/peps/pep-0249/ it doesn't seem like there's such an abstraction. Nor does it seem like there's an explicit prepare(), which is where you typically do things like validate queries. My worry about building a custom schema and row is brittleness: if we add more metadata over time, this change means we keep widening the schema and keeping the single-row semantics. Is this how other engines have hoisted prepare semantics into the DB-API abstraction?
This issue has actually bubbled its way here from the SQLAlchemy connector: googleapis/python-bigquery-sqlalchemy#56. I don't know how we'd want to start surfacing extra stats from the job resource in the DB-API, but in this user's case, simply executing a dry run query successfully is sufficient. They're updating their connection string to check the query for syntax errors before running it for real.
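For reference, outside the DB-API the dry run workflow in the core client looks roughly like the sketch below (a standard usage pattern, not code from this PR; it assumes default application credentials and a default project are configured):

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes default credentials and project
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

# A dry run validates the query and estimates cost without executing it.
query_job = client.query(
    "SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` LIMIT 10",
    job_config=job_config,
)

# No result rows are produced; only job metadata such as the bytes that
# would be scanned is available.
print("This query would process {} bytes.".format(query_job.total_bytes_processed))
```

The user's goal is just the validation step: an invalid query raises, a valid one is never billed.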
Thanks for this!
Fixes #118.
This PR fixes the error raised by the DB API cursor when a dry run query is executed. A sketch of the scenario follows below.
As a side effect, it also fixes handling of the client's default query job config in the DB API (the default config, if any, has been ignored to date).
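A minimal sketch of the scenario this addresses, not taken from the PR itself: a client constructed with a dry-run default job config is handed to the DB API. It assumes default application credentials and that the `default_query_job_config` argument of `Client` is used to request the dry run.

```python
from google.cloud import bigquery
from google.cloud.bigquery import dbapi

# The dry-run config is set as the client's default; with this fix it is
# no longer ignored by the DB API.
default_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
client = bigquery.Client(default_query_job_config=default_config)

connection = dbapi.connect(client)
cursor = connection.cursor()

# Previously this raised because a dry run job produces no result rows;
# with the fix the call completes, so an invalid query surfaces as an
# exception while a valid one is only validated, never billed.
cursor.execute("SELECT 1")
connection.close()
```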
PR checklist: