Returning Invalid syntax for type json on arrays #442

Closed
jrf0110 opened this issue Sep 16, 2013 · 29 comments

@jrf0110
Contributor

jrf0110 commented Sep 16, 2013

When your column is of type json and you try to insert a JSON array, you get the following error:

{ [error: invalid input syntax for type json]
  length: 154,
  name: 'error',
  severity: 'ERROR',
  code: '22P02',
  detail: 'Expected string or "}", but found "1".',
  hint: undefined,
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: 'JSON data, line 1: {1...',
  file: 'json.c',
  line: '665',
  routine: 'report_parse_error' }

For example, this works in PSQL:

insert into my_table ( data ) values ( '[1,2,3]' );

But this returns an error in node-pg:

client.query('insert into my_table ( data ) values ( $1 )', [ [1,2,3] ], function( error ){
  /* ERROR */
});
@booo
Contributor

booo commented Sep 16, 2013

We convert JavaScript arrays into Postgres arrays. If you stringify your input array, it should work:

client.query('INSERT INTO my_table (data) VALUES ($1)', [ JSON.stringify([1,2,3])], handler);

We don't know the column type in advance, so we can't do an automatic conversion of the array.

@jrf0110
Contributor Author

jrf0110 commented Sep 16, 2013

@booo Yeah, I was thinking that may be the case. Well, in that case, non-issue!

@badave
Contributor

badave commented Oct 31, 2013

@booo @brianc This behavior is a bug. Whether data is converted from [{}, {}] to {{}, {}} should not be determined by pg.

The only solution is to use JSON.stringify before running the insert query? What if you're running an ORM? What if you are unaware of this limitation?

Just because pg doesn't know the column type in advance does not mean it should convert to the syntax of the "array" column type. It might be helpful, but it's also a pretty big assumption, and it makes it really hard to use an array in a json field, despite that being supported, normal behavior in the database.

The better behavior would be to leave data-type conversion to the client. Both possibilities would then be easy. As it is, it's really hard to get the expected behavior using what's available.

@spollack
Contributor

see also: #374

@jrf0110
Contributor Author

jrf0110 commented Oct 31, 2013

@badave Doesn't your proposed behavior require the same amount of work to insert JSON? You're gonna be stringifying the JSON anyway.

@badave
Contributor

badave commented Oct 31, 2013

The proposed behavior is passing {} or [] into a column without the library being opinionated about converting it into a Postgres array type. The larger problem isn't the JSON.stringify; it's the behavior of pg. In fact, the point is that I shouldn't have to JSON.stringify at all, because the library already handles JSON objects without stringifying.

This shouldn't be a mandatory conversion. It's that simple. The database error is "invalid input syntax for type json", which isn't enough for a user to conclude that it's caused by the library transforming the data. It's transforming valid JSON into invalid JSON.

However, if I were using an array column type and inserting an array into Postgres in the incorrect format, I would understand an "invalid input syntax for type array" error, because that column doesn't take JSON format; it requires array format. At that point I would look for a solution that converted my data into the correct form to insert into pg.

And I could understand it better if the array column type was any good in postgres, but it's complete crap. I'd prefer to use a json array because it is more recognizable.
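
As an illustration of that last point, here is a minimal sketch (reusing the my_table example from the top of this issue) of the two shapes the server can receive; the exact array-literal text is an assumption based on the error quoted above, which shows the server receiving text that begins with "{1":

// What the json column needs: the JSON text itself.
client.query('insert into my_table ( data ) values ( $1 )',
  [ JSON.stringify([1,2,3]) ], handler);

// What it receives when the raw array is passed: node-pg serializes it as a
// Postgres array literal (roughly '{1,2,3}'), which the json type rejects
// with "invalid input syntax for type json" (22P02).
client.query('insert into my_table ( data ) values ( $1 )',
  [ [1,2,3] ], handler);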

@jrf0110
Contributor Author

jrf0110 commented Oct 31, 2013

Yeah, I see what you're saying. It's a bit funky that node-pg stringifies Objects but not Arrays. I think making it more consistent, or rather having no incoming magic (I like the magic coming out, though!), would be for the best. It seems a little arbitrary, especially now that JSON has been added to the data-type mix, that JS arrays are assumed to be PG Arrays, when they could be type json or even hstore.

My preference would be to make no assumptions and leave it up to the library consumer. I'm not sure if that means node-pg should JSON.stringify Arrays and Objects by default, as that would be trading one magic for another. But it certainly does seem like a sane default.

@eriknyk

eriknyk commented May 12, 2016

Hi, this issue should be reopened, since in the Postgres shell it is possible to execute:

INSERT INTO public.my_table ("userId", "personId", books) VALUES (1, 1, '[1,2,3]');

Best Regards.
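
For reference, the equivalent insert also works from node-pg once the array is stringified; a minimal sketch of the same statement using the workaround described above:

client.query(
  'INSERT INTO public.my_table ("userId", "personId", books) VALUES ($1, $2, $3)',
  [ 1, 1, JSON.stringify([1,2,3]) ],
  handler
);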

@jamesdixon

I'm still running into this error even after stringifying the input array. Here's my JSON array:

[
   { serviceId: 2, petId: 5, checked: true },
   { serviceId: 3, petId: null, checked: false }
]

After stringifying:

"[{"serviceId":2,"petId":5,"checked":true},{"serviceId":3,"petId":null,"checked":false}]"

I insert and receive the following error:

{"msec":54.03032702207565,"error":"update \"appointment\" set \"ended_at\" = $1, \"report_card\" = $2, \"staff_notes\" = $3, \"status\" = $4, \"updated_at\" = $5 where \"id\" = $6 - invalid input syntax for type json","data":{"message":"update \"appointment\" set \"ended_at\" = $1, \"report_card\" = $2, \"staff_notes\" = $3, \"status\" = $4, \"updated_at\" = $5 where \"id\" = $6 - invalid input syntax for type json","severity":"ERROR","code":"22P02","condition":"invalid_text_representation","detail":"Expected \":\", but found \",\".","where":"JSON data, line 1: {\"{\\\"serviceId\\\":2,\\\"petId\\\":5,\\\"checked\\\":true}\",...","file":"json.c","line":1198,"routine":"report_parse_error","name":"PgError","isBoom":true,"isServer":true,"data":null,"output":{"statusCode":500,"payload":{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"},"headers":{}}}}

It appears the input is being stringified again. Am I missing something here?

Appreciate any help!

@brianc
Owner

brianc commented Jul 25, 2016

Here's an example of using a json data type & doing a round-trip query from node -> postgres -> and back to node with the json type preserved:

https://github.com/brianc/node-postgres/blob/master/test/integration/client/json-type-parsing-tests.js

Hope that helps!

@jamesdixon

@brianc thanks for this. However, isn't this just testing on a single object and not an array of objects?

@jrf0110
Contributor Author

jrf0110 commented Jul 25, 2016

@jamesdixon using pg for arrays of objects is working fine for me:

require('pg').connect( (error, client, done) => {
  if (error) throw error

  client.query('select $1::json as arr', [JSON.stringify([{foo: 'bar'}])], (error, result) => {
    done()
    if (error) throw error
    console.log(result.rows[0].arr) // => [{ foo: 'bar' }]
    process.exit(0)
  })
})

@jrf0110
Contributor Author

jrf0110 commented Jul 25, 2016

If you stringify first, you'll need to cast the value to json or jsonb, depending on which you're using.

@jamesdixon

@jrf0110 thanks for this. I've confirmed this does work on insert as well. It appears something is double-stringifying my input.

Appreciate the help, fellas!

@PythonDevOp

@charmander so are issues 442 and 1143 overlapping issues? I see they keep going back and forth between open and closed. @joskuijpers seemed to find my issue; I was wondering if any progress had been made regarding a solution.

@charmander
Collaborator

@PythonDevOp Yes, they’re the same issue. I’m not sure how @brianc wants to solve it, if at all, but you can use JSON.stringify manually in the meantime as seen above.

@brianc
Owner

brianc commented Jun 16, 2017

If you need to pass in an array of objects as a single JSON value, then do what @jrf0110 has done and call JSON.stringify on the parameter yourself before adding it to your array of parameters. node-postgres leaves any parameter that is already a string untouched, passing it directly onto the wire to the backend. You might need to cast in your query.
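
A minimal sketch of that advice, borrowing the table and column names from the error report above (the rest of the schema, and the id value, are hypothetical): stringify once, and cast the parameter so the backend treats it as jsonb.

const { Client } = require('pg')
const client = new Client() // connection settings come from the usual PG* environment variables

const reportCard = [
  { serviceId: 2, petId: 5, checked: true },
  { serviceId: 3, petId: null, checked: false }
]

client.connect(err => {
  if (err) throw err
  client.query(
    'UPDATE appointment SET report_card = $1::jsonb WHERE id = $2',
    [ JSON.stringify(reportCard), 42 ], // stringify once; the string is passed through as-is
    err => {
      client.end()
      if (err) throw err
    }
  )
})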

@brianc brianc closed this as completed Jun 16, 2017
@tamlyn

tamlyn commented Jun 29, 2017

Would it make sense for node-postgres to intercept this error and augment it with something like "Are you inserting an array into a JSON column? See <link to docs>"? Helpful error messages make for a nice developer experience.
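
Until something like that lands in the library, here is a rough consumer-side sketch of the same idea (not part of node-postgres; it assumes pg's promise-returning query() and the standard code property on database errors):

async function queryWithHint (client, text, values) {
  try {
    return await client.query(text, values)
  } catch (err) {
    // 22P02 = invalid_text_representation; add a pointer to the likely cause.
    if (err.code === '22P02' && /type json/.test(err.message)) {
      err.hint = 'Are you inserting a JavaScript array into a json/jsonb column? JSON.stringify() it first.'
    }
    throw err
  }
}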

@sparebytes

A way to configure the default behavior would be desirable.

@felixfbecker

Does this apply to pg-native too?

@charmander
Collaborator

Correct. pg passes JSON to libpq, not JavaScript objects.
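
So the same stringify-first rule applies there; a minimal, untested sketch using pg-native's callback API:

const Client = require('pg-native')
const client = new Client()

client.connect(err => {
  if (err) throw err
  // Stringify first so libpq receives JSON text rather than a JavaScript object.
  client.query('INSERT INTO my_table (data) VALUES ($1::jsonb)',
    [ JSON.stringify([1,2,3]) ],
    err => {
      client.end()
      if (err) throw err
    })
})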

markddrake added a commit to markddrake/YADAMU---Yet-Another-DAta-Migration-Utility that referenced this issue Aug 1, 2020
… conversion between object and text, binary and hexBinary

Yadamu: Enable TABLES parameter to be used to limit operations to a specific subset of the tables in the schema
Yadamu: Added support for TABLES, WAREHOUSE and ACCOUNT command line parameters
Yadamu: Refactor DEFAULT handling and PARAMETERS as GETTERS
DBReader: pass cause to forcedEnd()
DBWriter: Use await when calling dbi.setMetadata()
YadamuLibrary: Add Boolean Conversion utilities
YadmauLogger:  Disable fileWriter column count check
YadamuRejectManager: Disable fileWriter column count check

YadamuDBI: Standardized naming conventions for SQL Statements used by driver
SQL_CONFIGURE_CONNECTION
SQL_SYSTEM_INFORMATION_SCHEMA
SQL_GET_DLL_STATEMENTS
SQL_SCHEMA_INFORMATION
SQL_BEGIN_TRANSACTION
SQL_COMMIT_TRANSACTION
SQL_ROLLBACK_TRANSACTION
SQL_GET_DDL_STATEMENTS
SQL_CREATE_SAVE_POINT
SQL_RESTORE_SAVE_POINT

YadamuDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
YadamuDBI: Add incoming Spatial format information to table metadata in getTableInfo()
YadamuDBI: All drivers Set transaction state before performing commit and rollback operations
YadmuDBI: Remove forceEndOnInputStreamError()
YadamuDBI: Refactor decomposeDateType => YadamuLibrary
YadamuDBI: Refactor decomposeDateTypes => YadamuLibrary
YadamuDBI: Add support for table name filtering via TABLES parameter
YadamuParser: remove objectMode argument from constuctor and all descendant classes
YadamuParser: Use Object.values() to Pivot from Object to Array
YadamuWriter: Pass cause from forcedEnd() to FlushCache() to rollbackTransaction()
YadamuWriter: Pass cause from forcedEnd() to FlushCache() to rollbackTransaction()
YadamuWriter: Disable column count check once skipTable is true
YadamuWriter: FlushCache() Only commit or rollback if there is an active transaction
YadamuWriter: FlushCache() Skip writing pending rows if skipTable is true
YadamuQA: Refactor DEFAULT handling and PARAMETERS as GETTERS
YadamuQA: Standardize test names across export, import, fileRoundtrip, dbRoundtrip and lostConnection configurations
YadamuQA: Abort test operation when step fails.
YadamuQA: Enable integration of LoaderDBI by using dynamic driver loading to load FileDBI
YadamuQA: Fixed Accumulators
YadamuQA: Added Unload Testing Framework to Export

ExampleDBI: Add ExampleConstants class
ExampleDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
ExampleParser: Use Object.values() to Pivot from Object to Array
ExampleParser: Use  transformation array
ExampleParser: Do not run transformations unless one or more transformation  are defined

FileDBI: Add DataType mapping to talbleInfo
FileDBI: Add SpatialFormat to tableInfo
FileDBI: Wrap calls to fs.createReadStream() in a promise
FileDBI: Remove source information from Metadata before writing to file
FileWriter: Use transformation array for Buffer and Date conversions
FileWriter: Do not run transformations unless one or more transformations are defined

MySQLDBI: Add MySQLConstants class
MySQLDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MySQLDBI: Remove JSON_ARARY operator from SELECT statements
MySQLDBI: Return binary data types as Buffer, not HexBinary String
MySQLDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MySQLDBI: Map SET columns to JSON when generating DML statements
MySQLDBI: Rename column "OWNER" to "TABLE_SCHEMA" in "schemaInformation" query
MySQLDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MySQLDBI: Map tinyint(1) to boolean;
MySQLDBI: Map Postgres "jsonb" data type to JSON.
MySQLDBI: Add Snowflake data type mappings
MySQLParser: Use Object.values() to Pivot from Object to Array
MySQLParser: Use  transformation array for JSON & SET conversions
MySQLParser: Return JSON as object
MySQLParser: Return SET column as JSON array
MySQLParser: Do not run transformations unless one or more transformation  are defined
MySQLParser: SET columns are automatically converted to JSON by the driver, no transformation required
MySQLParser: Do not run transformations unless one or more transformation  are defined
MySQLWriter: Use YadamuSpatialLibary to recode entire batch to WKT on WKB insert Error
MySQLWriter: Remove direct use of WKX package
MySQLQA: Cast SET columns in the source table to JSON when comparing results

MariaDBI: Add MariadbConstants class
MariaDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MariaDBI: Remove JSON_ARARY operator from SELECT statements
MariaDBI: Return binary data types as Buffer, not HexBinary String
MariaDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MariaDBI: Map SET columns to JSON when generating DML statements
MariaDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
MariaDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MariaDBI: Use rowsAsArray option at connection time
MariaDBI: Join with information_schema.check_constraints to identify JSON columns
MariaDBI: Map tinyint(1) to boolean;
MariaDBI: Map Postgres "jsonb" data type to JSON.
MariaDBI: Fetch float and double as string
MariaParser: Use  transformation array for JSON
MariaParser: Return JSON as object
MariaQA: Cast SET columns in the source table to Pseudo JSON when comparing results

MsSQLDBI: Add MsSQLConstants class
MsSQLDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MsSQLDBI: Map MySQL SET columns  to JSON when generating DML statements
MsSQLDBI: Fixed parameters names used in reportTransactionState()'
MsSQLDBI: Fix mapping of Oracle data type "MDSYS.SDO_GEOMETRY"
MsSQLDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MySQLDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
MsSQLDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MsSQLDBI: Wrap calls to fs.createReadStream() in a promise
MsSQLDBI: Map bit to Boolean. Write Boolean columns as true/false
MsSQLDBI: Map Postgres "jsonb" data type to JSON.
MsSQLDBI: Add Snowflake data type mappings
MsSQLDBI: Restrict  STisValid test to Geography columns
MsSQLDBI: Use YadamuLibrary for Boolean Conversions
MsSQLParser: Use Object.values() to Pivot from Object to Array
MsSQLParser: Return binary data types as Buffer, not HexBinary String
MsSQLWriter: Convert GeoJSON to WKT before writing batch

OracleDBI: Add OracleConstants class
OracleDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
OracleDBI: Map MySQL  SET columns to JSON when generating DML statements
OracleDBI: Standardize LOB conversion functions using sourceToTarget naming convention
OracleDBI: Remove all HexBinary LOB conversion functions \
OracleDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
OracleDBI: Convert all LOB copy operations to Stream.pipeline()
OracleDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
OracleDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
OracleDBI: Wrap calls to fs.createReadStream() in a promise
OracleDBI: Export BFILE in JSON Format / Import BFILE from JSON Format
OracleDBI: Add Snowflake data type mappings
OracleDBI: Map raw(1) to Boolean. Write Boolean Values as 0x00 and 0x01
OracleDBI: Map Postgres "jsonb" data type to JSON.
OracleDBI: Use YadamuLibrary for Boolean Conversions
OracleDBI: Remove unused parameter TABLE_NAME from YADAMU_EXPORT procedure
OracleDBI: Switch Default JSON storage to BLOB in Oracle12c
OracleDBI: Retun JSON as CLOB or BLOB. Use JSON_SERIALIZE in 20c with Native JSON dataType.
OracleParser: Convert all LOB copy operations to Stream.pipeline()
OracleParser: Return binary data types as Buffer, not HexBinary String
OracleParser: Convert JSON store as BLOB to text.
OracleParser: Use transformation array for CLOB, BLOB and JSON conversions
OracleParser: Do not run transformations unless one or more transformation  are defined
OracleWriter: Use await when serializing LOB columns before logging errors
OracleWriter: Put rows containing LOBs back into column order before logging errors
OracleWriter: Remove HexBinary conversions
OracleWriter: Use await when calling with HandleBatchError
OracleError: Implement "SpatialError" method

PostgresDBI: Add PostgreConstants class
PostgresDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
PostgresDBI: Return binary data types as Buffer, not HexBinary String
PostgresDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
PostgresDBI: Map MySQL Set data type to JSON
PostgresDBI: Serialize JSON/JSONB Columns to avoid brianc/node-postgres#442
PostgresDBI: Set rowMode to array on query exectuion
PostgresDBI: Wrap calls to fs.createReadStream() in a promise

SnowflakeDBI: Add SnowflakeConstants class
SnowflakeDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
SnowflakeDBI: Defined constants for all SQL statements used by the driver
SnowflakeDBI: Removed functions related to uploading and processing YADAMU files on the server
SnowflakeDBI: Optimize insert of variant types using FROM VALUES (),()
SnowflakeDBI: Fix PARSE_XML usage  when copying from databases where XML datatype is "XML"
SnowflakeDBI: Added TIME_INPUT_FORMAT mask
SnowflakeDBI: Add support for specifying transient and DATA_RETENTION_TIME to create table statements
SnowflakeDBI: Added support for type discovery for columns of type 'USER_DEFINED_TYPE'
SnowflakeDBI: Refactor Spatial Conversion operations to new class YadamuSpatialLibary
SnowflakeDBI: Added support for GEOMETRY data type
SnowflakeDBI: Switched Default Spatial Format to EWKB
SnowflakeDBI: Use Describe to get length of Binary columns
SnowflakeDBI: Add Duck Typing for Variant columnss
SnowflakeParser: Use Object.values() to Pivot from Object to Array
SnowflakeWriter: Convert Buffers to HexBinary before inserting binary data (Snowflake-sdk does not handle Buffer)
SnowflakeWriter: Recode WKB and EWKB as WKT as snowflake rejects some valid WKB/EWKBT values
SnowflakeQA: Add transient and DATA_RETENTION_TIME to Database and Schema creation
SnowflakeQA: Added YADAMU_TEST stored procedure for comparing Snowflake schemas
SnowflakeQA: Added implementation for function compareSchemas()
SnowflakeQA: Added implementation for function getRowCounts()

MongoDBI: Add MongoConstants class
MongoDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MongoDBI: Add stack traces to MongoError
MongoError: Add stack trace information
MongoParser: Use Object.values() to Pivot from Object to Array
MongoParser: Use  transformation array for data conversions
MongoParser: Do not run transformations unless one or more transformation  are defined
MongoWriter: Use transformation array for Buffer and Date conversions
MongoWriter: Do not run transformations unless one or more transformation  are defined
MongoWriter: Fixed objectId transformation
MongoParser: Decode Mongo binData BSON Objects
MongoQA: Report Collection Hash Values as Extra and Missing rows counts

LoaderDBI: Add Experimental version of parallel File load/unload option

Docker: Limit Container Memory to 16GB
@danvk

danvk commented Aug 25, 2020

Another option is to make your column have a type of jsonb[] rather than jsonb.
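
A minimal, untested sketch of that alternative (hypothetical table): with a jsonb[] column, node-postgres's JavaScript-array-to-Postgres-array conversion is what you want, as long as each element is itself valid JSON text.

// assumes a column created as: data jsonb[]
client.query(
  'INSERT INTO my_table (data) VALUES ($1::jsonb[])',
  [ [ JSON.stringify({ serviceId: 2 }), JSON.stringify({ serviceId: 3 }) ] ],
  handler
)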

@dreamyguy

Here's an example of using a json data type & doing a round-trip query from node -> postgres -> and back to node with the json type preserved:

https://github.com/brianc/node-postgres/blob/master/test/integration/client/json-type-parsing-tests.js

Hope that helps!

I found the file @brianc referred to, since the link is now broken: https://github.com/brianc/node-postgres/blob/9274f08fa2d8ae55a218255bf7880d26b6abc935/test/integration/client/json-type-parsing-tests.js

alecgibson added a commit to share/sharedb-postgres that referenced this issue Jun 7, 2024
This is a **BREAKING** change that:

 - adds tests against the [upstream `sharedb` DB test suite][1]
 - adds a CI build for the tests against current Node.js and Postgres
   versions
 - breaks the API to conform to the upstream tests, including adding
   metadata support

The breaks are:

 - Dropping non-null constraints on `snapshots.doc_type` and
   `snapshots.data`
 - Adding a new `snapshots.metadata` `json` column
 - Respecting `options.metadata` and `fields.$submit`, which were
   previously ignored on `getOps()`, and useless on `getSnapshot()`
   (which didn't store metadata)
 - `snapshot.m` is now `undefined` if not present, or `null` if
   unrequested (inline with the spec)

On top of this it also makes some bugfixes to conform to the spec:

 - Ignore unique key validations when committing, since this may happen
   during concurrent commits
 - `JSON.stringify()` JSON fields, which [break][2] if passed a raw
   array
 - Default `from = 0` if unset in `getOps()`

[1]: https://github.com/share/sharedb/blob/7abe65049add9b58e1df638aa34e7ca2c0a1fcfa/test/db.js#L25
[2]: brianc/node-postgres#442
alecgibson added a commit to share/sharedb-postgres that referenced this issue Jun 12, 2024
This is a **BREAKING** change that:

 - adds tests against the [upstream `sharedb` DB test suite][1]
 - adds a CI build for the tests against current Node.js and Postgres
   versions
 - breaks the API to conform to the upstream tests, including adding
   metadata support

The breaks are:

 - Dropping non-null constraints on `snapshots.doc_type` and
   `snapshots.data` (to allow `Doc`s to be deleted)
 - Adding a new `snapshots.metadata` `json` column
 - Respecting `options.metadata` and `fields.$submit`, which were
   previously ignored on `getOps()`, and useless on `getSnapshot()`
   (which didn't store metadata)
 - `snapshot.m` is now `undefined` if not present, or `null` if
   unrequested (inline with the spec)

On top of this it also makes some bugfixes to conform to the spec:

 - Ignore unique key validations when committing, since this may happen
   during concurrent commits
 - `JSON.stringify()` JSON fields, which [break][2] if passed a raw
   array
 - Default `from = 0` if unset in `getOps()`

[1]: https://github.com/share/sharedb/blob/7abe65049add9b58e1df638aa34e7ca2c0a1fcfa/test/db.js#L25
[2]: brianc/node-postgres#442