Re-architecture. Add Constants Classes, Setters and Getters, Optimize conversions between object and text, binary and hexBinary

Yadamu: Enable TABLES parameter to be used to limit operations to a specific subset of the tables in the schema
Yadamu: Added support for TABLES, WAREHOUSE and ACCOUNT command line parameters
Yadamu: Refactor DEFAULT handling and PARAMETERS as GETTERS
DBReader: pass cause to forcedEnd()
DBWriter: Use await when calling dbi.setMetadata()
YadamuLibrary: Add Boolean Conversion utilities
YadamuLogger: Disable fileWriter column count check
YadamuRejectManager: Disable fileWriter column count check
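
"Refactor DEFAULT handling and PARAMETERS as GETTERS" recurs throughout this commit; the dbReader/dbWriter diffs below show call sites moving from dbi.parameters.MODE to dbi.MODE and from dbi.yadamu.getStatus() to dbi.yadamu.STATUS. A minimal sketch of the pattern, with illustrative class name and default values (not the actual YadamuDBI code):

    class SketchDBI {

      // Defaults live in one place; each getter returns the override when one was supplied.
      static get DBI_DEFAULTS() {
        return { MODE : 'DATA_ONLY', ON_ERROR : 'ABORT' }
      }

      constructor(parameters) {
        this.parameters = parameters || {}
      }

      get MODE()     { return this.parameters.MODE     || SketchDBI.DBI_DEFAULTS.MODE }
      get ON_ERROR() { return this.parameters.ON_ERROR || SketchDBI.DBI_DEFAULTS.ON_ERROR }

    }

    // new SketchDBI({}).MODE                  === 'DATA_ONLY'
    // new SketchDBI({MODE: 'DDL_ONLY'}).MODE  === 'DDL_ONLY'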

YadamuDBI: Standardized naming conventions for SQL Statements used by driver
SQL_CONFIGURE_CONNECTION
SQL_SYSTEM_INFORMATION_SCHEMA
SQL_GET_DLL_STATEMENTS
SQL_SCHEMA_INFORMATION
SQL_BEGIN_TRANSACTION
SQL_COMMIT_TRANSACTION
SQL_ROLLBACK_TRANSACTION
SQL_GET_DDL_STATEMENTS
SQL_CREATE_SAVE_POINT
SQL_RESTORE_SAVE_POINT
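
A minimal sketch of how such statement constants might be grouped in a vendor Constants class and surfaced on the driver; the SQL text and class layout here are assumptions, not the actual driver code:

    class SketchConstants {
      static get SQL_BEGIN_TRANSACTION()    { return 'begin transaction' }
      static get SQL_COMMIT_TRANSACTION()   { return 'commit' }
      static get SQL_ROLLBACK_TRANSACTION() { return 'rollback' }
    }

    class SketchDBI {
      get SQL_BEGIN_TRANSACTION() { return SketchConstants.SQL_BEGIN_TRANSACTION }

      async beginTransaction() {
        // executeSQL() stands in for the vendor-specific execution method.
        await this.executeSQL(this.SQL_BEGIN_TRANSACTION)
      }

      async executeSQL(sql) { /* vendor specific */ }
    }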

YadamuDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
YadamuDBI: Add incoming Spatial format information to table metadata in getTableInfo()
YadamuDBI: All drivers Set transaction state before performing commit and rollback operations
YadamuDBI: Remove forceEndOnInputStreamError()
YadamuDBI: Refactor decomposeDateType => YadamuLibrary
YadamuDBI: Refactor decomposeDateTypes => YadamuLibrary
YadamuDBI: Add support for table name filtering via TABLES parameter
YadamuParser: remove objectMode argument from constructor and all descendant classes
YadamuParser: Use Object.values() to Pivot from Object to Array
YadamuWriter: Pass cause from forcedEnd() to FlushCache() to rollbackTransaction()
YadamuWriter: Disable column count check once skipTable is true
YadamuWriter: FlushCache() Only commit or rollback if there is an active transaction
YadamuWriter: FlushCache() Skip writing pending rows if skipTable is true
YadamuQA: Refactor DEFAULT handling and PARAMETERS as GETTERS
YadamuQA: Standardize test names across export, import, fileRoundtrip, dbRoundtrip and lostConnection configurations
YadamuQA: Abort test operation when step fails.
YadamuQA: Enable integration of LoaderDBI by using dynamic driver loading to load FileDBI
YadamuQA: Fixed Accumulators
YadamuQA: Added Unload Testing Framework to Export
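
The TABLES filtering surfaces in the dbReader/dbReaderParallel diffs below as a per-task INCLUDE_TABLE flag. One possible way the flag could be derived from the TABLES parameter, sketched with a hypothetical applyTableFilter() helper:

    function applyTableFilter(schemaInfo, tables) {
      // With no TABLES parameter every table is processed; otherwise only the named tables.
      return schemaInfo.map((tableInfo) => {
        tableInfo.INCLUDE_TABLE = (tables === undefined) || tables.includes(tableInfo.TABLE_NAME)
        return tableInfo
      })
    }

    // applyTableFilter(schemaInfo, ['CUSTOMERS','ORDERS'])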

ExampleDBI: Add ExampleConstants class
ExampleDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
ExampleParser: Use Object.values() to Pivot from Object to Array
ExampleParser: Use transformation array
ExampleParser: Do not run transformations unless one or more transformations are defined
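
The "Use Object.values() to Pivot from Object to Array" and "transformation array" items recur across the parsers in this commit. A sketch of both idioms together, with illustrative data types and conversion logic (the real parsers derive their transformations from vendor metadata):

    class SketchRowConverter {

      constructor(dataTypes) {
        // One entry per column: a conversion function, or null when no conversion is needed.
        this.transformations = dataTypes.map((dataType) => {
          return dataType === 'boolean' ? ((value) => value === 1) : null
        })
        // Skip the per-row loop entirely when no column needs a transformation.
        this.rowTransformation = this.transformations.every((transformation) => transformation === null)
          ? (row) => row
          : (row) => {
              this.transformations.forEach((transformation, idx) => {
                if ((transformation !== null) && (row[idx] !== null)) {
                  row[idx] = transformation(row[idx])
                }
              })
              return row
            }
      }

      convert(rowAsObject) {
        // Pivot the row from an object keyed by column name to an array of values.
        return this.rowTransformation(Object.values(rowAsObject))
      }

    }

    // new SketchRowConverter(['varchar','boolean']).convert({NAME: 'X', IS_ACTIVE: 1}) => ['X', true]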

FileDBI: Add DataType mapping to tableInfo
FileDBI: Add SpatialFormat to tableInfo
FileDBI: Wrap calls to fs.createReadStream() in a promise
FileDBI: Remove source information from Metadata before writing to file
FileWriter: Use transformation array for Buffer and Date conversions
FileWriter: Do not run transformations unless one or more transformations are defined
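
One way "Wrap calls to fs.createReadStream() in a promise" can be done: resolve once the underlying file is open, reject if the stream errors first (e.g. file not found). A sketch, not the FileDBI implementation:

    const fs = require('fs');

    function createReadStreamAsync(path) {
      return new Promise((resolve, reject) => {
        const stream = fs.createReadStream(path);
        stream.once('open', () => resolve(stream));
        stream.once('error', (err) => reject(err));
      })
    }

    // const inputStream = await createReadStreamAsync(exportFilePath);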

MySQLDBI: Add MySQLConstants class
MySQLDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MySQLDBI: Remove JSON_ARRAY operator from SELECT statements
MySQLDBI: Return binary data types as Buffer, not HexBinary String
MySQLDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MySQLDBI: Map SET columns to JSON when generating DML statements
MySQLDBI: Rename column "OWNER" to "TABLE_SCHEMA" in "schemaInformation" query
MySQLDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MySQLDBI: Map tinyint(1) to boolean;
MySQLDBI: Map Postgres "jsonb" data type to JSON.
MySQLDBI: Add Snowflake data type mappings
MySQLParser: Use Object.values() to Pivot from Object to Array
MySQLParser: Use transformation array for JSON & SET conversions
MySQLParser: Return JSON as object
MySQLParser: Return SET column as JSON array
MySQLParser: Do not run transformations unless one or more transformations are defined
MySQLParser: SET columns are automatically converted to JSON by the driver, no transformation required
MySQLWriter: Use YadamuSpatialLibary to recode entire batch to WKT on WKB insert Error
MySQLWriter: Remove direct use of WKX package
MySQLQA: Cast SET columns in the source table to JSON when comparing results
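
A sketch of the kind of data type mapping the tinyint(1), SET and "jsonb" items above describe; the real mappings live in the MySQL statement generator and cover far more types:

    function mapDataType(dataType, dataTypeLength) {
      switch (dataType.toLowerCase()) {
        case 'tinyint': return dataTypeLength === 1 ? 'boolean' : 'tinyint'   // tinyint(1) -> boolean
        case 'set'    : return 'json'                                         // SET handled as JSON
        case 'jsonb'  : return 'json'                                         // Postgres jsonb -> json
        default       : return dataType
      }
    }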

MariaDBI: Add MariadbConstants class
MariaDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MariaDBI: Remove JSON_ARRAY operator from SELECT statements
MariaDBI: Return binary data types as Buffer, not HexBinary String
MariaDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MariaDBI: Map SET columns to JSON when generating DML statements
MariaDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
MariaDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MariaDBI: Use rowsAsArray option at connection time
MariaDBI: Join with information_schema.check_constraints to identify JSON columns
MariaDBI: Map tinyint(1) to boolean;
MariaDBI: Map Postgres "jsonb" data type to JSON.
MariaDBI: Fetch float and double as string
MariaParser: Use transformation array for JSON
MariaParser: Return JSON as object
MariaQA: Cast SET columns in the source table to Pseudo JSON when comparing results
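
rowsAsArray is a documented connection option of the mariadb Node.js connector; it returns each row as an array rather than an object, which is what removes the Object.values() pivot from the parser. Connection details below are placeholders:

    const mariadb = require('mariadb');

    async function connect() {
      return await mariadb.createConnection({
        host        : 'localhost',   // placeholder
        user        : 'yadamu',      // placeholder
        password    : 'secret',      // placeholder
        rowsAsArray : true           // rows are returned as arrays, not objects
      });
    }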

MsSQLDBI: Add MsSQLConstants class
MsSQLDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MsSQLDBI: Map MySQL SET columns to JSON when generating DML statements
MsSQLDBI: Fixed parameter names used in reportTransactionState()
MsSQLDBI: Fix mapping of Oracle data type "MDSYS.SDO_GEOMETRY"
MsSQLDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MsSQLDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
MsSQLDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MsSQLDBI: Wrap calls to fs.createReadStream() in a promise
MsSQLDBI: Map bit to Boolean. Write Boolean columns as true/false
MsSQLDBI: Map Postgres "jsonb" data type to JSON.
MsSQLDBI: Add Snowflake data type mappings
MsSQLDBI: Restrict STisValid test to Geography columns
MsSQLDBI: Use YadamuLibrary for Boolean Conversions
MsSQLParser: Use Object.values() to Pivot from Object to Array
MsSQLParser: Return binary data types as Buffer, not HexBinary String
MsSQLWriter: Convert GeoJSON to WKT before writing batch
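
"Use YadamuLibrary for Boolean Conversions" centralizes the bit/raw(1)/'true'-style conversions listed above; a sketch of what such a helper might look like (the actual function names in YadamuLibrary are not shown on this commit page):

    function toBoolean(value) {
      switch (typeof value) {
        case 'boolean': return value
        case 'number' : return value !== 0                                 // e.g. bit, tinyint(1)
        case 'string' : return ['true','TRUE','1','Y','y'].includes(value)
        default:
          if (Buffer.isBuffer(value)) return value[0] !== 0                // e.g. raw(1): 0x00 / 0x01
          return Boolean(value)
      }
    }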

OracleDBI: Add OracleConstants class
OracleDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
OracleDBI: Map MySQL SET columns to JSON when generating DML statements
OracleDBI: Standardize LOB conversion functions using sourceToTarget naming convention
OracleDBI: Remove all HexBinary LOB conversion functions
OracleDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
OracleDBI: Convert all LOB copy operations to Stream.pipeline()
OracleDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
OracleDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
OracleDBI: Wrap calls to fs.createReadStream() in a promise
OracleDBI: Export BFILE in JSON Format / Import BFILE from JSON Format
OracleDBI: Add Snowflake data type mappings
OracleDBI: Map raw(1) to Boolean. Write Boolean Values as 0x00 and 0x01
OracleDBI: Map Postgres "jsonb" data type to JSON.
OracleDBI: Use YadamuLibrary for Boolean Conversions
OracleDBI: Remove unused parameter TABLE_NAME from YADAMU_EXPORT procedure
OracleDBI: Switch Default JSON storage to BLOB in Oracle12c
OracleDBI: Return JSON as CLOB or BLOB. Use JSON_SERIALIZE in 20c with Native JSON dataType.
OracleParser: Convert all LOB copy operations to Stream.pipeline()
OracleParser: Return binary data types as Buffer, not HexBinary String
OracleParser: Convert JSON stored as BLOB to text.
OracleParser: Use transformation array for CLOB, BLOB and JSON conversions
OracleParser: Do not run transformations unless one or more transformations are defined
OracleWriter: Use await when serializing LOB columns before logging errors
OracleWriter: Put rows containing LOBs back into column order before logging errors
OracleWriter: Remove HexBinary conversions
OracleWriter: Use await when calling HandleBatchError
OracleError: Implement "SpatialError" method
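
"Convert all LOB copy operations to Stream.pipeline()" replaces hand-rolled data/end/error handlers with stream.pipeline(). A self-contained sketch, using a minimal chunk collector in the spirit of the bufferWriter.js change shown further down; the surrounding wiring is illustrative:

    const { Writable, pipeline } = require('stream');
    const { promisify } = require('util');
    const pipelineAsync = promisify(pipeline);

    class ChunkCollector extends Writable {
      constructor()                 { super(); this.chunks = [] }
      _write(chunk, encoding, done) { this.chunks.push(chunk); done() }
      toBuffer()                    { return Buffer.concat(this.chunks) }
    }

    async function lobToBuffer(lobStream) {
      const collector = new ChunkCollector();
      await pipelineAsync(lobStream, collector);
      return collector.toBuffer();
    }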

PostgresDBI: Add PostgreConstants class
PostgresDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
PostgresDBI: Return binary data types as Buffer, not HexBinary String
PostgresDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
PostgresDBI: Map MySQL Set data type to JSON
PostgresDBI: Serialize JSON/JSONB Columns to avoid brianc/node-postgres#442
PostgresDBI: Set rowMode to array on query execution
PostgresDBI: Wrap calls to fs.createReadStream() in a promise
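
Two of the Postgres items sketched with the pg (node-postgres) client: JSON/JSONB values are passed through JSON.stringify() before binding (the brianc/node-postgres#442 workaround, which otherwise serializes JavaScript arrays as Postgres array literals), and rowMode: 'array' makes the driver return rows as arrays. Table and column names are placeholders:

    const { Client } = require('pg');

    async function insertJsonRow(client, jsonValue) {
      // Explicit stringify so the value is bound as JSON text, not a Postgres array literal.
      await client.query('insert into example_table (json_col) values ($1)', [JSON.stringify(jsonValue)]);
    }

    async function queryAsArrays(client, sqlStatement) {
      const results = await client.query({ text: sqlStatement, rowMode: 'array' });
      return results.rows;   // each row is an array, not an object
    }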

SnowflakeDBI: Add SnowflakeConstants class
SnowflakeDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
SnowflakeDBI: Defined constants for all SQL statements used by the driver
SnowflakeDBI: Removed functions related to uploading and processing YADAMU files on the server
SnowflakeDBI: Optimize insert of variant types using FROM VALUES (),()
SnowflakeDBI: Fix PARSE_XML usage when copying from databases where the XML data type is "XML"
SnowflakeDBI: Added TIME_INPUT_FORMAT mask
SnowflakeDBI: Add support for specifying transient and DATA_RETENTION_TIME to create table statements
SnowflakeDBI: Added support for type discovery for columns of type 'USER_DEFINED_TYPE'
SnowflakeDBI: Refactor Spatial Conversion operations to new class YadamuSpatialLibary
SnowflakeDBI: Added support for GEOMETRY data type
SnowflakeDBI: Switched Default Spatial Format to EWKB
SnowflakeDBI: Use Describe to get length of Binary columns
SnowflakeDBI: Add Duck Typing for Variant columns
SnowflakeParser: Use Object.values() to Pivot from Object to Array
SnowflakeWriter: Convert Buffers to HexBinary before inserting binary data (Snowflake-sdk does not handle Buffer)
SnowflakeWriter: Recode WKB and EWKB as WKT as Snowflake rejects some valid WKB/EWKB values
SnowflakeQA: Add transient and DATA_RETENTION_TIME to Database and Schema creation
SnowflakeQA: Added YADAMU_TEST stored procedure for comparing Snowflake schemas
SnowflakeQA: Added implementation for function compareSchemas()
SnowflakeQA: Added implementation for function getRowCounts()
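
A sketch of the SnowflakeWriter workaround noted above: Buffer values are converted to hex strings before being placed in the bind array handed to the snowflake-sdk, with the generated INSERT expected to convert them back to binary server-side:

    function bindValue(value) {
      return Buffer.isBuffer(value) ? value.toString('hex') : value
    }

    // e.g. const binds = rows.map((row) => row.map(bindValue)) before executing the INSERT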

MongoDBI: Add MongoConstants class
MongoDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MongoDBI: Add stack traces to MongoError
MongoError: Add stack trace information
MongoParser: Use Object.values() to Pivot from Object to Array
MongoParser: Use transformation array for data conversions
MongoParser: Do not run transformations unless one or more transformations are defined
MongoWriter: Use transformation array for Buffer and Date conversions
MongoWriter: Do not run transformations unless one or more transformations are defined
MongoWriter: Fixed objectId transformation
MongoParser: Decode Mongo binData BSON Objects
MongoQA: Report Collection Hash Values as Extra and Missing rows counts

LoaderDBI: Add Experimental version of parallel File load/unload option

Docker: Limit Container Memory to 16GB
markddrake committed Aug 1, 2020
1 parent 8140aba commit 9ead259
Showing 115 changed files with 7,587 additions and 5,423 deletions.
6 changes: 5 additions & 1 deletion app/YADAMU/common/bufferWriter.js
@@ -15,8 +15,12 @@ class BufferWriter extends Writable {
callback();
}

toBuffer() {
return Buffer.concat(this.chunks)
}

toHexBinary() {
return Buffer.concat(this.chunks).toString('hex');
return this.toBuffer().toString('hex');
}

}
78 changes: 34 additions & 44 deletions app/YADAMU/common/dbReader.js
@@ -66,10 +66,9 @@ class DBReader extends Readable {
super({objectMode: true });

this.dbi = dbi;
this.mode = dbi.parameters.MODE
this.status = dbi.yadamu.getStatus()
this.status = dbi.yadamu.STATUS
this.yadamuLogger = yadamuLogger;
this.yadamuLogger.info([`Reader`,dbi.DATABASE_VENDOR,this.mode,this.dbi.getWorkerNumber()],`Ready.`)
this.yadamuLogger.info([`Reader`,dbi.DATABASE_VENDOR,this.dbi.MODE,this.dbi.getWorkerNumber()],`Ready.`)

this.schemaInfo = [];

@@ -122,7 +121,7 @@ class DBReader extends Readable {
abortOnError(cause,dbi) {
const abortCodes = ['ABORT',undefined]
// dbi.setFatalError(cause);
return abortCodes.indexOf(dbi.parameters.ON_ERROR) > -1
return abortCodes.indexOf(dbi.yadamu.ON_ERROR) > -1
}

async pipelineTable(task,readerDBI,writerDBI) {
@@ -144,10 +143,10 @@
let errorDBI

try {
const tableInfo = readerDBI.generateSelectStatement(task)
const tableInfo = readerDBI.generateQueryInformation(task)
// ### TESTING ONLY: Uncomment folllowing line to force Table Not Found condition
// tableInfo.SQL_STATEMENT = tableInfo.SQL_STATEMENT.replace(tableInfo.TABLE_NAME,tableInfo.TABLE_NAME + "1")
const transformer = readerDBI.createParser(tableInfo,true)
const transformer = readerDBI.createParser(tableInfo)
const tableInputStream = await readerDBI.getInputStream(tableInfo,transformer)
tableInputStream.on('error',(err) => {
pipeStatistics.readerEndTime = performance.now()
@@ -158,10 +157,10 @@
const mappedTableName = writerDBI.transformTableName(task.TABLE_NAME,readerDBI.getInverseTableMappings())
const tableOutputStream = writerDBI.getOutputStream(mappedTableName)
transformer.on('error',(err) => {
pipeStatistics.parserEndTime = performance.now()
})
pipeStatistics.parserEndTime = performance.now()
})

try {
try {
await tableOutputStream.initialize()
pipeStatistics.pipeStartTime = performance.now();
await pipeline(tableInputStream,transformer,tableOutputStream)
@@ -181,16 +180,16 @@
stream = 'READER'
errorDBI = readerDBI
cause = readerDBI.streamingError(e,tableInfo.SQL_STATEMENT)
if ((continueProcessing.indexOf(readerDBI.parameters.ON_ERROR) > -1) && cause.lostConnection()) {
if ((continueProcessing.indexOf(readerDBI.yadamu.ON_ERROR) > -1) && cause.lostConnection()) {
// Re-establish the input stream connection
await readerDBI.reconnect(cause,'READER')
}
}
this.yadamuLogger.handleException(['PIPELINE',mappedTableName,this.dbi.DATABASE_VENDOR,writerDBI.DATABASE_VENDOR,'STREAM PROCESSING',stream,errorDBI.parameters.ON_ERROR],cause)
if (abortCurrentTable.indexOf(writerDBI.parameters.ON_ERROR) > -1) {
this.yadamuLogger.handleException(['PIPELINE',mappedTableName,this.dbi.DATABASE_VENDOR,writerDBI.DATABASE_VENDOR,'STREAM PROCESSING',stream,errorDBI.yadamu.ON_ERROR],cause)
if (abortCurrentTable.indexOf(writerDBI.yadamu.ON_ERROR) > -1) {
tableOutputStream.abortWriter();
}
await tableOutputStream.forcedEnd();
await tableOutputStream.forcedEnd(cause);
}
pipeStatistics.pipeEndTime = performance.now();
pipeStatistics.rowsRead = transformer.getCounter()
@@ -199,7 +198,7 @@
this.dbWriter.recordTimings(timings);

if (cause && (this.abortOnError(cause,errorDBI))) {
throw cause;
throw cause;
}
} catch (e) {
this.yadamuLogger.handleException(['PIPELINE',task.TABLE_NAME,this.dbi.DATABASE_VENDOR,writerDBI.DATABASE_VENDOR,'STREAM CREATION'],e)
@@ -208,18 +207,18 @@
}

async pipelineTables(readerDBI,writerDBI) {


if (this.schemaInfo.length > 0) {
if (this.schemaInfo.length > 0) {
this.yadamuLogger.info(['SEQUENTIAL',readerDBI.DATABASE_VENDOR,writerDBI.DATABASE_VENDOR],`Processing Tables`);
for (const task of this.schemaInfo) {
try {
await this.pipelineTable(task,readerDBI,writerDBI)
} catch (cause) {
this.yadamuLogger.handleException(['SEQUENTIAL','PIPELINES',readerDBI.DATABASE_VENDOR,writerDBI.DATABASE_VENDOR],cause)
// Throwing here raises 'ERR_STREAM_PREMATURE_CLOSE' on the Writer. Cache the cause
this.underlyingError = cause;
throw(cause)
if (task.INCLUDE_TABLE === true) {
try {
await this.pipelineTable(task,readerDBI,writerDBI)
} catch (cause) {
this.underlyingError = cause;
this.yadamuLogger.handleException(['SEQUENTIAL','PIPELINES',readerDBI.DATABASE_VENDOR,writerDBI.DATABASE_VENDOR],cause)
// Throwing here raises 'ERR_STREAM_PREMATURE_CLOSE' on the Writer. Cache the cause
throw(cause)
}
}
}
}
@@ -266,10 +265,10 @@
const systemInformation = await this.getSystemInformation();
// Needed in case we have to generate DDL from the system information and metadata.
this.dbi.setSystemInformation(systemInformation);
this.dbi.yadamu.rejectionManager.setSystemInformation(systemInformation)
this.dbi.yadamu.warningManager.setSystemInformation(systemInformation)
this.dbi.yadamu.REJECTION_MANAGER.setSystemInformation(systemInformation)
this.dbi.yadamu.WARNING_MANAGER.setSystemInformation(systemInformation)
this.push({systemInformation : systemInformation});
if (this.mode === 'DATA_ONLY') {
if (this.dbi.MODE === 'DATA_ONLY') {
this.nextPhase = 'metadata';
}
else {
@@ -288,13 +287,13 @@
})
}
this.push({ddl: ddl});
this.nextPhase = this.mode === 'DDL_ONLY' ? 'exportComplete' : 'metadata';
this.nextPhase = this.dbi.MODE === 'DDL_ONLY' ? 'exportComplete' : 'metadata';
break;
case 'metadata' :
const metadata = await this.getMetadata();
this.push({metadata: this.dbi.transformMetadata(metadata,this.dbi.inverseTableMappings)});
this.dbi.yadamu.rejectionManager.setMetadata(metadata)
this.dbi.yadamu.warningManager.setMetadata(metadata)
this.dbi.yadamu.REJECTION_MANAGER.setMetadata(metadata)
this.dbi.yadamu.WARNING_MANAGER.setMetadata(metadata)
this.nextPhase = 'pause';
break;
case 'pause':
@@ -304,33 +303,24 @@
case 'copyData':
await this.ddlComplete
await this.pipelineTables(this.dbi,this.dbWriter.dbi);
// this.yadamuLogger.trace([this.constructor.name,,this.dbi.DATABASE_VENDOR,`_READ(${this.nextPhase})`,this.dbi.parameters.ON_ERROR],'Exeucting Deferred Callback')
// this.yadamuLogger.trace([this.constructor.name,,this.dbi.DATABASE_VENDOR,`_READ(${this.nextPhase})`,this.dbi.yadamu.ON_ERROR],'Exeucting Deferred Callback')
this.dbWriter.deferredCallback();
// No 'break' - fall through to 'exportComplete'.
case 'exportComplete':
this.push(null);
this.push(null);
break;
default:
}
} catch (e) {
this.yadamuLogger.handleException([`READER`,this.dbi.DATABASE_VENDOR,`_READ(${this.nextPhase})`,this.dbi.parameters.ON_ERROR],e);
this.yadamuLogger.handleException([`READER`,this.dbi.DATABASE_VENDOR,`_READ(${this.nextPhase})`,this.dbi.yadamu.ON_ERROR],e);
this.underlyingError = e;
await this.dbi.releasePrimaryConnection();
this.destroy(e)
}
}

causedByLostConnection(cause) {
return
}


async exportComplete(cause) {
// Finalize the export and release the primary connection.
// this.yadamuLogger.trace([this.constructor.name,this.dbi.isDatabase()],'completeExport()')
}

async _destroy(cause,callback) {
// this.yadamuLogger.trace([this.constructor.name,this.dbi.isDatabase()],'_destroy()')
try {
await this.dbi.finalizeExport();
await this.dbi.releasePrimaryConnection();
Expand All @@ -340,7 +330,7 @@ class DBReader extends Readable {
callback(cause)
}
else {
this.yadamuLogger.handleException([`READER`,this.dbi.DATABASE_VENDOR,`_DESTROY()`,this.dbi.parameters.ON_ERROR],e);
this.yadamuLogger.handleException([`READER`,this.dbi.DATABASE_VENDOR,`_DESTROY()`,this.dbi.yadamu.ON_ERROR],e);
callback(e)
}
}
6 changes: 4 additions & 2 deletions app/YADAMU/common/dbReaderParallel.js
@@ -16,7 +16,7 @@ class DBReaderParallel extends DBReader {
async pipelineTables(primaryReaderDBI,primaryWriterDBI) {

if (this.schemaInfo.length > 0) {
const maxWorkerCount = parseInt(this.dbi.parameters.PARALLEL)
const maxWorkerCount = parseInt(this.dbi.yadamu.PARALLEL)
const workerCount = this.schemaInfo.length < maxWorkerCount ? this.schemaInfo.length : maxWorkerCount
this.yadamuLogger.info(['PARALLEL',workerCount,this.schemaInfo.length,primaryReaderDBI.DATABASE_VENDOR,primaryWriterDBI.DATABASE_VENDOR],`Processing Tables`);
const tasks = [...this.schemaInfo]
@@ -28,7 +28,9 @@
try {
while (tasks.length > 0) {
const task = tasks.shift();
await this.pipelineTable(task,readerDBI,writerDBI)
if (task.INCLUDE_TABLE === true) {
await this.pipelineTable(task,readerDBI,writerDBI)
}
}
await readerDBI.releaseWorkerConnection()
await writerDBI.releaseWorkerConnection()
19 changes: 9 additions & 10 deletions app/YADAMU/common/dbWriter.js
@@ -32,11 +32,10 @@ class DBWriter extends Writable {
super({objectMode: true});

this.dbi = dbi;
this.mode = dbi.parameters.MODE;
this.ddlRequired = (this.mode !== 'DATA_ONLY');
this.status = dbi.yadamu.getStatus()
this.ddlRequired = (this.dbi.MODE !== 'DATA_ONLY');
this.status = dbi.yadamu.STATUS
this.yadamuLogger = yadamuLogger;
this.yadamuLogger.info([`Writer`,dbi.DATABASE_VENDOR,this.mode,this.dbi.getWorkerNumber()],`Ready.`)
this.yadamuLogger.info([`Writer`,dbi.DATABASE_VENDOR,this.dbi.MODE,this.dbi.getWorkerNumber()],`Ready.`)

this.transactionManager = this.dbi
this.currentTable = undefined;
@@ -96,7 +95,7 @@

async generateStatementCache(metadata,ddlRequired) {
const startTime = performance.now()
this.dbi.setMetadata(metadata)
await this.dbi.setMetadata(metadata)
await this.dbi.generateStatementCache(this.dbi.parameters.TO_USER,!this.ddlComplete)
let ddlStatementCount = 0
let dmlStatementCount = 0
@@ -145,7 +144,7 @@
}
tableMetadata.source = {
vendor : tableMetadata.vendor
,columns : tableMetadata.columns
,columnNames : tableMetadata.columnNames
,dataTypes : tableMetadata.dataTypes
,sizeConstraints : tableMetadata.sizeConstraints
}
@@ -171,7 +170,7 @@
return sourceMetadata[key].tableName;
})

switch ( this.dbi.parameters.TABLE_MATCHING ) {
switch ( this.dbi.TABLE_MATCHING ) {
case 'UPPERCASE' :
sourceTableNames = sourceTableNames.map((tableName) => {
return tableName.toUpperCase();
@@ -266,7 +265,7 @@
}
callback();
} catch (e) {
this.yadamuLogger.handleException([`WRITER`,this.dbi.DATABASE_VENDOR,`_WRITE(${messageType})`,this.dbi.parameters.ON_ERROR],e);
this.yadamuLogger.handleException([`WRITER`,this.dbi.DATABASE_VENDOR,`_WRITE(${messageType})`,this.dbi.yadamu.ON_ERROR],e);
this.transactionManager.skipTable = true;
try {
await this.transactionManager.rollbackTransaction(e)
@@ -287,7 +286,7 @@
async _final(callback) {
// this.yadamuLogger.trace([this.constructor.name],'final()')
try {
if (this.mode === "DDL_ONLY") {
if (this.dbi.MODE === "DDL_ONLY") {
this.yadamuLogger.info([`${this.dbi.DATABASE_VENDOR}`],`DDL only export. No data written.`);
}
else {
@@ -300,7 +299,7 @@
await this.dbi.releasePrimaryConnection()
callback();
} catch (e) {
this.yadamuLogger.handleException([`WRITER`,this.dbi.DATABASE_VENDOR,`_FINAL(${this.currentTable})`,this.dbi.parameters.ON_ERROR],e);
this.yadamuLogger.handleException([`WRITER`,this.dbi.DATABASE_VENDOR,`_FINAL(${this.currentTable})`,this.dbi.yadamu.ON_ERROR],e);
// Passing the exception to callback triggers the onError() event
callback(e);
}
4 changes: 2 additions & 2 deletions app/YADAMU/common/defaultParser.js
@@ -5,8 +5,8 @@ const YadamuParser = require('./yadamuParser.js')

class DefaultParser extends YadamuParser {

constructor(tableInfo,objectMode,yadamuLogger) {
super(tableInfo,objectMode,yadamuLogger);
constructor(tableInfo,yadamuLogger) {
super(tableInfo,yadamuLogger);
}
}
