Background

@hntd187 fixed #1361 via #1378, but while reviewing that code I found several other places that project RecordBatches and Schemas and that may have the same subtle issue of losing metadata. I am not aware of any bugs caused by this yet, but I fear they are lurking.

The basic idea is to make functions like the following (which handle metadata correctly, following the pattern in #1361):

    let projected_schema = match &projection {
        Some(columns) => {
            let fields: Result<Vec<Field>> = columns
                .iter()
                .map(|i| {
                    if *i < schema.fields().len() {
                        Ok(schema.field(*i).clone())
                    } else {
                        Err(DataFusionError::Internal(
                            "Projection index out of range".to_string(),
                        ))
                    }
                })
                .collect();
            // Carry the original schema-level metadata over to the projected schema.
            Arc::new(Schema::new_with_metadata(fields?, schema.metadata().clone()))
        }
        None => Arc::clone(&schema),
    };
And then use such functions to replace the duplicated projection code that appears all over the DataFusion codebase (a sketch of what one such shared helper could look like follows below).
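As a rough sketch only (the name project_schema and its exact signature here are illustrative assumptions, not an existing DataFusion API), such a shared helper might look like this:

    use std::sync::Arc;

    use arrow::datatypes::{Field, Schema, SchemaRef};
    use datafusion::error::{DataFusionError, Result};

    /// Hypothetical shared helper: compute the output schema for an optional
    /// column projection, carrying the input schema's metadata over to the
    /// projected schema instead of silently dropping it.
    pub fn project_schema(
        schema: &SchemaRef,
        projection: Option<&[usize]>,
    ) -> Result<SchemaRef> {
        match projection {
            Some(columns) => {
                let fields: Result<Vec<Field>> = columns
                    .iter()
                    .map(|i| {
                        if *i < schema.fields().len() {
                            Ok(schema.field(*i).clone())
                        } else {
                            Err(DataFusionError::Internal(
                                "Projection index out of range".to_string(),
                            ))
                        }
                    })
                    .collect();
                // new_with_metadata keeps the schema-level key/value metadata.
                Ok(Arc::new(Schema::new_with_metadata(
                    fields?,
                    schema.metadata().clone(),
                )))
            }
            None => Ok(Arc::clone(schema)),
        }
    }

Each call site that currently repeats the match block above would then shrink to a single line such as let projected_schema = project_schema(&schema, projection.as_deref())?; (assuming the operator stores projection: Option<Vec<usize>>, as is common in the physical plan code).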
Additional context
Here is a corresponding arrow ticket to put the logic into arrow-rs: apache/arrow-rs#1014
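If that logic moves upstream, the RecordBatch side needs the same care. Below is a minimal sketch of the metadata-preserving behavior such an API would need, using a hypothetical free function project_record_batch (this is not the actual API proposed in apache/arrow-rs#1014, just an illustration):

    use std::sync::Arc;

    use arrow::array::ArrayRef;
    use arrow::datatypes::Schema;
    use arrow::error::{ArrowError, Result};
    use arrow::record_batch::RecordBatch;

    /// Hypothetical helper: project a RecordBatch to a subset of its columns
    /// while preserving the schema-level metadata of the original batch.
    pub fn project_record_batch(batch: &RecordBatch, indices: &[usize]) -> Result<RecordBatch> {
        let schema = batch.schema();

        // Collect the projected fields, with bounds checking.
        let fields = indices
            .iter()
            .map(|i| {
                if *i < schema.fields().len() {
                    Ok(schema.field(*i).clone())
                } else {
                    Err(ArrowError::SchemaError(format!(
                        "projection index {} out of bounds for schema with {} fields",
                        i,
                        schema.fields().len()
                    )))
                }
            })
            .collect::<Result<Vec<_>>>()?;

        // Keep the original metadata on the projected schema.
        let projected_schema = Arc::new(Schema::new_with_metadata(
            fields,
            schema.metadata().clone(),
        ));

        // Reuse the existing column arrays (cheap Arc clones, no data copy).
        let columns: Vec<ArrayRef> = indices.iter().map(|i| batch.column(*i).clone()).collect();

        RecordBatch::try_new(projected_schema, columns)
    }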