[2.34.0] - Unreleased

  • Add an example of deploying a Python Apache Beam job with a Spark cluster.

Highlights

  • New highly anticipated feature X added to Python SDK (BEAM-X).
  • New highly anticipated feature Y added to Java SDK (BEAM-Y).
  • The Beam Java API for Calcite SqlTransform is no longer experimental (BEAM-12680).

I/Os

  • Support for X source added (Java/Python) (BEAM-X).
  • ReadFromBigQuery and ReadAllFromBigQuery now run queries with BATCH priority by default. A new query_priority parameter on both transforms allows configuring the priority (Python) (BEAM-12913).
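
A minimal sketch (Python) of the new query_priority parameter; the project, table, and GCS path are hypothetical placeholders, and the 'INTERACTIVE' value is an assumption based on BigQuery's standard query priorities:

```python
# Sketch: overriding the new default BATCH priority (BEAM-12913).
import apache_beam as beam
from apache_beam.io.gcp.bigquery import ReadFromBigQuery

with beam.Pipeline() as p:
    rows = (p
            | ReadFromBigQuery(
                query='SELECT name FROM `my-project.my_dataset.users`',  # hypothetical
                use_standard_sql=True,
                gcs_location='gs://my-bucket/tmp',  # placeholder export location
                query_priority='INTERACTIVE'))      # default is now BATCH
```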

New Features / Improvements

  • X feature added (Java/Python) (BEAM-X).
  • Upgrade to Calcite 1.26.0 (BEAM-9379).

Breaking Changes

  • X behavior was changed (BEAM-X).
  • SQL Rows are no longer flattened (BEAM-5505).

Deprecations

  • X behavior is deprecated and will be removed in X versions (BEAM-X).

Bugfixes

  • Fixed X (Java/Python) (BEAM-X).
  • Fixed error while writing multiple DeferredFrames to csv (Python) (BEAM-12701).
  • Fixed top.SmallestPerKey implementation in the Go SDK (BEAM-12946).

Known Issues

[2.33.0] - Unreleased

Highlights

  • New highly anticipated feature X added to Python SDK (BEAM-X).
  • New highly anticipated feature Y added to Java SDK (BEAM-Y).
  • Go SDK is no longer experimental, and is officially part of the Beam release process.
    • Matching Go SDK containers are published on release.
    • Batch usage is well supported, and tested on Flink, Spark, and the Python Portable Runner.
      • SDK Tests are also run against Google Cloud Dataflow, but this doesn't indicate reciprocal support.
    • The SDK supports Splittable DoFns, Cross Language transforms, and most Beam Model basics.
    • Go Modules are now used for dependency management.
      • This is a breaking change, see Breaking Changes for resolution.
      • Easier path to contribute to the Go SDK; no need to set up a GOPATH.
      • The minimum Go version is now Go 1.16.
    • See the announcement blogpost for full information (TODO(lostluck): Add link once published.)

I/Os

  • Support for X source added (Java/Python) (BEAM-X).

New Features / Improvements

  • X feature added (Java/Python) (BEAM-X).
  • Upgrade Flink runner to Flink versions 1.13.2, 1.12.5 and 1.11.4 (BEAM-10955).

Breaking Changes

  • Python GBK will by default fail on unbounded PCollections that have global windowing and a default trigger. The --allow_unsafe_triggers flag can be used to override this; see the sketch after this list. (BEAM-9487).
  • Python GBK will fail if it detects an unsafe trigger unless the --allow_unsafe_triggers flag is set. (BEAM-9487).
  • Go SDK pipelines require new import paths to use this release due to migration to Go Modules.
    • go.mod files will need to change to require github.com/apache/beam/sdks/v2.
    • Code depending on beam imports needs to include v2 on the module path.
      • Fix by adding 'v2' to the import paths, turning .../sdks/go/... into .../sdks/v2/go/...
    • No other code change should be required to use v2.33.0 of the Go SDK.
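
A minimal sketch of the --allow_unsafe_triggers opt-out mentioned above; the pipeline body is a placeholder:

```python
# Sketch: opting out of the new unsafe-trigger check (BEAM-9487).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(['--allow_unsafe_triggers'])
with beam.Pipeline(options=options) as p:
    # A GBK over an unbounded, globally windowed PCollection with a
    # default trigger would fail without the flag above.
    pass
```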

Deprecations

  • X behavior is deprecated and will be removed in X versions (BEAM-X).

Known Issues

  • Fixed X (Java/Python) (BEAM-X).

[2.32.0] - 2021-08-25

Highlights

I/Os

  • New experimental Firestore connector in Java SDK, providing sources and sinks to Google Cloud Firestore (BEAM-8376).
  • Added ability to use JdbcIO.Write.withResults without statement and preparedStatementSetter. (BEAM-12511)
  • Added ability to register URI schemes to use the S3 protocol via FileIO. (BEAM-12435).
  • Respect number of shards set in SnowflakeWrite batch mode. (BEAM-12715)
  • Java SDK: Update Google Cloud Healthcare IO connectors from using v1beta1 to using the GA version.

New Features / Improvements

  • Add support to convert Beam Schema to Avro Schema for JDBC LogicalTypes: VARCHAR, NVARCHAR, LONGVARCHAR, LONGNVARCHAR, DATE, TIME (Java)(BEAM-12385).
  • Reading from JDBC source by partitions (Java) (BEAM-12456).
  • PubsubIO can now write to a dead-letter topic after a parsing error (Java) (BEAM-12474).
  • New append-only option for Elasticsearch sink (Java) (BEAM-12601).
  • DatastoreIO: Write and delete operations now follow automatic gradual ramp-up, in line with best practices (Java/Python) (BEAM-12260, BEAM-12272).

Breaking Changes

Deprecations

  • Python GBK will stop supporting unbounded PCollections that have global windowing and a default trigger in Beam 2.33. This can be overridden with --allow_unsafe_triggers. (BEAM-9487).
  • Python GBK will start requiring safe triggers or the --allow_unsafe_triggers flag starting with Beam 2.33. (BEAM-9487).

Bugfixes

  • Fixed race condition in RabbitMqIO causing duplicate acks (Java) (BEAM-6516).

[2.31.0] - 2021-07-08

I/Os

  • Fixed a bug in ReadFromBigQuery when a RuntimeValueProvider is used as the value of the table argument (Python) (BEAM-12514).

New Features / Improvements

  • CREATE FUNCTION DDL statement added to Calcite SQL syntax. JAR and AGGREGATE are now reserved keywords. (BEAM-12339).
  • Flink 1.13 is now supported by the Flink runner (BEAM-12277).
  • Python TriggerFn has a new may_lose_data method to signal potential data loss. The default behavior assumes the trigger is safe, which is necessary for backwards compatibility. See Deprecations for the potential impact of overriding this. (BEAM-9487).

Breaking Changes

  • Python Row objects are now sensitive to field order, so Row(x=3, y=4) is no longer considered equal to Row(y=4, x=3); see the sketch after this list (BEAM-11929).
  • Kafka Beam SQL tables now ascribe meaning to the LOCATION field; previously it was ignored if provided.
  • TopCombineFn no longer accepts compare as its argument (Python) (BEAM-7372).
  • Drop support for Flink 1.10 (BEAM-12281).
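
A minimal sketch of the Row equality change:

```python
# Sketch: Row equality is now sensitive to field order (BEAM-11929).
from apache_beam import Row

assert Row(x=3, y=4) == Row(x=3, y=4)  # same field order: still equal
assert Row(x=3, y=4) != Row(y=4, x=3)  # different order: no longer equal
```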

Deprecations

  • Python GBK will stop supporting unbounded PCollections that have global windowing and a default trigger in Beam 2.33. This can be overridden with --allow_unsafe_triggers. (BEAM-9487).
  • Python GBK will start requiring safe triggers or the --allow_unsafe_triggers flag starting with Beam 2.33. (BEAM-9487).

[2.30.0] - 2021-06-09

I/Os

  • Allow splitting apart document serialization and IO for ElasticsearchIO
  • Support Bulk API request size optimization through addition of ElasticsearchIO.Write.withStatefulBatches

New Features / Improvements

  • Added capability to declare resource hints in Java and Python SDKs (BEAM-2085); see the sketch after this list.
  • Added Spanner IO Performance tests for read and write. (Python) (BEAM-10029).
  • Added support for accessing GCP PubSub Message ordering keys, message IDs and message publish timestamp (Python) (BEAM-7819).
  • DataFrame API: Added support for collecting DataFrame objects in interactive Beam (BEAM-11855)
  • DataFrame API: Added apache_beam.examples.dataframe module (BEAM-12024)
  • Upgraded the GCP Libraries BOM version to 20.0.0 (BEAM-11205). For Google Cloud client library versions set by this BOM, see this table.
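
A minimal sketch of declaring a resource hint in the Python SDK; the with_resource_hints method and min_ram argument are assumptions about the BEAM-2085 API, and the hint value is a placeholder:

```python
# Sketch: attaching a resource hint to a transform (BEAM-2085).
# with_resource_hints/min_ram are assumed names; verify against your SDK.
import apache_beam as beam

with beam.Pipeline() as p:
    _ = (p
         | beam.Create([1, 2, 3])
         | beam.Map(lambda x: x * 2).with_resource_hints(min_ram='4GB'))
```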

Breaking Changes

  • Drop support for Flink 1.8 and 1.9 (BEAM-11948).
  • MongoDbIO: Read.withFilter() and Read.withProjection() have been removed; they had been deprecated since Beam 2.12.0 (BEAM-12217).
  • RedisIO.readAll() has been removed; it had been deprecated since Beam 2.13.0. Please use RedisIO.readKeyPatterns() for the equivalent functionality (BEAM-12214).
  • The MqttIO.create() constructor with clientId has been removed; it had been deprecated since Beam 2.13.0 (BEAM-12216).

[2.29.0] - 2021-04-29

Highlights

  • Spark Classic and Portable runners officially support Spark 3 (BEAM-7093).
  • Official Java 11 support for most runners (Dataflow, Flink, Spark) (BEAM-2530).
  • DataFrame API now supports GroupBy.apply (BEAM-11628).

I/Os

  • Added support for S3 filesystem on AWS SDK V2 (Java) (BEAM-7637)

New Features / Improvements

Breaking Changes

  • Deterministic coding is now enforced for GroupByKey and Stateful DoFns. Previously, non-deterministic coding was allowed, resulting in keys not being properly grouped in some cases (BEAM-11719). To restore the old behavior, register FakeDeterministicFastPrimitivesCoder with beam.coders.registry.register_fallback_coder(beam.coders.coders.FakeDeterministicFastPrimitivesCoder()) or use the allow_non_deterministic_key_coders pipeline option, as shown below.
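
A minimal sketch of the opt-out described above, using the exact registration call from the note:

```python
# Sketch: restoring the pre-2.29.0 non-deterministic key coding behavior
# (BEAM-11719), as described in the breaking-change note above.
import apache_beam as beam

beam.coders.registry.register_fallback_coder(
    beam.coders.coders.FakeDeterministicFastPrimitivesCoder())
```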

Deprecations

  • Support for Flink 1.8 and 1.9 will be removed in the next release (2.30.0) (BEAM-11948).

[2.28.0] - 2021-02-22

Highlights

I/Os

  • SpannerIO supports using BigDecimal for Numeric fields (BEAM-11643)
  • Add Beam schema support to ParquetIO (BEAM-11526)
  • Support ParquetTable Writer (BEAM-8202)
  • GCP BigQuery sink (streaming inserts) uses runner determined sharding (BEAM-11408)
  • PubSub now supports the types TIMESTAMP, DATE, TIME, and DATETIME (BEAM-11533).

New Features / Improvements

  • ParquetIO adds readGenericRecords and readFilesGenericRecords methods, which can read files with an unknown schema. See PR-13554 and BEAM-11460.
  • Added support for thrift in KafkaTableProvider (BEAM-11482)
  • Added support for HadoopFormatIO to skip key/value clone (BEAM-11457)
  • Support Conversion to GenericRecords in Convert.to transform (BEAM-11571).
  • Support writes for Parquet Tables in Beam SQL (BEAM-8202).
  • Support reading Parquet files with unknown schema (BEAM-11460)
  • Support user configurable Hadoop Configuration flags for ParquetIO (BEAM-11527)
  • Expose commit_offset_in_finalize and timestamp_policy in ReadFromKafka (BEAM-11677); see the sketch after this list.
  • Fixed S3 options not being provided to the boto3 client when using FlinkRunner and the Beam worker pool container (BEAM-11799).
  • Fixed HDFS not deduplicating identical configuration paths (BEAM-11329).
  • Hash Functions in BeamSQL (BEAM-10074)
  • Create ApproximateDistinct using HLL Impl (BEAM-10324)
  • Add a Deque Encoder (BEAM-11538)
  • Hash functions in ZetaSQL (BEAM-11624)
  • Refactor ParquetTableProvider.
  • Add JVM properties to JavaJobServer (BEAM-8344)
  • Single source of truth for supported Flink versions.
  • Use metric for Python BigQuery streaming insert API latency logging (BEAM-11018)
  • Use metric for Java BigQuery streaming insert API latency logging (BEAM-11032)
  • Upgrade Flink runner to Flink versions 1.12.1 and 1.11.3 (BEAM-11697)
  • Upgrade Beam base image to use Tensorflow 2.4.1 (BEAM-11762)
  • Create Beam GCP BOM (BEAM-11665)
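
A minimal sketch of the newly exposed ReadFromKafka parameters; the broker address and topic are placeholders, and the create_time_policy constant is an assumption:

```python
# Sketch: commit_offset_in_finalize and timestamp_policy (BEAM-11677).
from apache_beam.io.kafka import ReadFromKafka

read = ReadFromKafka(
    consumer_config={'bootstrap.servers': 'localhost:9092'},  # placeholder
    topics=['events'],                                        # placeholder
    commit_offset_in_finalize=True,
    timestamp_policy=ReadFromKafka.create_time_policy)  # assumed constant
```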

Breaking Changes

  • The Java artifacts "beam-sdks-java-io-kinesis", "beam-sdks-java-io-google-cloud-platform", and "beam-sdks-java-extensions-sql-zetasql" now declare a Guava 30.1-jre dependency (it was 25.1-jre in Beam 2.27.0). This new Guava version may introduce dependency conflicts if your project or dependencies rely on removed APIs. If affected, pin an appropriate Guava version via dependencyManagement in Maven or force in Gradle.

[2.27.0] - 2021-01-08

I/Os

  • ReadFromMongoDB can now be used with MongoDB Atlas (Python) (BEAM-11266).
  • ReadFromMongoDB/WriteToMongoDB will mask the password in display_data (Python) (BEAM-11444).
  • Support for X source added (Java/Python) (BEAM-X).
  • There is a new transform ReadAllFromBigQuery that can receive multiple requests to read data from BigQuery at pipeline runtime. See PR 13170 and BEAM-9650.

New Features / Improvements

  • Beam modules that depend on Hadoop are now tested for compatibility with Hadoop 3 (BEAM-8569). (Hive/HCatalog pending)
  • Publishing Java 11 SDK container images is now supported as part of the Apache Beam release process. (BEAM-8106)
  • Added Cloud Bigtable Provider extension to Beam SQL (BEAM-11173, BEAM-11373)
  • Added a schema provider for thrift data (BEAM-11338)
  • Added combiner packing pipeline optimization to Dataflow runner. (BEAM-10641)
  • Support for the Deque structure by adding a coder (BEAM-11538)

Breaking Changes

  • The HBaseIO hbase-shaded-client dependency must now be provided by users (BEAM-9278).
  • --region flag in amazon-web-services2 was replaced by --awsRegion (BEAM-11331).

[2.26.0] - 2020-12-11

Highlights

  • Splittable DoFn is now the default for executing the Read transform for Java based runners (Spark with bounded pipelines) in addition to the existing runners from the 2.25.0 release (Direct, Flink, Jet, Samza, Twister2). The expected output of the Read transform is unchanged. Users can opt out using --experiments=use_deprecated_read. The Apache Beam community is looking for feedback on this change, as it plans to make this change permanent with no opt-out. If you run into an issue requiring the opt-out, please send an e-mail to user@beam.apache.org specifically referencing BEAM-10670 in the subject line and why you needed to opt out. (Java) (BEAM-10670)

I/Os

  • Java BigQuery streaming inserts now have timeouts enabled by default. Pass --HTTPWriteTimeout=0 to revert to the old behavior. (BEAM-6103)
  • Added support for Contextual Text IO (Java), a version of text IO that provides metadata about the records (BEAM-10124). Support for this IO is currently experimental; specifically, there are no update-compatibility guarantees for streaming jobs with this IO between current and future versions of the Apache Beam SDK.

New Features / Improvements

  • Added support for avro payload format in Beam SQL Kafka Table (BEAM-10885)
  • Added support for json payload format in Beam SQL Kafka Table (BEAM-10893)
  • Added support for protobuf payload format in Beam SQL Kafka Table (BEAM-10892)
  • Added support for avro payload format in Beam SQL Pubsub Table (BEAM-5504)
  • Added option to disable unnecessary copying between operators in Flink Runner (Java) (BEAM-11146)
  • Added CombineFn.setup and CombineFn.teardown to the Python SDK. These methods let you initialize the CombineFn's state before any of its other methods are executed, and clean that state up afterwards. If you are using Dataflow, you need to enable Dataflow Runner V2 by passing --experiments=use_runner_v2 before using this feature. (BEAM-3736) See the sketch after this list.
  • Added support for NestedValueProvider for the Python SDK (BEAM-10856).
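
A minimal sketch of the new CombineFn.setup/teardown hooks; the client object stands in for any expensive resource:

```python
# Sketch: CombineFn.setup/teardown (BEAM-3736). setup() runs before any
# other CombineFn method; teardown() cleans the state up afterwards.
import apache_beam as beam

class SumWithClient(beam.CombineFn):
    def setup(self):
        self._client = object()  # stand-in for an expensive resource

    def create_accumulator(self):
        return 0

    def add_input(self, accumulator, element):
        return accumulator + element

    def merge_accumulators(self, accumulators):
        return sum(accumulators)

    def extract_output(self, accumulator):
        return accumulator

    def teardown(self):
        self._client = None  # release what setup() created
```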

Breaking Changes

  • BigQuery's DATETIME type now maps to Beam logical type org.apache.beam.sdk.schemas.logicaltypes.SqlTypes.DATETIME
  • Pandas 1.x is now required for dataframe operations.

Known Issues

  • Non-idempotent combiners built via CombineFn.from_callable() or CombineFn.maybe_from_callable() can lead to incorrect behavior. (BEAM-11522).

[2.25.0] - 2020-10-23

Highlights

  • Splittable DoFn is now the default for executing the Read transform for Java based runners (Direct, Flink, Jet, Samza, Twister2). The expected output of the Read transform is unchanged. Users can opt out using --experiments=use_deprecated_read. The Apache Beam community is looking for feedback on this change, as it plans to make this change permanent with no opt-out. If you run into an issue requiring the opt-out, please send an e-mail to user@beam.apache.org specifically referencing BEAM-10670 in the subject line and why you needed to opt out. (Java) (BEAM-10670)

I/Os

  • Added cross-language support to Java's KinesisIO, now available in the Python module apache_beam.io.kinesis (BEAM-10138, BEAM-10137).
  • Update Snowflake JDBC dependency for SnowflakeIO (BEAM-10864)
  • Added cross-language support to Java's SnowflakeIO.Write, now available in the Python module apache_beam.io.snowflake (BEAM-9898).
  • Added delete function to Java's ElasticsearchIO#Write. Now, Java's ElasticsearchIO can be used to selectively delete documents using withIsDeleteFn function (BEAM-5757).
  • Java SDK: Added new IO connector for InfluxDB - InfluxDbIO (BEAM-2546).
  • Config options added for Python's S3IO (BEAM-9094)

New Features / Improvements

  • Added support for repeatable fields in the JSON decoder for ReadFromBigQuery (Python) (BEAM-10524).
  • Added an opt-in, performance-driven runtime type checking system for the Python SDK (BEAM-10549). More details will be in an upcoming blog post.
  • Added support for Python 3 type annotations on PTransforms using typed PCollections (BEAM-10258). More details will be in an upcoming blog post.
  • Improved the Interactive Beam API: recording streaming jobs now starts a long-running background recording job, and running ib.show() or ib.collect() samples from the recording (BEAM-10603).
  • In Interactive Beam, ib.show() and ib.collect() now have "n" and "duration" parameters, meaning read only up to "n" elements and up to "duration" seconds of data read from the recording (BEAM-10603).
  • Initial preview of DataFrames support. See also the example at apache_beam/examples/wordcount_dataframe.py.
  • Fixed support for type hints on @ptransform_fn decorators in the Python SDK. (BEAM-4091) This is not enabled by default, to preserve backwards compatibility; use the --type_check_additional=ptransform_fn flag to enable it. It may be enabled by default in future versions of Beam; see the sketch below.
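
A minimal sketch of the flag and a type-hinted @ptransform_fn; the Stringify transform is hypothetical:

```python
# Sketch: type hints on a @ptransform_fn, checked only when
# --type_check_additional=ptransform_fn is set (BEAM-4091).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

@beam.ptransform_fn
@beam.typehints.with_input_types(int)
@beam.typehints.with_output_types(str)
def Stringify(pcoll):  # hypothetical example transform
    return pcoll | beam.Map(str)

options = PipelineOptions(['--type_check_additional=ptransform_fn'])
with beam.Pipeline(options=options) as p:
    _ = p | beam.Create([1, 2, 3]) | Stringify()
```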

Breaking Changes

  • Python 2 and Python 3.5 support dropped (BEAM-10644, BEAM-9372).
  • Pandas 1.x is now allowed. Older versions of Pandas may still be used, but may not be as well tested.

Deprecations

  • The Python transform ReadFromSnowflake has been moved from apache_beam.io.external.snowflake to apache_beam.io.snowflake; the previous path will be removed in future versions. See the sketch below.
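
A minimal sketch of the import-path change:

```python
# Sketch: the new import path for ReadFromSnowflake.
from apache_beam.io.snowflake import ReadFromSnowflake            # new path
# from apache_beam.io.external.snowflake import ReadFromSnowflake # deprecated
```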

Known Issues

  • Dataflow streaming timers are once again not strictly time ordered when set earlier mid-bundle, as the fix for BEAM-8543 introduced more severe bugs and has been rolled back.
  • Default compressor change breaks Dataflow Python streaming job update compatibility. Please use Python SDK version <= 2.23.0 or > 2.25.0 if job update is critical. (BEAM-11113)

[2.24.0] - 2020-09-18

Highlights

  • Apache Beam 2.24.0 is the last release with Python 2 and Python 3.5 support.

I/Os

  • New overloads for BigtableIO.Read.withKeyRange() and BigtableIO.Read.withRowFilter() methods that take ValueProvider as a parameter (Java) (BEAM-10283).
  • The WriteToBigQuery transform (Python) in Dataflow Batch no longer relies on BigQuerySink by default. It relies on a new, fully-featured transform based on file loads into BigQuery. To revert the behavior to the old implementation, you may use --experiments=use_legacy_bq_sink.
  • Add cross-language support to Java's JdbcIO, now available in the Python module apache_beam.io.jdbc (BEAM-10135, BEAM-10136).
  • Add support of AWS SDK v2 for KinesisIO.Read (Java) (BEAM-9702).
  • Add streaming support to SnowflakeIO in Java SDK (BEAM-9896)
  • Support reading and writing to Google Healthcare DICOM APIs in Python SDK (BEAM-10601)
  • Add dispositions for SnowflakeIO.write (BEAM-10343)
  • Add cross-language support to SnowflakeIO.Read now available in the Python module apache_beam.io.external.snowflake (BEAM-9897).

New Features / Improvements

  • Shared library for simplifying management of large shared objects added to the Python SDK, an example use case being sharing a large TF model object across threads (BEAM-10417); see the sketch after this list.
  • Dataflow streaming timers are not strictly time ordered when set earlier mid-bundle (BEAM-8543).
  • OnTimerContext no longer creates a new instance when processing each element/timer in FnApiDoFnRunner (BEAM-9839).
  • The key is now available in @OnTimer methods (Spark Runner) (BEAM-9850).
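
A minimal sketch of the new Shared library; load_model is a hypothetical stand-in for loading a large object such as a TF model:

```python
# Sketch: sharing one large object across threads (BEAM-10417).
import apache_beam as beam
from apache_beam.utils.shared import Shared

class PredictDoFn(beam.DoFn):
    def __init__(self, shared_handle):
        self._shared_handle = shared_handle

    def setup(self):
        def load_model():
            return {'weights': [0.0]}  # hypothetical expensive object
        # Threads on the same worker share the single loaded instance.
        self._model = self._shared_handle.acquire(load_model)

    def process(self, element):
        yield element, self._model['weights']
```

The DoFn would be constructed with a single handle, e.g. beam.ParDo(PredictDoFn(Shared())).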

Breaking Changes

  • WriteToBigQuery transforms now require a GCS location to be provided, either through custom_gcs_temp_location in the WriteToBigQuery constructor or through the fallback option --temp_location; alternatively, pass method="STREAMING_INSERTS" to WriteToBigQuery (BEAM-6928). See the sketch after this list.
  • Python SDK now understands typing.FrozenSet type hints, which are not interchangeable with typing.Set. You may need to update your pipelines if type checking fails. (BEAM-10197)
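
A minimal sketch of providing the now-required GCS location; the table spec and bucket path are placeholders:

```python
# Sketch: an explicit GCS temp location for WriteToBigQuery (BEAM-6928).
from apache_beam.io.gcp.bigquery import WriteToBigQuery

write = WriteToBigQuery(
    'my-project:my_dataset.my_table',                   # placeholder table
    custom_gcs_temp_location='gs://my-bucket/bq-temp')  # placeholder bucket
```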

Known Issues

  • When a timer fires but is reset prior to being executed, a watermark hold may be leaked, causing a stuck pipeline (BEAM-10991).
  • Default compressor change breaks Dataflow Python streaming job update compatibility. Please use Python SDK version <= 2.23.0 or > 2.25.0 if job update is critical. (BEAM-11113)

[2.23.0] - 2020-06-29

Highlights

I/Os

  • Support for reading from Snowflake added (Java) (BEAM-9722).
  • Support for writing to Splunk added (Java) (BEAM-8596).
  • Support for assume role added (Java) (BEAM-10335).
  • A new transform to read from BigQuery has been added: apache_beam.io.gcp.bigquery.ReadFromBigQuery. This transform is experimental. It reads data from BigQuery by exporting data to Avro files, and reading those files. It also supports reading data by exporting to JSON files. This has small differences in behavior for Time and Date-related fields. See Pydoc for more information.

New Features / Improvements

  • Update Snowflake JDBC dependency and add application=beam to connection URL (BEAM-10383).

Breaking Changes

  • RowJson.RowJsonDeserializer, JsonToRow, and PubsubJsonTableProvider now accept "implicit nulls" by default when deserializing JSON (Java) (BEAM-10220). Previously nulls could only be represented with explicit null values, as in {"foo": "bar", "baz": null}, whereas an implicit null like {"foo": "bar"} would raise an exception. Now both JSON strings will yield the same result by default. This behavior can be overridden with RowJson.RowJsonDeserializer#withNullBehavior.
  • Fixed a bug in GroupIntoBatches experimental transform in Python to actually group batches by key. This changes the output type for this transform (BEAM-6696).

Deprecations

  • Remove Gearpump runner. (BEAM-9999)
  • Remove Apex runner. (BEAM-9999)
  • RedisIO.readAll() is deprecated and will be removed in 2 versions, users must use RedisIO.readKeyPatterns() as a replacement (BEAM-9747).

Known Issues

  • Fixed X (Java/Python) (BEAM-X).

[2.22.0] - 2020-06-08

Highlights

I/Os

  • Basic Kafka read/write support for DataflowRunner (Python) (BEAM-8019).
  • Sources and sinks for Google Healthcare APIs (Java)(BEAM-9468).
  • Support for writing to Snowflake added (Java) (BEAM-9894).

New Features / Improvements

  • The --workerCacheMB flag is supported in Dataflow streaming pipelines (BEAM-9964)
  • --direct_num_workers=0 is supported for the FnApi runner. It will set the number of threads/subprocesses to the number of cores of the machine executing the pipeline (BEAM-9443).
  • Python SDK now has experimental support for SqlTransform (BEAM-8603); see the sketch after this list.
  • Add OnWindowExpiration method to Stateful DoFn (BEAM-1589).
  • Added PTransforms for Google Cloud DLP (Data Loss Prevention) services integration (BEAM-9723):
    • Inspection of data,
    • Deidentification of data,
    • Reidentification of data.
  • Add a more complete I/O support matrix in the documentation site (BEAM-9916).
  • Upgrade Sphinx to 3.0.3 for building PyDoc.
  • Added a PTransform for image annotation using Google Cloud AI image processing service (BEAM-9646)
  • Dataflow streaming timers are not strictly time ordered when set earlier mid-bundle (BEAM-8543).
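
A minimal sketch of the experimental SqlTransform mentioned above; beam.Row supplies the schema the SQL needs:

```python
# Sketch: experimental SqlTransform (BEAM-8603). As a cross-language
# transform it needs a Java expansion service available at runtime.
import apache_beam as beam
from apache_beam.transforms.sql import SqlTransform

with beam.Pipeline() as p:
    _ = (p
         | beam.Create([beam.Row(id=1, name='a'), beam.Row(id=2, name='b')])
         | SqlTransform("SELECT id, name FROM PCOLLECTION"))
```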

Breaking Changes

  • The Python SDK now requires --job_endpoint to be set when using --runner=PortableRunner (BEAM-9860). Users seeking the old default behavior should set --runner=FlinkRunner instead.

Deprecations

Known Issues

[2.21.0] - 2020-05-27

Highlights

I/Os

  • Python: The deprecated module apache_beam.io.gcp.datastore.v1 has been removed, as the client it uses is out of date and does not support Python 3 (BEAM-9529). Please migrate your code to use apache_beam.io.gcp.datastore.v1new. See the updated datastore_wordcount for example usage, and the import sketch after this list.
  • Python SDK: Added integration tests and updated batch write functionality for Google Cloud Spanner transform (BEAM-8949).
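
A minimal import sketch for the migration described above:

```python
# Sketch: moving from the removed v1 module to v1new (BEAM-9529).
from apache_beam.io.gcp.datastore.v1new.datastoreio import (
    ReadFromDatastore,
    WriteToDatastore,
)
```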

New Features / Improvements

  • Python SDK will now use Python 3 type annotations as pipeline type hints. (#10717)

    If you suspect that this feature is causing your pipeline to fail, calling apache_beam.typehints.disable_type_annotations() before pipeline creation will disable it completely, and decorating specific functions (such as process()) with @apache_beam.typehints.no_annotations will disable it for that function.

    More details will be in Ensuring Python Type Safety and an upcoming blog post.
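
    A minimal sketch of the two opt-outs just described:

    ```python
    # Sketch: the two opt-outs for annotation-based type hints (#10717).
    import apache_beam.typehints as typehints

    # Option 1: disable annotation-derived hints globally, before the
    # pipeline is created.
    typehints.disable_type_annotations()

    # Option 2: disable them for a single function.
    @typehints.no_annotations
    def to_label(element: int) -> str:  # hypothetical example function
        return str(element)
    ```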

  • Java SDK: Introducing the concept of options in Beam Schemas. These options add extra context to fields and schemas. This replaces the current Beam metadata that is present in a FieldType only; options are available in both fields and row schemas. Schema options are fully typed and can contain complex rows. Remark: schema awareness is still experimental. (BEAM-9035)

  • Java SDK: The protobuf extension is fully schema aware and also includes protobuf option conversion to Beam schema options. Remark: schema awareness is still experimental. (BEAM-9044)

  • Added ability to write to BigQuery via Avro file loads (Python) (BEAM-8841)

    By default, file loads will be done using JSON, but it is possible to specify the temp_file_format parameter to perform file exports with AVRO. Avro-based file loads work by exporting Python types into Avro types, so to switch to Avro-based loads you will need to change your data types from JSON-compatible types (string-typed dates and timestamps, long numeric values as strings) into Python native types that are written to Avro (Python's date and datetime types, decimal, etc). For more information see https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-avro#avro_conversions, and the sketch below.
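
    A minimal sketch of opting into Avro-based loads; the table spec and schema are placeholders:

    ```python
    # Sketch: Avro-based BigQuery file loads (BEAM-8841). Note the
    # Python-native date value, per the conversion guidance above.
    import datetime
    import apache_beam as beam
    from apache_beam.io.gcp.bigquery import WriteToBigQuery

    with beam.Pipeline() as p:
        _ = (p
             | beam.Create([{'event_date': datetime.date(2020, 5, 27),
                             'count': 3}])
             | WriteToBigQuery(
                 'my-project:my_dataset.events',      # placeholder table
                 schema='event_date:DATE,count:INTEGER',
                 temp_file_format='AVRO'))            # default remains JSON
    ```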

  • Added integration of Java SDK with Google Cloud AI VideoIntelligence service (BEAM-9147)

  • Added integration of Java SDK with Google Cloud AI natural language processing API (BEAM-9634)

  • The docker-pull-licenses tag was introduced. Licenses/notices of third-party dependencies will be added to the docker images when docker-pull-licenses is set. The files are added to /opt/apache/beam/third_party_licenses/. By default, no licenses/notices are added to the docker images. (BEAM-9136)

Breaking Changes

  • Dataflow runner now requires the --region option to be set, unless a default value is set in the environment (BEAM-9199). See here for more details.
  • HBaseIO.ReadAll now requires a PCollection of HBaseIO.Read objects instead of HBaseQuery objects (BEAM-9279).
  • ProcessContext.updateWatermark has been removed in favor of using a WatermarkEstimator (BEAM-9430).
  • Coder inference for PCollection of Row objects has been disabled (BEAM-9569).
  • Go SDK docker images are no longer released until further notice.

Deprecations

  • Java SDK: Beam Schema FieldType.getMetadata is now deprecated and replaced by Beam Schema Options; it will be removed in version 2.23.0. (BEAM-9704)
  • The --zone option in the Dataflow runner is now deprecated. Please use --worker_zone instead. (BEAM-9716)

Known Issues

[2.20.0] - 2020-04-15

Highlights

I/Os

  • Java SDK: Adds support for Thrift encoded data via ThriftIO. (BEAM-8561)
  • Java SDK: KafkaIO supports schema resolution using Confluent Schema Registry. (BEAM-7310)
  • Java SDK: Add Google Cloud Healthcare IO connectors: HL7v2IO and FhirIO (BEAM-9468)
  • Python SDK: Support for Google Cloud Spanner. This is an experimental module for reading and writing data from Google Cloud Spanner (BEAM-7246).
  • Python SDK: Adds support for standard HDFS URLs (with server name). (#10223).

New Features / Improvements

  • New AnnotateVideo & AnnotateVideoWithContext PTransforms that integrate GCP Video Intelligence functionality. (Python) (BEAM-9146)
  • New AnnotateImage & AnnotateImageWithContext PTransforms for element-wise & batch image annotation using Google Cloud Vision API. (Python) (BEAM-9247)
  • Added a PTransform for inspection and deidentification of text using Google Cloud DLP. (Python) (BEAM-9258)
  • New AnnotateText PTransform that integrates Google Cloud Natural Language functionality (Python) (BEAM-9248)
  • ReadFromBigQuery now supports value providers for the query string (Python) (BEAM-9305)
  • Direct runner for FnApi supports further parallelism (Python) (BEAM-9228)
  • Support for @RequiresTimeSortedInput in Flink and Spark (Java) (BEAM-8550)

Breaking Changes

  • ReadFromPubSub(topic=) in Python previously created a subscription under the same project as the topic. Now it will create the subscription under the project specified in pipeline_options. If the project is not specified in pipeline_options, then it will create the subscription under the same project as the topic. (BEAM-3453).
  • SpannerAccessor in Java is now package-private to reduce API surface. SpannerConfig.connectToSpanner has been moved to SpannerAccessor.create. (BEAM-9310).
  • The ParquetIO hadoop dependency should now be provided by users (BEAM-8616).
  • Docker images will be deployed to apache/beam repositories from 2.20. They used to be deployed to apachebeam repository. (BEAM-9063)
  • PCollections now have tags inferred from the result type (e.g. the keys of a dict or index of a tuple). Users may expect the old implementation which gave PCollection output ids a monotonically increasing id. To go back to the old implementation, use the force_generated_pcollection_output_ids experiment.

Deprecations

Bugfixes

  • Fixed numpy operators in ApproximateQuantiles (Python) (BEAM-9579).
  • Fixed exception when running in IPython notebook (Python) (BEAM-9277).
  • Fixed Flink uberjar job termination bug. (BEAM-9225)
  • Fixed SyntaxError in process worker startup (BEAM-9503)
  • The key is now available in @OnTimer methods (Java) (BEAM-1819).

Known Issues

[2.19.0] - 2020-01-31

  • For versions 2.19.0 and older, release notes are available on the Apache Beam Blog.