diff --git a/docs/spanner/snapshot-usage.rst b/docs/spanner/snapshot-usage.rst
index a23ff114c2fa4..ba31425a54b44 100644
--- a/docs/spanner/snapshot-usage.rst
+++ b/docs/spanner/snapshot-usage.rst
@@ -64,9 +64,24 @@ fails if the result set is too large,
 
 .. note::
 
-   If streaming a chunk fails due to a "resumable" error,
-   :meth:`Session.read` retries the ``StreamingRead`` API reqeust,
-   passing the ``resume_token`` from the last partial result streamed.
+   If streaming a chunk raises an exception, the application can
+   retry the ``read``, passing the ``resume_token`` from the
+   ``StreamingResultSet`` that raised the error. For example:
+
+   .. code:: python
+
+      result = snapshot.read(table, columns, keys)
+      while True:
+          try:
+              for row in result.rows:
+                  print(row)
+          except Exception:
+              result = snapshot.read(
+                  table, columns, keys, resume_token=result.resume_token)
+              continue
+          else:
+              break
+
 
 
 Execute a SQL Select Statement
@@ -97,6 +112,26 @@ fails if the result set is too large,
    manually, perform all iteration within the context of the
    ``with database.snapshot()`` block.
 
+.. note::
+
+   If streaming a chunk raises an exception, the application can
+   retry the query, passing the ``resume_token`` from the
+   ``StreamingResultSet`` that raised the error. For example:
+
+   .. code:: python
+
+      result = snapshot.execute_sql(QUERY)
+      while True:
+          try:
+              for row in result.rows:
+                  print(row)
+          except Exception:
+              result = snapshot.execute_sql(
+                  QUERY, resume_token=result.resume_token)
+              continue
+          else:
+              break
+
 
 Next Step
 ---------
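
A possible follow-up, not part of this patch: the retry loop added above can be factored into a small helper so applications do not repeat it for every query. The sketch below assumes only what the snippets in this patch show (``execute_sql`` accepting a ``resume_token`` keyword, and the ``rows`` and ``resume_token`` attributes on the streamed result set); ``collect_rows`` is a hypothetical name, and the bare ``except Exception`` mirrors the example, where a real application would normally narrow it to the transient errors it expects.

.. code:: python

   def collect_rows(snapshot, query):
       """Drain a streamed result set, resuming after a dropped stream.

       Minimal sketch of the retry pattern documented above; not part of
       the library API.
       """
       rows = []
       result = snapshot.execute_sql(query)
       while True:
           try:
               for row in result.rows:
                   rows.append(row)
           except Exception:
               # Re-issue the query, resuming from the last token received
               # rather than restarting the stream from the beginning.
               result = snapshot.execute_sql(
                   query, resume_token=result.resume_token)
               continue
           else:
               break
       return rows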