Statement parameter binding #87

Merged: 15 commits, merged Jun 4, 2019.
Changes shown are from 9 commits.
@@ -100,7 +100,8 @@ public Publisher<Void> createSavepoint(String s) {

@Override
public Statement createStatement(String sql) {
return new SpannerStatement(this.client, this.session, this.currentTransaction, sql);
return new SpannerStatement(this.client, this.session,
SpannerTransactionContext.from(this.currentTransaction), sql);
Member:

Why call SpannerTransactionContext.from here rather than in the SpannerStatement?
Or, why not replace the currentTransaction in SpannerConnection with a SpannerTransactionContext?

}

@Override
87 changes: 75 additions & 12 deletions src/main/java/com/google/cloud/spanner/r2dbc/SpannerStatement.java
@@ -16,13 +16,26 @@

package com.google.cloud.spanner.r2dbc;

import static java.util.Objects.requireNonNull;

import com.google.cloud.spanner.r2dbc.client.Client;
import com.google.cloud.spanner.r2dbc.codecs.Codec;
import com.google.cloud.spanner.r2dbc.codecs.Codecs;
import com.google.cloud.spanner.r2dbc.codecs.DefaultCodecs;
import com.google.cloud.spanner.r2dbc.result.PartialResultRowExtractor;
import com.google.protobuf.Struct;
import com.google.protobuf.Struct.Builder;
import com.google.spanner.v1.PartialResultSet;
import com.google.spanner.v1.Session;
import com.google.spanner.v1.Transaction;
import com.google.spanner.v1.Type;
import io.r2dbc.spi.Result;
import io.r2dbc.spi.Statement;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import javax.annotation.Nullable;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;
@@ -37,16 +50,23 @@ public class SpannerStatement implements Statement {

private Session session;

private Transaction transaction;
private SpannerTransactionContext transaction;

private String sql;

private LinkedList<Map<String, Object>> bindings = new LinkedList<>();
Contributor:

I think in practice ArrayList is typically preferred over LinkedList because of its better cache locality.


private Codecs codecs = new DefaultCodecs();

private Map<String, Type> types = Collections.EMPTY_MAP;

private Map<String, Codec> resolvedCodecs = new HashMap<>();

/**
* Creates a Spanner statement for a given SQL statement.
*
* <p>If no transaction is present, a temporary strongly consistent readonly transaction will be
* used.
*
* @param client cloud spanner client to use for performing the query operation
* @param session current cloud spanner session
* @param transaction current cloud spanner transaction, or empty if no transaction is started
@@ -55,45 +75,88 @@ public class SpannerStatement implements Statement {
public SpannerStatement(
Client client,
Session session,
@Nullable Transaction transaction,
@Nullable SpannerTransactionContext transaction,
String sql) {

this.client = client;
this.session = session;
this.transaction = transaction;
this.sql = sql;
add();
}

@Override
public Statement add() {
return null;
this.bindings.add(new HashMap<>());
return this;
}

@Override
public Statement bind(Object o, Object o1) {
return null;
public Statement bind(Object identifier, Object value) {
requireNonNull(identifier);
if (identifier instanceof String) {
this.bindings.getLast().put((String)identifier, value);
Contributor:

LinkedList.getLast() is efficient, but if we keep the in-progress/incomplete bindings map in a separate field, there is no need to fetch it, and no need to add an empty map in the constructor.
Then add() would append the incomplete map (which has just become complete) and start a new one.

Member:

I guess the only catch is that after completing the last one, you'd have to call add() just to put it into the completed list, or else deal with merging the completed and incomplete maps somewhere else.

Member:

Yeah, I think that might complicate things, especially since the user doesn't need to call add() to close/finalize the final one. That would require some code in execute() to add on this final map, which feels out of place in the execute() method.
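
A minimal sketch of the alternative being discussed, with the in-progress bindings kept in a separate field; the class and method names here are illustrative, not the final implementation:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: completed binding maps live in a list, while the map
// currently being bound lives in its own field until add() seals it.
class BindingsSketch {

  private final List<Map<String, Object>> completedBindings = new ArrayList<>();

  private Map<String, Object> currentBinding = new HashMap<>();

  BindingsSketch add() {
    this.completedBindings.add(this.currentBinding);
    this.currentBinding = new HashMap<>();
    return this;
  }

  BindingsSketch bind(String identifier, Object value) {
    this.currentBinding.put(identifier, value);
    return this;
  }

  // execute() would have to fold in a non-empty currentBinding here, which is
  // the "out of place" step objected to above.
  List<Map<String, Object>> bindingsForExecution() {
    List<Map<String, Object>> all = new ArrayList<>(this.completedBindings);
    if (!this.currentBinding.isEmpty()) {
      all.add(this.currentBinding);
    }
    return all;
  }
}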

return this;
}
throw new IllegalArgumentException("Only String identifiers are supported");
Contributor:

You could turn this into a helper on the Assert class, e.g. Assert.requireInstanceOf(..).
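
A possible shape for that helper, assuming an Assert utility class along the lines the comment implies (the signature is hypothetical):

final class Assert {

  private Assert() {
  }

  // Hypothetical helper: returns the value cast to the requested type, or
  // throws IllegalArgumentException with the supplied message.
  static <T> T requireInstanceOf(Object value, Class<T> type, String message) {
    if (!type.isInstance(value)) {
      throw new IllegalArgumentException(message);
    }
    return type.cast(value);
  }
}

bind(..) could then start with: String name = Assert.requireInstanceOf(identifier, String.class, "Only String identifiers are supported");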

}

@Override
public Statement bind(int i, Object o) {
return null;
throw new IllegalArgumentException("Only named parameters are supported");
Member:

Do we have an issue to track support for index-based binding?
I think UnsupportedOperationException is more appropriate here.

Contributor (Author):

I thought we decided that we don't want to support that, because it would require parsing of the SQL string.

Member:

Either way, let's track this as an issue.

Member:

#91

}

@Override
public Statement bindNull(Object o, Class<?> type) {
return null;
public Statement bindNull(Object identifier, Class<?> type) {
return bind(identifier, null);
}

@Override
public Statement bindNull(int i, Class<?> type) {
return null;
throw new IllegalArgumentException("Only named parameters are supported");
Member:

Ditto UnsupportedOperationException.

}

@Override
public Publisher<? extends Result> execute() {
List<Struct> paramsStructs = new ArrayList<>();
Member:

Is there anything wrong with building the Structs immediately in bind(), rather than first keeping them as maps and then converting them to Structs here?

for (Map<String, Object> bindingsBatch : this.bindings) {
Builder paramsStructBuilder = Struct.newBuilder();
Contributor:

Minor style thing: it would be better to qualify this Builder class as Struct.Builder.

Map<String, Type> types = this.types.isEmpty() ? new HashMap<>() : null;

for (Map.Entry<String, Object> binding : bindingsBatch.entrySet()) {
Contributor:

Would it make sense to put the logic in a helper method and call it from .add()? We are building up a map; we might as well be building up a list of Structs.

Contributor:

Yeah, I would recommend this approach too. You could keep a List<Struct.Builder> instead of a Map, which saves a layer of conversions; then just build them all when you are ready to execute the query.
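
A rough sketch of that variant, encoding into per-row Struct.Builder instances at bind time (illustrative class and method names; type tracking and identifier validation omitted):

import com.google.cloud.spanner.r2dbc.codecs.Codec;
import com.google.cloud.spanner.r2dbc.codecs.Codecs;
import com.google.cloud.spanner.r2dbc.codecs.DefaultCodecs;
import com.google.protobuf.Struct;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: each "row" of bindings is a Struct.Builder that is filled
// in bind() and only built when the query is executed.
class StructBindingsSketch {

  private final Codecs codecs = new DefaultCodecs();

  private final List<Struct.Builder> bindings = new ArrayList<>();

  StructBindingsSketch() {
    this.bindings.add(Struct.newBuilder());
  }

  StructBindingsSketch add() {
    this.bindings.add(Struct.newBuilder());
    return this;
  }

  StructBindingsSketch bind(String identifier, Object value) {
    Codec codec = this.codecs.getCodec(value);
    this.bindings.get(this.bindings.size() - 1)
        .putFields(identifier, codec.encode(value));
    return this;
  }

  // execute() then only needs to build the accumulated builders.
  List<Struct> buildParams() {
    return this.bindings.stream()
        .map(Struct.Builder::build)
        .collect(Collectors.toList());
  }
}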

String paramName = binding.getKey();
Codec codec = this.resolvedCodecs.computeIfAbsent(paramName,
Contributor:

The same binding for all rows should be of the same type, right? Can we save time by only doing this on the first add()?

Member:

How do we know that all rows will use the same type for a particular column? Is that the contract in R2DBC?
Frankly, I would try to avoid this resolvedCodecs caching altogether. If anything, Codecs itself can cache based on type instead. There doesn't seem to be an advantage to caching here vs in the Codecs class.

Member:

These aren't actually columns, if I'm not mistaken. These are param tags in the query string, which means they don't need to be the same type. For example, in GoogleSQL you can get the size in bytes of either a STRING or a BYTES value, so a single tag could take on either type.

Furthermore, I think the codec can change too. For example, if the tag in the SQL is something like where 100 < @tag, couldn't two different codecs convert either a long or a string into a number to fill that tag?

name -> this.codecs.getCodec(binding.getValue()));
paramsStructBuilder
.putFields(paramName, codec.encode(binding.getValue()));

if (this.types.isEmpty()) {
types.put(paramName,
Type.newBuilder().setCode(codec.getTypeCode()).build());
}

}

if (types != null) {
this.types = types;
}

paramsStructs.add(paramsStructBuilder.build());
}

Flux<Struct> structFlux = Flux.fromIterable(paramsStructs);

if (this.sql != null && this.sql.trim().toLowerCase().startsWith("select")) {
Contributor:

Could you add an assertion to the constructor that sql is non-null? A statement with null SQL would not be valid, so we should fail early on that.

Contributor:

I know it's trivial logic, but could the second half of the condition go into an isSelectQuery helper method?
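
A small sketch combining both suggestions, failing fast on null SQL in the constructor and extracting the SELECT check into a helper (names and placement are illustrative):

import static java.util.Objects.requireNonNull;

import java.util.Locale;

final class SqlPreconditionsSketch {

  private final String sql;

  SqlPreconditionsSketch(String sql) {
    // Fail early: a statement with null SQL is never valid.
    this.sql = requireNonNull(sql, "sql must not be null");
  }

  // execute() then reads as: isSelectQuery(this.sql) ? flatMap(...) : concatMapDelayError(...)
  static boolean isSelectQuery(String sql) {
    return sql.trim().toLowerCase(Locale.ROOT).startsWith("select");
  }
}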

return structFlux.flatMap(this::runSingleStatement);
}
return structFlux.concatMapDelayError(this::runSingleStatement);
elefeint marked this conversation as resolved.
}

private Mono<? extends Result> runSingleStatement(Struct params) {
PartialResultRowExtractor partialResultRowExtractor = new PartialResultRowExtractor();

return this.client.executeStreamingSql(this.session, this.transaction, this.sql)
return this.client
.executeStreamingSql(this.session, this.transaction, this.sql, params, this.types)
.switchOnFirst((signal, flux) -> {
if (signal.hasError()) {
return Mono.error(signal.getThrowable());
@@ -0,0 +1,55 @@
/*
* Copyright 2019 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package com.google.cloud.spanner.r2dbc;

import com.google.spanner.v1.Transaction;
import java.util.concurrent.atomic.AtomicLong;
import javax.annotation.Nullable;

/**
* A class to hold transaction-related data.
*/
public class SpannerTransactionContext {

private Transaction transaction;

private AtomicLong seqNum = new AtomicLong(0);

private SpannerTransactionContext(Transaction transaction) {
this.transaction = transaction;
}

public Transaction getTransaction() {
return this.transaction;
}

public long nextSeqNum() {
return this.seqNum.getAndIncrement();
}

/**
* Creates the SpannerTransactionContext.
* @param transaction spanner transaction
* @return spanner transaction context, or null if the given transaction is null
*/
public static @Nullable SpannerTransactionContext from(Transaction transaction) {
if (transaction == null) {
return null;
}
return new SpannerTransactionContext(transaction);
}
}
12 changes: 11 additions & 1 deletion src/main/java/com/google/cloud/spanner/r2dbc/client/Client.java
@@ -16,10 +16,14 @@

package com.google.cloud.spanner.r2dbc.client;

import com.google.cloud.spanner.r2dbc.SpannerTransactionContext;
import com.google.protobuf.Struct;
import com.google.spanner.v1.CommitResponse;
import com.google.spanner.v1.PartialResultSet;
import com.google.spanner.v1.Session;
import com.google.spanner.v1.Transaction;
import com.google.spanner.v1.Type;
import java.util.Map;
import javax.annotation.Nullable;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
@@ -71,7 +75,13 @@ public interface Client {
* Execute a streaming query and get partial results.
*/
Flux<PartialResultSet> executeStreamingSql(
Session session, @Nullable Transaction transaction, String sql);
Session session, @Nullable SpannerTransactionContext transaction, String sql, Struct params,
Map<String, Type> types);

default Flux<PartialResultSet> executeStreamingSql(
Session session, @Nullable SpannerTransactionContext transaction, String sql) {
return executeStreamingSql(session, transaction, sql, null, null);
}

/**
* Release any resources held by the {@link Client}.
@@ -17,9 +17,11 @@
package com.google.cloud.spanner.r2dbc.client;

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.spanner.r2dbc.SpannerTransactionContext;
import com.google.cloud.spanner.r2dbc.util.ObservableReactiveUtil;
import com.google.common.annotations.VisibleForTesting;
import com.google.protobuf.Empty;
import com.google.protobuf.Struct;
import com.google.spanner.v1.BeginTransactionRequest;
import com.google.spanner.v1.CommitRequest;
import com.google.spanner.v1.CommitResponse;
@@ -35,10 +37,12 @@
import com.google.spanner.v1.TransactionOptions;
import com.google.spanner.v1.TransactionOptions.ReadWrite;
import com.google.spanner.v1.TransactionSelector;
import com.google.spanner.v1.Type;
import io.grpc.CallCredentials;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.auth.MoreCallCredentials;
import java.util.Map;
import javax.annotation.Nullable;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
@@ -155,20 +159,27 @@ public Mono<Void> deleteSession(Session session) {
});
}

// TODO: add information about parameters being added to signature
@Override
public Flux<PartialResultSet> executeStreamingSql(
Session session, @Nullable Transaction transaction, String sql) {
Session session, @Nullable SpannerTransactionContext transactionContext, String sql,
Struct params, Map<String, Type> types) {

return Flux.defer(() -> {
ExecuteSqlRequest.Builder executeSqlRequest =
ExecuteSqlRequest.newBuilder()
.setSql(sql)
.setSession(session.getName());
if (params != null) {
executeSqlRequest
.setParams(params)
.putAllParamTypes(types);
}

if (transaction != null) {
if (transactionContext != null && transactionContext.getTransaction() != null) {
Contributor:

I think you should just add an Assert.notNull(transaction) in the constructor of TransactionContext, and then simplify this to checking only transactionContext != null. That way future callers will not have to worry about doing two null checks for transactions.
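
A sketch of that simplification: the factory still maps null to null, but the constructor itself rejects null, so a single transactionContext != null check suffices at the call site (illustrative, using Objects.requireNonNull in place of the project's Assert helper):

import com.google.spanner.v1.Transaction;
import java.util.Objects;

final class TransactionContextSketch {

  private final Transaction transaction;

  private TransactionContextSketch(Transaction transaction) {
    // Constructor-level null check, as suggested above.
    this.transaction = Objects.requireNonNull(transaction, "transaction must not be null");
  }

  static TransactionContextSketch from(Transaction transaction) {
    return transaction == null ? null : new TransactionContextSketch(transaction);
  }

  Transaction getTransaction() {
    return this.transaction;
  }
}

The branch in executeStreamingSql below would then reduce to a single if (transactionContext != null) check.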

executeSqlRequest.setTransaction(
TransactionSelector.newBuilder().setId(transaction.getId()).build());
TransactionSelector.newBuilder().setId(transactionContext.getTransaction().getId())
.build());
executeSqlRequest.setSeqno(transactionContext.nextSeqNum());
}

return ObservableReactiveUtil.streamingCall(
@@ -18,9 +18,10 @@

import com.google.protobuf.Value;
import com.google.spanner.v1.Type;
import com.google.spanner.v1.TypeCode;
import reactor.util.annotation.Nullable;

interface Codec<T> {
public interface Codec<T> {

/**
* Indicates if the codec can decode a value.
@@ -80,4 +81,5 @@ interface Codec<T> {
*/
Class<?> type();

TypeCode getTypeCode();
}
@@ -46,4 +46,7 @@ public interface Codecs {
* @throws NullPointerException if {@code value} is {@code null}
*/
Value encode(Object value);

Codec getCodec(Object value);

}
@@ -99,17 +99,22 @@ public <T> T decode(Value value, Type spannerType, Class<? extends T> type) {
}

@Override
public Value encode(Object value) {
if (value == null) {
return NULL_VALUE;
}
public Codec getCodec(Object value) {
for (Codec<?> codec : this.codecs) {
if (codec.canEncode(value)) {
return codec.encode(value);
return codec;
}
}

throw new IllegalArgumentException(
String.format("Cannot encode parameter of type %s", value.getClass().getName()));
String.format("Cannot encode parameter of type %s", value.getClass().getName()));
}

@Override
public Value encode(Object value) {
if (value == null) {
return NULL_VALUE;
}
Codec codec = getCodec(value);
return codec.encode(value);
}
}
@@ -82,6 +82,11 @@ private boolean doCanDecode(Type dataType) {
return dataType.getCode() == this.typeCode;
}

@Override
public TypeCode getTypeCode() {
return this.typeCode;
}

T doDecode(Value value, Type spannerType, Class<? extends T> type) {
return (T) ValueUtils.decodeValue(spannerType, value);
}