
Commit

Merge branch 'main' into yiming/commit-partial-table
wenym1 committed Aug 15, 2024
2 parents 4a067ed + cbeda4d commit deb33db
Showing 35 changed files with 516 additions and 93 deletions.
8 changes: 4 additions & 4 deletions Cargo.lock


2 changes: 1 addition & 1 deletion Cargo.toml
@@ -77,7 +77,7 @@ license = "Apache-2.0"
repository = "https://github.com/risingwavelabs/risingwave"

[workspace.dependencies]
foyer = { version = "0.10.1", features = ["nightly", "mtrace"] }
foyer = { version = "0.10.4", features = ["nightly", "mtrace"] }
apache-avro = { git = "https://github.com/risingwavelabs/avro", rev = "25113ba88234a9ae23296e981d8302c290fdaa4b", features = [
"snappy",
"zstandard",
2 changes: 1 addition & 1 deletion README.md
@@ -56,7 +56,7 @@
RisingWave is a Postgres-compatible SQL engine engineered to provide the <i><b>simplest</b></i> and <i><b>most cost-efficient</b></i> approach for <b>processing</b>, <b>analyzing</b>, and <b>managing</b> real-time event streaming data.

-![RisingWave](https://github.com/risingwavelabs/risingwave/assets/41638002/10c44404-f78b-43ce-bbd9-3646690acc59)
+![RisingWave](./docs/dev/src/images/architecture_20240814.png)

## When to use RisingWave?
RisingWave can ingest millions of events per second, continuously join live data streams with historical tables, and serve ad-hoc queries in real-time. Typical use cases include, but are not limited to:
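The use-case list is truncated in this view of the README. To make the claim above concrete, here is a minimal sketch of that workflow — ingest a stream, join it against a table, and serve the result with ad-hoc queries. The connector properties, topic, and schema are illustrative placeholders, not part of this commit:

-- Hypothetical example: stream ingestion + live join + ad-hoc serving.
CREATE SOURCE clicks (user_id INT, url VARCHAR, ts TIMESTAMP)
WITH (
    connector = 'kafka',
    topic = 'clicks',
    properties.bootstrap.server = 'localhost:9092'
) FORMAT PLAIN ENCODE JSON;

CREATE TABLE users (user_id INT PRIMARY KEY, name VARCHAR);

-- A continuously maintained join of the live stream with the table.
CREATE MATERIALIZED VIEW user_clicks AS
SELECT u.name, count(*) AS clicks
FROM clicks c JOIN users u ON c.user_id = u.user_id
GROUP BY u.name;

-- Served ad hoc, like any Postgres query.
SELECT * FROM user_clicks ORDER BY clicks DESC LIMIT 10;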
4 changes: 1 addition & 3 deletions ci/scripts/run-e2e-test.sh
@@ -90,9 +90,7 @@ echo "--- e2e, $mode, batch"
RUST_LOG="info,risingwave_stream=info,risingwave_batch=info,risingwave_storage=info" \
cluster_start
sqllogictest -p 4566 -d dev './e2e_test/ddl/**/*.slt' --junit "batch-ddl-${profile}" --label "can-use-recover"
if [[ "$mode" != "single-node" ]]; then
sqllogictest -p 4566 -d dev './e2e_test/background_ddl/basic.slt' --junit "batch-ddl-${profile}"
fi
sqllogictest -p 4566 -d dev './e2e_test/background_ddl/basic.slt' --junit "batch-ddl-${profile}"

if [[ $mode != "single-node" ]]; then
sqllogictest -p 4566 -d dev './e2e_test/visibility_mode/*.slt' --junit "batch-${profile}"
Binary file added docs/dev/src/images/architecture_20240814.png
195 changes: 195 additions & 0 deletions e2e_test/sink/license.slt
@@ -0,0 +1,195 @@
statement ok
SET RW_IMPLICIT_FLUSH TO true;

statement ok
ALTER SYSTEM SET license_key TO '';

statement ok
CREATE TABLE t (k INT);

statement error
CREATE SINK dynamodb_sink
FROM
t
WITH
(
connector = 'dynamodb',
table = 'xx',
primary_key = 'k',
region = 'xx',
access_key = 'xx',
secret_key = 'xx'
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: gRPC request to meta service failed: Internal error
2: failed to validate sink
3: Internal error
4: feature DynamoDbSink is only available for tier Paid and above, while the current tier is Free

Hint: You may want to set a license key with `ALTER SYSTEM SET license_key = '...';` command.


statement error
CREATE SINK snowflake_sink
FROM t
WITH (
connector = 'snowflake',
type = 'append-only',
force_append_only = 'true',
s3.bucket_name = 'xx',
s3.credentials.access = 'xx',
s3.credentials.secret = 'xx',
s3.region_name = 'xx',
s3.path = 'xx',
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: gRPC request to meta service failed: Internal error
2: failed to validate sink
3: Internal error
4: feature SnowflakeSink is only available for tier Paid and above, while the current tier is Free

Hint: You may want to set a license key with `ALTER SYSTEM SET license_key = '...';` command.


statement error
CREATE SINK opensearch_sink
FROM t
WITH (
connector = 'opensearch',
url = 'xx',
username = 'xx',
password = 'xx',
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: gRPC request to meta service failed: Internal error
2: failed to validate sink
3: feature OpenSearchSink is only available for tier Paid and above, while the current tier is Free

Hint: You may want to set a license key with `ALTER SYSTEM SET license_key = '...';` command.


statement error
CREATE SINK bigquery_sink
FROM
t
WITH
(
connector = 'bigquery',
type = 'append-only',
force_append_only='true',
bigquery.local.path= 'xx',
bigquery.project= 'xx',
bigquery.dataset= 'xx',
bigquery.table= 'xx'
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: gRPC request to meta service failed: Internal error
2: failed to validate sink
3: Internal error
4: feature BigQuerySink is only available for tier Paid and above, while the current tier is Free

Hint: You may want to set a license key with `ALTER SYSTEM SET license_key = '...';` command.


statement ok
ALTER SYSTEM SET license_key TO DEFAULT;

statement ok
flush;

statement error
CREATE SINK dynamodb_sink
FROM
t
WITH
(
connector = 'dynamodb',
table = 'xx',
primary_key = 'xx',
region = 'xx',
access_key = 'xx',
secret_key = 'xx'
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: Sink error
2: Sink primary key column not found: xx. Please use ',' as the delimiter for different primary key columns.


statement ok
CREATE SINK snowflake_sink
FROM t
WITH (
connector = 'snowflake',
type = 'append-only',
force_append_only = 'true',
s3.bucket_name = 'xx',
s3.credentials.access = 'xx',
s3.credentials.secret = 'xx',
s3.region_name = 'xx',
s3.path = 'xx',
);


statement error
CREATE SINK opensearch_sink
FROM t
WITH (
connector = 'opensearch',
url = 'xx',
username = 'xx',
password = 'xx',
index = 'xx',
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: gRPC request to meta service failed: Internal error
2: failed to validate sink
3: sink cannot pass validation: INTERNAL: Connection is closed


statement error
CREATE SINK bigquery_sink
FROM
t
WITH
(
connector = 'bigquery',
type = 'append-only',
force_append_only='true',
bigquery.local.path= 'xx',
bigquery.project= 'xx',
bigquery.dataset= 'xx',
bigquery.table= 'xx'
);
----
db error: ERROR: Failed to run the query

Caused by these errors (recent errors listed first):
1: gRPC request to meta service failed: Internal error
2: failed to validate sink
3: BigQuery error
4: No such file or directory (os error 2)


statement ok
DROP SINK snowflake_sink;

statement ok
DROP TABLE t;
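
How to read this test: with license_key set to the empty string the cluster runs at the Free tier, so each paid-tier sink fails license validation with a "feature ... is only available for tier Paid and above" error; after license_key is reset TO DEFAULT, the same statements get past licensing and instead hit ordinary connector errors (bad primary key, closed connection, missing file), which suggests the built-in default key in test builds grants the paid tier. In a real deployment the unlock step would follow the hint in the errors above — a hedged sketch, with placeholder key and credentials:

statement ok
ALTER SYSTEM SET license_key = '<your-license-jwt>';

# Assuming a valid key and reachable credentials, license validation no
# longer blocks the DDL; any remaining failure would be a genuine
# connector error rather than a licensing one.
statement ok
CREATE SINK dynamodb_sink FROM t WITH (
    connector = 'dynamodb',
    table = 'orders',
    primary_key = 'k',
    region = 'us-east-1',
    access_key = '<access-key>',
    secret_key = '<secret-key>'
);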
@@ -0,0 +1,80 @@
# Single phase approx percentile
statement ok
create table t(p_col double, grp_col int);

statement ok
insert into t select a, 1 from generate_series(-1000, 1000) t(a);

statement ok
flush;

query I
select
percentile_cont(0.01) within group (order by p_col) as p01,
min(p_col),
percentile_cont(0.5) within group (order by p_col) as p50,
count(*),
percentile_cont(0.99) within group (order by p_col) as p99
from t;
----
-980 -1000 0 2001 980

statement ok
create materialized view m1 as
select
approx_percentile(0.01, 0.01) within group (order by p_col) as p01,
min(p_col),
approx_percentile(0.5, 0.01) within group (order by p_col) as p50,
count(*),
approx_percentile(0.99, 0.01) within group (order by p_col) as p99
from t;

query I
select * from m1;
----
-982.5779489474152 -1000 0 2001 982.5779489474152

# Test state encode / decode
onlyif can-use-recover
statement ok
recover;

onlyif can-use-recover
sleep 10s

query I
select * from m1;
----
-982.5779489474152 -1000 0 2001 982.5779489474152

# Test 0<x<1 values
statement ok
insert into t select 0.001, 1 from generate_series(1, 500);

statement ok
insert into t select 0.0001, 1 from generate_series(1, 501);

statement ok
flush;

query I
select * from m1;
----
-963.1209598593477 -1000 0.00009999833511933609 3002 963.1209598593477

query I
select
percentile_cont(0.01) within group (order by p_col) as p01,
min(p_col),
percentile_cont(0.5) within group (order by p_col) as p50,
count(*),
percentile_cont(0.99) within group (order by p_col) as p99
from t;
----
-969.99 -1000 0.0001 3002 969.9899999999998

statement ok
drop materialized view m1;

statement ok
drop table t;
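
Reading the numbers above: approx_percentile takes the target quantile plus what reads here as a relative-error bound (0.01), so the approximate p99 of 982.578 should land within 1% of the exact p99 of 980 that percentile_cont reports — and it does, since |982.578 − 980| / 980 ≈ 0.26%. A quick hypothetical sanity check in the same slt style (not part of the committed test):

query B
SELECT abs((982.5779489474152 - 980.0) / 980.0) < 0.01;
----
t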
@@ -47,19 +47,6 @@ select * from m1;
----
-982.5779489474152 0 0 2001 982.5779489474152

# Test state encode / decode
onlyif can-use-recover
statement ok
recover;

onlyif can-use-recover
sleep 10s

query I
select * from m1;
----
-982.5779489474152 0 0 2001 982.5779489474152

# Test 0<x<1 values
statement ok
insert into t select 0.001, 1 from generate_series(1, 500);
3 changes: 3 additions & 0 deletions e2e_test/streaming/union.slt
@@ -152,6 +152,9 @@ Caused by:
Invalid input syntax: When CORRESPONDING is specified, at least one column of the left side shall have a column name that is the column name of some column of the right side in a UNION operation. Left side query column list: ("v1", "v2", "v4"). Right side query column list: ("vxx").


+statement ok
+drop table txx;

statement ok
drop table t1;
