
create & alter inconsistency with "on cluster" for ENGINE = Distributed #3268

Closed
den-crane opened this issue Oct 2, 2018 · 5 comments · Fixed by #9617
Labels
bug — Confirmed user-visible misbehaviour in official release
comp-dddl — Distributed DDL feature
easy task — Good for first contributors
st-accepted — The issue is in our backlog, ready to take

Comments

@den-crane
Contributor

den-crane commented Oct 2, 2018

<vscluster>
   <shard>
       <internal_replication>true</internal_replication>
       <replica><host>node1</host></replica>
       <replica><host>node2</host></replica>
   </shard>
</vscluster>

CREATE works! It creates the Distributed table on both replicas:
CREATE TABLE vs.t1_distrib ON CLUSTER vscluster (ts DATETIME,c1 VARCHAR) ENGINE=Distributed('vscluster','vs','t1')

ALTER does not work:
ALTER TABLE vs.t1_distrib ON CLUSTER vscluster ADD COLUMN c2 VARCHAR

Code: 371, e.displayText() = DB::Exception: Table t1_distrib isn't replicated, but shard #1 is replicated according to its cluster definition, e.what() = DB::Exception.

https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/clickhouse/BmsOrL2AMN4/eQnjeC1CAQAJ

@c3mb0

c3mb0 commented May 8, 2019

Is this ever going to be fixed?

@filimonov filimonov added comp-dddl Distributed DDL feature bug Confirmed user-visible misbehaviour in official release labels Sep 25, 2019
@cw9
Contributor

cw9 commented Sep 26, 2019

Hi, I'm hitting this issue as well, any plan to get this fixed?

@den-crane
Contributor Author

den-crane commented Sep 26, 2019

BTW, I have always used a workaround: define one more cluster that lists every node as its own dedicated shard (with internal_replication=false).
For that cluster, any DDL simply runs on all nodes.
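A minimal sketch of what that workaround cluster could look like, assuming the same two nodes (node1, node2) as in the cluster definition above; the cluster name vscluster_all_nodes is hypothetical:

<vscluster_all_nodes>
   <shard>
       <internal_replication>false</internal_replication>
       <replica><host>node1</host></replica>
   </shard>
   <shard>
       <internal_replication>false</internal_replication>
       <replica><host>node2</host></replica>
   </shard>
</vscluster_all_nodes>

With each node as a separate non-replicated shard, a statement like ALTER TABLE vs.t1_distrib ON CLUSTER vscluster_all_nodes ADD COLUMN c2 VARCHAR would execute once per node, so the replication consistency check that raises error 371 never applies.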

@alexey-milovidov alexey-milovidov added the st-accepted The issue is in our backlog, ready to take label Nov 11, 2019
@alexey-milovidov
Member

Our internal users also have faced this issue.

It should be trivial to fix: just skip the check for Distributed tables.

@nvartolomei
Contributor

Shouldn't we do the same for Null tables? Or just remove the check altogether?
