Filter tags on the fly - workaround for #168 #329
Conversation
private[dao] def perfectlyMatchTag(
    dbPublisher: DatabasePublisher[JournalRow],
    tag: String): Source[JournalRow, NotUsed] =
  Source.fromPublisher(dbPublisher).filter(_.tags.exists(tags => tags.split(",").contains(tag)))
I just realised that it's possible to change the tag separator and therefore the split must take that into consideration. I will change this.
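One subtlety worth noting once the separator becomes configurable (a sketch with hypothetical values, not code from this PR): `String.split` interprets its argument as a regex, so a separator containing a regex metacharacter such as `|` would need to be quoted with `Pattern.quote` to be matched literally.

```scala
import java.util.regex.Pattern

object SeparatorSketch extends App {
  // Hypothetical tags column using "|" as the configured separator
  val tags = "foo1|foo10"

  // Naive split: "|" is regex alternation, so it splits on every character
  val naive = tags.split("|").toList

  // Quoted split treats the separator as a literal string
  val quoted = tags.split(Pattern.quote("|")).toList

  println(naive)
  println(quoted) // List(foo1, foo10)
}
```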
good workaround
private[dao] def perfectlyMatchTag(
    dbPublisher: DatabasePublisher[JournalRow],
    tag: String): Source[JournalRow, NotUsed] = {
Doesn't matter, but this is conceptually a Flow, and could be defined as such.
I can refactor it. I don't have the reflex to think in terms of Flow. Will work on that. ;-)
This is good for another review round.
# The tag separator to use when tagging events with more than one tag.
# This property affects jdbc-journal.tagSeparator and jdbc-read-journal.tagSeparator.
tagSeparator = ","
This property was defined in two different places, but we should have only one; the write side and the read side must use the same value. jdbc-journal.tagSeparator and jdbc-read-journal.tagSeparator are now referencing this value.
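The single-definition approach described above could look like the following HOCON sketch, using substitution so both journals reference one shared key (the top-level key name here is hypothetical, not taken from this PR):

```hocon
# Shared separator, defined once (hypothetical key path)
akka-persistence-jdbc {
  tagSeparator = ","
}

jdbc-journal {
  tagSeparator = ${akka-persistence-jdbc.tagSeparator}
}

jdbc-read-journal {
  tagSeparator = ${akka-persistence-jdbc.tagSeparator}
}
```

With this layout, changing the separator in one place keeps the write and read side consistent by construction.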
private[dao] def perfectlyMatchTag(tag: String, separator: String) =
  Flow[JournalRow].filter(_.tags.exists(tags => tags.split(separator).contains(tag)))
@patriknw, refactored to a Flow.
It now takes into consideration the separator that is defined in the config file.
LGTM
Looks good
LGTM
As discussed here: #318
We plan to cut a release for users who can't immediately migrate to the upcoming 4.0.0 (because of schema migrations).
This is a quick fix for the most annoying bug we have. When using sharded tags (like in Lagom), the eventsByTag call may return events with unrelated tags. For instance, if you search for the tag "foo1", you also get events tagged "foo10". The workaround is to filter out any partial match, in memory. Once merged and released, this can immediately benefit existing Lagom users.
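The exact-match filtering described above can be sketched as a plain predicate, without the Akka Streams machinery (the `Row` case class is a hypothetical stand-in for `JournalRow`; the PR wraps the same predicate in a `Flow[JournalRow]`):

```scala
object TagFilterSketch extends App {
  // Hypothetical stand-in for JournalRow's optional comma-separated tags column
  final case class Row(tags: Option[String])

  // Keep a row only if one of its tags matches exactly, not as a substring
  def perfectlyMatchTag(row: Row, tag: String, separator: String): Boolean =
    row.tags.exists(_.split(separator).contains(tag))

  val rows = List(Row(Some("foo1")), Row(Some("foo10")), Row(Some("foo1,bar")), Row(None))

  // A SQL LIKE '%foo1%' query would return all three tagged rows;
  // the in-memory filter drops the partial match "foo10".
  val matched = rows.filter(perfectlyMatchTag(_, "foo1", ","))
  println(matched)
}
```

Searching for "foo1" keeps the rows tagged exactly "foo1" (alone or alongside other tags) and excludes the row tagged "foo10", which is the partial-match bug this PR works around.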