
Add filter-out history config option #4739

Merged: 2 commits merged into EOSIO:develop on Aug 4, 2018

Conversation

ScottSallinen
Contributor

Right now, filter-on is opt-in -- that is to say, you choose which contracts to include. This change adds a second filter option, filter-out: an action that matches filter-on is still dropped from history if it also matches filter-out.
filter-on and filter-out can be combined in multiple ways. For example, we could do the following:

filter-on = eosio.token:transfer:
filter-out = eosio.token:transfer:spammer

In this case, we can be sure to include all eosio.token history except for a specific spammer. The only way to achieve this with the current opt-in-only filter is to add a filter-on entry for every single actor that isn't the spammer.

Also allowed is filter-out = contract:action:, which could be combined with filter-on = * to store history for everything except a specific contract's action, or an even stricter filter-out = contract:: to disable all history for a specific contract.
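
Concretely, that opt-out style would look like this in the config (badcontract and badaction are placeholder names for illustration):

filter-on = *
filter-out = badcontract:badaction:

This keeps history for everything except badaction on badcontract; swapping in filter-out = badcontract:: would drop that contract's history entirely.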

These changes bring the filters more in line with the actor blacklists and whitelists, allowing both opt-in and opt-out behaviour.

More work will need to be done to make filters more customizable in the future, but this represents a stepping stone in that direction.
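
For intuition, here is a minimal sketch of the combined rule in C++. The type names, the empty-field-as-wildcard convention, and the helper functions are assumptions for illustration only -- this is not the plugin's actual code:

#include <string>
#include <vector>

// Hypothetical stand-ins for the plugin's internal types (names assumed).
struct filter_entry { std::string contract, action, actor; };
struct action_trace { std::string contract, action, actor; };

// Assume an empty field acts as a wildcard, mirroring the way
// "eosio.token:transfer:" matches any actor.
bool entry_matches( const filter_entry& f, const action_trace& a ) {
   return ( f.contract.empty() || f.contract == a.contract )
       && ( f.action.empty()   || f.action   == a.action )
       && ( f.actor.empty()    || f.actor    == a.actor );
}

bool any_match( const std::vector<filter_entry>& fs, const action_trace& a ) {
   for( const auto& f : fs )
      if( entry_matches( f, a ) ) return true;
   return false;
}

// The rule this PR adds: store history only when the action matches
// filter-on AND does not also match filter-out.
bool should_store_history( const action_trace& a,
                           const std::vector<filter_entry>& filter_on,
                           const std::vector<filter_entry>& filter_out ) {
   return any_match( filter_on, a ) && !any_match( filter_out, a );
}

Under this sketch, filter-on = * would parse to an all-wildcard entry, so the spammer example above reduces to the final boolean.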

@lukestokes

@ScottSallinen is this something your team is already running? Does additional test code need to be written to cover this addition? I see this is a really valuable addition and was curious about potentially testing it and including it in https://github.com/EOS-Mainnet/eos if needed, then letting this upstream repo bring it in if they agree with it. Not saying we need to move in that direction right now, but it's something I wanted to at least bring up for discussion. Thanks for your work on this.

@ScottSallinen
Contributor Author

This is something we've been running in production in our Greymass infrastructure for almost 2 weeks now.

In regards to testing -- yes, more tests are always good. Ideally we should duplicate the tests looking at filter-on and invert them to look at filter-out. However, the only reason for this would be to ensure future changes respect the behaviour introduced here, as the change itself is relatively trivial.
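
As a toy illustration of that inversion, reusing the should_store_history sketch from the description above (compiled together with it, and using plain asserts rather than the project's actual test harness):

#include <cassert>

int main() {
   action_trace spam{ "eosio.token", "transfer", "spammer" };
   std::vector<filter_entry> on { { "eosio.token", "transfer", "" } };
   std::vector<filter_entry> out{ { "eosio.token", "transfer", "spammer" } };

   assert(  should_store_history( spam, on, {} ) );   // existing filter-on behaviour
   assert( !should_store_history( spam, on, out ) );  // inverted: filter-out drops it
}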

It would be nice if this gets merged into EOSIO mainnet, but anyone wishing to use this change can simply pull it and apply it to their working / production copy (like we are doing).

If people would have listened to us 8 days ago, we would have prevented a lot of block explorers from crashing yesterday. Everyone changed to use our API since it was one of the only ones working at the time 😄

@ankh2054

ankh2054 commented Jul 31, 2018

We have been running this on production for a week now.
Both nodes have the filter applied, but only one was replayed.

Node1 - 74GB (after --replay)
Node2 - 64GB (no --replay)

Not sure it's working for us, but I'm wondering whether the filter-out needs to be below the filter-on. I am trying that on node2 and performing a replay.

@ScottSallinen
Contributor Author

ScottSallinen commented Jul 31, 2018

@ankh2054 two things:
a) Which filter-out entries have you applied?
b) What is the actual disk usage (du output) rather than the file size?

Location of the setting in the config does not matter, but you might need to think about how the two filters interact to ensure you're filtering the right things.

@ankh2054

ankh2054 commented Aug 1, 2018

@ScottSallinen - Worked it out in the end; it was actually working, and I was misinterpreting the memory usage. I was looking at VIRT instead of RES.

thanks

@heifner heifner changed the base branch from master to develop August 4, 2018 13:59
@heifner heifner merged commit 4d5e663 into EOSIO:develop Aug 4, 2018
@csquan

csquan commented Aug 22, 2018

Have these changes been merged in 1.2?

@heifner
Contributor

heifner commented Aug 22, 2018

Yes, but there is a fix for it in release/1.2.x; see #5306.
