Notify distributor when a new task added #678
Conversation
Force-pushed from 6326b85 to 01dd9e6
LGTM
@@ -76,6 +79,15 @@ def schedule_boot_jobs(self):
            LOG.error("Failed to initialize periodic tasks, reason: %s.",
                      six.text_type(e))
            raise e
        metrics_task_server = service. \
This subroutine is started only by the leader. If we keep metrics_task_service here, only the leader will have this service running.
Created a new service here to get a notification when a new job is created.
delfin/cmd/task.py
Outdated
@@ -46,18 +46,9 @@ def main():
        task_server = service.TaskService.create(binary='delfin-task',
                                                 coordination=True)
        leader_election = service.LeaderElectionService.create()
        metrics_task_server = service. \
A metric worker node need not start the resource task and alert task. If we start the new service inside the scheduler's start, we can't control the service start during multi-process spawn.
Reverted.
@@ -74,6 +74,16 @@ def __call__(self):
        else:
            LOG.debug("Periodic job distribution completed.")

    def distribute_new_job(self, task_id):
How about the function name 'assign_new_job'?
My thought is that the file is task_distributor, so I named this function with 'distribute'.
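The diff hunks in this thread show `distribute_new_job` handing the new task to an executor (currently `CONF.host`). A minimal sketch of that flow, where `assign_fn` is a hypothetical callback standing in for delfin's real task-assignment RPC:

```python
class TaskDistributor:
    """Sketch of the distributor discussed above.

    `assign_fn` and the `executor` default are hypothetical stand-ins;
    delfin's real distributor resolves the executor from configuration.
    """

    def __init__(self, assign_fn, executor="localhost"):
        self._assign = assign_fn    # hands a task id to an executor
        self._executor = executor   # stand-in for CONF.host

    def distribute_new_job(self, task_id):
        # Assign the newly created task to the chosen executor.
        self._assign(task_id, self._executor)
        return self._executor
```

Either name ('distribute' or 'assign') would fit this shape; only the method name changes.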
delfin/api/v1/storages.py
Outdated
@@ -109,8 +111,8 @@ def create(self, req, body):
        capabilities = self.driver_api.get_capabilities(
            context=ctxt, storage_id=storage['id'])
        validation.validate_capabilities(capabilities)
-       _create_performance_monitoring_task(ctxt, storage['id'],
-                                           capabilities)
+       self.perf_job_manager.create_perf_job(ctxt, storage['id'],
Any reason why we moved this function to the task manager? The API itself can create the task table entry. Why do we need to distribute task creation?
_create_performance_monitoring_task only does one thing: add the job to the db.
self.perf_job_manager.create_perf_job does two things: add the job to the db and put the task id on RabbitMQ.
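The two-step flow described above can be sketched with in-memory stand-ins. `FakeDb` and `queue.Queue` below are hypothetical placeholders, not delfin's real db API or RabbitMQ client:

```python
import queue


class FakeDb:
    """In-memory stand-in for delfin's task table (hypothetical)."""

    def __init__(self):
        self.rows = {}
        self._next = 0

    def task_create(self, values):
        self._next += 1
        self.rows[self._next] = values
        return self._next


class PerfJobManager:
    """Sketch of create_perf_job: persist the task, then push its id
    onto a queue so the distributor is notified immediately instead of
    waiting for a periodic db scan."""

    def __init__(self, db, mq):
        self.db = db
        self.mq = mq

    def create_perf_job(self, storage_id, capabilities):
        task_id = self.db.task_create({"storage_id": storage_id,
                                       "capabilities": capabilities})
        self.mq.put(task_id)  # notify the distributor right away
        return task_id
```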
Force-pushed from 8c34e9b to b2ac417
Codecov Report

@@              Coverage Diff               @@
##   perf_coll_fw_enhance    #678     +/-  ##
=============================================
-  Coverage       70.18%   70.11%   -0.07%
=============================================
   Files             159      161       +2
   Lines           14936    14976      +40
   Branches         1822     1822
=============================================
+  Hits            10483    10501      +18
-  Misses           3846     3868      +22
   Partials          607      607
Force-pushed from b2ac417 to 4bbd926
@@ -74,6 +74,17 @@ def __call__(self):
        else:
            LOG.debug("Periodic job distribution completed.")

    def distribute_new_job(self, task_id):
        executor = CONF.host
Getting the executor should be a function which generates the executor based on some algorithm. This applies to all references to CONF.host in the distributor. Are we planning that logic in a different PR?
It will be in another PR.
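One possible shape for such an executor-selection function is to hash the task id over the list of executors. This is only a sketch under assumed names (`pick_executor`, the `executors` list); the project later moved topic management to consistent hashing (#681):

```python
import hashlib


def pick_executor(task_id, executors):
    """Deterministically map a task id to one executor.

    Hypothetical replacement for the hard-coded CONF.host discussed
    above: the same task always lands on the same node, and tasks are
    spread across the executor list.
    """
    digest = hashlib.sha1(str(task_id).encode()).hexdigest()
    return executors[int(digest, 16) % len(executors)]
```

A consistent-hash ring would additionally minimize reassignment when executors join or leave, which a plain modulo does not.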
@@ -74,6 +74,17 @@ def __call__(self):
        else:
            LOG.debug("Periodic job distribution completed.")

    def distribute_new_job(self, task_id):
        executor = CONF.host
Getting the executor should be a function which generates the executor based on some algorithm. This applies to all references to CONF.host in the distributor. Are we planning that logic in a different PR?
Same as above
    RPC_API_VERSION = '1.0'

    def __init__(self):
        self.target = messaging.Target(topic='JobGenerator',
This can be put under metrics_manager, under the same topic as CONF.host.
Modified
        db.task_create(context=context, values=task)
        # Add it to RabbitMQ
        filters = {'storage_id': storage_id}
        task_id = db.task_get_all(context, filters=filters)[0].get('id')
Can the steps below be handled within metrics_rpc_api?
Modified
@@ -76,6 +79,15 @@ def schedule_boot_jobs(self):
            LOG.error("Failed to initialize periodic tasks, reason: %s.",
                      six.text_type(e))
            raise e
        job_generator = service. \
IMO, this service is not required. We can put the JobGenerator under metrics_manager itself.
This service should run only on the leader node, so I created a new service here.
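The leader-only start can be sketched as a simple gate. `is_leader` and `start_job_generator` below are hypothetical stand-ins for what delfin's LeaderElectionService actually provides:

```python
def schedule_boot_jobs(is_leader, start_job_generator):
    """Only the node that won leader election launches the
    JobGenerator service; followers skip it, so exactly one
    distributor-side service runs in the cluster.

    Both parameters are assumptions for this sketch: is_leader is a
    boolean from the election layer, start_job_generator launches the
    service and returns a handle.
    """
    if not is_leader:
        return None  # followers do not start the distributor service
    return start_job_generator()
```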
Force-pushed from 6b946f1 to 02f9b2f
Force-pushed from 02f9b2f to df0d160
LGTM
LGTM
* Make job scheduler local to task process (#674)
* Notify distributor when a new task added (#678)
* Remove db-scan for new task creation (#680)
* Use consistent hash to manage the topic (#681)
* Remove the periodically call from task distributor (#686)
* Start one historic collection immediate when a job is rescheduled (#685)
* Remove failed task distributor (#687)
* Improving Failed job handling and telemetry job removal (#689)

Co-authored-by: ThisIsClark <liuyuchibubao@gmail.com>
Co-authored-by: Ashit Kumar <akopensrc@gmail.com>
What this PR does / why we need it:
When a new performance task is created, notify the distributor to distribute it instead of scanning the db periodically.
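The push model this PR introduces can be sketched as a loop that blocks on a message queue instead of polling the database on a timer. `queue.Queue` here is a stand-in for RabbitMQ, and `handle` for the distributor's per-task work:

```python
import queue


def distributor_loop(mq, handle, stop):
    """Handle task ids as they arrive on the queue.

    Contrast with the old design, which woke up periodically and
    scanned the db for new rows. `mq`, `handle`, and `stop` are
    hypothetical stand-ins for illustration only.
    """
    while not stop():
        try:
            task_id = mq.get(timeout=0.1)
        except queue.Empty:
            continue  # nothing new yet; check the stop flag again
        handle(task_id)
```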
Which issue this PR fixes (optional, in
fixes #<issue number>(, fixes #<issue_number>, ...)
format, will close that issue when PR gets merged): fixes #

Special notes for your reviewer:

Release note: