
PX4 general work queue #8814

Closed
dagar opened this issue Feb 3, 2018 · 13 comments · Fixed by #11571

Comments

@dagar
Member

dagar commented Feb 3, 2018

  • pull the work queue into PX4 proper and improve it
    • this is already done for POSIX and QuRT, so we might as well generalize it
  • on NuttX, leave HPWORK and LPWORK alone
    • depending on the defconfig, NuttX may or may not use them at all, and will give them a much smaller stack
  • add a work queue per bus for drivers (SPI & I2C)
  • create a new work queue for high-priority PX4 periodic tasks like the land detector (aka vehicle model)
  • consider other modules that could be configured to run either in a work queue or as a task
  • build perf counters into the "PX4 work queue" API so that, at a glance, we can see what's running in each queue, at what frequency, and with what cycle execution time stats
  • simple C++ API (see the sketch after this list)
  • lazy init
  • queue work to run immediately, or schedule it to run once or periodically (like hrt_call_every)
  • see the dispatch queue for ideas: 3059b0c
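
To make the list above concrete, here is a minimal sketch of the kind of simple C++ API this could expose, assuming hypothetical names (`WorkQueue`, `queue()`, `items_run()`); it is illustrative only, not the interface PX4 ended up with. Periodic scheduling (hrt_call_every-style), lazy init of named per-bus queues, and richer perf counters would layer on top of this.

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class WorkQueue
{
public:
	WorkQueue() { _thread = std::thread([this] { run(); }); }

	~WorkQueue()
	{
		{
			std::lock_guard<std::mutex> lock(_mutex);
			_stop = true;
		}
		_cv.notify_one();
		_thread.join();
	}

	// Queue a work item to run as soon as the queue's thread is free.
	void queue(std::function<void()> item)
	{
		{
			std::lock_guard<std::mutex> lock(_mutex);
			_items.push(std::move(item));
		}
		_cv.notify_one();
	}

	// Simplest possible perf counter: number of items executed so far.
	unsigned items_run() const { return _items_run; }

private:
	void run()
	{
		std::unique_lock<std::mutex> lock(_mutex);

		while (true) {
			_cv.wait(lock, [this] { return _stop || !_items.empty(); });

			if (_stop && _items.empty()) {
				return;
			}

			auto item = std::move(_items.front());
			_items.pop();

			lock.unlock();
			item();		// execute the work item outside the lock
			++_items_run;
			lock.lock();
		}
	}

	std::mutex _mutex;
	std::condition_variable _cv;
	std::queue<std::function<void()>> _items;
	std::atomic<unsigned> _items_run{0};
	bool _stop{false};
	std::thread _thread;	// started last, once the other members exist
};
```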
@dagar
Member Author

dagar commented Sep 16, 2018

@bartslinger @dakejahl FYI

@dakejahl
Contributor

Nice! I should be ready for this mid week.

@bartslinger
Contributor

I'm thinking about the concept of an event queue as a more generalized version of a work queue. The events get processed by an event processor, which executes the work in a callback. Up to this point, the result is similar to a work queue.

However, an event queue can also take uORB events. The event processor can invoke a call to the right module. This way we can pull multiple modules (which are now running at the same priority level anyway) into a single thread.

An event queue can also accept timing events, which can be published from hrt timer interrupts at intervals configured with hrt_call_every(), for example.

It is also possible to have several event queues with different priorities in parallel, although a high-priority event will not preempt a lower-priority event processing function that is already running. If that is required, another event queue + processor can be added in another thread at a higher priority.
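
A rough sketch of how such an event queue might be represented, assuming hypothetical names (`Event`, `EventType`, `process_pending`); this is just one way to model the idea, not an existing PX4 type.

```cpp
#include <cstdint>
#include <functional>
#include <queue>

// The three event sources described above: plain work items, uORB
// publications, and hrt timer expirations.
enum class EventType { Work, OrbPublication, Timer };

struct Event {
	EventType type;
	std::function<void()> handler;	// module callback to invoke
	uint64_t timestamp_us{0};	// when the event was generated
};

// One processor per priority level: it pops events in order and invokes the
// handler, so modules that previously each owned a task at the same priority
// can share a single thread.
inline void process_pending(std::queue<Event> &events)
{
	while (!events.empty()) {
		Event ev = std::move(events.front());
		events.pop();

		if (ev.handler) {
			ev.handler();
		}
	}
}
```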

@dagar
Member Author

dagar commented Sep 18, 2018

Sounds good to me. The other piece of this I didn't mention above (perhaps the killer feature) was an orb mechanism for queuing work items on publish.

@dagar
Member Author

dagar commented Sep 18, 2018

> However, an event queue can also take uORB events. The event processor can invoke a call to the right module. This way we can pull multiple modules (which are now running at the same priority level anyway) into a single thread.

Minor update, but I believe we straightened out many of the priorities (https://github.com/PX4/Firmware/blob/master/src/platforms/px4_tasks.h) after following up with that AeroVinci PX4 blog post (http://px4.io/aerovinci-scalable-flexible-dronedock-system/).

@bartslinger
Contributor

So what I really meant to say was: this way we can pull multiple modules (whose code is now executed sequentially anyway) into a single thread.

@dagar
Member Author

dagar commented Sep 19, 2018

> This way we can pull multiple modules (whose code is now executed sequentially anyway) into a single thread

Absolutely!

@bartslinger
Contributor

I'm working on something that's going to look a bit like this. It is a work queue which can be linked to uORB topics.

First of all, uORB is extended to support callbacks on publications:
orb_register_callback_multi(ORB_ID(vehicle_rates_setpoint), 0, my_callback);

In practice, this is going to be encapsulated by the work queue client, which will behave something like this:

WorkQueueClient wq(RATE_Q); // HPWORK, LPWORK, RATE_Q, ATT_Q, I2C1_Q, ... etc.
wq.scheduleWork(once_off_work_callback, 500000); // 500ms from now
wq.scheduleRegularWork(regular_work_callback, 200000); // every 200ms
wq.registerTopic(ORB_ID(vehicle_rates_setpoint), 0, rate_sp_processor_function);

In the background, there is a WorkQueueManager object and some WorkQueue objects. Each WorkQueue runs at a different priority in a different thread or task.

Still very conceptual and WIP.
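
As a rough illustration of how a queue could track the entries created by scheduleWork() and scheduleRegularWork(), here is a sketch with purely hypothetical names (`ScheduledItem`, `run_due_items`); it is not the actual WIP code, just the one-shot vs. periodic bookkeeping described above.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// One entry per scheduleWork()/scheduleRegularWork() call: an absolute
// deadline plus an optional repeat interval (0 means one-shot).
struct ScheduledItem {
	std::function<void()> callback;
	uint64_t deadline_us;
	uint64_t interval_us;
};

// Run everything that is due at 'now_us'; one-shot items are disarmed,
// periodic items are re-armed for the next interval.
inline void run_due_items(std::vector<ScheduledItem> &items, uint64_t now_us)
{
	for (auto &item : items) {
		if (item.deadline_us != 0 && item.deadline_us <= now_us) {
			item.callback();
			item.deadline_us = (item.interval_us > 0) ? now_us + item.interval_us : 0;
		}
	}
}
```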

@bkueng
Member

bkueng commented Sep 25, 2018

> First of all, uORB is extended to support callbacks on publications

In which context is this going to be called? On the publishing task? If so, you defeat the purpose of uORB, which is passing messages between different tasks. If it's on the calling task, you're going to need an event loop (which is what we do with poll()).

Otherwise I like the concept of a general work queue if we can get an efficient implementation.

@bartslinger
Contributor

It will be called on publish(), so I understand your concern. However, it should only be used to schedule an item into a work queue. Maybe I should add the work queue id as an additional argument so that it cannot be abused:
orb_register_callback_multi(ORB_ID(vehicle_rates_setpoint), 0, my_callback, HPWORK);
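
For illustration, the registered callback might then reduce to nothing more than an enqueue, so the publisher's context only pays for pushing a work item. The names below (`enqueue_work`, `queue_id_t`, `on_rate_sp_published`) are hypothetical, not part of the proposal's actual code.

```cpp
// Hypothetical helper names for the sketch; only the idea matters here.
typedef enum { HPWORK_Q, LPWORK_Q, RATE_Q_ID } queue_id_t;

void enqueue_work(queue_id_t queue, void (*work_fn)(void));
void rate_sp_processor_function(void);

// The callback that orb_register_callback_multi() would invoke from inside
// orb_publish(): it must stay trivial so publishers are not penalized.
static void on_rate_sp_published(void)
{
	enqueue_work(RATE_Q_ID, rate_sp_processor_function);
}
```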

@bkueng
Member

bkueng commented Sep 25, 2018

I'm not sure if you want to build that into uORB then, or just add it to the work queue (since it's a specific use case). But let's see.
Did you consider using poll in the work queue? Basically, poll on all topics of interest, with the poll timeout set to the time of the next scheduled time-based event (if there is one), and do a usleep if there are no topics to wait for.
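
As a rough sketch of that poll-based variant: the topic choice, deadline handling and loop structure below are illustrative, and the uORB/px4_poll calls and header paths are used as I understand them from the Firmware tree of that era, not copied from any existing queue implementation.

```cpp
#include <drivers/drv_hrt.h>
#include <px4_posix.h>
#include <uORB/uORB.h>
#include <uORB/topics/vehicle_rates_setpoint.h>

void work_queue_poll_loop(uint64_t next_deadline_us)
{
	// Subscribe once at startup to every topic this queue is interested in.
	int sub = orb_subscribe(ORB_ID(vehicle_rates_setpoint));
	px4_pollfd_struct_t fds[1] = {};
	fds[0].fd = sub;
	fds[0].events = POLLIN;

	while (true) {
		// Block until a topic updates or the next scheduled item is due.
		const uint64_t now = hrt_absolute_time();
		const int timeout_ms = (next_deadline_us > now) ? (int)((next_deadline_us - now) / 1000) : 0;
		const int ret = px4_poll(fds, 1, timeout_ms);

		if (ret > 0 && (fds[0].revents & POLLIN)) {
			vehicle_rates_setpoint_s sp;
			orb_copy(ORB_ID(vehicle_rates_setpoint), sub, &sp);
			// ... run the work item registered for this topic ...
		}

		// ret == 0: poll timed out, so run the due time-based items here
		// (and recompute next_deadline_us). With no topics to wait for,
		// this degenerates into a plain sleep until the next deadline.
	}
}
```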

@dagar
Member Author

dagar commented Sep 25, 2018

The idea is that you want to register something such that every uORB publish results in a work item being added to the queue. We can register it once (at module "start") and skip the per-cycle poll setup, poll teardown, locking, etc.

The goal is to be able to use it for inherently serialized processes with minimal latency. For example, the rate controller and the output module (mixing + PWM) could share a thread in this new work queue. The rate controller's actuator_control uORB publication results in a px4fmu work item being inserted to run immediately. In that case they'd run back to back without a context switch and without the need for a task each.

@dagar dagar added this to the Release v2.0.0 milestone Sep 25, 2018
@bkueng
Member

bkueng commented Sep 26, 2018

> The rate controller's actuator_control uORB publication results in a px4fmu work item being inserted to run immediately. In that case they'd run back to back without a context switch and without the need for a task each.

Ok, sounds interesting. On the critical path we have 2 task switches that take at least 31 us each, so roughly 62 us of the 215 us control latency measured on the bench could be saved, assuming the work-queue implementation is efficient.
Looking at a recent log (Pixracer), the control latency is 483 us on average, which I thought was lower at some point.
