lazy_static rhai Engine #112

Closed · wants to merge 4 commits
54 changes: 51 additions & 3 deletions benches/benchmark.rs
@@ -1,10 +1,9 @@
#![feature(test)]
use casbin::prelude::*;

extern crate test;

use test::Bencher;

use casbin::prelude::*;

fn await_future<F, T>(future: F) -> T
where
F: std::future::Future<Output = T>,
@@ -479,3 +478,52 @@ fn b_benchmark_cached_priority_model(b: &mut Bencher) {

    b.iter(|| await_future(e.enforce(&["alice", "data1", "read"])).unwrap());
}

#[allow(dead_code)]
mod task {
    use std::future::Future;
    use std::pin::Pin;
    use std::ptr;
    use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

    const RAW_WAKER: RawWaker = RawWaker::new(ptr::null(), &VTABLE);
    const VTABLE: RawWakerVTable = RawWakerVTable::new(clone, wake, wake_by_ref, drop);

    unsafe fn clone(_: *const ()) -> RawWaker {
        RAW_WAKER
    }

    unsafe fn wake(_: *const ()) {}

    unsafe fn wake_by_ref(_: *const ()) {}

    unsafe fn drop(_: *const ()) {}

    pub fn create() -> Waker {
        // Safety: The waker points to a vtable with functions that do nothing. Doing
        // nothing is always safe.
        unsafe { Waker::from_raw(RAW_WAKER) }
    }

    /// Using this tests only the code that is polled.
    ///
    /// `block_on` creates no runtime overhead so a more accurate benchmark of just the code
    /// can be collected.
    pub fn block_on<F, T>(mut future: F) -> T
    where
        F: Future<Output = T>,
    {
        // Safety: since we own the future no one can move any part of it but us, and we won't.
        let mut fut = unsafe { Pin::new_unchecked(&mut future) };
        let waker = create();
        let mut ctx = Context::from_waker(&waker);
        loop {
            if let Poll::Ready(res) = fut.as_mut().poll(&mut ctx) {
                return res;
            }
            // TODO since criterion is single threaded simply looping seems ok
            // burning cpu for a simpler function seems fair
            // possible `std::sync::atomic::spin_loop_hint` here.
        }
    }
}
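A quick usage sketch (not part of the diff), assuming the task module above is in scope: block_on simply polls the future on the current thread with the no-op waker until it is ready.

```rust
// Minimal sketch: drive an async block to completion with the busy-poll
// executor defined in the `task` module above.
fn main() {
    let answer = task::block_on(async { 40 + 2 });
    assert_eq!(answer, 42);
}
```

Inside a bench it would take the place of the await_future helper, e.g. b.iter(|| task::block_on(e.enforce(&["alice", "data1", "read"])).unwrap()).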
7 changes: 6 additions & 1 deletion src/enforcer.rs
@@ -12,6 +12,7 @@ use crate::{
};

use async_trait::async_trait;
use lazy_static::lazy_static;
use rhai::{Array, Engine, RegisterFn, Scope};

use std::{
@@ -223,11 +224,15 @@ impl CoreApi for Enforcer {
    /// fn main() {}
    /// ```
    async fn enforce<S: AsRef<str> + Send + Sync>(&mut self, rvals: &[S]) -> Result<bool> {
        lazy_static! {
            static ref ENGINE: Arc<RwLock<Engine<'static>>> = Arc::new(RwLock::new(Engine::new()));
        }

        if !self.enabled {
            return Ok(true);
        }

        let mut engine = Engine::new();
        let mut engine = ENGINE.write().unwrap();
Contributor:

The original version of enforce is lock-free. Won't this change keep enforce from handling concurrent requests, since every request has to wait for the write lock?

Contributor Author:

I'm not sure it matters; I can't get it to panic, even using RwLock::try_write, either in a spawned async context or on a std thread. The only way to pass the Enforcer to a thread or to async_std::spawn is to wrap it in some kind of synchronization. I'm pushing another commit mostly to show the test I wrote for this. If you can think of any other way to push this into concurrent execution, it would be great to add it to the test.

This also reminds me that I was wondering about using async_std::sync::RwLock/Mutex instead of std's RwLock/Mutex. There is a related discussion about this here.
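A minimal sketch of the kind of concurrent test being described (an assumption, not code from this PR): the enforcer is shared across async_std tasks behind an async-aware lock. make_enforcer is a hypothetical stand-in for the model/policy setup, and this assumes Enforcer is Send + Sync and that async_std's sync primitives are available as a dev-dependency.

```rust
use std::sync::Arc;

use async_std::sync::RwLock;
use async_std::task;
use casbin::prelude::*;

// Hypothetical helper: build an Enforcer from a model and policy file the
// same way the existing benches/tests do.
async fn make_enforcer() -> Enforcer {
    unimplemented!("construct an Enforcer here")
}

fn main() {
    task::block_on(async {
        let shared = Arc::new(RwLock::new(make_enforcer().await));

        let handles: Vec<_> = (0..8)
            .map(|_| {
                let shared = Arc::clone(&shared);
                task::spawn(async move {
                    // `enforce` takes `&mut self` in this version, so each
                    // task needs the write half of the lock; the async-aware
                    // guard can be held across the `.await`.
                    shared
                        .write()
                        .await
                        .enforce(&["alice", "data1", "read"])
                        .await
                        .unwrap()
                })
            })
            .collect();

        for handle in handles {
            handle.await;
        }
    });
}
```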

Member (@GopherJ, Apr 14, 2020):

If we only ever have one enforcer, it's OK to use lazy_static, but if users create multiple enforcers it will cause problems, because they will all share the same Engine, and the more enforcers we have, the worse the result gets. This is also why I removed the lazy_static event emitter from the code.

@xcaptain @DevinR528 I think an inline engine and scope would improve the benchmark by about 15%, but I didn't manage to implement it yesterday because of lifetime issues.

May try again in the future

@DevinR528 You are right that we shouldn't use std::sync::RwLock to wrap a type that implements many async functions, because .await requires Send and std::sync::RwLockGuard is not Send. That also causes difficulties when I try to implement a clonable SyncedEnforcer, which is basically Arc<RwLock<Enforcer/CachedEnforcer>>. This may be off topic.
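To make the Send point concrete, a small sketch (not code from this PR): a std::sync guard held across an .await makes the whole future !Send, which is exactly what a work-stealing spawn rejects.

```rust
use std::future::Future;
use std::sync::RwLock;

async fn some_async_work() {}

async fn uses_std_lock(lock: &RwLock<u64>) {
    let guard = lock.read().unwrap(); // std::sync::RwLockReadGuard is !Send
    some_async_work().await;          // ...and it is still alive across this await
    println!("{}", *guard);
}

fn assert_send<F: Future + Send>(_: F) {}

fn main() {
    let lock = RwLock::new(0u64);
    // Does not compile: the future returned by `uses_std_lock` is !Send
    // because the guard lives across the `.await` point.
    // assert_send(uses_std_lock(&lock));
    drop(lock);
}
```

An async-aware lock such as async_std::sync::RwLock is designed so its guard can be held across .await points, which avoids this problem.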

Contributor Author:

Instead of just one Engine, could you have a lazy_static Arc<Mutex<Vec<Engine>>>, and every time Enforcer::new is called push a new Engine into the vec and assign that Enforcer instance an index? I'm going to make a somewhat educated guess that the lifetime issues may be insurmountable 😢: the lifetime of the &str's passed to Scope::push_constant becomes the lifetime of the &mut self borrow for Enforcer::enforce, and there doesn't seem to be a way to name a lifetime smaller than the lifetime of the Enforcer struct, which is what you would need.

Member:

@DevinR528
Arc<Mutex<Vec<Engine>>> still requires write access, which means that while one enforcer is enforcing, no other enforcer can acquire an engine.

I believe rhai::Engine's lifetime parameter is there for the registered functions.

Contributor Author:

Ah right, the vec won't work at all. I had one more idea: a lockless AtomicPtr-based struct with a flag for when an Engine is in use, which either starts with a few Engines or adds more to a vec as needed.
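A rough sketch of the flag idea above (an assumption of how it might look, not code from this PR), using a per-slot AtomicBool instead of AtomicPtr to keep it short. It only shows claiming and releasing slots; handing out &mut Engine from a shared pool would still need UnsafeCell or a per-slot lock, which is where the real difficulty lies.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Slot-based pool: each slot carries an "in use" flag that is claimed with a
// compare-exchange, so no lock is held while searching for a free slot.
struct Slot<T> {
    in_use: AtomicBool,
    value: T,
}

struct Pool<T> {
    slots: Vec<Slot<T>>,
}

impl<T> Pool<T> {
    fn new(values: impl IntoIterator<Item = T>) -> Self {
        let slots = values
            .into_iter()
            .map(|value| Slot {
                in_use: AtomicBool::new(false),
                value,
            })
            .collect();
        Pool { slots }
    }

    /// Try to claim a free slot without blocking; returns its index.
    fn acquire(&self) -> Option<usize> {
        for (i, slot) in self.slots.iter().enumerate() {
            if slot
                .in_use
                .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                .is_ok()
            {
                return Some(i);
            }
        }
        None
    }

    /// Shared access to a claimed slot's value.
    fn get(&self, i: usize) -> &T {
        &self.slots[i].value
    }

    /// Mark the slot as free again.
    fn release(&self, i: usize) {
        self.slots[i].in_use.store(false, Ordering::Release);
    }
}

fn main() {
    let pool = Pool::new(vec!["engine-0", "engine-1"]);
    if let Some(i) = pool.acquire() {
        println!("claimed {}", pool.get(i));
        pool.release(i);
    }
}
```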

        let mut scope: Scope = Scope::new();

        let r_ast = get_or_err!(self, "r", ModelError::R, "request");