Environment: CentOS 7.6.1810, PHP 7.0.33, nginx 1.17.1

After encrypting the whole site with beast, access through the web client works fine. However, when a .sh script is run from crontab on the server, and that script executes a task such as php index.php cron xxxx, CPU usage becomes very high. I tried increasing cache_size, up to 10000000, but the result is the same. Is there a separate setting for CLI execution, or have I misconfigured something?

Relevant parameters:

extension=beast.so
[beast]
beast.log_file = "/tmp/beast.log"
beast.log_user = "www"
beast.log_level = "ERROR"
beast.enable = On
beast.cache_size = 10000000

Thanks for taking a look.
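For reference, the setup described is roughly the following; the schedule, paths, and project root are placeholders I am using for illustration, not taken from the actual deployment:

# crontab entry (runs the wrapper script, e.g. every 2 minutes)
*/2 * * * * /path/to/run_cron.sh

# run_cron.sh (hypothetical wrapper): change into the project root and run the CLI task
#!/bin/sh
cd /var/www/project || exit 1
php index.php cron xxxx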
Looking at it with ltrace, there are a lot of memcpy calls... Is this caused by the decryption? Is there any way to keep CLI-executed code in something like the opcache so it can be reused directly, instead of decrypting on every run? The task runs every 2 minutes, so this has a big impact.
In fpm mode, decryption only happens on the first execution.
For the CLI there is basically no way around this: every run has to decrypt the files again.
One workaround is to change the script so it can run under fpm, and then have the cron job hit that URL with curl or wget.
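A minimal sketch of that workaround, assuming the cron task is exposed at a hypothetical URL such as /index.php?route=cron&task=xxxx (adjust to however the script is actually routed, and protect the endpoint so it cannot be triggered publicly):

# crontab entry: call the fpm-served endpoint every 2 minutes instead of invoking the CLI
# (URL is a placeholder)
*/2 * * * * curl -fsS "https://example.com/index.php?route=cron&task=xxxx" >/dev/null 2>&1

This way the request is handled by a long-lived php-fpm worker, so beast only decrypts the files on the first request and later runs reuse the cached result.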