Add Redis cache store for distributed deployment #1751

Merged · 9 commits into halo-dev:master · Mar 18, 2022

Conversation

@luoxmc luoxmc commented Mar 15, 2022

Add a new cache store option: Redis.
This resolves the problem of frequently having to log in to the admin console again when Halo is deployed across multiple instances.

Related issue: https://github.com/halo-dev/halo/issues/1505

@CLAassistant

CLAassistant commented Mar 15, 2022

CLA assistant check
All committers have signed the CLA.

@luoxmc
Author

luoxmc commented Mar 15, 2022

For more background, see my blog post: https://luoxx.top/archives/halo-distributed-deploy

@luoxmc luoxmc requested a review from guqing March 15, 2022 08:50
Member

@ruibaby ruibaby left a comment

LGTM

Member

@JohnNiang JohnNiang left a comment

Hi @luoxmc, could you provide a detailed testing procedure?

@JohnNiang JohnNiang changed the title from "add new cache way - redis" to "Add Redis cache store for distributed deployment" on Mar 16, 2022
@guqing
Member

guqing commented Mar 16, 2022

You could introduce an embedded-redis dependency with its scope set to testOnly, then write a RedisCacheStoreTest unit test. That would help us better verify its correctness and serve as a baseline for future code changes.
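
For illustration, a minimal sketch of what such a test could look like, assuming the it.ozimov:embedded-redis artifact and Spring Data Redis are on the test classpath; the class name, port, and keys below are placeholders, not the test that was eventually merged:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import redis.embedded.RedisServer;

class RedisCacheStoreSketchTest {

    static RedisServer redisServer;
    static StringRedisTemplate redisTemplate;

    @BeforeAll
    static void setUp() throws Exception {
        // Start an in-process Redis on a spare port so the test needs no external service.
        redisServer = new RedisServer(6399);
        redisServer.start();

        LettuceConnectionFactory factory =
            new LettuceConnectionFactory(new RedisStandaloneConfiguration("localhost", 6399));
        factory.afterPropertiesSet();
        redisTemplate = new StringRedisTemplate(factory);
        redisTemplate.afterPropertiesSet();
    }

    @AfterAll
    static void tearDown() {
        redisServer.stop();
    }

    @Test
    void putThenGetReturnsSameValue() {
        // The merged RedisCacheStoreTest exercises Halo's RedisCacheStore itself;
        // this sketch only shows the round trip through the embedded server.
        redisTemplate.opsForValue().set("halo.test.key", "value");
        assertEquals("value", redisTemplate.opsForValue().get("halo.test.key"));
    }
}
```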

@luoxmc
Copy link
Author

luoxmc commented Mar 16, 2022

Hi @luoxmc, could you provide a detailed testing procedure?

@JohnNiang I only have a fairly rough testing procedure:

1. Back up the original blog.
2. Update the jar, update the configuration file to switch to the Redis cache and set the Redis address, mount it, and restart (on two or more machines).
3. Log in to the blog again, connect to Redis with a Redis client, and inspect the stored values to check that they look correct.
4. Go to the Halo admin console: System - Tools - Developer options, and view the machine configuration information.
5. After five or ten minutes (or longer), refresh the page and check whether a dialog asks you to log in again. If it does not, check whether the machine configuration in the developer options has changed; if it has not changed, wait a while and refresh again. (PS: you can also confirm whether the serving machine has changed by checking the nginx logs.)
6. If the machine information has changed, the request is being served by a different instance; if you still do not have to log in again, the cache is being shared.
7. Other functional testing (visit the blog front end and click around everywhere; visit the admin console and click around everywhere to see whether anything breaks).

@luoxmc luoxmc requested a review from guqing March 16, 2022 08:03
@luoxmc
Author

luoxmc commented Mar 16, 2022

You could introduce an embedded-redis dependency with its scope set to testOnly, then write a RedisCacheStoreTest unit test. That would help us better verify its correctness and serve as a baseline for future code changes.

@guqing Added it. If anything is off, you should be able to change it directly on your side.

Refactor unit test case for redis cache store
Member

@guqing guqing left a comment

LGTM

Member

@JohnNiang JohnNiang left a comment

Thanks for your first contribution!

LGTM

@guqing guqing merged commit 5588734 into halo-dev:master Mar 18, 2022
@guqing
Member

guqing commented Mar 18, 2022

Hi @luoxmc, congrats on merging your first pull request! 🎉🎉🎉

@luoxmc
Author

luoxmc commented Mar 18, 2022

@guqing I just took a closer look at the code, and there may still be a mistake. In RedisCacheStore's toMap method, when converting to a map we should not use the keys that still carry the REDIS_PREFIX prefix; the prefix should be stripped first. Otherwise the keys will not match when other code tries to look them up.
The prefix is only there to keep Halo's data from clashing with other applications in the same Redis database.

So the corresponding unit test class also needs to be updated.
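
For illustration, a minimal sketch of the prefix stripping described above, assuming the store is backed by Spring's StringRedisTemplate; the class, field, and prefix names are placeholders, not the merged RedisCacheStore code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.springframework.data.redis.core.StringRedisTemplate;

class RedisCacheStoreSketch {

    // Hypothetical prefix; it only namespaces Halo's keys inside a shared Redis database.
    private static final String REDIS_PREFIX = "halo.cache.";

    private final StringRedisTemplate redisTemplate;

    RedisCacheStoreSketch(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    Map<String, String> toMap() {
        Map<String, String> result = new HashMap<>();
        // Keys are stored with the prefix, so they are looked up with it...
        Set<String> prefixedKeys = redisTemplate.keys(REDIS_PREFIX + "*");
        if (prefixedKeys == null) {
            return result;
        }
        for (String prefixedKey : prefixedKeys) {
            String value = redisTemplate.opsForValue().get(prefixedKey);
            // ...but stripped before returning, so callers see the same key they put().
            result.put(prefixedKey.substring(REDIS_PREFIX.length()), value);
        }
        return result;
    }
}
```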

@guqing
Member

guqing commented Mar 18, 2022

@guqing I just took a closer look at the code, and there may still be a mistake. In RedisCacheStore's toMap method, when converting to a map we should not use the keys that still carry the REDIS_PREFIX prefix; the prefix should be stripped first. Otherwise the keys will not match when other code tries to look them up. The prefix is only there to keep Halo's data from clashing with other applications in the same Redis database.

So the corresponding unit test class also needs to be updated.

Yes, good catch. If the Redis instance is shared with other systems, toMap would indeed be incorrect. At the moment, though, toMap is only used to iterate over the data and filter out what we need, so it does not cause a problem yet. The class run.halo.app.controller.content.auth.ContentAuthentication has its own CACHE_PREFIX; because Redis was not considered before, cache keys were never managed in a unified way. We will not address this in 1.5.x; if you want to share a Redis instance with other applications, you can use a different db index.
Halo 2.0 is in progress and will use a strategy for managing cache keys; 1.5.x will not receive any more large changes.
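
For reference, a minimal sketch of selecting a separate db index, assuming the connection is built with Spring Data Redis's LettuceConnectionFactory; the host, port, and index are placeholders, and in practice this would normally be set via configuration rather than code:

```java
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

class DedicatedDbIndexSketch {

    static StringRedisTemplate buildTemplate() {
        // Point the cache at logical database 3 so its keys live apart from
        // other applications sharing the same Redis server (the default is db 0).
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration("localhost", 6379);
        config.setDatabase(3);

        LettuceConnectionFactory factory = new LettuceConnectionFactory(config);
        factory.afterPropertiesSet();

        StringRedisTemplate template = new StringRedisTemplate(factory);
        template.afterPropertiesSet();
        return template;
    }
}
```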

winar-jin pushed a commit to winar-jin/halo that referenced this pull request Mar 24, 2022
* add new cache way - redis

* Optimize redis operation

* Remove public from CacheWrapper class

* add redis cache unit test

* refactor: test case for redis cache store

Co-authored-by: guqing <1484563614@qq.com>