
proxy and backend connection support both single & multi connections #1007


Closed

Conversation

left2right
Contributor

As we know, the codis proxy uses a single connection to each backend server (redis). This works well when the backend is fast redis; however, if we want to use a slower disk-based NoSQL database (e.g. ssdb or pika) as the backend server, performance drops a lot.
A disk database can store more data than an in-memory database, and is widely used in many companies.
This PR lets codis 2.0 be configured with either a single connection or multiple connections between the proxy and each backend server.

@yangzhe1991
Member

yangzhe1991 commented Oct 25, 2016

Thanks for your patch. 2.0 is the stable branch: it only takes bug fixes, and may reach EOL soon. 3.0 is also a stable branch, so it is not a good fit for new features either. Please submit this against the 3.1 branch, thanks!

@left2right
Contributor Author

OK, got it, thanks.

@left2right left2right closed this Oct 25, 2016
@spinlock
Member

@left2right There are 1024 slots, so with this change the backend connection count ends up too large, 1024 x |codis-server|, and each backend conn spawns two goroutines, one for reading and one for writing, so I still don't really recommend changing it this way.

I had a solution for this earlier that I never committed, because with redis as the backend, multiple connections cost quite a bit of performance; if the backend is rocksdb, though, the conclusion is indeed different. You can switch to the redis3.2 branch and take a look. When I get a chance, that branch should be merged into release3.1.

Thanks~

@spinlock
Member

7f72696

@left2right
Contributor Author

@spinlock Thanks for your reply, and for supporting multiple connections on release 3.1.
On "there are 1024 slots, so with this change the backend connection count ends up too large, 1024 x |codis-server|, and each backend conn spawns two goroutines for reading and writing, so I don't recommend it": I think there is a slight misunderstanding here. In practice each proxy holds 1024 connections to the backend servers in total, and the connection count on each codis-server is the number of slots assigned to that server times the number of proxies. Seen that way, and from our actual usage, it works well~
Thanks for the reply, I only just saw it~
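left2right's arithmetic can be made concrete with a small sketch: each proxy opens one connection per slot (1024 total per proxy), and a given codis-server sees (slots assigned to it) x (number of proxies) connections. The cluster sizes below are hypothetical, chosen only for illustration.

```go
package main

import "fmt"

// connsPerServer computes the connection count left2right describes:
// one connection per slot per proxy, so a codis-server with a given
// slot assignment sees slotsOnServer * numProxies connections.
func connsPerServer(slotsOnServer, numProxies int) int {
	return slotsOnServer * numProxies
}

func main() {
	const totalSlots = 1024 // fixed slot count in codis
	numServers := 8         // hypothetical cluster size
	numProxies := 4         // hypothetical proxy count

	slotsOnServer := totalSlots / numServers // 128, assuming even assignment

	fmt.Println(totalSlots)                                // per-proxy total: 1024
	fmt.Println(connsPerServer(slotsOnServer, numProxies)) // per-server: 512
}
```

So the per-server count grows with the proxy count, not with 1024 x |codis-server|, which is the distinction being drawn in the comment above.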
