Optimize account token cache design #419
Conversation
README.md
**2. Bucket Sharding**
Instead of creating one Redis key per account (millions of keys), accounts are distributed across 12,000 buckets using FNV-1a hashing:
maybe this should say a configurable size for Redis hash buckets or something to that effect?
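For context, the README excerpt above describes the scheme only in prose, so here is a minimal sketch of what FNV-1a bucket selection might look like. The `fnv1a32` and `bucketKeyFor` names and the `token-cache:bucket:` key prefix are hypothetical; only the 12,000-bucket count and the FNV-1a hash come from the excerpt, and the actual key layout in this PR may differ.

```typescript
// Sketch, not this PR's implementation: map an account ID to one of
// BUCKET_COUNT Redis hash buckets using 32-bit FNV-1a.
const BUCKET_COUNT = 12_000;

// 32-bit FNV-1a over the string's UTF-16 code units (sufficient for a sketch).
function fnv1a32(input: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // multiply by FNV prime with 32-bit wrap
  }
  return hash >>> 0; // force unsigned 32-bit result
}

// Accounts that hash to the same bucket share a single Redis hash key;
// the account ID becomes a field inside that hash rather than a top-level key.
function bucketKeyFor(accountId: string): string {
  return `token-cache:bucket:${fnv1a32(accountId) % BUCKET_COUNT}`;
}

// e.g. "token-cache:bucket:7342" — at most 12,000 keys regardless of account count
console.log(bucketKeyFor("GEXAMPLEACCOUNTID"));
```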
```yaml
      - redis-data:/data
    command: >
      redis-server
      --hash-max-listpack-entries 1000
```
would operators want to tune these as well, since they can tune the bucket size?
@aristidesstaffieri Yes, I have added a section to the README explaining how these can be tuned and how they affect the token cache.
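For readers following the thread, a sketch of how these settings interact with the bucket count: Redis keeps a hash in the compact listpack encoding only while it stays under both `hash-max-listpack-entries` and `hash-max-listpack-value`, so the bucket count should be chosen so the expected accounts per bucket stays below the entries threshold. The service layout, the `redis:7` image tag, and the `--hash-max-listpack-value 256` figure below are illustrative assumptions; only `--hash-max-listpack-entries 1000` comes from the diff above.

```yaml
# Sketch only: tuning the listpack thresholds alongside the bucket count.
# If a bucket hash grows past either limit, Redis converts it to a regular
# hashtable and the memory savings from sharding are largely lost.
services:
  redis:
    # illustrative image tag, not necessarily what this repo pins
    image: redis:7
    # keep --hash-max-listpack-entries at or above the expected number of
    # accounts per bucket, and --hash-max-listpack-value at or above the size
    # in bytes of the largest cached token entry
    command: >
      redis-server
      --hash-max-listpack-entries 1000
      --hash-max-listpack-value 256
    volumes:
      - redis-data:/data

volumes:
  redis-data:
```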
What
Optimizes Redis storage for the Account Token Cache to significantly reduce memory usage through multiple compression strategies.
With all of these optimizations, memory usage dropped from 3 GB to 387 MB.
Why
Optimizing memory usage for our token cache is important because the number of account keys grows without bound, which would otherwise lead to ever-increasing storage requirements.
Known limitations
N/A
Issue that this PR addresses
Closes #422