The Base Framework includes a comprehensive caching system that provides high-performance data caching with support for multiple backends, automatic invalidation, and HTTP response caching.
- Overview
- Features
- Quick Start
- Configuration
- Cache Providers
- Usage
- Authorization Caching
- HTTP Response Caching
- Cache Invalidation
- Best Practices
- Monitoring & Statistics
- API Reference
The caching system is designed to:
- Reduce database load by caching frequently accessed data
- Improve response times with sub-millisecond cache hits
- Support multiple backends (in-memory, Redis)
- Auto-invalidate cached data when underlying data changes
- Cache HTTP responses for public or semi-public endpoints
- Provide observability through statistics and monitoring
```
┌─────────────────┐
│   Application   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   Cache Layer   │ ◄── Cache Interface
└────────┬────────┘
         │
    ┌────┴────┐
    ▼         ▼
┌────────┐ ┌─────────┐
│ Memory │ │  Redis  │
└────────┘ └─────────┘
```
✅ Multiple Cache Providers
- In-memory cache (default)
- Redis cache (distributed)
- No-op cache (disabled)
- Automatic fallback from Redis to memory
✅ Smart Caching
- TTL (Time-To-Live) support
- LRU eviction for memory cache
- Pattern-based invalidation
- Automatic cleanup of expired entries
✅ Integration
- Dependency injection ready
- Authorization service caching
- HTTP middleware for response caching
- Event-driven cache invalidation
✅ Observability
- Hit/miss statistics
- Cache size monitoring
- Performance metrics
- Debug logging
Caching is enabled by default with an in-memory provider. No configuration needed!
```bash
# .env
CACHE_ENABLED=true
CACHE_PROVIDER=memory
CACHE_DEFAULT_TTL=15m
```

The cache is automatically injected into all modules via `Dependencies`:
```go
import "base/core/cache"

type MyService struct {
    DB    *gorm.DB
    Cache cache.Cache
}

func NewMyService(deps module.Dependencies) *MyService {
    return &MyService{
        DB:    deps.DB,
        Cache: deps.Cache,
    }
}
```

```go
import (
    "context"
    "fmt"
    "time"
)

func (s *MyService) GetUser(userID uint) (*User, error) {
    ctx := context.Background()
    cacheKey := fmt.Sprintf("user:%d", userID)

    // Try cache first
    var user User
    err := s.Cache.Get(ctx, cacheKey, &user)
    if err == nil {
        return &user, nil // Cache hit!
    }

    // Cache miss - query database
    if err := s.DB.First(&user, userID).Error; err != nil {
        return nil, err
    }

    // Cache the result
    s.Cache.Set(ctx, cacheKey, user, 15*time.Minute)
    return &user, nil
}
```

| Variable | Type | Default | Description |
|---|---|---|---|
| `CACHE_ENABLED` | bool | `true` | Enable/disable caching globally |
| `CACHE_PROVIDER` | string | `memory` | Cache provider: `memory`, `redis`, `none` |
| `CACHE_DEFAULT_TTL` | duration | `15m` | Default cache TTL |
| `CACHE_FALLBACK_ENABLED` | bool | `true` | Fall back to memory if Redis fails |
| `CACHE_MEMORY_MAX_SIZE` | int | `10000` | Max items in memory cache |
| `CACHE_MEMORY_CLEANUP_INTERVAL` | duration | `5m` | Cleanup interval for expired entries |
| `CACHE_REDIS_HOST` | string | `localhost` | Redis server host |
| `CACHE_REDIS_PORT` | string | `6379` | Redis server port |
| `CACHE_REDIS_PASSWORD` | string | (empty) | Redis password (if required) |
| `CACHE_REDIS_DB` | int | `0` | Redis database number |
| `CACHE_REDIS_PREFIX` | string | `base:` | Key prefix for all cache entries |
```bash
# Memory cache (default)
CACHE_ENABLED=true
CACHE_PROVIDER=memory
CACHE_DEFAULT_TTL=15m
CACHE_MEMORY_MAX_SIZE=10000
```

```bash
# Redis cache
CACHE_ENABLED=true
CACHE_PROVIDER=redis
CACHE_DEFAULT_TTL=15m
CACHE_FALLBACK_ENABLED=true
CACHE_REDIS_HOST=redis.example.com
CACHE_REDIS_PORT=6379
CACHE_REDIS_PASSWORD=your_secure_password
CACHE_REDIS_DB=0
CACHE_REDIS_PREFIX=myapp:
```

```bash
# Caching disabled
CACHE_ENABLED=false
CACHE_PROVIDER=none
```

Best for: Development, single-instance deployments
Features:
- ✅ Zero external dependencies
- ✅ Sub-microsecond access times
- ✅ LRU eviction when max size reached
- ✅ Automatic cleanup of expired entries
- ❌ Not shared across instances
- ❌ Data lost on restart
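The LRU eviction mentioned above can be sketched with the standard `container/list` package. This illustrates the policy only — it is not the framework's memory cache:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is an illustrative fixed-capacity LRU cache: when full, the least
// recently used entry is evicted to make room.
type lru struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> element holding [2]string{key, value}
}

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lru) Set(key, value string) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		el.Value = [2]string{key, value}
		return
	}
	c.items[key] = c.order.PushFront([2]string{key, value})
	if c.order.Len() > c.cap {
		oldest := c.order.Back() // least recently used
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.([2]string)[0])
	}
}

func (c *lru) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el) // reading refreshes recency
	return el.Value.([2]string)[1], true
}

func main() {
	c := newLRU(2)
	c.Set("a", "1")
	c.Set("b", "2")
	c.Get("a")      // touch "a" so "b" becomes least recently used
	c.Set("c", "3") // evicts "b"
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```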
Configuration:

```bash
CACHE_PROVIDER=memory
CACHE_MEMORY_MAX_SIZE=10000
CACHE_MEMORY_CLEANUP_INTERVAL=5m
```

Best for: Production, multi-instance deployments
Features:
- ✅ Distributed caching across instances
- ✅ Persistent storage (optional)
- ✅ Supports millions of keys
- ✅ Advanced features (pub/sub, clustering)
- ❌ Requires Redis server
- ❌ Network latency (1-5ms)
Configuration:

```bash
CACHE_PROVIDER=redis
CACHE_REDIS_HOST=localhost
CACHE_REDIS_PORT=6379
CACHE_REDIS_PASSWORD=
CACHE_REDIS_DB=0
```

Best for: Testing, disabling cache
Features:
- ✅ Always returns cache miss
- ✅ No overhead
- ✅ Useful for debugging
Configuration:

```bash
CACHE_ENABLED=false
CACHE_PROVIDER=none
```

```go
// Get a value
var user User
err := cache.Get(ctx, "user:123", &user)
if cache.IsMiss(err) {
    // Cache miss - fetch from DB
}

// Set a value with TTL
cache.Set(ctx, "user:123", user, 15*time.Minute)

// Delete a single key
cache.Delete(ctx, "user:123")

// Delete all user-related keys
cache.DeletePattern(ctx, "user:*")

// Check whether a key exists
exists, err := cache.Has(ctx, "user:123")

// Clear all cache entries
cache.Clear(ctx)
```

Use the built-in key generation helpers for consistency:
```go
import "base/core/cache"

// User keys
cache.UserKey(userID)            // "user:123"
cache.UserEmailKey(email)        // "user:email:john@example.com"
cache.UserPermissionsKey(userID) // "user:permissions:123"

// Role keys
cache.RolePermissionsKey(roleID) // "role:permissions:1"

// HTTP keys
cache.HTTPResponseKey("GET", "/api/users", "page=1")
// "http:GET:/api/users?page=1"
```

The authorization service automatically caches permissions to reduce database queries.
- User Permissions (`GetUserPermissions`)
  - Cache key: `user:permissions:{userID}`
  - TTL: 15 minutes
  - Queries: Complex JOINs across users, roles, permissions
- Role Permissions (`GetRolePermissions`)
  - Cache key: `role:permissions:{roleID}`
  - TTL: 15 minutes
  - Queries: JOIN between roles and permissions
Before Caching:
- 🐌 Every auth check = 2-3 DB queries
- 🐌 50-100ms per permission check
- 🐌 High DB load on auth-heavy apps
After Caching:
- ⚡ First check = DB query + cache write
- ⚡ Subsequent checks = <1ms cache hit
- ⚡ 80-90% reduction in auth queries
```go
// First call - Cache miss (queries DB)
perms, err := authzService.GetUserPermissions("123")
// Time: ~50ms

// Second call - Cache hit
perms, err := authzService.GetUserPermissions("123")
// Time: <1ms ⚡
```

Caches are automatically invalidated when:
- Role permissions are updated
- User roles are changed
- Permissions are assigned/revoked
```go
// This automatically invalidates related caches
authzService.UpdateRolePermissions(roleID, permissionIDs)
```

Cache entire HTTP responses for GET requests to reduce server load.
```go
import (
    "base/core/router/middleware"
    "time"
)

// Apply to specific routes
router.GET("/api/public/posts", handler,
    middleware.CacheMiddleware(cache, middleware.CacheMiddlewareConfig{
        TTL:               5 * time.Minute,
        CacheableStatuses: []int{200},
        CacheableMethods:  []string{"GET"},
        VaryByQueryParams: true,
    }),
)
```

```go
type CacheMiddlewareConfig struct {
    TTL               time.Duration // How long to cache
    CacheableStatuses []int         // Which statuses to cache (e.g., [200])
    CacheableMethods  []string      // Which methods to cache (e.g., ["GET"])
    SkipPaths         []string      // Paths to skip caching
    VaryByQueryParams bool          // Include query params in cache key
    VaryByHeaders     []string      // Vary by specific headers
}
```

```go
config := middleware.DefaultCacheMiddlewareConfig()
// TTL: 5 minutes
// Cacheable Statuses: [200]
// Cacheable Methods: ["GET"]
// Vary By Query Params: true
```

Responses include an `X-Cache` header:

- `X-Cache: HIT` - Served from cache
- `X-Cache: MISS` - Served from origin (and cached)
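A response-cache key of the kind `VaryByQueryParams` controls can be sketched as follows. This is an illustration of the idea — the framework's real key format is the `http:GET:/api/users?page=1` shape shown earlier, but `httpCacheKey` and its sorting of parameters are assumptions, not the actual middleware code:

```go
package main

import (
	"fmt"
	"net/url"
	"sort"
	"strings"
)

// httpCacheKey builds a cache key from method, path, and optionally the
// query string. Parameters are sorted so that ?a=1&b=2 and ?b=2&a=1
// share one cache entry.
func httpCacheKey(method, path, rawQuery string, varyByQuery bool) string {
	if !varyByQuery || rawQuery == "" {
		return fmt.Sprintf("http:%s:%s", method, path)
	}
	values, err := url.ParseQuery(rawQuery)
	if err != nil {
		// Unparseable query: fall back to the raw string.
		return fmt.Sprintf("http:%s:%s?%s", method, path, rawQuery)
	}
	keys := make([]string, 0, len(values))
	for k := range values {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		for _, v := range values[k] {
			parts = append(parts, k+"="+v)
		}
	}
	return fmt.Sprintf("http:%s:%s?%s", method, path, strings.Join(parts, "&"))
}

func main() {
	fmt.Println(httpCacheKey("GET", "/api/users", "page=1", true))
	// http:GET:/api/users?page=1
}
```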
```bash
# First request - Cache miss
curl -i http://localhost:8100/api/posts
# X-Cache: MISS
# Time: 50ms

# Second request - Cache hit
curl -i http://localhost:8100/api/posts
# X-Cache: HIT
# Time: 1ms ⚡
```

```go
import "context"

ctx := context.Background()

// Delete specific key
cache.Delete(ctx, "user:123")

// Delete all user keys
cache.DeletePattern(ctx, "user:*")

// Clear entire cache
cache.Clear(ctx)
```

Use the event emitter to invalidate caches automatically:
```go
// In your service
func (s *UserService) UpdateUser(user *User) error {
    // Update database
    if err := s.DB.Save(user).Error; err != nil {
        return err
    }

    // Invalidate cache
    ctx := context.Background()
    s.Cache.Delete(ctx, cache.UserKey(user.ID))

    // Emit event for other services
    s.Emitter.Emit("user.updated", user.ID)
    return nil
}
```

```go
// Invalidate all permissions for users with a specific role
rolePattern := cache.RolePattern(roleID) // "role:*:1*"
cache.DeletePattern(ctx, rolePattern)
```

The framework provides cache management endpoints:
```http
# Get cache statistics
GET /api/cache/stats

# Clear entire cache
POST /api/cache/clear

# Invalidate pattern
POST /api/cache/invalidate
{
    "pattern": "user:*"
}
```

```go
// Frequently changing data - Short TTL
cache.Set(ctx, "session:"+sessionID, session, 5*time.Minute)

// Rarely changing data - Long TTL
cache.Set(ctx, "config:site", config, 1*time.Hour)

// Static data - Very long TTL
cache.Set(ctx, "metadata:app", metadata, 24*time.Hour)
```

```go
// Good ✅
"user:123"
"user:email:john@example.com"
"user:permissions:123"

// Bad ❌
"user_123"
"email-john@example.com"
"permissions_for_user_123"
```

```go
var user User
err := cache.Get(ctx, key, &user)
if err != nil {
    // Always fall back to DB on cache error
    if err := db.First(&user, id).Error; err != nil {
        return nil, err
    }
    // Attempt to cache (non-blocking)
    cache.Set(ctx, key, user, ttl)
}
```

Good candidates for caching:
- ✅ Frequently read data
- ✅ Expensive queries (JOINs, aggregations)
- ✅ Data that changes infrequently
- ✅ User permissions and roles
- ✅ Configuration data
Poor candidates for caching:
- ❌ Data that changes frequently
- ❌ User-specific real-time data
- ❌ Large binary data (> 1MB)
- ❌ Sensitive data (consider security)
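The read-through pattern used throughout this document (try cache, fall back to the source, then populate) reduces to a small helper. The following is an illustrative stdlib-only sketch — `getOrCompute` is not part of the framework's API:

```go
package main

import (
	"fmt"
	"sync"
)

// store demonstrates the cache-aside pattern with a plain map:
// return the cached value if present, otherwise compute, cache, return.
type store struct {
	mu   sync.Mutex
	data map[string]string
}

func (s *store) getOrCompute(key string, compute func() string) (value string, hit bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if v, ok := s.data[key]; ok {
		return v, true // cache hit
	}
	v := compute() // cache miss: fall through to the expensive source
	s.data[key] = v
	return v, false
}

func main() {
	s := &store{data: map[string]string{}}
	calls := 0
	load := func() string { calls++; return "alice" }
	s.getOrCompute("user:123", load)
	s.getOrCompute("user:123", load)
	fmt.Println(calls) // 1: the second lookup was served from cache
}
```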
```go
stats := cache.GetStats()
hitRate := float64(stats.Hits) / float64(stats.Hits+stats.Misses) * 100
if hitRate < 70 {
    log.Warn("Low cache hit rate", "rate", hitRate)
}
```

```go
// When updating a role, invalidate all related caches
func (s *AuthzService) UpdateRole(role *Role) error {
    // ... update role ...

    // Invalidate all caches for this role
    ctx := context.Background()
    s.Cache.DeletePattern(ctx, fmt.Sprintf("role:*:%d*", role.ID))
    s.Cache.DeletePattern(ctx, "user:permissions:*")
    return nil
}
```

```go
stats := cache.GetStats()
fmt.Printf("Provider: %s\n", stats.Provider)
fmt.Printf("Hits: %d\n", stats.Hits)
fmt.Printf("Misses: %d\n", stats.Misses)
fmt.Printf("Hit Rate: %.2f%%\n",
    float64(stats.Hits)/float64(stats.Hits+stats.Misses)*100)
fmt.Printf("Size: %d entries\n", stats.Size)
fmt.Printf("Evictions: %d\n", stats.Evictions)
```

```http
GET /api/cache/stats
```

Response:
```json
{
    "provider": "memory",
    "hits": 15234,
    "misses": 3421,
    "hit_rate": 81.67,
    "sets": 3421,
    "deletes": 245,
    "size": 3176,
    "evictions": 12,
    "last_cleanup": "2025-01-15T10:30:00Z"
}
```

| Metric | What It Means | Good Value |
|---|---|---|
| Hit Rate | % of requests served from cache | > 70% |
| Evictions | Items removed due to memory limits | Low |
| Size | Current number of cached items | < Max Size |
| Deletes | Manual cache invalidations | Moderate |
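The hit-rate metric in the table above is simply hits divided by total lookups. A small helper that computes it, guarding against division by zero before any traffic arrives (illustrative — `hitRate` is not a framework function):

```go
package main

import "fmt"

// hitRate returns the percentage of lookups served from cache,
// returning 0 when there have been no lookups at all.
func hitRate(hits, misses uint64) float64 {
	total := hits + misses
	if total == 0 {
		return 0
	}
	return float64(hits) / float64(total) * 100
}

func main() {
	fmt.Printf("%.2f%%\n", hitRate(15234, 3421))
}
```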
```go
type Cache interface {
    // Get retrieves a value from cache
    Get(ctx context.Context, key string, dest interface{}) error

    // Set stores a value in cache with TTL
    Set(ctx context.Context, key string, value interface{}, ttl time.Duration) error

    // Delete removes a single key
    Delete(ctx context.Context, key string) error

    // DeletePattern removes all keys matching pattern
    DeletePattern(ctx context.Context, pattern string) error

    // Has checks if a key exists
    Has(ctx context.Context, key string) (bool, error)

    // Clear removes all keys
    Clear(ctx context.Context) error

    // Close cleanly shuts down the cache
    Close() error

    // GetStats returns cache statistics
    GetStats() Stats
}
```

```go
import "base/core/cache"

err := cache.Get(ctx, key, &value)

// Check if error is a cache miss
if cache.IsMiss(err) {
    // Handle cache miss
}

// Check if cache is unavailable
if errors.Is(err, cache.ErrCacheNotAvailable) {
    // Handle cache unavailability
}
```
memoryCache := cache.NewMemoryCache(cache.MemoryConfig{
MaxSize: 10000,
CleanupInterval: 5 * time.Minute,
})
// Create Redis cache
redisCache, err := cache.NewRedisCache(cache.RedisConfig{
Host: "localhost",
Port: "6379",
Password: "",
DB: 0,
Prefix: "myapp:",
})
// Create no-op cache
noopCache := cache.NewNoopCache()Problem: Cache always misses
Solutions:
- Check `CACHE_ENABLED=true` in `.env`
- Verify the provider is correct: `CACHE_PROVIDER=memory` or `redis`
- Check logs for cache initialization errors
- Ensure you're using the same cache key for get/set
Problem: `redis connection failed`
Solutions:
- Verify Redis is running: `redis-cli ping`
- Check host/port configuration
- Verify the password (if set)
- Enable fallback: `CACHE_FALLBACK_ENABLED=true`
Problem: Hit rate < 50%
Solutions:
- Increase TTL for stable data
- Check if data is being invalidated too aggressively
- Verify cache keys are consistent
- Monitor for high eviction rates
Problem: High memory usage
Solutions:
- Reduce `CACHE_MEMORY_MAX_SIZE`
- Decrease TTL values
- Switch to Redis for large datasets
- Don't cache large objects
```go
package myservice

import (
    "base/core/cache"
    "context"
    "fmt"
    "time"

    "gorm.io/gorm"
)

type ProductService struct {
    DB    *gorm.DB
    Cache cache.Cache
}

func NewProductService(db *gorm.DB, c cache.Cache) *ProductService {
    return &ProductService{DB: db, Cache: c}
}

func (s *ProductService) GetProduct(id uint) (*Product, error) {
    ctx := context.Background()
    cacheKey := fmt.Sprintf("product:%d", id)

    // Try cache
    var product Product
    err := s.Cache.Get(ctx, cacheKey, &product)
    if err == nil {
        return &product, nil
    }

    // Query DB
    if err := s.DB.First(&product, id).Error; err != nil {
        return nil, err
    }

    // Cache result
    s.Cache.Set(ctx, cacheKey, product, 30*time.Minute)
    return &product, nil
}

func (s *ProductService) UpdateProduct(product *Product) error {
    // Update DB
    if err := s.DB.Save(product).Error; err != nil {
        return err
    }

    // Invalidate cache
    ctx := context.Background()
    cacheKey := fmt.Sprintf("product:%d", product.ID)
    s.Cache.Delete(ctx, cacheKey)
    return nil
}

func (s *ProductService) GetProductsByCategory(categoryID uint) ([]Product, error) {
    ctx := context.Background()
    cacheKey := fmt.Sprintf("products:category:%d", categoryID)

    // Try cache
    var products []Product
    err := s.Cache.Get(ctx, cacheKey, &products)
    if err == nil {
        return products, nil
    }

    // Query DB
    if err := s.DB.Where("category_id = ?", categoryID).Find(&products).Error; err != nil {
        return nil, err
    }

    // Cache result (shorter TTL for lists)
    s.Cache.Set(ctx, cacheKey, products, 5*time.Minute)
    return products, nil
}
```

| Operation | Without Cache | With Cache | Improvement |
|---|---|---|---|
| GetUserPermissions | 45ms | <1ms | 45x faster |
| GetRolePermissions | 32ms | <1ms | 32x faster |
| HasPermission | 28ms | <1ms | 28x faster |
| Endpoint | Without Cache | With Cache | Improvement |
|---|---|---|---|
| GET /api/posts | 120ms | 1ms | 120x faster |
| GET /api/users?page=1 | 85ms | <1ms | 85x faster |
| GET /api/products | 200ms | 1ms | 200x faster |
| Provider | Read Latency | Write Latency | Scalability |
|---|---|---|---|
| Memory | <1μs | <1μs | Single instance |
| Redis (local) | 1-2ms | 1-2ms | Multi-instance |
| Redis (remote) | 5-20ms | 5-20ms | Multi-instance |
Before:

```go
func (s *UserService) GetUser(id uint) (*User, error) {
    var user User
    if err := s.DB.First(&user, id).Error; err != nil {
        return nil, err
    }
    return &user, nil
}
```

After:
```go
func (s *UserService) GetUser(id uint) (*User, error) {
    ctx := context.Background()
    cacheKey := cache.UserKey(id)

    // Try cache
    var user User
    if err := s.Cache.Get(ctx, cacheKey, &user); err == nil {
        return &user, nil
    }

    // Fallback to DB
    if err := s.DB.First(&user, id).Error; err != nil {
        return nil, err
    }

    // Cache result
    s.Cache.Set(ctx, cacheKey, user, 15*time.Minute)
    return &user, nil
}
```

This caching system is part of the Base Framework and is licensed under the MIT License.
For issues, questions, or contributions:
- GitHub Issues: https://github.com/base-go/base-core/issues
- Documentation: https://base.al/docs