
Caching System Documentation

The Base Framework includes a comprehensive caching system that provides high-performance data caching with support for multiple backends, automatic invalidation, and HTTP response caching.

Overview

The caching system is designed to:

  • Reduce database load by caching frequently accessed data
  • Improve response times with sub-millisecond cache hits
  • Support multiple backends (in-memory, Redis)
  • Auto-invalidate cached data when underlying data changes
  • Cache HTTP responses for public or semi-public endpoints
  • Provide observability through statistics and monitoring

Architecture

┌─────────────────┐
│  Application    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Cache Layer    │  ◄── Cache Interface
└────────┬────────┘
         │
    ┌────┴────┐
    ▼         ▼
┌────────┐  ┌─────────┐
│ Memory │  │  Redis  │
└────────┘  └─────────┘

Features

Multiple Cache Providers

  • In-memory cache (default)
  • Redis cache (distributed)
  • No-op cache (disabled)
  • Automatic fallback from Redis to memory
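The Redis-to-memory fallback listed above can be pictured as a wrapper that tries a primary provider and quietly degrades to a secondary one on error. This is a hypothetical sketch: the `store` interface and types below are invented for illustration and are not the framework's actual `Cache` interface.

```go
package main

import (
	"errors"
	"fmt"
)

// store is a minimal get/set abstraction standing in for a cache provider.
type store interface {
	Get(key string) (string, error)
	Set(key, val string) error
}

// failing simulates an unreachable Redis server.
type failing struct{}

func (failing) Get(string) (string, error) { return "", errors.New("connection refused") }
func (failing) Set(string, string) error   { return errors.New("connection refused") }

// memory is a trivial in-process store.
type memory map[string]string

func (m memory) Get(k string) (string, error) {
	v, ok := m[k]
	if !ok {
		return "", errors.New("miss")
	}
	return v, nil
}
func (m memory) Set(k, v string) error { m[k] = v; return nil }

// fallback tries the primary provider and silently degrades to the
// secondary on error, mirroring CACHE_FALLBACK_ENABLED=true.
type fallback struct{ primary, secondary store }

func (f fallback) Get(k string) (string, error) {
	if v, err := f.primary.Get(k); err == nil {
		return v, nil
	}
	return f.secondary.Get(k)
}
func (f fallback) Set(k, v string) error {
	if err := f.primary.Set(k, v); err != nil {
		return f.secondary.Set(k, v)
	}
	return nil
}

func main() {
	c := fallback{primary: failing{}, secondary: memory{}}
	c.Set("user:1", "alice") // Redis down -> written to memory instead
	v, _ := c.Get("user:1")
	fmt.Println(v) // alice
}
```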

Smart Caching

  • TTL (Time-To-Live) support
  • LRU eviction for memory cache
  • Pattern-based invalidation
  • Automatic cleanup of expired entries

Integration

  • Dependency injection ready
  • Authorization service caching
  • HTTP middleware for response caching
  • Event-driven cache invalidation

Observability

  • Hit/miss statistics
  • Cache size monitoring
  • Performance metrics
  • Debug logging

Quick Start

1. Enable Caching

Caching is enabled by default with an in-memory provider. No configuration needed!

# .env
CACHE_ENABLED=true
CACHE_PROVIDER=memory
CACHE_DEFAULT_TTL=15m

2. Access Cache in Your Service

The cache is automatically injected into all modules via Dependencies:

import "base/core/cache"

type MyService struct {
    DB    *gorm.DB
    Cache cache.Cache
}

func NewMyService(deps module.Dependencies) *MyService {
    return &MyService{
        DB:    deps.DB,
        Cache: deps.Cache,
    }
}

3. Use the Cache

import (
    "context"
    "fmt"
    "time"
)

func (s *MyService) GetUser(userID uint) (*User, error) {
    ctx := context.Background()
    cacheKey := fmt.Sprintf("user:%d", userID)

    // Try cache first
    var user User
    err := s.Cache.Get(ctx, cacheKey, &user)
    if err == nil {
        return &user, nil // Cache hit!
    }

    // Cache miss - query database
    if err := s.DB.First(&user, userID).Error; err != nil {
        return nil, err
    }

    // Cache the result
    s.Cache.Set(ctx, cacheKey, user, 15*time.Minute)

    return &user, nil
}

Configuration

Environment Variables

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| CACHE_ENABLED | bool | true | Enable/disable caching globally |
| CACHE_PROVIDER | string | memory | Cache provider: memory, redis, none |
| CACHE_DEFAULT_TTL | duration | 15m | Default cache TTL |
| CACHE_FALLBACK_ENABLED | bool | true | Fall back to memory if Redis fails |
| CACHE_MEMORY_MAX_SIZE | int | 10000 | Max items in memory cache |
| CACHE_MEMORY_CLEANUP_INTERVAL | duration | 5m | Cleanup interval for expired entries |
| CACHE_REDIS_HOST | string | localhost | Redis server host |
| CACHE_REDIS_PORT | string | 6379 | Redis server port |
| CACHE_REDIS_PASSWORD | string | (empty) | Redis password (if required) |
| CACHE_REDIS_DB | int | 0 | Redis database number |
| CACHE_REDIS_PREFIX | string | base: | Key prefix for all cache entries |

Example Configurations

Development (In-Memory)

CACHE_ENABLED=true
CACHE_PROVIDER=memory
CACHE_DEFAULT_TTL=15m
CACHE_MEMORY_MAX_SIZE=10000

Production (Redis)

CACHE_ENABLED=true
CACHE_PROVIDER=redis
CACHE_DEFAULT_TTL=15m
CACHE_FALLBACK_ENABLED=true

CACHE_REDIS_HOST=redis.example.com
CACHE_REDIS_PORT=6379
CACHE_REDIS_PASSWORD=your_secure_password
CACHE_REDIS_DB=0
CACHE_REDIS_PREFIX=myapp:

Disabled (Testing)

CACHE_ENABLED=false
CACHE_PROVIDER=none

Cache Providers

1. Memory Cache (Default)

Best for: Development, single-instance deployments

Features:

  • ✅ Zero external dependencies
  • ✅ Sub-microsecond access times
  • ✅ LRU eviction when max size reached
  • ✅ Automatic cleanup of expired entries
  • ❌ Not shared across instances
  • ❌ Data lost on restart

Configuration:

CACHE_PROVIDER=memory
CACHE_MEMORY_MAX_SIZE=10000
CACHE_MEMORY_CLEANUP_INTERVAL=5m
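The memory provider's LRU eviction can be illustrated with a minimal fixed-capacity LRU built on the standard library's `container/list`. This is an illustrative sketch of the policy, not the framework's actual implementation:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is a minimal fixed-capacity LRU store (illustrative only).
type lru struct {
	cap   int
	ll    *list.List // front = most recently used
	items map[string]*list.Element
}

type entry struct {
	key string
	val any
}

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, ll: list.New(), items: make(map[string]*list.Element)}
}

func (c *lru) Get(key string) (any, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.ll.MoveToFront(el) // touching an entry makes it most recently used
	return el.Value.(*entry).val, true
}

func (c *lru) Set(key string, val any) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).val = val
		c.ll.MoveToFront(el)
		return
	}
	c.items[key] = c.ll.PushFront(&entry{key, val})
	if c.ll.Len() > c.cap { // over capacity: evict the least recently used
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func main() {
	c := newLRU(2)
	c.Set("a", 1)
	c.Set("b", 2)
	c.Get("a")    // "a" is now most recently used
	c.Set("c", 3) // evicts "b", the least recently used
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```

The same "touch on read, evict from the tail" logic is what keeps a full cache biased toward hot keys once `CACHE_MEMORY_MAX_SIZE` is reached.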

2. Redis Cache

Best for: Production, multi-instance deployments

Features:

  • ✅ Distributed caching across instances
  • ✅ Persistent storage (optional)
  • ✅ Supports millions of keys
  • ✅ Advanced features (pub/sub, clustering)
  • ❌ Requires Redis server
  • ❌ Network latency (1-5ms)

Configuration:

CACHE_PROVIDER=redis
CACHE_REDIS_HOST=localhost
CACHE_REDIS_PORT=6379
CACHE_REDIS_PASSWORD=
CACHE_REDIS_DB=0

3. No-op Cache

Best for: Testing, disabling cache

Features:

  • ✅ Always returns cache miss
  • ✅ No overhead
  • ✅ Useful for debugging

Configuration:

CACHE_ENABLED=false
CACHE_PROVIDER=none
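The no-op provider's contract ("always miss, never store") is tiny; a hypothetical sketch, not the framework's code:

```go
package main

import (
	"errors"
	"fmt"
)

var errMiss = errors.New("cache miss")

// noop discards writes and reports a miss on every read, so all
// callers fall through to the database. Illustrative only.
type noop struct{}

func (noop) Get(key string) error        { return errMiss }
func (noop) Set(key string, v any) error { return nil } // silently dropped

func main() {
	var c noop
	c.Set("user:1", "alice")
	fmt.Println(c.Get("user:1")) // cache miss
}
```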

Usage

Basic Operations

Get

var user User
err := cache.Get(ctx, "user:123", &user)
if cache.IsMiss(err) {
    // Cache miss - fetch from DB
}

Set

cache.Set(ctx, "user:123", user, 15*time.Minute)

Delete

cache.Delete(ctx, "user:123")

Delete Pattern

// Delete all user-related keys
cache.DeletePattern(ctx, "user:*")

Has

exists, err := cache.Has(ctx, "user:123")

Clear

// Clear all cache entries
cache.Clear(ctx)
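For the memory provider, pattern deletion can be approximated with glob matching over the key set; this sketch uses the standard library's `path.Match` and is illustrative only (a Redis-backed provider would typically iterate keys with SCAN and MATCH instead):

```go
package main

import (
	"fmt"
	"path"
)

// deletePattern removes every key in store matching a glob pattern,
// e.g. "user:*". Hypothetical helper, not the framework's implementation.
func deletePattern(store map[string]any, pattern string) int {
	deleted := 0
	for key := range store {
		if ok, _ := path.Match(pattern, key); ok {
			delete(store, key)
			deleted++
		}
	}
	return deleted
}

func main() {
	store := map[string]any{
		"user:123":           "alice",
		"user:456":           "bob",
		"role:permissions:1": []string{"read"},
	}
	n := deletePattern(store, "user:*")
	fmt.Println(n, len(store)) // 2 1
}
```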

Cache Key Helpers

Use the built-in key generation helpers for consistency:

import "base/core/cache"

// User keys
cache.UserKey(userID)                // "user:123"
cache.UserEmailKey(email)           // "user:email:john@example.com"
cache.UserPermissionsKey(userID)    // "user:permissions:123"

// Role keys
cache.RolePermissionsKey(roleID)    // "role:permissions:1"

// HTTP keys
cache.HTTPResponseKey("GET", "/api/users", "page=1")
// "http:GET:/api/users?page=1"
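Helpers like these are essentially thin wrappers around `fmt.Sprintf`. The function bodies below are a sketch inferred from the key formats shown above, not the framework's source:

```go
package main

import "fmt"

// UserKey builds the cache key for a user record, e.g. "user:123".
func UserKey(userID uint) string { return fmt.Sprintf("user:%d", userID) }

// UserPermissionsKey builds the key for a user's permission set.
func UserPermissionsKey(userID uint) string {
	return fmt.Sprintf("user:permissions:%d", userID)
}

// HTTPResponseKey combines method, path, and query into one key.
func HTTPResponseKey(method, path, query string) string {
	if query == "" {
		return fmt.Sprintf("http:%s:%s", method, path)
	}
	return fmt.Sprintf("http:%s:%s?%s", method, path, query)
}

func main() {
	fmt.Println(UserKey(123))                                   // user:123
	fmt.Println(HTTPResponseKey("GET", "/api/users", "page=1")) // http:GET:/api/users?page=1
}
```

Centralizing key construction like this is what makes pattern-based invalidation (`user:*`) reliable: every writer and reader agrees on the format.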

Authorization Caching

The authorization service automatically caches permissions to reduce database queries.

What's Cached

  1. User Permissions (GetUserPermissions)

    • Cache key: user:permissions:{userID}
    • TTL: 15 minutes
    • Queries: Complex JOINs across users, roles, permissions
  2. Role Permissions (GetRolePermissions)

    • Cache key: role:permissions:{roleID}
    • TTL: 15 minutes
    • Queries: JOIN between roles and permissions

Performance Impact

Before Caching:

  • 🐌 Every auth check = 2-3 DB queries
  • 🐌 50-100ms per permission check
  • 🐌 High DB load on auth-heavy apps

After Caching:

  • ⚡ First check = DB query + cache write
  • ⚡ Subsequent checks = <1ms cache hit
  • ⚡ 80-90% reduction in auth queries

Example

// First call - Cache miss (queries DB)
perms, err := authzService.GetUserPermissions("123")
// Time: ~50ms

// Second call - Cache hit
perms, err := authzService.GetUserPermissions("123")
// Time: <1ms ⚡

Automatic Invalidation

Caches are automatically invalidated when:

  • Role permissions are updated
  • User roles are changed
  • Permissions are assigned/revoked

// This automatically invalidates related caches
authzService.UpdateRolePermissions(roleID, permissionIDs)

HTTP Response Caching

Cache entire HTTP responses for GET requests to reduce server load.

Setup

import (
    "base/core/router/middleware"
    "time"
)

// Apply to specific routes
router.GET("/api/public/posts", handler,
    middleware.CacheMiddleware(cache, middleware.CacheMiddlewareConfig{
        TTL:               5 * time.Minute,
        CacheableStatuses: []int{200},
        CacheableMethods:  []string{"GET"},
        VaryByQueryParams: true,
    }),
)

Configuration

type CacheMiddlewareConfig struct {
    TTL                time.Duration   // How long to cache
    CacheableStatuses  []int           // Which statuses to cache (e.g., [200])
    CacheableMethods   []string        // Which methods to cache (e.g., ["GET"])
    SkipPaths          []string        // Paths to skip caching
    VaryByQueryParams  bool            // Include query params in cache key
    VaryByHeaders      []string        // Vary by specific headers
}

Default Configuration

config := middleware.DefaultCacheMiddlewareConfig()
// TTL: 5 minutes
// Cacheable Statuses: [200]
// Cacheable Methods: ["GET"]
// Vary By Query Params: true

Cache Headers

Responses include X-Cache header:

  • X-Cache: HIT - Served from cache
  • X-Cache: MISS - Served from origin (and cached)

Example

# First request - Cache miss
curl -i http://localhost:8100/api/posts
# X-Cache: MISS
# Time: 50ms

# Second request - Cache hit
curl -i http://localhost:8100/api/posts
# X-Cache: HIT
# Time: 1ms ⚡

Cache Invalidation

Manual Invalidation

import "context"

ctx := context.Background()

// Delete specific key
cache.Delete(ctx, "user:123")

// Delete all user keys
cache.DeletePattern(ctx, "user:*")

// Clear entire cache
cache.Clear(ctx)

Event-Driven Invalidation

Use the event emitter to invalidate caches automatically:

// In your service
func (s *UserService) UpdateUser(user *User) error {
    // Update database
    if err := s.DB.Save(user).Error; err != nil {
        return err
    }

    // Invalidate cache
    ctx := context.Background()
    s.Cache.Delete(ctx, cache.UserKey(user.ID))

    // Emit event for other services
    s.Emitter.Emit("user.updated", user.ID)

    return nil
}

Pattern-Based Invalidation

// Invalidate all permissions for users with a specific role
rolePattern := cache.RolePattern(roleID) // "role:*:1*"
cache.DeletePattern(ctx, rolePattern)

HTTP Endpoints

The framework provides cache management endpoints:

# Get cache statistics
GET /api/cache/stats

# Clear entire cache
POST /api/cache/clear

# Invalidate pattern
POST /api/cache/invalidate
{
  "pattern": "user:*"
}

Best Practices

1. Choose Appropriate TTLs

// Frequently changing data - Short TTL
cache.Set(ctx, "session:"+sessionID, session, 5*time.Minute)

// Rarely changing data - Long TTL
cache.Set(ctx, "config:site", config, 1*time.Hour)

// Static data - Very long TTL
cache.Set(ctx, "metadata:app", metadata, 24*time.Hour)

2. Use Consistent Key Naming

// Good ✅
"user:123"
"user:email:john@example.com"
"user:permissions:123"

// Bad ❌
"user_123"
"email-john@example.com"
"permissions_for_user_123"

3. Handle Cache Failures Gracefully

var user User
err := cache.Get(ctx, key, &user)
if err != nil {
    // Always fall back to the DB on any cache error (miss or outage)
    if err := db.First(&user, id).Error; err != nil {
        return nil, err
    }
    // Best-effort re-cache; ignore the error so a cache outage
    // never fails the request
    _ = cache.Set(ctx, key, user, ttl)
}

4. Don't Cache Everything

Good candidates for caching:

  • ✅ Frequently read data
  • ✅ Expensive queries (JOINs, aggregations)
  • ✅ Data that changes infrequently
  • ✅ User permissions and roles
  • ✅ Configuration data

Poor candidates for caching:

  • ❌ Data that changes frequently
  • ❌ User-specific real-time data
  • ❌ Large binary data (> 1MB)
  • ❌ Sensitive data (consider security)

5. Monitor Cache Performance

stats := cache.GetStats()
total := stats.Hits + stats.Misses
if total > 0 { // guard against division by zero on a cold cache
    hitRate := float64(stats.Hits) / float64(total) * 100
    if hitRate < 70 {
        log.Warn("Low cache hit rate", "rate", hitRate)
    }
}

6. Use Patterns for Bulk Invalidation

// When updating a role, invalidate all related caches
func (s *AuthzService) UpdateRole(role *Role) error {
    // ... update role ...

    // Invalidate all caches for this role
    ctx := context.Background()
    s.Cache.DeletePattern(ctx, fmt.Sprintf("role:*:%d*", role.ID))
    s.Cache.DeletePattern(ctx, "user:permissions:*")

    return nil
}

Monitoring & Statistics

Get Cache Statistics

stats := cache.GetStats()

fmt.Printf("Provider: %s\n", stats.Provider)
fmt.Printf("Hits: %d\n", stats.Hits)
fmt.Printf("Misses: %d\n", stats.Misses)
fmt.Printf("Hit Rate: %.2f%%\n",
    float64(stats.Hits) / float64(stats.Hits + stats.Misses) * 100)
fmt.Printf("Size: %d entries\n", stats.Size)
fmt.Printf("Evictions: %d\n", stats.Evictions)

HTTP Endpoint

GET /api/cache/stats

Response:

{
  "provider": "memory",
  "hits": 15234,
  "misses": 3421,
  "hit_rate": 81.67,
  "sets": 3421,
  "deletes": 245,
  "size": 3176,
  "evictions": 12,
  "last_cleanup": "2025-01-15T10:30:00Z"
}

Key Metrics

| Metric | What It Means | Good Value |
|--------|---------------|------------|
| Hit Rate | % of requests served from cache | > 70% |
| Evictions | Items removed due to memory limits | Low |
| Size | Current number of cached items | < Max Size |
| Deletes | Manual cache invalidations | Moderate |

API Reference

Cache Interface

type Cache interface {
    // Get retrieves a value from cache
    Get(ctx context.Context, key string, dest interface{}) error

    // Set stores a value in cache with TTL
    Set(ctx context.Context, key string, value interface{}, ttl time.Duration) error

    // Delete removes a single key
    Delete(ctx context.Context, key string) error

    // DeletePattern removes all keys matching pattern
    DeletePattern(ctx context.Context, pattern string) error

    // Has checks if a key exists
    Has(ctx context.Context, key string) (bool, error)

    // Clear removes all keys
    Clear(ctx context.Context) error

    // Close cleanly shuts down the cache
    Close() error

    // GetStats returns cache statistics
    GetStats() Stats
}

Error Handling

import "base/core/cache"

err := cache.Get(ctx, key, &value)

// Check if error is a cache miss
if cache.IsMiss(err) {
    // Handle cache miss
}

// Check if cache is unavailable
if errors.Is(err, cache.ErrCacheNotAvailable) {
    // Handle cache unavailability
}

Cache Providers

// Create memory cache
memoryCache := cache.NewMemoryCache(cache.MemoryConfig{
    MaxSize:         10000,
    CleanupInterval: 5 * time.Minute,
})

// Create Redis cache
redisCache, err := cache.NewRedisCache(cache.RedisConfig{
    Host:     "localhost",
    Port:     "6379",
    Password: "",
    DB:       0,
    Prefix:   "myapp:",
})

// Create no-op cache
noopCache := cache.NewNoopCache()

Troubleshooting

Cache Not Working

Problem: Cache always misses

Solutions:

  1. Check CACHE_ENABLED=true in .env
  2. Verify provider is correct: CACHE_PROVIDER=memory or redis
  3. Check logs for cache initialization errors
  4. Ensure you're using the same cache key for get/set

Redis Connection Errors

Problem: redis connection failed

Solutions:

  1. Verify Redis is running: redis-cli ping
  2. Check host/port configuration
  3. Verify password (if set)
  4. Enable fallback: CACHE_FALLBACK_ENABLED=true

Low Hit Rate

Problem: Hit rate < 50%

Solutions:

  1. Increase TTL for stable data
  2. Check if data is being invalidated too aggressively
  3. Verify cache keys are consistent
  4. Monitor for high eviction rates

Memory Issues

Problem: High memory usage

Solutions:

  1. Reduce CACHE_MEMORY_MAX_SIZE
  2. Decrease TTL values
  3. Switch to Redis for large datasets
  4. Don't cache large objects

Examples

Complete Service Example

package myservice

import (
    "base/core/cache"
    "context"
    "fmt"
    "time"
    "gorm.io/gorm"
)

type ProductService struct {
    DB    *gorm.DB
    Cache cache.Cache
}

func NewProductService(db *gorm.DB, c cache.Cache) *ProductService {
    return &ProductService{DB: db, Cache: c}
}

func (s *ProductService) GetProduct(id uint) (*Product, error) {
    ctx := context.Background()
    cacheKey := fmt.Sprintf("product:%d", id)

    // Try cache
    var product Product
    err := s.Cache.Get(ctx, cacheKey, &product)
    if err == nil {
        return &product, nil
    }

    // Query DB
    if err := s.DB.First(&product, id).Error; err != nil {
        return nil, err
    }

    // Cache result
    s.Cache.Set(ctx, cacheKey, product, 30*time.Minute)

    return &product, nil
}

func (s *ProductService) UpdateProduct(product *Product) error {
    // Update DB
    if err := s.DB.Save(product).Error; err != nil {
        return err
    }

    // Invalidate cache
    ctx := context.Background()
    cacheKey := fmt.Sprintf("product:%d", product.ID)
    s.Cache.Delete(ctx, cacheKey)

    return nil
}

func (s *ProductService) GetProductsByCategory(categoryID uint) ([]Product, error) {
    ctx := context.Background()
    cacheKey := fmt.Sprintf("products:category:%d", categoryID)

    // Try cache
    var products []Product
    err := s.Cache.Get(ctx, cacheKey, &products)
    if err == nil {
        return products, nil
    }

    // Query DB
    if err := s.DB.Where("category_id = ?", categoryID).Find(&products).Error; err != nil {
        return nil, err
    }

    // Cache result (shorter TTL for lists)
    s.Cache.Set(ctx, cacheKey, products, 5*time.Minute)

    return products, nil
}

Performance Benchmarks

Authorization Service

| Operation | Without Cache | With Cache | Improvement |
|-----------|---------------|------------|-------------|
| GetUserPermissions | 45ms | <1ms | 45x faster |
| GetRolePermissions | 32ms | <1ms | 32x faster |
| HasPermission | 28ms | <1ms | 28x faster |

HTTP Response Caching

| Endpoint | Without Cache | With Cache | Improvement |
|----------|---------------|------------|-------------|
| GET /api/posts | 120ms | 1ms | 120x faster |
| GET /api/users?page=1 | 85ms | <1ms | 85x faster |
| GET /api/products | 200ms | 1ms | 200x faster |

Cache Provider Performance

| Provider | Read Latency | Write Latency | Scalability |
|----------|--------------|---------------|-------------|
| Memory | <1μs | <1μs | Single instance |
| Redis (local) | 1-2ms | 1-2ms | Multi-instance |
| Redis (remote) | 5-20ms | 5-20ms | Multi-instance |

Migration Guide

Upgrading from Non-Cached Code

Before:

func (s *UserService) GetUser(id uint) (*User, error) {
    var user User
    if err := s.DB.First(&user, id).Error; err != nil {
        return nil, err
    }
    return &user, nil
}

After:

func (s *UserService) GetUser(id uint) (*User, error) {
    ctx := context.Background()
    cacheKey := cache.UserKey(id)

    // Try cache
    var user User
    if err := s.Cache.Get(ctx, cacheKey, &user); err == nil {
        return &user, nil
    }

    // Fallback to DB
    if err := s.DB.First(&user, id).Error; err != nil {
        return nil, err
    }

    // Cache result
    s.Cache.Set(ctx, cacheKey, user, 15*time.Minute)

    return &user, nil
}

License

This caching system is part of the Base Framework and is licensed under the MIT License.

Support

For issues, questions, or contributions: