Background jobs power invisible functionality users take for granted: sending audit trail notifications, delivering webhooks to external systems, processing property photos, and synchronising data to Rightmove. As LetAdmin's integration capabilities expanded during Week 37, our job queue requirements evolved from simple task processing to sophisticated priority management and scalability planning.
This article explores why we migrated from Solid Queue to Sidekiq and how adopting priority-based queues rather than function-specific queues positions us for enterprise-scale operation.
What Your Team Will Notice
From a user perspective, this migration improves responsiveness during peak usage. Previously, if twenty webhook deliveries queued whilst a photo processing job ran, those webhooks waited until photo processing completed—potentially causing noticeable delays for time-sensitive integrations like Rightmove synchronisation.
With priority-based queues, critical jobs (like Rightmove property updates during viewing schedules) process immediately, even if dozens of lower-priority tasks are queued. High-priority jobs (audit trail logging, webhook deliveries) process next. Default priority handles routine tasks (photo processing, email sending). Low priority covers background maintenance (cache warming, analytics aggregation) that can wait.
The practical impact: property updates appear on Rightmove faster, especially during busy periods. Audit trails log reliably even under load. Photo uploads complete predictably without blocking urgent tasks.
For developers, the Sidekiq Web UI provides real-time visibility into job processing. View active jobs, inspect failed jobs with full stack traces, retry failed jobs with one click, and monitor queue depths to understand system load patterns.
Why Migrate from Solid Queue?
Solid Queue arrived as Rails' database-backed job queue (requiring Rails 7.1+ and now the default Active Job backend in Rails 8), eliminating Redis as a dependency. For applications where simplicity outweighs scalability, Solid Queue works admirably. But as LetAdmin added webhook delivery, external API synchronisation, and real-time integrations, we needed capabilities Solid Queue doesn't prioritise:
Priority-Based Processing: Sidekiq's weighted queue system allows critical jobs to jump ahead of less urgent ones. Solid Queue has no equivalent weighted polling, so under load a burst of routine jobs can delay urgent ones.
Proven Scale: Sidekiq powers job processing for thousands of high-traffic Rails applications. It's battle-tested at billions of jobs daily with well-documented performance characteristics and tuning approaches.
Rich Ecosystem: The Sidekiq ecosystem includes extensive monitoring tools, enterprise features (unique jobs, batches, job scheduling), and community support accumulated over a decade of production use.
Redis Benefits: While adding Redis introduces infrastructure complexity, it also enables other capabilities like caching, session storage, and real-time features that LetAdmin will leverage as the platform matures.
The migration timing aligned with webhook infrastructure development, as reliable webhook delivery requires sophisticated retry handling and priority management that Sidekiq excels at providing.
Priority-Based Queue Strategy
Rather than creating separate queues for every job type (audit queue, webhook queue, photo queue, email queue), we adopted the priority-based approach used by Stripe, GitHub, and Shopify. Jobs are categorised by urgency, not function:
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: redis_url, ssl_params: ssl_configuration }

  # Define queue priorities
  config.queues = [
    ['critical', 8], # 8x processing weight
    ['high', 4],     # 4x processing weight
    ['default', 2],  # 2x processing weight
    ['low', 1]       # 1x processing weight
  ]
end
The numeric weights determine how often Sidekiq polls each queue: each fetch picks a queue at random, weighted by these values. With the configuration above, for every check of the low queue, Sidekiq checks default twice, high four times, and critical eight times on average. Critical jobs rarely wait more than seconds, even when thousands of low-priority jobs are queued.
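The same weights can also be supplied as command-line flags when starting a worker, which is handy for one-off experiments before committing the configuration; the equivalent of the setup above:

bundle exec sidekiq -q critical,8 -q high,4 -q default,2 -q low,1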
Queue Assignment Strategy
Jobs specify their queue based on urgency and business impact:
Critical Queue: Jobs where failure or delay directly impacts revenue or customer experience. Rightmove property synchronisation during viewing appointments. Payment processing. Immediate tenant communications about urgent maintenance.
High Queue: Jobs that need quick processing but aren't immediately revenue-impacting. Webhook deliveries to external systems. Audit trail logging for compliance. Email notifications about property enquiries.
Default Queue: Standard background processing that should complete within minutes but doesn't need immediate attention. Photo processing and thumbnail generation. Property report generation. Scheduled email campaigns.
Low Queue: Background maintenance tasks that can wait hours or even overnight. Data aggregation for analytics. Cache warming. Historical data cleanup. System health checks.
This approach simplifies job queue management enormously. Instead of maintaining dozens of function-specific queues and balancing their processing weights, we maintain four queues with clear urgency criteria.
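Most jobs declare their queue in the class definition (examples below), but ActiveJob also lets us override the queue for a single enqueue when an individual task is unusually urgent. A minimal sketch, with RightmoveSyncJob and property as hypothetical stand-ins:

# Escalate one specific sync to the critical queue for this enqueue only
RightmoveSyncJob.set(queue: :critical).perform_later(property.id)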
Configuration and Deployment
The Sidekiq migration required several configuration updates. First, adding Sidekiq and Redis gems:
# Gemfile
gem 'sidekiq', '~> 7.0'
gem 'redis', '~> 5.0'
Second, configuring Sidekiq for all environments:
# config/initializers/sidekiq.rb
require 'sidekiq'

# Determine Redis configuration based on environment
redis_url = ENV.fetch('REDIS_URL', 'redis://localhost:6379/0')

ssl_configuration = if Rails.env.production?
  # Production uses Heroku Redis with SSL
  { verify_mode: OpenSSL::SSL::VERIFY_NONE }
else
  # Development and test use local Redis without SSL
  {}
end

Sidekiq.configure_server do |config|
  config.redis = { url: redis_url, ssl_params: ssl_configuration }
end

Sidekiq.configure_client do |config|
  config.redis = { url: redis_url, ssl_params: ssl_configuration }
end
Third, updating the Heroku Procfile to run Sidekiq workers:
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -C config/sidekiq.yml
release: bundle exec rails apps:sync && bundle exec rails db:migrate
The worker process runs Sidekiq with our priority queue configuration, processing jobs continuously as they arrive.
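The contents of config/sidekiq.yml aren't reproduced above; a minimal sketch mirroring the initializer's weights might look like the following, with the concurrency figure purely illustrative:

# config/sidekiq.yml
concurrency: 10
queues:
  - [critical, 8]
  - [high, 4]
  - [default, 2]
  - [low, 1]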
Redis SSL Configuration
Heroku Redis provides SSL-encrypted connections but uses certificates that require specific SSL verification handling. During initial deployment, Sidekiq workers crashed with SSL certificate verification errors. The solution involved configuring SSL parameters appropriately:
# Appropriate SSL configuration for managed Redis services
ssl_configuration = {
  verify_mode: OpenSSL::SSL::VERIFY_NONE # Accepts service-issued certificates
}
This keeps connections encrypted whilst skipping certificate-chain verification, the approach Heroku documents for its Redis add-on's self-signed certificates. For production deployments, the configuration trades full certificate validation for operational reliability while still protecting data in transit.
Sidekiq Web UI
Sidekiq includes a web-based dashboard providing visibility into job processing:
# config/routes.rb
require 'sidekiq/web'

Rails.application.routes.draw do
  # Mount Sidekiq Web UI (authentication added before production)
  mount Sidekiq::Web => '/sidekiq'
end
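The comment above flags that authentication is added before production. One common pattern, assuming Devise with an admin? flag on User (neither is confirmed here), wraps the mount in an authenticated route block:

# config/routes.rb: restrict the dashboard to signed-in admins
authenticate :user, ->(user) { user.admin? } do
  mount Sidekiq::Web => '/sidekiq'
end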
The dashboard displays:
- Live queue depths: See how many jobs are pending in each priority queue
- Active jobs: View currently processing jobs with execution time
- Failed jobs: Inspect failures with full stack traces and retry with one click
- Historical stats: Job processing rates, success rates, and latency metrics
- Worker status: See which worker processes are running and their thread utilisation
During development, this visibility accelerates debugging. In production, it enables proactive monitoring—noticing when particular job types repeatedly fail or when queue depths suggest the need for additional workers.
Job Updates
Existing jobs required minimal changes. Job classes now specify queues explicitly:
class AuditLogJob < ApplicationJob
  queue_as :high # Changed from :audit to :high priority

  def perform(action, user_id, changes)
    # Audit logging logic unchanged
    # Queue assignment reflects urgency, not function
  end
end

class WebhookDeliveryJob < ApplicationJob
  queue_as :high # Webhooks are high priority

  def perform(webhook_delivery_id)
    # Webhook delivery logic unchanged
    # Processing happens on the high-priority queue
  end
end
This migration approach meant updating queue names whilst leaving job logic unchanged, minimising risk and testing requirements.
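A quick way to confirm the reassignment is an ActiveJob test asserting the queue name rather than the job's behaviour; a sketch using Rails' built-in Minitest helpers, with the file name and arguments illustrative:

# test/jobs/audit_log_job_test.rb
require 'test_helper'

class AuditLogJobTest < ActiveJob::TestCase
  test 'audit logging enqueues on the high-priority queue' do
    assert_enqueued_with(job: AuditLogJob, queue: 'high') do
      AuditLogJob.perform_later('update', 42, { 'status' => 'let_agreed' })
    end
  end
end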
Performance Impact
Initial production metrics showed immediate improvements:
Job Latency: Average time from job creation to processing start decreased by 40% for high-priority jobs during peak usage periods.
Webhook Reliability: Webhook delivery success rates improved as retries no longer competed with photo processing for queue time.
Resource Utilisation: Redis memory usage remains consistently low (under 50MB) even with thousands of queued jobs, as Sidekiq's compact job serialisation minimises storage requirements.
Processing Throughput: Worker processes handle 50-100 jobs per second comfortably with current concurrency settings, providing ample headroom for growth.
These improvements validate the migration, demonstrating that priority-based processing enhances user experience measurably.
What's Next
The Sidekiq foundation enables several advanced capabilities we'll leverage as LetAdmin scales:
Unique Jobs: Preventing duplicate webhook deliveries when the same property updates rapidly requires unique job enforcement. Sidekiq Enterprise provides this out of the box.
Job Batches: Processing operations across hundreds of properties (bulk Rightmove sync, mass email campaigns) benefits from batch tracking and success callbacks.
Scheduled Jobs: Recurring tasks like nightly report generation and weekly analytics aggregation will run through Sidekiq-based scheduling (Sidekiq Enterprise's periodic jobs or the sidekiq-cron gem) rather than separate cron jobs; a sketch follows this list.
Auto-Scaling: Worker autoscaling on Heroku (via add-ons such as Judoscale or HireFire) can respond to Sidekiq queue depth and latency metrics, automatically adding worker dynos during peak periods and removing them during quiet times for cost efficiency.
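As a concrete illustration of the scheduling direction, the sidekiq-cron gem (one option; nothing is wired up yet) registers recurring jobs against the same Redis instance. A sketch, with the job class, queue, and schedule purely illustrative:

# config/initializers/sidekiq_cron.rb (hypothetical; requires the sidekiq-cron gem)
Sidekiq::Cron::Job.create(
  name:  'Nightly report generation',
  cron:  '0 2 * * *', # 02:00 every day
  class: 'NightlyReportJob',
  queue: 'low'
)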
The migration from Solid Queue to Sidekiq positioned LetAdmin for enterprise-scale job processing whilst improving responsiveness today. Following patterns proven at companies processing billions of jobs daily gives us confidence our infrastructure scales reliably as the platform grows.
