Wednesday, June 11, 2025

Testing from Day One: Why RSpec Matters for Letting Agency Software

Paul (Founder)
Software Quality
[Image: Rubber duck debugging companion next to laptop with code]

On 11 June 2025, the same day we configured Heroku deployment and S3 storage, we integrated RSpec—Ruby's most popular testing framework. This wasn't an afterthought or a "we'll add tests later" compromise. It was a deliberate decision that every feature, from day one, would include tests proving it works as intended.

For letting agencies evaluating property management software, asking about test coverage should be mandatory. Untested code breaks unpredictably. Bugs surface at the worst moments: during viewings, when syndicating to portals, when landlords are checking compliance reports. Tested code provides confidence that changes won't break existing features—a critical foundation for software that handles your business operations.

What Your Team Will Notice

The direct benefit to agency staff is reliability. When they create a property listing, upload photos, or update rental prices, the system behaves consistently. When a developer adds a new feature (say, bulk email to tenants), they don't inadvertently break the property search or photo uploads—because the test suite catches regressions before deployment.

Consider a scenario: A developer refactors the Property model to improve database query performance. Without tests, this refactoring could accidentally remove validation rules, breaking data integrity. With tests, any regression fails immediately during development, long before reaching production. The feature ships when tests pass, giving confidence it won't disrupt agency operations.

Testing also accelerates development over time. In the early weeks, writing tests feels like overhead—you could ship features faster without them. But by month three, the codebase has hundreds of features, thousands of lines of code, and countless edge cases. Making changes becomes terrifying without tests. "Will updating this controller break something elsewhere?" becomes a constant worry. With comprehensive tests, changes are confident and fast.

Under the Bonnet: RSpec Setup

RSpec replaced Rails' default testing framework (Minitest) because it offers more expressive syntax and better ecosystem support. The initial setup added RSpec to the Gemfile:

# Gemfile
group :development, :test do
  gem 'rspec-rails', '~> 6.1'
  gem 'factory_bot_rails'  # For test data generation
  gem 'faker'               # For realistic fake data
end

group :test do
  gem 'shoulda-matchers'    # For concise validation tests
  gem 'database_cleaner'    # For cleaning test database between runs
end

The rspec-rails gem integrates RSpec with Rails, providing generators, helpers, and matchers specific to Rails applications. The supporting gems handle common testing needs:

  • factory_bot_rails: Replaces fixtures with factories that generate test data programmatically
  • faker: Generates realistic fake data (names, addresses, emails) for tests
  • shoulda-matchers: Provides one-liner matchers for common Rails validations and associations
  • database_cleaner: Ensures each test runs against a clean database, preventing test pollution
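
The "more expressive syntax" claim from earlier is easiest to see side by side. A minimal illustration, assuming a property record with two bedrooms (neither line is from the Week 24 codebase):

# The same check in both frameworks (illustrative)
assert_equal 2, property.beds      # Minitest: assertion style
expect(property.beds).to eq(2)     # RSpec: expectation style, reads closer to plain English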

The RSpec configuration lives in spec/rails_helper.rb:

# spec/rails_helper.rb
require 'spec_helper'
require File.expand_path('../config/environment', __dir__)
require 'rspec/rails'

RSpec.configure do |config|
  config.use_transactional_fixtures = true    # wrap each example in a rolled-back transaction
  config.infer_spec_type_from_file_location!  # spec/models => type: :model, spec/requests => type: :request
  config.filter_rails_from_backtrace!         # hide framework frames in failure backtraces

  # Include Factory Bot methods (create, build, attributes_for)
  config.include FactoryBot::Syntax::Methods
end

The use_transactional_fixtures = true setting is clever. Each test runs inside a database transaction that rolls back after completion, so tests can create, modify, and delete records freely and the database returns to a pristine state afterwards. (Despite the name, this applies to all test data, not just Rails fixtures.) Tests remain isolated and fast, with no time wasted dropping and recreating the entire database.
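
To make that isolation concrete, here's an illustrative pair of examples (a hypothetical spec, not from Week 24). Both pass because the first example's record is rolled back before the second runs:

# spec/models/property_isolation_spec.rb (illustrative)
RSpec.describe Property, type: :model do
  it "sees records created inside its own transaction" do
    create(:property, reference: "ISO0001")
    expect(Property.exists?(reference: "ISO0001")).to be(true)
  end

  it "starts from a clean database because that transaction rolled back" do
    expect(Property.exists?(reference: "ISO0001")).to be(false)
  end
end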

Model Testing: Validations and Associations

The initial Property model tests (introduced Week 24, expanded in subsequent weeks) covered fundamentals:

# spec/models/property_spec.rb
RSpec.describe Property, type: :model do
  describe "associations" do
    it { should have_many(:property_photos)
                  .dependent(:destroy)
                  .order(:position) }
  end

  describe "validations" do
    it { should validate_inclusion_of(:status)
                  .in_array(Property::ALLOWED_STATUSES) }

    it { should validate_presence_of(:reference) }

    it "rejects duplicate references" do
      create(:property, reference: "TEST001")
      duplicate = build(:property, reference: "TEST001")

      expect(duplicate).not_to be_valid
      expect(duplicate.errors[:reference])
        .to include("has already been taken")
    end
  end

  describe "nested attributes" do
    it "accepts nested attributes for property_photos" do
      property = create(:property,
        property_photos_attributes: [
          { image: fixture_file_upload('test.jpg', 'image/jpeg') }
        ]
      )

      expect(property.property_photos.count).to eq(1)
    end

    it "destroys photos when allow_destroy is true" do
      property = create(:property, :with_photos)
      photo_id = property.property_photos.first.id

      property.update(
        property_photos_attributes: [
          { id: photo_id, _destroy: '1' }
        ]
      )

      expect(PropertyPhoto.find_by(id: photo_id)).to be_nil
    end
  end
end

The shoulda-matchers gem enables the concise should have_many and should validate_inclusion_of syntax. These one-liners test Rails conventions without verbose setup. They confirm:

  • Associations exist and have correct options (:dependent, :order)
  • Validations enforce expected rules
  • Error messages appear correctly when validations fail

The create and build methods come from Factory Bot. They generate test records using factories:

# spec/factories/properties.rb
FactoryBot.define do
  factory :property do
    sequence(:reference) { |n| "PROP#{n.to_s.rjust(4, '0')}" }
    price { 1500.00 }
    beds { 2 }
    bathrooms { 1 }
    status { "Available to Let" }
    postcode { "M1 1AA" }

    trait :with_photos do
      after(:create) do |property|
        create_list(:property_photo, 3, property: property)
      end
    end
  end
end

The sequence method ensures each property gets a unique reference ("PROP0001", "PROP0002", etc.), preventing uniqueness validation failures. The :with_photos trait demonstrates Factory Bot's power: creating a property with associated photos requires just create(:property, :with_photos).
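
In practice, the factory methods compose naturally. An illustrative example (the variable names are ours, not from the codebase):

# Inside any spec, with FactoryBot::Syntax::Methods included (illustrative)
it "composes factories, traits, and attribute overrides" do
  unsaved   = build(:property)                   # in memory only, not yet persisted
  furnished = create(:property, :with_photos)    # persisted, with three photos attached
  cheaper   = create(:property, price: 1200.00)  # overrides beat factory defaults

  expect(unsaved).not_to be_persisted
  expect(furnished.property_photos.count).to eq(3)
  expect(cheaper.price).to eq(1200.00)
end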

Controller Testing: Request Specs

Request specs (RSpec's modern replacement for the older controller specs) verify that HTTP requests behave correctly:

# spec/requests/properties_spec.rb
RSpec.describe "Properties", type: :request do
  let(:user) { create(:user) }

  before { sign_in user }

  describe "GET /properties" do
    it "returns a successful response" do
      get properties_path
      expect(response).to have_http_status(:success)
    end

    it "renders the index template" do
      get properties_path
      expect(response).to render_template(:index)
    end
  end

  describe "POST /properties" do
    context "with valid parameters" do
      let(:valid_attributes) do
        attributes_for(:property)
      end

      it "creates a new Property" do
        expect {
          post properties_path, params: { property: valid_attributes }
        }.to change(Property, :count).by(1)
      end

      it "redirects to the created property" do
        post properties_path, params: { property: valid_attributes }
        expect(response).to redirect_to(property_path(Property.last))
      end
    end

    context "with invalid parameters" do
      let(:invalid_attributes) do
        attributes_for(:property, reference: nil)
      end

      it "does not create a new Property" do
        expect {
          post properties_path, params: { property: invalid_attributes }
        }.not_to change(Property, :count)
      end

      it "renders the new template with unprocessable_entity status" do
        post properties_path, params: { property: invalid_attributes }
        expect(response).to have_http_status(:unprocessable_entity)
        expect(response).to render_template(:new)
      end
    end
  end
end

Request specs verify the full request/response cycle:

  1. User makes HTTP request (GET, POST, etc.)
  2. Rails routes to the appropriate controller action
  3. Controller processes the request (queries database, applies business logic)
  4. Controller renders response (HTML template, JSON, redirect)

Testing both success and failure paths is critical. The "valid parameters" context confirms properties are created successfully. The "invalid parameters" context confirms validation failures are handled gracefully—the user sees error messages, not exception pages.

The sign_in user helper (provided by Devise test helpers) simulates an authenticated session, allowing tests to access protected controller actions without manually setting cookies and session data.
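
Those helpers must be included for the relevant spec types. The wiring below uses Devise's standard module; its exact placement in rails_helper.rb is our assumption:

# spec/rails_helper.rb (sketch): expose sign_in/sign_out to request and system specs
RSpec.configure do |config|
  config.include Devise::Test::IntegrationHelpers, type: :request
  config.include Devise::Test::IntegrationHelpers, type: :system
end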

Feature Testing: System Specs

While not implemented extensively in Week 24, RSpec supports system specs (integration tests) that simulate real user interactions via a headless browser:

# spec/system/property_management_spec.rb (illustrative, not Week 24)
RSpec.describe "Property Management", type: :system do
  let(:user) { create(:user) }

  before do
    sign_in user
    visit properties_path
  end

  it "allows creating a new property" do
    click_link "New Property"

    fill_in "Reference", with: "TEST001"
    fill_in "Price", with: "1500"
    select "Available to Let", from: "Status"
    fill_in "Postcode", with: "M1 1AA"

    click_button "Create Property"

    expect(page).to have_content("Property was successfully created")
    expect(page).to have_content("TEST001")
  end
end

System specs use Capybara to drive a browser (typically headless Chrome), filling forms, clicking buttons, and verifying page content. They test the entire stack: routing, controllers, models, views, and JavaScript. They're slower than model or controller tests but provide confidence that features work end-to-end.
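
Pointing system specs at a headless browser is a one-time configuration. A minimal sketch, assuming selenium-webdriver is in the Gemfile and spec/support files are loaded from rails_helper.rb:

# spec/support/capybara.rb (sketch): drive system specs with headless Chrome
RSpec.configure do |config|
  config.before(:each, type: :system) do
    driven_by :selenium, using: :headless_chrome, screen_size: [1400, 1400]
  end
end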

Test Coverage: Measuring Completeness

By Week 35, we'd integrate SimpleCov to measure test coverage:

# spec/spec_helper.rb
require 'simplecov'
SimpleCov.start 'rails' do
  add_filter '/spec/'
  add_filter '/config/'
  add_filter '/vendor/'
end

SimpleCov analyses which lines of code execute during tests, generating reports showing coverage percentages:

Coverage Summary
----------------
Files:          142
Lines of Code:  3,847
Covered:        3,847
Coverage:       100.0%

While 100% coverage doesn't guarantee bug-free code, it ensures every method, branch, and edge case has at least one test exercising it. For mission-critical systems (which lettings software is—it handles tenant data, financial transactions, compliance records), high coverage is non-negotiable.
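
Coverage is also easy to enforce rather than merely report: SimpleCov can fail the run when coverage slips below a threshold. A sketch, with the threshold as our assumption:

# spec/spec_helper.rb (sketch): fail the suite if coverage regresses
require 'simplecov'
SimpleCov.start 'rails' do
  add_filter '/spec/'
  minimum_coverage 100   # assumed threshold; the run fails below this
end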

Continuous Integration: Automated Testing

The CI/CD pipeline (implemented Week 35) runs the test suite automatically on every commit:

# .github/workflows/ci.yml
name: CI Pipeline
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432   # publish the service so localhost:5432 reaches the container
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.2.9'
          bundler-cache: true

      - name: Setup Database
        run: |
          bundle exec rails db:create db:schema:load
        env:
          RAILS_ENV: test
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/test

      - name: Run Tests
        run: bundle exec rspec --format progress

      - name: Upload Coverage
        uses: actions/upload-artifact@v4
        with:
          name: coverage
          path: coverage/

This GitHub Actions workflow:

  1. Spins up a PostgreSQL service container
  2. Installs Ruby and dependencies
  3. Creates the test database and loads the schema
  4. Runs the entire test suite
  5. Uploads coverage reports

If any test fails, the workflow fails, preventing broken code from merging. This "test gate" ensures the main branch remains stable—deployable at any moment.

Testing Philosophy: Behaviour, Not Implementation

Good tests verify behaviour (what the code does) rather than implementation (how it does it). Consider these two approaches:

Testing implementation (brittle):

it "calls the S3 upload service" do
  expect(S3UploadService).to receive(:upload).with(file)
  property.attach_photo(file)
end

Testing behaviour (robust):

it "attaches the photo to the property" do
  property.attach_photo(file)
  expect(property.property_photos.count).to eq(1)
  expect(property.property_photos.first.image).to be_attached
end

The first test breaks if you change the upload mechanism (say, switching from a service object to a job). The second test remains valid—it verifies the photo is attached, regardless of implementation.

This distinction matters as codebases evolve. Refactoring should be safe—improving code structure without changing behaviour. Tests that focus on behaviour enable confident refactoring. Tests that focus on implementation require constant updates, reducing their value.

The Cost of Not Testing

Consider the alternative: shipping code without tests. Initially, it feels faster. You write a feature, manually test it, deploy. But manual testing doesn't scale:

  • Regression risk: Every change could break existing features, and you won't know until users report bugs
  • Fear of refactoring: Improving code becomes risky, leading to accumulating technical debt
  • Onboarding friction: New developers can't verify their changes work without extensive manual testing
  • Deployment anxiety: Every release is a "hope nothing breaks" moment

Agencies using untested software experience this as instability: features that worked yesterday break today, bugs resurface after being "fixed," and deployments require cautious scheduling and rollback plans.

Testing Levels: The Pyramid

Effective test suites follow the "testing pyramid":

        /\
       /  \  System Tests (few, slow, comprehensive)
      /----\
     /      \ Controller Tests (fewer, medium speed, integration-focused)
    /--------\
   /          \ Model Tests (many, fast, specific)
  /____________\

The base (model tests) is widest—many fast, focused tests covering individual components. The middle (controller/request tests) is narrower—fewer tests covering integration between components. The apex (system tests) is smallest—a handful of slow but comprehensive end-to-end tests.

This distribution optimises feedback speed. Most bugs are caught by fast model tests that run in milliseconds. Fewer slip through to controller tests. Even fewer reach system tests. When a system test fails, lower-level tests help pinpoint the issue.

What This Enabled Later

By establishing testing discipline in Week 24, we enabled rapid iteration in subsequent weeks. By Week 35, dozens of features had shipped confidently:

  • Multi-tenancy (complex data scoping, high regression risk)
  • API endpoints (security-critical, integration-heavy)
  • Performance optimisations (refactoring-heavy, behaviour must remain identical)
  • Photo management (JavaScript-heavy, browser interaction)

Each feature included tests. By Week 35, we achieved 100% test coverage—every line of application code had at least one test exercising it. This wasn't perfectionism; it was pragmatism. With comprehensive tests, changes were fearless. Without them, every change was a gamble.

For Agencies: Questions to Ask Software Vendors

When evaluating lettings management software, ask:

  1. "What's your test coverage?" Anything below 80% is concerning. 100% is excellent.
  2. "Do you run tests on every deployment?" Manual testing doesn't scale.
  3. "Can you show me your CI/CD pipeline?" Automated testing should be non-negotiable.
  4. "How do you prevent regressions?" Tests should catch bugs before users do.

Vendors uncomfortable answering these questions are shipping untested code. That's a red flag.

What's Next

The testing foundation established in Week 24 would grow throughout development. By Week 35, the test suite included:

  • 500+ model tests
  • 300+ request tests
  • 100+ system tests
  • Full CI/CD pipeline with automated quality checks

But it started here: one RSpec setup on 11 June 2025, followed by disciplined test-writing for every feature thereafter. That discipline made everything else possible.

