Saturday, August 30, 2025

Software That Doesn't Break When We Add New Features: Why Reliability Matters

Paul (Founder)

Paul is a software architect and director at Phillip James Lettings, who have arranged thousands of tenancies over twenty years. LetAdmin is what happens when you know both sides: the lettings desk and the codebase.

Software Quality & Reliability

It's Monday morning, 8:45am. You open LetAdmin to review today's viewings. The system loads, but something's wrong:

  • The properties list is blank
  • Clicking on a property shows an error message
  • Trying to search properties crashes the page
  • Your reports won't generate

What happened? Over the weekend, we released a "small update" to fix the email notifications. Somehow, that broke the properties feature. Now your entire team can't access property data during the busiest day of the week.

This scenario is every agency's nightmare—software that worked fine on Friday, completely broken on Monday.

This week, we achieved a significant milestone: 100% automated test coverage. Every line of code in LetAdmin is covered by automated tests that verify it works correctly. Before any update deploys, 300+ scenarios run automatically—if any test fails, the update is blocked.
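
For the technically curious, an automated test is just a small program that exercises one scenario and fails loudly if the result is wrong. Here's a minimal sketch in Python (the `PropertyStore` class is an illustrative stand-in, not LetAdmin's real code):

```python
# test_properties.py -- a minimal, illustrative scenario (not LetAdmin's real code)
class PropertyStore:
    """Stand-in for the real property service."""
    def __init__(self):
        self._properties = []

    def create(self, postcode, bedrooms):
        record = {"postcode": postcode, "bedrooms": bedrooms}
        self._properties.append(record)
        return record

    def search(self, postcode):
        return [p for p in self._properties if p["postcode"] == postcode]

def test_created_property_is_searchable():
    store = PropertyStore()
    store.create(postcode="SA1 1AA", bedrooms=2)
    results = store.search("SA1 1AA")
    assert len(results) == 1               # the property we just created is found
    assert results[0]["bedrooms"] == 2     # and its details survived intact
```

Run with the pytest tool, a file like this either passes silently or fails the build. Multiply it by 300+ scenarios and you have the safety net this article describes.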

This article explains why software reliability matters for letting agents, how automated testing works, and what it means for your daily operations.

What You Actually Experience (Or Don't Experience)

With unreliable software (what we're preventing):

  • Monday morning: Update deployed over weekend, properties feature broken
  • Wednesday afternoon: Reports timing out after "minor fix" to accounting
  • Friday: Photo uploads broken after update to viewing scheduler
  • Next Monday: Emergency rollback to previous version, lose new features
  • Next Wednesday: Fix deployed, creates different bug

Constant cycle: New feature → Something breaks → Emergency fix → Different thing breaks


With reliable software (what 100% testing provides):

  • Every Monday: Software works exactly as it did Friday
  • Every update: New features added without breaking existing features
  • Every interaction: Properties load, reports generate, photos upload—consistently
  • Zero surprises: If it worked yesterday, it works today

Peace of mind: You can trust the software to work every single day.

Real-World Scenario: What Automated Testing Prevents

Let's walk through what could go wrong (and how testing prevents it):

Scenario: We Add a "Tenant Application Form" Feature

Without automated testing:

Week 1 - We build tenant application forms

  • Code looks good
  • We manually test: form loads, submissions work
  • Deploy to production Friday evening

Week 2 Monday - Agencies report problems:

  • Tenant applications work ✓
  • But property photos won't upload ✗ (broken by new code)
  • And reports are generating blank PDFs ✗ (broken by new code)
  • And email notifications aren't sending ✗ (broken by new code)

We tested the new application forms thoroughly, but never thought to re-test photo uploads, reports, or email notifications. Manual testing can't catch everything.

Week 2 Tuesday - Emergency patches:

  • Fix photo uploads
  • Fix reports
  • Fix email notifications
  • Re-deploy

Week 2 Wednesday - New problem:

  • Property searches now timeout (broke when fixing reports)

Week 2-4 - Endless cycle of fixes creating new problems


With automated testing:

Week 1 - We build tenant application forms

  • Code looks good
  • Automated tests run: 297 tests pass, 3 fail
    • ❌ Photo upload test fails (new code broke uploads)
    • ❌ Report generation test fails (new code broke PDFs)
    • ❌ Email notification test fails (new code broke emails)

Deployment is blocked—the update cannot deploy with failing tests.

Week 1 (same day) - We fix the bugs:

  • Fix photo upload issue
  • Fix report generation issue
  • Fix email notification issue
  • Re-run tests: All 300 tests pass ✓

Week 1 Friday evening - Deploy to production with confidence:

  • Tenant applications work ✓
  • Photo uploads still work ✓
  • Reports still work ✓
  • Email notifications still work ✓
  • Property searches still work ✓

Week 2 Monday - Agencies use new tenant application forms:

  • Zero bugs reported
  • Zero emergency fixes needed
  • Zero broken features

Automated testing caught 3 breaking bugs before they reached agencies.

What Gets Tested Automatically

Every time we make a code change, 300+ tests run automatically. Here's what they check, with a brief illustrative sketch after each category:

Property Management Tests (90 tests)

  • Can create a new property? ✓
  • Can upload photos? ✓
  • Can reorder photos? ✓
  • Can edit property details? ✓
  • Can delete a property? ✓
  • Can search properties by postcode? ✓
  • Can filter by bedrooms, price, type? ✓
  • Does pagination work correctly? ✓
  • Do property pages load in < 1 second? ✓
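
One detail worth showing: a single test function can cover several of the scenarios above at once. A hedged sketch (the sample data and `filter_properties` helper are illustrative, not our real search code):

```python
import pytest

# Illustrative sample data and helper; LetAdmin's real search code differs.
PROPERTIES = [
    {"postcode": "SA1 1AA", "bedrooms": 2, "rent": 850},
    {"postcode": "SA1 2BB", "bedrooms": 3, "rent": 1100},
    {"postcode": "CF10 1AA", "bedrooms": 1, "rent": 700},
]

def filter_properties(min_beds=0, max_rent=None):
    """Return properties matching the given bedroom and rent filters."""
    return [
        p for p in PROPERTIES
        if p["bedrooms"] >= min_beds
        and (max_rent is None or p["rent"] <= max_rent)
    ]

@pytest.mark.parametrize("min_beds, max_rent, expected", [
    (0, None, 3),  # no filters: everything comes back
    (2, None, 2),  # two or more bedrooms
    (2, 900, 1),   # two or more bedrooms AND within budget
    (4, None, 0),  # nothing matches: must return empty, not crash
])
def test_filtering(min_beds, max_rent, expected):
    assert len(filter_properties(min_beds, max_rent)) == expected
```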

Security & Data Privacy Tests (75 tests)

  • Can Agency A access Agency B's properties? Should be NO
  • Can deleted users still log in? Should be NO
  • Do password reset links expire after 24 hours? ✓
  • Are uploaded photos stored securely? ✓
  • Does database encryption work correctly? ✓
  • Can staff see properties from wrong agency? Should be NO
  • Does session timeout after 2 hours of inactivity? ✓
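
The isolation checks are worth a sketch of their own, because they assert that something must not happen. Again illustrative only; the real access-control code is more involved:

```python
# Illustrative isolation check; the real access-control code is more involved.
class FakeApi:
    """Tiny stand-in that scopes every lookup to the logged-in agency."""
    def __init__(self):
        self._data = {"agency_a": ["12 High St"], "agency_b": ["9 Mill Rd"]}

    def get_property(self, agency, address):
        # The crucial line: only look inside the caller's own agency.
        if address in self._data.get(agency, []):
            return address
        return None  # behaves as if the record simply does not exist

def test_one_agency_cannot_read_anothers_property():
    api = FakeApi()
    # Agency B's property must be invisible to Agency A...
    assert api.get_property("agency_a", "9 Mill Rd") is None
    # ...while Agency B can still see its own data.
    assert api.get_property("agency_b", "9 Mill Rd") == "9 Mill Rd"
```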

Integration Tests (60 tests)

  • Does Rightmove sync work? ✓
  • Does Zoopla sync work? ✓
  • Does email sending work? ✓
  • Does PDF generation work? ✓
  • Does photo resizing work? ✓
  • Do portal API credentials validate correctly? ✓
  • Does data export to Excel work? ✓
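
Integration tests are a little different: we often run them against a stand-in for the external service, so a Rightmove outage doesn't block our own deployments. A simplified sketch using Python's built-in mocking (the `push_listing_to_portal` function is hypothetical):

```python
import pytest
from unittest import mock

# Hypothetical sync function, for illustration only; the real portal
# integration code is more involved.
def push_listing_to_portal(listing, send):
    """Send a listing to a portal and report success or failure honestly."""
    response = send(listing)
    if response.get("status") != "ok":
        raise RuntimeError(f"Portal rejected listing: {response}")
    return response["portal_id"]

def test_sync_succeeds_when_portal_accepts():
    # Stand-in for the portal's API: always accepts the listing.
    fake_send = mock.Mock(return_value={"status": "ok", "portal_id": "RM-123"})
    assert push_listing_to_portal({"postcode": "SA1 1AA"}, fake_send) == "RM-123"

def test_sync_fails_loudly_when_portal_rejects():
    # Stand-in for the portal's API: rejects the listing.
    fake_send = mock.Mock(return_value={"status": "error"})
    with pytest.raises(RuntimeError):
        push_listing_to_portal({"postcode": "SA1 1AA"}, fake_send)
```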

User Interface Tests (45 tests)

  • Do forms validate input correctly? ✓
  • Do error messages display clearly? ✓
  • Does mobile layout work on tablets? ✓
  • Do drag-and-drop interactions work? ✓
  • Do buttons and links navigate correctly? ✓
  • Does the system work in Chrome, Safari, Firefox? ✓
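
Interface tests drive a real browser, clicking buttons the way a user would. A minimal sketch using the Playwright library (the URL and page selectors here are made up for illustration):

```python
# Illustrative browser test using Playwright's Python API.
# The URL and CSS selectors are made up; LetAdmin's real markup differs.
from playwright.sync_api import sync_playwright, expect

def test_empty_form_shows_a_clear_error():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://staging.example.com/properties/new")  # hypothetical URL
        page.click("button[type=submit]")  # submit the form with nothing filled in
        expect(page.locator(".error-message")).to_be_visible()  # user sees an error
        browser.close()
```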

Performance Tests (30 tests)

  • Do property lists load in < 1 second? ✓
  • Do searches complete in < 500ms? ✓
  • Can the system handle 100 concurrent users? ✓
  • Do reports generate in < 5 seconds? ✓
  • Does photo upload work with 20MB files? ✓
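
Performance tests boil down to a stopwatch and a budget. The real versions run against realistic data volumes, but the shape of the check is this simple (the `load_property_list` function is a stand-in):

```python
import time

def load_property_list():
    """Stand-in for the real page load / database query under test."""
    return [{"id": i} for i in range(1000)]

def test_property_list_loads_within_budget():
    start = time.perf_counter()
    load_property_list()
    elapsed = time.perf_counter() - start
    # Fail the build if the operation regresses past its one-second budget.
    assert elapsed < 1.0, f"Too slow: {elapsed:.2f}s (budget: 1.0s)"
```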

Total: 300+ tests running on every code change

If any test fails, the code change is blocked from deploying.

How This Works Behind the Scenes

You don't see any of this (which is the point), but here's what happens automatically:

1. Developer Makes a Code Change

Example: Adding a "Mark Property as Let" button

2. Automated Tests Run (Takes 3 Minutes)

  • 300+ tests execute automatically
  • Each test simulates a real user action
  • Tests check if features still work correctly

3. Results Displayed

  • ✅ All 300 tests passed → Code can deploy
  • ❌ 3 tests failed → Code is blocked, cannot deploy

4. If Tests Failed, Fix the Code

  • Review which tests failed
  • Fix the bugs
  • Re-run tests
  • Repeat until all tests pass

5. Only Then Can It Deploy

  • All tests passing = safe to deploy
  • Tests failing = stays in development

You never see failed deployments because they never make it to production.
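
If you're wondering how a pipeline actually enforces the word "blocked", the whole mechanism comes down to an exit code. A sketch of the logic (real setups use a CI service, and `deploy.sh` is a hypothetical stand-in, but the principle is identical):

```python
#!/usr/bin/env python3
"""Illustrative deploy gate: run the suite, deploy only if everything passes."""
import subprocess
import sys

result = subprocess.run(["pytest", "--quiet"])  # run the full test suite

if result.returncode != 0:
    print("Tests failed: deployment blocked.")
    sys.exit(1)  # a non-zero exit code stops the pipeline right here

print("All tests passed: deploying.")
subprocess.run(["./deploy.sh"], check=True)  # hypothetical deploy step
```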

What This Means for Your Agency

Predictable software: Monday mornings don't bring surprises. The software works the same way it did Friday.

Fewer disruptions: No emergency "the system is down" emails. No mid-day hotfixes. No scrambling to find workarounds.

Faster feature development: We can add features confidently without fear of breaking existing functionality. Tests catch issues before they reach you.

Your data stays safe: Security tests ensure multi-agency data isolation works correctly. If we accidentally wrote code that could leak data between agencies, tests catch it before deployment.

Updates you can trust: When we say "We added tenant application forms," you can trust that:

  • The new feature works
  • Your existing features still work
  • Nothing broke in the process

Common Questions

Q: If you have 100% test coverage, does that mean the software is bug-free?

A: No—but it's close. Tests verify that features work as designed. If we designed something incorrectly (e.g., we thought 28-day notice periods were standard, but your agency uses 30-day), tests won't catch that—user feedback does.

Tests prevent regressions (breaking things that used to work) and technical bugs (crashes, errors, data corruption). They don't prevent design mistakes (building the wrong feature, or building the right feature to the wrong rules).

Q: How often do tests actually catch bugs before they reach us?

A: Every week. On average, tests catch 3-5 bugs per week that would have affected agencies. Most are minor (broken link, formatting issue) but some are major (feature completely non-functional).

Q: Do tests slow down feature development?

A: Initially, yes—writing tests takes time. But they accelerate development long-term because:

  • We spend less time fixing bugs in production
  • We can refactor code confidently (knowing tests catch mistakes)
  • New developers can contribute without fear of breaking things

Q: What if tests pass but there's still a bug in production?

A: It happens occasionally (maybe 1-2 times per quarter). When it does:

  • We fix the bug immediately
  • We write a new test to prevent that specific bug from recurring
  • The test suite gets stronger over time
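
Those new tests are usually tiny and named after the bug they pin down. A hypothetical example:

```python
def normalise_postcode(raw):
    """Tidy a postcode before searching (illustrative helper)."""
    return raw.strip().upper()

def test_search_handles_trailing_whitespace():
    # Hypothetical bug report: searching "sa1 1aa " (trailing space,
    # lower case) returned no results. This test keeps that fix in place.
    assert normalise_postcode("sa1 1aa ") == "SA1 1AA"
```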

Why Small Agencies Benefit Most

Large agencies with dedicated IT staff can handle software issues:

  • System goes down? IT team troubleshoots
  • Bug discovered? IT team reports it and follows up
  • Workaround needed? IT team creates temporary processes

You don't have an IT team. You have 2-3 people running the entire agency. If the software breaks:

  • You're the one troubleshooting
  • You're the one finding workarounds
  • You're the one dealing with frustrated staff

Reliable software is critical when you don't have technical resources to fix problems.

What We're Testing Next

Current test coverage: 100% of application code

Next targets:

Load testing: Simulate 1,000 concurrent users to ensure system remains fast under heavy load

Mobile-specific testing: Automated tests on real iOS and Android devices (today's tests run in desktop browsers only)

Portal integration testing: Daily automated tests that verify Rightmove/Zoopla APIs still work correctly (portals occasionally change APIs without warning)

Data migration testing: When we add new features requiring database changes, test that existing data migrates correctly

Accessibility testing: Automated tests that verify the system works with screen readers and keyboard navigation

The goal: Every interaction, on every device, in every scenario—tested and verified automatically.

We'd Love to Hear from You

Have you experienced software updates breaking features you rely on? How disruptive was it?

How much confidence do you have in your current letting agent software? Do you trust it to work every day, or do you expect issues?

What would make you trust software with your agency's critical operations? Guaranteed uptime? Automated testing? Something else?

Get in touch: paul@letadmin.com


LetAdmin is in active development, built by letting agents for letting agents. Automated testing runs on every code change at Phillip James (370+ properties), where reliability is critical for daily operations. If you're frustrated with unreliable software, we'd love to hear from you.