SignalLine AI
Guide · Housing · 10 min read

How to run a TSM perception survey programme

A week-by-week operational playbook from board approval to NROSH+ submission. For the operations manager actually responsible for delivery.

You have your methodology decision. You have board sign-off in principle. Now you actually have to run the programme. This guide is the operational playbook — what to do, when, and who needs to sign what — to get from kickoff to published TSMs without sliding the timeline and without the Regulator finding gaps in your summary of approach. It assumes you have already read the cornerstone TSM guide (the regime, the 22 measures, the compliance pitfalls) and the methodology deep-dive (which method to choose for your stock profile).

Plan ~12 weeks end-to-end from kickoff to NROSH+ submission, plus 4–6 weeks of pre-launch if you are selecting a vendor or revising your questionnaire. The week-by-week breakdown below is the pattern that survives first contact with a real tenant population.

The 12-week timeline at a glance

Phase | Weeks | Primary owner | Decision gate at end
--- | --- | --- | ---
Pre-launch | −6 to 0 | Insights / Research lead | Board sign-off on methodology + questionnaire
Fieldwork launch | 0 to 2 | Operations + vendor | First-wave response rate review
Fieldwork main wave | 2 to 6 | Operations + vendor | Quota representativeness check
Fieldwork tail + follow-up | 6 to 8 | Operations + vendor | Cut-off decision
Analysis + weighting | 8 to 10 | Insights / Research lead | Internal sign-off on weighted figures
Reporting + publication | 10 to 12 | Insights + Comms + Board | Board approval + NROSH+ submission

Phase 1 — Pre-launch (weeks −6 to 0)

Week −6: Methodology lock

Confirm the survey mode mix you will use this cycle (informed by the methodology decision framework). Document the rationale in a one-page note that will become the opening of your summary of approach. This is the moment to flag any mode change versus last year — the Regulator and the Board both expect rationale, not surprise.

Week −5: Sample frame extract

Pull tenant data from your housing management system: tenancy ID, household composition, stock type (general-needs / sheltered / supported), build type, tenancy start date, geographic patch, language flag where held, contact preferences (postal address, phone number, email address, opt-out flags). Lock this snapshot — fieldwork must reconcile back to this list. Tenant churn during the programme is the most common cause of week-9 data-quality headaches.
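One way to make the lock auditable is to freeze the extract to a file and record a checksum, so the week-8 reconciliation can prove fieldwork ran against exactly this frame. A minimal sketch, with hypothetical column names (substitute your housing management system's actual export fields):

```python
import csv
import hashlib

# Hypothetical field names -- adapt to your HMS export.
FRAME_COLUMNS = [
    "tenancy_id", "household_composition", "stock_type", "build_type",
    "tenancy_start_date", "patch", "language_flag",
    "postal_address", "phone", "email", "opt_out",
]

def lock_sample_frame(export_path: str, snapshot_path: str) -> str:
    """Freeze the week -5 extract and return a SHA-256 checksum
    that the week-8 reconciliation step can verify against."""
    with open(export_path, newline="") as f:
        rows = [
            {k: row[k] for k in FRAME_COLUMNS}  # keep only the frame fields
            for row in csv.DictReader(f)
            if row["opt_out"] != "Y"            # respect opt-out flags up front
        ]
    with open(snapshot_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FRAME_COLUMNS)
        writer.writeheader()
        writer.writerows(rows)
    with open(snapshot_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```

Storing the checksum in the programme log means any mid-fieldwork change to the frame is detectable, not just discouraged.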

Week −4: Vendor selection or in-house ramp

If outsourcing, this is the latest point to brief vendors and confirm contracts. Five questions every vendor must answer (from the methodology deep-dive): verbatim wording fidelity, mode-mix transparency, representativeness weighting approach, year-on-year defensibility, recording and audit trail. Get these in writing.

Week −3: Questionnaire finalisation

Use the verbatim wording from the RSH Tenant Survey Requirements (March 2023, updated April 2024). The ordering rules matter — TP01 first, TP02 before TP03 — and no warm-up questions. The Stoke-on-Trent and Cannock Chase exclusions in 2024/25 came from materially altered wording. Pilot the questionnaire on 20–50 tenants in the same week to catch wording problems before main fieldwork.

Week −2: Tenant communications + accessibility

Draft pre-notification letters and emails. Translate into the languages relevant to your portfolio (the Regulator’s representativeness expectations explicitly include ethnicity / language as a characteristic to test). Coordinate with your residents’ comms team — surveys that arrive on the same week as a rent-increase letter perform badly.

Week −1: Board sign-off

Board approval (or delegated-committee approval) of: methodology, questionnaire, sample frame, vendor appointment, communications. This is the gate before fieldwork — the Regulator’s text makes Board responsibility for accuracy “ultimate”, so an explicit, minuted pre-fieldwork sign-off is essential.

Phase 2 — Fieldwork launch (weeks 0 to 2)

Week 0: Launch

Send invitations. For mixed-mode designs, push-to-web invitations go out first (with a 14-day window). For postal-only designs, the main mailing leaves this week. Set up daily response-rate dashboards from day one — the biggest operational mistake is monitoring weekly when first-week patterns predict the final outcome.

Weeks 1–2: First-wave reading

By the end of week 2 you should know whether you’re on track. Compare achieved response counts to your forecast for each quota cell (age band, stock type, geographic patch). If a cell is materially behind, flag it now — do not wait until week 6 when remedial fieldwork compresses your analysis window. Vendors should be producing daily or weekly response reports automatically.
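The week-2 reading reduces to comparing achieved counts against forecast per quota cell. A sketch, using an illustrative 80% of pro-rata forecast as the "materially behind" threshold (the threshold is a judgment call for your programme, not a regulatory figure):

```python
def flag_lagging_cells(achieved: dict, forecast: dict, threshold: float = 0.8) -> list:
    """Return quota cells whose achieved response count is materially
    behind forecast at this point in fieldwork."""
    return sorted(
        cell for cell, target in forecast.items()
        if achieved.get(cell, 0) < threshold * target
    )

# Illustrative cells keyed by (age band, stock type).
forecast = {("18-34", "general"): 120, ("65+", "sheltered"): 60}
achieved = {("18-34", "general"): 110, ("65+", "sheltered"): 31}
flag_lagging_cells(achieved, forecast)  # -> [("65+", "sheltered")]
```

Running this daily against the vendor's response file is what turns a week-6 surprise into a week-2 flag.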

Phase 3 — Fieldwork main wave (weeks 2 to 6)

Weeks 3–4: Reminders + postal sweep

For push-to-web designs, this is when the postal sweep goes out to non-responders. For postal-only, this is when reminder 1 is mailed (reminder 2 follows in week 5). Telephone backstop calls begin for under-represented quotas identified in the week-2 reading.

Weeks 5–6: Quota fill

Active management of under-represented cells. AI-phone or human telephone is best deployed here — targeted at specific demographic gaps rather than blanket coverage. Hold a weekly cell-by-cell review with the vendor, and close cells that have hit target so reminder spend does not overshoot.

Phase 4 — Fieldwork tail (weeks 6 to 8)

Week 7: Final-week chase

Final reminders, final phone calls, final email nudges. The marginal response in week 7 typically comes from the hardest-to-reach cells — sheltered, supported, ESL — so this is where your translated and accessible materials earn their keep.

Week 8: Cut-off

Hard cut-off date. Document the date in the summary of approach. Any response received after this date is excluded from the headline TSM but can be retained in the operational dataset. Communicate the cut-off to the vendor explicitly — accidental post-cut-off responses creep in and create reconciliation issues later.

Phase 5 — Analysis and weighting (weeks 8 to 10)

Week 8: Data cleaning

Reconcile responses back to the locked sample frame. Apply the household cap (one response per household). Resolve partial responses (count for the questions answered, exclude from the questions skipped). Flag any responses where TP02/TP03 or TP09/TP10 filter inconsistencies are apparent (e.g., respondent answered the repair-satisfaction question after saying they had no repair).
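The cleaning rules above can be sketched as one pass over the response file. Field names here are hypothetical, and the filter check shown is the TP02/TP03-style repair case; a real pipeline would apply the same pattern to the TP09/TP10 complaints filter:

```python
def clean_responses(responses: list, frame_ids: set) -> tuple:
    """Week-8 cleaning sketch: reconcile to the locked frame, apply the
    one-response-per-household cap, and flag filter inconsistencies
    (a repair-satisfaction answer from a respondent who reported no repair)."""
    seen_households = set()
    kept, flagged = [], []
    for r in responses:
        if r["tenancy_id"] not in frame_ids:
            continue  # not on the locked sample frame: exclude
        if r["household_id"] in seen_households:
            continue  # household cap: one response per household
        seen_households.add(r["household_id"])
        if r.get("had_repair") == "No" and r.get("repair_satisfaction") is not None:
            flagged.append(r["tenancy_id"])  # filter inconsistency: review before analysis
        kept.append(r)
    return kept, flagged
```

Partial responses are kept here and handled question-by-question at analysis time, matching the count-where-answered rule.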

Week 9: Weighting

Calculate weights against the six representativeness characteristics named in the Technical Requirements: stock type, age, ethnicity, building type, property / household size, geographic area. Apply weights consistently across all TSM perception measures (the Regulator’s text is explicit on this). Document the weighting approach in the summary of approach — do not just describe it as “weighted”.
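As a minimal illustration of what the weights do, here is post-stratification over a single characteristic: each respondent's weight is the frame's share of their cell divided by the sample's share. A real programme rakes (iterative proportional fitting) across all six characteristics at once, and a vendor will usually do this for you — this sketch is only the underlying arithmetic:

```python
from collections import Counter

def cell_weights(sample_cells: list, population_cells: list) -> dict:
    """Weight each respondent so the weighted sample matches the locked
    frame's distribution over one characteristic (e.g. stock type):
    weight = population share of cell / sample share of cell."""
    n, big_n = len(sample_cells), len(population_cells)
    samp, pop = Counter(sample_cells), Counter(population_cells)
    return {cell: (pop[cell] / big_n) / (count / n) for cell, count in samp.items()}

# Illustration: sheltered tenants under-responded relative to the frame.
weights = cell_weights(
    ["general"] * 80 + ["sheltered"] * 20,      # achieved sample
    ["general"] * 7000 + ["sheltered"] * 3000,  # locked frame
)
# weights["sheltered"] == 1.5, weights["general"] == 0.875
```

Note the weights sum back to the achieved sample size, which is a quick sanity check to run before applying them across every TSM measure.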

Week 10: Internal sign-off

Insights / Research lead signs off the weighted TSM figures. Cross-check that LCRA and LCHO have been calculated separately where both are owned in material volume. Flag any management-information measures (BS01–BS05, RP01, RP02, CH01, CH02, NM01) that are at risk of late delivery from finance / repairs / complaints — these come from operational records and often need chasing in parallel with the survey.

Phase 6 — Reporting and publication (weeks 10 to 12)

Week 10: Summary of approach drafting

Write the summary of approach in parallel with the headline report. Every required element must be present: achieved sample size, survey timing, collection method(s), sampling method, representativeness assessment, weighting applied, named external contractors, households excluded with reasons, reasons for not hitting minimum samples, any incentives used, and methodological issues with material impact.
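That element list lends itself to a mechanical completeness check before the draft goes to the Board. A sketch with illustrative key names (the keys are this sketch's invention, not regulatory labels):

```python
# Illustrative keys for the required summary-of-approach elements.
REQUIRED_ELEMENTS = {
    "achieved_sample_size", "survey_timing", "collection_methods",
    "sampling_method", "representativeness_assessment", "weighting_applied",
    "external_contractors", "excluded_households_and_reasons",
    "reasons_for_missed_minimums", "incentives_used",
    "material_methodological_issues",
}

def missing_elements(draft: dict) -> set:
    """Return required elements that are absent or empty in a draft
    summary of approach."""
    return {k for k in REQUIRED_ELEMENTS if not draft.get(k)}
```

Running this against the draft each time it is edited stops the gaps that the Regulator's desktop review would otherwise find first.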

Week 11: Board approval

Board-level approval of: the headline TSM results, the summary of approach, the publication artefacts (website page, downloadable PDF, tenant communications). The Regulator treats Board approval as the accountability checkpoint, so document the approval minute and date.

Week 12: Publication + NROSH+ submission

Publish on your own website first. Include the questionnaire that was actually used (this is a requirement, not a nice-to-have). Submit to the Regulator via NROSH+ within the annual window — the exact date is set in the CEO data-requirements letter each year. Communicate the published results to tenants via the channel that worked best in the survey wave (often a tenant newsletter or post-survey email).

Anti-patterns we see most often

  • Sample frame drift. Tenant data refreshes during the campaign without quota calculations being re-run against the new frame. Lock the frame at week −5 and reconcile in week 8; do not let it move during fieldwork.
  • Late representativeness reading. Discovering at week 6 that a quota cell is critically under-represented. The remedy — emergency phone follow-up — works but compresses analysis. Build the check into week 2.
  • Summary of approach written last. Drafting it in the final week before publication leads to gaps and Board push-back. Start drafting in week 10 in parallel with weighting.
  • Management-information measures out of sync. The 10 MI measures come from operational systems, not the survey. Run a parallel workstream so they’re ready when the perception data is.
  • Tenant communications collision. Survey arrives the same week as a rent increase or a service-charge letter. Response rates drop materially. Coordinate with the comms team on a rolling 8-week view of outbound tenant communications.

“You should be hitting 100% compliance on all of your safety indicators. If you’re not… we will come and ask.” — Will Perry, Director, RSH

The quote above is about the BS01–BS05 building-safety measures, but it applies to the operational discipline of the whole programme. The Regulator’s questioning starts where the published data has gaps — fix the process, not just the number.

About SignalLine

We run the whole 12-week TSM programme on your behalf.

SignalLine AI runs the fieldwork end-to-end — verbatim TP01–TP12 wording, multilingual auto-detect, daily response-rate dashboards, weekly cell-by-cell quota review, weighting against the six representativeness characteristics, and a complete summary of approach alongside your headline report when the campaign closes. We hand back the deliverables; you take them to the Board and submit through NROSH+. No dashboard for your team to operate, no recruitment of interviewers, no postal logistics.

Call Thalia · +44 20 4511 4077