The Client
PayGrid Technologies is a payments infrastructure startup based in Lagos, building a transaction routing and settlement layer for Nigerian merchant acquiring. The company is three years old, 14 people, and at a scale where the founders' original decision to run everything on cloud infrastructure was starting to feel expensive relative to their transaction volumes. Their CTO, Chidi Eze, had done the math: at their current growth rate, the monthly cloud bill would exceed ₦8 million within 18 months. An on-premise server for their database and application workloads would reduce that materially, even accounting for hardware, colo, and power costs.
There was a complication. PayGrid's office in Lagos Island does not have a dedicated server room. The machine would live in a utility room adjacent to the main office — a room that houses the electrical panel, some storage shelving, and an air conditioning unit that the building manager runs on a schedule, not continuously. Ambient temperature in that room ranges from 24°C at best to 38°C on bad days during dry season. That is not a server environment by conventional standards.
The Challenge
Standard server hardware is designed to operate with inlet air temperatures between 10°C and 35°C (ASHRAE A2 class). At 38°C ambient, you are beyond the specification envelope for most servers. At those temperatures, thermal throttling is inevitable, fan speeds become aggressive, and hardware failure rates increase significantly — particularly for drives and capacitors.
The challenge was engineering a server that could survive — not just survive, but operate reliably under sustained load — in a hot room with unreliable air conditioning, Lagos Island's power supply, and the financial uptime requirements of a payments business. In fintech, server downtime is not a technical incident. It is a regulatory risk and a client trust event.
Chidi was clear about requirements: the server needed to handle their PostgreSQL database (currently 4TB, growing 30GB per month), their Node.js application tier (eight microservices), and their Redis cache layer. It needed to be able to cold-restart automatically after a power event without human intervention. And it needed to run, reliably, in that utility room.
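Automatic cold-restart after a power event is mostly configuration rather than code. A hedged sketch of the two pieces involved, assuming an IPMI-capable board and a Docker Compose deployment with `restart: unless-stopped` policies (the exact BMC behaviour varies by vendor, so treat these as illustrative configuration commands):

```shell
# Tell the BMC to power the chassis back on whenever AC returns,
# so a full outage recovers with no human present.
ipmitool chassis policy always-on

# Ensure the container runtime starts at boot; containers with a
# restart policy of "unless-stopped" then come back with it.
systemctl enable docker
```

With both in place, the recovery chain is: mains returns → BMC powers the chassis on → OS boots → Docker starts → the Compose stack restarts its services.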
The Consultation
We made three key engineering decisions early in the consultation that shaped the entire build:
- Active cooling redundancy: The server would have more cooling than it needed under normal conditions, so that degraded cooling (high ambient temp or one failed fan) would not cause thermal problems
- Component selection for thermal tolerance: Enterprise-class components with higher thermal ratings, particularly for the drives and power supplies — not consumer hardware optimistically rated for 25°C ambient
- Hot-swappable fans and drives: In an environment where a fan might fail more frequently due to heat stress, fan replacement needed to be possible without downtime
We also proposed a supplementary thermal management measure: a portable precision cooler (similar to a large dehumidifier with a cooling coil) installed in the utility room on a smart thermostat, set to activate automatically when the room temperature exceeded 30°C. This was Chidi's decision to implement — it was not hardware we would build, but infrastructure the startup would need to support the machine's operating environment.
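The thermostat's behaviour is simple hysteresis control; a minimal Python sketch of the logic (the 30°C activation threshold is from this deployment, while the 28°C release point is an assumed hysteresis band to stop the cooler short-cycling):

```python
def cooler_should_run(temp_c: float, currently_on: bool,
                      on_above: float = 30.0, off_below: float = 28.0) -> bool:
    """Hysteresis control: switch on above `on_above`, off below `off_below`.

    The 30°C activation threshold is from the deployment; the 28°C
    release point is an assumed band to avoid rapid on/off cycling.
    """
    if temp_c > on_above:
        return True
    if temp_c < off_below:
        return False
    return currently_on  # inside the band: hold the current state

# Walk a sample temperature trace through the controller
state = False
trace = []
for t in [27.0, 29.5, 30.5, 29.0, 28.5, 27.5]:
    state = cooler_should_run(t, state)
    trace.append(state)
print(trace)  # [False, False, True, True, True, False]
```

The band matters: a naive `temp > 30` check would flick the cooler on and off every few seconds as the room hovers near the threshold.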
The Build
PayGrid On-Premise Server — ₦9.4 million:
- Platform: 2U rackmount chassis (Supermicro SuperServer 2029U-TR4) — enterprise-class, designed for data centre environments with hot-swap drive bays and redundant fan modules
- CPU: 2× Intel Xeon Silver 4316 (20 cores each, 40 cores total) — server-class CPUs rated for continuous operation at elevated temperatures; ECC support
- RAM: 256GB DDR4 ECC RDIMM — error-correcting memory for database integrity; without ECC, memory bit-flips become silent data corruption events in the database
- Storage (database): 4× 8TB Seagate Exos enterprise HDDs in RAID-10 — enterprise-class drives rated for 45°C operating temperature; RAID-10 for performance and redundancy
- Storage (OS/cache): 2× 2TB Samsung PM9A3 enterprise NVMe in RAID-1 — OS and Redis persistence
- Cooling: 8× hot-swap 80mm fans with N+1 redundancy — one fan can fail completely without thermal impact
- PSU: 2× 1200W redundant PSUs (hot-swap) — one fails, the other carries the load seamlessly
- Network: 4× 10GbE ports — two for production traffic, two for management and backup
- UPS: 20kVA online double-conversion UPS with 30-minute runtime at full load — seamless power source transitions
- IPMI/iDRAC: Remote management configured — Chidi's team can restart, monitor sensors, and access the console without being physically present
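As a sanity check on the storage sizing above: RAID-10 halves raw capacity, and the growth rate quoted earlier gives a long runway. A back-of-envelope sketch using only the figures in this case study (filesystem overhead and headroom for indexes are ignored for simplicity):

```python
RAW_DRIVES = 4
DRIVE_TB = 8
DB_TB = 4.0              # current database size (from the requirements)
GROWTH_TB_MONTH = 0.03   # 30 GB/month growth, in decimal TB

# RAID-10 mirrors every stripe, so usable space is half the raw total
usable_tb = RAW_DRIVES * DRIVE_TB / 2
months_of_runway = (usable_tb - DB_TB) / GROWTH_TB_MONTH

print(usable_tb)                 # 16.0
print(round(months_of_runway))   # 400
```

Even allowing generous margins, the database array will not be the first thing this build outgrows.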
We installed Ubuntu Server 24.04 LTS, configured PostgreSQL 16 with optimised settings for the hardware, deployed their application stack via Docker Compose, and ran the full application test suite before handover. We documented the thermal monitoring thresholds and the automated alerting configured to notify Chidi's phone if any sensor exceeded safe limits.
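The alerting logic amounts to comparing each sensor reading against a per-component ceiling. A minimal sketch — the 45°C drive limit is the enterprise-drive rating quoted in the build list, while the CPU and inlet limits and the sensor naming scheme are illustrative assumptions, not the deployed configuration:

```python
# Assumed thresholds: the drive limit is the 45°C rating quoted for
# the Exos drives; CPU and inlet limits are illustrative placeholders.
THRESHOLDS_C = {"cpu": 85.0, "drive": 45.0, "inlet": 40.0}

def check_sensors(readings: dict[str, float]) -> list[str]:
    """Return alert messages for any sensor over its ceiling.

    Sensor names are expected as '<kind>:<id>', e.g. 'drive:sda'.
    """
    alerts = []
    for name, temp in readings.items():
        kind = name.split(":")[0]
        limit = THRESHOLDS_C.get(kind)
        if limit is not None and temp > limit:
            alerts.append(f"{name} at {temp:.1f}C exceeds {limit:.0f}C limit")
    return alerts

readings = {"cpu:0": 52.0, "cpu:1": 54.0, "drive:sda": 46.5, "inlet": 31.0}
print(check_sensors(readings))  # ['drive:sda at 46.5C exceeds 45C limit']
```

In production this sort of check would be fed by IPMI sensor polls on a cron schedule, with the alert path wired to a phone notification as described above.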
The Result
PayGrid's server has been running for five months. CPU temperatures under typical load: 48–54°C. Drive temperatures (the most heat-sensitive components): 36–39°C — within specification for the enterprise drives, where consumer drives would be throttling or failing. The hottest day recorded in the utility room was 41°C; the server continued operating without thermal incidents, fans running at approximately 75% speed rather than 100%, maintaining adequate thermal headroom.
Cloud infrastructure cost has reduced from ₦4.2 million per month (at time of server deployment) to ₦1.1 million per month — primarily for off-site backups, overflow compute, and geographic redundancy for their API endpoints. The server's amortised cost over its expected five-year life is well within the savings.
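The economics are easy to verify from the figures above. A quick sketch — the five-year amortisation period and the monthly figures are from this case study, while colo and power operating costs are ignored for simplicity:

```python
import math

CLOUD_BEFORE_M = 4.2   # ₦ millions/month at time of deployment
CLOUD_AFTER_M = 1.1    # ₦ millions/month for backups + overflow
SERVER_COST_M = 9.4    # ₦ millions, one-off build cost
LIFE_MONTHS = 5 * 12   # five-year expected life

monthly_savings = CLOUD_BEFORE_M - CLOUD_AFTER_M     # ₦3.1M/month
amortised_monthly = SERVER_COST_M / LIFE_MONTHS      # ~₦0.16M/month
payback_months = math.ceil(SERVER_COST_M / monthly_savings)

print(round(monthly_savings, 1))    # 3.1
print(round(amortised_monthly, 2))  # 0.16
print(payback_months)               # 4
```

On these numbers the hardware pays for itself in roughly four months of cloud savings, an order of magnitude faster than its amortisation schedule.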
One hot-swap fan was replaced at month three — it was showing elevated RPM variance, suggesting early bearing wear. The replacement took four minutes with no service interruption. That, Chidi told us, was exactly the outcome the redundant design was built for.
Key Takeaway
On-premise servers in Nigeria's environmental conditions are viable — but they require engineering for the actual environment, not the ideal one. Specifying enterprise-grade hardware with proper thermal tolerances, building in redundancy for the components most likely to fail in heat (fans, drives), and providing remote management so interventions don't require physical presence are all non-negotiable in this context. A server built for a 22°C air-conditioned data centre will fail in a Lagos utility room. A server built for the utility room will run for years.
Does your startup or business need on-premise server infrastructure in Nigeria? Talk to our team about a consultation. We design for real-world Nigerian conditions, not idealised data centre specs.