The Client
Dr. Tunde Adeyemi is a machine learning researcher who returned to Nigeria in 2025 after ten years in the United States, where he completed a PhD and spent four years at a research institution in California. He relocated to Lagos with his family and a clear intention: to do the same quality of research in Nigeria that he was doing abroad, and to mentor a generation of Nigerian ML practitioners who were producing serious work but doing so on inadequate infrastructure.
When he contacted Sephora Systems, he had a specification in mind. He sent it via email before we'd even confirmed the consultation. It was, by any measure, extreme. He wasn't asking us to match it — he was asking us to build it, in Lagos, with Nigerian power infrastructure, and make it reliable.
The Challenge
Dr. Adeyemi's research involves training large neural network models — transformer architectures for NLP tasks specific to Nigerian language data. This is GPU compute work of the most demanding kind. The models he trains require GPUs with large VRAM (to hold the model weights, gradients, and optimizer state simultaneously), high memory bandwidth (to move data between compute units quickly), and a fast multi-GPU interconnect (so multiple GPUs can jointly train a model whose state exceeds one GPU's VRAM).
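Why training needs so much more VRAM than inference can be sketched with a standard rule of thumb: mixed-precision training with the Adam optimizer holds roughly 16 bytes of state per parameter (fp16 weights and gradients, plus fp32 master weights and two fp32 moment buffers), before activations are even counted. The figures below are illustrative, not measurements from Dr. Adeyemi's models.

```python
# Back-of-envelope VRAM estimate for mixed-precision training with Adam.
# Rule of thumb: ~16 bytes per parameter, counting fp16 weights (2),
# fp16 gradients (2), fp32 master weights (4), and two fp32 Adam
# moment buffers (4 + 4). Activations add more, depending on batch
# size and sequence length.
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # = 16

def training_vram_gb(num_params: float) -> float:
    """Approximate VRAM (GB) needed for model state alone."""
    return num_params * BYTES_PER_PARAM / 1e9

for params in (125e6, 1.3e9, 2.7e9):
    print(f"{params / 1e9:>4.2f}B params -> ~{training_vram_gb(params):.1f} GB of model state")
```

By this arithmetic a ~1.3B-parameter model already needs about 21GB for optimizer and weight state alone, which is why 24GB per card is a hard constraint and 48GB of combined VRAM meaningfully expands what can be trained.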
His American lab had access to A100 80GB clusters over the cloud. He was not going to recreate that in a home office in Lekki. But he wanted to get as close as possible with consumer and prosumer hardware available in Nigeria.
The additional challenge: Lagos power. His research runs are sometimes 6–72 hours of sustained GPU compute. A power interruption that crashes a 48-hour training run wastes 48 hours of compute time, potentially corrupts checkpoint files, and breaks his research cadence. The power protection had to be serious.
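The checkpoint-corruption risk is worth a note: a power cut mid-write can leave a half-written checkpoint file that is worse than no checkpoint at all. The standard mitigation is an atomic save, sketched below in framework-agnostic form (this is an illustration of the general technique, not Dr. Adeyemi's actual training code; with PyTorch you would swap `pickle.dump` for `torch.save`).

```python
import os
import pickle
import tempfile

def save_checkpoint_atomic(state: dict, path: str) -> None:
    """Write a checkpoint so a power cut mid-write can't corrupt it.

    The bytes go to a temporary file in the same directory first;
    os.replace() then swaps it in atomically, so `path` always holds
    either the old complete checkpoint or the new complete one --
    never a partial write.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # force the data to disk before renaming
        os.replace(tmp_path, path)  # atomic on POSIX filesystems
    except BaseException:
        os.remove(tmp_path)
        raise
```

Paired with a UPS, this means a worst-case power event costs only the compute since the last checkpoint interval, not the whole run.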
The Consultation
We spent three hours with Dr. Adeyemi — two of them technical, one of them practical. The technical conversation covered GPU selection (RTX 4090 vs RTX 6000 Ada vs H100, with the tradeoffs discussed honestly), multi-GPU scaling on consumer RTX 4090s (which, unlike the RTX 3090, have no NVLink connector, so inter-GPU traffic runs over PCIe), memory bandwidth limitations, and the thermal envelope of running two RTX 4090s in sustained compute mode in a Lagos home office.
The practical conversation covered his power situation (Lekki Phase 1 — relatively stable but not perfectly reliable), his generator setup (a 7.5kVA generator he already owned), the room the machine would live in, and how we'd integrate the UPS into his existing power chain so the machine could ride through transitions between NEPA and generator without interruption.
We also discussed cooling honestly. Two RTX 4090s at their stock 450W power limits dissipate roughly 900W of heat under sustained training load, before counting the CPU and the rest of the system. In a Lagos home office, without a dedicated cooling system, that heat builds quickly. We recommended — and he agreed to — a portable precision air conditioner for the machine room, controlled by a smart thermostat that activates when room temperature exceeds 26°C.
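Sizing that air conditioner is simple arithmetic: essentially all the electrical power a computer draws leaves it as heat, and 1 watt equals 3.412 BTU/hr. The load figures below are assumed sustained-draw scenarios for illustration, not measurements from this build.

```python
# Converting a computer's sustained electrical draw into the heat load
# an air conditioner must remove: 1 W of draw = 3.412 BTU/hr of heat.
BTU_PER_WATT = 3.412

def heat_btu_per_hr(watts: float) -> float:
    """Heat output (BTU/hr) for a given sustained electrical draw."""
    return watts * BTU_PER_WATT

for load_w in (600, 900, 1200):  # assumed sustained system loads
    print(f"{load_w} W sustained -> {heat_btu_per_hr(load_w):,.0f} BTU/hr")
```

Even a modest portable unit (5,000 BTU/hr class) covers the workstation's own heat; the real sizing question in Lagos is the additional ambient and solar heat gain of the room itself.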
The Build
AI Research Workstation — ₦18.4 million:
- CPU: AMD Threadripper PRO 7965WX — 24 cores, 128 PCIe 5.0 lanes; necessary to provide full-bandwidth PCIe x16 to both GPUs simultaneously
- RAM: 256GB DDR5 ECC RDIMM — ECC for data integrity in long research runs; 256GB to hold large datasets in system memory
- GPU: 2× NVIDIA RTX 4090 24GB — 48GB of combined VRAM; with no NVLink on Ada consumer cards, inter-GPU communication runs over full-bandwidth PCIe x16 links, sufficient for distributing training state across both GPUs at this scale
- Storage (active): 4TB Samsung 990 Pro NVMe — training data and model checkpoints
- Storage (archive): 2× 16TB Seagate Exos enterprise HDDs in RAID-1 — research datasets backed up nightly
- Motherboard: ASUS Pro WS WRX90E-SAGE SE — workstation-class, supports Threadripper PRO and dual GPUs at full PCIe x16 bandwidth
- PSU: 2× Seasonic Prime TX-1600 in dual-PSU configuration — 3,200W total capacity with failover
- Case: Fractal Design Define 7 XL — maximum airflow, acoustic damping, 420mm radiator support
- Cooling: Custom hardline water cooling loop — both GPUs and CPU on a single 420mm + 360mm radiator stack
- UPS: 10kVA online double-conversion UPS — seamless transition between NEPA, generator, and UPS battery; no interruption at any transition point
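The PSU and UPS choices above follow from a power-budget sanity check. The per-component draws below are assumed nameplate figures for illustration (not measurements from this build), and the 0.9 power factor for converting the UPS's kVA rating to watts is likewise an assumption.

```python
# Rough power-budget sanity check for the build.
# All per-component draws are assumed nameplate figures, for illustration.
assumed_draw_w = {
    "2x RTX 4090 (450 W each)": 900,
    "Threadripper PRO 7965WX": 350,
    "RAM, storage, fans, pump": 200,
}
peak_w = sum(assumed_draw_w.values())

psu_capacity_w = 3200            # dual Seasonic Prime TX-1600
ups_capacity_w = 10_000 * 0.9    # 10 kVA at an assumed 0.9 power factor

print(f"Estimated peak draw: {peak_w} W")
print(f"PSU headroom:        {psu_capacity_w - peak_w} W")
print(f"UPS loading:         {peak_w / ups_capacity_w:.0%} of capacity")
```

Running the UPS at a small fraction of its rated capacity is deliberate: lighter loading means longer battery ride-through during NEPA-to-generator transitions, plus headroom for future GPU upgrades.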
We spent two days on-site: one day building and cabling, one day testing under load, configuring CUDA, PyTorch, and his development environment, and verifying the power chain worked correctly through simulated NEPA cuts and generator transitions.
The Result
Dr. Adeyemi's first benchmark: training a medium-sized transformer model on a Nigerian language dataset that previously took 14 hours on a single cloud A100 node (at significant cloud compute cost per run) completed in 11 hours on the dual-RTX-4090 local setup. Multi-GPU distribution was working as expected, with the model's training state sharded across the two cards' combined 48GB of VRAM.
More meaningfully: he has run 23 training jobs since installation, some lasting up to 61 hours. Not one has been interrupted by a power event. The online UPS has activated four times during NEPA cuts; each time, the machine continued without interruption.
He now hosts weekly ML study sessions for Nigerian engineering students in his home — the machine serves as a demonstration and teaching resource. "It's important," he told us, "that young Nigerians see this hardware exists here. They don't have to go abroad to do serious research."
Key Takeaway
World-class AI research infrastructure is buildable in Nigeria. The components exist. The expertise to configure them exists. What was missing was a vendor who would approach it seriously — designing for Nigerian power realities, thinking about thermal management in tropical conditions, and building with the reliability that sustained research computing demands. The ceiling on what's achievable in Nigeria is being raised by practitioners who refuse to accept that geography equals limitation.
Are you a data scientist, ML researcher, or AI practitioner in Nigeria? Explore the AI Series or talk to our team about a research-grade workstation built for your workload.