Dr. Adaeze Nwosu's research group at a Nigerian federal university was spending more on AWS compute per month than their department's entire equipment budget. They were training NLP models on cloud GPUs because nothing local could handle the workload. It was not sustainable.
The requirement was specific: a single workstation capable of training mid-size transformer models (up to 7B parameters with quantisation), running overnight training jobs without cloud dependency, and supporting three concurrent researchers.
The Build
We worked with the research team to define their actual training parameters rather than just defaulting to "most powerful GPU available":
- AMD Ryzen 9 9950X (for data preprocessing, tokenisation, and parallel CPU tasks)
- 128GB DDR5 ECC-compatible RAM (large dataset loading without paging to disk)
- NVIDIA RTX 4090 (24GB VRAM is the meaningful threshold for this workload: it allows fine-tuning a quantised 7B model entirely on-card, without offloading to system RAM)
- 2TB NVMe PCIe 5.0 (fast enough to feed training data without becoming the bottleneck)
- 4TB NVMe PCIe 4.0 (model checkpoints and datasets)
- 1200W 80+ Platinum PSU (the 4090 alone can pull 450W sustained under training load)
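The 24GB threshold can be sanity-checked with a back-of-envelope estimate. Everything in the sketch below is an illustrative assumption rather than a measurement from this build (the 4-bit quantisation width, the 1% trainable-adapter fraction, and the flat activation budget), but it shows why a full fp16 fine-tune with Adam overflows 24GB while a quantised run fits comfortably:

```python
# Back-of-envelope VRAM check for fine-tuning a 7B model on a 24GB card.
# All figures below are illustrative assumptions, not measurements.

def qlora_vram_gb(params_b: float = 7.0,
                  quant_bits: int = 4,
                  trainable_frac: float = 0.01,
                  activation_gb: float = 8.0) -> float:
    """Quantised base weights + small trainable adapter (fp16 weights,
    fp16 grads, two fp32 Adam moments) + a flat activation budget."""
    params = params_b * 1e9
    base_gb = params * quant_bits / 8 / 1e9        # frozen, quantised weights
    adapter_params = params * trainable_frac        # trainable parameters
    adapter_gb = adapter_params * (2 + 2 + 8) / 1e9 # weights + grads + Adam
    return base_gb + adapter_gb + activation_gb

# Full fp16 fine-tune with Adam: 2B weights + 2B grads + 8B optimiser
# state per parameter = 12 bytes/param, i.e. 84 GB for a 7B model.
full_fp16_gb = 7e9 * (2 + 2 + 8) / 1e9

print(round(qlora_vram_gb(), 1))  # quantised run: ~12.3 GB, fits in 24GB
print(round(full_fp16_gb, 1))     # full fp16 run: 84.0 GB, needs offloading
```

Under these assumptions the quantised run leaves roughly half the card free for larger batches or longer sequences, which is what makes 24GB the threshold rather than a tight fit.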
The Impact
A training run that previously took 18 hours on an AWS A10G (at significant cost per hour) now completes in 11 hours locally. The initial hardware investment was recovered in under four months of avoided cloud spend.
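The payback arithmetic generalises to a simple formula. The team's actual hardware cost and AWS hourly rate were not published, so every number in the usage example below is a hypothetical placeholder chosen only to illustrate how such a calculation lands in the "under four months" range:

```python
# Payback-period sketch: months of avoided cloud spend needed to
# recover a one-time hardware cost. All inputs here are hypothetical.

def payback_months(hardware_cost: float,
                   cloud_rate_per_hr: float,
                   cloud_hours_per_run: float,
                   runs_per_month: float) -> float:
    """hardware_cost divided by the monthly cloud spend it replaces."""
    monthly_saving = cloud_rate_per_hr * cloud_hours_per_run * runs_per_month
    return hardware_cost / monthly_saving

# Illustrative only: a $4,500 workstation vs. 18-hour runs at an assumed
# $1.50/hr, 45 runs per month shared across three researchers.
print(round(payback_months(4500, 1.5, 18, 45), 1))  # ~3.7 months
```

The useful property of framing it this way is that the break-even point scales linearly with run frequency: once researchers iterate freely instead of rationing experiments, the payback only gets faster.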
More importantly: researchers can now iterate freely. They are not managing cloud budgets per experiment. The machine runs overnight. They check results in the morning.