
GPU Plan Comparison Guide

Compare all available GPU plans (A100, H100, RTX A6000, RTX 4090) and choose the right one for your workload.

LightYear Team
Updated April 24, 2026

Available GPU Plans

| Plan | GPU | VRAM | vCPUs | RAM | Storage | Price/hr |
|------|-----|------|-------|-----|---------|----------|
| GPU-A100-40 | NVIDIA A100 40GB | 40 GB | 8 | 64 GB | 500 GB NVMe | $1.89 |
| GPU-A100-80 | NVIDIA A100 80GB | 80 GB | 16 | 128 GB | 1 TB NVMe | $2.49 |
| GPU-H100-80 | NVIDIA H100 80GB | 80 GB | 24 | 192 GB | 2 TB NVMe | $3.99 |
| GPU-RTX4090 | NVIDIA RTX 4090 | 24 GB | 8 | 32 GB | 250 GB NVMe | $0.99 |
| GPU-A6000 | NVIDIA RTX A6000 | 48 GB | 12 | 96 GB | 500 GB NVMe | $1.49 |
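
As a quick, hedged illustration of how to read the table, the Python sketch below encodes each plan's VRAM and hourly price and picks the cheapest plan that meets a VRAM requirement. The data is copied from the table above; the helper itself is illustrative and not part of any LightYear SDK or API.

```python
# Illustrative helper: pick the cheapest plan from the table above that
# meets a VRAM requirement. Plan data is copied from this guide; the
# function is a sketch, not a LightYear API.
PLANS = [
    # (plan, vram_gb, price_per_hr)
    ("GPU-RTX4090", 24, 0.99),
    ("GPU-A6000", 48, 1.49),
    ("GPU-A100-40", 40, 1.89),
    ("GPU-A100-80", 80, 2.49),
    ("GPU-H100-80", 80, 3.99),
]

def cheapest_plan(min_vram_gb: int) -> str:
    candidates = [p for p in PLANS if p[1] >= min_vram_gb]
    if not candidates:
        raise ValueError(f"no single-GPU plan offers {min_vram_gb} GB of VRAM")
    return min(candidates, key=lambda p: p[2])[0]

print(cheapest_plan(30))  # -> GPU-A6000 ($1.49/hr)
```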

Choosing the Right GPU

Training Large Language Models (LLMs)

Use H100 80GB for maximum throughput. The H100 delivers roughly 3× the FP16 training throughput of the A100, adds native FP8 support through its Transformer Engine (the A100 has no FP8 path), and supports NVLink for multi-GPU configurations.
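
To make the FP8 point concrete, here is a minimal sketch of an FP8 forward/backward pass using NVIDIA's Transformer Engine, which only runs on Hopper-class GPUs such as the H100. The layer sizes are arbitrary, and the snippet assumes transformer_engine and a CUDA-enabled PyTorch are installed on the instance.

```python
# Minimal FP8 sketch using NVIDIA Transformer Engine (Hopper/H100 only).
# Assumes transformer_engine and CUDA PyTorch are installed; sizes are arbitrary.
import torch
import transformer_engine.pytorch as te

model = te.Linear(4096, 4096, bias=True).cuda()
inp = torch.randn(32, 4096, device="cuda")

with te.fp8_autocast(enabled=True):  # GEMMs run in FP8 on the H100
    out = model(inp)

out.float().sum().backward()  # backward pass also uses FP8 GEMMs
```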

Fine-Tuning (LoRA / QLoRA)

A100 80GB and A6000 48GB are cost-effective for fine-tuning models up to 70B parameters with 4-bit quantisation: a 70B model's weights occupy roughly 35 GB at 4 bits, leaving headroom for LoRA adapters and activations.
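
For reference, a typical QLoRA setup with the Hugging Face stack (transformers, peft, and bitsandbytes, all assumed installed) looks roughly like the sketch below. The checkpoint name is illustrative only; any causal LM that fits in VRAM after 4-bit quantisation works the same way.

```python
# Hedged QLoRA sketch: 4-bit base model + LoRA adapters via transformers/peft.
# The checkpoint name is illustrative, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 quantisation
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # illustrative checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the LoRA adapters train
```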

Inference / Serving

RTX 4090 offers the best price per token for models up to 13B parameters; at the 13B end this requires 8-bit or 4-bit quantisation, since FP16 weights alone would exceed the card's 24 GB of VRAM. For larger models, use A100 40GB.
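
As one possible serving setup (not the only one), the sketch below loads a 7B model with vLLM on a single GPU. Both vLLM and the model checkpoint named here are assumptions for illustration, not part of the LightYear platform.

```python
# Hedged serving sketch with vLLM on a single RTX 4090.
# vLLM is assumed installed; the model name is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative 7B checkpoint
    gpu_memory_utilization=0.90,                 # leave headroom for the KV cache
)
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain NVMe storage in one sentence."], params)
print(outputs[0].outputs[0].text)
```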

Computer Vision / Image Generation

RTX 4090 is ideal for Stable Diffusion and similar workloads due to its high memory bandwidth relative to cost.
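
A minimal image-generation sketch with Hugging Face diffusers (assumed installed; the checkpoint name is illustrative) would look like this on a 4090:

```python
# Hedged Stable Diffusion sketch using diffusers in FP16 on one GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,         # FP16 halves VRAM use on the 4090
).to("cuda")

image = pipe("a lighthouse at dusk, photorealistic").images[0]
image.save("lighthouse.png")
```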

Multi-GPU Configurations

Contact support to provision bare-metal nodes with 4× or 8× H100 GPUs for large-scale distributed training.
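
Once a multi-GPU node is provisioned, a standard way to use all GPUs is PyTorch DistributedDataParallel launched with torchrun. The sketch below is a generic DDP skeleton under that assumption, not LightYear-specific tooling.

```python
# Generic DDP skeleton for an 8-GPU node; launch with:
#   torchrun --nproc_per_node=8 train.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")           # torchrun sets rank/world-size env vars
device = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(device)

model = torch.nn.Linear(1024, 1024).to(device)
ddp_model = DDP(model, device_ids=[device])
opt = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

x = torch.randn(64, 1024, device=device)
loss = ddp_model(x).sum()
loss.backward()                           # DDP all-reduces gradients across GPUs
opt.step()

dist.destroy_process_group()
```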

Spot vs On-Demand

LightYear currently offers on-demand pricing only. Spot instances are on the roadmap for Q3 2025.
