Dell 575TK NVIDIA A100 80GB PCIe GPU, Refurbished
Warranty: Lifetime
Condition: Refurbished
NVIDIA OEM Part Number: 900-21001-0020-000 (or similar 900-21001-XXXX variants)
Model: NVIDIA A100 80GB PCIe (Ampere Architecture)
Key Specifications:
- GPU Memory: 80GB HBM2e with ECC
- Memory Bandwidth: 1,935 GB/s
- CUDA Cores: 6,912
- Tensor Cores: 432 (3rd Generation)
- RT Cores: N/A (compute-focused)
- FP64 Performance: 9.7 TFLOPS
- Tensor Float-32 (TF32) Performance: 156 TFLOPS (312 TFLOPS with sparsity)
- FP16 / BF16 Tensor Core: Up to 312 TFLOPS (624 TFLOPS with sparsity)
- INT8 Tensor Core: Up to 624 TOPS (1,248 TOPS with sparsity)
- Interface: PCIe 4.0 x16
- Power Consumption (TDP): 300W
- Power Connector: 1x 8-pin EPS auxiliary power
- Form Factor: Full Height, Full Length (FHFL), Dual-slot, Passive cooling (requires high airflow server environment)
- Features: Multi-Instance GPU (MIG) support – up to 7 isolated instances @ 10GB each; NVLink bridge support (connects a pair of GPUs)
The NVIDIA A100 80GB PCIe is a powerhouse data center GPU built for demanding AI training and inference, high-performance computing (HPC), data analytics, and large-scale simulations. Its massive 80GB of HBM2e memory makes it well suited to models and datasets too large for smaller GPUs.
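As a rough illustration of what 80GB of memory means in practice, the back-of-envelope sketch below estimates whether a model's raw weights fit on the card. The function name and the "1 billion parameters ≈ 1 GB per byte of precision" approximation are our own simplifications, not vendor guidance; real workloads also need room for activations, optimizer state, and framework overhead.

```python
# Back-of-envelope sizing sketch (illustrative only, not vendor guidance).
# Approximation: 1 billion parameters at N bytes each occupies ~N GB.

def weights_fit(params_billion: float, bytes_per_param: int,
                mem_gb: float = 80.0) -> bool:
    """Return True if the raw weights alone fit in mem_gb.

    Ignores activations, optimizer state, and runtime overhead,
    which can add substantially more memory in practice.
    """
    return params_billion * bytes_per_param <= mem_gb

# A 30B-parameter model in FP16 (2 bytes/param) needs ~60 GB: fits.
print(weights_fit(30, 2))   # → True
# A 70B-parameter model in FP16 needs ~140 GB: exceeds one card.
print(weights_fit(70, 2))   # → False
# The same 70B model quantized to INT8 (1 byte/param) needs ~70 GB: fits.
print(weights_fit(70, 1))   # → True
```

By the same arithmetic, each of the seven 10GB MIG instances can host a model of up to roughly 5B parameters in FP16, weights only.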