

HP Nvidia Tesla V100 16GB HBM2 4096-Bit PCI Express 3.0 x16 Video Graphics Card (Q2N68A) - NVIDIA-TESLA-V100-16GB-PCIE - Refurbished

Regular price $1,256.15 USD

Shipping calculated at checkout.
  • Free Ground Shipping Over $500
  • 3 Year Warranty Standard
  • Same Day Shipping Before 4PM EST
Carbon-neutral shipping on all orders with Shopify Planet

Condition: Refurbished, Like New, with Warranty
SKU: NVIDIA-TESLA-V100-16GB-PCIE

*Compute Cards are backed by a 6 Month Warranty*


  • GPU Architecture: NVIDIA Volta
  • NVIDIA Tensor Cores: 640
  • NVIDIA CUDA Cores: 5120
  • Double-Precision Performance: 7 TFLOPS
  • Single-Precision Performance: 14 TFLOPS
  • Tensor Performance: 112 TFLOPS
  • GPU Memory: 16GB HBM2
  • Memory Bandwidth: 900 GB/s
  • ECC: Yes
  • Interconnect Bandwidth: 32 GB/s
  • System Interface: PCIe Gen3
  • Form Factor: PCIe Full Height/Length
  • Max Power Consumption: 250 W
  • Thermal Solution: Passive
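
The headline numbers in the list above are internally consistent. As a rough sanity check (a sketch only: the ~1380 MHz boost clock and ~1.75 Gbit/s HBM2 pin rate are assumed from the PCIe V100's published figures and do not appear in this listing):

```python
# Back-of-envelope check of the spec-sheet numbers above.
# Assumed values (not stated in this listing): PCIe V100 boost clock
# ~1380 MHz, HBM2 data rate ~1.752 Gbit/s per pin.
CUDA_CORES = 5120
TENSOR_CORES = 640
BOOST_CLOCK_HZ = 1.38e9           # assumed boost clock
FP32_FLOPS_PER_CORE = 2           # one FMA = 2 FLOPs per cycle
FP64_RATIO = 2                    # FP64 throughput is half of FP32 on Volta
TENSOR_FLOPS_PER_CORE = 128       # 64 FMAs = 128 FLOPs per Tensor Core per cycle

fp32_tflops = CUDA_CORES * FP32_FLOPS_PER_CORE * BOOST_CLOCK_HZ / 1e12
fp64_tflops = fp32_tflops / FP64_RATIO
tensor_tflops = TENSOR_CORES * TENSOR_FLOPS_PER_CORE * BOOST_CLOCK_HZ / 1e12

BUS_WIDTH_BITS = 4096             # the "4096-Bit" in the product title
PIN_RATE_GBPS = 1.752             # assumed HBM2 data rate per pin
bandwidth_gbs = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8

print(f"FP32:   {fp32_tflops:.1f} TFLOPS")    # ~14.1, listed as 14
print(f"FP64:   {fp64_tflops:.1f} TFLOPS")    # ~7.1, listed as 7
print(f"Tensor: {tensor_tflops:.1f} TFLOPS")  # ~113, listed as 112
print(f"HBM2:   {bandwidth_gbs:.0f} GB/s")    # ~897, listed as 900
```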

For business or home office use

The NVIDIA Tesla V100 is the most advanced data center GPU ever built to accelerate AI, HPC, and graphics. Powered by the NVIDIA Volta GPU architecture, the Tesla V100 offers the performance of up to 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once thought impossible. Its groundbreaking hardware and software advancements make it an ideal solution for a wide range of use cases, including machine learning, deep learning, high-performance computing, and graphics applications.

With 640 Tensor Cores alongside its CUDA Cores, the Tesla V100 delivers up to 125 teraFLOPS of deep learning performance (112 TFLOPS for this PCIe variant): 12X more Tensor FLOPS for DL training and 6X more Tensor FLOPS for DL inference than NVIDIA Pascal GPUs. On SXM2 systems, NVIDIA NVLink provides 2X the throughput of the previous generation, interconnecting up to eight Tesla V100 accelerators at up to 300 GB/s for the highest application performance; this PCIe card connects over PCIe Gen3 x16 at 32 GB/s. With maximum efficiency mode enabled, data centers can fit 40% more compute capacity per rack, delivering 80% of peak performance at half the power consumption. The V100's 900 GB/s of raw HBM2 bandwidth, combined with 95% DRAM utilization efficiency, yields 1.5X higher measured memory bandwidth on STREAM than Pascal. A 32GB configuration is also available, with double the memory of the original 16GB model offered here. Finally, the Tesla V100 simplifies programmability with independent thread scheduling, which allows resource sharing among small jobs and improves GPU utilization.
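
The two interconnect figures quoted for the V100 family fall out of the link math. A minimal sketch, assuming the published V100 NVLink 2.0 link count and per-link rate (neither is stated in this listing):

```python
# Where the 300 GB/s (SXM2 NVLink) and 32 GB/s (this PCIe card)
# interconnect figures come from. Link counts/rates are assumed from
# NVIDIA's published V100 specs, not from this listing.
NVLINK_LINKS = 6                    # NVLink 2.0 links on the SXM2 V100
NVLINK_GBS_PER_LINK_PER_DIR = 25    # per link, per direction
nvlink_total = NVLINK_LINKS * NVLINK_GBS_PER_LINK_PER_DIR * 2  # bidirectional

PCIE_LANES = 16
PCIE_GBS_PER_LANE_PER_DIR = 0.985   # PCIe 3.0: 8 GT/s with 128b/130b encoding
pcie_total = round(PCIE_LANES * PCIE_GBS_PER_LANE_PER_DIR * 2)  # bidirectional

print(nvlink_total)  # 300 -> the NVLink figure for SXM2 systems
print(pcie_total)    # 32  -> the "Interconnect Bandwidth" spec above
```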

FedEx Priority and First Overnight Shipping Available

UPC: 832938094499