AI Workstation of the Deep Learning Elite

A quiet, fast, reliable, and versatile multi-GPU deep learning machine built to outperform every other solution on the market
VIEW PRODUCTS

AI Server for Infinite Inference

A cost-effective solution that delivers exceptional performance and scalability for AI inference needs
VIEW PRODUCTS

Liquid Cooling Integration Kit for AI Multi-GPU Servers

Upgrades any air-cooled GPU server to a liquid-cooling system, boosting performance by up to 30% and lowering total facility power consumption
VIEW PRODUCTS
GRANDO SERVER

INFERENCE PRODUCT LINE

Comino Grando AI INFERENCE servers are designed for high-performance, low-latency inference and fine-tuning of pre-trained machine learning and generative AI models, powering workloads in the vein of Stable Diffusion, Midjourney, Hugging Face, Character.AI, QuillBot, DALL·E 2, and more. Unique, cost-optimized, adjustable multi-GPU configurations are perfect for scaling on-premise or in a data center.

GRANDO AI INFERENCE BASE

Multi-GPU Server
NVIDIA OPTION: 6x RTX 4090 GPUs
AMD OPTION: 6x Radeon RX 7900XTX
1x AMD Threadripper Pro 7975WX CPU
Comino Liquid Cooling

Buy with 6x NVIDIA 4090 Buy with 6x AMD 7900XTX
GRANDO AI INFERENCE PRO

Multi-GPU Server
NVIDIA OPTION: 6x L40S GPUs
AMD OPTION: 6x Radeon PRO W7900
1x AMD Threadripper Pro 7985WX CPU
Comino Liquid Cooling

Buy with 6x NVIDIA L40S Buy with 6x AMD W7900
GRANDO AI INFERENCE MAX

Multi-GPU Server
NVIDIA OPTION: 6x A100 / H100 GPUs
1x AMD Threadripper Pro 7995WX CPU
Comino Liquid Cooling

Buy with 6x NVIDIA A100/H100

Grando AI INFERENCE BASE and PRO servers are unique, cost-optimized solutions hosting six liquid-cooled GPUs: either NVIDIA RTX 4090 / AMD Radeon RX 7900 XTX cards with 24GB of VRAM, widely considered the sweet spot for the majority of real-life inference tasks, or NVIDIA L40S (or RTX 6000 Ada) / AMD Radeon PRO W7900 cards with 48GB of VRAM, capable of running a quantized 70B-parameter Llama model on a single card. Efficient cooling eliminates thermal throttling, providing up to 50% performance headroom over comparable air-cooled solutions. In addition to this performance, Comino solutions come with a maintenance-free period of up to 3 years, maintenance as easy as on air-cooled systems, and the remote Comino Monitoring System (CMS), ready to be integrated into your software stack via API.
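
As an illustration only, the short sketch below polls a monitoring endpoint and prints GPU temperatures, roughly the shape such an integration could take. The endpoint path, field names, and authentication scheme are hypothetical placeholders, not the documented CMS API; contact Comino for the actual interface.

```python
import time
import requests

# Hypothetical CMS endpoint and token: placeholders only, not the real Comino API.
CMS_URL = "https://cms.example.local/api/v1/telemetry"
API_TOKEN = "replace-with-your-token"

def poll_telemetry():
    """Fetch one telemetry snapshot from the (assumed) CMS REST endpoint."""
    response = requests.get(
        CMS_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    while True:
        data = poll_telemetry()
        # Field names below are assumptions for illustration.
        for gpu in data.get("gpus", []):
            print(f"GPU {gpu.get('index')}: {gpu.get('temperature_c')} °C")
        time.sleep(30)
```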

Grando INFERENCE servers are pre-tested with the NVIDIA CUDA Toolkit & cuDNN, AMD ROCm, PyTorch, TensorFlow, ONNX Runtime, Keras, and JAX frameworks and libraries. They are equipped with SIX NVIDIA (A100 / H100 / L40S / 4090) or AMD (W7900 / 7900 XTX) GPUs paired with modern high-frequency multi-core CPUs to guarantee best-in-class inference performance and throughput for the most demanding and versatile workflows.
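
As a quick post-deployment sanity check, a few lines of PyTorch (a minimal sketch, assuming a standard PyTorch build with the CUDA or ROCm backend installed) can confirm that all six GPUs are visible to the framework before serving traffic:

```python
import torch

# PyTorch reports AMD (ROCm) devices through the same torch.cuda API,
# so this check covers both the NVIDIA and AMD configurations.
if torch.cuda.is_available():
    n = torch.cuda.device_count()
    print(f"Detected {n} GPU(s)")
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        vram_gb = props.total_memory / 1024**3
        print(f"  GPU {i}: {props.name}, {vram_gb:.0f} GB VRAM")
else:
    print("No GPU backend detected - check the driver and CUDA/ROCm installation.")
```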

expert review

"INFINITE Inference Power for AI"

Unlock the power of performance with Sentdex!

"A lot of inference power comes from this Powerhouse machine from Comino which has not one, not two, not three - it has six 4090s inside!
Harrison Kinsley, the coding maestro aka Sentdex, dives into the ultimate tech thrill with the Comino Grando Server featuring a mind-blowing 6x RTX 4090s!

Talk To Engineer

Let's talk

Grando AI Inference Product Specifications

Please contact our sales team if you would like a custom setup
Specs | GRANDO AI INFERENCE BASE | GRANDO AI INFERENCE PRO | GRANDO AI INFERENCE MAX
NVIDIA GPU OPTION | 6x NVIDIA RTX 4090 | 6x NVIDIA L40S | 6x NVIDIA A100 / H100
AMD GPU OPTION | 6x AMD Radeon RX 7900XTX | 6x AMD Radeon PRO W7900 | N/A
GPU MEMORY | 6x 24 GB | 6x 48 GB | 6x 80 GB / 94 GB
CPU | AMD Threadripper PRO 7975WX (32 cores) | AMD Threadripper PRO 7985WX (64 cores) | AMD Threadripper PRO 7995WX (96 cores)
SYSTEM POWER USAGE | Up to 3.6 kW | Up to 3.0 kW | Up to 3.0 kW
MEMORY | 256 GB DDR5 | 512 GB DDR5 | 512 GB DDR5
NETWORKING | Dual-port 10Gb, 1Gb IPMI | Dual-port 10Gb, 1Gb IPMI | Dual-port 10Gb, 1Gb IPMI
STORAGE (OS) | Dual 1.92 TB M.2 NVMe drives | Dual 1.92 TB M.2 NVMe drives | Dual 1.92 TB M.2 NVMe drives
STORAGE (DATA/CACHE) | On request | On request | On request
COOLING SYSTEM | CPU & GPU liquid cooling | CPU & GPU liquid cooling | CPU & GPU liquid cooling
SYSTEM ACOUSTICS | High | High | High
OPERATING TEMPERATURE RANGE | Up to 38°C | Up to 38°C | Up to 38°C
OS COMPATIBILITY | Ubuntu / Windows | Ubuntu / Windows | Ubuntu / Windows
SIZE | 439 x 177 x 681 mm | 439 x 177 x 681 mm | 439 x 177 x 681 mm
CLASS | Server | Server | Server
GRANDO WORKSTATION

DEEP LEARNING PRODUCT LINE

Comino Grando AI Deep Learning workstations are designed for on-premise training and fine-tuning of complex deep learning neural networks with large datasets, focusing on the field of generative AI but not limited to it. They provide top-tier, unique multi-GPU configurations to accelerate training and fine-tuning of compute-hungry diffusion, multimodal, computer vision, large language (LLM), and other models.

GRANDO AI DL BASE

Multi-GPU Workstation
NVIDIA OPTION: 4x RTX 4090 GPUs
AMD OPTION: 4x Radeon RX 7900XTX
1x AMD Threadripper Pro 7975WX CPU
Comino Liquid Cooling

Buy with 4x NVIDIA 4090 Buy with 4x AMD 7900XTX
GRANDO AI DL PRO

Multi-GPU Workstation
NVIDIA OPTION: 4x L40S GPUs
AMD OPTION: 4x Radeon PRO W7900
1x AMD Threadripper Pro 7985WX CPU
Comino Liquid Cooling

Buy with 4x NVIDIA L40S Buy with 4x AMD W7900
GRANDO AI DL MAX

Multi-GPU Workstation
NVIDIA OPTION: 4x A100 / H100 GPUs
1x AMD Threadripper Pro 7995WX CPU
Comino Liquid Cooling

Buy with 4x NVIDIA A100/H100

The Grando AI DL MAX workstation hosts FOUR liquid-cooled NVIDIA H100 GPUs with a total of 376GB of HBM memory and a 96-core Threadripper PRO CPU running at up to 5.1GHz, providing up to 50% performance headroom over comparable air-cooled solutions. In addition to this performance, Comino solutions come with a maintenance-free period of up to 3 years, maintenance as easy as on air-cooled systems, and the remote Comino Monitoring System (CMS), ready to be integrated into your software stack via API.

Grando DL workstations are pre-tested with the NVIDIA CUDA Toolkit & cuDNN, AMD ROCm, PyTorch, TensorFlow, ONNX Runtime, Keras, and JAX frameworks and libraries. They are equipped with FOUR NVIDIA (A100 / H100 / L40S / 4090) or AMD (W7900 / 7900 XTX) GPUs paired with modern high-frequency multi-core CPUs to deliver best-in-class machine and deep learning performance combined with silent operation, even for the most demanding and versatile workflows, including workloads in the vein of Stable Diffusion, Midjourney, Hugging Face, Llama, Character.AI, QuillBot, DALL·E 2, and more.
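
For scale, a fine-tuning job would typically be spread across all four GPUs with PyTorch's DistributedDataParallel. The sketch below is a minimal illustration with a placeholder model and synthetic data, not a Comino-supplied training script; it assumes a standard PyTorch install launched via torchrun:

```python
# Minimal multi-GPU training sketch using PyTorch DistributedDataParallel.
# Launch with: torchrun --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")            # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):                             # placeholder training loop
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                                 # gradients all-reduced across GPUs
        optimizer.step()
        if local_rank == 0 and step % 10 == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```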

Talk To Engineer

Let's talk

Grando AI DL Product Specifications

Please contact our sales team if you would like a custom setup
Specs | GRANDO AI DL BASE | GRANDO AI DL PRO | GRANDO AI DL MAX
NVIDIA GPU OPTION | 4x NVIDIA RTX 4090 | 4x NVIDIA L40S | 4x NVIDIA A100 / H100
AMD GPU OPTION | 4x AMD Radeon RX 7900XTX | 4x AMD Radeon PRO W7900 | N/A
GPU MEMORY | Total: 96 GB | Total: 192 GB | Total: 320 GB / 376 GB
CPU | AMD Threadripper PRO 7975WX (32 cores) | AMD Threadripper PRO 7985WX (64 cores) | AMD Threadripper PRO 7995WX (96 cores)
SYSTEM POWER USAGE | Up to 2.6 kW | Up to 2.2 kW | Up to 2.2 kW
MEMORY | 256 GB DDR5 | 512 GB DDR5 | 1024 GB DDR5
NETWORKING | Dual-port 10Gb, 1Gb IPMI | Dual-port 10Gb, 1Gb IPMI | Dual-port 10Gb, 1Gb IPMI
STORAGE (OS) | Dual 1.92 TB M.2 NVMe drives | Dual 1.92 TB M.2 NVMe drives | Dual 1.92 TB M.2 NVMe drives
STORAGE (DATA/CACHE) | On request | Dual 7.68 TB U.2 NVMe drives | Dual 7.68 TB U.2 NVMe drives
COOLING SYSTEM | CPU & GPU liquid cooling | CPU & GPU liquid cooling | CPU & GPU liquid cooling
SYSTEM ACOUSTICS | Medium | Low | Low
OPERATING TEMPERATURE RANGE | Up to 30°C | Up to 30°C | Up to 30°C
OS COMPATIBILITY | Ubuntu / Windows | Ubuntu / Windows | Ubuntu / Windows
SIZE | 439 x 177 x 681 mm | 439 x 177 x 681 mm | 439 x 177 x 681 mm
CLASS | Workstation | Workstation | Workstation

certified under partner programs

Comino has established strong strategic relationships with industry leaders
testimonials

Praised by the Top Tech Leaders worldwide

Jesse Woolston

"The main factor as to why I love the Grando RM is its ability to be diverse with training and modelling, where I can give it any and all assignments and I am able to just utilise the tools and focus on the art".

Linus Sebastian

"God of computers".
"On this machine, compute take such little time, that I've been having trouble getting all GPUs to get fully loaded".
"It appears to be rock freaking solid stable".

Harrison Kinsley

"This is the coolest deep learning machine that I have ever had the opportunity to use. It’s the most power in the smallest form factor also, that I’ve ever used, and finally, it also runs the coolest temperatures, that I’ve ever used"

trusted by
are you ready?

join the elite
of Grando Professionals

order your grando now

Have a media inquiry? Looking for more info about Comino? Contact one of our team members at pr@comino.com

Technology Partners

At Comino, we are dedicated to flexibility, supporting a wide array of components so that we are never confined by the constraints of a single vendor. Custom-tailored solutions address the specific needs of each client, and our carefully selected components ensure precise, individualized results. This multifaceted approach lets us deliver exceptional, bespoke systems that fulfill the unique requirements of our valued clients.