OCP Articles: GPU Acceleration
- How to use GPUs in OpenShift and Kubernetes (22/02/2017),
- How to use GPUs in OpenShift 3.6 (Still Alpha) (29/08/2017),
- How to use GPUs with Device Plugin in OpenShift 3.9 (Now Tech Preview!) (28/03/2018),
- How to use GPUs with Device Plugin in OpenShift 3.10 (01/08/2018),
- GPU Accelerated SQL queries with PostgreSQL & PG-Strom in OpenShift-3.10 (08/08/2018),
- GPU Support for AI Workloads in Red Hat OpenShift 4 (09/05/2019),
- OpenShift 4.2 on Red Hat OpenStack Platform 13 + GPU (27/10/2019),
- Creating a GPU-enabled node with OpenShift 4.2 in Amazon EC2 (28/10/2019),
- NVIDIA GPU Operator with OpenShift 4.3 on Red Hat OpenStack Platform 13 (16/02/2020),
- Part 1: How to Enable Hardware Accelerators on OpenShift (17/02/2020),
- JupyterHub on-demand (and other tools) (18/03/2020),
- Simplifying deployments of accelerated AI workloads on Red Hat OpenShift with NVIDIA GPU Operator (18/03/2020),
- Part 2: How to enable Hardware Accelerators on OpenShift, SRO Building Blocks (25/03/2020),
- Validating Distributed Multi-Node Autonomous Vehicle AI Training with NVIDIA DGX Systems on OpenShift with DXC Robotic Drive (29/07/2020),
- Running HPC workloads with Red Hat OpenShift Using MPI and Lustre Filesystem (29/10/2020),
- How to install NVIDIA GPU Operator in OpenShift 4 (02/11/2020),
- Adding More Support in NVIDIA GPU Operator (26/01/2021),
- OpenShift on NVIDIA GPU Accelerated Clusters (12/03/2021),
- Using NVIDIA GPUs in OpenShift (14/03/2021),
- Using the NVIDIA GPU Operator to Run Distributed TensorFlow 2.4 GPU Benchmarks in OpenShift 4 (23/03/2021),
- Multi-Instance GPU Support with the GPU Operator v1.7.0 (15/06/2021),
- Using NVIDIA A100’s Multi-Instance GPU to Run Multiple Workloads in Parallel on a Single GPU (26/08/2021),
- Red Hat collaborates with NVIDIA to deliver record-breaking STAC-A2 Market Risk benchmark (09/11/2021),
- Entitlement-Free Deployment of the NVIDIA GPU Operator on OpenShift (14/12/2021),
- Enabling vGPU in OpenShift Containerized Virtualization (06/02/2022),
- Enabling vGPU in a Single Node using OpenShift Virtualization (11/05/2022),
- A Guide to Functional and Performance Testing of the NVIDIA DGX A100 (23/06/2022),
- Use GPU workloads with Azure Red Hat OpenShift (31/08/2022),
- How we built an AI platform with Red Hat OpenShift, VMware vSphere and NVIDIA GPUs (05/12/2022),
- How to accelerate workloads with NVIDIA GPUs on Red Hat Device Edge (14/02/2023),
- Red Hat collaborates with NVIDIA to offer GPU-based 5G vRAN solutions with Red Hat OpenShift (27/02/2023),
- GPU instance types available for ROSA (02/03/2023),
- Autoscaling NVIDIA GPUs on Red Hat OpenShift (09/06/2023),
- NVIDIA InfiniBand on an OpenShift Connected or Air-Gapped Cluster (12/09/2023),
- How precompiled drivers improve NVIDIA GPU autoscaling on Red Hat OpenShift (26/10/2023),
- NVIDIA InfiniBand on a Red Hat OpenShift connected or air-gapped cluster (21/11/2023),
- Deploying GPUs for AI Workloads on OpenShift on AWS (29/01/2024),
- Setting Up NVIDIA Tesla T4 Time-Slicing for AI Workloads on Red Hat OpenShift (RHOAI) in AWS Cloud (15/02/2024),
- Enable GPU acceleration with the Kernel Module Management Operator (05/04/2024),
- Enabling NVIDIA GPU in Red Hat OpenShift AI (21/04/2024),
- Building a Custom NVIDIA DCGM Grafana Dashboard on OpenShift (24/04/2024),
- How to check an NVIDIA MIG device's actual UUID? (01/07/2024),
- Sharing is caring: How to make the most of your GPUs (part 1 – time-slicing) (02/07/2024),
- Sharing is caring: How to make the most of your GPUs part 2 – Multi-instance GPU (06/09/2024),
- How AMD GPUs accelerate model training and tuning with OpenShift AI (03/10/2024),
- How to use AMD GPUs for model serving in OpenShift AI (08/10/2024),
- Democratizing AI Accelerators and GPU Kernel Programming using Triton (07/11/2024),
- RDMA with NVIDIA on OpenShift (04/01/2025),
- RDMA+CUDA with NVIDIA on OpenShift (06/01/2025),
- Build RDMA GPU-Tools Container (08/01/2025),
- RDMA: Shared, Hostdevice, Legacy SRIOV (15/01/2025),
- How MIG maximizes GPU efficiency on OpenShift AI (06/02/2025),
- Optimizing GPU ROI: Inference by day, training by night (26/03/2025),
- Enable 3.5 times faster vision language models with quantization (01/04/2025),
- NVIDIA GPU Direct Storage on OpenShift (01/04/2025),
- Accelerate model training on OpenShift AI with NVIDIA GPUDirect RDMA (29/04/2025),
- The benefits of dynamic GPU slicing in OpenShift (06/05/2025),
- Boost GPU efficiency in Kubernetes with NVIDIA Multi-Instance GPU (27/05/2025),
- Exploring the NVIDIA Maintenance Operator (07/06/2025),
- MLPerf Inference v5.0 results with Supermicro’s GH200 Grace Hopper Superchip-based Server and Red Hat OpenShift (18/06/2025),
- Episode-XXVII: Lessons Learned from Telco MCP Backend Experiments (20/06/2025),
- The hidden cost of large language models (24/06/2025),
- InfiniBand on OpenShift 4.16 with DGX H100 (10/07/2025),
- NVIDIA GPU Direct Storage on OpenShift (11/07/2025),
- Ollama vs. vLLM: A deep dive into performance benchmarking (08/08/2025),
- Boost AI efficiency with GPU autoscaling on OpenShift (12/08/2025),
- Optimize GPU utilization with Kueue and KEDA (26/08/2025),
- Unlocking AI innovation: GPU-as-a-Service with Red Hat (16/09/2025),
- OpenShift with GPU support on your Laptop (26/09/2025),
- Dynamic GPU slicing with Red Hat OpenShift and NVIDIA MIG (14/10/2025),
- Network performance in distributed training: Maximizing GPU utilization on OpenShift (16/10/2025),
- GPU-as-a-Service for AI at scale: Practical strategies with Red Hat OpenShift AI (10/11/2025),
- Triton Kernel Profiling with NVIDIA Nsight Tools (19/11/2025).
