    Google and NVIDIA collaborate on quantum circuit simulation

    Accelerating quantum circuit simulations by integrating NVIDIA's GPU-powered cuQuantum SDK with Google's Cirq.

    Introduction

    The partnership between Google Quantum AI and NVIDIA represents a strategic collaboration aimed at advancing quantum computing research through enhanced classical simulation capabilities. Because quantum computers remain limited in scale and prone to errors, classical simulation of quantum circuits is still crucial for algorithm development, error correction research, and validating quantum supremacy experiments. Google Quantum AI, known for achieving quantum supremacy with its Sycamore processor, recognised the need for more powerful simulation tools to support its quantum research initiatives. NVIDIA, with its leadership in GPU computing and its recent push into quantum computing through the cuQuantum platform, emerged as an ideal partner. The collaboration combines NVIDIA's GPU acceleration expertise with Google's quantum algorithms and Cirq framework, creating a powerful ecosystem for quantum research. It addresses the key bottleneck in simulating quantum systems: resource requirements grow exponentially with the number of qubits, making traditional CPU-based simulation impractical for circuits beyond roughly 30-40 qubits.

    Challenge

    The primary challenge addressed by this partnership is the exponential scaling problem inherent in quantum circuit simulation. As quantum systems grow, the memory and computational requirements of classical simulation increase exponentially: simulating an n-qubit system requires storing and manipulating 2^n complex amplitudes. This fundamental limitation has constrained researchers' ability to develop and validate quantum algorithms, study error correction schemes, and benchmark quantum hardware performance. Google Quantum AI faced specific challenges in simulating its experimental quantum circuits, particularly when validating its quantum supremacy experiments and developing new quantum algorithms. Traditional CPU-based simulators were reaching their limits, unable to handle circuits of more than 40-50 qubits even on supercomputers. The research community also needed better tools to prototype quantum algorithms before running them on actual quantum hardware, which remains scarce and expensive. This lack of efficient simulation capabilities created a bottleneck in quantum algorithm development, making it difficult to explore the full potential of near-term quantum devices and to prepare for future fault-tolerant quantum computers.
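
    To make the scaling concrete, the sketch below estimates the memory needed to hold a full double-precision state vector, at 16 bytes per complex amplitude; the numbers are simple arithmetic used for illustration, not figures from the case study.

    ```python
    # Back-of-envelope memory cost of exact state-vector simulation:
    # an n-qubit state holds 2**n complex amplitudes, 16 bytes each
    # at double precision (complex128).
    def statevector_bytes(n_qubits: int) -> int:
        return (2 ** n_qubits) * 16

    for n in (30, 40, 50):
        print(f"{n} qubits: {statevector_bytes(n) / 2**30:,.0f} GiB")

    # 30 qubits ->         16 GiB  (fits on a single large GPU)
    # 40 qubits ->     16,384 GiB  (16 TiB: already cluster territory)
    # 50 qubits -> 16,777,216 GiB  (16 PiB: beyond any practical machine)
    ```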

    Solution

    The partnership developed an integrated solution combining NVIDIA's cuQuantum SDK with Google's Cirq quantum programming framework. The cuQuantum library provides GPU-accelerated primitives for quantum circuit simulation, including state vector and tensor network methods optimised for NVIDIA GPUs. This integration allows Cirq users to leverage GPU acceleration without modifying their existing quantum circuits or algorithms. The solution includes optimised implementations of common quantum gates and operations, automatic memory management for large quantum states, and multi-GPU support for simulating even larger circuits. The technical implementation focuses on three key areas: state vector simulation acceleration, where full quantum states are stored and manipulated on GPUs; tensor network contraction optimisation, enabling approximate simulation of larger circuits; and hybrid CPU-GPU algorithms that intelligently partition workloads. The solution also provides tools for noise simulation, crucial for understanding how algorithms will perform on real quantum hardware. By leveraging NVIDIA's Tensor Cores and high-bandwidth memory, it achieves significant speedups over CPU-based simulators while maintaining the numerical precision required for quantum computing research.
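
    As a concrete illustration of this workflow, the sketch below runs a small Cirq circuit on a GPU backend via the qsimcirq package, which exposes cuQuantum's cuStateVec library as a GPU mode of Google's qsim simulator. Option names reflect recent qsimcirq releases and may vary between versions.

    ```python
    # Sketch: GPU-accelerated simulation of a Cirq circuit via qsimcirq.
    # Assumes qsimcirq is installed with GPU/cuQuantum support.
    import cirq
    import qsimcirq

    # A small GHZ circuit written in plain Cirq: nothing about the
    # circuit itself is GPU-specific.
    qubits = cirq.LineQubit.range(4)
    circuit = cirq.Circuit(
        cirq.H(qubits[0]),
        [cirq.CNOT(qubits[i], qubits[i + 1]) for i in range(3)],
        cirq.measure(*qubits, key="m"),
    )

    # use_gpu=True enables GPU state-vector simulation; gpu_mode=1
    # selects the cuQuantum (cuStateVec) backend rather than qsim's
    # native CUDA kernels.
    options = qsimcirq.QSimOptions(use_gpu=True, gpu_mode=1)
    simulator = qsimcirq.QSimSimulator(qsim_options=options)

    result = simulator.run(circuit, repetitions=1000)
    print(result.histogram(key="m"))  # expect ~50/50 split of 0000 and 1111
    ```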

    Implementation

    The implementation involved close collaboration between Google's Cirq development team and NVIDIA's cuQuantum engineers to create seamless integration points. The teams developed a plugin architecture allowing Cirq to automatically detect and utilise cuQuantum backends when available. Work began with benchmarking existing simulation workloads to identify performance bottlenecks and optimisation opportunities. The integration supports multiple simulation modes: full state vector simulation for exact results on smaller circuits, tensor network approximations for larger circuits, and hybrid approaches that balance accuracy and performance. It includes automatic device selection, choosing between CPU and GPU execution based on circuit characteristics and available hardware. Memory management strategies were developed to handle the large memory requirements of quantum simulation, including out-of-core algorithms for circuits exceeding GPU memory capacity. The teams also implemented comprehensive testing frameworks to ensure numerical accuracy and compatibility with existing Cirq workflows, and created documentation and tutorials to help researchers transition to GPU-accelerated simulation. The implementation maintains backward compatibility, allowing existing Cirq programs to benefit from acceleration without code changes, while providing advanced APIs for users who want fine-grained control over the simulation process.
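
    The detect-and-fall-back, backward-compatible behaviour described above can be pictured with a short sketch. The selection logic below is hypothetical rather than the actual plugin internals, but the swap-the-simulator pattern is what drop-in acceleration looks like from a Cirq user's point of view: the circuit code never changes.

    ```python
    # Illustrative sketch of drop-in acceleration: the circuit code is
    # unchanged; only the simulator object differs. The fallback logic
    # below is hypothetical, not the actual Cirq/cuQuantum plugin code.
    import cirq

    def make_simulator() -> cirq.SimulatesSamples:
        """Prefer a GPU-accelerated backend; fall back to Cirq's CPU simulator."""
        try:
            import qsimcirq
            return qsimcirq.QSimSimulator(
                qsim_options=qsimcirq.QSimOptions(use_gpu=True, gpu_mode=1)
            )
        except (ImportError, RuntimeError):
            # No qsimcirq or no usable GPU: plain CPU simulation.
            return cirq.Simulator()

    # An existing Cirq workflow keeps running unmodified.
    a, b = cirq.LineQubit.range(2)
    bell = cirq.Circuit(cirq.H(a), cirq.CNOT(a, b), cirq.measure(a, b, key="m"))
    print(make_simulator().run(bell, repetitions=100).histogram(key="m"))
    ```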

    Results and Business Impact

    The partnership yielded significant performance improvements in quantum circuit simulation, with benchmarks showing speedups of 10-100x for typical quantum circuits compared to CPU-based simulation. Researchers can now simulate circuits of 40+ qubits on a single GPU, workloads that previously required entire CPU clusters. This acceleration has enabled new research directions, including more comprehensive studies of quantum error correction, deeper exploration of variational quantum algorithms, and validation of quantum advantage experiments. The collaboration has strengthened both organisations' positions in the quantum computing ecosystem. For Google, it provides researchers and cloud users with state-of-the-art simulation capabilities, accelerating the quantum algorithm development cycle. For NVIDIA, it validates the company's quantum computing strategy and establishes its GPUs as essential infrastructure for quantum research. The broader impact includes democratising access to quantum simulation, as researchers can now perform substantial simulations on workstation GPUs rather than requiring supercomputer access, accelerating the pace of quantum algorithm development across the research community. Several research papers have already cited performance improvements from using this integrated solution, demonstrating its immediate scientific impact.

    Future Directions

    The partnership continues to evolve with plans to support emerging quantum computing paradigms and larger-scale simulations. Future developments include enhanced support for quantum error correction simulations, crucial for the path toward fault-tolerant quantum computing. The teams are exploring advanced tensor network methods that could enable approximate simulation of hundreds of qubits for specific circuit classes. Integration with quantum machine learning workflows is another priority, as hybrid classical-quantum algorithms become increasingly important. The partnership is also investigating support for other quantum computing frameworks beyond Cirq, potentially creating a broader ecosystem of GPU-accelerated quantum tools. As NVIDIA develops next-generation GPUs with increased memory capacity and computational power, the collaboration will optimise simulations to leverage these advances. There are also plans to integrate with cloud platforms, making GPU-accelerated quantum simulation accessible to a broader audience through quantum cloud services. The long-term vision includes developing specialised hardware accelerators for quantum simulation, potentially leading to custom silicon optimised for quantum computing workloads.


    Quick Facts

    Year: 2021
    Partner Companies: NVIDIA
    Quantum Companies: Google Quantum AI

    Technical Details

    Quantum Hardware: Google Sycamore
    Quantum Software: Cirq, NVIDIA cuQuantum SDK

    Categories

    Industries: AI and Machine Learning, Government and Public Sector, Education
    Algorithms: Quantum Error Correction (QEC), Variational Quantum Eigensolver (VQE), Quantum Approximate Optimization Algorithm (QAOA)
    Target Personas: Software Engineer, Quantum Cloud and Platform Provider, Quantum Educator, Quantum Algorithm Developer, Quantum Hardware Engineer

    Additional Resources

    NVIDIA Accelerates Google Quantum AI Processor Design With Simulation of Quantum Device Physics
    NVIDIA Partners Accelerate Quantum Breakthroughs with AI Supercomputing
    2021 Year in Review: Google Quantum AI
    NVIDIA Teams With Google Quantum AI, IBM and Other Leaders to Speed Research in Quantum Computing
    Could Google's Quantum Leap Represent Long-Term Challenges For Nvidia?