
    Quantum Gradient Descent (QGD)

    QGD applies quantum computing to accelerate gradient descent, potentially improving optimization and machine learning by exploiting quantum properties for faster gradient calculations and parameter updates.

    8 Use Cases

    Primary Use Cases

    Optimization
    Parameter Estimation
    Neural Networks
    Machine Learning
    Training Models
    Deep Learning
    Predictive Analytics
    AI Research

    Quantum Gradient Descent (QGD) is a quantum optimization algorithm that extends the classical gradient descent method to the quantum domain[1]. Gradient descent is a widely used optimization technique in machine learning for training models such as neural networks, support vector machines, and logistic regression. The goal is to find the model parameters that minimize a given cost function by iteratively updating the parameters in the direction of the negative gradient of the cost function.
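    For reference, the classical update that QGD generalises is simply "move the parameters a small step against the gradient". A minimal NumPy sketch of that baseline (the quadratic cost, target vector, and learning rate below are illustrative choices, not taken from the text):

```python
import numpy as np

TARGET = np.array([1.0, -2.0])

def cost(theta):
    # Illustrative convex cost: C(theta) = ||theta - target||^2
    return np.sum((theta - TARGET) ** 2)

def gradient(theta):
    # Analytic gradient of the quadratic cost above
    return 2.0 * (theta - TARGET)

theta = np.zeros(2)
eta = 0.1  # learning rate
for _ in range(100):
    theta -= eta * gradient(theta)  # step along the negative gradient

print(theta)  # converges towards [1.0, -2.0]
```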

    Problem Target

    QGD exploits quantum computing to perform gradient descent updates more efficiently than classical algorithms, particularly for high-dimensional and non-convex optimization problems[2]. The key idea is to represent the model parameters with quantum states and to use quantum operations to compute the gradients and updates.

    Quantum Approach

    The core idea behind QGD is to utilise quantum operations to estimate the gradient of the function being optimised. This is often achieved by encoding the function’s parameters into a quantum state and then applying quantum circuits to compute the gradient information[3]. Quantum superposition and interference effects can be harnessed to simultaneously explore multiple directions in the parameter space, potentially leading to faster convergence compared to classical methods that explore one direction at a time.

    QGD typically involves an iterative process where the quantum estimate of the gradient is used to update the parameters, gradually moving closer to the minimum of the function. The specific quantum operations and circuits used for gradient estimation can vary depending on the problem and the available quantum hardware.
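    As a hedged, concrete illustration of this pattern, frameworks such as PennyLane (an assumption here; the text does not name a framework) let a parametrised circuit define the cost and estimate its gradient by executing shifted copies of the circuit:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, diff_method="parameter-shift")
def cost(params):
    # Small parametrised circuit standing in for a model (illustrative choice)
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

params = np.array([0.3, 0.8], requires_grad=True)
grads = qml.grad(cost)(params)  # each entry comes from shifted circuit evaluations
print(grads)
```

    The same hybrid structure, a quantum circuit inside a classical optimisation loop, underlies the implementation steps listed later on this page.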

    Practical Applications

    The main potential advantage of QGD over classical gradient descent is speed. For certain types of cost functions and parameter spaces, QGD can provide a polynomial or even exponential speedup over classical gradient descent in terms of the number of iterations and the overall optimization time, because quantum computers can perform certain operations, such as function evaluations and gradient computations, more efficiently than classical computers.

    There is also a potential improvement in convergence: QGD may find better solutions than classical gradient descent by exploring a larger space of parameter configurations and avoiding local minima[5], owing to the ability of quantum algorithms to perform global search and optimization more efficiently than classical algorithms.

    Another potential advantage is reduced memory requirements. QGD can reduce the amount of memory needed for storing and updating the parameters by encoding them into quantum states and performing the computations directly on the quantum hardware.

    Implementation Challenges

    The practical implementation of QGD on near-term quantum devices faces several familiar challenges. Efficient parameter encoding requires methods for loading large-scale, high-precision parameters into quantum states while preserving the relevant information and gradients. Cost functions need scalable quantum subroutines for evaluating them and their gradients, and these subroutines must run on near-term hardware with limited qubit counts and gate fidelity. That, in turn, depends on noise-resilient optimization: robust QGD algorithms that can operate in the presence of noise and errors in the quantum hardware.

    As with other machine learning approaches in quantum computing, the question of integration with classical machine learning remains open. Efforts are being made to explore hybrid quantum-classical methods that combine QGD with classical optimization techniques (such as momentum, adaptive learning rates, and regularisation) to leverage the strengths of both paradigms.

    Experimental demonstrations of QGD have been reported on various quantum computing platforms, including superconducting qubits and quantum simulators, showing promising results for small-scale problems. However, the scalability and performance of QGD on larger and more realistic problems remain open research questions.

    Bottom Line

    Quantum Gradient Descent is a promising quantum optimization algorithm that extends the classical gradient descent method to the quantum domain, with the potential to provide significant speedups and improved convergence for high-dimensional and non-convex optimization problems. By using quantum states and operations to represent the parameters and to compute the gradients and updates more efficiently than classical methods, QGD could accelerate the training of machine learning models and the solution of optimization problems in domains from finance and logistics to physics and chemistry.

    Having said that, significant research efforts are still needed to address the challenges of efficient parameter encoding, scalable cost functions, noise-resilient optimization, and integration with classical techniques, before QGD can be deployed in real-world scenarios. As quantum technologies continue to advance, QGD is expected to play an important role in the emerging field of quantum-enhanced optimization and machine learning.


    Implementation Steps

    Step 1.

    Parameter encoding

    The model parameters are encoded into a quantum state, typically using amplitude encoding or qubit encoding. In amplitude encoding, each parameter is represented by the amplitude of a basis state in a quantum superposition. In qubit encoding, each parameter is assigned to a qubit or a group of qubits, and the parameter values are encoded in the qubit states.
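    A hedged sketch of amplitude encoding: k real parameters are normalised and placed in the amplitudes of a ⌈log₂ k⌉-qubit state, which is also where the memory advantage mentioned above comes from. The helper below is purely illustrative and simulates the state vector classically:

```python
import numpy as np

def amplitude_encode(params):
    """Encode a real parameter vector into the amplitudes of a state vector.

    Pads to the next power of two and normalises, so k parameters need
    only ceil(log2(k)) qubits.
    """
    k = len(params)
    n_qubits = int(np.ceil(np.log2(k))) if k > 1 else 1
    dim = 2 ** n_qubits
    padded = np.zeros(dim)
    padded[:k] = params
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("Cannot encode an all-zero parameter vector")
    return padded / norm, n_qubits

state, n_qubits = amplitude_encode(np.array([0.5, -1.0, 2.0]))
print(n_qubits, state)  # 2 qubits, amplitudes proportional to the parameters
```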

    Step 2.

    Cost function evaluation

    The cost function is evaluated for the current set of parameters, using a quantum subroutine that computes the value of the cost function from the quantum state representing the parameters. This can be done using techniques such as Quantum Phase Estimation (QPE), Quantum Amplitude Estimation (QAE), or Variational Quantum Eigensolvers (VQE).
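    One common pattern (an assumption here; the text does not fix a single subroutine) is to express the cost as the expectation value of a Hermitian observable in the encoded state, C(θ) = ⟨ψ(θ)|H|ψ(θ)⟩. A classical state-vector simulation of that evaluation:

```python
import numpy as np

def expectation_cost(state, observable):
    """Cost as <psi|H|psi> for a state vector and a Hermitian matrix H."""
    return np.real(np.conj(state) @ (observable @ state))

# Illustrative single-qubit example: H = Pauli-Z, state = RY(theta)|0>
theta = 0.7
state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
pauli_z = np.array([[1.0, 0.0], [0.0, -1.0]])
print(expectation_cost(state, pauli_z))  # equals cos(theta)
```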

    Step 3.

    Gradient computation

    The gradients of the cost function with respect to the parameters are computed using a quantum subroutine, such as the quantum finite difference method, the quantum back-propagation algorithm, or the parameter-shift rule. These subroutines can estimate the gradients more efficiently than classical methods, by exploiting quantum parallelism and interference effects[4].
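    Of these, the parameter-shift rule is the most hardware-friendly: for a gate generated by a Pauli operator, the exact derivative is obtained from just two shifted circuit evaluations, (C(θ + π/2) − C(θ − π/2)) / 2. A small check against the single-qubit cost from the previous sketch:

```python
import numpy as np

def cost(theta):
    # <Z> after RY(theta)|0>, as in the cost-evaluation sketch above
    return np.cos(theta)

def parameter_shift_grad(cost_fn, theta, shift=np.pi / 2):
    """Exact gradient for Pauli-generated gates via two shifted evaluations."""
    return (cost_fn(theta + shift) - cost_fn(theta - shift)) / 2.0

theta = 0.7
print(parameter_shift_grad(cost, theta))  # -0.644...
print(-np.sin(theta))                     # analytic derivative, matches
```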

    Step 4.

    Parameter update

    The parameters are updated based on the computed gradients, using a quantum subroutine that performs the gradient descent update rule, such as the quantum stochastic gradient descent or the quantum Adam optimiser. These subroutines can update the parameters more efficiently than classical methods, by exploiting the quantum superposition and interference effects.
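    However the gradient is obtained, the update itself is the familiar rule θ ← θ − η·g, optionally wrapped in classical heuristics such as momentum (a hedged sketch; the quantum variants named above are not spelled out in the references):

```python
import numpy as np

def gd_update(params, grads, eta=0.1):
    """Plain gradient-descent step on the parameter vector."""
    return params - eta * grads

def momentum_update(params, grads, velocity, eta=0.1, beta=0.9):
    """Gradient descent with classical momentum, as used in hybrid schemes."""
    velocity = beta * velocity - eta * grads
    return params + velocity, velocity
```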

    Step 5.

    Iteration and measurement

    Steps two to four are repeated until the cost function converges or a maximum number of iterations is reached. After the algorithm terminates, the optimal parameters are obtained by measuring the quantum state and post-processing the results.
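    Putting the steps together, the outer loop is classical: evaluate, differentiate, update, and stop on convergence or an iteration cap. A sketch using the illustrative single-parameter cost from the earlier snippets:

```python
import numpy as np

def cost(theta):
    return np.cos(theta)  # illustrative cost from the sketches above

def parameter_shift_grad(theta):
    return (cost(theta + np.pi / 2) - cost(theta - np.pi / 2)) / 2.0

theta, eta, tol = 0.1, 0.2, 1e-6
for step in range(500):
    grad = parameter_shift_grad(theta)            # Step 3: gradient estimation
    new_theta = theta - eta * grad                # Step 4: parameter update
    if abs(cost(new_theta) - cost(theta)) < tol:  # Step 5: convergence check
        theta = new_theta
        break
    theta = new_theta

print(theta, cost(theta))  # theta approaches pi, where cos(theta) is minimal
```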


    References

    [1]

    Rebentrost, P., Schuld, M., Petruccione, F., & Lloyd, S. (2016). Quantum gradient descent and Newton’s method for constrained polynomial optimization. arXiv preprint arXiv:1612.01789.

    [2]

    Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., & Lloyd, S. (2017). Quantum machine learning. Nature, 549(7671), 195-202.

    [3]

    Mitarai, K., Negoro, M., Kitagawa, M., & Fujii, K. (2018). Quantum circuit learning. Physical Review A, 98(3), 032309.

    [4]

    Gilyen, A., Arunachalam, S., & Wiebe, N. (2019). Optimizing quantum optimization algorithms via faster quantum gradient computation. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms (pp. 1425-1444).


    Related Case Studies

    Google-NASA Quantum Artificial Intelligence Laboratory Partnership

    Google and NASA established the Quantum Artificial Intelligence Laboratory in 2013 to explore quantum computing applications for complex optimization and machine learning problems. This partnership has been instrumental in advancing quantum supremacy research and developing practical quantum algorithms for aerospace and artificial intelligence applications.

    Quantinuum and Google search for quantum circuit optimization

    A collaboration between Quantinuum and Google DeepMind to explore AI-enhanced quantum circuit optimization.

    D-Wave and Pattison Food Group Partnership: Quantum-Powered Supply Chain Optimization

    D-Wave Systems partnered with Pattison Food Group, Canada's largest food and pharmacy retailer, to implement quantum annealing solutions for complex supply chain optimization challenges. This collaboration demonstrated practical quantum computing applications in retail logistics and inventory management.

    1QBit and NatWest explore financial portfolio optimisation

    Explore quantum computing for financial portfolio optimisation and risk management, enhancing traditional modeling.
