
    Quantum Boltzmann Machines

    Quantum versions of classical Boltzmann machines, designed to use quantum effects for potentially more efficient training and inference.


    Primary Use Cases

    Data Science
    Recommendation Systems
    Generative Models
    Pattern Recognition
    Neural Networks
    Machine Learning
    AI Research
    Statistical Learning

    Quantum Boltzmann Machines (QBMs) are a class of quantum machine learning models that generalize classical Boltzmann machines to the quantum domain[1]. Boltzmann machines are probabilistic graphical models that learn the probability distribution underlying a set of input data and can be used for tasks such as unsupervised learning, generative modeling, and combinatorial optimization.
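
    Concretely, a fully visible classical Boltzmann machine assigns each binary configuration v an energy and models the data with the corresponding Gibbs distribution (hidden units add a second sum to the energy, but the idea is the same):

```latex
E(v) = -\sum_i b_i v_i - \sum_{i<j} w_{ij} v_i v_j,
\qquad
p(v) = \frac{e^{-E(v)}}{Z},
\qquad
Z = \sum_{v'} e^{-E(v')}
```

    Training adjusts the biases b_i and weights w_ij so that p(v) matches the empirical distribution of the data.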

    Problem Target

    QBMs aim to use quantum computing to represent and manipulate complex probability distributions more efficiently than classical Boltzmann machines, particularly for high-dimensional and strongly correlated data[2]. The key idea is to use quantum states and quantum operations to represent the model parameters and to perform the learning and inference tasks.

    Quantum Approach

    A QBM consists of a network of quantum nodes, each representing a qubit or a group of qubits, and connected by quantum edges that encode the interactions between the nodes[3]. The quantum state of the QBM represents the joint probability distribution of the variables in the model, and the goal of training is to adjust the parameters of the quantum edges to minimize the difference between the model distribution and the target distribution of the input data.
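
    In the transverse-field formulation of Amin et al.[1], for example, the model state is the Gibbs state of a quantum Hamiltonian, and measuring in the computational basis recovers a probability distribution over configurations:

```latex
H = -\sum_i \Gamma_i \sigma_i^x - \sum_i b_i \sigma_i^z - \sum_{i<j} w_{ij}\, \sigma_i^z \sigma_j^z,
\qquad
\rho = \frac{e^{-H}}{\operatorname{Tr} e^{-H}},
\qquad
p(v) = \operatorname{Tr}(\Lambda_v\, \rho)
```

    where \Lambda_v = |v\rangle\langle v| projects onto the computational-basis state v. Setting the transverse fields \Gamma_i to zero recovers the classical Boltzmann machine.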

    Practical Applications

    The potential advantages of QBMs over classical Boltzmann machines follow the usual themes of quantum machine learning. For certain types of data and model architectures, QBMs may offer an exponential speedup over classical Boltzmann machines in training time and model capacity[5], because a quantum computer can represent and manipulate an exponentially large state space with a linear number of qubits.

    Quantum states and operations can also potentially capture more complex and expressive probability distributions than classical models, thanks to entanglement and interference effects[6]. QBMs may likewise generalise better, producing more robust representations of the input data by using superposition to explore a larger hypothesis space.

    Implementation Challenges

    QBMs face the usual limitations of near-term quantum devices. Efficient data encoding remains an open problem: large-scale, high-dimensional classical data must be loaded into quantum states while preserving the relevant features and correlations. A related challenge is designing QBM architectures that can be implemented on near-term hardware with limited qubit counts and gate fidelities. Noise-resilient training is another: robust QBM training algorithms that operate in the presence of hardware noise and errors require continual advances in techniques such as error mitigation and quantum error correction.

    Integration with classical machine learning is also a concern in the current era[7]. Much work remains in exploring hybrid quantum-classical approaches that combine QBMs with classical techniques such as pre-training, fine-tuning, and transfer learning, to use the strengths of both paradigms. Experimental demonstrations of QBMs have been reported on various quantum computing platforms, including superconducting qubits, trapped ions, and quantum annealers, with promising results on small-scale datasets. However, the scalability and performance of QBMs on larger, more realistic datasets remain open research questions.

    Bottom Line

    Quantum Boltzmann Machines are a promising class of quantum machine learning models that use quantum computing to learn and represent complex probability distributions potentially more efficiently than classical models. By drawing on superposition, entanglement, and interference effects, QBMs could offer exponential speedups and enhanced expressivity compared to classical Boltzmann machines, with applications ranging from unsupervised learning and generative modeling to optimization and decision-making.

    However, significant research efforts are still needed to address the challenges of efficient data encoding, scalable model architectures, noise-resilient training, and integration with classical machine learning techniques, before QBMs can be deployed in real-world scenarios. As quantum technologies continue to advance, QBMs are expected to play an important role in the emerging field of quantum-enhanced artificial intelligence.


    Implementation Steps

    Step 1.

    State preparation

    The input data is encoded into a quantum state, typically using amplitude encoding or qubit encoding. In amplitude encoding, each data sample is represented by a quantum state, where the amplitudes of the basis states correspond to the feature values. In qubit encoding, each feature is assigned to a qubit, and the feature values are encoded in the qubit states.
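
    As a concrete sketch of these two options, the NumPy snippet below builds the resulting state vectors classically; the function names are illustrative rather than taken from any quantum SDK:

```python
# A minimal sketch of the two encodings, in plain NumPy.
import numpy as np

def amplitude_encode(x: np.ndarray) -> np.ndarray:
    """Amplitude encoding: a length-d feature vector becomes the
    amplitudes of a ceil(log2(d))-qubit state (padded and L2-normalised)."""
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)

def angle_encode(x: np.ndarray) -> np.ndarray:
    """Qubit (angle) encoding: one qubit per feature, each prepared as
    cos(x_i/2)|0> + sin(x_i/2)|1>; the joint state is the tensor product."""
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi / 2), np.sin(xi / 2)]))
    return state

x = np.array([0.3, 0.1, 0.8, 0.4])
print(amplitude_encode(x))      # 4 amplitudes -> 2 qubits
print(angle_encode(x).shape)    # (16,)        -> 4 qubits
```

    Note the trade-off: amplitude encoding is qubit-efficient but hard to prepare on hardware, while qubit encoding uses more qubits but maps to simple single-qubit rotations.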

    Step 2.

    Model initialisation

    The parameters of the QBM, such as the weights of the quantum edges and the biases of the quantum nodes, are initialised to random values or based on prior knowledge.
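
    For the transverse-field model sketched earlier, a random initialisation might look like the following; the variable names (weights, biases, gammas) are assumptions for illustration, not a fixed API:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 4                                    # number of quantum nodes (qubits)
weights = rng.normal(0.0, 0.1, (n, n))   # couplings w_ij on quantum edges
weights = (weights + weights.T) / 2      # symmetrise; self-couplings unused
np.fill_diagonal(weights, 0.0)
biases = rng.normal(0.0, 0.1, n)         # longitudinal biases b_i
gammas = np.full(n, 1.0)                 # transverse fields Gamma_i
```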

    Step 3.

    Quantum sampling

    A quantum sampling algorithm, such as quantum annealing or quantum Gibbs sampling, is used to generate samples from the model distribution[4]. These algorithms exploit quantum superposition and quantum tunnelling effects to explore the state space more efficiently than classical sampling methods.
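
    On hardware this step would use an annealer or a Gibbs-state preparation routine; the sketch below instead simulates the sampling classically by exact diagonalisation, which is only feasible for roughly ten qubits or fewer. It builds the transverse-field Hamiltonian from the parameters above and samples computational-basis outcomes from the Gibbs state:

```python
# Classical simulation of quantum Gibbs sampling for a toy QBM.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op_on(op, i, n):
    """Embed a single-qubit operator at position i in an n-qubit space."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == i else I2)
    return out

def qbm_hamiltonian(weights, biases, gammas):
    """Transverse-field Ising Hamiltonian of the QBM sketched above."""
    n = len(biases)
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        H -= gammas[i] * op_on(X, i, n)      # transverse field (quantum term)
        H -= biases[i] * op_on(Z, i, n)      # longitudinal bias
        for j in range(i + 1, n):
            H -= weights[i, j] * (op_on(Z, i, n) @ op_on(Z, j, n))
    return H

def gibbs_samples(H, n_samples, rng, beta=1.0):
    """Sample computational-basis outcomes from rho = e^{-beta H} / Z."""
    rho = expm(-beta * H)
    probs = np.real(np.diag(rho))
    probs /= probs.sum()
    return rng.choice(len(probs), size=n_samples, p=probs), probs
```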

    Step 4.

    Gradient estimation

    The gradients of the model parameters with respect to the objective function, such as the log-likelihood or the Kullback-Leibler divergence, are estimated using the quantum samples and classical post-processing. This can be done using techniques such as quantum back-propagation or quantum natural gradient.
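
    Schematically, in the classical limit the log-likelihood gradient takes the familiar contrastive form (a data expectation minus a model expectation); the quantum case is more subtle because H does not commute with its own derivatives, which is why Amin et al.[1] optimise a tractable lower bound instead:

```latex
\frac{\partial \mathcal{L}}{\partial w_{ij}}
= \langle v_i v_j \rangle_{\text{data}} - \langle v_i v_j \rangle_{\text{model}},
\qquad
\frac{\partial \mathcal{L}}{\partial b_i}
= \langle v_i \rangle_{\text{data}} - \langle v_i \rangle_{\text{model}}
```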

    Step 5.

    Parameter update

    The model parameters are updated based on the estimated gradients, using classical optimization algorithms such as gradient descent or Adam. Steps three to five are repeated until the model converges or a maximum number of iterations is reached. After training, the QBM can be used for tasks such as data generation, anomaly detection, and classification, by sampling from the learned distribution or computing the probabilities of the input data.
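
    Tying steps three to five together, a toy training loop under the same assumptions (reusing qbm_hamiltonian and gibbs_samples from the quantum sampling sketch) could look like the following; this is a simplified, diagonal-observable version of QBM training, not the full bound-based procedure of Amin et al.[1]:

```python
def bits(idx, n):
    """Map a computational-basis index to a +/-1 spin vector (Z eigenvalues)."""
    return 1 - 2 * np.array([(idx >> (n - 1 - k)) & 1 for k in range(n)])

def train(data_spins, weights, biases, gammas, rng, lr=0.05, steps=200):
    """Gradient ascent on the contrastive <z_i z_j> and <z_i> statistics."""
    n = len(biases)
    data_corr = np.einsum('si,sj->ij', data_spins, data_spins) / len(data_spins)
    data_mean = data_spins.mean(axis=0)
    for _ in range(steps):
        H = qbm_hamiltonian(weights, biases, gammas)     # step 3: rebuild H
        samples, _ = gibbs_samples(H, 500, rng)          # step 3: sample rho
        spins = np.array([bits(s, n) for s in samples])
        model_corr = np.einsum('si,sj->ij', spins, spins) / len(spins)
        model_mean = spins.mean(axis=0)
        weights += lr * (data_corr - model_corr)         # step 5: update
        np.fill_diagonal(weights, 0.0)
        biases += lr * (data_mean - model_mean)          # gammas held fixed
    return weights, biases
```

    After convergence, sampling from the learned Gibbs state generates new data, and low-probability inputs can be flagged as anomalies.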


    References

    [1]

    Amin, M. H., Andriyash, E., Rolfe, J., Kulchytskyy, B., & Melko, R. (2018). Quantum Boltzmann machine. Physical Review X, 8(2), 021050.

    [3]

    Benedetti, M., Realpe-Gómez, J., Biswas, R., & Perdomo-Ortiz, A. (2017). Quantum-assisted learning of hardware-embedded probabilistic graphical models. Physical Review X, 7(4), 041052.

    [4]

    Johnson, M. W., Amin, M. H. S., Gildert, S., Lanting, T., Hamze, F., Dickson, N., Harris, R., Berkley, A. J., Johansson, J., Bunyk, P., Chapple, E. M., Enderud, C., Hilton, J. P., Karimi, K., Ladizinsky, E., Ladizinsky, N., Oh, T., Perminov, I., Rich, C., … & Rose, G. (2011). Quantum annealing with manufactured spins. Nature, 473(7346), 194-198.

    [5]

    Adachi, S. H., & Henderson, M. P. (2015). Application of quantum annealing to training of deep neural networks. arXiv preprint arXiv:1510.06356.

    [6]

    Korenkevych, D., Xue, Y., Bian, Z., Chudak, F., Macready, W. G., Rolfe, J., & Andriyash, E. (2016). Benchmarking quantum hardware for training of fully visible Boltzmann machines. arXiv preprint arXiv:1611.04528.

    [7]

    Khoshaman, A., Vinci, W., Denis, B., Andriyash, E., Sadeghi, H., & Amin, M. H. (2018). Quantum variational autoencoder. Quantum Science and Technology, 4(1), 014001.


    Related Case Studies

    Zapata Computing and Biogen Partnership: Advancing Drug Discovery through Quantum Machine Learning

    Zapata Computing partnered with Biogen to explore quantum computing applications in drug discovery and molecular simulation, focusing on developing quantum-enhanced machine learning models for pharmaceutical research. This collaboration aimed to accelerate the identification of drug targets and optimize molecular properties using hybrid classical-quantum algorithms.

    D-Wave and Lockheed Martin Quantum Computing Partnership for Aerospace Optimization

    D-Wave and Lockheed Martin formed a groundbreaking partnership in 2011, making Lockheed Martin the first commercial customer of D-Wave's quantum annealing systems. This collaboration focused on exploring quantum computing applications for complex aerospace optimization problems, software verification, and machine learning tasks critical to defense and aerospace operations.

    Quantinuum and Google search for quantum circuit optimization

    A collaboration between Quantinuum and Google DeepMind to explore AI-enhanced quantum circuit optimization.

    Xanadu and AstraZeneca explore drug discovery

    Exploring quantum computing for drug discovery and molecular simulation, aiming to accelerate new drug identification.

    1QBit and BMW explore automotive optimisation

    1QBit and BMW applied quantum-inspired algorithms to optimise automotive manufacturing, logistics, and supply chain challenges.

    Algorithm Details


    Related Industries

    AI and Machine Learning

    Target Roles

    Quantum Algorithm Developer
    Quantum Solutions Provider