Hi Lev - Our mission is to deliver fault-tolerant quantum computing systems and services to the commercial market. We're currently prototyping our technology at small scale, i.e. fewer than 20 qubits. Once this validation is complete, we intend to scale up to much larger systems.
Chad,
Sounds interesting! Is the 20-qubit threshold set by limitations of your simulation tools, or is that really all you need to validate your approach and ensure that nothing weird will happen (crosstalk / correlated errors / etc.) when you scale up to larger numbers of qubits?
Hey Doug! Thanks! Great to see the IBM quantum team on HackerNews! We've worked really hard to bake scalability into the designs from day 1. Our challenge now is to validate standard one- and two-qubit performance metrics on that scalable 2-D qubit lattice. A system with ~16 qubits is big enough to get a reasonable assessment of correlated errors between non-nearest neighbors, and small enough that it's still pretty cheap to build. We've also made some great headway in developing a scalable and low-cost architecture for the control electronics and signal delivery - about 20x cheaper than the standard approaches - and 16 qubits is large enough to really put those to the test.
Be well!
Chad
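[A rough sketch of the kind of correlated-error check described above: simulate per-shot error flags on a small lattice with a weak shared-noise component, then look at the pairwise correlations. All rates and sizes here are made-up placeholders, not Rigetti's actual numbers or methods.]

    # Toy estimate of pairwise error correlations on a 4x4 qubit lattice.
    # Assumed model: each qubit has an independent error probability, plus a
    # small "global" noise event that adds errors on every qubit at once.
    import numpy as np

    rng = np.random.default_rng(0)
    n_qubits, n_shots = 16, 20000
    p_independent = 0.02      # made-up per-qubit error rate
    p_global = 0.005          # made-up shared (correlated) error rate

    independent = rng.random((n_shots, n_qubits)) < p_independent
    shared = rng.random((n_shots, 1)) < p_global
    errors = independent | shared            # True where an error occurred

    # Correlation matrix of the error indicators; off-diagonal entries well
    # above zero would flag correlated errors between qubit pairs, including
    # non-nearest neighbors on the lattice.
    corr = np.corrcoef(errors.T.astype(float))
    i, j = np.triu_indices(n_qubits, k=1)
    print("mean pairwise error correlation:", corr[i, j].mean())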
Thanks for the response. Are your systems superconducting/atomic/spintronic, etc.? Are they an instance of adiabatic computing? Curious about these sorts of details.
Hi saryn - we're building gate-based systems and working towards quantum error correction. We've developed our own physical architecture for the processor that we believe is highly scalable and much cheaper than other approaches.
@FiatLuxDave - Great questions, thank you! We are building quantum computing systems. We have a simulation-driven development process for the hardware, both the quantum and classical parts of the system, and that helps us keep costs down.
For challenge problems, that's a great idea. We're focused on applications to computational chemistry and machine learning, among others, right now.
So, if I understand it correctly, Rigetti Computing is planning on simulating how the software for a quantum computer will provide better performance than classical computing, and otherwise do useful stuff. This sounds like an interesting part of the quantum computing ecosystem, assuming that such an ecosystem evolves.
So, a few questions would be:
a) what does RC offer that is not already being provided by the academic algorithmic community?
b) are you making the assumption that hardware specifics do not have any effect upon your goals? if so, how confident are you that this is true? does the DWave quantum annealing issue play into this at all?
c) do you have any customers lined up yet?
d) do you have any kind of 'challenge problem' which you think would be particularly good at demonstrating what RC (as opposed to QC in general) can do?
Great to hear that there is space in this industry for new companies beyond BBN, IBM, and Northrop Grumman!
Have you already applied for funding with Quantum Valley Investments (http://quantumvalleyinvestments.com)? If not, you should do so! The funding is not restricted to Canadian or Waterloo, Ontario-based research, as far as I know.
Are you gonna run HFSS/Comsol/ADS/Qutip... on Amazon EC2, scalable at will, or are you going to invest in your own parallel computing hardware? I can imagine a big chunk of this initial funding is going to go into buying a couple HPC pack licenses...
Hi Jean-Luc,
Thank you for the kind wishes. Those are some great questions! We are definitely running HFSS, and we've made some small investments in traditional HPC hardware. Our primary use case thus far has been the eigenmode solver, which doesn't parallelize well. We're figuring out how to use the driven solver a little more - the HPC packs provide a real boost in that case. We're excited to be expanding the ecosystem of companies doing groundbreaking QC work beyond the standard list of big defense contractors :)
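[Since QuTiP was mentioned above, here is one common hand-off, sketched with placeholder values rather than any real design numbers: take the frequency and quality factor from an HFSS eigenmode solve and drop them into a simple lossy-cavity model.]

    # Minimal hand-off from an EM eigenmode result (frequency, Q) to a
    # dissipative-cavity model in QuTiP. All values are placeholders.
    import numpy as np
    from qutip import destroy, basis, mesolve

    f_r = 6.0e9                    # resonator frequency from the eigenmode solve [Hz]
    Q = 5.0e5                      # quality factor from the same solve
    kappa = 2 * np.pi * f_r / Q    # photon loss rate [rad/s]

    N = 10                         # Fock-space truncation
    a = destroy(N)
    # In the frame rotating at f_r the cavity Hamiltonian drops out,
    # so only the photon decay at rate kappa remains.
    H = 0 * a.dag() * a

    # Start with one photon and watch it leak out of the resonator.
    psi0 = basis(N, 1)
    tlist = np.linspace(0, 5 / kappa, 200)
    result = mesolve(H, psi0, tlist, c_ops=[np.sqrt(kappa) * a],
                     e_ops=[a.dag() * a])
    print("final photon number:", result.expect[0][-1])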
I assume you are simulating resonators for ion traps with the eigenmode solver? I had quicker results optimizing with time-domain analysis (CST Microwave Studio), then switching to eigenmode to find the Q, than using the frequency-domain solver. The time-domain solver is easily spread across multiple GPUs.
Hi madengr - Thanks for the comment. We're focused on solid-state qubits right now, not ion traps. Are you building your own GPU-based hardware for your simulations?
Nope, my workstation is a pre-built Tesla WhisperStation from Microway with two Nvidia K20s. I also have a 4-node HP blade cluster dedicated to CST and Microwave Office (Axiem), with last-generation Tesla GPUs. That 4-node cluster is expanded up to 32 nodes by convincing coworkers to install the CST distributed computing solver server. That works really well for parametric sweeps and for solvers that can't use the GPU. It's sort of a cobbled-together system, but it works well.
To what sort of problems are you employing eigenmode analysis, or just 3D EM in general?
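[Not CST's actual distributed-computing interface - just a generic sketch of why parametric sweeps farm out so well: every parameter point is an independent job, so any pool of workers, local cores or remote nodes, can chew through them. run_em_solve is a hypothetical stand-in for a call into the solver.]

    # Generic parametric-sweep skeleton: each geometry point is an independent
    # job, so a process pool (or a cluster scheduler) can run them in parallel.
    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def run_em_solve(params):
        gap_um, finger_len_um = params
        # ... call the EM solver here and return a figure of merit ...
        return {"gap_um": gap_um, "finger_len_um": finger_len_um,
                "f_res_GHz": 6.0 + 0.01 * gap_um - 0.002 * finger_len_um}  # dummy

    sweep = list(product([2, 3, 4, 5], [80, 100, 120]))   # 12 independent points

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4) as pool:
            for res in pool.map(run_em_solve, sweep):
                print(res)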
What sort of qubits are y'all using? I'm guessing "superconducting", because only those seem to be close to the [published] error thresholds right now.
We're focused on superconducting qubits right now. But our core processor technology and the overall system architecture are fairly agnostic to the kind of physical qubit they use.