
> A theory that demands we accept consciousness emerging from millennia of flickering abacus beads is not a serious basis for moral consideration; it's a philosophical fantasy.

Just saying "this conclusion feels wrong to me, so I reject the premise" is not a serious argument. Consciousness is weird. How do you know it's not so weird as to be present in flickering abacus beads?


ZX calculus is a very interesting framework for doing cutting-edge research in error correction and gate compilation, but it seems wildly off base as a means of making quantum computing accessible to a broader audience. Anything beyond the simple "teleportation is like pulling a string" picture requires extremely difficult abstract manipulations.

(PhD in experimental QC)


For those of us who have a decent computer science and math education and are curious about QC but have jobs in classical computing, are there any resources you recommend as a better introduction? I understand something about being able to test large numbers of permutations at once, or square-rooting the number of necessary operations for some functions...

edit: found this below, but it is all ZX calculus

https://zxcalc.github.io/book/html/main_html.html


My understanding is that ZX calculus is more like a "calculus", a nice toolkit. So you probably still cannot bypass the regular QC formalism (being able to manipulate things fast doesn't mean you necessarily understand it well).
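
That said, the same community maintains PyZX, a Python library that does this kind of diagram rewriting for you, which is a low-stakes way to get a feel for it. A rough sketch (exact gate names and helper locations from memory, so treat them as approximate):

    import pyzx as zx

    # build a tiny circuit gate by gate
    c = zx.Circuit(3)
    c.add_gate("CNOT", 0, 1)
    c.add_gate("T", 2)
    c.add_gate("HAD", 1)
    c.add_gate("CNOT", 1, 2)

    g = c.to_graph()            # circuit -> ZX-diagram (spiders and wires)
    zx.simplify.full_reduce(g)  # apply ZX rewrite rules until none fire
    c2 = zx.extract_circuit(g)  # extract a circuit back from the diagram
    print(c2.stats())           # compare gate counts before and after

In a notebook, zx.draw(g) will actually show you the spiders.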


https://x.com/coecke/status/1907809898852667702

> High-schoolers excelling at Oxford Uni post-grad quantum exam, thanks to Quantum Picturalism!

thoughts?


They didn't give the students the full post-grad exam. They cherry-picked a few of the questions that can be solved with this method. It's handy for a few special cases, but not in general.


I somewhat agree. I think the goal would be to create a UI that allows you to do the ZX calculus by just creating/moving/joining the spiders etc.


Just compared the time to check on a fairly large project:

- mypy (warm cache): 18s

- ty: 0.5s (and found 3500 errors)

They've done it again.


(ty developer here)

This is an early preview of a pre-alpha tool, so I would expect a good chunk of those 3500 errors to be wrong at this point :) Bug reports welcome!
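
(For a concrete, hypothetical illustration of what a false positive tends to look like at this stage, and not a claim about this exact snippet: dynamic hooks like __getattr__ are the classic feature a young checker hasn't implemented yet.)

    class Config:
        def __init__(self) -> None:
            self._values = {"debug": True}

        def __getattr__(self, name: str) -> object:
            # attribute lookups fall back to this at runtime
            return self._values[name]

    cfg = Config()
    print(cfg.debug)  # fine at runtime; a checker without __getattr__
                      # support would report an unresolved attribute here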


Any rough estimates of how much faster you expect ty to be compared to mypy? I'd be super curious to know!

I was also one of those people who, when first trying Ruff, assumed that it didn't work the first time I ran it because of how fast it executed!


We're looking forward to hearing what your experience is! There's a certain amount of roughly-constant overhead (e.g. reading all the files), so generally ty will look relatively faster the larger the project is. For very large projects we've seen up to 50-60x faster or more. We haven't really put a lot of work into targeted optimization yet, so we aim for it to get faster in the future.

It will certainly be slower than Ruff, just because multi-file type analysis is more complex and less embarrassingly parallel than single-file linting.
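
A toy illustration with two hypothetical files: the checker can't finish a.py without first inferring types from b.py, so files aren't independent units of work the way they are for a per-file linter.

    # b.py
    def helper():              # unannotated: return type must be inferred
        return {"key": 1}

    # a.py
    from b import helper
    x = helper()["key"] + 1    # checking this line requires b.py's analysis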


Great, but how does it compare to Pyright on the utility / performance curve? Pyright is mature and already very fast.

https://github.com/microsoft/pyright


I don't get why so many people go to bat for pyright, my experience with it has been pretty miserable. Open enough instances of it and you're in OOM city. It works, but often gets confused... and of course the absolute audacity of MSFT to say "let's go over to pyright, and by the way we're going to carve up some stuff and put it into pylance instead", meaning that it's totally not within the actual spirit of open source.

I would like to just not use it, but the existence of pyright as a _barely_ functional alternative really sucks the air out of other attempts' continued existence. Real "extend/extinguish" behavior from MSFT.


I tried all the type checkers available as of ~1 year ago, and Pyright worked the best for me. It's not perfect, but it's better than any of the pure Python checkers. Memory is cheap (unless you're buying it from Apple I guess...). Would I take a faster type checker with better memory footprint? Heck yes, assuming equal or superior functionality.


Memory ain’t that cheap on laptops in general! The bigger issue is less “pyright” and more “every tool out there being as heavy as pyright” + docker etc… but things are getting better IMO


If you haven’t checked it out already, basedpyright is pyright with all the arbitrarily carved out functionality put back in – plus some extra features that you may or may not find useful depending on how strict you like your typing.

Can’t recommend it enough


To be honest I can't respect a project that names itself like that. I am a working professional.


I tested it side-by-side on my ~100Kloc codebase.

Ty: 2.5 seconds, 1599 diagnostics, almost all of which are false positives

Pyright: 13.6 seconds, 10 errors, all of which are actually real errors

There's plenty of potential here, but Ty's type inference is just not as sophisticated as Pyright's at this time. That's not surprising given it hasn't even been released yet.

Whether Ty will still perform so much faster once all of Pyright's type inference abilities have been matched or implemented - well, that remains to be seen.

Pyright runs on Node, so I would expect it to be a little slower than Ty, but perhaps not by very much, since modern JS engines are already quite fast and perform within a factor of ~2-3x of Rust. That said, I'm rooting for Ty here, since even a 2-3x performance boost would be useful.


Compilation / type checking depends on building a lot of trees of typed data and operating on those tree nodes. That's something where a statically typed language with custom data structures that allow for optimised representations makes a big difference, and where a lot of the fancy optimisations in V8 don't work so well.

There is a reason TypeScript moved its compiler to a typed language (Go).


Let's hope you're right and that translates to even higher performance for Ty compared to Pyright. There are of course many variables and gotchas with these sorts of things.


Pyright is only for type checking, and it lacks many features you'd expect from a modern LSP (I forget which). Hence it was forked, and someone created basedpyright to fix that: https://github.com/DetachHead/basedpyright


To extend on this:

In Python it's pretty common to have the LSP separate from type checking separate from linting (e.g. ruff + mypy + an IDE-specific LSP).

Which, to be fair, sucks: it limits what the LSP can do, it can lead to confusing error/no-error mismatches between tools, and on one recent project the default LSP run by VS Code started to fall apart and stopped proposing auto-imports for some trivial things in part of the project.

But it's the stack where pyright fits in.
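
For reference, the split stack usually looks something like this in pyproject.toml (the section names are real; the specific options are just illustrative):

    [tool.ruff]               # linting (and formatting)
    line-length = 100

    [tool.ruff.lint]
    select = ["E", "F", "I"]

    [tool.mypy]               # type checking, run as a separate tool
    strict = true

with the IDE's LSP configured separately on top.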


The pylance team has started exploring this, namely whether it makes sense to have an API for type checkers that is not the LSP, as language servers have a somewhat different goal in which type checking/inference is an enabling technology. This could allow multiple different language servers to be built on top of different type checkers (and the type checkers can run out-of-proc, so implementation languages can be different). https://github.com/microsoft/pylance-release/discussions/718...


Pyright is good, but it's quite a memory hog. (Yes, I have plenty of RAM on my machine. No, it has other uses during development, too.)


Pyright is incredibly slow in my experience; I've seen it take over a minute on complex codebases.


In my experience pyright is unable to infer many inherited object types (compared to PyCharm's type inference).


PyCharm definitely excels on more ‘dynamic’ code but the number of times I’ve pulled in code written by colleagues using PyCharm only to get a rainbow of type errors from Pyright is too damn high.

The PyCharm checker seems to miss really, really obvious things, e.g. allowing a call site to expect a string while the function returns bytes or None.

Maybe my colleagues just have it configured wrong, but there are several of them and the config isn't shared.
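
A minimal hypothetical repro of the str-vs-bytes case above:

    def read_token(path: str) -> bytes | None:
        try:
            with open(path, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    token: str = read_token("token.txt")  # Pyright flags this assignment;
                                          # the complaint above is that
                                          # PyCharm often stays silent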


I have no doubt that it will be faster than mypy, but:

> This project is still in development and is not ready for production use.


> They’ve done it again.

Indeed they have. Similar improvement in performance on my side.

It is so fast that I thought it must have failed and not actually checked my whole project.


It's seemed to me that all the productivity gains would be burned up by just making our jobs more and more BS, not by reducing hours worked, just like with previous technology. I expect more meetings, not less work.


I'm a bit stuck on this, maybe you can explain why an LLM would have any difficulty writing REST API calls? Seems like it should be no problem.


They have no qubits at all, "logical" or not, yet. They plan to make millions. It is substantially easier to release a plan for millions of qubits than it is to make even one.


They are currently in the process of creating their own build system: https://github.com/astral-sh/uv/issues/3957#issuecomment-265...


I work in the field. While all players are selling a dream right now, this announcement is even more farcical. Majorana-based efforts are still trying to get to the point where they have even one qubit that can be said to exist and whose performance can be quantified.

The Majorana approach (compared with more mature technologies like superconducting circuits or trapped ions) is a long game: there are theoretical reasons to be optimistic, but experimental reality is far behind. It might work in the long run, but we're not there yet.


Given that Microsoft has been a heavy research collaborator (with Atom Computing and Quantinuum), is there a possibility that the cross-pollination would make it harder to deliver a farcical Majorana chip, since Microsoft isn't all in on its home-rolled hardware choice?

I've held the same view that this stuff was sketchy because of the previous mistakes in recent history, but I do not work in the field.


>> I need HN's classic pessimism to know if this is something to be excited about. Please chime in!

> While all players are selling a dream right now, this announcement is even more farcical.

Thanks a lot, I wasn't disappointed.


Another take, to feed your cynicism: MSFT needs money to keep investing in this sort of science. By posting announcements like this, they hope to become the obvious place for investors interested in quantum to park their money. Stock price goes brrr, MSFT wins.

More cynical still: what exactly has the Strategic Missions and Technologies unit achieved in the last few years? Burned a few billion on Azure for Operators, then sold it off. Got entangled in, and ultimately lost, the JEDI mega-deal at the DoD. Was notably not the unit that developed or brought AI into Microsoft. Doing anything in quantum is good news for whoever leads this division, and they need it.

On the bright side, this is still fundamentally something to be celebrated. Years ago major corporations did basic science research, and we are all better off for it. With the uncertainty around the future of science funding in the US right now, I at least draw some comfort from the fact that it's still happening. My jadedness about press releases in no way diminishes my respect for the science the lab people are publishing.


> MSFT needs money to keep investing in this sort of science

Microsoft is making absurd amounts of money from Azure and Office (Microsoft 365) subscriptions. Any quantum computing investment is a drop in the bucket for this company.


Does MSFT sell new stock? If not, how does the stock price going up affect their ability to invest?


Even if Microsoft doesn't sell the stock it controls, its existing assets become more valuable when the stock price goes up. There are many ways one could spend those resources if needed: sell it off, borrow against the assets, trade the stock for stock in other companies.

However, since Microsoft has plenty of cash flow already, they can probably afford to just sit on the investment.


That's all you needed?


So you are saying it's official fake news from Redmond?


As someone working with it day to day, coming from around 18 years of mostly Python, I wish I could say my experience has been great. I find myself constantly battling with the JIT: compilation and recompilation mean waiting around all the time (sometimes 10 to 15 minutes for some large projects). Widespread macro usage makes stack traces much harder to read. The lack of formal interfaces means a lot of static checking is not practical. Pkg.jl is also not great; version compatibility feels tacked on and has odd behavior.

Obviously there are real bright spots too (speed, multiple dispatch, a relatively flourishing ecosystem), but overall I wouldn't pick it up for something new if given the choice. I'd use JAX or C++ extensions for performance and settle on Python for the high-level code, despite its obvious warts.


Yeah, JAX with Equinox, jaxtyping, and leaning hard on Python's static typing modules + typeguard lets you pretend that you have a nice little language embedded in Python. I swore off Julia a few years ago.
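
Roughly what that looks like, for anyone curious (a minimal sketch; API as of recent jaxtyping versions, from memory):

    import jax.numpy as jnp
    from jaxtyping import Array, Float, jaxtyped
    from typeguard import typechecked

    @jaxtyped(typechecker=typechecked)
    def affine(x: Float[Array, "batch dim"],
               w: Float[Array, "dim out"],
               b: Float[Array, "out"]) -> Float[Array, "batch out"]:
        # shapes are checked against the annotations at call time
        return x @ w + b

    y = affine(jnp.ones((4, 3)), jnp.ones((3, 2)), jnp.zeros((2,)))  # ok
    # affine(jnp.ones((4, 3)), jnp.ones((3, 2)), jnp.zeros((3,)))    # raises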


> Pkg.jl is also not great, version compatibility is kind of tacked on and has odd behavior.

Huh? I think Pkg is very good as far as package managers go, exceptionally so. What specifically is your issue with it?


Since 1800, Congress has used its constitutional power to establish agencies. It is Congress that has the constitutional power to shut them down, not the president. The president executes the laws passed by Congress.

