
“Emergency” my ass. More like made-up crap to bully Canada by an orange buffoon who hates trees as much as windmills.

They won’t have any choice soon enough - all that wishy-washy "they are not our enemy" bullshit goes out the window with the first missile/shell/drone flying over.


Why would Russia/NK attack their wannabe ally?


Something something the frog and the scorpion


If you remember that Putin is a spy by training, and a damn good one at that, you must consider that spies really don’t want to change things when things are working in their favor. Right now he knows very well what levers to pull to make things happen the way he wants. He won’t change that.


Enjoy your CCP dripfeed while it lasts. This crap is going byebye.


Can it run DOOM yet?


Search of what?


Anything. Everything. In domains where the search space is small enough to physically enumerate and store or evaluate every option, search is commonly understood as a process solved by simple algorithms. In domains where the search space is too large to physically realize or index, search becomes "intelligence."

E.g. winning at Chess or Go (traditional AI domains) is searching through the space of possible game states to find a most-likely-to-win path.

E.g. an LLM chat application is searching through possible responses to find one which best correlates with the expected answer to the prompt.

With Grover's algorithm, quantum computers let you find an answer in any disordered search space with O(sqrt(N)) operations instead of O(N). That's potentially applicable to many AI domains.
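
For a rough sense of the scale difference, here's a back-of-the-envelope sketch in Swift (my own illustration, assuming the textbook query counts: roughly N/2 expected classical lookups versus roughly (pi/4)*sqrt(N) Grover iterations):

    import Foundation

    // Approximate oracle-call counts for unstructured search over N items:
    // classical brute force needs ~N/2 lookups on average, while Grover's
    // algorithm needs ~(pi/4) * sqrt(N) iterations.
    func queryCounts(n: Double) -> (classical: Double, grover: Double) {
        (classical: n / 2, grover: (Double.pi / 4) * n.squareRoot())
    }

    let n = pow(2.0, 40.0)            // a 2^40-item search space
    let counts = queryCounts(n: n)
    print(counts.classical)           // ~5.5e11 lookups
    print(counts.grover)              // ~8.2e5 Grover iterations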

But if you're so narrow-minded as to only consider connectionist / neural network algorithms as "AI", then you may be interested to know that quantum linear algebra is a thing too: https://en.wikipedia.org/wiki/HHL_algorithm


Grover's algorithm is useful for very few things in practice, because for most problems we have a better technique than checking sqrt(N) of all possible solutions, at least heuristically.

There is, at present, no quantum algorithm which looks like it would beat the state of the art on Chess, Go, or NP-complete problems in general.


O(sqrt(N)) is easily dominated by the relative ease of constructing much bigger classical computers though.


Uh, no? Not for large N.

There are about 2^152 possible legal chess states. You cannot build a classical computer large enough to compute that many states. Cryptography is generally considered secure when it involves a search space of only 2^100 states.

But you could build a computer to search through sqrt(2^152) = 2^76 states. I mean it'd be big: that's on the order of total global storage capacity. But not "bigger than the universe" big.


Doing 2^76 iterations is huge. That's a trillion operations a second for two and a half thousand years if I've not slipped up and missed a power of ten.
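
Spelling the arithmetic out (a quick Swift sketch, using the same assumed trillion operations per second):

    import Foundation

    // 2^76 operations at an assumed 10^12 operations per second, in years.
    let ops = pow(2.0, 76.0)                    // ~7.6e22 operations
    let opsPerSecond = 1e12                     // assumed rate
    let seconds = ops / opsPerSecond
    let years = seconds / (365.25 * 24 * 3600)
    print(years)                                // ~2,400 years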


Maybe 100 years from now we can do 2^18 quantum ops/sec and solve chess in a day, whereas a classical computer could do 2^36 ops/sec and still take longer than the lifetime of the universe to complete.


Google's SHA-1 collision took 2^63.1 hash operations to find. Given that a single hash operation takes more than 1000 cycles, that's roughly 2^73 cycles of work, which puts 2^76 less than three doublings away.

Cryptographers worry about big numbers. 2^80 is not considered secure.


It's early so I'm thinking out loud here but I don't think the algorithm scales like this, does it?

We're talking about something that can search a list of size N in sqrt(N) iterations. Splitting the problem in two doesn't halve the compute required for each half. If you had to search 100 items on one machine it'd take ~10 iterations, but split over two it'd take ~7 on each, or ~14 in total.


If an algorithm has a complexity class of O(sqrt(N)), by definition it means that it can do better if run on all 100 elements than by splitting the list into two halves of 50 elements and running it on each.

This is not at all a surprising property. The same thing happens with binary search: it has complexity O(log(N)), which means that running it on a list of size 1024 will take about 10 operations, but running it in parallel on two lists of size 512 will take 2 * 9 operations = 18.

This is actually easy to intuit when it comes to search problems: the element you're looking for is either in the first half of the list or in the second half, it can't be in both. So, if you are searching for it in parallel in both halves, you'll have to do extra work that just wasn't necessary (unless your algorithm is to look at every element in order, in which case it's the same).

In the case of binary search, with the very first comparison you can already tell which half of the list your element is in: searching the other half is pointless. In the case of Grover's algorithm, the mechanism is much more complex, but the basic point is similar: Grover's algorithm has a way to just not look at certain elements of the list, so splitting the list in half creates more work overall.
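
To make the binary-search comparison above concrete, here is a minimal Swift sketch with an instrumented search (the counter and helper names are my own):

    // Counts how many comparisons a binary search performs on a sorted array.
    func binarySearchComparisons(_ a: [Int], _ target: Int) -> Int {
        var lo = 0, hi = a.count - 1, comparisons = 0
        while lo <= hi {
            comparisons += 1
            let mid = (lo + hi) / 2
            if a[mid] == target { break }
            if a[mid] < target { lo = mid + 1 } else { hi = mid - 1 }
        }
        return comparisons
    }

    // Splitting doesn't halve the work: each half still costs ~log2(512) ≈ 9-10
    // comparisons, so the two halves together cost nearly twice as much as one
    // search over the full list (~log2(1024) ≈ 10-11).
    let whole = Array(0..<1024)
    let halves = [Array(0..<512), Array(512..<1024)]
    print(binarySearchComparisons(whole, 1023))
    print(halves.map { binarySearchComparisons($0, 1023) })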


That only helps for a relatively small range of N. Chess happens to sort of fit into this space. Go is way out; even sqrt(N) is still in the "galaxy-sized computer" range. So again, there are few problems for which Grover's algorithm really takes us from practically uncomputable to computable.

Even for chess, 2^76 operations is still waaaaay more time than anyone will ever wait for a computation to finish, even if we assumed quantum computers could reach the OPS of today's best classical computers.


No-one would solve chess by checking every possible legal chess state. Also, checking "all the states" wouldn't solve chess: you need a sequence of moves, and that pushes you up to an even bigger number. But again, you can easily prune that massively, as many moves are forced, or you can check whether you are in a provable end-game situation.



Training an AI model is essentially searching for parameters that can make a function really accurate at making predictions. In the case of LLMs, they predict text.
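
As a toy illustration of that framing, a Swift sketch that "trains" a one-parameter model by plain grid search over candidate weights (the data and names here are made up):

    // Fit y ≈ w * x by searching candidate values of w and keeping the one
    // with the lowest squared error: training as a search over parameters.
    let xs: [Double] = [1, 2, 3, 4]
    let ys: [Double] = [2.1, 3.9, 6.2, 7.8]        // roughly y = 2x

    func loss(_ w: Double) -> Double {
        var total = 0.0
        for (x, y) in zip(xs, ys) {
            let diff = w * x - y
            total += diff * diff
        }
        return total
    }

    let candidates = stride(from: 0.0, through: 4.0, by: 0.01)
    let best = candidates.min { loss($0) < loss($1) }!
    print(best)                                     // lands close to 2.0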


I’m still waiting for a neural net that can do my laundry. Until there is one I’m on Marcus’ side.



No teleoperation; it can haul the basket from three floors down and back up, fully folded and put away in my closet, without me doing a thing.


That video demo is not tele-operated.

You are arguing a straw man. The discussion was about LLMs.


If it can’t do laundry, I personally don’t care if it can add 2+2 correctly 99% of the time.


I'm still waiting for a computer that can make my morning coffee. Until it's there I don't really believe in this whole "computer" or "internet" thing, it's all a giant scam that has no real-world benefit.


Automatic coffee machines are literally a computer making your morning coffee :)

But my washing machine doesn't have a neural network... yet. I am sure that there is some startup somewhere planning to do it.


What is lacking compared to current bean-to-cup coffee makers?


should ban X too - it’s a rat’s nest of disinformation bots and brainwashed idiots.


and it shows… Google codebases I see in the wild are the worst: a jumbled mess of hard-to-read code.


Honestly the whole Java/Kotlin tooling is the worst to pick for mobile dev, and to KEEP after so many other great languages and tools are out there. I don’t know why Google didn’t offer at least Go as a native alternative for Android dev.


Because adding Go as an official language would be a monumental amount of work for vanishingly little benefit. Remember Android itself is half written in Java, so are you doing JNI calls for everything from Go? That's neither fast nor "native". Are you rewriting the framework in Go? That's a crazy amount of effort.

And all for what? To satisfy a hobby itch that has no practical benefit? The problem of app developers is almost never the language. Hell, look at how popular web apps are even though JavaScript is the worst language in any sort of widespread usage. The platform is what gets people excited (or frustrated), not a language.


Because Go only exists due to a trio that doesn't like C++, fought against Java 20 years ago and lost (Inferno and Limbo), and is pretty much the antithesis of the feature-rich capabilities of Java, Kotlin and C++, the official Android languages.

They hit the jackpot with Kubernetes and Docker taking off after their pivot to Go.


That'd require some form of collaborative behavior across internal organizations, and real planning! </s> </disgruntled_xoogler>

I wake up every morning and thank God that Flutter exists. I can target Android without dealing with building on years and years of sloppy work.

Sunk-cost fallacy x politics leads to this never being fixed. I don't think Google can fix it, unless hardware fails entirely. There's been years-long politics that culminated in the hardware org swallowing all the software orgs, and each step along the way involved killing off anything different.


>I don't think Google can fix it

Can they eventually throw away Android and replace it with Fuchsia? In the reporting about Fuchsia that I read ages ago, it sounded like it was intended to be an Android replacement but, looking into it again just now, it seems more like an embedded OS for other non-smartphone hardware -- maybe with some (aspirational?) claims of utility on smartphones and tablets.


Fuchsia has components for running APKs (Android Runner) and Linux binaries (Starnix), but that probably isn't what you meant.

The problem with replacing a UI toolkit - any toolkit - is that any change to the toolkit requires modification of all software, including third-party software. Typically, when an OS wants to provide a new toolkit, they wrap the existing toolkit in new code. For example, on macOS, UIKit wraps AppKit, and on all Apple platforms SwiftUI is a wrapper around AppKit and UIKit (depending on platform). On Windows, every UI toolkit ultimately is creating "windows" as they are understood by USER[0], which creates corresponding objects in CSRSS and/or the NT kernel, which can then be used to draw on or attach to a GPU. The lowest level UI abstraction either OS provides is the objects supported by their oldest toolkit, and the lowest level programming language you can write apps in is whatever can call it.

Linux is a bit different, because it inherits its windowing model from X11. X shipped with no default toolkit and a stable window server protocol that apps could program against directly, in an era where most GUI OSes[1] didn't have 'servers' or 'protocols'. You populated resource files and called the relevant function calls to make things happen, and those function calls became sacrosanct. Even Windows NT couldn't escape this; it still used USER despite USER being years older than NT.

The best you can do is shim the library: write something lower level than the old junk and then rewrite the old library in terms of the new one. This is what Xwayland does to make X apps work on Wayland; and it's what Apple did (mostly) with Carbon to give a transition path to Mac OS 8/9 apps on OS X. Google could, say, ship a new Android toolkit that doesn't use Java bindings, and then make Android's Java toolkit a shim to the new native toolkit. However, this still means you have to keep the shim around forever, at least unless you want to start having flag dates and cut-offs. For context, Apple didn't kill Carbon until macOS 10.15 Catalina, and if they hadn't refused to ship Carbon on 64-bit Intel, it probably would still be in macOS today.

[0] An interesting consequence of this is that disabling "legacy input" in games turns off the ability to move the application window since all that code is intimately coupled to every app that has to open a top-level (i.e. not a widget) window.

[1] At the time that would be XEROX Star, the Lisa, and the Macintosh

[2] This is also why Apple will never, ever ship an iPad that can run macOS software in any capacity. Even if they were forced to allow root access and everything else macOS can do. The entire point of the iPad is to force software developers to rewrite their apps for touch, and I suspect their original intent was for the Macintosh to go away like the Apple ][ did.


Correct; tl;dr roadkill. I don't mean to be disrespectful; someone anonymous picked a bitter fight about this once. But to your point, it's clear there was a larger context that Fuchsia was born from, and the ambitions and commitment to it are greatly different than they were at some previous juncture.


Google didn't even consider supporting Go as an official Flutter language.


Swift is useless outside of macOS/iOS dev… the thing doesn’t even have namespacing. I don’t know for what reason someone would use it outside of the Apple environment.


You can disambiguate two types with the same name from different libraries, e.g. `Factotvm.URL` and `Foundation.URL`. Do you mean something more full-featured? You are not prefixing types with three letters, if that's what you think has to be done.

I don't know if it's still the case, but there was an annoyance where you couldn't have a type with the same name as the package. But that is hardly a lack of namespaces.
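
For anyone curious what that looks like, a minimal sketch (the clash here is simulated with a local type; the Factotvm module mentioned above stands in for any library that declares its own URL):

    import Foundation

    // A locally declared URL stands in for a second library's type of the same name.
    struct URL {
        let raw: String
    }

    let mine = URL(raw: "not Foundation's URL")                  // resolves to the local type
    let theirs = Foundation.URL(string: "https://example.com")!  // module-qualified

    print(type(of: mine))    // the local struct
    print(type(of: theirs))  // Foundation's URL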


Objective-C had some minimal adoption outside of Apple (probably due to NeXTStep nostalgia), so if Objective-C managed to get some traction, Swift will probably do it too.

However, Apple's history is very much stacked against Swift becoming a mainstream language outside of Apple's platform.


You're right but this touches the hot stove.

The HN majority doesn't like hearing that Ladybird et al. might just be wandering around, even if the goal is catnip for the bleachers, and that we should be skeptical this is the year of multiplatform Swift, because it wasn't last year if you actually tried it. Or the year before last. Or the year before that. Or the year before that year. Or the year before that one.


I’m slightly more ambivalent than you about it. Swift is a nice language and has better ergonomics than C++ and I imagine a Swift codebase might find more contributors than a C++ one (maybe I’m wrong about that!)

I also think it’s separate from the dream of “multiplatform Swift”. For that you need a healthy package ecosystem that all works cross-platform, and Swift doesn’t have that. But a lot of Ladybird is written at a low enough level that it won’t matter so much.


Problem is Swift engineer supply is low, there's not a viable business case to learn Swift because it's not actually viable cross-platform for development unless you have $X00 million to throw at your macOS/iOS team to build it from scratch, platform by platform (to wit, sibling comment re: Arc Browser)

So best case we're looking at: Swift isn't ready yet, the next major version will be, and we can't build UI with it, so we'll put in the effort to bootstrap a cross-platform ecosystem and UI frameworks. Or maybe we'll just do our business logic in it? It's a confusing mess that is irrational, even with great blessings of resources: e.g. with the $X00M Arc has, they obtained one incremental platform after a year. And "all" they had to do was Swift bindings for WinRT and connect it to the existing C++ engine.

All of this is easy to justify if we treat it as an opportunity to shoot for how we wish software could work in theory, instead of practice. I hope I'm wrong, but after being right the last few years, I'm willing to say out loud that it's naive wishcasting, even though it's boorish. I see it as unfortunately necessary; a younger me would have been greatly misled by the conversations about it on HN. "We've decided to write the browser in Swift!" approaches parody levels of irresponsible resource management and is a case study in several solo-engineer delusions that I also fall victim to.

It's genuinely impossible for me to imagine anyone in my social circle of Apple devs, going back to 2007, who would think writing a browser engine in Swift is a good idea. I love Swift, used it since pre-1.0, immediately started shipping it after release, and that was the right decision. However, even given infinite resources and time, it is a poor fit for a browser engine, and an odd masochistic choice for cross-platform UI.


> Problem is Swift engineer supply is low, there's not a viable business case to learn Swift because it's not actually viable cross-platform for development

The Swift business case is that in many situations native is strongly preferable to cross-platform. Excluding some startups that want to go to market super fast and consulting companies that have to sell the cheapest software possible, the benefits of native usually outweigh those of cross-platform.

For this reason there are now plenty of companies of all sizes (FAANGs included) that build and maintain native apps with separate iOS/Android teams. There are very good business reasons to learn Swift or Kotlin, in my opinion.


Right, no one read my comment and thought I meant Swift was unnecessary or that businesses don't use it. Contextually, we're discussing Swift for cross-platform dev.


I'm glad you know what everyone thinks while reading your comments.


Well I was trying to be kinder than just leaving you downvoted and confused why. I guess I shouldn't have bothered, my apologies. Hope your week gets better!


I wonder what convinced Andreas Kling to abandon his own language Jakt [1] in favour of Swift.

In the long run, it would be good to have high-level languages other than Java that have garbage collection (at least optionally) and classes, and that are still capable of doing cross-platform system development. I don't know if Swift fits that bill; besides a cross-platform ecosystem (a la Java), submitting the language for ISO standardization (not just open-sourcing one implementation) would be a good indication of being serious about language support.

[1] https://github.com/SerenityOS/jakt


> In the long run, it would be good to have high-level languages other than Java that have garbage collection (at least optionally) and classes, and that are still capable of doing cross-platform system development.

C#


One of the major differences between Ladybird as part of SerenityOS and Ladybird the separate project is the use of 3rd-party libraries. When what you are building is for fun and you build everything yourself, it makes sense to also build a language.

Ladybird as a separate project, though, has the goal of being something usable in the shorter term. So, similarly to switching to 3rd-party libraries for things, I don't think it makes sense to spend potentially years building the language first before building the browser.


You might be powerfully dry here, imparting a Zen lesson. If not, or if you, dear reader, don't see it: it is worth meditating on the fact that there was a language, an OS, and a web browser. Then, on common characteristics in decision-making that would lead to that. Then, consider the yaks that have to be shaved; the ISO standardization / more-than-one-implementation point hints at this.


> and has better ergonomics than C++

It is unfair to compare a twenty-first-century language with one from the 1980s.

Rust is the proper comparison

The only advantage Swift has is an easier learning curve, but the ergonomics of Rust are far superior to Swift's.


It’s not really a question of fairness. The existing codebase is C++, the new stuff is Swift. Hence the comparison.

I’ve written both Rust and Swift while being an expert in neither. I wouldn’t say Swift has no pluses in comparison; reference counting is often a lot easier to reckon with than lifetimes, for one. I’m actually curious what a large multithreaded Swift codebase looks like with recent concurrency improvements. Rust’s async story isn’t actually that great.


> Rust’s async story isn’t actually that great

I agree. People's perspectives differ. I abhor `async/await` in Rust. It has poisoned the well IMO for asynchronous Rust programming. (I adore asynchronous programming - I do not need to pretend my code is synchronous.)

But that is taste, not a comment on poor engineering!

The lack of the borrow checker in Swift is what makes it approachable for newcomers, as opposed to Rust which is a harsh mistress.

But reference counting is such a silly idea. Swift really should have a garbage collector with a mechanism or subset to do that very small part of programming that cannot be done with a garbage collector. That would have been design!

I fear that Swift is going to get a borrow checker bolted on - and have the worst of both worlds....


Reference-counted objects have deterministic lifetimes, like in Rust or C++. Garbage-collected languages don't have that. Essentially a reference counter automates some of the ideas of the borrow checker, with a runtime cost as opposed to a compile-time one.

The great thing about automatic reference counting is you can elide the incrementing and decrementing a lot of the time. Stronger guarantees, such as saying "this object is single-threaded" lead to even more optimizations.
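
A small Swift sketch of that determinism: deinit runs exactly when the last strong reference goes away, not at some later collection pass (the class and names here are illustrative):

    // With ARC, deinit fires at a predictable point: when the strong
    // reference count drops to zero.
    final class Resource {
        let name: String
        init(name: String) { self.name = name; print("open \(name)") }
        deinit { print("close \(name)") }
    }

    func useScopedResource() {
        let r = Resource(name: "scoped")
        print("using \(r.name)")
    }                                        // "close scoped" prints before this function returns

    var shared: Resource? = Resource(name: "shared")
    useScopedResource()
    shared = nil                             // "close shared" prints here, immediately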


I don’t think Apple would want to sacrifice the determinism that refcounting provides for garbage collection. iOS apps are still more buttery smooth than Android in many cases.


I mean, this year we did have the porting of Arc browser to Windows. I use it on my gaming PC and it is starting to feel like it has a similar level of polish to the macOS version.

https://speakinginswift.substack.com/p/swift-meet-winrt

https://speakinginswift.substack.com/p/swift-tooling-windows...

Now that’s not an engine but the UI.

