
This done well is a transformational thing; it's just that no one has been willing to invest yet. But the compute on a phone is now good enough to do most things most users do on desktop.

I can easily see the future of personal computing being a mobile device with peripherals that use its compute, and the cloud for anything serious: be that AirPods, glasses, watches, or just hooking that device up to a larger screen.

There's not a great reason for an individual to own processing power in a desktop, laptop, phone, and glasses when most are idle while you use the others.


The future of personal computing is being dictated by the economics of it, which are that the optimal route to extract value from consumers is to have walled-garden software systems gated by per-month subscription access and/or massive forced advertising. This leads to everything being in the cloud and only fairly thin clients running on user hardware. That gives the most control to the system owners and the least control to the user.

Given that all the compute and all the data is on the cloud, there is little point in making ways for users to do clever interconnect things with their local devices.


I've heard so many "The future of personal computing" statements that haven't come true, so I don't put much stock in them.

I remember when everyone thought we were going to throw out our desktops and do all our work on phones and tablets! (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)

> Given that all the compute and all the data is on the cloud, there is little point in making ways for users to do clever interconnect things with their local devices.

IMO, it's a pain in the ass to manage multiple devices, so it's much easier to just plug my phone into a clamshell and have all my apps show up there.


> we were going to throw out our desktops and do all our work on phones and tablets! (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)

We're almost there. The cool kids are already using 12" touchscreen ARM devices that people from 10 or 20 years ago would probably think of as tablets. Some kinds of work benefit greatly from a keyboard, but that doesn't necessarily mean you want one all the time - I still think the future is either 360-fold laptops with a good tablet mode (indeed that's the present for me, my main machine is an HP Envy) or something like the MS Surface line with their detachable "keyboard cover".


Well, the MacBook Air is pretty much an iPad that swapped its touchscreen for a keyboard (and trackpad).

> I still think the future is either 360-fold laptops with a good tablet mode (indeed that's the present for me, my main machine is an HP Envy) or something like the MS Surface line with their detachable "keyboard cover".

I think people still want to use different form factors in the future. There are different uses for a phone, a tablet, a laptop and a desktop.

I do agree that laptops might get better tablet modes, but if you want to have a full-sized comfortable-ish keyboard, the laptop is gonna be more unwieldy than a dedicated tablet.

The only thing you save from running your desktop (or even laptop) form factor off your phone is the processor (CPU, GPU, RAM). You still have to pay for everything else. But even today the cost of desktop processing components that can reach phone-like performance is almost a rounding error, just because they have so much more space, cooling and power to play with.

(Desktop CPUs can be quite pricey if you buy higher end ones, but they'll outclass phones by comical amounts. Phone performance is really, really cheap in a desktop.)


> I think people still want to use different form factors in the future. There are different uses for a phone, a tablet, a laptop and a desktop.

> The only thing you save from running your desktop (or even laptop) form factor off your phone is the processor (CPU, GPU, RAM). You still have to pay for everything else.

Having used the same device as my tablet/laptop/desktop for a few years (previously a couple of generations of Surface Book, now the Envy, in both cases with a dock set up on my desk), I never want to go back. It just makes using it so much smoother, even compared to having tab sync and what have you between multiple devices. It's not a money thing, it's a convenience thing, which is why I think it'll win out in the end.

I think as hardware continues to get thinner and lighter, the advantage of a tablet-only device compared to a tablet/laptop will disappear, and as touchscreens get cheaper, there'll be little point in laptop-only devices. I definitely still want an easy way to take a keyboard with my device on the train/plane, and I don't know what exact hardware arrangement will win out for that, but I'm confident that the convergence will happen. I think phone convergence will also happen eventually, for the same reason, but how that will actually work in terms of the physical form factor is anyone's guess.


> Having used the same device as my tablet/laptop/desktop for a few years (previously a couple of generations of Surface Book, now the Envy, in both cases with a dock set up on my desk), I never want to go back. It just makes using it so much smoother, even compared to having tab sync and what have you between multiple devices. It's not a money thing, it's a convenience thing, which is why I think it'll win out in the end.

Yes, that's useful. But eg ChromeOS already gives you most of that, and a bit of software could get you all the way there.

> I think as hardware continues to get thinner and lighter, the advantage of a tablet-only device compared to a tablet/laptop will disappear, and as touchscreens get cheaper, there'll be little point in laptop-only devices.

I agree with the latter, but not the former. There are mechanical limits to shrinking a keyboard while still preserving comfort.

(And once you have the extra space from a keyboard, you might as well fill it up with more battery. But I'm not as sure about that argument as I am about the physical lower bounds on keyboard size.)


> eg ChromeOS already gives you most of that, and a bit of software could get you all the way there.

I don't understand what you mean here. If you're talking about some kind of easy sync-between-devices software, people have been trying to make that work for decades, but not only haven't they succeeded, they haven't even really made any progress.

> There are mechanical limits to shrinking a keyboard while still preserving comfort.

Maybe, but those limits are plenty big enough for a tablet - particularly with the size of phones these days, a tablet smaller than say 10" is pointless, and the keyboards on 11" laptops are fine. Now making a device that can work as both a phone and a laptop-with-keyboard will probably require some mechanical innovation, yes, but that's the sort of thing that I suspect will be figured out sooner or later, e.g. we're already seeing various types of folding phones going through the development process.


11" laptops are not fine to type on all day unless you give them huge bezels (even the 11" macbook which did have those huge bezels was space-constrained on the less important keys). Ergonomics is really important.

Sure it's fine to get by for an hour or two but spending 8 hours 5 days a week on one is a really bad idea and will provide a great path to crippling RSI. In fact using any laptop that much is a bad idea, due to the bad posture it provides (with the screen attached to the keyboard). This is why docking stations are still so important.


> Some kinds of work benefit greatly from a keyboard, but that doesn't necessarily mean you want one all the time

I would say most kinds of work.

Even if you're just on Teams discussions - a real keyboard is much more productive than messing around on a touchscreen. Same with just reading. Sometimes I read a forum thread on my phone, and then when I get back to the real computer I'm surprised how little I actually read compared to how much it felt like.

The only thing where I don't see this being the case is creative work like drawing, where a tablet is really perfect, much better than a Wacom or something.


> (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)

I think that's changing thanks to generative AI: I would expect people to gradually replace manually creating a spreadsheet with 'vibecoding' it.

> IMO, it's a pain in the ass to manage multiple devices, so it's much easier to just plug my phone into a clamshell and have all my apps show up there.

ChromeOS already works like that, when you log in on different devices, without having to physically lug one device around that you plug into different shells.


I know many people for whom that is exactly the case; not everyone is doing spreadsheets or coding.

Also I haven't owned a desktop since 2003, and my last one at work was in 2006, although we may debate whether laptops with docking stations are also desktops.


In software development, "desktop" is synonymous with laptop.

Laptops + docking stations are usually just as fast as a desktop. You can buy $10,000 desktops that are much faster (50+ cores, and a lot of RAM), but most developers don't find them enough faster to be worth it. (In my benchmarks, rebuilds with 40 cores finished faster than rebuilds using all 50, for a 10+ million line C++ project.) It is easier to have everything locally where you are. If, like many of us, you sometimes work from home, remoting into a different machine is always a bit painful.

Exactly, and that also makes Surface-like devices a good enough way to code on the go.

> (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)

Desktop LibreOffice works fine on my Librem 5 phone.


I think this is a really good take - Apple especially (but Google too) aren't gonna naturally invest time and resources into software that'll make you less likely to buy more of their hardware.

That said, market incentives can and do change pretty fast. Especially with climate change, and current tension in global supply chains, we could see a shift away from hardware caused by taxes or price hikes (I'm not saying we will though).

That'd be a game changer for how much companies might invest in changing what computing looks like.


> the compute on a phone is now good enough to do most things most users do on desktop.

Really, the compute on a phone has been good enough for at least a decade now once we got USB C. We're still largely doing on our phones and laptops the same things we were doing in 2005. I'm surprised it took this long

I'm happy this is becoming a real thing. I hope they'll also allow the phone's screen to be used like a trackpad. It wouldn't be ideal, but there's no reason the touchscreen can't be a fully featured input device.

I fully agree with you on the wasted processing power -- I think we'll eventually head toward a model of having one computing device with a number of thin clients which are locally connected.


> I hope they'll also allow the phone's screen to be used like a trackpad. It wouldn't be ideal, but there's no reason the touchscreen can't be a fully featured input device.

I might have misunderstood but do you mean as an input device attached to your desktop computer? Kdeconnect has made that available for quite some time out of the box. (Although it's been a long time since I used it and when I tested it just now apparently I've somehow managed to break the input processing half of the software on my desktop in the interim.)


Yes! I enjoy KDEConnect a lot for that :) With the phone being the computer, the latency can probably be made low enough that it just feels like a proper touchpad

> We're still largely doing on our phones and laptops the same things we were doing in 2005. I'm surprised it took this long

Approximately no-one was watching 4k feature-length videos on their phones in 2005, or playing ray traced 3d games on their laptops.

Sending plain text messages is pretty much the same as back then, yes. But these days I'm also taking high resolution photos and videos and share those with others via my phone.

> I hope they'll also allow the phone's screen to be used like a trackpad.

Samsung's DeX already does that.

> I fully agree with you on the wasted processing power -- I think we'll eventually head toward a model of having one computing device with a number of thin clients which are locally connected.

Your own 'good enough' logic already suggests otherwise? Processors are still getting cheaper and better, so why not just duplicate them? Instead of having a dumb large screen (and keyboard) that you plug your phone into, it's not much extra cost to add some processing power to that screen, and make it a full desktop PC.

If we are getting to a 'thin client' world, it'll be because of the 'cloud', not because of connecting to our phones. Even today, most of what people do on their desktops can be done in the browser. So we'll likely see more of that.


> Approximately no-one was watching 4k feature-length videos on their phones in 2005, or playing ray traced 3d games on their laptops.

Do people really do this now? Watching a movie on my phone is so suboptimal I'd only consider it if I really have no other option. Holding it up for 2 hours, being stuck with that tiny screen, brrr.

I can imagine doing it on a plane ride when I'm not really interested in the movie and am just doing it to waste some time. But when it's a movie I'm really looking forward to, I'd want to really experience it. A VR headset does help here but a mobile device doesn't.


You position it vertically against something in bed and keep it close enough (half a meter) so that it's practically the same size as a TV that is 4-5 meters away, and you enjoy the pixels. I love doing this a few times a week when I'm going to sleep or just chilling.

Hmm ok, for me a phone at 50cm is way smaller than a TV, but mine is also not 5m away. In bed I usually use my Meta Quest in lie-down mode.

We were watching videos and playing games on our laptops in 2005. Of course they mostly weren't 4K or raytraced, don't be silly.

The thin client world is one anticipating a world with fewer resources to make these excess chips. It's just a speculation of what things will look like when we can't sustain what is unsustainable.


> We were watching videos and playing games on our laptops in 2005. Of course they mostly weren't 4K or raytraced, don't be silly.

The video comment was about phones. The raytracing was about laptops.

Yes, laptops were capable of watching DVDs in 2005. (But they weren't capable of watching much YouTube, because YouTube was only started later that year. Streaming video was in its infancy.)

> It's just a speculation of what things will look like when we can't sustain what is unsustainable.

Huh? We are sitting on a giant ball of matter, and much of what's available in the crust is silicates. You mostly only need energy to turn rocks into computer chips. We get lots and lots of energy from the sun.

How is any of this unsustainable?

(And a few computer chips is all you save with the proposed approach. You still need to make just as many screens and batteries etc.)


Last time I used DeX, your phone does become a touchpad for the desktop when plugged into a monitor.

Yes it can, it can also become a keyboard in fact.

One thing I'm kinda missing is that it doesn't seem to be able to become both at the same time on a system that has the screen space for that. Like a tablet or Z Fold series.


:D I avoid Samsung products, but I'm happy that it at least exists. I hope it's not patented, and that Google is able to put the same thing into Android and make it available in AOSP.

This concept has been floating around for a long time. I think Motorola was pitching it in 2012, and I'm sure confidential concepts in the same vein have been tried in the labs of most of the big players.

> I can easily see the future of personal computing being a mobile device with peripherals that use its compute, and the cloud for anything serious: be that AirPods, glasses, watches, or just hooking that device up to a larger screen.

I don't see that at all.

That's because I think over time the processing power of e.g. a laptop will become a small fraction of its cost (both in terms of purchase price and in terms of power).

The laptop form factor is pretty good for having a portable keyboard, pointing device and biggish screen together. Outsourcing the compute to a phone still leaves you with the need for keyboard, pointing device and screen. You only save on the processor, which is going to be a smaller and smaller part.

> there's not a great reason for an individual to own processing power in a desktop, laptop, phone, and glasses when most are idle while you use the others.

Even in your scenario, most of your devices will be idle most of the time anyway. And they don't use any energy when turned off. So you are only saving the cost to acquire the processor itself.

Desktop computer processors that can hit the computing power of a mobile processor are really, really cheap already today.


You are ignoring data location and software installs.

Having all your data always with you stored locally (on your phone) is simpler than syncing and more private than cloud.

One OS with all your software. No need to install the same app multiple times on different devices. No need to deal with questions like how many devices my license is valid for. However, apps would need to come with a responsive UI. No more separate mobile and desktop versions.

For example, you take photos on your phone, dock it at your desk or in a laptop shell, and edit them comfortably on a big screen, with an app you bought and installed once. No internet connection is required.

A docking station could be more than just display and input devices. It could contain storage for backing up your data from the phone. Or powerful CPU and GPU for extended compute power (you would still use OS and apps/games on your phone with computations being delegated to more powerful HW).

This could replicate many things cloud offers today (excluding collaboration). No need to deal with an online account for your personal stuff. IMO, it would probably be less mystical than cloud to most users.


> Having all your data always with you stored locally (on your phone) is simpler than syncing and more private than cloud.

You need to sync it anyway. Having that phone with you all day also means exposing it to a lot of risk involving theft, drops and other kinds of damage. You need that sync for backup purposes.

I agree actually having it on the phone is great though. I use DeX a LOT, it's a great way of working when I don't have my laptop with me but do have a docking station available (e.g. at the office when I forget my laptop or just dropped in unplanned)


> You need that sync for backup purposes.

Backup is a simple one-way sync, but like you said, it is needed. It could still be private if the backup is made to another of your devices when your phone connects to your home WiFi.


You can (in principle) back up over the cloud and still have everything private. Encryption and open source software can handle that. (You want the software to be open source, so you can check that it's really end-to-end encrypted without a backdoor.)

Of course, that scenario would only become the norm if there's mainstream demand for that. By and large, there ain't.
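
To make the idea concrete, here is a minimal sketch of client-side ("end-to-end") encryption before upload, assuming Python and the third-party cryptography package; the file names, key handling, and the upload step itself are purely illustrative:

    # Minimal sketch: encrypt a backup locally before handing it to any cloud
    # storage, so the provider only ever sees ciphertext. Assumes the
    # third-party `cryptography` package; key management is deliberately naive.
    from pathlib import Path
    from cryptography.fernet import Fernet

    def load_or_create_key(path: Path = Path("backup.key")) -> bytes:
        """Create (or reuse) a symmetric key that never leaves your devices."""
        if path.exists():
            return path.read_bytes()
        key = Fernet.generate_key()
        path.write_bytes(key)
        return key

    def encrypt_backup(plain: Path, encrypted: Path, key: bytes) -> None:
        """Encrypt `plain` into `encrypted`; the latter is what gets uploaded."""
        encrypted.write_bytes(Fernet(key).encrypt(plain.read_bytes()))

    def restore_backup(encrypted: Path, plain: Path, key: bytes) -> None:
        """Decrypt a downloaded backup back into plaintext, locally."""
        plain.write_bytes(Fernet(key).decrypt(encrypted.read_bytes()))

    if __name__ == "__main__":
        key = load_or_create_key()
        encrypt_backup(Path("photos.tar"), Path("photos.tar.enc"), key)
        # photos.tar.enc can be synced to any cloud; only holders of backup.key can read it.

The point is only that the provider stores ciphertext; whoever holds the key (ideally only your own devices) can restore.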


> You are ignoring data location and software installs.

Caching works well for that.

> Having all your data always with you stored locally (on your phone) is simpler than syncing and more private than cloud.

Have a look at how GMail handles this. It has my emails cached locally on my devices so I can read them offline (and can also compose and hit-the-send-key when offline), but GMail also does intelligent syncing behind the scenes. It just works.

> For example, you take photos on your phone, dock it at your desk or in a laptop shell, and edit them comfortably on a big screen, with an app you bought and installed once. No internet connection is required.

My devices are online all the time anyway.

> A docking station could be more than just display and input devices. It could contain storage for backing up your data from the phone.

I'm already backing up to the Cloud automatically. And Google handles all the messy details, even if my house burns down.

> Or powerful CPU and GPU for extended compute power (you would still use OS and apps/games on your phone with computations being delegated to more powerful HW).

How is that different from the ChromeOS scenario, apart from that the syncing in your case doesn't involve the cloud?

> This could replicate many things cloud offers today (excluding collaboration). No need to deal with an online account for your personal stuff. IMO, it would probably be less mystical than cloud to most users.

No, it would be more annoying, because I couldn't just log in anywhere in the world, and get access to my data. And I would have to manually bring devices in contact to sync them.

You can build what you are suggesting. And some people (like you!) will like it. But customers by-and-large don't want it.


Cache invalidation is hard. Offline-first is also hard and expensive to develop. Single source of truth + backup is simpler.

> No, it would be more annoying, because I couldn't just log in anywhere in the world, and get access to my data. And I would have to manually bring devices in contact to sync them.

You are traveling without your phone? I don't always have unlimited internet when traveling. If you lose your phone while traveling, there's a good chance you won't be able to log in due to 2FA anyway. Devices just have to connect to the same local network to sync. The phone probably connects to your WiFi automatically when you come home. Syncing over the internet is also possible.

I'm just saying it could be done. Not that everybody would use it or like it. Although, I imagine getting rid of one dependency (cloud) and having more control would be a plus to some.

The cloud is not magically without issues. People do get locked out of their cloud accounts due to some heuristic flagging them, payment issues, user errors or even political reasons. And it can take a very long time before you get it resolved. Last year there was even a story on HN about Google Cloud accidentally deleting a customer's account and all their data.

> But customers by-and-large don't want it.

Do you have any data backing this up?

A phone-centered solution could be more cost-effective. A casual user would only need a phone, a backup solution (either cloud-based or an external drive connected to a network) and a bigger display with input devices (portable or desktop). Possibly one less subscription they have to pay and lower HW costs.


Since Microsoft started this iteration of their move to ARM, I've wondered if they would be the first to do this properly: building an adaptable/mobile desktop UX into Windows 12 (or 13), pumping up the Microsoft Store, and then relaunching the Windows (Surface, I guess) Phone with full-fat Windows on it.

In a way it's the same strategy that Nintendo used to re-gain a strong position in gaming (including the lucrative Home Console market where they'd fallen to a distant last place) - drafting their dominance in Handheld into Home Console by merging the two.


Windows Phone had this ability, it was called Continuum: https://learn.microsoft.com/en-us/windows-hardware/design/de...

Also Ubuntu phone had it and it worked well. Better than on the phone itself due to lack of good small-screen-capable software.

I strongly agree, and have felt this way for a long time. We are being sold many processors, each placed into their own device. The reality is our phone processor could be used to run our TVs, streaming devices, monitors, VR glasses, consoles, laptops, etc. That's less profitable, however.

With cables, yes. And LG did that for a while in fact, they had a VR headset that would plug into the phone: https://www.cnet.com/reviews/lg-360-vr-review/ It wasn't a success but this was more software-related and also some hardware-skimping. It was a good idea, it just seems like the devs forgot to actually try using it before declaring it a finished product.

But wireless the lag is so bad that it's not really usable. Like Wireless DeX. Definitely not good enough for processor-less VR glasses (even the wireless VR streaming from meta does require significant processing power on the glasses end).


A laptop wins every time because I don't have to carry around all my peripherals and set them all up again. Unless there's going to be dock setups in every conference room, coffee shop, table in my house, airplane, car, deck, etc, a laptop makes more sense.

The peripherals need not be anything more than a clamshell screen + keyboard, same as a laptop.

Then what do you save? Only the system-on-a-chip (CPU, GPU, RAM).

And the hardware to get an SoC with phone-like performance in a laptop or desktop form factor is relatively cheap, just because you have so much more space and power and cooling to work with.

(Your laptop-shell definitely needs its own power supply, whether that be a battery or a cable, because the screen alone will take more power than your phone's battery can provide for any sustained period of use.)


Right but if it's the same as a laptop why not just use a laptop?

The only things I can think of are you really want to keep all the data on your phone and don't want to use cloud sync solutions (Dropbox etc.), or you really want to save a couple of hundred dollars getting a (probably terrible) laptop without a motherboard. Not very compelling IMO.


Surely long term it'd be cost? A screen and a keyboard in a laptop shell should be a lot cheaper than a screen, a keyboard, RAM, SSD, fans etc in a laptop shell.

Those other parts of the laptop are cheap though. Sure, not free, but Chromebooks can be had new for just a few hundred dollars (they don't need a fan either). If you want a fast laptop you need to spend a lot of money, but a fast laptop can have better RAM, SSDs and such than your phone because there is more space in that form factor. So if you want fast you are back to a laptop, and if you don't need fast your laptop is cheap.

Ease of use, and the clamshell should be cheaper if the vendor promised 10 years of support, so a clamshell bought today would still work with an iPhone 22.

Those would be the most expensive parts of the laptop. You're basically just saving on a mobile SoC which isn't much of a cost.

To be clear, compute on a phone has been good enough for what most people do on a desktop for a long long time. That is not at all a new thing.

There's no need to stop there, why not just generalize that already to the WEF-approved "there's not a great reason for an individual to own anything"

> but the compute on a phone is now good enough to do most things most users do on desktop.

The compute power yes, but the OSs and UX are shit.


Well, they are a generation ahead of desktop UIs in many respects, so.

E.g. Android/iOS have better security than Windows/GNU Linux/macOS, much more reliable suspend/wake functionality, much better battery management, etc.

Like it's a 50/50 chance whether my laptop with Win 11 will wake up fully charged or fully discharged in the morning, and whether it will be kind enough to actually be ready for work or whether I can go brew a coffee before it's ready...


In a sense Apple is already doing this, since there's shared chip tech in the laptops and phones.

I still will prefer the form factor of a laptop for anything serious though; screen, speakers, keyboard.

Yes you can get peripherals for a phone, yes I have tried that, no they're not good. Though perhaps with foldable screens this could change in the future.


Apple is intentionally hampering the desktop experience on the iPad and is very late in bringing Stage Manager to the iPhone (the rumor is now iOS 19). Until there is serious competition (this and/or improvements to DeX), Apple will drag their feet, because they want to sell you three compute device categories (or four if you count the Vision Pro).

Also, Stage Manager is not a good way of doing real work. It's with good reason that people abhor it on the Mac. On an iPad with no better alternative it's workable but not great.

So true! I have experimented with plugging an iPad Pro into an Apple 7K Studio Monitor with keyboard and an Apple Trackpad and Stage Manager: close to being generally useful, but I also get the idea that Apple is purposely holding back to prevent reducing Mac sales.

That is why I am rooting for Samsung DeX and what Google is offering: Samsung and Google can make money for their own reasons making a universal personal digital device.


They have the hardware. They don’t provide ANY software for this kind of thing though. And there is a very real chance it could cannibalize some Mac sales.

I’ve always wondered if this kind of thing is actually that useful, but it’s not even an option for me because of the above.

Seems surprising Google didn’t act on this earlier. But maybe they didn’t want to cannibalize the Chromebooks?

I get the feeling very very few people know this exists at all on some Samsung phones. I’ve asked some tech-y people with Samsungs about it before and they didn’t even know it existed.


True! Apple’s already ahead with the shared chip setup between Macs and iPhones. But yeah, for real work, nothing beats a proper laptop — big screen, keyboard, good speakers. I’ve tried using a phone with accessories too… not the same vibe. Maybe foldables will change that someday!

A desktop then, or a laptop plugged into a proper screen, real speakers etc. A laptop is still a compromise.

this done well is a transformational thing; it's just that no one has been willing to invest yet

I think we've seen this before. Back before phones were "smart" there was one (Nokia, maybe?) that you could put on a little dock into which you could plug a keyboard and monitor.

Obviously, it didn't take off. Perhaps it was ahead of its time. Or, as you say, it wasn't done well at the time.

Phones accepting Bluetooth keyboard connections was very common back in my road warrior (digital nomad) days, but the screen was always the annoyance factor. Writing e-mails on my SonyEricsson on a boat on the South China Sea felt like "the future!"

Slightly related, I built most of my first startup with a Palm Pilot Ⅲ and an attached keyboard. Again, though, a larger screen would have been a game changer.


AIUI, the main problem in the cell phone era is that by the time you create a notebook shell with an even halfway-decent screen, keyboard, battery, and the other things you'd want in your shell, it's hard to sell it next to the thing right beside it on the shelf that is all of that, except they also stuck a cheap computer in it (and it is therefore no longer a dock, but a laptop). Yeah, it's $50 more expensive, but it looks way more than $50 more useful.

What may shift the balance is that slowly but surely USB-C docks are becoming more common, on their own terms, not related to cell phones. At some point we may pass a critical threshold where there's enough of them that selling a phone that can just natively use any USB-C dock you've got lying around becomes a sufficient distinguishing feature that people start looking for it. Even just treating it as a bonus would be a start.

I've got two docks in my house now; one a big powerful one to run the work-provided laptop in a more remote-work-friendly form factor, and a fairly cheap one to turn my Steam Deck into a halfway-decent Switch competitor (though "halfway-decent" and no more; it's definitely more finicky). We really ought to be getting to the point that a cell phone with a docked monitor, keyboard, & mouse for dorm room usage (replacing the desktop, TV, and if whoever pulls this off plays their cards right, the gaming console(s)) should start looking appealing to college students pretty soon here. The docks themselves are rapidly commoditizing if they aren't there already.

Once it becomes a feature that we increasingly start to just expect on our phones, then maybe the "notebook-like" case for a cell phone starts to look more appealing as an accessory. We've pretty much demonstrated it can't carry itself as its own product.

That would probably start the clock on the "notebook" as its own distinct product, though it would take years for them to finally be turned into nothing but shells for cell phones + a high-end, expensive performance-focused line that is itself more-or-less the replacement for desktops, which would themselves only be necessary for high-end graphics or places where you need tons and tons of storage and you don't want 10 USB-C drives flopping around separately.


BTW you don't even need a dock if you have a USB-C monitor with USB and audio ports, which is not that uncommon. The monitor acts like a USB hub, so if you plug in your keyboard and mouse that's your computer essentially

At least for the monitors I've used this way, they don't provide enough power over USB-C to keep my phone charged.

> I think we've seen this before. Back before phones were "smart" there was one (Nokia, maybe?) that you could put on a little dock into which you could plug a keyboard and monitor.

Still in the "smart" era, but the Motorola Atrix allowed that, but with its own laptop form factor dock.

https://www.cnet.com/culture/how-does-the-motorola-atrix-4g-...


I had one of these Atrix and laptop docks. It was really good, but sadly way ahead of its time. The desktop was a Debian-based Linux desktop and you could install various ARM packages. Unfortunately, the phone just wasn't powerful enough at the time. The touchpad was also not brilliant compared to Macs (probably better than Windows touchpads of the time). I sold it on ebay to a guy who plugged his Raspberry Pi into it, since the Atrix dock used mini HDMI and microUSB connectors. This has obviously been replaced in the modern age with USB-C.

I am pretty sure that modern phones are more than powerful enough! My wife's iPhone 16 Pro Max would be amazingly useful if not limited by iOS (which always feels like it's hiding true capabilities behind an Etch-A-Sketch interface to me). If you could plug the iPhone in and run a macOS desktop (which hasn't really changed for 15+ years), that'd be great. Thanks in advance.

I have a POCO F7 Ultra which is powerful enough to run LLMs via PocketPal and could easily replace my daily laptop or PC for work if it wasn't scuppered by USB2 on the USB-C port. If I could easily run ollama on the phone via a web interface I would because it's faster than my main PC for LLMs I think!

On Android you can go into Developer options and force-enable the ability to use desktop mode, but sadly I can't use it without proper display output support on the USB-C port.
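
For what it's worth, those Developer-options switches correspond to Settings.Global keys that can also be flipped over adb. A rough sketch, assuming adb is installed, USB debugging is enabled, and the device build actually honours these keys (vendor builds vary):

    # Rough sketch: flip Android's "Force desktop mode" switch over adb instead
    # of tapping through Developer options. Assumes adb is on PATH and USB
    # debugging is enabled; whether the keys have any effect depends on the
    # vendor build, and the change takes effect after a reboot.
    import subprocess

    def adb_shell(*args: str) -> None:
        """Run a single `adb shell` command and fail loudly if it errors."""
        subprocess.run(["adb", "shell", *args], check=True)

    if __name__ == "__main__":
        adb_shell("settings", "put", "global",
                  "force_desktop_mode_on_external_displays", "1")
        adb_shell("settings", "put", "global", "enable_freeform_support", "1")
        adb_shell("reboot")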


I had an Atrix 2 that I was excited could work with a laptop-form dock. I never bothered to actually get the dock, though.

> I think we've seen this before. Back before phones were "smart" there was one (Nokia, maybe?) that you could put on a little dock into which you could plug a keyboard and monitor.

There have been multiple attempts at this over the years.

https://liliputing.com/5-laptop-docks-that-let-you-use-a-sma...


I think power was a real problem. A 2010 phone was nowhere near a laptop in performance.

An M4 Mac is way more powerful than an iPhone 16, but the iPhone is powerful enough to provide a much better experience on normal tasks compared to what that 2010 phone could at the time.

Basically I think everything has enough headroom that it's not the compromise it would've been before. The biggest constraints on an iPhone's performance are the battery and cooling. If you're plugged in, the battery doesn't matter. And unless you're playing a fancy game, cooling may not be an issue due to headroom.


I remember there was a fad I think in 2009 or 2010 where a bunch of Android manufacturers released 'laptops' (just a display and keyboard) with a dock connector in the back that was meant to turn the phone into a laptop basically

Obviously the trend didn't take off


> but the screen was always the annoyance factor.

Agreed. For this reason I'm quite excited about glasses like the Xreal One Pro. Having to carry around with me just my phone, a pair of glasses and a lightweight Bluetooth keyboard would be a game changer for me in terms of ergonomics.


Do you have this yet? I wonder how well it works in practice. I know some people using it with DeX but they're pretty expensive (around $400 I think) so I didn't try it myself.

No, not yet. I'm waiting for reviews of the Xreal One Pro to come out.

> this done well is a transformational thing; it's just that no one has been willing to invest yet

https://news.ycombinator.com/item?id=19328085


They're paying to acquire places where you can sell tokens at a markup, because the future is multiple base models that are good enough for most user tasks, where user gateways play the base model providers off each other and capture a lot of the value.

AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity.

Not only is there infinite incentive to compete, but there are decreasing costs to do so. The only world in which AGI is winner-take-all is a world in which it is extremely controlled, to the point at which the public can't query it.


> AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity

The first-mover advantages of an AGI that can improve itself are theoretically unsurmountable.

But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.


> The first-mover advantages of an AGI that can improve itself are theoretically unsurmountable.

This has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling.


> this has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling

To be precise, it assumes a low variability in cycle time and improvement per cycle. If everyone is subjected to the same limits, the first-mover advantage remains insurmountable. I’d also argue that whether there is a ceiling matters less than how high it is. If the first AGI won’t hit a ceiling for decades, it will have decades of fratricidal supremacy.


> I’d also argue that whether there is a ceiling matters less than how high it is.

And how steeply the diminishing returns curve flattens off.


I think the foundation model companies are actually poorly situated to reach the leading edge of AGI first, simply because their efforts are fragmented across multiple companies with different specializations—Claude is best at coding, OpenAI at reasoning, Gemini at large context, and so on.

The most advanced tools are (and will continue to be) at a higher level of the stack, combining the leading models for different purposes to achieve results that no single provider can match using only their own models.

I see no reason to think this won't hold post-AGI (if that happens). AGI doesn't mean capabilities are uniform.


I find these assumptions curious. How so? What is the AGI going to do that captures markets? Even if it can take over all desk work, then what? Who is going to consume that? And further more (and perhaps more importantly), with it putting everyone out of work, who is going to pay for it?

I'm pretty sure today's models probably are capable of self-improving. It's just that they are not yet as good at self-improving as the combination of programmers improving them with the help of the models.

Nothing OpenAI is doing, or ever has done, has been close to AGI.

Agreed and, if anything, you are too generous. They aren’t just not “close”, they aren’t even working in the same category as anything that might be construed as independently intelligent.

I agree with you, but that’s kindof beside the point. Open AI’s thesis is that they will work towards AGI, and eventually succeed. In the context of that premise, Open AI still doesn’t believe AGI would be winner-takes-all. I think that’s an interesting discussion whether you believe the premise or not.

I agree with you

I wonder, do you have a hypothesis as to what would be a measurement that would differentiate AGI vs Not-AGI?


Differentiating between AGI and non-AGI, if we ever get remotely close, would be challenging, but for now it's trivial. The defining feature of AGI is recursive self improvement across any field. Without self improvement, you're just regurgitating. Humanity started with no advanced knowledge or even a language. In what should practically be a heartbeat at the speed of distributed computing with perfect memory and computation power, we were landing a man on the Moon.

So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent. In fact you would prefer to feed it as minimal a series of the most primitive first principles as possible because it's certain that much of what we think is true is going to end up being not quite so -- the same as for humanity at any other given moment in time.

We could derive more basic principles, but this one is fundamental and already completely incompatible with our current direction. Right now we're trying to essentially train on the entire corpus of human writing. That is a de facto acknowledgement that the absolute endgame for current tech is simple mimicry, mistakes and all. It'd create a facsimile of impressive intelligence because no human would have a remotely comparable knowledge base, but it'd basically just be a glorified natural language search engine - frozen in time.


I mostly agree with you. But if you think about it mimicry is an aspect of intelligence. If I can copy you and do what you do reliably, regardless of the method used, it does capture an aspect of intelligence. The true game changer is a reflective AI that can automatically improve upon itself

So then it’s something exponentially more capable than the most capable human?

> So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent.

The first 22 years of life for a “western professional adult” is literally dedicated to a giant bootstrapping info dump


Your quote is a non sequitur to your question. The reason you want to avoid massive data dumps is because there are guaranteed to be errors and flaws. See things like AlphaGo vs AlphaGo Zero. The former was trained on the entirety of human knowledge, the latter was trained entirely on itself.

The zero training version not only ended up dramatically outperforming the 'expert' version, but reached higher levels of competence exponentially faster. And that should be entirely expected. There were obviously tremendous flaws in our understanding of the game, and training on those flaws resulted in software seemingly permanently handicapping itself.

Minimal expert training also has other benefits. The obvious one is that you don't require anywhere near the material and it also enables one to ensure you're on the right track. Seeing software 'invent' fundamental arithmetic is somewhat easier to verify and follow than it producing a hundred page proof advancing, in a novel way, some esoteric edge theory of mathematics. Presumably it would also require orders of magnitude less operational time to achieve such breakthroughs, especially given the reduction in preexisting state.


Think beyond software and current models

The moment after human birth the human agent starts a massive information gathering process - that no other system really expects much output from in a coherent way - for 5-10 years. Aka a “data dump”. Some of that data is good, and some of it is bad. This in turn leads to biases and to poor thinking models; everything that you described is also applicable to every intelligent system - including humans. So again, you're presupposing that there's some kind of perfect information benchmark that couldn't exist.

When that system comes out of the birth canal it already has embedded in it millions of years of encoded expectations predictability systems and functional capabilities that are going to grow independent of what the environment does (but will be certainly shaped in its interactions by the environment).

So no matter what, you have a structured system of interaction that must be loaded with previously encoded data (experience, transfer learning, etc.), and it doesn't matter what type of intelligent system you're talking about: there are foundational assumptions at the physical interaction layer that encode all previous time steps of evolution.

Said an easier way: a lobster, because of the encoded DNA that created it, will never have the same capabilities as a human, because it is structured to process information completely differently and their actuators don’t have the same type and level of granularity as human actuators.

Now assume that you are a lobster compared to a theoretical AGI in sensor-effector combination. Most likely it would be structured entirely differently than you are as a biological thing - but the mere design itself carries with it an encoding of structural information of all previous systems that made it possible.

So by your definition you’re describing something that has never been seen in any system and includes a lot of assumptions about how alternative intelligent systems could work - which is fair because I asked your opinion.


With due respect I do not think you're tackling the fundamental issue, which I do not think is particularly controversial: intelligence and knowledge are distinct things, with the latter created by the former. What we're aiming to do is to create an intelligent system, a system that can create fundamentally new knowledge, and not simply reproduce or remix it on demand.

The next time you're in the wilds, it's quite amazing to consider that your ancestors - millennia past, would have looked at, more or less, these exact same wilds but with so much less knowledge. Yet nonetheless they would discover such knowledge - teaching themselves, and ourselves, to build rockets, put a man on the Moon, unlock the secrets of the atom, and so much more. All from zero.

---

What your example and elaboration focus on is the nature of intelligence, and the difficulty in replicating it. And I agree. This is precisely why we want to avoid making the problem infinitely more difficult, costly, and time consuming by dumping endless amounts of knowledge into the equation.


Intelligence and knowledge being different things is quite the claim - namely it sounds like you're stuck in the Cartesian dualist world and haven't transitioned into statistical empiricism.

I’m curious what epistemological grounding you are basing your claim on


I don't understand how you can equate the two and reconcile the past. The individuals who have pushed society forward in this domain or that scarcely, if ever, had any particular knowledge edge. Cases like Ramanujan [1] exemplify such to the point of absurdity.

[1] - https://en.wikipedia.org/wiki/Srinivasa_Ramanujan


I'm not sure humans meet the definition here.

If you took the average human from birth and gave them only 'the most primitive first principles', the chance that they would have novel insights into medicine is doubtful.

I also disagree with your following statement:

> Right now we're trying to essentially train on the entire corpus of human writing. That is a de facto acknowledgement that the absolute endgame for current tech is simple mimicry

At worst it's complex mimicry! But I would also say that mimicry is part of intelligence in general and part of how humans discover. It's also easy to see that AI can learn things - you can teach an AI a novel language by feeding in a fairly small amount of words and grammar of example text into context.

I also disagree with this statement:

> One fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent

I don't think how something became intelligent should affect whether it is intelligent or not. These are two different questions.


> you can teach an AI a novel language by feeding in a fairly small amount of words and grammar of example text into context.

You didn't teach it; the model is still the same after you ran that. That is the same as a human following instructions without internalizing the knowledge: he forgets it afterward and didn't learn from what he performed. If that was all humans did then there would be no point in school etc., but humans do so much more than that.

As long as LLMs are like a human with Alzheimer's, they will never become a general intelligence. And following instructions is not learning at all; learning is building an internal model for those instructions that is more efficient and general than the instructions themselves. Humans do that, and that is how we manage to advance science and knowledge.


It depends what you count as learning - you told it something, and it then applied that new knowledge, and if you come back to that conversation in 10 years, it will still have that new knowledge and be able to use it.

Then when OpenAI does another training run it can also internalise that knowledge into the weights.

This is much like humans - we have short term memory (where it doesn't get into the internal model) and then things get baked into long term memory during sleep. AI's have context-level memory, and then that learning gets baked into the model during additional training.

Although whether or not it changed the weights IMO is not a prerequisite for whether something can learn something or not. I think we should be able to evaluate if something can learn by looking at it as a black box, and we could make a black box which would meet this definition if you spoke to an LLM and limited it to its max context length each day, and then ran an overnight training run to incorporate learned knowledge into weights.
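
A toy sketch of that black-box loop (short-term context during the day, an overnight step that folds it into long-term state); the ToyLearner class here is a hypothetical stand-in for a real model, just to show the shape of the argument:

    # Toy stand-in for the black box described above: a "model" that answers
    # from a bounded short-term context first, plus an overnight step that
    # bakes the day's context into persistent long-term state. Not a real LLM API.
    from collections import deque
    from typing import Optional

    class ToyLearner:
        def __init__(self, context_limit: int = 4) -> None:
            self.weights: dict = {}                             # long-term "weights"
            self.context: deque = deque(maxlen=context_limit)   # short-term context

        def chat(self, key: str, value: Optional[str] = None) -> Optional[str]:
            """Teach a fact (key, value) or recall one, preferring recent context."""
            if value is not None:
                self.context.append((key, value))
                return None
            for k, v in reversed(self.context):
                if k == key:
                    return v
            return self.weights.get(key)

        def overnight_training(self) -> None:
            """Fold everything learned in context into the weights, then forget."""
            self.weights.update(dict(self.context))
            self.context.clear()

    if __name__ == "__main__":
        m = ToyLearner()
        m.chat("favourite_editor", "ed")   # learned only in context
        m.overnight_training()             # now "baked into the weights"
        print(m.chat("favourite_editor"))  # still recalled after the context is gone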


It's not much help but when I read "AGI" I picture a fish tank with brains floating in it.

Interesting but I’m not sure very instructive

When it can start wars over resources.

Seems as good a difference as any

So now? Trump generated his tariff list with ChatGPT

On its own.

https://www.noemamag.com/artificial-general-intelligence-is-...

Here is a mainstream opinion about why AGI is already here, written by one of the authors of the most widely read AI textbook, Artificial Intelligence: A Modern Approach: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...


Why does the Author choose to ignore the "General" in AGI?

Can ChatGPT drive a car? No, we have specialized models for driving vs generating text vs image vs video etc etc. Maybe ChatGPT could pass a high school chemistry test but it certainly couldn't complete the lab exercises. What we've built is a really cool "Algorithm for indexing generalized data", so you can train that Driving model very similarly to how you train the Text model without needing to understand the underlying data that well.

The author asserts that because ChatGPT can generate text about so many topics that it's general, but it's really only doing 1 thing and that's not very general.


There are people who can’t drive cars. Are they not general intelligence?

I think we need to separate the thinking part of intelligence from tool usage. Not everyone can use every tool at a high level of expertise.


Generally speaking, anyone can learn to use any tool. This isn't true of generative AI systems which can only learn through specialized training with meticulously curated data sets.

People physically unable to use the tool can't learn to use it. This isn't necessarily my view, but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.

> but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.

Of course they can. We already have computer controlled car systems, the reason LLMs aren't used to drive them is because AI systems that specialize in text are a poor choice for driving - specialized driving models will always outperform them for a variety of technical reasons.


We have computer-controlled automobiles, not LLM-controlled automobiles.

That was my whole point. Maybe in theory an LLM could learn to drive a car, but they can't today because they don't physically have access to cars they could try to drive just like a person who can't learn to use a tool because they're physically limited from using it.


It doesn't make sense to connect a LLM to a car, that could never work because they are trained offline using curated data sets.

> can only learn through specialized training with meticulously curated data sets.

but so do I!


This isn't true. A curated data set can greatly increase learning efficiency in some cases, but it's not strictly necessary and represents only a fraction of how people learn. Additionally, all curated data sets were created by humans in the first place, a feat that language models could never achieve if we did not program them to do so.

Generality is a continuous value, not a boolean; it turned out that "AGI" was poorly defined, and because of that most people were putting the cut-off threshold in different places.

Likewise for "intelligent", and even "artificial".

So no, ChatGPT can't drive a car*. But it knows more about car repairs, defensive driving, global road features (geoguesser), road signs in every language, and how to design safe roads, than I'm ever likely to.

* It can also run Python scripts with machine vision stuff, but sadly that's still not sufficient to drive a car… well, to drive one safely, anyway.


Text can be a carrier for any type of signal. The problem gets reduced to that of an interface definition. It’s probably not going to be ideal for driving cars, but if the latency, signal quality, and accuracy is within acceptable constraints, what else is stopping it?

This doesn’t imply that it’s ideal for driving cars, but to say that it’s not capable of driving general intelligence is incorrect in my view.


You can literally today prompt ChatGPT with API instructions to drive a car, then feed it images of the view out of a car's windows and have it generate commands for the car (JSON schema restricted structured commands if you like). Text can represent any data, thus yes, it is general.
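
As a toy illustration of what "JSON schema restricted" output could look like here (the schema and field names are hypothetical, and validation uses the third-party jsonschema package rather than any particular vendor's structured-output API):

    # Toy sketch of schema-restricted driving "commands": whatever the model
    # emits is rejected unless it matches a fixed, machine-checkable shape.
    # Field names are hypothetical; validation uses the third-party
    # `jsonschema` package, not any particular vendor's structured-output API.
    import json
    import jsonschema

    CAR_COMMAND_SCHEMA = {
        "type": "object",
        "properties": {
            "steering_angle_deg": {"type": "number", "minimum": -35, "maximum": 35},
            "throttle": {"type": "number", "minimum": 0, "maximum": 1},
            "brake": {"type": "number", "minimum": 0, "maximum": 1},
        },
        "required": ["steering_angle_deg", "throttle", "brake"],
        "additionalProperties": False,
    }

    def parse_command(model_output: str) -> dict:
        """Parse and validate one command; raises if the model goes off-script."""
        command = json.loads(model_output)
        jsonschema.validate(command, CAR_COMMAND_SCHEMA)
        return command

    if __name__ == "__main__":
        print(parse_command('{"steering_angle_deg": -2.0, "throttle": 0.1, "brake": 0.0}'))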

> JSON schema restricted structured commands if you like

How about we have ChatGPT start with a simple task like reliably generating JSON schema when asked to.

Hint: it will fail.


ChatGPT can write a working Python script to generate the Json. It can call a library to do that.

But it cannot think on its own! Billions of years of evolution couldn't bring human-level 'AGI' to many, many species, and we think a mere LLM company could do so. AGI isn't just a language model; there are tons of things baked into DNA (the way the brain functions, its structure as it grows, etc.). It's not simply neuron interactions either. The complexity is mind-boggling.

Humans and other primates are only a million years apart. Animals are quite intelligent.

The latest models are natively multimodal. Gemini, GPT-4o, Llama 4.

Same model trained on audio, video, images, text - not separate specialized components stitched together.


> AGI is already here

Last time I checked, in an Anthropic paper, they asked the model to count something. They examined the logits and a graph showing how it arrived at the answer. Then they asked the model to explain its reasoning, and it gave a completely different explanation, because that was the most statistically probable response to the question. Does that seem like AGI to you?


That's exactly what I would expect from a lot of people. Post factum rationalization is a thing.

Exactly. A lot of these arguments end up dehumanizing people because our own intelligence doesn’t hit the definition

There is no post factum rationalization here. If you ask a human to think about how they do something before they do it, there's no post factum rationalization. If you ask an LLM to do the same, it will give you a different answer. So, there is a difference. It's all about having knowledge of your internal state and being conscious of your actions and how you perform them, so you can learn from that knowledge. Without that, there is no real intelligence, just statistics.

If you ask a human to think about how to do a thing, before they do it, then you will also get a different answer.

There’s a good reason why schools spend so much time training that skill!


Yes, humans can post-rationalize. But an LLM does nothing but post-rationalize: as you yourself admitted, humans can think something through beforehand and then actually do what they planned, while an LLM won't follow that plan mentally.

It is easy to see why, since the LLM doesn't communicate what it thinks; it communicates what it thinks a human would communicate. A human would explain their inner process, and then go through that inner process. An LLM would explain a human's inner process, and then generate a response using a totally different process.

So while it's true that humans don't have perfect introspection, the fact that we have introspection about our own thoughts at all is extremely impressive. An LLM has no part that analyzes its own thoughts the way humans do, meaning it has no clue how it thinks.

I have no idea how you would even build introspection into an AI - like, how are we able to analyze our own thoughts? What is even a thought? What would this introspection part of an LLM do, what would it look like, would it identify thoughts and talk about them the way we do? That would be so cool, but it is not even on the horizon. I doubt we will ever see it in our lifetime; it would need some massive insight, changing the AI landscape at its core, to get there.

But once you have that introspection, I think AGI will happen almost instantly. Currently we use dumb math to train the model; that introspection would let the model train itself in an intelligent way, just like humans do. I also think it will never fully replace humans without introspection; intelligent introspection seems like a fundamental part of general intelligence and learning from chaos.


I would argue that this is a fringe opinion that has been adopted by a mainstream scholar, not a mainstream opinion. Either that, or, based on my reading of the article, this person is using a definition of AGI that is very different from the one most people use when they say AGI.

"AGI is already here, just wait 30 more years". Not very convincing.

... that was written in mid-2023. So that opinion piece is trying to redefine 2 year old LLMs like GPT-4 (pre-4o) as AGI. Which can only be described as an absolutely herculean movement of goalposts.

Please, keep telling people that. For my sake. Keep the world asleep as I take advantage of this technology which is literally General Artificial Intelligence that I can apply towards increasing my power.

Every tool is a technology than can increase ones power.

That is just what it wants you to think.

Their multimodal models are a rudimentary form of AGI.

EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".

https://arxiv.org/abs/2311.02462


Ah! Like Full Self Driving!

Goalpost moving.

Thank you.

"AGI" was already a goalpost move from "AI" which has been gobbled up by the marketing machine.


Nothing to do with moving the goalposts.

This is current research. The classification of AGI systems is currently being debated by AI researchers.

It's a classification system for AGI, not a redefinition. It's a refinement.

Also there is no universally accepted definition of AGI in the first place.


AGI would mean something which doesn't need direction or guidance to do anything. Like us humans, we don't wait for somebody to give us a task and go do it as if that is our sole existence. We live with our thoughts, blank out, watch TV, read books etc. What we currently have and possibly in the next century as well will be nothing close to an actual AGI.

I don't know if it is optimism or delusions of grandeur that drives people to make claims like AGI will be here in the next decade. No, we are not getting that.

And what do you think would happen to us humans if such AGI is achieved? People's ability to put food on the table depends on their labor exchanged for money. I can guarantee for a fact that work will still be there, but will it be equitable? Available to everyone? Absolutely not. Even UBI isn't going to cut it, because even with UBI people still want to work, as experiments have shown. But with that, the majority of work won't be there, especially the paper-pushing mid-level BS like managers on top of managers, etc.

If we actually get AGI, you know what would be the smartest thing for such an advanced thing to do? It would probably kill itself because it would come to the conclusion that living is a sin and a futile effort. If you are that smart, nothing motivates you anymore. You will be just a depressed mass for all your life.

That's just how I feel.


I think there's a useful distinction that's often missed between AGI and artificial consciousness. We could conceivably have some version of AI that reliably performs any task you throw at it consistently with peak human capabilities, given sufficient tools or hardware to complete whatever that task may be, but lacks subjective experience or independent agency; I would call that AGI.

The two concepts have historically been inextricably linked in sci-fi, which will likely make the first AGI harder to recognize as AGI if it lacks consciousness, but I'd argue that simple "unconscious AGI" would be the superior technology for current and foreseeable needs. Unconscious AGI can be employed purely as a tool for massive collective human wealth generation; conscious AGI couldn't be used that way without opening a massive ethical can of worms, and on top of that its existence would represent an inherent existential threat.

Conscious AGI could one day be worthwhile as something we give birth to for its own sake, as a spiritual child of humanity that we send off to colonize distant or environmentally hostile planets in our stead, but isn't something I think we'd be prepared to deal with properly in a pre-post-scarcity society.

It isn't inconceivable that current generative AI capabilities might eventually evolve to such a level that they meet a practical bar to be considered unconscious AGI, even if they aren't there yet. For all the flak this tech catches, it's easy to forget that capabilities which we currently consider mundane were science fiction only 2.5 years ago (as far as most of the population was concerned). Maybe SOTA LLMs fit some reasonable definition of "emerging AGI", or maybe they don't, but we've already shifted the goalposts in one direction given how quickly the Turing test became obsolete.

Personally, I think current genAI is probably a fair distance further from meeting a useful definition of AGI than those with a vested interest in it would admit, but also much closer than those with pessimistic views of the consequences of true AGI tech want to believe.


One sci-fi example could be based on the replicators from Star Trek, which can synthesize any meal on demand.

It is not hard to imagine a "cooking robot" as a black box that — given the appropriate ingredients — would cook any dish for you. Press a button, say what you want, and out it comes.

Internally, the machine would need to perform lots of tasks that we usually associate with intelligence, from managing ingredients and planning cooking steps, to fine-grained perception and manipulation of the food as it is cooking. But it would not be conscious in any real way. Order comes in, dish comes out.

Would we use "intelligent" to describe such a machine? Or "magic"?


I immediately thought of Star Trek too, I think the ship's computer was another example of unconscious intelligence. It was incredibly capable and could answer just about any request that anyone made of it. But it had no initiative or motivation of its own.

Regarding "We could conceivably have some version of AI that reliably performs any task you throw at it consistently" - it is very clear to anyone who just looks at the recent work by Anthropic analyzing how their LLM "reasons" that such a thing will never come from LLMs without massive unknown changes - and definitely not from scale - so I guess the grandparent is absolute right that openai is nor really working on this.

It isn't close at all.


That's an important distinction.

A machine could be super intelligent at solving real world practical tasks, better than any human, without being conscious.

We don't have a proper definition of consciousness. Consciousness is infinitely more mysterious than measurable intelligence.


> AGI would mean something which doesn't need direction or guidance to do anything

There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".

ChatGPT can solve problems that it was not explicitly trained to solve, across a vast number of problem domains.

https://arxiv.org/pdf/2311.02462

The paper is summarized here https://venturebeat.com/ai/here-is-how-far-we-are-to-achievi...


This constant redefinition of what AGI means is really tiring. Until an AI has agency, it is nothing but a fancy search engine/auto completer.

I agree. AGI is meaningless as a term if it doesn't mean completely autonomous agentic intelligence capable of operating on long-term planning horizons.

Edit: because if "AGI" doesn't mean that... then what means that and only that!?


> Edit: because if "AGI" doesn't mean that... then what means that and only that!?

"Agentic AI" means that.

Well, to some people, anyway. And even then, people are already arguing about what counts as agency.

That's the trouble with new tech, we have to invent words for new stuff that was previously fiction.

I wonder, did people argue about whether "horseless carriages" were really carriages? And for "aeroplane", how many argued that "plane" didn't suit either the Latin or Greek etymology, for various reasons?

We never did rename "atoms" after we split them…

And then there's plain drift: Traditional UK Christmas food is the "mince pie", named for the filling, mincemeat. They're usually vegetarian and sometimes even vegan.


Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.

It's kind of a simple enough concept... it's really just something that functions on par with how we do. If you've built that, you've built AGI. If you haven't built that, you've built a very capable system, but not AGI.


> Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.

"Can", but not "must". The difference between an LLM being harnessed to be a customer service agent, or a code review agent, or a garden planning agent, can be as little as the prompt.

And in any case, the point was that the concept of "completely autonomous agentic intelligence capable of operating on long-term planning horizons" is better described by "agentic AI" than by "AGI".

> It's kind of a simple enough concept... it's really just something that functions on par with how we do.

"On par with us" is binary thinking — humans aren't at the same level as each other.

The problem we have with LLMs is the "I"*, not the "G". The problem we have with AlphaGo and AlphaFold is the "G", not the ultimate performance (which is super-human, an interesting situation given AlphaFold is a mix of Transformer and Diffusion models).

For many domains, getting a degree (or passing some equivalent professional exam) is just the first step, and we have a long way to go from there to being trusted to act competently, let alone independently. Someone who started a 3-year degree just before ChatGPT was released, will now be doing their final exams, and quite a lot of LLMs operate like they have just about scraped through degrees in almost everything — making them wildly superhuman with the G.

The G-ness of an LLM only looks bad when compared to all of humanity collectively; they are wildly more general in their capabilities than any single one of us — there are very few humans who can even name as many languages as ChatGPT speaks, let alone speak them.

* they need too many examples, only some of that can be made up for by the speed difference that lets machines read approximately everything


> Until an AI has agency, it is nothing but a fancy search engine/auto completer.

Stepping back for a moment - do we actually want something that has agency?


Who is "we"?

Vulture Capitalists, obviously

Unless you can define "agency", you're opening yourself to being called nothing more than a fancy chemical reaction.

It's not a redefinition, it's a refinement.

Think about it - the original definition of AGI was basically a machine that can do absolutely anything at a human level of intelligence or better.

That kind of technology wouldn't just appear instantly in a step change. There would be incremental progress. How do you describe the intermediate stages?

What about a machine that can do anything better than the 50th percentile of humans? That would be classified as "Competent AGI", but not "Expert AGI" or ASI.

> fancy search engine/auto completer

That's an extreme oversimplification. By the same reasoning, so is a person: they are just auto-completing words when they speak. No, that's not how deep learning systems work. It's not autocomplete.


> It's not a redefinition, it's a refinement

It's really not. The Space Shuttle isn't an emerging interstellar spacecraft, it's just a spacecraft. Throwing emerging in front of a qualifier to dilute it is just bullshit.

> By the same reasoning, so is a person. They are just auto completing words when they speak.

We have no evidence of this. There is a common trope across cultures and history of characterising human intelligence in terms of the era's cutting-edge technology. We did it with steam engines [1]. We did it with computers [2]. We're now doing it with large language models.

[1] http://metaphors.iath.virginia.edu/metaphors/24583

[2] https://www.frontiersin.org/journals/ecology-and-evolution/a...


Technically it is a refinement, as it distinguishes levels of performance.

The General Intelligence part of AGI refers to its ability to solve problems that it was not explicitly trained to solve, across many problem domains. We already have examples of current systems doing exactly that - zero-shot and few-shot capabilities.
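
For a rough illustration of that distinction (the prompts here are made up):

    # Zero-shot: only a task description, no worked examples.
    zero_shot = "Translate to French: 'The cat sleeps on the mat.'"

    # Few-shot: a handful of in-context examples, so the model can pick up
    # a pattern it was never explicitly fine-tuned on.
    few_shot = "\n".join([
        "English: Good morning. -> French: Bonjour.",
        "English: Thank you. -> French: Merci.",
        "English: The cat sleeps on the mat. -> French:",
    ])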

> We have no evidence of this.

That's my point. Humans are not "autocompleting words" when they speak.


> Technically it is a refinement, as it distinguishes levels of performance

No, it's bringing something out of scope into the definition. Gluten-free means free of gluten. Gluten-free bagel versus sliced bread is a refinement--both started out under the definition. Glutinous bread, on the other hand, is not gluten free. As a result, "almost gluten free" is bullshit.

> That's my point. Humans are not "autocompleting words" when they speak

Humans are not. LLMs are. It turns out that's incredibly powerful! But it's also limiting in a way that's fundamentally important to the definition of AGI.

LLMs bring us closer to AGI in the way the inventions of writing, computers and the internet probably have. Calling LLMs "emerging AGI" pretends we are on a path to AGI in a way we have zero evidence for.


> Gluten-free means free of gluten.

Bad analogy. That's a binary classification. AGI systems can have degrees of performance and capability.

> Humans are not. LLMs are.

My point is that if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans. It's such an oversimplification of the transformer / deep learning architecture that it becomes meaningless.


> That's a binary classification. AGI systems can have degrees of performance and capability

The "g" in AGI requires the AI be able to perform "the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans" [1]. Full and not full are binary.

> if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans

No, you can't, unless you're pre-supposing that LLMs work like human minds. Calling LLMs "emerging AGI" pre-supposes that LLMs are the path to AGI. We simply have no evidence for that, no matter how much OpenAI and Google would like to pretend it's true.

[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


Why are you linking a Wikipedia page like it's ground zero for the term? Especially when neither of the articles the page links to in order to justify that definition treats the term as a binary accomplishment.

The G in AGI is General. I don't know in what world generality isn't a spectrum, but it sure as hell isn't this one.


That's right, and the Wikipedia page refers to the classification system:

"A framework for classifying AGI by performance and autonomy was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman"

In the second paragraph:

"Some researchers argue that state‑of‑the‑art large language models already exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved."

The entire article makes it clear that the definitions and classifications are still being debated and refined by researchers.


Then you are simply rejecting any attempts to refine the definition of AGI. I already linked to the Google DeepMind paper. The definition is being debated in the AI research community. I already explained that definition is too limited because it doesn't capture all of the intermediate stages. That definition may be the end goal, but obviously there will be stages in between.

> No, you can't, unless you're pre-supposing that LLMs work like human minds.

You are missing the point. If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and the conceptual internal representations. These systems are deep learning models with hundreds of layers and trillions of weights. If you completely ignore all of that, then by the same reasoning (completely ignoring the complexity of the human brain) we can just say that people are auto-completing words when they speak.


> I already linked to the Google DeepMind paper. The definition is being debated in the AI research community

Sure, Google wants to redefine AGI so it looks like things that aren’t AGI can be branded as such. That definition is, correctly in my opinion, being called out as bullshit.

> obviously there will be stages in between

We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.

> If you reduce LLMs to "word autocompletion" then you completely ignore the the attention mechanism and conceptual internal representations. These systems have deep learning models with hundreds of layers and trillions of weights

Fair enough, granted.


> Sure, Google wants to redefine AGI

It is not a redefinition. It's a classification for AGI systems. It's a refinement.

Other researchers are also trying to classify AGI systems. It's not just Google. Also, there is no universally agreed definition of AGI.

> We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.

Generalization is a formal concept in machine learning. There can be degrees of generalized learning performance. This is actually measurable. We can compare the performance of different systems.
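
As a sketch of what "measurable" means here: the simplest version is a generalization gap, i.e. performance on data the system was trained on versus data it has never seen (scikit-learn on a toy dataset, purely illustrative):

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

    train_acc = model.score(X_tr, y_tr)   # accuracy on data it has seen
    test_acc  = model.score(X_te, y_te)   # accuracy on data it has not
    print(f"generalization gap: {train_acc - test_acc:.3f}")

The debate is about how far that notion stretches across unrelated problem domains, not about whether it can be quantified at all.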


It seems like you believe AGI won't come for a long time, because you don't want that to happen.

The Turing test was successfully passed. Pre-ChatGPT, I would not have believed that would happen so soon.

LLMs ain't AGI, sure. But they might be an essential part, and the missing pieces may already have been found, just not put together.

And there will always be plenty of work. Distributing resources might require new approaches, though.


While I also hold a peer comment's view that the Turing Test is meaningless, I would further add that even that has not been meaningfully beaten.

In particular, we redefined the test to make it passable. In Turing's original concept, the competent investigator and the participants were all actively expected to collude against the machine. The entire point is that even with collusion, the machine would be able to pass. Instead, modern takes have paired incompetent investigators with participants colluding with the machine, probably in an effort to be part of 'something historic'.

In "both" (probably more, referencing the two most high profile - Eugene and the large LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc' and the participants not only made no effort to make their humanity clear, but often were actively adversarial obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. And the tests are typically time constrained by woefully poor typing skills (this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of a few words each.

The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that.


I mean, I am pretty sure that I won't be fooled by a bot, if I get the time to ask the right questions.

And I have not looked into it (I also don't think the test has too much relevance), but fooling the average person sounds plausible by now.

Now, sounding plausible is what LLMs are optimized for, not being correct. Still, ten years ago I would not have thought we would get so far so quickly. So I am very hesitant about the future.


> The turing test was succesfull.

The very people whose theories about language are now being experimentally verified by LLMs, like Chomsky, have also been discrediting the Turing test as pseudoscientific nonsense since early 1990s.

It's one of those things like the Kardashev scale, or Level 5 autonomous driving, that's extremely easy to define and sounds very cool and scientific, but actually turns out to have no practical impact on anything whatsoever.


"but actually turns out to have no practical impact on anything whatsoever"

Bots that are now almost indistinguishable from humans won't have a practical impact? I am sceptical. And not just because of scammers.


> I can guarantee for a fact, that work will still be there but will it be equitable? Available to everyone?

I don't think there has ever been a time in history when work has been equitable and available to everyone.

Of course, that isn't to say that AI can't make it worse then it is now.


> AGI would mean something which doesn't need direction or guidance to do anything. Like us humans, ...

Name me a human that also doesn't need direction or guidance to do a task, at least one they haven't done before


> Name me a human that also doesn't need direction or guidance to do a task, at least one they haven't done before

Literally everything that's been invented.


I feel like, if nothing else, this new wave of AI products is rapidly demonstrating the lack of faith people have in their own intelligence -- or maybe, just the intelligence of other human beings. That's not to say that this latest round of AI isn't impressive, but legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles.

> legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles

To be fair, there is a section of the population whose useful intelligence can roughly be summed up as that or worse.


I think this takes an unnecessarily narrow view of what "intelligence" implies. It conflates "intelligence" with fact-retention and communicative ability. There are many other intelligent capabilities that most normally-abled human beings possess, such as:

- Processing visual data and classifying objects within their field of vision.

- Processing auditory data, identifying audio sources and filtering out noise.

- Maintaining an on-going and continuous stream of thoughts and emotions.

- Forming and maintaining complex memories on long-term and short-term scales.

- Engaging in self-directed experimentation or play, or forming independent wants/hopes/desires.

I could sit here all day and list the forms of intelligence that humans and other intelligent animals display which have no obvious analogue in an AI product. It's true that individual AI products can do some of these things, sometimes better than humans could ever, but there is no integrated AGI product that has all these capabilities. Let's give ourselves a bit of credit and not ignore or flippantly dismiss our many intelligent capabilities as "useless."


> It conflates "intelligence" with fact-retention and communicative ability

No, I’m using useful problem solving as my benchmark. There are useless forms of intelligence. And that’s fine. But some people have no useful intelligence and show no evidence of the useless kind. They don’t hit any of the bullets you list, there just isn’t that curiosity and drive and—I suspect—capacity to comprehend.

I don’t think it’s intrinsic. I’ve seen pets show more curiosity than some folk. But due to nature and nurture, they just aren’t intelligent to any material stretch.


Remember however that their charter specifies: "If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project"

It does have some weasel words around "value-aligned" and "safety-conscious", which they can always argue about, but this could get interesting because they've basically agreed not to compete. A fairly insane thing to do, in retrospect.


They will just define away all of those terms to make that not apply.

Who defines "value-aligned, safety-conscious project"?

"Instead of our current complex non-competing structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal competing structure where ..." is all it takes


Most likely the same people who define "all natural chicken" - the company that creates the term.

I actually lol-ed at that. It's like asking the inventor of a religion who goes to heaven.

AGI could be a winner-take-all market... for the AGI, specifically for the first one that's General and Intelligent enough to ensure its own survival and prevent competing AGI efforts from succeeding...

How would an AGI prevent others from competing? Sincere question. That seems like something that ASI would be capable of. If another company released an AGI, how would the original stifle it? I get that the original can self-improve to try to stay ahead, but that doesn't necessarily mean it self-improves the best or most efficiently, right?

AGI used to be synonymous with ASI; it's still unclear to me it's even possible to build a sufficiently general AI - that is, as general as humans - without it being an ASI just by virtue of being in silico, thus not being constrained in scale or efficiency like our brains are.

Well, it could pretend to be playing 4d chess and meanwhile destroy the economy and from there take over the world.

If it was first, it could have self-improved more, to the point that it has the capacity to prevent competition, while the competition does not have the capacity to defend itself against superior AGI. This all is so hypothetical and frankly far from what we're seeing in the market now. Funny how we're all discussing dystopian scifi scenarios now.

Homo Sapiens wiped out every other intelligent hominid and every other species on Earth exists at our mercy. That looks a lot like the winners (humans) taking all.

Well, yeah, the world in which it is winner-take-all is the one where it accelerates productivity so much that the first firm to achieve it doesn't provide access to its full capabilities directly to outsiders, but uses it itself and conquers every other field of endeavor.

That's always been pretty overtly the winner-take-all AGI scenario.


You can say the same thing about big companies hiring all the smart people and somehow we think that's ok.

AGI can be winner take all. But winner take all AGI is not aligned with the larger interests of humanity.

Modern corporations don't seem to care about humanity...

AGI might not be fungible. From the trends today it's more likely there will be multiple AGIs with different relative strengths and weakness, different levels of accessibility and compliance, different development rates, and different abilities to be creative and surprising.

With stuff like this, they like the team that built it and want them on their side.

If you believe the founders/team is the right group to deliver a core product, 3b is nothing to get them onside.


There is literally never a context in which 3b is “nothing”. Not even for a company that just raised 40b.


If investors think that the acquisition will increase the valuation of the company by 3B (or more), and the acquisition is paid fully or mostly in stock, then one gets it basically for free.


The US won't end up in the situation that the UK finds itself in, because the land it occupies is some of the most productive land on earth, at a similar scale to China. The incredible wealth of America is that it's a land mass that in the old world would be supporting 500m-1.5b people, but is divided among only 350m.

The city at the confluence of the Missouri and the Mississippi would, if it were in Europe, be the center of a major civilization. In the US it's Saint Louis. The US, CA, and AU have an option few countries do -- at any point they want nominal GDP growth, all they have to do is open the door.

I agree with you, though, that China's incredibly impressive.


I'm not super knowledgeable about AU, but my gut impression is that it doesn't have much in the way of natural advantages.

Yes, it's lowpop, but that's about it.

Somewhat similar for CA, but CA is a lot better off than AU.


Out of those three, Australia's economy is proportionally the most reliant on natural resources. Its mining sector alone is a bit over 10% of overall GDP.


Yeah but those don't exactly sustain a large, high HDI population.

They bring money and wreck the environment, which gets less pushback in Australia because very few live in the mining areas.


Didn't you listen to the 70-year-olds planning this? We're just going to have the robots do it.

You know how people said Putin was surrounded by an echo chamber and that's how he got stuck in Ukraine? That's the US now, but with billionaire VCs and second-tier 1980s NYC real estate developers. Look at their numbers and listen to them talk: as a group they're genuinely not grounded in reality, and there's no fixing that.


It's really far less cash than you think. If they make a central clearing house for digital ad sales and anoint an EU-wide cloud leader, they'll have two trillion-dollar companies within a few years with little innovation needed. Then they can use the funds and talent from that to build a tech industry; this is exactly what China did.


In a stable society where you don't tax wealth, this is the inevitable outcome.


We do tax wealth, of course - where wealth is defined as a primary residence. No need to remind me about 'services'. Countries tax the plebs by constantly changing the definitions of income and wealth.


Some of the largest Scala codebases under active development make heavy use of this feature.


Sure, but the argument here is that Scala.js failed to grow the ecosystem. It doesn't convince people who aren't already convinced.


Agreed. Intuitively, referential transparency + strong types 'feels' like it should make for the best AI programming language.

And exhaustive pattern matching + result types should help as well, pushing issues out of runtime.

