How do you know that is actually their current true goal? It appears to me that the goal has shifted, and the people who cared about the original goal are leaving.
An AGI could replace human experts at tasks that don't require physical embodiment, like diagnosing patients, drafting contracts, doing your taxes, etc. If you still do those manually rather than offloading all of it to ChatGPT, then you would greatly benefit from a real AGI that could do those tasks on its own.
And no, using ChatGPT the way you use a search engine isn't ChatGPT solving your problem; that is you solving your problem. ChatGPT solving your problem would mean it drives you, not you driving it like it works today. When I hired people to help me do my taxes, they told me what papers they needed and then did my taxes correctly without me having to check their work and correct them. An AGI would work like that for most tasks, which means you no longer need to think or learn to solve problems, since the AGI solves them for you.
> An AGI could replace human experts at tasks that don't require physical embodiment, like diagnosing patients, drafting contracts, doing your taxes, etc.
How come the goal posts for AGI are always the best of what people can do?
I can't diagnose anyone, yet I have GI.
Reminds me of:
> Will Smith: Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?
> How come the goal posts for AGI are always the best of what people can do?
Not the best, I just want it to be able to do what average professionals can do because average humans can become average professionals in most fields.
> I can't diagnose anyone, yet I have GI.
You can learn to, and an AGI system should be able to learn to as well. And since we can copy what an AGI has learned, if it hasn't learned to diagnose people yet then it probably isn't an AGI: an AGI should be able to learn that without humans changing its code, and once it has learned it once, we can copy that forever and now every instance of the AGI knows how to do it.
So, the AGI should be able to do all the things you could do if we include all versions of you that learned different fields. If the AGI can't do that then you are more intelligent than it in those areas, even if the singular you isn't better at those things than it is.
For these reasons it makes more sense to compare an AGI to humanity rather than individual humans, because for an AGI there is no such thing as "individuals", at least not the way we make AI today.
If they can't learn, then they don't have general intelligence; without learning, there are many problems you won't be able to solve that average (or even very dumb) people can solve.
Learning is a core part of general intelligence, as general intelligence implies you can learn about new problems in order to solve them. Take that away and you are no longer a general problem solver.
That's a really good point. I want to define what I think intelligence is, so we are on the same page: it is the combination of knowledge and reason. An example of a system with high knowledge and low reason is Wikipedia. An example of a system with high reason and low knowledge is a scientific calculator. A highly intelligent system exhibits aspects of both.
A rule based expert intelligence system can be highly intelligent, but it is not general, and maybe no arrangement of rules could make one that is general. A general intelligence system must be able to learn and adapt to foreign problems, parameters, and goals dynamically.
Yes, I think that makes sense: you can be intelligent without being generally intelligent. By some definitions a person with Alzheimer's can be more intelligent than someone without it, but the person without it is more generally intelligent thanks to the ability to learn.
The classical example of a general-intelligence task is to be given the rules of a new game and then play it adequately; there are AI contests for that. That is easy for humans to do, since games are enjoyed even by dumb people, but we have yet to make an AI that can play arbitrary games as well as even dumb humans can.
Note that LLMs are more general than previous AIs thanks to in-context learning, so we are making progress, but we are still far from as general as humans are.
Let's take a step back from LLMs. Could you accept the network of all interconnected computers as a generally intelligent system? The key part here that drives me to ask this is:
> ChatGPT solving your problem would mean it drives you, not you driving it like it works today.
I had a very bad Reddit addiction in the past. It took me years of consciously trying to quit in order to break the habit. I think I could make a reasonable argument that Reddit was using me to solve its problems, rather than myself using it to solve mine. I think this is also true of a lot of systems - Facebook, TikTok, YouTube, etc.
It's hard to pin down all computers as an "agent" in the way we like to think about that word and assign some degree of intelligence to, but I think it is at least an interesting exercise to try.
Companies are general intelligences and they use people, yes. But that depends on humans interpreting the data Reddit users generate and updating their models, code, and algorithms to adapt to that data; the computer systems alone aren't general intelligences if you remove the humans.
An AGI could run such a company without humans anywhere in the loop, just like humans can run such a company without an AGI helping them.
I'd say a strong signal that AGI has happened would be large, fully automated companies without a single human decision-maker, no CEO, etc. Until that has happened I'd say AGI isn't here; if it does happen it could be AGI, but I can also imagine a good-enough script doing it for some simple business.
The simplest answer, without adding any extraordinary capabilities to the AGI that veer into magical intelligence, is to have AI assistants that can seamlessly interact with technology the way a human assistant would.
So, if you want to meet with someone, instead of opening your calendar app and looking for an opening, you'd ask your AGI assistant to talk to their AGI assistant and set up a 1h meeting soon. Or, instead of going on Google to find plane tickets, you'd ask your AGI assistant to find the most reasonable tickets for a certain date range.
This would not require any special intelligence more advanced than a human's, but it does require a very general understanding of the human world that is miles beyond what LLMs can achieve today.
Going only slightly further with assumptions about how smart an AGI would be, it could revolutionize education, at any level, by acting as a true personalized tutor for a single student, or even for a small group of students. The single biggest problem in education is that it's impossible to scale the highest quality education - and an AGI with capabilities similar to a college professor would entirely solve that.
The examples you're providing seem to have been thoroughly solved already.
I'm at the European AI Conference for our startup tomorrow, and they use a platform that just booked me 3 meetings automatically with other people there based on our availability... It's not rocket science.
And you don't even need those narrow tools. You could easily ask GPT-4o (or lesser versions) something along the lines of:
> "you're going to interact with another AI assistant to book meetings for me: [here would be the details about the meeting]. Come up with a protocol that you'll send to the other assistant so it can understand what the meetings are about, communicate their availability to you, etc. I want you to come up with the entire protocol, send it, and communicate with the other assistant end-to-end. I won't be available to provide any more context; I just want the meeting to be booked. Go."
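For what it's worth, here is a minimal sketch (Python, purely illustrative) of the kind of structured exchange that prompt asks the model to invent on the fly; the dataclass and field names are hypothetical, not from any real product or standard:

    # Hypothetical protocol two assistants might agree on; illustrative only.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class MeetingRequest:
        organizer: str                  # email address of the requesting party
        attendee: str                   # email address of the other party
        subject: str
        duration_minutes: int
        proposed_slots: list[datetime]  # organizer's preferred start times

    @dataclass
    class MeetingReply:
        accepted_slot: datetime | None                               # a proposed slot that fits, if any
        counter_slots: list[datetime] = field(default_factory=list)  # otherwise, counter-proposals

    # The assistants would exchange these until accepted_slot is set, then each
    # writes the event into its own user's calendar.

Inventing a format like this is the easy part; carrying the exchange through against real calendars and inboxes is where the work is.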
GPT-4(o) lacks the ability to connect to any of the tools needed to achieve what I'm describing. Sure, it maybe could give instructions about how this could be done, but it can't actually do it. It can't send an email to your email account, and it can't check your incoming emails to see if any arrived asking for a meeting. It can't then check your calendar, and propose another email, or book a time if the time is available. It doesn't know that you normally take your lunch at some time, so that even though the spot is free, you wouldn't want a meeting at that time. And even if you did take the considerable amount of effort to hook it up with all of these systems, its failure rate is still far too high to rely on it for such a thing.
And getting it to actually buy stuff like plane tickets on your behalf would be entirely crazy.
Sure, it can be made to do some parts of this for very narrowly defined scenarios, like the specific platform of a single three-day conference. But it's nowhere near good enough for dealing with the general case of the messy real world.
I had a (human) assistant in my previous business, super-smart MBA type, and by your definition she wasn't a general intelligence on the day of onboarding:
- she didn't have access to my email account or calendar
- she didn't know my usual lunch time hours
- she didn't have a company card yet.
All of those points you're raising are logistics, not intelligence.
Intelligence is "When trying to achieve a goal, can you conceive of a plan to get there despite adverse conditions, by understanding them and charting/reviewing a sequence of actions".
You can definitely be an intelligent entity without hands or tools.
I'm pretty certain your assistant learned to do all of those things more or less on her own. Of course, you shared your schedule and email with her, and similarly, you'd have to share your schedule and email with an AGI.
But you certainly didn't have to write a special program for your assistant to integrate with your inbox; she just used an existing email/calendar client and looked at her screen.
GPT-4 is nowhere near able to interact with, say, the Gmail web page at this level. And even if you created the proper integrations, it's nowhere near the level that it could read all incoming email and intelligently decide, with high accuracy, which emails necessitate updates to your calendar, which don't, and which necessitate back-and-forth discussions to negotiate a better date for you.
Sure, your assistant didn't know all of this on day one, but she learned how to do it on her own, presumably with a few dozen examples at most. That is the mark of a general intelligence.
I think we're disagreeing on the current capacity of models, as much as we're disagreeing about the definition of AGI.
I'm pretty sure, from previous interactions with GPT-4o and from their demos, that if you used their desktop app (which enables screensharing) and asked it to tell you where to click, step-by-step, in the Gmail web page, it would be able to do a pretty good job of navigating through it.
Let's remember that the Gmail UI is one of the most heavily documented (in blogs, FAQs, support pages, etc) in the world. I can't see GPT-4o having any difficulty locating elements in there.
I think the intelligence part is thinking of any potential logistical obstacles and figuring out ways to deal with them with minimal disruption, except when some disruption is necessary because of conflicts with other goals.
> Sure, it maybe could give instructions about how this could be done [...]
If you were in a room with no computer, would you consider yourself to be not intelligent enough to send an email? Does the tooling you have access to change your level of intelligence?
This is definitely an interesting way to look at it. My initial reaction is to consider that I can enhance the capabilities of a system without increasing its intelligence. For example, if I give a monkey a hammer, it can do more than it could do when it didn't have the hammer, but it is not more intelligent (though it could probably learn things by interacting with the world with the hammer). That leads me to think: can we enhance the capabilities of what we call "AI systems" to do these things, without increasing their intelligence? It seems like you can glue GPT-4o to some calendar APIs to do exactly this. This seems more like an issue of tooling than an issue of intelligence to me.
I guess the issue here is: can a system be "generally intelligent" if it doesn't have access to general tools to act on that intelligence? I think so, but I also can see how the line is very fuzzy between an AI system and the tools it can leverage, as really they both do information processing of some sort.
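To make the "glue" idea concrete, here is a minimal sketch assuming the OpenAI Python client's tool-calling interface; find_free_slot is a hypothetical stand-in for whatever calendar API you would actually wire up:

    # Sketch of gluing a chat model to a calendar via tool calling.
    # find_free_slot is a hypothetical placeholder for a real calendar lookup.
    import json
    from openai import OpenAI

    client = OpenAI()

    def find_free_slot(duration_minutes: int, date: str) -> str:
        # Placeholder: query your calendar provider and return a free start time.
        return f"{date}T14:00"

    tools = [{
        "type": "function",
        "function": {
            "name": "find_free_slot",
            "description": "Return a free calendar slot of the given length on a given date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "duration_minutes": {"type": "integer"},
                    "date": {"type": "string", "description": "YYYY-MM-DD"},
                },
                "required": ["duration_minutes", "date"],
            },
        },
    }]

    messages = [{"role": "user", "content": "Book a 60 minute meeting with Alice on 2024-07-01."}]
    first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

    # If the model asked to call the tool, run it and hand the result back.
    if first.choices[0].message.tool_calls:
        messages.append(first.choices[0].message)
        for call in first.choices[0].message.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": find_free_slot(**args)})
        final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        print(final.choices[0].message.content)

The model only decides which function to call and with what arguments; everything it can actually touch has to be wired up by hand, which is the tooling-versus-intelligence distinction.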
I'm sure some aspects of this can be achieved by manually programming GPT-4 links to other specific services. And obviously, some interaction tools would have to be written manually even for an AGI.
The difference though is the amount of work. Today if you wanted GPT-4 to work as I describe, you would have to write an integration for Gmail, another one for Office365, another one for Proton etc. You would probably have to create a management interface to give access to your auth tokens for each of these to OpenAI so they can activate these interactions. The person you want to sync with would have to do the same.
In contrast, an AGI that only has average human intelligence, or even below, would just need access to, say, Firefox APIs, and should easily be able to achieve all of this. And it would work regardless of whether the other side is a different AGI using a different provider, or even just a regular human assistant.
> The single biggest problem in education is that it's impossible to scale the highest quality education
Do you work in education? Because I don't think many who do would agree with this take.
Where I live, the single biggest problem in education is that we can't scale staffing without increasing property taxes, and people don't want to pay higher property taxes. And no, AGI does not fix this problem, because you need staff to be physically present in schools to deal with children.
Even if we had an AGI that could do actual presentation of coursework and grading, you need a human being in there to make sure they behave and to meet the physical needs of the students. Humans aren't software to program around.
Having an individual tutor for each child is not often discussed because it is self-evidently impossible at any cost whatsoever - it would require far too high a percentage of a country's workforce to be dedicated to education. But it is what's most responsible for the difference between the education the elites get, especially the elites of the past, and education for the general population.
Sure, this doesn't mean you could just fire all teachers and dissolve all schools. You still need people to physically be there and interact with the children in various ways. But if you could separate the actual teaching from the child care part, and if you could design individualized courses for each child with something approaching the skill of the best teachers in the whole world, you would get an inconceivably better educational system for the entire population.
And I don't need to work in education to know much of this. Like everyone else, I was intimately acquainted with the educational system (in my country) for 16 years of my life through direct experience, and with much more of it since then through increasingly less direct experience. I have very good and very direct experience of the variance between teachers and the impact that has on how well students understand and interact with the material.
That's like claiming you know how to run a restaurant because you like to eat out. Or worse actually, since you're extrapolating your individual experience from a small set of educational systems to education as a whole.
If you're looking for insight into the problems faced in education, speak to educators. I really doubt they would tell you that the quality of individual instructors is their biggest problem.
Educators don't like to discuss the performance of other educators, just as most professionals don't like to diss their colleagues, especially not in front of their customers. But the quality of educators is absolutely a huge problem, so huge that there are even well-worn sayings about it (those who can, do; those who can't, teach). So huge that one of the most well-known rock anthems of all time is about the poor quality of educators (Pink Floyd's Another Brick in the Wall Part II).
Educators are the best people to ask about how to make their jobs easier. They are not necessarily the best people to ask about how to make children's education better.
Edit:
> That's like claiming you know how to run a restaurant because you like to eat out.
No, it's like claiming you know some things about the problems of restaurants, and about the difference between good and bad restaurants, after spending 8+ hours a day almost every day, for 16 years, eating out at restaurants. Which I think would be a decent claim.
> This would not require any special intelligence more advanced than a human's, but it does require a very general understanding of the human world that is miles beyond what LLMs can achieve today.
Does it? I am quite certain those things are achievable right now without anything like AI in the sense being discussed here.
Show me one product that can offer me an AI assistant that can set up a meeting with you at a time that doesn't contradict any of our plans, given only my and your email address.
I've never looked into actual products as this isn't something I'm interested in. I'm just saying that accomplishing this can be done without involving AI of the sort being discussed here. I'm not sure what such AI would bring to the table for this sort of task.
> given only my and your email address.
AI or not, such an application would need more than just email addresses. It would need access to our schedules.
My point is that an AGI would give you this use case for free. Currently this kind of product, AI or not, simply doesn't exist. It's in principle doable, but the number of integrations required makes it uneconomical. An AGI assistant could use the same messy interfaces we use, and thus it would be compatible with every email provider and client ever created.
> AI or not, such an application would need more than just email addresses. It would need access to our schedules.
It needs access to my schedule, yes, but it only needs your email address. It can then ask you (or your own AGI assistant) if a particular date and time is convenient. If you then propose another time, it can negotiate appropriately.
A working memory that can preserve information indefinitely outside a particular context window and which can engage in multi-step reasoning that doesn't show up in its outputs.
GPT-4o's context window is 128k tokens, which is somewhere on the order of a few hundred kilobytes of text. Your brain's context window, all the subliminal activations from the nerves in your gut and the parts of your visual field you aren't necessarily paying attention to, is on the order of 2MB. So a similar order of magnitude, though GPT has a sliding window and your brain has more of an exponential decay in activations. That LLMs can accomplish everything they do just with what seems analogous to human reflex rather than human reasoning is astounding and more than a bit scary.
That comes from looking up an estimate of the brain's input bandwidth at 10 million bits per second and multiplying by the second or two a subliminal stimulus can continue to affect a person's behavior. This is a very crude estimate and probably an order of magnitude off, but I don't think it's many orders of magnitude off.
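Spelling that out (the 10 Mbit/s bandwidth and the one-to-two-second persistence are the assumptions above, not measured values):

    # Back-of-the-envelope version of the estimate above.
    input_bandwidth_bits_per_s = 10_000_000   # assumed brain input bandwidth
    persistence_seconds = 2                   # assumed time a subliminal stimulus keeps acting
    window_bytes = input_bandwidth_bits_per_s * persistence_seconds / 8
    print(window_bytes / 1e6, "MB")           # ~2.5 MB, i.e. on the order of 2 MB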
Solve CO2 Levels
End sickness/death
Enhance cognition by integrating with willing minds.
Safe and efficient interplanetary travel.
Harness vastly higher levels of energy (solar, nuclear) for global benefit.
Science:
Uncover deeper insights into the laws of nature.
Explore fundamental mysteries like the simulation hypothesis, Riemann hypothesis, multiverse theory, and the existence of white holes.
Effective SETI
Misc:
End of violent conflicts
Fair yet liberal resource allocation (if still needed), "from scarcity to abundance"
The problem with CO2 levels is that no one likes the solution, not that we don't have one. I highly doubt adding AGI to the mix is going to magically make things better. If anything we'll just emit more CO2 providing all the compute resources it needs.
People want their suburban lifestyle with their red meat and their pick-up truck or SUV. They drive fuel-inefficient vehicles long distances to urban work environments and they seem to have very limited interest in changing that. People who like detached homes aren't suddenly able to afford the rare instances of them closer to their work. We burn lots of oil because we drive fuel-inefficient vehicles long distances. This is a problem of changing human preferences, which you just aren't going to solve with an AGI.
Assuming embedded AI in every piece of robotics - sometimes directly, sometimes connected to a central server (this is doable even today) - it'll revolutionize industries: human-less mining, processing, manufacturing, services, and transportation. These factories would eventually produce and install enough solar power or build sufficient nuclear plants and energy infrastructure, making energy clean and free.
With abundant electric cars (at this future point in time) and clean electricity powering heating, transportation, and manufacturing, some AIs could be repurposed for CO2 capture.
It sounds deceptively easy, but from an engineering standpoint, it likely holds up. With free energy and AGI handling labor and thinking, we can achieve what a civilization could do and more (because no individual incentives come into play).
However, human factors could be a problem: protests (luddites), wireheading, misuse of AI, and AI-induced catastrophes (alignment).
Having more energy is intrinsically dangerous, though, because it's indiscriminate: more energy cannot enable bigger solutions without also enabling bigger problems. Energy is the limiting factor to how much damage we can do. If we have way more of it, all bets are off. For instance, the current issue may be that we are indirectly cooking the planet through CO2 emissions, so capturing that sounds like a good idea. But even with clean energy, there is a point where we would cook the planet directly via waste heat of AI and gizmos and factories and whatever unforeseen crap we'll conjure just because we can. And given our track record I'm far from confident that we wouldn't do precisely that.
This exactly. Every self-replicating organism will eventually use all the energy available to it; there will never be an abundance. From the dawn of time, mankind has similarly used every bit of energy it generates. If you told a subsistence farmer in the 1600s how much energy would be available in 400 years, they would think we surely must live in paradise with no labor. Here we are, still metaphorically tilling the land.
Do you believe the average human has general intelligence, and do you believe the average human can intellectually achieve these things in ways existing technology cannot?
Yes, considering that AI operates differently from human minds, there are several advantages:
AI does not experience fatigue or distractions => consistent performance.
AI can scale its processing power significantly, despite the challenges associated with it (I understand the challenges)
AI can ingest and process new information at an extraordinary speed.
AIs can rewrite themselves
AIs can be replicated (solving the scarcity of intelligence in manufacturing)
Once achieving AGI, progress could compound rapidly, for better or worse, due to the above points.
The first AGI will probably take way too much compute to have a significant effect; unless there is a revolution in architecture that gets us fast and cheap AGI all at once, the AGI revolution will be very slow and gradual.
A model that is as good as an average human but costs $10 000 per effective manhour to run is not very useful, but it is still an AGI.
> A model that is as good as an average human but costs $10 000 per effective manhour to run is not very useful, but it is still an AGI.
Geohot (https://geohot.github.io/blog/) estimates that a human brain equivalent requires 20 PFLOPS. Current top-of-the-line GPUs are around 2 PFLOPS and consume up to 500W. Scaling that linearly results in 5kW, which translates to approximately 3 EUR per hour if I calculate correctly.
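Making the arithmetic explicit; the electricity price is my assumption, chosen so the result lands near the ~3 EUR/hour figure above:

    # The estimate above, spelled out step by step.
    brain_pflops = 20          # Geohot's estimate for a human-brain equivalent
    gpu_pflops = 2             # rough throughput of a current top-end GPU
    gpu_watts = 500
    price_eur_per_kwh = 0.60   # assumed retail electricity price (not from the comment)

    gpus_needed = brain_pflops / gpu_pflops          # 10 GPUs
    power_kw = gpus_needed * gpu_watts / 1000        # 5 kW
    print(power_kw * price_eur_per_kwh, "EUR/hour")  # ~3 EUR/hour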
The hospital system near my hometown just announced they are filing for bankruptcy. The parent company that bought it, which itself is owned by private equity, extracted nearly a billion dollars in profit from hospitals before going bankrupt, according to local reporting. I don’t know what the solution is, but PE is a parasitic blight on society.
No, GP said that hospitals should not be run as a business. Non-profit status is just another way to structure a business and, in practice, those hospitals extract a similar amount of money from patients as comparable for-profit hospitals.
My local 'non-profit' hospital is one of the most expensive in the state, and has a total monopoly over its area. It paid cash for a $140m new tower right as covid started, and complained loudly about losing $13m one year and having to lay off staff, and not having money to pay nurses more, while sitting on a $1B 'rainy day fund'.
They will release the footage to the regulators, likely without having to be asked.
The regulators don't want the footage released to the general public any more than Waymo does—the last thing they need is a bunch of armchair traffic accident investigators telling them how to do their jobs.
It's not enough, because at least for me and a few others (e.g. the neighboring comment "If the footage is not released to the public I'm going to assume it makes Waymo look bad."), being able to judge the event personally can establish much more trust than anything they do with the regulators, because we also don't necessarily fully trust the regulators and their political motivations, and we want to see if we would agree with them on such major cases.
Unless you're willing to watch millions of hours of video of every Waymo vehicle driving and judge them in aggregate, watching a one-off incident will tell you nothing about their safety. At that point, you might as well trust their aggregate statistics reported to the regulators, because those capture everything.
If you’re not an expert, it’s best to let the regulators and insurance adjusters do their job.
Sounds like the same argument bad cops make after questionable police shootings.
Public trust has to be built first. As one of the first instances of a Waymo crash, yes, the public needs to see it. If, after reviewing the footage of the first 99 crashes, Waymo's assessment was valid each time, that's when Waymo has public trust and can credibly stop releasing every single video and only do it on a case-by-case basis.
How do you get to "bad cops" from here? Bad cops are "investigated" by their own units. So that analogy doesn't work.
Waymo is regulated by independent agencies (CA DMV and NHTSA). They are watching the videos and assessing if Waymo is telling the truth. Their permit is pulled if they get caught lying (like Cruise). How are you and thousands of SF residents more qualified than them? Why should I take your assessment more seriously than that of the regulators?
It looks like the recall will be addressed with a software update, which will probably limit autopilot to highway use only. That seems like a reasonable step, in line with Autopilot’s intended use.
The General Services Administration is the U.S. government agency that maintains login.gov, the public’s single sign-on for government. GSA is conducting an equity study on remote identity proofing to find whether methods like facial recognition work equitably for a diverse set of users. If you’d like to help make technology access more equitable, consider participating or sharing the study. Thanks HN!
I think Public is pretty nice, honestly. It feels official while still looking friendly and modern. Sweden’s is cool, but I don’t see what it actually does better on a practical level.
I found a pair of AirPods a while back and wanted to return them to their owner. They must be associated with an Apple ID, so I reached out to Apple and asked if they’d help me get in touch. They said no, and advised me to file a police report.
I wonder why they have this policy? They have all the necessary information to facilitate a conversation. Heck, with services like Hide My Email, they could even keep the identities of each party private.
Is it hard to imagine the thief themselves using this to extort someone if they want their AirPods back, or using that contact info to phish the owner for their Apple ID password?
A shame they didn’t opt for Public Sans, which was designed by the US Web Design System team. Though I see it only supports Latin characters at the moment, which might make it unsuitable for use at State.
I believe that font is primarily intended for interfaces rather than documents. The design decisions are quite different between those contexts, or that's what I've heard.
I wonder "how many fonts we need". Is there a good, open resource for FOSS fonts, to compare them, preferably with some curation?
I agree with others on here, though. Everything our government does and produces from an IT/development perspective which is not a "competitive advantage" (security, etc.) should be open. Why there would be an official directive to choose a non-open font when so many open fonts exist is beyond me.