abra0's comments

That's where I am too right now for personal projects, and I ended up reimplementing parts of Dokuploy for that, but I don't feel much of a need to move from "fun little docker compose" for some reason


That's a great point! I'd agree that just the extra emotional motivation from having your own thing is worth a ton. I get some distance down that path with a large-RAM, no-GPU box, so things are slow but at least possible for random small one-offs.


I was thinking of doing something similar, but I am a bit sceptical about how the economics of this work out. On vast.ai, renting a 3x3090 rig is $0.6/hour. The electricity price of operating this in e.g. Germany is somewhere around $0.05/hour. If the OP paid 1700 EUR for the cards, the breakeven point would be around (haha) 3090 hours in, or ~128 days, assuming non-stop usage. It's probably cool to do that if you have a specific goal in mind, but to tinker around with LLMs and for unfocused exploration I'd advise folks to just rent.
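For anyone who wants to rerun that arithmetic with their own numbers, a minimal sketch (the $0.60/hr, $0.05/hr, and 1700 EUR figures are the rough assumptions above, with EUR treated as roughly USD):

    # Back-of-the-envelope break-even vs renting, using the assumptions above
    rent_per_hour  = 0.60   # vast.ai 3x3090 rig
    power_per_hour = 0.05   # assumed electricity cost in Germany
    upfront        = 1700   # what the OP paid for the cards, EUR ~ USD

    saving_per_hour = rent_per_hour - power_per_hour
    hours = upfront / saving_per_hour
    print(f"break-even after ~{hours:.0f} hours (~{hours / 24:.0f} days non-stop)")
    # -> ~3091 hours, i.e. the ~128-129 days mentioned above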


> On vast.ai renting a 3x3090 rig is $0.6/hour. The electricity price of operating this in e.g. Germany is somewhere about $0.05/hour.

Are you factoring in the varying power usage in that electricity price?

The electricity cost of operating locally will vary depending on actual system usage. When idle, it should be much cheaper. Whereas with cloud hosts you pay the same price whether the system is in use or not.

Plus, with cloud hosts, reliability is not guaranteed. Especially with vast.ai, where you're renting other people's home infrastructure. You might get good bandwidth and availability on one host, but when that host disappears, you'd better hope you made a backup (which vast.ai charges for separately), and then you need to spend time restoring it to another, hopefully equally reliable, host, which can take hours depending on the amount of data and bandwidth.

I recently built an AI rig and went with 2x3090s, and am very happy with the setup. I evaluated vast.ai beforehand, and my local experience is much better, while my electricity bill is not much higher (also in EU).


Well, rented cloud instances shouldn't idle in the first place.


Sure, but unless you're using them for training, the power usage for inference will vary a lot. And it's cumbersome to shut down the instance while you're working on something else and have to start it back up when you need it again. During that time, the vast.ai host could disappear.


Most people don't think of storage costs and network bandwidth. I have about 2 TB of local models. What's the cost of storing those in the cloud? If I decide not to store them in the cloud, I have to transfer them in any time I want to run experiments. Build your own rig so you can run experiments daily. This is a budget rig, and you can build even cheaper.


Let me add that moving data in and out of vast.ai is extremely painful. I might be overprivileged with a 1000 Mbit line, but these vast.ai instances have highly variable bandwidth in my experience; even when they advertise good speeds, I'm sometimes doing transfers in the 10-100 KiB/s range.


Data as well. I have a 100 TB NAS I can use for data storage, and it was honestly pretty cheap overall.


Well, if you're not using a rented machine for a period of time, you should release it.

Agreed on reliability and data transfer, that's a good point.

Out of curiosity, what do you use a 2x3090 rig for? Bulk, non-time-sensitive inference on down-quantized models?


> Well, if you're not using a rented machine for a period of time, you should release it.

If you're using them for inference, your usage pattern is unpredictable. I could spend hours between having to use it, or minutes. If you shut it down and release it, the host might be gone the next time you want to use it.

> what do you use a 2x3090 rig for? Bulk, non-time-sensitive inference on down-quantized models?

Yeah. I can run 7B models unquantized, ~13-33B at q8, and ~70B at q4, at fairly acceptable speeds (>10 tok/s).
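As a rough sanity check on why those sizes fit into 2x24 GB, a minimal weights-only VRAM estimate (a sketch that ignores KV cache and runtime overhead, which add several GB; the bit widths are nominal):

    # Weights-only VRAM: params * bits per weight / 8, expressed in GiB
    def weights_gib(params_billion, bits_per_weight):
        return params_billion * 1e9 * bits_per_weight / 8 / 2**30

    for params, bits, label in [(7, 16, "7B fp16"), (33, 8, "33B q8"), (70, 4, "70B q4")]:
        print(f"{label}: ~{weights_gib(params, bits):.0f} GiB of the 48 GiB on 2x3090")
    # -> ~13, ~31 and ~33 GiB; real q4 quants are ~4.5-5 bits/weight,
    #    so a 70B model lands closer to 36-40 GiB in practice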


If you are just using it for inference, I think a more appropriate comparison would be a together.ai endpoint or something similar, which allows you to scale up pretty much immediately and is likely more economical as well.


Perhaps, but self-hosting is non-negotiable for me. It's much more flexible, gives me control of my data and privacy, and allows me to experiment and learn about how these systems work. Plus, like others mentioned, I can always use the GPUs for other purposes.


To each their own. If you're having conversations with your GAI so sensitive that someone would bother snooping in your Docker container, figuring out how you're doing inference, and then capturing it in real time, you have a different risk tolerance than me.

I do think that cloud GPUs can cover most of this experimentation/learning need.


together.ai is really good, but there is a price mismatch for small models (a 1B model is not 10x cheaper than a 10B model).

This is obviously because they are forced to use high-memory cards.

Are there ideal cards for low-memory (1-2B) models? That is, higher FLOPS/$ with deliberately crippled memory?


> built an AI rig and went with 2x3090s,

Is there a go-to card for low-memory (1-2B) models?

Something with much better FLOPS/$ but purposely crippled with low memory.


With RunPod/Vast you can request a set amount of time; generally, if I request from Western EU or North America, availability is fine on a week-to-month timescale.

FWIW I find RunPod's Vast clone significantly better than Vast, and there isn't really a price premium.


For me "economics" are:

- if I have it locally, I'll play with it

- if not, I won't (especially with my data)

- if I have something ready for a long run I may or may not want to send it somewhere (it's not going to be on 3090s for sure if I send it)

- if I have a requirement to have something public, I'd probably go for per-usage pricing with e.g. [0].

[0] https://www.runpod.io/serverless-gpu


With the current more-or-less hard dependency on CUDA, and thus Nvidia hardware, it's about making sure you actually have the hardware available consistently.

I've had VERY hit-or-miss results with Vast.ai, and I'm convinced some hosts are gaming their benchmark numbers, because when the rubber meets the road it's very clear performance isn't what it's claimed to be. Then you still need to be able to actually get the machines...


Use RunPod, and yeah, I think vast.ai has some scams, especially among the Asian and Eastern European nodes.


For me the economics is that when I'm not using it for AI stuff, I can use it to play games at max settings.

Unfortunately my CFO (a.k.a Wife) does not share the same understanding.


I fear that someday I will die and my wife will sell off all my stuff for what I said I paid for it.

(not really, but it is a joke I read someplace and I think it applies to a lot of couples).


Unless you are training, you never hit peak wattage. When inferring, the power draw is still minimal. I'm running inference now and drawing about 20% of peak. GPU 0 draws more because I have it as the main GPU. Idle power sits at about 5%.

Device 0 [NVIDIA GeForce RTX 3060] PCIe GEN 3@16x RX: 0.000 KiB/s TX: 55.66 MiB/s GPU 1837MHz MEM 7300MHz TEMP 43°C FAN 0% POW 43 / 170 W GPU[|| 5%] MEM[|||||||||||||||||||9.769Gi/12.000Gi]

Device 1 [Tesla P40] PCIe GEN 3@16x RX: 977.5 MiB/s TX: 52.73 MiB/s GPU 1303MHz MEM 3615MHz TEMP 22°C FAN N/A% POW 50 / 250 W GPU[||| 9%] MEM[||||||||||||||||||18.888Gi/24.000Gi]

Device 2 [Tesla P40] PCIe GEN 3@16x RX: 164.1 MiB/s TX: 310.5 MiB/s GPU 1303MHz MEM 3615MHz TEMP 32°C FAN N/A% POW 48 / 250 W GPU[|||| 11%] MEM[||||||||||||||||||18.966Gi/24.000Gi]


When you computed the break-even point, did you factor in that you still own the cards and can resell them? I bought my 3090s for $1000, and after a year I think they'd go for more on the open market if I resold them now.


Interesting. I checked it out. The providers running your docker container have access to all your data.


I just made a clone of diskprices.com for GPUs specifically for AI training, and it has a power and depreciation calculator: https://gpuprices.us

You can expect a GPU to last 5 years, so for a 128-day break-even you are only looking at ~7% utilization over its lifetime. If you are doing training runs, I think you are going to beat that easily.
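A quick sketch of that utilization arithmetic, using the ~3090-hour break-even from upthread and the 5-year life assumption:

    breakeven_hours = 3090           # ~128 days of non-stop use (see upthread)
    lifetime_hours  = 5 * 365 * 24   # assumed 5-year useful life = 43,800 hours
    print(f"required utilization: {breakeven_hours / lifetime_hours:.1%}")
    # -> ~7.1%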

P.S. Coincidentally or not, shortly after it got mentioned on Hacker News, Best Buy ran out of both RTX 4090s and RTX 4080s. They used to top the chart. Turns out that at decent utilization they win due to the electricity costs.


Exactly. And you rarely see machines from Germany on vast. Might as well run a data center in Bermuda. [0]

[0] https://www.royalgazette.com/general/business/article/202307...


The current economics is a lowball to get customers. It's absolutely not going to be the market price once commercial interests have locked in their products.

But if you're just goofing around and not planning to create anything production-worthy, it's a great deal.


> The current economics is a lowball to get customers.

vast.ai is basically a clearinghouse; they are not doing some VC-subsidy thing.

In general, community clouds are not suitable for commercial use.


Well maybe you could rent it out to others for 256 days at $0.3/hour, tinker, and sell it for parts after you get bored with it. ;)


The break-even point would be less than 128 days due to the (depreciating) resale value of the rig.


Well, almost. GPUs have not been depreciating. The prices of 3090s and 4090s have gone up, and folks are selling them for what they paid or even more. With the recent 40-series SUPER cards from Nvidia, I'm not expecting any new releases within a year. AMD & Intel still have a ways to go before major adoption. Startups are buying up consumer cards. So I sadly expect prices to stay more or less the same.


If it isn’t depreciating that supports the parent’s bigger point even more.


He can use these cards for 128 days non-stop and resell them, recouping the purchase price almost fully since the OP bought them cheap. Buying doesn't mean you have to use the GPUs to the point where they end up costing 0; yes, there is a risk of a GPU dying, but c'mon... Renting is money you will never see again.


The third effort is sometimes referred to as AI not-kill-everyone-ism, a tacky and unwieldy term that is unlikely to be co-opted or to lead to unproductive discussion like the one around the OP article.

It is pretty sad to see people lump together the efforts to better understand and control the technology with the companies doing their usual profit maximization.


More effort spent on early commercialization, like keeping ChatGPT running, might mean less effort on cutting-edge capabilities. Altman was never an AI-safety person, so my personal hope is that Anthropic avoids this by having higher-quality leadership.


>rightfully so

How the hell can people be so confident about this? You describe two smart people reasonably disagreeing about a complicated topic


The LLMs of today are just multidimensional mirrors that contain humanity's knowledge. They don't advance that knowledge, they just regurgitate it, remix it, and expose patterns. We train them. They are very convincing, and show that the Turing test may be flawed.

Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advancing our total knowledge. Anything less isn't AGI.

Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.

Therefore something more than an LLM is needed to reach AGI; what that is, we don't yet know!


Prediction: there isn't a difference. The apparent difference is a manifestation of the human brain's delusion about how human brains work. The Turing test is a beautiful proof of this phenomenon: such-and-such a thing is impossibly hard, achievable only via the magic capabilities of human brains... oops, no, actually it's easily achievable now, so we'd better redefine our test. This cycle will continue until the singularity. Disclosure: I've long been skeptical about AI, but the writing is on the wall now.


Clearly there's a difference, because the architectures we have don't know how to persist information or further train.

Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

Whether you can bolt something small to these architectures for persistence and do some small things and get AGI is an open question, but what we have is clearly insufficient by design.

I expect it's something in-between: our current approaches are a fertile ground for improving towards AGI, but it's also not a trivial further step to get there.


But context windows have reached 100K now, RAG systems are everywhere, and we can cheaply fine-tune LoRAs at a price similar to inference, maybe 3x more expensive per token. A memory hierarchy made of LoRA -> context -> RAG could be "all you need".

My beef with RAG is that it doesn't match on information that is not explicit in the text, so "the fourth word of this phrase" won't embed like the word "of", or "Bruce Willis' mother's first name" won't match with "Marlene". To fix this issue we need to draw chain-of-thought inferences from the chunks we index in the RAG system.

So my conclusion is that maybe we got the model right but the data is too messy; we need to improve the data by studying it with the model prior to indexing. That would also fix the memory issues.
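A minimal sketch of that pre-indexing step, assuming a generic llm() completion function, an embed() function, and a vector store; the names and prompt are hypothetical illustrations, not any particular library's API:

    # Enrich each chunk with model-drawn inferences before indexing, so that
    # implicit facts ("Bruce Willis' mother's first name is Marlene") become
    # explicit text that embeds and retrieves well.
    def enrich_chunk(llm, chunk):
        prompt = ("List the implicit facts and inferences a reader could draw "
                  "from the following passage, one per line:\n\n" + chunk)
        inferences = llm(prompt)                    # hypothetical LLM call
        return chunk + "\n\nInferred facts:\n" + inferences

    def index_corpus(llm, embed, store, chunks):
        for chunk in chunks:
            enriched = enrich_chunk(llm, chunk)
            store.add(vector=embed(enriched), payload=chunk)   # hypothetical store API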

Everyone is over-focusing on models to the detriment of thinking about the data. But models are just data gradients stacked up; we forget that. All the smarts a model has come from the data. We need data improvement more than model improvement.

Just consider the "textbook-quality data" paper (Phi-1.5) and the Orca datasets: they show that diverse chain-of-thought synthetic data is 5x better than organic text.


I've been wondering along similar lines, although for all intents and purposes I am a layman here, so apologies if the following is nonsensical.

I feel there are potential parallels between RAG and how human memory works. When we humans are prompted, I suspect we engage in some sort of relevant-memory retrieval process, and the retrieved memories are packaged up and factored into the mental processing triggered by the prompt. This seems similar to RAG, where my understanding is that some sort of semantic search is conducted over a database of embeddings (essentially, "relevant memories") and the results are then shoved into the prompt as additional context. A bigger context window allows for more "memories" to contextualise/inform the model's answer.

I've been wondering three things: (1) are previous user prompts and model answers also converted to embeddings and stored in the embedding database as new "memories", essentially making the model "smarter" as it accumulates more "experiences"? (2) could these "memories" be stored alongside a salience score of some kind that increases the chance of retrieval (with the salience score probably some composite of recency and perhaps the degree of positive feedback from the original user)? (3) could you take these new "memories" and use them to incrementally retrain the model for, say, 8 hours every night? :)

Edit: And if you did (3), would that mean that even with temperature set to 0 the model might output one response to a prompt today, and a different response to an identical prompt tomorrow, due to the additional "experience" it has accumulated?
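For what it's worth, a minimal sketch of (1) and (2): past exchanges stored as embeddings with a salience score blending similarity, recency, and user feedback (all names and weights here are made up for illustration):

    import time
    import numpy as np

    class MemoryStore:
        # Toy episodic memory: past exchanges stored as embeddings + metadata
        def __init__(self, embed):
            self.embed = embed      # any text -> np.array embedding function
            self.items = []         # (vector, text, timestamp, feedback)

        def remember(self, text, feedback=0.0):
            self.items.append((self.embed(text), text, time.time(), feedback))

        def recall(self, query, k=5, half_life_days=30.0):
            q = self.embed(query)
            now = time.time()
            scored = []
            for vec, text, ts, feedback in self.items:
                similarity = float(np.dot(q, vec) /
                                   (np.linalg.norm(q) * np.linalg.norm(vec)))
                recency = 0.5 ** ((now - ts) / (half_life_days * 86400))
                salience = 0.6 * similarity + 0.3 * recency + 0.1 * feedback
                scored.append((salience, text))
            return [text for _, text in sorted(scored, reverse=True)[:k]]

The recalled items would then be prepended to the prompt, exactly as regular RAG does.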


> Clearly there's a difference, because the architectures we have don't know how to persist information or further train. Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

Nope, and not all people can achieve this either. Would you call them less than human then? I assume you wouldn't, as it is not only sentience of current events that maketh man. If you disagree, then we simply have fundamental disagreements on what maketh man, and thus there is no way we'd have agreed in the first place.


Isn't RAG essentially the "something small you can bolt on" to an LLM that gives it "persistence outside the context window?" There's no reason you can't take the output of an LLM and stuff it into a vector database. And, if you ask it to create a plan to do a thing, it can do that. So, there you have it: goal-oriented persistence outside of the context window.

I don't claim that RAG + LLM = AGI, but I do think it takes you a long way toward goal-oriented, autonomous agents with at least a degree of intelligence.
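As a rough illustration of how far that gets you, a sketch of a goal-persistence loop where the plan lives in external storage and is re-injected on every turn (llm and store are hypothetical stand-ins, not a specific framework):

    # The plan/goal lives outside the context window and is re-fed each turn.
    def agent_step(llm, store, user_input):
        plan = store.get("plan", "")                 # long-lived goal state
        prompt = (f"Current plan:\n{plan}\n\n"
                  f"New input:\n{user_input}\n\n"
                  "Respond to the input, then output an updated plan after "
                  "a line that says 'PLAN:'.")
        reply = llm(prompt)
        answer, _, new_plan = reply.partition("PLAN:")
        if new_plan.strip():
            store.set("plan", new_plan.strip())      # persist across sessions
        return answer.strip()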


From my experience there's definitely context beyond the current LLM state; it's how they're able to regurgitate facts or speak at all.


> regurgitate facts or speak at all.

Most of that is encoded into weights during training, though external function call interfaces and RAG are broadening this.


> Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

I mean, can't you say the same for people? We are easily confused and manipulated, for the most part.


I can remember to do something tomorrow after doing many things in-between.

I can reason about something and then combine it with something I reasoned about at a different time.

I can learn new tasks.

I can pick a goal of my own choosing and then still be working towards it intermittently weeks later.

The GPT LLMs we have now cannot do these things. Doing those things may be a small change, or may not be tractable for these architectures at all... but it's probably in between: hard, but something that can be "tacked on."


Former neuroscientist here.

Our brain actually uses many different functions for all of these things. Intelligence is incredibly complex.

But also, you don't need all of these to have real intelligence. People can problem solve without memory, since those are different things. People can intelligently problem-solve without a task.

And working towards long-term goals is something we actually take decades to learn. And many fail there as well.

I wouldn't be surprised if, just like in our brain, we'll start adding other modalities that improve memory, planning, etc etc. Seems that they started doing this with the vision update in GPT-4.

I wouldn't be surprised if these LLMs really become the backbone of AGI. But this is science: you don't really know what'll work until you do it.


> I wouldn't be surprised if these LLMs really become the backbone of AGI. But this is science: you don't really know what'll work until you do it.

Yes, this is pretty much what I believe. And there's considerable uncertainty in how close AGI is (and how cheap it will be once it arrives).

It could be tomorrow and cheap. I hope not, because I'm really uncertain if we can deal with it (even if the AI is relatively well aligned).


That just proves we need real-time fine-tuning of the neuron weights. It is computationally intensive but not fundamentally different. A million-token context would look close to a long short-term memory, and frequent fine-tuning would be akin to long-term memory.

I am most probably anthropomorphizing completely wrong. But the point is that humans may not be any more creative than an LLM; we just have better computation and inputs. Maybe creativity is akin to LLM hallucinations.


Real-time fine tuning would be one approach that probably helps with some things (improving performance at a task based on feedback) but is probably not well suited for others (remembering analogous situations, setting goals; it's not really clear how one fine-tunes a context window into persistence in an LLM). There's also the concern that right now we seem to need many, many more examples in training data than humans get for the machine to get passably good at similar tasks.

I would also say that I believe that long-term goal oriented behavior isn't something that's well represented in the training data. We have stories about it, sometimes, but there's a need to map self-state to these stories to learn anything about what we should do next from them.

I feel like LLMs are much smarter than we are in thinking "per symbol", but we have facilities for iteration and metacognition and saving state that let us have an advantage. I think that we need to find clever, minimal ways to build these "looping" contexts.


> I most probably am anthropomorphizing completely wrong. But point is humans may not be any more creative than an LLM, just that we have better computation and inputs.

I think creativity is made of 2 parts: generating novel ideas, and filtering out bad ones. For the second part we need good feedback. Humans and LLMs are just as good at novel ideation, but humans have the advantage on feedback. We have a body, access to the real world, access to other humans, and plenty of tools.

This is not something an android robot couldn't eventually have, and on top of that AIs have the advantage of learning from massive data. They surpass humans when they can leverage it; see AlphaFold, for example.


Are there theoretical models that use real time weights? Every intro to deep learning focuses on stochastic gradient descent for neural network weights; as a layperson I'm curious about what online algorithms would be like instead.


I agree with your premise.

You're right: I haven't seen evidence of LLMs outputting novel patterns, i.e. being genuinely creative.

It can find and remix patterns where there are pre-existing rules and maps that detail where they are and how to use them (i.e. grammar, phonics, or an index). But it can't expose new patterns whatsoever; at least public-facing LLMs can't. They can't abstract.

I think that this is an important distinction when speaking of AI pattern finding, as the language tends to imply AGI behavior.

But abstraction (as perhaps the actual marker of AGI) is so different from what they can do now that it essentially seems to be futurism whose footpath hasn't yet been found, let alone traversed.

When they can find novel patterns across previously seemingly unconnected concepts, then they will be onto something; when "AI" begins to see the hidden mirrors, so to speak.


If LLMs can copy the symbolic behaviors that let humans generate new knowledge, it'll be there.


> , they just regurgitate it, remix it, and expose patterns

Who cares? Sometimes the remixing of such patterns is what leads to new insights in us humans. It is dumb to think that remixing has no material benefit, especially when it clearly does.


> They are very convincing, and show that the Turing test may be flawed

The only thing flawed here is this statement. Are you even familiar with the premise of the Turing test?


Maybe "rightfully so" meant "it is totally within Sam's right to claim that LLMs aren't sufficient for AGI"?


If there's a lot of smoke, people are running out of the building, and you can see an ominous red glow in the windows, shouting "FIRE" is the right thing to do, even if we are not going to be engulfed in flames this very second or the next. Given the evidence we all have, the potential costs are simply not comparable.


What smoke? So far the predictions of the AI x-risk folks haven't panned out the way they said they would. In fact, the opposite has happened. What smoke are you referring to?


Bonus question on the O-1: what do you think are the easiest boxes to check for talented professionals in AI? High salary and critical capacity for established organizations are a given, but what else?


Hi, yes those two, for sure. And certainly add "original contributions of major significance." AI specialties are generating "original contributions" thick and fast right now for engineers and data specialists.

"Original contributions" is the foundational category for a tech or engineering case anyway. And trending specialties like AI are well-placed to develop solid evidence in this category.

I'd also add the publications and judging categories. These are easy-win categories that are evaluated with a more lenient standard than the other 6. They're also a great way to attract "sustained acclaim" by building a reputation as a thought leader in your field.

Note that the "field of endeavor" for these 2 categories in private-industry cases is industry publications, presentations, podcasts, broadcasts, and events, NOT academic publications or citation counts. (All these industry activities "count" as publications.)


Could some knowledge have been finetuned into it, and be outside of the prompt?


Great to see investment in the most alignment-conscious of the AI orgs.

