But street lights don't have to use harsh 3000 kelvin LEDs; there are warm-light LEDs (2400-2700 kelvin). These are widely available for the home, for example, yet most people just buy the 3000K bulbs because (IME) it doesn't occur to them that there is a strong aesthetic (and health) difference between the color temperatures. i.e. they don't care.


Besides the color temperature, they are way too bright. It's like daylight in front of my house since they changed the lamps.


A lot of America especially has an issue with too many lights, and lights that are too bright relative to the population. While reading about the subject I found what I thought was a cool image on Wikimedia Commons showing light use relative to population density: green means lots of light use, red means lots of population density, and yellow means light use roughly proportional to population density. America is bright green. [1]

[1] Light use vs. population density, Wikimedia Commons, https://upload.wikimedia.org/wikipedia/commons/b/ba/Earth_li...


They shine through my windows at night and are truly horrific.

They’re down the entire alleyway behind my place, and a walk to the grocery store at 7pm during the winter makes your body and mind think it’s sunrise.


It's probably getting better, but amber-colored LEDs used to be rather inefficient. I've also heard that white lighting can slightly improve drivers' reaction times and yields slightly clearer captures on security cameras. I personally think these benefits do not outweigh how extremely ugly and unwelcoming the lights are, but "city officials just don't care" is not what led to the adoption of white LED street lighting at all.


A lot of wildlife (birds, bats, insects, etc.) is really confused by white light. Some Nordic countries are experimenting with red street lights in outlying districts, and the results are showing great promise. (I don't have a reference at the moment, but it should be googleable.)


Looks useful and powerful. Nice work!


Thank you!


It's great that this can run on a laptop, but FWIW, the Llama 70B model is nowhere near "GPT-4 class" in my own use cases. The 405B model might be, though I haven't tested it.


Are you sure about that?

When I say GPT-4 class I'm talking about being comparable to the GPT-4 that was released in March 2023.

The Llama 3.3 70B model is clearly nowhere near as good as today's GPT-4o family of models, or the other top-ranking models today like Gemini 1.5 Pro and Claude 3.5 Sonnet.

To my surprise, Llama 3.3 70B is ranking higher than Claude 3 Opus on https://livebench.ai/ - I'm suspicious of that result, personally. I think Opus was the best available model for a few months earlier this year.


I guess it's because it has the highest instruction-following score of all models, 20 points higher than Opus, which compensates for shortcomings elsewhere (e.g. in language) and which wouldn't necessarily translate to human evaluations of usefulness.


Wow, yeah, I think you're right: 3.3 somehow takes the top position on the entire leaderboard for that category. I bet that skews the average up a lot.


The model you are running isn't the one used in the benchmarks you link.

The default llama3.3 model in Ollama is heavily quantized (~4-bit). Running the full fp16 model, or even an 8-bit quant, wouldn't be possible on your laptop with 64 GB of RAM.
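
A rough back-of-the-envelope sketch of why (weights only; the KV cache and runtime overhead add more on top, and quant formats carry some metadata, so treat these as lower bounds):

    // Approximate weight memory for a 70B-parameter model at different
    // quantization levels (TypeScript). Ignores KV cache and activations.
    const PARAMS = 70e9;
    const GIB = 1024 ** 3;

    for (const [name, bitsPerParam] of [
      ["fp16", 16],
      ["q8_0", 8],
      ["q4_0", 4],
    ] as const) {
      const gib = (PARAMS * bitsPerParam) / 8 / GIB;
      console.log(`${name}: ~${gib.toFixed(0)} GiB for the weights alone`);
    }
    // fp16 ≈ 130 GiB and q8 ≈ 65 GiB can't fit in 64 GB of RAM;
    // q4 ≈ 33 GiB leaves headroom, which is why the default is ~4-bit.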


Thanks - yeah, I should have mentioned that. I just added a note directly above this heading https://simonwillison.net/2024/Dec/9/llama-33-70b/#honorable...


How do you reliably compare it with the GPT-4 released in March 2023?


Vibes, based on what I can remember using that model for.

There's still a gpt-4 model available via the OpenAI API, but it's gpt-4-0613 from June 2023 - the March 2023 snapshot gpt-4-0314 is no longer available.

I ran one of my test prompts against that old June 2023 GPT-4 model here: https://gist.github.com/simonw/de4951452df2677f2a1a3cd415168...

I'm not going to try for an extensive evaluation comparing it with Llama 3.3, though; life's too short, and that's already been done better than I could by https://livebench.ai/


Why not ask it to solve math questions?

The bar for GPT-4 was so low that unambiguously clearing that threshold should be pretty easy.


I am not particularly interested in those benchmarks that deliberately expose weaknesses in models: I know that models have weaknesses already!

What I care about is the things that they're proven to be good at - can I do those kinds of things (RAG, summarization, code generation, language translation) directly on my laptop?


The new 3.3 70B model has comparable benchmarks to the 405B model, which is probably what people mean by GPT-4 class.


> when I ran Llama 3.3 70B on the same laptop for the first time.

There is no Llama 3.3 405B to test; 3.3 only comes in 70B. Are you sure you aren't thinking of Llama 3 or 3.1?


No, I meant Llama 3.3 70B.


How are state changes consistently tracked?


The context window seems to be about one second long at roughly 20 FPS, so you get enough state history to do realistic things like accelerated falling.


What you see is what you get. Build a house, but turn away from it, and it may disappear. Everything seems to be tracked from a few previous frames.


They are not, and probably never will be. Looking up at the sky regenerates your surroundings. Inventory is less tricky (you just feed it back in as input), but it is all too easy to lose the landscape.


Congrats! I have strong nostalgia for the ones I worked on and owned in the 90s. Using those machines felt more like living in the future than any tech I've experienced since, including the iPhone.


"In JavaScript, neither weak references... is generally available". I think that was true with the old weak collation classes, but doesn't the newer JS WeakRef provide proper weak references?


WeakRefs landed around 2020 (in Chrome 84), but SqueakJS was started in 2013 -- "November 2013 Project started (after seeing Dan's Smalltalk-72 emulator at Hackers)" -- and that paper was written in 2014.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

https://freudenbergs.de/vanessa/publications/Freudenberg-201...

https://squeak.js.org/


Great to see work like this being done. JavaScript is often a "good enough" language, but an efficient Smalltalk (or Self) with support for things like system images, become:, coroutines, and other advanced features would open up a lot of advanced programming techniques: fast portable images, transparent futures, cooperative concurrency without async/await on every call, etc. For a flavor of that last point, see the sketch below.
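
A generic generator-based illustration in TypeScript (my own sketch, not SqueakJS internals; generators still mark suspension points with yield, whereas real fibers wouldn't even need that):

    // Yielding a promise suspends the "fiber"; the scheduler resumes it
    // when the promise settles. No function is marked async.
    type Task<T> = Generator<Promise<any>, T, any>;

    function spawn<T>(gen: Task<T>): Promise<T> {
      return new Promise<T>((resolve, reject) => {
        function step(input?: any): void {
          let r: IteratorResult<Promise<any>, T>;
          try {
            r = gen.next(input);
          } catch (e) {
            reject(e);
            return;
          }
          if (r.done) {
            resolve(r.value);
            return;
          }
          Promise.resolve(r.value).then(step, reject);
        }
        step();
      });
    }

    function* delay(ms: number): Task<void> {
      yield new Promise<void>((res) => setTimeout(res, ms));
    }

    function* worker(name: string): Task<string> {
      yield* delay(100); // suspends here, without an async keyword
      return `${name} done`;
    }

    spawn(worker("fiber-1")).then(console.log);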


On a similar note, you can now run Scheme in the browser via wasm: https://spritely.institute/hoot/

The current release can do coroutines via the delimited continuation support, but the next release will have ready-to-use lightweight threads (aka "fibers") that integrate with JS promises. No async/await marked functions/calls.


Effect-ts is also built on the same principles (fibers) if one wants to stay in TS land.

https://effect.website/docs/guides/runtime
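
A small sketch of what that looks like, assuming the current "effect" package's Effect.fork / Fiber.join API:

    import { Effect, Fiber } from "effect";

    // fork starts a lightweight fiber; join suspends until it finishes.
    const task = Effect.gen(function* () {
      yield* Effect.sleep("100 millis"); // suspends the fiber, not a thread
      return 42;
    });

    const program = Effect.gen(function* () {
      const fiber = yield* Effect.fork(task); // runs concurrently
      return yield* Fiber.join(fiber);
    });

    Effect.runPromise(program).then(console.log); // 42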


As an Obj-C guy: if I declare a method as input-only with no return value, it can run asynchronously on the run loop.

I assume Smalltalk can do the same?


At first, I thought this was a link to another cool language with a very similar name: http://www.redwoodsoft.com/~dru/cel/


I've run into many of the problems mentioned with Apple Feedback and gave up on using it many years ago. It's pretty disheartening when you spend a bunch of time getting an OS-crashing bug to reproduce in a small amount of code for a bug report, only for them to neither confirm that they can reproduce it nor consider it worth fixing.


Can it be used via an API?


If you want to use Stable Diffusion t2i and i2i today via a self-hosted API, it's very easy to do so with auto1111: just click the API link at the bottom of the Gradio app, and there's extensive documentation on the endpoints.
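
A minimal txt2img call sketch in TypeScript, assuming the server was started with the --api flag and is listening on the default port 7860:

    const resp = await fetch("http://127.0.0.1:7860/sdapi/v1/txt2img", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        prompt: "a lighthouse at dusk, oil painting",
        steps: 20,
        width: 512,
        height: 512,
      }),
    });
    const { images } = await resp.json(); // base64-encoded PNGs
    console.log(`got ${images.length} image(s)`);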


I think it can only consume other APIs itself. It looks like they want to make it work with a1111's API, and they will ship a plugin that talks to Stability's paid API.


Did you miss the large API link in the header?


Unkind response, and the API link goes to info about Stability's API, not info about an API for interacting with the StableStudio app.


You don't use APIs to interact with apps. Apps use APIs to do actions.

