But street lights don't have to use harsh 3000 kelvin LEDs; warm-light LEDs (2400-2700 kelvin) exist. These are widely available for the home, for example, yet most people just buy the 3000K LED bulbs because (IME) it doesn't occur to them that there is a strong aesthetic (and health) difference between the two color temperatures. In other words, they don't care.
A lot of America especially has an issue with too many lights, and lights that are too bright relative to the population. While searching the subject I found what I thought was a cool image on Wikimedia Commons showing light use relative to population density: green means lots of light use relative to population, red means lots of population relative to light use, and yellow means light use roughly proportional to population density. America is bright green. [1]
They shine through my windows at night and are truly horrific.
They’re down the entire alleyway behind my place, and a walk to the grocery store at 7pm during the winter makes your body and mind think it’s sunrise.
It's probably getting better, but the amber-colored LEDs used to be rather inefficient. I've also heard that white lighting can slightly improve drivers' reaction times and leads to slightly clearer captures from security cameras. I personally think these benefits do not outweigh how extremely ugly and unwelcoming the lights are, but "city officials just don't care" is not what led to the adoption of white LED street lighting at all.
A lot of wildlife (birds, bats, insects, etc.) is really confused by white light. Some Nordic countries are experimenting with red street lights in outlying districts, and the trials are showing great promise. (I don't have a reference at the moment, but it should be googleable.)
It’s great that this can run on a laptop, but FWIW, the Llama 70B model is nowhere near “GPT-4 class” in my own use cases. The 405B might be, though I haven’t tested it.
When I say GPT-4 class I'm talking about being comparable to the GPT-4 that was released in March 2023.
The Llama 3.3 70B model is clearly nowhere near as good as today's GPT-4o family of models, or other top-ranking models like Gemini 1.5 Pro and Claude 3.5 Sonnet.
To my surprise, Llama 3.3 70B is ranking higher than Claude 3 Opus on https://livebench.ai/ - I'm suspicious of that result, personally. I think Opus was the best available model for a few months earlier this year.
I guess it's because it has the highest instruction-following score of all models, 20 points higher than Opus, which compensates for shortcomings elsewhere (e.g. in language) and wouldn't necessarily translate into human evaluations of usefulness.
The model you are running isn't the one used in the benchmarks you link.
The default llama3.3 model in Ollama is heavily quantized (~4-bit). Running the full fp16 model, or even an 8-bit quant, wouldn't be possible on your laptop with 64 GB of RAM.
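Back-of-the-envelope math for the weights alone (ignoring KV cache and runtime overhead):

    // Approximate memory needed just to hold a 70B-parameter model's weights
    const params = 70e9;
    for (const [name, bytesPerParam] of [["fp16", 2], ["8-bit", 1], ["4-bit", 0.5]]) {
      console.log(`${name}: ~${params * bytesPerParam / 1e9} GB`);
    }
    // fp16: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB
    // Only the ~4-bit quant fits in 64 GB of RAM.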
Vibes, based on what I can remember using that model for.
There's still a gpt-4 model available via the OpenAI API, but it's gpt-4-0613 from June 2023 - the March 2023 snapshot gpt-4-0314 is no longer available.
I'm not going to try for an extensive evaluation comparing it with Llama 3.3 though, life's too short and that's already been done better than I could by https://livebench.ai/
I am not particularly interested in those benchmarks that deliberately expose weaknesses in models: I know that models have weaknesses already!
What I care about is the things that they're proven to be good at - can I do those kinds of things (RAG, summarization, code generation, language translation) directly on my laptop?
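For instance, here's a minimal sketch of what that looks like against a local Ollama server (assuming the llama3.3 model is already pulled; endpoint and fields per Ollama's documented /api/generate route):

    // Prompt a locally running model through Ollama's REST API
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3.3",
        prompt: "Summarize this document: ...",
        stream: false, // one JSON response instead of streamed chunks
      }),
    });
    const data = await res.json();
    console.log(data.response);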
They are not, and probably never will be. Looking up at the sky regenerates your surroundings. Inventory is less tricky, since you just feed it back as input, but it is so easy to lose the landscape.
Congrats! I have strong nostalgia for the ones I worked on and owned in the 90s. Using those machines felt more like living in the future than any tech I've experienced since, including the iPhone.
"In JavaScript, neither weak references... is generally available". I think that was true with the old weak collation classes, but doesn't the newer JS WeakRef provide proper weak references?
WeakRefs landed around 2020 (in Chrome 84), but SqueakJS was started in 2013 -- "November 2013 Project started (after seeing Dan's Smalltalk-72 emulator at Hackers)" -- and that paper was written in 2014.
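For reference, here's roughly what the modern API gives you (a minimal sketch; GC timing is deliberately unobservable, so deref() may keep returning the target for a while after the last strong reference is gone):

    // WeakRef holds its target without keeping it alive
    let target = { name: "demo" };
    const ref = new WeakRef(target);
    console.log(ref.deref()?.name); // "demo" while target is still reachable

    target = null; // drop the last strong reference
    // Some time later, after a GC cycle:
    console.log(ref.deref()?.name ?? "collected");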
Great to see work like this being done. JavaScript is often a "good enough" language, but an efficient Smalltalk (or Self) implementation with support for things like system images, become:, coroutines, and other advanced features would open up a lot of advanced programming techniques: fast portable images, transparent futures, cooperative concurrency without async/await on every call, etc.
The current release can do coroutines via the delimited continuation support, but the next release will have ready-to-use lightweight threads (aka "fibers") that integrate with JS promises. No async/await marked functions/calls.
I've run into many of the problems mentioned with Apple Feedback and gave up on using it many years ago. It's pretty disheartening to spend a bunch of time getting an OS-crashing bug to reproduce in a small amount of code, submit it in a bug report, and then have them neither confirm that they can reproduce it nor consider it worth fixing.
If you want to use Stable Diffusion txt2img and img2img today via a self-hosted API, it's very easy to do so with auto1111: just click the API link at the bottom of the Gradio app, and there's extensive documentation on the endpoints.
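For example, a rough sketch of a txt2img call (assumes a local instance launched with the --api flag; the generated docs at /docs list the full parameter set):

    // Minimal txt2img request against a local auto1111 instance
    const res = await fetch("http://127.0.0.1:7860/sdapi/v1/txt2img", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: "a lighthouse at dusk", steps: 20 }),
    });
    const { images } = await res.json(); // base64-encoded PNGs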
I think it can only use other APIs itself. Looks like they want to make it work with a1111's API, and they will ship a plugin that talks to Stability's paid API.