Hacker News | Svetlitski's comments

Class of 2021 UC Berkeley CS grad chiming in. It's untrue that it's held in Haas Pavilion; the course scaled by making lectures available online. The semester I took the course, it had over 2000 students and somewhere around 100 TAs, IIRC.


> I have yet to see an in-depth treatise on how it works under the hood

Check out “A Complete Guide to useEffect” [1] by Dan Abramov, one of the core React developers.

[1] https://overreacted.io/a-complete-guide-to-useeffect/



This reminds me of programming on a TI-84 calculator. You selected from a list of keywords (as opposed to having to type them out), and the language being TI-BASIC meant variable names were usually just one letter.


Variables were always one letter. You could, however, create a list with up to 100 entries and give it a custom name up to 5 letters long. This is in addition to the six default lists named L1 through L6.


Implementing a simple program to run the quadratic formula on my TI was a lot of fun in school.


This was a long time coming. I graduated in 2021, and while the UC Berkeley computer science program is still absolutely stellar, it was starting to collapse under its own weight. While it scaled admirably, there are limits. I TA’ed operating systems (CS162) my last two semesters, and the office hours queue would fill up immediately upon starting, and due to an ever-shrinking number of course staff due to budget limitations, we were never able to get to everyone who wanted help. We’re talking several hundred students with less than a dozen TAs. This situation was not uncommon across other courses. The absolute smallest non-graduate-level CS course I took had around 100-150 students.


I was in a 40-person CS course: compilers. I cannot imagine that went up much.


Zstandard really is incredible. Anywhere I would’ve previously compressed things with gzip I now universally prefer to use Zstandard instead. It’s a shame it’s not supported as an HTTP Content-Encoding in web browsers.


According to some Firefox developers, brotli is more suitable because of the standardised dictionaries for web content, like JS: https://bugzilla.mozilla.org/show_bug.cgi?id=1301878


can't zstandard use standardized dictionaries as well?


Yes, sort of: "Zstd offers a training mode, which can be used to tune the algorithm for a selected type of data. Training Zstandard is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called "dictionary", which must be loaded before compression and decompression. Using this dictionary, the compression ratio achievable on small data improves dramatically."

https://github.com/facebook/zstd#the-case-for-small-data-com...


According to the Bugzilla thread, Facebook won't bother going through the process of publishing standardized dictionaries.


But surely anyone could publish dictionaries? Let's say Mozilla published a dictionary trained on HTTP headers, HTML, and JavaScript, and then accepted traffic compressed with it under an "Accept-Encoding: zstd/mozilla" header.


I wouldn't go to the trouble of determining the privacy implications of standardized dictionaries built from user data.


Brotli is super slow and thus suitable only for static content.

Zstd also supports pre-trained dictionaries; it is particularly good for small, domain-specific JSON, for example. It won't be that helpful for relatively large HTML files.


The problem is brotli compression is pretty expensive/slow to encrypt. Which isn't really a problem for static content that can be compressed ahead of time. But doesn't work very well for just in time compression.


This isn't true. Brotli compression speed is similar to Zstd at comparable compression ratios.

https://quixdb.github.io/squash-benchmark/unstable/?dataset=...

Perhaps this common misunderstanding is due to the fact that the slowest level (-11) is the default in the brotli command-line utility?

It's also faster than gzip, and gzip was used for "just in time" compression on the web for ages, so I don't see how brotli can be a problem for this use case. (I assume by "encrypt" you mean "compress".)


On a project I worked on, we tried switching from gzip to brotli for on the fly compression, and it significantly increased CPU usage. That was years ago though so it's possible things are better now, or maybe the library we used just didn't support a compression ratio as low as what we were using with gzip.


Fair enough, I think there were some issues with early releases, but they should be fixed now.


I wish Caddy would support brotli; it's either gzip or zstd currently.


Just did some quick research, and it looks like you can serve pre-compressed static resources in Brotli. See the precompressed option of the file_server directive[1].

The thinking seems to be that, due to how CPU-intensive it is, Brotli is not favoured for on-the-fly compression[2].

If you're really desperate for it, though, you could try this extension[3].

[1] https://caddyserver.com/docs/caddyfile/directives/file_serve...

[2] https://caddy.community/t/caddy-v2-brotli/8805

[3] https://github.com/ueffel/caddy-brotli
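The precompressed setup described here can be sketched in a Caddyfile; the site address and root path are placeholders, and this assumes the `.br` files have been generated ahead of time:

```
example.com {
	root * /srv/site
	# On-the-fly compression for formats Caddy supports natively.
	encode zstd gzip
	file_server {
		# Serve pre-built sidecar files (e.g. app.js.br) when the
		# client's Accept-Encoding allows it.
		precompressed br zstd gzip
	}
}
```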


The extension is not performance-focused, so it's slow.

Other web servers support it, mainly Nginx. But since Caddy is written in Go, it needs a native Go port, which doesn't exist yet.


> The extension is not performance-focused, so it's slow.

Yeah, hence my "really desperate" qualifier. I probably should have been more explicit.


Caddy 2 supports brotli static compression out of the box, I ran a test on it just last week with this exact configuration. Dynamic, I'm not sure. That use case doesn't interest me as much.


Yeah, this is definitely a real advantage and not a broke-ass feedback loop of SEO and confirmation bias. It's great that we have to ship our compressors with massive English-biased static dictionaries now.


Technically it is available, but it's just not widely supported. Though it's really unnecessary. Brotli is, in many ways, very comparable or a near equivalent to zstd.


Last time I looked, the Brotli format does not clearly define its framing, such that there is a good chance a random stream of data would decode as valid Brotli-compressed data.


When is this ever left to chance? Why would a system randomly decide to try decoding a stream as brotli if it isn't?


It's not so much about suddenly starting to decode something. It's more that when your stream gets misaligned or corrupted for some reason, whatever is still streamed is happily consumed with no possible way to detect the error.

A common cause is someone forgetting to check how many bytes read() actually returned and just assuming it filled the buffer.

Robust data formats have double protection against this class of bugs, with unambiguous SOF and EOF boundaries that the consumer can assert. I guess a sanity byte once every X megabytes wouldn't hurt either.
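The read() pitfall mentioned here is easy to sketch; a minimal Python helper (the names are mine, not from any particular codebase) that loops until the requested count arrives:

```python
import os

def read_exact(fd: int, n: int) -> bytes:
    """Read exactly n bytes from fd, or raise.

    A single os.read(fd, n) may legally return fewer than n bytes
    (pipes, sockets, interrupted syscalls); assuming it filled the
    buffer is exactly the bug class described above.
    """
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = os.read(fd, remaining)
        if not chunk:
            # EOF mid-message: surface the error rather than
            # silently consuming a truncated stream.
            raise EOFError(f"expected {n} bytes, got only {n - remaining}")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# Demo on a pipe, where short reads are routine.
r, w = os.pipe()
os.write(w, b"length-prefixed payload")
os.close(w)
print(read_exact(r, 23))
```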


I use brotli as a web distribution format and as an internal compression format on web stacks for various projects and for production systems without issue.

I still don't understand the use case limitation, or exactly where this is a potential issue. Why is the data being interrupted? Is this some kind of raw UDP stream use case?


There is RFC8478 for HTTP Content-Encoding, but I don’t think any web browsers currently implement it.

https://www.rfc-editor.org/rfc/rfc8478
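For reference, the negotiation RFC 8478 enables would look like an ordinary Content-Encoding exchange (a sketch, not a capture from a real browser):

```
GET /data.json HTTP/1.1
Host: example.com
Accept-Encoding: zstd, gzip

HTTP/1.1 200 OK
Content-Encoding: zstd
Content-Type: application/json
```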


I bought a Quest 2 in hopes of replacing my external monitor with it, but unfortunately the resolution simply isn’t adequate (in my opinion) for traditional, text-heavy desktop computing tasks like writing code. I think the concept has potential, but personally wouldn’t recommend it right now. It is something I do want to try again if/when higher resolution VR headsets come to market.


Ah Logisim, brings back fond memories of creating a RISC-V CPU for CS61C at UC Berkeley.


That’s cool that they switched CS61C to RISC-V; we did a MIPS CPU (although maybe they are similar now?)


It’s not always the best fit for a particular task, but I’ve personally had some experiences where pair-programming was a huge productivity boon, as it prevented potentially difficult to diagnose bugs from being committed in the first place. I personally feel it shines when you are working on something very complex and particularly detail-sensitive, where a small mistake could cause a devious bug, and so the benefits of having a second set of eyes on everything in real time are large.

I’m curious to hear what you dislike about pair-programming.


I think it really is complete agony for some people, and when I've seen it in practice, certain people really benefit and others just don't.

Speaking for myself, when I'm working on something complex and detail-sensitive, the very last thing I want to do is spend any time at all with anyone else. I want to be able to think properly about the problem without the pressure of having someone else involved.


Anyone who has recently attended UC Berkeley would tell you that this is warranted: the school was running into scaling issues hard (particularly in the EECS department), and continuing to grow enrollment without the state investing significantly more money in the school would only result in a decline in the student experience and overall quality of the education.

Don’t get me wrong, I absolutely love UC Berkeley and wouldn’t want to unnecessarily deny anyone the wonderful opportunity that is studying there, but in order for that opportunity to stay so wonderful the school has to be realistic about growth.

(For context I graduated in 2021)

