
The lectures for CMU's Principles of Functional Programming course are actually online for free! http://www.cs.cmu.edu/~15150/lect.html

Fantasque Sans Mono is similar in spirit and works great for coding too, IMHO: https://github.com/belluzj/fantasque-sans/ Don't knock it until you try it; it really looks nice for clean languages without a lot of symbols, operators, or noise.

The best method for healing burn wounds (also cuts, abrasions, lacerations, and the like) is outlined in these slides:

https://www.emilio-moriguchi.or.jp/disease/doc/moist_eng.pdf

^ Seriously, these slides are incredible; check them out! In particular, slide 42 shows the result of almost an entire fingertip being regenerated near-perfectly. When I first saw the slides my jaw dropped; it seems like the type of thing that only reptiles are supposed to be able to do. Note that the finger could only be regenerated because a tiny part of the nail bed remained; without that, the body would have lost the genetic information to reconstruct the fingertip properly.

TL;DR: Moist wound healing: coat the wound in Aquaphor (or pure petroleum jelly, a.k.a. Vaseline, but I prefer Aquaphor because it's semi-occlusive rather than fully occlusive and contains a couple of additional anti-inflammatory compounds like bisabolol), then cover the wound with saran wrap, a transparent film dressing like Tegaderm, or a hydrocolloid like DuoDERM Extra Thin CGF (with hydrocolloids you can omit the Aquaphor, since it would keep the hydrocolloid from sticking well and the fluid the wound naturally secretes fulfills the same role).

The result of this healing method is fast, low-pain, low-scarring wound healing.

An important part of the approach is to never use topical antibiotics or soap, since those damage the body's wound-healing cells more than they retard the pathogens.

---

As for maggot therapy specifically, it works for wounds with necrotic tissue. The maggots eat the dead tissue and leave the healthy tissue completely intact. So it's not necessary for burn wounds.


Right, but the important part is the notes, not the notebook. It could be notecards, Google Drive, text files in git, SQLite, whatever. Taking the notes and being able to find them later when you need them are literally the only two important factors here.

The "fooling themselves" element is in thinking adding sophistication beyond two those things improves the usefulness of the notes themselves. And there's some personal flexibility here too sure; if you truly can't ever find a note when you need it and adding a tagging system gets you there then that's useful additional sophistication.

I think the point they were trying to make, and definitely the one I'm making, is that the bar between a useful system and hobbyist tinkering is a lot lower than people want to think, because they want to ascribe purpose or benefits to their tinkering. Which is where the productivity porn comes in: a framework that only values things if they are, or contribute to, "productivity" demands that everything be productive, demands that you justify it.

But truly and honestly, if you have a flat folder of text files and grep, you have what you need, and anything beyond that is tinkering. That's what people are fooling themselves about.


Simple answer: because the software industry is a goddamned joke. It's full of complexity-fetishists who are arrogant enough to try and call themselves engineers while producing objectively terrible products and patting themselves on the back for it.

Remember a while back, when Casey Muratori told Microsoft their terminal emulator could be a lot faster if it used a glyph index, and professional Microsoft engineers told him that was complexity worthy of a doctoral thesis [0]?

There's a reason he got so mad at them, and it's because this level of "competence" is pervasive in our industry and he has to see it every single f'ing day. See also: "When does the draw window change?", a demonstration of how Microsoft's flagship IDE's debugger fails to compete with a one-man project.

https://imgs.xkcd.com/comics/voting_software.png

[0] For anyone not in the know, this is essentially just emulating how a real dumb terminal from the 70s would work; it is the blindingly obvious implementation and would be familiar to just about any game developer on earth. Casey spent the next weekend coding a terminal display demo that was three orders of magnitude faster despite being completely unoptimized, as a demonstration of the lower bound of how fast a terminal should be. Microsoft's terminal is still not as fast, even though they did eventually implement his solution without, initially, giving him any credit.
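
To make the idea concrete, here's a minimal sketch of what a glyph index amounts to (my own illustration, not Casey's or Microsoft's code): rasterize each character at most once, cache the bitmap, and reuse it for every cell that displays that character.

    use std::collections::HashMap;

    // Placeholder for a rendered glyph; a real terminal would store a slot in a texture atlas.
    struct GlyphBitmap {
        width: u32,
        height: u32,
        pixels: Vec<u8>,
    }

    // Stand-in for the expensive part: shaping and rasterizing a glyph with the font library.
    fn rasterize(ch: char, px_size: u32) -> GlyphBitmap {
        let _ = ch;
        let (w, h) = (px_size, px_size * 2);
        GlyphBitmap { width: w, height: h, pixels: vec![0u8; (w * h) as usize] }
    }

    struct GlyphCache {
        glyphs: HashMap<(char, u32), GlyphBitmap>,
    }

    impl GlyphCache {
        fn new() -> Self {
            GlyphCache { glyphs: HashMap::new() }
        }

        // Each (character, size) pair is rasterized only the first time it is seen;
        // after that, drawing a cell is just a lookup and a blit.
        fn get(&mut self, ch: char, px_size: u32) -> &GlyphBitmap {
            self.glyphs.entry((ch, px_size)).or_insert_with(|| rasterize(ch, px_size))
        }
    }

    fn main() {
        let mut cache = GlyphCache::new();
        for ch in "hello world, hello world".chars() {
            let glyph = cache.get(ch, 16);
            // A real terminal would copy `glyph.pixels` into the right cell here.
            let _ = (glyph.width, glyph.height);
        }
    }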


Ah! This question offends as many people as it pleases (because it is a correct question).

Some people in academia are actually trying to solve this issue. You may look at:

1. How to Design Programs : https://htdp.org/

2. Data-Centric Introduction to Computing: https://dcic-world.org/

3. A course based on the above book: https://learnaifromscratch.github.io/software.html


Technically, you don't need a macro at all:

    use std::io::Write;

    fn main() {
        std::io::stdout().write(b"Hello, world!\n").unwrap();
    }
The reason people often use println! is that it is variadic and supports any number of arguments to format into the string. For example:

    println!("x is: {}", x);
This also has the benefit of allowing type checking at compile time, as opposed to a function like printf in C, which has no such power. Additionally, this allows Rust to take references to the variables passed in without the programmer having to specify them, in order to prevent unnecessary copying.
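
For example (a small illustration of the implicit borrowing, not from the original comment):

    fn main() {
        let name = String::from("world");
        // println! takes `name` by reference behind the scenes, so it is not moved here...
        println!("Hello, {}!", name);
        // ...and is still usable afterwards, with no explicit `&` needed above.
        println!("{} is {} bytes long", name, name.len());
        // A mismatched call such as println!("{} {}", name) would be rejected at
        // compile time, unlike printf in C, where it would fail at runtime.
    }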

These reasons tie directly into Rust's philosophy of making it easy to write reliable, performant code.


> On the other hand, having the wisdom of knowing what can be static in the first place? I don't think that it's something teached.

All traffic is static by definition. You are not modifying bytes while they are in transit to the user. And you don't have to serve different bytes each microsecond just because users want to be "up-to-date". Network latency is usually around 40ms or so, so if your website serves thousands of requests per second, you should be able to cache each response for 10ms and no one will ever notice (today this is called "micro-caching").
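
A rough sketch of micro-caching in application code (hypothetical names; in practice this is usually configured in the web server or a reverse proxy, e.g. with Nginx's proxy_cache, rather than written by hand):

    use std::time::{Duration, Instant};

    // Hypothetical micro-cache: reuse one rendered response for a few milliseconds,
    // so thousands of identical requests per second reach the backend only ~100 times per second.
    struct MicroCache {
        ttl: Duration,
        entry: Option<(Instant, String)>,
    }

    impl MicroCache {
        fn new(ttl: Duration) -> Self {
            MicroCache { ttl, entry: None }
        }

        fn get_or_render(&mut self, render: impl FnOnce() -> String) -> String {
            if let Some((stamp, body)) = &self.entry {
                if stamp.elapsed() < self.ttl {
                    return body.clone(); // still fresh: skip the backend entirely
                }
            }
            let body = render(); // the expensive backend/database work happens here
            self.entry = Some((Instant::now(), body.clone()));
            body
        }
    }

    fn main() {
        let mut cache = MicroCache::new(Duration::from_millis(10));
        for _ in 0..5 {
            // Only the first call (and any call after the 10ms TTL) actually renders.
            let _page = cache.get_or_render(|| "<html>...</html>".to_string());
        }
    }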

Of course, most webpages can't be cached as a whole: they have multiple "dynamic" parts and have to be put together before being served to the user. But you can cache each of those parts! This is even simpler if you do client-side rendering (which is why MangaDex's abysmal performance is pathetic).

Then there are ETags: arbitrary strings that can be used as keys for HTTP caching. By encoding information about each "part" into a substring of the ETag, you can perform server-side rendering and still cache 100% of your site within a static web server such as Nginx. The backend can be written in an absolute hogwash of a language such as JS or Python, but the site will run fast because most requests will hit Nginx instead of the slow backend. ETags are very powerful; there is literally no webpage that can't be handled by a well-made ETag.
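
As a sketch of the mechanics (simplified, and in the setup described above the comparison would typically be done by Nginx, not your backend): build the ETag from whatever identifies each part, send it with the response, and answer If-None-Match with a 304 when it still matches.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Encode whatever each "part" of the page depends on into the ETag,
    // e.g. the user id and version counters of the fragments on the page.
    fn etag_for(user_id: u64, fragment_versions: &[u64]) -> String {
        let mut h = DefaultHasher::new();
        user_id.hash(&mut h);
        fragment_versions.hash(&mut h);
        format!("\"{:x}\"", h.finish())
    }

    // Decide between a 304 Not Modified and a full 200 response.
    fn respond(if_none_match: Option<&str>, current_etag: &str) -> &'static str {
        match if_none_match {
            Some(tag) if tag == current_etag => "304 Not Modified (empty body)",
            _ => "200 OK (render the page, send it with the ETag header)",
        }
    }

    fn main() {
        let etag = etag_for(42, &[7, 3, 19]);
        // First request: the client has nothing cached.
        println!("{}", respond(None, &etag));
        // Repeat request: the client echoes the ETag back via If-None-Match.
        println!("{}", respond(Some(etag.as_str()), &etag));
    }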

Even pages that need to be tailored to the user's IP can be cached. It is tricky, but possible with Nginx alone.

Instead of "static" you are better off thinking in terms of "content-addressable".

