
I've had a real pet peeve about footnotes on mobile, and this page commits the crime. If clicking a footnote jumps me very far away and there's no return navigation, I've usually left your page by the second footnote I hit without noting my exact scroll position. A small thing, but pretty please don't make me have to hunt for where I was...

Conal Elliott has it covered there: http://conal.net/papers/convolution/


I liked the way Pearl phrased it originally: a calculus of anti-correlations implies causation. That makes the nature of the analysis clear and doesn't set off the classical mind's alarm bells.


Unfortunately this calculus is exceedingly complicated, and I haven't even seen a definition of "a causes b" in terms of it. One problem is that Pearl and others make use of the notion of "d-separation". This allows for elegant proofs but is hard to understand. I once found a paper which replaced d-separation with equivalent but more intuitive assumptions about common causes, but I've since forgotten the source.
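
For the curious, the criterion itself is mechanical once stated. Here's a toy sketch in Python of the standard "moralization" check for d-separation (my own illustration, not code from Pearl or any library): restrict to ancestors, marry co-parents, drop directions, delete the conditioning set, and test connectivity.

    from itertools import combinations

    def d_separated(parents, xs, ys, zs):
        """parents: {node: set of its parents}. True iff xs _||_ ys | zs,
        checked via the standard moralization criterion."""
        # 1. Keep only ancestors of xs | ys | zs (including those sets).
        keep = set(xs) | set(ys) | set(zs)
        stack = list(keep)
        while stack:
            for p in parents.get(stack.pop(), ()):
                if p not in keep:
                    keep.add(p)
                    stack.append(p)
        # 2. Moralize: undirected skeleton plus edges between co-parents.
        adj = {n: set() for n in keep}
        for child in keep:
            ps = parents.get(child, set()) & keep
            for p in ps:
                adj[p].add(child)
                adj[child].add(p)
            for a, b in combinations(sorted(ps), 2):
                adj[a].add(b)
                adj[b].add(a)
        # 3. Delete the conditioning set, then test reachability.
        for n in adj:
            adj[n] -= set(zs)
        frontier = list(set(xs) - set(zs))
        seen = set(frontier)
        while frontier:
            n = frontier.pop()
            if n in ys:
                return False  # an active path exists
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    frontier.append(m)
        return True

    # Collider example: rain -> wet <- sprinkler.
    g = {"wet": {"rain", "sprinkler"}, "rain": set(), "sprinkler": set()}
    print(d_separated(g, {"rain"}, {"sprinkler"}, set()))    # True: marginally independent
    print(d_separated(g, {"rain"}, {"sprinkler"}, {"wet"}))  # False: conditioning opens the collider

The collider case at the end is the classic surprise: conditioning on a common effect creates dependence between its causes.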

By the way, there is also an alternative to causal graphs, namely "finite factored sets" by Scott Garrabrant. Probably more alternatives exist, though I don't know much about their (dis)advantages.


and probably nowhere to hide if you want to just rest and vest.

"Rest and vest". What a luxury. Some of us are trying to tread water with an anvil chained to the waist. That's what I get for choosing to work for something I'm passionate about. What a world


If you refuse a higher-paying job because you would like to work for something you're passionate about, you are effectively valuing your passion at at least the delta between the job offers.


> That's what I get for choosing to work for something I'm passionate about.

Are you passionate about working for your current employer or passionate about your career?

I’m passionate about my career (programming/software development): I love reading the classic tech books out there (TAOCP, DDIA, etc.), I love working on side projects, sometimes I contribute to open source, etc.

On the other hand, I couldn’t care less about my employer. Not because they are bad; I just don’t care about most of the tech companies out there. I do what I need to do to get paid, but I’m not passionate about deadlines, conversations with other engineers about subjective topics, daily standups, on-call, etc.


Yeah, I work at an FFRDC, pushing the frontier of our exploration of space. I'm quite passionate about both the employer and (most of the time) my work. If I didn't care about my employer at that level I woulda been out the door after two years. I do sometimes think about the other side of that, though: what if I didn't care for them but the money was good?

My home hobbies resemble work in my area, though not my current job function, and I'm keenly aware my employer may try to take advantage of those other talents. I guess the only thing that saves having an alternative hobby is that I'm diversified enough that whatever my work task is, I can work on an orthogonal hobby at home. It's kinda nice; I've got both, really.

In my mind, if I "sell out" (in terms of trading in that sense of purpose in my work) and just work for the paycheck, then I would go for the max: work at an HFT quant firm as a senior or principal engineer or research analyst. $$$. Or some other juicy gig. But I would hate myself if that became a slog and I had no time for my interests at home.


The Mars global dust storm is caused by coupling of angular momentum across the (solar) system, a global effect. The Mars system itself, down to the dust, does not create sufficient conditions.


That coupling is due to aggregate interactions of all the system's particles, so it still comes down to a cumulative microscopic effect. Pretty good example though!


I think one of the reasons I have that example is that this specific dynamic is cited in most planetary science texts as "de-coupled", "invariant", etc., etc., when in fact it's the major causal influence here, which was quite a surprise in recent years [glances at climate, mostly still beating to the tune that particle inertia does not have to care about the system angular momentum variance at the solar system scale].


The idea of computing as the shared stage to reflect our own intelligence is really what sticks out to me as the best way to frame what interacting with a computer means. It's not new, but Alan did a great job of motivating and framing it here. Thanks for posting this great reminder that what we use as computers today are still only poor imitations of what could truly be done if we could transport our minds to be more direct players on that stage.

It's interesting to reflect the other way as well: what if we are the actors reflecting a computer to itself? An AGI has to imagine and reflect in a space created of our ideas. To be native, the AI needs better tools: the "mouse" of its body controlling the closed loop of its "graphics". How do we create such a space that is more directly shared, dynamically trading between actor and audience in an improvisational exchange? This is the human-computer symbiosis I seek.


A first-class programming language (not an LLM) to talk to the computer's OS along with a rich library is the most important missing component IMO.

Humans communicate mainly with language and no OS provides this in a satisfactory way for the average user.

The result is users mostly clicking on signs to choose among predetermined tasks, like monkeys in a lab.


That's because computers are dumb servants. And that's a good thing, because computer solutions should be task specific. The human has agency and sometimes imagination, and the computer's job is to solve a problem as transparently as possible with as little cognitive load as possible.

Human language is optimised for human relationships, not for task-specific problem solving. It's full of subtext, context, and implication.

As soon as you try to use natural language for general open-ended problem solving you get lack of clarity and unintended consequences. At best you'll have to keep repeating the request until you get what you want, at worst you'll get a disaster you didn't consider.


Yes, the machine should be a humanity-amplifier, not a humanity-replacement.

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” Frank Herbert, Dune


Herbert is describing a humanity amplifying phenomenon.


Herbert saw machine-thinking / AI as taking away human-ness, not adding to it. Or at least his Bene Gesserit did. Many quotes throughout the series about the corrosive effects of letting humans defer their complicated choices to machines, etc.


> And that's a good thing, because computer solutions should be task specific

Why? I enjoyed your comment but I don't follow why this is obvious?

Why should computer solutions be task specific rather than general?


So that when you ask what 1+1 is, you get 2 and not a general solution.


Why not both?

If I were making an API call, I'd want to get 2, but as an end user, I'd be delighted by "It's 2, and here's how I arrived at it ..."


"computer solutions should be task specific". Sums it up perfectly


There are things about human language that are fundamentally at odds with effective communication with a computer (or engineering in general).

One example is ambiguity. Every human language has faculties for ambiguity, which is crucial in many human interactions and relationships. The ability to make implicit requests or suggestions while maintaining plausible deniability is valuable, even in situations that are non-adversarial.

In contrast, when communicating with an OS or engineering a mechanical device, ambiguity is a negative and it's crucial to use language that is deterministic.

There is a very real degree to which people are only capable of thinking clearly about complex systems to the degree they are comfortable with tools such as mathematics and programming that can be used to unambiguously describe them.


> In contrast, when communicating with an OS or engineering a mechanical device, ambiguity is a negative and it's crucial to use language that is deterministic.

Very much a historical artifact. Modern systems (LLMs, for example) can in principle handle ambiguous inputs.


Is ambiguity ever desirable when communicating with them?


I would argue no - the entire concept of prompt engineering exists for this reason.


I agree. I think one way to achieve that is to first acknowledge the important distinction between modes of programming and levels of abstraction. See, even in programming, we’re still restricted to a given domain. A web dev does not program to the network stack, and thus need not know NIC or TCP frame internals. The web dev only interacts with the lower levels via high-level parameters (data) through the low-level APIs, which are already done and settled, not directly with code. The same goes for each layer and domain.

Now, expecting users to write actual code is not realistic or sustainable. What’s more reasonable is to imagine a scenario where users need only give the parameters for functions which are already done and settled. So, composing super-high-level functions in an environment which provides an extensive library of utilities seems the way to go. Users already do parameterization every day, such as when they press buttons and fill forms. What’s missing is simply an abstraction which gives them the power to compose those functions.

Regarding the paradigmatic framing for an end-user language, I think stack-based programming offers a superior model, because the context window is directly visible and easy to track and reason about. More than that, stack languages offer a good avenue for learning, since it’s the easiest model to teach by analogy; even physical analogies can be made. It boils down to just pushing and popping in a coherent order. The order of operations is in fact the program. Hard to beat this simplicity and universality.
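
To make that concrete, here is a minimal sketch of that model in Python (every function name in the "library" below is hypothetical, invented for illustration): users never write the functions, they only push parameters and name settled operations, and the token order is the program.

    def evaluate(program, library):
        """program: list of tokens. Tokens found in `library` are applied
        to the stack; everything else is pushed as data (a parameter)."""
        stack = []
        for token in program:
            if token in library:
                fn, arity = library[token]
                args = [stack.pop() for _ in range(arity)][::-1]
                stack.append(fn(*args))
            else:
                stack.append(token)
        return stack

    # A made-up "extensive library of utilities", already done and settled.
    library = {
        "resize":  (lambda img, w: f"{img}@{w}px", 2),
        "sepia":   (lambda img: f"sepia({img})", 1),
        "save-as": (lambda img, name: f"saved {img} -> {name}", 2),
    }

    # The order of operations is the program:
    print(evaluate(["photo.jpg", 800, "resize", "sepia", "out.jpg", "save-as"],
                   library))
    # -> ['saved sepia(photo.jpg@800px) -> out.jpg']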


To my mind, at least part of the issue here is that human communication languages are fundamentally lossy by nature. Everything has too many meanings and requires inference. This is why communicating with code gets so much easier: it has to be exactly right or the computer fails. And because it is at least consistent, we can debug and fix that communication, and once that is done it will work with reasonable consistency.


That describes a limitation of computers (and current interfaces to them) though.

Requiring humans to describe stuff "unambiguously" is the easy cop-out to that.

Getting computers to handle the ambiguity and resolve it as well as humans do is what would be really amplifying. LLMs are a good step toward that, compared to a regular programming language/interface.


It's also a limitation of human-human communication, and why nations have ambassadors (who presumably have a shared context from which to start when dealing with a foreign nation).


I don't think diplomacy and ambassadors are there to handle ambiguity in communication.

They are there to handle conflicting interests and goals.

To that end, ambiguity in communication is something they use on purpose, not something they're there to solve.


> Humans communicate mainly with language and no OS provides this in a satisfactory way for the average user.

Most humans would have nothing to say to an OS. I am a software engineer and most of the time I function many levels of abstraction away from the OS. Most of my work doesn't even run on the same OS I work in, and when it runs, it talks to an OS that's not even running on an actual computer, but a construct that looks like a computer, but is entirely made up by a hypervisor.


>A first-class programming language (not an LLM) to talk to the computer's OS along with a rich library is the most important missing component IMO.

That "programming language" might just be ability to have generative interfaces (LLM based).

>Humans communicate mainly with language and no OS provides this in a satisfactory way for the average user.

LLMs literally fix this.


>The idea of computing as the shared stage to reflect our own intelligence

We tried that, and it worked briefly. But the end result is the modern web/app landscape: commercialization, tits and cats, hating, techno-feudal and government control, partisanship BS, spam, narcissism, and rare sprinkles of intelligence here and there.


That’s true, but I think the core of human communication has always been half of what you’ve listed: low intelligence, hating, spam and narcissism. It’s just more obvious and amplified online, since you can see everyone engaging with it all at the same time. In pre-internet times, you’d need to be physically present and would see at most a bar-full of people doing that.

I’m still very hopeful we will use the tech to help us with some non-communication-related things. Maybe something that’ll even off-ramp people out of the internet world.


>In pre-internet times, you’d need to be physically present

Because of that, the very real possibility of getting a punch in the face if you went over the line also helped curb those behaviors somewhat.


A sentiment often expressed, but I find it too close to “An armed society is a polite society” for my comfort.



> We tried that, and it worked briefly.

Where and when?


In the early days of the internet up to the early years of the web.


Do you have any specific examples?


I feel like the LLM interface will enable that. I wonder what Alan Kay makes of the current LLM revolution (he does talk a bit about it in the question section, at around 1:35).


I am a big fan of just-in-time, on-the-fly UI, and LLMs seem to make that possible now with some of the fast token outputs. There are a couple of experiments [1] using images for now, and I expect this to be more useful before too long.

[1] https://twitter.com/sincethestudy/status/1761099508853944383...


I'm not Alan, but I'm pretty sure he isn't too happy about it.


You can run a friggin' full 32-bit Lisp on the untyped lambda calculus!

https://woodrush.github.io/blog/lambdalisp.html
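
Not LambdaLisp itself, but a taste of the trick it rests on, sketched in Python: in the untyped lambda calculus, data is encoded as functions. Church numerals, for instance, represent n as "apply f, n times":

    # Church numerals: numbers as pure functions, no ints anywhere.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):  # decode for printing, stepping outside the calculus
        return n(lambda k: k + 1)(0)

    two   = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))  # 5

LambdaLisp builds this kind of encoding all the way up to a working Lisp interpreter.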


As someone who knows that not enough people care about the math: please ignore this advice and actually learn the math. You might come up with a better representation in the process. In any case you'll learn more than just that it works, but how and why. And if your goal is to apply this method in other places, you will have gained a good idea of how.


Yes! I am much more productive in those hours too, despite being "a morning person" with regard to getting up early and being spry. On my own hobby work I find myself crushed by finally hitting flow at like 1 am, and then: oh crap, I've gotta work in six hours, better get some sleep.


TTY queries are written to stdout but read from stdin. That's not user interaction. E.g. if your system doesn't have an ioctl for window size (or you're over a remote serial line, etc.), you set the cursor to the bottom right and ask for its position. Those programs break with no stdin, because a tty is inherently bidirectional communication!
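
Here's a minimal sketch of that window-size trick in Python (the escape codes are the standard CUP and DSR sequences; a real program would also need a read timeout, since some terminals never answer):

    import re
    import sys
    import termios
    import tty

    def cursor_position():
        """The DSR query (ESC [ 6 n) goes out on stdout; the reply
        (ESC [ row ; col R) comes back on stdin. Hence: bidirectional."""
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)  # so the reply isn't line-buffered or echoed
            sys.stdout.write("\x1b[6n")
            sys.stdout.flush()
            reply = ""
            while not reply.endswith("R"):
                reply += sys.stdin.read(1)
            row, col = map(int, re.search(r"\[(\d+);(\d+)R", reply).groups())
            return row, col
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)

    # Window size without any ioctl: park the cursor at the bottom right
    # (the terminal clamps the move), then ask where it actually landed.
    sys.stdout.write("\x1b[999;999H")
    sys.stdout.flush()
    print(cursor_position())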


Such queries are not universally supported, and a program using them has to be prepared to expect a lack of response to the query. (This is one of many issues with such queries.)

