ArbitraryLimits's comments

I don't think so - the whole point is that others don't want the same behavior from you that you would want from yourself, were you in their shoes.


Read my post again. It covers that issue. Perhaps more concisely than you are used to, but it is in there.


I'm perfectly comfortable with concision, thank you.

Clearly one can interpret the Golden Rule two ways depending on whether you take into account the other person's different values or not.

For example, you're probably patting yourself on the back for being "concise" despite using more words to praise your original comment than it would have taken to clarify it, whereas I'm resenting you for the way you've ostentatiously forgone social niceties at my expense.

Is that really what you would have wanted in my place? Despite your ability to parse it, I don't think you understand the Golden Rule very well.


SF is (or used to be until recently) the West Coast's banking center, and everyone in that industry has to be ready when markets open on the East Coast at 5am Pacific.


> In Ruby, unit tests stand in place of static compiler checks. I haven't heard a strong argument against them nor a replacement for them.

How about static compiler checks? :)


I don't know a great deal about Ruby, but I'd wager that the way it's designed would make static type checking essentially impossible. And you probably understand this, but still. Dynamic languages allow fundamentally unsound types, for example:

    def foo(x):
        # prints its argument and returns itself, so foo(1)(2)(3)... keeps going
        print(x)
        return foo
This function has no finite type (in a standard system anyway); its type would have to satisfy t = a -> t, i.e. "a -> a -> a -> ...". However it's perfectly obvious what it does, and it's conceivable that it or similar functions might exist and be useful in actual code. Dynamic languages allow behaviors which are impossible in statically typed languages (properties created at runtime are another example).

Of course, one could use gradual typing to get around some of this.
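
A rough sketch of what that might look like, using Python (the language of the example above, not Ruby; the names here are invented for illustration): annotate the parts a checker such as mypy can verify, and leave the genuinely dynamic parts as Any.

    from typing import Any

    def greet(name: str) -> str:
        # Annotated: a checker like mypy can flag greet(42) before runtime.
        return "hello, " + name

    def foo(x: Any) -> Any:
        # Untyped escape hatch: the self-returning function from above still
        # works; the checker simply stops reasoning about its return value.
        print(x)
        return foo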


Still not an adequate replacement for unit tests. The compiler can prove that your code is type-correct, but not that your logic is.


In the same way that writing a unit test for something proves your logic is correct? This isn't intended as snark or anything. Just stating the obvious: unit tests are no silver bullet for getting to a correct, working piece of software.

My 2 cents: combine static(ish) typing with tests and a number of (semi-manual) test scenarios and you get a few steps closer to a correctly working piece of software.
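
A toy sketch of that combination in Python (safe_divide and its divide-by-zero rule are invented purely for illustration):

    def safe_divide(a: float, b: float) -> float:
        # The annotations let a static checker catch safe_divide("1", "2")...
        if b == 0:
            raise ValueError("division by zero")
        return a / b

    def test_safe_divide():
        # ...while a unit test checks the logic the types can't express.
        assert safe_divide(6.0, 3.0) == 2.0
        try:
            safe_divide(1.0, 0.0)
        except ValueError:
            pass
        else:
            assert False, "expected ValueError"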


Manual testing is what really confirms that your code is working properly. Automated testing verifies the conditions necessary for your code to pass manual testing. The real value of automated tests is for when you need (or someone else needs) to come back and change something.

I don't think static typing is necessary in that case, but I understand it has benefits in some situations.


> In the US system ... everyone else is promoted and then assigned

It's worth pointing out that one of the reasons this system works in the US Army is that non-commissioned officers are trusted with a great deal more responsibility than their Soviet counterparts were. The mechanical promotion of junior officers would be impossible without 15-year veteran NCOs taking care of a lot of the grunt work that Soviet officers got stuck with, simply because the Soviet system wouldn't trust NCOs with it.


Wouldn't this only apply to neural networks as used for classification? I mean the general paradigm of deforming curves until they're separated by a hyperplane seems pretty obvious now that I see it in front of me, but what about neural networks used to approximate continuous functions?


I'll take a stab at this (I'm a decade out from my last machine learning class, so no guarantees on correctness). The only reason it's fitting a hyperplane is that one class is being mapped to the continuous value -1.0 and the other class to the continuous value 1.0, with a thresholding step at the end (the hyperplane perpendicular to the line onto which the continuous values are projected) to determine the class. If you're doing regression instead of classification, your training data will be fed in with more output values than just 1.0 and -1.0 and you'll omit the thresholding at the end, but otherwise the behavior and intuition should be the same as in the article.
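
A minimal numpy sketch of that distinction (the toy network, weights, and threshold below are made up for illustration; nothing here is taken from the article):

    import numpy as np

    # Toy one-hidden-layer network with arbitrary placeholder weights.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)

    def forward(x):
        # The same continuous-valued forward pass in both cases.
        h = np.tanh(x @ W1 + b1)
        return (h @ W2 + b2).squeeze()

    def classify(x):
        # Classification: threshold the continuous output at 0
        # (targets would have been trained toward -1.0 and +1.0).
        return np.sign(forward(x))

    def regress(x):
        # Regression: keep the continuous output, no thresholding step.
        return forward(x)

    x = np.array([[0.5, -1.2]])
    print(classify(x), regress(x))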


This is basically what VxWorks (real-time OS) started doing in their 6.x series - the MMU is active and all memory addresses are virtual, but no two processes get access to the same virtual address range. That way you can debug with virtual addresses turned on and see page faults instead of whatever hilarity ensues from overwriting the operating system itself, then turn the MMU off and stop taking the execution speed hit of translating addresses. It turns out that almost everyone just leaves memory protection on all the time anyway, since duh, it's the only sane thing to do.

Fun fact: the most reliable indicator that you have this kind of setup is whether the Unix emulation layer (if present) offers fork() - if only one process can access a given virtual address, then you obviously can't create a copy of a process that uses pointers.
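
As a quick illustration of why fork() depends on that, here's a POSIX-only sketch (CPython on Linux/macOS assumed, where id() happens to be the object's virtual address):

    import os

    data = [1, 2, 3]
    addr = id(data)      # in CPython this is the object's virtual address

    pid = os.fork()      # child gets a copy-on-write copy of the address space
    if pid == 0:
        # Child: its copy of the list lives at the same virtual address,
        # so every pointer copied from the parent is still valid.
        assert id(data) == addr
        os._exit(0)
    else:
        os.waitpid(pid, 0)

If each process had to occupy a disjoint virtual range, none of those copied pointers could stay valid, which is why such a setup can't offer fork().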


How is the process isolation implemented in such a setup?

The traditional approach is to flush the TLB and load a new memory map on a context switch in order to implement isolation, though what you're talking about sounds like something else.


That's true but I think the bigger reason is that a single address space makes system calls as cheap as regular function calls, since there is no kernel boundary any more.

Ironically, VxWorks's latest major rev was all about turning memory protection on, since people overwhelmingly prefer the performance hit of kernel calls over the heisenbugs that come from stomping on the kernel's code and data structures.
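
To put a rough number on that kernel-boundary cost, here's a throwaway micro-benchmark (CPython on a modern glibc assumed, where os.getpid() really does trap into the kernel; absolute figures will vary a lot by machine):

    import os
    import timeit

    def plain_call():
        # An ordinary user-space function call: no kernel boundary crossed.
        return 42

    syscall_time = timeit.timeit(os.getpid, number=1_000_000)
    plain_time = timeit.timeit(plain_call, number=1_000_000)

    print(f"getpid syscall: {syscall_time:.3f}s per 1M calls")
    print(f"plain function: {plain_time:.3f}s per 1M calls")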


But, like you said, you still need protection of kernel data, which means that you need to execute kernel code on a stack not writable by the user, at a greater permission level that allows ONLY the kernel code to modify kernel pages. Single address space or not, I can't see these needs being met in a cheaper way than what we already have, presuming that kernel code/data are in never-invalidated global pages available in every address space.

Edit: Oh, I see from your other comment that you're talking about the benefits to people who don't feel this need.


I'm a little embarrassed that I never got the word play in "tarsnap" until just now...


Yes, but you have to admit it does need a call-to-action link that's actually a button.


That's a lot of buzzwords in one sentence. Reminds me of the early 90s, when everything had to use neuro-fuzzy wavelets.


It's almost as if that sentence itself were pieced together by some kind of blagosphere-mining machine learning algorithm. I guess that's kind of what writers do?


Either they do that, or they press a button on one and then go do what they really wanted to do.

