
I can only wonder how much they'll charge for precious soldermask and silkscreen inks.


Anything less than the cost of the current process would be a win for the industry, right? Even if it's more expensive, isn't there benefit to having it available on demand?


Let them. I'd happily pay if it meant I could manufacture PCBs locally in a matter of hours, rather than wait weeks (right now if you want a cheap PCB, you wait 2-3 weeks).


Do you know the story of how the British went from petty traders to "greatness"?


And then back again. That's empires for you.


Of course, one could argue that those of us in the US are in fact the British Empire, the narrative slightly complicated by that civil war period of 1776-1812.


True, this seems to be how foreign-policy people view things. This viewpoint shows up in things like "Five Eyes", etc.


Great idea! Maybe this could help Venezuela. Oh wait...


Parent is referring to the ODE \dot{z} = i z.
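
For context, a quick worked solution (my own sketch; z_0 just denotes the initial value):

\dot{z} = i z \implies z(t) = z_0 e^{it} = z_0 (\cos t + i \sin t),

i.e. the solution is uniform rotation of z_0 about the origin, with multiplication by i acting as a quarter-turn on the velocity.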


This is a question of what it "means". Its origins can be traced to problems dealing with SO(n), and in this sense it's uniquely defined by the Lie algebra \mathfrak{so}(n), but in the Clifford-algebra world they become blades (more appropriate for CG).

I consider the cross-product to be one of the most abused objects in Physics; my first course in Newtonian mechanics was a nightmare, filled, as it were, with the ghosts of all the characters killed so as to make the plot ostensibly "simple".
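
To make the \mathfrak{so}(n)/Clifford connection concrete in the n = 3 case (standard identities, written in my own notation: I = e_1 e_2 e_3 is the unit pseudoscalar and \hat{a} the antisymmetric matrix with \hat{a} v = a \times v):

a \wedge b = I (a \times b), \qquad [\hat{a}, \hat{b}] = \widehat{a \times b},

so the "vector" a \times b is just the Hodge dual of the bivector a \wedge b, and it is the bivector/blade picture that survives in dimensions other than 3.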


Actually it's a bit more like 42.


How is showing that two representations are fundamentally different because they objectively and radically differ in computational complexity open to interpretation?


It shouldn't be.

I'm skeptical of your parsing argument because the normal way to construct type theory terms is inductively in their "native" tree form, which is cheap. Looking at them as a formal language isn't so convenient.

As a PLer, I hope dependently typed languages see wide use not because they are the one true foundation of everything, but because they are the richest lingua franca for every academic discipline to represent their ideas. I'm personally enamored of such theories as a foundation for math, but I don't know much about the theory of computation and I'm fine if others just see it as a tool.

Anyway, once that happens and fields have distilled their intentions into precise models in the lingua franca, perhaps the walls between fields can be broken down and we can finally grok each other's concepts.


> their "native" tree form, which is cheap

It is not cheap at all: type checking can be arbitrarily hard, depending on the type system, and can perform arbitrary computation at the "validation" stage. This is real, significant (and useful, depending on circumstance) computational work that no formalism or representation can reduce (only hide, by pushing it over to the collaborator).
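
To make that concrete, here's a minimal sketch in Lean (my own illustration, not something from the thread): the last line only type checks if the checker actually evaluates fib, so the "validation" stage is doing real computational work, and with this deliberately exponential definition that work blows up as the argument grows.

    -- Naive, exponential-time Fibonacci.
    def fib : Nat → Nat
      | 0     => 0
      | 1     => 1
      | n + 2 => fib (n + 1) + fib n

    -- Accepting `rfl` forces the kernel to reduce `fib 10` to a numeral
    -- and compare it with 55; bump 10 up and checking this one term
    -- takes exponentially longer.
    example : fib 10 = 55 := rfl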

> but because they are the richest lingua franca for every academic discipline to represent their ideas

I disagree. I strongly recommend you (and everyone) read this terrific essay by Lamport on a lingua franca for algorithms: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...

Not only is TLA just as powerful as dependent types, it is far simpler[1]. Lamport says that the reason PLers aren't interested in TLA+ is that it's so simple, so there's not much to write about it. You see plenty of papers about embedding all kinds of things (separation logic, cost models, concurrency etc. etc.) with dependent types, things that are so trivial in TLA+ that they're barely worth a mention. Ideas from Hoare logic, behavioral refinement, differences between notions of program equivalence from process calculi (trace equivalence vs. bisimulation) simply collapse into familiar logic in TLA (the only thing missing is probabilistic reasoning).

Lamport wrote about his experience trying to publish a paper showing that specifying real-time systems with TLA is trivial (it's a matter of defining a time variable): http://research.microsoft.com/en-us/um/people/lamport/pubs/p... You can only write a few papers on a simple approach that solves many problems, but lots and lots of papers that show how to use complicated approaches to solve them.

A lingua franca must be simple. TLA+ uses notation and concepts that are familiar to all mathematicians and all computer scientists, plus a couple of new concepts that can be learned in a few days. Dependent types (and intuitionistic logic) are almost as arcane today as they were decades ago, virtually unknown outside the circles of logic and PLT, each of which is a particularly isolated sub-discipline within math and computer science. That so many papers are written about dependent types is strong evidence that they cannot serve as the lingua franca, and pretty conclusive proof that they cannot do so just yet.

That PLers, as Lamport writes in his comment on my post, fail to see that Plotkin's SOS is an abstract state machine, and that some fail to see that computing a set-theory function and computing a type-theory function are two different computational problems with radically different complexity, is further evidence that language models obscure rather than reveal.

Of course, as Lamport also says, real-world programming is very complicated, and so programming languages are justifiably complex (and whether dependent types can help with that remains to be seen[2]). But that complexity is absolutely not required for a lingua franca with clear and simple semantics (TLA's semantics are far simpler than those of any programming language) for specifying and analyzing algorithms.

Disclosure: I've contributed some code to the TLA+ project.

[1]: Not only do properties ("types") and algorithms share the same terms in TLA+, they are the same objects. This confuses some people into believing that TLA+ doesn't support higher-order algorithms, when, in fact, it becomes a non-issue. Instead of a parameter of a certain type, you have a concrete program that is the type (e.g., the program that nondeterministically returns, say, any even integer).

[2]: I'm skeptical because I believe we're close to the "Brooks limit": http://blog.paralleluniverse.co/2016/07/23/correctness-and-c...


A couple of clear, encouraging, articles about how to use TLA and the benefits it provides would pay dividends (to both you and your readers) orders of magnitude greater than this obscure, roundabout, oblique approach.

> This confuses some people into believing that TLA+ doesn't support higher-order algorithms, when, in fact, it becomes a non-issue.

An article explaining this would be astonishingly interesting (and probably clarify this whole discussion).


> That PLers, as Lamport writes in his comment on my post, fail to see that Plotkin's SOS is an abstract state machine

Is this even true? I don't think I've ever met a PLer who didn't know that an operational semantics describes an abstract machine.


I am curious what you think of math.andrej.com/2016/08/30/formal-proofs-are-not-just-deduction-steps/


He mostly talks about proof assistants for general mathematical theorems, something I don't use. In TLA+, proofs are of logical formulas, the steps are logical formulas (unlike in dependently typed provers), and a computation is just a (temporal) logical formula, so TLA+ already lets you incorporate computations into proofs, though the process isn't automatic. Then again, TLA+ isn't really designed or used as a general theorem prover (though it certainly can prove general mathematical theorems) but as a tool to specify and verify algorithms.


Peter Thiel is also on the steering committee of the shadowy Bilderberg group.


Source, please?


Wouldn't a ThinkPad be a better idea? You can get used ones fairly cheap.


Recommending an X220 with a new battery/SSD/RAM. Probably still cheaper than an iPad.

Although the iPad (or, I'd suggest, an Android tablet) may be the better way to go. Charging from battery packs is easier than with a laptop.


The thinking behind giving an iPad mini is simply that we have one lying unused in a drawer. Here's an excerpt from what he actually had to say (rough draft translation by my wife from Dari, so the wording is a little off; e.g., "mechanical physics" should be mechanical engineering):

https://twitter.com/resobscura/status/761285673672409090


Also see Dawkins' review of Alan Sokal's book:

http://www.physics.nyu.edu/sokal/dawkins.html


See, homo economicus doesn't care whether cultures or languages die; you can see this in the above "translation is not economically feasible ..." sermons. While that may indeed be true, the thing, though, is that with such deep linguistic apartheid, the market and purchasing power in India will essentially be limited to the colonial class, with a mass of underlings bearing the currency and its inflationary forces.

Once the regime inflates itself out of existence for the sake of all those suckling on taxes, revolution comes and all the nuts the squirrel accumulates now will flee him. Like Mao's murdering mobs, that can bring great destruction.

The horizon of the homo economicus, I fear, is not quite as far ahead as some of us, nor is the French Revolution as far behind.

