azag0's comments | Hacker News

How does the Safari extension interact with native content blockers like 1Blocker? Is this supported? I noticed that with 1Blocker enabled, the initial grade is usually higher than it would be without it.


Czech/Slovak speaker here. Slovak kept the original sound /uo/, which is written as ô. There are many words with a simple ô/ů substitution, for example Czech “trůn” /tru:n/ vs. Slovak “trôn” /truon/, meaning “throne”.

So no, they are not phonetically equivalent, but they are related.


As far as I know it's not trôn but trón in Slovak ...

EDIT: Thinking about it, kůň and kôň (horse) might be a better example.


The rule for reading an author list, simplified: the first author did the work, the last author funded the research, and everyone else contributed somewhat.


That’s only true in some areas (e.g. some biological fields). In other areas (e.g. astronomy or much of physics), it’s decreasing order of contribution. In mathematics, the author list is usually alphabetical.


It is also traditional that the first author, in addition to doing most of the scientific work, takes on most of the writing and edits together the various other authors' contributions.


One metric used to evaluate publication history is the combined count of first and last authorships for this very reason.


I think this recent paper [1] sheds quite a bit of light on this.

[1] https://arxiv.org/abs/1703.00810v3


I really don't think that's the best paper to describe as "shedding quite a bit of light on this". It has been somewhat controversial since it came out.

I think https://arxiv.org/abs/1609.04836 is seminal in showing that flat ("unsharp") minima go with generalization; the parent's paper is good for showing that gradient descent over non-convex surfaces works fine; and https://arxiv.org/abs/1611.03530 is the landmark that kicked off this whole generalization business (mainly by showing that traditional models of generalization, namely VC dimension and notions of "capacity", don't make sense for neural nets).


I was surprised by how easy it is to set up a time-machine-like backup with rsync. I'm backing up to my server every hour, and a cron job prunes the backups every night. Disarmingly simple.


How does this square with the often-quoted mantra that you can only beat compilers today if you're an extremely skilled asm programmer? Or is the problem you describe just about executable size rather than speed?


Optimizing for size is easier because you only have exactly one metric to consider: how many bytes your instructions take.

When optimizing for speed you have to consider many factors: the relative speed of each instruction; cache behavior (including the size of the cache lines, associativity, number of levels, and the relative speed of those levels); pipelining; branch prediction; prefetching; whether moving your data into SIMD registers could be worth it; what to inline and what not to inline; what to unroll and what not to unroll; constraint solving to optimize things that can be computed or asserted statically; and so on.
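
A toy way to see the asymmetry (everything here is an invented example; the actual output depends entirely on your compiler, flags, and target): compile the same trivial function for size and for speed and diff the assembly.

    /* sum.c -- invented toy example for comparing code-generation goals.
     * Build it two ways and compare, e.g.:
     *   gcc -Os -S sum.c -o sum_size.s
     *   gcc -O3 -S sum.c -o sum_speed.s */
    #include <stddef.h>

    long sum(const long *v, size_t n) {
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += v[i];   /* -Os will typically keep a compact scalar loop;   */
        return total;        /* -O3 may unroll it and/or vectorize into SIMD ops. */
    }

For -Os the only score is the byte count of those instructions; the speed-oriented build has to weigh all the factors listed above.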


And that's for a single processor! There are myriad CPUs the end user could be using.


x86 and x64 are the only two that someone could be using on desktop, right?


Timing rules can be very different even between different models of the same processor, let alone between different ranges (i3 vs i7) or generations (Skylake etc). An example: https://gmplib.org/~tege/x86-timing.pdf


I don't think there are any differences between an i3 and i7 of the same generation in terms of instruction timings.


True, caches are a whole different story though.


Different microarchitectures can have big differences in performance between different instructions.


The instruction set makes much less of a difference than the actual microarchitecture. For an extreme example, see Pentium 4 vs Core: something that runs fast on one could behave dramatically differently on the other.

The only time ISA really influences optimization is for unique ones like IA-64/Itanium. Otherwise, optimizing for e.g. a modern Xeon vs a POWER8 is not terribly different.


Why was this flag killed?


How many cores? How big is each cache?


Well, the code wasn't compiled by today's compilers; it was compiled in late 2000. Visual Studio 6, maybe?

Even today compilers tend not to optimise the function preamble/postamble away. I'm only half in agreement with the mantra: you probably can beat the compiler, but is it worth it?

There are a few situations where it's genuinely a good idea to write in assembler to be explicit about predictable behaviour. Short security-critical constant-time functions are a good candidate.
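
For anyone wondering what such a function looks like, here's a minimal C sketch of the constant-time-comparison idea (the name ct_equal is made up for this example, and real implementations often end up hand-written in asm precisely because an optimizing compiler is free to rewrite a loop like this):

    #include <stddef.h>
    #include <stdint.h>

    /* Compare two buffers while touching every byte, so the running time
     * doesn't depend on where the first mismatch is. */
    int ct_equal(const uint8_t *a, const uint8_t *b, size_t len) {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];   /* accumulate differences, never exit early */
        return diff == 0;          /* 1 if equal, 0 otherwise */
    }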


There are a lot of assembly-language instructions that do things slightly differently than standard C or C++, but a programmer who is aware of them can "handle" the differences.

For example, the xchg instruction doesn't have any C equivalent (although it has a C++ equivalent: std::swap). The programmer may see:

    A ^= B; B ^= A; A ^= B
This swaps the two variables. A C compiler may be smart enough to recognize the pattern and emit an xchg instruction, or it might emit the xors literally. Hard to say, really.
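
As a small, hedged illustration, here are both ways of writing a swap in C (names invented for the example); whether either one actually becomes an xchg depends on the compiler, optimization level, and register allocation, so the only way to know is to look at the generated assembly (e.g. gcc -S -O2):

    /* The xor trick from above, wrapped in a function.  Note it breaks if
     * a and b point to the same object (everything becomes zero). */
    static void swap_xor(unsigned *a, unsigned *b) {
        *a ^= *b;
        *b ^= *a;
        *a ^= *b;
    }

    /* The "obvious" version most programmers write.  The compiler may lower
     * either function to an xchg, to plain moves, or to nothing but register
     * renaming once it's inlined. */
    static void swap_tmp(unsigned *a, unsigned *b) {
        unsigned t = *a;
        *a = *b;
        *b = t;
    }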

---------------

Most of the low-hanging fruit has been picked, for sure. Almost every memcpy turns into "rep movs", for example (the assembly-language equivalent of memcpy).

A high-level programmer may not know that memcpy turns into "rep movs", however, and may write their own memory-copying for loop.

At the very least, a good optimizing C/C++ programmer needs to know about these little things. Calling memcpy lets the compiler emit "rep movs" (when optimizing for size with -Os) or AVX move instructions (when optimizing for speed), instead of the programmer writing a less efficient loop by hand.
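
A hedged sketch of the difference (function names invented here; whether the hand-rolled loop gets recognized as a memcpy is up to the particular compiler and flags):

    #include <stddef.h>
    #include <string.h>

    /* Hand-rolled copy: modern gcc/clang may recognize this pattern and emit
     * the same code as memcpy, but there's no guarantee. */
    void copy_loop(char *dst, const char *src, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
    }

    /* Explicit memcpy: the compiler can emit rep movs, a call into the C
     * library, or inlined SIMD moves, depending on flags and known sizes. */
    void copy_call(char *dst, const char *src, size_t n) {
        memcpy(dst, src, n);
    }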


Optimising for size is a relatively "obvious" goal, although it still takes a lot of skill to do it well. Optimising for speed is much less obvious, however; the x86 architecture is incredibly complex when it comes to working out which code will be faster.


Well, it should also be noted that the compiler responsible in this case is at least 17 years old.


> How does this square with the often-quoted mantra that you can only beat compilers today if you're an extremely skilled asm programmer? Or is the problem you describe just about executable size rather than speed?

Word 2000, so a 17+ year old compiler. Compilers have gotten a lot better since then.

Having worked on a compiler team back in the mid 2000s, even then I'd say it was easy for almost anyone to spot areas where a human could optimize more.

Nowadays, much less so.


This is a case of knowing the rules so well that you know when you can break them.

It's also a historical artifact from the days when many programmers wrote assembly yet compilers were starting to get good.

There’s also an element of avoiding premature optimization: don’t assume the compiler will produce slower code, or that if it does, it will matter in your specific application.

At the very least you should give the compiler a chance, profile, then hand-tune after you’ve fixed all the low-hanging fruit.


That mantra applies to "most" programmers.

I think he was talking mostly about size.

Odds are good most programmers tinkering in machine code won't beat the performance of the compiler. That takes experience. It is a good rule of thumb.

I think it is easier to beat a compiler on code size, but when you measure performance it will often beat you until you get good. Alignment, x86 tricks... it takes a bit of knowledge to do well.


simias has most of it but note also that that file appears to have last been compiled in the early 2000s. Compilers of that era were far less advanced, especially since many large companies were pretty conservative about the optimizations enabled (fixing a bug meant mailing CDs for many customers).

The general trend is that it's been getting harder and harder to do that easily, which means people want to be more focused — something like OpenSSL can still justify hand-tuned assembly for various processor families because it's a widespread hotspot but as compilers continue to improve the number of places where it's worth the maintenance cost is going to keep shrinking.

In the early 2000s, the scientific HPC programmers I worked with were careful to maintain a portable C implementation which they could use as a check both for correctness and for an optimization baseline — it wasn't uncommon for a new compiler and/or processor to substantially close the gap relative to a lot of hard manual work.


What does “cease providing” mean? Don’t you have the software? Isn’t the whole point of a standalone program that once you have it, the company can just go bust?


In the context of iOS at least, it has the potential to stop working or lose functionality as time goes on. Compatibility for apps that are not continuously updated is not high on Apple's priority list.


Disclosure: I work for AgileBits, makers of 1Password.

We recently updated 1Password 6 for iOS to work with the new Dropbox version 2 API. Going forward it should be fine until the next Dropbox API change, and presumably that will be a lot smoother than the version 1 to 2 transition we just did... at least, I hope. We had several people put a lot of time into that transition.

I'm not sure if your comment applied directly to us, but for anyone wondering, we tend to update 1Password pretty regularly. I don't think we're at risk of ever being an app that falls onto the "not continuously updated" list. We were there on day 1 for Drag and Drop on iPad, and day 1 for Face ID, much like we were there on day 1 for Touch ID and extensions.

1Password for Mac does not depend on Dropbox's API, it syncs directly to the Dropbox folder on the Mac. So as long as that doesn't change it has zero dependency on Dropbox itself.

Kyle

AgileBits


No, sorry, I probably wasn't clear. I didn't mean to say that 1Password does not provide updates.

I was just talking about the hypothetical situation for iOS apps (and thus their users) _if_ the apps don't get updates.


Have a look at Qbserve.


Why the quotes around "Python implementation in JS"? Isn't it literally just that?


They're quotes from each respective project's homepage :)


I'm always blown away by how crisply Alan Kay is able to put complex thoughts into words. I feel that the level of clarity I can achieve over a sentence or two, he can sustain over several paragraphs.


I'm similarly struck with how open the computer science frontier was in the 50s, 60s, and 70s.

Every effort required some first-principles rationale to justify the R&D.

Alan, and others from Xerox PARC, have a very "meta" outlook on the entire field.


Counterpoint:

The words he says are nice, and his depth of knowledge is profound, but he hasn't actually provided any evidence making the case that Lisp is better, or in what scenarios it is better.

That this sort of rhetoric is how almost all programmers and language designers try to find truth and communicate ideas is probably why we all ended up programming in JavaScript. We have no rigor; we are like Freud-era psychology, with flowery and impressive-sounding prose but no real data. And while psychology still has a long way to go, we have even further.


I disagree. I found his meaning clear. The core of the essay is that thinking with the right abstractions provides a greater boost to reasoning capacity than comparatively trivial differences in human IQ. To make things a bit more concrete, take any game: Go, poker, chess. If these were truly approached as combinatorial problems, humans would not stand a chance at them. But over time, people have learned patterns, compacted them, and shared them, such that the average level of play today is what masterful players might once have struggled to reach.

The concepts Lisp provided captured enough structure to cut through a lot of the combinatorial complexity in the space of possibilities, making reasoning about the construction of computer programs and algorithms a lot less random.

A complaint might be that he left out which aspects of Lisp provide that combinatorial boost, but I'm sure many HN users could furnish those reasons. One easy observation, though, is the sheer number of languages and language concepts that are rooted in it. Newtonian physics is not the best physics, but it fundamentally changed the way people reasoned about the world. Lisp is not the best language, but it fundamentally changed how people reasoned about computer programs.

One objection one might make is that the real Maxwell/Heaviside equations are instead to be found in the lambda calculus/ISWIM. Despite the name, Lisp did not originally derive from the lambda calculus, though just about all typed functional languages and later versions of Lisp do. But a quick answer to that is that influential languages like SASL and ML (Meta Language) were prototyped in, and strongly influenced by, Lisp (the cognitive boost again).


Gödel's incompleteness theorem states that within an axiomatic system there are statements that are true but that you can't prove true within the system. You can only prove them by going up a level.

In programming (assuming a "computer language" as an analog to "axiomatic system"), there are abstractions that cannot be implemented. You can only implement them by going up a level.

In LISP, you have this "going up a level" ability with macros. The reason this works for LISP is that programs are written as data LISP can manipulate, meaning you can manipulate code at compile time. Forth has an ability to "go up a level" as well, with a defined way to extend the compiler.


LISP macros are formalized in LISP. You don't "go up a level" by using them. You go up a level by going into the human brain and thinking about your code in a way the computer doesn't.


Wow. That's a really interesting way of explaining macros.


I wonder if LISP and LSD encourage similar ways of thinking.


Based on my own experiences with both, I'd say: Yes. Although I'm sure you couldn't prove it mathematically (yet).


I’ve been using Clojure off and on for a few years but skipped learning about macros because I haven’t (yet) needed to write my own. Your comment makes me regret that decision.


"A complaint might be he left out what aspects of Lisp provided that combinatorial boost".

He said it right there: “unlimited programming in an eyeful” (the bottom half of page 13 in the Lisp 1.5 manual).


What is that a counterpoint to? He's not saying Lisp is 'better' than language X or that you should write your next world-conquering app in Lisp. He talks about ideas and inventions that can act as a sort of 'force multiplier' in thinking. In his analogies, he points out this applies to both fundamental innovations (Calculus) and innovations in representation (Heaviside formulation of Maxwell's equations). He argues that Lisp is that type of innovation in Computer Science.


>That this sort of rhetoric is the way almost all programmers and language designers try to find truth and communicate ideas is probably why we ended up all programming in javascript.

It sounds as if all the knowledge should be delivered straight into someone’s mind in a couple of minutes. It’s right there; go and learn it for a week at least.

When I try to directly convince random internet people that, e.g., Lua has better async[1] than JS thanks to a language primitive that is more abstract and cannot be reimplemented in lower-order primitives, all I usually hear is “but why” and “don’t need”, with little real data on either side (and that’s at best; at worst they claim the code cannot be asynchronous at all). Instead, folks stick to cumbersome syntactic methods that fit their evolved familiarity. They don’t need anything beyond JS; that’s why there is JS. And this is one small step towards less cognitive load, and yet we get friction. Good luck explaining macros, call/cc, evaluation models, etc. to them to the point where they actually get it en masse.

That said, and to be honest, I never mastered Lisp because it is not actually easy (and I’m not very smart either). You enter a completely new and broad area where you have almost no clue what you can do. A junior moving from DOS BASIC to, say, Java would feel the same, but it’s more confusing, because the junior has lots of time and is just learning, while you’re a professional who “already knows it all” (not really) and has little or no time.

[1] https://news.ycombinator.com/item?id=15472204


Yes. It’s similar to the difference between a mechanic and an engineer: a mechanic figures out that something works, but the engineer can then take the next step to discover and know how. If someone cannot articulate or identify advantages, then they are pushing unproven, possibly wrong “religious” beliefs, like a “mechanic.” Zealous communal reinforcement is political, macaroni, and silly when there are no convincing arguments other than “because it’s new” or just “because.”


Agreed, and well said.

I'm very much a pragmatic kind of person, and my unpopular opinion about code is that Lisp is the greatest single language ever designed, if your goal in designing a language is to produce articles proclaiming it as such.

And I am sooo sympathetic to the promises Lispers make. I want to believe. But ultimately, show me the code. Why are the stellar Lisp success stories so few and far between that one of the big ones is Paul Graham's from the 90s?

If it's so powerful that almost no one can use it, and it rules in almost no problem domains, ... how is it so powerful again?


Lisp's popularity came too early. Lisp really needed some relatively powerful machines to take advantage of it, and when it was actually popular most machines were way too weak for it. Now that machines are finally powerful enough to handle it, it's no longer popular, and it has no army of programmers using it as they are using some of the more popular languages like Java or Javascript.

So, yeah, you're unlikely to see that many "killer apps" made with it, as the killer apps of today usually require a lot of programming resources that the Lisp community just doesn't have. What apps Lisp programmers do come up with are mostly not going to be so uniquely Lispish that they couldn't be replicated in another language (see Reddit's rewrite from Lisp to Python[1]).

On the other hand, there's Emacs. That's one killer app whose power arguably hasn't been beaten or replicated elsewhere. But that very much depends on your taste -- and if you agree with that you're probably already a Lisp convert.

Then there's the whole "worse is better" thing. All sorts of inferior technologies have triumphed in the marketplace -- for reasons that usually have little to do with their technical merits. Lisp has itself been a victim of that. It's a vastly superior language to so many more popular ones out there (and many other language designers agree, if the regularity with which Lisp features are pilfered into other languages is anything to go on), and yet so many programmers are ignorant of it, or were only exposed to little toy Lisps in their college years and mistakenly think that modern Lisps are no better; they don't realize how much their own favorite languages owe to Lisp, nor how much Lisp has to offer that their language doesn't.

[1] - https://web.archive.org/web/20051208055316/http://reddit.com...


Well, Clojure seems popular.


Don't confuse lisp-as-an-executable-device-to-think-meta with lisp-as-one-of-many-programming-language-choices. Alan is talking about the former.

The idea---not just to think about programs per se, but to think about the abstract properties and operations of a certain category of programs, and to be able to actually write them down---is what gives you exponential power. Lisp was the first programming language to serve as such a tool, the first that let you break the ceiling of first-order thinking.

Nowadays there are other languages that provide meta-level tools. And once you get the idea, the language doesn't really matter; you can apply it to any language, although you may have to build scaffolding around it to compensate for the lack of language support (cf. Greenspun's tenth rule). So if you already got the idea through other languages, good for you. You'll see Lisp as an arcane prototype of modern languages.

But note: since Lisp can write meta-level descriptions in the same language as the base language, you can climb this ladder indefinitely, meaning you can think of a program-generating program, then a program-generating-program-generating program, and so on, using a single set of tools. It is indeed powerful but somewhat crude, so modern advanced languages intentionally restrict that aspect and just try to cover the sweet spot. Even if you're familiar with metaprogramming, it may still be worth playing with Lisp to see what (meta)*programming is like, to learn its pros and cons, and to get the idea, which might come in handy someday when single-level metaprogramming isn't enough.


> If it's so powerful that almost no one can use it, and it rules in almost no problem domains, ... how is it so powerful again?

Couldn't you substitute calculus, real analysis, or abstract algebra for Lisp in this statement and make the same claim?


Like Steve Jobs, he is among the group of people who can say anything any way they like, and some people will find a way to glorify it by a factor of a million.

If you removed their names from the things they have said or written, many of those things would seem like fairly average, sensible statements, like something you would read here from a random member.

I'd be interested to see a "blind test" for these things. I bet the bias would show.

As in, display something from Alan Kay but post it on a random internet forum as JoeSchmo11, and you will have lots of people disagreeing with it and tearing it apart without hesitation.

But oh look, if Steve Jobs's or Alan Kay's or Elon Musk's name is attached to it, it suddenly becomes a unique gem of knowledge to be cherished by humanity for the rest of time, and surely the fault lies with anyone not agreeing with it, right? Because after all, they couldn't possibly be more right than those authority figures, could they?

I'm not making a judgement about this, nor am I surprised by it. I'm just saying that you should ask yourself exactly how much of your own brain power and judgement you use to judge the quality of an idea, and how much of your conclusion is conveniently borrowed from your image of the author.

It reminds me of when Elon Musk talked about the probability of our universe being a simulation.

I'm not saying he is right or wrong, just using that as an example. But if JoeSchmo11 said exactly the same thing people would just dismiss it.

But when Elon Musk says it all the newspapers go mad and write stories about it.

Again I'm not saying it shouldn't be like this, reputation matters, I get it.

But my point is that the degree to which this happens is in some ways a sad admission that we are incapable of evaluating thoughts and ideas independently of their originators, on their own merits. Instead we borrow from the accumulated credibility and authority of the people they are attached to.


What is the sad admission here? That not everyone establishes 100% of their belief system from first principles, and instead adopts some of it empirically based on other reasoners' beliefs? I don't consider that a bug.

If I don't know anything about X, and the Alan Kay of X comes and tells me something about X, it's pretty likely to be true. If equally-ignorant-as-me-person tells me something about X, it's not as likely to be true. That's the meaning of knowledge and ignorance.

None of this precludes thinking for yourself about anything. It's just the thing you do in addition to thinking for yourself about it, if you want to have true beliefs.


That this happens is of course difficult to disprove. My only defense is that very often, I read things from acclaimed people and think "bullshit". I don't have any particular reason to like what Alan Kay says besides having read previously things from him that I liked.

Also, in this particular case, my point wasn't really about the truth value of what he claims (which I'm not fit to judge), but about the clarity of how he claims it.


Honestly, I didn’t have any effing clue who Paul Graham was even after reading half of his essays and his description of Arc. I realized that he wasn’t just another dev blogger only after someone mentioned on HN that HN is more or less his. Also, it feels like Joel Spolsky was widely read in my environment long before stackoverflow.com existed (or became popular). Just one single data point against your hypothesis.

