But it's also very obvious that general relativity can only be an effective theory that fails for very strong fields. No one I know actually thinks there are singularities; those just point to places where GR starts to deviate significantly from reality. The Navier–Stokes equations are very accurate on macroscopic scales, but regardless of their mathematical properties, there are no actual blow-ups in the real world, because ultimately the liquid is composed of atoms. Likewise, the fields in GR must ultimately be quantized in some way, which will almost certainly remove the singularities.
The infinities in QFT are really just a hack. I mean, all those calculations are perturbative in the first place, so they can hardly be considered a "true" picture of reality.
It's rather the degree to which Python is dynamic that makes it slow. PyPy could be considered an implicit JIT compiler for Python, yet it is still far slower than Julia. The level of magic you can apply to Python objects, which the interpreter/compiler must support, is in a different league from Julia's. I'd be interested if someone could compare to JS.
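To make "the level of magic" concrete, here is a minimal sketch of the kind of runtime dynamism a Python JIT has to support: even a "hot" method on a live class can be replaced mid-loop, which invalidates any specialized machine code the JIT compiled for it.

```python
# A sketch of the dynamism a Python JIT must handle: a method can be
# swapped out on the live class at any moment, so the JIT must guard
# (and potentially deoptimize) every specialized call site.
class Point:
    def __init__(self, x):
        self.x = x
    def double(self):
        return 2 * self.x

results = []
p = Point(3)
for i in range(4):
    results.append(p.double())
    if i == 1:
        # Monkey-patch mid-loop: all existing instances change behavior.
        Point.double = lambda self: 10 * self.x

print(results)  # [6, 6, 30, 30]
```

Julia permits method redefinition too, but its semantics (world ages, type-stable dispatch) make such invalidation far cheaper to handle than Python's per-object, per-attribute mutability.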
In the case of Python it's clear that the heavy reliance on C extensions is a blessing and a curse: it's kept Python relevant in communities like science even though it isn't very fast. However, one of the lessons of Graal seems to be that such extensions can seriously inhibit improving performance, since they are opaque to JITs.
I’d expect the conditioning to be subconscious. It doesn’t matter that the computer is locked down. Your brain sees a computer, which triggers a multitasking, distraction-rich mode.
The vast majority of scientists are not able to write idiomatic Fortran, let alone idiomatic C++. Scientific C++ code written without oversight from a professional C++ developer will almost always be horrible. Scientific Fortran code written without such oversight can sometimes be bearable. This is perhaps the main advantage of Fortran.
Eh, I'm talking mostly about the large scientific code packages that are being developed with millions of dollars in funding and large, organized teams. The people writing these sorts of codes know what they are doing and a lot of migration to C++ is because they are more familiar with it and it's easier to hire skilled people.
It is in the original language design, which is all about type-genericness and separation of implementation from interface with zero-cost abstractions. Creating an array with a different `getindex` dispatch is a great example of what Julia's type system was made to do! The standard library chose 1-based, contiguous, fixed-dimension dynamic arrays, but that's not the right choice for every problem.
As evidence of this, check out JuliaArrays (https://github.com/JuliaArrays) which is a whole Github organization devoted to the development of alternative array types, like StaticArrays (which are stack-allocated immutable arrays) or CatViews (arrays which are non-contiguous and constructed from views of multiple different arrays). The nice thing about Julia is that, if packages are written to work with generic types, they can natively (and efficiently) work with these "non-standard" array types, making them easy to integrate into the scientific ecosystem.
It is ugly because 1) 1-based indexing and x-based indexing are treated differently; 2) x-based indexing has to use more complex syntax, which in effect discourages the use of non-1 indexing; 3) this strategy creates potential pitfalls (e.g. implementing length and size for non-1-indexed arrays). A cleaner design, I guess, would be to specify the index range at declaration, like Pascal static arrays, and use low(A):high(A) for iteration rather than 1:length(A). This, however, complicates 1-based use cases.
Generally, I don't think there is a good way to achieve flexible indexing without causing troubles somewhere, so I don't think Julia has really solved the problem.
1) No, they are just different dispatches to getindex.
2) No, iteration is through indices(A) or eachindex(A), etc., which are the preferred way of iterating anyways. You shouldn't do 1:length(A) which is a MATLABism that works but I would say isn't good Julia.
3) Defining new dispatches for length and size is a pretty standard use of the language?
"Non-standard" arrays with non-standard indexing already work in lots of packages. It could be better (that's one of the things I am advocating for), but when it's a problem, it's not a language or tooling issue: it's the developer writing `::Array`, thus requiring a contiguous 1-based array where other AbstractArrays would actually work.
@attractivechaos I don't know how to reply to your last reply, so I'll do it here. 1:length(A) is bad because it takes a standard construction for intervals of numbers and uses it for indices. We don't want to get rid of it, because 1:5 or 0:0.2:1 is very common and necessary, but I don't see how to tell people to use eachindex(A) instead, except through proper docs. 1:length(A) is so common in MATLAB, though, that I am sure people will carry it over, and I'll PR their library to fix it. I'm not sure how to fix a knowledge issue like that.
You're not understanding generic types and their relation to (1). There's only one way to access an array: getindex. That's the function that's called with A[i]. However, you can use an immutable to put a thin (zero-cost) wrapper over an array, and define dispatches to getindex to do whatever you need it to do. So it's implicit syntactically, because the user just does A[i], but explicit in that the user has to choose a different type. getindex is then usually inlined and compiled according to the type, making it a thin abstraction over the implementation.
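The wrapper pattern is easier to show than describe. Here is a sketch of the same idea by analogy in Python (the `OffsetArray` name and `indices` method are hypothetical, chosen to mirror Julia's vocabulary): `__getitem__` plays the role of Julia's `getindex`, though Python cannot inline it away the way Julia does.

```python
# Analogy in Python: a thin wrapper that changes indexing behavior while
# keeping the A[i] syntax. In Julia this would be an immutable struct with
# its own `getindex` methods, which the compiler inlines to zero cost;
# Python's __getitem__ carries real call overhead, so this only shows the idea.
class OffsetArray:
    def __init__(self, data, first_index):
        self.data = list(data)
        self.first = first_index

    def __getitem__(self, i):
        # Shift the user's index into the underlying 0-based storage.
        return self.data[i - self.first]

    def indices(self):
        # Counterpart of Julia's eachindex: iterate the valid indices
        # rather than assuming they start at 1 (or 0).
        return range(self.first, self.first + len(self.data))

a = OffsetArray([10, 20, 30], first_index=0)  # a 0-based view
print(a[0], a[2])               # 10 30
print(list(a.indices()))        # [0, 1, 2]
```

Generic code that iterates via `a.indices()` (in Julia, `eachindex(A)`) works unchanged for any starting index, which is exactly why `1:length(A)` is discouraged.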
There are iterators which don't have a size or length. You can write generic algorithms which require an AbstractArray which HasLength and query at compile time for things like that and throw appropriate errors (those are called traits).
There is still a lot of development to do here, but the basics like this are pretty much solved except when new users treat Julia like MATLAB, but I'm not sure how anyone could control for that.
You can click on the timestamp, and there will be a reply link there. I think reply links are hidden for a little bit of time after posting, but I'm not sure why.
This is an anti-flame war feature, designed to let people cool off before replying (fast paced discussions were often contentious before that was introduced).
On 2), if you think 1:length(A) is bad, why not forbid it from the beginning (e.g. use low(A):high(A) instead)? To find an x-based array's length, why not just length(A), instead of length(linearindices(A))? Decisions like these are band-aids over an immature early design. Also, what if I use length() on an x-based array? Does it abort, or silently return a wrong number? On 1), having two different ways to access arrays, the most fundamental data type, is already worrying enough. On 3), the page says "don't implement size or length". That is very uncommon in most other mainstream languages.
Julia has potential to become a great general-purpose programming language, but this indexing issue will practically limit it to the numerical computing community. Perhaps achieving that is already good enough.
One of the most mind-boggling things about the recurrent 0- vs 1-based indexing discussion is how rarely that difference is ever used programmatically in Julia. Most of the large Julia packages are written in a way that doesn't care whether the array is 0- or 1-based. It is important in some other languages, and then I just think people are happy that this is something everybody can agree to disagree on. I don't think the discussion is very productive, though.
Hmm... I wanted to learn more about 0-indexed arrays, but after looking around, I still have not figured out how to declare 0-indexed arrays without extending the AbstractArray interface or using another package.
I don't have any particular feelings toward one or the other (it is a convention, get over it), but I think that zero-based indexing is just an artifact of C that stuck around.
In C, the array syntax is "mostly" just syntactic sugar for pointer arithmetic.
When you do "a[n]=value;" this is equivalent to "*(a+n) = value;". To get the nth cell of an array, you just add "n" to your base pointer "a".
Array indexing, therefore, is consistent with the pointer arithmetic.
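Python has no pointers, but the stdlib `array` module exposes the underlying buffer address, so the base + n * itemsize arithmetic behind `*(a+n)` can be checked by hand. A small sketch:

```python
# The "a[n] is *(a + n)" identity, checked from Python: element n of a
# C-style buffer lives at base + n * itemsize, which is exactly why
# index 0 names the element at the base address itself.
import array

a = array.array('i', [10, 20, 30, 40])
base, length = a.buffer_info()  # (address of element 0, element count)

def addr(n):
    # Pointer arithmetic done by hand: n steps of itemsize from the base.
    return base + n * a.itemsize

assert addr(0) == base                      # index 0 == zero offset
assert addr(3) == base + 3 * a.itemsize     # index 3 == three-element offset
```

With 1-based indexing the address formula becomes base + (n - 1) * itemsize, so the off-by-one has to live somewhere; C simply puts it nowhere.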
That said, and funnily enough, Fortran, which is much older than C, has 1-based indexing (by default, but you can configure 0-based indexing, if I remember correctly).
Note how, in the PDF version [0], Dijkstra numbers the pages starting at zero (handwriting, upper right corner), but whoever created the PDF disregarded its message and numbered them starting at one (lower right corner). :-)
> The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.
I don't buy it. Wouldn't people want their programs to run fast regardless of this?
Ah, so if I were to manually craft a commit in a text editor in the format:
    tree <sha1>
    parent <sha1 of the parent I want to attach it to>
    author <some string>
    committer <some string>

    <the commit message>
I could add this to the git object store manually under the same sha1 file and a client could just fetch it? Would the client try to fetch the faked objects when it already has the real objects in its copy of the object store?
That is, would it think it already has the commit because the sha1 hasn't changed, even though the tree sha1 has been updated and presumably refers to blobs the client doesn't already have, and try to fetch them? Or would it not proceed because it already has the commit?
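One thing worth checking here is how the sha1 is computed in the first place: a git object ID is the SHA-1 of a header `"<type> <size>\0"` followed by the object body, so a hand-crafted commit gets whatever ID its bytes hash to; you cannot change the tree line and keep the old sha1 short of a hash collision. A minimal sketch, reproducing the well-known blob hash for `"hello\n"`:

```python
# Git object IDs are SHA-1 over "<type> <size>\0" + body (the loose-object
# content before zlib compression). Changing any byte of a commit's text --
# its tree line, parent line, or message -- changes its ID.
import hashlib

def git_object_id(obj_type, body):
    header = f"{obj_type} {len(body)}\0".encode()
    return hashlib.sha1(header + body).hexdigest()

# Cross-check against git itself:  echo hello | git hash-object --stdin
print(git_object_id("blob", b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a
```

A commit is hashed the same way over its text representation (the tree/parent/author/committer lines, a blank line, then the message), which is why a client that already has a given sha1 has, by construction, the exact bytes of that object.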
It doesn't seem to verify hashes of objects on checkout, but it does when receiving packfiles. So it's difficult to see how this could be an exploit unless the attacker has access to your local .git directory.
I’m sure there’s a law with someone’s name on it that states that. But just in case it hasn’t been claimed yet, I’m proposing we call it the fuck you law. Because the next time someone asks me to fix the Trello-to-Zapier-to-email-to-Google-Sheets setup they use as a project management tool, I want to be able to say, “Fuck you, and there’s a law that says so.”
No it doesn't. I have many of my git repos in Dropbox, but I'm not using Dropbox for sharing. Having them in Dropbox means I get automatic backup and that they are available when I switch to a different computer, which I do, but not frequently. As only I use my Dropbox account, I'm aware of the potential sync conflicts, but they've never been a problem. I do run fsck & gc more often than most, but I probably don't need to.
EDIT: I should emphasize that this model is way more convenient than manually having to remember to push and pull all the time. Now push is only for publishing outside as it should be.
I've always told my parents, who grew up in communist Czechoslovakia, that Chinese communism is a very different beast from the Eastern-bloc communism they've known (which Vaclav Havel described masterfully in his texts). But this article would feel very familiar to them.