One example that just occurred to me: the base-2 log of the number of legal positions in a 9x9 Go game is about 126.3, which means that if I use a good hash of board positions yielding twice that many bits, I have a better-than-50/50 chance of having no collisions. That is very good to know if you've got anything like a transposition table in a Go-playing program.
Generally, a Zobrist hashing scheme is used for Go (it's a specific way of hashing positions for use in a transposition table). It's also probably used for chess, but I don't follow chess AIs as closely.
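In case it helps to see the shape of it, here's a minimal Zobrist-hashing sketch in C (the names and the 64-bit key width are just illustrative; a table wide enough for the ~253-bit argument above would use several 64-bit words per key). Each (point, colour) pair gets a fixed random key, and the hash of a position is the XOR of the keys of the stones on the board, so making or undoing a move updates the hash with a couple of XORs:

    #include <stdint.h>
    #include <stdlib.h>

    #define POINTS (9 * 9)            /* 9x9 board */
    enum { BLACK = 0, WHITE = 1 };

    static uint64_t zobrist_key[POINTS][2];

    /* Crude 64-bit value built from the C library PRNG; a real engine
     * would use a better generator. */
    static uint64_t rand64(void)
    {
        return ((uint64_t)rand() << 40) ^ ((uint64_t)rand() << 20) ^ (uint64_t)rand();
    }

    void zobrist_init(void)
    {
        for (int p = 0; p < POINTS; p++)
            for (int c = 0; c < 2; c++)
                zobrist_key[p][c] = rand64();
    }

    /* XOR a stone of colour c at point p into (or out of) the running hash.
     * The same call both adds and removes a stone, which is what makes
     * incremental updates for a transposition table so cheap. */
    uint64_t zobrist_toggle(uint64_t hash, int p, int c)
    {
        return hash ^ zobrist_key[p][c];
    }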
Dropbox Pro is the same rate (https://www.dropbox.com/plans). You do have to buy it $10/month at a time, but you can get to your data whenever you like without paying 9 cents per GB. AWS is usually worth it, but it is never super cheap.
Which is around the same price as Google Drive: 1 TB ~= $10/month. You also get your data immediately, without the 3-5 hour retrieval penalty, and it supports instant preview of most files.
Although the increment is much higher (the next tier is 10 TB ~= $100).
I use both -- Dropbox and Arq/Glacier. Dropbox does have fantastic accessibility and it's irreplaceable for sync, but for stuff like family photos it's nice to have a very cheap backup-of-my-backup.
Or you could pay $7/month for Office 365, get 1 TB+, Office, and 60 Skype minutes. You can even buy a $75 x86 tablet and get 1 year of free Office 365 for a total cost of $6.25/month.
Even if a JIT compiler can prove that no code in your app ever changes that function pointer, the variable is volatile, so the compiler must assume that you intend to read from the actual metal every time you refer to it, and it cannot predict what the value will be. Even a JIT compiler is not allowed to optimize away that read; otherwise you'd never be able to write a driver.
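A tiny sketch of that rule, with made-up names (this isn't from the article): because the pointer object below is declared volatile, a conforming compiler - JIT or ahead-of-time - has to emit a fresh load of it on every iteration, even though no code it can see ever writes to it.

    typedef void (*handler_fn)(void);

    /* Imagine this pointer lives somewhere that hardware, another thread,
     * or a debugger may rewrite behind the compiler's back. */
    volatile handler_fn current_handler;

    void event_loop(void)
    {
        for (;;) {
            /* 'volatile' forbids hoisting this read out of the loop or
             * caching it in a register: the pointer must be re-read from
             * memory every time around. */
            handler_fn h = current_handler;
            if (h)
                h();
        }
    }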
Because a JIT may have enough knowledge of the underlying system to know that the pointer is not pointing to a memory-mapped / DMA'd / etc. area, and as such its value can be assumed to remain constant.
Yes, it refers to a memory location, without implying anything about the semantics of the bits stored at that location. You can't dereference it or assign through it, because you don't know the type at that location. You can, however, assign that pointer to a typed pointer variable to actually read or write that memory. This is useful when you really care about the bits of memory, but your variable pointing to that memory could just as well be an (int64_t *) as a (char *), and those types are not interchangeable with each other, only with (void *). So library functions that just care about memory locations, not the semantics of the bits there, take (void *).
Some of this may be technically incorrect. This is my own mental model of the C language, which is sometimes incomplete.
I'd say it may be somewhat helpful to realize that both "void" and "void *" are kind of wild cards in C's type system; they are there, but they're "breaking the rules". And "void *" is not related to "void" in quite the same way that "char *" is related to "char".
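To make that concrete, a small C sketch (my own illustrative code, not from the article): a void * carries an address but no element type, so you convert it back to a typed pointer before touching the bytes, and routines that only care about "some bytes at an address", like memset, take void *.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int64_t big = 42;
        void *p = &big;            /* any object pointer converts to void *  */

        /* *p = 0;  would not compile: you can't dereference a void *        */

        unsigned char *bytes = p;  /* no cast needed in C; now the memory    */
        printf("first byte: %02x\n", bytes[0]);      /* is addressable again */

        memset(p, 0, sizeof big);  /* memset only cares about the location   */
        return 0;
    }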
Hmm, I'm a bit confused. Isn't that a function call, rather than a function declaration? If it's a function call, it's passing a bunch of types in, which I thought was not valid C?
Ah, now I get your question. So, it is a function declaration, not a function call. Um, sorry: a declaration of a pointer to a function, where this function would take as arguments: some (unnamed) void pointer, some (unnamed) int value, and some (unnamed) size_t value; and would return a void pointer.
Um; then there's the equal sign, so this is not only a declaration, but a definition too; but definitely not a call.
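In case a concrete example helps (the name fill_fn is made up, not the identifier from the blog post): memset happens to have exactly the signature being described - it takes a void pointer, an int, and a size_t, and returns a void pointer - so it can serve as the initializer.

    #include <string.h>

    /* A declaration *and* definition of a pointer to a function taking an
     * unnamed void *, an unnamed int, and an unnamed size_t, returning
     * void *.  The '=' introduces an initializer, so this is a definition,
     * not a call. */
    void *(*fill_fn)(void *, int, size_t) = memset;

    /* An actual call through the pointer would look like:
     *     fill_fn(buffer, 0, sizeof buffer);
     */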
A call appears further down in the original blog post.
He says several times that JavaScript succeeded in spite of being a bad language because it was the only choice. How come we're not all writing Java applets or Flash apps?
Well, about ten or fifteen years ago, "we all were" would have been the answer. Except that back then, there were multiple choices -- plug-ins meant you could choose Java, or Flash, or ActiveX (Visual Basic 6, anyone?), or VRML for that matter.
The number of security issues that plug-ins have had in the last two decades makes most of them non-starters nowadays, although there are still plenty of sites that use them extensively (say, children's game websites like Neopets and Nick Jr.'s site), depending on the target audience.
There were other advantages. To write JS you just need a text editor, and it's easy to pick up. To write Flash required spending several hundred dollars. To write Java required the JDK and learning Java.
Especially on 1995 technology, that mattered. Compiling Java took a while. I didn't use Flash enough to retain an impression of speed, but it sure wasn't instantaneous.
It's also the reason why Flash was so prevalent until recently and is still installed on 90-something percent of desktop computers: it's faster. Significantly faster, and especially so in the '90s and early 2000s.
I used to do a good amount of Flash development - you could actually do it with just a text editor and a compiler (which was free). There were also quite nice free IDEs, like FlashDevelop.
If I had to pinpoint it, I'd say Flash's prime time was around 2005-2008 perhaps, and FlashDevelop was available then. Guess we probably define its prime differently haha; I'm thinking more of when it matured - AS3 as a language, lots of tooling choices, etc.
I wasn't ever anything close to a professional Flash developer, I'll take your word for it if you say that was the best time to be developing for it.
I was thinking about the days of Homestar Runner, Weebl and Bob, Newgrounds, and so on, when Flash cartoons and games were (for kids, at least) a huge part of internet culture, and everyone wanted to be a Flash animator. YouTube kinda killed the Flash cartoon medium, sadly. Sure, videos are simpler and don't rely on a proprietary binary blob, but there's nothing like loading up a Strong Bad email and clicking random things (or, uh, holding down tab) trying to find secrets.
Ah, don't give me too much credit haha, it was more of a side-project thing for me; I definitely wasn't a professional, especially on the animation side of things (as opposed to the programming side). I was also more involved with the games side of Flash, which Flash became much stronger at once ActionScript 3 came out, which coincided with much better Flash performance. Flash advertising and simple animations were probably stronger earlier.
I'm just interested in the topic because it's kind of neat to look back at the internet and observe its history and the changes it's gone through. Just did a little wikipediaing for fun - here's when a few different websites / notable games were released:
or VBScript, for that matter... I think there's some confusion about why JS won. JS couldn't easily manipulate the DOM either until jQuery in 2005-2006.
The fact that Java, ActiveX, etc. had full control of the system and caused problems ensuring security was an issue, but it is not the reason why JS beat them all.
Don't discount the power of 1) free and 2) easy to use software that is 3) not controlled by a single corporation. JS is the only web programming language that is all of these.
Yea, maybe Python or Clojure in the browser would be cool. I would argue Clojure is absolutely more difficult for a novice to learn, and Python provides what additional benefit? JS was there first.
The only reason plugins existed is that you couldn't do these kinds of things in the DOM. jQuery, and the subsequent advances in browser technology - HTML, CSS, JS - made it so you can. Also, other things being equal, programmers will choose elegance over bloat, fewer layers of abstraction over more. The plugin architecture became just an unnecessary layer between the programmer and the browser after HTML/JS/CSS caught up.
JS did not become ubiquitous by accident, or because it was the only choice. There were many choices (all being pimped by big, well-funded companies). JS won because it was better than the alternatives.
While security is the main answer, it was also that Java and Flash weren't necessarily available. That is, getting them to run on another machine was frequently a huge issue, especially if you tried to put in any kind of complexity.
JavaScript, on the other hand, was omnipresent and comparatively accessible. It was the least bad option by a wide, wide margin. For a different comparison, I switched from Java applets to PHP in the early 2000s. I didn't really get into JavaScript until many, many years later, around 2009: before that, JavaScript was mostly a way to make Flash work properly.
Oh, yeah, especially after Microsoft stopped shipping Java.
There was also the version issue to worry about. "Pardon me, Mr./Ms Customer/User -- would you mind terribly going and downloading and installing a 20 MB Java update on your 14.4k dialup connection before using this page?"
I always found it a bit hilarious how Sun, after getting Microsoft rather on board the Java train (albeit with their necessary native extensions), decided to sue them and put an end to it, promptly killing off Java distribution and adoption by the largest software developer in the world.
Even stranger is how Sun, a hardware/platform company, decided that making a popular platform that's hardware- and platform-independent would help their business. Sometimes I wonder if there was a really well-thought-out plan, or whether people were just doing things.
The "necessity" of those extensions is debatable, and they meant that code wouldn't be portable to Sun's implementation. There was real cause for concern, and there weren't a lot of other options for fixing it.
Sun probably also realized that they weren't about to compete directly with mighty Microsoft on platform lock-in of all things, so they played a different game.
Flash still powers YouTube for most users, Silverlight powers Netflix, and Unity's plugin is required for most 3D games on Chrome's Marketplace (not sure where else to look for successful HTML5 games).
The original machine that influenced C's model of computers was the PDP-11 (http://en.wikipedia.org/wiki/PDP-11). It had a mov instruction instead of load/store. It had no dedicated IO instructions. It could be treated as a sort of generic random access machine (http://en.wikipedia.org/wiki/Random-access_machine) and that is what C did and still does. So there was a reality that C simply modeled, and it was copied (with all sorts of modifications) many times.
If you keep the same distance between you and the car in front of you regardless of your speed - which is what you seem to be saying - I hope you are never behind me.
You're right. But in a traffic jam we operate at speeds that allow us to do this. Look, the article's author actually mentions it too:
Quote:
"The difference between these is negligible at high speeds, but at a low enough speed, it becomes difficult to maintain a 2 second following distance from the front bumper of the car in front of you without impinging on the rear bumper of the car in front of you, especially if said car is more than 0 feet long. So under these circumstances the flow rate of the highway decreases below 1 car every 2 seconds — maybe to 1 car every 5 seconds. So now you have to wait 5 seconds for every car in front of you in line."
There you have it. We have plenty of space/time to play with when we're in a traffic jam, since we're moving at very, very slow speeds.
Obviously I'm not saying you should stay 10ft behind somebody when going 80mph.
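To put rough numbers on the quoted passage: at a steady 2-second headway a lane passes 3600 / 2 = 1800 cars per hour, while at 5 seconds it drops to 3600 / 5 = 720. With, say, 60 cars queued ahead of you (a number I'm making up purely for illustration), that's 60 x 2 = 120 seconds of waiting in the first case and 60 x 5 = 300 seconds in the second.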
I personally have often gotten the impression that a large portion of developers already have The One Programming Language in their mind. It's usually Java or C# or sometimes still C++.
JavaScript fills me with joy because it is so unlike these languages and I get to watch people grow when they stop kicking and screaming and learn it. JavaScript doesn't have any super unique features or anything; it's just so not Java while frequently being suddenly essential to people who think in Java.
This is a fundamental place where the worlds of C# and Java differ. Where the Java world would say, "developers might misuse this, better not have it in the language," the C# world says, "developers could really use this, better put it in the language."
But it doesn't appear (anecdotally I admit) that people feel that way. Seems the "whole world" (speaking loosely) is turning against Java because of this very philosophy.
Because Java has been so conservative, people actively hate its verbosity, boilerplate-ness, and lack of language features (anonymous functions, first-class functions, etc.).
So Java has, for many years, helped huge teams of mediocre developers avoid certain kinds of self-inflicted wounds by being conservative in terms of language features. And the result seems to be that Java is increasingly scorned.
I know if I had to replace my C# work with Java, I'd feel incredibly frustrated at the lack of language power. In fact, I wouldn't choose to do it -- it would have to be a hell of a project or opportunity to pull me into it.
(Luckily for me, there are many better options, like Clojure or Scala on the JVM, or Haskell, Python, Ruby, etc. off the JVM).
I work on a pretty huge C# code base. Use of LINQ is the least of its problems. If anything, I'd say one problem is people not knowing about things like lambda expressions, LINQ, generics, or whatever. It can really make code harder to parse when it's written in ways that don't take advantage of the full power of the platform.
If you're doing that, it's crap code; you should have defined a few named object types. Maybe your "Dictionary<string, object>" is e.g. actually a "PropertyMap". This is not a problem in the C# language.
Not at all - you just missed a few facts that need considering. Let's rip it to bits some more:
It's in System.Windows.Forms, isn't it? I don't really want that dependency and the associated resolution being dragged into a web app; otherwise the compiler has to load the entire assembly's metadata.
Also, it requires full trust.
Oh and finally it isn't serializable.
Which is why we end up with SerializableDictionary<K, V>, which is even longer and is an adapter for Dictionary<K, V> that implements serialization.
That's why it all sucks.
And I haven't even included ConcurrentDictionary thread safety yet.
In my experience, that kind of stuff is usually a problem mainly because of C#'s verbosity from lack of type inference. Although, with 4 unnamed elements there, it might start making sense to create a new type, and then it's just "List<List<MyRecord>>".
Some of the code was certainly written before those features were available, especially some C# 4 features. I'm sure someone might look at my code someday and wonder why I didn't use async and await. I'm talking about code written today, though.