Not really... All great game engines were spawned out of AAA titles. You should never try to build an engine without a set of games that stretch its limits in all directions, otherwise something will always go wrong. There isn't a single person alive who can manage the complexity of a game engine. As soon as you start splitting things up, you run into problems of distributed development. Getting this right without eating your own dog food is unlikely, at best, no matter how experienced your team is...
What makes the difference between an experienced engine developer, like Crytek or Epic (Unreal), and a hobbyist or a company doing it for the first time is mostly this: both will definitely build a game with the engine while creating the engine, but the experienced team will actually produce a reusable, well-designed engine, while the inexperienced team will produce a hunk of junk that doesn't fit together and doesn't extend past the game they were trying to develop.
Seems like it. Still, the tons of games that are and have been MADE with Unity over time are what led it to become what it is now.
"Unity was founded in Copenhagen by Nicholas Francis, Joachim Ante, and David Helgason. Its story began on an OpenGL forum in May 2002, where Francis posted a call for collaborators on an open source shader-compiler (graphics tool) for the niche population of Mac-based game developers like himself. It was Ante, then a high school student in Berlin, who responded.
Ante complemented Francis’ focus on graphics and gameplay with an intuitive sense for back-end architecture. Because the game he was working on with another team wasn’t going anywhere, they collaborated on the shader part-time while each pursued their own game engine projects, but decided to combine forces upon meeting in-person. In a sprint to merge the codebases of their engines, they camped out in Helgason’s apartment for several days while he was out of town. The plan was to start a game studio grounded in robust tech infrastructure that could be licensed as well."
Sort of - the original developers made a game that failed commercially, so they decided to sell their tools instead. But the engine has changed a lot since those days, and Unity's lack of real dogfooding is a common complaint from people who use it professionally. Unity don't see the problems that only come up when you try to ship a real game as a team, on console or mobile in particular.
US universities are always amusing to me. You pay about 100 times more than for a German university and, unless you went to an elite school, you end up with an education that leaves you struggling with the content taught in Germany after graduation. But hey, it's really easy to find Stanford equivalents in Germany, as long as you search for a good university for your major instead of just searching for a university. Of course it doesn't come with the prestige and the network, but honestly, is that really worth a quarter million dollars? In the end, if you are smart and willing to learn, the only things you really get from a US elite university are prestige, better equipment, and research opportunities. And all of that isn't really relevant for a whole lot of people.
Well, it is fairly common knowledge that US universities are not worth the price they charge for undergraduate education if you are not in a position to network heavily. They are very good (and by that I mean very rich) for research, however. For a European, going to the US before your PhD makes little sense.
I think that if you get into Stanford, then yes it is worth the eye-watering tuition. If you get into the median school, then it probably isn't unless you're studying engineering, accounting or another field where graduates have good hiring prospects and earning potential early in their careers.
Studying abroad for an entire degree is underrated among Americans.
I would say it's a measure of maturity as a developer if you have no interest in touching something that has no use and is 99% hype. I would have no problem working on a useful blockchain in my job; however, it remains to be seen whether such a thing exists.
Blockchain is like a solution without a problem. The only thing that can be done with blockchain that can't be done without it (i.e. decentralization) mostly has no application in the real world. And there is no need for it, because the technology for that has existed for over 2,000 years. If it were needed, someone would have done it long ago.
But who knows, perhaps at some point in the future a use case will emerge.
Interesting. I hope what the author says isn't actually done. So the computers generated a proof nobody can understand, but fret not, a second computer program can assure us of its validity. Well, that's heartwarming. I hope they are not building the foundations of future math on this house of cards.
If you think a shorter, more elegant proof is more desirable, well, I totally agree with you. I think math is more about the proofs than the results. On the other hand, I have zero problems trusting a proof checked by a computer. I don't know about you, but I usually trust computers with tedious computations much more than I trust myself.
> fret not, a second computer program can assure us of its validity
Formal methods are rigorous enough and mature enough to be helpful in avionics software development, why not in research mathematics?
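To give a toy flavour of what machine-checked means (my own illustration, not from the article; Lean 4 syntax assumed):

```lean
-- The kernel verifies this proof term mechanically; nobody has to
-- re-read the derivation to trust the result. A real formalized
-- proof is just this, scaled up enormously.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The checker itself is a small, heavily scrutinized program, which is exactly why the avionics comparison holds.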
> I hope they are not building the foundations of future math on this house of cards.
Mathematics is always a house of cards, and human mathematicians will always be fallible. It's already possible for a published result to later turn out to be invalid, in turn invalidating papers that relied on it.
How about: a group of highly sophisticated and very skilled mathematicians generate a proof no mere mortals (and lesser mathematicians) can understand, but fret not: another genius mathematician can assure us of its validity.
Hmmmmmmm. Yeah, that sounds much better. What do you think?
That sounds more like clickbait, as if someone took the latest marketing buzzwords and somehow tried to make a product out of them. If there were any meat to this, they would lead with the use case and say: "Hey, here is the problem the world has, and here is how we solved it." But anything blockchain-related has always gone more like: "Uhh look, here is a blockchain. Enjoy." This is worse...
This seems to be just like Ethereum without the blockchain. What does "building directly on the internet" even mean? What do I ever need distributed, cryptographic consensus for? The use-case escapes me completely; this even seems more useless than Ethereum, where I can see at least some applications thanks to the blockchain. But the same thing without the blockchain just doesn't make any sense. But hey, it's got WASM, so it must be cool.
> What do I ever need distributed, cryptographic consensus for? The use-case escapes me completely,
Generally, the use case for decentralized systems is when you want to disintermediate an existing middle-man who has proven untrustworthy, expensive, or problematic in some way. Or you want to enable a group of parties to coordinate their activities in ways not previously possible without a (currently non-existent) trusted intermediary. The middle-man function is automated and federated. That’s it.
Any business model dependent on a middle-man is potentially a business model for decentralized systems, if the economics, risks and feature requirements compare favorably. Bitcoin addressed arguably the most impactful one - money - but there may be others.
>this even seems more useless than Ethereum, where I can see at least some applications thanks to the blockchain.
DFINITY is a “blockchain” in the general meaning of the word, i.e. a public decentralized computing platform with a native currency and attendant economic incentives. It may use different data structures under the hood than most current blockchains, but the use cases are mostly the same as Ethereum’s.
The fact that you call Ethereum useless shows that you are way too ignorant to voice such a strong opinion.
Educate yourself and try again.
I have worked with blockchains since 2013, and Ethereum is a major driver of a lot of innovation and real-life applications. I've built an energy trading platform on top of it that allows local businesses to localize their energy management. It's deployed and working, granted, as a small PoC here in Adam. But still.
Also, what do you think powers 99% of all altcoins?
Is there a hub of some sort to follow Ethereum projects? Last time I checked (a few months ago, I believe), cryptocats was the app with the most activity.
Also, I'm curious about your project, if there's anything you're willing to share.
I don't know. For me, Apple has lost major brownie points here. I will be looking at non-iPhone options the next time I switch. It remains to be seen how this hits Apple regardless of the outcome, but I think Epic will generally be in a tight spot should they lose the trial... which makes it a battle of survival for Epic and a minor bump in the road for Apple. An unequal court case, to be sure.
It will be inadmissible because of the psychological effects. Imagine having someone claim you murdered someone, and you also happened to drive by the park where the person was murdered at the same time.
Now someone wants to frame you, perhaps because you are a celebrity or a person with money to extort from. So they claim that they saw you murder this person.
Normally, this wouldn't hold any water and no jury would convict you; we are not in the Middle Ages anymore. There would be no evidence of you around the crime scene, because of course, you weren't there.
Now imagine that they are able to produce audio recordings of a conversation you had with the victim, screaming at them and saying "you are gonna pay for this". And then they also happen to have video from a smartphone camera that captured you stabbing the victim.
Now tell me, if the jury sees all this and then it gets dismissed, because of "deepfake" claims or whatever, how will it affect the outcome of this trial?
If you are truly innocent, this might still work out alright, unless you are not white. But imagine there is even the SLIGHTEST connection to the real world. It doesn't have to be completely fake. Maybe you had some fight with the victim earlier, or there are witnesses who testify under oath that you had several heated arguments with them, etc.
Deepfakes can change the entire outcome of trials, by biasing the jury and proceedings against you. That's why any audio and video evidence essentially needs to be rejected without a very very thorough forensic analysis of its authenticity. And even then you would still not know for sure.
Deepfakes will change the entire outcome of trials the same way photoshop changed the entire outcome of trials before it.
Also you should see the type of video evidence used in court. It’s grainy 5 FPS surveillance footage that doesn’t even show their face. Deepfakes are not necessary if you want to frame someone.
The jury wouldn’t see it if it got dismissed, typically by a Motion in Limine. Opposing counsel also gets to object to evidence before it’s shown to the jury during trial. If the trial judge allowed it and, on appeal, the appellate court decided it shouldn’t have been shown to the jury, there would be a new trial.
In your examples, it would most likely be a Mistrial scenario, the jury would be dismissed, and the prosecuting team would have to start over again from scratch later, assuming they aren't themselves in trouble for introducing faulty evidence.
I’d be really curious to hear about a case where prosecutorial misconduct was punished. Don’t they have something akin to the “qualified immunity” that the police enjoy?
You really don't need to read assembly in any but the rarest cases. What you need is a profiler to show you the hotspots, and you need to stop prematurely optimizing on a line-by-line basis. Just stop it. Optimize algorithmic (big-O) complexity (but don't go wild there either, unless there is a business reason), and keep your fingers off micro-optimizations unless the profiler points there, and only if there is a business reason as well...
Totally agreed. Getting comfortable with a profiler is a superpower.
FWIW, this is also one of Rob Pike's rules of programming:
> Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.
It's implicit, but the "proven" part implies use of a profiler.
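To make Rule 1 concrete, here's a toy sketch (mine, not Pike's; the function names are invented) where the "obviously expensive" function isn't the bottleneck:

```rust
use std::time::Instant;

// Looks expensive: a million sqrt/sin calls.
fn math_heavy(xs: &[f64]) -> f64 {
    xs.iter().map(|x| x.sqrt().sin()).sum()
}

// Looks harmless, but allocates a fresh String per element.
fn string_heavy(xs: &[f64]) -> usize {
    xs.iter().map(|x| x.to_string().len()).sum()
}

fn main() {
    let xs: Vec<f64> = (0..1_000_000).map(|i| i as f64).collect();

    let t = Instant::now();
    let a = math_heavy(&xs);
    println!("math_heavy:   {:?} (sum = {a:.2})", t.elapsed());

    let t = Instant::now();
    let b = string_heavy(&xs);
    println!("string_heavy: {:?} (total len = {b})", t.elapsed());
}
```

On most machines the string version loses badly despite looking innocent. A real profiler (perf, Instruments, VTune) gives you the same answer for the whole program at once.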
This is a matter of experience. If I have a tight loop running millions of times with a heap allocation inside, I can be 99% sure it will be slow without looking at the profiler, and if there is an easy way to avoid the allocation without obfuscating the code (e.g. simply hoisting it out of the loop, or using the stack instead of the heap), I'd just do it. Profiling is a waste of time in such cases.
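For what it's worth, the fix being described looks something like this (a hedged Rust sketch; `process` and the data are made up):

```rust
// Stand-in for real per-record work.
fn process(buf: &[u8]) {
    let _ = buf.len();
}

// Before: a fresh heap allocation on every iteration.
fn per_iteration_alloc(records: &[&[u8]]) {
    for r in records {
        let mut buf: Vec<u8> = Vec::with_capacity(r.len());
        buf.extend_from_slice(r);
        process(&buf);
    }
}

// After: one allocation, reused; clear() keeps the capacity.
fn hoisted_alloc(records: &[&[u8]]) {
    let mut buf: Vec<u8> = Vec::new();
    for r in records {
        buf.clear();
        buf.extend_from_slice(r);
        process(&buf);
    }
}

fn main() {
    let records: Vec<&[u8]> = vec![&b"alpha"[..], &b"beta"[..]];
    per_iteration_alloc(&records);
    hoisted_alloc(&records);
}
```

Same behaviour, one allocation instead of millions, and the code arguably reads no worse.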
Also, bottlenecks happen in surprising places, and allowing slow/bad code just because the profiler doesn't scream about it on the development machine is a recipe for surprising performance issues in the future. The slow code might become a problem when a user has a slightly different task for the program.
But I think the aphorism covers the case you’re describing too.
It’s possible that your loop is slow, but if you’re working on a program of sufficient size and complexity, you simply won’t know if it’s the bottleneck without using a profiler.
The purpose of the statement is to save you the trouble of 10x-ing the performance of a function so that it takes 0.1% of all execution time instead of 1%. In sufficiently complex programs, without a profiler, you won't really know if a loop is taking 1%, 10%, or 80%.
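That's just Amdahl's law in disguise (my framing, not the parent's): if a fraction p of runtime is sped up by a factor s, the overall speedup is

$$ S = \frac{1}{(1 - p) + p/s}, \qquad p = 0.01,\ s = 10 \;\Rightarrow\; S = \frac{1}{0.99 + 0.001} \approx 1.009 $$

so 10x-ing a 1% hotspot makes the whole program less than 1% faster.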
While I agree with you, I also remember a presentation which showed that there was big variability in performance caused by minor changes in the environment: just using a different username could lead to very different performance.
And there are also the 'death by a thousand cuts' issues, for which profilers are "useless".
There's a counterargument that you should focus on letting the compiler help you, which is just a redirected way of "helping" the compiler.
Types are a really good example of where you want to help the compiler. Type signatures let you define how a type can be used. By adding as much information as possible about how the type should be constrained, you can help the compiler prevent you or someone else from doing something stupid, and on occasion you can earn some optimisations in the process.
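A small Rust sketch of that idea (the types and names are mine, purely illustrative): newtypes stop you from swapping arguments, and a constrained type like `NonZeroU32` even earns a layout optimisation for free:

```rust
use std::num::NonZeroU32;

// Newtypes: same representation as f64, but the compiler
// refuses to let you mix them up.
#[derive(Clone, Copy)]
struct Meters(f64);
#[derive(Clone, Copy)]
struct Seconds(f64);

fn speed(d: Meters, t: Seconds) -> f64 {
    d.0 / t.0
}

fn main() {
    let d = Meters(100.0);
    let t = Seconds(9.58);
    println!("{:.2} m/s", speed(d, t));
    // speed(t, d); // compile error: arguments swapped

    // The constraint earns an optimisation: Option<NonZeroU32>
    // fits in 4 bytes, because 0 is free to encode None.
    assert_eq!(
        std::mem::size_of::<Option<NonZeroU32>>(),
        std::mem::size_of::<u32>()
    );
}
```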
This sounds fascinating. Why didn't we have Google much earlier to teach us all how to replace a 3-4 year degree with 6 months of online coursework?
Oh, and they accept it for their own hiring. That's really good, except that Google, like many other big companies, never gave a damn about education anyway. And that's a good thing, to be sure. But they still care about technical skill. So unless they are aiming the shotgun at their own feet and teaching people how to game their own interview process (which, at least in Google's case, is doubtful, since their process was quite elaborate a few years back), I just hope they offer these courses for free, because you ain't gonna get hired at Google with this. Seems more like a marketing stunt.
Tens of millions of people lack the financial means to afford four years of college. Unfortunately, a college degree (or at least a certain amount of university experience) is the de facto gateway to professional jobs in today’s world. A means to help people bootstrap themselves into an entry level position in a real career has to be seen as a positive.
What? Google doesn't give a damn about education, but they actually care so much that they're lying about their commitment to hiring people from the degree curriculum they designed?