It needs to do a lot more than that if you want to have something that doesn't alias like crazy. And significantly more if you want it to sound like a game boy.
Well, the output of the wave shaper approach in the article is exactly the same sharp digital square wave that you'd get from a trivial square wave generator.
But I overlooked the point the GP mentions: the processor source code must be loaded from a separate JS file. That's quite annoying overhead.
What I was trying to imply was that it's not enough to build a square wave generator to sound like a Game Boy, even if the generator in the Game Boy were a perfect square wave generator, which I'm not sure it is.
The sound created by that generator passes through a fairly complicated filter known as the Game Boy's speaker. To properly recreate a Game Boy sound, you need to find or capture an impulse response of that speaker and convolve it with the output of your oscillator.
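For illustration, here's a rough sketch of that convolution step in C++, assuming you already have the speaker's impulse response as a buffer of samples (the function name is made up; a real-time implementation would use FFT or partitioned convolution rather than this naive direct form):

    #include <cstddef>
    #include <vector>

    // Naive direct-form convolution of the oscillator output with the speaker IR.
    // Assumes both buffers are non-empty.
    std::vector<float> apply_speaker_ir(const std::vector<float>& oscillator_out,
                                        const std::vector<float>& impulse_response) {
        std::vector<float> out(oscillator_out.size() + impulse_response.size() - 1, 0.0f);
        for (std::size_t n = 0; n < oscillator_out.size(); ++n)
            for (std::size_t k = 0; k < impulse_response.size(); ++k)
                out[n + k] += oscillator_out[n] * impulse_response[k];
        return out;
    }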
Well, and a single recorded impulse response is probably not enough, either, if you want to be super accurate. The directivity pattern is probably awful and shapes the sound as well.
I know all of that. But I just didn't want to get into these details in this thread.
I'm not an expert on GB chiptune, but what I've heard from enthusiasts is that different GB models sound different, and even within the same model there are variations. That said, it wouldn't surprise me if the GB waveforms alias, at least from the digital side, given that it was operating with pretty minimal synthesis capabilities. There's probably some extra low- and high-pass filtering that shapes the sound after the waveform itself is generated. Looking at some examples of actual GB PWM waveforms, some high-pass would for sure make a pure PWM waveform more GB-like. And some low-pass could help a bit with aliasing.
It does, you definitely "feel" the change. I actually have to do some digging to figure that out; I unfortunately don't have my childhood GBA anymore, so I have to rely on audio clips to make that call.
Don't forget to factor in the Game Boy speaker when listening to those audio clips. It will change how these waveforms sound very significantly. Those clips, and the classic sounds of chiptunes, are never "just" the sound coming off the DAC.
I understand the use case for this, but when I see it I always wonder whether, and tend to think, I would prefer some external code generation step instead, rather than falling back on macros in the preprocessor. Like an external script or something.
Now you have an additional stage in your build, a bunch of new code to maintain, and either a bespoke language embedded in your standard C++ or a bunch of code emitting C++ separately from the code it logically belongs with.
Compare with a solution that's 100% standard C++, integrates into your build with zero work, can be immediately understood by anyone reasonably skilled in the language, and puts the "generated" code right where it belongs.
CMake makes this pretty painless. My codegen targets need only two additional commands to handle the generation itself and its dependencies: add_custom_command to call the codegen executable, and add_custom_target to wrap its outputs in a "virtual" target that the rest of the program can depend on (that last part is just for tidying up).
And I'll dispute the claim that any complex C preprocessor task "can be immediately understood by anyone reasonably skilled in the language". Besides, code should ideally be understood by "anyone reasonably likely to look at this code to work in it", not merely "anyone reasonably skilled".
This isn't complex. It's a bit unusual, but not hard to understand if you understand the basics of how #include and #define work.
If you're working on the sort of C++ codebase that would benefit from this sort of code generation, and you're not reasonably skilled in C++, then god help you.
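For anyone unfamiliar, a minimal X-macro looks something like this (names are made up for illustration): the list is written once and expanded several times with different definitions of X.

    /* The single source of truth: one entry per item. */
    #define COLOR_LIST \
        X(RED,   0xFF0000) \
        X(GREEN, 0x00FF00) \
        X(BLUE,  0x0000FF)

    /* Expansion 1: an enum of the names. */
    #define X(name, value) name,
    enum Color { COLOR_LIST };
    #undef X

    /* Expansion 2: a parallel table of the values. */
    #define X(name, value) value,
    static const unsigned color_values[] = { COLOR_LIST };
    #undef X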
Are you talking about the X macro itself, or more generally?
I may be the obtuse one here, but for a more complex example, it took me a few hours to manage to make nested loops using Boost PP (for explicit instantiations). Even now, I avoid writing a new one unless it's a quick copy-paste, because it's so different from usual C++ programming that my painfully acquired understanding quickly evaporated... as I suspect it does for anyone who doesn't particularly focus on the C preprocessor.
In the end, it's just simpler to get some Python script or C++ program to write a string and dump that to a file than to write something illegible with the C preprocessor, if doing something at all complicated (in my opinion).
I'm talking about X-macros. There's a wide range of preprocessor shenanigans, from "everybody needs to know this" to "oh my god why." Each construct needs to be evaluated on its merits. IMO X-macros are closer to the simpler side of that spectrum. Consider writing things out by hand if you just have a few, but if you have a lot of things repeating like this, they're a fine tool to use. Boost PP is a whole different level of ridiculousness and I don't see ever using that sort of thing for anything serious.
> Now you have an additional stage in your build, a bunch of new code to maintain, and either a bespoke language embedded in your standard C++ or a bunch of code emitting C++ separately from the code it logically belongs with.
The preprocessor is already a bespoke language embedded in your C++, and code written in it is generally harder to maintain than, like, Python.
The cost of doing something non-standard is real, but not infinite; at some point the benefit in code maintainability and sanity is worth it.
I agree that you can go too far with it and it becomes better to do it a different way, but the X-macros technique is straightforward and easy to understand.
I've done this in C with the C preprocessor and Java with m4[0].
The upside of doing it natively is that it keeps the build simpler. And everybody at least knows about the existence of the C preprocessor, even if they don't know it well. And it's fairly limited, which prevents you from getting too clever.
The big downside of doing it with the C preprocessor is that the resulting code looks like vomit if it's more than a line or two because of the lack of line breaks in the generated code. Debugging it is unenjoyable. I'd recommend against doing anything super clever.
The upside of doing it out of band is that your generated source files look decent. m4 tends to introduce a little extra whitespace, but it's nothing objectionable. Plus you get more power if you really need it.
The downside is that almost nobody knows m4[1]. If you choose something else, it becomes a question of what to use, whether anyone else knows it, and whether it's available everywhere you need to build.
Honestly, integrating m4 into the build in ant really wasn't too bad. We were building on one OS on two different architectures. For anything truly cross-platform, you'll likely run into all the usual issues.
ETA: Getting an IDE to understand the out-of-band generation might be a hassle, as other folks have mentioned. I'm a vim kinda guy for most coding, and doing it either way was pretty frictionless. The generated Java code was read-only and trivial, so there wasn't much reason to ever look at it. By the time you get to debugging, it would be entirely transparent because you're just looking at another set of Java files.
[0] This was so long ago, I no longer remember why it seemed like a good idea. I think there was an interface, a trivial implementation, and some other thing? Maybe something JNI-related? At least at first, things were changing often enough that I didn't want to have to keep three things in sync by hand.
[1] Including me. I re-learn just enough to get done with the job at hand every time I need it.
This is what I do, these days. Whenever I would previously have reached for X-macros or some other macro hack, I tend to use Cog [1] now instead.
It's quite a clever design; you write Python to generate your C++ code and put it inside a comment. Then when you run the Cog tool on your source file, it writes the generated code directly into your C++ file right after your comment (and before a matching "end" comment).
This is great because you don't need Cog itself to build your project, and your IDE still understands your C++ code. I've also got used to being able to see the results of my code generation, and going back to normal macros feels a bit like fiddling around in the dark now.
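For anyone who hasn't seen it, a Cog section in a C++ file looks roughly like this (the declarations are made-up examples). Running `cog -r` on the file rewrites the lines between the generator comment and the end marker, and they stay checked in as ordinary C++:

    /*[[[cog
    import cog
    for name in ["load", "save", "reset"]:
        cog.outl(f"void handle_{name}(int slot);")
    ]]]*/
    void handle_load(int slot);
    void handle_save(int slot);
    void handle_reset(int slot);
    //[[[end]]]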
IDEs understand preprocessor macros, so IDE features (jump2def, etc) work with this. IDEs also can expand the macro invocations. So, I prefer the macros when possible :-).
The C# "source generator" approach is a good compromise; it runs within the build chain so has the ease-of-use of macros in that respect, but they don't need to be written in a weird macro language (they are C# or can call external tool) and when you debug your program, you debug through the generated source and can see it, more accessible than macros. Not sure if there is something similar in C/C++ integrated with the common toolchains.
But when working outside C/C++ I've found myself missing the flexibility of macros more times than I can count.
> But when working outside C/C++ I've found myself missing the flexibility of macros more times than I can count.
Me too, and that's even in Lisp!
Preprocessor macros are hard and bug-prone because they share the failings of the Unix philosophy of "text as universal interface": you're playing with unstructured (or semi-structured) pieces of text, devoid of all semantics. And this is also what makes them occasionally useful; some code transformations are much, much easier to do when you can manipulate the text form directly, ignoring syntax and grammar and everything.
Only the final result must be correct code; the starting point and intermediate values can be anything, and you don't need to make sure you can get from here to there through valid data transformations. This is a really powerful capability to have.
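A tiny illustration of that point (made-up example): neither macro body below is valid C or C++ on its own, but the fully expanded result is.

    #define BEGIN_ENUM(name)  enum name {
    #define ENUM_VALUE(v)     v,
    #define END_ENUM          };

    BEGIN_ENUM(color)
    ENUM_VALUE(RED)
    ENUM_VALUE(GREEN)
    ENUM_VALUE(BLUE)
    END_ENUM   /* expands to: enum color { RED, GREEN, BLUE, }; */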
(I also explicitly compared preprocessor macros to "plaintext everything" experience that's seen as divine wisdom, to say: y'all are slinging unstructured text down the pipes way too much, and using preprocessor way too little.)
Using the C preprocessor is standard, available, compatible, and the major usage patterns are "known". In a lot of cases, it's way easier to reason about than learning how some external generation tool is used to generate the code. In order to understand these macros, all I need to do is read the source code where they're used.
Nothing C++ related in the pattern though. This C preprocessor trickery is practically so classic you couldn't necessarily even call it a "trick".
After trying to wrangle Boost PP and other advertised compile-time libraries such as Boost Hana (which still has some runtime overhead compared to the same logic with hardcoded values), I've finally converged on simply writing C++ files that write other C++ files. It could be Python, but I'd rather keep the build simple in my C++ project. Code generation is painless with CMake; no idea about other build configuration utilities.
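To be concrete, the generator side can be as dumb as this sketch (file name, array name, and values are all made up for illustration):

    #include <fstream>

    // Host-side program: writes a header of hardcoded values that the real
    // project then #includes and compiles.
    int main() {
        std::ofstream out("generated_tables.hpp");
        out << "// Auto-generated; do not edit by hand.\n";
        out << "#pragma once\n\n";
        out << "inline constexpr int kSquares[] = { ";
        for (int d = 1; d <= 8; ++d)
            out << d * d << (d < 8 ? ", " : " ");
        out << "};\n";
    }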
CMake has a particularly irritating flaw here, though, in that it makes no distinction between host and target when cross-compiling, which makes it really difficult to do this kind of code generation while supporting that use case (which is becoming more and more common).
Right, I hadn't thought of that, to be honest. If I understand correctly, you're saying the codegen targets will be compiled to the target arch, and then can't be run on the machine doing the compiling?
I think one solution might be to use target_compile_options() which lets you specify flags per target (instead of globally), assuming you're passing flags to specify the target architecture.
That only works if it's mostly the same compiler, unfortunately. They could be completely different executables, calling conventions, etc. I don't know why CMake still has such a huge hole in its feature set, but it's quite unfortunate.
One case I benchmarked was Bernstein/Bézier and Lagrange element evaluation. This is: given a degree d triangle or tetrahedron, given some barycentric coordinates, get the physical coordinate and the Jacobian matrix of the mapping.
"Runtime" here means everything is done using runtime loops, "Hana" using Boost Hana to make loops compile-time and use some constexpr ordering arrays, "hardcoded" is a very Fortran-looking function with all hardcoded indices and operations all unrolled.
As you can see, using Boost Hana does bring some improvement, but there is still a factor of 2x between that and hardcoded. This is all compiled with Release optimization flags. Technically, the Hana implementation is doing the same operations in the same order as the hardcoded version, with all indices known at compile time, which is why I say there must be some runtime overhead to using hana::while.
In the case of Bernstein elements, the best solution is to use de Casteljau's recursive algorithm with templates (a 10x to 15x speedup over the runtime recursion, depending on degree). But not everything recasts itself nicely as a recursive algorithm, or at least I didn't find a way for Lagrange. I did enable -flto since, from my understanding (looking at call stacks), hana::while creates lambda functions, so perhaps a simple function optimization becomes a cross-unit affair if it calls hana::while. (speculating)
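To illustrate the template-recursion point, here's an untested sketch reduced to a 1D Bézier curve of degree D (the element version works on triangles/tets, but the recursion pattern is the same). Since the degree is a template parameter, every loop bound is known at compile time and the whole evaluation unrolls:

    #include <array>

    // de Casteljau evaluation of a degree-D Bezier curve at parameter t (C++17).
    template <int D>
    double de_casteljau(std::array<double, D + 1> c, double t) {
        if constexpr (D == 0) {
            return c[0];
        } else {
            std::array<double, D> next{};
            for (int i = 0; i < D; ++i)
                next[i] = (1.0 - t) * c[i] + t * c[i + 1];
            return de_casteljau<D - 1>(next, t);
        }
    }

    // e.g. a cubic with control values {0, 1, 3, 2} evaluated at t = 0.25:
    //   double y = de_casteljau<3>({0.0, 1.0, 3.0, 2.0}, 0.25);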
Similar results for computing the Bernstein coefficients of the Jacobian matrix determinant of a Q2 tetrahedron: a factor of 5x from "runtime" to "hana" (the only difference being that the for loops become hana::whiles), and a factor of 3x from "hana" to "hardcoded" (the loops are unrolled). So a factor of 15x between naive C++ and the code-generated files. In this function in particular, we have 4 nested loops; it's branching hell where continues are hit very often.
It would be quite interesting to look at the actual code you used and at the codegen. By any chance, is it viable for you to open-source it? I'd guess it would be of great interest to the Hana author(s).
What compiler/version did you use? For example, MSVC isn't (or at least wasn't) good at always evaluating `constexpr` at compile time...
> hana::while creates lambda functions, so perhaps a simple function optimization becomes a cross-unit affair if it calls hana::while. (speculating)
Hmm, I'd say it (LTO) shouldn't matter, as these lambdas are already fully visible to the compiler.
I never thought to contact them, but I might do that, thanks for the suggestion. This is something I tested almost two years ago; I have the benchmarks written down, but I've since deleted the code I used, save for the optimal implementations (though it wouldn't take too long to rewrite it).
I tested with clang on my Mac laptop and gcc on a Linux workstation. Versions, I'm not sure. If I test this again to contact the Hana people, I'll try to include all that information. I did test the constexpr ordering arrays by making sure I could pass, say, arr[0] as a template parameter, which is only possible if the value is known at compile time. Though it's also possible the compiler could be lazy in other contexts, as in not actually evaluating at compile time if it figures out the result doesn't need to be known at compile time.
Oh yeah, you're right, I was confusing translation unit and function scope.
Yeah, it's all done automatically when you build, and dependencies are properly taken into account: if you modify one of the code generating sources, its outputs are regenerated, and everything that depends on them is correctly recompiled. This doesn't take much CMake logic at all to make work.
In my case, no, it's dumb old code writing strings and dumping that to files. You could do whatever you want in there, it's just a program that writes source files.
I do use some template metaprogramming where it's practical versus code generation, and Boost Hana provides some algorithmic facilities at compile time, but those incur some runtime cost. For instance, you can write a while loop with bounds evaluated at compile time, which lets you use its index as a template parameter or pass it to constexpr functions. But sometimes the best solution has been (for me, performance/complexity wise) to just write dumb files that hardcode things for the different cases.
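As a rough, untested sketch of that pattern, assuming Boost.Hana is available (the ordering array and visit() function are made-up placeholders): hana::while_ drives the loop, and because the index is a hana::integral_constant, its ::value can be used as a template argument inside the body.

    #include <boost/hana.hpp>
    #include <array>
    #include <cstdio>

    namespace hana = boost::hana;

    constexpr std::array<int, 4> order{3, 1, 0, 2};  // stand-in for a constexpr ordering array

    template <int I>
    void visit() { std::printf("slot %d\n", I); }

    int main() {
        hana::while_(
            [](auto i) { return i < hana::int_c<4>; },  // bound known at compile time
            hana::int_c<0>,                             // initial index
            [](auto i) {
                visit<order[decltype(i)::value]>();     // index usable as a template argument
                return i + hana::int_c<1>;
            });
    }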
External codegen introduces a lot of friction in random places. Like how your editor can no longer understand the file before you start building. Or how it can go out of date with respect to the rest of your code until you build. If you can do it with a macro it tends to work better than codegen in some ways.
New people turning 18. It's no different from the FL Studio model, which has worked admirably well for much longer than this company has existed. The population isn't static.
> For a long time now, many people whose religious values are deeply important to them have felt disrespected and looked down upon by "the left"
Imagine saying this on the thread of an article that is literally about people using their faith to justify artifact smuggling and other crimes. The disrespect was entirely brought on by their insanely shitty and sociopathic behavior. Their faith isn't what is causing them to be disrespected. It's their behavior.
> My overarching sense is that this whole saga has been largely Mr. Alexander’s fault and it could have been easily avoided.
> Alexander has dropped the Mother Of All Bombs on this situation, displaying disrespect towards the reviewing industry, and regarding reviewers as trivial annoyances that can be easily brushed aside. The outcome of this saga and who will ultimately withstand the fallout remains to be seen, but Mr. Alexander almost certainly looks like an ass at this moment in time, and in my opinion, any negative assessment he receives is largely self-inflicted.
>There’s no doubt in my mind that Eric Alexander of Tekton Design is largely in the right, and in principle, challenging these reviewers was mostly justified.
The next sentence is revealing though:
>The problem, and the reason we’re here now dwelling on it, is how he went about it.
I'm not sure if I understand the first of the quotes, honestly, given the rest of the content. But that seems to be what GP was referencing.
I remember seeing his posts on ASR. Some really fun stuff like how the air coming out of the screw holes for the feet would produce a supersonic boom; in a ported enclosure. Wildly entertaining.
Are you really going to try to say that the absurd levels of bile, vitriol, and threats that he received for doing his job during a pandemic has zero knock-on effect for other public health officials?
Woah, I'm not sure how you got that from my comment. It was a genuine question since OP made it seem as if Dr. Fauci had played some role in the USDA's handling of the H5N1 epidemic. In fact I was concerned OP was trying to pin yet another conspiracy on him.
That sort of arrogance is absolutely out of control in the tech industry, and it's bizarre because I've never seen it at remotely the same level anywhere else.
It can make it difficult to work in the industry because you find yourself surrounded with expert beginners who (generally privately) think they're geniuses.
I love working with people who aren't afraid to solve problems, but are also firmly in the camp of recognizing how clueless we usually are. We shouldn't be terrified of failure, anxious about what we don't know, etc. But man, some humility goes a long way.
The alternative leads to terrible software, team dynamics, work-life balance, etc.
Heard a guy about six months out of undergrad once declare (completely serious) that of course he knew how to run a school district, he attended public school!
Wow did that make me distrust every suggestion he made.
Hahaha. As an undergrad at my university, I asked a guy just before an exam on electromagnetic waves if he had studied properly, and the guy told me, dead serious, that he knew Ohm's law and could derive everything from it!
In a different way. There's the old joke about doctors and God, and you will certainly find attorneys who are full of themselves. But while I've never met an attorney who thought they were an engineer simply because they were an excellent lawyer, I've encountered plenty of engineers who believe themselves to be masters of the law (including here on Hacker News), having logically deduced it from first principles with their superior intellect.
> I've never met an attorney who thought they were an engineer simply because they were an excellent lawyer
Not sure about attorneys, but there are certainly legislators / regulators who think that, or who at least think that every problem they throw at engineers, like implementing end-to-end encryption that their government can backdoor but foreign governments can't, is instantly and easily solvable.
That's basically the opposite phenomenon: you know so little about how an industry operates that everything they do seems like magic to you, so you end up making absurd demands of them.
The phenomenon discussed here has engineers believing they can practice law and medicine themselves. So they're not asking lawyers to get them out of a murder caught on national television, or doctors to cure their cancer in three days. They think they can do these things themselves.
One of the most important skills a lawyer can have is being able to comprehend highly complex topics in a very short time, with minimal information, to a level of confidence where they can advise genuine subject matter experts (experts the lawyer counts on to know more about the topic than the lawyer does) about risk issues.
This is, of course, one of the most important skills anybody can have, but most people are terrible at it (whether by lack of talent or lack of practice) so our society pays lawyers to do it for them.
The proliferation of Hanlon's Razor has been one of the most damaging things to society.
People as a whole are not incompetent. Every individual (and every grouping of individuals) has goals and will take deliberate action to achieve them, but somehow a neologism has tricked people into believing this is the exception and not the norm.
There are two different questions here: one is "is the way things are currently done stupid?" (to which the answer is often "yes"). The other is "can a random outsider do better just by thinking about it?" (to which the answer is usually, though not always, "no").
It's the same principle as another comment I made a few days ago ([1]). It's not hard to identify problems that really are problems, but finding effective and feasible solutions to those problems is often far more difficult, especially if you're an outsider. The mistake isn't in identifying the problems, it's in thinking that you can come in totally blind and know how to solve them. (Or, put another way, in thinking that you as an outsider can tell the "dumb and easily fixed" problems from the "horrifying systemic nightmare" problems.)
It's because most of the time people see mostly powerless people trying to do their jobs and messing up. They don't have as much of a frame of reference for how powerful people act, especially because there is so much mystification in the media (literally owned by said powerful people). The rule you apply to your friends and co-workers isn't suitable for the maniacal supervillains running society. Of course, those guys also fuck up in bizarre and stupid ways too, so people will point that out and be like: look, they're just bad at their evil jobs!
Any real tech guy would go, "Wait a minute, if he came back from the future to sire John, wouldn't that mean that even that wasn't the first timeline? So that would imply that even without the hardware we have in the lab that someone somewhere will invent something similar, which will then in turn lead to this Skynet and so forth?"