For what it is worth, there is Muon, a third-party implementation of Meson written in C99 [1].
Haven't used it myself yet, though Muon has been in steady development over the last few years, and the developers claim that they implement the vast majority of the Meson core features.
I was going to try replacing the KeePassXC build (which is one of the more complicated CMake builds I've gone hand-to-hand with) to see what that experience would be like, but they report all the Qt flavors are unsupported: https://muon.build/releases/edge/docs/status.html
I'm aware I could just use vanilla Meson, but I was specifically interested in kicking the tires on the dbg feature in Muon, since CLion just recently added debugging for CMake and I wanted to compare and contrast.
I have been using OSQP [1] quite a bit in a project where I needed to solve many quadratic programs (QPs). When I started with the project back in early 2017, OSQP was still in its early stages. I ended up using both cvxopt and MOSEK; both were frustratingly slow.
When I picked up the project again a couple of years later (around 2019), I stumbled across OSQP again. OSQP blew both cvxopt and MOSEK out of the water in terms of speed (up to 10 times faster) and quality of the solutions (not as sensitive to bad conditioning). Plus, the C interface was quite easy to use and super easy (as far as numerics C code goes) to integrate into my larger project. I particularly liked that the C code has no external dependencies (more precisely: all external dependencies are vendored).
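To give a feel for how little boilerplate is involved, here is roughly what a small QP looks like through OSQP's Python wrapper (the C API follows the same setup/solve structure); the numbers are just a made-up toy problem, not anything from my project:

    import numpy as np
    import scipy.sparse as sparse
    import osqp

    # Toy QP: minimize 0.5*x'Px + q'x  subject to  l <= Ax <= u
    P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
    q = np.array([1.0, 1.0])
    A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
    l = np.array([1.0, 0.0, 0.0])
    u = np.array([1.0, 0.7, 0.7])

    prob = osqp.OSQP()
    prob.setup(P, q, A, l, u, verbose=False)
    result = prob.solve()
    print(result.x)  # approximately [0.3, 0.7] for this toy problem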
First of all, there is no single accepted definition of "neuromorphic" [1]. Still, as a point in favour of the "neuromorphic systems are analogue" crowd: the seminal paper by Carver Mead that (to my knowledge) coined the term "neuromorphics" specifically talks about analogue neuromorphic systems [2].
Right now, there are some research "analogue" (or, more precisely, "mixed-signal") neuromorphic systems being developed [3, 4]. It is correct, however, that there are no commercially available analogue systems that I am aware of.
Unfortunately, the same can be said for digital neuromorphics as well (Intel Loihi is perhaps the closest to a commercial product, and yes, this is an asynchronous digital neuromorphic system).
I can't comment on the "power hungry" part, but FLAC only requires, and (to my knowledge) has only ever required, integer math. Source: just looked at my own FLAC implementation [1].
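To make that concrete, here is a tiny sketch (illustrative only, not lifted from my implementation) of FLAC's order-2 fixed predictor: each sample is predicted from the previous two using nothing but integer additions and subtractions, and the encoder stores the integer residuals:

    def order2_fixed_residuals(samples):
        # Prediction for sample i is 2*x[i-1] - x[i-2]; the residual is
        # the difference between the actual sample and that prediction.
        # Everything stays in integer arithmetic.
        residuals = []
        for i in range(2, len(samples)):
            prediction = 2 * samples[i - 1] - samples[i - 2]
            residuals.append(samples[i] - prediction)
        return residuals

    # A smoothly rising signal compresses down to small residuals:
    print(order2_fixed_residuals([0, 3, 7, 12, 18, 25]))  # [1, 1, 1, 1]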
As weird as it may seem, you should not forget that free software licenses are built upon the fabric of copyright. Without copyright, free software could not exist in its current form.
For GPL-like "copyleft" licenses, there would be no way to enforce that binary distributions of derived works are accompanied by their source code. Similarly, in the context of permissive BSD/MIT-style licenses, there would be no way to enforce attribution.
So, given that FOSS---which a large portion of the HN crowd depends on---cannot work without copyright (at least not in its current form), the recent discussions may be less of a surprise.
Maybe... although I personally think that the GPL and other 'copyleft' licenses aren't the reason open source has prospered, nor do I think enforcing attribution really helps the FOSS world that much.
People write and share code because it is useful to do that, not because licenses require them to.
I think FOSS would do fine with no copyright, and in fact more software might end up open source if we had ZERO copyright... why not make your code open source and get back contributions when your code would end up being shared anyway?
IMO Linux won because its license forced everyone who used and extended it to open their changes - this changes the calculus for firms building products on Linux, making it more worthwhile to upstream their changes and reduce their local maintenance burden.
Compare this to something like the PlayStation 4, which used FreeBSD as the base of its OS and contributed nothing back to the project at all.
Linux "won" because SGI very loudly shit the bed and betting on Itanium, taking IRIX with it, DEC was in the process of making the future of Tru64 very confusing with the Alpha, a computer that didn't support their flagship OS, and HP was EOLing HP/UX because they bet everything on IBM's OS/2 Warp.
The only big UNIX vendor left was Sun Microsystems, and Solaris indeed dominated the 90s dotcom era. Everybody was running SPARC and SunOS servers.
It wasn't until the mid-2000s that Linux started picking up the pieces left behind, after Red Hat started their server product and certification program.
For a long time, Linux was strictly a hobbyist OS. It later dominated by simply being the last one standing after everyone else fell.
The true competitive threat to the Unix vendors wasn't each other, it was Microsoft.
And "for a long time" was actually a fairly short time. Linux began to approach feature-comparability fast, and ran on PCs, not $10,000 workstations (that were getting beat power-wise by PCs).
This seems a bit of a chicken and egg problem... why would the early adopters have chosen Linux, before others had been forced to contribute back their changes? The first company to adopt it wouldn't have received any benefits, only an obligation. Why pick it over BSD?
I think there were likely other factors that made it win out.
They would also get the "promise" that the system they were betting on would receive contributions from other companies, making it a safer long-term bet.
No. AT&T created Unix but was unable to market it due to a previous antitrust action. So they gave it away. (They required a license signature (I've signed that! :-)), but did not charge and were very lenient.)
The UC Berkeley Computer Systems Research Group (CSRG) was one of the recipients and went on to add many, many features, releasing the result under the BSD license.
Many people built companies like Sun around selling BSD Unix, including many alumni of UCB.
Then AT&T got out from under the consent agreement and began selling its own Unix, System V. By this time, Unix was a major player in the workstation market (a market that has largely disappeared as PCs got more powerful).
By, say, the mid-1980s, there were many, many companies selling many, many varieties of Unix, all descended from the original Unix via BSD or via BSD+System V. Most of them had some unique, valuable features (IRIX's graphics, AIX's LVM and journaling file system, etc.), and all of them had modifications to lock customers into their version. This is where the POSIX standards come from (second only to ECMAScript in market-manipulation goofiness), and why things like Autoconf/Automake and the much-loved imake (not really) exist. There was much in-fighting; Sun vs. everybody else, everybody else vs. IBM, etc.
Then two things happened: PCs got more powerful and began eating into the bottom of the workstation market (PCs ran DOS+Windows, which was, and arguably still is, technically inferior to multi-user-by-design systems like Unix[1]). And PCs got more powerful and began to be able to run more advanced OSs (think "memory management").
At this point, the Unix world began to conflict with the Windows world. Unix was technically superior, Windows had more public and developer mind-share. But the Unix world was still more interested in fighting each other and stapled all of their arms and legs to that particular tree.
The end result was that Windows became and remains the most-used operating system[2]. All (almost) of the commercial Unixes died (almost; there are still some animated corpses around)[3]. The two counter-examples are MacOS, which is completely locked to Apple hardware, and Linux.
Linux is the interesting case. Windows and commercial Unix all had a 15- to 20-year head start. But Linux achieved (mostly) feature parity quickly and did not break down into multiple, competing streams. Both of those are due to the GPL; you can fork GPL software all you want, but you cannot add a feature to a fork and expect it not to be back-ported into the original if it's useful. You also have a very hard time locking users into your fork.
The bottom line is that Microsoft won the Unix wars, because the Unix licenses allowed companies to take Unix proprietary.
[1] Modern Windows is kinda-sorta based on VMS, another workstation OS, but not really and then they walked that back, and so on....
[2] I don't really consider Android or iOS to be general-purpose OSs. And they're both rather their own little islands, no matter how much the underlying tech shares with the rest of the universe.
[3] The Free/Net/Open/DragonFly BSDs are, I'm sorry to say, noise. And did you notice that I had to mention four of them?
There were other open source licenses at the party before the GPL dropped its controversial "viral" turd in the punchbowl - and many of them still exist nearly unmodified. (e.g. BSD with attribution removed, etc.)
That just isn't how things work. I believe you are making the wrong assumption that the same amount of open source work would exist anyway. If that were the case, then yes, the license wouldn't matter that much. But a lot of contributions to open source wouldn't have existed if it weren't for the licenses. That's what the examples you've been given show.
There might be some contributions that wouldn't have happened, but there might also have been others that would have. My hypothetical was a world with no copyright or licenses at all... so all proprietary code would be copyable, too.
Proprietary code would by definition not be copyable, because it would be... proprietary. It's the opposite of open source. The thing that incentivises open-sourcing proprietary source code is exactly things like licenses... Your imagined scenario is just "no license for open source, and free rein for closed source". It makes no sense to me.
If no code were copyrightable or licensable, people could reverse engineer any code they got access to. It would be hard to keep code completely proprietary; the only way would be to not distribute your code at all.
I am not saying this is the way we want to go, I am just curious about the thought experiment.
Ok, if it's for a thought experiment then I'll play along. Reverse engineering compiled code back into source is not as easy as you make it out to be. Not only that, but black-box reverse engineering is not a violation of most existing licenses anyway. And if we're talking about decompiling code into something useful, that too is a tall order.
This is just one example of the "analogue hole" [1] problem shared by all anti-cheat/DRM systems.
At least in theory, there is no technology that can prevent exploits like this short of dystopian levels of surveillance and locking down computing devices even further.
By that I mean encrypted communication on all computer buses (including USB and HDMI), and only allowing access to those buses via physically hardened "secure" enclaves, up to (in the end game) big-brother-like surveillance (think electronic proctoring solutions).
I think that this is exactly the problem with such DRM schemes---the ensuing cat-and-mouse game will inevitably lead to trampling the user's freedoms, because locking down computing devices and environments to ridiculous levels is the only way in which DRM can be made to work.
Of course, for now, cheats like the one featured in the article should be fairly easy to detect (at least from what I've seen in the linked video).
The motion of the bot is extremely jerky; a simple rule-based system, or, if you want to be fancy, a neural-network-based anomaly detection system, should be able to detect this.
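To sketch what I mean by a rule-based system (the heuristic and its thresholds are made up for illustration; real per-frame input data would obviously need tuning):

    def looks_like_snap(deltas, jump_factor=40.0, settle_frames=2):
        # deltas: list of per-frame (dx, dy) mouse movements.
        # Flag a frame where the cursor speed explodes relative to the
        # previous frame and then collapses again within a couple of
        # frames; a human flick ramps up and down more gradually.
        speeds = [(dx * dx + dy * dy) ** 0.5 for dx, dy in deltas]
        for i in range(1, len(speeds) - settle_frames):
            burst = speeds[i] > jump_factor * max(speeds[i - 1], 1.0)
            settled = all(s < speeds[i] / jump_factor
                          for s in speeds[i + 1:i + 1 + settle_frames])
            if burst and settled:
                return True
        return False

    human = [(1, 0), (4, 1), (9, 2), (14, 3), (9, 2), (4, 1), (1, 0)]
    bot   = [(0, 0), (1, 0), (230, 40), (0, 0), (0, 0), (1, 0), (0, 0)]
    print(looks_like_snap(human), looks_like_snap(bot))  # False True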
On the side of the cheat authors, this could be easily circumvented if they include a "calibration phase", where user input trains a simple neural network to stochastically emulate the dynamics of the user's sensor-action loop. The cheat could then act slightly faster than the user, giving them an edge while still using their unique dynamics profile.
I wonder where this will lead eventually, and I genuinely feel sorry for all the people who pour their heart and soul into competitive gaming; I don't think that this kind of cheating is something that can and should (see above) be prevented in the long-run.
The best possible outcome I can imagine is that online gaming becomes more cooperative, or once more converges to small groups of people who know and trust each other.
The solution is really simple - make all competitive gaming events LANs with standardized hardware that is not touched by players before the event starts.
For regular online gaming, you can train a neural net to detect cheats like this, biased by the player's score. If the cheat is introducing enough error for the player to be killable, it's not ruining the experience for the rest of the players.
>By that I mean encrypted communication on all computer buses (including USB, HDMI)
That only delays things since in the end you still need a human being to be able to play. So you can have a camera looking at the screen and a mouse/keyboard with some wires soldered to the key points.
Indeed. Or a robot arm moving the mouse. The analog hole will always exist. However, it may prove hard to make a computer move the mouse like a human and type like a human. Heuristics will likely be able to separate human from bot input for quite a while still.
The game makers probably enjoy a large advantage in size-of-dataset vs cheat makers.
>will inevitably lead to trampling the user's freedoms
People keep saying this, but it already happened 20 years ago. This reminds me of shit like the postal service requiring photo ID to receive a package, and people complaining about the NSA years later. Now you need a phone to play a game, and some of the most popular games need literal photo ID checks. Imagine sending your photo ID, which if stolen can be used to steal your money, to a bunch of new grads running a game studio.

This is what people (kids and manchildren) accept to address the overstated problem of game cheating. I have played thousands of hours of games over 20 years, and the number of cheaters I ran into is around 10 or 20. Most players (including the ones who complain about "cheaters") do not even have a clue what a game cheat is. They think some guy has some cheat that only works in a weird scenario that happens in 1 out of 100 games. Can you guys stop making me need a photo ID to play some stupid game? This is no different from every obnoxious statist concern that gets addressed by some charlatan who purports to be saving the world by ruining everyone's day (almost any time I install or configure a game my day is ruined; imagine a typical dependency hell, but 10x worse). And no, I haven't run into so few cheaters because of "sophisticated anticheat" (stuff like PunkBuster is extremely incompetent); it's because public hacks simply get blacklisted once they become big enough to matter.
On a much simpler level, back in the day I wrote bots for Flash/browser games basically by detecting specific pixels on the screen and acting on what I found. Sounds stupid, but with simple games this worked very well.
I never got 'detected'; how would they even do that? And some of my bots easily had more skill than I ever did.
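The core of one of those bots boiled down to something like the following (reconstructed from memory, so the libraries, coordinates and colour are stand-ins for illustration, not the original code):

    import time

    import pyautogui           # sends the synthetic mouse clicks
    from PIL import ImageGrab  # grabs screenshots

    # Made-up example: watch one pixel and click when it turns "ready" green.
    WATCH_X, WATCH_Y = 640, 400
    READY_COLOR = (64, 200, 64)
    TOLERANCE = 20

    def close_enough(a, b, tol):
        return all(abs(x - y) <= tol for x, y in zip(a, b))

    while True:
        pixel = ImageGrab.grab().getpixel((WATCH_X, WATCH_Y))[:3]
        if close_enough(pixel, READY_COLOR, TOLERANCE):
            pyautogui.click(WATCH_X, WATCH_Y)
        time.sleep(0.05)  # ~20 checks per second is plenty for a browser game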
I agree that this is a much better option. Also, if you use two OpenWrt devices, you can enable WDS mode to build a true layer 2 bridge. That is, you won't need Proxy ARP and DHCP relay. For example, DHCP and IPv6 will just work out of the box.
Edit: From what I can tell, support for WDS depends on the WiFi chipset. The output of "iw list" must explicitly include "WDS" as a "supported interface mode". At least the Broadcom chipset on the Raspberry Pi Zero does not support this, but the Atheros chipsets used in a variety of routers, for example, do.
Building something like this was kind of the idea of Scholarpedia [1], founded by Eugene M. Izhikevich (theoretical neuroscientist). The articles are reviews written by experts in the field, peer-reviewed, and supposed to be updated over time. Most articles happen to be in neuroscience and related fields, but that was more an accident and not by design.
Unfortunately, the project has never really taken off, and only a few new articles have been added over the past few years. And of course, just as I am writing this comment, I realize that [2] now redirects to some domain squatter and is blacklisted by my DNS server...
Well, as at least one other commenter in this thread already pointed out, this is possible with WDS (Wireless Distribution System). However, this needs to be supported by the access points. If it is supported (for example on APs running OpenWrt), it is literally just a matter of enabling WDS on both the AP and the client (station) side, and bridging the wireless interfaces with the ethernet interfaces.
I've been using this setup in my home network for years now (with a dedicated OpenWrt device for each wired "island"), and it works great.
Edit: To clarify, yes, this establishes a single broadcast domain. For example, DHCP and ARP requests are propagated through the entire network.
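For reference, the relevant part of /etc/config/wireless on the two OpenWrt devices looks roughly like this (simplified and from memory, so treat it as a sketch; the SSID, key and radio names are placeholders):

    # AP side (the island with the "main" wired network)
    config wifi-iface 'wds_ap'
            option device 'radio0'
            option mode 'ap'
            option wds '1'
            option ssid 'backhaul'
            option encryption 'psk2'
            option key 'changeme'
            option network 'lan'

    # Client side (the remote wired island)
    config wifi-iface 'wds_sta'
            option device 'radio0'
            option mode 'sta'
            option wds '1'
            option ssid 'backhaul'
            option encryption 'psk2'
            option key 'changeme'
            option network 'lan'

Assigning both wireless interfaces to the 'lan' network puts them into the br-lan bridge alongside the ethernet ports, which is what gives you the single layer-2 segment.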
You are certainly not wrong. In this specific case, however, I'd like to point out that the author, leonerd, has been busily working on "making stuff" to this end for years now.
They are the lead author of libvterm, a popular modular terminal emulator library that is used, for example, in neovim and emacs-libvterm.
They have also been working on libtermkey, a library that accepts input from the devised keybinding system, now part of libtickit mentioned in the post.
[1] https://sr.ht/~lattis/muon/