>> People created AbstractFactoryFactoryBuilders not because they wanted to,
I don't think this is accurate. People created factories like this because they were limited by the interface bounds of the languages they were coding in, and had to swap out behaviour at run time or compile time for testing or configuration purposes.
I find that these are all still pretty bad with more advanced code, especially once FFI comes into play. Small chunks are OK, but even when working from a specification (think some ISO standard for video) on something simple (e.g. a small GStreamer Rust plugin), it is still not quite there.
C(++) same story.
All round however, 10 years ago I would have taken this assistance!
Great post. I posted in this thread above about using a Lomi to convert our organic waste into organic fertilizer (along with a worm farm), and cultivating nitrogen-fixing bacteria with our outdoor fish pond and a flood-and-drain system.
Soil is great to grow in, if you treat it well.
I will say that my only problem with Simard is that she anthropomorphizes the fungi and the behaviors she documented could just as easily be explained by osmotic pressure. Chemicals in a solution of water have “fairness” built into them. The broker doesn’t need to have a strategy for exchange, just siphon off a finder’s fee for making the introductions. The magic is low friction channels that can move solutions over a long (for a single celled organism) distance. That’s magic enough for any kingdom of life.
Sugary sap? Water will enter and sugars flow out. High nitrogen content? Same same.
We use a Lomi to convert our organic waste into compost I can add to my worm farm, and then mix the worm-farm output with outdoor soil and grow in that. I automated a flood-and-drain system with our fish, cultivate nitrogen-fixing bacteria with it, and water the plants with this water every couple of days.
Using these two approaches I have not had to buy any nutrients in years and our soil is doing well.
Why get an electric-powered gadget made of plastic and proprietary soft/hardware that will 100% for sure end up in a dump in less than 20 years, when all you need is a good ol' compost bin?
Lomi doesn't really "compost" your scraps, it dehydrates and grinds them. The actual compost activity happens (on an accelerated timeline, due to the pre-processing) in the soil you amend with your Lomi "compost." It's good marketing though
Space is the main reason. We live in the city and the amount of organic waste we (family of 4 + numerous pets) produce is staggering. Additionally, meat attracts animals. Thus I run an overnight cycle and can add the output to my compost heap to let it degrade further, with no issues with rodents and other animals that dig through food waste. After using this device, I would in future always grind/break up my organic waste as finely as possible, just to save space.
I was originally pretty skeptical of the Lomi as well after seeing this very same video. But my friend got us one and we have been using it for a while now. Sure, it has the same parts as a breadmaker, and it mostly just dries out and cuts down the organic material into more useful sizes, exactly like he says, but when you put in the enzymes and have it run its dirt cycle, it does actually produce meaningfully good compost, with a much lower footprint than a garden composting setup. I'm not sure I'd pay to buy one new, but it's not a scam.
Just remember that any positive effect you might achieve by a lifetime of composting is grossly negated by the production, usage, and inevitable trip to the landfill of this thing. Startups like these are part of the problem, not the solution.
Just throw out your scraps and buy compost then; it'll be cheaper and easier. The city already transforms bio waste into gas and compost anyway, and much more efficiently than you can at home, given the scale.
This is another "I'm doing my part" gimmick that solves literally nothing when you look under the hood
Our city has no bio-waste collection. We make all our own dried fruit and eat mostly fresh from the market (so little to no plastic for our veggies), but produce an immense amount of organic waste.
It allows us to get all the organic and bioplastic waste of a big family with pets, including most bones once we have cooked stock from them, into a compost heap in the city.
We tried composting before, but the volume of organic waste we produced was too much. Having to dispose of a lot of our waste in the general trash (no organic-waste collection runs in our neighborhood) meant animals ripped our curbside bags open.
I am not a degrowther to save the planet either, so a company putting compostable products in place of plastic ones seems like good economic activity.
I think you misunderstand that case
"compiled its scores and statistics by employing people to listen or watch the games, then enter the scores on the computer which transmits the scores to STATS' on-line service, to be sent out to anyone using a SportsTrax pager.[1]"
Notice how they watched the games and compiled the statistics themselves. The restrictions are on using the scoreboard and the data displays and reselling/commercialising that data.
It is, however, legal to watch the game and compile and distribute your own stats, because the facts of the game are in the public domain.
Because of this, many betting and data-collection companies have to pay people to watch the game rather than just scraping the scoreboard (which is the context in which I learnt about this). Ironically, at-venue OCR is a common way to get scoreboard data.
I'm not a lawyer, but my interpretation of the lawsuit based on the Wikipedia article is that game results/scores are public facts and hence not copyrightable data. I don't see how the method by which that public data is collected changes anything materially about that case. Are you saying that inferring the score based on the scoreboard is what makes this illegal (why?)? What if they would infer the score using motion/ball tracking instead?
If you use CV to track the players, the ball, etc. from a broadcast, it is fine; the scoreboard, however, is not so straightforward. FWIW, doing CV from broadcast for accurate scoring of sports is well-nigh impossible due to edge cases, but human-in-the-loop systems exist. There are also numerous in-venue CV systems which automatically collect game and player information.
I don't think it's possible to be in compliance with every law in every jurisdiction simultaneously. There are over 300,000 federal laws in the US, and apparently no one knows how many laws each of the 50 states has. And that's just 1 of the world's 195 countries.
Things like camera intrinsics and extrinsics are not fixed. 1000 bytes seems small to me given the amount of processing modern cameras do to create a raw image. I could easily imagine storing more information, like the focus point and other potential focus points with weights, as part of the image for easier on-device editing by the user.
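As a rough back-of-envelope in Rust: even a hypothetical metadata layout (the struct and field choices below are illustrative, not any real camera's format) eats a good chunk of a 1000-byte budget before you get to lens distortion or per-frame statistics:

```rust
// Hypothetical metadata layout -- illustrative only, not a vendor format.
#[derive(Clone, Copy)]
struct FocusPoint {
    x: f32,      // normalized image coordinates
    y: f32,
    weight: f32, // candidate confidence
}

struct CameraMeta {
    intrinsics: [[f32; 3]; 3],      // 3x3 K matrix: focal lengths, principal point, skew
    extrinsics: [[f32; 4]; 3],      // 3x4 [R|t] pose at capture time
    focus_points: [FocusPoint; 16], // chosen focus point plus weighted candidates
}

fn main() {
    // 36 + 48 + 16*12 = 276 bytes of pure f32 payload (no padding needed),
    // already more than a quarter of a 1000-byte budget.
    println!("{}", std::mem::size_of::<CameraMeta>());
}
```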
As an Akamai user, I already serve all my DASH traffic (video) over HTTP/3. Akamai itself supports only HTTP/1.1 for return to origin; LL-HLS forces me to use HTTP/2.
The problem here really is Akamai only supporting HTTP/1.1 to the origin.
Cloudflare, I think, only supports HTTP/2 to origin.
Does Fastly support QUIC to origin yet? Does CloudFront? I could only find information about it supporting QUIC on the last mile.
Maybe more CDN support will drive web server support.
I maintain auto-generated Rust and Zig bindings for my C libraries (along with Odin-, Nim-, C3-, D- and Jai-bindings), and it's a difference like night and day (with Zig being near-perfect and Rust being near-worst-case - at least among the listed languages).
> Do you find zig easier than the ffi interface in Rust?
Yes, but it's mostly cultural.
Rust folks have a nasty habit of trying to "Rust-ify" bindings. And then proceed to only do the easy 80% of the job. So now you wind up debugging an incomplete set of bindings with strange abstractions and the wrapped library.
Zig folks suck in the header file and deal with the library as-is. That's less pretty, but it's also less complicated.
I've somehow avoided Rust, so I can only comment on what I see in the documentation.
In Zig, you can just import a C header. And as long as you have configured the source location in your `build.zig` file, off you go. Zig automatically generates bindings for you. Import the header and start coding.
This is all thanks to Zig's `translate-c` utility that is used under the hood.
Rust by contrast has a lot more steps required, including hand writing the function bindings.
You only hand-write function bindings in simple or well-constrained cases.
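For illustration, in the well-constrained case a hand-written binding can be as small as declaring the C signature yourself; a minimal sketch against `abs` from the C standard library:

```rust
use std::os::raw::c_int;

// Hand-written binding: we declare the C function's signature ourselves.
// The compiler trusts this declaration to match the C side exactly.
extern "C" {
    fn abs(n: c_int) -> c_int;
}

fn main() {
    // Every call through an FFI binding is unsafe: a wrong signature
    // above would be undefined behaviour, not a compile error.
    let a = unsafe { abs(-5) };
    println!("{}", a);
}
```

This is fine for a handful of stable functions; it stops scaling as soon as the header has many functions, structs, or macros, which is where bindgen comes in.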
In general, the expectation is that you will use bindgen [0].
It's a very easy process:
1. Create a `build.rs` file in your Rust project, which defines pre-build actions. Use it to call bindgen on whatever headers you want to import, and optionally to define library linkage. This file is very simple and mainly boilerplate. [1]
2. Import your bindgen-generated Rust module... just use it. [2]
You can also skip step 1: bindgen is also a CLI tool, so if your C target is stable, you can just run bindgen once to generate the Rust interface module and move that right into your crate.
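A minimal sketch of the `build.rs` from step 1, assuming the `bindgen` crate is listed as a build-dependency; `wrapper.h` and `mylib` are placeholder names for whatever you are wrapping:

```rust
// build.rs -- runs before compilation. `wrapper.h` and `mylib` are
// placeholders; substitute the header and library you actually wrap.
fn main() {
    // Re-run this script whenever the header changes.
    println!("cargo:rerun-if-changed=wrapper.h");
    // Optionally tell rustc to link the wrapped C library.
    println!("cargo:rustc-link-lib=mylib");

    // Generate Rust declarations for everything the header exposes.
    let bindings = bindgen::Builder::default()
        .header("wrapper.h")
        .generate()
        .expect("unable to generate bindings");

    // Write them into OUT_DIR, where the crate can include! them.
    let out = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out.join("bindings.rs"))
        .expect("couldn't write bindings");
}
```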
Exactly.
GPUs have become too profitable and too strategically important not to see several deep-pocketed existing technology companies invest more and try to acquire market share.
There is a mini-moat here with CUDA and existing work, but the start of commoditization must be on the <10-year horizon.