I'm not sure what the current state of the art is, but for the longest time it was pretty common for USB peripheral ICs to have small flash devices attached to them to store the VID/PID and other USB config information, so that the device enumerates correctly when it's plugged in and can be associated with the correct driver, etc. And depending on when the device was designed, 512kB might have been the smallest size that was readily available via the supply chain. It would not have been strange to use a part like that to store tens of bytes!
The ISO thing is a little bit weird, but to be honest it's a creative way to try to evade corporate IT security policies restricting mass storage USB devices. I think optical drives use a different device class that probably evades most restrictions, so if you enumerate as a complex device that's a combo optical drive/network adapter, you might be able to install your own driver even on computers where "USB drives" have been locked out!
When the system was designed, the way to get a CD to an end user was to spend on the order of ten thousand dollars to get discs mastered and pressed, and then convince physical stores to sell them for you. As well as being a lot of effort, there'd be a clear paper trail. You couldn't just burn one and leave it in a parking lot.
Back in the bad old days of version control (thinking of VSS here), I was overall pretty satisfied with how the check-in/check-out mechanics worked for Word docs and the like. You got the benefit of a sequential workflow, actually enforced (or at least hinted at) by the tool itself, while also getting rid of the recurring weaknesses of email-based document storage. There were plenty of other things to dislike about VSS (pretty much everything else about it), but it wasn't so bad for maintaining documents.
I agree with you both. I'd be worried if Vanguard suddenly became like Robinhood.
There's a new version of the Vanguard app on the App Store called "Beacon" that includes a very useful visual overhaul (account balances & performance are front and center) but keeps that wonderful Vanguard focus on, well, staying the course and not much else.
The StateMover concept sounds pretty interesting and is almost the reverse of the integrated-logic-analyzer approach that the major vendors have adopted in their tooling. I assume that in simulation land your debug environment is based on timing simulation, which, unless they've "fixed" the net name mangling, is not exactly pain-free in its own right.
I don't think long lines scaled particularly well as LUT counts and clock rates went up. All the black magic voodoo that goes into matching prop delay for resources like that tends to be applied to clock distribution instead. At least that's how it was a few years ago.
Something that I think is missing in the discussion here with regard to payment methods is: know your market segmentation. If you are targeting B2B (i.e. large businesses), there are going to be a lot of circumstances where credit card payments are a non-starter.
From personal (F500) experience, I know that I am going to have to move mountains in order for purchasing to accept a commercial arrangement with monthly credit card payments, which means I will usually move on to a competitive solution if one exists. In fact, one of the first questions I usually ask a vendor is "do you sell through (preferred reseller already listed as an approved vendor in our purchasing system)" as I know this is going to make my job of getting the purchase approved 100x easier.
So in conclusion: know your market segmentation and your potential customers' expectations for how they will do business with you.
This is true and is the reason I break my own rule in this case - because of large companies that are only willing to pay through an annual PO/invoice process. None of our customers are quite big enough to require the use of a preferred reseller, but I've heard of that arrangement as well.
Am I reading this right that the standard remote ID broadcast is specified as "something in an unlicensed band; everything else about it, you figure out yourself"? Isn't the point of this to be interoperable with other receiver systems for things like BVLOS operation? Seems like a funny place to throw in a shoulder shrug.
> With regard to direct broadcast capabilities, the ARC recommended the FAA adopt an industry standard for data transmission, which may need to be created, to ensure unmanned aircraft equipment and public safety receivers are interoperable, as public safety officials may not be able to equip with receivers for all possible direct broadcast technologies.
So the final rule will probably name a specific standard, but it’s TBD for now.
I know that this doesn't directly address the question you're asking, but to give an idea of the order of magnitude of the effect: the two-way Doppler shift in frequency works out to roughly f_carrier * 2v/c. For anything moving at a "reasonable" speed, 2v/c is going to be very small (on the order of 10^-6 for Mach 1), and thus you would be talking about very minute differences between the transmitted and received pulse in terms of either overall pulse length or number of wavefronts received vs. sent (for what it's worth, my intuition is that the pulse length actually shortens, but either way it's not measurable by the receiver).
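To make that concrete, here's a minimal Python sketch of the arithmetic; the 10 GHz carrier and 1 us pulse width are arbitrary values I picked for illustration, not anything from the discussion above:

    # Rough numbers for the two-way Doppler effect described above.
    # Assumed values (not from the discussion): 10 GHz carrier, 1 us pulse,
    # target closing at roughly Mach 1.
    C = 3.0e8            # speed of light, m/s (rounded)
    V = 343.0            # closing speed, m/s (~Mach 1 at sea level)
    F_CARRIER = 10e9     # assumed carrier frequency, Hz
    PULSE_WIDTH = 1e-6   # assumed transmitted pulse width, s

    ratio = 2 * V / C                           # fractional two-way shift, ~2.3e-6
    doppler_shift = F_CARRIER * ratio           # Hz
    rx_pulse_width = PULSE_WIDTH * (1 - ratio)  # a closing target compresses the pulse

    print(f"2v/c             ~ {ratio:.2e}")
    print(f"Doppler shift    ~ {doppler_shift / 1e3:.1f} kHz")
    print(f"Pulse shortening ~ {(PULSE_WIDTH - rx_pulse_width) * 1e12:.1f} ps")

With these assumed numbers the received pulse is shorter by only a couple of picoseconds out of a microsecond, which lines up with the point above that the change in pulse length isn't something the receiver can measure directly.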
As an aside: it looks like the timing diagrams in this article were created with a tool called WaveDrom. I've used this tool in the past and been impressed with what it's able to do in terms of creating nice timing diagrams for digital design documentation, a critical part of communicating how these designs (and interfaces!) are supposed to work.