Exactly what I came to comment. Same thing here: they seemed to hyperfixate on the extension pack, which most VBox users would need. We had to get everyone off it as soon as possible.
Of course this isn't limited to VBox; it applies to their database as well. Just avoid.
Just noting that USB 2.0/3.0 support no longer requires the extension pack, and the list of features [1] that require the extension pack has been gradually decreasing. It may not be as necessary as it once was.
If your organization uses any Oracle software then I'm certain that the organization has agreed to let Oracle audit it for license compliance at any time.
Stock Debian is 5 years with their LTS project, but they have a paid "ELTS" project that adds an additional 5 years. So 5 years for free, and 10 years total as a paid support option.
https://wiki.debian.org/DebianReleases
This is just saying that their right to drill MORE wells was revoked; presumably they already have wells that will continue to pump water out of the ground.
> reported that the new wells would have pumped up to 3,000 gallons (11,000 liters) of water per minute.
> At Lake Itasca, the average flow rate is 6 cubic feet per second. At Upper St. Anthony Falls in Minneapolis, the northern most Lock and Dam, the average flow rate is 12,000 cubic feet per second or 89,869 gallons per second. At New Orleans, the average flow rate is 600,000 cubic feet per second.
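For scale: 3,000 gallons per minute is 50 gallons per second, which works out to roughly 0.06% of the average flow at Upper St. Anthony Falls.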
I switched for similar reasons, but went from Ubuntu to Fedora a couple of years ago. I like Debian as well, but it's a bit slower to release at every 2 years unless you stay on testing or backports. Fedora releases are every 6 months, and the kernel seems to update faster than that. I've been using Docker and Toolbx for things where I need vendor tools that are somewhat distro-specific. Even if you use Debian, I'd recommend Toolbx for developers.
This post inspired me a few years ago to start a very impractical learning side project. Most keyboards don't particularly prioritize latency: developing a keyboard is easy using the manufacturer's USB stack, but that stack might not be optimized for latency.
I'm working on making my own FPGA-based USB 2.0 HID device that will let me minimize any latency in the stack. PCB layout is mostly done, and I'm working on a DECA FPGA board to prove out the USB HID software now. I started this pre-COVID, though, when MachXO2s were inexpensive and available, so I have no idea who I will need to fight these days to get parts when I reach that point.
There is also lots of wiggle room in how you implement your debounce algorithm to optimize latency. I'm excited to control the whole stack to try to make this as fast as possible. The Logitech Lightspeed products came out after I started this project, though, and are far more practical for most people. I have one of those at home and will try to benchmark and compare them when I get there.
I have written USB device firmware for AVR, and read docs for a bunch of microcontroller families with different internal HID device units that I wanted to port to.
If you do take suggestions for your next version, there are some hardware features that I would like:
* Let the MCU atomically replace the next unsent input report. Many USB devices can only queue reports, not replace them.
* Allow the microcontroller to know when an input report has been received (ACK).
The first would make it possible to get the lowest latency with reports, such as keyboard reports, that contain only the state of momentary switches.
The second would make it possible to ensure that an event has been received, even if the host polls at a slower rate than the firmware runs at.
The situation is especially complex for mice, where the reports contain inputs that are relative to the previous report.
I'm not familiar with that definition; I typically see debouncing used to mean any method of filtering out the state changes caused by the mechanical action as the contact settles.
I've seen simple BSP debounce example code that adds latency to both press and release. For example, you can require that the IO hasn't changed for X ms before accepting it as settled and reporting the event up, which incurs latency on both press and release. In fact, the first answer I see on Google does this:
https://www.beningo.com/7-steps-to-create-a-reusable-debounc...
You could report the event right away when there is an activation, and just not allow a deactivation event to be reported until the debounce time has expired. I suspect this is what you mean by debounce only applying on deactivation, but I'll bet some of the keyboards tested on that list are not doing this.
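In C, that eager approach might look something like this (millis(), report_key_event(), and the 5 ms lockout are placeholders, not from any particular firmware; here the lockout applies after any accepted edge):

    /* "Report immediately, then lock out" debouncing.  millis() and
     * report_key_event() are assumed hooks from the firmware; the 5 ms
     * lockout is arbitrary.  Tick wraparound is ignored for brevity. */
    #include <stdbool.h>
    #include <stdint.h>

    #define DEBOUNCE_MS 5

    extern uint32_t millis(void);                 /* assumed tick source */
    extern void report_key_event(bool pressed);   /* assumed HID hook */

    void scan_key(bool raw_level)
    {
        static bool stable_state = false;
        static uint32_t locked_until = 0;

        if (millis() < locked_until)
            return;                                /* still settling: ignore chatter */

        if (raw_level != stable_state) {
            stable_state = raw_level;
            report_key_event(stable_state);        /* fire on the first edge */
            locked_until = millis() + DEBOUNCE_MS; /* then ignore the bounce */
        }
    }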
I mean, sure, if you're looking for a general-purpose way of doing things then that example is fine. If you have a normally-open switch and a latency-sensitive application then there's one pretty clear implementation.
I've seen implementations where the CPU gets the pulse, waits for the debounce interval, checks whether the pulse is still present, and only then sends the "button is on" signal, which is obviously terrible for latency.
Proper debounce sends the signal immediately and then ignores the state for a few milliseconds.
> Debouncing is about preventing inappropriate deactivation, and is unrelated to time to initial activation.
It's both. Contacts generate noise on both press and release.
I thought it was about preventing inappropriate reactivation? As in, the key bouncing back and forth slightly when you press it, thereby registering two (or more) activations per press.
I suppose we should really talk about changes in state rather than activation/deactivation as it's the same problem. But the basic point is that detecting an edge on an "armed" switch is all that's necessary to fire the related event and confirm the state change definitively.
The debouncing logic is about how we determine when to re-arm the switch for the next transition, i.e. how to reject the "false" reversions to the prior state. So it shouldn't have any impact on the "physical action to key-down event" latency in systems with a reasonable steady state. I guess it does for cases where "pound on the same key again and again as quickly as possible" is in scope?
PTP gives <1 µs synchronization. From my testing, NTP is ~20-60 µs after about 10 minutes of sync, but it intentionally drifts the phase around. On average, NTP is pretty close.
If you look at the White Rabbit FPGA PTP updates, it's in the ns range.
Any kind of GPS plus most Intel NICs will get you PTP with an accurate clock. If you don't need to sync too many devices, you could use a single system with a bunch of NICs as your "switch".
This post didn't sound right to me, but then I realized that my raspi4 GPS NTP server has been running ntp and not chrony. Chrony is better at modeling non-deterministic timing behavior, so I swapped to that.
It's been ten minutes now and chronyc tracking has been marching the offset down. It's sub-1 µs at this point.
    System time : 0.000000123 seconds fast of NTP time
    Last offset : +0.000000366 seconds
How do you get this precise a time out of a non-deterministic OS? Beats me. Once I figure that out I can finish my clock project.
My best lead is to step through the different Python timing and scheduler implementations and see which has the lowest jitter relative to the PPS on an oscilloscope.
Assuming you're using a PPS signal and a kernel driver, presumably there's an interrupt handler or perhaps a capture timer peripheral that is capturing a hardware timer when the PPS edge occurs. It doesn't matter too much when the userspace code gets around to adjusting the hardware timer as long as it can compute the difference between when the PPS edge came in and when it should have come in. The Linux API for fine tuning the system time works in deltas rather than absolute timestamps, so it is once again fairly immune to userspace scheduling jitter.
Even good hardware oscillators can have a fair amount of drift, say 50 µs per second (50 ppm), but they tend to be stable over several minutes outside of extreme thermal environments. Therefore, it's pretty easy to estimate and compensate for drift using a PPS signal as a reference. Presumably, that compensation is partially what takes a while for the time daemon to converge on.
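As a rough sketch of that estimate (the latched tick counts and names here are assumed, not tied to any particular driver):

    /* Drift estimate from hardware-timer counts latched at two
     * consecutive PPS edges.  Names are illustrative. */
    #include <stdint.h>

    double drift_ppm(uint64_t ticks_at_prev_pps,
                     uint64_t ticks_at_this_pps,
                     double nominal_hz)
    {
        /* The edges are 1 s apart, so the tick delta is the oscillator's
         * measured frequency in Hz. */
        double measured_hz = (double)(ticks_at_this_pps - ticks_at_prev_pps);
        return (measured_hz - nominal_hz) / nominal_hz * 1e6;   /* ppm */
    }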
Additionally, the clock sync daemon likely takes a while to converge because it isn't directly controlling the system time. Rather, it is sending hints to the kernel for it to adjust the time. The kernel decides how best to do that, and it does it in a way that attempts to avoid breaking other userspace programs that are running. For example, it tries to keep system time monotonically increasing. This means that there's relatively low gain in the feedback loop, and so it takes a while to cancel out error.
It's possible for a userspace program to instead explicitly set system time, but that really isn't intended to be used in Linux unless time is more than 0.5 seconds off. The API call to do that is inherently vulnerable to userspace scheduling jitter, but it's fine since 0.5 seconds is orders of magnitude longer than the expected jitter. You get the system time within the ballpark, and then incrementally adjust it until it's perfect.
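A minimal sketch of that split, assuming offset_sec is the measured number of seconds that need to be added to the system clock (adjtimex(2) slews small corrections as a delta, clock_settime(2) steps large ones; error handling and capability checks omitted):

    /* offset_sec: seconds to add to the system clock (negative means the
     * clock is fast).  Needs CAP_SYS_TIME. */
    #include <math.h>
    #include <time.h>
    #include <sys/timex.h>

    void correct_clock(double offset_sec)
    {
        if (fabs(offset_sec) < 0.5) {
            /* Small error: hand the kernel a delta to slew.  Scheduling
             * jitter between the measurement and this call barely matters,
             * since we pass a difference, not an absolute timestamp. */
            struct timex tx = {0};
            tx.modes  = ADJ_OFFSET_SINGLESHOT;
            tx.offset = (long)(offset_sec * 1e6);   /* microseconds */
            adjtimex(&tx);
        } else {
            /* Large error: step the clock with an absolute time.  The
             * write is exposed to scheduling jitter, but that's noise
             * next to a >0.5 s correction. */
            struct timespec now;
            clock_gettime(CLOCK_REALTIME, &now);
            double t = now.tv_sec + now.tv_nsec / 1e9 + offset_sec;
            struct timespec set = { (time_t)t, (long)((t - (time_t)t) * 1e9) };
            clock_settime(CLOCK_REALTIME, &set);
        }
    }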
If you're not using a kernel driver to capture the PPS edge's timestamp, then you're going to have a rougher time. Either you're just going to have to accept the fact that you can't do better than the scheduling jitter (other than assume it averages out), or you're going to have to do something clever/terrible. One idea would be to have your userspace process go to sleep until, say, 1ms before you expect the next PPS edge to come in. Then, go into a tight polling loop until the edge occurs. As long as reading the PPS pin from userspace is non-blocking and your process doesn't get preempted, you should be able to get at least within microseconds. You can poll system time in the same tight loop, allowing you to fairly reliably detect whether the process got preempted or not.
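Something like this, assuming a hypothetical non-blocking read_pps_pin() and PPS edges that land on whole seconds:

    /* Sleep until just before the expected edge, then busy-poll.
     * read_pps_pin() is a placeholder for a platform GPIO read. */
    #include <time.h>

    extern int read_pps_pin(void);   /* assumed: returns 0 or 1, non-blocking */

    struct timespec wait_for_pps_edge(void)
    {
        struct timespec now, wake, edge;
        clock_gettime(CLOCK_REALTIME, &now);

        /* Wake ~1 ms before the next whole second. */
        wake.tv_sec  = (now.tv_nsec < 999000000L) ? now.tv_sec : now.tv_sec + 1;
        wake.tv_nsec = 999000000L;
        clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &wake, NULL);

        /* Tight poll; sampling the clock alongside the pin lets you spot
         * preemption as an unusually large gap between iterations. */
        do {
            clock_gettime(CLOCK_REALTIME, &edge);
        } while (read_pps_pin() == 0);

        return edge;   /* timestamp taken just before the edge was seen */
    }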
Thank you for the detailed response! The PPS currently drives a hardware interrupt on the Raspberry Pi that is read in by kernel-mode software. My project is to drive an external display. Normally I would bypass the Raspberry Pi altogether and connect the PPS signal to the strobe input of the SIPO shift register. The problem is that the PPS signal cannot be trusted to always exist. Using a Raspberry Pi has a few benefits: setting the timezone based on location, handling leap seconds, and smoothing out inconsistent GPS data. So while opting to use system time to drive the start of second adds error, I think the tradeoff for reliability is worth it.
I have considered adding complexity, such as a hardware mux to choose whether to use the GPS PPS signal or the Raspberry Pi's start-of-second. I should walk before I run, though.
If you want to precisely generate a PPS edge in software with less jitter than you can schedule, you can use a PWM peripheral. Wake up a few milliseconds before the PPS edge is due, get the system time, and compute the precise time until the PPS is due. Initialize the PWM peripheral to transition that far into the future, then go back to sleep until a bit after the transition should have happened, and disable the PWM peripheral.
This works because a thread of execution generally knows what time it is with higher precision than it can accurately schedule itself.
I'm not sure I understand how you're using a PPS signal to drive a display, though. Is it an LED segment display? I assume you want it to update once a second, precisely on the edge of each second. Displays generally exist for humans, though, and a human isn't going to perceive a few milliseconds of jitter on a 1Hz update.
Nixie tubes driven by a pair of cascaded HV5122 (driver + shift register). The strobe input is what updates the output registers with the recently shifted in contents. The driver takes 500 ns to turn on and the nixie tubes take about 10 us to fire once the voltage is applied.
I know it's absurd to worry about the last few ms, but that's part of what interests me about the project. The goal is to make The Wall Time as accurate as I can. I could go further with a delay-locked loop fed by measuring Nixie tube current. There is room to push down to dozens of nanoseconds of error relative to the PPS source, but I am content with tens of microseconds. I can't imagine ever having access to a camera that could capture that amount of error.
Thanks for the tip. Hardware timers are best. I'll likely have to take some measurements to calibrate the computation time of getting the system time and performing the subtraction.
Sounds like fun! For what it's worth, ublox GPS modules and their clones should be configurable to always produce a PPS signal regardless of whether or not they have a satellite fix. The module would probably do a better job than software on a pi could during transient periods without a fix (due to how accurate the oscillators need to be in a GPS module). So, as long as you can trust the GPS module to exist and be powered, you should be able to reliably clock your display update with it. The only reason really to generate your own PPS would be if you want it to work without a GPS module at all, perhaps by NTP or something; you're then of course again looking at only a millisecond or so of accuracy.
I'm using a Uputronics GPS/RTC HAT that has a u-blox M8 engine. I set it to stationary mode for extra accuracy. I'll have to look into other configuration options.
NTP gets worse if you sync more than two devices across a broader network with other switched traffic, more into the low 100s of µs. PTP does not degrade similarly, and yes, most PHYs made since the middle of the last decade support it.
> If you look at the White Rabbit FPGA PTP updates, it's in the ns range
As I recall, I had even better performance than that, around tens of picoseconds. But I guess the advertised 1 ns is a conservative estimate. The precision is incredible, but it's not magic: they squeeze the maximum amount of determinism out of custom hardware and fiber optic links. It is a bit of a pain to set up, too, as you need to calibrate each link individually every time you change the fiber or the SFP.
Even so, a good RNG will attempt to verify the quality of its entropy. For example, Intel's RNG statistically observes bit patterns from its entropy source: it fills up a FIFO and checks the counts of certain bit patterns, and if they aren't sufficiently random, it throws away the data. Even if there isn't enough fresh entropy, unless you can predict how the PRNG was seeded, it should be safe for quite some time without new true entropy.
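As a toy illustration of that kind of health test (this is not Intel's actual algorithm, just a crude bias check over a block of raw samples):

    /* Reject a block of raw generator output whose ones/zeros ratio is
     * far from 50%.  The 40-60% window is illustrative, not taken from
     * any standard or from Intel's design. */
    #include <stdbool.h>
    #include <stdint.h>

    bool block_passes_health_check(const uint8_t *buf, int len)
    {
        int ones = 0;
        for (int i = 0; i < len; i++)
            for (uint8_t b = buf[i]; b != 0; b >>= 1)
                ones += b & 1;                  /* count set bits */

        int bits = len * 8;
        return ones > bits * 2 / 5 && ones < bits * 3 / 5;
    }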
https://www.reddit.com/r/sysadmin/comments/147k6az/oracle_is... https://www.reddit.com/r/sysadmin/comments/d1ttzp/oracle_is_... https://www.theregister.com/2019/10/04/oracle_virtualbox_mer...
We banned VirtualBox in our organization since VMware Workstation (or virt-manager) is way cheaper than dealing with Oracle.