For completeness, 33.6 required insane levels of signal clarity on the phone line, and was mostly fiction outside of urban and dense suburban areas.
Prior to 14.4k, there were earlier generations of modems: 9600, 2400, and even 300 baud were all you could get in their respective eras, and each was cutting edge at the time.
56K (also called V.90 or "V.everything") leaned into the quantization that happens on digital phone trunks, rather than letting the analog-to-digital conversion chew up your analog modem waveforms. The trick here is that the pseudo-digital-over-analog leg from your house to the local exchange was limited to a few miles. Try this from too far out of town, and it just doesn't work. And to be clear, this was prior to DSL, which is similar but a completely different beast.
Oh, and the V.90 spec was a compromise between two competing 56K standards at the time: K56Flex and X2. This meant that ISPs needed to have matching modems on their end to handle the special 56K signaling. Miraculously, the hardware vendors did something that was good for everyone and compromised on a single standard, and then pushed firmware patches that allowed the two brands to interoperate on existing hardware.
Also, line conditions were subject to a range of factors. It's all copper wire hung from power poles, after all. Poor-quality materials, sloppy workmanship, and aging infrastructure would introduce noise all by themselves, and weather events only made it worse. This meant that, for some, it was either a good day or a bad day to try to dial into the internet.
My first modem was 1200 baud... In the early '90s, I got a 9600 baud modem, which is when it felt like things were really taking off. A whole page of text in less than a couple of seconds! I ran my own BBS on 9600 for years.
I think it presents a conflict of interest. Considering we're talking about system security, it's best to not leave this up to the ethics of just one team.
Also: development teams in security-oriented fields already do a lot of self-investigation and improvement on their own. Red Teams still have value in spite of that, and prove it time and again.
IMO, having another team attack your stuff also creates "real" stakes for failure that feel closer to reality than some existential hacker threat. I think just the presence of a looming "Red Team Exercise" creates a stronger motivation to do a better job when building IT systems.
My hunch is that they're doing this for three reasons.
1. Decompressing the gas can be used to do work, like turning a turbine or something. It's not particularly efficient, as you mention, but it can store some energy for a while. Also, the tech to do this is practically off-the-shelf right now, and doesn't rely on a ton of R&D to ramp up. Well, maybe the large storage tanks do, but that should be all. So it _does_ function, and nobody else is doing it this way, so perhaps all of that is seen as a competitive edge of sorts.
2. The storage tech has viable side-products, so the bottom line could be diversified so as not to be completely reliant on electricity generation. The compressed gas itself can be sold. Processed a little further, it can be sold as dry ice. Or maybe the facility can be dual-purposed for refrigeration of goods.
3. IMO, their use of CO2 as a working fluid is an attempt to sound carbon-sequestration-adjacent. Basically, doubling down on environmentally-sound keywords to attract investment. Yes, I'm saying they're greenwashing what should otherwise be a sand battery or something else that moves _heat_ around more efficiently.
This is more of a compressed-air battery than a sand battery, except that the "air" is CO2 and it's "compressed" enough to cause a phase change.
Heat-based energy storage is always going to be inefficient, since it's limited by the Carnot efficiency of turning heat back into electricity. It's always better to store energy mechanically (pumping water, lifting weights, compressing gas), since these are already low-entropy forms of energy, and aren't limited by Carnot's theorem.
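As a rough illustration of that ceiling (my own numbers, not from the article: heat stored at 500°C and rejected to a 25°C ambient), the Carnot limit on turning that heat back into electricity is

    \eta_{\max} = 1 - \frac{T_c}{T_h} = 1 - \frac{298\,\mathrm{K}}{773\,\mathrm{K}} \approx 0.61

and real turbines land well below that bound, whereas a pumped-hydro or compressed-gas round trip isn't subject to it at all.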
I don't know much about this CO2 battery, but I'm guessing the liquid-gas transition occurs under favorable conditions (reasonable temperatures and pressures). The goal is to minimize the amount of heat involved in the process, since all heat is loss (even if they can re-capture it to some extent).
I suppose that liquid CO2 just requires much less volume to store, while keeping the pressure within reason (several dozen atm). For it to work though, the liquid should stay below 31°C (88°F), else it will turn into gas anyway.
So, in a hot climate, they need to store it deep enough underground, and cool the liquid somehow below ambient temperature.
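Ballpark figures to put the volume argument in perspective (my own rough numbers, not from the article): liquid CO2 near 20°C sits around 770 kg/m^3 under its ~57 bar vapor pressure, versus roughly 1.8 kg/m^3 for the gas at 1 atm, so

    \frac{V_{\text{gas}}}{V_{\text{liquid}}} = \frac{\rho_{\text{liquid}}}{\rho_{\text{gas}}} \approx \frac{770}{1.8} \approx 430

i.e. liquefying buys you a few-hundred-fold smaller tank than storing the same mass as gas at atmospheric pressure.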
> their use of CO2 as a working fluid is an attempt to sound carbon-sequestration-adjacent
Um no, that's unfair. CO2 is an easy engineering choice here. It's easy to compress and decompress, easy to contain, non-flammable, non-corrosive, non-toxic and cheap. It's used in many applications for these reasons.
While CO2 is now a great evil among the laptop class, it has been a miracle substance in engineering for roughly 200 years now.
Not just new code-bases. I recently used an LLM to accelerate my learning of Rust.
Coming from other programming languages, I had a lot of questions that would be tough to nail down in a Google search, or by combing through docs and/or tutorials. In retrospect, it's super fast at finding answers to things that _don't exist_ explicitly, or are implied through the lack of documentation, or exist at the intersection of wildly different resources:
- Can I get compile-time type information of Enum values?
- Can I specialize a generic function/type based on Enum values?
- How can I use macros to reflect on struct fields?
- Can I use an enum without its enclosing namespace, as I can in C++?
- Does Rust have a 'with' clause?
- How do I avoid declaring lifetimes on my types?
- What is an idiomatic way to implement the Strategy pattern?
- What is an idiomatic way to return a closure from a function?
...and so on. This "conversation" happened here and there over a period of two weeks. Not only was ChatGPT up to the task, but it was able to suggest which technologies would get me close to the mark when Rust wasn't built to do what I had in mind. I'm now much more comfortable and competent in the language, and miles ahead of where I would have been without it.
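For what it's worth, a couple of those answers boil down to short snippets. Here's a minimal sketch of my own (toy names, not from that conversation) of returning a closure and of doing the Strategy pattern with a trait object:

    // Returning a closure: `impl Fn` works for a single concrete closure;
    // you'd reach for Box<dyn Fn> if you need to pick one at runtime.
    fn make_adder(n: i32) -> impl Fn(i32) -> i32 {
        move |x| x + n
    }

    // Strategy pattern, the idiomatic way: a trait plus trait objects.
    trait Compression {
        fn compress(&self, data: &[u8]) -> Vec<u8>;
    }

    struct NoOp;
    impl Compression for NoOp {
        fn compress(&self, data: &[u8]) -> Vec<u8> {
            data.to_vec() // placeholder strategy: just copy the bytes
        }
    }

    fn archive(strategy: &dyn Compression, data: &[u8]) -> Vec<u8> {
        strategy.compress(data)
    }

    fn main() {
        let add5 = make_adder(5);
        println!("{}", add5(2)); // 7
        println!("{} bytes", archive(&NoOp, b"hello").len()); // 5 bytes
    }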
For really basic syntax stuff it works, but the moment you ask its advice on anything more involved, ChatGPT has confidently led me down incredibly wrong but right-sounding trails.
To their credit, the people on the Rust forum have been really responsive at answering my questions and poking holes in incorrect unsafe implementations, and it is from speaking to them that I truly feel I have learned the language well.
Yeah, start the installer, quickly look at the temp directory for the files, nab 'em, then quit the installer. This and many other janky techniques are what I use to survive in the jungles of the Windows platform.
I would also like to promote one of my favorite tools ever: InstallWatch Pro by Epsilon Squared.
It takes a complete HDD and Registry snapshot; you install something, then it takes another snapshot and shows you the diff in an easy-to-read format.
Yeah, I'm sure even ChatGPT can spit out a script that can do this work. It just seems like this particular software by this company is really simple and super solid.
I wish there was an equivalent for MacOS & Linux, as the scripts I have tried to make (or had ChatGPT try to make) just don't cut the mustard. I'd rather just have some commercial software do this, even if I have to pay for a license.
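For the filesystem half of it (the Registry is its own problem), the skeleton is pretty small. A hedged sketch in std-only Rust, with a placeholder root path and a naive mtime comparison of my own choosing:

    use std::collections::HashMap;
    use std::fs;
    use std::path::{Path, PathBuf};
    use std::time::SystemTime;

    // Walk a directory tree and record each file's modification time.
    fn snapshot(root: &Path, out: &mut HashMap<PathBuf, SystemTime>) {
        if let Ok(entries) = fs::read_dir(root) {
            for entry in entries.flatten() {
                let path = entry.path();
                if path.is_dir() {
                    snapshot(&path, out);
                } else if let Ok(meta) = entry.metadata() {
                    if let Ok(mtime) = meta.modified() {
                        out.insert(path, mtime);
                    }
                }
            }
        }
    }

    fn main() {
        let root = Path::new("."); // placeholder: point at the tree you care about
        let mut before = HashMap::new();
        snapshot(root, &mut before);

        println!("Snapshot taken. Run the installer, then press Enter.");
        let mut line = String::new();
        std::io::stdin().read_line(&mut line).ok();

        let mut after = HashMap::new();
        snapshot(root, &mut after);

        // Report anything added, modified, or removed between the two snapshots.
        for (path, mtime) in &after {
            match before.get(path) {
                None => println!("added:    {}", path.display()),
                Some(old) if old != mtime => println!("modified: {}", path.display()),
                _ => {}
            }
        }
        for path in before.keys() {
            if !after.contains_key(path) {
                println!("removed:  {}", path.display());
            }
        }
    }

A real tool would hash file contents instead of trusting mtimes, but that's the shape of it.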
I was there, writing sites professionally when this was rolled out.
They're more or less deprecated, but I miss having a first-class building block that allows you to resize areas of the screen. The recommendation is to use anything but a <frameset>, but there's no replacement for that one feature.
It reads like they're trying to drum up investment. This is why the focus is on the pedigree of the founders, since they don't have a product to speak of yet.
It's also interesting to look at other architectures at the time to get an idea of how fiendish a problem this is. At that time, Commodore, Nintendo, and some others had dedicated silicon for video rendering. This freed the CPU from having to generate a video signal directly, using a fraction of those cycles to talk to the video subsystem instead. The major drawback with a video chip of some kind is of course cost (custom fabrication, part count), which clearly the Macintosh team was trying to keep as low as possible.
Both of the key 8-bit contenders of yore, the Atari 8-bit series and the Commodore 64, had custom graphics chips (ANTIC and VIC-II) that "stole" cycles from the 6502 (or 6510 in the case of the C64) when they needed to access memory.
I remember writing CPU-intensive code on the Atari and using video blanking to speed it up.
And yet despite the lower parts count the Macintosh was more expensive than competing products from Commodore and Atari that had dedicated silicon for video rendering. I guess Apple must have had huge gross margins on hardware sales given how little was in the box.
That was my takeaway as well. Pivoting a conventional structure in a column-major storage system just smells like a look-aside of some kind (a database). Plus, you lose the benefits of bulk copy/move that the compiler will likely give you in a standard struct (e.g. memcpy()), should it be on the big side. From there, we can use lots of other tricks to further speed things up: hashes, trees, bloom filters...
At the same time, I don't completely understand how such a pivot would result in a speedup for random access. I can see it speeding up sequential scans over a single field, while random access to whole records under the multiple-array storage scheme might force many more cache line fills.
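To make the trade-off concrete, here's a toy sketch (my own example, not from the article) of the same record kept both ways; summing one field streams a single dense array in the SoA form, while the AoS form drags the other fields through the cache alongside it:

    // Array-of-structs: one record is contiguous, which favors random
    // access to whole records (and memcpy-style bulk moves).
    #[derive(Clone)]
    struct Particle {
        x: f32,
        y: f32,
        mass: f32,
    }

    // Struct-of-arrays: one field is contiguous, which favors scanning
    // a single column across many records.
    struct Particles {
        xs: Vec<f32>,
        ys: Vec<f32>,
        masses: Vec<f32>,
    }

    fn total_mass_aos(ps: &[Particle]) -> f32 {
        // x and y ride along in the same cache lines as mass
        ps.iter().map(|p| p.mass).sum()
    }

    fn total_mass_soa(ps: &Particles) -> f32 {
        // streams one dense array; the other fields never enter the cache
        ps.masses.iter().sum()
    }

    fn main() {
        let aos = vec![Particle { x: 1.0, y: 2.0, mass: 3.0 }; 4];
        let soa = Particles {
            xs: vec![1.0; 4],
            ys: vec![2.0; 4],
            masses: vec![3.0; 4],
        };
        println!("{} {}", total_mass_aos(&aos), total_mass_soa(&soa));
    }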
These days, one's program rarely does purely sequential or purely random access in a single stream. Usually, there are several threads or requests, all going into the same data structure.
In the extreme case where all memory accesses of a massively parallel program are deferred, the result is a complete absence of cache misses [1].
While our programs usually don't run at Hyperion scale, they can still benefit from accesses shared between the processing of several requests.
For one such example, consider speech recognition with the HCLG graph. That graph is created through the composition of four WFST [2] graphs, the last one being Grammar (G), derived from an SRILM n-gram language model. The HCLG graph has the scale-free property [3] because all G FST states have to back off to the order-0 model, so some states are visited exponentially more often than others.
By sorting HCLG graph states by their fan-in (the number of states referring to a particular one), we sped the recognition process up by 5%. E.g., the most-referred-to state gets index 0, the second most gets 1, etc., and that's it.
No code change, just a preprocessing step that, I believe, lessens the pressure on the CPU's memory protection subsystem.
The speech recognition process with the HCLG graph uses beam search [4]; there are several hundred (roughly 900-2000) states in the beam front to be evaluated. Most of them have an outgoing arc to some of the most-visited (highest fan-in) states. By having those states close to each other, we incur less page-fault exception handling during processing.
[4] https://en.wikipedia.org/wiki/Beam_search
Basically, we shared more MMU information between the requests in the beam front.
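A sketch of that preprocessing step as I understand it (hypothetical types, not the actual OpenFST/Kaldi code): count fan-ins, sort state ids by descending fan-in, and renumber so the hottest states are packed at the low indices; afterwards every arc endpoint gets rewritten through the remap table:

    // An arc from one graph state to another; states are plain indices.
    struct Arc {
        from: usize,
        to: usize,
    }

    // Returns remap[old_id] = new_id, with the most-referenced state mapped to 0.
    fn renumber_by_fanin(num_states: usize, arcs: &[Arc]) -> Vec<usize> {
        // Count incoming arcs per state.
        let mut fanin = vec![0usize; num_states];
        for arc in arcs {
            fanin[arc.to] += 1;
        }

        // Sort state ids by descending fan-in.
        let mut order: Vec<usize> = (0..num_states).collect();
        order.sort_by_key(|&s| std::cmp::Reverse(fanin[s]));

        // Invert the ordering into an old-id -> new-id map.
        let mut remap = vec![0usize; num_states];
        for (new_id, &old_id) in order.iter().enumerate() {
            remap[old_id] = new_id;
        }
        remap
    }

    fn main() {
        // Toy graph: state 2 has the most incoming arcs, so it becomes state 0.
        let arcs = vec![
            Arc { from: 0, to: 2 },
            Arc { from: 1, to: 2 },
            Arc { from: 2, to: 1 },
        ];
        println!("{:?}", renumber_by_fanin(3, &arcs)); // [2, 1, 0]
    }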
PS
Fun fact: the WFSTs created by OpenFST use a "structure of arrays" layout. ;)