
OMG we still have mercury delay line memories as our "working memory", no wonder the world is a damn mess. https://en.wikipedia.org/wiki/Delay_line_memory

"Miller thinks the brain is juggling the items being held in working memory one at a time, in alternation. “That means all the information has to fit into one brain wave,” he said. “When you exceed the capacity of that one brain wave, you’ve reached the limit on working memory.” "

File it under "path dependence," Pops. TIE (This is Evolution.) (a la "TIL")




It's interesting - we effectively have delay line memory, but rather than bits, the individual chunks we can carry are big, fuzzy, nebulous concepts, which possibly encode quite a lot of information in one chunk - albeit with many errors.
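
One way to sketch that in code (purely illustrative; the +/-1 encoding and the dimensionality are assumptions borrowed from hyperdimensional/vector-symbolic computing, not a claim about the brain): a "chunk" is a single high-dimensional vector superposing several named concepts, and readout is probabilistic and error-prone.

    import numpy as np

    # A "chunk" as a superposition of named concepts in a high-dimensional
    # random code: one vector holds several features, read out noisily.
    rng = np.random.default_rng(0)
    DIM = 10_000  # assumed; more dimensions -> fewer readout errors

    def concept():
        return rng.choice([-1, 1], DIM)  # a random +/-1 "named concept"

    vocab = {n: concept() for n in ["narrow", "cobbled", "gabled", "neon", "glass"]}

    # One chunk bundling three features into a single vector.
    chunk = np.sign(vocab["narrow"] + vocab["cobbled"] + vocab["gabled"])

    for name, vec in vocab.items():
        sim = chunk @ vec / DIM         # cosine-like similarity in [-1, 1]
        print(f"{name:8s} {sim:+.2f}")  # members ~ +0.5, the rest ~ 0.0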


Consider that there are ways to keep semi-coherent, non-quantized thoughts. You can synthesize such an abstract concept if you step through a list of 'named' concepts that overlap with it, to generate the combination of all those 'named' concepts. You can even associate one such abstract concept with another, like when you learn a single vocabulary word, but the repetition you need to form long-term memory has a strong impedance mismatch with the way you generated that abstract concept.

Also, there are ways to form thought structures that you can't use a normal single-step debugger on, as the intermediate states have no useful interpretation. They are, however, much more capable, e.g. able to let you scan through the aisles of a large store while walking reasonably fast through it, turning your head left and right. Your eyes scan roughly at first, then move to get more detail on those areas in an aisle that the thought process wants more detail on. Due to the inherent delay/reaction time, this needs an interleaving of about 3~6 steps between subsequent, sequential viewings of the same area, if each of those viewings is to be decided with full knowledge of that area. The higher the interleaving, the higher the load on working memory, but the fewer repeat viewings of the same area have to be done without an intermediate decision, as you can't handle more areas than this interleaving factor at the same time.
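
A hedged sketch of that scheduling constraint (the delay of 3 steps and the interleaving factor of 4 are invented numbers standing in for the 3~6 above): round-robin over K areas with a reaction delay of D steps, so that by the time an area comes around again, the decision from its previous viewing has already arrived. K must be at least D, and K is also the working-memory load.

    # Round-robin glances over K areas with a decision delay of D steps.
    DELAY = 3   # assumed reaction delay, in glance-steps
    K = 4       # interleaving factor = areas in flight = working-memory load

    areas = [f"area-{i}" for i in range(K)]
    pending = []  # (ready_step, area, observation) awaiting a decision

    for step in range(12):
        area = areas[step % K]                      # glance at the next area
        pending.append((step + DELAY, area, f"obs@{step}"))
        for ready, a, obs in [p for p in pending if p[0] <= step]:
            print(f"step {step}: decide on {a} using {obs}")
        pending = [p for p in pending if p[0] > step]

With K = 4 >= D = 3, the decision from obs@0 arrives at step 3, one step before area-0 comes around again at step 4; with K < D it would arrive too late and the glance would be uninformed.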

A nice aspect of this fuzzy nature of associations is that you can directly combine fuzzy associations from two such abstract concepts you can synthesize. Don't refine those concepts too much, though. Compare, e.g., the visual impressions from one historic city center with those of another. Don't try to list them and compare one-by-one; try to get a fuzzy state of visual impressions from one city center, never individually enumerated at any point, and place it to the side (you can imagine it on a side in a virtual space with rough directions relative to your brain, but don't try to relate it to the physical world, or you snap out of the coherence and possibly lose part of the memories), and then gather the same for the other city center on the other side.

Maybe switch between them a few times, like 2 or 3 times each, and then just dissolve the separation, i.e., drop the association of which side each concept was on, along with all the other mental-space location information. Just consider it no longer important; don't think about it in the moment. Prepare how to drop one of the roughly 4 to 6 low-complexity semi-abstract thoughts (there should be no linguistic expression accurately describing it that is shorter than 7 syllables; this holds for each thought separately) before you do the fusion.

If you then run through the things you get when brainstorming short linguistic expressions (maximum 5 syllables, preferably fewer) or visual things (drawable to recognition in under 30 straight lines of finite length, at the visual-recognition skill common in Pictionary), you get what both of these city centers have in common, with much of the sampling done on the combined probability distribution this essentially is. The reduction in noise/errors is related to what a quantum computer does, but the limits are sadly much lower.
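
Under the same made-up vector encoding as above, the "dissolve the separation" step has a tidy analogue: just add the two bundles (the sum has no notion of sides) and probe with short candidate labels; whatever both city centers share scores roughly twice as high as what only one of them has.

    import numpy as np

    # Fusing two fuzzy bundles: shared impressions reinforce, unique ones don't.
    rng = np.random.default_rng(1)
    DIM = 10_000
    names = ["arches", "fountains", "trams", "canals", "market"]
    vocab = {n: rng.choice([-1, 1], DIM) for n in names}

    city_a = vocab["arches"] + vocab["fountains"] + vocab["market"]
    city_b = vocab["canals"] + vocab["fountains"] + vocab["market"]

    fused = city_a + city_b  # drop which side each came from: just add
    for n, v in vocab.items():
        print(f"{n:10s} {fused @ v / DIM:+.2f}")
        # shared ("fountains", "market") ~ +2, unique ~ +1, absent ("trams") ~ 0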

Be careful: you might like to use such techniques to get rid of (some) emotions, and that can hit feelings like hunger/thirst without you trying to. It usually takes years to get a good handle on those after you lose them.

Manipulation of these lower-level/monolithic thought processes can be done by creating a self-feedback process that is trained to report to you as a one-dimensional, non-quantized "feeling". You probe it similarly to how you consciously probe a specific bodily sensation, but by asking for an abstract concept instead of a region of your body (naming it would create too much overhead, as you don't need to refer to it directly from linguistic communication). It's like how you can feel how dry your eyes are when they are dry: with less quantization than you'd use trying to put it into words, and with less fudging than trying to put it into numbers (even if those have decimals).



