Generally you are better off coding with "intrinsics", compiler extensions that represent the instructions more symbolically, if in fact the compiler offers what you need.
I am not sure the really interesting AVX-512 instructions have intrinsics yet. For those it's asm or nothing.
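To make that concrete, intrinsics look something like this (a toy sketch, not from any particular codebase): each `_mm512_*` call maps more or less one-to-one to an instruction, but you get typed variables instead of hand-allocated registers.

```cpp
#include <immintrin.h>  // needs an AVX-512 target, e.g. -mavx512f

// Adds b to a only for the lanes selected by the 16-bit mask; lanes not in
// `keep` pass `a` through unchanged. Roughly one EVEX-encoded vaddps.
__m512 masked_add(__m512 a, __m512 b, __mmask16 keep) {
    return _mm512_mask_add_ps(a, keep, a, b);
}
```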
Potentially both. Most compilers have vectorization optimizations if you compile for an architecture that supports it.
However, a lot of software is compiled on one machine to be run on many possible architectures, so it targets a lowest-common-denominator arch like baseline x86-64. That baseline includes some SIMD instructions (SSE2) but not AVX-512.
So if a developer wants to ensure those instructions are used when they're supported, they'll write two code paths: one explicitly calls the AVX-512 intrinsics, and the other uses plain generic code and lets the compiler decide how to turn it into x86-64-safe instructions.
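Roughly what that two-path pattern looks like -- a sketch assuming GCC/Clang, using the target attribute plus __builtin_cpu_supports for the runtime check (the function names are made up for illustration):

```cpp
#include <immintrin.h>
#include <cstddef>

// Baseline path: plain code, compiled for generic x86-64 (SSE2 and friends).
static void sum_scalar(const float* a, const float* b, float* out, size_t n) {
    for (size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

// AVX-512 path: the target attribute lets GCC/Clang use these intrinsics
// without compiling the whole file with -mavx512f.
__attribute__((target("avx512f")))
static void sum_avx512(const float* a, const float* b, float* out, size_t n) {
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];  // scalar remainder
}

void sum(const float* a, const float* b, float* out, size_t n) {
    if (__builtin_cpu_supports("avx512f")) {  // runtime check on the user's CPU
        sum_avx512(a, b, out, n);
    } else {
        sum_scalar(a, b, out, n);
    }
}
```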
thanks for that! so it sounds like, if i purchase a chip that supports avx512, and run an operating system and compiler that supports avx512, i can write "plain old c code" with a minimal amount of compiler arguments and compile that code on my machine (aka not just running someone else's binary). and then the full power of avx512 is right there waiting for me? :)
A compiler turning C(++) code into SIMD instructions is called "autovectorization". In my experience this works for simple loops such as dot products (even that requires special compiler flags to allow FMA and reordering of floating-point operations), but unfortunately the wheels often fall off for more complex code.
Also, I haven't seen the compiler generate the more exotic instructions.
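For example, a dot product like the one below is about the limit of what I'd count on; the flags are illustrative, and it's the -ffast-math style options that permit the FMA and reordering:

```cpp
#include <cstddef>

// Compilers can usually vectorize this with something like:
//   g++ -O3 -march=native -ffast-math
// Without the fast-math style flags, the serial accumulation order must be
// preserved, which blocks the reassociation that vectorization needs.
float dot(const float* a, const float* b, size_t n) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        sum += a[i] * b[i];
    }
    return sum;
}
```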
if you are targeting more than one specific platform, do you like, include the immintrin.h header and use #ifdef to conditionally use avx512 if it's available on someone's platform?
It would be simpler to use the portable intrinsics from github.com/google/highway (disclosure: I am the main author).
You include a header, and use the same functions on all platforms; the library provides wrapper functions which boil down to the platform's intrinsics.
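A minimal sketch of what that looks like, going by Highway's documented ScalableTag/Load/Store API (treat the details as approximate):

```cpp
#include "hwy/highway.h"

namespace hn = hwy::HWY_NAMESPACE;

// Adds two arrays using whatever vector width the compile target provides
// (SSE4, AVX2, AVX-512, NEON, ...). Assumes n is a multiple of the lane count.
void AddArrays(const float* HWY_RESTRICT a, const float* HWY_RESTRICT b,
               float* HWY_RESTRICT out, size_t n) {
  const hn::ScalableTag<float> d;  // descriptor: "full-width vector of float"
  for (size_t i = 0; i < n; i += hn::Lanes(d)) {
    const auto va = hn::Load(d, a + i);
    const auto vb = hn::Load(d, b + i);
    hn::Store(hn::Add(va, vb), d, out + i);
  }
}
```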
From what I have seen, this is unfortunately not very useful: it mainly only includes operations that the compiler is often able to autovectorize anyway (simple arithmetic). Support for anything more interesting such as swizzles seems nonexistent. Also, last I checked, this was only available on GCC 11+; has that changed?
I wonder how much compilers could be improved with AI?
I'd imagine outputting optimized avx code from an existing C for() loop would be much easier than going from a "write me a python code that..." prompt.
Typically, if it's available, compilers will use the AVX-512 register file. This means you'll see things like xmm25 and ymm25 (128- and 256-bit registers numbered above 15), which are only accessible with AVX-512. However, compilers emitting full 512-bit-wide instructions is pretty rare from what I've seen.
In my experience, clang unrolls too much, so you end up spending all your time in the non-vectorized remainder.
Using smaller vectors cuts the size of the non-vectorized remainders in half, so smaller vectors often give better performance for that reason.
(Unrolling less could have the same effect while decreasing code size, but alas)
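If you want to experiment with that, clang exposes loop pragmas for both knobs; a sketch with arbitrary numbers, just to show the mechanism:

```cpp
#include <cstddef>

void scale(float* x, size_t n, float s) {
    // Ask clang for narrower vectors and less unrolling; the scalar remainder
    // then only has to cover the leftovers of a smaller vectorized footprint.
    #pragma clang loop vectorize_width(4)
    #pragma clang loop unroll_count(2)
    for (size_t i = 0; i < n; ++i) {
        x[i] *= s;
    }
}
```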
If you don't end up purchasing any physical parts for building controllable robots, you'll find a lot of the same ideas in programming game physics, which really is just a simulation of the world we live in and of how to propel an object around that world.
> Failure to report the specific crime of child sex trafficking / rape is a crime itself.
Generally not, AFAICT. State laws (in some cases under a federal mandate requiring states to do so to receive federal funds) designate a variety of mandatory reporters (one of the most universal of which is health care providers), but there is no federal universal reporting mandate.
>> failure to report the specific crime of child sex trafficking / rape is a crime itself.
Ok, but where is the evidence? I don't think the cops can just show up at the doors of Epstein's associates and try to snatch confessions without evidence. They have lawyers. Why would they confess?
I think this is what takes time -- they have to build more than a circumstantial case that the crime was committed and as you said the suspect is not going to confess so they have a large hill to climb.
Flying to an island on his plane (even after everything we know about what has happened at that island residence) does not itself indicate any wrongdoing -- they have to find more evidence that links the threads together. If they charge or show their hand before they have enough, it sets various clocks in motion that can harm the long-term cases.
I have faith they are still working as many of these as they can, looking for charges -- but it's only faith.
I think we've hit the point where trying to point fingers at certain ideologies is yielding, well, absolutely 0 returns.
If someone has no solutions and only excuses, it doesn't really help the situation anymore, and the area will continue to be a less-than-desirable place to raise children.
Attention any quant firms-- I'll do it for 250k, 80 hours a week, 24/7 on call, fire me if I don't deliver satisfactory results.
10 years of experience in Cpp, Java, Javascript, Python, SQL, the last 5 of which as a lead (aka designing the data model and optimizing slow parts of the code base through better ds and algo implementations).
Only catch is I won't show up physically in NY or Philly or Chicago, etc.
While I agree all of these scenarios are good reasons to leave, I have a feeling (based on being an American for all 30 years of my life now), these answers would be classified as "TOO REAL" and as a result you might be filtered for "culture fit" aka manager doesn't think he/she can control you.
I agree, but at that point if they're going to make this kind of observation on my resume ("hmm, you move around a lot"), the reply is going to be of equal tone ("I move around a lot because the companies I choose tend to make bad decisions and change direction with equal frequency").
What you are describing at the old company is not a failure of old tools, but rather a failure of management/employee self-management at that company.
Any tool can be used to do good or evil. They were using old tools to do evil things-- namely, writing bad code.
The only caveat here is that if I had to maintain bad bash scripts or bad koobieboobie cicd automated shlalala, I'd always choose bad bash scripts, as the blast radius is smaller and easier to reason about.
How can we more effectively show a robot that circular objects with a certain position, scale, rotation, mass, color/pattern, or an arbitrary label ("that thing over there") are what we want? Seems like a fun question to solve.
As a professional programmer and a relatively optimistic AGI enthusiast, why would the current ML methods not work, given sufficient CPU/RAM/GPU/latency/bandwidth/storage?
In theory, as long as you can translate your inputs and outputs into an array of floats, a neural network can compute anything. However, the required number of neurons might not fit into the world's best RAM, and the weights and biases for those neurons might not be quickly calculable by a CPU/GPU.
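A toy sketch of what "inputs and outputs as an array of floats" means in practice -- one dense layer with purely illustrative sizes and names, not a claim about any real model:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One dense layer: out = tanh(W * in + b). Stacking enough of these with
// enough neurons is the "can approximate anything" claim; the catch is how
// large W has to get and how long all the multiply-adds take.
std::vector<float> dense(const std::vector<std::vector<float>>& W,
                         const std::vector<float>& b,
                         const std::vector<float>& in) {
    std::vector<float> out(b);  // start from the biases
    for (size_t j = 0; j < W.size(); ++j) {
        for (size_t i = 0; i < in.size(); ++i) {
            out[j] += W[j][i] * in[i];
        }
        out[j] = std::tanh(out[j]);  // the nonlinearity is what makes depth useful
    }
    return out;
}
```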
The current AI approach is like a pure function in programming: no side effects, and given the same input you always get the same output. The “usage” and “training” steps are separate. There is no episodic memory, and especially no short-term memory.
Biological networks that result in conscious “minds” have a ton of loops and are constantly learning. You can essentially cut yourself off from the outside world in something like a sensory deprivation bath and your mind will continue to operate, talking to itself.
No current popular and successful AI/ML approach can do anything like this.
Agreed, but I also wonder if this is a "necessary" requirement. A robot, perhaps pretrained in a highly accurate 3D physics simulation, which understands how it can move itself and other objects in the world and how to accomplish text-defined tasks, is already extremely useful and much more general than an image classification system. It is so general, in fact, that it would begin reliably replacing jobs.
Ok, so now we just have to define "AGI" then. A robot that knows its physical capabilities, that sees the world around it through a view frustum and identifies objects by position, velocity, and rotation, that understands the passage of time and can predict future positions, and that can take text input and translate it into a list of steps to execute -- functionally equivalent to an Amazon warehouse employee -- we are saying is not AGI.
An Amazon warehouse worker isn’t a human, an Amazon warehouse worker is a human engaged in an activity that utilises a tiny portion of what that human is capable of.
A Roomba is not AGI because it can do what a cleaner does.
“Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can.”
I think the key word in that quote is "any" intellectual task. I don't think we are far from solving all of the mobility and vision-related tasks.
I am more concerned though if the definition includes things like philosophy and emotion. These things can be quantified, like for example with AI that plays poker and can calculate the aggressiveness (range of potential hands) of the humans at the table rather than just the pure isolated strength of their hand. But it seems like a very hard thing to generally quantify, and as a result a hard thing to measure and program for.
It sounds like different people will just have different definitions of AGI, which is different from "can this thing do the task i need it to do (for profit, for fun, etc)"
I think you're on to something very practical here.
Chat GPT allows for conversation that is pretty remarkable today. It hasn't learned the way us humans have - so what?
I think a few more iterations may lead to something very, very useful to us humans. Most humans may just as well say Chat GPT version X is Artificial, and Generally Intelligent.
One big gap is causal learning. A true general intelligence will have to learn how to intervene in the real world to cause wanted outcomes in novel scenarios. Most current ML models capture only statistical knowledge. They can tell you what interventions have been associated with wanted outcomes in the past. In some situations, replaying these associations seems like genuine causal knowledge, but in novel scenarios this falls short. Even in current-day models designed to make causal inferences, say for autonomous driving, the causal structure is more likely to have been built into the models by humans rather than inferred from observations.
Yes but that doesn’t mean you won’t need new architectures or training methods to get there, or data that doesn’t currently exist. We also don’t know how many neurons / layers we’d need, etc.
The brain itself is infinitely more complex than artificial neural networks. Maybe we don’t need all of what nature does to get there, but we are so many orders of magnitude off that it’s redonk. People talk about the number of neurons in the brain as if there’s a 1:1 mapping with an ANN. Real neurons have chemical and physical properties, along with other things going on that probably haven’t been discovered yet.
This is an interesting comment. I agree that I keep hearing "all we need is 86 billion neurons and we will have parity with the human brain", and I feel it is dubious to think this way because there is no reason why this arbitrary number must work.
I also think it is a bit strange to use the human brain as an analogy, because biological neurons supposedly are booleans and act in groups to achieve float-level behavior. For example, I can have neurologic pain in my fingers that isn't on/off but rather varies in magnitude.
I think we should move away from the biology comparisons and just seek to understand if "more neurons = more better" is true, and if it is, how do we shove more into RAM and handle the exploding compute complexity.
Well, I would turn the question around: why would it? When you understand how these things work, does it sound anything like what humans do? When prompted with a question, we do not respond by predicting the words that come next based on a gigantic corpus of pre-trained text. As a professional programmer, do you think human intelligence works like a Turing machine?
The first interesting thing I'm told is that real biological neurons operate as booleans, whereas in computer land it is apparently preferable to use float neurons. I suppose you could chain biological neurons together in groups to achieve float-like behavior.
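If that boolean picture is right, the "groups acting like floats" idea is basically rate coding; a toy illustration with purely hypothetical numbers:

```cpp
#include <cstddef>

// Averages N boolean "spikes" into a value in [0, 1]: many on/off units
// firing at different rates can jointly represent a graded magnitude,
// which is one way on/off neurons could yield float-like behavior.
float rate_code(const bool* spikes, size_t n) {
    size_t fired = 0;
    for (size_t i = 0; i < n; ++i) {
        if (spikes[i]) ++fired;
    }
    return static_cast<float>(fired) / static_cast<float>(n);
}
```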
So that's just one small example that we don't need AGI to be a model of the real human brain, with synapses and blood-brain barriers and everything. Rather, we just need one system to do n number of tasks at roughly the same level as a human for it to be "general". Maybe it's not AGI, but it also is not a hardcoded robotic arm that can only work with square objects of a certain dimension.
If you had a robot that was pretrained in a virtual world, assembled in the real world, and then it begins testing and observing and resolving its own physical capabilities (moving arms and legs to stand and jump and backflip)... and then it also had a vision system to scan for threats and objectives... and then it also could resolve text and voice prompts to learn its next objective ("go get my favorite beer can from the fridge")... and the robot knows to ask you more questions to learn what your favorite beer is and also it knows how to preserve its own life in case the dog attacks it or the fridge topples over on it... then I think you have an extremely useful tool that will change the world, regardless of if it is labeled as AGI or not.