I think they are trying to push back against generated pages. I faced this exact problem myself. We recently published an interactive source code navigation tool [0] where you can find examples for commonly used functions from some embedded SDKs. Google indexed it immediately, and almost immediately it got a spike of views.
Then an interesting thing happened. Most pages simply disappeared from the results. Search Console shows them as indexed, no problems, no manual actions, but if you google those functions, the results are not there.
It took some statistical analysis to figure out that they appear to be capping the number of pages. Out of all the pages Google crawled, it picked some percentage of the "most important" ones and it's showing those. The importance, by the looks of it, was computed from the number of incoming links, prioritizing pages for common stuff like int32_t that nobody googles.
It's not ideal, but it kinda makes sense. It's 2024. You can use AI to generate plausible content for any search query you can think of. And unless they put some kind of limits, we'll get overrun with completely useless LLM-churned stuff.
You have confirmed my fears: I'm publishing a text-heavy site with separate articles per page as well as a large single page. Google will not like that.
I took a course on it in undergrad, was exposed to it again in grad school, and for the life of me I still don't understand the derivations, either Galerkin or variational.
I learned it from the structural engineering perspective. What are you struggling with? In my mind I have this logic flow: 1. strong-form PDE; 2. weak form; 3. discretized weak form; 4. compute the integrals (numerically) over each element; 5. assemble the linear system; 6. solve the linear system.
Luckily, the integrals of step 4 are already worked out in textbooks and research papers for all the problems people commonly use FEA for, so you can almost always skip steps 1, 2, and 3.
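If it helps to see steps 4-6 concretely, here's a minimal sketch for about the simplest case there is (1D Poisson, -u'' = f on [0,1] with u(0) = u(1) = 0, linear elements). The element matrices are the standard textbook ones; the mesh size and source term are just made up for illustration:

    import numpy as np

    # 1D Poisson: -u'' = f on [0, 1], u(0) = u(1) = 0, linear elements
    n_el = 10                      # number of elements (illustrative choice)
    n_nodes = n_el + 1
    h = 1.0 / n_el                 # uniform element length
    f = 1.0                        # constant source term (illustrative choice)

    K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
    F = np.zeros(n_nodes)              # global load vector

    # step 4: element integrals (closed form for linear elements, constant f)
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    fe = f * h / 2.0 * np.array([1.0, 1.0])

    # step 5: assembly
    for e in range(n_el):
        dofs = [e, e + 1]
        K[np.ix_(dofs, dofs)] += ke
        F[dofs] += fe

    # apply the Dirichlet BCs by keeping only the interior (free) nodes
    free = np.arange(1, n_nodes - 1)

    # step 6: solve the linear system
    u = np.zeros(n_nodes)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

    # exact solution for comparison: u(x) = x(1 - x)/2
    x = np.linspace(0.0, 1.0, n_nodes)
    print(np.max(np.abs(u - x * (1.0 - x) / 2.0)))

For this particular problem the linear-element solution happens to be nodally exact, so the printed error should sit at machine precision; a real problem just swaps in the step-4 element integrals for your PDE.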
Samsung phones use both, in a very literal sense. Their flagship devices usually have both Snapdragon and Exynos variants for different regions, and their lower end devices are mostly Exynos across the board.
The S23 line was an exception in using Snapdragon worldwide, but then the S24 line switched back to using Snapdragon in NA and Exynos everywhere else, except for the S24 Ultra which is still Snapdragon everywhere.
Yes, it's a confusing mess, and it's arguably misleading when the predominantly NA-based tech reviewers and influencers get the usually superior QCOM variant, and promote it to a global audience who may get a completely different SoC when they buy the "same" device.
Last time I checked, the latest Qualcomm Snapdragon was faster and more energy efficient than the latest Exynos. Especially the top-binned chips that Samsung gets.
Still, the fact that Samsung can swap out the chip in their flagship product with virtually no change other than slightly different benchmark scores means that these chips are pretty much fungible. If either manufacturer runs into serious problems, the other one is ready to eat their market share for lunch.
Yes, Exynos is still behind in performance and thermals. Exynos modems are also still garbage and plague Pixel phones with issues. Though they are slowly improving with each generation, it's awful that they're being troubleshot in public over years.
As someone with a close family member who is autistic, I am always bothered by the phrase "if you have met one person with autism, you have met one person with autism". Autism as a diagnostic classification is so broad that the nonverbal are lumped in with those with behavioral rigidity, when at least to me they seem like they should just have different diagnoses. This is not just playing semantics if they're able to correlate against specific sets of genes. This work seems highly relevant, IMO.
The expression you quoted is completely agreeing with you! It's a play on the expected idiomatic ending "then you have met everyone with autism", pointing out that the diagnosis is broad and everyone is different.
A characteristic autistic trait is having a narrow and deep tunnel of attention.
Perhaps so narrow and deep that you're unable to learn language, because it requires a more holistic processing, and less focus on individual components.
Perhaps so narrow and deep that you're overwhelmed by sensation, processing every touch, sound, sight stimulus individually, leaving little energy to put everything together.
Or perhaps only so narrow and deep that you are extremely focused on math. Or collecting insects. Or memorizing train routes.
I don't know if this is an explanation. But it is extremely plausible for a wide variety of outcomes to be usefully categorized by a singular trait.
The symptoms cluster together and are related. Someone with sensory issues is also likely to have food aversions, for example.
It's also useful for diagnostics and treatment. It means you don't have to fight insurance as much or need rediagnostics to get needed therapies. I don't need to get my child with food aversions, speech delay, and sensory issues a new diagnosis for each just because some people with autism don't have those issues.
I guess my thought then is that whether it's one disorder or four or any other number is perhaps best understood as a statistical question.
For example, if a parent exhibits autism symptom X (e.g. trouble understanding emotions), are their kids more likely to inherit symptom X, or ANY autism symptom?
If X is uniquely heritable, then perhaps it's best treated as multiple disorders. But if X leads with equal likelihood to X, Y, or Z, then it's better understood as one disorder.
That sounds like a problem with the medical gatekeeping industry rather than anything fundamental. Like, a blanket diagnosis of "human" would get you the same thing, were it not for the middlemen realizing that would completely destroy one of the levers they use to defraud.
With a prevalence rate of < 2% (at least in Australia) this seems like an incredibly mathematically flawed take. Whilst a broad/blanket diagnosis isn't useful for making generalisations about individuals in that group, it's certainly societally useful.
All models are wrong, some are useful. Of course it has some utility, otherwise it would drop out of use on its own. The problem with big catch-all, symptom-based diagnoses is what they drive focus towards and away from. I get that the scientific process has to start somewhere, by putting similar things in a bag, before it can tease out mechanisms and groupings. But when such simplistic models remain how doctors communicate with patients, it crowds out more nuanced understanding. Like even the word "spectrum", trying to add some depth to the pop culture model, is really just a fancy word for a single scalar.
> it crowds out more nuanced understanding. Like even the word "spectrum", trying to add some depth to the pop culture model, is really just a fancy word for a single scalar.
I just disagree with this take.
For people with autism, the broad criteria help to serve as guideposts for common experiences shared by those with autism. When doing treatment, everyone gets into the specifics of what autism means for the individual.
What you are complaining about is similar to someone complaining that cancer is too broad of a term. After all, the word cancer describes a spectrum of mutations and symptoms everywhere in the body.
How about for people "without autism" that have some of the characteristics (probably everyone), trying to examine their own mental workings (ideally more people) ?
How about for people with "mild autism" that have now been labeled by the medical system as being distinct from people "without autism", even though the main difference was merely passing some arbitrary threshold?
The difference with cancer is that cancer is an unequivocal negative. You can't be just "a little cancerous" and just embrace it. Whereas with autism we're seemingly talking about variances in distinct components of what makes up intelligence. So setting some arbitrary threshold, below which you're "fine" and above which you have a "problem", is really an artifact of the medical industry and larger economic system rather than of the actual mechanics.
I think a person is usually capable of figuring out whether some of their traits pose a "problem" in their life or not. And if they're not capable, you're probably able to figure out the answer to that question already without their involvement.
Healthy people usually don't try to find a diagnosis for their mental state.
I don't disagree that pop culture has distilled spectrum down into a magnitude, but that isn't how the DSM describes it or how professionals diagnose it (or, in my experience, how they communicate it). The metaphor is supposed to be like the light spectrum, not "less autism ranging to more autism". The severity scale is distinct from the interacting traits of social issues and restricted interests and repetitive behaviors (the spectrum bit).
When I said single scalar, I was referencing the light spectrum: it is literally just less energy ranging to more energy (per photon). We just experience it so vibrantly as different colors (etc.) because the differences between specific energies are quite important at the level of our existence. So unless there is a single underlying factor whose magnitude causes all of the different distinct traits of autism, it's a poor analogy.
Your comment would probably be less confusing for non-physicists if you said frequency instead of energy (I know, E=hf).
Two meanings of the word spectrum are used in this discussion, definition quoted from wiktionary:
1. "A range; a continuous, infinite, one-dimensional set, possibly bounded by extremes." "Specifically, a range of colours representing light (electromagnetic radiation) of contiguous frequencies"
2. A plot of energy against frequency, e.g. "[t]he pattern of absorption or emission of radiation produced by a substance", or the output of a Fourier transform.
You were talking about the former, BoiledCabbage the latter.
---
I agree with you that the former makes a terrible analogy for autism; and to be honest I really don't see how the latter can be an analogy for autism either.
Two beams of light of equal intensity and different frequency contain equal energy, not differing amounts of energy.
The actual difference in frequency is the composition of that energy.
If, of course, you compare a dimmer beam of light with a brighter one, the dimmer one will have less energy.
So no, less energy is due to lower-intensity light, not due to different frequency. You can pretty trivially have 5 flashlights, each with a different color of light, and all with the same energy.
Sure. The varying "composition" of that energy is what forms the spectrum - it's a single scalar. You're adding one more dimension of dim/bright, making the entire description be two scalars.
Look up the definition of "spectrum", and contrast with "gamut".
I disagree - I'm not adding an additional dimension. You're implicitly including the dimension of intensity without calling it out explicitly.
The only way that two different otherwise identical light sources can output different amounts of energy is if they have different intensity.
Or put differently, in your example if you have different wavelengths and hold intensity constant then energy is also constant. It's your example that has implicitly introduced the concept of intensity without explicitly saying so.
And the evidence of that is my example. My example is exactly your example, but holding the intensity constant between the two different light sources. If you do that, then your example fails.
Different wavelengths of light at the same intensity output the same energy. Meaning wavelength does not change the energy output by light.
Even though photons exist, light is fundamentally not a particle. Your statement would hold if light were a particle and only a particle and did not also have wavelike properties. And generally when discussing light as human perception it's the macro scale and wavelike behavior that is being discussed.
I was talking about the spectrum, not brightness or intensity. The spectrum represents a variation in just a single quantity - energy per photon, or wavelength if you prefer. The spectrum is orthogonal to brightness/intensity/power.
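For what it's worth, the two quantities being argued past each other here are just the standard textbook relations (no claim about autism, just the physics):

    E_{\text{photon}} = h f = \frac{hc}{\lambda}, \qquad
    P_{\text{beam}} = \dot{N}\, h f

where \dot{N} is the photon arrival rate. Holding the beam power (brightness) fixed while changing the colour just changes \dot{N}; the spectrum axis is energy per photon, which is independent of how much total energy the beam delivers.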
You may be interested in reading Wittgenstein on trying to define what a "game" is. In short he found that there are no conditions necessary or sufficient to make something a game. Nevertheless, games exist.
It's amazing how much Wittgenstein anticipated. You can see here how his philosophy framed meaning as fundamentally a probabilistic superposition of the relationships between words, with no fixed form, anticipating today's model architectures:
> for instance the kinds of number form a family in the same way. Why do we call something a "number"? Well, perhaps because it has a direct relationship with several things that have hitherto been called number; and this can be said to give it an indirect relationship to other things we call the same name. And we extend our concept of number as in spinning a thread we twist fibre on fibre. And the strength of the thread does not reside in the fact that some one fibre runs through its whole length, but in the overlapping of many fibres.
"hot" is still a meaningful word even though 100 F°, 1000 F°, and 1,000,000 F° aren't comparable at all. They're nonetheless still all experiencing heat.
Yes, if we could pin it to a linear scale of Degrees Autistic (Fahrenheit), one that could be estimated with reasonable precision for all day-to-day relevant values by feeling the nearby air on your skin, nobody would complain about "Autism" being too broad.
Am I missing something? You can, though. That's actually kinda how it is. I detest the phrase "high functioning", but that group is roughly your outside temperatures. You'll notice the difference between 30° and 80°, and the same temperature, 72°, can feel different in the summer, winter, before it rains, or when it's humid/dry, but it is still the same intensity. Then there's 1,000°, where (and this is someone I know) he stripped naked, ran through downtown, yelled at random restaurant workers calling them fascists for not letting him in, and then got into a fight with the cops.
I think broadly that's what the "spectrum" characterization is meant to convey. And you should expect this: in code there's one happy path and a million different ways to err, some more catastrophic than others.
> Autism as a diagnostic classification is so broad that the non verbal are lumped in with those with rigidity behavior, when at least to me they seem like they should just have a different diagnosis.
As someone with a diagnosis, I've gotten into the habit of referring to myself as "on the spectrum" when discussing with people rather than using a specific term. It helps emphasize the fact that there's a range of potential manifestations, and (hopefully) helps remind people that their expectations based on past experience might not fit my behavior exactly.
The problem was that a diagnosis of Asperger's was unreliable and therefore useless. We definitely need to identify individuals within the diagnosis of autism spectrum disorder that can reliably be identified and benefit from specific interventions. However, Asperger's did not provide that.
I think this is precisely why for-profit healthcare is wild. If it weren’t for ideology we could get behind socialised care and cut out all of the nonsense.
Mostly anecdotal, my school psychologist back in the day sure believed it, and this would vary from place to place. She was a champion of "You give children the diagnosis that gives them the services they need". Autism being the one which gave children the services they needed, and she often expressed frustration at not being able to get such a diagnosis.
In general, Asperger's basically meant no verbal delays, whereas Autism meant verbal delay. Autism was also around longer as a diagnosis. In general, I think there's a reason they changed the name of Asperger's to Autism and not the other way around.
Interesting that the diagnosis of autism apparently wasn't available to her. Do you know what she would have been referring to by the autism diagnosis being able to get children the services they needed, that the "asperger's" diagnosis would not?
It can be helpful to get a diagnosis of autism for kids in public school. Kids end up needing additional one on one time and resources are limited. Those with the biggest problems are the first to be approved for these resources, and a formal diagnosis makes it easier to get that approval.
Asperger's was not reliably diagnosable between healthcare workers trained to diagnose it. In other words, a diagnosis of Asperger's in someone's medical chart was not a reliable way of knowing if they had Asperger's.
This isn't really providing any clarity to the question, since you're simply restating your previous claim. How is a diagnosis of autism any more reliable?
As in, it has a higher inter-rater reliability: reliability in the statistical sense, describing the likelihood of reproducing a measure under similar conditions.
What matters here is not the general inter-rater reliability for all autism diagnoses, but only those for the patients which would have otherwise been described as having Asperger's, which can often present as a very mild disorder. Given that many Asperger's cases are mild, it is no surprise that there is low inter-rater reliability. Is there evidence that for this type of individual, there is a higher inter-rater reliability with the diagnosis of "autism"?
I'd argue that having the descriptor of "asperger's" is much more useful than simply having a blanket descriptor of "autism". Low functioning people who are described as having autism, have very little in common with most of the high functioning type.
Do you really need a citation that a non-verbal, uncommunicative and non-independent "non-functional" traditional autism case is different from what would often be referred to as "Asperger's", which is common amongst software developers? Or take people like Elon Musk, Bill Gross and David Byrne, who all claim to have Asperger's, Bill and David having been formally diagnosed. These people, who achieved great success, and went through most of their lives without needing a diagnosis, are clearly nothing like the non-functional patients with clearer traditional cases of autism.
First you claimed that they "have very little in common", and now merely that it "is different"... although in the next sentence you go back to "clearly nothing like"?? You seem quite undecided about what you actually want to say.
They're obviously "different", but there's plenty of reasons to believe that they have a lot in common, so going against established science should require some citations.
The reasons why the change was made (https://www.thetransmitter.org/spectrum/why-fold-asperger-sy...) still make sense to me. The autism spectrum is quite wide, and I'd 100% believe something meaningful coming from the source study, but the specific category of Asperger's was based on factors that don't seem to matter much and weren't being reliably evaluated.
To me the point of the phrase was to counter prejudice and gatekeeping. "You can't be autistic because I know another autistic person who does the opposite," is a sentiment that autistic people in my life have encountered. It doesn't extend to researchers because (arguably) researchers aren't in the business of telling people that because they are uncomfortable with behaviors associated with autism.
For comparable BER. I wonder what SNR is needed for speech to be intelligible?
To be fair, if someone made a voice-only encoding that had a lot of error correction bits, it would probably work at much longer distances. Some of the codecs are 2 kilobits per second for human voice? That's got to have a way better margin for the same channel bandwidth than analog decoded by the human brain. This way we get the digital advantage and lossy compression.
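A rough back-of-envelope, assuming an idealized 3 kHz AWGN voice channel (which a real radio link is not): Shannon's capacity formula gives the minimum SNR at which 2.4 kbps could in principle get through,

    C = B \log_2(1 + \mathrm{SNR}) \;\Rightarrow\;
    \mathrm{SNR}_{\min} = 2^{C/B} - 1 = 2^{2400/3000} - 1 \approx 0.74 \approx -1.3\ \mathrm{dB}

That's only a lower bound (real modems and FEC need several dB of margin on top of it), but it's consistent with the intuition that a ~2 kbps vocoder plus error correction leaves a lot of headroom in a voice-width channel compared to analog.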
You may find the MELP Vocoder and history interesting: https://melp.org/ (MIL-STD-3005)
MELP targets 2.4 kbps, and there are later examples for even lower bitrates (e.g. 1.2 kbps, 600 bps).
One thing to keep in mind is that a lot of these codecs or technologies are designed to operate at very low bit rates because the signal carrier is required to be capable of operation in "contested environments" where jamming and/or other environmental effects are present (and throughput potential is the tradeoff for assurance).
Yes, that's very fast. The same query on Groq, which is known for its fast AI inference, got 249 tokens/s, and 25 tokens/s on Together.ai. However, it's unclear what (if any) quantization was used and it's just a spot check, not a true benchmark.
https://www.theverge.com/24167865/google-zero-search-crash-h...
Except for The Verge, of course.