Still puts them in a weak position to critique others on their use of words. We might like to hold the BBC to a higher standard but none of the big news sites are good at details like this.
True, I did make a mistake, but in my defense I’m but one person making a passing comment in an internet forum. Even then, if I had noticed my error before the edit window (which I do not control) expired, I would have issued a correction.
The BBC, on the other hand, is a major organisation employing professional writers and editors. It’s their job to inform clearly, not throw mud in people’s faces with the kind of indirect wording used to conceal intentions.
The situations are not in the same category. I made a mistake in word usage; the article’s title is manipulating meaning, using public-relations-style wording carefully chosen to minimise backlash.
Furthermore, I don’t think you need to be perfect to point out imperfection. It is perfectly valid to go to a restaurant and say “this pizza tastes bad” even if you don’t know how to cook.
Grammatically correct, but missing the narrative subtext.
"Job losses" is a passive construction because it hides the fact that the agent - Amazon - made a deliberate and conscious decision to destroy these jobs.
People do occasionally lose things deliberately, but usually it happens through carelessness or accident, often with associated regret.
This is an example of framing, where a narrative spin is put on events.
"Amazon destroyed 14,000 jobs" would be far more accurate. But we never see that construction from corporate-controlled media outlets.
Companies create jobs. They never destroy them. "Losses" always happen because of regrettable circumstances or outside forces.
The company's hand is always forced. It's never framed as a choice made out of greed (the truth), but as something compelled by a plausible excuse.
The next pointer doesn’t have to go first in the structure here. It can go anywhere, and you can use @fieldParentPtr to go back from a reference to the embedded node to a reference to the structure.
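To illustrate, here's a minimal sketch with a hypothetical Task type containing an embedded Node (assuming a recent Zig where @fieldParentPtr infers the parent type from the result location; older releases take the parent type as an extra first argument):

    const std = @import("std");

    // Intrusive node: the next pointer lives inside the user's struct.
    const Node = struct {
        next: ?*Node = null,
    };

    const Task = struct {
        id: u32,
        node: Node = .{}, // embedded anywhere in the struct, not necessarily first
    };

    pub fn main() void {
        const task = Task{ .id = 42 };
        const n: *const Node = &task.node;

        // Go back from a reference to the embedded node to a reference to the structure.
        const parent: *const Task = @fieldParentPtr("node", n);
        std.debug.print("task id = {}\n", .{parent.id});
    }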
For problems in the plane, it's natural to pick two coordinate functions and treat other quantities as functions of these. For example, you might pick x and y, or r and θ, or the distances from two different points, or...
In thermodynamics, there often isn't really one "best" choice of two coordinate functions among the many possibilities (pressure, temperature, volume, energy, entropy... these are the most common, but you could use arbitrarily many others in principle), and it's natural to switch between these coordinates even within a single problem.
Coming back to the more familiar x, y, r, and θ, you can visualize these 4 coordinate functions by plotting iso-contours for each of them in the plane. Holding one of these coordinate functions constant picks out a curve (its iso-contour) through a given point. Derivatives taken with that coordinate held constant are ratios of changes in the other coordinates along this iso-contour.
For example, you can think of evaluating dr/dx along a curve of constant y or along a curve of constant θ, and these are different.
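To make that concrete, take r = sqrt(x^2 + y^2) with θ the usual polar angle; the two derivatives really do come out different:

    r^2 = x^2 + y^2   \;\Rightarrow\;  (\partial r / \partial x)_y = x/r = \cos\theta
    x   = r\cos\theta \;\Rightarrow\;  (\partial r / \partial x)_\theta = 1/\cos\theta

So the two are reciprocals of each other, agreeing only on the x-axis where cos θ = ±1.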
I first really understood this way of thinking from an unpublished book chapter of Jaynes [1]. Gibbs' "Graphical Methods in the Thermodynamics of Fluids" [2] is also a very interesting discussion of different ways of representing thermodynamic processes by diagrams in the plane. His companion paper, "A method of geometrical representation of the thermodynamic properties of substances by means of surfaces," describes an alternative representation as a surface embedded in a larger space, and these two different pictures are complementary and both very useful.
Instead of differentiating c^(-xn) w.r.t. x to pull down factors of n (and inconvenient logarithms of c), you can use (z d/dz) z^n = n z^n to pull down factors of n with no inconvenient logarithms. Then you can set z=1/2 at the end to get the desired summand here. This approach makes it more obvious that the answer will be rational.
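For the simplest case (a single factor of n; the particular sum from the thread isn't reproduced here, so this is just the pattern):

    \sum_{n \ge 1} z^n = \frac{z}{1-z}
    \left( z \frac{d}{dz} \right) \sum_{n \ge 1} z^n = \sum_{n \ge 1} n z^n = \frac{z}{(1-z)^2}
    \text{at } z = \tfrac{1}{2}: \quad \sum_{n \ge 1} \frac{n}{2^n} = \frac{1/2}{1/4} = 2

Higher powers of n just mean applying z d/dz repeatedly, which always lands on a rational function of z, hence a rational value at z = 1/2.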
This is effectively what OP does, but it is phrased there in terms of properties of the Li function, which makes it seem a little more exotic than thinking just in terms of differentiating power functions.
Yeah, differentiating these infinite sums to pull down polynomial factors is a familiar trick.
It shows up in basic moment generating function manipulations (e.g., computing higher moments of random variables), in z-transforms in signal processing (z-transforms of integrals or derivatives), and (a little less obviously, but it's the same idea) in Fourier analysis.
The concept applies to any moment generating function, z-transform, whatever. It’s clearest for the geometric distribution, where the distribution itself has the geometric form (https://mathworld.wolfram.com/GeometricDistribution.html, around equation 6).
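As an illustration (using the convention P(N = n) = p(1-p)^(n-1) for n ≥ 1, i.e. N counts trials until the first success; conventions for where n starts differ between sources), the mean drops out of exactly this differentiation trick, with q = 1 - p:

    E[N] = \sum_{n \ge 1} n\, p\, q^{n-1}
         = p \, \frac{d}{dq} \sum_{n \ge 1} q^n
         = p \, \frac{d}{dq} \frac{q}{1-q}
         = \frac{p}{(1-q)^2}
         = \frac{1}{p}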
I agree that the Li function seems like a detour, but maybe it can make some of the manipulation easier?
> Mind that all of this does not impose how we actually scale temperature.
> How we scale temperature comes from practical applications such as thermal expansion being linear with temperature on small scales.
An absolute scale for temperature is determined (up to proportionality) by the maximal efficiency of a heat engine operating between two reservoirs: e = 1 - T2/T1.
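Spelling that out: for a reversible engine, the ratio of the heats exchanged with the two reservoirs depends only on the reservoirs themselves, and that ratio is what defines the absolute temperature ratio:

    \frac{Q_2}{Q_1} = \frac{T_2}{T_1}
    \quad\Rightarrow\quad
    e = \frac{W}{Q_1} = \frac{Q_1 - Q_2}{Q_1} = 1 - \frac{T_2}{T_1}

Any rescaling T → cT leaves the ratio, and hence the efficiency, unchanged, which is the sense in which the scale is fixed only up to proportionality.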
This might seem like a practical application, but intellectually, it’s an important abstraction away from the properties of any particular system to a constraint on all possible physical systems. This was an important step on the historical path to a modern conception of entropy and the second law of thermodynamics [2].
KaTeX supports server-side rendering to an HTML string. If you do this, the client only needs to load the CSS component of KaTeX, and not the JS component.
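A minimal sketch of what that looks like at build/server time, using katex.renderToString (the exact bundler or framework wiring will vary):

    // Server or build step: render the TeX source to an HTML string.
    const katex = require("katex");

    const html = katex.renderToString("\\int_0^1 x^2\\,dx = \\tfrac{1}{3}", {
      displayMode: true,
      throwOnError: false,
    });

    // Embed `html` in the page you serve. The client then only needs katex.css
    // (and the KaTeX fonts it references); no KaTeX JavaScript has to ship.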
Entropy in Statistical Mechanics is a quantity associated with a probability distribution over states of the system. In classical mechanics, this is a probability distribution over the phase space of the system.
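Concretely, in the classical case this is the Gibbs entropy of the phase-space density ρ (up to the usual additive constant from the choice of phase-space measure):

    S[\rho] = -k_B \int \rho(q, p)\, \ln \rho(q, p)\; dq\, dp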
Two probability distributions with different entropy can both assign finite probability density to the same state, so an increase in entropy does not preclude the possibility of the system returning to its initial state.
A great deal of confusion about entropy arises from imagining it as a function of the microstate of a system (in classical mechanics, a point in phase space) when it is actually a function of a probability distribution over possible states of a system.
A further wrinkle: Liouville's Theorem [0] shows that evolution under classical mechanics is _entropy preserving_ (because the evolution preserves local phase space density, and entropy is a function of this density). An analogous result applies to quantum mechanics. However, a simple probability distribution parametrized by a few macroscopic parameters rapidly becomes very complex as it evolves in time. When we imagine the entropy of an isolated classical system increasing over time, the meaning is that if we want to model the (very complicated) evolved probability distribution with a simple probability distribution (describable in terms of a few macroscopic parameters), the simple distribution must have entropy greater than or equal to that of the complex evolved distribution, which in turn equals the entropy of the original distribution before evolution.
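In symbols, the preservation statement is just that the density is constant along trajectories, so any functional of it, including the entropy above, doesn't change:

    \frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \{\rho, H\} = 0
    \quad\Rightarrow\quad
    \frac{d}{dt} S[\rho] = 0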
It's difficult to reconcile the idea that entropy is a function of a probability distribution (not a function of a system's microstate) with the idea that thermodynamic entropy is an experimentally measurable (kind of...) property of a system. Jaynes' "The Evolution of Carnot's Principle" [1] is the clearest description I've seen of the relationship between thermodynamic entropy and statistical-mechanical/information-theoretic entropy. Many of Jaynes' other papers [2] on this topic are also illuminating.
“Amazon confirms 14,000 job losses” is not an example of the passive voice.
“14,000 workers were fired by Amazon” is an example of the passive voice.
There is not a 1:1 relationship between being vague about agency and using the passive voice.