Ask HN: Whatever happened to Wolfram Alpha?
340 points by zandorg on Nov 6, 2021 | 264 comments
I did a search on comments on HN for Wolfram Alpha. Most posts are 8 years old, none newer, some older.

What's going on? Did Wolfram Alpha stop being useful, or did people just forget about it?



I use it regularly. Sometimes it’s broken, and maybe nobody notices but me? :)

Their natural language queries for things that I know they know about are amazing. Here are some that I have used recently. You really need to see these results to appreciate them.

I wanted to know how tall my daughter might be.

   8 year old female 55 lbs
http://www.wolframalpha.com/input/?i=8%20year%20old%20female...

I wanted to know the nutrition content of an egg sandwich.

   1 egg, two slices whole wheat bread, one slice of cheddar, two pieces of bacon
http://www.wolframalpha.com/input/?i=1%20egg%2C%20two%20slic...

I was curious about the relative usage of two names over time.

   Michael, Henry
http://www.wolframalpha.com/input/?i=Michael%2C%20Henry


Also a frequent WA user. I use it for things I could calculate myself, but that are much faster to just ask about in plain text.

How much that cloud instance really costs

  $0.03/hr * 1 month
Bandwidth calculations for hosting providers

  10 TB per month in Mbps


you might want to try units(1).

https://www.gnu.org/software/units/units.html

the input language is less flexible than wolframalpha/google, but i quickly got used to it. it's nice to have something local and reliable. you can also define custom units.

i prefer using it in terse mode:

    $ units -t 0.03$/hr*1month
    21.914532 US$
    $ units -t 10TB/month Mbps
    30.421214
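For anyone without units(1) installed, the same two conversions can be sanity-checked in plain Python. This is just a sketch of the arithmetic units(1) performs; note that it defines a month as 1/12 of a tropical year (~30.44 days), which is why the first result isn't exactly 0.03 × 24 × 30:

```python
# Sanity-check the two units(1) results above with plain SI arithmetic.
# units(1) defines "month" as 1/12 of a tropical year, i.e. ~30.44 days.
HOURS_PER_MONTH = 365.2422 * 24 / 12      # ~730.48 hours

# $0.03/hr * 1 month -> US$
cost = 0.03 * HOURS_PER_MONTH
print(f"{cost:.6f} US$")                  # ~21.9145 US$

# 10 TB/month -> Mbps (TB = 10**12 bytes, 8 bits per byte)
mbps = 10e12 * 8 / (HOURS_PER_MONTH * 3600) / 1e6
print(f"{mbps:.6f} Mbps")                 # ~30.4212 Mbps
```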


qalc[1] is also quite nice if you're looking for a command-line calculator; it handles units well, has some other fancy features, and has a very lax parser, which i find to be a huge plus.

    $ qalc '0.03$/hr*1month'
    error: "r" is not a valid variable/function/unit.
    (0.03 × (USD / hour)) × (1 × month) = $21.915

    $ qalc '0.03$/h*1month -> CAD'
    (0.03 × (USD / hour)) × (1 × month) ≈ CAD 27.28388011

    $ qalc '10TB/month -> Mbit/s'
    10 × (terabyte / month) ≈ 30.42056430 megabits/s

    $ qalc 'integrate(x+x^2)'
    integrate(x + (x^2)) = x^3 / 3 + x^2 / 2 + C
[1] https://qalculate.github.io/manual/qalc.html


Units advises there are 118.20896 smoots per furlong.


Wouldn’t that be properly expressed as 118 Smoots plus 5 Ears?


people still measuring in smoots smh


For calculator problems like that, I use J:

   */ 0.03 24 30
21.6

   1e6 %~ (10e12 * 8) % */ 30 24 3600
30.8642

Usually requires some massaging, but still takes seconds.


While APL dialects are very nice for this sort of thing, they generally don't understand units of measure or know about physical constants; you have to put those into them yourself. Here are some of my recent units(1) queries:

    141 pounds force 30 mm  # in joules
    1160/4
    log(3)/3/(log(2)/2)  # how much more efficient is one-hot ternary than one-hot binary?
    5V 7 μs / 7.3 A
    .0117% half avogadro mol / 1.251e9 years / (potassium+chlorine)g  # how radioactive is lite salt?
    3.27$/gallon  # in $/liter
    sqrt(2 2000 electronvolt/electronmass)
    18.8 foot pounds force # in joules
    163$/(7.9 g/cc * 1500 mm 3000 mm 3.2 mm)  # cold rolled steel price is higher than steel sold by weight
    m3/4 / 15 cfh
    2 pi sqrt(200 um / gravity)
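As a worked example of the arithmetic units(1) does under the hood, the last query above is just the small-angle pendulum period T = 2π√(L/g), here in plain Python with assumed SI values:

```python
import math

# The last units(1) query, 2 pi sqrt(200 um / gravity), is the small-angle
# pendulum period T = 2*pi*sqrt(L/g) for a 200-micrometre pendulum length.
L = 200e-6       # pendulum length in metres (200 um)
g = 9.80665      # standard gravity in m/s^2
T = 2 * math.pi * math.sqrt(L / g)
print(f"T = {T * 1000:.1f} ms")    # ~28.4 ms
```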


Julia is also excellent for working with units (Units.jl, IIRC).


Unitful.jl


Yeah, it's great for these types of things. It also has a bunch of values built in, so you can do things like:

  (day length of jupiter) * 80


Same here, the way it seamlessly wrangles even the most ridiculous combinations of units is insanely useful. Just yesterday I used it to calculate power consumption for a house by timing one of those spinning-wheel meter things. Something like "(10 rot / 46 s) / (375 rot / kW*h)", and it gave me a straight answer in watts.
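For the curious, that meter query reduces to a one-line calculation (a sketch, using 1 kWh = 3.6 MJ):

```python
# (10 rot / 46 s) / (375 rot / kWh) in watts, worked by hand:
# each rotation of the disc corresponds to 1/375 kWh = 3.6e6/375 J.
joules_per_rot = 3.6e6 / 375            # 9600 J per rotation
watts = (10 / 46) * joules_per_rot      # rotations/s * joules/rotation
print(f"{watts:.0f} W")                 # ~2087 W
```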

I definitely could've worked that out by hand, but it would've taken a minute or a few, mostly on unit conversions. With WA, I can just think in variable relationships and not worry about units at all.

Don't get me wrong, it often returns complete garbage, see all the memes of Siri passing non-math questions to it. It's annoying to figure out or explain to someone because the syntax is very loose and you just kind of need to get a feel for it, but once you do, it's really powerful.


I use Google for those pretty often.


I do the same for basic calculations. I was surprised that things like "9:00 EST in CET" don't work in Google search, but do in WA.



You are absolutely right. I misremembered. Using GMT does indeed work only on WA:

9:00 GMT-7 in CET

https://www.google.com/search?channel=fs&client=ubuntu&q=9%3...

https://www.wolframalpha.com/input/?i=9%3A00+GMT-7+in+CET


Google handles GMT and UTC but doesn't handle offsets from there and, frankly, it's understandable and I wouldn't bother either. What it does handle though is countries and their DST settings:

> 9:00 UTC in Thailand


Ohh does WA respect time zones in searches?

A regular complaint I have with google is (simplified example) converting EDT to MST. Google will “helpfully” correct me and convert EDT to MDT instead, which is explicitly not what I asked for. It’s stupid (I can usually figure it out on my own) but that would be a huge win for me.


You also might want to try google search. They display calculations for these particular queries and quite a few more.


Google is also often wrong. For example, my computer is set to US English, as is my profile. Yet somehow it still gets confused on decimals and commas (they are switched in my current country’s locale).


Also, if you use the Firefox search bar, only the first 20 chars are sent to Google, so longer calculations are truncated before they're calculated and wrong answers come back with no warning. Not a Google problem per se, but still a risk of using Google as a calculator.


or frink [1], which started off as a tool like the others mentioned here, but is now a full-fledged units-based programming language. see some examples of it here [2]

1: https://frinklang.org/ 2: https://frinklang.org/#SampleCalculations

It's been on HN before.


The sandwich example was brilliant! I never expected that to be possible. (The example of packing smaller circles in a larger one in another comment is also brilliant, but less useful for me today, I think.)


Just asked a friend about this:

> 1 egg, two slices whole wheat bread, one slice of cheddar, two.. leaves of lettuce ..

and he said it's wrong and useless (!) - giving me examples and numbers as:

protein assimilability from bread is 40% etc.

Is there a way to get correct answers from Wolfram regarding this ?

(querying for "assimilability of" doesn't work)

Edit: Excuse me, what's wrong with you downvoters - it's a legit question. Or is there something wrong with assimilability? Are you happy being off with your answers by 60% - or jealous that a human can have better answers?


Wolfram isn't reporting how much protein you'll get from eating something; it's reporting how much there is in the bread. Protein assimilation depends on a huge range of factors, and varies significantly between individuals (based on everything from gut microbiome to health factors to how much you chew your food to your saliva production to... Well, it's a long list). There's no way a website could report the amount of protein you will get from bread. Reporting how much is in the bread makes much more sense. It's a shame your friend didn't explain that.

This is something that actually annoys me immensely when people say "you eat too much!" to fat people. Two people can have the exact same diet and the exact same exercise regime, and if one assimilates particular foods more effectively they'll be getting more calories, and put on weight. Food intake is far more complex than many people believe.


>This is something that actually annoys me immensely when people say "you eat too much!" to fat people. Two people can have the exact same diet and the exact same exercise regime, and if one assimilates particular foods more effectively they'll be getting more calories, and put on weight. Food intake is far more complex than many people believe.

I don't see why that statement is inaccurate. It's not "you eat more than me" but "you eat too much." As in you eat too much versus how much your body is able to burn of the calories it assimilates.


The problem is that it is usually presented as a "simple" solution. "Just eat less. Reduce your food intake until you're at a calorie deficit." For some people, that can mean eating three small but satisfying meals a day. For others, it can mean eating extremely strict rations for only two meals a day, leaving the person constantly hungry and cranky. Then it becomes a willpower issue, which, as we all know, is a function of brain energy reserves (right, we all know that, right?!). Throw in a mentally challenging job versus just phoning it in, and it's really not actionable advice.


I agree that a dramatic change is very difficult and the level of difficulty varies from person to person. However, obesity is probably one of the worst long term health predictors. If it leads to diabetes, almost all outcomes get much worse. The change is worth the difficulty.

For me, I quantified what I was eating and simply reduced it a bit by careful tracking. I also did quite a bit of relatively low heart rate exercise and did do some shift of the calories away from carbs. I also identified some intake that was purely habit and not sustaining, like late evening snacks, and eliminated or modified those. Lost 35 pounds in a few months. It may take a while, but the math works over time. It is relatively simple, but it is not easy. I kind of turned it into a game and that helped a bit. At any rate, I wish anyone who decides to try the best of luck.


It's a willpower issue for 3 days, the time it takes for your stomach and appetite to readjust to a lower volume of food intake. Anyone who's fasted knows how easy skipping meals is--it's certainly not the agonizing test of willpower you and many nonfasters seem to think it is.

And by the way, if diet and exercise are not the path to weight loss, then what is?


You also don't need to eat that much less if you're at a stable weight. 10% less a day means you lose a pound every 1-2 weeks. In my experience people seem to not like it when you tell them, after they ask, that you lost weight simply by eating a bit less every day consistently for a year.
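That figure is easy to check against the common (admittedly rough) rule of thumb of ~3500 kcal per pound of body fat; the maintenance intakes below are illustrative assumptions:

```python
# Rough check of "10% less a day = roughly a pound every week or two".
# Assumes ~3500 kcal per pound of fat (a common rule of thumb) and a
# range of illustrative daily maintenance intakes.
KCAL_PER_POUND = 3500

def days_per_pound(maintenance_kcal, cut_fraction=0.10):
    """Days to lose one pound at the given fractional daily deficit."""
    return KCAL_PER_POUND / (cut_fraction * maintenance_kcal)

for intake in (2000, 2500, 3000):
    print(f"{intake} kcal/day -> one pound every {days_per_pound(intake):.1f} days")
# 2000 -> 17.5 days, 2500 -> 14.0 days, 3000 -> 11.7 days
```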


Trying currently to lose weight: the reason I don't 'like' this answer is that I don't track my food intake closely enough to know what removing 10% means.

So I guess the first step is to write down everything you eat in a way you can monitor, so you can reduce it by a small amount if necessary.

Any advice on how to do it?


Don't do a bunch of tracking: it's too much effort and you'll have a hard time sticking with it. Try 16:8 fasting (you can only eat within an 8 hour period each day). I also recommend reading this post to understand how the body works in terms of weight loss: https://karpathy.github.io/2020/06/11/biohacking-lite/


You don't really need to track your intake perpetually, as I see it. But measure your weight weekly, at the same time and day, to better account for water/food/etc.

If your weight is not going down then try to eat somewhat less. Maybe skip a side or order a salad instead of fries or get 1% milk with your coffee. Or cut a potato from your dinner if you're cooking.

That said, tracking for a while is good to figure out what you can cut since you may not realize how much you eat (snacks, night snacks, soda, etc.).

In my case I stopped eating those free chips at work and stopped drinking a can of coke with lunch. I also tried to avoid large dinners but just large enough ones that I wouldn't go to sleep feeling hungry.


Get a food scale and measure everything for a few days, storing the information in an app like Cronometer.


Diet and exercise are indeed not the path to weight loss. This is well known: most fad diets work this way in some fashion or another, and it’s well known that most fad diets fail.

Since the question of why people have this misapprehension touches my field, physics ("a calorie is a calorie" is an attempt at a thermodynamic statement), I feel somewhat qualified to talk about part of this, even though I am not an endocrinologist or a nutritionist; they would have better answers for you in many other respects.

Thermodynamics is necessary but not sufficient to understand the problem. There are many physical problems with ending the explanation there.

The first is that it ignores equilibrium. The claim is that I can diet and exercise down to the weight that I want and then return to the lifestyle that I had before, but maintain this new weight. That is, when you say diet and exercise you are talking about temporary interventions, and no temporary intervention is going to permanently disrupt the equilibrium.

Put another way, most people calculate a basal metabolic rate or total daily energy expenditure at their present weight and leave it at that. If you're a physicist, you want to calculate it at two different weights and look at the slope between them, which has units of kJ/s/kg; but a kg of fat also maps to a certain number of kJ, so this is actually a time constant of something like a year. Some crude differential equations then suggest that this time constant is something like the half-life of your weight: if you start living like someone who is 50 lb lighter than you, after a decent chunk of a year you will be 25 lb lighter, then 37.5 lb lighter after another, and so on. Basically, we regress to a weight set by lifestyle.

So the focus on an intervention is wrong. Instead one needs to focus on a whole lifestyle shift. You need to focus on setting a new equilibrium, not on burning calories.
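That "half-life of your weight" picture can be written down as a one-line toy model (a sketch of the reasoning only, not physiology):

```python
# Toy model of the point above: weight relaxes exponentially toward an
# equilibrium set by lifestyle, with a half-life on the order of a year.
# This is just the linear ODE dW/dt = -k (W - W_eq), not physiology.
def weight_after(t_years, w_start, w_equilibrium, half_life_years=1.0):
    gap = w_start - w_equilibrium
    return w_equilibrium + gap * 0.5 ** (t_years / half_life_years)

# Start living like someone 50 lb lighter (200 lb -> 150 lb lifestyle):
print(weight_after(1, 200, 150))   # 175.0: 25 lb lighter after one half-life
print(weight_after(2, 200, 150))   # 162.5: 37.5 lb lighter after two
```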

But this is a really crude model and that gets into the second point, which is that you are assuming that the system is linear, like an electronic circuit made only out of inductors and capacitors and resistors. The problem is, it is not, it is in fact a complex system of feedback loops braided together. Picture’s worth a thousand words here,

http://biochemical-pathways.com/#/map/1

You know, that thing.

Once you have feedback loops, there is no guarantee that changing the input voltage to an electronic circuit by 10% will reduce some voltage observed inside the system by 10%. It might, it might not. Changing a complex system requires a fundamentally different approach. Often to change one output, the entire system needs to be reconfigured.

As a direct consequence of this, it turns out that most people who go on diet plans hit "the wall." At the wall, the feedback loops in your body are downregulating your basal metabolism and your perception of available energy. They are jacking up hormones that make you hungry, and also inducing you to wear more sweaters and other such things. They impel you to have "cheat days." Part of the cause may be that your body does not know how to burn just fat: if your body runs out of energy, it starts burning everything, both fat and muscle, to make that energy. As a result, if you don't target your exercise and diet to build muscle, losing weight quickly can actually drop your lean muscle mass, and your body reacts to this global damage by telling you that you're sick, because you are. At least, that's one explanation I have seen; I am not a doctor and do not have any qualifications in this area. For all I know, maybe the body is using your fat to try to sequester some sort of toxin or pollutant from the environment, and suddenly dropping the weight releases all of this crap into your blood, and that's the reason your body suddenly wants to put the weight back on. Don't ask me these questions.

These sorts of feedback loops are why I would recommend listening to endocrinologists; the endocrine system is a signaling system in the body, so these people are keenly aware of all of these feedback loops and how they reinforce each other. In his recent Metabolical, Dr. Lustig, a research endocrinologist, suggests that focusing on weight for health outcomes is actually totally backwards anyway: there are more thin sick people than fat sick people in absolute numbers, and sickness should come first, with weight probably just a symptom that some people don't express. He gives some better advice about the benefits of healthy eating: studies where they kept calorie consumption and weight the same and demonstrated huge improvements in health markers, simply by switching out sugary kid food for starchy kid food. Stuff like that.

The insight from complex systems is that telling people to focus on diet and exercise is deeply blaming and that blame might drive shame spirals that are causing the problem in the first place, which is again where I have to step back and hand the problem over to psychologists this time. Viewed this way the problem is that you have an unhealthy relationship with food, and it is unlikely that telling you to diet and exercise is going to magically make it a healthy relationship with food. Mindfulness exercises while eating could for example be a better option. Telling people to eat when they are hungry, but they have to put it on a plate and sit in a dining room and put away their phone and enjoy the food with gusto and stop when they are full: this might help with these binges.


> Diet and exercise are indeed not the path to weight loss. This is well known: most fad diets work this way in some fashion or another, and it’s well known that most fad diets fail.

I think you're misunderstanding what I'm saying. I'm not suggesting people go on keto or weight watchers. Those fad diets don't necessarily fail because they're ineffective, although they probably are--they fail because they are highly prescriptive and restrictive and it's difficult for people to actually execute the diet.

What I'm saying is that reducing total food intake for 3 days creates a lasting decrease in appetite. You can prove this to yourself by skipping breakfast for a few days: after a while, you will simply not be hungry at breakfast time.

The best post I've ever seen on weight loss comes from Andrej Karpathy, head of AI at Tesla: https://karpathy.github.io/2020/06/11/biohacking-lite/


I skipped breakfast (and lunch) for two years. I learned that I could push back against the hunger pain, but I was hungry, and my appetite did not decrease (I was hoping it would).


Americans love to come up with complicated theories about why it's impossible for them to lose weight.

Go to Europe or East Asia and you’ll see it’s definitely possible and definitely influenced by diet (as in what you eat).

The reasons people are fat here are the huge serving sizes, the corn/meat/milk subsidies, and car culture.


I mean, I could say the same thing about quitting cigarettes. I absolutely realize it’s hard to do, and that’s why so many people still smoke. But my advice would be the same...


The problem here is that you think you're giving people advice when really you're just telling them what to do. The difference is that advice comes with kindness, compassion, an understanding of how the advice is affected by someone's situation and context, and deep knowledge of the subject you're advising about.

Equating changes to diet for weight reduction to quitting cigarettes shows you probably don't have that.


How does the poster comparing two difficult life changes indicate a lack of compassion?


He's telling them what will actually work, not simply what to do.

Eat more than your base calories and you will be fat. Smoke and you will be unhealthy.

Reality can't be expected to be kind, compassionate and understanding.

It's simply reality, and that's the way it is.


... and that is also extremely bad advice. There is very wide variability in how nicotine addiction affects different people. Some people can quit after a pack-a-day habit and have no problems. Some people have trouble with getting off a pack-a-week habit.

Someone who is having extreme difficulty quitting smoking could benefit from working with a doctor to discuss quit-smoking aids or even seeing a therapist to work through their addiction.

No shit, the person needs to "just stop". Way to point out the obvious. Most people don't have a "just stop" button.

IDK, maybe you're just bad at giving advice. Maybe you should just stop.

EDIT: this is seriously an article on The Onion in the making. "Nation wakes up to random forum poster telling them to 'just eat less'. Obesity epidemic ends overnight." The proof is in the pudding here. Telling people "just eat less" is shitty advice.


People need to develop agency in actually doing something to lose weight which essentially comes down to eating less. It might be painful in the short term but is a huge benefit in the long term.

All I've found online is people giving excuses as to why one body type cannot do this or that, which are essentially the same reasons smokers give when trying to quit (too stressed, can't quit cold turkey, etc.).


It’s true. Most of us have vices. Psychologically it’s more comfortable to make excuses up. But we are responsible for our own behavior.


It really is as simple as “eat less”. Input / Output.

The obesity epidemic is complex but big factors include poor decision making, psychological issues, sugar sugar sugar.

Still, at the end of the day it’s input / output. You can’t gain weight by sucking in too much air.


> Someone who is having extreme difficulty quitting smoking could benefit from working with a doctor to discuss quit-smoking aids or even seeing a therapist to work through their addiction.

Oh—absolutely! As I said, it’s hard, and frequently requires professional help, strategies, etc.

But, it ultimately comes down to, you have to find a way to quit! You shouldn’t let yourself off the hook.


Cravings don't always go away; telling someone to just eat less can mean telling them they have to be hungry for the rest of their lives.


Is that true? I was actually under the impression that people's appetites do eventually adjust (especially if you reduce your intake slowly), although it can take years.


Except it is, as I see it. The goal is to eat less, and to achieve that you need to figure out what you can eat less of that will still leave you feeling fed. Sure, it doesn't apply to everyone, but nothing does.

For example, for me, 600 calories' worth of chips will keep me feeling fed for an hour or two. 600 calories' worth of pure brisket can keep me feeling fed for 8 hours. You can guess which I tend to eat more of when I'm trying to lose weight.

edit: Also if you're at a stable weight then we're talking 10% less food per day and not 50% less.


For that to be true, the base metabolic rate would need to be vastly different between individuals. While it is true that there is significant variation (according to Wikipedia, more than 100%), most of that variation (60%) seems to be explained by differences in lean body mass, which is the other side of losing weight: exercise. From those results I would argue there is little evidence that some people would have to cut their diet to almost nothing while others could almost continue eating like before.


> it's really not actionable advice.

It really is though. It is hard and requires discipline but it’s actionable.


So you're saying that "fat people eat too much" is a statistically inaccurate statement?


I'm saying that someone can eat what is defined as "a healthy diet" and still gain weight.


Not if "too much" is relative to their physiology. How much is too much varies for each person.


>protein assimilability from bread is 40%

This is wrong; the digestibility of gluten is 80-90%. Your friend was probably thinking of the PDCAAS, which is more like 45 for gluten. But this is nutritional quality vs an egg white equivalent as defined by the bioavailability and concentration of essential amino acids (egg = 100 by definition; the score is based on the lowest fraction of any EAA, so gelatin — no tryptophan — has PDCAAS 0), not the fraction absorbed or utilized. For an idea of what utilization looks like see e.g.:

https://mdpi-res.com/d_attachment/nutrients/nutrients-10-001...


> The net protein utilization is profoundly affected by the limiting amino acid content—the essential amino acid found in the smallest quantity in the foodstuff. It is therefore a good idea to mix foodstuffs that have different weaknesses in their essential amino acid distributions.

> The limiting amino acid for wheat is lysine.

From what I gather, you can still process all of the protein from wheat if you get lysine from somewhere else:

> A vegetarian or low animal protein diet can be adequate for protein, including lysine, if it includes both cereal grains and legumes.

This also means that any statements about protein utilization from compound meals are more-or-less bogus if done without calculating the different amino acids.
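The limiting-amino-acid logic described above (and the PDCAAS scoring mentioned in a sibling comment) can be sketched in a few lines. All the numbers here are illustrative placeholders, not real food data:

```python
# Sketch of a PDCAAS-style score: the *lowest* ratio of each essential
# amino acid to a reference pattern (capped at 1), times digestibility.
# The reference values and food profiles below are made-up illustrations.
REFERENCE_MG_PER_G = {"lysine": 58, "tryptophan": 6}   # hypothetical reference pattern

def pdcaas_style_score(aa_mg_per_g, digestibility):
    """Score limited by the scarcest essential amino acid."""
    limiting = min(aa_mg_per_g[aa] / REFERENCE_MG_PER_G[aa]
                   for aa in REFERENCE_MG_PER_G)
    return min(1.0, limiting) * digestibility

# A wheat-like profile: enough tryptophan, short on lysine -> lysine limits.
print(pdcaas_style_score({"lysine": 25, "tryptophan": 12}, 0.9))
# A gelatin-like profile: zero tryptophan -> score 0, as noted upthread.
print(pdcaas_style_score({"lysine": 30, "tryptophan": 0}, 0.9))   # 0.0
```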


It might help to explain what your friend says is “wrong and useless” so others could provide feedback.

It also might help to avoid insinuating things about strangers online in order to promote discussion and not stifle it.


Downvotes started before my edit. I've paid with karma and got an answer. Thanks for your human feedback.


You can improve the protein assimilability of bread by combining it with a high-lysine protein (and I'm not sure but I think eggs and cheddar might fit the bill) but you may not care if you're looking for low-glycemic-index low-fat calories rather than amino acids specifically.

I don't know what's wrong with the downvoters.


It seems that Wolfram Alpha also has some difficulty figuring out whether I'm talking about raw oats or cooked oats, even when I use the word raw in my query. As a result, it can be off by a factor of 3. I agree that it's not useful if you have to carefully check the output every time.


I would expect it to work properly with the word oatmeal.


Yeah; I use it for the occasional repeating specialized query, but have never broadened my usage to anything more-general.


> You really need to see these results to appreciate them.

Seems more like the quality of the queries rather than the results. Many of the complaints I see about Google and friends are related to them dumbing down search for the global common denominator.


I really struggled with the "natural" language queries for this: https://www.wolframalpha.com/input/?i=+new+south+wales+covid...

Any advice on rephrasing it to work would be welcomed. The downside to allegedly natural-language query systems: there's no concise explanation of the syntax they recognise.



1. It's slow, even for simple microsecond computations like log(2). Takes about 5-20 seconds to load a page on my 1Gb fiber connection. Opening Python/SymPy Gamma is much faster for most things. https://gamma.sympy.org/input/?i=log%282%29

2. Every time I use it, a box saying

    NEW: Use textbook math notation to enter your math. TRY IT
pops up over the result, and clicking the X doesn't hide it the next time I search. This adds ~3 seconds to the result time.

3. I'm a long-term Mathematica user, but typing literal Mathematica syntax usually doesn't work, except for simple expressions.

4. Results are PNGs, and copy-pasting a numerical result takes a few unnecessary clicks ("Plain Text" > Copy).
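On point 1: the computation itself really is microsecond-scale locally, so the 5-20 seconds is almost entirely query parsing, scheduling, and rendering overhead. A quick stdlib check:

```python
import math
import timeit

# Time a million evaluations of log(2): the math is sub-microsecond,
# so a multi-second page load is overhead, not computation.
t = timeit.timeit(lambda: math.log(2), number=1_000_000)
print(f"math.log(2): {t / 1_000_000 * 1e9:.0f} ns per call")
print(math.log(2))   # 0.6931471805599453
```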


> Takes about 5-20 seconds to load a page on my 1Gb fiber connection

Wolfram Alpha is implemented in Mathematica, which --- to understate the situation --- was never intended as a high performance backend server language. I suspect that's the reason for the bad performance.

"As a result, the five million lines of Mathematica code that make up Wolfram|Alpha are equivalent to many tens of millions of lines of code in a lower-level language like C, Java, or Python." [1]

Sure, there's something to be said for implementing logic in high-level code, but without a plan for lowering that high-level logic to machine code in a way that performs well, you're setting yourself up for long-term pain.

[1] https://blog.wolframalpha.com/2009/05/01/the-secret-behind-t...


I doubt the bad performance is due to evaluating the expressions themselves. If I type N[Log[2]] into Mathematica, it evaluates in less than a millisecond. It's probably because Wolfram Alpha is using natural language processing to work out that by N[Log[2]] I mean N[Log[2]]. Or maybe it's not even that, but that their grid scheduler isn't optimized for sub-second latency.


Ha, hearing the word "process" in Wolfram's voice, there.


Big fan! No, I mean Stephen Wolfram is a big fan… of Stephen Wolfram


> Sure, there's something to be said for implementing logic in high-level code, but without a plan for lowering that high-level logic to machine code in a way that performs well, you're setting yourself up for long-term pain.

Whatever the reason for the performance issue (I don't know enough about WA to speculate what/why/how), I feel like noting the existence of the wolfram compiler[0] and the various language interfaces[1]. Anyone interested in using Mathematica/WL might get a kick out of exploring those more, at the very least.

[0] https://reference.wolfram.com/language/Compile/tutorial/Over...

[1] https://reference.wolfram.com/language/guide/CLanguageInterf... (a lot of the paclets are bindings for C libraries too)


Mathematica is extremely performant for most of the built-ins, the overhead of interpretation is nearly negligible for all but the tiniest operations.

There is also no reason to think that their request-response boilerplate is written in Mathematica; Mathematica is fully integrated with a lot of languages and runtimes.


> Opening Python/SymPy Gamma is much faster for most things.

Is there a way to make it plot multivariate functions? I tried but whenever I enter two variables it says "Cannot plot multivariate function." I've seen many Python packages plotting multivariate functions so I'm convinced it should be possible.


I don't think so. You'd need to run it in a terminal with something like

    from sympy import symbols
    from sympy.plotting import plot3d
    x, y = symbols('x y')
    plot3d(x*y, (x, -10, 10), (y, -10, 10))


I usually use Python for math stuff too; however, I think log(2) is maybe the wrong example. I basically got an instant result for it (just recorded this): https://imgur.com/a/g5slHsR


Your Internet bandwidth is not relevant when talking about a compute-heavy backend like this. Wolfram|Alpha is not going to load any faster on a 1Gbps connection than it will on a 20Mbps connection, other than some static assets, but even that isn't going to be hugely noticeable if we're talking about 2ms RTT on fibre vs 8-20ms RTT on cable/DSL. If you're downloading a giant file off a nearby CDN, then sure, 1Gbps fibre is useful. I can max out my 1400Mbps cable connection downloading things this way (it's mind-blowing...), and my latency to my upstream gateway outside of my house is 8ms. But Wolfram|Alpha isn't going to load 40% faster for me than it will for you since it's I/O bound and your end-to-end latency is waiting for the backend to complete your request.

I will say, though, that Wolfram|Alpha could be "optimised" in the sense that it could do less fancy JS and be a simple box with a submit button, like SymPy Gamma.


I think that's the point. "My internet speed is fast enough that it is not the cause of slowness, so any delay is all on Wolfram|Alpha."


Throughput is not latency, though. 1gbps on a dedicated line is not the same as 1gbps on an oversubscribed residential node.


If I didn't include that note, someone would say, "Is it slow because you're on 56kbps dial-up?"


Siri and Alexa pass a lot of questions to Wolfram Alpha.

When Apple first started using it, they were responsible for 25% of all WA traffic. With Alexa, I assume that the majority of WA's queries are coming from smart assistants at this point. (https://9to5mac.com/2012/02/07/four-months-in-siri-represent..., https://www.theverge.com/2018/12/20/18150654/alexa-wolfram-a...)


I used it for Calc 1 and 2. It helped me check my work on limits, derivatives, integrals, Riemann sums, series, and sequences. I love the part that says "Show Step By Step" because I can figure out in which step I made an error.

The answers in the back of the book didn't tell me step by step how to solve the problem. They just gave the answer, and many times I couldn't figure out which step I made the error in. Usually it was some dumb mistake, but by identifying the dumb mistake, I could remember to double-check that similar step in future problems.
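These days SymPy can do the answer-checking part locally (it won't show steps, though); a sketch with made-up exercises:

```python
from sympy import symbols, diff, integrate, simplify, sin, cos

x = symbols('x')

# Check a derivative worked by hand: d/dx [x**2 * sin(x)]
hand_answer = 2*x*sin(x) + x**2*cos(x)
assert simplify(diff(x**2*sin(x), x) - hand_answer) == 0  # 0 means they agree

# Check an integral: ∫ 2x dx = x**2 (+ C)
assert integrate(2*x, x) == x**2

print("hand answers check out")
```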

I had a hard time using it for Classical Physics to check my work.


Same. It also has a problem generator to practice different kinds of problems (https://www.wolframalpha.com/problem-generator/?scrollTo=Cal...). Note the step-by-step solution is paid.


Have you tried using SymPy? It's not as sophisticated as Mathematica but it's a lot more usable than Wolfram|Alpha.


Having someone, or a program, show you where you went wrong is a good way to learn nothing. All the learning comes with struggling when you almost have the answer.


Same, helped me quite a bit back when I was taking Calc 1 and 2 for that same reason.


I think the strategy of Wolfram Research has shifted from trying to sell Wolfram Alpha as a standalone service, to selling the Wolfram Language with WA functions for retrieving standard datasets. A finance professional, for example, probably did not gain much information from asking WA "would it be better to invest $100 in GOOG or FB in 2013?", but the `FinancialData` function for pulling end-of-day stock prices enabled these people to do interesting analysis that they couldn't have done otherwise.

(source: conjecture, but I did work at WR for 3 years and on the initial Wolfram|Alpha release)


What was your experience at WR? I’m curious what the sentiment is towards the new physics work by those who work there, do you stay in touch?


Overall very positive, Stephen is a brilliant visionary and the software (Mathematica back then) was the best thing for someone early in their career to work on. Some of the ideas I picked up around symbolic computing and functional programming were quite helpful later on, and the whole experience opened some doors that wouldn't have otherwise. It's almost been a decade, so I unfortunately don't have much insight into sentiment these days.


"how many 3mm circles pack in 15mm circle"

WA offers answers with drawings. Google cannot do that.

https://www.wolframalpha.com/input/?i=how+many+3mm+circles+p...


Of course, if you try the query with spheres you see Wolfram Alpha's typical catastrophic failure.

I love WA and use it all the time, but it's so hard to know when a query will work and when it won't. When it fails it fails hilariously.

Here's some of my favorite queries:

- https://www.wolframalpha.com/input/?i=2.2+bagels%2Fday+*+ave...

- https://www.wolframalpha.com/input/?i=time+dilation+given+v+...

- https://www.wolframalpha.com/input/?i=400+miles+%2F+20mpg+*+...

- https://www.wolframalpha.com/input/?i=US+unemployment+rate+v...

- https://www.wolframalpha.com/input/?i=warp+speed+6+in+deep+s...


For most of these, Wolfram|Alpha seems to give reasonable results. However, for the third one, because I'm in Argentina, it helpfully converts US$79.80 into Argentine pesos, getting an answer that's off by about a factor of 2: AR$7969.44. As https://preciodolarblue.com.ar/ explains, the current bid and ask prices for the dollar are AR$195 and AR$199. Wolfram|Alpha is apparently using the "official" rate of AR$99.45 or so; this is the rate at which the government converts your dollars into pesos if you are an exporter, but you cannot convert your pesos into dollars at this rate without special permission, granted, for example, if you are going on vacation to Disney World.
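A quick sanity check on that factor, using the rates quoted above:

```python
usd = 79.80
official_rate = 99.45        # the rate WA apparently uses
blue_mid = (195 + 199) / 2   # midpoint of the street ("blue") bid/ask

print(usd * official_rate)       # close to WA's AR$7969.44
print(usd * blue_mid)            # what you'd actually pay in pesos
print(blue_mid / official_rate)  # the "off by about a factor of 2"
```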


You're severely underselling Google's incapability, e.g. https://i.imgur.com/UoIZSU2.png


WTactualF? When has * ever been anything other than multiplication? Why would the resulting links all be discussing division?


I think Google search doesn't include the special symbols, so it's like searching "48 6".


I don't know, I was looking for "how to configure cors for specific vhost in nginx" and all I got was Apache SO links. Had to use -apache.


I get increasingly frustrated by the spammy SO mirror sites getting into the top 3 results.


Oh yeah, there's that too, and now I also get them in languages other than English, but they are just Google Translate versions of SO.


I recently noticed that a number of SO questions have even been turned into YouTube videos containing a slide-show of the answers :-(


I absolutely hate this. Scanning a video is so much more difficult than just scanning a written explanation. If these videos are being monetized, I think that's a problem: anyone could just create a channel by converting SO questions into videos.

The question I have is what kind of keywords people are using on these videos that Google feels are more worthwhile than the actual text of a written version of the content? Or is the algo just so heavily weighted to pick a YouTube link?


Having worked a stint in social media for 4 years, I saw this huge push from Facebook for publishers to churn out videos. I suppose Google's ranking algorithm favors YouTube, but I don't get Google's reasoning behind that. Engagement because of embedded ads?


Use verbatim search too. All words must exist without aliasing.

(google aliases ubuntu and debian, john/jon/Johnathan for example)


Sometimes even this doesn't work. I used verbatim search and got back results which didn't contain the word I looked for.

I then just sadly wonder how the heck this could be possible and resignedly slowly shake my head.

I could wish for a feature where I double-double quote the word to emphatically indicate that this word must exist in the result and not be left out under any circumstances. But then again, I am sure the search quality will continue to decline, and even double-double, triple-, or quadruple-quoted words &c won't help anymore. Sort of a quote inflation.


The + symbol used to mean this, then some lunatic woman from google explained to everyone it had been removed, but it was OK as quotes were the same.

She was either a highly incompetent buffoon, or a liar for PR purposes, as quotes are not the same.

Why the change? Because it caused issues with Google+ searches from their new fancy pants Facebook clone.

Soon after, due to protest, verbatim was introduced.

It was fine for at least 5 years, but someone keeps reducing its effectiveness.

Clowns. All I hear is clown music, when I Google search.

I mean, who rolls out a product so disjointed that the very search for its users is broken, then like a year later, rolls out a broken fix?!

Google, that's who. The product failure king.


>google aliases ubuntu and debian,

WHAT the FUCK. Is there a more convenient way to bypass this than "quoting" "every" "word?"


Yes: verbatim search, under Search Tools after an initial search.

Google takes quotes as just stronger suggestions, FYI, but verbatim is supposed to prevent this.


click tools -> show all results -> verbatim


i just tried this for: 48 * 6

the results after choosing verbatim are even worse


Verbatim gives no aliasing or interpretation: 48 means 48, not forty-eight.


This is correct. In this query, the '*' is being disregarded. Then, I assume, more people on the internet discuss 48 and 6 in the context of long division than in the context of multiplication.


I'm not certain but in this context * may be a wildcard.


It is. Try searching for 16*9: for good reason it shows both the calculator and then links to 16:9 and 16x9 aspect-ratio content.

It's reasonable to think that the calculator already answered the question, and I'm not looking for pages on the simple multiplication once I've already seen the answer.

Imagine the uproar if those results didn't come up because a bunch of children's math quizzes were found instead.


So what you're saying is that google sucks at context clues


It clearly understands enough to trigger displaying the calculator.


What we'll discover is there is a team dedicated to determining when to display the calculator. Then there will be another team entirely that picks how to interpret the query for website results. The two teams will never have met, spoken, exchanged information between the two. The team searching websites will mysteriously have never thought that someone might search a webpage for a math equation.


I don’t understand the problem. You are asking what is 48*6, and the correct answer is right at the top.


This is amazing! The rest of this thread completely buried the lead. Delightful.


Hope I'm not sounding like a grammar nazi, but it's "buried the lede". I only recently discovered this; the reason is really interesting: https://www.merriam-webster.com/words-at-play/bury-the-lede-...


As a non-native speaker I would welcome more "grammar nazis" in places where well educated native speakers can be found.

One of the reasons children learn new languages quite rapidly is because they get corrected the whole time.

Not correcting people actually hinders their progress in language learning… Even if it might seem impolite, it's the one thing that helps a lot, if not most, in mastering a foreign language!

So thanks for being a "grammar nazi". We need people like you.

(No, that doesn't apply to the casual typo. But I guess most people can differentiate such a thing from true grammar and spelling mistakes, especially if those are "typical" mistakes.)


No it isn't. "Lede" is a neologism arising from people convincing themselves they had inside information. It's been "burying the lead" as long as the phrase has existed. Your own link explains that.


The link says it can still be "lead". "Lede" seemingly came about for random reasons. I learned the opposite today: that I can still write "lead" instead of "lede".


Curiously it can't calculate it if you change 15mm circle to 15mm square.


For me, I never got into using it much (due to lack of experience with Mathematica syntax). I had some niche uses like "how many work days between <date1> and <date2>" but that's hardly so important.

Instead I use the SymPy Live shell https://live.sympy.org/ which does most of what I need in terms of math calculations. I'm a big fan of the sharable links (the thumbtack button below the prompt) that you can post in comments to show an entire calculation encoded in the URL querystring, e.g., https://live.sympy.org/?evaluate=factor(x**2%2B5*x%2B6)%0A%2... (factoring a polynomial), or https://news.ycombinator.com/item?id=23158095 (linear algebra helper function).


Sympy live shell is decent, and the latex rendering is pretty sweet. But, it's on ancient versions of everything, runs slowly, and has a C- UI.

Instead, I use Colab with Sympy + latex output and matplotlib (and most other things you could want to import, pre-installed). It's running new versions of things, and backed by more power, with an option to pay for even more. The latex rendering took a bit of poking around stackoverflow, but works just fine.

Feel free to copy:

https://colab.research.google.com/gist/dmlerner/23543255fdde...


What do you mean? I used it to solve a nasty impedance network for the real and imaginary components yesterday and the solutions were accurate.

Edit: Maybe it's just good enough that people treat it as a tool and see no need to market it. It consistently has worked fine-ish for years and is useful at what it does.


My meaning was just that I saw it sometimes referenced on HN, but I haven't seen it mentioned for a while now. Hence my search and results showing 8 years since.

I guess what I should be doing is looking at the Alexa ranking of Wolfram Alpha.


You should search comments, rather than stories. It's very regularly referenced in HN comments, often for calculations, sometimes in other contexts.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


I appreciate the conversation around WA this Ask HN has started, but yeah you've basically completely answered the original question by pointing this out.


Fair enough. I was definitely searching comments (not stories), but I might not have filtered by Date, hence the lack of recent results.


Sorting by popularity has been broken for years because comment scores aren't public any more.


They're just serving up answers which is boring to HN readers. Where's the drama in collecting data privately? Where's the drama from censoring results? No drama == No interest? Gawd, I have become cynical.


Could you please share your query/code to do this? Seems like it would make a good example. Thanks!


This is for two pairs of R/C in series ( R||C + R||C ).

Real:

https://www.wolframalpha.com/input/?i=Real%28+%281%2F%281%2F...

Imaginary:

https://www.wolframalpha.com/input/?i=Imaginary%28+%281%2F%2...

Edit: Sorry, I don't know how to make the search query text show up since it has special characters, probably best to just use the links to see the query.
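For anyone curious, roughly the same Re/Im split can be done offline with SymPy. Here's a sketch of what I assume the network looks like from the description (two parallel RC pairs in series; the symbols are mine):

```python
from sympy import symbols, I, re, im, simplify

# component values and angular frequency, all assumed real and positive
R1, C1, R2, C2, w = symbols('R1 C1 R2 C2 omega', positive=True)

def parallel_rc(R, C):
    # impedance of a resistor in parallel with a capacitor at angular freq w
    return 1 / (1/R + I*w*C)

Z = parallel_rc(R1, C1) + parallel_rc(R2, C2)  # R||C + R||C in series
Z_re = simplify(re(Z))
Z_im = simplify(im(Z))
print(Z_re)
print(Z_im)
```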


I use it exclusively when I'm drunk, to calculate how drunk I am

"4 drinks in 3 hours at 64 kg"



Perfect link to start my day with


I'm unable to try to compute something similar by indicating the quantity and the percentage of alcohol, such as :

"2 beers (composition of 8% alcohol, 44cl) in 1 hour at 80kg"

I tried with and without parentheses and with varying queries. Never worked.

Any ideas?

(I'm interested in knowing the blood alcohol percentage and how long it takes to go under the limit, depending on the percentage of alcohol and the quantity.)
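For the record, the textbook Widmark formula is easy to sketch in Python. The constants below are the usual rough approximations, and this is obviously not something to rely on for deciding whether to drive:

```python
# Rough Widmark-formula sketch (my own helper, NOT a driving-safety tool).
# Textbook approximations: r = 0.68 (men) / 0.55 (women),
# elimination beta ~0.015 %BAC per hour, ethanol density 0.789 g/ml.

def bac_percent(volume_cl, abv, weight_kg, hours, r=0.68, beta=0.015):
    alcohol_g = volume_cl * 10 * abv * 0.789      # cl -> ml -> grams of ethanol
    bac = alcohol_g / (weight_kg * 1000 * r) * 100
    return max(bac - beta * hours, 0.0)           # subtract what's metabolized

# the query above: 2 beers, 8% ABV, 44 cl each, after 1 hour, at 80 kg
print(bac_percent(2 * 44, 0.08, 80, 1))
```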


I can't believe this works.


I don't think it's accurate though. It says I'd be at around half of the DUI limit after 2 glasses of wine in one hour. That's certainly not right.


How much do you weigh? That sounds close to right for most people I think.


Surely there is a huge difference between men and women considering men frequently weigh twice as much as women.


And it can't ever be accurate. Thing is, how drunk you're going to get within some time frame depends on what (and how much) you're eating, and whether your stomach had some food in it before you started drinking. If there's stuff in your stomach, its sphincter is closed while it's digesting. The stomach itself absorbs alcohol (and nutrients) much more slowly than the intestine.

TL;DR: don't rely on this calculator to determine if you're too hammered to drive. If there's any doubt whatsoever, call an Uber or use public transportation.


Oh wow, so there is a reason for the old "Don't drink on an empty stomach"


This is something people from countries where people drink primarily hard liquor (Russia, Finland, much of Eastern Europe) know pretty well even if they don't know how to explain it. Another trick is to watch _what_ you eat. Meat and fat stay in the stomach for much longer, so if you focus on that, alcohol won't hit your liver all at once.


Some experiments to be performed :-)


A more important question: what happened to Wolfram? I think they missed an opportunity to have an enormous market by pricing themselves into a niche. They had so much cool stuff that could have played a much larger role in most developers lives. And which would have funneled more users into higher end premium products.

Every now and then I go to their site to have a look -- and then realize that I'm not going to go subscribe to some piece of software I'm unsure I will be using enough to justify the cost.


Wolfram himself is working on physics and fulfilling a lifelong dream. (He was just on Lex's podcast.) Say what you will about his contributions, but it is hard to argue he hasn't been enormously successful at achieving his goal of developing an entire cathedral of work he can use for his own intellectual pursuits.


they have plenty of customers and are always hiring so I don’t think there’s much pressure to change their business model… they have a 15 day trial and IMO the ~$200/yr I pay for a dual boot personal license is worth it for the documentation alone, AMA

I think their online book is a very nice intro: https://www.wolfram.com/language/elementary-introduction/2nd...


So the price is right on the limit for something I'd like to play around a bit with at home and "maybe it sticks". This is something I'd like to learn for the programming environment. But I'm not going to write software based on this that actually does something. I don't want to produce software that has a $200/year dependency. So then the amount of time I can invest goes down sharply.

I mean, good for them that they're doing well. They probably don't need the money. But their technology is highly unlikely to ever be part of any software I write. And it isn't because it is a bad fit. I do lots of stuff that would benefit greatly from having Mathematica plugged into it.

Thanks for the tip on the book. I might actually buy the paper edition and read that.


I used to use it extensively during my early PhD work for back-of-the-envelope calculations. Unfortunately, it became steadily harder to enter queries and have them understood. About half a decade ago they broke about 70% of what I used it for by refusing to show results for modestly complex calculations and instead throwing up nag messages for the paid version. The paid version, last I saw, was not available through an institutional license.

Last time I tried to use the retrieval features for nuclear data, there was absolutely no citation info or documentation whatsoever, just numbers from who knows where. WA had so much potential but peaked about 3 years after it came out, as far as I can tell. That being said, it's still vastly superior to doing calculations with Google.


> The paid version last i saw was not available through an institutional license.

Does your institution have mathematica? In mathematica you can query WA directly, and it gives you as much (or possibly more, from how it seems to behave for me) computing time as people with WA pro subscriptions. I use it all the time for stuff like graphing complicated implicit 3d surfaces or doing multiple integrals, stuff where I know the relevant mathematica command but I would rather not type it out fully


I use it semi-regularly; once a week or so. It's a genuinely useful tool that was just greatly oversold on launch. Things I use it for:

- Converting units while cooking. I prefer to cook by weight, and for most ingredients, you can do something like "2 cups of flour in g"

- Stuff I'd have used a scientific calculator in an earlier era: simple systems of equations, plots, etc.

- Comparing stats on countries, e.g. GDP growth in various countries
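For the cooking conversions in particular, a toy offline stand-in is easy to keep around; the densities below are my rough guesses, not WA's data:

```python
# Tiny offline stand-in for queries like "2 cups of flour in g".
# Assumptions: US customary cup (236.588 ml) and approximate densities.

CUP_ML = 236.588
GRAMS_PER_ML = {      # rough ingredient densities, g per ml
    "flour": 0.53,
    "sugar": 0.85,
    "butter": 0.911,
}

def cups_to_grams(cups, ingredient):
    return cups * CUP_ML * GRAMS_PER_ML[ingredient]

print(round(cups_to_grams(2, "flour")))  # roughly 250 g
```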


The issue with recipe weight unit conversions might be that the author literally had a cup or spoon or whatever with a specific capacity that doesn't equal the standard units, so you are converting one inaccurate amount into another.


I'm not arguing in this case that it's more accurate, just that it's sometimes easier: if I've got a mixing bowl on a scale, I find it easier to pour things in by weight rather than to measure them all out. On recipes I often cook, I edit / write in the weight in grams to speed things up.


It's safe to assume any recipe written in the last 50 years is using the standardized units. It isn't literally 100% true, but close enough that it's not worth worrying about.


If that’s the case, then the issue is immaterial


I used it a lot while pursuing my electrical engineering degree. Its ability to solve almost any mathematical formula and to show you the solution step by step is just plain awesome.

I guess it's safe to say I would not have passed some algebra and electrical engineering exams without it.

One tip I have (not sure if it still works though): Buy the Android or iOS app for a few bucks to get access to the step by step solutions if you can't afford the pro subscription.


> Its ability to solve almost any mathematical formula

Let's not go wild here ;)


The pricing seems random to me. (Though it seems cheaper than the last time I checked.)

It's ~20% more expensive in euros than in dollars. (And Poland, which I checked out of curiosity since it's in the EU but doesn't use the euro, has a price in pounds which is even higher; Poland is not a rich country.)

Also, I don't think charging people in African countries, for example, as much as US users makes any sense.

The service is really great for some questions but the commercial offer never added up for me.

If the software were open source and ran on-prem, I would consider buying some additional online services for it (even at the current seemingly random price point and without having a real use case; it's not more expensive than an average online game, so bearable). It would also make that "Wolfram Language" worth having a look at. But I don't bother even glancing at closed-source programming languages. That's especially one of the things they do very poorly.


>It's ~20% more expensive in Euro than in Dollar.

Keep in mind US prices don't include sales tax (VAT).


My fault. I thought digital services like that fell into the reduced VAT category, but it seems that's not the case.

So ~20% more makes actually sense.

Thanks for pointing this out!


My guess is that it’s a bit too complicated/slow for a lot of ordinary people and too finicky for a lot of technical people.

I’m a frequent Mathematica user and I find almost all of my use cases require several different attempts to get the desired result w/wolfram alpha. Meanwhile, most people who don’t get the right result the first time will probably just give up and not think to rephrase the query.


I mainly use it as an english dictionary of math terminology.

Although for the basics of differential geometry, like the Weingarten equations and the Dupin indicatrix, WA is lacking, as is Wikipedia, except for the articles in the German Wikipedia. And I haven't found a way to get to the Weingarten equations by searching for 'Weingarten'; you only find him by his full name, 'Julius Weingarten'. :(

https://de.wikipedia.org/wiki/Weingartenabbildung https://www.wolframalpha.com/input/?i=weingarten+equations https://de.wikipedia.org/wiki/Indikatrix https://www.wolframalpha.com/input/?i=dupin+indikatrix


If only there was some way to contribute to the english wiki (article or search) when you are lucky enough to understand the (better) german one...

:-)


That's a bit recursive. I'd have needed English translations of the German articles to get to know the English math terms to be able to write about them in English :)


Maybe the German wiki pages on the specific math terms are already translated to English. If not maybe google can help.

Not saying it's easy.


The problem WA is attempting to address is nearly impossible: to trust WA as a reliable source of information, you have to be confident it will be able to answer the question you're asking. If you work in a specific problem space, you can probably know that, but even if WA does know your particular area, you likely know even better ways to answer your questions.

Putting it another way, it's too hard to know what WA knows and doesn't know. I alluded to this in a post I wrote back when WA first came out: https://gcanyon.wordpress.com/2009/06/07/bing-wolfram-alpha-... "As Alpha grows and adds new problem domains it will become more and more useful, but it will continue to be necessary to understand what it can and can’t do, and how to get it to divulge what it knows."


Honestly, Google can now do most of the basic things that WA could do.

And the more complex things WA could do oftentimes require a bunch of trial and error to figure out the correct syntax/phrasing to use to get correct results, to the point where it was just easier to either do the calculation manually or find a dedicated site for it.

So it has just lost utility for me.


WA not perfect, noted.


It's still around but I imagine it is experiencing a bunch of competitors biting chunks out of it.

A lot more people can script now, so open-source computer algebra systems and numeric packages (Sage, NumPy, SciPy, etc.) probably take a small bite.

And then you have closed source ones to consider like Matlab.

The second-largest chunk is probably being bitten out of it by its web and app competitors (Desmos, Symbolab, etc.). Alexa rankings show that these see a lot more traffic and engagement (2-3 times).

Finally, a small portion of its functionality is now covered by search engines. I imagine they'll continue to gobble things up. There are also a few good web tools; I used one for a linear algebra course that I found a lot better than the freeware version of Wolfram Alpha that came with my Raspberry Pi.

I can't find any reports on its revenue or net income. I would be super curious who uses it. Maybe it's growing... who knows? I also remember it being recommended a lot in the early 2010s.


You are mixing things up here. The headline is about Wolfram Alpha. You are talking about Mathematica.


I'm talking about both. When I was comparing them to competitors like Symbolab I was using the Alexa ranking for alpha.

I find it faster and more accurate to use a specific package in an interpreter than query Wolfram Alpha or use Mathematica. And for the simpler things a search engine will do!


I used to use it a lot, but Google now provides most answers as well, and much faster. Wolfram Alpha's performance is still sluggish: a 6-second load for a bunch of text on simple queries like `6cet to pst` is frustrating.


For me it stopped working several years ago and wouldn't ever return answers for queries. I futzed about with it to try and make it work; came back a few times over a couple of years as I had been a big fan. Just assumed they'd killed it somehow. Mentioned it on HN and others said it worked. For some reason it works again for me now -- it not working allowed me to discover Geogebra, which was nice and served a lot of my previous uses for WA.


I think students these days use it for math/calculus, but it isn't seen as something special because they've always had it. It wasn't novel like for us.


My students seem to prefer Symbolab now.


How many astronomers does it take to change a light bulb?

https://www.wolframalpha.com/input/?i=how+many+astronomers+d...

None; astronomers prefer the dark.


Nothing. It's still solving my homework. (Sometimes.)


They put their "step-by-step" explanations behind a [login/pay]wall which made it significantly less useful.

Out of sight, out of mind. It's still there.


I think they've always been like that.

Good thing is, they have a monthly cost, but the mobile app you just buy once and it works forever. And it's not that expensive, IIRC.


Step by step solutions were free around 9/10 years ago


> They put their "step-by-step" explanations behind a [login/pay]wall which made it significantly less useful.

Maybe, but what else can do step-by-step explanations? Perhaps Octave?




I was enthusiastic, but for medium-complexity questions I spend more time fighting with syntax than it would take to do it myself. I probably use it for a high-complexity question once every few months. I'm happy that it exists, on balance.


> I did a search on comments on HN for Wolfram Alpha. Most posts are 8 years old, none newer, some older.

You searched wrong. Excluding today, the most recent comment was 7 days ago, and there were quite a few more in the past month.

https://hn.algolia.com/?dateEnd=1636070400&dateRange=custom&...


As someone who just signed up for an open university this semester, I'd love to hear opinions about Octave and Maxima for general purpose use. Especially for study, such as replacing Wolfram Alpha's step-by-step solutions.

I'm a Linux user and prefer an open-source solution. But I have no objection to paying a reasonable amount of money for a good commercial solution. Maybe Maple is worth looking at?


It depends on what your need is. I used Maxima (wxMaxima) for quick prototyping of handwritten formulations and as a reference for some simplifications, roots, etc.

Of course, its CAS capabilities are still useful. But I find that simplifications done on paper are often more straightforward than coaxing expressions into the expected form in Maxima. Also, it's somewhat handy to have the ability to output formulas in TeX format.

I vaguely remember Maple being more apt at expected simplifications.

Either way, I believe that Sage, Octave, Maxima, etc. should be supplemental to textbook-based learning. That way their results won't appear as pure magic, but as the somewhat expected outcome of analysis.


I think it highly depends on what kind of math you're expecting (and how heavily you want to rely on a CAS). If most of your work is just simplifying equations and computing numerical solutions, basically any system will do whether that's Octave, Sympy + Numpy, etc.

I haven't used Maple for a while so I can't speak to its current functionality, but there have been several times I've wanted to do something in SymPy/Octave and haven't found it, whereas I can almost always get Mathematica to do what I want with a quick search. I tend to rely on it heavily for some more complicated/specific symbolic operations (e.g., symbolically transforming probability distributions), and for that use case I haven't found anything better.

I'll also say that if your use case is more numerical/programming oriented, the language used might be an important factor. I personally don't like the Wolfram Language and use very few of its language features, and I prefer Python for anything that Mathematica isn't suited for out of the box.


I prefer Python to Octave and Maxima. Numpy, scipy, and matplotlib for numerical stuff, and sympy for symbolic stuff. Having them together in the same general purpose language is really convenient, and Jupyter notebooks are fantastic. Sage is also good, but I've moved on to sympy. I don't know a way to get step-by-step working from a library, but sympy gamma can do some, so it's probably possible to some extent.

My experience suggests avoiding Maple like the plague. Sympy (and Sage) can do everything I ever used it for much nicer and easier.


I use it whenever I have something mildly annoying to convert, especially dates. e.g. https://www.wolframalpha.com/input/?i=1636221900+unix+time+i...

Probably an incredibly trivial use-case but still useful regularly for me...


I use GNU date(1) for this:

    $ TZ=Europe/Warsaw date --date=@1636221900
    sáb 06 nov 2021 19:05:00 CET
    $ TZ=Europe/Warsaw date --date=2021-11-06T19:05 +%s
    1636221900
    $ echo $(( ($(TZ=Europe/Riga date +%s --date=2021-11-05T17:00) - $(TZ=America/New_York date +%s --date=2021-12-05T09:00)) / 3600 ))
    -719
However, this is super dangerous, because for whatever reason date(1) lies if you give it a nonexistent timezone, pretending that it understands you but actually giving you UTC:

    $ TZ=Mars date --date=@1636221900
    sáb 06 nov 2021 18:05:00 Mars
There's a list of valid timezones that you can conveniently browse with tab-completion after you spend 14 keystrokes to navigate there:

    $ TZ=/usr/share/zoneinfo/Europe/
    Amsterdam    Berlin       Chisinau     Isle_of_Man  Lisbon       Mariehamn    Paris        San_Marino   Stockholm    Vaduz        Zagreb
    Andorra      Bratislava   Copenhagen   Istanbul     Ljubljana    Minsk        Podgorica    Sarajevo     Tallinn      Vatican      Zaporozhye
    Astrakhan    Brussels     Dublin       Jersey       London       Monaco       Prague       Saratov      Tirane       Vienna       Zurich
    Athens       Bucharest    Gibraltar    Kaliningrad  Luxembourg   Moscow       Riga         Simferopol   Tiraspol     Vilnius      
    Belfast      Budapest     Guernsey     Kiev         Madrid       Nicosia      Rome         Skopje       Ulyanovsk    Volgograd    
    Belgrade     Busingen     Helsinki     Kirov        Malta        Oslo         Samara       Sofia        Uzhgorod     Warsaw       
    $ TZ=/usr/share/zoneinfo/Europe/Riga date
    dom 07 nov 2021 06:50:44 EET

I wish I had a really good calendar math utility program that handled this sort of thing properly.
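In the meantime, Python's stdlib zoneinfo gets most of the way there, and it refuses unknown zone names instead of silently handing you UTC like date(1) does. A rough sketch (assumes Python 3.9+ and system tzdata):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+, uses the system tz database

# Epoch -> local time in a named zone
ts = datetime.fromtimestamp(1636221900, tz=ZoneInfo("Europe/Warsaw"))
print(ts)  # 2021-11-06 19:05:00+01:00

# Difference between two wall-clock times in different zones, in hours
riga = datetime(2021, 11, 5, 17, 0, tzinfo=ZoneInfo("Europe/Riga"))
ny = datetime(2021, 12, 5, 9, 0, tzinfo=ZoneInfo("America/New_York"))
print((riga - ny).total_seconds() / 3600)  # -719.0

# And crucially, a bogus zone fails loudly instead of pretending it's UTC:
try:
    ZoneInfo("Mars")
except Exception as e:
    print("no such zone:", e)
```

Still not natural-language date math, but at least it won't lie to you.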


> I wish I had a really good calendar math utility

Might be a good learning exercise in machine learning: translating natural-language queries from that domain to whatever standard utility.


Maybe, and it wouldn't have to be as slow and unresponsive as Wolfram Vertical Line Alpha or obscure your answers as an attempt to upsell you, but I think it would still tend to have the same kinds of essential usability problems: a gulf of execution in figuring out how to phrase a query so the system would understand it, and a gulf of evaluation in figuring out whether the calculation it had carried out was the calculation you wanted.


It doesn't work so well for times, but I often use Google search to multiply numbers with units together and get a result in the units I want, without having to worry about screwing up unit conversions.

Example: 4 atomic mass units * (1000 nm/sec)^2

Google Result: 6.64215616 × 10^-39 joules

I use this all the time. I use wolfram alpha for solving equations or systems of equations but I use google for unit conversions because it's got better input parsing (frankly).
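For double-checking one of these without a parser in the way, a few lines of Python with the SI constants written out by hand works too (amu value below is the CODATA figure; just a sketch):

```python
# 4 amu * (1000 nm/s)^2, converted to joules by hand in SI
AMU_KG = 1.66053906660e-27  # kilograms per atomic mass unit (CODATA)

mass = 4 * AMU_KG          # kg
speed = 1000 * 1e-9        # 1000 nm/s in m/s
energy = mass * speed**2   # kg·m²/s² = joules

print(f"{energy:.8e} J")   # ~6.64e-39 J, matching Google's answer
```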

I should try the wolfram alpha math entry mode probably, I think that didn't exist when I started using it. If I could manually enter the equations with stricter formatting to ensure it's interpreted properly I'd use it more.


A reminder that GNU Units still exists, e.g.

  $ units
  Currency exchange rates from FloatRates (USD base) on 2021-01-17 
  3677 units, 109 prefixes, 114 nonlinear units
  
  You have: 4 amu * (1000 nm/s)^2
  You want: joules
          * 6.6421563e-39
          / 1.5055352e+38

  You have: ^D
It’s slightly less DWIMish (you have to say “atomicmassunits”, “atomicmassunit”, “amu”, or “u”, not “atomic mass units”) and somewhat awkward as a separate tool, but then resorting to your web browser for unit conversions is awkward in a different way. Non-interactive invocations, like units VALUE-OR-UNIT UNIT, work as well.

[1]: https://www.gnu.org/software/units/


Thanks for the reminder =)

Alas, I often have to do these kinds of calculations on a random publicish computer or my phone and Google's converter is platform-independent. But not using Google services when feasible is certainly net good.

And of course my TI-89 had equally good unit conversion for practical purposes (since you can define your own units) so somehow the world is still playing catchup to a calculator from the 90s...


If you’re organized enough to have space for Termux on your phone, it does wonders in this department. I feel silly every time I punch Python code into that teensy touch keyboard, but damned if I know anything else that has a better input UI and isn’t orders of magnitude less versatile. (Maple Calculator and microMathematics are still on the “there was an attempt” level, in my experience.)


There's an Android GUI for a units(1) calculator in F-Droid. I have it on my phone.


The TI-89 was/is an amazing device.


+1 for 'units'. I like it for conversion between millilightseconds and miles, to get the theoretical best-case latency between two places.

i.e. if it's x milliseconds ping, it can't be more than m miles away.


You have a missing factor of two.


... Seriously, though, if you actually need this type of calculation regularly and didn't just pick a random example, atomic-scale calculations are absolutely miserable to do in SI (and this is not a problem, it's a human-scale, engineering system, after all; and its metrological aspects, which were the actual advance originally, are completely unimportant here).

If I had to do this in my head or with a desk calculator, I’d just do it in high-energy units (c = ℏ = 1, mass and energy in eV, length and time in eV^-1). So,

  4 amu = 4 × 0.93 GeV (a proton weighs 939 MeV, an amu is slightly smaller due to binding energy, rounding to 1 GeV is good enough for most purposes) ≈ 4 GeV,

  (1000 nm / s)^2 = (1e4 Å / s)^2 = (1e4 / 1.97 keV^-1 s^-1)^2 (an angstrom is a typical atomic size, a keV is a typical [large] atomic energy, a fermi aka femtometer is a typical nuclear size, a MeV is a typical [not so large] nuclear energy, remember any of 197 MeV fm = 1.97 keV Å = 1, though again 200 is almost always good enough) ≈ (1e4 / 2 keV^-1 s^-1)^2 = 25e6 keV^-2 s^-2,

  4 GeV × 25e6 keV^-2 s^-2 = 4e6 keV × 100e6/4 × keV^-2 s^-2 = 1e14 keV^-1 s^-2.
This is slightly inconvenient, we wanted energy in eV, but the seconds don’t seem to want to go away. I don’t remember Planck’s constant in eV s, but I do remember 2 keV Å ≈ 1 and 300e3 km/s = 3e8 m/s = 1, so let’s sprinkle it with those,

  1e14 keV^-1 s^-2 ≈ 1e14 keV^-1 s^-2 × (2 keV Å)^2 / (3e8 m/s)^2 = 4/9 × 1e14 × 1e-16 keV Å^2 m^-2 = 0.44 × 1e14 × 1e-16 keV × (1e-10)^2 ≈ 0.44e-22 keV ≈ 0.44e-19 eV.
The hardest part is pretending to be a normal person: you have to remember what an electronvolt actually is in normal units. Good thing this is numerically the same as remembering the charge of an electron in coulombs (1 eV = 1.6e-19 J),

  0.44e-19 eV = 0.44e-19 eV × 1.6e-19 J / eV (turns out converting to a decimal fraction wasn’t a good idea after all, powers of two FTW) ≈ 4/9 × 16 × 1e-1 × 1e-19 × 1e-19 J = 64/9 × 1e-39 J ≈ 63/9 × 1e-39 J = 7e-39 J.
Good enough to a couple percent.

OK, I won’t pretend that this is easy or that I did it flawlessly the first time just now, but I do think this looks like a skill you could plausibly learn, unlike the textbook “SI all the things” calculation. The good news is that you’ve just seen essentially all the relevant constants you’re going to have to remember, except maybe Avogadro’s number if you’re going to have moles somewhere.
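If you want to sanity-check the chain above numerically, here are the same steps in a few lines (using the slightly more precise 931.494 MeV per amu; note the quantity is m·v² as in the original query, not the kinetic energy ½mv²):

```python
# Redo the estimate numerically: 4 amu * (1000 nm/s)^2 in natural units
m_eV = 4 * 931.494e6    # 4 amu in eV (1 amu ≈ 931.494 MeV)
beta = (1000e-9) / 3e8  # v/c, dimensionless (c rounded to 3e8 m/s)
E_eV = m_eV * beta**2   # m * v^2 in eV
E_J = E_eV * 1.602e-19  # eV -> J via the electron charge

print(E_J)  # ~6.6e-39 J, vs. the ~7e-39 J estimated by hand above
```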

(One place where this doesn’t help is first-principles chemistry, things like electrolysis, because you need to subtract large binding energies to get a change that’s hundreds to thousands times smaller. Calculating things to a couple percent just isn’t good enough.)


Yes, I am familiar with this system. If anything, being a physicist is all the better reason to want a computer to deal with the units though...

My example was entirely contrived of course; a less contrived one would be estimating how long a gas cylinder will last. The tank nameplate might say it has 200 cubic feet (sigh) and you need to flow at 10 mL/min. How many months does the tank last? I'm talking about quick engineering tasks, not theory.

BTW, the answer is about 13 months, whatever that is in eV^{-1}:

https://www.google.com/search?hl=en&q=200%20cubic%20feet%20%...

Which took me about 15 seconds to type. Just different use cases.


units(1) can handle this but by default it gives you the answer in seconds.

    You have: 200 ft**3 / (10 mL/min)
    You want: months
     * 12.921493
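And if you'd rather see the conversion spelled out than trust a parser, the same arithmetic by hand (units(1)'s month is a twelfth of the tropical year, which is what the constant below assumes):

```python
# 200 ft^3 tank at 10 mL/min: lifetime in months, converted by hand
FT3_ML = 28316.846592                 # millilitres per cubic foot
MONTH_MIN = 365.2422 * 24 * 60 / 12   # 1/12 of a tropical year, in minutes

volume_ml = 200 * FT3_ML
lifetime_months = volume_ml / 10 / MONTH_MIN
print(round(lifetime_months, 2))      # 12.92, agreeing with units(1)
```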


It just can't answer the questions I have. Last time I tried it, I was looking for buoyancy of various gases, but it insisted any such question necessarily referred to stuff on water.

It did OK figuring the fake "temperature" of LHC beams that fusion people like to quote because they sound more impressive than GeV.


It's fun for life expectancy.

Step one: Ask for your own life expectancy.

Step two: Ask for the life expectancy of someone years' younger.

Step three: What.

Step four: Oh.


Huh? This was not at all surprising: someone younger than me had a lower life expectancy, while someone older had a higher one.


This sounds bad for the state of the world?


No, people die. A 99 year old can't have a life expectancy of 70 years.

You want life expectancy at birth, by year of birth, for proper comparison.


I wrote https://gitlab.com/kragen/bubbleos/blob/master/yeso/toki.c to display my current remaining life expectancy as a clock, counting down.


I imagine it just stopped being new.


The name makes it seem like pre-beta test software.

I'm waiting for the final release, and then I'm waiting some more for it to be declared stable, and then I'm waiting some more for it to catch on and be declared popular.

Not really, but that's what the name suggests to me.

I just tried it here because of TFA and it's good.


For certain types of queries, Wolfram Alpha gives wonderful answers that are superior to almost every other general purpose search engine:

distance to the moon

https://www.wolframalpha.com/input/?i=distance+to+the+moon

W.A. shows the actual current distance to the moon (as of right now, 224,520 miles). Google shows this as 238,900 miles, presumably an average value, but it has no explanation at all of what the number is. W.A. also includes a graph showing the variance. And a lot of other info.


Many thanks to WA for getting me through high school and college calculus.


I use it to solve differential equations.


I needed to plot something real quick to see if I was picking the correct function. I don't remember how many data points I had, but not many, under 200 I believe. W.A. told me to piss off and pay for it. So obviously I went and spent an hour more learning how to do that with Gnuplot, and did it with Gnuplot. Now I always go for Gnuplot right away.


I still use it all the time for unit conversions, odd time-based questions, etc. I find it's way better than the Google results, because if I think of something after the fact I can tack it on and WA figures it out better than Google. E.g. "12 ft to meters * 3" is not handled right by Google but is handled how I want by WA.


I use it regularly, like twice a week.

When I'm making exercises to explain to my students in the math class, I use W.A. to double check the answer.

I also use it for calculation for comments in HN. Sometimes I need to make a back of the envelope calculation, and W.A. can convert the units and other boring stuff.


I only ever use it for date math

For whatever reason, I like keeping track of 1000 day anniversaries

https://www.wolframalpha.com/input/?i=1000+days+after+today

Shortly before any kind of 3rd anniversary or birthday I try to remember to check this.


You might like this little toy website I made a couple of years ago:

https://interesting-anniversaries.com/

From my readme:

“Have you ever wanted to know when you turn 2 billion seconds old? How about 33,333,333 minutes old? When do you get to celebrate your 555,555th hour of life? As it turns out, all three of those milestones occur in the same 24-hour period!”
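That claim is easy to verify with stdlib datetime; e.g., starting from a made-up birth instant (the date below is just an example):

```python
from datetime import datetime, timedelta

birth = datetime(1990, 6, 15, 12, 0)  # hypothetical birth instant

for label, delta in [
    ("2 billion seconds", timedelta(seconds=2_000_000_000)),
    ("33,333,333 minutes", timedelta(minutes=33_333_333)),
    ("555,555 hours",      timedelta(hours=555_555)),
]:
    print(label, "->", birth + delta)
# All three land within the same 24-hour window, ~23,148 days after birth.
```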


Wow Hackernews never ceases to amaze me, I enjoyed this. TIL I missed my 1 billionth second. You should also make a programmer mode, one that shows you powers of 2 (like 1024 days old)


That’s a good idea, and would probably help fill out the calendar a bit more!


I still use it regularly too - even more so after listening to Stephen Wolfram’s 3 part podcast [1] with Lex Fridman where he discussed the latest developments in Wolfram, Mathematica etc

[1] https://youtu.be/ez773teNFYA


I use it ~6 times a year.

My luck is mixed:

* ~33% - It works

* ~33% - I mess up syntax and give up

* ~33% - I mess up syntax, but believe it SHOULD be possible, and push much longer than I intended. Until finally settling on a partial solution, and wishing I knew more - but also recognising I should have used a different tool, e.g. Excel


What do you mean? I use Wolfram Alpha for math questions, plots, solving integrals, etc. all the time


I use it often to calculate... stuff. It's great at calculating, but I really need to wrap my input in at least 6 layers of brackets for it to parse my equation correctly, or it will use some weird notations/arrangements.


I use their online notebook environment a lot. https://lab.wolframcloud.com/ It's almost a free web version of their Mathematica.


I used it a lot more often back in college compared to now. Usually now it's for random one-off calculations I'm too lazy to do myself, like "how many weeks since 4/28" (my puppy's birthday).


I still use it regularly. the app is my go-to calculator.

it just hasn’t been updated in quite some time. there are a lot of ways the back end could support new UI features etc., but something seems to be holding it back.


I still definitely use it for teaching big O comparisons in a live / malleable way. Not sure I know of a comparable resource for that but maybe someone out there in the HNstroverse does?


I teach high school math. My kids use it all the time! This isn't entirely a bad thing. It's a very, very useful (and natural language) symbolic integrator for them.



Used it a bit at university to compute some complex integrals if I was stuck or feeling lazy. That was 11 years ago.

Don't think I've even visited the website in the past 6 years.


It did a lot of the heavy lifting in my master's thesis, not gonna lie.


This? https://www.wolframalpha.com/

It's alive and quite healthy


I still use it once in a while when I don't want to bother converting non-base10 units, like to know the date in 90 days, or how many hours in x days, etc.


I think Siri gets some of its results from Wolfram Alpha.


I use it a couple of times a week. Sometimes it’s brilliant, and sometimes I have to find my answer somewhere else. Most of the time it does its job.


I just used it recently to plot weather data, population density, crime rate, and average home price of cities I was thinking of living in


I've been using it pretty much daily for over a decade now. I don't know how people ever did engineering without it.


Still the go-to when I need to solve a complex integral or even diff EQs. Amazing how well it parses that type of entry.


I bought Mathematica and I'm using its free text input instead. But I guess behind the scenes, it's WA again.


I also use it regularly. I do remote sensing and find it and Wikipedia indispensable.


I use it to check some results from derivative exercises for my calculus class.


It’s still there, and I use it regularly, probably several times per week.


There is no 'DOT' for a mathematical calculator. So confusing!


At University pretty much everyone I know uses it for homework.


WA will give you answers your TA can't!


I use it frequently. Unit conversions, and the solver.


I still use it frequently for any random calculations.


Since nobody is mentioning it, around that time Wolfram Alpha started paywalling a lot of the more useful features. I used to use it in school and stopped when that happened. I'm not sure if they changed course since then.


I use WA for complex math at least once a week.


Used a lot by math students to check answers.


This question reminds me of the time someone wrote to the letters page of a print publication (I think it was The New Statesman) asking "Whatever happened to the composer of the theme music for Trumpton?" (a popular children's TV series in the 1960s) and the composer wrote back saying "What do you mean whatever happened to me?"


To understand the state of Wolfram Alpha, you have to understand the guy behind it.

Wolfram Alpha was a pet project of Stephen Wolfram, the creator of Mathematica. He had grand visions for it. And for the first few years, it seemed like he was doubling down on it.

But then he got bored and started tackling a bigger problem: his own solution to the "theory of everything" problem -- something that has eluded the world's best physicists for decades.

But he was confident that he could best them all. Because he created Mathematica.

The scientific community wasn't having it:

https://www.scientificamerican.com/article/physicists-critic...


I'm not sure you intend it, but your comment kind of makes Wolfram sounds like some sort of crank.

He's a leading thinker obsessively interested in this idea that everything around us is the product of a simple, fundamental ruleset.

He's sitting on the bleeding edge of human knowledge where, honestly, everyone is at risk of being full of shit. Scientific consensus isn't really any kind of indicator of future breakthroughs.

To each their own - let Wolfram be Wolfram.


He is both a crank and an innovator, and comments regarding his crankery are perfectly appropriate.

I think his "new kind of science" needs to be singled out from Wolfram Alpha and Mathematica as especially crank-ish. It appears to be an attempt at a grand foundational philosophical statement, but it doesn't interact with pre-existing literature that covers similar territory, conveys ideas with pictures and informal statements without robust definitions, doesn't have an underlying bedrock of concepts or uniform vocabulary, and doesn't have the focus or clarity of purpose to rise to the level of being right or wrong. And it nevertheless maintains a grandiose tone of establishing an entirely new domain of science.

It's not necessarily wrong, but it is unfortunately very vague and concerningly childish, even though I think it does have some meaningful things to say. It's a very fair example in favor of crankery.


He has done some exciting work, but he hasn't done any physics in ages. If you don't (by choice or ability) do the work to prove your ideas, you can't expect anyone else to. If he wants to revolutionize physics, he can't leave that to others. That attitude is a defining characteristic of a crank.

Cranks can do good work, but when they get out of their depth and don't realize it, blaming everyone else, that's when they become cranks.

I like Wolfram, and I think there are some interesting and fundamental insights in among the relentless self promotion, but ANKOS is a painful read, even though I find cellular automata a fascinating model.


They don’t make him sound like a crank, but like a narcissist, which Wolfram definitely is. Not that it’s really a bad thing, most lang devs are a little narcissistic in my experience. Comes with the territory. But the guy named his language after himself. No one does that! Creating a language is already a very ego-driven endeavor, but naming it after yourself is next-level egoist.


His work is very interesting, and much of it novel, but it's the way he presents it that makes him a crank. Claiming he has a grand unified theory of physics and all that.


For those interested in hearing about his theory/work, Sean Carroll (theoretical physicist) did a very long podcast with him about it.

I'd highly recommend Sean's podcast in general for those interested in physics topics and prefer a more technical discussion that the usual physics podcasts.

https://www.preposterousuniverse.com/podcast/2021/07/12/155-...


> But he was confident that he could best them all. Because he created Mathematica.

That's unkind, and that's definitely not what he says.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: