JBorrow's comments

I’m not so sure. For me, a reliable way to find a “good” dentist is to find one attached to a (big) medical school or university. Of course, this is easier to do in cities than in rural areas. I’d always take a waiting list and, say, a dentist from the UC system over one at a regular practice.


Thankfully there are a few amazing Diamond/Platinum open access journals popping up (that are often ‘arXiv overlay’, meaning they simply provide peer review services to arXiv-hosted papers). These journals are free to publish and free to read but still provide the useful categorisation/review/cataloguing services of traditional publications. Notably this includes a post-review DOI.

Relevant for the HN crowd is the Journal of Open Source Software: joss.theoj.org.

[I am an editor at JOSS]


Calling PostgreSQL a 'legacy C codebase' is funny.


Was just about to type the same thing. It's hard to escape the feeling that this article belongs somewhere around the Peak of Inflated Expectations on the hype cycle chart.


Indeed. The whole discourse around Rust is getting more ideological than technical by the day.

It's like some people are getting unreasonably intolerant of non-Rust languages.

It's... puzzling, to put it politely.


I don't think the only utility of a depth model is to provide synthetic blurring of backgrounds. There are many things you'd like to use them for, including feeding into object detection pipelines.


They use a distributed data management tool called RUCIO (https://rucio.cern.ch) to distribute data on the grid.


At some level this is an almost uniquely American problem; see the similar statistics from Great Britain (https://www.gov.uk/government/statistics/reported-road-casua...) and Australia (https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_i...), where deaths have been dropping rapidly since the 1970s. So, relative to other very similar countries, the U.S. is doing much worse.


These countries also presumably enforce stricter licensing laws, charge appropriately for driving urban tanks (American pickups), and actually enforce some traffic laws with serious consequences, not a slap on the wrist with inexpensive tickets.


As someone 'in academia', I worry that tools like this fundamentally discard significant fractions of both the scientific process and why the process is structured that way.

The reason that we do research is not simply so that we can produce papers and hence amass knowledge in an abstract sense. A huge part of the academic world is training and building up hands-on institutional knowledge within the population so that we can expand the discovery space.

If I went back to cavemen and handed them a copy of _University Physics_, they wouldn't know what to do with it. Hell, if I went back to Isaac Newton, he would struggle. Never mind your average physicist in the 1600s! Neither the community as a whole nor the people within it learn by simply reading papers. We learn by building things, running our own experiments, figuring out how other context fits in, and discussing with colleagues. This is why it takes ~1/8th of a lifetime to go from the 'world standard' of knowledge (~high school education) to a PhD.

I suppose the claim here is that, well, we can just replace all of those humans with AI (or 'augment' them), but there are two problems:

a) the current suite of models is nowhere near sophisticated enough to do that, and their architecture makes extracting novel ideas either very difficult or impossible, depending on who you ask, and;

b) every use-case of 'AI' in science that I have seen also removes that hands-on training and experience (e.g. Copilot, in my experience, leads to lower levels of understanding. If I can just tab-complete my N-body code, did I really gain the knowledge of building it?)

This is all without mentioning the fact that the papers that the model seems to have generated are garbage. As an editor of a journal, I would likely desk-reject them. As a reviewer, I would reject them. They contain very limited novel knowledge and, as expected, extremely limited citation to associated works.

This project is cool on its face, but I must be missing something, as I don't really see the point of it.


Fully agreed on point A, but I've heard the "but then humans won't be trained" argument before and don't buy it. It's already the case that humans can cheat or get by without fully understanding the math or ideas they're working with.

This is what PhD defences are for, and what paper reviews are for. Yes, likely we need to do something to improve peer review, but that is already true without AI.

From a more philosophical point of view, if we did hypothetically have some AI assistant in science that could speed up discovery by, say, 2x, in some areas it seems almost unethical not to use it. E.g. how many more lives could be saved by getting medicines or understanding diseases earlier? What if we obtained cleaner power generation or cleaner shipping technologies twice as fast? How many lives might be saved by curtailing climate change faster?

To me, accelerating science is likely one of the most fundamentally important applications of modern AI we can work on.


Scientific advancement IS the change in the understanding of humans.

Your fallacy lies in pretending science can progress in an objectively defined space where saying 'progress twice as fast' even makes sense.

But it does not. It progresses as people refine and improve their understanding of the universe. Books and LLMs don't contain understanding, only data. If fewer people develop a deep understanding of mathematics and the universe, scientific progress will slow down, even as metrics like numbers of graduates or papers published go up.


> If I can just tab-complete my N-body code, did I really gain the knowledge of building it?

Yes, because fixing it requires about the same effort as writing it from scratch, at least with the current level of AI. When it works well, you just move it into a library and use it without worrying about the implementation, like we do with all other library code.

Using AI doesn't make the problem any easier for the developer. The fact that it generates functions for you is misleading; code review and testing are harder than typing. In the end we use AI in coding because we like the experience; it doesn't fundamentally change what kind of code we can write. It saves us a few keystrokes and lookups.

AI might be useful in the literature review stage, formal writing, and formatting math. Who's going to give millions' worth of compute to blindly run the AI Scientist? Most companies prefer to have a human in the loop; it's a high-stakes scenario given the cost.


> fixing it requires about the same effort as writing it from scratch

Today’s research AI doesn’t work, but that’s independent of why a working version would be problematic.


The original purpose of science was to get closer to god. Things change.

Arts degrees were also once 7-8 years long before the French fought for radical simplification to shorten them. There is no law of nature that says PhDs cannot be simplified further, too.

The "it will make us stupid" argument comes up knee-jerk with every new medium; you don't have to believe everything you watch on TV or turn a screen oracle into a crutch. Either way, we need not concern ourselves with the average scientist who fails forgettably by being stupid with it when the potential upside is great scientists who succeed with it.

One point of an AI paper farm could just be to expand that discovery space a little while you sleep, but no one claims it can or will replace every scientist; hopefully only the pessimistic ones.


A PhD cannot be simplified further without it no longer being a "significant contribution to science/art/craft". However, DAs (Doctor of Arts) could be granted for hard sciences. The threshold for a DA is "outstanding achievement", which would be a fit for a post-AGI doctorate in the hard sciences.


You don't need to understand electromagnetism in order to watch television.

Also, institutions without a purpose should not be kept going.

I.e. AI needs to take over the entire life cycle of research before these become real issues. I don't see that happening anytime soon.


LLMs have unleashed the dreamer in each and every young coder. Now there are all sorts of speculation about what these machines can or cannot do. This is a natural part of any mania.

These folks must all do courses in epistemology to realize that all knowledge is built up of symbolic components and not spit out by a probabilistic machine.

Gradually, reality will sync (intentional misspelling) in, and such imaginations will be seen to be futile manic episodes.


> These folks must all do courses in epistemology to realize that all knowledge is built up of symbolic components and not spit out by a probabilistic machine.

Knowledge ends up as symbolic representation, but it ultimately comes from the environment. Science is search, searching the physical world or other search spaces, but always about an environment.

I think many people here almost forget that the training set of GPT was the hard work of billions of people over history, who researched and tested ideas in the real world and built up to our current level. Imitation can only take you so far. For new discoveries, the environment is the ultimate teacher. It's not a symbolic processing thing, it's a search thing.

Everything is search - protein folding? search. DNA evolution? search. Memory? search. Even balancing while walking is search - where should I put my foot? Science - search. Optimizing models - search for best parameters to fit the data. Learning is data compression and search for optimal representations.

Symbolic representations are very important in search, they quantize our decisions and make it possible to choose in complex spaces. Symbolic representation can be copied, modified and transmitted, without it we would not get too far. Even DNA uses its own language of "symbols".

Symbols can encode both rules and data, and more importantly, can encode rules as data, so syntax becomes object of meta-syntax. It's how compilers, functional programming and ML models work - syntax creating syntax, rules creating rules. This dual aspect of "behavior and data" is important for getting to semantics and understanding.


My guy, you're so confident, yet you forget AlphaFold; it designs protein structures that don't exist.

Who's to say that a model can't eventually be trained to work within the parameters the real world operates in and produce novel ideas and inventions, much like a human does, in a larger scope?


Claims of inventing new materials via AI were debunked...

https://www.siliconrepublic.com/machines/deepmind-ai-study-c...

DeepMind is overselling their AI hand when they don't have to.

"whos to say that" - this could be a leading question for any "possibility" in the AI religion.

"whos to say that god doesnt exist" etc. questions for which there are no tests, and hence fall outside the realm of science and in the realm of religion.


If you go and look at the list of authors on those papers, you will see that most of them have PhDs in something protein-folding related. It's not that some computer scientists figured it out. It's that someone built the infrastructure and then gave it to the subject matter experts to use.


AlphaFold doesn't solve the protein folding problem. It has practical applications, but IMO we still need to (and can!) build better ab-initio chemistry models that will actually simulate protein folding, or chemical reactions more generally.


Models like AlphaFold are very different beasts. There's definitely a place for tools that suggest verifiable, specific products. Overarching models like 'The AI Scientist' that try to do 'end-to-end' science, especially when your end product is a paper, are significantly less useful.


I’ll believe it when I see it and/or when I see the research path that goes there.

Judge a technology based on what it’s currently capable of and not what it promises to be.


We already substitute "good authority" (be it consensus or a talking head) for "empirical grounding" all the time. Faith in AI scientific overlords seems a trivial step from there.


Why shouldn't we?


We shouldn't because it isn't science. It's junk epistemology, relatively speaking.

We should because it's a cheap way of managing knowledge in our society.

So there's a tradeoff there.


IMHO authority has no place in knowledge-preserving institutions. Generally, authority causes more ruin than good.


> Over time, as both Spawn and the underlying models improve, it will be able to build more complex software.[^Citation needed]

It seems to me that with the transformer model world we now live in, utilities like these are excellent at generating things that appear in many tutorials/posts/etc. However, there's little-to-no evidence that they are able to generate anything truly custom, for instance any application requiring a significant level of domain-specific knowledge (your last point), and given the models' architectures there's no reason to believe that will change. What is it that makes you so confident it will become a possibility?


That's a fair point; I can't say for certain that tools like Spawn will be able to build really complex software. However, I think there's a ton of software that could be built with current model capabilities that simply isn't getting built because the remainder of the process still has a high learning curve (e.g. learning how to use Xcode, figuring out app permissions, etc.).


I would be interested to hear what fraction of this script and README were generated by large language models. At first glance, the code contains a number of repetitive anti-patterns that 'feel' like Copilot-isms (e.g. large stacks of elif statements instead of appropriate data structures), and the README is very verbose and includes a high fraction of filler words.
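
To illustrate the kind of pattern I mean, here's a hypothetical sketch (not code taken from the repository): an elif stack that only maps a key to a value can usually be collapsed into a dictionary lookup.

    # Hypothetical example: instead of
    #   if direction == "up": dx, dy = 0, -1
    #   elif direction == "down": dx, dy = 0, 1
    #   elif direction == "left": dx, dy = -1, 0
    #   elif direction == "right": dx, dy = 1, 0
    # a single lookup table does the same job.
    OFFSETS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def step(direction):
        # One dict lookup replaces the whole elif chain.
        return OFFSETS[direction]

    print(step("left"))  # (-1, 0)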


Pretty much the entire project.


I think so too. Also, what's the point of using git with commit messages like that? https://github.com/Dicklesworthstone/visual_astar_python/com...

LoL.


I use git like that. It's my backup solution. There's no logic to the commits; I just commit and push when I feel I've done enough that it would suck to lose it if my computer failed.


I've done that when testing CI pipelines or pushing up minor tweaks to a personal project... but committing and pushing so often... maybe OP was editing right in GitHub?


I was developing on one remote machine and running/testing it on another machine. This is just a fun little learning/diversion project... I don't really care about it having a meticulous git commit history.


Buses are only slow and inconsistent because of cars.


Exactly. A bus that doesn't have its own lane is a failure: since it gets stuck in traffic, it'll always be strictly slower than cars. So people, if they can help it, drive cars, making more traffic, making buses slower, causing people to prefer cars even more...

Whereas if buses run in dedicated lanes, there is a counterbalance: the more people drive, the worse traffic is, making the bus a comparatively better option, making people drive less... :)


Not true. In my city in Sweden they underpay drivers, and don't want to pay for them to do the bus driving course. So they have no drivers… so buses don't show up. And when they do show up, they might already be full.


Beyond the lack of dedicated lanes, the schedules in L.A. are just terrible. Most buses run hourly and don't really run late.

