Hacker News | IvyMike's comments

Always gonna have to side with Peter Norvig on this one: https://pindancing.blogspot.com/2009/09/sudoku-in-coders-at-...

> They said, “Look at the contrast—here’s Norvig’s Sudoku thing and then there’s this other guy, whose name I’ve forgotten, one of these test-driven design gurus. He starts off and he says, “Well, I’m going to do Sudoku and I’m going to have this class and first thing I’m going to do is write a bunch of tests.” But then he never got anywhere. He had five different blog posts and in each one he wrote a little bit more and wrote lots of tests but he never got anything working because he didn’t know how to solve the problem. I actually knew—from AI—that, well, there’s this field of constraint propagation—I know how that works. There’s this field of recursive search—I know how that works. And I could see, right from the start, you put these two together, and you could solve this Sudoku thing. He didn’t know that so he was sort of blundering in the dark even though all his code “worked” because he had all these test cases.


I love what Norvig said. I can relate to it. As far as data structures are concerned, I think it's worth playing smart with your tests - focus on the "invariants" and ensure their integrity.

A classic example of an invariant is the min-heap's heap property: every node's value is less than or equal to the values of its children (a quick check is sketched below).

Five years from now you might forget the operations and the nuanced design principles, but the invariants will likely stick in your memory.
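
For instance, a minimal invariant check for an array-backed min-heap might look like this (a Kotlin sketch of my own; the function name is made up, not from any particular library):

    // Min-heap invariant: a[i] <= both of its children
    // (the children of index i live at 2*i+1 and 2*i+2 in an array-backed heap).
    fun satisfiesMinHeapProperty(a: IntArray): Boolean =
        a.indices.all { i ->
            val left = 2 * i + 1
            val right = 2 * i + 2
            (left >= a.size || a[i] <= a[left]) &&
                (right >= a.size || a[i] <= a[right])
        }

A test can then assert this property after every insert or remove instead of pinning down exact array layouts.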


That story reads like what happens when the average senior engineer tries a hardish USACO problem; it turns out algorithm engineering is different from your average enterprise engineering, and there are people in both camps.


> “Well, I’m going to do Sudoku and I’m going to have this class and first thing I’m going to do is write a bunch of tests.” But then he never got anywhere

There's a blog post I read once that I've since been unable to locate anywhere, even with AI deep research. It was a blow-by-blow record of an attempt to build a simple game --- checkers, maybe? I can't recall --- using pure and dogmatic test-driven development. No changes at all without tests first. It was a disaster, and a hilarious one at that.

Ring a bell for anyone?


Norvig mentions it in the article linked in the post to which you are replying. The game was Sudoku. The person was Ron Jeffries. https://ronjeffries.com/articles/-z022/01121/sudoku-again/


Thanks!


To be fair, I'd like to be forgiven for anything I did in 2006.

It is a story that reads like a fairy tale, but it is time to give the guy a break.


That's why I linked to Jeffries' post where he gave more context.

Though, in this particular case, he then went on to go back down the TDD Sudoku rabbit hole and, though he does seem to eventually write a program that works, the path to get there involved reading existing solutions and seems rather like circling the drain, which makes the post I linked seem a bit like making excuses. IDK. I don't really care beyond mild bemusement.


Haha, he really continued digging himself into the hole, I see it now.

"I’ve found some Python code for Sudoku techniques. I do not like it. But it’ll be useful, I reckon, even though we aren’t likely to copy it."


I think the point Norvig is making there broadly agrees with this post though. In the Sudoku affair, Norvig had the DSA knowledge there, sure, but his point is more that you need to be willing to look up other people's answers, rather than assuming you have enough knowledge or that you can slowly iterate towards a correct answer. You can't expect to solve every problem yourself with the right application of DSA/TDD/whatever.

That's the same as the blog post: you need to know enough DSA to be able to understand how to look for the right solution if presented with a problem. But Batchelder's point is that, beyond that knowledge, learning testing as a skill will be more valuable to you than learning a whole bunch of individual DSA tricks.


More context, from an earlier HN comment: https://news.ycombinator.com/item?id=3033446


This totally misses the point of the article. The article agrees that knowing when a problem is a data structure and algo problem is a key strength. The article also isn’t saying that all development should be done TDD.

The point of the article is that knowing how to test well is more useful than memorizing solutions to algo problems. You can always look those up.


To be fair to the other guy, a Sudoku solver is easier to bang out than a tiny distributed operating system environment that happens to solve Sudoku, even if your language does help you.


If you write all the tests, I'm sure the LLM can figure out the implementation.


I have a BirdNET-Pi in my backyard constantly listening to and identifying bird songs. It's pretty neat. They also sell a purpose-built device if you don't want to build one on a Pi yourself.

The data is sent to https://app.birdweather.com/ where I'm assuming bird scientists are doing something with it.


Seizing the means of production!


Da, Comrade!


The oligarchs resulting from the fall of Soviet Trumpistan are going to be the most obscenely rich people history has ever seen.


They already are, in a replay of the robber baron era.


Well, the current administration and the National Socialists do have some things in common.


> When I was an undergraduate at MIT I loved it. I thought it was a great place, and I wanted to go to graduate school there too, of course. But when I went to Professor Slater and told him of my intentions, he said, "We won’t let you in here."

> I said, "What?"

> Slater said, "Why do you think you should go to graduate school at MIT?"

> "Because MIT is the best school for science in the country."

> "You think that?"

> "Yeah."

> "That's why you should go to some other school. You should find out how the rest of the world is."

-- Surely You're Joking, Mr. Feynman!


I'm still hoping someone makes an earthquake detection system where the data is just derived from people posting "Earthquake?" on Twitter/Threads/Facebook/Etc. Plot the geotagged tweets and it seems easy to get both the location and magnitude.


I remember many years ago seeing exactly this project being led by a researcher in Chile [1].

It's not really a new idea; I don't know what happened to that project, though.

[1] https://portaluchile.uchile.cl/noticias/119844/twitter-ayuda...


I don't think that is fast enough, since the window for an alert is seconds to a minute. The alert gives people time to get to safety and lets operators stop systems like trains.

Tracking social media would be useful for plotting where the quake was felt.


This reminded me of an old blog post I read. The linked post may not be the one I remember, but it's close [1].

Back in 2011 there was an earthquake that New Yorkers felt. Some New Yorkers read tweets from people further south on the East Coast about feeling an earthquake, and then felt the same earthquake themselves a few seconds later.

Some news outlets picked up the story, which you can still find, but it's not exactly what OP was describing.

[1] https://www.ralphehanson.com/2011/08/25/earthquakes-social-m...


The USGS created a system to do exactly this about 15 years ago [1]. I'm not sure whether they're still running it, but at the EMSC we've been running a similar system for many years to highlight earthquakes important to the public and improve our messaging. Twitter doesn't give access to geotags anymore, but we do manage to roughly estimate an earthquake's location by analyzing the tweets. Estimating magnitude is much more difficult. Naturally there are some false positives, but it works well overall.

[1] https://www.usgs.gov/media/audio/shaking-and-tweeting-usgs-t...


We actually use the twitter detections to launch analyses of the seismic data in order to get confirmed results for events that aren’t reported yet [1] but there are some statistics for the twitter detections in the supplementary material of that article [2]. Basically, in 2016-2017 (wow, so long ago), we detected 893 earthquakes via twitter, with a median delay of 67s and a median separation from the published epicentre of 94km. Note that estimating earthquake epicentres is nontrivial anyway and so, for comparison, 10km accuracy would often be considered ok. So the twitter, I mean X, method isn’t optimal but it gets you down to the right region. Partly it’s because geocoding the tweets is inaccurate and partly it’s because people live clumped together in cities rather than smoothly spread over the surface of the earth.

[1] https://www.science.org/doi/10.1126/sciadv.aau9824 [2] https://www.science.org/action/downloadSupplement?doi=10.112...
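
To illustrate the clustering effect mentioned above: the crudest possible location estimate is just the centroid of the geocoded reports, which gets dragged toward wherever people happen to live. This is only a toy sketch of my own (the Report type and function name are made up), not the EMSC pipeline:

    // Crude epicentre guess: the plain average of geocoded report locations.
    // Because reports cluster in cities, this estimate is biased toward
    // population centres rather than the true epicentre.
    data class Report(val lat: Double, val lon: Double)

    fun crudeEpicentre(reports: List<Report>): Report {
        require(reports.isNotEmpty()) { "need at least one report" }
        return Report(
            lat = reports.map { it.lat }.average(),
            lon = reports.map { it.lon }.average(), // breaks near the antimeridian; fine for a toy
        )
    }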


I swear Twitter or Google was working on this?

https://scistarter.org/the-twitter-earthquake-detection-prog...

I did find this and some papers that seem related


When Twitter had an open API, some tech teams actually used it as an additional source for detecting incidents that internal monitoring missed (similar to how electricity grid operators watch TV to anticipate demand surges at halftime in sports games, etc.).


Not just electricity, water too.

Many people go to the bathroom when their favorite show ends or when it's halftime during a sports match.


Google has this. I remember recently feeling a minor earthquake, and googling it. The message that came up said that others had felt it too in my area, and then it showed up on official databases a few hours later.


The most useless detection system because you are either fine or buried under rubble at that point. Every real detection system attempts to catch the p-waves to warn users in real time ahead of shaking.


Certainly it's not going to compete with real time systems, but much like computing the speed of light using your microwave and a chocolate bar, it's just kind of neat to see how accurate such a system can be.


From the classic tweet[0]:

> Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

> Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus

[0]: https://x.com/AlexBlechman/status/1457842724128833538?lang=e...


Palantir Technologies is named after the "palantír" from Lord of the Rings. Famously, a well-used and safe technology.


Really takes a special guy to read a fantasy novel and identify with the villain.


> Really takes a special guy to read a fantasy novel and identify with the villain.

IIRC, the palantíri were kind of neutral, and created by elves.

Honestly, I think these techlord LOTR fans identify with the heroes, and lack the insight to realize they'd actually be the villains. They're totally Sarumans raping the forest to build an Orc army to conquer the world.


The official video for the Laibach cover of Sympathy for the Devil comes to mind.

(The Robert Fripp and Toyah Wilcox Sunday Lunch cover of Sympathy is just a distracting nod to growing old disgracefully.)


From the site:

> All researchers must apply and present a researcher card, which may be obtained in Room 1000. This ensures that proper identification is on file for all individuals accessing the building to establish a legitimate business purpose. Abuse of any researcher registration to circumvent access by the general public may result in a trespass situation and a permanent ban from access to all NARA facilities.

What the hell does "legitimate business purpose" mean? What "business" need is there for the JFK assassination records (which I think are at this site), for example? If I'm getting a PhD or writing a book, is that a "business" need? I suspect not.

Also, "Abuse of any researcher registration to circumvent access by the general public may result in a trespass situation and a permanent ban from access to all NARA facilities" seems like a very poorly constructed sentence.


I've been to NARA 2 in College Park several times. I'm reading this as meaning that only researchers who will request records can enter the building now. The statement seems to be clumsily worded.

Suffice it to say that it would be hard to justify closing down NARA 2 for researcher access. Room 2000 is the main reading room, and it is one of the largest reading rooms I have ever been in. The building was built for people to come and visit and do research.

NARA 2 is a high-security facility as it is. The last time I visited was in 2019. You are searched once upon entering the building. You (as a researcher) enter and go down to a large basement locker room where you can place most of your items in a locker. You can take a laptop and a scanner/camera to the first floor, get searched a second time, then take an elevator up to Room 2000, get searched again, and then take a seat and request materials (using triplicate forms, the last time I was there). You are searched again upon leaving the reading room.

Based on my experience, it sounds like they are going to remove one of the searches and put it at the entrance rather than at the elevators for the second floor, though I admit this is speculation.

The more difficult aspect would be having no parking access at the facility itself and having to take a bus there. I've taken the Metrobus to NARA 2 before, and it was quite complicated the last time, even though I generally like public transportation. Every time I visited after that, I drove and parked in the garage, usually on the roof. That said, I can learn to manage the bus.


My partner works for NARA, but not in this office. Beyond the large number of departures and RIF actions at the agency, there are lots of staffing challenges around people who come in off the street without succinct, coherent research questions. Staff are duty-bound to respond to all queries, regardless of how good they are.

I imagine this research card policy does two things:

1. Raises an easy bureaucratic barrier for people who just drop in and expect/demand help

2. Gives staff an opportunity to refuse entry to people who may have non-research intent

It's likely the example you provided qualifies as a business need. They just don't want you hanging around and getting in the way of them helping people who scheduled a consultation, have an appointment, etc.

Totally agree on the poorly-constructed sentence. I wish they had said it more succinctly/precisely.


I believe you are interpreting “business need” as “commercial need” when I think it’s more like “what is your business here?” Purely anecdotal, but when I visited Moffett Federal Airfield to visit the aviation history museum there I asked the security guard at the gate checking my ID if I could ride my bike around the base afterwards. He said I needed a business purpose to being on the base and that visiting the museum was a business purpose but biking around aimlessly wasn’t.


It means if you are a crazy person, you can no longer waltz in off your motorcycle and demand all documents related to alien spacecraft held at Area 51 or the real unedited Zapruder film that clearly shows Walt Disney was the triggerman, etc.

My guess is anyone could still pursue whatever crazy theories they wanted, so long as they conducted their research legitimately, i.e., as a legitimate _process_ of research, with no value judgment on the topic or end goal.


How do you estimate they will judge process legitimacy?


Probably by if you're cooperative and lucid enough to file the papers ahead of time and schedule your visit.


The actual process of getting a researcher card does not mention any business need. It just asks you to show ID and watch a training video: https://www.archives.gov/research/start/researcher-card. It specifically mentions student IDs.

But maybe that page is not updated yet with new policy.


> Also, "Abuse of any researcher registration to circumvent access by the general public may result in a trespass situation and a permanent ban from access to all NARA facilities" seems like a very poorly constructed sentence.

It means you'll be banned if you write anything negative about the dear leader and his eternal administration.


While the sentence is poorly constructed, it clearly doesn't mean that.


[flagged]


Oh the covid lockdowns were fascist now?


> People who assume that everyone is an idiot but themselves are rarely correct.

This is an off-by-one error.


How will ChatGPT learn the next computer language now?


From my perspective, LLMs shine at programming languages because languages are invented by people to be formal, so they are more predictable. Even across syntactically different programming languages there are similarities we don't see with our own eyes, but that training has no problem detecting. My favorite example: I tried to find out how to represent an int array for Java reflection in Kotlin. Ask ChatGPT or any sufficiently large model and the answer will be IntArray::class.java, but try to find that exact line with Google and you get few or no hits; GitHub code search turns up a bit more, but still not much. So LLMs "detected" the system behind Java/Kotlin type signatures and were able to predict it successfully because the rules are consistent. It also works in human language, but to a lesser degree: give it a verb/subject pair that makes sense and ask for equivalent ones, and you will get some that still make sense but are literally absent from the full corpus of the web.
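
For reference, here is roughly what that answer looks like in practice (my own sketch, not from the thread):

    // Java reflection sees a Kotlin IntArray as the primitive array type int[],
    // while an Array<Int> shows up as the boxed Integer[].
    fun main() {
        println(IntArray::class.java)     // class [I  (i.e. int[])
        println(intArrayOf(1).javaClass)  // class [I  (same runtime class)
        println(arrayOf(1).javaClass)     // class [Ljava.lang.Integer;
    }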


How will there be new programming languages after people stop practicing programming and the skills decay because an AI is performing the deliberate practice?


We are in the future, nothing new needed here.


Galaxy brain: indent using U+001F Unit Separator

