
Before my 2-year-old, I led a math book club in SF. This was one of the books I taught/led with the group, and it's still one of my favorites.

I'd love any and all feedback (even critical)!

Many people are pleasantly surprised by how fast you can go from 0 to making API calls in your app (<2 minutes); it's one of the main things we're optimizing for right now.


Has it ever been confirmed whether Robert Del Naja is Banksy?


It's not him, but he probably attended some of their gigs.


I've been watching Craigslist prices for Gaggia Classics for over a year just for this. Now if only I had the counter space for the build in addition to my DF64 and Flare 58!


Notable that 'Leading the Future' explicitly models itself on Fairshake, which spent $130 million in 2024 and saw 48 of its 51 endorsed candidates win. At that success rate, $100 million in AI PAC spending could determine 30-40 House seats' positions on AI regulation. For context, the EU's AI Act passed with zero industry PAC spending, while China's AI regulations proceeded without Western-style lobbying.


“Fairshake supports candidates committed to securing the United States as the home to innovators building the next generation of the internet.

Providing blockchain innovators the ability to develop their networks under a clearer regulatory and legal framework is vital if the broader open blockchain economy is to grow to its full potential here in the United States.

Fairshake is a federal independent expenditure-only committee registered with the Federal Election Commission and supports candidates solely through its independent activities.”

https://www.fairshakepac.com/


From what I've seen, spending has almost no effect on competitive elections. Groups like Fairshake are more about punishing candidates who take opposition positions. I haven't looked at the Fairshake data in depth, but I'd guess they just invested in candidates likely to win who aren't vocally opposed to their position.


> From what I've seen, spending has almost no effect on competitive elections. Groups like Fairshake are more about punishing candidates who take opposition positions.

Which... does not influence elections?


> From what I've seen, spending has almost no effect on competitive elections. Groups like Fairshake are more about punishing candidates who take opposition positions.

A large number of elections for the House of Representatives aren't competitive. The candidate from the incumbent party is going to win no matter how bad they are and no matter how good the other candidates are. No amount of money spent on that election will change things.

However, in a large number of those districts only a small fraction of the voters from that party vote in the primaries or attend the caucuses where that party chooses its candidate. There usually isn't a lot of spending on this. A well-funded primary challenger has a very good chance of knocking the incumbent out in the primary or at the caucus.

The threat of this is how Trump keeps the Republicans in the House almost completely under his control. Look at all those Republicans in the House who voted for the "Big Beautiful Bill" and then went home to get completely excoriated by their constituents at town halls for not holding out to get the parts of the bill that were terrible for those constituents removed.

They knew that would be the reaction. But Trump told them that if they didn't vote for it or delayed it to make more changes he'd fund a primary challenger.




It's my understanding that one of the most methodologically rigorous papers is Gilens and Page (2014) [0], which analyzed 1,779 policies over 20 years and found that when rich and average Americans disagree, the rich win 90% of the time, regardless of how many regular citizens support or oppose the policy.

[0] https://archive.org/details/gilens_and_page_2014_-testing_th...


Wow, seats are cheap! We should totally let the people with the most money buy them, that will bring us stability.


I wonder if we could design a system where everybody in the populace chips in a little bit, and the people buy some representatives of our own.


I have been thinking the same. Use their own tools against them.


Depressingly, this is exactly how it went in the 19th century, only instead of railway barons we now have tech barons.


“At that rate”: your math only makes sense if their spending is the entire reason those campaigns won. I haven’t dug into the numbers, but if those are House and Senate campaigns then it’s a small fraction of total spending.


Who is going to spend a dime in lobbying for or against the EU’s AI Act? The absolutely irrelevant Mistral, which is starved for money and would rather stay on the good side of the European commissioners and oligarchs?


My stack lately has been Next.js with Prettier and very strict lint, type, and complexity gates, plus opinionated tests that aim for real signal over mere coverage theatre.

I want to find the time to fine-tune GPT-4o using before/after examples of code that fails the gates and then passes them. The hope is that it would generate gate-compliant code on the first try much more often, which is cheaper and more reliable than relying on a base model with brute-force retries. I think this might also align with research showing that grounding models in execution feedback yields order-of-magnitude gains in sample efficiency and improved code quality, not just speed [0][1].
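Roughly what I have in mind for building the training set. This is only a sketch: the directory layout and file names are made up, and the real pipeline would capture before/after pairs from CI.

    // Sketch only: build an OpenAI chat-format fine-tuning JSONL from
    // before/after pairs. Paths and naming convention are invented.
    import { readdirSync, readFileSync, writeFileSync } from "node:fs";
    import { join } from "node:path";

    const pairsDir = "gate-pairs"; // e.g. foo.before.ts / foo.after.ts
    const lines: string[] = [];

    for (const name of readdirSync(pairsDir).filter((n) => n.endsWith(".before.ts"))) {
      const before = readFileSync(join(pairsDir, name), "utf8");
      const after = readFileSync(join(pairsDir, name.replace(".before.", ".after.")), "utf8");
      lines.push(JSON.stringify({
        messages: [
          { role: "system", content: "Rewrite so it passes lint, type, and complexity gates." },
          { role: "user", content: before },
          { role: "assistant", content: after },
        ],
      }));
    }

    writeFileSync("train.jsonl", lines.join("\n"));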

References

[0] https://arxiv.org/abs/2307.04349

[1] https://arxiv.org/abs/2410.02089


What does "real signal" mean in this context? I think (hope?) most people agree test coverage for the sake of coverage isn't particularly helpful but it's not clear to me how an "opinionated test" would differ from the tests you'd find in your standard "we want 100% coverage" projects.


I am trying to capture the idea that opinionated tests assert invariants and contracts at integration boundaries, while coverage-driven suites often spend effort on shallow checks. The goal is fewer but stronger tests that actually prevent regressions.
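To make that concrete, here's a toy example (vitest). The rate limiter is an inlined stand-in so the snippet is self-contained; in a real suite it would be the app's own module. The point is asserting an invariant that survives refactors, not checking that some function was called.

    import { describe, expect, it } from "vitest";

    // Stand-in implementation, just so the test below runs as written.
    function createRateLimiter({ limit }: { limit: number }) {
      let used = 0;
      return { tryAcquire: () => (used < limit ? (used++, true) : false) };
    }

    describe("rate limiter contract", () => {
      it("never admits more than `limit` requests per window", () => {
        const limiter = createRateLimiter({ limit: 5 });
        const admitted = Array.from({ length: 100 }, () => limiter.tryAcquire())
          .filter(Boolean).length;
        expect(admitted).toBeLessThanOrEqual(5); // the invariant, not an implementation detail
      });
    });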

Here is one attempt at defining a TESTING.md that I pass to Claude Code [0].

Honestly, it is different from what I now use in other projects. For example, I have found that mutation tests are rarely worth the added complexity. I have not yet found a good way to enforce a deterministic gate that only permits “good” tests, so I settle on defining expectations in a markdown file that I pass to whichever coding agent I am using.

[0] https://github.com/Airbolt-AI/airbolt/blob/main/TESTING.md


Very cool!

I was just going down a rabbit hole yesterday about the use of AI techniques (and their limited success so far) in deciphering lost languages. Unsupervised models have partially cracked Ugaritic and Linear B [0], and Pythia/Ithaca restore Greek inscriptions at scale [1], but Linear A and Proto-Elamite still stall because the corpora are too small and there is no bilingual ‘Rosetta Stone’. The most promising direction now seems to be hybrid pipelines that combine vision encoders to normalize glyphs with constrained decoders guided by phonotactic priors.

[0] https://arxiv.org/abs/1906.06718

[1] https://arxiv.org/abs/1910.06262


AI is great at pattern recognition, but when the sample size is tiny and there's no known language to anchor it to, it’s like trying to solve a jigsaw puzzle with half the pieces missing and no idea what the final image looks like.


How fascinating: in a paragraph entirely _about_ language, written entirely in my first language, I can barely recognize a fair chunk of the terms.


Ha, fair point! Let me try again :)

People have tried using modern AI to crack lost languages. In some cases it works a bit. For example, a model learned to match Ugaritic (an ancient Semitic language) to Hebrew with no “dictionary” at all. In another case, a system called Pythia can guess missing letters in damaged Greek inscriptions with higher accuracy than human experts.

But with truly lost scripts like Linear A or Proto-Elamite, we run into two problems: there are only a few hundred very short texts, and we don’t have any bilingual “Rosetta Stone” to anchor them. AI can spot patterns, cluster symbols, and even suggest likely word boundaries, but it cannot yet produce actual translations. The current hope is to combine image recognition (to clean up messy symbols) with language models guided by rules of possible sound systems, then loop in human experts to check or reject the guesses.


Thanks so much for the added detail and context. I hope I didn't come off as derisive! I really do find it fascinating that language is so complex a "thing" that I could spend 40 years learning English and still frequently feel like a complete noob.


I also just got an email tonight for early access to try CC in the browser. "Submit coding tasks from the web." "Pick up where Claude left off by teleporting tasks to your terminal" I'm most interested to see how the mobile web UI/UX is. I frequently will kick something off, have to handle something with my toddler, and wish I could check up on or nudge it quickly from my phone.


Gas released from fresh ice on the illuminated hemisphere can push millimeter-scale grains out at a few m/s, and since these particles weigh thousands of times more than typical cometary dust, solar radiation pressure at 3 au is too weak to bend their trajectories, letting them overtake the nucleus and form a sunward anti-tail. Finson–Probstein dust dynamics predicts this plume should flip to the normal anti-solar direction or fade once 3I/ATLAS moves inside 1 au later this year. Watch the position angle around September to see the theory tested.
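Rough numbers behind the radiation-pressure point, using the standard β parameter (my back-of-envelope, so treat as an estimate): the ratio of radiation pressure to solar gravity on a grain is roughly β ≈ 5.7×10⁻⁵ · Q_pr / (ρ s), with ρ in g/cm³ and s in cm (Burns, Lamy & Soter 1979). For a millimeter grain (s = 0.1 cm, ρ ≈ 0.5 g/cm³, Q_pr ≈ 1) that gives β ≈ 1×10⁻³, versus β ~ 0.1-1 for the micron-sized dust that forms ordinary tails, so radiation pressure barely perturbs these heavy grains.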


I kept finding myself having to write mini backends for LLM features in apps, if for no other reason than to keep API keys out of client code. Even with Vercel's AI SDK, you still need a (potentially serverless) backend to securely handle the API calls.

I've been working on an open source LLM proxy that handles the boring stuff. Small SDK, call OpenAI or Anthropic from your frontend, proxy manages secrets/auth/limits/logs.

As far as I know, this is the first way to add LLM features without any backend code at all. Like what Stripe does for payments, Auth0 for auth, Firebase for databases.

It's TypeScript/Node.js with JWT auth (short-lived tokens, and the SDK auto-handles refresh) and rate limiting. Very limited features right now but we're actively adding more.

Currently adding bring-your-own-auth (Auth0, Clerk, Firebase, Supabase) to lock down the API even more.

GitHub: https://github.com/Airbolt-AI/airbolt
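For a sense of the pattern, here's the shape of a frontend call. Illustrative only: this is not the actual Airbolt API; the endpoint, fields, and token handling are made up.

    // The frontend holds a short-lived JWT from the proxy's auth flow,
    // never a provider API key. Endpoint and body fields are invented.
    const shortLivedJwt = "<token minted by the proxy's auth endpoint>";

    const res = await fetch("https://llm-proxy.example.com/v1/chat", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${shortLivedJwt}`,
      },
      body: JSON.stringify({
        provider: "openai",
        messages: [{ role: "user", content: "Hello!" }],
      }),
    });
    const { text } = await res.json();
    console.log(text);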


A way to single-click install stuff like this (a more modern cPanel) would be excellent for letting non-backend people deploy apps like this.

I guess a bunch of YAML for each of the main PaaS services would be nearly that.
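Something like this hypothetical compose file, say (image name and env vars are invented, not the project's actual config):

    # Hypothetical one-file deploy; everything here is illustrative.
    services:
      llm-proxy:
        image: ghcr.io/example/llm-proxy:latest
        ports:
          - "3000:3000"
        environment:
          OPENAI_API_KEY: ${OPENAI_API_KEY}
          JWT_SECRET: ${JWT_SECRET}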

