
I'm curious, did they give you details about the way it was graded?


Not sure if it's still the case, but I remember watching a talk (around 2021) where they mentioned they were using different providers, among them AWS for some regions.


> while people are still in consideration, we have an obligation (we don't always meet) to be responsive, because people are pending their career decisions on our own decisions.

I had been receiving very quick responses, but I haven't heard back in a bit over a week.

The last email said I was moving to the final round for the platform position. It said you'd follow up with a couple more emails with further instructions, but I haven't received them. I've sent a couple of pings over email.

I'm reaching out over HN in case this is a bug in your hiring tools or in my email setup. I understand that things sometimes get busy behind the scenes. I just need an ACK to know whether I'm still under consideration so I can make better-informed career decisions.


I think it's a mix of several things at play:

1. Fast-growing organizations struggle to keep up with the communication overhead when rapidly onboarding new engineers. Most common open source frameworks lack good interfaces for developing isolated components within the same project. In the short term it's easier to spin up a new project than to define and enforce interface and dependency boundaries.

2. Cloud providers and consultants are incentivized to propagate the myth that distributed systems are the best solution to every problem.

3. Engineers looking to grow are incentivized to add popular new tools. In particular, the less equity you have in the company, the greater your financial incentive to become an expert in a tool that's in high demand and land a higher-paying job elsewhere.

4. In my experience very few engineers learn the fundamentals of computers and systems. Instead they follow "gurus" who tell them what the current "best practices" are. I think it's easier to feel you're doing a good job by making all your code comply with some style guide, or by building systems with an architecture discussed on some cloud provider's blog.

5. A VP of engineering I worked with told me in private that one of the reasons we were adding a lot of distributed systems components was so that we could sell ourselves to VCs in the next funding round as a tech company rather than a tech-enabled business. I doubt that VCs care about this, but it's telling that a VP of eng thinks it matters.

6. If you start breaking up your monolith into a distributed system, you won't feel the pain until you have several systems struggling to coordinate and keep data consistent. For the first few months or even years you'll only see the upside of quicker iteration. That can be enough time for all the engineers who added the distributed systems to get promoted and leave for another job.

For fast-growing or large companies, I don't see how you can mitigate the communication overhead without adding distributed systems. They allow different teams to mostly ignore each other and respond to the market more quickly. It's often easier for a team to rebuild a system than to coordinate with another team that has different incentives.

For all other companies, I think people are adding distributed systems prematurely, but lots of individuals in the decision-making chain are incentivized to add them. Unless you have an experienced CTO who can enforce a sane policy, it's inevitable that someone will add a distributed system without understanding the nuances that come with it.


I once had an employer who asked us to take monthly Myers-Briggs and Big Five tests. I was in a tough financial spot and needed that job, so I did my best to roll with it.

One day they wanted to try out an IQ test. 40 questions in 20 minutes. Timer starts when you click a button. I was fed up.

Poked around the testing site and saw it was an SPA, all JS. Looked through the network calls and noticed an odd base64 payload. Decoded it and found a JS object with "cyphertext", "iv", and some other field I don't recall.

I went looking through the JS sources and found a "decrypt" function. Added a breakpoint before it returned and reloaded the page. Had all the questions without starting the timer.

Took my sweet time going through each question. Compiled all the answers and started the timer. I still got 2 of the 40 questions wrong.
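If anyone wants to see what that inspection step looks like, here's a minimal sketch in Python (the payload is fabricated to match the shape I described; the field names are the ones I remember seeing):

    import base64
    import json

    # Fabricated stand-in for the blob copied out of the browser's network tab.
    # The field names mirror the ones in the real payload.
    fake_payload = base64.b64encode(
        json.dumps({"cyphertext": "<encrypted questions>", "iv": "<random iv>"}).encode()
    ).decode()

    # The actual inspection step: decode the blob and see what it's made of.
    decoded = json.loads(base64.b64decode(fake_payload))
    print(list(decoded.keys()))  # ['cyphertext', 'iv']

An "iv" sitting next to a "cyphertext" is a strong hint that the data is decrypted client-side, which is why a "decrypt" function was the obvious thing to look for next.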

A couple of days later my manager set up a meeting with me. I assumed they'd caught wind of what I did, and I was ready to get defensive and argue how wrong the whole thing was.

My manager started the meeting by congratulating me! My results were in the top 2 percent. That, along with a previous Big Five result, told them I had a bright future at the company, and they wanted my input on all big projects going forward.

I went from being a random junior to everyone in management thinking I was the next Carmack.

I didn't know how to feel about it. Is it morally wrong?

It taught me a lesson about how biased we are, and how much we need to be told how to feel about others. I was the same person, yet a number changed everyone's perception of me.

I don't ever want to take a real IQ test. I don't know how I might start behaving differently if I see a number associated with my "intelligence".

I left that company a few months later and told them what I had done. They were a bit angry, but I hope they learned a lesson about how harmful biased tests can be.


> 1) If you update the bottom branch, you need to manually rebase each branch above it. That becomes brutal if your stack is >3 branches.

I do this relatively easily with the `--rebase-merges` flag.


I'm starting to teach at my local university, and this was just the thing I needed to read. Thanks a lot, Sam.


I read the 2019 paper[0] where they beat 5 pros at a 6-max table. It doesn't assume fixed stack sizes; it uses a technique they call "action abstraction", where they train on some number of stack sizes. During a game they use these pre-computed values as starting points for real-time search of the game tree.

Their papers are very well written. With enough determination, someone could build a multiplayer bot.

I was also thinking about trying to build something simpler that works decently well. They published a paper a while back about a bot that only did real-time search, with no offline training. It performed better than older poker bots that required offline computation. I don't think you'd be able to beat pros with it, but it should do well against amateurs.

[0] https://www.science.org/doi/10.1126/science.aay2400
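For anyone curious about the machinery: the blueprint strategy in that paper is computed with a form of counterfactual regret minimization. As a toy illustration of the regret-matching idea underneath it (my own sketch for rock-paper-scissors, nothing from the paper and nowhere near a poker bot), self-play like this converges to the mixed equilibrium:

    import random

    ACTIONS = 3  # rock, paper, scissors
    # PAYOFF[a][b] = payoff to the player choosing a when the opponent chooses b.
    PAYOFF = [
        [0, -1, 1],
        [1, 0, -1],
        [-1, 1, 0],
    ]

    def current_strategy(regrets):
        # Play in proportion to positive accumulated regret; uniform if there is none.
        positive = [max(r, 0.0) for r in regrets]
        total = sum(positive)
        return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

    def sample(strategy):
        r, cumulative = random.random(), 0.0
        for action, prob in enumerate(strategy):
            cumulative += prob
            if r < cumulative:
                return action
        return ACTIONS - 1

    def train(iterations=200_000):
        regrets = [[0.0] * ACTIONS for _ in range(2)]
        strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
        for _ in range(iterations):
            strategies = [current_strategy(regrets[p]) for p in range(2)]
            actions = [sample(strategies[p]) for p in range(2)]
            for p in range(2):
                opponent_action = actions[1 - p]
                realized = PAYOFF[actions[p]][opponent_action]
                for a in range(ACTIONS):
                    # Regret: how much better action a would have done than what was played.
                    regrets[p][a] += PAYOFF[a][opponent_action] - realized
                    strategy_sum[p][a] += strategies[p][a]
        # The average strategy is what converges toward equilibrium (~1/3 each here).
        return [[s / sum(strategy_sum[p]) for s in strategy_sum[p]] for p in range(2)]

    print(train())

The poker bots layer card and action abstraction, sampling schemes, and real-time search on top of regret updates like these, but the core "accumulate regret for what you could have done better" loop is the same idea.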


Are you really sure the starting stack sizes are variable? From what I remember, they aren't.


My bad, for some reason I assumed you were talking about bet sizes.

Starting stack sizes were static, but I don't think that matters much. They varied over the course of play. If you're playing cash instead of a tournament, you'd just need to set each player's stack size when they join the table.


I got very deep into the poker rabbit hole from another Hacker News article a couple of months back.

This is the journey I would suggest for someone starting out:

1. The Johns Hopkins poker course[0], taught by a CS professor who's very passionate about poker. It's targeted at college students and is very well explained.

2. There are two different MIT courses. I thought the 2016 course[1] was better than the 2015 one[2], but 2015 has better guest talks; "Poker Economics" and "Decision Making" are must-watches.

3. Matthew Janda's books "Applications of No-Limit Hold 'em" and "No-Limit Hold 'em for Advanced Players" are both really good. They take a theoretical approach to the game. Applications was written before good poker computers were common, and the author goes very deep into the thought process and the things you want to balance when playing.

NLHE for Advanced Players runs common situations through strong bots and distills strategy principles that you can apply at the table. It's a lot easier to read than Applications.

4. Another good book that covers some of the math and optimal opening ratios is "Modern Poker Theory". It's more focused on charts, ratios, and understanding the math. It also has good principles, but Janda's books seemed easier to put into practice.

5. There are also some really good seminars on Jonathan Little's channel. "Play more Aggressively"[3] is particularly interesting.

[0] https://www.youtube.com/channel/UCBYvo69VZ2Gl3ZtdPmcsW9Q

[1] https://www.youtube.com/watch?v=62nDLA_A8gs&list=PLUl4u3cNGP...

[2] https://www.youtube.com/watch?v=OTkq4OsG_Yc&list=PLUl4u3cNGP...

[3] https://www.youtube.com/watch?v=rXiZPwJ-s8E


The story goes that Hyrum tried to call this "The Law of Implicit Interfaces", but within Google everyone started calling it Hyrum's law and that's what's caught on.

