San Diego is one of the best places to live in the US. Cost of living, homelessness, and failed neighborhoods are no worse in San Diego than in Seattle or Austin. Benefits are perfect weather, proximity to Silicon Valley VCs, recruiting from the University of California system, and a willingness from talent to relocate to join your startup.
Everywhere has challenges. Attracting real talent in the midwest is challenging. Cost of living in industry hubs is also challenging. There's no perfect location to start a startup.
Source: 7 years at startups in the midwest, the south, Seattle, and now San Diego.
This is the most accurate description of a "weight loss journey" I've ever read. I was obese (40%+ body fat). Now I'm fit and healthy. Like you, it took me a decade to get there. Anyone who thinks it's easy or simple to make the lifestyle changes that requires hasn't done it themselves.
It's not about the diet - most diets actually work if you can stick with them for the rest of your life, but that's a huge if. CICO or keto or IF or carnivore or whatever: it doesn't matter. Permanent weight loss requires the much harder task of fundamentally changing your brain's relationship to food. The only way that happens is through practice and painful failure.
You do have control over your weight. But losing fat and maintaining a lean body will be one of the hardest things you ever do.
> requires the much harder task of fundamentally changing your brain's relationship to food. The only way that happens is through practice and painful failure.
I've been on this journey for over ten years now -- basically on the practice and pain path you describe. But it wasn't until I read Allen Carr's "Good Sugar Bad Sugar" book that I began to mentally relate to sugar and carbs like any other significant addictive drug like nicotine or heroin. The sugar and carbs are not just passive calories that you ingest -- they actively affect how your brain and body relate to food.
Once you make that mental switch, then it's much easier to drop and stay off them. I mean who would say it's okay to have heroin cheat days?
It doesn't help if almost all external nudges in the domain that matter most (affordable, easily accessible food) are terrible. The few times I visited the US I was constantly shocked at how the food environment felt like it was trying to make me obese at all costs.
"The only way that happens is through practice and painful failure."
There are other ways to do this, but they require you to leave your world behind. For many, their environment is what built and maintains their unhealthy relationship to food.
For me, the day I left Texas was the day I started dropping pounds. Then, the day I left Utah for London was the day I dropped even more pounds. Then, the day I started running was the day I began approaching my lower limit (likely bordering into unhealthy body fat % on marathon race day).
The biggest challenge for you and the commenter above was that you were swimming upstream. Hopping into a stream that leads away from fat behavior, instead of toward it, can do much of the work for you.
I fully appreciate that's a luxury, but it's something people who have a choice should consider.
I worked at a larger services marketplace, helping data scientists get their models into production as A/B experiments. We had an interesting and related challenge in our search ranking algorithms: we wanted to rank order results by the predicted lifetime value of establishing a relationship between searcher and each potential service provider. In our case, a 1% increase in LTV from one of these experiments would be...big. Really big.
Improving performance of these ranking models was notoriously difficult. 50% of the experiments we'd run would show no statistically significant change, or would even decrease performance. Another 40% or so would improve one funnel KPI, but decrease another, leading to no net improvement in $$. Only 10% or so of experiments would actually show a marginal improvement to cohort LTV.
I'm not sure how much of this is actually "there's very little marginal value to be gained here" versus lack of rigor and a cohesive approach to modeling. The data scientists were very good at what they do, but ownership of models frequently changed hands, and documentation and reporting about what experiments had previously been tried was almost non-existent.
All that to say, productizing ML/AI is very time- and resource-intensive, and it's not always clear why something did/didn't work. It also requires a lot of supporting infrastructure and a data platform that most startups would balk at the cost of.
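To make the "no statistically significant change" outcome concrete, here's a minimal sketch of how one funnel KPI from such an experiment might be tested: a two-proportion z-test on conversion counts. All numbers are invented, the function name is mine, and a real experimentation platform would use a proper stats library rather than hand-rolled stdlib math:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_a/conv_b: conversion counts; n_a/n_b: sample sizes.
    Returns (z, two-sided p-value via the normal CDF approximation).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical KPI: control converts 500/10,000, treatment 540/10,000.
z, p = two_proportion_z(500, 10_000, 540, 10_000)
print(z, p)  # an 8% relative lift on this sample size still fails p < 0.05
```

Even an apparent lift can land well above the significance threshold at realistic traffic volumes, which is part of why so many of those experiments resolved to "no change."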
If you have historical data to validate against, you can set up a leaderboard of models run against the older data, and always hold part of that data out, unavailable for testing.
This encourages a simple first version and incremental complexity, rather than starting very complex 6 months in, and never having an easy baseline to compare to. A simple baseline can spawn off several creative methods of improvement to research.
The other practice is to run the models against simple cases that are easy to understand and easy to confirm. This way there's always a human QA component available to make sure results are sensible.
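The holdout-plus-baseline idea above can be sketched in a few lines. This is a toy illustration (the split fractions, function names, and y = 2x data are all invented): carve historical data into train / public-leaderboard / private-holdout slices, then score a deliberately simple first model so later, fancier models have a baseline to beat.

```python
import random

def split_holdout(records, public_frac=0.2, private_frac=0.2, seed=42):
    """Split historical data into train / public-leaderboard / private-holdout.

    The private slice stays unavailable to modelers; it is only used for the
    final comparison, which keeps leaderboard scores honest.
    """
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_private = int(n * private_frac)
    n_public = int(n * public_frac)
    private = shuffled[:n_private]
    public = shuffled[n_private:n_private + n_public]
    train = shuffled[n_private + n_public:]
    return train, public, private

def mean_abs_error(model, data):
    """Score a model (a callable) against (features, label) pairs."""
    return sum(abs(model(x) - y) for x, y in data) / len(data)

# Toy data: predict y = 2x. The "simple first version" predicts the train mean.
data = [(x, 2 * x) for x in range(100)]
train, public, private = split_holdout(data)
mean_y = sum(y for _, y in train) / len(train)
baseline = lambda x: mean_y
print(mean_abs_error(baseline, public))  # the baseline's public leaderboard score
```

Any candidate model then has to beat the baseline's public score, and the private slice catches models that overfit to the leaderboard.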
This looks really slick, can't wait to try it out!
If anyone is curious about other tools in the same space, our data scientists use Dash[1] and plotly to build interactive exploration and visualization apps. We set up a Git repo that deploys their apps internally with every merge to master, so they're actually building and updating tools that our operations, marketing, etc teams use every day.
Dash is awesome. I've been using Shiny in R for a similar purpose. Do you have any blog post or some more details around the deployment process and your use case for Dash?
This tool would be a lot more useful if it allowed for filtering by chosen course of study. For example, University of Washington has a median earnings that is slightly below expected, but I can pretty much guarantee that their computer science graduates are earning well above median.
That was my thought too, to make a simplistic example if a university had both say an engineering school and an art school, it might presumably do worse than a university with only an engineering school. So this metric might favor smaller, focused schools which happen to concentrate on education areas with high median salaries...
> So this metric might favor smaller, focused schools which happen to concentrate on education areas with high median salaries...
I don't think that part's necessarily true. If a school focuses on an area with high median salaries, the model will take that into account in the predicted salaries, so the school will have to have even higher actual salaries than typical for the field (and its input demographics, SAT scores, etc.) to get a positive value-add. See Caltech for an example of a STEM-focused school that does badly by this measure: from its SAT scores, demographics, and heavy concentration of STEM majors, the regression analysis predicts that it should produce graduates with a median salary of $82k. But the actual median is $74k, so its value-add is taken to be -$8k.
Some of the schools that do well are in areas with poor salaries, but score highly because they do better than you'd expect (or than the model would expect, anyway) for that area and student demographics. Otis College of Art and Design has a predicted salary of $29k from the regression analysis, but actual median is $42k, so implied value-add +$13k.
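The value-add arithmetic described above is just: fit a regression from school characteristics to median graduate salary, then subtract the prediction from the actual. A toy single-feature sketch (the SAT/salary training data is entirely invented, and the real model uses many more inputs than one score):

```python
def fit_ols(xs, ys):
    """Ordinary least squares with one predictor: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical training data: (average SAT score, median grad salary in $k).
sat = [1000, 1100, 1200, 1300, 1400, 1500]
salary = [35, 42, 50, 58, 66, 75]
slope, intercept = fit_ols(sat, salary)

def value_add(actual_salary, school_sat):
    """Value-add = actual median salary minus the regression's prediction."""
    return actual_salary - (slope * school_sat + intercept)

# A hypothetical school whose grads out-earn its SAT-based prediction.
print(value_add(60, 1250))  # positive: actual beats predicted
```

This is the same computation behind the quoted Caltech figure ($74k actual minus $82k predicted, hence -$8k) and the Otis figure ($42k minus $29k, hence +$13k), just with a far richer feature set in the real analysis.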
Interestingly, having gone to Caltech, I suspect that its extreme focus on STEM research actually hurt it here. Mostly because that focus results in a very large portion of undergrads going on to grad school (much larger % than any other university), and grad students don't earn very much.
Hi there, Mike! Care to expound on any solutions to make Tech more industry oriented? Are there things you'd rather have been exposed to more in your undergrad education given where you're at now?
Not all the majors at Caltech are going to be value adds, and of those that are, they generally lead to bench/engineering positions rather than business dev/marketing/executive ones.
It assumes you select a school that is focused on your field of choice, so if you pick an art school, your preference is for art, and if you pick an engineering school, your preference is for engineering.
The issue is that you can't select the University of Washington's engineering school. You can only see University of Washington which has both an engineering and an art school, and so it doesn't tell you what the data will be if you only look at engineering students.
Any time data is shown in an article, it's best to also provide access to the raw data, since anyone can then analyze it from their own angle. When I see only static graphs and analysis, I smell distorted facts. (Of course, in this case you can do a search, but access to the raw data would be more useful.)
These schools are all over the place! I graduated from the Arkansas School for Mathematics, Sciences, and the Arts. Math and science schools are truly fantastic places for advanced high school students to get a quality education.
I have to admit, I never expected to see my old HS come up on HN (though I'm still not used to the "and the Arts" part; I went when it was still just the Arkansas School for Mathematics and Sciences).
Arkansas alum here too! Still some of my fondest years!
I think schools like this are _especially_ useful in rural/poor states like our own, where cultural and educational norms are a bit behind. Imagine going from a school where the highest level of math is trig and there are no comp sci courses to one where you can take differential equations, astrophysics, or robotics.
For me personally, it was validating to find a like-minded community where nerdiness is a virtue instead of a scourge.
aewhite covered most points, but I'd also point out that a simple use case of a cluster with multiple indices of varying sizes (such as using ES as part of the ELK stack to store logs, where a new index is created every day) will run into many of the same problems with the default balancer. Since the number of shards per index isn't configurable after index creation, shard size growth and disparity are unavoidable.
We've tested for stability when adding and removing nodes, but haven't compared the time-to-balance of tempest versus the default balancer. Because an ES cluster remains fully functional (you still have access to all data) while a rebalance is in progress, we chose to optimize for resource usage rather than time-to-balance. There's not really even a good way to compare the balancers' time-to-balance, since they're both highly configurable (range_ratio and iterations in tempest's case, the 4 balance weights in ES's case) - default values probably aren't "equivalent" in terms of time-to-balance, since it's a very minor concern compared to resource usage and stability.
Oh nice! That looks close to what we were trying to do with this plugin. I'm not sure it would've worked within the constraints of the Elasticsearch environment, but the additional confidence of finding a solution that optaplanner provides by using multiple algorithms to solve the bin-packing problem (NP-Hard) looks quite promising.
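For anyone unfamiliar with the bin-packing framing, here's a toy sketch of the classic largest-first greedy heuristic for spreading unevenly sized shards across nodes. To be clear, this is not tempest's actual algorithm or Elasticsearch's default balancer, just an illustration of the NP-hard problem both are approximating (shard sizes and node count invented):

```python
def greedy_balance(shard_sizes, n_nodes):
    """Toy LPT (longest processing time) heuristic: assign each shard,
    largest first, to the node currently holding the least total data.
    A stand-in for illustration, not a production balancer.
    """
    nodes = [[] for _ in range(n_nodes)]
    loads = [0] * n_nodes
    for size in sorted(shard_sizes, reverse=True):
        i = loads.index(min(loads))  # least-loaded node so far
        nodes[i].append(size)
        loads[i] += size
    return nodes, loads

# Hypothetical daily log indices with wildly different shard sizes (GB).
shards = [120, 80, 75, 40, 35, 20, 10, 5]
placement, loads = greedy_balance(shards, 3)
print(loads)  # per-node totals end up close despite the size disparity
```

The interesting part of the real balancers is everything this toy ignores: shards can't be freely reassigned (relocation has a cost), sizes keep growing, and the cluster must stay fully available mid-rebalance.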