I have a longstanding fascination with K and other "modern" APL derivatives.

There are a few intersecting truisms about coding that I believe. One is that people's working memory varies: some have an immense amount, some less. Another is that humans process information better spatially than serially over time (e.g. comparing two things side by side rather than turning a page back and forth).

This implies you should prefer succinct code and languages, because they impose less memory load on the engineers working on them.

At the same time, a corollary is that a smaller standard library / language is generally better, in that less needs to be learned by an engineer for full coverage of the language.

Another truism is that some people's processing speed is higher than others, and in general I think of the combination of working memory + speed as roughly equivalent to "g", general intelligence.

K occupies this weirdo place though, because it's absolutely succinct, a very small language as counted by number of atoms supported by the interpreter, and also incredibly hard to scan.

One of the K intros I read mentioned that the language is designed to be something that takes down your thinking; essentially the idea is that the workflow is "drink coffee with fellow PhDs, annotate on the chalkboard, and then when ready, capture it directly." This seems about right to me with my own K/J/Q experiences -- the bulk of the time is spent thinking about structuring a problem solution.

I compare this to Go, a language I love for its long-term readability and maintainability, where I spend a lot of time writing boilerplate and dealing with errors in situ.

At any rate, somehow there's a sort of event horizon of terse solution making where you come out the other side and need a 170 IQ to feel comfortable, and Mr. Whitney lives where he lives, and I live where I live. :)

People complaining about how ugly the C code is here are definitely missing the point: he has bent C's preprocessor to his will in order to encapsulate how he thinks about coding: essentially functional, vectorized. It's using C to write a DSL for solving programming problems interesting to Arthur Whitney.

I think it's fascinating on those terms. In a world where you have to read 10,000 lines of code from 100 developers, the C is terrible, and hard to parse. In a world where you will mostly write code to a style you've honed over 40+ years, it's super expressive, minimal, pared down to what matters, and probably fits his brain perfectly.
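If you've never seen the style, here's a toy sketch in that spirit (made up for illustration, not Whitney's actual source): a handful of single-letter macros strip away C's ceremony until each function fits on one line.

    /* toy sketch in the Whitney style -- not his actual code */
    #include <stdio.h>
    #include <stdlib.h>
    typedef long I;                                    /* one integer type */
    #define R return
    #define DO(n,x) {I i=0,_n=(n);for(;i<_n;++i){x;}}  /* implicit loop over n items */
    I*ma(I n){R(I*)malloc(n*sizeof(I));}               /* allocate a vector of n ints */
    I*add(I*a,I*b,I n){I*r=ma(n);DO(n,r[i]=a[i]+b[i])R r;}  /* vectorized sum */
    int main(){I a[]={1,2,3},b[]={10,20,30},*r=add(a,b,3);DO(3,printf("%ld ",r[i]))R 0;}

To an outsider it's line noise; once the macros are internalized, each line reads like an APL expression, one whole idea per line.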


Firstly, congratulations on your launch!

My personal opinion as a Product Manager is that there is no all-in-one tool, there never will be an all-in-one tool, and anyone promising an all-in-one tool has some significant blinders on around the job functions they have been exposed to.

Sales will use Salesforce. Engineering will use JIRA. And Product will use some generic roadmap-first tool like Asana, Trello, Miro, Monday, or MS Project. Product will beg everyone to use their tool, but no one will because it's for looking at work, not executing on it.

Product will not understand because we Product Managers don't actually DO things, we talk to people, organize ideas at a coarse level - and we have a simple tool that articulates the vision perfectly! Except it doesn't automatically integrate with the CI/CD pipeline, it doesn't automatically log the last touch with a high-priority prospect, it doesn't pass designs from the designer to the UX engineer. It doesn't actually DO anything. It becomes just another SaaS platform in our Okta stack that only the PMs look at - and it gets out of sync with the actual reality reflected in the other SaaS offerings.

And the reason it doesn't do those things is because the people who built it thought that THEY were the ones who could build the all-in-one project management tool and all those sales people, engineers, designers, lawyers, and marketing writers would log in to theirs. Insert relevant XKCD about standards here. The truth is no PM tool will ever be the system of record for the actual work, but everyone is afraid that the complexity of integrating with others will turn them into JIRA.

So we get these vaguely opinionated ways of moving around and arranging candy-colored cards, and those cards never quite show us the state of the project we are managing, so we have to bug people to "update their status in <Monday/Asana/Trello>" before the weekly meeting. Which they don't want to do because it delivers no value to THEM.

Which is all a long-winded way of saying - please think about how a piece of work from each of those functions might be represented in your view, and how that integration would happen automatically.


Other replies already addressed the screen time/hanging out together aspect, so I will comment on being unsupervised.

I was a kid on the early-ish internet and was free to "surf the interwebs" unsupervised, which was a very valuable learning experience for me. The internet is not the same anymore, but I think the general rules still apply.

Based on my own experience (anecdata, I know), what really helped me was the grownups around me explaining things clearly and hammering a few facts into my brain:

1. Don't put anything about your real self on the internet (this is increasingly hard due to social media; I'm glad I was just on IRC back in the day).

1a. What goes on the internet stays on the internet forever. Mind the info you put out there, even if it's supposed to be a private message. Leaks happen.

1b. Encourage them not to use their real name and address, and to be pseudonymous at the least (or better, completely anonymous). Help them set up accounts that don't link to their identity (especially email, which is the center of your online identity nowadays).

2. Not everyone on the internet is who they say they are. On the internet nobody knows you're a dog.

2a. Be clear on what grooming and pedos are, and that they're out there looking to catch you off-guard.

2b. Show them what spam, scams, malicious sites, phishing, etc. look like and how to prevent damage.

3. No matter what happens or how deep in shit they are, they can come to you for help. You won't approve of the ugly things they do, but you will forgive them and help them clean up the mess. If in doubt, come get help. The earlier you ask, the faster the cleanup.

Make all of this real by showing them what could happen. Show them real cases (there are plenty on the news) and the consequences. Show them how easy it is to trick the other side of the conversation. E.g. it was eye-opening for me to watch a friend of my brother's pretend on IRC that he was an MD from a completely different city. He was just a horny teen looking to meet women. He often joked about how we were probably chatting with other men lying about their identity too.

Once your kids are old enough to understand this, they can go on the internet 100% unsupervised (it was around age 8-9 for me, but everyone is different).

This will take a while given your kids' age, but we all know time flies. Better get them ready before the time comes!


Nix/NixOS has a few key advantages:

- everything from the kernel and drivers up is specified declaratively in a uniform, expressive way. You version control your entire infrastructure, and not in the sense of a Dockerfile: the Ansible-style provisioning, the Dockerfile equivalent, the builds of your libraries, and the build of your own software are all one language and can happily and easily reference each other as a unified system.

- with a few caveats, you are cryptographically guaranteed to get the same result, every time, no matter what (build outputs are addressed by a hash of all of their inputs). if it worked once, it will never break again.

- it has mechanisms reminiscent of git: you can check out a different configuration for a computer or set of computers, and if you don't like the result, revert to what you had before.

- it treats patching of arbitrary software as a completely typical and first-class activity. if anything doesn’t work how you want, you write a patch, check it in, and it works now and always will. upstream at your leisure or not at all.

- you get exactly what you want on your system, no more, no less. no gnome-keyring-daemon bullshit on your headless server Ubuntu 20.04 LTS.

- caching of artifacts both locally and via cache servers run by you or others or both is very granular. no giant slabs of docker composition towering ever higher.


One thing I don't see addressed here that is critically important is injury prevention, specifically as part of your workout plan. Getting injured will derail you (in some cases permanently), and I've dealt with that myself.

Climbing was recommended here, and I think that's a great idea. Hiking is usually a better idea for those just starting out though.

Look into Training for the New Alpinism and Training for the Uphill Athlete. The core tenets of these are injury prevention and endurance training, not pure strength. This is remarkably similar to how special forces train. The basic idea is that you do a long, easy introduction, then strength building, then endurance, as a 6-12 month cycle with a goal you target at the end. The week is made up of several long, low-effort sessions (Zone 1, so you barely feel winded) that end up being walks for those out of shape, one high-effort jog/run type session, and a couple of strength days. No single workout should be difficult; it's the volume that makes it so. One critical note: your cardio can be whatever aims toward your goal, whether that's running, rucking (hiking with weight on), swimming, biking, etc.

I'm actually preparing to start a cycle myself after some life events. I'm by no means an expert, but this system has worked for me.

Here's a training spreadsheet; feel free to copy it. I got it off of a forum a while back, so I can't easily give credit, but I did not make it so I can't take credit.

https://docs.google.com/spreadsheets/d/1zlIF6sCvO4je1YfXohIF...

I use TrainingPeaks to track my actual data and have found that being able to parse my own data and do analysis has helped immensely. Also, Garmin Running Dynamics changed my running for the better, once I knew what the heck to DO with the data.


Yo!

My time to shine. I'm a partner in a gym. I'm fit. I'm a full time software developer. People think I'm an athlete.

You're trying too hard. You need to develop a habit to get fit.

Consistency > intensity. Sooooo! Here is my tip.

Go to the gym. Win the day by doing really easy stuff. Everything should be easy. Then finish your session before you are tired or sore. Go get food. A shake is perfect. You need to win.

Do this 3x a week until you really enjoy going to the gym. Typically this is a month or so. You can get it going faster if you're doing this with a buddy.

I cannot stress this enough. Consistency is so much more important than intensity that it just isn't important to even think about intensity.

Thinking you will be fit by the end of the year is also a mistake. Fitness is a long-term problem. Start now. Be consistent. A few years from now you'll be in a room and notice that you're the fittest person in the room. Or you'll help someone move and you'll get tapped to move the heavy stuff, because obviously you'll do the heavy stuff.

People will claim "you're just fit". You'll know you're weak compared to other people. The game will just keep going.

If you want a program, you can't beat Starting Strength (no affiliation): https://startingstrength.com/get-started

For running: you're running too hard. Just run slower. I'm serious. Run so slow you feel like you're not running. Do 5km twice a week to start (or even just walk 5km twice a week!). It builds from there if you just keep doing it.

When you're starting, just be weak and slow. You're putting the pressure on yourself and it won't pay off. Be consistent at all costs.

Good luck man.


I'm a professional forecaster (i.e. getting paid for it) at a large e-commerce company. We have extensive experience with Prophet and a host of other approaches (all the traditional models in Hyndman's book/R package, some scattered LSTM/NN implementations). Here's my quick take (the article is a lot more extensive than the median blog post, and likely warrants a more careful study than I have time for right now).

Prophet's main claims ("Get a reasonable forecast on messy data with no manual effort. Prophet is robust to outliers, missing data, and dramatic changes in your time series.") are surely exaggerated. As the article shows, time series come in many different shapes, and many of them are not handled properly. It deals well with distant-past or middle-of-the-sample outliers, but not with recent outliers. It cannot deal with level changes (as opposed to trend/slope changes). None of this should be a surprise if you take some time to understand the underlying model, which unlike most neural nets is very easy to understand and visualise completely: it's really a linear regression model with fixed-frequency periodic components (for yearly and weekly seasonality) and a somewhat flexible piecewise-linear trend. The strong assumption that the trend is continuous (with flexible slopes that pivot around a grid of trend breakpoints, which are trimmed by regularisation) accounts for most of the cases where the forecasts are clearly wrong.
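For reference, the model in the Prophet paper is a simple additive decomposition:

    y(t) = g(t) + s(t) + h(t) + e(t)

where g(t) is the piecewise-linear trend, s(t) the fixed-period seasonal components (Fourier terms), h(t) the holiday effects, and e(t) the error term. Everything else is priors and fitting.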

That said, it does occupy a bit of a sweet spot in commercial forecasting applications. It's largely tuned for a few years of daily data with strong and regular weekly and yearly seasonalities (and known holidays), or a few weeks/months of intraday data with daily and weekday seasonalities. Such series are abundant in commerce, but a bit of a weak spot for the traditional ARIMA and seasonal exponential smoothers in Hyndman's R package; these tended to be tuned on monthly or quarterly data, where Prophet often performs worse. In our experience, for multiple years of daily commercial-activity data, there are no automated approaches that easily outperform Prophet. You can get pretty similar (or slightly better) results with Hyndman's TBATS model if you choose the periodicities properly (not surprising, as the underlying trend-season-weekday model is pretty similar to Prophet's, but a bit more sophisticated). Some easy wins for the Prophet devs would probably be to incorporate a Box-Cox step in the model and a short-term ARMA error correction; then the model really resembles TBATS. You can usually get better results with NNs that are a bit more tuned to the dataset. But if you know nothing a priori about the data except that it's a few years of sales data, your fancy NN will probably resemble Prophet's trend-season-weekday model anyway.

All of these assume that we're trying to forecast any time series' future only from its own past. If you want to predict (multiple) time series using multiple series as input/predictors, that's a whole new level of difficulty. I don't know of a good automatic/fast/scalable approach that properly guards against overfitting. Good results for multiple-input forecasting probably require some amount of non-scalable "domain knowledge".


I'll try:

In most languages, if you call a function f(x), the argument expression x is implicitly reduced (evaluated), and the resulting value is passed to the body of the function f. Most languages don't offer any alternative, although Lisp offers two: quote x, or implement f as a macro that is expanded at compile time.

In Kernel, the fundamental means of combination is an operative, which does not reduce its operands. It passes the operands verbatim, as they appear in code. For example, in $f(1 + 2), the body of $f does not receive the value 3. It receives the expression "1 + 2" (as a list). The operative also receives a reference to the current dynamic environment of the caller of $f, which can be used to evaluate (1 + 2) explicitly if needed; this environment may also be mutated by $f (though the mutation is limited to the local scope of the caller and reaches none of its parents).

One motivating example for operatives is the || and && operators in other languages. These operators short-circuit when the left-hand side evaluates to true/false respectively. If they were implemented as functions, then both the LHS and RHS would be reduced before the operator itself is called, which is not what we want. In other Lisps, these operators are special forms which the compiler is aware of. However, as special forms, they are second-class citizens of the language: you cannot assign them to variables, and they must appear under their own name when you wish to use them.
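To make that concrete, a tiny C sketch (my_and is hypothetical; C bakes && into the compiler precisely because no function can do this):

    #include <stdio.h>

    int side_effect(void) { puts("RHS was evaluated"); return 1; }

    /* hypothetical: && as an ordinary function */
    int my_and(int a, int b) { return a && b; }

    int main(void) {
        if (0 && side_effect()) {}        /* prints nothing: && short-circuits          */
        if (my_and(0, side_effect())) {}  /* prints: arguments are reduced before call  */
        return 0;
    }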

The `$and?` operative in Kernel is implemented the following way in the standard library (it isn't a language primitive). This definition also supports an arbitrary number of operands - it isn't just a binary operator.

    ($define! $and?
        ($vau x e
            ($cond ((null? x)         #t)                              ; no operands: vacuously true
                   ((null? (cdr x))   (eval (car x) e))                ; last operand: result is its value
                   ((eval (car x) e)  (apply (wrap $and?) (cdr x) e))  ; true so far: recurse on the rest
                   (#t                #f))))                           ; operand was false: short-circuit
So what of regular functions? In Kernel they are referred to as applicatives. They are constructed using the primitive `wrap` (which is itself applicative). This takes a single argument, which must be another combiner (a combiner is either an operative or an applicative). Usually the argument to wrap is an operative, because the use-case for doubly-wrapped combiners is limited. Operatives and applicatives are disjoint types, known by the runtime (all types in Kernel are disjoint and there is no subtyping). An applicative causes the operands of a combination to be reduced before passing the resulting arguments to the underlying combiner of the applicative.


It is basically "whatever thing in your system has an external dependency that's not 100% reliable, isolate it in an actor/process that can crash and be safely restarted".

Expand the concept to trees of dependencies and several possible after-crash strategies and you get very resilient apps.

But to fully appreciate it, you have to try it. It sounds pretty generic when advocated for, but once you code around it a bit, it gets pretty amazing.
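If you want a feel for it without the BEAM, here's a toy Unix-flavoured sketch in C of just the restart idea (hypothetical; real OTP supervisors add supervision trees, restart strategies, and backoff):

    /* toy supervisor: keep the flaky part in its own process, restart on crash */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static void worker(void) {
        /* stand-in for code that talks to an unreliable external dependency */
        srand(getpid());
        if (rand() % 3 == 0) abort();     /* simulate an occasional crash */
        puts("worker finished cleanly");
        exit(0);
    }

    int main(void) {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); return 1; }
            if (pid == 0) worker();       /* child: run the risky code, isolated */
            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status)) {    /* crashed? contain it and restart */
                fprintf(stderr, "worker crashed, restarting\n");
                sleep(1);
                continue;
            }
            break;                        /* clean exit: we're done */
        }
        return 0;
    }

The crucial part is that the risky code lives in its own process, so a crash is contained and recovery is just "start it again with fresh state".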

