Hacker News | ones_and_zeros's comments

This is really well done. Really appreciate the Pulumi output option.

If we wanted to support vpc peering between accounts, is it a matter of copy paste?


Yes, but you would have to add the AWS peering resource yourself, which is not included in the generated code.


I had a neighbor in Boston a few years ago who was in the inner circle of the Lyman family. They had "real" jobs, etc., but still definitely operated on the outskirts of society and the law. Didn't make great neighbors...


Isn't Prometheus an implementation and not an interface? I have "prometheus" running in my cluster; if it's not Cortex, what implementation am I using?


It's kinda several things

- The OSS product

- The Storage Format (I guess)

- The Interface for pulling metrics (https://github.com/OpenObservability/OpenMetrics)

I haven't dug into Cortex even a little, but the other comments suggest it's API compatible, and that it's "production ready" in the sense that it gives you things the OSS project won't give you out of the box, i.e. long-term storage and RBAC.

Looks like a good thing.


> wrapping prometheus and giving you that production readyness that they're claiming the OSS project won't give you out of the box

No! Prometheus is and has been production ready for many years. Cortex is a clustered/horizontally scalable implementation of the Prometheus APIs, and Cortex has just gone production ready. Sorry for the confusion.


Just want to say, I use prometheus. It's amazing.

But readiness depends somewhat on your use case. If you're on a multi-tenanted cluster and you don't want to explicitly trust your users / admins, how do you stop them from messing with your metrics whilst allowing them to maintain their own?

I typically did it via GitHub flow; some others used the operator to give us many Proms; some others would just say it's missing features.

Indeed, I could probably word my example better though. Apologies if I was putting words in your mouth.


And I have Prometheus data from 2015, so I would argue that's long-term.


You are using Prometheus.

However, Prometheus can use different storage backends. The TSDB that it comes with is horrible.

I mean, it's workable. And can store an impressive amount of data points. If you don't care about historical data or scale, it may be all you need.

However, if your scale is really large, or if you care about the data, it may not be the right solution, and you'll need something like Cortex.

For instance, Prometheus' own TSDB has no 'fsck'-like tool. From time to time, it does compaction operations. If your process (or pod in K8s) dies, you may be left with duplicate time series. And now you have to delete some (or a lot!) of your data to recover.

Prometheus documentation, last I checked, even says it is not suitable for long-term storage.


The TSDB it uses is actually pretty state of the art. I think your pain point is more that it's designed to be used on local disk, but that doesn't mean it isn't possible to store the TSDB remotely. In fact, this is exactly how Thanos works.

The docs say Prometheus is not intended for long-term storage because, without a remote_write configuration, all data is persisted locally, and thus you will eventually hit limits on the amount that can be stored and queried locally. However, that is a limitation of how Prometheus is designed, not of how the TSDB is designed, and it can be overcome by using a remote_write adapter.
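For concreteness, a minimal remote_write stanza in prometheus.yml looks like this (the endpoint URL here is a placeholder, not something from the thread):

```yaml
# prometheus.yml fragment: in addition to the local TSDB, ship every
# scraped sample to a remote store over the remote-write protocol
remote_write:
  - url: "http://remote-store.example:9009/api/prom/push"  # placeholder endpoint
```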


> The TSDB that it comes with is horrible.

The TSDB in Prometheus since 2.0 is excellent for its use case.


Yes, Prometheus is an implementation. The HN title has a limited number of characters, so I thought "Prometheus implementation" conveyed the fact that Cortex was trying to be a 100% API-compatible implementation of Prometheus, but with scalability, replication, etc.


how about:

CNCF's Cortex v1.0: scalable, fast Prometheus API implementation ready for prod (grafana.com)

saves 1 char.


Yes, you're running the Prometheus server. But Cortex is a Prometheus-API-compatible service that scales horizontally and has multi-tenancy and other things built in.


Noob question but what about state machines where a given state could transition to more than one other state depending on some outside factors? Or is that no longer considered a state machine?

For an example relevant to me, VM states: a VM in the running state could be transitioned to terminated, stopped, or hibernating depending on an admin's action.


Your example is a standard Finite State Machine. Multiple possible transitions is the norm for an FSM, and each possible transition is guarded by some predicate which decides if it should be followed.

# transition notation: FromState -predicate-> ToState

    Running -stop_button-> Stopped
    Stopped -start_button-> Running
    Running -start_button-> Running # stay running
    Stopped -stop_button-> Stopped # stay stopped
The stop_button/start_button events here can either come in from the outside (from dedicated click handlers in a GUI), or be functions or properties that are polled when evaluating next().

Since booting a VM can take quite some time, one might want to introduce a Starting state between Stopped and Running.

The example in the original article is just a special case, where there is only one possible transition from each state, and where the predicate always returns true. Although arguably for a real traffic light, there should be a predicate on the transition that checks that enough time has passed! At least I would model that as part of the FSM, instead of on the outside.

EDIT: fix formatting
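A minimal sketch of the guarded-transition idea above in Python (the names here are illustrative, not from any library):

```python
# Guarded FSM: each transition is (from_state, guard, to_state).
# The first transition whose guard accepts the event fires; if none
# fire, we stay in the current state ("stay running" / "stay stopped").
TRANSITIONS = [
    ("Running", lambda ev: ev == "stop_button",  "Stopped"),
    ("Stopped", lambda ev: ev == "start_button", "Running"),
]

def next_state(state, event):
    for src, guard, dst in TRANSITIONS:
        if src == state and guard(event):
            return dst
    return state

print(next_state("Running", "stop_button"))  # Stopped
print(next_state("Stopped", "stop_button"))  # Stopped (stay stopped)
```

A Starting state between Stopped and Running would just be two more table entries, with a guard like boot_finished on the Starting -> Running edge.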


Harel's Statecharts (an evolution of state machines) have concurrent states (with branch/fork and merge/wait), which would be one way of solving what you describe.

I believe Harel may have borrowed concurrent (aka orthogonal) states from elsewhere though: state machines have been extended a few different ways over the years.

So you may find similar features elsewhere too.


> A VM in running state could be transitioned to terminated or stopped or hibernating depending on an admins action.

Actually, that doesn't necessarily need concurrency, I misread your question.

Yes, in a state machine, each state can have different conditions (guards) on each outgoing transition. So when running, pushing the stop button would cause transition to the stop/stopping state, pushing the pause button would transition to the pause/pausing state.

Guard conditions are simple boolean decisions, based upon events or other state. And sure, that event/state could be triggered externally to the state machine.

Technically it might not be a 'pure' state machine, but they rarely are outside of toy examples, in my experience — they always have to interact with something, and that thing is often not a state machine. Arguably I'm splitting hairs over philosophical differences here, but hey.


You might queue up events which cause it to transition to another state. If you hit the hibernate button, it might finish rendering the current frame before checking to see if the button was pressed, then hibernate. So it's the same state machine just with a larger input space.


Sure, but how does that work with the provided implementation, where each state can only transition to a single state (this is ensured at compile time)? What does the code look like that allows a state to transition to one of several other states?


No Rust, but here's a Python implementation that I have built on top of before: https://github.com/pytransitions/transitions

You add the concept of finite "triggers", where [state i] + [trigger result j] always takes you to [new state] (which could be the same state if you want).

Triggers are just functions in which anything could be happening (a coin flip, an API call), but they return one of an enumerated set of results, so the machine can always use the result to go to another state.
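A dependency-free sketch of that trigger idea (pytransitions itself has a richer API; the state and trigger names here are made up for illustration):

```python
# (state, trigger result) -> next state. A trigger can do anything
# internally, as long as it returns one of the enumerated results the
# table knows about, so the machine always lands in a defined state.
TABLE = {
    ("running", "hibernate"): "hibernating",
    ("running", "terminate"): "terminated",
    ("hibernating", "wake"):  "running",
}

def fire(state, trigger):
    result = trigger()             # coin flip, API call, admin action...
    return TABLE[(state, result)]  # the result picks exactly one next state

print(fire("running", lambda: "hibernate"))  # hibernating
```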


Ah ok. I don't write Rust either but maybe it'd look like:

  impl State<Running> {
    pub fn next(self, _trigger: Trigger<Hibernate>) -> State<Hibernate> {
        State { _inner: Hibernate {} }
    }

    pub fn next(self, _trigger: Trigger<Terminate>) -> State<Terminate> {
        State { _inner: Terminate {} }
    }
  }


Usually you would just call your functions hibernate() and terminate(). That way you can call hibernate() on State<Running> but not on State<Terminated> or State<Hibernate>.


that's known as an NFA (Nondeterministic Finite Automaton), a variant of FSMs.


Nondeterminism isn't needed (or desired, I think :D) for an FSM that can turn a VM on or off based on start/stop buttons. It's just multiple possible transitions, guarded by different conditions (the buttons).

But yeah, nondeterministic FSMs are possible, e.g. based on a transition probability.


the "nondeterminism" here doesn't mean we're dealing with probabilities, it's a more discrete kind - it just means that instead of

  S1 --a--> S2
you can have

  S1 --a--> S2
    '--a--> S3
    '--a--> S4
i.e. transition to multiple states "at once"¹. then, instead of being in one state, like an FSM, your NFA is in a set of states, like it had multiple threads, and proceeds "in parallel" from each state. probably not the best explanation, but i'm sure you can find good material about this.

---

¹ this a way to represent nondeterminism in a pure/math-y setting: instead of

  def f():
    b = random_bool()
    if b:
      res = "yes"
    else:
      res = "no"
    return res
you do

  def random_bool2():
    return {True, False}

  def f2():
    res = set()
    for b in random_bool2():
      if b:
        res.add("yes")
      else:
        res.add("no")
    return res
or just:

  def f2():
    return {"yes", "no"}
i.e. enumerate all the possible results f() could give depending on what `random_bool()` returns.
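The "set of states" reading above can be sketched the same way (hypothetical transition table, same S1 --a--> {S2, S3, S4} shape as the diagram):

```python
def step(states, symbol, delta):
    """Advance an NFA one symbol: from every current state, follow every
    matching edge, and union the results into the new state set."""
    return {t for s in states for t in delta.get((s, symbol), set())}

# S1 fans out to three states on the same symbol 'a'
delta = {("S1", "a"): {"S2", "S3", "S4"}}
print(sorted(step({"S1"}, "a", delta)))  # ['S2', 'S3', 'S4']
```

This is exactly the subset construction's single step: run it over a whole input string and the NFA behaves like a DFA whose states are sets of NFA states.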


I'm looking for information for those that provide patient care to covid patients. Infection rate, best practices, transmission to family (I live with a doctor providing care to covid patients...), etc. I've seen some anecdotes and single case reports that are concerning, but anything that is data driven like this will be helpful.

Also, where is non-denatured ethanol readily available?


You can use diluted bleach (1:9 bleach:water) for cleaning surfaces. Source is WHO guidelines as linked by raphlinus downthread (hard for me to get a direct link to the comment due to the device I'm reading on, sorry).


Here's a review that seems pretty comprehensive and careful with sources:

https://www.uptodate.com/contents/coronavirus-disease-2019-c...


Everclear is probably your best bet. It depends on where you live. Where I live your only bet is the local Naval Exchange.

:(


What does "blacklist" mean in this context?


What is the end game with this strategy? If you sell the puts today to capture the profits, do you also sell your equities? If you don't sell your equities isn't there a chance the slide further? If you hold the puts to maturity why buy them at all?


The end game is to save the value of my account without having to sell holdings I want to keep long term for tax reasons.

I'm up 8% for the year instead of down 15%.


Maybe you really are more clever than the rest of us, and have beaten the market. Or maybe you got lucky. Or maybe we're only hearing about the winning trades, and you've got some losers we're not hearing about... I suspect it's option 2 or 3. Regardless, this is bad advice.


It isn't bad advice to hedge your position with options, although doing it short term as suggested here is an active trading strategy and inherently more risky.

Contrary to popular belief that options = gambling, this is the #1 real utility of them. If 90% of your investments are tied up in S&P 500, it makes sense to hedge that with put options which provide a clearly defined max-loss over the contract duration of the option.

So in a period like this, when those put options become valuable due to the price drop and general IV, you can do as suggested and sell them to recoup losses and maintain capital. It doesn't change the lifetime performance of your money placed in the corresponding security, but it most certainly improves the performance of your portfolio as a whole and limits the damage that can be done in any given downturn.

If you only hold securities, index or otherwise, your only recourse is time. I certainly wouldn't recommend trying to time the market, just like I wouldn't buy an insurance policy the day before a loss. That doesn't mean you avoid insurance altogether because you can't predict when you'll need it.


No, he just dampens his returns by protecting against extreme downside black swan events.


Your puts are going to expire at some point; what then? Do you sell all your holdings after expiration, or buy more puts? Puts are expensive: I started two weeks ago and paid 7% of my portfolio in premium just as insurance.


You sell them before they expire for profit.


If the market drops less than your put strike price, you can sell the puts and keep the shares. If it drops more, you exercise the put, which results in selling the shares at the strike price.
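To make the mechanics concrete, here is the payoff arithmetic with made-up numbers (a 300-strike put protecting 100 shares; none of these figures come from the thread):

```python
def put_value_at_expiry(strike, spot):
    # A put's intrinsic value per share at expiration: worthless above
    # the strike, worth (strike - spot) below it.
    return max(strike - spot, 0.0)

shares, strike = 100, 300.0
for spot in (320.0, 250.0):
    hedge = shares * put_value_at_expiry(strike, spot)
    print(f"spot={spot}: hedge pays {hedge}")
# Above the strike the puts expire worthless (you'd sell them earlier to
# recover any remaining time value); below it, each further dollar of
# decline in the shares is offset dollar-for-dollar by the puts.
```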


It's the same price as EKS... https://aws.amazon.com/eks/pricing/


I agree the rollout is a little bumpy but I'm curious what workloads you are using k8s for where a $74/mo (or $300/mo) bill isn't a rounding error in your capex?


Think about any medium-sized dev agency managing 3x environments for 20x customers. That's ~$50k/year out of the blue.

My problem is that this fee doesn't look very "cloud" friendly. Sure the folks with big clusters won't even notice it, but others will sweat it.

The appeal of cloud is that costs increase as you go, and flat rates are typically there to add predictability (see BigQuery flat rate). This fee does the opposite.
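The arithmetic behind that figure, using the $74/mo per-cluster fee mentioned upthread:

```python
# 20 customers x 3 environments each, one cluster per environment
clusters = 20 * 3
monthly_fee = 74  # per-cluster GKE management fee cited upthread
annual = clusters * monthly_fee * 12
print(annual)  # 53280 -> roughly "$50k/year out of the blue"
```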


It's charged per-cluster. GKE encouraged (and was great for) running multiple clusters for all kinds of isolation and security reasons.

This cost increases rapidly for those scenarios.


$3600/year is significant for a startup on a shoestring budget.


Then manage k8s yourself.

Or, better yet, don't use k8s. You don't need it, especially as a startup on a shoestring budget. You can migrate later if you decide you really need to, but just a plain LAMP gets you 99% of the way.


But then you can’t put k8s on you resume for when said startup implodes.


If there were a lower complexity way to deploy containerized apps supported widely I think tons of people would go for it. Currently there's not really much of a middle ground between Cloud Run and K8s offered. It's kind of absurd, honestly.


Google App Engine has been exactly this since 2008.


My impression of app engine is that you have to use all the cloud* services like SQL, cache, etc, which will make it significantly more expensive, even if it does that app layer fine. Is that wrong?


It's wrong today. It was true in 2008, when GAE was Google's entire cloud offering (and there was no Docker or K8s).

Around the time "Google Cloud Platform" became a thing, Google changed GAE from an encapsulated bubble into a basic frontend management system that interacts with normal services through public APIs (either inside or outside GCP). It's more expensive than GCE, but it's fully managed and lets you skip the devops team.


So ask for Google Cloud for Startups? One free cluster is enough to get started.


> Google Cloud for Startups is designed to help companies that are backed by VCs, incubators, or accelerators, so it's less applicable for small businesses, services, consultancies, and dev shops.[1]

This makes it seem like Google Cloud for Startups is aimed at startups that aren't really on a shoestring budget.

[1]: https://cloud.google.com/developers/startups/


Like every "special offer for startups", it's a vulture waiting for a funding round to close.


My boss viewed it as the main way cloud providers offer to deploy containerized systems, and figured we could run most of our internal-only things in it for a couple hundred a month. We don't really need the guarantees and scale, and he saw it as a way to avoid creating excess numbers of dedicated VMs, since Cloud Run isn't sufficient for our non-static stuff. This view has actually been quite accurate up until now because of the dedicated usage discounts.

So I guess the big question in my mind is how do you run containerized apps in the major clouds besides K8s if it's a bulldozer and you just need a cargo bike? Is there something simpler?



E.g.:

- https://aws.amazon.com/ecs/ - that was quite nice

- https://aws.amazon.com/fargate/ - haven't tried


You could consider the App Engine flexible environment


How is it not a rounding error for Google?


I think you mean opex and not capex here.


Since I follow paulg on Twitter, I don't have to read the article to know where this is going. On Twitter paulg is a "capitalist ideologue" (a term someone else used to describe him, which I thought fit very well), and that comes with all sorts of controversial points of view.

The most entertaining/snarky way I can describe it is he is a try hard auditioning for the role of Peter Thiel's best friend.

It's a little disappointing considering the regard I held for him for so long. I try to separate the essays from the twitter account.

