
drugs


whatever works ))


WORD.


Ok, I'll bite.

I posted this in another thread, but check this out: https://stackoverflow.com/questions/50195896/how-do-i-get-on.... That's the amount of crap I waded through trying to rubber-ducky myself into figuring out how to get two pods to talk to each other. In the end, I copied a solution my friend had gotten, and it's still not great. I'd love to be able to use Ingress or Calico or Fabric or something to get path routing to work in Kubernetes, but unfortunately all the examples I've seen online suffer from too much specificity. Which is the Kubernetes problem - it can do everything so trying to get it to do the one thing you want is hard.

Here is the kubernetes cluster I ended up building. If anyone has any ideas on how to add path routing let me know - https://github.com/patientplatypus/KubernetesMultiPodCommuni...


I think part of the problem is that I can't immediately understand what is actually being done. You say you want, say, a React frontend to talk to a Node.js backend. But that's not really a pod-to-pod communication issue; both the frontend and the backend will be communicating with the user's browser, outside the cluster.

Secondly, you deployed an Nginx ingress controller. You don't need to deploy more than one of these in your whole cluster, so you can go ahead and separate it from your program's deployment manifests. Typically, cluster add-ons are installed by running kubectl apply -f against a raw GitHub URL, or, if you want to be much cleaner, by using Helm (basically a package manager: it installs with one command, and then you can use it to easily install things into your Kubernetes cluster, such as an Nginx ingress controller).
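For a rough idea of scale, with current Helm 2-era tooling it's something like the following; chart name and flags may differ by version, so treat it as a sketch rather than gospel:

    # one-time Helm setup (installs tiller into the cluster), then the controller chart
    helm init
    helm repo update
    helm install stable/nginx-ingress --name nginx-ingress --namespace kube-system

After that, the ingress controller's deployments, services and RBAC bits are managed by Helm instead of living in your app's manifests.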

If you're wondering why the process is such a mess, it's probably just because Ingress is still new. In the future, support for more environments will probably come by default, without needing to install third party controllers. Already, in Google Cloud, GKE clusters come ready with an Ingress Controller that creates a Google Cloud load balancer.

As a side note, I found that the nginx ingress controller was not working by default in my cluster. I noticed some errors in the logs and had to change a parameter with Helm. Don't recall what it was, unfortunately.


The problem with adding the Ingress controller via Helm (and with a lot of other Kubernetes abstractions) is that it spits out a lot of code that is then difficult or impossible to reason about. `Helm Ingress --whateversyntaxdefault` spits out 1000+ lines of Ingress controller code that is essentially two deployments with a health check and auto spin-up, but it's complicated. Can I use this in production, or is there a security hole in there? What if the ports the health check uses overlap with other ports I have assigned somewhere else? What if something equally silly is wrong?

Maybe Kubernetes is new so that's why it's so wild west, but it really feels like a pile of bandaids right now.


I have read through the nginx ingress controller code in Helm before deploying it into production.

What you're saying is pretty much the result of my biggest gripe with Kubernetes, though it's one I don't have a lot of ideas of how to fix; there's too much damn boilerplate. 1000 lines of YAML to store maybe 100 relevant lines.

That being said, can you trust that there is not a security vulnerability when you deploy, e.g., NGINX alone? Your answer should not be yes. Even if you read through every single line of configuration and understand it, it doesn't mean something isn't wrong. Google "nginx php vulnerability" for an example of what I mean; innocent, simple configuration was wrong.

I read the Helm chart for nginx ingress because I wanted to understand what it was doing. But did I have to? Not really. I trust that the Helm charts stable folder is going to contain an application that roughly works as described, and that I can simply pass configuration in. If I want to be very secure, I'm going to have to dig way, way deeper than just the Kubernetes manifests, unfortunately. There's got to be some code configuring Nginx in the background, and that's not even part of the Helm chart.
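If you want to see exactly what you're about to run, you can render the chart locally before installing; roughly (assuming Helm 2-era commands, flags may vary):

    # download the chart and render its templates to plain manifests for review
    helm fetch stable/nginx-ingress --untar
    helm template ./nginx-ingress > rendered.yaml
    less rendered.yaml

It doesn't make the 1000 lines any shorter, but at least you're reading the YAML that would actually be applied, values and all.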


> What you're saying is pretty much the result of my biggest gripe with Kubernetes, though it's one I don't have a lot of ideas of how to fix; there's too much damn boilerplate. 1000 lines of YAML to store maybe 100 relevant lines.

I think that's more a helm issue than a k8s issue. I've been using helm in production for over a year and k8s for almost three years. Prior to adopting helm we rolled our own yaml templates and had scripts to update them with deploy-time values. We wanted to get on the "standard k8s package manager" train so we moved everything to helm. As a template engine it's just fine: takes values and sticks them in the right places, which is obv not rocket science. The issues come from its attempt to be a "package manager" and provide stable charts that you can just download and install and hey presto you have a thing. As a contributor to the stable chart repo I get the idea, but in practice what you end up doing is replacing a simple declarative config with tons of conditionally rendered yaml, plug-in snippets and really horrible naming, all of which is intended to provide an api to that original, fairly simple declarative config. Add to that the statefulness of tiller and having to adopt and manage a whole new abstraction in the form of "releases." At this point I'm longing to go back to a simpler system that just lets us manage our templates, and may try ksonnet at some point soon.


The stable chart thing is so weird. Internally we use some abstractions, but I look at stable charts and it takes so much time just to understand all of what's going on. Everything is a variable pointing at values, and you can't reason about any of it.

It seems like the hope is that you just ignore it all, trust that the docs are good, and follow them, but I don't live in any kind of world where I can do that.

And the commits all seem to trend toward more and more impossible-to-read, conditionally rendered symbols.

I've had such a challenge understanding and using Helm well enough. There are small gotchas everywhere that can just eat up tons of time. This doesn't feel like the end state to me.


> It seems like the hope is that you just ignore it all, trust that the docs are good, and follow them, but I don't live in any kind of world where I can do that.

Yep, agreed, we've used very few charts from stable, and in some cases where we have, we needed to fork and change them, which is its own special form of suck. The one I contributed was relatively straightforward: a deployment, a service and a configMap to parameterize and mount the conf file in the container at start. Even so, I found it a challenge to structure the yaml in such a way that the configuration could expose the full flexibility of the binary, and in the end I didn't come anywhere near that goal. You take something like a chart for elasticsearch or redis and it's just so much more complicated than that.


Right, I'm working on charts for ELK in particular, and it's just a mess. I just took down all my data (in staging, so all good) because of a PVC. The charts won't update without deleting them when particular parts of the chart change, but if you delete them, you lose your PVC data.

So I find the note in an issue somewhere stating that this is... intentional?... and that of course you need some annotation that will change it.

Let alone the number of other things: x-pack, plugins, the fact that Java caches DNS so endpoints don't work with Logstash, on and on.

It seems like everyone is saying operators are going to be the magical way to solve this, but if anything they look like one more set of codified values that doesn't address any of the complexity.


You're using a statefulset? Here's a tip: you can delete a statefulset without deleting the pods with `kubectl delete statefulset mystatefulset --cascade=false`. The pods will remain running, but will no longer be managed by a controller. You can then alter and recreate the statefulset and as long as the selector still selects those pods the new statefulset will adopt them. If you then need to update the pods you can delete them one at a time without disturbing the persistent volume claims, and the controller will recreate them.
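Sketched out, assuming the manifest lives in mystatefulset.yaml (names here are placeholders):

    # orphan the pods so the controller can be replaced without touching them
    kubectl delete statefulset mystatefulset --cascade=false
    # edit the manifest, then recreate the statefulset; a matching selector adopts the running pods
    kubectl apply -f mystatefulset.yaml
    # roll pods one at a time; the controller recreates each one, and the PVCs are untouched
    kubectl delete pod mystatefulset-0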


The Kubernetes creators never intended this verbose YAML format to be the long-term format for humans to work with directly. Heptio's ksonnet is where they want to go: https://ksonnet.io

No, this is not replacing the YAML under the hood, it's just more convenient for humans as a higher layer.


I found ksonnet by looking for smarter JSON, i.e. jsonnet (which ksonnet is based on). I had a little experience with borgcfg while at Google - and while it's not the same, it's very similar in spirit, and it even has easier-to-understand evaluation rules (unlike borgcfg, which I could never fully get; or I would understand the rules while focusing, and then if I hadn't used them in a while I would completely forget them again).


> In production can I use this or is there a security hole in there?

What if there's a bug in nginx? That has a lot more lines of code than the controller code. As always, feel free to audit the code, but as with any environment, you eventually have to trust someone's code.

> What if the ports the health check are using overlap with other ports I have assigned somewhere else?

Each container can bind to any port; only the ports that are exposed can conflict (similar to how Docker works).

Honestly, Kubernetes might not solve your use case. I use it because it solves mine (self-healing, declarative configuration that works seamlessly across multiple nodes - aka accessing multiple nodes as one big computer).


You should not use Ingress. Use Nginx or HAProxy and run it on K8s like you would normally, and you can scale your nginx/haproxy with `kubectl scale --replicas=2 deploy nginx`.

On the outside, use MetalLB, which then gets you a single IP that is highly available either via L2 or via BGP (if you have BGP gear) if you are not on the cloud. What people get wrong with k8s is that they think it requires thinking differently, which is silly. k8s just exposes a "managed VM" where you can build stuff like you would on VMware vApps.
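Roughly, once MetalLB has an address pool configured, the "single IP" part is just a LoadBalancer service in front of that nginx deployment (names and labels here are made up):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80
    EOF

MetalLB watches for LoadBalancer services and assigns the IP, announcing it over L2 or via BGP depending on how you configured it.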


I disagree with two statements:

> You should not use Ingress

Why not? It allows you to route your applications automagically with Kubernetes objects. Instead of writing nginx configurations that do what you want, you can just describe how you want your routing to work. I don't see why that isn't useful.
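For example, path routing to two hypothetical services is roughly this much YAML (extensions/v1beta1 API, which is current as I write this):

    kubectl apply -f - <<'EOF'
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
      - http:
          paths:
          - path: /service1
            backend:
              serviceName: service1
              servicePort: 80
          - path: /service2
            backend:
              serviceName: service2
              servicePort: 80
    EOF

The ingress controller reads that and writes the corresponding nginx configuration for you.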

> k8s just exposes a "managed vm" where you can built stuff like you would do on vmware vApps.

Pods aren't even containers, let alone VMs. They're namespaces with containers in them.

Secondly, while you can use those pods like VMs and boot systemd or whatever in them, that's not really the way you're intended to use Docker. Just to quote an official source:

https://docs.docker.com/config/containers/multi-service_cont...

> It is generally recommended that you separate areas of concern by using one service per container.

Instead of treating Kubernetes like a VM manager, the actual intended way to use it is to treat it like a task manager, like systemd or what have you. The pods are meant to represent individual services, and containers individual processes.

The problem Kubernetes solves is managing applications, not machines. The difference is not merely semantic rambling; it's a paradigm shift.


You have to accept that Kubernetes is a platform, and any platform, no matter how simple or complex, will come with its own set of technical challenges. Complexity isn't in itself an evil. Unix is complex.

Just imagine the complexity of something like APT on Debian/Ubuntu, or RPM on Red Hat/CentOS. You could run into a problem installing a package with apt-get or yum, perhaps some configuration script written in Bash that misbehaves during installation. To fix it, you have to understand how it's put together. The same applies to Kubernetes. You have to know the layers in order to work with them. Someone who doesn't know shell scripts or how init scripts work will not be able to work on Unix. Kubernetes is kind of like an operating system in the sense that it's a self-contained abstraction over something lower-level; the complexity of Unix isn't different, it's just that the design and implementation are different.

Helm "just" installs parameterized YAML manifests. But Helm doesn't pretend to be an abstraction that simplifies Kubernetes. It simplifies the chore of interacting with Kubernetes, but in order to really use Helm, you have to understand what it is doing. Specifically, you do have to understand the "1000+ lines" of ingress declaration that it spits out. The notion that you can get around the complexity of Kubernetes with Helm is simply false.

To start with Kubernetes, take a step back, forget about Helm, and simply use kubectl. You can accomplish absolutely everything you need with "kubectl apply -f". Learn each basic building block and how they all fit together. Learn about pods before you learn about anything else. Deployments build on pods and are the next step. Then learn about services, configmaps and secrets. These are all the primitives you need to run stuff.
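As a rough sketch of those primitives (image, names and labels here are placeholders, nothing canonical), a deployment plus a service in front of it is only this much:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello
        spec:
          containers:
          - name: hello
            image: nginx:1.15
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello
    spec:
      selector:
        app: hello
      ports:
      - port: 80
        targetPort: 80
    EOF

Once you understand what each of those fields does, Helm charts and ingress controllers stop being magic and start being just more of the same.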

Ingresses are arguably the worst part of Kubernetes, since it's a pure declarative abstraction — unlike pods, for example, an ingress doesn't say anything about how to serve the ingress, it just expresses the end goal (i.e. that some paths on some hosts should be handled by some services). Ingress controllers are probably mysterious to beginners because they're an example of a "factory" type object: An ingress controller will read an ingress and then orchestrate the necessary wiring to achieve the end goal of the ingress.

Moreover, you don't need ingresses. Ingresses were invented a little prematurely (in my opinion) as a convenience to map services to HTTP endpoints and to make those settings portable across clouds, but what most people don't tell you is that you can just run a web server with proxying capabilities, such as Nginx. This gist [1], which can be applied with "kubectl apply -f nginx.yml", describes an Nginx pod that will forward /service1 and /service2 to two services named service1 and service2, and will respond on a node port within the cluster (use "kubectl describe endpoints nginx" to see the IP and port). Assuming a vanilla Kubernetes install, it will work.

[1] https://gist.github.com/atombender/af2710818af0921e5c55a9ecb...


> the amount of crap I waded through trying to rubber-ducky myself into figuring out how to get two pods to talk to each other

well, it's more the amount of crap you waded through trying to figure out that you were not actually trying to get two pods to talk to each other at all.

Path routing should work; if it's not working, what you should do is exec into the nginx pod and inspect the nginx config that the nginx ingress controller generated.

Traefik has an example of this that is basically what you are doing:

https://github.com/containous/traefik/blob/master/examples/k...

https://github.com/containous/traefik/blob/master/examples/k...

The main thing I see you doing wrong is that you are using 'type: LoadBalancer' for everything, when that is exactly what you don't want.


Ok, so I'm on mobile, so I don't have a chance to look through your cluster atm, but this looks easy enough.

Can you tell me how you set up kubernetes (minikube, Google container engine, kops, etc)?


Within a cluster, service names are addressable, e.g. curl http://backend/api/whatever

I only glanced at your repo, but it sounds like you have an ingress problem. That is, you don't really have two pods communicating; you need to hit your backend service from outside the cluster. Under the hood this is always accomplished with a node port on a service: a single port on any node in your cluster will forward to said service.

k8s has integration with cloud providers like AWS to hook all this up for you, but all it's doing is setting up an ELB which load balances to that port on every node in your cluster.
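Concretely, the no-cloud-provider version of that is just a NodePort service (names and ports here are arbitrary):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: backend
    spec:
      type: NodePort
      selector:
        app: backend
      ports:
      - port: 80
        targetPort: 8080
        nodePort: 30080
    EOF

After that, http://<any-node-ip>:30080 reaches the backend pods from outside the cluster; a cloud integration just automates pointing an ELB at that port on every node.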


Mindfulness isn't over-hyped. Mindfulness takes effort. There are crazy videos of Buddhist monks walking on their thumbs. People can raise their body temperature several degrees at will (and it's how a lot of open-ocean swimmers survive, say, crossing the English Channel). But it takes practice, every day, for years, and you can't fake it - your body knows you're cheating and can't be bribed.

Given that - how do you do a statistical analysis across the population of mindfulness meditation vs. pill popping? Pills always work, often with horrible debilitating side effects (yay Opioids!). Meanwhile the monk I met who learned to walk again after being confined to a wheelchair (gymnastics injury) had to sit under a tree for several years doing muscle exercises and willing the pain away.

So of course science can't rationalize it. How do you measure the strength of will of people? It's not so easy to put these sorts of things into a study. So I guess we'll keep eating the poison pill. Shrug.


People heal from injuries using physical therapy on a regular basis.

I also don't know what walking on one's thumbs has to do with mindfulness.

You don't need to do a statistical analysis. We have other tools that can be used, like double-blind trials comparing meditation and medication.

Anyways, I think you should be very careful about placing much stock in anything that can't be studied in a scientific way. Such a thing would by definition have no measurable effect on the world or be unchangeable.


> Pills always work

We are clearly on different things.


Kubernetes is cool and all, but there needs to be a lot of simplification for it to be "nice to use". Essentially the problem is that it's the "opposite" of Golang - the number of ways to do the same thing is massive, leading to huge numbers of headaches when trying to get things done.

Recently, I spun up a simple pod-to-pod communication example but I found it pretty difficult. If you look up cluster networking in Kubernetes (https://kubernetes.io/docs/concepts/cluster-administration/n...) you'll find a whole fire hose of different options from ingress to calico to fabric and on and on.

This was what it took for me to try and rubber ducky my way to getting networking to work on Kubernetes, and in the end I had to get help from a friend at work (https://stackoverflow.com/questions/50195896/how-do-i-get-on...). It may be better than what came before, but it's not great.


Kops, kubeadm, rancher, kubespray all do what you want, but differently depending on your needs. What were you looking for?


EXACTLY


I don't see your point, but I should have clarified more:

Kops == AWS install (see the rough sketch below).

Rancher == Easy small cluster on-prem or private cloud. GUI push-button setup.

kubeadm == CLI tool to set up a cluster manually with basic settings. Not very flexible, but can get a cluster working in minutes.

kubespray == Ansible playbooks for setting up a k8s cluster. The most powerful of the bunch, especially if you're familiar with Ansible. This is the preferred way to run it in production I believe, along with customizations on top.
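If it helps, the kops flow on AWS is roughly this (domain and bucket are placeholders, and the flags may have shifted between releases):

    # kops keeps cluster state in an S3 bucket
    export KOPS_STATE_STORE=s3://my-kops-state-bucket
    # write the cluster spec, then actually create the AWS resources
    kops create cluster --name=k8s.example.com --zones=us-east-1a
    kops update cluster k8s.example.com --yes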


Why don't you use something like Kops? It will create (and manage) the cluster for you.


Sounds great! Are you guys hiring?


This 'source' should be taken down.

This is a propaganda website - here are the top three current stories:

- http://newobserveronline.com/nonwhite-invader-caravan-storms...

- http://newobserveronline.com/indian-british-high-court-judge...

- http://newobserveronline.com/adl-booted-from-starbucks-racis...

Hacker News should be above supporting white nationalism and hate, especially so when doing so supports its preconceptions.

Gross.


Flag it.


I remember reading this article waiting for my haircut this morning.

Here was my biggest takeaway:

> More realistically, he said, Oman could store at least a billion tons of CO2 annually. (Current yearly worldwide emissions are close to 40 billion tons.)

So we're looking at 2.5% sequestration of what we're currently adding, from the largest deposits in the world. Best case, realistically.

The whole thing sounds like a total non-starter.


Sometimes the way you solve a big complex distributed global problem is a bit at a time. It’s unlikely we’re going to find one single global solution to carbon sequestration.


> Sometimes the way you solve a big complex distributed global problem is a bit at a time.

An example of such a time, from the various such historical "sometimes" available?


There are so many examples I don't even know where to start. Disease eradication (through education, sanitation improvement, suppressing disease vectors, vaccination, etc), then there's poverty reduction, the way piracy was eradicated in the 19th Century, the way ozone depletion was tackled. There are endless examples.

Anyway, what are you actually saying other than taking a cheap shot?


Isn't it the case that much of our emissions is already absorbed (e.g. by oceans, about a third of human activity), such that the 2.5% is a lot higher relative to the net amount?

e.g. say we emit 100 and absorb 95, absorbing 2.5% of our total emissions actually cuts net emissions in half. Or are you already figuring that in?


Seems to me this is easily fixed.

If a jury of peers cannot understand the legal contract during an arbitration, rule in favor of the signer of the contract and against the writer.

You'll see that shit disappear with a quickness.


This is a great idea but I don't think it goes far enough.

It means that 12 lay people can eventually, after some coaching, study, and debate, understand a contract. Or one lawyer, presumably.

The problem remains though, that one lay person needs to understand it without study or coaching, and without taking days to do so. Most of us have probably clicked through hundreds or even thousands of EULAs and other crap without having days to understand each one. Every app, every shrinkwrap, your toaster, and even the bloody nav on your car with its OK button: there's just too much.


The jury should get as much time to read the contract as the average user spends reading the contract and should not get the help of a lawyer. If the company doesn't collect data about how much time users spend reading the contract, then it should be presumed that they didn't read it at all and then only terms that the jury can guess would be in the contract without reading it should be valid.


I mean...Firefox is completely rewritten from the ground up in Rust, a language invented by Mozilla because they rock and they're absolutely killing it. Given that it's the browser with the latest full bore rewrite, shouldn't we expect it to be the coolest atm?


It's not a full rewrite, but many core parts have been rewritten. (Possibly most notably the UI).


I currently work as a solutions engineer on Oracle Cloud. The product is absolute garbage and I'm miserable. If anyone has any good leads on other positions please let me know!


You sound exactly like the two oracle engineers I spoke to circa 2000. “We hate WebDB. If you don’t buy it, we’ll get made redundant and can sign on and get free education to do something other than IT. Please don’t buy it. Think of our children”.

I exaggerate but not by much.


Happy to help - there are plenty of open positions on the AWS team!


Hey thanks for the kind words, I sent you an email. Have a great weekend!


Are you the same person, and if so, why a different account??


Yes, sorry, it is the same person; I apologize for the confusion. I was eating dinner and using my phone, and then I came home and switched to my computer. In retrospect I can see how that might seem off-putting.


Forgot the password to the throwaway?


I was expecting "Happy to answer any questions you have!", but this was a refreshing twist. Hoping for the best bud.


Are you able to share any specific complaints, gripes or issues? This entire thread is an Oracle bash fest without much specific data or information.

Don't get me wrong, not a huge fan of Oracle but am curious to know if there is an actual problem, or if this is simply a hater-ade/fanboy party with no substance.


Given that my comment blew up and has high visibility, I feel I probably shouldn't share anything specific. It was pretty much a throwaway comment that I didn't expect to get as much attention as it did. The best I can say is that after working for the company, I think the general news.ycombinator.com beliefs about the performance of Oracle are entirely justified. I thought going in that it might have been overblown, like you say, some sort of "startups are cool, big corps are evil, boo" kind of thing, but it is not. I love the people I work with in my section! Just not the overall firm.


I wonder if there will be a management witch hunt against the Solutions Engineering team on Monday provoked by your post.


Well the joke would be on management. I am sure that ALL of the solutions engineers think that the product is garbage, given that they are the ones who have to deal with the problems.


I am sure that ALL of the solutions engineers think that the product is garbage, given that they are the ones who have to deal with the problems.

On the other hand, without the problems, the solution engineers might not exist...

...which is, in one sentence, the reason why Oracle consultants exist.


If you are still looking about, our group at Microsoft - CSE (commercial software engineering) is hiring. Some evangelism, lots of open source, the group I am part of works heavily with both the open source side of things as well as the product team around Kubernetes.


> ... but am curious to know if there is an actual problem, or if this is simply a hater-ade/fanboy party with no substance.

You've never had to deal with Oracle sales or "support" staff or -- even worse -- manage or support Oracle products in production, have you? If you had, well, you would already know the answer. Oracle RDBMS might be the one exception.

(FWIW, I mean "real" Oracle products, not, say, MySQL or Oracle Linux, for example.)


The Oracle cloud is available to anyone and comes with $300 of credit: https://cloud.oracle.com/home

Ignore the comments and just try it. Run an instance for a weekend, try to setup a site or install a DB and see how it goes. I think you'll learn very quickly why the reputation exists.


The pricing seems... really weird. Many very different offerings are priced exactly the same. E.g.

VM.Standard1.1 and VM.Standard1.16: same cost, but the latter has 16 cores instead of 1 (same CPU) and 112 GB of memory instead of 7.

https://cloud.oracle.com/en_US/iaas/pricing


The pricing is per OCPU (Oracle CPU core) per hour. 16 cores = 16x the listed price for a single core. But yes, the table is a perfect example of how bad everything is.


If you ever think you understand the price of anything Oracle, take it as a signal that you are wrong.


Amazon's AWS team probably contacts me once a month with job offers. I suspect you could reach out to them and get yourself a much better job. My friend who works for them loves it.


I've been contacted and interviewed a few times with AWS and the threat intel team, but never got ANY followup after the second interview. It's disheartening. At this point I'm ready to give up. Why do their recruiters keep reaching out if they never even give a yay/nay after six months and two interviews?


I've a friend up in Washington that loves it. That said, I've worked with a lot of ex-AWS people that couldn't stand the culture.


Yeah I worked in PDIT on some of the cloud stuff and it was a sad joke. Get out, you won’t regret it.


I've heard IBM's Cloud is much the same.

The only thing it has in common with a proper cloud thing like AWS or GCP is the 'cloud' in the name.


Isn't IBM's cloud offering built on top of Cloud Foundry? (Not saying you are wrong, just trying to figure out which partner is to blame)


No. IaaS is Softlayer, which they acquired, a mix of VMs and dedicated machines. It worked well before, but the networking is now obsolete, operations are too manual, and any price advantage is long gone.

The rest of their products are managed services running on this, similar to the other clouds, and they work fine for what they are. Bandwidth costs are always the limiting factor though with any cross-cloud situation and their Watson AI is useless and nowhere near what they hype.


Wouldn't matter what it's built on :-)


I'm having the same experience with Azure after coming from AWS.


Ditto. Kinda.

I moved a lot of infra out of AWS and into GKE, which I'm loving. Outside of GKE, which is great, GCP has a few things that I like better than AWS.

I also moved some infra into Azure, specifically into ACS, and then AKS and now ACS-engine.

It is very much not great.

