Kubernetes 1.4: Making it easy to run on Kubernetes anywhere (kubernetes.io)
228 points by okket on Sept 26, 2016 | hide | past | favorite | 79 comments


Huge congrats to the team!

- Incredibly simple setup [1]
- Stateful application support, with integrated Helm [2]
- New cross-cluster federation features (secrets, events, and namespaces, with lots more in alpha) [3]

And lots more... please let us know if you have any questions!

[1] http://kubernetes.io/docs/getting-started-guides/kubeadm/
[2] https://github.com/kubernetes/helm/blob/master/docs/charts.m...
[3] http://kubernetes.io/docs/user-guide/federation/

Disclosure: I work at Google on Kubernetes


Well, yeah, it looks great. However, for ease of use at "small" scale, it would be cool to have a built-in load balancer (a pretty simple one, just HTTP, maybe HTTPS via Let's Encrypt).

At the moment, adding a cloud load balancer per project is probably not affordable for smaller projects (and it would make any PaaS useless). (I mean, that's not a "big" problem; it's OK to build it myself, with less quality than you could ;))

But I want to say that Kubernetes is extremely well designed, and the amazing part is that you can start with a really, really small cluster (like 1 master, 1 node) and grow it pretty easily later.


We've heard this quite a lot. There are actually a few ways to tackle it; one of the most common is to either use Ingress (a built-in L7 load balancer) or run an nginx instance in a pod on the cluster (https://github.com/nginxinc/kubernetes-ingress) and, in either case, direct all traffic to a node port across your cluster.
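For the simplest case, an Ingress with just a default backend sends all traffic to one service. A minimal sketch, assuming the `extensions/v1beta1` API group that Ingress lived in around the 1.4 era (resource and service names here are hypothetical):

```yaml
# Minimal Ingress: every request goes to one backend service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  backend:
    serviceName: web-svc   # hypothetical Service in the same namespace
    servicePort: 80
```

An Ingress controller (e.g. the nginx one linked above) has to be running in the cluster for this object to do anything.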

It's really one command to do either - if you have trouble, please ping me (aronchick (at) google)!

Disclosure: I work at Google on Kubernetes


Is there a way to do this properly in AWS without nginx? It would also be great to have a feature to switch only a percentage of traffic to an app when doing blue/green deployments.


The best way to do blue/green is to put both versions behind a single endpoint. So, if your selector is 'app: node-app', you put the label 'app: node-app' on BOTH your existing version and your future version, and target all traffic at a service with the selector 'app: node-app'.

Then, you slowly spin up your new instances from 1 -> 10 -> 100 (or whatever). Traffic splits automatically because both apps carry the same label, and you control the ratio via how many instances of each you run.
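The scheme above can be sketched as one Service plus a second Deployment carrying the shared label (all names and the image are hypothetical; Deployments lived in `extensions/v1beta1` around 1.4):

```yaml
# One Service selects pods from BOTH versions via the shared label;
# traffic splits roughly in proportion to replica counts.
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  selector:
    app: node-app          # matches pods of v1 AND v2
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-app-v2
spec:
  replicas: 1              # scale up gradually: 1 -> 10 -> 100
  template:
    metadata:
      labels:
        app: node-app      # same label as the existing v1 deployment
        version: v2        # extra label to distinguish versions for rollback
    spec:
      containers:
      - name: node-app
        image: example/node-app:v2
```

The `version` label is only for operators; the Service ignores it, which is what makes the gradual cutover work.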

Disclosure: I work at Google on Kubernetes


Does Kubernetes take care of 'connection draining', i.e. making sure in-flight requests are completed before killing traffic to an instance?


Sort of a blend. It will send a SIGTERM to your process first, which should be your signal to start draining and exit once in-flight requests are done. If you don't finish within a configurable timeout, a SIGKILL is sent.
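The "configurable timeout" is `terminationGracePeriodSeconds` on the pod spec (it defaults to 30 seconds). A minimal sketch, with a hypothetical image name:

```yaml
# On deletion: SIGTERM immediately, SIGKILL after the grace period expires.
apiVersion: v1
kind: Pod
metadata:
  name: drain-example
spec:
  terminationGracePeriodSeconds: 60   # allow up to 60s to drain connections
  containers:
  - name: app
    image: example/app:1.0
```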


I was wondering how to automate this through some sort of pipeline that needs a human to click "go on with the next X% of the rollout", and how I would do it with kubectl without too much pain.


We've been doing this with multiple deployments (e.g. 1, half, all), updating them sequentially once the previous one looked good. (These are all fronted by one service.)
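A sketch of that scheme with plain kubectl (deployment and image names hypothetical): three deployments behind one service, each promoted manually after the previous stage looks healthy.

```shell
# Stage 1: update the 1-replica deployment, then verify it.
kubectl set image deployment/app-canary app=example/app:v2
# Human checks metrics/logs, then approves stage 2 (half the fleet):
kubectl set image deployment/app-half app=example/app:v2
# Verify again, then roll the remainder:
kubectl set image deployment/app-rest app=example/app:v2
```

Because all three deployments share the service's selector, the traffic split at each stage is just the ratio of updated to not-yet-updated replicas.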


Take a look at Traefik (http://traefik.io/). It's a reverse proxy you run as an edge service behind the cloud provider's L4/L7 LB. It's designed to reconfigure dynamically: it can listen for K8s Ingress changes and update itself automatically, and it has Let's Encrypt support (although at the moment that's not so streamlined in k8s, but it's supposed to change soon).


Take a look at Vamp (vamp.io), which also integrates with Kubernetes and is a programmable router based on HAProxy. It does load balancing, but also percentage- and condition-based canary releasing: http://vamp.io/documentation/installation/kubernetes/

(disclaimer: I'm the co-founder; looking forward to hearing your thoughts and feedback!)


Vamp is awesome! It fits really nicely into Kubernetes.


I feel that the homepage at kubernetes.io is a poor introduction to the project. All the keywords and short descriptions don't add up to a complete picture of what Kubernetes does and why someone would want to use it. The opening tagline "Kubernetes is an open-source system for..." seems like a complete description of the project, but to a _newcomer_, the sentence is not easy to parse.

The "What is Kubernetes?" page [1] gives a very clear overview of Kubernetes that will make sense both to people who already know about containers in general and those who are new to the concept.

Can someone help me understand what type of person the homepage is targeted at? It just doesn't do anything for me. I'm mainly interested because I find that a lot of projects have very poor home pages, even if the rest of the project is awesome.

[1] http://kubernetes.io/docs/whatisk8s/


Absolutely – as someone looking to enter the container orchestration space, I have been spending time evaluating Docker Swarm (1.12) and Kubernetes. While the consensus seems to be that Swarm is immature and its "productionability" questionable, Docker's documentation, while by no means perfect, was far more approachable than whatever Kubernetes has thrown together.

Perhaps that comes as a consequence of Docker shooting for the all-built-in approach, but I'd like to see a better overview and ramp-up in the Kubernetes space – their "101" and "201" docs are laughable.


We are trying to improve the documentation and developer experience. Please try the new kubeadm install docs and let me know what you think.

http://kubernetes.io/docs/getting-started-guides/kubeadm/

Disclosure: I wrote the doc (but don't work at Google) :)


I think that it targets people who already know or have an idea as to what Kubernetes is and are looking to compare/contrast various Docker/rkt schedulers for their infrastructure.

I really doubt they're targeting anyone who isn't familiar with the above, and with Kubernetes as in-flux and complex as it is right now (especially regarding the documentation/site), they probably shouldn't, although simplifying deployment and management does seem to be a very important goal in the end.

The kube docs/pages are mostly just from the GitHub repo. I doubt they've had much editorial attention, unfortunately.


I think this is excellent feedback - we should revamp!

Disclosure: I work at Google on Kubernetes


One of my huge headaches in working with Kubernetes daily over the last year is how dispersed the documentation is. The documentation on the site will point you to manifests in github.com/kubernetes/, which just point you back to the documentation on www.kubernetes.io.

If you don't know that https://github.com/kubernetes/kubernetes.github.io/tree/mast... exists, you're in for a bad time.


Yes - we've made huge progress against the many many repos of docs, but we're not close to done yet :( Please file bugs!

Disclosure: I work at Google on Kubernetes


I often end up referring to the GoDoc and core code when the documentation is lacking. I wish the docs gave a deeper explanation of each component and the object model.

There are also places where the content is too specific and not generally applicable. For example, this addon example assumes you're using the kube-up convenience script: http://kubernetes.io/docs/getting-started-guides/logging-ela... There's no explanation of what the magic flag actually does, just "set this and run kube-up". (The real explanation is "copy the appropriate specs to your cluster addon directory".)


Yeah, I generally look at the Go objects for docs. Also, I think writing configs in Go rather than YAML is just better. I wrote a post about it: https://kozikow.com/2016/09/02/using-go-to-autogenerate-kube...


Thanks for the post. I've been using k8s forever, but I'm pretty new to Go. It's probably a great idea to start doing things in Go instead.


I'm confused; those are the docs that are rendered and visible on kubernetes.io.


The docs in github/kubernetes/kubernetes.github.io are what gets rendered. The documentation was broken apart at some point. 90% of the time (if not always), when I'm looking for a file mentioned in the actual website docs, the link (if one exists) points to the kubernetes repo, not the kubernetes.io repo. Going to that location in the kubernetes repo just loops you back to the website you were just viewing.

If you've used k8s docs for any significant amount of time I can pretty much guarantee that you've encountered this.

Here's an example: limit-example.yml, which is mentioned over and over in the limitrange docs, but the file is nowhere to be found:

http://kubernetes.io/docs/admin/limitrange/

If you don't know about the kubernetes.io repo (which someone who has never gone hunting for missing docs won't), you'll look in the kubernetes repo on GitHub for the missing file, where you'd think it would be:

https://github.com/kubernetes/kubernetes/tree/master/docs/ad...

Where it actually is: https://github.com/kubernetes/kubernetes.github.io/tree/mast...

Edit: I have been poor about submitting doc bug reports because I don't really know what the state of the docs is - whether they just migrated, whether they're in the middle of cleaning things up, or what. I suppose I'll just start creating issues about them in the future.

Edit: I just realized that's limit.yml; I'm not even sure where limit-example.yml is, or if there's any difference. Like the Go guys mentioned above, there are also a huge number of features undocumented on k8s.io that you'd only find by reading the Go definitions.


I see. That page is kind of rough. It doesn't even seem to link to the yaml file; it just implicitly references the docs path as if the user has a local checkout of the kubernetes code. It should probably be a kubectl apply link that directly references the yaml blob URL on GitHub.

I think any and all help is welcome on the docs, it seems to be a well-known weak spot.


I've been looking at moving our system to Kubernetes, as it seems mostly cloud-agnostic and should work well for our use cases. However, I was put off by the somewhat ad-hoc set of installation methods, which mostly boiled down to "run a big opaque script that does a thing to work with your specific provider".

Really glad to see a dead-simple setup. I'm not exactly unfamiliar with operations or containers or Linux or anything, but a turnkey setup so I can play on my DO effortlessly and get an idea is really nice! And I only started looking recently, so having `kubeadm` available now is quite convenient.


If you want a completely non-opaque way of installing Kubernetes (or just to see the steps in general), check out Kubernetes The Hard Way [1].

That said, `kubeadm` is a great addition!

[1] https://github.com/kelseyhightower/kubernetes-the-hard-way


Note that "Kubernetes The Hard Way" is mostly focused on teaching you what happens during cluster bootstrap.

It is NOT a production-ready setup in any form. You should investigate which alternatives exist for your platform (e.g. GKE for Google Cloud, kops for AWS).


Recently I was at a Docker meetup, and one speaker really embraced Docker for everything, so his only prerequisite was an installed Docker daemon.

The setup scripts to turn a daemonized docker server into a cluster were all published as a docker image themselves.

So installation was mainly one docker run away: he used shell evaluation of a subshell to start the actual docker run. E.g., $(docker run ...imagename) would print the actual docker command, which would in turn contain volume mount options for the Docker socket to help set up the whole machine.

It was quite fascinating to watch this bootstrap method rely on no package manager at all, just a deployed Docker engine.
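The pattern described above looks roughly like this (the image name and arguments are hypothetical, just to illustrate the shape):

```shell
# The inner `docker run` only PRINTS a full docker command to stdout;
# the outer $( ) then executes whatever was printed.
$(docker run --rm example/cluster-bootstrap print-command)

# The printed command would look something like the following: it mounts
# the Docker socket so the bootstrap container can drive the host daemon.
#   docker run -v /var/run/docker.sock:/var/run/docker.sock \
#     example/cluster-bootstrap install
```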

So what I am saying is that this would be an interesting deployment approach for Kubernetes as well.



Thank you


Kubernetes-Anywhere indeed deploys Kubernetes and the only Kubernetes asset is the `hyperkube` container (and even that is just pulled automatically when the node comes up). Makes for fast/easy node provisioning.


You can run all the k8s components inside containers with hyperkube.


Would anybody like to shed some light on how OpenShift Origin relates to K8s in general? I would like to better understand which path to follow. What advantages does OpenShift Origin have over "pure" K8s? Thanks!


OpenShift adds a lot of features around building applications from source and authoring those applications as pods in Kubernetes. If you already have a build pipeline set up you may want to stick with pure Kubernetes.

They also have an Ansible-based installation process that makes your cluster a lot more production-ready than the basic Kubernetes scripts.

For me OpenShift is comparable to Heroku or Elastic Beanstalk where developers can deploy applications from source without knowing a lot about the underlying infrastructure. Of course you still need an ops team to manage OpenShift.


Another significant scenario is tenancy: if you are just using Kube for a single team / set of apps, most of the security and policy in OpenShift isn't useful to you. But if you want to share access to that cluster, the security, policy, and integrated RBAC allow many teams to collaborate on their own applications and self-service. So: platform for others vs. platform for ops.

(I work on OpenShift and Kubernetes)


Isn't much/most of that RBAC upstream in k8s 1.3 (code from OpenShift written by Red Hat engineers like yourself)? What is still in OpenShift related to AAA that isn't upstream now?


Still a fair amount of the glue and interspersed code that wires it together sits outside, plus the user management code (user, group, identity integration to all the various providers). Also all the out of the box default security. Part of the benefit of being slightly apart from Kube initially (which we did because we had way too many things we wanted to bring together to put into Kube at the time) is that we could be opinionated and say things like:

* every component will use client cert + TLS + specific authorization roles to interconnect

* no ability to configure the cluster without those on

* be secure by default, and opinionated about authz/n

* lock down everything that could be abused (like letting end users change ingress IPs on services, or direct volume mounting)

* make namespaces the unit of tenancy and restrict regular users from modifying most things in namespaces that impact policy

The raw pieces are in Kube now, but effecting the same opinionated defaults while still preserving the flexibility many in the community want (like direct Keystone integration, or no restrictions on pods by default) will take some time. Our goal is to get there in a way that also makes Kube more extensible and flexible - I don't believe everyone needs everything we believe in, but by doing it in pieces we do get to ensure it's possible for someone else to do it.


Nice! Well I'm a huge fan and follower of your work, so do please keep it up. I appreciate your willingness to eradicate ignorance (mine!).


You may want to check the section "Kubernetes is not":

https://github.com/kubernetes/kubernetes/blob/release-1.4/do...


Great stuff; happy to see this, especially the part about cluster setup, though it's still early days. Currently I'm learning a lot about getting started on AWS, and it is still a bit too painful... after using GKE you just don't want to deal with manual setup.


Also, don't forget about minikube, which lets you play around with a Kubernetes cluster locally. You should really work with this before even attempting to set up a cluster in the cloud.


You should check out kops. It seems to me like it's one level of abstraction above kubeadm, and it makes creating clusters on AWS _ridiculously_ easy (one command).


I'd like to try out kops; it sounded good. But step one was setting up a zone in Route 53, and we don't use Route 53. kube-aws from the CoreOS folks makes the same assumption but is still usable; however, it doesn't have multi-AZ capability.

I love kubernetes and I can't wait for the tooling to mature. I'm about to give 1.4 a spin and see where things stand.


You can use Route 53 for just a subdomain even if your parent domain isn't managed through Route 53; see https://github.com/kubernetes/kops/blob/master/docs/creating...


Plus, kube-aws doesn't really allow for easy maintenance of the cluster (i.e. cluster updates), as far as I know.


I'll definitely try it; too bad the 50-node limitation doesn't fit my use case.


That's only if you use VPC networking. If you use external networking, you can deploy a networking DaemonSet (Weave, Flannel, etc.) and there's no 50-node limit.


So you want to be a DevOps hero? To obtain Google Borg-inspired container scheduling and management? It's a powerful abstraction, and I've enjoyed using K8s over the last year. However, like most freely available software, there is work to be done.

Here is my take on the state of things:

1. Still very much in rapid development, with features coming at a breakneck pace. I think Kubernetes is sold as production-ready a bit too hard: it takes a good amount of effort to make a cluster production-ready for anything non-trivial. Expect to contribute PRs to fix the issues you run into. It's probably easiest to use GCE, since that has the most active development, I think. Otherwise, OpenShift (Red Hat's production fork) for on-premise installation. AWS is in a somewhat working state, with many improvements coming (it's what I'm using).

2. The maintainers are very open to contributions and discussion. PRs are generally accepted within a couple of weeks (and given the volume of issues/PRs, this is quite amazing). kubernetes.slack.com is a great way to talk to many of the core developers. I wish they had a Slack subscription so we could search the very rich chat history; a lot of stuff is missed unless you read the chat rooms daily.

3. My impression, which I admit could be totally wrong, is that much of the discussion around big changes happens among the core Google & Red Hat engineers offline. I wish the project used an open mailing list for these discussions, or held them in Slack (with a subscription, so history is available), or some other recorded text medium. I don't think the current mechanisms scale to the size of the project and the needs of its users.

3b. A lot of time seems devoted to things like scaling to 1000+ nodes, when fundamental things like kube-proxy are, in my opinion, broken for basic use cases (kube-proxy uses an iptables hack for VIPs, which leads to problems like the connection-tracking table filling up and keepalive connections breaking during deployments).

4. The provided provisioning scripts were traditionally pretty poor; kube-addons is a broken shell-script mess. Fortunately, this is improving quickly: kops/kubeadm help with provisioning, CoreOS is doing lots of work here as well, DaemonSets are replacing kube-addons, and so on. But expect to do work here if you're serious about using Kubernetes in prod. I'm using Ansible/Terraform for this.

5. Ingress (external access to your cluster) is still ad hoc. It requires custom development and testing to set up.


Just a few comments.

1. GKE (Google Container Engine; GCE is Compute Engine) lags weeks behind any k8s release, so if you're looking for bleeding edge, it really isn't the best system; you're better off managing the cluster yourself. When GKE gets bumped to a recent version seems to be a private matter decided by a few Google employees, so you're never sure when you're going to get 1.37 or whatever version is multiple sem-minors behind. It seems to usually take about 2-3 weeks, but I've never found any discussion or issues about its status, so I wind up just checking every few days to see if it's been released. I have a mission-critical feature that's being released (I hope) with 1.5 (IP persistence for sockets), and it'd be really nice to be able to follow the decision-making process - or at least have some understanding of what it entails (3 weeks of no major issues? GKE-specific bugs? someone comfortable enough to release the hounds?).

5. I've had nothing but headaches with Ingress deployments. I'd wager it's by far the most-raised topic on the k8s Slack. The documentation is all over the place, and it seems like every example is completely different from the last, yet there's never any explanation as to why exampleA differs from exampleB while doing the same thing. Then you throw in annotations that might be required, and IIRC unless you're reading the Go docs you wouldn't know anything about them.


GKE is currently 4 days behind OSS. That's hardly "weeks behind". 1.4.0 is available in GKE right now.


Yeah, that's a recent change and much appreciated. I was on 1.36 for quite a while. The milestone mentioning the release dates for various regions is great, too. Not sure if that's always been there and I overlooked it, though.


I saw the mention of "curated and pre-tested Helm charts". I'm curious: is anyone running databases on Kubernetes?


Yeah, you mount them on a PVC and their storage gets reattached.
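A minimal sketch of that pattern (claim name, sizes, and image are hypothetical): the PersistentVolumeClaim keeps the database's storage attached across pod reschedules.

```yaml
# Claim durable storage independent of any one pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Mount the claim into the database pod; if the pod is rescheduled,
# a replacement pod referencing the same claim gets the same data.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:9.5
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data
```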


Great work. What should I make of ScheduledJob being in alpha?


It means it's new; the API might change before it makes it through beta and into stable.
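For reference, the alpha shape in 1.4 looked roughly like this (the resource was later renamed CronJob, so this is exactly the kind of API that may change; names and image are hypothetical, and the `batch/v2alpha1` group had to be explicitly enabled on the apiserver):

```yaml
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"          # standard cron syntax: 02:00 daily
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: example/report:1.0
```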


I was trying to write a tutorial on how to set up Kubernetes on bare metal. I created an etcd cluster of 5 nodes that, for security, checks the certificates of clients that communicate with it. Sadly, I found that at least one component of Kubernetes is either not using the provided certificates, or I am doing something wrong. I can see that entries get created in etcd, so the certificates are definitely correct, but kubectl get cs shows the etcd cluster as unhealthy.

Given that some parts of Kubernetes talk to etcd fine, but I am not able to tell the scope of correctness, I am kind of blocked. I don't know whether this is something being worked on, or whether maybe I should look into it and possibly fix it. I tried to ask on the Kubernetes Slack, but nobody seemed too interested. I am guessing everyone runs etcd without checking certificate validity? I was able to run it that way, but it kind of feels wrong. I also wanted to check whether the problem exists with Kubernetes 1.4, but I couldn't find any migration guides. Maybe I'll just start again...

Said article:

https://medium.com/@elcct/kubernetes-on-bare-metal-part-5-ku...

and the ticket:

https://github.com/kubernetes/kubernetes/issues/29330


I was under the impression k8s already did these things, but it seems like a lot of these features are just now entering beta.


"No guys stop complaining that google doesn't give support to kubernetes, we are not Google, we are the Cloud Native Computing Foundation." -- Some kubernetes developer.

Release blog of kubernetes brought to you by: -- Aparna Sinha, Product Manager, Google.

Seriously, though, those new features are really neat and seem to compete with the ease of setup of Docker 1.12 (though Swarm was a bit of a mess when I last tried to make something useful of it, on 1.12.0). I hope I can test them at work soon.

For now I'm having a great time with rancher.io and cattle.


Sorry, can you say more about your issue? We've always said we contribute (a ton!) to Kubernetes, but it's definitely not a commercial product, so we don't offer support for on-premises/non-Google Cloud deployments. That said (!!), if paid support is what you're after, please use one of the MANY organizations that DO offer support (http://kubernetes.io/community/ - about halfway down the page) and are huge contributors to the community.

Disclosure: I work at Google on Kubernetes.


The problem is, if the supporting organization is not the same as the developing organization, there's no obligation for the developing organization to respond to feedback from the supporting organization -- the developers are free to accept or reject feedback as they deem fit.

There could even be a conflict of interest: the Kubernetes developers are primarily sponsored by Google, which has an interest in promoting its own cloud offerings over those of their competitors. (Note the relative difficulty of setting up K8S on EC2; even if you can set it up, K8S assumes the ability to create a network topology that's unique to GCE; otherwise you have to use overlay network hacks.)


I don't think there is a distinction between a supporting and a developing organization at this point in the Kube ecosystem. At Red Hat we offer an enterprise version of Kube (OpenShift v3) and at the same time we contribute a ton of stuff back upstream. Personally it has been a great experience for me working with engineers from Google, different companies, and even individual contributors and I am pretty sure most if not all of my coworkers feel the same thing.


Have you ever made a contribution that conflicted with Google's goals? If so, how was it handled?


To be honest, I've been contributing to Kubernetes longer than anyone who doesn't work at Google, and it hasn't ever been an issue for me. The Googlers I have worked with have been sticklers for doing the right thing for the people we expected to need Kubernetes.

The Google team's goals (seen from the outside) have been to build a phenomenal system for running applications that is stable and reliable, makes app authors'/maintainers' lives easier, and will succeed as an open-source ecosystem. Where we have disagreed, it's more in the details of what to prioritize in the short term to succeed broadly (features, scale, ease of install, ease of running existing Docker images, etc.). We haven't always picked right - but it's not for lack of trying.

EDIT: And I have certainly "forced" things that I felt certain audiences need into Kube by convincing other contributors over their initial objections. I can't think of a place where a reasoned argument has not carried the day, ever.


As someone who has been on the receiving end of Clayton's arguments (and a sometimes-winner :), I vouch. What he says above is really and truly the highest compliment one could pay.


A few items:

1) The people on that page all contribute mightily to the project in various ways - we wouldn't be where we are without them. We take feedback from everyone (we have over 15 special interest groups, many led by non-core team members).

2) K8s developers are most definitely not primarily sponsored by Google - more than 60% of K8s devs are NOT Googlers.

3) Overlay networks are definitely not a hack - many partners set them up with great benefit. The fact is networking is hard (tm), and unless you're just looking for a flat network, you're going to have to use SOMETHING.

Disclosure: I work at Google on Kubernetes


Where are you getting the 60% number from?

From an analysis of all commits in the k8s (main) repo this is the data I am getting about domains and the breakup of which users under which domains commit/author the most.

  -----------------------
  Top 20 author (domains)
  -----------------------
  
  google.com => 16825
  gmail.com => 8220
  redhat.com => 4051
  fathomdb.com => 501
  bedafamily.com => 420
  coreos.com => 398
  huawei.com => 352
  raintown.org => 269
  zte.com.cn => 183
  mesosphere.io => 172
  zju.edu.cn => 140
  apache.org => 126
  mirantis.com => 72
  hotmail.co.uk => 67
  amadeus.com => 67
  163.com => 64
  us.ibm.com => 64
  tmrts.com => 44
  box.com => 43
  canonical.com => 42
  
  --------------------------
  Top 20 committer (domains)
  --------------------------
  
  google.com => 16655
  gmail.com => 7130
  redhat.com => 4065
  fathomdb.com => 493
  bedafamily.com => 419
  coreos.com => 388
  huawei.com => 348
  raintown.org => 268
  zte.com.cn => 180
  mesosphere.io => 174
  zju.edu.cn => 131
  apache.org => 121
  amadeus.com => 66
  163.com => 65
  us.ibm.com => 64
  hotmail.co.uk => 63
  mirantis.com => 63
  ebay.com => 53
  box.com => 43
  tmrts.com => 42
Btw, you (Google?) should really invest in something like http://stackalytics.com/ if the community wants good transparency around this type of data.

Crappy script to generate that data @ https://gist.github.com/harlowja/aca0b3c7d94c78014798fd9eb88...


There's no question Google has the most code checked in, but that can be a faulty metric (generated code, rebases, etc. can mess up authorship).

We think the more important number is unique contributors, where we (Google) are <50%.

For Stackalytics - http://stackalytics.com/?project_type=kubernetes-group&metri...

Looks like my 60% number is out of date - we (Google) are up to 44%. I'll have to figure out why.

Disclosure: I work at Google on Kubernetes


In my experience, the question of who the contributors are and how much they may contribute is less important than the question of who has control of the project.

If you're claiming that Google has delegated authority over Kubernetes to the open-source community, and therefore is not in a position to place its needs over those of the community, please say so explicitly here.

Why quibble over statistics when we can get an official statement?


We, Google, have contributed 100% of Kubernetes to the Cloud Native Computing Foundation and, therefore, are not in a position to place our needs over those of the community. [1]

That is not to say we (Google) aren't still deeply invested in its success (it's the core of our Google Container Engine), and, further, human beings who are employed at Google _are_ core contributors to the project - but the project itself is not controlled by Google.

[1] https://www.linuxfoundation.org/news-media/announcements/201...

Disclosure: I work at Google on Kubernetes


> K8s developers are most definitely not primarily sponsored by Google - more than 60% of K8s devs are NOT Googlers.

The key metric, in my view, is how many of them can approve PRs into master. How many of them are not Googlers?

Moreover, what would happen if a PR arrived that might make K8S incompatible with GCE, but be otherwise better for everyone else? I am certain it would be summarily rejected by Google.

> Overlay networks are definitely not a hack - many partners set them up with great benefit. The fact is networking is hard (tm), and unless you're just looking for a flat network, then you're going to have to use SOMETHING.

I respectfully disagree. Multiple IP addresses per node is a hard requirement of K8S, even though it's arguably unnecessary in environments where servers can be bound to arbitrary ports.

By doing so, K8S made an explicit tradeoff that was better suited to Google's cloud offering than others'. I'm not saying it was the wrong decision; I'm simply saying that it forces complexity on those who operate in single-IP-per-host environments. (In case you think I'm unfairly laying blame, I point the finger equally at AWS and other cloud providers who refuse to make it simple to allocate a useful number of IP addresses per instance.)

I maintain my characterization that overlay networks are a hack: they make tracing more complex (tcpdump doesn't natively understand VXLAN or decapsulate its frames); they are compatible with few (if any) NetFlow analyzers (which orgs often use for IDS and other purposes); and they add overhead to packet processing, particularly in virtualized environments that don't support VXLAN hardware offloading.


I originally objected to IP-per-pod, given how early we were in wide-scale deployment of SDN on metal (at Red Hat, a year before Kube 1.0, we were already worried about how in the world we could support IP-per-pod as well as it works on GCE). In light of everything, I think it was the right decision. While VXLAN is cumbersome, IP-per-entity allows very powerful integration with existing networks, fits well with almost all tools in the space, and makes apps much easier to run.

I do still want to see more solid work done for programmable BGP (calico) and programmable iptables (Contiv) and even programmable routers (it's not that hard to program routers on the fly today, just incredibly specific to each technology).

I also look forward to being able to exploit tools like ECMP more effectively within the cluster to do L3 load balancing and DSR - much of that would be a lot harder without being able to rely on endpoints everywhere.


I can't edit my previous comment, but let me just drop this here: Canonical JUST released a new commercially supported distro of Kubernetes.

https://insights.ubuntu.com/2016/09/27/canonical-expands-ent...

Disclosure: I work at Google on Kubernetes.


Does anyone else lament fleetd and the promise of simple, tool-based container orchestration?

Yet another complex framework.

sigh


Yeah, sure.

  Fleet for dynamic scheduling
  Consul for k/v service discovery
  Registrator to actually populate Consul
  Confd to talk to Consul so you can configure your containers
  
  ...

Oh wait, I seem to be building a crappier, more involved version of Kubernetes.

P.S. I actually did the above. Kubernetes is a better solution.


Can confirm; did similar and kubernetes is a better solution.


fleetd doesn't even support rolling updates; it just barely got support for in-place updates. There's a reason the CoreOS folks themselves don't dedicate many cycles to it anymore.


It has plenty of flaws, but it also solves problems that aren't well handled by Kubernetes (unless I'm missing something), such as scheduling arbitrary systemd units, including things like timers, across a cluster.

As a low level building block it is/was more flexible in some ways. Of course it is solving only a tiny part of what Kubernetes tries to solve.


Not particularly! Sometimes problems really are difficult and demand thoughtfully designed solutions, and sometimes the complexity of those solutions is both tolerable and worth it.



