
Hey folks, Talos creator here. Happy to answer any questions you guys may have. It sounds like there's some confusion about exactly what Talos is. There's a lot of good feedback here that we'll use to improve our documentation.

Talos is a Linux distribution built specifically for Kubernetes. The short version is that we have stripped out absolutely everything that is not required to make a machine a Kubernetes node, including SSH and console access (I will explain why).

Here goes the long version. We have done a number of things to improve security, including a read-only filesystem, except for what the Kubelet needs (/var/lib/kubelet, /etc/cni, etc.). It runs entirely in RAM from a Squashfs, and only Kubernetes makes use of a disk. We have stripped SSH/Console access and added a gRPC API that gives engineers the ability to debug and remediate issues.

We didn't just stop at this. We are writing everything, including the init system, in Golang, which allows us to integrate deeply with Kubernetes. Everything about Talos is API driven.

Some of the highlights include:

- SSH/console access replaced with gRPC API that is secured via mutual TLS.

- Immutable. Immutability prevents drift, making the cluster consistent across the board.

- Automated upgrades that can be orchestrated in an intelligent way. By using Kubernetes events, and our API, we can roll out upgrades from an operator (currently a WIP and planned for release in Talos 0.3) and do so in a safe manner.

- Cluster API (CAPI) integration that allows rapid creation of Kubernetes clusters using Kubernetes style declarative YAML.

- Support for AWS, GCP, Azure, Packet, vSphere, Bare Metal, and Docker. The experience for each is consistent, making it easy to reason about Talos regardless of where you run it.

- CIS and KSPP security configuration enforcements.

- Keeping current by supporting the latest and greatest version of Kubernetes, while writing upgrade paths into the system.

- Support for local Docker based clusters, easily created using our CLI. This is super useful for creating CI pipelines where you might want to run integration tests against the same Talos/Kubernetes versions running in production.

- Installs and upgrades are performed via containers.

We feel that by removing SSH/console, making the core of Talos read-only, and treating the nodes as ephemeral machines, we are creating a much more secure way to run Kubernetes. A really good talk was given on these ideas at Blackhat this year: https://swagitda.com/speaking/us-19-Shortridge-Forsgren-Cont.... We feel we align with the recommendations made there.

In addition to security, we envision a system that will be self-healing and intelligent. By having an API and integrating with Kubernetes, the sky is really the limit on the tooling we can build to create this self-healing system.

Our goal with Talos is to allow engineers to more or less forget about each individual node. Managing the OS alongside Kubernetes is a lot of work.

I will address the questions and comments as replies. Feel free to ask more as a reply to this comment.

Feel free to join our meetings every Monday and Thursday at 17:00 UTC on https://zoom.us/j/3595189922. Also, join our slack and I'd be more than happy to talk some more about Talos! https://slack.dev.talos-systems.io



One thing which I would need to switch from CoreOS to Talos is GPU drivers. My current setup uses the NVIDIA driver containers:

https://hub.docker.com/r/nvidia/driver

I build slightly customized images using a process derived from the one in the NVIDIA repo:

https://gitlab.com/nvidia/container-images/driver/blob/maste... https://gitlab.com/nvidia/container-images/driver/blob/maste...

The automation here is predicated on CoreOS distributing matching { kernel, headers, toolchain } artifacts for each release, and in particular how specific OS releases get promoted from the alpha -> beta -> stable channels without modification. This lets me build new drivers automatically for each alpha release, validate the drivers on the beta channel, and have no surprises on the stable channel. Does Talos intend to do something similar?


Yes we do. I personally am working on the channel-based approach for our 0.3 release that we just started developing. I would love it if you could make a meeting some time soon to chat some more. User feedback will be really helpful.

Since Talos is built entirely in containers and we control the entire toolchain, I believe you could achieve the same with Talos.


These all sound like fantastic choices — is anyone using this in production? I’d like to replace ours with this today tbh.


We are working with a number of users currently. Please see our README for community meeting times if you'd like to chat some more!


I am your target audience and a few things put me off:

- no SSH access to nodes. As you mentioned in the docs, ideally we'd need none of it, but restricting me to just dmesg, ps, and top via osctl won't cut it.

- due to the combination of a lack of SSH and a custom kernel, it is probably next to impossible to run the perf tool, which should be compiled using the kernel headers of the currently running kernel. The same goes for bcc or any other BPF-based tools.

- the node joins the cluster based on a static trustd username and password. It kinda defeats the purpose of all the dance around PKI if a static secret is all you need to become a node. This part is hard to get right, but a TPM or vTPM on cloud providers can be part of the solution.

- provisioning a cluster is the easy part. Upgrading it is where the fun begins. There should be a clear demonstration that you covered this case in your fully automated OS/Kubernetes installer; I couldn't find any on your website.

- you introduced your own components, namely trustd, osd, and proxyd. There should be a diagram of what runs where and how control and data are passed between the different components.

- building your own OS is a big task. You probably base yours on something; it would be good to document that, as it would make it more trustworthy.


First of all, thank you for taking the time to write this out. The feedback is very valuable. I will do my best to address each comment.

Let me start by laying out our design constraints. We knew we wanted a handful of simple features:

- minimal

- immutable

- and secure

and we approached them with the willingness to do whatever it took to achieve them, no matter how different it would be from any Linux distribution today.

The degree to which we want to obtain minimalism is what I like to call "ultra". Not a single file should be on the image that isn't absolutely needed. Furthermore, not a single process should be allowed to run that isn't required to obtain the goal of running Kubernetes. So we started by creating an image with just enough to run the kubelet and the kubelet only. Obviously, this isn't practical, but it was a place to start.

In implementing our immutability design constraint we decided to:

- make the root filesystem read-only

- have no package manager

- not allow any generic use of the OS (i.e. it would be only for the purposes of running Kubernetes)

When optimizing for one thing, you often degrade another. In our case, if we optimize for immutability, then minimalism becomes degraded: we still need a way to manage and debug the node, and we need libraries/binaries to do so. With no package manager, everything must be baked into the image, and thus we degrade minimalism.

Tacking on yet another design constraint, security, things become even more interesting. The more you add to a system, the higher the risk of vulnerabilities. The more allowed permissions in a system, the higher the risk of vulnerabilities. So minimalism and immutability actually complement security. In our case, security has the highest priority of all, which means we aren't willing to degrade anything that supports the security of the system. So minimalism and immutability must be present.

Aside from our design constraints of minimalism and immutability, we also avoid C as much as possible. We want to build something using a modern language for all the reasons you would choose a modern language over C today, but mostly for security purposes.

Taking all the above into consideration, we were still left with figuring out how to manage a machine without degrading minimalism, immutability, or security. So without tooling on the rootfs, without a package manager, and without a way to run custom processes, we still needed a way to obtain the information we need from a machine. Thus the API was born.

The API doesn't only solve the management issue, it also reinforces all of our design constraints:

- we can keep the image minimal with a single binary serving the API

- we can keep the image immutable by building a robust API

- we can retain security by using mutual TLS and offering a read-only API

- we can write it in a modern language, using modern tooling (Golang and gRPC)

At this point, what need is there for SSH/console access if the design constraints essentially remove all usefulness of console access? The problem isn't necessarily the need for SSH/console, it's the need for a way to get the data to make informed decisions.

There are also additional benefits to an API. There is a reason the concept exists. With an API you get standardization, strong types, and consistent, well-known output formats. The benefits are many.

I'd also like to point you in the direction of an excellent talk given this year at Blackhat: https://swagitda.com/speaking/us-19-Shortridge-Forsgren-Cont.... The section on D.I.E. in particular will add some additional support to the reasons I gave above.

That is my lengthy response to the reasoning behind the removal of SSH. Remember, just because we don't have SSH baked in, nothing is stopping you from running a DaemonSet that has SSH.
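Such a DaemonSet might look roughly like this; the image name and settings below are placeholders for illustration, not something Talos ships:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: debug-ssh
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: debug-ssh
  template:
    metadata:
      labels:
        app: debug-ssh
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: sshd
          image: example.com/debug-sshd:latest  # hypothetical image
          securityContext:
            privileged: true
          ports:
            - containerPort: 2222
```

Because it is a regular workload, the same RBAC, audit, and admission controls that govern everything else in the cluster govern this access path too.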

As for a custom kernel, we would love to support this. Happy to take in feedback here. We create Talos in containers and our goal is to create the necessary tooling to make this dead simple.

As for node joins, they do not happen with the trustd username and password. We use kubeadm under the hood, so it's token based, and possible to have a TTL. We have since moved to a token-based approach for trustd as well. Note that the trustd token simply gives a worker the ability to request a certificate for osd, so that you can hit the node's API.

We are currently working on an upgrade operator and it is planned for v0.3. If you would like to have some say in the direction we go, we would be happy to have you in our community meetings!

You make good points about the diagrams. It is clear from this post that we have work to do around the documentation.

And finally, Talos is not based on any distribution. We have a toolchain that we build, and subsequently build our entire distribution from.

I hope I have answered your questions well enough. I look forward to hearing back from you. Your input is valued, and we really would like to use it to turn this into something great!


Generally seems like a great offering!

I see immutable, but also upgradable? Is that via in-place upgrades or do upgrades require a reboot?

Example: severe bug or vulnerability in kubelet or containerd/docker. Can I use the API to roll out a fix to existing nodes such that running workloads have no disruption?


We are taking two approaches to this. The first is that you can roll out a replacement node and shut down the old one. In bare metal scenarios this is much harder, so we implemented in-place upgrades, but they work very similarly to creating a new node. Since Talos is immutable and runs from RAM, an in-place upgrade consists of shutting down all services, wiping the disk, and performing a fresh install. We then reboot the node, and it's as if you wiped the machine clean and installed the new version of Talos from the get-go. This is all via the API, by the way.


Wait, you store the local rootfs on disk? Why not NFS or something similar, especially if you run from RAM anyway?

Also sounds like a missed opportunity for kexec and a pivot to a new rootfs on a new ramdisk?

Ed: based on https://www.talos-systems.com/docs/guides/bare_metal/ I gather I misunderstood what was said here; it's new config in PXE, shutdown, and reboot? Which maybe could be shutdown and kexec.


The rootfs is stored in the bootloader partition and in the initramfs. As for NFS, I can see us adding support for that, but the out-of-the-box experience for Talos in any of the clouds would be painful if we exclusively required NFS.

Since we adhere to the KSPP (https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Pr...) guidelines, kexec is not an option unfortunately. We thought about this early on, but opted to follow KSPP over using kexec.


The whole point of Kubernetes is that you don't think this way. Replacing a node is not an impactful event if you're using K8S correctly.


I agree with this to an extent. There are certainly places where replacing can be expensive. For example, bare metal, or if the machine contains a large amount of data and moving that data to a new node is time consuming.


Your storage should be separated from the worker nodes. Unless you have some hyper-converged setup, in which case you've made the deliberate choice that your node is special. (sorry for using the term hyper-converged)


Until you have to deal with RWO PVCs and evicting a node requires an expensive and slow disk detach/attach operation.


Looks great: I run a compute cluster (30 bare metal nodes) and I can barely keep the OS patched. I've been looking to switch to a k8s setup, running out of a RO NFS root (booted with PXE). This looks like an even more promising choice.


Interesting. We PXE boot Talos in Packet. One user even goes so far as to PXE boot Talos on every boot.


> PXE boot Talos in packet

I'm a fan of the bare metal providers, btw, though I mainly tried out Scaleway for a while.

> PXE boot Talos on every boot

IMO this is a great thing, since once your combination of tftp/dnsmasq/matchbox/http is configured and deployed, you don't need anything else.

In the current, old setup I use, I just flip the PXE-boot switch (e.g. w/ ipmitool) to image the node's drive with whatever install process and then switch back to local boot, but the boot-device switch and the local hard drive are both extra failure modes (at least in my aging bare metal cluster).


I have an infinite boot loop when trying talos.iso in VirtualBox. Can you share the recommended settings to test this OS in a virtualized environment?


I haven't tested in VirtualBox yet. Would you mind either joining our Slack or creating a GitHub issue where we could provide better support?


1) So how do you support storage volumes? Can you, for example, mount EBS into a container?

2) What about GPUs? Can you support the Nvidia GPU containers?


We do support storage volumes. A recent change in Rook seems to have broken how it works with Talos, but we know storage is important and we are working on fixes. We would love to land support for Nvidia GPU containers. You're not the first to ask for GPU support, so I'm certain we will be taking a closer look at that.


> we know storage is important

That's quite the understatement - unless you have some other place that holds all your state (outside of k8s)?


> Your workspace is currently on the free version of Slack

I wanted to raise your awareness that someone from Zulip was hanging out on HN recently and said that hosted Zulip is free for open source projects.

Maybe no one cares about message history on projects that move so fast, but I wanted to ensure you were aware.


Thanks! I haven't heard of Zulip, I will be taking a look!


Is this also for arm architectures?


I spent a lot of time in v0.2 building Talos for ARM, and it works, but there is a good amount of work to be done to make it official. We need to set up ARM nodes to run our builds from, and refactor our build logic to account for multiple architectures. There is also a bit of work to be done around the bootloader logic, since we use syslinux. We are close, it just needs a little push.


Sounds like digital authoritarianism.



