https://github.com/cilium/cilium

88642ed Prepare for release v1.6.5 Signed-off-by: Joe Stringer <joe@cilium.io> 16 December 2019, 15:28:58 UTC
b642e32 daemon: Decrease log level for svc not found msg This commit changes the log level of the "Service frontend not found" message from "warn" to "info". The SVC-related refactoring in v1.7 fixed a few race conditions which could have been the culprit for the message. Unfortunately, it would be quite difficult to backport those changes due to the heavy refactoring. Instead, we change the log level, as in (probably) most cases the warning is harmless. Signed-off-by: Martynas Pumputis <m@lambda.lt> 16 December 2019, 15:24:22 UTC
884d1af Dockerfile runtime: add python3 dependency [ upstream commit 5f28f533a3dc8d3396456a914bbc46a5c961dc88 ] It seems bpftool can't be built without python3, so we need to install python3 in order to build it. Signed-off-by: André Martins <andre@cilium.io> 16 December 2019, 10:04:45 UTC
b9a20d3 update golang to 1.12.14 Signed-off-by: André Martins <andre@cilium.io> 16 December 2019, 10:04:45 UTC
054487e pkg/workloads: sleep 500ms before reconnecting to containerd In case of a failure, cilium will try to reconnect to containerd immediately after an error. This can cause lots of verbose error messages to show up if containerd is not available. In order to fix this, Cilium should sleep 500ms before attempting to reconnect to containerd. Signed-off-by: André Martins <andre@cilium.io> 16 December 2019, 10:04:34 UTC
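The delay-before-reconnect described above can be sketched as a simple throttled retry loop. This is a hedged Python illustration only (the function name and parameters are hypothetical, not Cilium's actual Go code):

```python
import time

RECONNECT_DELAY = 0.5  # 500 ms, the backoff chosen in the commit

def connect_with_retry(connect, max_attempts=5, sleep=time.sleep):
    """Try to connect; on failure, wait before the next attempt so a
    containerd that is down does not produce a flood of error logs."""
    last_err = None
    for _ in range(max_attempts):
        try:
            return connect()
        except ConnectionError as err:
            last_err = err
            sleep(RECONNECT_DELAY)  # throttle instead of reconnecting immediately
    raise last_err
```

The `sleep` parameter is injectable purely so the behaviour is easy to test without real delays.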
d03ee1f cilium: encryption bugtool should remove aead, comp and auth-trunk keys [ upstream commit 3ce82597e6ecc11306ea90846742ff5ce72d8735 ] Originally encryption only supported enc and auth key types, but we have also added aead types now as well. This adds all the supported ALGO types for stripping from bugtool; this future-proofs us if we add more keys later and also removes the aead keys, an algo we support today. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 12 December 2019, 16:04:10 UTC
19cca7e k8s: Use ParseService when comparing two services [ upstream commit 65357a2312212457a0dfc8dc955c6e3d28d910c9 ] Previously, the EqualV1Services() function had its own k8s.Service constructor which was not in sync with ParseService() and did not consider some service fields. The consequence of this was that changing a type or ports of a service did not trigger the service update, and thus the service change was not reflected in the SVC BPF maps. Use ParseService() instead of the custom constructor to prevent any discrepancies in the future. Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 12 December 2019, 16:04:10 UTC
405c36e doc: Disable masquerading in all chaining guides [ upstream commit 24fc7840dfdbdc272b37302be8263e15dd6ec4e3 ] Not all chaining guides were properly disabling masquerading. In chaining mode, the masquerade decision is delegated to the underlying plugin responsible for the networking. Enabling chaining leads to unnecessary iptables rules being installed. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 12 December 2019, 16:04:10 UTC
112dcb8 doc: Fix AKS installation guide [ upstream commit 5f58c1f299bdb96c420f20e4d917b6e3134093e6 ] Transparent mode is no longer required and we can chain on any networking mode of the AKS CNI plugin. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 12 December 2019, 16:04:10 UTC
e39dae9 k8s: Fix typo in io.cilium/shared-service annotation [ upstream commit 8dc0b1274b29edb8d0cb73fb6068f28914b932cf ] The annotation was misspelled as `io.ciliumshared-service`. This adds the missing slash and test cases for the annotation parser. Since the annotation is not documented, no fallback to support the old spelling is introduced. Signed-off-by: Sebastian Wicki <sebastian@isovalent.com> Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 12 December 2019, 16:04:10 UTC
bc619c9 pkg/endpoint: delete _next directories during restore [ upstream commit ae878faa9cd8deb85b463a395d537ce41ae9dd20 ] This patch detects and deletes, during endpoint restoration, endpoint directories that match `${EPID}_next` or `${EPID}_next_fail` and for which an endpoint directory `${EPID}` already exists. The idea is to consider such a directory stale (e.g. the process was terminated while regenerating endpoint `${EPID}`), in which case there is no need to attempt to restore an endpoint from it. Fixes #9600 Signed-off-by: ifeanyi <ify1992@yahoo.com> Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 12 December 2019, 16:04:10 UTC
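The stale-directory rule above (a `${EPID}_next` or `${EPID}_next_fail` directory is only stale when a plain `${EPID}` directory also exists) can be sketched as a small matcher. A minimal Python sketch; the function name is hypothetical and the real implementation is Go:

```python
import re

STALE_PATTERN = re.compile(r"^(\d+)_next(_fail)?$")

def stale_endpoint_dirs(dir_names):
    """Return names matching `${EPID}_next` or `${EPID}_next_fail` for
    which a plain `${EPID}` directory also exists; those are leftovers
    of an interrupted regeneration and are safe to delete."""
    names = set(dir_names)
    return [n for n in dir_names
            if (m := STALE_PATTERN.match(n)) and m.group(1) in names]
```

Note that a lone `${EPID}_next` without a finished `${EPID}` directory is not considered stale by this rule.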
123c364 envoy: Update to 1.12.2 Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 11 December 2019, 17:06:13 UTC
2dc14cd Dockerfile: Use Envoy image that always resumes NPDS Use an updated Envoy image that always resumes gRPC for NPDS after having paused it for the duration of worker threads completing their policy updates. This restores the earlier behavior of sending an ACK for the NPDS also when the last listener is removed from Envoy. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 11 December 2019, 17:06:13 UTC
d7230fb envoy: Update to release 1.12.1 [ upstream commit 71dc4c93312526a045e1f2d6dd39e9f3bd9d348c ] Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 11 December 2019, 17:06:13 UTC
f0e4c57 envoy: Update to release 1.12 with Cilium TLS support [ upstream commit 95487c9578da401559b6ad0cc449b47edeb96885 ] Update cilium-proxy to a build from Envoy release 1.12 with Cilium TLS support patches. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> 11 December 2019, 17:06:13 UTC
a596c63 .github: add github actions to cilium Make use of https://github.com/cilium/github-actions to automate PR interactions in Cilium repository. Signed-off-by: André Martins <andre@cilium.io> 05 December 2019, 10:31:55 UTC
2ef7a80 Add nil check for init container terminated state [ upstream commit 76a7febff2bf7dadb2ef0380599eb98fe52d8ce0 ] Signed-off-by: Maciej Kwiek <maciej@isovalent.com> Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> 04 December 2019, 11:38:45 UTC
69fe5cc Move missed kubectl apply calls to `Apply` calls [ upstream commit 41221a56e4a13fc8203d60ae87e944a4188c2285 ] Signed-off-by: Maciej Kwiek <maciej@isovalent.com> Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> 04 December 2019, 11:38:45 UTC
dd2158a add Force to Apply and use it in cilium install [ upstream commit 127e7d3f5c6756fda8918d9015bc366b8ca31f99 ] This change makes our Cilium installation from within ginkgo use `--force=true` `kubectl apply` flag which will be helpful with upgrade tests. Signed-off-by: Maciej Kwiek <maciej@isovalent.com> Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> 04 December 2019, 11:38:45 UTC
b755065 Add ApplyOptions [ upstream commit 15a345b2df3ed246bdc7788f6d5fd9a7998e756d ] ApplyOptions is added in test helpers so it's easier to add more options for apply. Signed-off-by: Maciej Kwiek <maciej@isovalent.com> Signed-off-by: Michal Rostecki <mrostecki@opensuse.org> 04 December 2019, 11:38:45 UTC
8048d32 Prepare for release v1.6.4 Signed-off-by: André Martins <andre@cilium.io> 27 November 2019, 16:39:34 UTC
30935a0 helm: Fix bug to disable health-checks in chaining mode [ upstream commit 38ca1f86a852a823e32dc59694784048c4b4aa6d ] Endpoint healthchecks fail in these modes and should be disabled. The helm charts accounted for this but it was incorrectly triggered previously. fixes [bdb51a0fb0e1a139eb9a2dfbd024fb3ed1e717d7] Signed-off-by: Ray Bejjani <ray@isovalent.com> Signed-off-by: André Martins <andre@cilium.io> 27 November 2019, 15:55:40 UTC
4c18936 Pin kubectl version in ginkgo vms In master, we are using kubectl on host. In the 1.6 branch we are still using kubectl from inside the vm. This change causes a newer version of kubectl to be installed in ginkgo vms during provisioning. This is done to make use of a bugfix for https://github.com/kubernetes/kubernetes/issues/66390 which caused backports to fail our CI. Signed-off-by: Maciej Kwiek <maciej@isovalent.com> 21 November 2019, 16:46:39 UTC
60d3051 Revert "envoy: Update to release 1.12 with Cilium TLS support" This reverts commit 14e1ae77bee93d3cce6cc63b40357d8227d516d2. 19 November 2019, 15:35:01 UTC
1a5753d Revert "Envoy: Use CLUSTER_PROVIDED loadbalancer type." This reverts commit 1527c7616a02ce20c6711af4e32ceea73f0656f5. 19 November 2019, 15:35:01 UTC
26e53d5 Revert "accesslog: Add support for missing and rejected headers." This reverts commit bf1c1fc62675904e24221401dc20f16dea4f5346. 19 November 2019, 15:35:01 UTC
659c72d fqdn: L3-aware L7 DNS policy enforcement The DNS proxy now accounts for the target L3 selector and destination port, only allowing requests whitelisted specifically for that L3/L4 pair. ipcache lookups iterate over in-use prefix lengths, which will likely be bounded in most scenarios. Regexpmap is also removed as the proxy was the final user. Signed-off-by: Ray Bejjani <ray@isovalent.com> 19 November 2019, 15:02:59 UTC
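The per-(L3 selector, L4 port) whitelisting above can be illustrated with a lookup keyed by the exact pair. A hedged Python sketch; the rule-table shape and function name are assumptions for illustration, not Cilium's actual data structures:

```python
import fnmatch

def dns_request_allowed(rules, selector, port, name):
    """rules maps an (L3 selector, L4 destination port) pair to the DNS
    name patterns whitelisted for exactly that pair; a request is only
    allowed if a pattern under its own pair matches."""
    return any(fnmatch.fnmatch(name, pat)
               for pat in rules.get((selector, port), []))
```

A pattern allowed for one selector or port no longer leaks to another, which is the point of the change.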
8904d23 k8s: update k8s to v1.16.3 [ upstream commit 904b274e2c76edf232758126cce4b369bf356a3a ] Signed-off-by: André Martins <andre@cilium.io> 18 November 2019, 13:14:13 UTC
27c6833 test/provision: update k8s test versions to 1.14.9 and 1.15.6 [ upstream commit ecb77876d05fc3755b98c14bde3029ac01959854 ] Signed-off-by: André Martins <andre@cilium.io> 18 November 2019, 13:14:13 UTC
8f4f668 policy: Keep cached selector references for L3-dependent L7 rules. [ upstream commit 75bc3c7029aacb174216c494fbd636ecefc7b65c ] Keep the selector cache references for selectors used as keys for L3-dependent L7 rules, even when the filter applies to all L3 endpoints. This is required for FQDN selectors to be tracked by the selector cache even when the rule is merged with another rule wildcarding at L3. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: André Martins <andre@cilium.io> 18 November 2019, 13:14:13 UTC
7665e36 Fix kafka-v1.yaml file for compatibility [ upstream commit 0ccc75c21537c0d29bca88c28d6b2281814c9898 ] Fixes the compatibility issue seen in the kafka-v1.yaml file while trying to test the Getting Started with Istio guide. Based on the Kubernetes description for using StatefulSet: "You must set the .spec.selector field of a StatefulSet to match the labels of its .spec.template.metadata.labels. Prior to Kubernetes 1.8, the .spec.selector field was defaulted when omitted. In 1.8 and later versions, failing to specify a matching Pod Selector will result in a validation error during StatefulSet creation." Fixes: #9603 Signed-off-by: Swaminathan Vasudevan <svasudevan@suse.com> Signed-off-by: André Martins <andre@cilium.io> 18 November 2019, 13:14:13 UTC
3c84068 aws/eni: do not resync node if semaphore Acquire fails [ upstream commit 8cc9f876422bdbca41a10f7b70f626b1d1ded98f ] In case the semaphore fails to be acquired, we should not continue resyncing the node and eventually release that same semaphore, as doing so might cause the semaphore to panic because the number of 'Release' calls would exceed the number of previous acquires. Fixes: b96f90c15f48 ("eni: Support for parallel workers") Signed-off-by: André Martins <andre@cilium.io> 18 November 2019, 13:14:13 UTC
2315331 iptables: Fix incorrect SNAT for externalTrafficPolicy=local [ upstream commit 4ae2ead1196e675684bba86e4eac82d5a4f1e158 ] So far, Cilium has installed a rule to clear the masquerade bit set by kube-proxy in an attempt to still expose the correct source IP for kube-proxy traffic regardless of the externalTrafficPolicy setting. This works relatively well unless hostPort is getting involved. By clearing the bit, it is no longer possible to distinguish hostPort traffic from traffic for which externalTrafficPolicy=local has been set. Revert this strategy and stop clearing the masquerade bit. This allows removing several SNAT/MASQUERADE rules installed by Cilium which attempted to replicate the kube-proxy SNAT behavior. Instead, leave it up to the user to properly set externalTrafficPolicy=local. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: André Martins <andre@cilium.io> 18 November 2019, 13:14:13 UTC
366ddb1 vendor: point vishvananda/netlink back to upstream [ upstream commit 12029d96279735403e3898677c3b7360d25d5aa8 ] Given PR https://github.com/vishvananda/netlink/pull/498 has been merged officially, move the repo back to upstream. Final follow-up to 515654b58beb ("vendor: fix stack corruption from retrieving veth peer index"). Re-performed same update as in 515654b58beb. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
3878056 cni: fix cni plugin error formatting when agent is not running [ upstream commit 3a3018f33d32ddc3b0b3ae3f4725c4ce1aa6c80a ] * Fixes #9190 Signed-off-by: Deepesh Pathak <deepshpathak@gmail.com> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
d9a1485 unmanaged kube-dns: Delete one pod per iteration [ upstream commit 2586d5902b3ae7b369794fb06b95313cb2d9fd86 ] Under some circumstances, such as the initial boot up of cilium and cilium operator in a cluster, the original code would iterate through all kube-dns pods and delete them in a single pass, causing momentary clusterwide DNS outage. With this, a single kube-dns pod will be deleted per iteration. It is still suboptimal, since it relies on hope and quick coredns startup, but it is arguably better than taking down the DNS service for a few seconds. Signed-off by: Jean Raby <jean@raby.sh> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
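The one-pod-per-iteration change above is a small control-flow fix: stop after the first deletion in a pass instead of sweeping all pods. A hedged Python sketch of the before/after behaviour (names hypothetical; the real code is Go in cilium-operator):

```python
def delete_unmanaged_pods(pods, delete, one_per_iteration=True):
    """Delete unmanaged kube-dns pods. With one_per_iteration (the fixed
    behaviour) at most one pod is deleted per pass, so DNS is never taken
    down clusterwide at once; the old behaviour deleted all in one sweep."""
    deleted = 0
    for pod in pods:
        delete(pod)
        deleted += 1
        if one_per_iteration:
            break
    return deleted
```

Each subsequent iteration picks off the next still-unmanaged pod, giving restarted pods time to come up in between.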
d54d6cb operator: do not rm kube-dns pods if unmanaged-pod-watcher-interval == 0 [ upstream commit 29c8f216d757628a69cad52d7366c3ec7a1bdcf8 ] Signed-off-by: André Martins <andre@cilium.io> Reported-by: Jean Raby <jean@raby.sh> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
52b2d24 pkg/k8s: fix service update bug fix [ upstream commit bfcf82913c98e9b5dbd3a8aea3f04b8548118105 ] When a service changes its ports, we should remove the services from the datapath as well. The same goes for the endpoints: if an endpoint disappears or changes its ports, we should make sure we remove the underlying backends. Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
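The removal logic above amounts to diffing the old spec against the new one: anything present before but gone or changed now must be deleted from the datapath, not just left behind. A minimal Python sketch under that assumption (the dict-of-ports model is illustrative only):

```python
def backends_to_remove(old_ports, new_ports):
    """Return the old service's ports that disappeared or changed in the
    new spec; their datapath entries must be removed, not just ignored."""
    return {name: port for name, port in old_ports.items()
            if new_ports.get(name) != port}
```

An unchanged spec yields an empty diff, so no datapath churn happens on no-op updates.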
7298826 k8s/watcher: refactor code to generate k8s services [ upstream commit b071e1fced504629ecf38fd2a740a4f6e57c188b ] Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
42b2950 Added chart value for etcd-operator cluster domain [ upstream commit a5f4964488672feed0ca07cd124202888070f84d ] Signed-off-by: Dan Sexton <dan.b.sexton@gmail.com> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
aa23204 pkg/endpoint: start RegenerationFailureHandler after assign epID [ upstream commit eba2f90e7e124d9fbc26b9992c6d27f22765455d ] As an epID is only assigned at the time the endpoint is exposed, we should only start the regeneration failure handler after an ID has been allocated to the endpoint. This makes sure the controllers won't show up as duplicates in `cilium --status`, as the controllers would otherwise all have the same name (`endpoint-0-regeneration-recovery`). Fixes: fa8a499aab9c ("endpoint: start a controller to retry regeneration") Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
af7e63d Docs: tofqdns-pre-cache is optional in preflight templates [ upstream commit 52a12df9c4d3cc88197048e34c0e5805e66e58f4 ] The fqdns precache is an opt-in feature but we had configured the preflight pod to always produce it. This sometimes caused errors when a user had not opted into generating this file but still ran the preflight manifest, possibly one pre-dating the feature. It is now a more explicit option in the helm templates, making it clearer what is happening. Signed-off-by: Ray Bejjani <ray@isovalent.com> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
5ca0b30 bpf: Don't perform L3 operation when ENABLE_ROUTING is disabled [ upstream commit 94d62d66c925abd82a47bd3a6df46d5341529173 ] The datapath was incorrectly performing an L3 operation even if routing was disabled for packets falling through and being passed to the stack. Reported-by: @rabysh Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
f195d76 fqdn: DNSCache LookupByRegex functions don't return empty matches [ upstream commit a4d7cfae761170491a19b6e48fd8b4966d7cfef1 ] A bug in the logic that collected IPs matching regexes would always add a name but correctly return no IPs for it. This caused other code to perceive the name as matching-with-IPs. This subsequently meant that names whose IPs had all expired would not be correctly cleaned from the policy map until another update required it. DNSCache.LookupByRegex doesn't return names with no matching IP, and mapIPToSelectors guards against this more robustly, matching the code for .MatchName fields. Signed-off-by: Ray Bejjani <ray@isovalent.com> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
d637fa6 eni: Allow releasing excess IP addresses via option [ upstream commit e5f1ffaa918b1c6306b7fdc83714891f4f496f4d ] Allow releasing excess IP addresses from ENI via the operator option `--aws-release-excess-ips`. This option allows reducing the waste of IP addresses. When set to true, cilium-operator checks the number of addresses regularly and attempts to release some free addresses if: available > min-allocate && available - used > preallocate + max-above-watermark The check and action will be executed every minute when the interval-based background resync is triggered. Release actions will always be executed after allocations. There is no limit on ENIs per subnet, so ENIs remain attached to the node. Fixes: #9424 Signed-off-by: Jaff Cheng <jaff.cheng.sh@gmail.com> Signed-off-by: André Martins <andre@cilium.io> 16 November 2019, 01:52:43 UTC
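The release condition quoted above can be written out as a small function. A hedged Python sketch: the two inequalities come straight from the commit, while the final clamp (never release so many that `available` drops to `min-allocate`) is an added assumption for safety, not stated in the commit:

```python
def excess_ips_to_release(available, used, min_allocate, preallocate,
                          max_above_watermark):
    """Free ENI IPs eligible for release under the commit's rule:
    only if available > min-allocate and
    available - used > preallocate + max-above-watermark.
    The clamp against available - min_allocate is an extra assumption."""
    if available <= min_allocate:
        return 0
    excess = (available - used) - (preallocate + max_above_watermark)
    return max(0, min(excess, available - min_allocate))
```

With `available=20, used=5, preallocate=8, max-above-watermark=3, min-allocate=4`, four addresses are releasable.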
a9a8d76 k8s/endpointsynchronizer: re-fetch CEP in case of update conflict [ upstream commit f1644be68cd35eba6d96843c7e658b5a58112410 ] In case of any error, the object returned by the k8s client is nil/empty, so we need to re-fetch the object from kube-apiserver in order to perform an update with the most recent spec. Fixes: e8e06d4e76a6 ("k8s: Fetch-free CEP update logic") Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
1667b55 pkg/endpoint: do not run runIPIdentitySync if not running with kvstore [ upstream commit 6850007a8fd3568cb594612e14658ec37dd05584 ] Running runIPIdentitySync would make cilium create endless controllers to update the identity, but it doesn't make sense to run it while the kvstore is not set up. Fixes: 8822c1d650f0 ("k8s: Add CRD Identities as an identity allocator backend") Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
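The re-fetch-on-conflict pattern above is the standard optimistic-concurrency loop against the apiserver. A hedged Python sketch (the `ConflictError` class and function names are stand-ins for illustration, not the client-go API the Go code uses):

```python
class ConflictError(Exception):
    """Stands in for the apiserver's 409 Conflict response."""

def update_with_refetch(fetch, update, max_retries=3):
    """On a conflict the client returns a nil/empty object, so re-fetch
    the latest object from the apiserver and retry against its most
    recent spec instead of reusing the stale copy."""
    obj = fetch()
    last_err = None
    for _ in range(max_retries):
        try:
            return update(obj)
        except ConflictError as err:
            last_err = err
            obj = fetch()  # pick up the current resourceVersion/spec
    raise last_err
```

The key point of the fix is the `fetch()` inside the except branch: retrying with the object returned by the failed call would retry with nothing.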
d2c86ca health: Add some basic unit tests for adding nodes [ upstream commit 568d354c0de88d802d62e43055eaf21f5fb55bed ] Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
ee392cb health: Factor out getting the IPs to probe [ upstream commit 115c8f17f11d26b47b1d48353b77b92d18fbd7fa ] This will assist in upcoming unit tests. Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
d8d0d2b health: Fix up IP removal from health prober [ upstream commit 091e84b83f149e52e795296c84ba0bd882c6ffc8 ] The `prober.nodes` is actually a map of nodes sorted by peer IP as the key, where the peer IP could be either the node's IP or the node's health endpoint IP. Deletion from this should be done using each individual node address, rather than only deleting the element that is currently being iterated over. Related: #9103 Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
652cb8e health: Fix handling of node update events [ upstream commit 17986192ce60592f6ff0723d3483ff57fbfec3ad ] Since we supported dumping incremental changes to the list of nodes (added or removed), there's no point in doing mark-and-sweep GC of nodes in `setNodes()`. Remove those bits and rely entirely on the incremental changes. There's a subtle change that occurs here with regards to node updates: A node update is propagated as "removing" the node with its old details (IPs,etc) and "adding" the node with its new details (IP,etc). Previously, we would mark all *nodes* that have been removed (including ones that are being updated), then iterate through the 'added' and (un-)mark such nodes so that the GC sweep of nodes doesn't get rid of the nodes. However, this inadvertently meant that updates to a node which change IP addresses would be improperly handled, specifically: * Node has IPs A and B * Node is updated to have IPs A and C * Call comes for setNodes(node{A, C}, node{A, B}) * We mark the node corresponding to IPs A, B for removal * We iterate through the new IPs A, C and mark the same node that it should *not* be removed * We sweep, but because the node should not be removed, we retain the IP B in the nodes list and subsequently in probe results Fix this by actually removing all nodes and IPs in the removed list, then adding all nodes and IPs in the added list. Related: #9103 Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
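The remove-then-add handling described above can be sketched against a prober map keyed by peer IP. A minimal Python illustration of the fixed `setNodes()` behaviour (data shapes are assumptions; the real structure lives in the Go health prober):

```python
def set_nodes(prober, added, removed):
    """Apply an incremental update to a prober map keyed by peer IP:
    drop every IP on the removed list first, then insert every IP on
    the added list. Because a node update arrives as (remove old IPs,
    add new IPs), a stale address cannot survive the update."""
    for _node, ips in removed:
        for ip in ips:
            prober.pop(ip, None)
    for node, ips in added:
        for ip in ips:
            prober[ip] = node
    return prober
```

Replaying the commit's scenario (node has IPs A and B, then is updated to A and C) leaves only A and C in the map; the mark-and-sweep approach would have retained B.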
bf1c1fc accesslog: Add support for missing and rejected headers. [ upstream commit 3cf4c701d0b477f294b88b754b2551793a855e8e ] Add access log fields 'MissingHeaders' and 'RejectedHeaders': MissingHeaders: HTTP headers that were either added to the request, or headers that were merely logged as missing. RejectedHeaders: HTTP headers that were flagged as invalid by the policy. Depending on the policy these headers may have been removed and the request allowed, or the request may have been denied due to them. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
1527c76 Envoy: Use CLUSTER_PROVIDED loadbalancer type. [ upstream commit df25e1c16762e1b195713023032f6ccfaf825d34 ] Envoy ORIGINAL_DST_LB load balancer type has been deprecated, use CLUSTER_PROVIDED instead. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
14e1ae7 envoy: Update to release 1.12 with Cilium TLS support Update cilium-proxy to a build from Envoy release 1.12 with Cilium TLS support patches. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
3d23dc8 endpoint: Run labels controller under ep manager [ upstream commit a31ab29f57b2ea915fa82b8598593252cf638533 ] Move the labels resolution controller for an endpoint into the endpoint package and run it under the manager for the endpoint. This way, when the endpoint is removed, the controller will also be stopped. The other minor change is that the context for the controller is now the endpoint's aliveCtx rather than the daemon's context directly. Fixes: #9540 Fixes: eb8df612c551 ("daemon: Start controller when pod labels resolution fails") Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
5ce3988 endpoint: Clarify naming for identity resolution [ upstream commit 217b0afd2fd05f41e4b061b6bdd183433d80040e ] The labelsResolver is not actually resolving the labels for the endpoint, but rather using the labels of the endpoint to resolve its identity instead. Rename these functions to clarify the intent. There's actually already a controller responsible for resolving labels and other pod metadata, and upcoming commits will shift that controller into the endpoint package so this should reduce confusion. Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
f713333 pkg/policy: show error if user installs a L7 CNP with L7 proxy disabled [ upstream commit 70296ae60704ee4ed5d8571e0e96a1ce717c8ddf ] A cilium node with a L7 proxy enabled will show the following status in the node status: ``` status: nodes: k8s1: enforcing: true lastUpdated: "2019-10-29T14:09:49.449575508Z" localPolicyRevision: 2 ok: true ``` A cilium node without a L7 proxy enabled will show the following status in the node status: ``` status: nodes: k8s1: error: 'Invalid CiliumNetworkPolicy spec: L7 policy is not supported since L7 proxy is not enabled' lastUpdated: "2019-10-29T14:12:23.309604539Z" ``` Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
092baad docs: Fix ipvlan iptables-free gsg [ upstream commit d0e9c7724d9d7d73fc329467b19f81d0281ef8a8 ] Disable L7 proxy, as otherwise cilium will panic due to "--install-iptables-rules" set to false. Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
1bd35e9 helm: Add global.l7Proxy.enabled param [ upstream commit 3ae8804dce3126c8c157dd73db8f113f9354d1ee ] The param controls whether the L7 proxy is enabled (defaults to true). Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
a3a9b18 daemon: Enable FQDN proxy if --enable-l7-proxy is set [ upstream commit 7c73c92319ee9fd481e2f48ad37d925b48cd03ad ] Previously, the proxy was disabled if `--install-iptables-rules` was set to false, which may not have been obvious to a user. Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
aad5082 daemon: Disable L7 proxy with explicit flag [ upstream commit 118cd947a5b7ddb29946fcc023a9689317fa98cb ] Previously, the L7 proxy was disabled if "--install-iptables-rules" was set to false, which might not be clear to a user. Instead, add an "--enable-l7-proxy" (defaults to true) flag to control whether the proxy should be disabled or not. Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
26307c3 docs: clarify usage of bpf fs mount [ upstream commit f380a79605eddb9c51a292e06b828a8c4d21d983 ] As several users often miss this step, which was marked optional but is required in production environments, we should change the instructions for the bpf mount volume to be required instead of optional. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
08d1d2e envoy: Remove 'force' argument from cache operations [ upstream commit 87aa0809abfab640b7293d007b5d3851dc39be16 ] Now that 'force' argument is never set to 'true' it can be removed. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
7a48fc3 policy: Add unit tests [ upstream commit 72393b43390503717d9b6e4189e5f849974a952a ] Add tests of the new l4 functions to the existing unit tests. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
b8abb74 envoy: Do not force Network Policy updates [ upstream commit f84361005aea54335fa945039933ab07dd386687 ] Use the new "UseCurrent" functionality to not force network policy updates to be sent to Envoy when the policy did not change. Previously the 'force' flag was necessary, as there was no other way to wait for the ACK of the current version. This saves some unnecessary signaling between cilium-agent and Envoy. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
ffdd461 xds: Allow endpoints to wait for the current policy version to be acked [ upstream commit ccbd64f84a2fc1d53736fd746914c22533386b68 ] Even when no policy change is needed, the endpoint may need to wait for the current policy version to be acked by the proxy. Use a different Envoy node ID for proxylib, so that proxylib ACK arriving before the main Envoy ACK does not trigger DNS response to be released to the source POD. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
6d055c1 Envoy: Track last ACKed version per proxy node [ upstream commit 5d955191fd9bf9af7ae0dbf267b221f012a1f98c ] Track ACKs sent by proxy nodes. This is needed to be able to wait for the current cached resource version to be realized by all relevant nodes. Node entry in the map of acked versions is deleted when the network policy for the endpoint is removed, which happens when the endpoint is deleted. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
bd20a4c envoy: Always use IstioNodeToIP function [ upstream commit e7244b053078bbcf32d7001cd81b3ff01f505978 ] Envoy Node ID to IP mapping function is configurable, but we always use the Istio mapping anyway. Rather than spread the configurability elsewhere, always use the Istio convention. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
34e19ba logfields: Add tag for cached xDS version. [ upstream commit 82d663a1f494e52e10ccc31105889b5078c8febe ] Add and use a tag for cached xDS version. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: Ray Bejjani <ray@isovalent.com> 07 November 2019, 13:43:48 UTC
62195fc pkg/k8s: fix toServices policy update when service endpoints are modified [ upstream commit 0c6cb2663b7cdef3716dbdd78ead4339c0177966 ] When a toServices policy was used, Cilium wouldn't update the policy toCIDR whenever the endpoints selected by that service changed. This could make Cilium block traffic for newly added endpoints, and allow traffic for endpoints deleted from the service selected by the toServices rule. Signed-off-by: André Martins <andre@cilium.io> 06 November 2019, 23:26:55 UTC
af75fc6 golang: update to 1.12.13 Signed-off-by: André Martins <andre@cilium.io> 06 November 2019, 23:26:00 UTC
4b78e51 bpf: always force egress nat upon nodeport requests [ upstream commit e7bc8918daf8a3c8b599f1ec84a3ee7544249f62 ] For requests originating from outside world to a NodePort service, enforce doing SNAT for remote backends no matter whether the source port is below ephemeral port range. Client node: # curl --local-port 23 192.168.1.125:30000 <html><body><h1>It works!</h1></body></html> Intermediate node, before: [...] 13:39:49.979959 IP 192.168.1.120.23 > 192.168.1.125.30000: Flags [S], seq 1574471836, win 29200, options [mss 1460,sackOK,TS val 471781941 ecr 0,nop,wscale 7], length 0 13:39:49.979975 IP 192.168.1.120.23 > 10.12.22.74.80: Flags [S], seq 1574471836, win 29200, options [mss 1460,sackOK,TS val 471781941 ecr 0,nop,wscale 7], length 0 13:39:50.983064 IP 192.168.1.120.23 > 192.168.1.125.30000: Flags [S], seq 1574471836, win 29200, options [mss 1460,sackOK,TS val 471782945 ecr 0,nop,wscale 7], length 0 13:39:50.983078 IP 192.168.1.120.23 > 10.12.22.74.80: Flags [S], seq 1574471836, win 29200, options [mss 1460,sackOK,TS val 471782945 ecr 0,nop,wscale 7], length 0 [...] Intermediate node, after: [...] 13:35:02.917628 IP 192.168.1.120.23 > 192.168.1.125.30000: Flags [S], seq 1384089837, win 29200, options [mss 1460,sackOK,TS val 471494876 ecr 0,nop,wscale 7], length 0 13:35:02.917644 IP 10.11.185.253.61009 > 10.12.22.74.80: Flags [S], seq 1384089837, win 29200, options [mss 1460,sackOK,TS val 471494876 ecr 0,nop,wscale 7], length 0 13:35:02.917900 IP 10.12.22.74.80 > 10.11.185.253.61009: Flags [S.], seq 1601640307, ack 1384089838, win 28960, options [mss 1460,sackOK,TS val 1089404797 ecr 471494876,nop,wscale 7], length 0 13:35:02.917921 IP 192.168.1.125.30000 > 192.168.1.120.23: Flags [S.], seq 1601640307, ack 1384089838, win 28960, options [mss 1460,sackOK,TS val 1089404797 ecr 471494876,nop,wscale 7], length 0 [...] Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
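The before/after behaviour shown in the tcpdump traces above reduces to one predicate change: SNAT for a remote NodePort backend no longer depends on the client's source port. A hedged Python sketch (the ephemeral-range constant and function names are assumptions for illustration; the real logic is in the BPF datapath):

```python
EPHEMERAL_PORT_MIN = 32768  # typical Linux default; an assumption here

def needs_snat_before(remote_backend, source_port):
    """Old behaviour: SNAT was skipped for source ports below the
    ephemeral range, so a `curl --local-port 23` reply never came back."""
    return remote_backend and source_port >= EPHEMERAL_PORT_MIN

def needs_snat_after(remote_backend, source_port):
    """Fixed behaviour: any NodePort request bound for a remote backend
    is SNATed, regardless of the client's source port."""
    return remote_backend
```

In the traces, this is why the second SYN after the fix leaves the intermediate node from 10.11.185.253 instead of keeping the client's 192.168.1.120:23.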
f3ebf84 bpf: do not error out when punt to stack return from nat [ upstream commit 439ea4d00ad9c18465b28a86fe42d121934207d7 ] When coming from egress, we must handle this case properly by remapping to TC_ACT_OK instead of dropping. Fixes: 428f43e177de ("bpf: remap punt to stack so we properly recircle into bpf_netdev") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
e81a43a bpf: fix nodeport insns over limit regressions in netdev/overlay progs [ upstream commit 0c1914dc2facc5f62cd5986baf89aa42f8bfc1f0 ] When BPF-based nodeport is enabled, we run into a regression when IPv4 + IPv6 is enabled. # uname -a Linux apoc 4.19.57 #1 SMP Thu Jul 4 12:08:20 CEST 2019 x86_64 x86_64 x86_64 GNU/Linux With tunneling: level=warning msg="+ tc filter replace dev cilium_vxlan ingress prio 1 handle 1 bpf da obj bpf_overlay.o sec from-overlay" subsys=datapath-loader level=warning subsys=datapath-loader level=warning msg="Prog section '2/19' rejected: Argument list too long (7)!" subsys=datapath-loader level=warning msg=" - Type: 3" subsys=datapath-loader level=warning msg=" - Attach Type: 0" subsys=datapath-loader level=warning msg=" - Instructions: 4137 (41 over limit)" subsys=datapath-loader level=warning msg=" - License: GPL" subsys=datapath-loader Without tunneling: level=warning msg="+ tc filter replace dev eno1 egress prio 1 handle 1 bpf da obj bpf_netdev.o sec to-netdev" subsys=datapath-loader level=warning subsys=datapath-loader level=warning msg="Prog section 'to-netdev' rejected: Argument list too long (7)!" subsys=datapath-loader level=warning msg=" - Type: 3" subsys=datapath-loader level=warning msg=" - Attach Type: 0" subsys=datapath-loader level=warning msg=" - Instructions: 4137 (41 over limit)" subsys=datapath-loader level=warning msg=" - License: GPL" subsys=datapath-loader Given we have the full port range for NAT, it's fine, and the least complex option, to just reduce SNAT_COLLISION_RETRIES by one. Mid-term we'll have a 4.19.57 kernel in our CI to avoid such regressions. Fixes: #9451 Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
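A Go sketch of why lowering the retry constant shrinks the program: the SNAT collision-retry loop is unrolled in BPF, so each retry adds instructions, and with the full port range available one fewer retry barely affects collision odds. The constant's value and the helper names here are illustrative, not Cilium's.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

// snatCollisionRetries mirrors the idea of SNAT_COLLISION_RETRIES: in the
// BPF datapath this loop is unrolled, so each extra retry costs real
// instructions. Dropping the constant by one recovered the 41 instructions
// the verifier complained about. The value here is an assumption.
const snatCollisionRetries = 15

// pickSNATPort selects a source port from the (nearly) full range, retrying
// a bounded number of times on collision with an existing NAT entry.
func pickSNATPort(inUse map[uint16]bool) (uint16, error) {
	for i := 0; i < snatCollisionRetries; i++ {
		p := uint16(rand.Intn(65536-1024) + 1024) // ports 1024..65535
		if !inUse[p] {
			return p, nil
		}
	}
	return 0, errors.New("no free port after retries")
}

func main() {
	p, err := pickSNATPort(map[uint16]bool{})
	fmt.Println(p, err)
}
```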
de809ee bpf: remove force_range nat config parameter [ upstream commit dbd404dea4621e39d5ea826b1ba729ab228ee835 ] With recent nodeport collision rework, this option is not useful anymore since we always select a random port first. Therefore remove it. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
88a4d4b bpf: compile out bpf_lxc service lookup when host services enabled [ upstream commit 242860af48e864d8998c201f82f29878dda0f85d ] We can skip this step when we have host reachable services enabled since we already go directly to the DNAT in the socket layer for TCP/UDP under IPv4/v6. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
dccd31d bpf: enable direct bpf_netdev redirect when !netfilter [ upstream commit ea927177fcb2744fc1ffef0832e9d18e94d261ae ] Instead of going up the stack as per 4132b71e9abe ("bpf: Avoid redirect in bpf_netdev for NodePort"), re-enable the optimization if kernel has no netfilter loaded. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
c348315 bpf: perform nodeport nat into full port range [ upstream commit 0877d8e39ab3848b491a4036b05bc7405a1e477c ] Now that all prerequisites have been resolved, open up the full range (e.g. 10.11.20.22.63566 below): [...] 09:07:10.255903 IP 192.168.1.120.47236 > 192.168.1.125.30000: Flags [S], seq 2644581528, win 29200, options [mss 1460,sackOK,TS val 65725769 ecr 0,nop,wscale 7], length 0 09:07:10.255921 IP 10.11.20.22.63566 > 10.12.19.158.80: Flags [S], seq 2644581528, win 29200, options [mss 1460,sackOK,TS val 65725769 ecr 0,nop,wscale 7], length 0 09:07:10.256157 IP 10.12.19.158.80 > 10.11.20.22.63566: Flags [S.], seq 1799412017, ack 2644581529, win 28960, options [mss 1460,sackOK,TS val 1676656076 ecr 65725769,nop,wscale 7], length 0 09:07:10.256185 IP 192.168.1.125.30000 > 192.168.1.120.47236: Flags [S.], seq 1799412017, ack 2644581529, win 28960, options [mss 1460,sackOK,TS val 1676656076 ecr 65725769,nop,wscale 7], length 0 [...] Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
b237897 bpf: merge nat handling ranges for bpf nodeport [ upstream commit ab9f57cc6390304c25be45db8b1cc6525369c987 ] Change the nodeport NAT handling code such that on ingress both reverse translations happen out of nodeport_lb{4,6} instead of keeping SNATed port ranges separate and handling each at their own layer. There are multiple cases: i) regular NAT which was reversed early via nodeport_nat_rev(), and ii) replies from remote nodeport hosts which need to be reverse NATed as well on ingress. The former needs to be recirculated into the main ingress path whereas the latter can just be hairpinned and sent back to the original node that made the request. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
60c2ee7 bpf: fix tc-index bitfield wrt skipping nodeport [ upstream commit 475c123117d18efed838a931cdc82a15684d8968 ] The TC_INDEX_F_* flags are bitwise and do not denote shift positions or such, therefore the change of TC_INDEX_F_SKIP_NODEPORT from 2 to 3 was invalid. Undo it. Fixes: 830adba1c024 ("bpf: Support proxy using original source address and port.") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
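A Go sketch of the bug class the commit above fixes: bitfield flags must each occupy a distinct bit, so changing a flag's value from 2 to 3 is invalid, since 3 == 1|2 and a bitwise test against it also matches packets carrying only the 0x1 or 0x2 flag. The names echo the TC_INDEX_F_* idea but are illustrative.

```go
package main

import "fmt"

// Each flag gets its own bit. A value of 3 for the second flag would
// overlap the first one and corrupt bitwise tests.
const (
	flagSkipEgressProxy uint16 = 1 << 0 // 0x1
	flagSkipNodePort    uint16 = 1 << 1 // 0x2, not 3
)

// hasFlag tests a flag the way the datapath tests tc_index bits.
func hasFlag(tcIndex, flag uint16) bool { return tcIndex&flag != 0 }

func main() {
	tcIndex := flagSkipEgressProxy // only the first flag is set
	// With a bogus flag value of 3, this test would wrongly report true;
	// with 1<<1 it correctly reports false.
	fmt.Println(hasFlag(tcIndex, flagSkipNodePort)) // false
}
```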
62ce6b8 install: fix label used in ServiceMonitor to select cilium-agent [ upstream commit 825472219312c0530b0145c030cb26c227060757 ] The ServiceMonitor uses labels to select a Service resource, and the corresponding Service resource is labeled with `k8s-app: cilium`. Signed-off-by: Patrick Mahoney <pmahoney@greenkeytech.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
fec362f k8s: Provision NodePort services for LoadBalancer [ upstream commit 26f3a35613fb65b797bfcd3a7a4a99cc61c1a319 ] Some k8s external loadbalancers expect a service of the LoadBalancer type to be reachable via a NodePort defined by the service. Therefore, in the case of BPF NodePort, for each k8s LoadBalancer service we need to provision a BPF service for each NodePort defined in such a service. Signed-off-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
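A minimal Go sketch of the provisioning rule above: LoadBalancer services carry node ports just like NodePort services, so both types must yield BPF NodePort frontends. The types are illustrative stand-ins, not Cilium's or Kubernetes' actual API structs.

```go
package main

import "fmt"

// ServicePort mirrors the relevant k8s fields: a cluster-facing port and an
// optionally allocated node port.
type ServicePort struct {
	Port     uint16
	NodePort uint16 // 0 if none allocated
}

type Service struct {
	Type  string // "ClusterIP", "NodePort", "LoadBalancer"
	Ports []ServicePort
}

// nodePortsToProvision returns the node ports that need a BPF service entry.
// The fix above amounts to including "LoadBalancer" in this check, not just
// "NodePort".
func nodePortsToProvision(svc Service) []uint16 {
	if svc.Type != "NodePort" && svc.Type != "LoadBalancer" {
		return nil
	}
	var out []uint16
	for _, p := range svc.Ports {
		if p.NodePort != 0 {
			out = append(out, p.NodePort)
		}
	}
	return out
}

func main() {
	lb := Service{Type: "LoadBalancer", Ports: []ServicePort{{Port: 80, NodePort: 30000}}}
	fmt.Println(nodePortsToProvision(lb)) // [30000]
}
```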
48e0e41 bpf: remove optimization to bypass rev-snat as prep for external ip [ upstream commit d7caae44a9a10ac02ee6a7fc4714b98b4617c6f4 ] Remove this optimization as with future external ip support we always need to recircle through the tail call instead. This means an additional indirect call that is subject to retpoline for fast-path, but once we have kernel side support for direct patching (wip), this overhead can be reduced significantly again. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
d56cb8d bpf: remap punt to stack so we properly recircle into bpf_netdev [ upstream commit 428f43e177de53fe7da0fd885da2eb577fdb071d ] Currently it would have returned TC_ACT_OK, so we couldn't distinguish whether there was an actual entry in the NAT table that got properly reverse SNAT'ed or whether the request is not subject to NAT and to be pushed up the stack instead for further processing. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 31 October 2019, 10:26:36 UTC
2f2a254 flannel: Disable endpoint connectivity health check [ upstream commit 164a470be79f6219c9ad9a29506fd304928c4b3c ] No reason to disable endpoint health checking overall. Disabling endpoint connectivity health checking is sufficient. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Joe Stringer <joe@cilium.io> 29 October 2019, 08:44:13 UTC
5419eb3 helm: Disable endpoint-health-checking when chaining is enabled [ upstream commit bdb51a0fb0e1a139eb9a2dfbd024fb3ed1e717d7 ] Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Joe Stringer <joe@cilium.io> 29 October 2019, 08:44:13 UTC
dab81cf agent: Add --enable-endpoint-health-checking flag [ upstream commit f1248058510bcf902ca528ed033ecd99c98e460b ] Add ability to have node level connectivity health checking while disabling virtual endpoint connectivity health checks. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Joe Stringer <joe@cilium.io> 29 October 2019, 08:44:13 UTC
7f7dcc8 endpoint: regeneration controller runs with `RegenerateWithDatapathRewrite` [ upstream commit 18bb87c67d42003212d10cae6a2ce61995eb2aad ] The controller previously did not specify at which level the regeneration should occur for the Endpoint, which meant that the level used was `RegenerateWithoutDatapath`, which does not try to recompile or reload the Endpoint's datapath program. This means that if a prior regeneration failed, and the controller was invoked, the regeneration would "succeed", but the Endpoint's programs would not be built, leaving it in a bad state. To fix this, always ensure that we try to recompile the Endpoint's programs if a template has not been built (`RegenerateWithDatapathRewrite`). Signed-off by: Ian Vernon <ian@cilium.io> Signed-off-by: André Martins <andre@cilium.io> 22 October 2019, 23:30:05 UTC
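A Go sketch of the regeneration-level choice the commit above describes: the controller must request the level that recompiles the datapath, otherwise a previously failed build "succeeds" without producing programs. The level names follow the commit message; the types are illustrative, not Cilium's actual API.

```go
package main

import "fmt"

// RegenerationLevel orders how much work a regeneration performs.
type RegenerationLevel int

const (
	RegenerateWithoutDatapath     RegenerationLevel = iota // no recompile or reload
	RegenerateWithDatapathLoad                             // reload existing programs
	RegenerateWithDatapathRewrite                          // recompile and reload
)

// levelForController returns the level the regeneration controller should
// request. Always asking for a rewrite ensures that a prior failed build is
// retried instead of being skipped, which left endpoints in a bad state.
func levelForController() RegenerationLevel {
	return RegenerateWithDatapathRewrite
}

func main() {
	fmt.Println(levelForController() == RegenerateWithDatapathRewrite) // true
}
```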
f936246 eni: Allow selecting subnet by Name tag [ upstream commit fc5b7e3ad3ce3b8059e7672aa5280a435f6b0cb9 ] Currently the subnet-tag with key "Name" is not treated as a tag, so we can't select a subnet by specifying a Name tag. This change allows selecting a subnet by its Name tag. Fixes: #9428 Signed-off-by: Jaff Cheng <jaff.cheng.sh@gmail.com> Signed-off-by: André Martins <andre@cilium.io> 22 October 2019, 23:30:05 UTC
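A Go sketch of the matching behavior after the fix above: the AWS "Name" tag is looked up like any other tag key when filtering candidate subnets. The types and function names are illustrative, not Cilium's ENI code.

```go
package main

import "fmt"

// Subnet is a minimal stand-in for an AWS subnet with its tags.
type Subnet struct {
	ID   string
	Tags map[string]string
}

// subnetMatches reports whether a subnet carries all required tag key/value
// pairs. After the fix, "Name" goes through this same lookup rather than
// being special-cased away.
func subnetMatches(s Subnet, required map[string]string) bool {
	for k, v := range required {
		if s.Tags[k] != v {
			return false
		}
	}
	return true
}

func main() {
	s := Subnet{ID: "subnet-1", Tags: map[string]string{"Name": "prod-a"}}
	fmt.Println(subnetMatches(s, map[string]string{"Name": "prod-a"})) // true
}
```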
49bc179 Add ipsec upsert logs in debug mode [ upstream commit 7946e441cdb38eb4e65583b829f0f58fe20dd7d5 ] Signed-off-by: Laurent Bernaille <laurent.bernaille@datadoghq.com> Signed-off-by: André Martins <andre@cilium.io> 22 October 2019, 23:30:05 UTC
c0f4dfd Support null encryption/auth [ upstream commit 8d290d31e187ca83d3acfbc8ac237527225c914c ] Signed-off-by: André Martins <andre@cilium.io> 22 October 2019, 23:30:05 UTC
af2021b vendor: update k8s dependencies to 1.16.2 Signed-off-by: André Martins <andre@cilium.io> 19 October 2019, 18:52:07 UTC
5ab0d03 update k8s to 1.13.12, 1.14.8, 1.15.5 and 1.16.2 Signed-off-by: André Martins <andre@cilium.io> 19 October 2019, 18:52:07 UTC
832ec5d go: bump golang to 1.12.12 Signed-off-by: André Martins <andre@cilium.io> 18 October 2019, 12:46:25 UTC
14c85d4 pkg/k8s: consider node taints as part of node equality [ upstream commit 9b22d4e4f1d92db4e7feaffbb5631f99fb1ba898 ] As kube-controller-manager can change node taints to signal whether the node is reachable, Cilium should also take those taints into account. If a node is down for more than 15 minutes, it can be removed from the kvstore due to lease expiration. That same node won't be re-added because Cilium only checks for equality of annotations and node name. Checking taints for equality as well makes sure the node will be re-added to the kvstore. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Sebastian Wicki <sebastian@isovalent.com> 17 October 2019, 19:44:40 UTC
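A Go sketch of the equality check above: two node objects compare equal only if name, annotations, and now also taints match, so a taint change (e.g. the unreachable taint set by kube-controller-manager) triggers a kvstore re-upsert. The types are illustrative stand-ins for the k8s node fields.

```go
package main

import (
	"fmt"
	"reflect"
)

// Taint is a minimal stand-in for a k8s node taint.
type Taint struct{ Key, Value, Effect string }

// Node carries only the fields relevant to the equality check.
type Node struct {
	Name        string
	Annotations map[string]string
	Taints      []Taint
}

// nodesEqual previously compared only name and annotations; including
// taints means reachability changes are no longer invisible to Cilium.
func nodesEqual(a, b Node) bool {
	return a.Name == b.Name &&
		reflect.DeepEqual(a.Annotations, b.Annotations) &&
		reflect.DeepEqual(a.Taints, b.Taints) // new: taints now matter
}

func main() {
	n := Node{Name: "n1"}
	tainted := n
	tainted.Taints = []Taint{{Key: "node.kubernetes.io/unreachable", Effect: "NoSchedule"}}
	// Before the fix this compared equal, so a node whose lease had expired
	// was never re-added to the kvstore; now the taint change is detected.
	fmt.Println(nodesEqual(n, tainted)) // false
}
```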
0f1a38b docs: Fix clustermesh secrets namespace [ upstream commit a729a7beaced60c0db5cdc84b6ddc1aeeba80630 ] The other commands in this doc specify the namespace, but the creation of the clustermesh configuration doesn't. I found that without specifying this, the configuration was not properly mounted into the cilium containers. Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Sebastian Wicki <sebastian@isovalent.com> 17 October 2019, 19:44:40 UTC
240d374 bugtool: add `cilium node list` output [ upstream commit 788e83e96307833df7553ab974ab0ba1b9daf6f8 ] Signed-off by: Ian Vernon <ian@cilium.io> Signed-off-by: Sebastian Wicki <sebastian@isovalent.com> 17 October 2019, 19:44:40 UTC
72523b1 cilium: encryption, better error reporting for multiple default routes [ upstream commit 14d6271840d9952e0824714cfe75274d237f0d7f ] I naively tried to add a test to auto-detect the default route/interface in my self-managed cluster. However, this cluster has multiple interfaces and multiple "default" routes distinguished by dev. The result was that the test failed, but it was not clear why, which led to some debugging that proper error reporting in the logs would have made unnecessary. Improve the error messages here. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Sebastian Wicki <sebastian@isovalent.com> 17 October 2019, 19:44:40 UTC
600dff2 cilium: encryption, increase initHealth RunInterval [ upstream commit 9d711f5ed6cdd3670ed27f73c56b8875cb3bffc8 ] While testing ginkgo with a self-managed kubernetes cluster I would sometimes fail encryption tests because the cilium health policy would keep getting regenerated by the cilium-health-ep controller. It seems the timing is just a bit tight to get everything online before the controller issues a PingEndpoint. If this fails, the health endpoint is removed and added again. Sometimes this would get into a cycle where the health agent never got up before the ginkgo tests aborted. Resolve this by increasing the timer on the controller; after this I no longer see any errors. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Sebastian Wicki <sebastian@isovalent.com> 17 October 2019, 19:44:40 UTC
9968bc4 cilium: bpf, fix undeclared ENCRYPT_IFACE [ upstream commit badc9beffe8efd66f040f6bfe08ca9c2cf283cd4 ] level=warning msg="/var/lib/cilium/bpf/bpf_netdev.c:522:23: error: use of undeclared identifier 'ENCRYPT_IFACE'" subsys=daemon level=warning msg=" fib_params.ifindex = ENCRYPT_IFACE;" subsys=daemon level=warning msg=" ^" subsys=daemon Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Sebastian Wicki <sebastian@isovalent.com> 17 October 2019, 19:44:40 UTC