https://github.com/cilium/cilium

b115746 Prepare 1.0.6 release Signed-off-by: Thomas Graf <thomas@cilium.io> 17 September 2018, 08:36:13 UTC
7379e80 examples/kubernetes: Add clean-cilium-bpf-state option [ upstream commit b88b879d2b5ab78c5a82398d6a46d084e5892a5b ] Add a new init container option which removes all pinned BPF maps during startup without clearing /var/run/cilium/state/. Signed-off-by: Joe Stringer <joe@covalent.io> 12 September 2018, 04:59:42 UTC
d7ff32c examples/kubernetes: remove etcd Secrets from the ConfigMap [ upstream commit 186e6f399faa9b38677cdc913be7b920e4ac1637 ] The Secrets can now be created by running: kubectl create secret generic -n kube-system cilium-etcd-secrets \ --from-file=etcd-ca=ca.crt \ --from-file=etcd-client-key=client.key \ --from-file=etcd-client-crt=client.crt Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Joe Stringer <joe@covalent.io> 12 September 2018, 04:59:42 UTC
704efac lbmap: Guarantee order of backends while scaling service [ upstream commit 7e2f9149ccdcf8300959f4b53eeb0aa6ae567f3e ] This commit resolves inconsistency when updating loadbalancer BPF map entries. The datapath relies on a consistent mapping of backend index to backend IP as the backend index is cached in the connection tracking table. The commit resolves the following deficits: * The previous listing of backend services when synchronizing down relied on map iteration. Unfortunately, map iteration order is random, which could lead to unnecessary backend slot reordering. * When scaling down the number of backends, the previous behavior would delete the backends and shift all entries to correspond to the new backend count. This broke the consistent load-balancing mapping for existing connections relying on the shifted backends. Unfortunately, the datapath relies on a hole-free list of backends in order to perform cheap slave selection based on the packet hash. The resolution is to preserve backend slots that are freed up by backend deletions and fill them with duplicates of other backends. In order to keep the load-balancing distribution fair, the backend with the fewest duplicates is nominated to fill in. Fixes: 42254ccbfaf ("lbmap: Support transactional updates") Fixes: #5425 Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Joe Stringer <joe@covalent.io> 12 September 2018, 04:59:42 UTC
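The slot-filling idea described in the commit above can be illustrated with a minimal Go sketch. The slice of slots and the `fillHoles` helper below are hypothetical stand-ins, not the real lbmap types: freed slots are kept in place and filled with a duplicate of the live backend that currently has the fewest duplicates.
```
package main

import "fmt"

// fillHoles keeps freed backend slots in place and fills them with duplicates
// of remaining backends, always nominating the backend that currently has the
// fewest duplicates so hash-based selection stays roughly fair.
func fillHoles(slots []string) []string {
	count := map[string]int{}
	for _, b := range slots {
		if b != "" {
			count[b]++
		}
	}
	for i, b := range slots {
		if b != "" {
			continue // slot still points at a live backend
		}
		// nominate the live backend with the fewest duplicates
		best, bestCount := "", int(^uint(0)>>1)
		for name, c := range count {
			if c < bestCount {
				best, bestCount = name, c
			}
		}
		slots[i] = best
		count[best]++
	}
	return slots
}

func main() {
	// backend "10.0.0.2" was deleted, leaving a hole at slot 1
	fmt.Println(fillHoles([]string{"10.0.0.1", "", "10.0.0.3", "10.0.0.1"}))
	// prints: [10.0.0.1 10.0.0.3 10.0.0.3 10.0.0.1]
}
```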
a71f07f proxy: Remove port binding check on redirect creation [ upstream commit 02e737224008e21f04cf7872eaf1e568a0325c3f ] On redirect creation, we were checking that the allocated port could be bound, by opening and closing a listen socket on that port. However, Linux gives no guarantees about the delay after closing when the port can be bound again (by Envoy, in this case). Due to recent performance improvements, the delay between the test and the creation of the listener in Envoy was shortened, which increased the probability of Envoy not being able to bind the port for a new listener. Remove this check altogether. Signed-off-by: Romain Lenglet <romain@covalent.io> Signed-off-by: Joe Stringer <joe@covalent.io> 12 September 2018, 04:59:42 UTC
13f3cee lbmap: Support transactional updates [ upstream commit 42254ccbfaf8d3bfd464064aa995a38145bbfb07 ] The existing service update logic deleted and re-added a service every time a service had to be updated. This commit adds update logic to the load balancer map and uses it when adding and updating services. There is still a small race window: 1. If the number of service backends is reduced 2. AND the CPU processing a packet in the datapath gets preempted between looking up the master key and looking up the selected slave. 3. AND the selected slave has an index that is greater than the new number of backends 4. THEN the slave lookup will fail Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Joe Stringer <joe@covalent.io> 12 September 2018, 04:59:42 UTC
2c1cd38 lbmap: Introduce lock to allow for transactional operations [ upstream commit 6339f502979ebcf78072b29f6228e2bcdeabeaf5 ] There is a high-level lock in daemon/loadbalancer.go which lbmap currently relies on, but this is very fragile and not ideal. Introduce a lock at the lbmap level. It's a simple global lock spanning all maps for now. It could eventually become more fine-grained to cover only certain maps at a time, but the contention on this lock is minimal for now. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Joe Stringer <joe@covalent.io> 12 September 2018, 04:59:42 UTC
8cfed16 lbmap: Mark internal APIs as private [ upstream commit 055e0578b6fbab6f028da514e7a29d47d94b225e ] Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Joe Stringer <joe@covalent.io> 12 September 2018, 04:59:42 UTC
2cc2493 test/k8sT: Fix bad backport Commit 15358ca1d175 purported to backport commit c162b904a93e ("test/k8sT: use specific commit for cilium/star-wars-demo YAMLs"), but it made other changes that aren't in the original commit and completely broke this test. Fix it by reverting the parts that weren't in the original commit. Fixes: 15358ca1d175 ("test/k8sT: use specific commit for cilium/star-wars-demo YAMLs") Signed-off-by: Joe Stringer <joe@covalent.io> 12 September 2018, 04:59:42 UTC
f4fcd45 examples/kubernetes: add node.kubernetes.io/not-ready toleration [ upstream commit 1c3ff40396690bfbcaa66d855dae540bac862fd9 ] Kubernetes 1.12 added node.kubernetes.io/not-ready for nodes whose runtime network is not yet ready. Since Cilium needs to be deployed on nodes so it can set up the CNI configuration, the not-ready toleration needs to be added to the DaemonSet. Signed-off-by: André Martins <andre@cilium.io> 04 September 2018, 16:07:50 UTC
cb5592c k8s: add /status to RBAC for backport compatibility In Cilium 1.2, a k8s functionality was added called "CRD Subresources". Cilium enables this functionality by updating its CRD definition in the Kubernetes API-Server and, with it, the CRD version. This functionality allows Cilium to only update the Cilium Network Policy (CNP) and Cilium Endpoint (CEP) Status in the `/status` API endpoint without sending the full object to the Kubernetes API-Server. On a downgrade from 1.2 to 1.0, where the user also downgrades the RBAC rules, Cilium will send the full object to Kubernetes whenever it wants to update the status of the CNP or the CEP. As the user downgraded the RBAC definition, Cilium will no longer have permissions to write any status for any Cilium object. As the CRD definition was updated in the Kubernetes API-Server in 1.2, Cilium 1.0 will not perform any changes to that CRD definition as its version is lower than the one installed in kube-apiserver. To fix this issue, it is easier to backport the RBAC rules for 1.0, which allow Cilium to continue to have write permissions on `/status`. This downgrade issue will only happen in Kubernetes >= 1.11, which is when "CRD Subresources" were enabled by default in the Kubernetes API-Server. Signed-off-by: André Martins <andre@cilium.io> 03 September 2018, 11:36:21 UTC
f275c51 k8s: fix clean state InitContainer permissions Signed-off-by: André Martins <andre@cilium.io> 31 August 2018, 09:45:48 UTC
2e1c30a test: fix star wars demo to run star-wars v1.0 [ upstream commit 50f29e92fc737d4649257c744156724c064a983c ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
b7b5b48 Revert "test/k8sT: use specific commit for cilium/star-wars-demo YAMLs" [ upstream commit fce1786db8d09662b18f18578038b5e9f09a355f ] This reverts commit c162b904a93e7b597a60dd00cbcf9f398d9235c1. Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
fec5434 lxcmap: Fix always returning an error on delete [ upstream commit a0fbf0adde2c87b1b3b7f32e4464022a7adc2a26 ] Previously, the `errors` variable that would be returned was always initialized to a non-nil empty slice, which would ensure that if the caller checked it against nil that it would appear to always fail, leading to the following false positive warning message: unable to delete element 7439 from map /sys/fs/bpf/tc/globals/cilium_policy_7439: [] Fixes: #5089 Signed-off-by: Joe Stringer <joe@covalent.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
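The bug above is a common Go pitfall: an initialized-but-empty error slice is non-nil, so a caller-side `!= nil` check always trips. A minimal self-contained illustration (not the actual lxcmap code):
```
package main

import "fmt"

// deleteAllBuggy mimics the broken pattern: errs is initialized to an empty,
// non-nil slice, so callers comparing the result against nil always see a
// "failure" even when nothing went wrong.
func deleteAllBuggy() []error {
	errs := []error{} // non-nil even when empty
	return errs
}

// deleteAllFixed leaves the slice nil unless an error is actually appended.
func deleteAllFixed() []error {
	var errs []error // nil until something is appended
	return errs
}

func main() {
	fmt.Println(deleteAllBuggy() != nil) // true  -> false-positive warning
	fmt.Println(deleteAllFixed() != nil) // false -> no spurious warning
}
```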
05421f8 lxcmap: Improve error messages in DeleteElement() [ upstream commit a8e1512f3afe3ad88a6f8d708aa4fd1f569ead21 ] Include the map name here, so that the caller doesn't need to. Related: #5089 Signed-off-by: Joe Stringer <joe@covalent.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
15358ca test/k8sT: use specific commit for cilium/star-wars-demo YAMLs [ upstream commit c162b904a93e7b597a60dd00cbcf9f398d9235c1 ] A recent commit moved location of files in the cilium/star-wars-demo repository. This broke the CI tests because they assumed the files were at a specific location. Hardcode the base link to GitHub to a specific commit to get the files for running the test instead of pointing to master. Signed-off by: Ian Vernon <ian@cilium.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
4d1817a pkg/k8s: properly handle empty NamespaceSelector [ upstream commit 376ab67369722fde7b7d2dda0e1aa75741859b65 ] If a rule is passed with an empty NamespaceSelector, this means that all traffic should be allowed to/from all pods in all namespaces. However, Cilium was improperly treating this as allowing all traffic to/from all destinations/sources accordingly. Signed-off by: Ian Vernon <ian@cilium.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
27a2fae docs: Fix theme paths so RTD picks up in-tree theme [ upstream commit 8e7979e5b3c4b1745ad7753c9b0a86ff49d5e0b7 ] Signed-off-by: Joe Stringer <joe@covalent.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
1696231 docs: Expand the "v:" to "version:" in the nav bar [ upstream commit aada722e97320cbc5ef7847b4e6a791cac627163 ] The navigational bar has formatted versions like this for some time: "v: v1.0" This looks a bit weird, because the "v" is duplicated. Make it a bit more obvious by printing "version: ..." instead. For active branches, this should show like: "version: v1.1" For other standard readthedocs versions, it should show like: "version: latest" "version: stable" For tagged releases, this should show like: "version: 1.1.1" Signed-off-by: Joe Stringer <joe@covalent.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
8019504 docs: Update sphinx theme to print version for stable [ upstream commit 556036452e9e25b5d417003c4c067622f660aff7 ] Update the sphinx theme so that if the version provided by readthedocs is "stable", then the actual version that the docs are generated from is formatted in the upper left hand navigational bar, rather than "stable". Signed-off-by: Joe Stringer <joe@covalent.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 30 August 2018, 21:31:52 UTC
74ce213 endpoint: Fix locking while calling endpoint.getLogger() [ upstream commit a3c0cf2aa3e3742bcd8a2eaf84bc4602a7327718 ] Fixes several occasions of endpoint.Mutex not being held at all when calling endpoint.getLogger(). Fixes several additional occasions of endpoint.Mutex only being held for writing which did not protect the e.logger field sufficiently when the cached logger was to be updated. Instead of fixing all callers, introduce a new dedicated mutex for reading and writing e.logger. Given the new mutex for the logger, some of the write locking of endpoint.Mutex can be relaxed to read locking. Fixes: deb2de2ce563 ("completion: Refactor proxy completion logic in a new package") Fixes: 1cd2b3092d6c ("k8s: Add CiliumEndpoint sync GC controller") Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 09 August 2018, 12:54:00 UTC
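The fix above boils down to giving the cached logger its own mutex so callers no longer need (or race on) the endpoint's main lock. A stripped-down sketch with illustrative field names, not the real endpoint type:
```
package main

import (
	"log"
	"os"
	"sync"
)

// endpoint is a stand-in for the real type: Mutex still guards endpoint
// state, while loggerMu guards only the cached logger, so getLogger can be
// called without holding the main lock.
type endpoint struct {
	Mutex sync.RWMutex // protects endpoint state (unchanged)

	loggerMu sync.RWMutex // protects only the cached logger below
	logger   *log.Logger
}

func (e *endpoint) getLogger() *log.Logger {
	e.loggerMu.RLock()
	l := e.logger
	e.loggerMu.RUnlock()
	return l
}

func (e *endpoint) updateLogger() {
	l := log.New(os.Stdout, "endpoint: ", log.LstdFlags)
	e.loggerMu.Lock()
	e.logger = l
	e.loggerMu.Unlock()
}

func main() {
	e := &endpoint{}
	e.updateLogger()
	e.getLogger().Println("logger access no longer requires endpoint.Mutex")
}
```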
53ae1de allocator: nextCache can hold a nil value for an id/key [ upstream commit fedecd9160dcc38c1be4b2b682de350748e3b622 ] panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1626fbf] goroutine 46 [running]: github.com/cilium/cilium/pkg/kvstore/allocator.(*cache).start.func1(0xc4204b3ec0, 0xc421002000, 0xc4204b3e00, 0xc422edaaa0) /home/nirmoy/go/src/github.com/cilium/cilium/pkg/kvstore/allocator/cache.go:202 +0x6af created by github.com/cilium/cilium/pkg/kvstore/allocator.(*cache).start /home/nirmoy/go/src/github.com/cilium/cilium/pkg/kvstore/allocator/cache.go:137 +0x16b Signed-off-by: Nirmoy Das <ndas@suse.de> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 09 August 2018, 12:54:00 UTC
232b448 monitor: Fix spin loop when reading stdout from monitor fails [ upstream commit 45aea8b9963bafc759a7ba16825c81c5f35a1c86 ] The following pprof cpu trace was observed:
```
      flat  flat%   sum%        cum   cum%
    6670ms 32.97% 32.97%     7650ms 37.82%  syscall.Syscall
    2270ms 11.22% 44.19%     3290ms 16.26%  runtime.scanobject
    1690ms  8.35% 52.55%     4300ms 21.26%  runtime.mallocgc
     910ms  4.50% 57.04%      910ms  4.50%  runtime.heapBitsForObject
     680ms  3.36% 60.41%      680ms  3.36%  runtime.greyobject
     560ms  2.77% 63.17%      560ms  2.77%  runtime.memclrNoHeapPointers
     460ms  2.27% 65.45%      460ms  2.27%  runtime.heapBitsSetType
     430ms  2.13% 67.57%      490ms  2.42%  runtime.ifaceeq
     300ms  1.48% 69.06%    10450ms 51.66%  bufio.(*Reader).ReadSlice
     300ms  1.48% 70.54%      300ms  1.48%  runtime.casgstatus
```
The cause seems to be lack of error handling of `ReadBytes()`, so an EOF on the pipe will result in the for loop spinning forever. Also fix up invalid calls to Fatalf() and instead restart the monitor when creation of the pipe fails. The last fix is to have launcher.Run() return an error if the command cannot be started so we can restart the monitor properly if we can't execute it at first. [ Backporter's notes: Manual conflict resolution was required in monitor/launch/launcher.go. The backported run() creates and removes the FIFO, returns an error when the FIFO cannot be created or opened, when the monitor cannot be launched, or when reading/unmarshalling its stdout fails, and Run() restarts it in a loop with exponential backoff (1s minimum, 2 minute maximum); the original flattened diff is omitted here. ] Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Joe Stringer <joe@covalent.io> 07 August 2018, 20:15:32 UTC
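The shape of that fix can be shown in a self-contained sketch: the read loop returns an error as soon as `ReadBytes()` fails instead of spinning on EOF, and an outer loop restarts it with exponential backoff. This is not the actual launcher code; the hand-rolled backoff below stands in for Cilium's backoff helper, and the short intervals are only for the demo.
```
package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"strings"
	"time"
)

// readLoop returns as soon as ReadBytes fails (e.g. EOF on the pipe) instead
// of looping forever on an error it never checked.
func readLoop(r io.Reader) error {
	br := bufio.NewReader(r)
	for {
		line, err := br.ReadBytes('\n')
		if err != nil {
			return fmt.Errorf("unable to read stdout from monitor: %w", err)
		}
		_ = line // in the real code the line is unmarshalled into a status
	}
}

// run restarts the read loop with exponential backoff so a persistently
// failing monitor does not turn into a busy loop.
func run(attempts int) {
	const maxWait = 2 * time.Second // the real backoff caps at minutes
	wait := 100 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := readLoop(strings.NewReader("one status line\n")); err != nil {
			log.Printf("monitor exited: %v; restarting in %s", err, wait)
		}
		time.Sleep(wait)
		if wait *= 2; wait > maxWait {
			wait = maxWait
		}
	}
}

func main() { run(3) }
```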
e66620b kubernetes: set maxUnavailable to pods to 2 on upgrade [ upstream commit 6ac133c503223e0cd5186da07bf355530f3a5620 ] [ Backporter's notes: Removed crio YAMLs during backport. ] This will prevent Cilium from stopping all pods when doing a version upgrade. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Joe Stringer <joe@wand.net.nz> 30 July 2018, 18:59:44 UTC
c17f270 daemon: always re-add CNP when receiving an update from Kubernetes [ upstream commit ca3b2c254d24b3c271697f26b519399c0f6ceb0f ] Fixes: 1fd4f57c1ab3 ("adding CRD cilium network policy policy status") Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Joe Stringer <joe@wand.net.nz> 30 July 2018, 18:59:44 UTC
d839bf9 Add label script docs to backporting process [ upstream commit 0b5f00a6e40e54fb4365040bcb92a05d48e8b3b4 ] Signed-off-by: Maciej Kwiek <maciej@covalent.io> Signed-off-by: Joe Stringer <joe@wand.net.nz> 30 July 2018, 18:59:44 UTC
ac8b82f Add label script for backporting [ upstream commit 8ae80d14be1b166f122c4f960001794ba1919842 ] contrib/backporting/set-labels.py can be used to change backported PR label status accordingly. Signed-off-by: Maciej Kwiek <maciej@covalent.io> Signed-off-by: Joe Stringer <joe@wand.net.nz> 30 July 2018, 18:59:44 UTC
d50ed77 pkg/endpoint: annotate pod with numeric identity [ upstream commit a735c86d3dc6f352950f3ce254601d03654b9e3e ] As the pod annotation value is expected to be numeric, Cilium should always set that value with a numeric one instead of the string representation. Example of the expected output: $ kubectl get pods -n kube-system kube-dns-7dcc557ddd-vl9s2 -o yaml apiVersion: v1 kind: Pod metadata: annotations: cilium.io/identity: "129" scheduler.alpha.kubernetes.io/critical-pod: "" Fixes: e2d08b5ba510 ("endpoint: Use controller pattern to sync identity to k8s pod") Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Joe Stringer <joe@wand.net.nz> 30 July 2018, 18:59:44 UTC
d383263 Prepare for release v1.0.5 Signed-off-by: Ian Vernon <ian@cilium.io> 27 July 2018, 06:07:20 UTC
9fd9d51 test/provision: do not install kubedns in 1.11 Signed-off-by: André Martins <andre@cilium.io> 27 July 2018, 00:15:34 UTC
a556eb5 Test: Skip Kube-dns if the Kubernetes version is 1.11 [ upstream commit 8c8f7ae7df1e505883ceb7fd1bc1ec6fa63e264c ] On Kubernetes 1.11 the DNS engine is CoreDNS instead of kube-dns. With this change we avoid having two DNS services installed when CoreDNS is in place. Signed-off-by: Eloy Coto <eloy.coto@gmail.com> Signed-off-by: André Martins <andre@cilium.io> 27 July 2018, 00:15:34 UTC
e6214a5 test/k8sT: wait for DNS to be ready in Kafka pods [ upstream commit 1f2f2807bce9020075e73f8a6fe5facbfed53e56 ] We have observed that while DNS lookups succeed from the host for the various services used in the Kafka Policies test, the CI has failed with errors like the following: "Removing server kafka-service:9092 from bootstrap.servers as DNS resolution failed for kafka-service" To prevent this issue, ensure that "nslookup" succeeds within each Kafka pod itself for "kafka-service". Signed-off by: Ian Vernon <ian@cilium.io> Signed-off-by: André Martins <andre@cilium.io> 27 July 2018, 00:15:34 UTC
672ec9f tests: disable k8s 1.12-alpha.0 tests [ upstream commit db674150168541c2d1dc6524d761f7e2d0f00906 ] Since k8s 1.12-alpha.0 was built before the integration of the CRD Update Status feature, the tests will always fail as Cilium uses this feature for all k8s versions >= 1.11.0. This commit should be reverted once k8s 1.12-beta.0 is released. Signed-off-by: André Martins <andre@cilium.io> 27 July 2018, 00:15:34 UTC
bc5391c Revert "Revert "ginkgo-kubernetes-all.Jenkinsfile: move k8s 1.10 and 1.12 to same stage"" [ upstream commit 41d8fc9260bc6b80cf12e01431a70e3c5e5f4423 ] This reverts commit 5e000239faafaa35cbd0bda747259c7016283ea5. Signed-off-by: André Martins <andre@cilium.io> 27 July 2018, 00:15:34 UTC
6786587 test: update k8s to 1.9.9 and 1.10.5 [ upstream commit e1c154dece59e5c4ff392ac66385ad6a8426c608 ] Signed-off-by: André Martins <andre@cilium.io> 27 July 2018, 00:15:34 UTC
784844b Revert "Revert "test: update k8s to 1.8.14, 1.10.4 and 1.11.0"" [ upstream commit 9f979bbb42d6611159211f58f963be3ac3a11528 ] This reverts commit 0f4a1d7bf4042b17c5130f0fb356e37e9e0f8023. Signed-off-by: André Martins <andre@cilium.io> 27 July 2018, 00:15:34 UTC
076e03e etcd: Fix and relax during recreate watcher loop [ upstream commit 734745ec7c801e54758ebbad61dc5f2d24dbef2c ] The revision to watch should not be incremented when Watch() returns an already closed channel. Also fix a cosmetic problem where lastRev in the for loop was shadowing the variable declared outside of the loop. We have observed occasions where etcd returns a closed channel on calling Watch(). The problem typically resolves itself quickly but we should sleep for a while to relax the CPU. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 26 July 2018, 22:07:56 UTC
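The two behaviours in the commit above can be sketched abstractly: only advance the revision when events were actually received, and sleep briefly when the watch channel comes back already closed. The `watch` function and `event` type below are hypothetical stand-ins for the kvstore client, not the real etcd code:
```
package main

import (
	"fmt"
	"time"
)

type event struct{ rev int64 }

// watch is a hypothetical stand-in for the kvstore client's Watch call.
// In the failure mode described above it can return an already-closed channel.
func watch(fromRev int64) <-chan event {
	ch := make(chan event)
	close(ch) // simulate the pathological case
	return ch
}

func watchLoop(stop <-chan struct{}) {
	var lastRev int64
	for {
		select {
		case <-stop:
			return
		default:
		}
		events := watch(lastRev + 1)
		got := false
		for ev := range events {
			got = true
			lastRev = ev.rev // only advance when events were received
		}
		if !got {
			// channel closed without delivering anything: do not bump the
			// revision, and relax the CPU before recreating the watcher
			time.Sleep(100 * time.Millisecond)
		}
	}
}

func main() {
	stop := make(chan struct{})
	go watchLoop(stop)
	time.Sleep(300 * time.Millisecond)
	close(stop)
	fmt.Println("watcher recreated without spinning or skipping revisions")
}
```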
b13114e pkg/endpoint: fix endpoint.logger race condition [ upstream commit b2967eb3b62b5d4aaa79ec023e71502db4380473 ] As getLogger specifies that endpoint.Mutex needs to be held, call endpoint.getLogger() only after a mutex.Lock(). Fixes: 6a8b48951470 ("endpoint: Limit proxy completion timeout to proxy updates") Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 26 July 2018, 22:07:56 UTC
e115d28 daemon: fix minimum number of work threads unit test [ upstream commit 3b33059187f7f89e2b3571d43cf24c38900536e0 ] The test failed with the following output when run locally: ``` ---------------------------------------------------------------------- FAIL: <autogenerated>:1: DaemonConsulSuite.TestMinimumWorkerThreadsIsSet daemon_test.go:181: c.Assert(numWorkerThreads() >= 4, Equals, true) ... obtained bool = false ... expected bool = true ``` Fixes: 25f898609888 ("daemon: change minimal worker thread to 2") Signed-off-by: Ian Vernon <ian@cilium.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 26 July 2018, 22:07:56 UTC
bf42673 daemon: change minimal worker thread to 2 [ upstream commit 25f898609888dc1b0adf5b22e64a24f6f3f3b22d ] As all of our CI was running with 2 CPUs, it was creating some scalability issues in cilium-agent while building the BPF programs. For this reason we should decrease the minimal worker threads to 2 so we can make sure Cilium won't take too much time to regenerate BPF programs on nodes that only have 2 CPUs. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Maciej Kwiek <maciej.iai@gmail.com> 26 July 2018, 22:07:56 UTC
2bf2769 ginkgo-kubernetes-all.Jenkinsfile: fix misplaced bracket Fixes: dcc4f78376fb ("Test: Fix issues with Ginkgo Kubernetes Job") Signed-off-by: André Martins <andre@cilium.io> 23 July 2018, 18:18:37 UTC
39fd8c7 pkg/kvstore: fix high-cpu usage when Cilium loses Consul connectivity [ upstream commit a4f7f1e15ae5c2fe4564502cd4952aca2b907095 ] Fixes: 85469b099dac ("kvstore: New kvstore abstraction API") Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 21 July 2018, 14:51:49 UTC
4f02eec pkg/endpoint: set state ready if endpoint labels are the same [ upstream commit 04c810122df1a53232799edb488de4de6b5dc29b ] Fixes: 7ccf87701fc ("kvstore: New kvstore abstraction API") Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Ian Vernon <ian@cilium.io> 18 July 2018, 00:40:16 UTC
a592338 pkg/kvstore: set hard timeout for etcd lock path to 1 minute [ upstream commit c25a8493f8b3088443ea971747d2499bdee93d85 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Ian Vernon <ian@cilium.io> 18 July 2018, 00:40:16 UTC
b4c2731 test: remove policy enforcement in k8s tests [ upstream commit 94557d4c8a69ae51e10830818f96bef26e8098ff ] As we are testing policy enforcement in the runtime tests, there's no point in running this test in Kubernetes as it causes network disruption in the DNS pod. This network disruption might be causing test failures in the following tests. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 12 July 2018, 00:49:45 UTC
c1f0977 envoy: use local_resources parameter during bazel build [ upstream commit a2de55038c949c654a0a4b1a0724579217f25421 ] This sets the resources for memory, CPU, and I/O for the bazel build. This avoids errors like the following when building Envoy: ``` 23:27:39 virtualbox-iso: ERROR: /home/vagrant/.cache/bazel/_bazel_vagrant/502ef5068e38073dd9828a920a71f484/external/envoy/source/server/http/BUILD:11:1: C++ compilation of rule '@envoy//source/server/http:admin_lib' failed (Exit 4) 23:27:39 virtualbox-iso: gcc: internal compiler error: Killed (program cc1plus) 23:27:39 virtualbox-iso: Please submit a full bug report, 23:27:39 virtualbox-iso: with preprocessed source if appropriate. 23:27:39 virtualbox-iso: See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions. 23:27:39 virtualbox-iso: Target //:envoy failed to build 23:27:39 virtualbox-iso: Use --verbose_failures to see the command lines of failed build steps. 23:27:39 virtualbox-iso: INFO: Elapsed time: 444.469s, Critical Path: 74.62s 23:27:39 virtualbox-iso: INFO: 1544 processes, local. 23:27:39 virtualbox-iso: FAILED: Build did NOT complete successfully 23:27:39 virtualbox-iso: FAILED: Build did NOT complete successfully 23:27:40 virtualbox-iso: make: *** [envoy-release] Error 1 23:27:40 virtualbox-iso: Makefile:68: recipe for target 'envoy-release' failed 23:27:41 ==> virtualbox-iso: Deregistering and deleting VM... 23:27:41 ==> virtualbox-iso: Deleting output directory... 23:27:41 Build 'virtualbox-iso' errored: Script exited with non-zero exit status: 2 ``` Signed-off by: Ian Vernon <ian@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 12 July 2018, 00:49:45 UTC
4bd665d cilium-docker: fix gatewayIPv4 assignment [ upstream commit 9a43b10c07ccfbb0e5c2784f1c7dc1d5d29eec10 ] Signed-off-by: Nirmoy Das <ndas@suse.de> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 12 July 2018, 00:49:45 UTC
7963047 examples/kubernetes: add "system-node-critical" priorityClass [ upstream commit f385a2c37d0161e6a8d52fea8810a1b61d52483d ] More info: https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical-when-priorites-are-enabled Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 12 July 2018, 00:49:45 UTC
aaad939 test: add k8s 1.12 test framework [ upstream commit 878ad4d06426f3ec1294fea80fddd36d06b5e3c0 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 12 July 2018, 00:49:45 UTC
999f3ff examples/kubernetes: backport k8s 1.12 daemonset deployment files Signed-off-by: André Martins <andre@cilium.io> 06 July 2018, 21:11:45 UTC
cd92a92 pkg/policy: take into account To / FromRequires when computing L4 policy [ upstream commit 123901fadb9ae0b93f6759bb294899350a599755 ] Previously, requirements (i.e., ToRequires and FromRequires) were not taken into account when computing L4Policy. The net effect of this was that traffic from more identities than should have been allowed was allowed by L4 policy. To do so, augment the current L4 policy resolution framework to aggregate the endpoint selectors corresponding to all requirements which select the labels for which policy is being evaluated as Kubernetes MatchExpressions, and then for each rule, append these MatchExpressions to either FromEndpoints or ToEndpoints as the rule is being evaluated against the provided set of labels. Signed-off by: Ian Vernon <ian@cilium.io> Signed-off-by: Manali Bhutiyani <manali@covalent.io> 05 July 2018, 21:26:48 UTC
3e21b27 backport: Only check merged PRs [ upstream commit ff108fb09ac07eccffb31f73dc693856fde421a1 ] The script would surface closed PRs with the needs/backport label. Instead of cleaning those up, clean up the script. Signed-off-by: Ray Bejjani <ray@covalent.io> Signed-off-by: Manali Bhutiyani <manali@covalent.io> 05 July 2018, 21:26:48 UTC
4b12c46 backport: use the same url for all searches [ upstream commit 7dfffe42f51a0b5cfb8f2d2b085e7cc2da889f0d ] We had two; they were almost the same. Now we have one and no one will edit the incorrect one again. Signed-off-by: Ray Bejjani <ray@covalent.io> Signed-off-by: Manali Bhutiyani <manali@covalent.io> 05 July 2018, 21:26:48 UTC
f55d8bd test: Use latest stable etcd and consul images [ upstream commit 678e983ddc07f05019e3b0b6b04146502444b383 ] Change the real etcd and consul docker image tags that are used in the unit tests run in the CI. Fixes: ab999f126278 ("test: Use latest stable etcd and consul images") Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Manali Bhutiyani <manali@covalent.io> 05 July 2018, 21:26:48 UTC
30f3bf7 tests: Update cilium-builder in unit tests to 2018-06-21 [ upstream commit 04aa3a38f149055914c26dfb8592b4c46ee7cd29 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Manali Bhutiyani <manali@covalent.io> 05 July 2018, 21:26:48 UTC
2a45f4d pkg/endpoint: set policy revision if there is no datapath changes [ upstream commit 0e35b735c3c788dc47aa94b3e6e5dc49dfdbd25d ] As cilium calls regeneratePolicy inside regenerateBPF, it could end up setting up the policy revision before enforcing the changes in the datapath. This bug could be triggered by TriggerPolicyUpdates where the trace call could be the following: 1 - endpoint.TriggerPolicyUpdates 2 - endpoint.TriggerPolicyUpdatesLocked 3 - endpoint.regeneratePolicy returns (true, nil) returns (true, nil) 4 - endpoint.Regenerate 5 - endpoint.regeneratePolicy 6 - endpoint.setPolicyRevision(y) # -> WRONG! 7 - endpoint.regenerateBPF 8 - endpoint.setPolicyRevision(y) # -> 'y' was already wrongly set in step 6 Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Manali Bhutiyani <manali@covalent.io> 05 July 2018, 21:26:48 UTC
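The ordering bug above reduces to a simple rule: the policy revision may only be bumped after the datapath regeneration that enforces it has completed. A stripped-down model with illustrative names, not the real endpoint API:
```
package main

import "fmt"

type endpoint struct {
	policyRevision uint64
}

// regeneratePolicy computes the new policy and the revision it corresponds
// to, but must not publish that revision yet.
func (e *endpoint) regeneratePolicy() (changed bool, rev uint64) {
	return true, e.policyRevision + 1
}

// regenerateBPF compiles and loads the datapath; only on success may the
// revision move forward.
func (e *endpoint) regenerateBPF() error {
	return nil
}

func (e *endpoint) regenerate() error {
	_, rev := e.regeneratePolicy()
	if err := e.regenerateBPF(); err != nil {
		return err // revision stays put: the datapath was not updated
	}
	e.policyRevision = rev // set only after the datapath reflects the policy
	return nil
}

func main() {
	e := &endpoint{}
	if err := e.regenerate(); err == nil {
		fmt.Println("policy revision now", e.policyRevision)
	}
}
```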
c37c61a ginkgo.Jenkinsfile: increase timeout by 30 minutes [ upstream commit f649e1b0b932604b5a5c8992c2734a9c3170f510 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Manali Bhutiyani <manali@covalent.io> 05 July 2018, 21:26:48 UTC
a213b30 test: set default CRI socket [ upstream commit 0c4e2a7b9e9417accea6e9955778e0a0b1c7a860 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Manali Bhutiyani <manali@covalent.io> 05 July 2018, 21:26:48 UTC
07bba8b Check for nil before accessing Status [ upstream commit 85dc7669c2c13bd23de400ec3fd74834624bc397 ] Fixes: #4256 Signed-off-by: Mark deVilliers <markdevilliers@gmail.com> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 21 June 2018, 23:25:36 UTC
e9925cc allocator: Avoid scanning sequential list when allocating [ upstream commit 1daaf2e590011174c0a7c59f5f4fb1d1f745c2bd ] Testing with a kvstore with 25K identities pre-loaded: Before: allocations time-span 1.057793212s success 36.869210/s error 0.000000/s After: allocations time-span 1.00054621s success 1009.448629/s error 0.000000/s Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 21 June 2018, 23:25:36 UTC
5ca7043 identity: Fix allocator init with more than pre-existing 1024 keys [ upstream commit b86a178de2d87f4c535704afb48294b4e2f5870f ] The current allocator init code issues a ListAndWatch operation against the kvstore to populate the local cache of identities. It also emits an event on a channel for each cache entry to indicate the event of a new identity. So far, this channel was created by the allocator and returned. The buffer size was 1024. Since the channel was never read before the allocator was returned to the caller, the buffer size acted as an upper bound on the number of identities that can exist in the kvstore while the allocator initializes. Fix this by making the caller responsible for creating the events channel and have the caller pass it in with a WithEvents() function. Add a section in the function documentation to mention the requirement to read the event channel while initialization is happening. Fixes: #4557 Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 21 June 2018, 23:25:36 UTC
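The failure mode is easy to reproduce in miniature: if the allocator owns a fixed-size buffered channel and nobody drains it during initialization, a kvstore with more identities than the buffer blocks init forever. WithEvents is the function named in the commit; the surrounding types below are simplified stand-ins:
```
package main

import (
	"fmt"
	"sync"
)

type event struct{ id int }

// newAllocator is a simplified stand-in: it replays every pre-existing key
// as an event. If the caller did not create and drain the channel, a buffer
// of 1024 would block here once the kvstore holds more identities than that.
func newAllocator(events chan<- event, preExisting int, done *sync.WaitGroup) {
	defer done.Done()
	for i := 0; i < preExisting; i++ {
		events <- event{id: i}
	}
	close(events)
}

func main() {
	// The caller owns the channel (the WithEvents pattern) and starts reading
	// while the allocator initializes, so any number of keys works.
	events := make(chan event, 1024)
	var wg sync.WaitGroup
	wg.Add(1)
	go newAllocator(events, 25000, &wg) // far more than 1024 pre-existing identities

	count := 0
	for range events {
		count++
	}
	wg.Wait()
	fmt.Println("processed", count, "identity events during init")
}
```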
1f5f2a4 identity: Process identity events in batches [ upstream commit 774be4dc689ac4b181f809e73bc8cf14b810679c ] The existing watcher in charge of detecting identity changes in the cluster was implementing naive behavior. It was simply waiting while reading on a go channel to process create and modify events for new or changed identities. It would then call the function to trigger recalculation of required policies. The function would block until the operation completed. It would then continue reading and process whatever events had accumulated in the meantime. This is suboptimal for various reasons: * Events can pile up quicker than we can process as policy recalculation is an expensive operation. This would eventually cause the allocator code to slow down causing lag in handling policy update triggers on remote nodes. * Causing unnecessary load as a policy trigger resolves the necessary work for all changed or new identities in the cache since the function was last called. Optimize this by making the channel handler lightweight by using the new trigger package. The update continues to be blocking as there is no point in running parallel policy updates. Policy updates are triggered at most once per second as this is an expensive operation and batching helps reduce overall load which will reduce build times and thus latency. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 21 June 2018, 23:25:36 UTC
6322ef1 trigger: New trigger package [ upstream commit 0cc68a67c07ae78d184b5129b7916c391e356177 ] Simple logic to implement trigger based invocation of background routines that need to be serialized and should be subject to a minimum interval between invocations. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 21 June 2018, 23:25:36 UTC
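The two commits above describe the core of the trigger pattern: invocations are serialized, triggers that arrive while the function runs fold into a single follow-up run, and runs are spaced by a minimum interval. A minimal sketch of that idea, not the real pkg/trigger API:
```
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// trigger serializes invocations of fn and enforces a minimum interval
// between them; multiple Trigger calls while fn runs fold into one extra run.
type trigger struct {
	mu          sync.Mutex
	pending     bool
	running     bool
	minInterval time.Duration
	lastRun     time.Time
	fn          func()
}

func (t *trigger) Trigger() {
	t.mu.Lock()
	if t.running {
		t.pending = true // fold into the next run
		t.mu.Unlock()
		return
	}
	t.running = true
	t.mu.Unlock()

	go func() {
		for {
			if wait := t.minInterval - time.Since(t.lastRun); wait > 0 {
				time.Sleep(wait) // rate-limit expensive invocations
			}
			t.fn()
			t.lastRun = time.Now()

			t.mu.Lock()
			if !t.pending {
				t.running = false
				t.mu.Unlock()
				return
			}
			t.pending = false
			t.mu.Unlock()
		}
	}()
}

func main() {
	var runs int64
	t := &trigger{
		minInterval: 50 * time.Millisecond,
		fn:          func() { atomic.AddInt64(&runs, 1) },
	}
	for i := 0; i < 10; i++ {
		t.Trigger() // ten triggers ...
	}
	time.Sleep(200 * time.Millisecond)
	fmt.Println("runs:", atomic.LoadInt64(&runs)) // ... collapse into a couple of runs
}
```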
c91e870 allocator: benchmark: Reserve ID space for reserved identities [ upstream commit 0961d92a0e7b59326f0db756b36c34ce349e1a18 ] Depending on the c.N automatically chosen by the test framework, the MaxID could drop below the MinID, which caused the test to fail sometimes. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 21 June 2018, 23:25:36 UTC
5bd5679 test: Use latest stable etcd and consul images [ upstream commit ab999f1262784a625c2c139bbb0edf78a258a6fe ] * etcd 3.2.17 * consul 1.1.0 Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 21 June 2018, 23:25:36 UTC
9d5366a Contrib: Backport script to use different versions [ upstream commit acd4e88e46438545eba842a87485e2d9d3c675ca ] - Add a new argument on check-stable to be able to set the target branch that we need to backport. Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 21 June 2018, 23:25:36 UTC
0cf54ac docs: use Documentation context to avoid longer image builds [ upstream commit 71935664fefdbac3b2e9dad3ebb20bc242ce11bf ] Instead of using the project root as the docker context to build the documentation image, copy the api directory into `_api` and use the Documentation directory as docker context for faster builds. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
f1d5ee1 examples/kubernetes: use POSIX regex for CILIUM_VERSION checker [ upstream commit d313ee14eaa254875b104f3dad5819dbae707463 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
cc348c9 kubernetes: Add missing parenthesis to only fail on invalid version [ upstream commit aa8518fbb63a441b4500f230a3a28a87786daf50 ] Previous logic always failed when generating YAML files. Fix logic to only fail if version check fails. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
c0cf1ab kubernetes: Fix generation of DaemonSet files to include v image tag prefix [ upstream commit 34efbece8f8f47b91f9efb72e08e7c336fe9e2f0 ] Fixes: #4480 Reported-by: Christophe VILA (@cvila84) Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
a018199 Dockerfile: update cilium-runtime with 2018-06-04 [ upstream commit ab39002f4743a48a7ff29e543b392afa9b421200 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
b295673 docker/Dockerfile: add gpg [ upstream commit de65c8592248e0b1fbe7c7b7d66a1916b567539b ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
fe8ac66 docker/Dockerfile: update loopback cni to 0.6.0 [ upstream commit cba23305331d27ac1e54e70eb5c00a2fac05d632 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
f7b616f docker/Dockerfile: update iproute2 to 4.16 [ upstream commit bf12d6ae1db89072afcde2629879fc27fd0c0dc8 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
269a63a docker/Dockerfile: update base image to ubuntu 18.04 [ upstream commit 5eaf7dcc9178596479538ece6abd137750a8b881 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
fa5cd54 docker/Dockerfile: update golang to 1.10.2 [ upstream commit 15c6a375df47a76ebf71b6f4885f164365d17cf8 ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 23:47:47 UTC
8a2b403 bpffs: Fix panic when root directory does not exist [ upstream commit bbc859cd7d707c11937b903799fbea80d26eb96f ] The code used the result of os.Stat() even when it failed. Also check the error returned by Mkdir() in case the root directory had to be created. Fixes: #4466 Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 19 June 2018, 20:05:56 UTC
94d8228 maps/tunnel: Use DefaultLogger [ upstream commit 170beac39fac8d596fb20471c7f234b6edfb7c11 ] log was not using the default logger. log messages were lost. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 15 June 2018, 21:47:54 UTC
4d6bae2 tunnel: Add debug messages on tunnel map manipulation [ upstream commit 32b25d5bf57d6d0c9a609c4890b2249f8bd34c83 ] Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 15 June 2018, 21:47:54 UTC
75e2321 tunnel: Make BPF tunnel map updates atomic [ upstream commit 3edb2d67cb6dbb014e17f902a01fd28006477924 ] The existing code deleted the old tunnel map entry before updating it. This lead to a short timeframe in which no tunnel mapping was present in the datapath. Fix this by updating before deleting and only attempt deletion if the old and new entry is different. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 15 June 2018, 21:47:54 UTC
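The essence of the fix above is the ordering: write the new entry first, and only delete the old one afterwards, and only when it actually differs, so lookups never hit a window with no mapping. A toy stand-in for the BPF tunnel map (not the real map code):
```
package main

import "fmt"

// tunnelMap is a stand-in for the BPF tunnel map; replace() shows the
// update-before-delete ordering from the commit above.
type tunnelMap map[string]string

func (m tunnelMap) replace(oldKey, newKey, value string) {
	m[newKey] = value // update first: lookups keep succeeding
	if oldKey != "" && oldKey != newKey {
		delete(m, oldKey) // delete the old entry only if it differs
	}
}

func main() {
	m := tunnelMap{"10.0.1.0/24": "192.168.1.10"}
	// the node IP for the prefix changed; the key stays the same, so nothing
	// is deleted and the mapping is never absent from the map
	m.replace("10.0.1.0/24", "10.0.1.0/24", "192.168.1.11")
	fmt.Println(m)
}
```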
903602a allocator: Increase allocator list timeout to 2 minutes [ upstream commit 05fbabc53481d92693d4aac249762776a95b9c02 ] It has been observed that if etcd is under heavy load, the time to list all identity keys can surpass 60 seconds. Reported-by: Amey Bhide <amey@covalent.io> Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> 15 June 2018, 21:47:54 UTC
2019325 Backport: Fix Jenkinsfile artifacts The Ginkgo test in the v1.0 branch generates `tar` files instead of zip files. New Jenkinsfiles were backported where a zip format is in use; this PR fixes the archive artifacts step to gather the correct logs on the v1.0 branch. Signed-off-by: Eloy Coto <eloy.coto@gmail.com> 14 June 2018, 04:48:18 UTC
595b0bd conntrack: Increase conntrack interval to 1 minute [ upstream commit d667626bf37970435809b9828c22ba3fb75ed561 ] The existing conntrack garbage collector interval is 10 seconds. In environments that are operating at the upper limit of the conntrack table (1M entries), this can put considerable stress on the CPU. Increase the interval to 1 minute to reduce the stress and find a better balance between resource consumption and aggressive cleanup. Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 23:18:20 UTC
18a0678 daemon: return error if createEndpoint fails [ upstream commit af4d78fd80348045774f0692febf34f9ea4dccc3 ] Fixes: b7e38028a5fc ("daemon: Refactor endpoint API handling") Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 23:18:20 UTC
19b5bb9 Test: Demo tests waiting for policies to be applied. [ upstream commit 18f028c953c6451dd845c7bc8723c905507f4f9f ] Applied policies in demos to make sure that the endpoints are updated correctly. Fix build https://jenkins.cilium.io/job/Ginkgo-CI-Tests-Pipeline/2798 Signed-off-by: Eloy Coto <eloy.coto@gmail.com> Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 23:18:20 UTC
0fb7749 pkg/bpf: Use the other directory when /sys/fs/bpf is not BPFFS [ upstream commit 812eee2c8035747ee3292600ade6fb5a3cf40285 ] When Cilium is running inside a container, it has the /sys/fs/bpf directory mounted from the host. However, if BPFFS is not mounted on the host, then containerized Cilium should be aware of that and try to mount BPFFS in the other directory - /run/cilium/bpffs. Fixes #3717 Signed-off-by: Michal Rostecki <mrostecki@suse.com> Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 23:18:20 UTC
878bc2c pkg/envoy: Don't hardcode BPFFS mount path [ upstream commit 80afa5b162891a095bca6e7c7a7c01124c9f6b03 ] Signed-off-by: Michal Rostecki <mrostecki@suse.com> Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 23:18:20 UTC
1315244 bpf: Allow to define BPF map root via env variable [ upstream commit f116806421def816bec9b38554cc81d0d958fe81 ] Add the CILIUM_BPF_MNT environment variable to the cilium-map-migrate program and the init.sh script to allow defining a non-default directory (other than /sys/fs/bpf) where BPFFS is mounted. Signed-off-by: Michal Rostecki <mrostecki@suse.com> Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 23:18:20 UTC
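Taken together with the fallback commit above, the mount-root selection can be sketched as: an explicit CILIUM_BPF_MNT wins, otherwise use /sys/fs/bpf if it really is a BPF filesystem, otherwise fall back to /run/cilium/bpffs. The `isBPFFS` callback below is a placeholder for the real mountinfo check, not Cilium's actual function:
```
package main

import (
	"fmt"
	"os"
)

// bpfMapRoot decides where pinned BPF maps live, following the logic sketched
// in the commits above. isBPFFS stands in for the real mountinfo-based check.
func bpfMapRoot(isBPFFS func(path string) bool) string {
	if root := os.Getenv("CILIUM_BPF_MNT"); root != "" {
		return root // explicit override wins
	}
	if isBPFFS("/sys/fs/bpf") {
		return "/sys/fs/bpf" // default host mount point
	}
	return "/run/cilium/bpffs" // containerized fallback
}

func main() {
	// pretend the host did not mount BPFFS at the default location
	notMounted := func(string) bool { return false }
	fmt.Println(bpfMapRoot(notMounted)) // /run/cilium/bpffs
}
```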
b197a9d pkg/mountinfo: Add utility for getting mountinfo [ upstream commit c635b9d21b8857d9d3eaa2e8052c97cf7bbaecae ] This new package provides a struct representing each line from /proc/pid/mountinfo and a function which returns a slice with parsed information from that file. Information from /proc/pid/mountinfo will be used mainly for checking a mountpoint's root and filesystem. Signed-off-by: Michal Rostecki <mrostecki@suse.com> Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 23:18:20 UTC
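The mountinfo format (see proc(5)) is simple enough to parse by hand; a minimal sketch of the kind of parser described above, simplified to extract only the root, mount point and filesystem type (not the real pkg/mountinfo code):
```
package main

import (
	"fmt"
	"strings"
)

// mountInfo holds the fields the commit above cares about.
type mountInfo struct {
	Root       string
	MountPoint string
	FSType     string
}

// parseMountInfoLine parses one /proc/self/mountinfo line of the form:
//   ID parentID major:minor root mountpoint options [optional...] - fstype source superopts
// Optional fields are skipped by splitting on the " - " separator.
func parseMountInfoLine(line string) (mountInfo, error) {
	parts := strings.SplitN(line, " - ", 2)
	if len(parts) != 2 {
		return mountInfo{}, fmt.Errorf("missing separator in %q", line)
	}
	left, right := strings.Fields(parts[0]), strings.Fields(parts[1])
	if len(left) < 6 || len(right) < 1 {
		return mountInfo{}, fmt.Errorf("short mountinfo line %q", line)
	}
	return mountInfo{Root: left[3], MountPoint: left[4], FSType: right[0]}, nil
}

func main() {
	line := "38 25 0:32 / /sys/fs/bpf rw,nosuid,nodev,noexec shared:14 - bpf none rw"
	mi, err := parseMountInfoLine(line)
	if err != nil {
		panic(err)
	}
	// e.g. used to check whether /sys/fs/bpf is really a BPF filesystem
	fmt.Printf("%s mounted at %s (root %s)\n", mi.FSType, mi.MountPoint, mi.Root)
}
```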
00d3a68 allocator: Use DefaultLogger [ upstream commit 69ec3e28768822838439195488b18e782650e177 ] log was not using DefaultLogger. logs were lost Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 23:18:20 UTC
0bb5e50 examples/kubernetes: set correct image tag in kubernetes examples Signed-off-by: André Martins <andre@cilium.io> 11 June 2018, 13:40:42 UTC
228c4e5 Prepare for 1.0.4 release Signed-off-by: Thomas Graf <thomas@cilium.io> 08 June 2018, 22:34:01 UTC
1a1a6a5 GH-4339 Add k8s label source in GetPolicyLabels [ upstream commit 41a03c1dba41ab20bed154df71a47cd6c0b1daf3 ] Signed-off-by: ashwinp <ashwin@covalent.io> Signed-off-by: John Fastabend <john.fastabend@gmail.com> 08 June 2018, 22:17:22 UTC
3e034f0 identity: Resolve unknown identity to label reserved:unknown [ upstream commit a1629015283a12f7c27359ee3c7ad4677fd2c543 ] ``` $ cilium identity get 0 ID LABELS 0 reserved:unknown ``` Fixes: #4296 Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: John Fastabend <john.fastabend@gmail.com> 08 June 2018, 22:17:22 UTC
23a17e7 identity: Ignore nil identity when generating IdentityCache [ upstream commit b4740f994237b122a1d809a50839fbe0cfb1a115 ] Fixes: #4213 Signed-off-by: Thomas Graf <thomas@cilium.io> Signed-off-by: John Fastabend <john.fastabend@gmail.com> 08 June 2018, 22:17:22 UTC
ba851a0 Add "docker info" output to bugtool [ upstream commit 3a8c7999f0b0436102b7a6534c1fc8707a30b09f ] Fixes: #3990 Signed-Off-By: Steven Ceuppens <steven.ceuppens@icloud.com> Signed-off-by: John Fastabend <john.fastabend@gmail.com> 08 June 2018, 22:17:22 UTC
8556733 init.sh: Use 'ip route replace' instead of 'ip route add' [ upstream commit bfd55b2c45611edac2903a21021a343f1f327734 ] Since we no longer delete the old cilium_host/cilium_net veth pair if they already exist, 'ip route add' will complain of existing routes. Fix this by using 'ip route replace' instead. Signed-off-by: Jarno Rajahalme <jarno@covalent.io> Signed-off-by: John Fastabend <john.fastabend@gmail.com> 08 June 2018, 22:17:22 UTC
07b9c46 daemon: trigger policy updates upon daemon configuration update [ upstream commit 659d5acfa8c54f1257d3d57ab8d5732109ee1cca ] Daemon configuration directly affects endpoint programs; trigger policy updates accordingly. Signed-off by: Ian Vernon <ian@cilium.io> Signed-off-by: John Fastabend <john.fastabend@gmail.com> 08 June 2018, 22:17:22 UTC
371ca55 pkg/endpointmanager: always regenerate if policy forcibly computed [ upstream commit 3b9b744a5fc0e4f8b7d95d28017665c81b4d6c3f ] Regardless of whether there is a change in the computed policy for an endpoint, still try to regenerate it. Previously, only a difference in the endpoint's policy and configuration, not the agent's configuration, resulted in an endpoint regeneration, which was incorrect. This is because if there is a change in the configuration of the cilium-agent, a regeneration of endpoints may still be required, because the endpoint's program is compiled not only with its headerfile (lxc_config.h), but with the agent's headerfile (node_config.h) as well. Thus, checking only the result of `pkg/endpoint/policy.go:regeneratePolicy` is not sufficient for determining whether an endpoint's program should be rebuilt. Signed-off-by: Ian Vernon <ian@cilium.io> Signed-off-by: John Fastabend <john.fastabend@gmail.com> 08 June 2018, 22:17:22 UTC