Dev. env. container-per-worker implementation for Kubernetes #11419
Conversation
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --vm-driver=none
sleep 60
envsubst < deployment/kubernetes/build-pod.yaml.template > deployment/kubernetes/build-pod.yaml
cihangir
Sep 5, 2017
Contributor
I missed this on the previous PR. Could you use something other than envsubst, or eliminate the need for it? macOS does not come with it.
szkl
Sep 5, 2017
Member
I think we can make it a required dependency. It is part of gettext, actually, so it might not come preinstalled on GNU/Linux distros either. What do you think?
brew install gettext
brew link --force gettext
It is located in $BREW_PREFIX/opt/gettext/bin/envsubst.
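If envsubst becomes a required dependency, the scripts could locate it defensively before use. A minimal sketch, assuming BREW_PREFIX holds the Homebrew prefix; the helper name find_envsubst is hypothetical, not part of this PR:

```shell
#!/bin/bash
# Hypothetical helper: resolve envsubst from PATH first, then fall back to
# the Homebrew gettext keg (gettext is keg-only and not linked by default).
find_envsubst() {
  if command -v envsubst > /dev/null 2>&1; then
    command -v envsubst
  elif [ -x "${BREW_PREFIX:-/usr/local}/opt/gettext/bin/envsubst" ]; then
    echo "${BREW_PREFIX:-/usr/local}/opt/gettext/bin/envsubst"
  else
    echo "error: envsubst not found; install gettext" >&2
    return 1
  fi
}
```

The build script could then call `"$(find_envsubst)" < template > output` and fail early with a clear message when gettext is missing.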
cihangir
Sep 5, 2017
Contributor
We might not even need this if we start generating the templates from the related JSON objects.
KONFIG.supervisorConf = (require '../deployment/supervisord.coffee').create KONFIG
KONFIG.kubernetesConf = (require '../deployment/kubernetes.coffee').create KONFIG
for name, options of KONFIG.workers when options.kubernetes?.ports?
  fs.writeFileSync "./deployment/kubernetes/backend-pod/#{name}-svc.yaml", (require '../deployment/kubernetes.coffee').createWorkerServices name, options
cihangir
Sep 5, 2017
Contributor
Could you split this across multiple lines: first build the data to be written, then write it?
We need to work on the configuration.
kubernetes :
  image   : 'nginx'
  command : " [ \"nginx\", \"-c\", \"#{KONFIG.projectRoot}/nginx.conf\" ] "
  mounts  : [ 'koding-working-tree', 'assets' ]
cihangir
Sep 5, 2017
Contributor
Could you define these mounts in KONFIG.workers as separate properties? We would have greater visibility into what we can mount in k8s.
cihangir
Sep 5, 2017
Contributor
While generating the mounts for the Kubernetes templates, check whether each mount is pre-defined in KONFIG, and fail if it is not.
ports :
  containerPort : "#{KONFIG.kontrol.port}"
  hostPort      : "#{KONFIG.kontrol.port}"
envVariables : [
cihangir
Sep 5, 2017
Contributor
You don't need to define every env var as a key-value pair. Just list the required config params and read them from KONFIG.
cihangir
Sep 5, 2017
Contributor
Also, while reading from KONFIG, check that the required env var exists in KONFIG, and fail if it does not.
@@ -116,13 +174,34 @@ module.exports = (KONFIG, options, credentials) ->
  ]
  healthCheckURLs : [ "http://localhost:#{KONFIG.kloud.port}/healthCheck" ]
  versionURL      : "http://localhost:#{KONFIG.kloud.port}/version"
  kubernetes :
    image   : 'koding/base'
    command : " [ \"./run\", \"exec\", \"#{GOBIN_K8S}/kloud\" ] "
    mounts  : [ 'koding-working-tree', 'root-kite-volume', 'generated-volume' ]
    ports :
      containerPort : "#{KONFIG.kloud.port}"
      hostPort      : "#{KONFIG.kloud.port}"
cihangir
Sep 5, 2017
Contributor
We can eliminate the host port in the next iteration; we won't need it once we have service descriptions in place.
@@ -116,13 +174,34 @@ module.exports = (KONFIG, options, credentials) ->
  ]
  healthCheckURLs : [ "http://localhost:#{KONFIG.kloud.port}/healthCheck" ]
  versionURL      : "http://localhost:#{KONFIG.kloud.port}/version"
  kubernetes :
    image : 'koding/base'
cihangir
Sep 5, 2017
Contributor
We should not need the command section here. This worker definition already has it.
mounts : [ 'koding-working-tree', 'root-kite-volume' ]
ports :
  containerPort : '4560'
  hostPort      : '4560'
containerSection =
  """
  \n - name: #{container.name}
cihangir
Sep 5, 2017
Contributor
You don't need to work with strings. First generate an object, then use a YAML generator to convert it to YAML.
@@ -15,6 +15,10 @@ createUpstreams = (KONFIG) ->
  servers += '\n' if servers isnt ''
  port = parseInt(port, 10)

  if name is 'webserver' and options.kubernetes.ports?
cihangir
Sep 5, 2017
Contributor
This is not the place where we should handle port swapping. Please pass a new flag to configure and handle this in the first step.
@@ -41,6 +41,13 @@ function run() {
  esac
}

function k8s_build() {
There are many places repeating the same information.
deployment/kubernetes/backend-pod/
deployment/kubernetes/frontend-pod/client-containers.yaml
deployment/kubernetes/build-pod.yaml
@@ -255,7 +255,7 @@ module.exports = (options, credentials) ->
  # TODO: average request count per hour for a user should be measured and a reasonable limit should be set
  nodejsRateLimiter       : { enabled: no, guestRules: [{ interval: 3600, limit: 5000 }], userRules: [{ interval: 3600, limit: 10000 }] } # limit: request limit per rate limit window, interval: rate limit window duration in seconds
  nodejsRateLimiterForApi : { enabled: yes, guestRules: [{ interval: 60, limit: 5 }], userRules: [{ interval: 60, limit: 60 }] } # limit: request limit per rate limit window, interval: rate limit window duration in seconds
- webserver : { port: 8080 }
+ webserver : { port: 8080, k8sPort: 8040 }
@@ -105,7 +105,12 @@ Configuration = (options = {}) ->
  sh: (require './generateShellEnv').create KONFIG, options
  json: JSON.stringify KONFIG, null, 2

  if not fs.existsSync './deployment/kubernetes/backend-pod' then fs.mkdirSync './deployment/kubernetes/backend-pod'
@@ -5,14 +5,55 @@ module.exports = (KONFIG, options, credentials) ->
  GOBIN  = '%(ENV_KONFIG_PROJECTROOT)s/go/bin'
  GOPATH = '%(ENV_KONFIG_PROJECTROOT)s/go'

  GOBIN_K8S  = "#{KONFIG.projectRoot}/go/bin"
  GOPATH_K8S = "#{KONFIG.projectRoot}/go"
  stopsignal : 'QUIT'
  kubernetes :
    image   : 'nginx'
    command : " [ \"nginx\", \"-c\", \"#{KONFIG.projectRoot}/nginx.conf\" ] "
@@ -15,6 +15,10 @@ createUpstreams = (KONFIG) ->
  servers += '\n' if servers isnt ''
  port = parseInt(port, 10)

  if name is 'webserver' and options.kubernetes.ports?
    webserverPort = parseInt(options.kubernetes.ports.containerPort, 10)
    servers += "\tserver 127.0.0.1:#{webserverPort + index} max_fails=3 fail_timeout=10s;\n"
# more info on guest user update: https://www.rabbitmq.com/blog/2014/04/02/breaking-things-with-rabbitmq-3-3/
k8s_health_check $RABBITMQ_POD_NAME koding 5 120 rabbitmq
sleep 10
kubectl exec -n koding $RABBITMQ_POD_NAME -c rabbitmq -- bash -c "rabbitmqctl add_user test test && rabbitmqctl set_user_tags test administrator && rabbitmqctl set_permissions -p / test '.*' '.*' '.*'"
kubectl delete -f $1
}

if [ "$1" == "k8s_connectivity_check" ]; then
szkl
Sep 5, 2017
Member
You don't have to add a conditional clause for each command. See if you can just find out if there is a function defined with the same name as "$1".
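The suggestion above can be sketched roughly as follows; the k8s_connectivity_check body here is a stand-in, not the script's real implementation:

```shell
#!/bin/bash
# Placeholder for one of the script's real commands.
k8s_connectivity_check() {
  echo "checking connectivity"
}

# Dispatch "$1" to the shell function of the same name instead of adding
# one conditional clause per command.
main() {
  if [ "$(type -t "$1")" = "function" ]; then
    "$@"
  else
    echo "unknown command: $1" >&2
    return 1
  fi
}
```

The script would then end with `main "$@"`, so any newly defined function becomes callable without touching the dispatch logic.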
@@ -116,13 +174,34 @@ module.exports = (KONFIG, options, credentials) ->
  ]
  healthCheckURLs : [ "http://localhost:#{KONFIG.kloud.port}/healthCheck" ]
  versionURL      : "http://localhost:#{KONFIG.kloud.port}/version"
  kubernetes :
    image : 'koding/base'
Force-pushed 6d836a1 to 14f9f1a, then 14f9f1a to d14d53a.
Nothing blocking apart from
export NAMESPACE_DIR="${KONFIG_PROJECTROOT}/deployment/kubernetes/namespace.yaml"
export BACKEND_DIR="${KONFIG_PROJECTROOT}/deployment/kubernetes/backend-pod/containers.yaml"
export EXT_SERVICES_DIR="${KONFIG_PROJECTROOT}/deployment/kubernetes/external-services/ -R"
@@ -1,15 +1,22 @@
export KONFIG_PROJECTROOT=/opt/koding
cihangir
Sep 13, 2017
Contributor
We may need to get rid of this special config file sooner than it might seem ;)
envsubst < deployment/kubernetes/frontend-pod/client-containers.yaml.template > deployment/kubernetes/frontend-pod/client-containers.yaml
- cp $BACKEND_DIR ${KONFIG_PROJECTROOT}/deployment/generated_files
+ cp $BACKEND_DIR ${KONFIG_PROJECTROOT}/deployment/generated_files -r
cihangir
Sep 13, 2017
Contributor
If this -r is meant for recursive, please move it next to the command itself, in its capitalized form (-R).
kubectl exec -n $2 $1 -c $3 -- bash -c "./run is_ready" || exit 1
}
${KONFIG_PROJECTROOT}/scripts/k8s-utilities.sh create_k8s_resource ${KONFIG_PROJECTROOT}/deployment/kubernetes/external-services/mongo
export MONGO_POD_NAME=$(kubectl get pods --namespace koding -l "app=mongo-ext-service" -o jsonpath="{.items[0].metadata.name}")
cihangir
Sep 13, 2017
Contributor
You can create a new variable for the general options that get passed to each command, e.g. export KUBECTL_OPTS="--namespace koding", in your next iterations.
mertaytore
Sep 13, 2017
Author
Member
Noted and will make the appropriate changes on the next iteration.
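The shared-options idea might look like the sketch below; pod_name_by_label is a hypothetical wrapper for illustration, not part of the PR:

```shell
#!/bin/bash
# Shared flags passed to every kubectl invocation.
export KUBECTL_OPTS="--namespace koding"

# Hypothetical wrapper: look up the first pod matching a label selector.
# KUBECTL_OPTS is intentionally unquoted so it word-splits into flags.
pod_name_by_label() {
  kubectl get pods $KUBECTL_OPTS -l "$1" -o jsonpath="{.items[0].metadata.name}"
}
```

A call site would then read `export MONGO_POD_NAME=$(pod_name_by_label "app=mongo-ext-service")`, keeping the namespace in one place.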
remaining_time=600

until eval "check_pod_response $1 $2 $3"; do
  sleep 30
cihangir
Sep 13, 2017
Contributor
30 is a rather big number for an interval. 3-5 would be more suitable.
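With a shorter interval and an explicit time budget, the polling loop could be sketched like this; wait_for_pod and the POLL_* variables are hypothetical names, while check_pod_response is the script's own function:

```shell
#!/bin/bash
# Hypothetical polling helper: retry check_pod_response every few seconds
# until it succeeds or the time budget runs out.
wait_for_pod() {
  local interval="${POLL_INTERVAL:-5}"   # 3-5s instead of 30s
  local remaining="${POLL_BUDGET:-600}"
  until check_pod_response "$1" "$2" "$3"; do
    sleep "$interval"
    remaining=$(( remaining - interval ))
    if [ "$remaining" -le 0 ]; then
      echo "timed out waiting for pod $1" >&2
      return 1
    fi
  done
}
```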
${KONFIG_PROJECTROOT}/scripts/k8s-utilities.sh create_k8s_resource ${KONFIG_PROJECTROOT}/deployment/kubernetes/external-services/countly
export COUNTLY_POD_NAME=$(kubectl get pods --namespace koding -l "app=countly-ext-service" -o jsonpath="{.items[0].metadata.name}")
${KONFIG_PROJECTROOT}/scripts/k8s-utilities.sh check_pod_state $COUNTLY_POD_NAME Pending
sleep 40
cihangir
Sep 13, 2017
Contributor
Why do we have sleeps around the code? check_pod_state already ensures that the pod is in the desired state.
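On kubectl versions newer than those current for this PR (1.11+), the state polling plus sleep could likely be replaced by kubectl wait, which blocks until the pod reports Ready; this is a sketch under that assumption, and wait_until_ready is a hypothetical name:

```shell
#!/bin/bash
# Hypothetical replacement for check_pod_state + sleep: kubectl wait blocks
# until the pod reaches the Ready condition or the timeout expires.
wait_until_ready() {
  kubectl wait --namespace koding --for=condition=Ready "pod/$1" --timeout=120s
}
```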
KONFIG.kubernetesConf.spec.containers.push generateContainerSection name, workerOptions

KONFIG.kubernetesConf.spec.volumes = generateVolumesSection KONFIG, options
cihangir
Sep 13, 2017
Contributor
You should create the volumes spec from KONFIG itself, not from your prior knowledge. This is the same reason I commented earlier.
Anyway, please create an issue to refactor these volume creation steps and we can leave this as is for now.
mertaytore
Sep 13, 2017
Author
Member
I opened an issue for it and will change it on the next iteration.
@@ -0,0 +1,25 @@
apiVersion: v1
cihangir
Sep 13, 2017
Contributor
PS: replication controller is abbreviated "rc"; see your filename.
Force-pushed e949e6f to e01d3df, d3aa997 to 6e7c59e, then 6e7c59e to 1f6c7ff.
Looks like this is useful. Can we merge it?
@mertaytore you can send the fixes for this PR in your next PRs, e.g. removing the command section from the kubernetes config in workers.coffee.

Formed in 2009, the Archive Team (not to be confused with the archive.org Archive-It Team) is a rogue archivist collective dedicated to saving copies of rapidly dying or deleted websites for the sake of history and digital heritage. The group is 100% composed of volunteers and interested parties, and has expanded into a large amount of related projects for saving online and digital history.

All workers run in separate containers in the backend pod, landing & client builds are done in the frontend pod, and external services (rabbitmq, postgres, redis, countly & mongo) run on their respective replication controllers in the development environment.
Description
Details on some modifications:
generateKonfig.coffee: Kubernetes' API runs on 8080. Therefore, the webserver needs to run on another port, configured as 8040 for this implementation.
nginx.coffee: Added port 8040 to the webserver's upstream for nginx.conf generation. This addition provides the port for both build types, docker-compose and k8s.
bootstrap-container: Added build_k8s. The bootstrap-container build command was not removed because it can potentially be used in the GitLab integration.
kontrol.go: Kontrol is misled by the environment variables generated by the Kubernetes service resource exposing Kontrol. Kontrol's environment loader tries to load KONTROL_PORT=tcp://<clusterIP-of-kontrol>:3000 as Kontrol's port value, but it fails since the value cannot be parsed as an integer. This change did not result in any errors when Koding was built with docker-compose. The change regarding kontrol.go is open to other workarounds.
Motivation and Context
This implementation builds on top of #11338, with no supervisor execution.
How Has This Been Tested?
No warnings or errors occurred on coffeelint runs over the modified .coffee files.
Types of changes