kubernetes

here are some things you should not worry about

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

this command (kubectl get cs) is deprecated as of v1.19, and the Unhealthy entries for the scheduler and controller manager are usually a false alarm – newer releases no longer serve the insecure health ports (10251/10252) that this check probes. A better sanity check is to look at the control plane components themselves.
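a more direct way to check control plane health on recent versions is to hit the API server's health endpoints, or simply to verify that the kube-system pods are running, which is what the next command does:

kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez?verbose'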

kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-f9fd979d6-brpz6                      1/1     Running   1          5d21h
kube-system            coredns-f9fd979d6-v6kv8                      1/1     Running   1          5d21h
kube-system            etcd-k8smaster                               1/1     Running   1          5d21h
kube-system            kube-apiserver-k8smaster                     1/1     Running   2          5d21h
kube-system            kube-controller-manager-k8smaster            1/1     Running   2          5d21h
kube-system            kube-proxy-24wgg                             1/1     Running   0          4d8h
kube-system            kube-proxy-h8pv4                             1/1     Running   2          5d21h
kube-system            kube-proxy-jrhvk                             1/1     Running   0          4d8h
kube-system            kube-scheduler-k8smaster                     1/1     Running   1          5d21h
kube-system            weave-net-9gdhj                              2/2     Running   1          4d8h
kube-system            weave-net-9zdtb                              2/2     Running   0          4d8h
kube-system            weave-net-z2z7x                              2/2     Running   3          4d8h
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-d2677   1/1     Running   0          41m
kubernetes-dashboard   kubernetes-dashboard-74d688b6bc-7288s        1/1     Running   0          41m

kubernetes

kubernetes is a container orchestration tool, originally developed by google. it helps you manage applications that may have 100's or 1000's of containers, a need that arrived with microservices. managing that many containers with scripts becomes unwieldy, hence the need for orchestration. it helps automate high availability, scaling/performance, and disaster recovery.

Kubernetes' basic architecture has one master node and multiple worker nodes. Each node runs a kubelet, the process the cluster uses to talk to the node and run pods on it. Applications run on the worker nodes, and each worker node runs multiple containers. The master node runs the api server (the entry point for the ui, api and cli), the controller manager (which keeps track of what's happening in the cluster), the scheduler (which decides where pods get placed), and etcd (kubernetes' backing key-value store). Worker nodes are usually much bigger since they run all of the containers – think of the worker nodes as the muscles and the master node as the brain.

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers. A pod is an abstraction over the container, so the underlying container runtime can be swapped out. Usually you run one application per pod. Each pod gets an (internal) ip address and can communicate with other pods over ip. Since pods are ephemeral, the ip can change whenever a pod gets recreated, so it's best to put a service in front of it. A service gives you a stable (static) ip, and the lifecycles of the pod and the service are not connected. For the application to be accessible from outside the cluster you create an external service; databases are usually exposed only through internal services. A service is reached as an ipaddress:port combination; if you want to reach the application through a friendly hostname and routing rules instead, that's what ingress does.
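as a sketch (the names and ports here are hypothetical, not from these notes), an internal service that fronts a set of pods labeled app: my-app could look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app        # send traffic to pods carrying this label
  ports:
    - protocol: TCP
      port: 80         # the stable port clients use
      targetPort: 8080 # the container port inside the pods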

ConfigMap – holds the external configuration of the application, e.g. database url, ports etc. Secrets – used to store credentials, base64 encoded (encoded, not encrypted). Pods can consume ConfigMaps and Secrets as environment variables or mounted files.
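as a minimal sketch (names and values are made up for illustration), the two objects could look like this; the secret value has to be base64 encoded before it goes into the file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: mongodb-service   # plain-text config, e.g. the name of the db service
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  mongo-password: cGFzc3dvcmQ=    # "password" base64 encoded - encoded, not encrypted

a pod then references these through env entries (configMapKeyRef / secretKeyRef) or by mounting them as volumes.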

volumes – for databases you need data to be persisted. Data inside a pod goes away with the pod, so we use persistent volumes attached to the pod. The storage can be on the local machine or somewhere external to the kubernetes cluster (e.g. cloud storage).

service has 2 functions – a stable (static) ip and load balancing across the pod replicas

deployment – blueprint for pods

in practice we create blueprints and not pods.

deployment -> pods -> containers

A database cannot simply be replicated with a deployment, because you need to manage the state of the database. That mechanism is provided by stateful sets: Deployments for stateless applications, StatefulSets for stateful ones. Deploying stateful sets is not easy, so DBs are sometimes hosted outside of the K8s cluster.

minikube – a one-node cluster where the master processes and the worker processes run on the same machine. It runs inside a VM (e.g. VirtualBox or Hyper-V) or in Docker, and is used for testing purposes.

Kubectl – the command line tool for a K8s cluster. The api server is the main entry point into the cluster, and the cli is used to interact with it.

installing minikube on windows

Ensure a hypervisor can be run -> go to cmd and type in systeminfo. You should see a message that states this:

Hyper-V Requirements:      A hypervisor has been detected. Features required for Hyper-V will not be displayed.

Now we need to enable Hyper-V – open PowerShell as an administrator and run the command below

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All


Path          :
Online        : True
RestartNeeded : False

ensure docker desktop is installed. Install Chocolatey – download the install script, open it in PowerShell ISE, inspect it, and then run it. Then use choco to install minikube.

C:\Windows\system32>choco install minikube
Chocolatey v0.10.15
Installing the following packages:
minikube
By installing you accept licenses for the packages.
Progress: Downloading kubernetes-cli 1.19.1... 100%
Progress: Downloading Minikube 1.13.1... 100%

kubernetes-cli v1.19.1 [Approved]
kubernetes-cli package files install completed. Performing other installation steps.
The package kubernetes-cli wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): A

Extracting 64-bit C:\ProgramData\chocolatey\lib\kubernetes-cli\tools\kubernetes-client-windows-amd64.tar.gz to C:\ProgramData\chocolatey\lib\kubernetes-cli\tools...
C:\ProgramData\chocolatey\lib\kubernetes-cli\tools
Extracting 64-bit C:\ProgramData\chocolatey\lib\kubernetes-cli\tools\kubernetes-client-windows-amd64.tar to C:\ProgramData\chocolatey\lib\kubernetes-cli\tools...
C:\ProgramData\chocolatey\lib\kubernetes-cli\tools
 ShimGen has successfully created a shim for kubectl.exe
 The install of kubernetes-cli was successful.
  Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-cli\tools'

Minikube v1.13.1 [Approved]
minikube package files install completed. Performing other installation steps.
 ShimGen has successfully created a shim for minikube.exe
 The install of minikube was successful.
  Software install location not explicitly set, could be in package or
  default install location if installer.

Chocolatey installed 2/2 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

create a virtual switch – run this command in powershell

 New-VMSwitch -name minikube -NetAdapterName Ethernet -AllowManagementOS $true

Name     SwitchType NetAdapterInterfaceDescription
----     ---------- ------------------------------
minikube External   Realtek PCIe GbE Family Controller

start minikube – run this in powershell as an admin

minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube"

i was running into issues where it could not find hyperv, so i started docker desktop, typed in minikube start without any flags, and it defaulted to the docker driver (full output below)



PS C:\Windows\system32> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube"

minikube start 


Path          : 
Online        : True
RestartNeeded : False

* minikube v1.13.1 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the hyperv driver based on user configuration

minikube : * Exiting due to PROVIDER_HYPERV_NOT_FOUND: The 'hyperv' provider was not found: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe 
@(Get-Wmiobject Win32_ComputerSystem).HypervisorPresent returned ". : File C:\\Users\\vargh\\OneDrive\\Documents\\WindowsPowerShell\\profile.ps1 
cannot be loaded. The file \r\nC:\\Users\\vargh\\OneDrive\\Documents\\WindowsPowerShell\\profile.ps1 is not digitally signed. You cannot run this 
script on \r\nthe current system. For more information about running scripts and setting execution policy, see \r\nabout_Execution_Policies at 
https:/go.microsoft.com/fwlink/?LinkID=135170.\r\nAt line:1 char:3\r\n+ . 'C:\\Users\\vargh\\OneDrive\\Documents\\WindowsPowerShell\\profile.ps1'\r\n+ 
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n    + CategoryInfo          : SecurityError: (:) [], PSSecurityException\r\n    
+ FullyQualifiedErrorId : UnauthorizedAccess\r\nTrue\r\n"
At line:3 char:1
+ minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (* Exiting due t...ss\r\nTrue\r\n":String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
* Suggestion: Enable Hyper-V: Start PowerShell as Administrator, and run: 'Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All'
* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/

* minikube v1.13.1 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Automatically selected the docker driver
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner

minikube :     > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s    > kubeadm.sha256: 65 B / 65 B 
[--------------------------] 100.00% ? p/s 0s    > kubelet.sha256: 65 B / 6
kubelet: 99.56 MiB / 104.88 MiB [---------->] 94.93% 10.44 MiB p/s ETA 0s    > kubelet: 103.69 MiB / 104.88 MiB [--------->] 98.86% 10.44 MiB p/s ETA 
0s    > kubelet: 104.88 MiB / 104.88 MiB [------------] 100.00% 11.34 MiB p/s 10s! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 
1.16.6-beta.0, which may have incompatibilites with Kubernetes 1.19.2.
At line:5 char:1
+ minikube start
+ ~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (    > kubectl.s...ernetes 1.19.2.:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
 
* Want kubectl v1.19.2? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" by default



PS C:\Windows\system32> 

test using this command – kubectl get pods

kubectl get pods
No resources found in default namespace.

kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   6m14s   v1.19.2

minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:32:58Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}


from this point everything will be done using kubectl. We typically create a deployment, which then creates the pods.

kubectl create deployment nginx-depl --image=nginx
deployment.apps/nginx-depl created

and then to get status 

kubectl get deployment
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-depl   1/1     1            1           51s



At this point we have created a deployment based on the nginx image, which has in turn created a pod. We can get the pod with the command below

kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
nginx-depl-5c8bf76b5b-xq7dj   1/1     Running   0          3m12s

so the pod name has the deployment name as a prefix, followed by the replicaset hash and a random id, and the status is Running, so at this point the container is up. We can get the logs of the underlying pod by specifying it in the command as shown below

kubectl logs nginx-depl-5c8bf76b5b-xq7dj
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up


Now let's spin up a mongodb pod

kubectl create deployment mongo-depl --image=mongo
deployment.apps/mongo-depl created

kubectl get pod
NAME                          READY   STATUS              RESTARTS   AGE
mongo-depl-5fd6b7d4b4-j9pf5   0/1     ContainerCreating   0          8s
nginx-depl-5c8bf76b5b-xq7dj   1/1     Running             0          9m34s

kubectl logs mongo-depl-5fd6b7d4b4-j9pf5
{"t":{"$date":"2020-10-15T19:24:24.053+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-10-15T19:24:24.055+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} ..... ( remaining content deleted )

we can use the describe command to find more info about the pod; the syntax is as follows

kubectl describe pod mongo-depl-5fd6b7d4b4-j9pf5
Name:         mongo-depl-5fd6b7d4b4-j9pf5
Namespace:    default
Priority:     0
Node:         minikube/172.17.0.2
Start Time:   Thu, 15 Oct 2020 15:24:07 -0400
Labels:       app=mongo-depl
              pod-template-hash=5fd6b7d4b4
Annotations:  <none>
Status:       Running
IP:           172.18.0.4
IPs:
  IP:           172.18.0.4
Controlled By:  ReplicaSet/mongo-depl-5fd6b7d4b4
Containers:
  mongo:
    Container ID:   docker://de6c695be4efa2f543cff1d5884f14c497aee9cd0b3a2f04defcd4d4c56d7458
    Image:          mongo
    Image ID:       docker-pullable://mongo@sha256:efc408845bc917d0b7fd97a8590e9c8d3c314f58cee651bd3030c9cf2ce9032d
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 15 Oct 2020 15:24:24 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-85bf2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-85bf2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-85bf2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m10s  default-scheduler  Successfully assigned default/mongo-depl-5fd6b7d4b4-j9pf5 to minikube
  Normal  Pulling    4m9s   kubelet, minikube  Pulling image "mongo"
  Normal  Pulled     3m54s  kubelet, minikube  Successfully pulled image "mongo" in 14.932513519s
  Normal  Created    3m54s  kubelet, minikube  Created container mongo
  Normal  Started    3m53s  kubelet, minikube  Started container mongo


notice the Events section – it shows the steps: the kubelet pulled the image, created the container and started the container.

now we will look at logging into the pod and executing commands

kubectl  exec -it mongo-depl-5fd6b7d4b4-j9pf5 -- bin/bash
root@mongo-depl-5fd6b7d4b4-j9pf5:/#

make sure there is a space between the double hyphen and the shell (bin/bash in this case). This drops us into a command prompt inside the pod, and now we can execute commands just like on a linux machine.

when creating a deployment on the command line, all of the options are passed as flags and it can get complicated, so it's much cleaner to put the configuration in a file and pass that file to kubectl using the kubectl apply -f config-file.yaml command
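for example, a minimal config file for the nginx deployment created earlier might look roughly like this (a sketch; the file name nginx-deployment.yaml is arbitrary):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-depl
  labels:
    app: nginx-depl
spec:
  replicas: 1                 # how many pod replicas to keep running
  selector:
    matchLabels:
      app: nginx-depl         # which pods this deployment manages
  template:                   # the pod blueprint
    metadata:
      labels:
        app: nginx-depl
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

kubectl apply -f nginx-deployment.yaml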

docker – part 3

let's pull a specific version of node with the alpine tag; alpine images are typically the smallest and help you create small images.

docker pull node:lts-alpine
lts-alpine: Pulling from library/node
cbdbe7a5bc2a: Pull complete
9287919c3a0f: Pull complete
43a47bbd54c9: Pull complete
3c1bcea295c4: Pull complete
Digest: sha256:53bbb1eeb8bc916ee27f9e01c542788699121bd7b5a9d9f39eaff64c2fcd0412
Status: Downloaded newer image for node:lts-alpine
docker.io/library/node:lts-alpine

let's look at the sizes

C:\training>docker image ls
REPOSITORY                                  TAG                 IMAGE ID            CREATED             SIZE
user-service-api                            latest              d6c4df7196aa        44 hours ago        945MB
website                                     latest              ec6fa782dfbf        45 hours ago        137MB
node                                        lts-alpine          d8b74300d554        6 days ago          89.6MB
node                                        latest              f47907840247        6 days ago          943MB

note how small the lts-alpine image is – only 89.6MB compared to the 943MB for node:latest

the same applies for nginx – bottom line, alpine linux images are much smaller

nginx alpine bd53a8aa5ac9 8 days ago 22.3MB
nginx latest 992e3b7be046 8 days ago 133MB

lets change our images to use the alpine version

change the corresponding dockerfile: where it says FROM, update it to refer to nginx:alpine or node:alpine, and issue the build command as shown below
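for the user-service-api, the dockerfile (reconstructed from the six build steps below, with only the FROM line changed from the earlier version) now reads:

FROM node:alpine
WORKDIR /app
ADD package*.json ./
RUN npm install
ADD . .
CMD node index.js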

C:\training\nodeegs\user-service-api>docker build -t user-service-api:latest .
Sending build context to Docker daemon  19.97kB
Step 1/6 : FROM node:alpine
 ---> 87e4e57acaa5
Step 2/6 : WORKDIR /app
 ---> Running in 2c324be4450e
Removing intermediate container 2c324be4450e
 ---> a52a0e88e8e9
Step 3/6 : ADD package*.json ./
 ---> d69b2ede02d2
Step 4/6 : RUN npm install
 ---> Running in 79165a49fa10
npm WARN user-service-api@1.0.0 No description
npm WARN user-service-api@1.0.0 No repository field.

added 50 packages from 37 contributors and audited 50 packages in 1.699s
found 0 vulnerabilities

Removing intermediate container 79165a49fa10
 ---> 6e7a39633834
Step 5/6 : ADD . .
 ---> 9a2cc6e2ef61
Step 6/6 : CMD node index.js
 ---> Running in 951c562eaa77
Removing intermediate container 951c562eaa77
 ---> 48026bfc7e3d
Successfully built 48026bfc7e3d
Successfully tagged user-service-api:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

now when we check the image sizes, we can see they have gone down as well; since we reused the tags, the older images now show up as <none>

C:\training\dockertrng>docker image ls
REPOSITORY                                  TAG                 IMAGE ID            CREATED             SIZE
website                                     latest              556fcda99af2        5 seconds ago       26.3MB
user-service-api                            latest              48026bfc7e3d        2 minutes ago       119MB
<none>                                      <none>              d6c4df7196aa        44 hours ago        945MB
<none>                                      <none>              ec6fa782dfbf        45 hours ago        137MB
node                                        alpine              87e4e57acaa5        6 days ago          117MB
node                                        latest              f47907840247        6 days ago          943MB
nginx                                       alpine              bd53a8aa5ac9        8 days ago          22.3MB
nginx                                       latest              992e3b7be046        8 days ago          133MB

let's look at tags, versions and tagging. Pinning a version lets you control which image version you build on. Since the underlying node or nginx image can change, it's advisable to specify a version. Go to hub.docker.com and search for node, and go to nodejs.org to figure out the current stable version

on hub.docker.com, look for the corresponding alpine image for that version

mention this version in the dockerfile by changing the FROM line from the floating alpine tag to the pinned, versioned alpine tag (the before/after screenshots are omitted here).

vscode will actually list out all of the image versions available as you type. now go ahead and reissue the docker build command and you can see the exact version being pulled to create the image

you can use the docker tag command to assign an additional tag, such as a version, to an image. In the example below we assign version 1 to the image that currently carries the latest tag

docker tag user-service-api:latest user-service-api:1

C:\training\nodeegs\user-service-api>docker image ls
REPOSITORY                                  TAG                  IMAGE ID            CREATED             SIZE
user-service-api                            1                    f97cb57c9621        38 minutes ago      92.4MB
user-service-api                            latest               f97cb57c9621        38 minutes ago      92.4MB
website                                     latest               556fcda99af2        54 minutes ago      26.3MB

if we need to make a change to the source code, we rebuild the image with the latest tag and then create a version 2 tag from latest. This way the image with the latest tag always points to the newest build, and we keep specific versions as well.
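so the flow after a source change is roughly this (a sketch):

# rebuild the image and move the latest tag to the new build
docker build -t user-service-api:latest .
# cut a fixed version tag from it
docker tag user-service-api:latest user-service-api:2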

C:\training\nodeegs\user-service-api>docker image ls
REPOSITORY                                  TAG                  IMAGE ID            CREATED             SIZE
user-service-api                            1                    f97cb57c9621        42 minutes ago      92.4MB
user-service-api                            2

let's talk about docker registries. A docker registry is a scalable server-side application that stores and lets you distribute images; you just use docker push to get an image into one. Docker hub is a public registry; quay.io, Amazon ECR, Azure Container Registry and Google Container Registry (https://cloud.google.com/container-registry) are some of the others.

let's push one of our images to docker hub: log in to docker hub and create a new repo – you get one private repo by default

in my case i am going to call the private repository myrepo, and this is what it looks like

it shows the command to push a new tag to this repo. Go back to docker desktop and click on login, which presents you with the login screen

you can also log in by typing docker login and entering your creds

here is the tricky part: the push refers to the registry path, so it's best to name the repo the same as the application, and to tag the local image with your docker id as the prefix

docker push sjvz/myrepos/userserviceapi:2
The push refers to repository [docker.io/sjvz/myrepos/userserviceapi]
d8ff11b621d8: Preparing
c980f362df9f: Preparing
b87374988724: Preparing
6e960b3b1e1c: Preparing
8760de05bee9: Preparing
52fdc5bf1f19: Waiting
8049bee4ff2a: Waiting
50644c29ef5a: Waiting
denied: requested access to the resource is denied

docker tag user-service-api:2 sjvz/myrepos:2

docker push sjvz/myrepos:2
The push refers to repository [docker.io/sjvz/myrepos]
d8ff11b621d8: Pushed
c980f362df9f: Pushed
b87374988724: Pushed
6e960b3b1e1c: Pushed
8760de05bee9: Pushed
52fdc5bf1f19: Pushed
8049bee4ff2a: Pushed
50644c29ef5a: Pushed
2: digest: sha256:169e40860aa8d2db29de09cdd33d9fe924c8eda71e27212f3054742806ca7fec size: 1992

it's kind of weird, but i have tagged my application as myid/reponame and then pushed to that repo … not sure if there is a better way to do this

so it's best to delete the repository, name it the same as the application, and then push to that

you can delete the repo by going into settings .

when you create a new repo , it does give you these instructions to tag the image with the reponame as follows

docker tag local-image:tagname new-repo:tagname
docker push new-repo:tagname

you can use docker inspect containerid to inspect the container

docker logs containerid to inspect the logs

docker logs -f containerid , to follow the logs in realtime

to get into the container, use docker exec -it containerid bash (or sh); the 'i' stands for interactive and the 't' stands for tty terminal
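for example, against the website container from these notes:

docker inspect website         # full JSON metadata for the container
docker logs website            # print the logs once
docker logs -f website         # follow the logs in real time
docker exec -it website bash   # interactive shell inside the container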

docker containers – part 2

in this section we will start off with mounting volumes between containers

the key flag here is --volumes-from and the syntax is as below

docker run --name website-copy --volumes-from website -d -p 8081:80 nginx
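a quick way to confirm that both containers see the same files (assuming the website container already has the local folder mounted, as set up in part 1):

docker exec website-copy ls /usr/share/nginx/html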

dockerfile allows us to create our own images .

docker image ls 

the above command will list all of the images we have

in the ide, create a file and name it Dockerfile. Start with the FROM keyword and mention the base image – in this case nginx. The second line adds the contents of the current directory on the host into the image at the path specified, which is where nginx serves files from inside the container. So the dockerfile should look like this

FROM nginx:latest
ADD . /usr/share/nginx/html


save the dockerfile, go to the directory where the code is, and type in the command below

docker build --tag website:latest .
Sending build context to Docker daemon  4.071MB
Step 1/2 : FROM nginx:latest
 ---> 992e3b7be046
Step 2/2 : ADD . /usr/share/nginx/html
 ---> ec6fa782dfbf
Successfully built ec6fa782dfbf
Successfully tagged website:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

the "." after the tag indicates the current directory (the build context), which is where the dockerfile is kept. So it pulls the base image in step 1 and then adds the current files to the destination directory in the image in step 2. Notice the default set of permissions mentioned in the warning.

type in docker image ls to check if the new images are available

docker image ls
REPOSITORY                                  TAG                 IMAGE ID            CREATED             SIZE
website                                     latest              ec6fa782dfbf        3 minutes ago       137MB
nginx                                       latest              992e3b7be046        7 days ago          133MB

now lets run a container off the newly created image

PS C:\training\dockertrng> docker run --name website -p 8080:80 -d website:latest
835d06b0801c3233c5009724c893feedcb18e745dcc8ffee901c21f21d48f4c1
PS C:\training\dockertrng> docker ps --format=$FORMAT
ID      835d06b0801c
Name    website
Image   website:latest
Ports   0.0.0.0:8080->80/tcp
Command "/docker-entrypoint.…"
Created 2020-10-12 18:39:56 -0400 EDT
Status  Up 10 seconds

as you can see, the container is named website and it's running off the image website:latest.

let's create a container that runs node and express. Install node, then follow the hello world instructions for express; the goal now is to run the same app as a docker container. So just like before we need to create a dockerfile, and it will look like this.

FROM node:latest
WORKDIR /app
ADD . .
RUN npm install
CMD node index.js

the ADD . . is confusing, but here is the interpretation: the first . represents the current directory on the host where the docker build command runs (the build context), and the second . represents the WORKDIR, in other words the /app directory specified in the line above. So this is what you get when you run the docker build command.

docker build  -t user-service-api:latest .
Sending build context to Docker daemon   2.01MB
Step 1/5 : FROM node:latest
 ---> f47907840247
Step 2/5 : WORKDIR /app
 ---> Using cache
 ---> 0c9323ed7812
Step 3/5 : ADD . .
 ---> e0b87ce6045f
Step 4/5 : RUN npm install
 ---> Running in 8ffa6f7451e8
npm WARN user-service-api@1.0.0 No description
npm WARN user-service-api@1.0.0 No repository field.

audited 50 packages in 0.654s
found 0 vulnerabilities

Removing intermediate container 8ffa6f7451e8
 ---> a9780fbcaf7e
Step 5/5 : CMD node index.js
 ---> Running in a6633c49b9ef
Removing intermediate container a6633c49b9ef
 ---> d6c4df7196aa
Successfully built d6c4df7196aa
Successfully tagged user-service-api:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

At this point an image has been created based on the dockerfile, and it contains node and the index.js file that we need. So if we spin up a container based on that image, we get the desired output

docker run --name websitesv -d -p 3000:3000 user-service-api:latest
2d475dccd375995e5af09b96e4bc85045235d20fe88a7fccccba80d9bc793719


Now if you go to localhost:3000 , it should give you the response based on the code in index.js


lets look at .dockerignore file

this file is used to ignore any files or folders in the current directory that do not need to be added to the image's work directory. In the example above we are copying the Dockerfile, the node_modules folder and possibly the .git folder into the image even though we don't need them. The .dockerignore file gives us the ability to exclude these files when the image is created. Create a .dockerignore file in the same directory as the Dockerfile, add the following to it, and then rerun the build

node_modules
Dockerfile
.git

the build downloads the node packages every time, and this makes the process slow. The more efficient approach is to take advantage of layer caching: ADD the package*.json files and run npm install as explicit, separate steps before adding the rest of the source. This ensures the npm install layer is served from cache as long as package*.json hasn't changed

FROM node:latest
WORKDIR /app
ADD package*.json ./
RUN npm install
ADD . .
CMD node index.js

docker containers – part 1

these are my notes from a recent tutorial i watched on youtube , by amigoscode

Docker Toolbox is the old way; Docker Desktop is the new way to run docker on your machine

Docker is a daemon that runs on your machine and runs containers for you. Think of a hypervisor, where you need a host OS and the hypervisor translates instructions to the underlying layer; here the docker daemon passes calls straight to the underlying OS. So we can live with one OS plus the docker daemon and run a whole bunch of containers

docker --version

Docker version 19.03.12, build 48a66213fe

docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

The docker ps command attaches to the daemon and lists any containers, if created

An image is a template for creating an environment of your choice – it contains everything – os, software, app code etc

You take an image and run a container with it

Go to hub.docker.com – explore the images and download the one you want – in this case we are pulling nginx

 docker pull nginx

Using default tag: latest

latest: Pulling from library/nginx

d121f8d1c412: Pull complete
66a200539fd6: Pull complete
e9738820db15: Pull complete
d74ea5811e8a: Pull complete
ffdacbba6928: Pull complete
Digest: sha256:fc66cdef5ca33809823182c9c5d72ea86fd2cef7713cf3363e1a0b12a5d77500

Status: Downloaded newer image for nginx:latest

docker.io/library/ngi

Notice the tag – it says latest that’s the tag

Docker images lists all the images you have

 docker images

REPOSITORY                                  TAG                 IMAGE ID            CREATED             SIZE

nginx                                       latest              992e3b7be046        6 days ago          133MB

Since containers are running instances of images, you specify the image and the tag as shown below

 docker run nginx:latest

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration

/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/

/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh

10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf

10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf

/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh

/docker-entrypoint.sh: Configuration complete; ready for start up

nginx is the image, latest is the tag

This starts the container in the foreground; open up a new powershell window and run this command

docker container ls

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES

a54ce12191d8        nginx:latest        "/docker-entrypoint.…"   34 seconds ago      Up 32 seconds       80/tcp              condescending_liskov

Note the port is 80/tcp

To run in detached mode, use the -d flag

 docker run -d nginx:latest

e97caef31a44d818508b1e36f0ba76a77d461fd1af26c5c5c74c38a1e8576fe4

To map a localhost port to a container port, use the -p flag; specify the localhost port first and then the container port

So here is the command and you may get a windows pop up

docker run -d  -p 8080:80 nginx:latest

77328413d2b59e5a70fe19d4b3d6922f80cb201568e7002de4240cc6866e5c66

(screenshot: Windows Defender Firewall prompt asking whether to allow the Docker Desktop backend network access on private networks)

docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES

77328413d2b5        nginx:latest        "/docker-entrypoint.…"   3 minutes ago       Up 3 minutes        0.0.0.0:8080->80/tcp   stupefied_clarke

You can map multiple ports from the host to the container

Add another -p localhostport:containerport to the docker run command

You can start and stop container by names as well

 docker stop stupefied_clarke

stupefied_clarke

docker ps -a

 list all containers

docker rm 77328413d2b5

77328413d2b5 is the container id

use docker rm $(docker ps -aq) to remove all containers; the q flag stands for quiet mode (only print the ids)

Use -f if there is a running container

A random name gets assigned, but you can specify a name with the --name flag

You should always name your containers

You can use the --format option to display the containers in a much more logical manner

PS C:\Users\vargh> docker ps --format="ID\t{{.ID}}\nName\t{{.Names}}\nImage\t{{.Image}}\nPorts\t{{.Ports}}\nCommand\t{{.Command}}\nCreated\t{{.CreatedAt}}\nStatus\t{{.Status}}\n"
ID      6adcb2e2ad1f

Name    website2

Image   nginx:latest

Ports   0.0.0.0:4000->80/tcp, 0.0.0.0:9080->80/tcp

Command "/docker-entrypoint.…"

Created 2020-10-12 15:25:32 -0400 EDT

Status  Up 3 minutes

ID      113e35f080da

Name    website

Image   nginx:latest

Ports   0.0.0.0:3000->80/tcp, 0.0.0.0:8080->80/tcp

Command "/docker-entrypoint.…"

Created 2020-10-12 15:14:09 -0400 EDT

Status  Up 13 minutes

$FORMAT="ID\t{{.ID}}\nName\t{{.Names}}\nImage\t{{.Image}}\nPorts\t{{.Ports}}\nCommand\t{{.Command}}\nCreated\t{{.CreatedAt}}\nStatus\t{{.Status}}\n"
docker ps --format=$FORMAT
ID      6adcb2e2ad1f

You can create a powershell variable $FORMAT and pass that to the docker command

Docker volume

(diagram: the three ways to get data into a container – a bind mount from the host, a named volume, and a tmpfs mount)

Volumes allow sharing of data between the host and containers, or between containers

In windows, right click on the whale icon -> Settings -> Resources -> File sharing

(screenshot: Docker Desktop Settings -> Resources -> File sharing, where directories such as C:\training can be added so they and their subdirectories can be bind mounted into containers)

docker run --name website -v c:/training/dockertrng:/usr/share/nginx/html:ro -d -p 3000:80 -p 8080:80 nginx:latest

By mounting this local folder, you can serve it up from the container – perfect for static files

To work interactively

docker exec -it website bash                                                                                                                                                                                                                                                                                                                                                 

This command puts us inside the container; now you can create files directly in the docker container and they will be visible on the host, provided the volume was mounted without the readonly flag.
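for instance, if the same folder is mounted without :ro (the container name website-rw and host port 9090 are made up for this sketch), a file created inside the container shows up on the host:

docker run --name website-rw -v c:/training/dockertrng:/usr/share/nginx/html -d -p 9090:80 nginx:latest
docker exec -it website-rw bash
# inside the container:
echo "hello from the container" > /usr/share/nginx/html/hello.txt
# hello.txt now appears in c:\training\dockertrng on the host
exit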