diff --git a/README.md b/README.md
index 8f1abf6ea7ebd81cffdd7e8478d342dc92cd5138..8b24a8388f8e1009f40474029a96e02d806010c1 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,6 @@ Logins:
-There are four important directories:
+There are three important directories:
 * [Backend](./backend) runs on the server and is a middleware between database and frontend
 * [Frontend](./webapp) is a server-side-rendered and client-side-rendered web frontend
-* [Deployment](./deployment) configuration for kubernetes
 * [Cypress](./cypress) contains end-to-end tests and executable feature specifications
 
 In order to setup the application and start to develop features you have to
@@ -98,6 +97,12 @@ docker-compose -f docker-compose.yml up
 
 This will start all required Docker containers
 
+## Deployment
+
+Deployment methods can be found in the [Ocelot-Social-Deploy-Rebranding](https://github.com/Ocelot-Social-Community/Ocelot-Social-Deploy-Rebranding) repository.
+
+The only deployment method remaining in this repository is `docker-compose`, intended for development purposes, as described above.
+
 ## Developer Chat
 
 Join our friendly open-source community on [Discord](https://discordapp.com/invite/DFSjPaX) :heart_eyes_cat:
diff --git a/SUMMARY.md b/SUMMARY.md
index a04c96d986a60ecb559aca5096b0779c405b7247..9c74b197483b22b98491eb1ec1fef36102586905 100644
--- a/SUMMARY.md
+++ b/SUMMARY.md
@@ -15,24 +15,8 @@
   * [End-to-end tests](cypress/README.md)
   * [Frontend tests](webapp/testing.md)
   * [Backend tests](backend/testing.md)
+* [Deployment](https://github.com/Ocelot-Social-Community/Ocelot-Social-Deploy-Rebranding/blob/master/deployment/README.md)
 * [Contributing](CONTRIBUTING.md)
-* [Kubernetes Deployment](deployment/README.md)
-  * [Minikube](deployment/minikube/README.md)
-  * [Digital Ocean](deployment/digital-ocean/README.md)
-    * [Kubernetes Dashboard](deployment/digital-ocean/dashboard/README.md)
-    * [HTTPS](deployment/digital-ocean/https/README.md)
-  * [ocelot.social](deployment/ocelot-social/README.md)
-    * [Error Reporting](deployment/ocelot-social/error-reporting/README.md)
-    * [Mailserver](deployment/ocelot-social/mailserver/README.md)
-    * [Maintenance](deployment/ocelot-social/maintenance/README.md)
-  * [Volumes](deployment/volumes/README.md)
-    * [Neo4J Offline-Backups](deployment/volumes/neo4j-offline-backup/README.md)
-    * [Neo4J Online-Backups](deployment/volumes/neo4j-online-backup/README.md)
-    * [Volume Snapshots](deployment/volumes/volume-snapshots/README.md)
-    * [Reclaim Policy](deployment/volumes/reclaim-policy/README.md)
-    * [Velero](deployment/volumes/velero/README.md)
-  * [Metrics](deployment/monitoring/README.md)
-  * [Legacy Migration](deployment/legacy-migration/README.md)
 * [Feature Specification](cypress/features.md)
 * [Code of conduct](CODE_OF_CONDUCT.md)
 * [License](LICENSE.md)
diff --git a/deployment/.gitignore b/deployment/.gitignore
deleted file mode 100644
index 61e5916241772bb9cc5d53338ae665db304aa371..0000000000000000000000000000000000000000
--- a/deployment/.gitignore
+++ /dev/null
@@ -1,6 +0,0 @@
-secrets.yaml
-configmap.yaml
-**/secrets.yaml
-**/configmap.yaml
-**/staging-values.yaml
-**/production-values.yaml
\ No newline at end of file
diff --git a/deployment/README.md b/deployment/README.md
deleted file mode 100644
index f3bb6d01ed43914e2de7bcfeae0395e1d5ccf7b1..0000000000000000000000000000000000000000
--- a/deployment/README.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# ocelot.social \| Deployment Configuration
-
-We have tested a couple of different ways to deploy an instance of ocelot.social: with [Kubernetes](https://kubernetes.io/) directly and via [Helm](https://helm.sh/docs/). In order to manage your own
-network, you have to [install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), [install Helm](https://helm.sh/docs/intro/install/) (optional, but the preferred way),
-and set up a Kubernetes cluster. Since there are many different options for hosting your cluster, we won't go into specifics here.
-
-We have tested two different Kubernetes providers: [Minikube](./minikube/README.md)
-and [Digital Ocean](./digital-ocean/README.md).
-
-Check out the specific documentation for your provider. After that, choose whether you want to go with the recommended deployment option, [Helm](./helm/README.md), or use Kubernetes directly to apply the configuration for [ocelot.social](./ocelot-social/README.md).
-
-## Initialise Database For Production After Deployment
-
-After the first deployment of the new network on your server, the database must be initialized to start your network. This involves setting up a default administrator with the following data:
-
-- E-mail: admin@example.org
-- Password: 1234
-
-{% hint style="danger" %}
-When you log in for the first time, please change your (the admin's) e-mail address to an existing one and change your password to a secure one!
-{% endhint %}
-
-Run the following command in the Docker container of the backend (or one of the backends):
-
-{% tabs %}
-{% tab title="Kubernetes For Docker" %}
-
-```bash
-# with the explicit backend pod name
-$ kubectl -n ocelot-social exec -it <backend-name> -- yarn prod:migrate init
-
-# or
-
-# if you have only one backend, grep its pod name
-$ kubectl -n ocelot-social exec -it $(kubectl -n ocelot-social get pods | grep backend | awk '{ print $1 }') -- yarn prod:migrate init
-
-# or
-
-# open a shell in your backend pod and run the command there
-$ kubectl -n ocelot-social exec -it $(kubectl -n ocelot-social get pods | grep backend | awk '{ print $1 }') -- sh
-backend: $ yarn prod:migrate init
-backend: $ exit
-```
-
-{% endtab %}
-{% tab title="Docker-Compose Running Locally" %}
-
-```bash
-# exec in backend
-$ docker-compose exec backend yarn run db:migrate init
-```
-
-{% endtab %}
-{% tab title="Running Locally" %}
-
-```bash
-# exec in folder backend/
-$ yarn run db:migrate init
-```
-
-{% endtab %}
-{% endtabs %}
diff --git a/deployment/digital-ocean/README.md b/deployment/digital-ocean/README.md
deleted file mode 100644
index 2ded383362d4b406e3af04d1fcd111ea121269a3..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/README.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Digital Ocean
-
-As a start, read the [introduction into Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by the folks at Digital Ocean. The following section should enable you to deploy ocelot.social to your Kubernetes cluster.
-
-## Connect to your local cluster
-
-1. Create a cluster at [Digital Ocean](https://www.digitalocean.com/).
-2. Download the `***-kubeconfig.yaml` from the Web UI.
-3. Move the file to the default location where kubectl expects it to be: `mv ***-kubeconfig.yaml ~/.kube/config`. Alternatively, you can pass the config on every command: `--kubeconfig ***-kubeconfig.yaml`
-4. Now check whether you can connect to the cluster and whether it is your newly created one by running: `kubectl get nodes`
-
-The output should look something like this:
-
-```sh
-$ kubectl get nodes
-NAME                  STATUS   ROLES    AGE   VERSION
-nifty-driscoll-uu1w   Ready    <none>   69d   v1.13.2
-nifty-driscoll-uuiw   Ready    <none>   69d   v1.13.2
-nifty-driscoll-uusn   Ready    <none>   69d   v1.13.2
-```
-
-If you followed the steps above correctly and can see your nodes, you can continue.
-
-Digital Ocean Kubernetes clusters don't have a graphical interface, so we suggest
-setting up the [Kubernetes dashboard](./dashboard/README.md) as a next step.
-Configuring [HTTPS](./https/README.md) is a bit tricky, so we suggest
-doing this as a last step.
-
-## Spaces
-
-We are storing our images in the S3-compatible [DigitalOcean Spaces](https://www.digitalocean.com/docs/spaces/).
-
-We still want to take backups of our images in case something happens to them in the cloud. See these [instructions](https://www.digitalocean.com/docs/spaces/resources/s3cmd-usage/) about getting set up with `s3cmd` to take a copy of all images in a `Spaces` namespace, in our case `ocelot-social-uploads`.
-
-After configuring `s3cmd` with your credentials, you should be able to make a backup with this command:
-
-```sh
-s3cmd get --recursive --skip-existing s3://ocelot-social-uploads
-```
diff --git a/deployment/digital-ocean/dashboard/README.md b/deployment/digital-ocean/dashboard/README.md
deleted file mode 100644
index 5f66afe0b4d2e1b8d82b4a8cdf71d126daf3f2ec..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/dashboard/README.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# Install Kubernetes Dashboard
-
-The Kubernetes dashboard is optional but very helpful for debugging. If you want to install it, you only have to do so **once** per cluster:
-
-```bash
-# in folder deployment/digital-ocean/
-$ kubectl apply -f dashboard/
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
-```
-
-## Log in to your dashboard
-
-Proxy the remote Kubernetes dashboard to localhost:
-
-```bash
-$ kubectl proxy
-```
-
-Visit:
-
-[http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/)
-
-You should see a login screen.
-
-To get your token for the dashboard you can run this command:
-
-```bash
-$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
-```
-
-It should print something like:
-
-```text
-Name:         admin-user-token-6gl6l
-Namespace:    kube-system
-Labels:       <none>
-Annotations:  kubernetes.io/service-account.name=admin-user
-              kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516
-
-Type:  kubernetes.io/service-account-token
-
-Data
-====
-ca.crt:     1025 bytes
-namespace:  11 bytes
-token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
-```
-
-Grab the token from above and paste it into the [login screen](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/).
-
-When you are logged in, you should see something like:
-
-![Dashboard](./dashboard-screenshot.png)
-
-Feel free to save the login token from above in your password manager. Unlike the `kubeconfig` file, this token does not expire.
diff --git a/deployment/digital-ocean/dashboard/admin-user.yaml b/deployment/digital-ocean/dashboard/admin-user.yaml
deleted file mode 100644
index 27b6bb802062a2c19bdd28bd69cf0c433b87d580..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/dashboard/admin-user.yaml
+++ /dev/null
@@ -1,5 +0,0 @@
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: admin-user
-  namespace: kube-system
diff --git a/deployment/digital-ocean/dashboard/dashboard-screenshot.png b/deployment/digital-ocean/dashboard/dashboard-screenshot.png
deleted file mode 100644
index 6aefb5414d26ac3cb69153c0624456e47e4ba88f..0000000000000000000000000000000000000000
Binary files a/deployment/digital-ocean/dashboard/dashboard-screenshot.png and /dev/null differ
diff --git a/deployment/digital-ocean/dashboard/role-binding.yaml b/deployment/digital-ocean/dashboard/role-binding.yaml
deleted file mode 100644
index faa8927a2b95388240a82ba9c01bb44c54378715..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/dashboard/role-binding.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: admin-user
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: cluster-admin
-subjects:
-- kind: ServiceAccount
-  name: admin-user
-  namespace: kube-system
diff --git a/deployment/digital-ocean/https/.gitignore b/deployment/digital-ocean/https/.gitignore
deleted file mode 100644
index bebae8d05a6d2955689d41badce6bfc9ed3b51c0..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/https/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-ingress.yaml
-issuer.yaml
diff --git a/deployment/digital-ocean/https/README.md b/deployment/digital-ocean/https/README.md
deleted file mode 100644
index 5729a763f1890394e9807970849fd422c8fc4741..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/https/README.md
+++ /dev/null
@@ -1,164 +0,0 @@
-# Setup Ingress and HTTPS
-
-{% tabs %}
-{% tab title="Helm 3" %}
-
-## Via Helm 3
-
-Follow [this quick start guide](https://cert-manager.io/docs/) and install cert-manager via Helm 3.
-
-## Or Via Kubernetes Directly
-
-```bash
-$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml
-```
-
-{% endtab %}
-{% tab title="Helm 2" %}
-
-{% hint style="info" %}
-CAUTION: Tiller, used by Helm 2, was [removed](https://helm.sh/docs/faq/#removal-of-tiller) in Helm 3 because of safety issues, so we recommend Helm 3.
-{% endhint %}
-
-Follow [this quick start guide](https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/index.html) and install cert-manager via Helm 2 and Tiller.
-[This resource](https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html#installing-with-helm) was also helpful.
-
-```bash
-$ kubectl create serviceaccount tiller --namespace=kube-system
-$ kubectl create clusterrolebinding tiller-admin --serviceaccount=kube-system:tiller --clusterrole=cluster-admin
-$ helm init --service-account=tiller
-$ helm repo add jetstack https://charts.jetstack.io
-$ helm repo update
-$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
-$ helm install --name cert-manager --namespace cert-manager --version v0.11.0 jetstack/cert-manager
-```
-
-{% endtab %}
-{% endtabs %}
-
-## Create Letsencrypt Issuers and Ingress Services
-
-Copy the configuration templates and change the files according to your needs.
-
-```bash
-# in folder deployment/digital-ocean/https/
-cp templates/issuer.template.yaml ./issuer.yaml
-cp templates/ingress.template.yaml ./ingress.yaml
-```
-
-At the very least, **change the e-mail addresses** in `issuer.yaml`. Most likely you also want
-to _change the domain name_ in `ingress.yaml`.
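-
-For orientation, these are the fields in question; the values below are the placeholders from the templates and need to be replaced with your own:
-
-```yaml
-# issuer.yaml: the Let's Encrypt contact address
-email: user@example.com
-
-# ingress.yaml: your instance's domain
-rules:
-- host: develop-k8s.ocelot.social
-```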
-
-Once you are done, apply the configuration:
-
-```bash
-# in folder deployment/digital-ocean/https/
-$ kubectl apply -f .
-```
-
-{% hint style="info" %}
-CAUTION: It seems that the behaviour of Digital Ocean has changed and the load balancer is no longer created automatically.
-Creating a load balancer costs money. Please refine the following documentation if required.
-{% endhint %}
-
-{% tabs %}
-{% tab title="Without Load Balancer" %}
-
-You can find a solution without a load balancer [here](../no-loadbalancer/README.md).
-
-{% endtab %}
-{% tab title="With Digital Ocean Load Balancer" %}
-
-{% hint style="info" %}
-CAUTION: It seems that the behaviour of Digital Ocean has changed and the load balancer is no longer created automatically.
-Please refine the following documentation if required.
-{% endhint %}
-
-Formerly, by this point your cluster would have a load balancer assigned with an external IP
-address. On Digital Ocean, this is how it should look:
-
-![Screenshot of Digital Ocean dashboard showing external ip address](./ip-address.png)
-
-If the load balancer isn't created automatically, you have to create it yourself on Digital Ocean under Networks.
-In case you don't want a Digital Ocean load balancer (which costs money, by the way), have a look at the tab `Without Load Balancer`.
-
-{% endtab %}
-{% endtabs %}
-
-Check that the ingress server is working correctly:
-
-```bash
-$ curl -kivL -H 'Host: <DOMAIN_NAME>' 'https://<IP_ADDRESS>'
-<page HTML>
-```
-
-If the response looks good, configure your domain registrar for the new IP address and the domain.
-
-Now let's get a valid HTTPS certificate. As in the tutorial above, check your TLS certificate for staging:
-
-```bash
-$ kubectl -n ocelot-social describe certificate tls
-<
-...
-Spec:
-  ...
-  Issuer Ref:
-    Group:      cert-manager.io
-    Kind:       ClusterIssuer
-    Name:       letsencrypt-staging
-...
-Events:
-  <no errors>
->
-$ kubectl -n ocelot-social describe secret tls
-<
-...
-Annotations:  ...
-              cert-manager.io/issuer-kind: ClusterIssuer
-              cert-manager.io/issuer-name: letsencrypt-staging
-...
->
-```
-
-If everything looks good, update the cluster issuer of your ingress. In `ingress.yaml`, change the annotation `cert-manager.io/cluster-issuer` from `letsencrypt-staging` (for testing with a dummy certificate, so Let's Encrypt will not block you for making too many requests) to `letsencrypt-prod` (for production with a real certificate; Let's Encrypt may block you for several days if you make too many requests).
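-
-The change in `ingress.yaml` amounts to a single annotation; the template in `templates/` uses the staging issuer by default:
-
-```yaml
-metadata:
-  annotations:
-    # before: cert-manager.io/cluster-issuer: "letsencrypt-staging"
-    cert-manager.io/cluster-issuer: "letsencrypt-prod"
-```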
-
-```bash
-# in folder deployment/digital-ocean/https/
-$ kubectl apply -f ingress.yaml
-```
-
-Take a minute and check whether the certificate has now been newly generated by `letsencrypt-prod`, the cluster issuer for production:
-
-```bash
-$ kubectl -n ocelot-social describe certificate tls
-<
-...
-Spec:
-  ...
-  Issuer Ref:
-    Group:      cert-manager.io
-    Kind:       ClusterIssuer
-    Name:       letsencrypt-prod
-...
-Events:
-  <no errors>
->
-$ kubectl -n ocelot-social describe secret tls
-<
-...
-Annotations:  ...
-              cert-manager.io/issuer-kind: ClusterIssuer
-              cert-manager.io/issuer-name: letsencrypt-prod
-...
->
-```
-
-In case the certificate is not newly created, delete the former secret to force a refresh:
-
-```bash
-$ kubectl -n ocelot-social delete secret tls
-```
-
-Now, HTTPS should be configured on your domain. Congrats!
-
-For troubleshooting, have a look at cert-manager's [Troubleshooting](https://cert-manager.io/docs/faq/troubleshooting/) or [Troubleshooting Issuing ACME Certificates](https://cert-manager.io/docs/faq/acme/).
diff --git a/deployment/digital-ocean/https/ip-address.png b/deployment/digital-ocean/https/ip-address.png
deleted file mode 100644
index db523156adce4de5bd9bb198823c83bda84b795a..0000000000000000000000000000000000000000
Binary files a/deployment/digital-ocean/https/ip-address.png and /dev/null differ
diff --git a/deployment/digital-ocean/https/namespace.yaml b/deployment/digital-ocean/https/namespace.yaml
deleted file mode 100644
index 43898c54662aacca5fc20e4f48e4a4cd3fbec434..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/https/namespace.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-kind: Namespace
-apiVersion: v1
-metadata:
-  name: ocelot-social
-  labels:
-    name: ocelot-social
diff --git a/deployment/digital-ocean/https/templates/ingress.template.yaml b/deployment/digital-ocean/https/templates/ingress.template.yaml
deleted file mode 100644
index 7fe690182b6c2e352e81e433eb178131371dc081..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/https/templates/ingress.template.yaml
+++ /dev/null
@@ -1,32 +0,0 @@
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: ingress
-  namespace: ocelot-social
-  annotations:
-    kubernetes.io/ingress.class: "nginx"
-    # cert-manager.io/issuer: "letsencrypt-staging"  # in case using issuers instead of a cluster-issuers
-    cert-manager.io/cluster-issuer: "letsencrypt-staging"
-    nginx.ingress.kubernetes.io/proxy-body-size: 10m
-spec:
-  rules:
-  - host: develop-k8s.ocelot.social
-    http:
-      paths:
-        - backend:
-            serviceName: web
-            servicePort: 3000
-          path: /
-  # uncomment if you have installed the mailserver
-  # - host: mail.ocelot.social
-  #   http:
-  #     paths:
-  #     - backend:
-  #         serviceName: mailserver
-  #         servicePort: 80
-  #       path: /
-  # uncomment to activate SSL via port 443 if you have installed the certificate, probably via cert-manager
-  # tls:
-  # - hosts:
-  #   - develop-k8s.ocelot.social
-  #   secretName: tls
diff --git a/deployment/digital-ocean/https/templates/issuer.template.yaml b/deployment/digital-ocean/https/templates/issuer.template.yaml
deleted file mode 100644
index a87b41438ccd5090776fae74b7c0e4284738b866..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/https/templates/issuer.template.yaml
+++ /dev/null
@@ -1,70 +0,0 @@
----
-# used during installation as a first setup for testing purposes; note 'server: https://acme-staging-v02…'
-# !!! replace the e-mail for expiring certificates, see below !!!
-# !!! create the used secret, see below !!!
-apiVersion: cert-manager.io/v1
-kind: ClusterIssuer
-metadata:
-  name: letsencrypt-staging
-  namespace: ocelot-social
-spec:
-  acme:
-    # You must replace this email address with your own.
-    # Let's Encrypt will use this to contact you about expiring
-    # certificates, and issues related to your account.
-    email: user@example.com
-    server: https://acme-staging-v02.api.letsencrypt.org/directory
-    privateKeySecretRef:
-      # Secret resource that will be used to store the account's private key.
-      name: letsencrypt-staging-issuer-account-key
-    # Add a single challenge solver, HTTP01 using nginx
-    solvers:
-    - http01:
-        ingress:
-          class: nginx
----
-# used after installation for production; note 'server: https://acme-v02…'
-# !!! replace the e-mail for expiring certificates, see below !!!
-# !!! create the used secret, see below !!!
-apiVersion: cert-manager.io/v1
-kind: ClusterIssuer
-metadata:
-  name: letsencrypt-prod
-  namespace: ocelot-social
-spec:
-  acme:
-    # You must replace this email address with your own.
-    # Let's Encrypt will use this to contact you about expiring
-    # certificates, and issues related to your account.
-    email: user@example.com
-    server: https://acme-v02.api.letsencrypt.org/directory
-    privateKeySecretRef:
-      # Secret resource that will be used to store the account's private key.
-      name: letsencrypt-prod-issuer-account-key
-    # Add a single challenge solver, HTTP01 using nginx
-    solvers:
-    - http01:
-        ingress:
-          class: nginx
----
-# fill in your letsencrypt-staging-issuer-account-key
-# generate base64: $ echo -n '<your data>' | base64
-apiVersion: v1
-data:
-  tls.key: <your base 64 data>
-kind: Secret
-metadata:
-  name: letsencrypt-staging-issuer-account-key
-  namespace: ocelot-social
-type: Opaque
----
-# fill in your letsencrypt-prod-issuer-account-key
-# generate base64: $ echo -n '<your data>' | base64
-apiVersion: v1
-data:
-  tls.key: <your base 64 data>
-kind: Secret
-metadata:
-  name: letsencrypt-prod-issuer-account-key
-  namespace: ocelot-social
-type: Opaque
diff --git a/deployment/digital-ocean/no-loadbalancer/.gitignore b/deployment/digital-ocean/no-loadbalancer/.gitignore
deleted file mode 100644
index e44914af64614e049f80d202a2ba52e879001220..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/no-loadbalancer/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-mydns.values.yaml
-myingress.values.yaml
diff --git a/deployment/digital-ocean/no-loadbalancer/README.md b/deployment/digital-ocean/no-loadbalancer/README.md
deleted file mode 100644
index 1afba041390ddeb5cdbbe6803014aabe6bd46a22..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/no-loadbalancer/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Solution Without A Load Balancer
-
-## Expose Port 80 On Digital Ocean's Managed Kubernetes Without A Loadbalancer
-
-Follow [this solution](https://stackoverflow.com/questions/54119399/expose-port-80-on-digital-oceans-managed-kubernetes-without-a-load-balancer/55968709) and install a second firewall, nginx, and use external DNS via Helm 3.
-
-{% hint style="info" %}
-CAUTION: Some of the Helm charts are already deprecated, so investigate the appropriate charts and fill in the correct commands here.
-{% endhint %}
diff --git a/deployment/digital-ocean/no-loadbalancer/templates/mydns.values.template.yaml b/deployment/digital-ocean/no-loadbalancer/templates/mydns.values.template.yaml
deleted file mode 100644
index bfef4dd0363633cedbb898cf60fcd21431f825cd..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/no-loadbalancer/templates/mydns.values.template.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
----
-provider: digitalocean
-digitalocean:
-  # create the API token at https://cloud.digitalocean.com/account/api/tokens
-  # needs read + write
-  apiToken: "DIGITALOCEAN_API_TOKEN"
-domainFilters:
-  # domains you want external-dns to be able to edit
-  - example.com
-rbac:
-  create: true
\ No newline at end of file
diff --git a/deployment/digital-ocean/no-loadbalancer/templates/myingress.values.template.yaml b/deployment/digital-ocean/no-loadbalancer/templates/myingress.values.template.yaml
deleted file mode 100644
index f901a06771744d2602199dd434984b5464eb4b79..0000000000000000000000000000000000000000
--- a/deployment/digital-ocean/no-loadbalancer/templates/myingress.values.template.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
----
-controller:
-  kind: DaemonSet
-  hostNetwork: true
-  dnsPolicy: ClusterFirstWithHostNet
-  daemonset:
-    useHostPort: true
-  service:
-    type: ClusterIP
-rbac:
-  create: true
\ No newline at end of file
diff --git a/deployment/helm/ocelot.social/.helmignore b/deployment/helm/ocelot.social/.helmignore
deleted file mode 100644
index 50af0317254197a5a019f4ac2f8ecc223f93f5a7..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/.helmignore
+++ /dev/null
@@ -1,22 +0,0 @@
-# Patterns to ignore when building packages.
-# This supports shell glob matching, relative path matching, and
-# negation (prefixed with !). Only one pattern per line.
-.DS_Store
-# Common VCS dirs
-.git/
-.gitignore
-.bzr/
-.bzrignore
-.hg/
-.hgignore
-.svn/
-# Common backup files
-*.swp
-*.bak
-*.tmp
-*~
-# Various IDEs
-.project
-.idea/
-*.tmproj
-.vscode/
diff --git a/deployment/helm/ocelot.social/Chart.yaml b/deployment/helm/ocelot.social/Chart.yaml
deleted file mode 100644
index bd67fde175dd697126cd834fa9543df3fb1e3b6b..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/Chart.yaml
+++ /dev/null
@@ -1,5 +0,0 @@
-apiVersion: v1
-appVersion: "0.3.1"
-description: A Helm chart for ocelot.social
-name: ocelot-social
-version: 0.1.0
diff --git a/deployment/helm/ocelot.social/README.md b/deployment/helm/ocelot.social/README.md
deleted file mode 100644
index 3f964576ec1f9f53a3af8636ac5057e52d832686..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# Helm installation of Human Connection
-
-Deploying Human Connection with Helm is very straightforward. All you have to
-do is change certain parameters, like domain names and API keys; then you
-just install our provided Helm chart to your cluster.
-
-## Configuration
-
-You can customize the network by changing the `values.yaml`; all variables will be available as
-environment variables in your deployed Kubernetes pods.
-
-You probably want to change this environment variable to your actual domain:
-
-```bash
-# in folder /deployment/helm
-CLIENT_URI: "https://develop-k8s.ocelot.social"
-```
-
-If you want to edit secrets, you have to `base64`-encode them. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually). You can also use `helm-secrets`, but we have yet to test it.
-
-```bash
-# example how to base64 a string:
-$ echo -n 'admin' | base64
-YWRtaW4=
-```
-
-Those secrets are `base64`-decoded and available as environment variables in
-your deployed Kubernetes pods.
-
-## HTTPS
-
-If you set up HTTPS first, the app will automatically take care of the certificates for you when you install it.
-
-First check that you are using `Helm v3`. This is important since it removes the need for `Tiller`; see the [FAQ](https://helm.sh/docs/faq/#removal-of-tiller).
-
-```bash
-$ helm version
-# output should look similar to this:
-#version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
-```
-
-Apply the cert-manager CRDs before installing (or the installation will fail):
-
-```bash
-$ kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.13.0/deploy/manifests/00-crds.yaml
-```
-
-Next, create the `cert-manager` namespace:
-
-```bash
-$ kubectl create namespace cert-manager
-```
-Add the `jetstack` repo and update:
-
-```bash
-$ helm repo add jetstack https://charts.jetstack.io
-$ helm repo update
-```
-
-Install cert-manager:
-
-```bash
-$ helm install cert-manager --namespace cert-manager --version v0.13.0 jetstack/cert-manager
-```
-
-## Deploy
-
-Once you are satisfied with the configuration, you can install the app.
-
-```bash
-# in folder /deployment/helm/ocelot.social
-$ helm install develop ./ --namespace human-connection
-```
-Here `develop` is the release name (in this case for our develop server) and `human-connection` is the namespace; customize both for your needs. The release name can be anything you want. Just keep in mind that it is used in the templates to prefix the `CLIENT_URI`, among other places.
-
-This will set up everything you need for the network, including `Deployments` and their `Pods`, `Services`, `Ingress`, `Volumes` (`PersistentVolumes`), `PersistentVolumeClaims`, and even `ClusterIssuers` for HTTPS certificates.
diff --git a/deployment/helm/ocelot.social/templates/cluster-issuers/letsencrypt-prod.yaml b/deployment/helm/ocelot.social/templates/cluster-issuers/letsencrypt-prod.yaml
deleted file mode 100644
index e46c1f0b39c18e1e43df0cb503ed44da8f6ec2b9..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/cluster-issuers/letsencrypt-prod.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-apiVersion: cert-manager.io/v1alpha2
-kind: ClusterIssuer
-metadata:
-  name: letsencrypt-prod
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  acme:
-    server: https://acme-v02.api.letsencrypt.org/directory
-    email: {{ .Values.supportEmail }}
-    privateKeySecretRef:
-      name: letsencrypt-prod
-    solvers:
-    - http01:
-        ingress:
-          class: nginx
diff --git a/deployment/helm/ocelot.social/templates/cluster-issuers/letsencrypt-staging.yaml b/deployment/helm/ocelot.social/templates/cluster-issuers/letsencrypt-staging.yaml
deleted file mode 100644
index 531b2075bfddef1c692ae05fc1977a07307f1138..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/cluster-issuers/letsencrypt-staging.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-apiVersion: cert-manager.io/v1alpha2
-kind: ClusterIssuer
-metadata:
-  name: letsencrypt-staging
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  acme:
-    server: https://acme-staging-v02.api.letsencrypt.org/directory
-    email: {{ .Values.supportEmail }}
-    privateKeySecretRef:
-      name: letsencrypt-staging
-    solvers:
-    - http01:
-        ingress:
-          class: nginx
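The two removed ClusterIssuers differ only in the ACME endpoint and the secret name: `letsencrypt-prod` issues browser-trusted certificates against Let's Encrypt's strict rate limits, while `letsencrypt-staging` points at the staging endpoint for test deployments. Which issuer the Ingress references is driven by the `letsencryptIssuer` chart value, so switching to staging only needs a values override — a sketch, with `my-values.yaml` as an assumed override filename:

```yaml
# my-values.yaml (hypothetical override file)
# Point cert-manager at the staging issuer while testing a deployment;
# switch back to "letsencrypt-prod" for real certificates.
letsencryptIssuer: "letsencrypt-staging"
```

Applied with something like `helm upgrade -f my-values.yaml …`, this avoids burning production rate limits during experiments.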
diff --git a/deployment/helm/ocelot.social/templates/deployments/deployment-backend.yaml b/deployment/helm/ocelot.social/templates/deployments/deployment-backend.yaml
deleted file mode 100644
index bed4e0b77b0bea814b5faafc5b74fb8afbab8e37..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/deployments/deployment-backend.yaml
+++ /dev/null
@@ -1,58 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name:  {{ .Release.Name }}-backend
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  replicas: 1
-  minReadySeconds: 15
-  progressDeadlineSeconds: 60
-  strategy:
-    rollingUpdate:
-      maxSurge: 0
-      maxUnavailable: "100%"
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-backend
-  template:
-    metadata:
-      name: deployment-backend
-      annotations:
-        backup.velero.io/backup-volumes: uploads
-      labels:
-        ocelot.social/commit: {{ .Values.commit }}
-        ocelot.social/selector: deployment-backend
-    spec:
-      containers:
-      - name: backend
-        image: "{{ .Values.backendImage }}:{{ .Chart.AppVersion }}"
-        imagePullPolicy: {{ .Values.image.pullPolicy }}
-        envFrom:
-        - configMapRef:
-            name: {{ .Release.Name }}-configmap
-        - secretRef:
-            name: {{ .Release.Name }}-secrets
-        ports:
-        - containerPort: 4000
-          protocol: TCP
-        resources: {}
-        terminationMessagePath: /dev/termination-log
-        terminationMessagePolicy: File
-        volumeMounts:
-          - mountPath: /develop-backend/public/uploads
-            name: uploads
-      dnsPolicy: ClusterFirst
-      restartPolicy: Always
-      schedulerName: default-scheduler
-      securityContext: {}
-      terminationGracePeriodSeconds: 30
-      volumes:
-      - name: uploads
-        persistentVolumeClaim:
-          claimName: uploads-claim
-status: {}
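The removed backend Deployment combines `maxSurge: 0` with `maxUnavailable: "100%"`, which tears down every old pod before any replacement starts. That is deliberate here: the `uploads` PersistentVolumeClaim is `ReadWriteOnce`, so old and new pods cannot mount it simultaneously. The same intent can be stated more directly with the `Recreate` strategy — a sketch of the equivalent setting, not what the chart actually used:

```yaml
# Equivalent intent to maxSurge: 0 / maxUnavailable: "100%" for a
# single-replica workload pinned to a ReadWriteOnce volume: stop all
# old pods, then start the new ones.
strategy:
  type: Recreate
```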
diff --git a/deployment/helm/ocelot.social/templates/deployments/deployment-mailserver.yaml b/deployment/helm/ocelot.social/templates/deployments/deployment-mailserver.yaml
deleted file mode 100644
index c0c0b70fc984af99aebe85cef54c38fae21d597b..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/deployments/deployment-mailserver.yaml
+++ /dev/null
@@ -1,40 +0,0 @@
-{{- if .Values.developmentMailserverDomain }}
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name:  {{ .Release.Name }}-mailserver
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  replicas: 1
-  minReadySeconds: 15
-  progressDeadlineSeconds: 60
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-mailserver
-  template:
-    metadata:
-      labels:
-        ocelot.social/selector: deployment-mailserver
-      name: mailserver
-    spec:
-      containers:
-      - name: mailserver
-        image: djfarrelly/maildev
-        imagePullPolicy: {{ .Values.image.pullPolicy }}
-        ports:
-        - containerPort: 80
-        - containerPort: 25
-        envFrom:
-        - configMapRef:
-            name: {{ .Release.Name }}-configmap
-        - secretRef:
-            name: {{ .Release.Name }}-secrets
-      restartPolicy: Always
-      terminationGracePeriodSeconds: 30
-status: {}
-{{- end}}
diff --git a/deployment/helm/ocelot.social/templates/deployments/deployment-maintenance.yaml b/deployment/helm/ocelot.social/templates/deployments/deployment-maintenance.yaml
deleted file mode 100644
index 2b33c16628fe938983c381e3673063e65f49cdb8..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/deployments/deployment-maintenance.yaml
+++ /dev/null
@@ -1,32 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name:  {{ .Release.Name }}-maintenance
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-maintenance
-  template:
-    metadata:
-      labels:
-        ocelot.social/commit: {{ .Values.commit }}
-        ocelot.social/selector: deployment-maintenance
-      name: maintenance
-    spec:
-      containers:
-        - name: maintenance
-          env:
-            - name: HOST
-              value: 0.0.0.0
-          image: "{{ .Values.maintenanceImage }}:{{ .Chart.AppVersion }}"
-          ports:
-            - containerPort: 80
-          imagePullPolicy: Always
-      restartPolicy: Always
-      terminationGracePeriodSeconds: 30
diff --git a/deployment/helm/ocelot.social/templates/deployments/deployment-neo4j.yaml b/deployment/helm/ocelot.social/templates/deployments/deployment-neo4j.yaml
deleted file mode 100644
index 3895bbf7377dd1a9d88f9138b750b30d2f073d2e..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/deployments/deployment-neo4j.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name:  {{ .Release.Name }}-neo4j
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  replicas: 1
-  strategy:
-    rollingUpdate:
-      maxSurge: 0
-      maxUnavailable: "100%"
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-neo4j
-  template:
-    metadata:
-      name: neo4j
-      annotations:
-        backup.velero.io/backup-volumes: neo4j-data
-      labels:
-        ocelot.social/commit: {{ .Values.commit }}
-        ocelot.social/selector: deployment-neo4j
-    spec:
-      containers:
-      - name: neo4j
-        image: "{{ .Values.neo4jImage }}:{{ .Chart.AppVersion }}"
-        imagePullPolicy: {{ .Values.image.pullPolicy }}
-        ports:
-        - containerPort: 7687
-        - containerPort: 7474
-        resources:
-          requests:
-            memory: {{ .Values.neo4jResourceRequestsMemory | default "1G" | quote }}
-          limits:
-            memory: {{ .Values.neo4jResourceLimitsMemory | default "1G" | quote }}
-        envFrom:
-        - configMapRef:
-            name: {{ .Release.Name }}-configmap
-        volumeMounts:
-          - mountPath: /data/
-            name: neo4j-data
-      volumes:
-      - name: neo4j-data
-        persistentVolumeClaim:
-          claimName: neo4j-data-claim
-      restartPolicy: Always
-      terminationGracePeriodSeconds: 30
diff --git a/deployment/helm/ocelot.social/templates/deployments/deployment-web.yaml b/deployment/helm/ocelot.social/templates/deployments/deployment-web.yaml
deleted file mode 100644
index 303a9fb436e6173cf8f555a3921c394046b9b350..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/deployments/deployment-web.yaml
+++ /dev/null
@@ -1,43 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name:  {{ .Release.Name }}-webapp
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  replicas: 2
-  minReadySeconds: 15
-  progressDeadlineSeconds: 60
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-webapp
-  template:
-    metadata:
-      name: webapp
-      labels:
-        ocelot.social/commit: {{ .Values.commit }}
-        ocelot.social/selector: deployment-webapp
-    spec:
-      containers:
-      - name: webapp
-        image: "{{ .Values.webappImage }}:{{ .Chart.AppVersion }}"
-        imagePullPolicy: {{ .Values.image.pullPolicy }}
-        envFrom:
-        - configMapRef:
-            name: {{ .Release.Name }}-configmap
-        - secretRef:
-            name: {{ .Release.Name }}-secrets
-        env:
-        - name: HOST
-          value: 0.0.0.0
-        ports:
-        - containerPort: 3000
-        resources: {}
-        imagePullPolicy: Always
-      restartPolicy: Always
-      terminationGracePeriodSeconds: 30
-status: {}
diff --git a/deployment/helm/ocelot.social/templates/ingress/ingress.template.yaml b/deployment/helm/ocelot.social/templates/ingress/ingress.template.yaml
deleted file mode 100644
index 0ef133f4042e0dd429d2c6e63677442d85eb5c41..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/ingress/ingress.template.yaml
+++ /dev/null
@@ -1,36 +0,0 @@
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name:  {{ .Release.Name }}-ingress
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-  annotations:
-    kubernetes.io/ingress.class: "nginx"
-    cert-manager.io/cluster-issuer: {{ .Values.letsencryptIssuer }}
-    nginx.ingress.kubernetes.io/proxy-body-size: 10m
-spec:
-  tls:
-    - hosts:
-      - {{ .Values.domain }}
-      secretName: tls
-  rules:
-    - host: {{ .Values.domain }}
-      http:
-        paths:
-          - path: /
-            backend:
-              serviceName: {{ .Release.Name }}-webapp
-              servicePort: 3000
-{{- if .Values.developmentMailserverDomain }}
-    - host: {{ .Values.developmentMailserverDomain }}
-      http:
-        paths:
-        - path: /
-          backend:
-            serviceName: {{ .Release.Name }}-mailserver
-            servicePort: 80
-{{- end }}
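The removed Ingress uses `apiVersion: extensions/v1beta1`, which Kubernetes deprecated and finally removed in v1.22; the `serviceName`/`servicePort` backend fields and the `kubernetes.io/ingress.class` annotation were replaced at the same time. For reference, the webapp rule would look roughly like this under the current `networking.k8s.io/v1` API — a sketch reusing the chart's own value names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  # Replaces the kubernetes.io/ingress.class annotation.
  ingressClassName: nginx
  rules:
    - host: {{ .Values.domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Release.Name }}-webapp
                port:
                  number: 3000
```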
diff --git a/deployment/helm/ocelot.social/templates/jobs/job-db-migration.yaml b/deployment/helm/ocelot.social/templates/jobs/job-db-migration.yaml
deleted file mode 100644
index e18ef77fab9b0038340b359ec8c29be443ffed62..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/jobs/job-db-migration.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: {{ .Release.Name }}-db-migrations
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-  annotations:
-    "helm.sh/hook": post-upgrade
-    "helm.sh/hook-weight": "5"
-    "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
-spec:
-  template:
-    metadata:
-      name: {{ .Release.Name }}
-    spec:
-      restartPolicy: Never
-      containers:
-      - name: db-migrations-job
-        image: "{{ .Values.backendImage }}:latest"
-        command: ["/bin/sh", "-c", "{{ .Values.dbMigrations }}"]
-        envFrom:
-          - configMapRef:
-              name: {{ .Release.Name }}-configmap
-          - secretRef:
-              name: {{ .Release.Name }}-secrets
\ No newline at end of file
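Note that the removed migration Job pins its image to `{{ .Values.backendImage }}:latest`, while the backend Deployment uses `:{{ .Chart.AppVersion }}` — so a post-upgrade migration could run against a newer image than the code it migrates for. A version-pinned variant would be one line (a sketch, matching the convention the other workloads already follow):

```yaml
# Pin the hook Job to the same tag as the backend Deployment so the
# migration and the application code come from the same release.
image: "{{ .Values.backendImage }}:{{ .Chart.AppVersion }}"
```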
diff --git a/deployment/helm/ocelot.social/templates/services/service-backend.yaml b/deployment/helm/ocelot.social/templates/services/service-backend.yaml
deleted file mode 100644
index 8c1cc01d3cbc72a68b865d1c76296517ae7f7eed..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/services/service-backend.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name:  {{ .Release.Name }}-backend
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  ports:
-    - name: graphql
-      port: 4000
-      targetPort: 4000
-  selector:
-    ocelot.social/selector: deployment-backend
diff --git a/deployment/helm/ocelot.social/templates/services/service-mailserver.yaml b/deployment/helm/ocelot.social/templates/services/service-mailserver.yaml
deleted file mode 100644
index b6b0981236b61c92748846d56e8a0816322571e8..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/services/service-mailserver.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-{{- if .Values.developmentMailserverDomain }}
-apiVersion: v1
-kind: Service
-metadata:
-  name: {{ .Release.Name }}-mailserver
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  ports:
-  - name: web
-    port: 80
-    targetPort: 80
-  - name: smtp
-    port: 25
-    targetPort: 25
-  selector:
-    ocelot.social/selector: deployment-mailserver
-{{- end}}
diff --git a/deployment/helm/ocelot.social/templates/services/service-maintenance.yaml b/deployment/helm/ocelot.social/templates/services/service-maintenance.yaml
deleted file mode 100644
index 0730f8027b65db16551ebe694ceeadbd5ee8bfe7..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/services/service-maintenance.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name:  {{ .Release.Name }}-maintenance
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  ports:
-    - name: web
-      port: 80
-      targetPort: 80
-  selector:
-    ocelot.social/selector: deployment-maintenance
diff --git a/deployment/helm/ocelot.social/templates/services/service-neo4j.yaml b/deployment/helm/ocelot.social/templates/services/service-neo4j.yaml
deleted file mode 100644
index d9f1b14a082577cbbdaaf7680b097062e6011697..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/services/service-neo4j.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name:  {{ .Release.Name }}-neo4j
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  ports:
-  - name: bolt
-    port: 7687
-    targetPort: 7687
-  - name: web
-    port: 7474
-    targetPort: 7474
-  selector:
-    ocelot.social/selector: deployment-neo4j
diff --git a/deployment/helm/ocelot.social/templates/services/service-webapp.yaml b/deployment/helm/ocelot.social/templates/services/service-webapp.yaml
deleted file mode 100644
index c17e001ff0824c1186dd18bdbe0cf36f8e3a8861..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/services/service-webapp.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name:  {{ .Release.Name }}-webapp
-  labels:
-    app.kubernetes.io/instance: {{ .Release.Name }}
-    app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: ocelot-social
-    app.kubernetes.io/version: {{ .Chart.AppVersion }}
-    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
-spec:
-  ports:
-    - name: {{ .Release.Name }}-webapp
-      port: 3000
-      protocol: TCP
-      targetPort: 3000
-  selector:
-    ocelot.social/selector: deployment-webapp
diff --git a/deployment/helm/ocelot.social/templates/volumes/pvc-neo4j-data.yaml b/deployment/helm/ocelot.social/templates/volumes/pvc-neo4j-data.yaml
deleted file mode 100644
index 3f85d3ae8963e7f4cbe719c674b0f8eba8080abc..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/volumes/pvc-neo4j-data.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: neo4j-data-claim
-spec:
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage: {{ .Values.neo4jStorage }}
diff --git a/deployment/helm/ocelot.social/templates/volumes/pvc-uploads.yaml b/deployment/helm/ocelot.social/templates/volumes/pvc-uploads.yaml
deleted file mode 100644
index 7eb81135b591211a4cf787d3bd63e09cd5736884..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/templates/volumes/pvc-uploads.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-
-kind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
-  name: uploads-claim
-spec:
-  dataSource:
-    name: uploads-snapshot
-    kind: VolumeSnapshot
-    apiGroup: snapshot.storage.k8s.io
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage:  {{ .Values.uploadsStorage }}
-
diff --git a/deployment/helm/ocelot.social/values.yaml b/deployment/helm/ocelot.social/values.yaml
deleted file mode 100644
index 4c15c99a728b80693275b3ae306cbd06a837a800..0000000000000000000000000000000000000000
--- a/deployment/helm/ocelot.social/values.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-# domain is the user-facing domain.
-domain: develop-docker.ocelot.social
-# commit is the latest github commit deployed.
-commit: 889a7cdd24dda04a139b2b77d626e984d6db6781
-# dbInitialization runs the database initializations in a post-install hook.
-dbInitializion: "yarn prod:migrate init"
-# dbMigrations runs the database migrations in a post-upgrade hook.
-dbMigrations: "yarn prod:migrate up"
-# backendImage is the docker image for the backend deployment
-backendImage: ocelotsocialnetwork/backend
-# maintenanceImage is the docker image for the maintenance deployment
-maintenanceImage: ocelotsocialnetwork/maintenance
-# neo4jImage is the docker image for the neo4j deployment
-neo4jImage: ocelotsocialnetwork/neo4j
-# webappImage is the docker image for the webapp deployment
-webappImage: ocelotsocialnetwork/webapp
-# image configures pullPolicy related to the docker images
-image:
-  # pullPolicy indicates when, if ever, pods pull a new image from docker hub.
-  pullPolicy: IfNotPresent
-# letsencryptIssuer is used by cert-manager to set up certificates with the given provider.
-letsencryptIssuer: "letsencrypt-prod"
-# neo4jConfig changes any default neo4j config/adds it.
-neo4jConfig:
-  # acceptLicenseAgreement is used to agree to the license agreement for neo4j's enterprise edition.
-  acceptLicenseAgreement: \"yes\"
-  # apocImportFileEnabled enables the import of files to neo4j using the plugin apoc 
-  apocImportFileEnabled: \"true\"
-  # dbmsMemoryHeapInitialSize configures initial heap size. By default, it is calculated based on available system resources.(valid units are `k`, `K`, `m`, `M`, `g`, `G`)
-  dbmsMemoryHeapInitialSize: "500M"
-  # dbmsMemoryHeapMaxSize configures maximum heap size. By default it is calculated based on available system resources.(valid units are `k`, `K`, `m`, `M`, `g`, `G`)
-  dbmsMemoryHeapMaxSize: "500M"
-  # dbmsMemoryPagecacheSize configures the amount of memory to use for mapping the store files, in bytes (or 'k', 'm', and 'g')
-  dbmsMemoryPagecacheSize: "490M"
-# neo4jResourceLimitsMemory configures the memory limits available.
-neo4jResourceLimitsMemory: "2G"
-# neo4jResourceRequestsMemory configures the memory requested.
-neo4jResourceRequestsMemory: "1G"
-# supportEmail is used for letsencrypt certs.
-supportEmail: "devops@ocelot.social"
-# smtpHost is the host for the mailserver.
-smtpHost: "mail.ocelot.social"
-# smtpPort is the port to be used for the mailserver.
-smtpPort: \"25\"
-# jwtSecret is used to encode/decode a user's JWT for authentication
-jwtSecret: "Yi8mJjdiNzhCRiZmdi9WZA=="
-# privateKeyPassphrase is used for activity pub
-privateKeyPassphrase: "YTdkc2Y3OHNhZGc4N2FkODdzZmFnc2FkZzc4"
-# mapboxToken is used for the Mapbox API, geolocalization.
-mapboxToken: "cGsuZXlKMUlqb2lhSFZ0WVc0dFkyOXVibVZqZEdsdmJpSXNJbUVpT2lKamFqbDBjbkJ1Ykdvd2VUVmxNM1Z3WjJsek5UTnVkM1p0SW4wLktaOEtLOWw3MG9talhiRWtrYkhHc1E="
-uploadsStorage: "25Gi"
-neo4jStorage: "5Gi"
-developmentMailserverDomain: mail.ocelot.social
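Two details in the removed values.yaml are worth flagging. The key `dbInitializion` is misspelled relative to its own comment (`dbInitialization`) — though any template referencing it would have to use the same spelling. And values written as `\"yes\"` keep their backslashes: in a plain YAML scalar, `\` is an ordinary character, so the application receives the literal string `\"yes\"` rather than `yes`. Plain double quoting avoids that — a sketch of the corrected entries:

```yaml
neo4jConfig:
  # Quoted so YAML delivers the bare strings "yes"/"true" — no literal
  # backslash characters end up in the values.
  acceptLicenseAgreement: "yes"
  apocImportFileEnabled: "true"
smtpPort: "25"
```

The committed `jwtSecret`, `privateKeyPassphrase`, and `mapboxToken` defaults should likewise be overridden per deployment rather than reused.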
diff --git a/deployment/legacy-migration/README.md b/deployment/legacy-migration/README.md
deleted file mode 100644
index 66100a3c8d9c25c8e341ed6520cf07e5390b11bd..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/README.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Legacy data migration
-
-This setup is **completely optional** and only required if you have data on a
-server which is running our legacy code and you want to import that data. It
-will import the uploads folder and migrate a dump of the legacy Mongo database
-into our new Neo4J graph database.
-
-## Configure Maintenance-Worker Pod
-
-Create a configmap with the specific connection data of your legacy server:
-
-```bash
-$ kubectl create configmap maintenance-worker          \
-  -n ocelot-social                          \
-  --from-literal=SSH_USERNAME=someuser                  \
-  --from-literal=SSH_HOST=yourhost                      \
-  --from-literal=MONGODB_USERNAME=hc-api                \
-  --from-literal=MONGODB_PASSWORD=secretpassword        \
-  --from-literal=MONGODB_AUTH_DB=hc_api                 \
-  --from-literal=MONGODB_DATABASE=hc_api                \
-  --from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads
-```
-
-Create a secret with your public and private ssh keys. As the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys) points out, you should be careful with your ssh keys. Anyone with access to your cluster will have access to your ssh keys. Better create a new pair with `ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`:
-
-```bash
-$ kubectl create secret generic ssh-keys          \
-  -n ocelot-social                    \
-  --from-file=id_rsa=/path/to/.ssh/id_rsa         \
-  --from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
-  --from-file=known_hosts=/path/to/.ssh/known_hosts
-```
-
-## Deploy a Temporary Maintenance-Worker Pod
-
-Bring the application into maintenance mode.
-
-{% hint style="info" %} TODO: implement maintenance mode {% endhint %}
-
-
-Then temporarily delete backend and database deployments
-
-```bash
-$ kubectl -n ocelot-social get deployments
-NAME            READY   UP-TO-DATE   AVAILABLE   AGE
-backend         1/1     1            1           3d11h
-neo4j           1/1     1            1           3d11h
-webapp          2/2     2            2           73d
-$ kubectl -n ocelot-social delete deployment neo4j
-deployment.extensions "neo4j" deleted
-$ kubectl -n ocelot-social delete deployment backend
-deployment.extensions "backend" deleted
-```
-
-Deploy one-time develop-maintenance-worker pod:
-
-```bash
-# in deployment/legacy-migration/
-$ kubectl apply -f maintenance-worker.yaml
-pod/develop-maintenance-worker created
-```
-
-Import legacy database and uploads:
-
-```bash
-$ kubectl -n ocelot-social exec -it develop-maintenance-worker bash
-$ import_legacy_db
-$ import_legacy_uploads
-$ exit
-```
-
-Delete the pod when you're done:
-
-```bash
-$ kubectl -n ocelot-social delete pod develop-maintenance-worker
-```
-
-Oh, and of course you have to get those deleted deployments back. One way of
-doing it would be:
-
-```bash
-# in folder deployment/
-$ kubectl apply -f human-connection/deployment-backend.yaml -f human-connection/deployment-neo4j.yaml
-```
-
diff --git a/deployment/legacy-migration/maintenance-worker.yaml b/deployment/legacy-migration/maintenance-worker.yaml
deleted file mode 100644
index d8b118b67bd5d160ce7f3b9675af38e02d9d91e8..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker.yaml
+++ /dev/null
@@ -1,40 +0,0 @@
----
-  kind: Pod
-  apiVersion: v1
-  metadata:
-    name: develop-maintenance-worker
-    namespace: ocelot-social
-  spec:
-    containers:
-    - name: develop-maintenance-worker
-      image: ocelotsocialnetwork/develop-maintenance-worker:latest
-      imagePullPolicy: Always
-      resources:
-        requests:
-          memory: "2G"
-        limits:
-          memory: "8G"
-      envFrom:
-      - configMapRef:
-          name: maintenance-worker
-      - configMapRef:
-          name: configmap
-      volumeMounts:
-      - name: secret-volume
-        readOnly: false
-        mountPath: /root/.ssh
-      - name: uploads
-        mountPath: /uploads
-      - name: neo4j-data
-        mountPath: /data/
-    volumes:
-    - name: secret-volume
-      secret:
-        secretName: ssh-keys
-        defaultMode: 0400
-    - name: uploads
-      persistentVolumeClaim:
-        claimName: uploads-claim
-    - name: neo4j-data
-      persistentVolumeClaim:
-        claimName: neo4j-data-claim
diff --git a/deployment/legacy-migration/maintenance-worker/.dockerignore b/deployment/legacy-migration/maintenance-worker/.dockerignore
deleted file mode 100644
index 59ba63a8b118bede7ff899f89c73ea481196029b..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/.dockerignore
+++ /dev/null
@@ -1 +0,0 @@
-.ssh/
diff --git a/deployment/legacy-migration/maintenance-worker/.gitignore b/deployment/legacy-migration/maintenance-worker/.gitignore
deleted file mode 100644
index 485bc00e62f1a7d45ed82661fd51dd8509938537..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-.ssh/
-ssh/
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/Dockerfile b/deployment/legacy-migration/maintenance-worker/Dockerfile
deleted file mode 100644
index 760cc06c8c5fcb6a1830a89f088c014e3e59d1e8..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM ocelotsocialnetwork/develop-neo4j:latest
-
-ENV NODE_ENV=maintenance
-EXPOSE 7687 7474
-
-ENV BUILD_DEPS="gettext"  \
-    RUNTIME_DEPS="libintl"
-
-RUN set -x && \
-    apk add --update $RUNTIME_DEPS && \
-    apk add --virtual build_deps $BUILD_DEPS &&  \
-    cp /usr/bin/envsubst /usr/local/bin/envsubst && \
-    apk del build_deps
-
-
-RUN apk upgrade --update
-RUN apk add --no-cache mongodb-tools openssh nodejs yarn rsync
-
-COPY known_hosts /root/.ssh/known_hosts
-COPY migration /migration
-COPY ./binaries/* /usr/local/bin/
diff --git a/deployment/legacy-migration/maintenance-worker/binaries/.env b/deployment/legacy-migration/maintenance-worker/binaries/.env
deleted file mode 100644
index 773918095a12a2cd2a558fe7677ea94cd11355f2..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/binaries/.env
+++ /dev/null
@@ -1,6 +0,0 @@
-# SSH Access
-# SSH_USERNAME='username'
-# SSH_HOST='example.org'
-
-# UPLOADS_DIRECTORY=/var/www/api/uploads
-OUTPUT_DIRECTORY='/uploads/'
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/binaries/idle b/deployment/legacy-migration/maintenance-worker/binaries/idle
deleted file mode 100755
index f5b1b2454beabc72ef8e33317e653d18611a7cd2..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/binaries/idle
+++ /dev/null
@@ -1,2 +0,0 @@
-#!/usr/bin/env bash
-tail -f /dev/null
diff --git a/deployment/legacy-migration/maintenance-worker/binaries/import_legacy_db b/deployment/legacy-migration/maintenance-worker/binaries/import_legacy_db
deleted file mode 100755
index 6ffdf8e3faf971820cae916816f5860315f83914..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/binaries/import_legacy_db
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/usr/bin/env bash
-set -e
-for var in "SSH_USERNAME" "SSH_HOST" "MONGODB_USERNAME" "MONGODB_PASSWORD" "MONGODB_DATABASE" "MONGODB_AUTH_DB"
-do
-  if [[ -z "${!var}" ]]; then
-    echo "${var} is undefined"
-    exit 1
-  fi
-done
-
-/migration/mongo/export.sh
-/migration/neo4j/import.sh
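The loop above relies on bash indirect expansion: `${!var}` expands to the value of the variable whose *name* is stored in `$var`, so one loop can validate an arbitrary list of required settings. A minimal standalone sketch of the same pattern (variable values are placeholders):

```shell
#!/usr/bin/env bash
# Validation pattern from import_legacy_db: ${!var} is bash indirect
# expansion -- it reads the value of the variable named by $var.
require_vars() {
  for var in "$@"; do
    if [[ -z "${!var}" ]]; then
      echo "${var} is undefined"
      return 1
    fi
  done
}

SSH_USERNAME=someuser SSH_HOST=yourhost
require_vars SSH_USERNAME SSH_HOST && echo "all required variables set"
```

Because the check runs before any SSH tunnel or export is attempted, a missing setting fails fast with the offending variable's name.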
diff --git a/deployment/legacy-migration/maintenance-worker/binaries/import_legacy_uploads b/deployment/legacy-migration/maintenance-worker/binaries/import_legacy_uploads
deleted file mode 100755
index 5c0b67d74f21d7f4b70b01d59c5cd9a04567411d..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/binaries/import_legacy_uploads
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# import .env config
-set -o allexport
-source $(dirname "$0")/.env
-set +o allexport
-
-for var in "SSH_USERNAME" "SSH_HOST" "UPLOADS_DIRECTORY"
-do
-  if [[ -z "${!var}" ]]; then
-    echo "${var} is undefined"
-    exit 1
-  fi
-done
-
-rsync --archive --update --verbose ${SSH_USERNAME}@${SSH_HOST}:${UPLOADS_DIRECTORY}/ ${OUTPUT_DIRECTORY}
diff --git a/deployment/legacy-migration/maintenance-worker/known_hosts b/deployment/legacy-migration/maintenance-worker/known_hosts
deleted file mode 100644
index 947840cb2366a8adc09f8c13ffa4a901e309d02a..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/known_hosts
+++ /dev/null
@@ -1,3 +0,0 @@
-|1|GuOYlVEhTowidPs18zj9p5F2j3o=|sDHJYLz9Ftv11oXeGEjs7SpVyg0= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM5N29bI5CeKu1/RBPyM2fwyf7fuajOO+tyhKe1+CC2sZ1XNB5Ff6t6MtCLNRv2mUuvzTbW/HkisDiA5tuXUHOk=
-|1|2KP9NV+Q5g2MrtjAeFSVcs8YeOI=|nf3h4wWVwC4xbBS1kzgzE2tBldk= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=
-|1|HonYIRNhKyroUHPKU1HSZw0+Qzs=|5T1btfwFBz2vNSldhqAIfTbfIgQ= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=
diff --git a/deployment/legacy-migration/maintenance-worker/migration/mongo/.env b/deployment/legacy-migration/maintenance-worker/migration/mongo/.env
deleted file mode 100644
index 376cb56d0637a62ba2d3cc1e9ee94df3699b63d9..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/mongo/.env
+++ /dev/null
@@ -1,17 +0,0 @@
-# SSH Access
-# SSH_USERNAME='username'
-# SSH_HOST='example.org'
-
-# Mongo DB on Remote Machine
-# MONGODB_USERNAME='mongouser'
-# MONGODB_PASSWORD='mongopassword'
-# MONGODB_DATABASE='mongodatabase'
-# MONGODB_AUTH_DB='admin'
-
-# Export Settings
-# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
-EXPORT_PATH='/tmp/mongo-export/'
-EXPORT_MONGOEXPORT_BIN='mongoexport'
-MONGO_EXPORT_SPLIT_SIZE=6000
-# On Windows use something like this
-# EXPORT_MONGOEXPORT_BIN='C:\Program Files\MongoDB\Server\3.6\bin\mongoexport.exe'
diff --git a/deployment/legacy-migration/maintenance-worker/migration/mongo/export.sh b/deployment/legacy-migration/maintenance-worker/migration/mongo/export.sh
deleted file mode 100755
index b56ace87a652198192f8d424bfc0ca028b655df9..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/mongo/export.sh
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# import .env config
-set -o allexport
-source $(dirname "$0")/.env
-set +o allexport
-
-# Export collection function definition
-function export_collection () {
-  "${EXPORT_MONGOEXPORT_BIN}" --db ${MONGODB_DATABASE} --host localhost --port 27018 --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1.json"
-  mkdir -p ${EXPORT_PATH}splits/$1/
-  split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1.json ${EXPORT_PATH}splits/$1/
-}
-
-# Export collection with query function definition
-function export_collection_query () {
-  "${EXPORT_MONGOEXPORT_BIN}" --db ${MONGODB_DATABASE} --host localhost --port 27018 --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1_$3.json" --query "$2"
-  mkdir -p ${EXPORT_PATH}splits/$1_$3/
-  split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1_$3.json ${EXPORT_PATH}splits/$1_$3/
-}
-
-# Delete old export & ensure directory
-rm -rf ${EXPORT_PATH}*
-mkdir -p ${EXPORT_PATH}
-
-# Open SSH Tunnel
-ssh -4 -M -S my-ctrl-socket -fnNT -L 27018:localhost:27017 -l ${SSH_USERNAME} ${SSH_HOST}
-
-# Export all Data from the Alpha to json and split them up
-export_collection "badges"
-export_collection "categories"
-export_collection "comments"
-export_collection_query "contributions" '{"type": "DELETED"}' "DELETED"
-export_collection_query "contributions" '{"type": "post"}' "post"
-# export_collection_query "contributions" '{"type": "cando"}' "cando"
-export_collection "emotions"
-# export_collection_query "follows" '{"foreignService": "organizations"}' "organizations"
-export_collection_query "follows" '{"foreignService": "users"}' "users"
-# export_collection "invites"
-# export_collection "organizations"
-# export_collection "pages"
-# export_collection "projects"
-# export_collection "settings"
-export_collection "shouts"
-# export_collection "status"
-export_collection_query "users" '{"isVerified": true }' "verified"
-# export_collection "userscandos"
-# export_collection "usersettings"
-
-# Close SSH Tunnel
-ssh -S my-ctrl-socket -O check -l ${SSH_USERNAME} ${SSH_HOST}
-ssh -S my-ctrl-socket -O exit  -l ${SSH_USERNAME} ${SSH_HOST}
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/.env b/deployment/legacy-migration/maintenance-worker/migration/neo4j/.env
deleted file mode 100644
index 16220f3e67bea20fc81301bb781c07aba57f0466..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/.env
+++ /dev/null
@@ -1,16 +0,0 @@
-# Neo4J Settings
-# NEO4J_USERNAME='neo4j'
-# NEO4J_PASSWORD='letmein'
-
-# Import Settings
-# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
-IMPORT_PATH='/tmp/mongo-export/'
-IMPORT_CHUNK_PATH='/tmp/mongo-export/splits/'
-
-IMPORT_CHUNK_PATH_CQL='/tmp/mongo-export/splits/'
-# On Windows this path needs to be windows style since the cypher-shell runs native - note the forward slash
-# IMPORT_CHUNK_PATH_CQL='C:/Users/dornhoeschen/AppData/Local/Temp/mongo-export/splits/'
-
-IMPORT_CYPHERSHELL_BIN='cypher-shell'
-# On Windows use something like this
-# IMPORT_CYPHERSHELL_BIN='C:\Program Files\neo4j-community\bin\cypher-shell.bat'
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/badges/badges.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/badges/badges.cql
deleted file mode 100644
index adf63dc1f77659284942c7ebbe3359b96f3c0b56..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/badges/badges.cql
+++ /dev/null
@@ -1,52 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[?]   image: {
-[?]     path: { // Path is incorrect in Nitro - is icon the correct name for this field?
-[X]       type: String,
-[X]       required: true
-        },
-[ ]     alt: { // If we use an image - should we not have an alt?
-[ ]       type: String,
-[ ]       required: true
-        }
-      },
-[?]   status: { 
-[X]     type: String, 
-[X]     enum: ['permanent', 'temporary'], 
-[ ]     default: 'permanent', // Default value is missing in Nitro
-[X]     required: true
-      },
-[?]   type: {
-[?]     type: String, // in nitro this is a defined enum - seems good for now
-[X]     required: true
-      },
-[X]   id: {
-[X]     type: String,
-[X]     required: true
-      },
-[?]   createdAt: {
-[?]     type: Date, // Type is modeled as string in Nitro which is incorrect
-[ ]     default: Date.now // Default value is missing in Nitro
-      }, 
-[?]   updatedAt: {
-[?]     type: Date, // Type is modeled as string in Nitro which is incorrect
-[ ]     default: Date.now // Default value is missing in Nitro
-      }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as badge
-MERGE(b:Badge {id: badge._id["$oid"]})
-ON CREATE SET
-b.id        = badge.key,
-b.type      = badge.type,
-b.icon      = replace(badge.image.path, 'https://api-alpha.human-connection.org', ''),
-b.status    = badge.status,
-b.createdAt = badge.createdAt.`$date`,
-b.updatedAt = badge.updatedAt.`$date`
-;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/badges/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/badges/delete.cql
deleted file mode 100644
index 2a6f8c24418f5c21016d6910445e016ceac2422f..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/badges/delete.cql
+++ /dev/null
@@ -1 +0,0 @@
-MATCH (n:Badge) DETACH DELETE n;
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/categories/categories.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/categories/categories.cql
deleted file mode 100644
index 5d495887698b1cdd71fcbd5beba76e897e57dff5..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/categories/categories.cql
+++ /dev/null
@@ -1,129 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[X]   title: {
-[X]     type: String,
-[X]     required: true
-      },
-[?]   slug: {
-[X]     type: String,
-[ ]     required: true, // Not required in Nitro
-[ ]     unique: true // Unique value is not enforced in Nitro?
-      },
-[?]   icon: { // Nitro adds required: true
-[X]     type: String,
-[ ]     unique: true // Unique value is not enforced in Nitro?
-      },
-[?]   createdAt: {
-[?]     type: Date, // Type is modeled as string in Nitro which is incorrect
-[ ]     default: Date.now // Default value is missing in Nitro
-      },
-[?]   updatedAt: {
-[?]     type: Date, // Type is modeled as string in Nitro which is incorrect
-[ ]     default: Date.now // Default value is missing in Nitro
-      }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as category
-MERGE(c:Category {id: category._id["$oid"]})
-ON CREATE SET
-c.name      = category.title,
-c.slug      = category.slug,
-c.icon      = category.icon,
-c.createdAt = category.createdAt.`$date`,
-c.updatedAt = category.updatedAt.`$date`
-;
-
-// Transform icon names
-MATCH (c:Category)
-WHERE (c.icon = "categories-justforfun")
-SET c.icon = 'smile'
-SET c.slug = 'just-for-fun'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-luck")
-SET c.icon = 'heart-o'
-SET c.slug = 'happiness-values'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-health")
-SET c.icon = 'medkit'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-environment")
-SET c.icon = 'tree'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-animal-justice")
-SET c.icon = 'paw'
-SET c.slug = 'animal-protection'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-human-rights")
-SET c.icon = 'balance-scale'
-SET c.slug = 'human-rights-justice'
-
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-education")
-SET c.icon = 'graduation-cap'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-cooperation")
-SET c.icon = 'users'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-politics")
-SET c.icon = 'university'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-economy")
-SET c.icon = 'money'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-technology")
-SET c.icon = 'flash'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-internet")
-SET c.icon = 'mouse-pointer'
-SET c.slug = 'it-internet-data-privacy'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-art")
-SET c.icon = 'paint-brush'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-freedom-of-speech")
-SET c.icon = 'bullhorn'
-SET c.slug = 'freedom-of-speech'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-sustainability")
-SET c.icon = 'shopping-cart'
-;
-
-MATCH (c:Category)
-WHERE (c.icon = "categories-peace")
-SET c.icon = 'angellist'
-SET c.slug = 'global-peace-nonviolence'
-;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/categories/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/categories/delete.cql
deleted file mode 100644
index c06b5ef2bbbf83bf4cf600a861a1745ca7ec6fd9..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/categories/delete.cql
+++ /dev/null
@@ -1 +0,0 @@
-MATCH (n:Category) DETACH DELETE n;
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/comments/comments.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/comments/comments.cql
deleted file mode 100644
index 083f9f76288867f806cdef4a5f4fd5e6573f2a51..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/comments/comments.cql
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[?]   userId: {
-[X]     type: String,
-[ ]     required: true, // Not required in Nitro
-[-]     index: true
-      },
-[?]   contributionId: {
-[X]     type: String,
-[ ]     required: true, // Not required in Nitro
-[-]     index: true
-      },
-[X]   content: {
-[X]     type: String,
-[X]     required: true
-      },
-[?]   contentExcerpt: { // Generated from content
-[X]     type: String,
-[ ]     required: true // Not required in Nitro
-      },
-[ ]   hasMore: { type: Boolean },
-[ ]   upvotes: {
-[ ]     type: Array,
-[ ]     default: []
-      },
-[ ]   upvoteCount: {
-[ ]     type: Number,
-[ ]     default: 0
-      },
-[?]   deleted: {
-[X]     type: Boolean,
-[ ]     default: false, // Default value is missing in Nitro
-[-]     index: true
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as comment
-MERGE (c:Comment {id: comment._id["$oid"]})
-ON CREATE SET
-c.content        = comment.content,
-c.contentExcerpt = comment.contentExcerpt,
-c.deleted        = comment.deleted,
-c.createdAt      = comment.createdAt.`$date`,
-c.updatedAt      = comment.updatedAt.`$date`,
-c.disabled       = false
-WITH c, comment, comment.contributionId as postId
-MATCH (post:Post {id: postId})
-WITH c, post, comment.userId as userId
-MATCH (author:User {id: userId})
-MERGE (c)-[:COMMENTS]->(post)
-MERGE (author)-[:WROTE]->(c)
-;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/comments/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/comments/delete.cql
deleted file mode 100644
index c4a7961c545f04086e7341f07f7ecf9ec66028ff..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/comments/delete.cql
+++ /dev/null
@@ -1 +0,0 @@
-MATCH (n:Comment) DETACH DELETE n;
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/contributions/contributions.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/contributions/contributions.cql
deleted file mode 100644
index f09b5ad7185f3d4aa8ded26e9750fad46f0ed564..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/contributions/contributions.cql
+++ /dev/null
@@ -1,156 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-[?] { // Modeled incorrectly as Post
-[?]   userId: {
-[X]     type: String,
-[ ]     required: true, // Not required in Nitro
-[-]     index: true
-      },
-[ ]   organizationId: {
-[ ]     type: String,
-[-]     index: true
-      },
-[X]   categoryIds: {
-[X]     type: Array,
-[-]     index: true
-      },
-[X]   title: {
-[X]     type: String,
-[X]     required: true
-      },
-[?]   slug: { // Generated from title
-[X]     type: String,
-[ ]     required: true, // Not required in Nitro
-[?]     unique: true, // Unique value is not enforced in Nitro?
-[-]     index: true
-      },
-[ ]   type: { // db.getCollection('contributions').distinct('type') -> 'DELETED', 'cando', 'post'
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   cando: {
-[ ]     difficulty: {
-[ ]       type: String,
-[ ]       enum: ['easy', 'medium', 'hard']
-        },
-[ ]     reasonTitle: { type: String },
-[ ]     reason: { type: String }
-      },
-[X]   content: {
-[X]     type: String,
-[X]     required: true
-      },
-[?]   contentExcerpt: { // Generated from content
-[X]     type: String,
-[?]     required: true // Not required in Nitro
-      },
-[ ]   hasMore: { type: Boolean },
-[X]   teaserImg: { type: String },
-[ ]   language: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   shoutCount: {
-[ ]     type: Number,
-[ ]     default: 0,
-[-]     index: true
-      },
-[ ]   meta: {
-[ ]     hasVideo: {
-[ ]       type: Boolean,
-[ ]       default: false
-        },
-[ ]     embedds: {
-[ ]       type: Object,
-[ ]       default: {}
-        }
-      },
-[?]   visibility: {
-[X]     type: String,
-[X]     enum: ['public', 'friends', 'private'],
-[ ]     default: 'public', // Default value is missing in Nitro
-[-]     index: true
-      },
-[?]   isEnabled: {
-[X]     type: Boolean,
-[ ]     default: true, // Default value is missing in Nitro
-[-]     index: true
-      },
-[?]   tags: { type: Array }, // ensure this is working properly
-[ ]   emotions: {
-[ ]     type: Object,
-[-]     index: true,
-[ ]     default: {
-[ ]       angry: {
-[ ]         count: 0,
-[ ]         percent: 0
-[ ]       },
-[ ]       cry: {
-[ ]         count: 0,
-[ ]         percent: 0
-[ ]       },
-[ ]       surprised: {
-[ ]         count: 0,
-[ ]         percent: 0
-          },
-[ ]       happy: {
-[ ]         count: 0,
-[ ]         percent: 0
-          },
-[ ]       funny: {
-[ ]         count: 0,
-[ ]         percent: 0
-          }
-        }
-      },
-[?]   deleted: { // This field is not always present in the alpha-data
-[?]     type: Boolean,
-[ ]     default: false, // Default value is missing in Nitro
-[-]     index: true
-      },
-[?]   createdAt: {
-[?]     type: Date, // Type is modeled as string in Nitro which is incorrect
-[ ]     default: Date.now // Default value is missing in Nitro
-      },
-[?]   updatedAt: {
-[?]     type: Date, // Type is modeled as string in Nitro which is incorrect
-[ ]     default: Date.now // Default value is missing in Nitro
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-*/
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as post
-MERGE (p:Post {id: post._id["$oid"]})
-ON CREATE SET
-p.title          = post.title,
-p.slug           = post.slug,
-p.image          = replace(post.teaserImg, 'https://api-alpha.human-connection.org', ''),
-p.content        = post.content,
-p.contentExcerpt = post.contentExcerpt,
-p.visibility     = toLower(post.visibility),
-p.createdAt      = post.createdAt.`$date`,
-p.updatedAt      = post.updatedAt.`$date`,
-p.deleted        = COALESCE(post.deleted, false),
-p.disabled       = COALESCE(NOT post.isEnabled, false)
-WITH p, post
-MATCH (u:User {id: post.userId})
-MERGE (u)-[:WROTE]->(p)
-WITH p, post, post.categoryIds as categoryIds
-UNWIND categoryIds AS categoryId
-MATCH (c:Category {id: categoryId})
-MERGE (p)-[:CATEGORIZED]->(c)
-WITH p, post.tags AS tags
-UNWIND tags AS tag
-WITH apoc.text.replace(tag, '[^\\p{L}0-9]', '') as tagNoSpacesAllowed
-CALL apoc.when(tagNoSpacesAllowed =~ '^((\\p{L}+[\\p{L}0-9]*)|([0-9]+\\p{L}+[\\p{L}0-9]*))$', 'RETURN tagNoSpacesAllowed', '', {tagNoSpacesAllowed: tagNoSpacesAllowed})
-YIELD value as validated
-WHERE validated.tagNoSpacesAllowed IS NOT NULL
-MERGE (t:Tag { id: validated.tagNoSpacesAllowed, disabled: false, deleted: false })
-MERGE (p)-[:TAGGED]->(t)
-;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/contributions/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/contributions/delete.cql
deleted file mode 100644
index 70adad664ae1ade6f8ea4a68dd89fbe3c2072ec6..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/contributions/delete.cql
+++ /dev/null
@@ -1,2 +0,0 @@
-MATCH (n:Post) DETACH DELETE n;
-MATCH (n:Tag) DETACH DELETE n;
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/delete_all.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/delete_all.cql
deleted file mode 100644
index d018713000daf88c6155f920acf4ff33b2b031a4..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/delete_all.cql
+++ /dev/null
@@ -1 +0,0 @@
-MATCH (n) DETACH DELETE n;
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/emotions/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/emotions/delete.cql
deleted file mode 100644
index 18fb6699fb28a25e79da4b9342a79ec03317f7c6..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/emotions/delete.cql
+++ /dev/null
@@ -1 +0,0 @@
-MATCH (u:User)-[e:EMOTED]->(c:Post) DETACH DELETE e;
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/emotions/emotions.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/emotions/emotions.cql
deleted file mode 100644
index 06341f277e91555b08931383bab876c1f0a604bd..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/emotions/emotions.cql
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[X]   userId: {
-[X]     type: String,
-[X]     required: true,
-[-]     index: true
-      },
-[X]   contributionId: {
-[X]     type: String,
-[X]     required: true,
-[-]     index: true
-      },
-[?]   rated: {
-[X]     type: String,
-[ ]     required: true,
-[?]     enum: ['funny', 'happy', 'surprised', 'cry', 'angry']
-      },
-[X]   createdAt: {
-[X]     type: Date,
-[X]     default: Date.now
-      },
-[X]   updatedAt: {
-[X]     type: Date,
-[X]     default: Date.now
-      },
-[-]   wasSeeded: { type: Boolean }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as emotion
-MATCH (u:User {id: emotion.userId}),
-      (c:Post {id: emotion.contributionId})
-MERGE  (u)-[e:EMOTED {
-          id:        emotion._id["$oid"],
-          emotion:   emotion.rated,
-          createdAt: datetime(emotion.createdAt.`$date`),
-          updatedAt: datetime(emotion.updatedAt.`$date`)
-        }]->(c)
-RETURN e;
-/*
-  // Queries
-  // user sets an emotion emotion:
-  // MERGE (u)-[e:EMOTED {id: ..., emotion: "funny", createdAt: ..., updatedAt: ...}]->(c)
-  // user removes emotion
-  // MATCH  (u)-[e:EMOTED]->(c) DELETE e
-  // contribution distributions over every `emotion` property value for one post
-  // MATCH (u:User)-[e:EMOTED]->(c:Post {id: "5a70bbc8508f5b000b443b1a"}) RETURN e.emotion,COUNT(e.emotion)
-  // contribution distributions over every `emotion` property value for one user (advanced - "whats the primary emotion used by the user?")
-  // MATCH (u:User{id:"5a663b1ac64291000bf302a1"})-[e:EMOTED]->(c:Post) RETURN e.emotion,COUNT(e.emotion)
-  // contribution distributions over every `emotion` property value for all posts created by one user (advanced - "how do others react to my contributions?")
-  // MATCH (u:User)-[e:EMOTED]->(c:Post)<-[w:WROTE]-(a:User{id:"5a663b1ac64291000bf302a1"}) RETURN e.emotion,COUNT(e.emotion)
-  // if we can filter the above on a variable timescale that would be great (should be possible on createdAt and updatedAt fields)
-*/
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/follows/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/follows/delete.cql
deleted file mode 100644
index 3de01f8ea36b1a4fffa5dee7b05e74402ce5145a..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/follows/delete.cql
+++ /dev/null
@@ -1 +0,0 @@
-MATCH (u1:User)-[f:FOLLOWS]->(u2:User) DETACH DELETE f;
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/follows/follows.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/follows/follows.cql
deleted file mode 100644
index fac858a9a9d6f8058e02793fbea8f8e0e1ce46e8..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/follows/follows.cql
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[?]   userId: {
-[-]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[?]   foreignId: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[?]   foreignService: { // db.getCollection('follows').distinct('foreignService') returns 'organizations' and 'users'
-[ ]     type: String,
-[ ]     required: true,
-[ ]     index: true
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-    index:
-[?] { userId: 1, foreignId: 1, foreignService: 1 },{ unique: true } // is the unique constraint modeled?
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as follow
-MATCH (u1:User {id: follow.userId}), (u2:User {id: follow.foreignId})
-MERGE (u1)-[:FOLLOWS]->(u2)
-;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/import.sh b/deployment/legacy-migration/maintenance-worker/migration/neo4j/import.sh
deleted file mode 100755
index ccb22dafbde7b72fc175901ae7b68cc1250122f0..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/import.sh
+++ /dev/null
@@ -1,108 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# import .env config
-set -o allexport
-source $(dirname "$0")/.env
-set +o allexport
-
-# Delete collection function definition
-function delete_collection () {
-  # Delete from Database
-  echo "Delete $2"
-  "${IMPORT_CYPHERSHELL_BIN}" < $(dirname "$0")/$1/delete.cql > /dev/null
-  # Delete index file
-  rm -f "${IMPORT_PATH}splits/$2.index"
-}
-
-# Import collection function definition
-function import_collection () {
-  # index file of those chunks we have already imported
-  INDEX_FILE="${IMPORT_PATH}splits/$1.index"
-  # load index file
-  if [ -f "$INDEX_FILE" ]; then
-    readarray -t IMPORT_INDEX <$INDEX_FILE
-  else
-    declare -a IMPORT_INDEX
-  fi
-  # for each chunk import data
-  for chunk in ${IMPORT_PATH}splits/$1/*
-  do
-    CHUNK_FILE_NAME=$(basename "${chunk}")
-    # does the index not contain the chunk file name?
-    if [[ ! " ${IMPORT_INDEX[@]} " =~ " ${CHUNK_FILE_NAME} " ]]; then
-      # calculate the path of the chunk
-      export IMPORT_CHUNK_PATH_CQL_FILE="${IMPORT_CHUNK_PATH_CQL}$1/${CHUNK_FILE_NAME}"
-      # load the neo4j command and replace file variable with actual path
-      NEO4J_COMMAND="$(envsubst '${IMPORT_CHUNK_PATH_CQL_FILE}' < $(dirname "$0")/$2)"
-      # run the import of the chunk
-      echo "Import $1 ${CHUNK_FILE_NAME} (${chunk})"
-      echo "${NEO4J_COMMAND}" | "${IMPORT_CYPHERSHELL_BIN}" > /dev/null
-      # add file to array and file
-      IMPORT_INDEX+=("${CHUNK_FILE_NAME}")
-      echo "${CHUNK_FILE_NAME}" >> ${INDEX_FILE}
-    else
-      echo "Skipping $1 ${CHUNK_FILE_NAME} (${chunk})"
-    fi
-  done
-}
-
-# Time variable
-SECONDS=0
-
-# Delete all Neo4J Database content
-echo "Deleting Database Contents"
-delete_collection "badges" "badges"
-delete_collection "categories" "categories"
-delete_collection "users" "users"
-delete_collection "follows" "follows_users"
-delete_collection "contributions" "contributions_post"
-delete_collection "contributions" "contributions_cando"
-delete_collection "shouts" "shouts"
-delete_collection "comments" "comments"
-delete_collection "emotions" "emotions"
-
-#delete_collection "invites"
-#delete_collection "notifications"
-#delete_collection "organizations"
-#delete_collection "pages"
-#delete_collection "projects"
-#delete_collection "settings"
-#delete_collection "status"
-#delete_collection "systemnotifications"
-#delete_collection "userscandos"
-#delete_collection "usersettings"
-echo "DONE"
-
-# Import Data
-echo "Start Importing Data"
-import_collection "badges" "badges/badges.cql"
-import_collection "categories" "categories/categories.cql"
-import_collection "users_verified" "users/users.cql"
-import_collection "follows_users" "follows/follows.cql"
-#import_collection "follows_organizations" "follows/follows.cql"
-import_collection "contributions_post" "contributions/contributions.cql"
-#import_collection "contributions_cando" "contributions/contributions.cql"
-#import_collection "contributions_DELETED" "contributions/contributions.cql"
-import_collection "shouts" "shouts/shouts.cql"
-import_collection "comments" "comments/comments.cql"
-import_collection "emotions" "emotions/emotions.cql"
-
-# import_collection "invites"
-# import_collection "notifications"
-# import_collection "organizations"
-# import_collection "pages"
-# import_collection "systemnotifications"
-# import_collection "userscandos"
-# import_collection "usersettings"
-
-# only contains dummy data
-# import_collection "projects"
-
-# only contains alpha-specific data
-# import_collection "status"
-# import_collection "settings"
-
-echo "DONE"
-
-echo "Time elapsed: $SECONDS seconds"
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/invites/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/invites/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/invites/invites.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/invites/invites.cql
deleted file mode 100644
index f4a5bf0061951f99541f5f2fd5adb8fff5c8f379..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/invites/invites.cql
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   email: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true,
-[ ]     unique: true
-      },
-[ ]   code: {
-[ ]     type: String,
-[-]     index: true,
-[ ]     required: true
-      },
-[ ]   role: {
-[ ]     type: String,
-[ ]     enum: ['admin', 'moderator', 'manager', 'editor', 'user'],
-[ ]     default: 'user'
-      },
-[ ]   invitedByUserId: { type: String },
-[ ]   language: { type: String },
-[ ]   badgeIds: [],
-[ ]   wasUsed: {
-[ ]     type: Boolean,
-[-]     index: true
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-  }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as invite;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/notifications/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/notifications/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/notifications/notifications.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/notifications/notifications.cql
deleted file mode 100644
index aa6ac8eb9159304eb42992220a837fd1d50151c7..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/notifications/notifications.cql
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   userId: { // User this notification is sent to
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   type: {
-[ ]     type: String,
-[ ]     required: true,
-[ ]     enum: ['comment','comment-mention','contribution-mention','following-contribution']
-      },
-[ ]   relatedUserId: {
-[ ]     type: String,
-[-]     index: true
-      },
-[ ]   relatedContributionId: {
-[ ]     type: String,
-[-]     index: true
-      },
-[ ]   relatedOrganizationId: {
-[ ]     type: String,
-[-]     index: true
-      },
-[ ]   relatedCommentId: {type: String },
-[ ]   unseen: {
-[ ]     type: Boolean,
-[ ]     default: true,
-[-]     index: true
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-  }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as notification;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/organizations/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/organizations/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/organizations/organizations.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/organizations/organizations.cql
deleted file mode 100644
index e473e697c646697f8eb74b7b20e6736e25ba3bc0..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/organizations/organizations.cql
+++ /dev/null
@@ -1,137 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   name: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   slug: {
-[ ]     type: String,
-[ ]     required: true,
-[ ]     unique: true,
-[-]     index: true
-      },
-[ ]   followersCounts: {
-[ ]     users: {
-[ ]       type: Number,
-[ ]       default: 0
-        },
-[ ]     organizations: {
-[ ]       type: Number,
-[ ]       default: 0
-        },
-[ ]     projects: {
-[ ]       type: Number,
-[ ]       default: 0
-        }
-      },
-[ ]   followingCounts: {
-[ ]     users: {
-[ ]       type: Number,
-[ ]       default: 0
-        },
-[ ]     organizations: {
-[ ]       type: Number,
-[ ]       default: 0
-        },
-[ ]     projects: {
-[ ]       type: Number,
-[ ]       default: 0
-        }
-      },
-[ ]   categoryIds: {
-[ ]     type: Array,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   logo: { type: String },
-[ ]   coverImg: { type: String },
-[ ]   userId: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   description: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   descriptionExcerpt: { type: String }, // will be generated automatically
-[ ]   publicEmail: { type: String },
-[ ]   url: { type: String },
-[ ]   type: {
-[ ]     type: String,
-[-]     index: true,
-[ ]     enum: ['ngo', 'npo', 'goodpurpose', 'ev', 'eva']
-      },
-[ ]   language: {
-[ ]     type: String,
-[ ]     required: true,
-[ ]     default: 'de',
-[-]     index: true
-      },
-[ ]   addresses: {
-[ ]     type: [{
-[ ]       street: {
-[ ]         type: String,
-[ ]         required: true
-          },
-[ ]       zipCode: {
-[ ]         type: String,
-[ ]         required: true
-          },
-[ ]       city: {
-[ ]         type: String,
-[ ]         required: true
-          },
-[ ]       country: {
-[ ]         type: String,
-[ ]         required: true
-          },
-[ ]       lat: {
-[ ]         type: Number,
-[ ]         required: true
-          },
-[ ]       lng: {
-[ ]         type: Number,
-[ ]         required: true
-          }
-        }],
-[ ]     default: []
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   isEnabled: {
-[ ]     type: Boolean,
-[ ]     default: false,
-[-]     index: true
-      },
-[ ]   reviewedBy: {
-[ ]     type: String,
-[ ]     default: null,
-[-]     index: true
-      },
-[ ]   tags: {
-[ ]     type: Array,
-[-]     index: true
-      },
-[ ]   deleted: {
-[ ]     type: Boolean,
-[ ]     default: false,
-[-]     index: true
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as organisation;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/pages/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/pages/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/pages/pages.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/pages/pages.cql
deleted file mode 100644
index 18223136b7c8c225d6a0786aa2a9fb6c19550717..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/pages/pages.cql
+++ /dev/null
@@ -1,55 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   title: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   slug: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   type: {
-[ ]     type: String,
-[ ]     required: true,
-[ ]     default: 'page'
-      },
-[ ]   key: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   content: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   language: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   active: {
-[ ]     type: Boolean,
-[ ]     default: true,
-[-]     index: true
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-    index:
-[ ] { slug: 1, language: 1 },{ unique: true }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as page;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/projects/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/projects/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/projects/projects.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/projects/projects.cql
deleted file mode 100644
index ed859c157ebaffe21215e69d3150a9e9781854f7..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/projects/projects.cql
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   name: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   slug: { type: String },
-[ ]   followerIds: [],
-[ ]   categoryIds: { type: Array },
-[ ]   logo: { type: String },
-[ ]   userId: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   description: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   content: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   addresses: {
-[ ]     type: Array,
-[ ]     default: []
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as project;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/settings/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/settings/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/settings/settings.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/settings/settings.cql
deleted file mode 100644
index 1d557d30c94ecd428503b4eed8fa35016b267bad..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/settings/settings.cql
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   key: {
-[ ]     type: String,
-[ ]     default: 'system',
-[-]     index: true,
-[ ]     unique: true
-      },
-[ ]   invites: {
-[ ]     userCanInvite: {
-[ ]       type: Boolean,
-[ ]       required: true,
-[ ]       default: false
-        },
-[ ]     maxInvitesByUser: {
-[ ]       type: Number,
-[ ]       required: true,
-[ ]       default: 1
-        },
-[ ]     onlyUserWithBadgesCanInvite: {
-[ ]       type: Array,
-[ ]       default: []
-        }
-      },
-[ ]   maintenance: false
-    }, {
-[ ]   timestamps: true
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as setting;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/shouts/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/shouts/delete.cql
deleted file mode 100644
index 21c2e1f908a4085831c42365ca2ccdbd705fd3c9..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/shouts/delete.cql
+++ /dev/null
@@ -1 +0,0 @@
-// this is just a relation between users and contributions - no need to delete
\ No newline at end of file
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/shouts/shouts.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/shouts/shouts.cql
deleted file mode 100644
index d370b4b4adc7cd66778c00557bc05a527a9baa7d..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/shouts/shouts.cql
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[?]   userId: {
-[X]     type: String,
-[ ]     required: true, // Not required in Nitro
-[-]     index: true
-      },
-[?]   foreignId: {
-[X]     type: String,
-[ ]     required: true, // Not required in Nitro
-[-]     index: true
-      },
-[?]   foreignService: { // db.getCollection('shots').distinct('foreignService') returns 'contributions'
-[X]     type: String,
-[ ]     required: true, // Not required in Nitro
-[-]     index: true
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-    index:
-[?] { userId: 1, foreignId: 1 },{ unique: true } // is the unique constrain modeled?
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as shout
-MATCH (u:User {id: shout.userId}), (p:Post {id: shout.foreignId})
-MERGE (u)-[:SHOUTED]->(p)
-;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/status/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/status/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/status/status.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/status/status.cql
deleted file mode 100644
index 010c2ca0925acfa2de49deae4ecdbb3ffb3c5b35..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/status/status.cql
+++ /dev/null
@@ -1,19 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   maintenance: {
-[ ]     type: Boolean,
-[ ]     default: false
-      },
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as status;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/systemnotifications/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/systemnotifications/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/systemnotifications/systemnotifications.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/systemnotifications/systemnotifications.cql
deleted file mode 100644
index 4bd33eb7cd334051ce06af1a5454abf2a0055973..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/systemnotifications/systemnotifications.cql
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   type: {
-[ ]     type: String,
-[ ]     default: 'info',
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   title: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   content: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   slot: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   language: {
-[ ]     type: String,
-[ ]     required: true,
-[-]     index: true
-      },
-[ ]   permanent: {
-[ ]     type: Boolean,
-[ ]     default: false
-      },
-[ ]   requireConfirmation: {
-[ ]     type: Boolean,
-[ ]     default: false
-      },
-[ ]   active: {
-[ ]     type: Boolean,
-[ ]     default: true,
-[-]     index: true
-      },
-[ ]   totalCount: {
-[ ]     type: Number,
-[ ]     default: 0
-      },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as systemnotification;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/users/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/users/delete.cql
deleted file mode 100644
index 32679f6c8490054035b92887a55ec407da8ad157..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/users/delete.cql
+++ /dev/null
@@ -1,2 +0,0 @@
-MATCH (n:User) DETACH DELETE n;
-MATCH (e:EmailAddress) DETACH DELETE e;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/users/users.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/users/users.cql
deleted file mode 100644
index 02dff089f613b7d0c75f5d408b84aab004a08549..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/users/users.cql
+++ /dev/null
@@ -1,124 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[?]   email: {
-[X]     type: String,
-[-]     index: true,
-[X]     required: true,
-[?]     unique: true //unique constrain missing in Nitro
-      },
-[?]   password: { // Not required in Alpha -> verify if always present
-[X]     type: String
-      },
-[X]   name: { type: String },
-[X]   slug: {
-[X]     type: String,
-[-]     index: true
-      },
-[ ]   gender: { type: String },
-[ ]   followersCounts: {
-[ ]     users: {
-[ ]       type: Number,
-[ ]       default: 0
-        },
-[ ]     organizations: {
-[ ]       type: Number,
-[ ]       default: 0
-        },
-[ ]     projects: { 
-[ ]       type: Number,
-[ ]       default: 0
-        }
-      },
-[ ]   followingCounts: {
-[ ]     users: {
-[ ]       type: Number,
-[ ]       default: 0
-        },
-[ ]     organizations: {
-[ ]       type: Number,
-[ ]       default: 0
-        },
-[ ]     projects: {
-[ ]       type: Number,
-[ ]       default: 0
-        }
-      },
-[ ]   timezone: { type: String },
-[X]   avatar: { type: String },
-[X]   coverImg: { type: String },
-[ ]   doiToken: { type: String },
-[ ]   confirmedAt: { type: Date },
-[?]   badgeIds: [], // Verify this is working properly
-[?]   deletedAt: { type: Date }, // The Date of deletion is not saved in Nitro
-[?]   createdAt: {
-[?]     type: Date, // Modeled as String in Nitro
-[ ]     default: Date.now // Default value is missing in Nitro
-      },
-[?]   updatedAt: {
-[?]     type: Date, // Modeled as String in Nitro
-[ ]     default: Date.now // Default value is missing in Nitro
-      },
-[ ]   lastActiveAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   isVerified: { type: Boolean },
-[?]   role: {
-[X]     type: String,
-[-]     index: true,
-[?]     enum: ['admin', 'moderator', 'manager', 'editor', 'user'], // missing roles manager & editor in Nitro
-[ ]     default: 'user' // Default value is missing in Nitro
-      },
-[ ]   verifyToken: { type: String },
-[ ]   verifyShortToken: { type: String },
-[ ]   verifyExpires: { type: Date },
-[ ]   verifyChanges: { type: Object },
-[ ]   resetToken: { type: String },
-[ ]   resetShortToken: { type: String },
-[ ]   resetExpires: { type: Date },
-[X]   wasSeeded: { type: Boolean },
-[X]   wasInvited: { type: Boolean },
-[ ]   language: {
-[ ]     type: String,
-[ ]     default: 'en'
-      },
-[ ]   termsAndConditionsAccepted: { type: Date }, // we display the terms and conditions on registration
-[ ]   systemNotificationsSeen: {
-[ ]     type: Array,
-[ ]     default: []
-      }
-    }
-*/
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as user
-MERGE(u:User {id: user._id["$oid"]})
-ON CREATE SET
-u.name       = user.name,
-u.slug       = COALESCE(user.slug, apoc.text.random(20, "[A-Za-z]")),
-u.email      = user.email,
-u.encryptedPassword   = user.password,
-u.avatar     = replace(user.avatar, 'https://api-alpha.human-connection.org', ''),
-u.coverImg   = replace(user.coverImg, 'https://api-alpha.human-connection.org', ''),
-u.wasInvited = user.wasInvited,
-u.wasSeeded  = user.wasSeeded,
-u.role       = toLower(user.role),
-u.createdAt  = user.createdAt.`$date`,
-u.updatedAt  = user.updatedAt.`$date`,
-u.deleted    = user.deletedAt IS NOT NULL,
-u.disabled   = false
-MERGE (e:EmailAddress {
-  email: user.email,
-  createdAt: toString(datetime()),
-  verifiedAt: toString(datetime())
-})
-MERGE (e)-[:BELONGS_TO]->(u)
-MERGE (u)-[:PRIMARY_EMAIL]->(e)
-WITH u, user, user.badgeIds AS badgeIds
-UNWIND badgeIds AS badgeId
-MATCH (b:Badge {id: badgeId})
-MERGE (b)-[:REWARDED]->(u)
-;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/userscandos/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/userscandos/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/userscandos/userscandos.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/userscandos/userscandos.cql
deleted file mode 100644
index 55f58f17167f656ff572b81d0a07923e36be7e62..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/userscandos/userscandos.cql
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   userId: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   contributionId: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   done: {
-[ ]     type: Boolean,
-[ ]     default: false
-      },
-[ ]   doneAt: { type: Date },
-[ ]   createdAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      },
-[ ]   wasSeeded: { type: Boolean }
-    }
-    index:
-[ ] { userId: 1, contributionId: 1 },{ unique: true }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as usercando;
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/usersettings/delete.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/usersettings/delete.cql
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/deployment/legacy-migration/maintenance-worker/migration/neo4j/usersettings/usersettings.cql b/deployment/legacy-migration/maintenance-worker/migration/neo4j/usersettings/usersettings.cql
deleted file mode 100644
index 722625944a22ffb39f7983269b52b69be73c356f..0000000000000000000000000000000000000000
--- a/deployment/legacy-migration/maintenance-worker/migration/neo4j/usersettings/usersettings.cql
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
-// Alpha Model
-// [ ] Not modeled in Nitro
-// [X] Modeled in Nitro
-// [-] Omitted in Nitro
-// [?] Unclear / has work to be done for Nitro
-    {
-[ ]   userId: {
-[ ]     type: String,
-[ ]     required: true,
-[ ]     unique: true
-      },
-[ ]   blacklist: {
-[ ]     type: Array,
-[ ]     default: []
-      },
-[ ]   uiLanguage: {
-[ ]     type: String,
-[ ]     required: true
-      },
-[ ]   contentLanguages: {
-[ ]     type: Array,
-[ ]     default: []
-      },
-[ ]   filter: {
-[ ]     categoryIds: {
-[ ]       type: Array,
-[ ]       index: true
-        },
-[ ]     emotions: {
-[ ]       type: Array,
-[ ]       index: true
-        }
-      },
-[ ]   hideUsersWithoutTermsOfUseSigniture: {type: Boolean},
-[ ]   updatedAt: {
-[ ]     type: Date,
-[ ]     default: Date.now
-      }
-    }
-*/
-
-CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as usersetting;
diff --git a/deployment/minikube/README.md b/deployment/minikube/README.md
deleted file mode 100644
index 014f9510c02eaf0c2d47587f237110793039a170..0000000000000000000000000000000000000000
--- a/deployment/minikube/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Minikube
-
-There are many Kubernetes providers, but if you're just getting started, Minikube is a tool that you can use to get your feet wet.
-
-After you [installed Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/)
-open your minikube dashboard:
-
-```text
-$ minikube dashboard
-```
-
-This will give you an overview. Some of the steps below need some timing to make resources available to other dependent deployments. Keeping an eye on the dashboard is a great way to check that.
-
-Follow the installation instruction for [Human Connection](../ocelot-social/README.md).
-If all the pods and services have settled and everything looks green in your
-minikube dashboard, expose the services you want on your host system.
-
-For example:
-
-```text
-$ minikube service webapp --namespace=ocelotsocialnetwork
-# optionally
-$ minikube service backend --namespace=ocelotsocialnetwork
-```
-
diff --git a/deployment/monitoring/README.md b/deployment/monitoring/README.md
deleted file mode 100644
index 46dfb0301b2133adb4bf04b0221d86c9a046f40d..0000000000000000000000000000000000000000
--- a/deployment/monitoring/README.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Metrics
-
-You can optionally setup [prometheus](https://prometheus.io/) and
-[grafana](https://grafana.com/) for metrics.
-
-We follow this tutorial [here](https://medium.com/@chris_linguine/how-to-monitor-your-kubernetes-cluster-with-prometheus-and-grafana-2d5704187fc8):
-
-```bash
-kubectl proxy # proxy to your kubernetes dashboard
-
-helm repo list
-# If using helm v3, the stable repository is not set, so you need to manually add it.
-helm repo add stable https://kubernetes-charts.storage.googleapis.com
-# Create a monitoring namespace for your cluster
-kubectl create namespace monitoring
-helm --namespace monitoring install prometheus stable/prometheus
-kubectl -n monitoring get pods # look for 'server'
-kubectl port-forward -n monitoring <PROMETHEUS_SERVER_ID> 9090
-# You can now see your prometheus server on: http://localhost:9090
-
-# Make sure you are in folder `deployment/`
-kubectl apply -f monitoring/grafana/config.yml
-helm --namespace monitoring install grafana stable/grafana -f monitoring/grafana/values.yml
-# Get the admin password for grafana from your kubernetes dashboard.
-kubectl --namespace monitoring port-forward <POD_NAME> 3000
-# You can now see your grafana dashboard on: http://localhost:3000
-# Login with user 'admin' and the password you just looked up.
-# In your dashboard import this dashboard:
-# https://grafana.com/grafana/dashboards/1860
-# Enter ID 180 and choose "Prometheus" as datasource.
-# You got metrics!
-```
-
-Now you should see something like this:
-
-![Grafana dashboard](./grafana/metrics.png)
-
-You can set up a grafana dashboard, by visiting https://grafana.com/dashboards, finding one that is suitable and copying it's id.
-You then go to the left hand menu in localhost, choose `Dashboard` > `Manage` > `Import`
-Paste in the id, click `Load`, select `Prometheus` for the data source, and click `Import`
-
-When you just installed prometheus and grafana, the data will not be available
-immediately, so wait for a couple of minutes and reload.
diff --git a/deployment/monitoring/grafana/config.yml b/deployment/monitoring/grafana/config.yml
deleted file mode 100644
index a338e3480d5896db6d72efd6b8c284916a88de18..0000000000000000000000000000000000000000
--- a/deployment/monitoring/grafana/config.yml
+++ /dev/null
@@ -1,16 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: prometheus-grafana-datasource
-  namespace: monitoring
-  labels:
-    grafana_datasource: '1'
-data:
-  datasource.yaml: |-
-    apiVersion: 1
-    datasources:
-    - name: Prometheus
-      type: prometheus
-      access: proxy
-      orgId: 1
-      url: http://prometheus-server.monitoring.svc.cluster.local
diff --git a/deployment/monitoring/grafana/metrics.png b/deployment/monitoring/grafana/metrics.png
deleted file mode 100644
index cc68f1bad10efef0abdf03fc1194808c17dc6484..0000000000000000000000000000000000000000
Binary files a/deployment/monitoring/grafana/metrics.png and /dev/null differ
diff --git a/deployment/monitoring/grafana/values.yml b/deployment/monitoring/grafana/values.yml
deleted file mode 100644
index 02004cc1c43d0d498d63b3624679661863b642a0..0000000000000000000000000000000000000000
--- a/deployment/monitoring/grafana/values.yml
+++ /dev/null
@@ -1,4 +0,0 @@
-sidecar:
-  datasources:
-    enabled: true
-    label: grafana_datasource
diff --git a/deployment/namespace.yaml b/deployment/namespace.yaml
deleted file mode 100644
index 43898c54662aacca5fc20e4f48e4a4cd3fbec434..0000000000000000000000000000000000000000
--- a/deployment/namespace.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-kind: Namespace
-apiVersion: v1
-metadata:
-  name: ocelot-social
-  labels:
-    name: ocelot-social
diff --git a/deployment/ocelot-social/README.md b/deployment/ocelot-social/README.md
deleted file mode 100644
index 29680f0c8a0d8c5faf8b18c61088fcfe361a5997..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/README.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Kubernetes Configuration For ocelot.social
-
-Deploying *ocelot.social* with kubernetes is straight forward. All you have to
-do is to change certain parameters, like domain names and API keys, then you
-just apply our provided configuration files to your cluster.
-
-## Configuration
-
-Change into the `./deployment` directory and copy our provided templates:
-
-```bash
-# in folder deployment/ocelot-social/
-$ cp templates/secrets.template.yaml ./secrets.yaml
-$ cp templates/configmap.template.yaml ./configmap.yaml
-```
-
-Change the `configmap.yaml` in the `./deployment/ocelot-social` directory as needed, all variables will be available as
-environment variables in your deployed Kubernetes pods.
-
-Probably you want to change this environment variable to your actual domain:
-
-```yaml
-# in configmap.yaml
-CLIENT_URI: "https://develop-k8s.ocelot.social"
-```
-
-If you want to edit secrets, you have to `base64` encode them. See [Kubernetes Documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually).
-
-```bash
-# example how to base64 a string:
-$ echo -n 'admin' | base64
-YWRtaW4=
-```
-
-Those secrets get `base64` decoded and are available as environment variables in
-your deployed Kubernetes pods.
-
-## Create A Namespace
-
-```bash
-# in folder deployment/
-$ kubectl apply -f namespace.yaml
-```
-
-If you have a [Kubernets Dashboard](../digital-ocean/dashboard/README.md)
-deployed you should switch to namespace `ocelot-social` in order to
-monitor the state of your deployments.
-
-## Create Persistent Volumes
-
-While the deployments and services can easily be restored, simply by deleting
-and applying the Kubernetes configurations again, certain data is not that
-easily recovered. Therefore we separated persistent volumes from deployments
-and services. There is a [dedicated section](../volumes/README.md). Create those
-persistent volumes once before you apply the configuration.
-
-## Apply The Configuration
-
-Before you apply you should think about the size of the droplet(s) you need.
-For example, the requirements for Neo4j v3.5.14 are [here](https://neo4j.com/docs/operations-manual/3.5/installation/requirements/).
-Tips to configure the pod resources you find [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
-
-```bash
-# in folder deployment/
-$ kubectl apply -f ocelot-social/
-```
-
-This can take a while, because Kubernetes will download the Docker images from Docker Hub. Sit
-back and relax and have a look into your kubernetes dashboard. Wait until all
-pods turn green and they don't show a warning `Waiting: ContainerCreating`
-anymore.
diff --git a/deployment/ocelot-social/deployment-backend.yaml b/deployment/ocelot-social/deployment-backend.yaml
deleted file mode 100644
index 1664686d974e26a898bbc8ef22aecc5411c74736..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/deployment-backend.yaml
+++ /dev/null
@@ -1,62 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  creationTimestamp: null
-  labels:
-    ocelot.social/commit: COMMIT
-    ocelot.social/selector: deployment-ocelot-social-backend
-  name: backend
-  namespace: ocelot-social
-spec:
-  minReadySeconds: 15
-  progressDeadlineSeconds: 60
-  replicas: 1
-  revisionHistoryLimit: 2147483647
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-ocelot-social-backend
-  strategy:
-    rollingUpdate:
-      maxSurge: 0
-      maxUnavailable: 100%
-    type: RollingUpdate
-  template:
-    metadata:
-      annotations:
-        backup.velero.io/backup-volumes: uploads
-      creationTimestamp: null
-      labels:
-        ocelot.social/commit: COMMIT
-        ocelot.social/selector: deployment-ocelot-social-backend
-      name: backend
-    spec:
-      containers:
-      - envFrom:
-        - configMapRef:
-            name: configmap
-        - secretRef:
-            name: ocelot-social
-        image: ocelotsocialnetwork/develop-backend:latest  # for develop
-        # image: ocelotsocialnetwork/develop-backend:0.6.3  # for production or staging
-        imagePullPolicy: Always  # for develop or staging
-        # imagePullPolicy: IfNotPresent  # for production
-        name: backend
-        ports:
-        - containerPort: 4000
-          protocol: TCP
-        resources: {}
-        terminationMessagePath: /dev/termination-log
-        terminationMessagePolicy: File
-        volumeMounts:
-        - mountPath: /develop-backend/public/uploads
-          name: uploads
-      dnsPolicy: ClusterFirst
-      restartPolicy: Always
-      schedulerName: default-scheduler
-      securityContext: {}
-      terminationGracePeriodSeconds: 30
-      volumes:
-      - name: uploads
-        persistentVolumeClaim:
-          claimName: uploads-claim
-status: {}
diff --git a/deployment/ocelot-social/deployment-neo4j.yaml b/deployment/ocelot-social/deployment-neo4j.yaml
deleted file mode 100644
index aa2c367c6009d2fe3d1bf7157f4042801e744419..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/deployment-neo4j.yaml
+++ /dev/null
@@ -1,65 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  creationTimestamp: null
-  labels:
-    ocelot.social/selector: deployment-ocelot-social-neo4j
-  name: neo4j
-  namespace: ocelot-social
-spec:
-  progressDeadlineSeconds: 2147483647
-  replicas: 1
-  revisionHistoryLimit: 2147483647
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-ocelot-social-neo4j
-  strategy:
-    rollingUpdate:
-      maxSurge: 0
-      maxUnavailable: 100%
-    type: RollingUpdate
-  template:
-    metadata:
-      annotations:
-        backup.velero.io/backup-volumes: neo4j-data
-      creationTimestamp: null
-      labels:
-        ocelot.social/selector: deployment-ocelot-social-neo4j
-      name: neo4j
-    spec:
-      containers:
-      - envFrom:
-        - configMapRef:
-            name: configmap
-        image: ocelotsocialnetwork/develop-neo4j:latest  # for develop
-        # image: ocelotsocialnetwork/develop-neo4j:0.6.3  # for production or staging
-        imagePullPolicy: Always  # for develop or staging
-        # imagePullPolicy: IfNotPresent  # for production
-        name: neo4j
-        ports:
-        - containerPort: 7687
-          protocol: TCP
-        - containerPort: 7474
-          protocol: TCP
-        resources:
-          # see description and add cpu https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
-          # see requirements for Neo4j v3.5.14 https://neo4j.com/docs/operations-manual/3.5/installation/requirements/
-          limits:
-            memory: 2G
-          requests:
-            memory: 2G
-        terminationMessagePath: /dev/termination-log
-        terminationMessagePolicy: File
-        volumeMounts:
-        - mountPath: /data/
-          name: neo4j-data
-      dnsPolicy: ClusterFirst
-      restartPolicy: Always
-      schedulerName: default-scheduler
-      securityContext: {}
-      terminationGracePeriodSeconds: 30
-      volumes:
-      - name: neo4j-data
-        persistentVolumeClaim:
-          claimName: neo4j-data-claim
-status: {}
diff --git a/deployment/ocelot-social/deployment-webapp.yaml b/deployment/ocelot-social/deployment-webapp.yaml
deleted file mode 100644
index 2cc742debb49cd783e4342a5bbdf5bb105e9fc69..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/deployment-webapp.yaml
+++ /dev/null
@@ -1,54 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  creationTimestamp: null
-  labels:
-    ocelot.social/commit: COMMIT
-    ocelot.social/selector: deployment-ocelot-social-webapp
-  name: web
-  namespace: ocelot-social
-spec:
-  minReadySeconds: 15
-  progressDeadlineSeconds: 60
-  replicas: 2
-  revisionHistoryLimit: 2147483647
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-ocelot-social-webapp
-  strategy:
-    rollingUpdate:
-      maxSurge: 1
-      maxUnavailable: 1
-    type: RollingUpdate
-  template:
-    metadata:
-      creationTimestamp: null
-      labels:
-        ocelot.social/commit: COMMIT
-        ocelot.social/selector: deployment-ocelot-social-webapp
-      name: web
-    spec:
-      containers:
-      - env:
-        - name: HOST
-          value: 0.0.0.0
-        envFrom:
-        - configMapRef:
-            name: configmap
-        - secretRef:
-            name: ocelot-social
-        image: ocelotsocialnetwork/webapp:latest
-        imagePullPolicy: Always
-        name: web
-        ports:
-        - containerPort: 3000
-          protocol: TCP
-        resources: {}
-        terminationMessagePath: /dev/termination-log
-        terminationMessagePolicy: File
-      dnsPolicy: ClusterFirst
-      restartPolicy: Always
-      schedulerName: default-scheduler
-      securityContext: {}
-      terminationGracePeriodSeconds: 30
-status: {}
diff --git a/deployment/ocelot-social/error-reporting/README.md b/deployment/ocelot-social/error-reporting/README.md
deleted file mode 100644
index 5a4eac7c9f3d2911d200e9ed0e7c86bb64a7091b..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/error-reporting/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# Error reporting
-
-We use [Sentry](https://github.com/getsentry/sentry) for error reporting in both
-our backend and web frontend. You can either use a hosted or a self-hosted
-instance. Just set the two `DSN` values in your
-[configmap](../templates/configmap.template.yaml) and update the `COMMIT`
-during deployment with your commit hash or the version of your release.
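-
-The two values live in the configmap shipped with these templates; a minimal excerpt (placeholder values from the template, to be replaced with your own DSNs):
-
-```yaml
-COMMIT: ""  # set to your commit hash or release version during deployment
-SENTRY_DSN_WEBAPP: "---toBeSet---"
-SENTRY_DSN_BACKEND: "---toBeSet---"
-```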
-
-## Self-hosted Sentry
-
-For data privacy reasons it is recommended to set up your own instance of Sentry.
-If you are lucky enough to have a kubernetes cluster with the required hardware
-support, try this [helm chart](https://github.com/helm/charts/tree/master/stable/sentry).
-
-On our kubernetes cluster we get "multi-attach" errors for persistent volumes.
-Apparently Digital Ocean's kubernetes clusters do not fulfill the requirements.
diff --git a/deployment/ocelot-social/mailserver/README.md b/deployment/ocelot-social/mailserver/README.md
deleted file mode 100644
index ed9292d5ce25768ae66cca666cfe64b41c6191b4..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/mailserver/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# Development Mail Server
-
-You can deploy a fake SMTP server which captures all sent mails and displays
-them in a web interface. The [sample configuration](../templates/configmap.template.yaml)
-assumes such a dummy server in its `SMTP_HOST` setting and points to
-a cluster-internal SMTP server.
-
-To deploy the SMTP server just uncomment the relevant code in the
-[ingress server configuration](../../https/templates/ingress.template.yaml) and
-run the following:
-
-```bash
-# in folder deployment/ocelot-social
-$ kubectl apply -f mailserver/
-```
-
-You might need to refresh the TLS secret to enable HTTPS on the publicly
-available web interface.
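-
-For orientation, the uncommented ingress rule might look roughly like this (a sketch only; the host name and the exact backend fields are assumptions, check the actual template):
-
-```yaml
-- host: mail.ocelot.social
-  http:
-    paths:
-    - path: /
-      backend:
-        serviceName: mailserver
-        servicePort: 80
-```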
diff --git a/deployment/ocelot-social/mailserver/deployment-mailserver.yaml b/deployment/ocelot-social/mailserver/deployment-mailserver.yaml
deleted file mode 100644
index dc5c49347828df6a2047cf87825a5e128704ac21..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/mailserver/deployment-mailserver.yaml
+++ /dev/null
@@ -1,51 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  creationTimestamp: null
-  labels:
-    ocelot.social/selector: deployment-ocelot-social-mailserver
-  name: mailserver
-  namespace: ocelot-social
-spec:
-  minReadySeconds: 15
-  progressDeadlineSeconds: 60
-  replicas: 1
-  revisionHistoryLimit: 2147483647
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-ocelot-social-mailserver
-  strategy:
-    rollingUpdate:
-      maxSurge: 1
-      maxUnavailable: 1
-    type: RollingUpdate
-  template:
-    metadata:
-      creationTimestamp: null
-      labels:
-        ocelot.social/selector: deployment-ocelot-social-mailserver
-      name: mailserver
-    spec:
-      containers:
-      - envFrom:
-        - configMapRef:
-            name: configmap
-        - secretRef:
-            name: ocelot-social
-        image: djfarrelly/maildev
-        imagePullPolicy: Always
-        name: mailserver
-        ports:
-        - containerPort: 80
-          protocol: TCP
-        - containerPort: 25
-          protocol: TCP
-        resources: {}
-        terminationMessagePath: /dev/termination-log
-        terminationMessagePolicy: File
-      dnsPolicy: ClusterFirst
-      restartPolicy: Always
-      schedulerName: default-scheduler
-      securityContext: {}
-      terminationGracePeriodSeconds: 30
-status: {}
diff --git a/deployment/ocelot-social/mailserver/service-mailserver.yaml b/deployment/ocelot-social/mailserver/service-mailserver.yaml
deleted file mode 100644
index 8c38a94b482e11f3a6ca960b8b261f10d720c5c2..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/mailserver/service-mailserver.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: mailserver
-  namespace: ocelot-social
-  labels:
-    ocelot.social/selector: deployment-ocelot-social-mailserver
-spec:
-  ports:
-  - name: web
-    port: 80
-    targetPort: 80
-  - name: smtp
-    port: 25
-    targetPort: 25
-  selector:
-    ocelot.social/selector: deployment-ocelot-social-mailserver
diff --git a/deployment/ocelot-social/maintenance/README.md b/deployment/ocelot-social/maintenance/README.md
deleted file mode 100644
index 08a177e65081b4f2725e5d6b9d4c930c33d87397..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/maintenance/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Maintenance mode
-
-> Despite our best efforts, systems sometimes require downtime for a variety of reasons. 
-
-Quote from [here](https://www.nrmitchi.com/2017/11/easy-maintenance-mode-in-kubernetes/)
-
-We use maintenance mode for manual database backups and restores. We also
-bring the application into maintenance mode for manual database migrations.
-
-## Deploy the service
-
-We prepared a sample configuration, so you can simply run:
-
-```sh
-# in folder deployment/
-$ kubectl apply -f ./ocelot-social/maintenance/
-```
-
-This will fire up a maintenance service.
-
-## Bring application into maintenance mode
-
-If you want a controlled downtime and want to bring your application into
-maintenance mode, you can edit your global ingress configuration.
-
-E.g. copy file [`deployment/digital-ocean/https/templates/ingress.template.yaml`](../../digital-ocean/https/templates/ingress.template.yaml) to new file `deployment/digital-ocean/https/ingress.yaml` and change the following:
-
-```yaml
-...
-
-  - host: develop-k8s.ocelot.social
-    http:
-      paths:
-      - path: /
-        backend:
-          # serviceName: web
-          serviceName: maintenance
-          # servicePort: 3000
-          servicePort: 80
-```
-
-Then run `$ kubectl apply -f deployment/digital-ocean/https/ingress.yaml`. If you
-want to deactivate the maintenance server, just undo the edit and apply the
-configuration again.
-
diff --git a/deployment/ocelot-social/maintenance/deployment-maintenance.yaml b/deployment/ocelot-social/maintenance/deployment-maintenance.yaml
deleted file mode 100644
index 8c38aab66d61c34b185008ee9c9a2bc9556edb8d..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/maintenance/deployment-maintenance.yaml
+++ /dev/null
@@ -1,27 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: maintenance
-  namespace: ocelot-social
-spec:
-  selector:
-    matchLabels:
-      ocelot.social/selector: deployment-ocelot-social-maintenance
-  template:
-    metadata:
-      labels:
-        ocelot.social/commit: COMMIT
-        ocelot.social/selector: deployment-ocelot-social-maintenance
-      name: maintenance
-    spec:
-      containers:
-        - name: web
-          env:
-            - name: HOST
-              value: 0.0.0.0
-          image: ocelotsocialnetwork/develop-maintenance:latest
-          ports:
-            - containerPort: 80
-          imagePullPolicy: Always
-      restartPolicy: Always
-      terminationGracePeriodSeconds: 30
diff --git a/deployment/ocelot-social/maintenance/service-maintenance.yaml b/deployment/ocelot-social/maintenance/service-maintenance.yaml
deleted file mode 100644
index 7c636e25367c26795e8e0fddcc794ceafcc5c5e6..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/maintenance/service-maintenance.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: maintenance
-  namespace: ocelot-social
-  labels:
-    ocelot.social/selector: deployment-ocelot-social-maintenance
-spec:
-  ports:
-    - name: web
-      port: 80
-      targetPort: 80
-  selector:
-    ocelot.social/selector: deployment-ocelot-social-maintenance
diff --git a/deployment/ocelot-social/service-backend.yaml b/deployment/ocelot-social/service-backend.yaml
deleted file mode 100644
index 16d5859f81a78c8c0c26d23d3d92bf6fb5286eb6..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/service-backend.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: backend
-  namespace: ocelot-social
-  labels:
-    ocelot.social/selector: deployment-ocelot-social-backend
-spec:
-  ports:
-    - name: web
-      port: 4000
-      targetPort: 4000
-  selector:
-    ocelot.social/selector: deployment-ocelot-social-backend
diff --git a/deployment/ocelot-social/service-neo4j.yaml b/deployment/ocelot-social/service-neo4j.yaml
deleted file mode 100644
index 2a0e404ea0b572e6adeaca9100a4eb357ecae3cf..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/service-neo4j.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: neo4j
-  namespace: ocelot-social
-  labels:
-    ocelot.social/selector: deployment-ocelot-social-neo4j
-spec:
-  ports:
-  - name: bolt
-    port: 7687
-    targetPort: 7687
-  - name: web
-    port: 7474
-    targetPort: 7474
-  selector:
-    ocelot.social/selector: deployment-ocelot-social-neo4j
diff --git a/deployment/ocelot-social/service-webapp.yaml b/deployment/ocelot-social/service-webapp.yaml
deleted file mode 100644
index a3acacb11cc971d1a9ea07352b63852eec521bee..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/service-webapp.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: web
-  namespace: ocelot-social
-  labels:
-    ocelot.social/selector: deployment-ocelot-social-webapp
-spec:
-  ports:
-    - name: web
-      port: 3000
-      targetPort: 3000
-  selector:
-    ocelot.social/selector: deployment-ocelot-social-webapp
diff --git a/deployment/ocelot-social/templates/configmap.template.yaml b/deployment/ocelot-social/templates/configmap.template.yaml
deleted file mode 100644
index daa5c792187833f27c9ebce4eefd9fd4ab34bb35..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/templates/configmap.template.yaml
+++ /dev/null
@@ -1,39 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-data:
-  # uncomment the following lines to use an S3 bucket to store images
-  # AWS_ACCESS_KEY_ID: see secrets
-  # AWS_BUCKET: ocelot-social-uploads
-  # AWS_ENDPOINT: fra1.digitaloceanspaces.com
-  # AWS_REGION: fra1
-  # AWS_SECRET_ACCESS_KEY: see secrets
-  CLIENT_URI: "https://develop-k8s.ocelot.social"  # change this to your domain
-  COMMIT: ""
-  EMAIL_DEFAULT_SENDER: devops@ocelot.social  # change this to your e-mail
-  GRAPHQL_PORT: "4000"
-  GRAPHQL_URI: "http://backend.ocelot-social:4000"  # leave this as ocelot-social
-  # uncomment the following line for the Neo4j Enterprise Edition instead of the Community Edition
-  # NEO4J_ACCEPT_LICENSE_AGREEMENT: "yes"
-  NEO4J_AUTH: "none"
-  # NEO4J_dbms_connector_bolt_thread__pool__max__size: "10000"
-  NEO4J_apoc_import_file_enabled: "true"
-  NEO4J_dbms_memory_heap_initial__size: "500M"
-  NEO4J_dbms_memory_heap_max__size: "500M"
-  NEO4J_dbms_memory_pagecache_size: "490M"
-  NEO4J_dbms_security_procedures_unrestricted: "algo.*,apoc.*"
-  NEO4J_URI: "bolt://neo4j.ocelot-social:7687"  # leave this as ocelot-social
-  PUBLIC_REGISTRATION: "false"
-  REDIS_DOMAIN: ---toBeSet(IP)---
-  # REDIS_PASSWORD: see secrets
-  REDIS_PORT: "6379"
-  SENTRY_DSN_WEBAPP: "---toBeSet---"
-  SENTRY_DSN_BACKEND: "---toBeSet---"
-  SMTP_HOST: "mail.ocelot.social"  # change this to your domain
-  # SMTP_PASSWORD: see secrets
-  SMTP_PORT: "25"  # change this to your port
-  # SMTP_USERNAME: see secrets
-  SMTP_IGNORE_TLS: 'true'  # change this to your setting
-  WEBSOCKETS_URI: wss://develop-k8s.ocelot.social/api/graphql  # change this to your domain
-metadata:
-  name: configmap
-  namespace: ocelot-social
diff --git a/deployment/ocelot-social/templates/secrets.template.yaml b/deployment/ocelot-social/templates/secrets.template.yaml
deleted file mode 100644
index 1e4865c8dadc9b4cac31a15983f8dd3fe7b2366a..0000000000000000000000000000000000000000
--- a/deployment/ocelot-social/templates/secrets.template.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-apiVersion: v1
-kind: Secret
-data:
-  # uncomment the following lines to use an S3 bucket to store images
-  # AWS_ACCESS_KEY_ID: ---toBeSet---
-  # AWS_SECRET_ACCESS_KEY: ---toBeSet---
-  JWT_SECRET: "Yi8mJjdiNzhCRiZmdi9WZA=="
-  PRIVATE_KEY_PASSPHRASE: "YTdkc2Y3OHNhZGc4N2FkODdzZmFnc2FkZzc4"
-  MAPBOX_TOKEN: "---toBeSet(IP)---"
-  NEO4J_USERNAME: ""
-  NEO4J_PASSWORD: ""
-  REDIS_PASSWORD: ---toBeSet---
-  SMTP_PASSWORD: "---toBeSet---"
-  SMTP_USERNAME: "---toBeSet---"
-metadata:
-  name: ocelot-social
-  namespace: ocelot-social
diff --git a/deployment/volumes/README.md b/deployment/volumes/README.md
deleted file mode 100644
index 1d849682c602630bf6e6530a2ceb4b83f29a0285..0000000000000000000000000000000000000000
--- a/deployment/volumes/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Persistent Volumes
-
-At the moment, the application needs two persistent volumes:
-
-* The `/data/` folder where `neo4j` stores its database and
-* the folder `/develop-backend/public/uploads` where the backend stores uploads, in case you don't use Digital Ocean Spaces (S3-compatible object storage) for this purpose.
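-
-For reference, the claims are consumed in the deployments like this (excerpt from the backend deployment in this repository):
-
-```yaml
-volumes:
-- name: uploads
-  persistentVolumeClaim:
-    claimName: uploads-claim
-```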
-
-As a matter of precaution, the persistent volume claims that set up these
-volumes live in a separate folder. You don't want to accidentally lose all the
-data in your database by running
-
-```sh
-kubectl delete -f ocelot-social/
-```
-
-or do you?
-
-## Create Persistent Volume Claims
-
-Run the following:
-
-```sh
-# in folder deployment/
-$ kubectl apply -f volumes
-persistentvolumeclaim/neo4j-data-claim created
-persistentvolumeclaim/uploads-claim created 
-```
-
-## Backup And Restore
-
-We tested a couple of options for disaster recovery in kubernetes. First,
-there is the [offline backup strategy](./neo4j-offline-backup/README.md) of the
-community edition of Neo4J, which you can also run on a local installation.
-Kubernetes also offers so-called [volume snapshots](./volume-snapshots/README.md).
-Changing the [reclaim policy](./reclaim-policy/README.md) of your persistent
-volumes might be an additional safety measure. Finally, there is also a
-kubernetes specific disaster recovery tool called [Velero](./velero/README.md).
diff --git a/deployment/volumes/neo4j-data.yaml b/deployment/volumes/neo4j-data.yaml
deleted file mode 100644
index 1053d010587811bf83d95f23ce65a58cc1da41e7..0000000000000000000000000000000000000000
--- a/deployment/volumes/neo4j-data.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
----
-  kind: PersistentVolumeClaim
-  apiVersion: v1
-  metadata:
-    name: neo4j-data-claim
-    namespace: ocelot-social
-  spec:
-    accessModes:
-      - ReadWriteOnce
-    resources:
-      requests:
-        storage: "10Gi"  # see requirements for Neo4j v3.5.14 https://neo4j.com/docs/operations-manual/3.5/installation/requirements/
diff --git a/deployment/volumes/neo4j-offline-backup/README.md b/deployment/volumes/neo4j-offline-backup/README.md
deleted file mode 100644
index 7c34aa7640766d07b90d31a8b75b1c4635672b25..0000000000000000000000000000000000000000
--- a/deployment/volumes/neo4j-offline-backup/README.md
+++ /dev/null
@@ -1,88 +0,0 @@
-# Backup (offline)
-
-This tutorial explains how to carry out an offline backup of your Neo4J
-database in a kubernetes cluster.
-
-An offline backup requires the Neo4J database to be stopped. Read
-[the docs](https://neo4j.com/docs/operations-manual/current/tools/dump-load/).
-Neo4j also offers online backups, but these are available in the Enterprise
-Edition only.
-
-The tricky part is to stop the Neo4j database *without* stopping the container.
-Neo4j's docker image starts `neo4j` by default, so we have to override this
-command with something that keeps the container running but does not terminate
-it.
-
-## Stop and Restart Neo4J Database in Kubernetes
-
-[This tutorial](http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/)
-explains how to keep a docker container running. For kubernetes, the way to
-override the docker image `CMD` is explained [here](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod).
-
-So, all we have to do is edit the kubernetes deployment of our Neo4J database
-and set a custom `command` every time we have to carry out tasks like backup,
-restore, seed etc.
-
-First bring the application into [maintenance mode](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/deployment/ocelot-social/maintenance/README.md) to ensure there are no
-database connections left and nobody can access the application.
-
-Run the following:
-
-```sh
-$ kubectl -n ocelot-social edit deployment develop-neo4j
-```
-
-Add the following `command` to the container entry in `spec.template.spec.containers`:
-
-```yaml
-command: ["tail", "-f", "/dev/null"]
-```
-
-and write the file which will update the deployment.
-
-The command `tail -f /dev/null` is the equivalent of *sleep forever*. It is a
-hack to keep the container busy and to prevent its shutdown. It will also
-override the default `neo4j` command and the kubernetes pod will not start the
-database.
-
-Now perform your tasks!
-
-When you're done, edit the deployment again and remove the `command`. Write the
-file and trigger an update of the deployment.
-
-## Create a Backup in Kubernetes
-
-First stop your Neo4J database, see above. Then:
-
-```sh
-$ kubectl -n ocelot-social get pods
-# Copy the ID of the pod running Neo4J.
-$ kubectl -n ocelot-social exec -it <POD-ID> bash
-# Once you're in the pod, dump the db to a file e.g. `/root/neo4j-backup`.
-> neo4j-admin dump --to=/root/neo4j-backup
-> exit
-# Download the file from the pod to your computer.
-$ kubectl cp ocelot-social/<POD-ID>:/root/neo4j-backup ./neo4j-backup
-```
-
-Revert your changes to deployment `develop-neo4j` which will restart the database.
-
-## Restore a Backup in Kubernetes
-
-First stop your Neo4J database. Then:
-
-```sh
-$ kubectl -n ocelot-social get pods
-# Copy the ID of the pod running Neo4J.
-# Then upload your local backup to the pod. Note that once the pod gets deleted
-# e.g. if you change the deployment, the backup file is gone with it.
-$ kubectl cp ./neo4j-backup ocelot-social/<POD-ID>:/root/
-$ kubectl -n ocelot-social exec -it <POD-ID> bash
-# Once you're in the pod restore the backup and overwrite the default database
-# called `graph.db` with `--force`.
-# This will delete all existing data in database `graph.db`!
-> neo4j-admin load --from=/root/neo4j-backup --force
-> exit
-```
-
-Revert your changes to deployment `develop-neo4j` which will restart the database.
diff --git a/deployment/volumes/neo4j-online-backup/README.md b/deployment/volumes/neo4j-online-backup/README.md
deleted file mode 100644
index 602bbd5774e5024e3c5f89fdc8c9fb4644b551ad..0000000000000000000000000000000000000000
--- a/deployment/volumes/neo4j-online-backup/README.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# Backup (online)
-
-## Online backups are only available with Neo4j Enterprise Edition and a license; see https://neo4j.com/licensing/ for the different licenses available
-
-This tutorial explains how to carry out an online backup of your Neo4J
-database in a kubernetes cluster.
-
-One of the benefits of doing an online backup is that the Neo4j database does not need to be stopped, so there is no downtime. Read [the docs](https://neo4j.com/docs/operations-manual/current/backup/performing/).
-
-To use Neo4j Enterprise you must add the following line to your configmap (if you use one) or to the `env` section of your `develop-neo4j` deployment:
-
-```yaml
-NEO4J_ACCEPT_LICENSE_AGREEMENT: "yes"
-```
-
-## Create a Backup in Kubernetes
-
-```sh
-# Back up the database with one command: this gets the develop-neo4j pod, execs into it, and runs the backup command
-$ kubectl -n=ocelot-social exec -it $(kubectl -n=ocelot-social get pods | grep develop-neo4j | awk '{ print $1 }') -- neo4j-admin backup --backup-dir=/var/lib/neo4j --name=neo4j-backup
-# Download the file from the pod to your computer.
-$ kubectl cp ocelot-social/$(kubectl -n=ocelot-social get pods | grep develop-neo4j | awk '{ print $1 }'):/var/lib/neo4j/neo4j-backup ./neo4j-backup/
-```
-
-You should now have a local backup of the database. If you want, you can simulate disaster recovery by exec-ing into the develop-neo4j pod, deleting all data and restoring from the backup.
-
-## Simulate a disaster where the database data is gone
-
-```sh
-$ kubectl -n=ocelot-social exec -it $(kubectl -n=ocelot-social get pods | grep develop-neo4j | awk '{ print $1 }') -- bash
-# Enter cypher-shell
-$ cypher-shell
-# Delete all data
-> MATCH (n) DETACH DELETE (n);
-
-> exit
-```
-
-## Restore a Backup in Kubernetes
-
-Restoration must be done while the database is not running; see [our docs](https://docs.human-connection.org/human-connection/deployment/volumes/neo4j-offline-backup#stop-and-restart-neo-4-j-database-in-kubernetes) for how to stop the database while keeping the container running.
-
-After you have stopped the database and still have the pod running, you can restore the backup by running these commands:
-
-```sh
-$ kubectl -n ocelot-social get pods
-# Copy the ID of the pod running Neo4J.
-# Then upload your local backup to the pod. Note that once the pod gets deleted
-# e.g. if you change the deployment, the backup file is gone with it.
-$ kubectl cp ./neo4j-backup/ ocelot-social/<POD-ID>:/root/
-$ kubectl -n ocelot-social exec -it <POD-ID> bash
-# Once you're in the pod restore the backup and overwrite the default database
-# called `graph.db` with `--force`.
-# This will delete all existing data in database `graph.db`!
-> neo4j-admin restore --from=/root/neo4j-backup --force
-> exit
-```
-
-Revert your changes to deployment `develop-neo4j` which will restart the database.
diff --git a/deployment/volumes/reclaim-policy/README.md b/deployment/volumes/reclaim-policy/README.md
deleted file mode 100644
index ecafd3973d15a37c4f3e4f5984ac1a743b4d984e..0000000000000000000000000000000000000000
--- a/deployment/volumes/reclaim-policy/README.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Change Reclaim Policy
-
-We recommend changing the `ReclaimPolicy`, so that if you delete the persistent
-volume claims, the associated volumes will be released, not deleted.
-
-This procedure is optional and an additional safety measure. It might prevent
-you from losing data if you accidentally delete the namespace and the persistent
-volumes along with it.
-
-```sh
-$ kubectl -n ocelot-social get pv
-
-NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS       REASON   AGE
-pvc-bd02a715-66d0-11e9-be52-ba9c337f4551   5Gi        RWO            Delete           Bound    ocelot-social/neo4j-data-claim   do-block-storage            4m24s
-pvc-bd208086-66d0-11e9-be52-ba9c337f4551   10Gi       RWO            Delete           Bound    ocelot-social/uploads-claim      do-block-storage            4m12s
-```
-
-Get the volume id from above, then change `ReclaimPolicy` with:
-
-```sh
-kubectl patch pv <VOLUME-ID> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
-
-# in the above example
-kubectl patch pv pvc-bd02a715-66d0-11e9-be52-ba9c337f4551 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
-kubectl patch pv pvc-bd208086-66d0-11e9-be52-ba9c337f4551 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
-```
-
-Given that you changed the reclaim policy as described above, you should be able
-to create a persistent volume claim based on a volume snapshot content. See
-the general kubernetes documentation [here](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/)
-and our specific documentation for snapshots [here](../volume-snapshots/README.md).
diff --git a/deployment/volumes/uploads.yaml b/deployment/volumes/uploads.yaml
deleted file mode 100644
index 45e1292a8173e2d5e21076a9f4b3fa8b0d5fcf8a..0000000000000000000000000000000000000000
--- a/deployment/volumes/uploads.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
----
-  kind: PersistentVolumeClaim
-  apiVersion: v1
-  metadata:
-    name: uploads-claim
-    namespace: ocelot-social
-  spec:
-    accessModes:
-      - ReadWriteOnce
-    resources:
-      requests:
-        storage: "10Gi"
diff --git a/deployment/volumes/velero/README.md b/deployment/volumes/velero/README.md
deleted file mode 100644
index 5b8fc9d2e6f9aa52c675bb22829d67921ccd79af..0000000000000000000000000000000000000000
--- a/deployment/volumes/velero/README.md
+++ /dev/null
@@ -1,112 +0,0 @@
-# Velero
-
-{% hint style="danger" %}
-I tried Velero and it did not work reliably all the time. Sometimes the
-kubernetes cluster crashes during recovery or data is not fully recovered.
-
-Feel free to test it out and update this documentation once you feel that it's
-working reliably. It is very likely that Digital Ocean had some bugs when I
-tried out the steps below.
-{% endhint %}
-
-We use [velero](https://github.com/heptio/velero) for on premise backups, we
-tested on version `v0.11.0`, you can find their
-documentation [here](https://heptio.github.io/velero/v0.11.0/).
-
-Our kubernetes configuration adds some annotations to pods. The annotations
-define the important persistent volumes that need to be backed up. Velero will
-pick them up and store the volumes in the same cluster but in another namespace
-`velero`.
-
-## Prerequisites
-
-You have to install the `velero` binary on your computer and get a tarball of
-the release. We use `v0.11.0`, so visit the
-[release](https://github.com/heptio/velero/releases/tag/v0.11.0) page and
-download and extract e.g. [velero-v0.11.0-linux-amd64.tar.gz](https://github.com/heptio/velero/releases/download/v0.11.0/velero-v0.11.0-linux-amd64.tar.gz).
-
-## Setup Velero Namespace
-
-Follow their [getting started](https://heptio.github.io/velero/v0.11.0/get-started)
-instructions to setup the Velero namespace. We use
-[Minio](https://docs.min.io/docs/deploy-minio-on-kubernetes) and
-[restic](https://github.com/restic/restic), so check out Velero's instructions
-how to setup [restic](https://heptio.github.io/velero/v0.11.0/restic):
-
-```sh
-# run from the extracted folder of the tarball
-$ kubectl apply -f config/common/00-prereqs.yaml
-$ kubectl apply -f config/minio/
-```
-
-Once completed, you should see the namespace in your kubernetes dashboard.
-
-## Manually Create an On-Premise Backup
-
-When you create your deployments for Human Connection the required annotations
-should already be in place. So when you create a backup of namespace
-`human-connection`:
-
-```sh
-$ velero backup create hc-backup --include-namespaces=human-connection
-```
-
-That should backup your persistent volumes, too. When you enter:
-
-```sh
-$ velero backup describe hc-backup --details
-```
-
-You should see the persistent volumes at the end of the log:
-
-```sh
-....
-
-Restic Backups:
-  Completed:
-    human-connection/develop-backend-5b6dd96d6b-q77n6: uploads
-    human-connection/develop-neo4j-686d768598-z2vhh: neo4j-data
-```
-
-## Simulate a Disaster
-
-Feel free to test whether you lose any data when you simulate a disaster and
-try to restore the namespace from the backup:
-
-```sh
-$ kubectl delete namespace human-connection
-```
-
-Wait until the deletion has completed, then:
-
-```sh
-$ velero restore create --from-backup hc-backup
-```
-
-Now, I keep my fingers crossed that everything comes back again. If not, I feel
-very sorry for you.
-
-
-## Schedule a Regular Backup
-
-Check out the [docs](https://heptio.github.io/velero/v0.11.0/get-started). You
-can create a regular schedule e.g. with:
-
-```sh
-$ velero schedule create hc-weekly-backup --schedule="@weekly" --include-namespaces=human-connection
-```
-
-Inspect the created backups:
-
-```sh
-$ velero schedule get
-NAME               STATUS    CREATED                          SCHEDULE   BACKUP TTL   LAST BACKUP   SELECTOR
-hc-weekly-backup   Enabled   2019-05-08 17:51:31 +0200 CEST   @weekly    720h0m0s     6s ago        <none> 
-
-$ velero backup get
-NAME                              STATUS      CREATED                          EXPIRES   STORAGE LOCATION   SELECTOR
-hc-weekly-backup-20190508155132   Completed   2019-05-08 17:51:32 +0200 CEST   29d       default            <none>
-
-$ velero backup describe hc-weekly-backup-20190508155132 --details
-# see if the persistent volumes are backed up
-```
diff --git a/deployment/volumes/volume-snapshots/README.md b/deployment/volumes/volume-snapshots/README.md
deleted file mode 100644
index cc66ae4ae02aaff6fcf2d92d6469b199199d2a4e..0000000000000000000000000000000000000000
--- a/deployment/volumes/volume-snapshots/README.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Kubernetes Volume Snapshots
-
-It is possible to back up persistent volumes through volume snapshots. This is
-especially handy if you don't want to stop the database to create an
-[offline backup](../neo4j-offline-backup/README.md), thus incurring downtime.
-
-Kubernetes announced this feature in a [blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/). Please make yourself familiar with it before you continue.
-
-## Create a Volume Snapshot
-
-This folder contains an example of how to create a volume snapshot, e.g. for
-the persistent volume claim `neo4j-data-claim`:
-
-```sh
-# in folder deployment/volumes/volume-snapshots/
-kubectl apply -f snapshot.yaml
-```
-
-If you are on Digital Ocean, the volume snapshot should show up in the Web UI:
-
-![Digital Ocean Web UI showing a volume snapshot](./digital-ocean-volume-snapshots.png)
-
-## Provision a Volume based on a Snapshot
-
-Edit your persistent volume claim configuration and add a `dataSource` pointing
-to your volume snapshot. [The blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/) has an example in the section "Provision a new volume from a snapshot with
-Kubernetes".
-
-This folder also contains an example of what the configuration could look like.
-If you apply the configuration, a new persistent volume claim will be
-provisioned with the data from the volume snapshot:
-
-```sh
-# in folder deployment/volumes/volume-snapshots/
-kubectl apply -f neo4j-data.yaml
-```
-
-## Data Consistency Warning
-
-Note that volume snapshots do not guarantee data consistency. Quote from the
-[blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/):
-
-> Please note that the alpha release of Kubernetes Snapshot does not provide
-> any consistency guarantees. You have to prepare your application (pause
-> application, freeze filesystem etc.) before taking the snapshot for data
-> consistency.
-
-In the case of Neo4j this probably means that the Enterprise Edition, which
-supports [online backups](https://neo4j.com/docs/operations-manual/current/backup/), is required.
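-
-One crude way to get a consistent snapshot, at the cost of a short downtime, is
-to scale the database down around the snapshot. A rough sketch (the deployment
-name `develop-neo4j` is an assumption based on the pod names above; adjust it
-to your cluster):
-
-```sh
-# stop writes by scaling the database deployment to zero (hypothetical name)
-kubectl -n ocelot-social scale deployment develop-neo4j --replicas=0
-# take the snapshot while the volume is quiescent
-kubectl apply -f snapshot.yaml
-# bring the database back up
-kubectl -n ocelot-social scale deployment develop-neo4j --replicas=1
-```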
-
diff --git a/deployment/volumes/volume-snapshots/digital-ocean-volume-snapshots.png b/deployment/volumes/volume-snapshots/digital-ocean-volume-snapshots.png
deleted file mode 100644
index cb6599616cad86471f5a990596a0f681470255c3..0000000000000000000000000000000000000000
Binary files a/deployment/volumes/volume-snapshots/digital-ocean-volume-snapshots.png and /dev/null differ
diff --git a/deployment/volumes/volume-snapshots/neo4j-data.yaml b/deployment/volumes/volume-snapshots/neo4j-data.yaml
deleted file mode 100644
index cd8552bda6fc0d817f8ad1c7f176538bc796fd47..0000000000000000000000000000000000000000
--- a/deployment/volumes/volume-snapshots/neo4j-data.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
----
-  kind: PersistentVolumeClaim
-  apiVersion: v1
-  metadata:
-    name: neo4j-data-claim
-    namespace: ocelot-social
-    labels:
-      app: ocelot-social
-  spec:
-    dataSource:
-      name: neo4j-data-snapshot
-      kind: VolumeSnapshot
-      apiGroup: snapshot.storage.k8s.io
-    accessModes:
-      - ReadWriteOnce
-    resources:
-      requests:
-        storage: 1Gi
diff --git a/deployment/volumes/volume-snapshots/snapshot.yaml b/deployment/volumes/volume-snapshots/snapshot.yaml
deleted file mode 100644
index eac2f0857ef44f7efd93c17c3781d21b50c61a03..0000000000000000000000000000000000000000
--- a/deployment/volumes/volume-snapshots/snapshot.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
----
-  apiVersion: snapshot.storage.k8s.io/v1alpha1
-  kind: VolumeSnapshot
-  metadata:
-    name: uploads-snapshot
-    namespace: ocelot-social
-  spec:
-    source:
-      name: uploads-claim
-      kind: PersistentVolumeClaim
diff --git a/scripts/.gitkeep b/scripts/.gitkeep
new file mode 100644
index 0000000000000000000000000000000000000000..b0699a3b8e9b0ccc433c7f5bbf2622fc902f1a02
--- /dev/null
+++ b/scripts/.gitkeep
@@ -0,0 +1 @@
+We will put CI and package.json scripts in here in the future
\ No newline at end of file
diff --git a/scripts/deploy.sh b/scripts/deploy.sh
deleted file mode 100755
index c79223c69852e8053e02d9afe53562a30652e9e6..0000000000000000000000000000000000000000
--- a/scripts/deploy.sh
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/usr/bin/env bash
-sed -i "s/<COMMIT>/${TRAVIS_COMMIT}/g" $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml
-sed -i "s/<COMMIT>/${TRAVIS_COMMIT}/g" $TRAVIS_BUILD_DIR/scripts/patches/patch-configmap.yaml
-kubectl -n ocelot-social patch configmap develop-configmap -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-configmap.yaml)"
-kubectl -n ocelot-social patch deployment backend -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
-kubectl -n ocelot-social patch deployment webapp -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
diff --git a/scripts/patches/patch-configmap.yaml b/scripts/patches/patch-configmap.yaml
deleted file mode 100644
index a77c567fa62a4e7040afc35b12530359971b5799..0000000000000000000000000000000000000000
--- a/scripts/patches/patch-configmap.yaml
+++ /dev/null
@@ -1,7 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-data:
-  COMMIT: <COMMIT>
-metadata:
-  name: configmap
-  namespace: ocelot-social
diff --git a/scripts/patches/patch-deployment.yaml b/scripts/patches/patch-deployment.yaml
deleted file mode 100644
index 7c46eb8b0289d4b04e3f7907faf58d9606e6a8de..0000000000000000000000000000000000000000
--- a/scripts/patches/patch-deployment.yaml
+++ /dev/null
@@ -1,5 +0,0 @@
-spec:
-  template:
-    metadata:
-      labels:
-        ocelot.social/commit: <COMMIT>
diff --git a/scripts/setup_kubernetes.sh b/scripts/setup_kubernetes.sh
deleted file mode 100755
index ea39312f645db776d2a94f287d69f3cde1017345..0000000000000000000000000000000000000000
--- a/scripts/setup_kubernetes.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env bash
-
-# This script can be called multiple times for each `before_deploy` hook
-# so let's exit successfully if kubectl is already installed:
-command -v kubectl && exit 0
-
-curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
-chmod +x ./kubectl
-sudo mv ./kubectl /usr/local/bin/kubectl
-
-curl -LO https://github.com/digitalocean/doctl/releases/download/v1.14.0/doctl-1.14.0-linux-amd64.tar.gz
-tar xf doctl-1.14.0-linux-amd64.tar.gz
-chmod +x ./doctl
-sudo mv ./doctl /usr/local/bin/doctl
-
-doctl auth init --access-token $DIGITALOCEAN_ACCESS_TOKEN
-mkdir -p ~/.kube/
-doctl k8s cluster kubeconfig show develop > ~/.kube/config
diff --git a/webapp/maintenance/README.md b/webapp/maintenance/README.md
index ee00633ef8c1520cf78705d459326e0287fcbb35..99e410a1c11ff11438fdc857b9971821e4fb264e 100644
--- a/webapp/maintenance/README.md
+++ b/webapp/maintenance/README.md
@@ -37,11 +37,3 @@ $ docker-compose up
 
 And the maintenance mode page or service will be started as well in an own container.
 In the browser you can reach it under `http://localhost:5000/`.
-
-{% endtab %}
-{% tab title="On The Server" %}
-
-How to bring a Kubernetes server into maintenance mode is described [here](../../deployment/ocelot-social/maintenance/README.md).
-
-{% endtab %}
-{% endtabs %}