Infrastructure
The Vetspire application is hosted in Google Cloud Platform under the vetspire.com organizational unit. It is deployed across several projects and three environments (prod, staging and dev), plus a management environment for tooling that is shared or needed to manage the other environments.
The diagram below shows the projects used to host the app and what each project contains.
The backend for the Vetspire application is hosted entirely in the vetspire-app project. It is made up of CloudSQL instances hosting the PostgreSQL database used by the app, and GKE Kubernetes clusters which host the APIs as microservices.
A single region (us-central1), network (the default created by GCP when the project was made) and subnetwork (again, the auto-created default in that network for our region) are used for all the environments; this is where the GKE clusters are deployed. The CloudSQL instances are deployed in their own Google-managed networks, which are peered with the cluster network so that the clusters and database instances can communicate privately.
The GKE clusters are set up to autoscale, auto-heal and auto-upgrade, taking advantage of the managed service capabilities. Most importantly, this means that as traffic to our APIs increases, the nodes scale up horizontally (more nodes are created) to make room for more instances of our APIs as required, and scale back down after traffic subsides.
The microservices deployed are:
admin
api
datasync
protocols
worker
All of the above microservices are deployed onto the same cluster in the same namespace. They are deployed as a single image that behaves differently based on the value of the VETSPIRE_APP environment variable. Horizontal pod autoscaling is used in production so that the pods scale with traffic.
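As a rough sketch of this pattern (the resource names, image path, replica counts and CPU target below are illustrative assumptions, not values taken from the real manifests), each microservice gets its own Deployment of the shared image with a different VETSPIRE_APP value, paired with a HorizontalPodAutoscaler in production:

```yaml
# Sketch only: names, image path and thresholds are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: app
          image: gcr.io/vetspire-app/vetspire:latest # one shared image for all services (path assumed)
          env:
            - name: VETSPIRE_APP
              value: "api" # "admin", "datasync", "protocols" or "worker" on the other Deployments
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # assumed scaling signal; the real target may differ
```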
All the APIs are publicly exposed using Kubernetes Ingress resources which (in GCP) create Google Cloud Load Balancers. They also use GCP Managed Certificates for SSL.
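A minimal sketch of how one of these could be wired up (the hostname, resource names and static IP annotation are assumptions):

```yaml
# Sketch only: hostnames and resource names are assumptions.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: api-cert
spec:
  domains:
    - api.vetspire.com # assumed hostname
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    networking.gke.io/managed-certificates: api-cert # attach the GCP-managed certificate
    kubernetes.io/ingress.global-static-ip-name: api-ip # assumed reserved static IP, keeps the LB address stable for DNS
spec:
  defaultBackend:
    service:
      name: api
      port:
        number: 80
```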
This diagram shows the makeup of the environment cluster at a very high level.
All of the APIs have the CloudSQL auth proxy deployed as a "sidecar" container. This means it runs in the same pod as the application container and is reached over the pod's shared localhost network, which improves both network security and performance.
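As a sketch of the sidecar pattern (the instance connection name, image tag and mount paths are assumptions), the proxy listens on the pod's localhost and the app connects to it as if the database were local:

```yaml
# Sketch only: instance connection name, paths and versions are assumptions.
spec:
  containers:
    - name: app
      image: gcr.io/vetspire-app/vetspire:latest # assumed image path
      env:
        - name: DATABASE_HOST
          value: "127.0.0.1" # the app talks to the proxy, not to the DB directly
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
      command:
        - /cloud_sql_proxy
        - -instances=vetspire-app:us-central1:vetspire=tcp:5432 # assumed connection name
        - -credential_file=/secrets/cloudsql/credentials.json
      volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
  volumes:
    - name: cloudsql-instance-credentials
      secret:
        secretName: cloudsql-instance-credentials
```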
The application code takes care of running migrations, so to ensure they run before any of the APIs start up, a Kubernetes Job runs them whenever a new version of the app is deployed. The API deployments are built to wait for this Job to complete before they attempt to start up.
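One common way to implement this, sketched here with assumed names and an assumed migration command, is a Job plus an initContainer in each API pod that blocks until the Job completes (the pod's service account needs RBAC permission to read Jobs):

```yaml
# Sketch only: job name, image and migration command are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: gcr.io/vetspire-app/vetspire:latest # same app image (path assumed)
          command: ["./bin/migrate"] # assumed entrypoint; the app code owns the migrations
---
# Fragment of each API Deployment's pod spec: wait for the Job before the app starts.
# (Requires a Role/RoleBinding granting "get" on jobs to the pod's service account.)
initContainers:
  - name: wait-for-migrations
    image: bitnami/kubectl:1.29
    command: ["kubectl", "wait", "--for=condition=complete", "--timeout=300s", "job/migrate"]
```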
Most of the configuration for the applications is stored in the Kubernetes secret named secrets. Values from this secret are exposed as environment variables on the Kubernetes deployments; this is also where the database credentials used by the app are found. Another secret, cloudsql-instance-credentials, holds the service account credentials the CloudSQL proxy sidecar needs to access the CloudSQL instances.
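A sketch of how a deployment might consume the secrets secret (the key name shown is an assumption):

```yaml
# Sketch only: key names are assumptions.
containers:
  - name: app
    image: gcr.io/vetspire-app/vetspire:latest
    envFrom:
      - secretRef:
          name: secrets # load every key/value pair in the secret as env vars
    env:
      - name: DATABASE_PASSWORD # or pull individual keys explicitly
        valueFrom:
          secretKeyRef:
            name: secrets
            key: database-password # assumed key name
```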
There are many web apps deployed as part of the Vetspire app, all of them via Firebase. These are not deployed consistently across environments; most exist in production and dev only.
One key nuance is that the staging frontend for the Vetspire application itself is deployed in the vetspire-app project (the same project as all of the backend components), while the production and dev versions of this frontend are deployed in their own projects.
The Vetspire APIs use a single PostgreSQL database for all of their data. It is named:
vetspire-dev in the dev environment
vetspire_staging in the prod and staging environments
All continuous integration and deployment is managed by GitHub Actions.
The DNS for the vetspire.com domain is hosted in Google Domains, which is also where all the DNS records are managed.
For the backend, an A record is created for each API, pointing to its Load Balancer IP address.
For the main frontend, a single wildcard A record per environment (for example *.dev.vetspire.com) points to the IP addresses given by the Firebase project. This avoids having to create an A record for every subdomain of every tenant of the application. However, each tenant's subdomain does need to be registered in the Firebase project under "custom domains".
This domain is registered with Namecheap (because at the time of its creation, Google Domains did not support the .vet TLD); its DNS is then managed in Google Domains.