FAQs on Managing Microservices Architecture with EYK


This article addresses frequently asked questions about deployment architecture when adopting EYK for containerized microservice deployments, as well as general questions about managing microservices with EYK.





1 Will there always be a publicly accessible domain and IP for each application deployed?

Each application gets a DNS record of the following structure: appname.clustername.accountname.ey.io. These records point to an ELB with a specific hostname.

As expected, you can add your own DNS record as a CNAME pointing to that ELB hostname.
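The hostname scheme above can be sketched as a small helper; the application, cluster, and account names used here are hypothetical examples, not values from the article.

```python
def app_hostname(appname: str, clustername: str, accountname: str) -> str:
    """Build the public DNS name EYK assigns to an application,
    following the appname.clustername.accountname.ey.io scheme."""
    return f"{appname}.{clustername}.{accountname}.ey.io"

# Example: an app "shop" in cluster "prod" under account "acme".
print(app_hostname("shop", "prod", "acme"))  # shop.prod.acme.ey.io
```

A CNAME for your own domain would then point at the ELB hostname behind this record.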

2 Can we make some applications in a cluster "private" and only accessible to other EYK applications deployed in the cluster? What about cross-cluster communication?

Indeed you can. For intra-cluster communication, you make the application non-routable; other applications in the cluster can then reach it at appname.appname:80.

For cross-cluster communication, you can whitelist the egress IP of the source cluster, just as you would when restricting access to the application to a specific IP.
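The intra-cluster addressing described above can be sketched as follows; "billing" is a hypothetical non-routable application name, and the path is illustrative.

```python
def internal_url(appname: str, path: str = "/") -> str:
    """Build the in-cluster URL for a non-routable EYK application,
    which is reachable at appname.appname:80 from within the cluster."""
    return f"http://{appname}.{appname}:80{path}"

# Example: another app in the same cluster calling the "billing" app.
print(internal_url("billing", "/health"))  # http://billing.billing:80/health
```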

3 Is there some other "service mesh" solution available in EYK? We don't necessarily have to use Istio, Linkerd, or the like, but we are trying to understand how we can manage a microservices architecture with EYK.

There is no service mesh solution attached to EYK, as we focus on covering the larger picture of microservices implementations. A service mesh, although desirable, is more of an edge case than part of the broad spectrum of Kubernetes implementations.
4 Is the base technology Kubernetes that our containers run on?

Indeed it is. As with Engine Yard Cloud, we rely on the proven services provided by AWS, in particular Amazon EKS (Amazon Elastic Kubernetes Service).
5 What IP addresses do the containers get? Public or private IPs?

All the containers get private IPs, routable or non-routable according to your configuration.
6 Is cluster egress available, or is it limited? That is, can the containers in the application make HTTP calls over the internet? What about VPC peering to a private AWS VPC in which we will deploy custom services (e.g., a NATS.io cluster)?

Cluster egress is supported. One of two static EIPs is attached to each request going out of the cluster.

VPC peering is available as it has been so far with Engine Yard Cloud.

7 For the publicly accessible domain, are WebSockets a supported technology for the "web" process in an application?

Yes, WebSockets are supported.
8 When we create a cluster in EYK, an AWS EKS cluster is created but managed through EYK; EC2 vs. Fargate is abstracted away so we don't need to worry about it. Is this correct?

That is correct. You only interact with your cluster using the eyk CLI.


9 Each process declared in the Procfile of the application repo will end up as a Kubernetes Deployment. Is this correct?

Indeed. Each process ends up as a distinct Deployment in the same namespace as the application it belongs to.
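The mapping above can be sketched as a small parser: each `name: command` line in a Procfile yields one process name, and each such process becomes a distinct Deployment. The process names and commands below are hypothetical examples.

```python
def procfile_processes(procfile: str) -> list[str]:
    """Return the process names declared in a Procfile.
    Each name maps to a distinct Kubernetes Deployment in the
    application's namespace."""
    names = []
    for line in procfile.splitlines():
        line = line.strip()
        # Skip blanks and comments; a process line is "name: command".
        if line and not line.startswith("#") and ":" in line:
            names.append(line.split(":", 1)[0].strip())
    return names

procfile = """\
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
"""
print(procfile_processes(procfile))  # ['web', 'worker']
```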

10 Depending on the process in the Procfile, a Kubernetes Service MAY be created. Is this correct?

All applications get a Service. You then have the ability to choose whether it is routable from the outside world.
11 Will these Kubernetes Services always be of the "LoadBalancer" type?

All application Services are of the ClusterIP type. They can then be accessed (or not) through the router service, which is of the LoadBalancer type.
12 If a service is of the "LoadBalancer" type, is this an AWS NLB type of ELB?

The router service runs NGINX. As with the EKS setup above, this is an under-the-hood mechanism and you don't interact with it directly. Using the eyk CLI you can add domains and certificates; liveness checks and the rest of the configuration are left to the backend.
13 Are Kubernetes Ingress or Kubernetes Gateway resources used in any way? If so, would AWS EKS instantiate an AWS ALB type of ELB instead?

Each cluster has an ELB in front of it which is responsible for ingress traffic.


14 If we deploy a "private" app for in-cluster access only, is the appname.appname:80 addressing mentioned in Q2 still a Kubernetes Service? Still using an NLB? Or is it of the "ClusterIP" or "NodePort" type?

All application services get a ClusterIP type of service.

15 When you mentioned whitelisting the source cluster's egress IP in the target cluster application's allowlist, is this control/restriction performed at the ELB level? Or is there some other Ingress/Gateway mechanism in the Kubernetes cluster itself?

This is handled by the router service, which is of the LoadBalancer type.
16 How does the routing differ if the Procfile and container expose ports on web vs. non-web process types?

Both cmd and web processes can be routable according to your configuration. Any other process type is not routable by default. The difference is that non-cmd/web processes cannot be contacted by the router service at all, so the only traffic they receive comes from within the same cluster.
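The routing rule above can be sketched as a simple check: only web and cmd processes can ever receive router traffic, and only when routing is enabled for them. The process-type names come from the answer above; the function itself is an illustrative sketch, not part of EYK.

```python
ROUTABLE_TYPES = {"web", "cmd"}  # only these can be routable at all

def is_routable(process_type: str, routing_enabled: bool) -> bool:
    """A process can receive traffic from the router service only if
    it is a web/cmd process AND routing is enabled for it; all other
    process types only receive in-cluster traffic."""
    return process_type in ROUTABLE_TYPES and routing_enabled

print(is_routable("web", True))     # True
print(is_routable("worker", True))  # False: in-cluster traffic only
```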
