
Tuesday, 14 May 2019 13:38

GraphQL on Kyma

Originally published at https://labs.cx.sap.com/2019/05/15/graphql-on-kyma/

Now that we've looked at the fundamentals of GraphQL and also have gone through some practical GraphQL exercises, I wanted to share some thoughts about how Kyma and GraphQL can be used in combination to create a flexible and extensible API layer for enterprise applications.

Kyma is an open-source project designed natively on Kubernetes. It allows you to extend and customize cloud-based and on-premise enterprise applications in a quick and modern way, using serverless computing or microservice architecture.

Kyma has built-in support for exposing Kubernetes services to the outside world with the help of the API custom resource. Using this mechanism, you could connect to multiple backends or connected services in a lambda function and then return the result to the client. While this may be fine for clearly defined API use cases and single-type clients, we would need to create a new lambda/service/API for each new client, and soon this would become quite a tricky and messy adventure.
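For context, such a per-client lambda might look roughly like the sketch below. It aggregates two backends for one specific client; the main(event, context) signature follows the kubeless-style functions Kyma used at the time, and the gateway URLs, the event payload shape and the availability of axios are all assumptions for illustration.

// Hypothetical per-client lambda: fetch data from two connected backends
// and return a merged result tailored to exactly one client type.
const axios = require('axios');

// Placeholder gateway URLs as provided by the Application Connector bindings (assumed env vars).
const COMMERCE_URL = process.env.GATEWAY_URL_COMMERCE;
const SERVICE_CLOUD_URL = process.env.GATEWAY_URL_SERVICE;

module.exports = {
  main: async function (event, context) {
    // Shape of the incoming payload is assumed for this sketch.
    const customerId = event.data.customerId;

    // One call per backend; a new client with different needs would require
    // yet another lambda (or more parameters) just like this one.
    const [orders, tickets] = await Promise.all([
      axios.get(`${COMMERCE_URL}/customers/${customerId}/orders`),
      axios.get(`${SERVICE_CLOUD_URL}/customers/${customerId}/tickets`)
    ]);

    return { orders: orders.data, tickets: tickets.data };
  }
};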

So here's an idea of how GraphQL can be used to give multiple API clients exactly what they want, provide data from multiple backend systems in a single request/response cycle, and give developers the tooling necessary to develop GraphQL queries easily. At this point, the implementation is a proof of concept, with the hope of leading to a broader discussion around the use of GraphQL for Kyma. Please share your thoughts in the comments or contact me on Twitter if you like!

The big picture

The core components of this architecture are:

  • a Kyma cluster deployed to GCP
  • several App Connectors, which facilitate the connections to components such as the SAP C/4HANA Customer Data Cloud, Customer Service Cloud or Commerce Cloud.
  • a GraphQL-server Deployment exposed to the outside world via the Kyma API resource. The crucial configuration for the GraphQL server is mounted into the GraphQL-server pods with the help of Kubernetes ConfigMaps for the types (schema) and resolvers.
  • an Editor component that integrates with the Kyma Console and allows an API designer to change the types and resolvers. The component that interfaces with the Kubernetes API server is the Backend component shown in the architecture diagram above.

Let’s start with the editor component that’s integrated with the Kyma console. 

Kyma GraphQL Editor

The screenshot below shows the Kyma Console. The left and top parts are not controlled by our application but provided by the console. The central part is the current GraphQL editor, a micro-UI which simply needs to be deployed to the Kyma cluster. At this point, the editor is as basic as it can be: it lets you pick a GraphQL type file and then allows the API designer to modify the type (schema) and the resolver.
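For illustration, a type and resolver pair edited through this UI might look roughly like the following sketch. The Customer type, its fields and the commerceClient helper are hypothetical; the actual content depends entirely on your connected backends.

// Hypothetical entry in the graphql-types ConfigMap (GraphQL SDL, stored as plain text):
//
//   type Customer {
//     id: ID!
//     name: String
//     orders: [Order]
//   }
//
//   extend type Query {
//     customer(id: ID!): Customer
//   }

// Matching entry in the graphql-resolvers ConfigMap: a plain JavaScript module
// that exports the resolver functions for the fields above. commerceClient is
// a placeholder for whatever client talks to the connected backend.
module.exports = {
  Query: {
    customer: (parent, args, context) =>
      context.commerceClient.getCustomer(args.id)
  }
};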

To add your own UI to the Kyma console, you simply have to create a ClusterMicroFrontend resource. Below is an example.

apiVersion: ui.kyma-project.io/v1alpha1
kind: ClusterMicroFrontend
metadata:
  name: graphql-editor
spec:
  category: GraphQL
  navigationNodes:
    - label: Config
      navigationPath: config
      viewUrl: /
  placement: namespace
  version: 0.0.1
  viewBaseUrl: https://url.to.your.microui

Of course, you have to make sure that the view URL is reachable; you would typically deploy the micro-UI as a pod/service/API combination to the same cluster.

The frontend framework used is Vue.js, and all static web files are ultimately served by a minimalistic web container based on nginx. The Dockerfile for this static web server looks like this:

# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN yarn install
COPY . .
RUN npm run build

# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

As you can see, it's a multi-stage build. The build stage uses a Node alpine image as the base and essentially runs the build process; under the hood, Vue uses webpack to combine all resources into the static dist directory. The second stage takes the static resources and adds them to an nginx container, again based on alpine. The result is a container image of about 20 MB.

GraphQL Editor Backend

As our GraphQL editor component is still quite basic, the tasks that our editor backend needs to fulfill are simple:

  • Interface with the Kubernetes API server to read and update the ConfigMaps for the GraphQL types and resolvers.
  • Provide a way to restart the GraphQL server so it can start up with the newly provided configuration.

None of these are very complicated, so let me just show you an example of how the official Kubernetes JavaScript client is used to retrieve a ConfigMap entry representing a GraphQL schema type. By the way, Visual Studio Code was very helpful in this case, as the JavaScript API of the Kubernetes client is generated and not well documented. While it's all based on the well-described Kubernetes resources and operations, some code completion does help.

/* Get the code of one specific type key via the types configmap. */
router.get('/types/:key', function(req, res, next) {
  k8sApi.readNamespacedConfigMap('graphql-types', namespace).then(kubeResponse => {
    let cm = kubeResponse.body
    if (cm.data[req.params.key]) {
      res.json({success: true, key: req.params.key, value: cm.data[req.params.key]})
    } else {
      res.json({success: false, msg: `Key ${req.params.key} does not exist.`})
    }
  }).catch(err => {
    res.json({success: false, msg: 'Unable to retrieve configmap.'})
  })
});
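The update path works along the same lines. Below is a minimal sketch, assuming an Express JSON body parser and the generated client's replaceNamespacedConfigMap(name, namespace, body) call; the request body shape is an assumption for illustration.

/* Update the code of one specific type key in the types configmap.
   Sketch only: assumes express.json() middleware and that the caller
   sends { value: "<new schema text>" } in the request body. */
router.put('/types/:key', function(req, res, next) {
  k8sApi.readNamespacedConfigMap('graphql-types', namespace).then(kubeResponse => {
    let cm = kubeResponse.body
    cm.data = cm.data || {}
    cm.data[req.params.key] = req.body.value
    // Write the modified ConfigMap object back to the API server.
    return k8sApi.replaceNamespacedConfigMap('graphql-types', namespace, cm)
  }).then(() => {
    res.json({success: true, key: req.params.key})
  }).catch(err => {
    res.json({success: false, msg: 'Unable to update configmap.'})
  })
});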

To make sure that this API can be used and tested both locally and within the cluster, a little util was created that checks whether the DEBUG environment variable is set.

const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
if (process.env.DEBUG) {
  kc.loadFromDefault()
} else {
  kc.loadFromCluster()
}

module.exports = kc.makeApiClient(k8s.Core_v1Api);

The loadFromDefault() call will try to find a local kubeconfig and use that to communicate with the Kubernetes API server. Inside the Kubernetes cluster, the credentials are automounted into our pods, and loadFromCluster() will retrieve them from '/var/run/secrets/kubernetes.io/serviceaccount'. The service account needs to be referenced from the pod spec of your deployment, and a role binding for that service account needs to reference a role with access rights to ConfigMaps. You can find more info about service accounts in the Kubernetes documentation.
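The second task, restarting the GraphQL server so it picks up the new configuration, could for example be implemented by deleting the server pod and letting the Deployment recreate it. Here is a minimal sketch reusing the same k8sApi client; the exact optional parameters of the generated list/delete calls vary between client versions, so treat the signatures as assumptions to verify.

/* Restart the GraphQL server by deleting its pod(s); the Deployment will
   recreate them with the updated ConfigMaps mounted.
   Sketch only: pods are filtered in JavaScript by their app label to avoid
   version-specific positional parameters of the generated client. */
router.post('/restart', function(req, res, next) {
  k8sApi.listNamespacedPod(namespace).then(kubeResponse => {
    const pods = kubeResponse.body.items.filter(p =>
      p.metadata.labels && p.metadata.labels.app === 'graphql-server')
    return Promise.all(pods.map(p =>
      k8sApi.deleteNamespacedPod(p.metadata.name, namespace)))
  }).then(() => {
    res.json({success: true})
  }).catch(err => {
    res.json({success: false, msg: 'Unable to restart graphql-server.'})
  })
});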

GraphQL-Server

This is the main component of the architecture. Its Dockerfile creates an image which can then be run as a Deployment in your cluster, giving you the flexibility to scale it up or down. To enable the Yoga GraphQL server to pick up the types and resolvers that the GraphQL editor has modified, these elements are mounted into the pods with the help of ConfigMaps, as can be seen from the pod spec below:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graphql-server
  namespace: stage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: graphql-server
  template:
    metadata:
      labels:
        app: graphql-server
      annotations:
        "sidecar.istio.io/inject": "true"
    spec:
      containers:
        - name: graphql-server
          image: gcr.io/sap-hybris-labs/kyma-graphql-server:0.2.0
          imagePullPolicy: Always
          ports:
            - containerPort: 4000
          volumeMounts:
            - name: resolvers
              mountPath: /usr/src/app/resolvers/
            - name: types
              mountPath: /usr/src/app/types/
      volumes:
        - name: resolvers
          configMap:
            name: graphql-resolvers
        - name: types
          configMap:
            name: graphql-types

As ConfigMaps can be mounted into pods via volumeMounts, the entries of the ConfigMaps appear as files within the container. To initially create these ConfigMaps, we can simply run kubectl commands in our local dev environment:

### Resolvers
kubectl create configmap graphql-resolvers --from-file=./resolvers/ -n stage

### Types
kubectl create configmap graphql-types --from-file=./types/ -n stage
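How the server turns those mounted files into a running schema is not shown above. A minimal sketch, assuming graphql-yoga 1.x, that the types directory contains plain SDL text files and that each resolver file exports a plain resolver object, might look like this:

// Sketch: assemble schema and resolvers from the directories that the
// graphql-types and graphql-resolvers ConfigMaps are mounted into.
const fs = require('fs');
const path = require('path');
const { GraphQLServer } = require('graphql-yoga');

const TYPES_DIR = path.join(__dirname, 'types');
const RESOLVERS_DIR = path.join(__dirname, 'resolvers');

// ConfigMap volumes contain hidden bookkeeping entries (e.g. ..data),
// so ignore anything starting with a dot.
const visible = dir => fs.readdirSync(dir).filter(f => !f.startsWith('.'));

// Concatenate all SDL files into one typeDefs string.
const typeDefs = visible(TYPES_DIR)
  .map(f => fs.readFileSync(path.join(TYPES_DIR, f), 'utf8'))
  .join('\n');

// Merge the exported resolver objects (shallow merge per top-level type).
const resolvers = visible(RESOLVERS_DIR)
  .map(f => require(path.join(RESOLVERS_DIR, f)))
  .reduce((acc, r) => {
    Object.keys(r).forEach(type => {
      acc[type] = Object.assign({}, acc[type], r[type]);
    });
    return acc;
  }, {});

const server = new GraphQLServer({ typeDefs, resolvers });
server.start({ port: 4000 }, () => console.log('GraphQL server running on port 4000'));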

Outlook

I hope you found this series about GraphQL useful. We've looked at the fundamentals of GraphQL and gone through some practical GraphQL exercises, and this post explained how Kyma and GraphQL can be used together to create a modern, flexible and extensible API layer for a diverse set of clients. Don't forget why you will want to use GraphQL:

  • GraphQL clients (single page frontends, IoT devices, TVs, etc.) ask for data (via queries) and get exactly what they need.
  • GraphQL queries may access data from different backends. Instead of making many calls against multiple backends, a GraphQL client just needs to make a single request, as in the sketch below.
  • The GraphQL type system (schema) clearly defines what clients can request and dev tools that use this schema can assist client developers to create and validate the queries.
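To make that concrete, a single client request might look like the following sketch; the endpoint and the customer/ticket fields are hypothetical and assume a schema along the lines of the one edited above.

// Sketch of a client fetching exactly the fields it needs, from data that
// may originate in several backends, with one HTTP request (browser fetch).
const query = `
  query CustomerOverview($id: ID!) {
    customer(id: $id) {
      name
      orders { id total }
    }
    tickets(customerId: $id) { id subject }
  }
`;

fetch('https://graphql.your-kyma-cluster.example/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query, variables: { id: '12345' } })
})
  .then(res => res.json())
  .then(data => console.log(data));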

If you would like to find out more about Kyma, you should join the openSAP course, which will bring you up to speed in no time! There's also lots of documentation available at kyma-project.io.

