npia-server

Description

This project aims not only to demonstrate a proper use case for npia-api but also to
offer a production-ready implementation of an npia-api-compliant system (and, as usual for me, it doesn’t reach that level at this point ʘ‿ʘ).

This repository holds two exemplary (and hopefully someday production-grade) systems that implement somewhat distinct
interfaces for interacting with npia-api. The systems are as follows.

  1. An HTTP server that implements the KCXD-STTC Protocol (you think it’s a bogus word? Well… it was until now! See the Security section to
    find out what on earth that means) to handle secure HTTP queries from a compliant client, e.g. npia-go-client

  2. A web socket hub that implements the KCXD-MTSC Protocol (again, see Security) to handle secure web socket queries from a compliant client,
    e.g. the code in the src/sock directory. Also, orchestrator/ofront will implement this protocol in a future release, but for now
    it relies on an oauth2-based https connection for security. This means the oauth2 user must (please!) properly configure an external
    reverse proxy (nginx, for example) to handle https:// and wss://.

Project Overview

The blueprint for the repository is as follows.

npia-server

The tree structure for the repository is as follows:

├── debug_bin
├── debug_build_run
├── debug_cleanup
├── docs
├── go.mod
├── go.sum
├── LICENSE
├── orchestrator
│   ├── debug_amalgamate_bin
│   ├── debug_amalgamate_config
│   ├── docker-compose.yaml
│   ├── odb
│   ├── odebug_build_run
│   ├── odebug_cleanup
│   ├── ofront
│   │   ├── config.json
│   │   ├── ocontroller
│   │   ├── omodels
│   │   ├── omodules
│   │   ├── orouter
│   │   └── oview
│   └── osock
├── src
│   ├── controller
│   ├── modules
│   ├── router
│   └── sock
├── test
│   └── kindcluster
└── var

Let me guide you through each entry point briefly.

Details

This section dives into the details of each entry point.

However, it doesn’t go so deep that you will never have to look at the source code to understand how everything works in conjunction.

For even more details, refer to the specific comments associated with a code block, or better yet, just run it yourself.

debug_bin

orchestrator

orchestrator debug_amalgamate_bin

orchestrator debug_amalgamate_config

orchestrator odb

orchestrator ofront

orchestrator ofront ocontroller

orchestrator ofront omodels

orchestrator ofront omodules

orchestrator ofront orouter

orchestrator ofront oview

orchestrator osock

src

src controller

src modules

src router

src sock

test kindcluster

Security Model

Warning

1. DO NOT USE A PLAIN HTTP CHANNEL WHEN ACCESSING THE FRONT, YET!

2. DO USE YOUR OWN OAUTH CREDENTIALS!

Model Description

This page explains the problematic bogus words that appear excessively frequently
throughout the whole documentation,

  1. KCXD (Kubeconfig X509 Data)
    Under the hood, when a kubectl command is executed, a bidirectional X509 exchange
    happens between the kubectl client and the kube apiserver to authenticate mutual integrity.
    Kubeconfig is used for that protocol, and npia-server builds a protocol
    upon that data, which is assumed to be sharable (or not! haha) if client and server are
    set up by the same entity.

  2. STTC (Single Terminal Transfer Challenge, or Communication)
    Built on KCXD, this is a protocol between an http client and server that targets
    single-terminal-oriented requests and responses.

  3. MTSC (Multi Terminal Socket Challenge, or Communication)
    Built on KCXD, this is a protocol between the front, the orchestrator, and the sock client that
    targets multi-terminal-oriented web socket requests and responses.

Model Overview

The blueprint for the KCXD Challenge Protocol is as follows.

1. STTC (Single Terminal Transfer Challenge)

kcxd-sttc

2. MTSC (Multi Terminal Socket Challenge)

kcxd-mtsc

Model Details

STTC (Single Terminal Transfer Challenge)

  1. In order for this protocol to be successfully resolved, the client and server
    must share the exact same kubeconfig files.

  2. According to the kubeconfig’s contexts field, the client extracts all available
    certificate-authority-data public keys, which are the kubernetes root CAs’
    public keys, and sends them all with each corresponding context cluster name.
    Upon receiving that information, the server verifies that the public keys are
    the ones it has by using those keys to verify the context users’ certificates,
    which are supposed to be signed with the kubernetes root CAs’ private keys if authentic.

  3. Now, if the verification is successful for each and every user’s certificate,
    the server assigns a challenge id and 16 to 32 random bytes for
    each and every context user, stores them, encrypts the random bytes with each context
    user’s public key, which can be extracted from the kubeconfig context user’s certificate
    data, and sends the data back to the client.
    Upon receiving this, the client must remember the challenge id and, for each and every
    context user entry, decrypt the challenge with the corresponding private key, which is
    accessible from the kubeconfig context user’s private key data.

  4. Once each and every challenge is decrypted, the client sends the answers attached to the
    challenge id it received.

  5. If each and every challenge was correctly decrypted, the server assumes that its
    counterpart has the same kubeconfig file and generates a session key for the client,
    pairs it with the session’s unique symmetric key for the AES-GCM algorithm, encrypts
    the key with a randomly picked context user’s public key, and finally sends it back
    to the client with the context user name as a field key.

  6. Finally, upon receiving it, the client decrypts the data with the corresponding private
    key of that user, stores the symmetric key, and starts encrypting the communication data.

MTSC (Multi Terminal Socket Challenge)

  1. In order for this protocol to be successfully resolved, the server sock must hold
    exactly one context that is an exact and whole subset of the orchestrator side’s kubeconfig file.

Going forward, the server sock will be referred to as “client” and the orchestrator
will be referred to as “server”.

  2. According to its contexts field, the client extracts the certificate-authority-data
    public key, which is the kubernetes root CA’s public key, and sends it with the corresponding
    context cluster name.
    Upon receiving that information, the server verifies that the public key is
    the one it has by using the key to verify the context user’s certificate,
    which is supposed to be signed with the kubernetes root CA’s private key if authentic.

  3. Now, if the verification is successful for the user’s certificate,
    the server assigns a challenge id and 16 to 32 random bytes for
    the context user, stores them, encrypts the random bytes with the context
    user’s public key, which can be extracted from the kubeconfig context user’s certificate
    data, and sends the data back to the client.
    Upon receiving this, the client must remember the challenge id and, for the context user,
    decrypt the challenge with the corresponding private key, which is accessible
    from the kubeconfig context user’s private key data.

  4. Once the challenge is decrypted, the client sends the answer attached to the
    challenge id it received.

  5. If the challenge was correctly decrypted, the server assumes that its
    counterpart holds the one, exact, and whole subset of the kubeconfig file that it has,
    generates a session key for the client,
    pairs it with the session’s unique symmetric key for the AES-GCM algorithm, encrypts
    the key with the context user’s public key, and finally sends it back
    to the client with the context user name as a field key.

  6. Finally, upon receiving it, the client decrypts the data with the corresponding private
    key of that user, stores the symmetric key, and starts encrypting the communication data.

  7. From the server’s point of view, if there is more than one client terminal, it iterates
    the process for each terminal that wants to connect to the server.

Scenario

Scenario Description

This page is about basic exemplary scenarios for each mainline use case & implementation of
  1. KCXD-STTC (Kubeconfig X509 Data based Single Terminal Transfer Communication)

  2. KCXD-MTSC (Kubeconfig X509 Data based Multi Terminal Socket Communication)

1.

Single Terminal Transfer Communication Mode

1-1. Initiate npia-server STTC mode

In the below screenshot, you can see the go-gin server running
in debug mode, showing all available endpoints.
Here, our points of interest are the paths suffixed with /test since,
yes, this is a test.

1

1-2. Run npia-go-client debug script for STTC mode test client

In the below screenshot, what you see is the piece of npia-go-client’s script
where you can check out a simple test interaction with npia-server in
STTC mode, which involves the single terminal transfer challenge and api querying.

2

1-3. Check if the STTC Challenge Protocol and subsequent api queries have been successful

In the below screenshot, you can see that the first two requests were made to
resolve the challenge protocol authentication and the following api query, followed by
two more queries to initiate multi mode and then switch to another target cluster.
We can be sure the challenge was successful since the latter three requests never return
a 200 status in the case of unsuccessful authentication.

3

1-4. Check if the STTC client has received what it wanted

Voila, it has!

4

2.

Multi Terminal Socket Communication Mode

2-1. Initiate npia-server MTSC mode

In the below screenshot, you can see the orchestrator components are
up and running as containers.

5

2-2. Initiate npia-server MTSC socket client

With a manually compiled src executable, running the below command will make
a debugging connection to the orchestrator.

6

2-3. Check if MTSC Challenge Protocol has been successful and connection is maintained

If the challenge protocol has been successful and connection is
accepted, you will see something like the below screenshot.

7

2-4. As a front-user, conduct oauth2.

As shown in the below screenshot, you will see the page pop up when you first access the default
path.

8

2-5. If OAuth is successful, you are ready to orchestrate multiple terminals

Just like in the below screenshot, you will have an interface for querying multiple terminals
that are manifested and connected to the orchestrator.

9

2-6. Meanwhile, check if the front connection was successful.

If the oauth has been successful and the connection to the orchestrator is
accepted, you will see something like the below screenshot.

10

2-7. Now, let’s see what happens if we actually make a query

As you can see, after feeding proper arguments into the input fields and hitting run,
the requested message travels through the orchestrator, makes it to the other
side, gets processed, and travels back through the orchestrator to reach the
client browser.

11

2-8. Check if the message touched the orchestrator

You can see it reached the front handler and got sent to the sock client.

12

2-9. Check if the message reached the npia-server MTSC mode client

Yep, all good.

13