cheap node.js ‘fetch’ for people not wanting to use 3rd party libs

Using javascript / typescript ? REST ? JSON ? Just http(s) >= 1.1 ?

More and more people are getting tired of bundling external libraries into their own service / library.
Can I just call another endpoint without adding ‘fetch’, ‘axios’ or anything ? Adding them automatically brings the burden of maintaining a package-lock file + future upgrades + git bots to keep everything up to date.
My purpose is just to have the smallest possible JS bundle.

So I have written a budget version of fetch with just the relevant and meaningful code, able to fetch data from a remote endpoint.
To keep it compatible with isomorphic-fetch, an async json() function is exposed to collect the result as a json object.

The result is a drop-in replacement for that library, but also for window.fetch or deno’s fetch function.
Even with this small snippet, you already get the request + the response + the decompression of the response body. And of course, everything here uses built-in node.js capabilities.

import { IncomingMessage, IncomingHttpHeaders } from "http";
import { request, RequestOptions } from "https";
import { URL } from "url";
import { gunzip, inflate, brotliDecompress } from "zlib";
export const fetch = async (
  url: URL | string,
  options?: RequestOptions & { body?: Buffer | string | null; method?: string }
): Promise<{
  statusCode?: number;
  headers: IncomingHttpHeaders;
  json: () => Promise<any>;
}> => {
  const urlObject = url instanceof URL ? url : new URL("", url);
  // step 1: send the request and wait for the response headers
  const [response, requestError] = await new Promise<
    [IncomingMessage | null, Error | null]
  >((resolve) => {
    const http1RequestOptions: RequestOptions = {
      ...(options || {}),
      hostname: urlObject.hostname,
      path: url
        .toString()
        .replace(/https?:\/\//, "")
        .replace(/^[^/]*/i, ""),
      port: urlObject.port,
      protocol: urlObject.protocol,
      rejectUnauthorized: false,
      method: options?.method,
      headers: {
        ...(options?.headers ?? {}),
        host: urlObject.hostname,
      },
    };
    const outboundHttp1Request = request(http1RequestOptions, (res) =>
      resolve([res, null])
    );
    if (options?.body)
      outboundHttp1Request.write(
        // plain objects are serialized, buffers and strings are sent as-is
        typeof options.body === "object" && !(options.body instanceof Buffer)
          ? JSON.stringify(options.body)
          : options.body
      );
    outboundHttp1Request.on("error", (thrown) => {
      resolve([null, thrown]);
    });
    outboundHttp1Request.end();
  });
  if (requestError) throw requestError;
  // step 2: accumulate the body chunks into one single buffer
  const [data, responseError] = await new Promise<
    [Buffer | null, Error | null]
  >((resolve) => {
    let partialBody = Buffer.alloc(0);
    response?.on("error", (thrown) => {
      resolve([null, thrown]);
    });
    response?.on("data", (message) => {
      partialBody = Buffer.concat([partialBody, message]);
    });
    response?.on("end", () => {
      resolve([partialBody, null]);
    });
  });
  if (responseError) throw responseError;
  // step 3: decompress the body according to the content-encoding header
  const fetchResponse = async () =>
    await (response?.headers["content-encoding"] || "")
      .split(",")
      .reduce((buffer, formatNotTrimmed) => {
        const format = formatNotTrimmed.trim().toLowerCase();
        const method =
          format === "gzip" || format === "x-gzip"
            ? gunzip
            : format === "deflate"
            ? inflate
            : format === "br"
            ? brotliDecompress
            : format === "identity" || format === ""
            ? (
                input: Buffer,
                callback: (err?: Error, data?: Buffer) => void
              ) => {
                callback(undefined, input);
              }
            : null;
        if (method === null)
          throw new Error(`${format} compression not supported by the proxy`);
        return buffer.then(
          (data1) =>
            new Promise<Buffer>((resolve, reject) =>
              (method as any)(data1, (err?: Error, data2?: Buffer) => {
                if (err) return reject(err);
                resolve(data2!);
              })
            )
        );
      }, Promise.resolve(data!));
  return {
    ...response!,
    // the spread alone can miss prototype getters, so copy the useful fields explicitly
    statusCode: response!.statusCode,
    headers: response!.headers,
    json: () =>
      fetchResponse().then((data) =>
        !data ? null : JSON.parse(data.toString())
      ),
  };
};
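
Here is how it can be used — a minimal sketch, assuming the snippet above is saved as fetch.ts (the endpoint url is just an illustration) :

import { fetch } from "./fetch";

(async () => {
  // same call shape as the regular fetch: url + optional method / headers / body
  const response = await fetch("https://httpbin.org/json", {
    method: "GET",
    headers: { accept: "application/json" },
  });
  console.log(response.statusCode); // e.g. 200
  console.log(await response.json()); // the decompressed body, parsed as json
})();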

Jenkinsfile-GitHub : resolve $CHANGE_ID from branch build

It can be frustrating to realize that env.CHANGE_ID is not defined in your Jenkins job, just because the current build is a branch build and not a pull request build.
But this is only an obstacle, and it can be overcome.

Proof (assuming you know the URL of the GitHub API, the GitHub user name, the project name and the name of your repository) :

if (env.BRANCH_NAME != MAIN_BRANCH) {
  def pulls = httpRequest acceptType: 'APPLICATION_JSON',
          authentication: githubUsername,
          consoleLogResponseBody: true,
          contentType: 'APPLICATION_JSON',
          httpMode: 'GET',
          responseHandle: 'NONE',
          url: "${GITHUB_API_URL}/repos/${PROJECT_NAME}/${REPOSITORY_NAME}/pulls"
  def pullsList = new groovy.json.JsonSlurper().parseText(pulls.content)
  // the pull request whose head ref is the current branch is ours
  def pullRequest = pullsList.find { it.head.ref == env.BRANCH_NAME }
  if (pullRequest != null) {
    env.CHANGE_ID = pullRequest.number
  }
}

Insert this code at the top of your build, and you will have access to env.CHANGE_ID.
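
From there, any logic that normally only runs on pull request builds can test the variable — for instance (illustrative) :

if (env.CHANGE_ID) {
  echo "This branch build belongs to pull request #${env.CHANGE_ID}"
}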


Bind all your grpc endpoints with a simple and magic coRouter

grpc services are quite simple, focused and performance-oriented services.
Instead of calling them with a format that needs complex marshalling / unmarshalling over a standard http 1.1 connection, grpc relies on dedicated (therefore faster) generated code for each type.
The protocol also indexes data fields by number, which shrinks the payload many fold
(if you know the order of the keys in the structure, then you don’t need to spell them out in your payload).

grpc works efficiently with naive Netty servers (all you have to do is call .bindService()).
But the integration is not so smooth when you already have reactive spring components to integrate with (authentication, libraries, aspects).
When I discovered coRouters, I understood that one of their purposes was to dynamically declare endpoints based upon spring beans.

Then if you mix these two concepts together (grpc and coRouters), you can build autowired grpc services on spring.
How do you do that ?
Simply by copy / pasting this snippet in your codebase : https://gist.github.com/libetl/a655de480ed4d123e0c10fe557ea4271
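
To picture what the snippet automates, here is what a single hand-written coRouter endpoint can look like — a minimal sketch (the Greeter service name and the /com.company.Greeter/SayHello path are illustrative ; the gist derives such routes automatically from your grpc service beans) :

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.web.reactive.function.server.ServerResponse
import org.springframework.web.reactive.function.server.bodyValueAndAwait
import org.springframework.web.reactive.function.server.coRouter

@Configuration
class GreeterRoutes {
    // one endpoint per grpc method, following the grpc path convention /package.Service/Method
    @Bean
    fun greeterRouter() = coRouter {
        POST("/com.company.Greeter/SayHello") { request ->
            // a real binding would unmarshal the protobuf body here,
            // which is the part the gist automates for every service bean
            ServerResponse.ok().bodyValueAndAwait("Hello")
        }
    }
}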


Typescript : can it validate that a key exists in an object ?

The problem is simpler than you might think when you only look at the title.
If an object contains a complex data structure like this :

const structure = { a: { b: { c: { d: 1 } } } }

With a function that can read inside the object :

function read(path: string) {
  return path.split('.').reduce((acc: any, value) => acc[value], structure);
}

console.log(read('a.b.c.d')); // 1


How can you verify with Typescript that the path exists in structure ?

There is quite a simple answer when the object is flat :
function read(path: keyof typeof structure)

… But the object is not flat.
Then you have to implement a recursively descending type to define it.
Rather than sticking to the example, we can define a utility type so that any similar problem can be solved with a generic solution.

type DeepKeyOf<T, U> = {
  // when the value is a primitive (U), the path stops there;
  // otherwise the key can be followed by a deeper path, or stop early
  [P in keyof T]: T[P] extends U ? [P] : [P, DeepKeyOf<T[P], U>] | [P];
}[keyof T];

Where T is the type of the structure, and U is the union of all the types that can be considered primitive.
For the above example, the code would become :

function read(path: DeepKeyOf<typeof structure, number>) {
  const flattenedPath: string[] = [];
  let foundPath = path as any;
  // flatten the nested tuples into a plain list of keys
  while (foundPath && foundPath.length) {
    flattenedPath.push(foundPath[0]);
    foundPath = foundPath[1];
  }
  return flattenedPath.reduce((acc: any, value) => acc[value], structure);
}

console.log(read(['a', ['b', ['c', ['d']]]])); // 1

Even though it compiles to exactly the same javascript, the path is now checked by typescript.

console.log(read(['a', ['b', ['c', ['d']]]])); // ok
console.log(read(['a', ['b']])); // ok
console.log(read(['a', ['b', ['e', ['d']]]])); // typescript compilation error


Rendering react in Java

Before telling me that I am crazy, think about it again.

There are plenty of ways to use server-side rendering today, and the practice of sending displayable data through the network has proven to be efficient :
– rendering data logged in the same tool as for backend services
– less client business logic, less coupling between frontends
– multi-channel broadcast on your clients devices (mobile, desktop, television)
Besides, some frameworks are written on top of virtual machines to do the job. Which one specifically ? Node.js

Java has poor support for it today, and many companies are considering rewriting their stack just for the dream of going multi-channel.
Some others cannot… well, it is quite expensive.

While the idea of going rogue and starting to rewrite the app seems like the only way to go, there is an intermediary solution for companies whose code is mainly written in Java :
Make your Java api return react elements.

How to do it ?
React components are just instantiated from (or can always be reduced to) a composition of two elements :
– a type
– properties
If there are some advanced features (forward references, lazy loading, derived states, react portals), these can always be wrapped in simple components in the javascript / typescript client.
Example of json that you can write in your http response :

{"type":"table","props":{"id":"test","className":"my-table","children":{"type":"tablerow","props":{"vAlign":"center","children":{"type":"tablecell","props":{"children":{"type":"MyText","props":{"big":true,"bold":true,"danger":false,"children":"Hello"}}}}}}}}

Then this is how a user interface can be sent through a rest endpoint :

@Component
class RenderText {

    fun page() = tag { table }.new {
        id = "test"
        className = "my-table"
    }.setChildren {
        tag { tr }.new { vAlign = "center" }.setChildren {
            tag { td }.new { }.setChildren {
                MyLib.MyText::class.new {
                    big = true
                    bold = true
                    danger = false
                    children = "Hello"
                }
            }
        }
    }.toString()
}

Of course, just like in javascript, this is programmatic : you can embed whatever logic you want.
To be able to build the above, you need to declare your react components’ interfaces in the JVM.

package com.mycompany.service.react

object MyLib {

    interface MyTextProps: React.Props<MyTextProps> {
        var bold: Boolean?
        var danger : Boolean?
        var big : Boolean?
    }

    interface MyText : React.FC<MyTextProps>
}

You may now ask how we can have the react typings without leaving the JVM. In my case I have used dukat (https://github.com/Kotlin/dukat) to convert the react typescript types back into Kotlin (and then back into the JVM).
You can just copy / paste the react 16.13.1 types in kotlin : https://gist.github.com/libetl/7f4784eeaa5320b14b33567c0544c52a#file-react-kt

Now that you have your components and the react library, there is one missing piece : how do I instantiate interfaces ?
I have added just a small set of helper methods to achieve this goal, by creating proxy instances.
These proxy instances are objects which can extend a complex interface without necessarily implementing all the behaviors.
That lets you bind data to your components without having to wire up the events and interactions… since these will be lost during the serialization.
Just copy / paste those “medium sized” builder methods : https://gist.github.com/libetl/7f4784eeaa5320b14b33567c0544c52a#file-helpers-kt.

Your react code in Java should now compile… Enjoy some cheap server-side rendering in the JVM.
That is it for the backend part. Your API is now able to render react as json.
Next, you have to convert that json into react elements.
I have a javascript ES5 file to do that : https://gist.github.com/libetl/7f4784eeaa5320b14b33567c0544c52a#file-expand-react-json-js.

Finally, you need the html page to fetch the API that you have created and to transform the result into components :

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Test server side rendering</title>
  </head>
  <body>
    <div id="root"></div>
    <!-- load the react and react-dom umd bundles first (src attributes omitted here) -->
    <script crossorigin src="…"></script>
    <script crossorigin src="…"></script>
    <script crossorigin src="…"></script>
    <!-- my lib exports an umd variable called "components" -->
    <script crossorigin src="…"></script>
    <script crossorigin="anonymous" src="…"></script>
    <script src="expand-react-json.js"></script>
    <script type="text/javascript">
      window.addEventListener("DOMContentLoaded", function (event) {
        // api name here is server-driven-ui, but call it your name
        fetch("server-driven-ui")
          .then(function (response) {
            return response.json();
          })
          .then(function (json) {
            ReactDOM.render(
              expandReactJson(React, components, "", json),
              document.getElementById("root")
            );
          });
      });
    </script>
  </body>
</html>

That is it.
And let’s be patient : someday we are going to hear about Jetpack Compose for the web.

Until next time, goodbye.


spring-boot : declare yaml partials

It has been a question for me for many years :

How can I inject spring properties from a library right into my application without any overhead ?
* no copy / paste
* no reference in the configuration
* just by adding a maven / gradle dependency to the classpath.

I also need to declare these partials discriminated by spring profiles (just like in application.yml).
These properties also need to be accessible in my own yaml.

I want the library to declare common variables like these

domain-name: ${container-deployer}.${environment}.${region}.${domain-name-suffix}

---
spring.profiles: eu

environment: dev
region: eu-west-1
domain-name-suffix: company.ext
container-deployer: eks

And then reuse them in my application.yml

user-api-url : https://user.${domain-name}/v2/
invoicing-api-url : https://invoicing.${domain-name}/v1/

Is that possible ?
Actually, you can write a very simple tool to load these yaml partials,
and just reuse that code in your library : https://gist.github.com/libetl/cb45dccaf27fd68a95fd79e3e02fad75

Then add an EnvironmentPostProcessor to your library :

package com.company.library.lib1

import com.company.library.tools.LoadYaml
import com.company.library.tools.LoadYaml.LoadOptions
import org.springframework.boot.SpringApplication
import org.springframework.boot.env.EnvironmentPostProcessor
import org.springframework.core.Ordered
import org.springframework.core.env.ConfigurableEnvironment
import org.springframework.core.io.ClassPathResource

internal class ApplicationSettingsConfiguration : EnvironmentPostProcessor {

    override fun postProcessEnvironment(environment: ConfigurableEnvironment?, application: SpringApplication?) {
        LoadYaml with LoadOptions(
            environment,
            deactivationProfile = "without-settings",
            forApplicationScopeOnly = true,
            yamlName = "lib1-settings(application)",
            yamlClassPathResource = ClassPathResource("lib1-settings.yml"),
            checkProfiles = true
        )
    }
}

In the above example, I have saved a yaml file called lib1-settings.yml in src/main/resources of my library, where I have put all my common variables.

When checkProfiles is set to true, I can define several sections separated by ‘---’, each of which can define differentiated values for predefined spring.profiles values.
deactivationProfile allows skipping the declaration of the yaml file when a given profile is activated.
forApplicationScopeOnly determines whether to inject the vars in both the bootstrap and application scopes or only in the application scope.
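
One detail the library must not forget : spring-boot only discovers an EnvironmentPostProcessor when it is registered in the jar’s META-INF/spring.factories file, so the library also has to ship this entry :

org.springframework.boot.env.EnvironmentPostProcessor=\
  com.company.library.lib1.ApplicationSettingsConfiguration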

And with that, I am able to shrink the number of variables I have to declare in my application, since a lot of them are already injected by my libraries.
My configuration file went down from 30kb to just 2kb, while still defining variables for every environment and for every aspect.



A tiny reverse-proxy to start in ONE command

You are a devops or a software developer in a company and you are about to start hacking on one program of the system.
At some point, you will ask yourself this question :

“How do I route the traffic to my local version of the application ?”

(unless you test everything remotely)
Some obvious answers exist (‘use Docker’, ‘get nginx’, ‘use our homegrown proxy’).
But what if you just want the tiniest possible solution ? Something that would require only one command to start ?

=> type ‘sudo npx local-traffic’ in your terminal (sudo when running under linux / macOS). After 4 seconds, the proxy should be started.

There is a config file that you can modify (.local-traffic.json), actively watched by the proxy (so there is no need to reload / restart).

That proxy also takes care of a couple of neat things :
– cookie rewriting, to keep cookies on the proxy domain (and to keep a session on a site)
– response body rewriting, to keep hyperlinks on the proxy
– SSL options (to be able to reach https sites)

More Information available at https://github.com/libetl/local-traffic


Kotlin Coroutines utilities

If you prefer to combine Kotlin Coroutines with popular solutions like Flowable, Streams or RxJava, where every strategy is readily available, this post is not for you.

Otherwise, if you are interested in writing your own asynchronous strategies, here are two of them (in addition to my post from April 2019 : https://libetl.wordpress.com/2019/04/30/learn-how-to-group-expensive-calls-with-the-coroutines/).

Batching strategy
That strategy helps you stream your “Extract, transform, load” program by starting a parallel and correctly throttled execution of a massive process.
In other words, regardless of how many rows are in your dataset, you are able to process them all.

Not too slow, and not too fast, to avoid sending a throughput that your downstream actor cannot bear.
The batching strategy is then a kind of streaming facility.

It basically consists in keeping n workers busy with one row each until all the rows have been processed.
The strategy is initialized on the first use, so the following datasets can be processed with less “heating” time.
Source code of that strategy : https://gist.github.com/libetl/71b826a0db248e6770a2c0b5c0ae6d18#file-batchcoroutinesstrategy-kt

Caching strategy
Want to keep long computation results in your program memory after having processed them ? That sounds interesting when your client is requesting some data and you cannot respond in a reasonable amount of time (more than 5 seconds).
Give your client a UUID and tell it to come back later with that UUID.

When a client requests a UUID that is not yet computed, you can just reply “oh, it is not ready yet”.
If it is done, “here are the results”,
otherwise “sorry, apparently that UUID does not correspond to a task done on this machine”.

That strategy consists of a cache object (a map of uuid to results), a worker to run async tasks, a “cacheAdder”, and a method to poll the status of a task.
Basically, the job starts by sending a message to the worker, which after completion sends the result to the cacheAdder. The cache is configured to automatically expire elements 10 minutes after the last read.
Source code of that strategy : https://gist.github.com/libetl/71b826a0db248e6770a2c0b5c0ae6d18#file-cachingcoroutinesstrategy-kt
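
To make those three replies concrete, the polling part can be summarized like this — a minimal sketch with made-up names (the gist’s actual implementation differs) :

import java.util.UUID

sealed class TaskStatus {
    object NotReadyYet : TaskStatus()
    data class Done(val result: Any) : TaskStatus()
    object UnknownTask : TaskStatus()
}

// cache holds the finished results, pending holds the uuids still being computed
fun pollStatus(uuid: UUID, cache: Map<UUID, Any>, pending: Set<UUID>): TaskStatus =
    when {
        cache.containsKey(uuid) -> TaskStatus.Done(cache.getValue(uuid))
        uuid in pending -> TaskStatus.NotReadyYet
        else -> TaskStatus.UnknownTask
    }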

Can I combine them ?
Absolutely, here are the declarations to have a batching strategy with cache :

private val batch =
    batchingStrategy.batchOf(
        workers = 20,
        coroutineContext = coroutineContext
    ) {
        letsProcess(it)
        // this is where you tell what to do
        // for each element in your dataset
    }

private val batchWithCache =
    cachingStrategy.cache(
        workers = 20,
        coroutineContext = coroutineContext
    ) {
        batch(it).await()
        // "it" represents your data
        // the result of "await" is the
        // global result of the operation.
        // you can add further operations there
    }


How do I mock javax.sql.DataSource

This is an annoying question I have had to ask myself.

Integration tests are pretty slow. Even when they use a dedicated database like H2 or Derby, they still go through obscure infrastructure layers to read / edit / write / flush data.
I wanted tests able to run in no time while still performing data manipulation.

Rather than trying to mock each Data Access layer (and I have got many of them), how can I simply mock responses from databases without doing bespoke mock engineering ?

Actually the DataSource interface (which is a really primitive interface) can be mocked if we simulate the right methods.
DataSource also has a lot of dependencies and axioms that won’t be called by your application… so don’t create mocks for them.
Otherwise you will spend weeks defining your test strategy.

I have dug into the Java Database Connectivity interfaces to understand the main interactions between the roles, and I could create several mocks to let the critical path of the data flow go through without any error.

You will need to mock : DataSource, PreparedStatement, ResultSet and Connection.

Now the question is : do I have to prepare tables before the tests ?
Well, to be honest, you only need one Map. You can register a mapping between SQL requests and results (we will call it registeredSQLServerMappings).

And you can verify the results of your calls by making the mocks update a mutable list, let’s call it calledStatements.

Here is the gist : https://gist.github.com/libetl/48beff8234a7e034762fa23f6692cb86. Anyone can copy it, and the only prerequisite is to have mockK in the classpath (and kotlin).

To use it in your test :
– have a BeforeEach that does
val dataSource = mockk<DataSource>().simulateDatabase()
– register some mappings like in this example :
registeredSQLServerMappings["SELECT ID, TICKET_NUMBER, FIRST_NAME, LAST_NAME FROM DOCUMENT"] = listOf(
    mapOf("ID" to 1234, "TICKET_NUMBER" to "ABCD", "FIRST_NAME" to "John", "LAST_NAME" to "Doe")
).toResultSet()
– use that DataSource in your data layer bean.
– Congratulations, you can now use a mocked database in your unit tests, and they run in less than one second each.
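
Putting it all together — a minimal sketch of a complete test (DocumentDao is a made-up data layer bean written here for the example ; simulateDatabase, registeredSQLServerMappings and toResultSet come from the gist) :

import io.mockk.mockk
import javax.sql.DataSource
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.BeforeEach
import org.junit.jupiter.api.Test

// a made-up minimal data layer bean, standing in for your own
class DocumentDao(private val dataSource: DataSource) {
    fun findFirstTicketNumber(): String? =
        dataSource.connection.use { connection ->
            val resultSet = connection
                .prepareStatement("SELECT ID, TICKET_NUMBER, FIRST_NAME, LAST_NAME FROM DOCUMENT")
                .executeQuery()
            if (resultSet.next()) resultSet.getString("TICKET_NUMBER") else null
        }
}

class DocumentDaoTest {

    private lateinit var dataSource: DataSource

    @BeforeEach
    fun setUp() {
        dataSource = mockk<DataSource>().simulateDatabase()
        registeredSQLServerMappings["SELECT ID, TICKET_NUMBER, FIRST_NAME, LAST_NAME FROM DOCUMENT"] =
            listOf(
                mapOf("ID" to 1234, "TICKET_NUMBER" to "ABCD", "FIRST_NAME" to "John", "LAST_NAME" to "Doe")
            ).toResultSet()
    }

    @Test
    fun `reads the first ticket number from the mocked database`() {
        assertEquals("ABCD", DocumentDao(dataSource).findFirstTicketNumber())
    }
}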