
JavaScript Destructuring Syntax

Photo: Unsplash - Markus Spiske

JavaScript provides the destructuring assignment syntax to let you unpack values from arrays and properties from objects into variables. I still see a lot of code where the developer is either not aware of this language feature or just didn’t use it. The feature was added in ES6 and enables more terse code without the loss of intent. What follows is a whistle-stop tour of the syntax and how to use it.

Array Destructuring

Array destructuring allows you to define variables based on the position of elements in an array. A simple example is:

const numbers = [1,2,3,4];
const [ one, two, three, four ] = numbers;
const [ a,,e,g ] = numbers;

console.log(one);   // 1
console.log(two);   // 2
console.log(three); // 3
console.log(four);  // 4

console.log(a); // 1
console.log(e); // 3
console.log(g); // 4

To destructure an array you declare the variables inside a set of square brackets, in the order in which they map to the array elements. If the variables do not already exist then you need to prefix the declaration with let or const. Because position is what maps values out of the array, you need to account for the positions of any values you do not want. You will see there is an empty slot left in the a, e and g destructuring above, which skips the second element.
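
Because the let or const prefix is only needed when declaring new variables, you can also destructure straight into variables that already exist. Here is a minimal sketch (the variable names are just for illustration), which also shows the classic trick of swapping two values without a temporary variable:

let left = 0;
let right = 0;

// Assign into the existing variables - note there is no let or const here.
[ left, right ] = [10, 20];

console.log(left);  // 10
console.log(right); // 20

// Swap the two values in a single statement.
[ left, right ] = [ right, left ];

console.log(left);  // 20
console.log(right); // 10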

If you are only interested in the first couple of elements in the array then you can apply a “rest” pattern to gather the remaining elements into a single variable. This is achieved by prefixing the final variable declared with three periods, as shown in the example below:

const numbers = [1,2,3,4];
const [ one, two, ...remainder ] = numbers;

console.log(one);       // 1
console.log(two);       // 2
console.log(remainder); // [3, 4]

You can use this array destructuring approach with any iterable, such as a generator, as shown below:

function* makeRangeIterator(start = 0, end = 100, step = 1) {
    let iterationCount = 0;
    for (let i = start; i < end; i += step) {
        iterationCount++;
        yield i;
    }

    return iterationCount;
}

const [first, second, third, fourth, fifth, sixth, ...rest] = makeRangeIterator();
console.log(sixth); // 5
console.log(rest);  // [6, 7, 8, ..., 99]

Object Destructuring

Destructuring objects works in a near-identical manner to the arrays we have seen above. Instead of variables being bound by position, they are bound to object properties, as shown below:

const person = {
    firstName: "Donald",
    lastName: "Duck",
    age: "105",
    address:{
         houseNumber: "14446",
         street: "Looney Road",
         town: "Loonery Town"
    }
}

const { firstName: first_name, lastName: last_name } = person;

console.log(first_name);
console.log(last_name);

In the example above, the name after each colon renames the property into a new variable. An even clearer approach is to use the shorthand form, which works if you name the variables the same as the properties, such as:

const person = {
    firstName: "Donald",
    lastName: "Duck",
    age: "105",
    address:{
         houseNumber: "14446",
         street: "Looney Road",
         town: "Loonery Town"
    }
}

const { firstName, lastName } = person;

console.log(firstName);
console.log(lastName);

You will notice that instead of square brackets, object destructuring uses curly braces to surround the variable declarations.
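
The “rest” pattern from the array examples also works with objects (it arrived a little later, in ES2018), gathering any properties you did not pick out into a new object. A small sketch using the same person data:

const person = {
    firstName: "Donald",
    lastName: "Duck",
    age: "105"
};

// firstName is pulled out on its own; every remaining property lands in rest.
const { firstName, ...rest } = person;

console.log(firstName); // "Donald"
console.log(rest);      // { lastName: "Duck", age: "105" }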

To destructure more complex objects, such as the address data in the person object, you can nest the variable declarations in line with the structure of the object. You prefix each nested block with the name of the parent property and surround the child declarations with further curly braces, as shown below:

const person = {
    firstName: "Donald",
    lastName: "Duck",
    age: "105",
    address:{
        houseNumber: "14446",
        street: "Looney Road",
        town: "Loonery Town"
    }
}

const { address: {houseNumber, street} } = person;

console.log(houseNumber);
console.log(street);

You can nest your destructuring of the data as deeply as you want by continuing the pattern, as shown here:

const person = {
    firstName: "Donald",
    lastName: "Duck",
    age: "105",
    address:{
        houseNumber: "14446",
        street: "Looney Road",
        town: "Loonery Town",
        phones:{
            mobile: "03456789",
            landLine: "23456789"
        }
    }
}

const { address: { houseNumber, street, phones:{ landLine } } } = person;

console.log(houseNumber);
console.log(street);
console.log(landLine);
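
One thing worth calling out with nested destructuring is that the parent properties themselves do not become variables; only the innermost names you list are declared. If you also want the parent object you have to name it explicitly as well. A small sketch using the person object above:

const { address: { town } } = person;

console.log(town); // "Loonery Town"
// Note: the line above declares only town - it creates no address variable.

// To get both the parent object and a nested value, list the property twice:
const { address, address: { street: road } } = person;

console.log(address.houseNumber); // "14446"
console.log(road);                // "Looney Road"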

Destructuring Defaults

If you try to destructure array elements or object properties that don’t exist, the variable will be set to undefined, as shown below:

const person = {
   firstName: "Donald",
   lastName: "Duck",
   age: "105",
}

const { weight } = person; // <- undefined


console.log(weight);

If you are unsure whether the properties you are destructuring exist, you can set default values when declaring the variables, such as:

const person = {
    firstName: "Donald",
    lastName: "Duck",
    age: "105",
}

const { firstName, lastName, height = "110" } = person;

console.log(firstName); // "Donald"
console.log(lastName);  // "Duck"
console.log(height);    // "110" - the default, since person has no height property
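
Defaults work in exactly the same way for array destructuring; the default is used whenever the element at that position is missing (undefined). A quick sketch:

const scores = [42];

// first takes the real element, second falls back to its default.
const [ first = 0, second = 0 ] = scores;

console.log(first);  // 42
console.log(second); // 0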

Wrapping Up

Hopefully you can see the benefit of destructuring and how it reduces the number of variable declarations you make to retrieve data from objects and arrays. You can use the syntax in a few ways other than just variable declaration.

You can define a function that accepts a single object as a parameter and use the destructuring syntax to pull out just the values you need from that object, such as:

const person = {
    firstName: "Donald",
    lastName: "Duck",
    age: "105",
}

function printNameFor({ firstName, lastName }){
    console.log(firstName);
    console.log(lastName);
}

printNameFor(person);
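
Parameter destructuring combines nicely with the defaults shown earlier. Giving the whole parameter a default of an empty object means the function will not throw if it is called with no argument at all. This is a small sketch rather than anything required by the pattern:

function printNameFor({ firstName = "Unknown", lastName = "" } = {}) {
    console.log(firstName);
    console.log(lastName);
}

printNameFor({ firstName: "Donald" }); // "Donald" then an empty line
printNameFor();                        // "Unknown" then an empty line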

You can also use the syntax to handle returning multiple values from a function such as:

const person = {
    firstName: "Donald",
    lastName: "Duck",
    age: "105",
}

function getNamesFrom(person) {
    const { firstName, lastName } = person;
    return [ firstName, lastName];
}

const [ firstName, lastName ] = getNamesFrom(person);

console.log(firstName);
console.log(lastName);

If you have any questions around the destructuring syntax then let me know via twitter or email.

GraphQL, PostgreSQL & Hasura (Pt1)

Photo: Unsplash - Isaac Smith

In a recent project, we decided to build our backend in Hasura. This was my first time working with it and I have been impressed with the ease and power it provides. Essentially, Hasura melds the GraphQL language with a PostgreSQL database to provide easy and fast real-time APIs powered by your data schema. In this post, I will cover GraphQL (at a high level) and how Hasura makes it easy to set up an API in no time.

(This is based purely on my recent experience and I am not in any way affiliated with Hasura).

GraphQL

GraphQL is an open-source query language that was developed by Facebook. It was released publicly in 2015 and is designed to power APIs by providing a runtime that allows clients to query just the data they need, without the complexity and baggage of something like an ORM. Its flexibility makes evolving APIs over time easier, and it speeds development by removing the need to write as much boilerplate code. For example, using an ORM like Objection.js you might write something like the following:

const person = await Person.query().findById(1);

console.log(person.firstName);
console.log(person.lastName);

This returns the whole Person record in order to build the object the data is mapped to, yet you then use only a small subset of the fields on that object. With GraphQL you can query just the data you need rather than having to return a whole record. The equivalent query in GraphQL could be written as:

query{
  person{
    firstName
    lastName
  }
}

Which returns

{
  "person":{
    "firstName": "Donald",
    "lastName": "Duck"
  }
}

This is a trivial example but it shows the idea behind GraphQL: ask for exactly what you want and get the response back as JSON. The GraphQL language includes everything you need to create, read, update and delete data. Queries, such as the snippet above, let you read existing data; Mutations let you create, update and delete data; and Subscriptions let you watch part of your schema to receive real-time updates.
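
As a rough sketch (not taken from the Hasura tutorial), a mutation against a hypothetical person table would look something like the following. Hasura auto-generates fields such as insert_person, affected_rows and returning from the table definition, so treat the exact names here as assumptions:

mutation {
  insert_person(objects: [{ firstName: "Donald", lastName: "Duck" }]) {
    affected_rows
    returning {
      firstName
      lastName
    }
  }
}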

Rather than provide a walk-through of GraphQL top to bottom I recommend you read the excellent tutorial on GraphQL.org.

PostgreSQL

PostgreSQL doesn’t really need an introduction, as it is arguably the most advanced open-source database available. If you have worked with SQL Server or a MySQL derivative then you will feel at home. Because of the way Hasura works you do not need to interact with PostgreSQL directly. If you do want to learn more about it then the documentation can be found here.

Hasura

Setting up a playground to test Hasura is very easy. You can deploy an image to Heroku on their free tier and be up and running in minutes. Once deployed you will have a PostgreSQL database and an endpoint with which to access Hasura. I followed the tutorial for building a todo app which can be found here. I should mention I have found their documentation really good.

Once you are up and running you access the UI from your browser.

There are four main tabs in the UI, labelled:

  • GraphiQL
  • Data
  • Remote Schemas
  • Events

Most running GraphQL instances provide GraphiQL, a REPL-style environment in which you can build and test your queries, mutations and subscriptions. The Hasura version is the standard one and provides automatically generated documentation and point-and-click query building.

The Data tab is where you will design and build your database schema. You have a point-and-click UI that simplifies the design of tables, keys and relationships. There is also a SQL pane in which you can create any PostgreSQL objects you want, such as functions or triggers. These can then be used from the UI to link functions, views and triggers into your schema.

The Remote Schemas and Events tabs are powerful features which I want to cover in more depth in a follow-up article. The Remote Schemas tab allows you to set up and consume one or more URLs as part of your Hasura schema. This means that you can write a serverless function, for example, that accepts data to pass to a REST API but exposes the results as GraphQL. The Events tab allows you to hook into schema-based events and react to them. So, again, you can use serverless functions to process a new entry in a table and push the results to an endpoint or a different table.

After working through the tutorials I could immediately see how good this software could be. I still had questions around security and manageability:

  • Security - How easy is it to secure, and what about role-based permissions? Firstly, you can secure the entire Hasura instance with a password (the admin secret), which is passed with every request as a request header (there is a minimal request sketch after this list). It also means that if you try to visit the instance from your browser you will need the secret. Hasura then provides full role-based permissions that can be applied to each table, right down to individual actions on the table.
  • Manageability - First off, migrations. Hasura provides a migration framework through its console app, which means you can push changes from your development instance through to staging and production with a simple command. I have opted to use DigitalOcean as a host, as Hasura offers a one-click deployment there, but they also support Azure and Google Cloud. Applying a custom domain was as easy as creating an A record for your domain pointing at the IP address of your instance. They also provide a health monitoring endpoint on each instance via /healthz.
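
To make the header-based security concrete, here is a minimal JavaScript sketch of calling a Hasura endpoint; the endpoint URL and the person table are placeholders matching the earlier example, and the admin secret travels in the x-hasura-admin-secret header:

// Placeholder endpoint - replace with your own instance URL.
const endpoint = "https://my-hasura-instance.example.com/v1/graphql";
const adminSecret = "<your admin secret>"; // never hard-code this in a real project

async function getPeople() {
    const response = await fetch(endpoint, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            // Hasura checks this header on every request once an admin secret is set.
            "x-hasura-admin-secret": adminSecret
        },
        body: JSON.stringify({
            query: "query { person { firstName lastName } }"
        })
    });

    const { data } = await response.json();
    return data.person;
}

getPeople().then(people => console.log(people));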

All the features I have outlined in this post are available on the free tier, and performance is only limited by your hosting provider. I really am impressed by how we have used Hasura so far and how easy it has made creating APIs.

I will follow up this post with a more detailed how-to around the Remote Schemas and Events functionality. In the meantime, if you have used Hasura for a project or have any interesting tips then let me know via twitter or email.

Update On Debugging Rust In VS Code

Photo: Unsplash - Matt Artz

A couple of weeks ago I wrote about setting up a Rust environment and mentioned that I had hit a bug with the Rust Analyzer extension in VS Code that caused the VSVim keybindings to stop working. I raised a bug, having missed an existing issue that reported the same problem. Oops.

Anyway, to fix the clash you just need to remove a keybinding associated with Rust Analyzer by:

  • Going to the Keyboard Shortcuts (Ctrl-Shift-P, “Open Keyboard Shortcuts”).
  • Searching for Rust Analyzer.
  • Finding the Enhanced Enter key binding and deleting it.

Now VSVim and Rust Analyzer play nicely together, and from my limited testing Rust Analyzer is a better extension than RLS.

Let me know your thoughts on Rust Analyzer via twitter or email.

Ngrok

Photo: Unsplash - Nicole Y-C

I love a good, simple utility and this week I found one that is genuinely useful: ngrok.

ngrok provides a simple way to forward your localhost server content through any NAT or a firewall. This means that you can fire up a demo of your latest website and share a URL with your client or boss and have them view it. I found it while developing a chatbot for Slack. What I loved about the site was that I was signed up and using the app in less than 4 minutes!

(For clarity I have no association or involvement in ngrok.)

First up head over to ngrok.com and sign up for a free account. Once you are in they have a simple UI that guides you through a 4 step process to download the app and get going.

I installed the app via my package manager. Steps 1 and 2 provide details for Mac and Windows users, and for Linux users who are not lucky enough to have a package in their repositories. Once you have downloaded and installed the app you can move to step 3.

In step 3, you need to register your account token; they provide a shell snippet to copy and paste into a terminal to do this. Step 4 is to run the command, with some examples of how to serve different content. Serving your localhost web server (on port 80) is as easy as:

$ ngrok http 80

This leaves ngrok running in the terminal, displaying the temporary URLs that will reach your localhost.

They even provide a web-based interface where you can track and monitor requests. This is accessible at http://localhost:4040.

So simple and useful.

What simple but killer apps do you use to make life easy? Let me know via twitter or email.

Thoughts On Monoliths

Photo: Unsplash - Michael Schaffler

There was an interesting post on Microservices vs Monoliths this past week. You can read it here. This made me think a lot about why and what should drive your architectural choices.

Microservices have gained a lot of attention since 2013. The architectural approach composes complex applications from many smaller, individual applications. These smaller applications (services) are stateless and communicate via standard protocols such as HTTP. They generally relate only to backend and data services. Netflix and Amazon are usually listed as early pioneers.

Monolithic applications describe the typical n-tier application. These bundle state management, the user interface and data access into a single logical application. The list of monolithic applications is endless and it is a well-established approach.

There has been a lot written about Microservices. A lot of “we benefited in this way by migrating to Microservices” articles. A lot of new projects writing about going with services. These articles can sometimes be light on reasoning and wider business context. Every company and every project is unique. While reading about other projects is useful and educational, not taking the wider context into account could lead to bad decisions being made.

There are many questions that need to be answered when designing an application and its architecture. First, what are the problems you need to solve? What kind of project are you working on? What are the business goals? Are you starting out on a new project, taking over an MVP or refactoring a large, existing codebase? What is your budget? What are the available resources and what is the skill set? Are you going to have to use a particular stack or language? These are all standard inputs to your architectural decisions.

I have only ever concluded that a microservice approach would be beneficial once. In all other circumstances, I have chosen to start or continue with a monolith. Why in that one instance did I feel that a service-based approach would be wise? The reasons included:

  • The existing code base comprised less than five main features. Each feature had a clear boundary and could be thought of as a distinct application.
  • The company goals were to replicate the features across global markets. Each market would have a different implementation caused by regulatory and supplier requirements.
  • The codebase would move to a CI/CD process to simplify the deployment process and improve testing. Each service would want to be independently deployable.
  • The longer-term plan for the company was to build a marketplace platform. This would provide common functionality on which any reverse auction-style marketplace could be built.
  • The plan to evolve the development team included having significant resources split into core and autonomous feature teams.

This was for a company that was past the initial cycle of building and testing features. They were three years in and already had a good market fit. They had secured investment to drive forward and evolve into a technology-focused business. Unfortunately, they opted to invest in offline resources rather than the technical team, and the project was doomed to fail. Anyway, my point is that there was a clear set of requirements and goals on that project which meant that a service-based approach seemed like a sensible option.

If you are starting a new project then you absolutely do not want to go with services. You will not understand your customers or the business well enough to design your services right first time. You almost certainly will not have the time to integration test them, and because of that your deployment and monitoring will be horrific to start with. These last two points are key. Once you break your code down into lots (hundreds) of services you need to not only test them but also observe them in your environments. Instead of keeping one application online you have to keep a lot of interrelated applications online. It’s not impossible, but it’s not quite as developer-friendly as everyone tells you.

If you are starting a project and need a web app and a mobile app then it is fine to have a single API app serving them both. You can structure that API as you might your services, and you should absolutely be following clean code practices. This will mean that when you come to split the logic out into services (when your company has raised £10m and has a million users) it is an easy task. Following that approach will make your project easier to develop anyway.

If you are looking to refactor an application that has become an unmanageable ball of bad code then Microservices are absolutely not the right approach for your team to consider. You should first review how you ended up with such a bad codebase. If you do not fix the root issues then you will replicate them across multiple services. You need to refactor your codebase based on clean code principles and instil best practices and code reviews in your team.

This has turned into a bit of a brain dump. I started out really wanting to agree with the article I mentioned, and I hope that has come across. It’s OK to design your codebase to fit your skills, budget and company goals. Architect solutions to your problems and not to those of much bigger, wealthier competitors. It’s always good to see what others are doing, but building your project right and fast is what will make your company a success.

I’d love to hear your thoughts on the Monolith vs Services debate so please contact me via twitter or email.