The trouble with getting older is the amount of baggage you accumulate.
I have worked across various Linux and Windows machines over the last 18-odd years. Since 2002, when I installed Slackware over the course of a weekend, I have rarely been without a terminal. For years I ran Cygwin when I was on a Windows machine. Recently I found Babun, which is the best *nix environment I have used on Windows. I will write more about that soon.
When you use the terminal, little tweaks and changes start to creep into your rc and profile files. Over time, and after installing many tools and frameworks, your files start to look tired: blocks commented out, configurations for tools you have long abandoned, and settings that no longer make sense. Over the years I have accumulated tweaks for different machines and terminal variants, most of which are no longer required.
So I have decided to prune and more or less start over with my dotfiles. I will be updating my setup and documenting it as I go here on the blog. I will also make my configuration available on GitHub once I have it working in a basic fashion.
I decided to look at how I use the terminal first.
Bash was my shell of choice right up until about a year ago. I had created a framework which allowed me to load shared configuration, with machine-dependent configuration sourced by hostname. It worked well and meant I stared at the same prompt for about 15 years.
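That old setup can be sketched roughly like this (the file layout below is illustrative, not my exact paths):

```shell
# Illustrative ~/.bashrc fragment: load shared settings first, then a
# host-specific file selected by hostname, if one exists.
for config in "$HOME/.bash/shared.sh" "$HOME/.bash/hosts/$(hostname).sh"; do
  [ -f "$config" ] && . "$config"
done
```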
I had come across the Z shell (zsh) when installing Arch. I wasn't really blown away; perhaps a little indifferent. Then I read an article (which I can no longer find) that sold me on switching based on the productivity gains.
Wikipedia describes zsh as:
Zsh is an extended Bourne shell with a large number of improvements, including some features of bash, ksh, and tcsh.
Benefits of zsh
I started to write a long paragraph on the benefits of zsh. However, I came across this excellent slide deck (by Brendon Rapp) which covers many of the benefits of zsh far better than I can:
Setup and configuration
So I have switched full time to zsh. For Arch users, this link describes the process of switching your shell from Bash to zsh. The first time you fire up zsh it will walk you through a setup wizard, which sets up a default terminal experience. This is fine, but what you want to do next is install the excellent oh-my-zsh framework.
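On most distributions the switch itself is just a package install and a chsh (paths may differ on your system, so check `which zsh` first):

```shell
# Install zsh and make it the login shell for the current user.
# pacman is Arch's package manager; use apt/dnf/etc. elsewhere.
sudo pacman -S zsh
chsh -s /usr/bin/zsh
```

Log out and back in for the new shell to take effect.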
oh-my-zsh contains hundreds of plugins and themes that make using zsh even better. Installation details can be found on the project page, but it is as simple as running one command:
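At the time of writing the install is a one-liner along these lines; check the project README for the current command before piping anything from the internet into your shell:

```shell
sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
```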
The project page covers enabling plugins and setting themes so I will not rehash that here. For convenience, there are screenshots of most themes.
I saw a co-worker using powerline status bars in vim, tmux and his shell and decided to go for something similar. There is a simple powerline-style theme in oh-my-zsh called agnoster. It was fine, but I had read about Powerlevel9k, which I decided to try as it is more configurable. Full installation details for Powerlevel9k are here. It allows you to configure segments within your prompt, and there are segments already available for Git and many programming environments.
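As a rough sketch, the theme and its segments are configured in your ~/.zshrc; the segment choices below are examples rather than a recommendation:

```shell
# ~/.zshrc fragment (assumes the theme is cloned into the oh-my-zsh
# custom themes directory, as its README describes)
ZSH_THEME="powerlevel9k/powerlevel9k"
POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(dir vcs)       # directory + git status
POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=(status time)  # exit code + clock
```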
It came in at £25 over my set budget of £1,000, but I couldn't resist the DDR4 RAM.
The case is plastic but sturdy. It will not win a beauty contest with a Mac, looking more like an Alienware case. The screen is clear and glare-free, and I would never go back to anything less than an FHD screen anyway. The keyboard is not as nasty as some, though it is no Thinkpad keyboard; there is a decent amount of travel and the layout follows the standard UK design. As for powering Visual Studio, I set it up with Visual Studio 2017, and in first tests I am loading our large solution in about 25 seconds, which is a massive improvement.
All in all I am pleased with the laptop. One of our new developers also purchased a 14” model with similar specs and is using it with three external monitors. He was also pleased to get such well-specified components for such a good price.
How many choices do we make in an average day? Depending on the source you believe, the average adult makes around 35,000 decisions a day. These range from the mundane, what to wear and what to eat, through to which job to accept and where to live. Who do you want to marry, and do you want children?
Recently I have found myself overwhelmed by choices. I become consumed by the variations on offer and procrastinate for prolonged periods. I am not talking about meal times, or getting out of the shower and having to choose what to wear. I am talking about everything above that, from what to do on a free day off to making purchases, and about the architecture and code choices I make for the applications I write. Booking a holiday this year took about three months; I settled on many different destinations until I found somewhere more appealing. I have always tended to overthink things, but as I have got older it feels like more choices are available. Talking to my wife, I have come to believe the issue is the internet and the ability it gives you to research your choices. Even when in a shop, about to buy something, I am on my phone checking for better options.
In writing this piece I am choosing what to add and what to omit. I have been redrafting it for three days.
I am not alone in feeling this way. Many papers and books have been written on the subject. The research shows that people are unable to handle large numbers of choices; they are more likely to make a snap decision to avoid the task of wading through the options. Given the number of options typically presented, research suggests a snap decision may well work out best. Tesco (the UK supermarket) is reducing its product lines by around 30,000 items this year. The reason? The realisation that its fastest-growing competitor, Aldi, offers very little choice. At the start of the year Tesco offered an unbelievable 28 varieties of tomato sauce; Aldi offered one. It appears the buying public does not need a mixture of packaging and product sizes. They want a fairly priced item. They do not want to stand and think about tomato sauce; they want to pick it up, buy it, and move on to more interesting things.
Having used Linux for close to 20 years, I have found the choice it offers to be both empowering and a distraction. Repositories of packages have allowed me to create and build software with ease, and no doubt increased my productivity as a developer. But it has also led me to waste hours trying distributions and desktops, tweaking configuration files to get settings how I want them, and trying themes to rectify some, frankly, hideous UIs. And then there is Arch. I love Arch as a distribution and it would take a lot for me to leave it. While I know you can install it from scratch in a couple of hours, I can get lost in the wiki. It is so deep, and offers so many installation options, that I have lost days to installing it. So many packages and configurations; so much choice.
This is the polar opposite of Apple and their approach to software: one core application per task, with two theme options out of the box. How many people do you know who own Apple gear and tweak their UI, or need to install many applications to get their work done?
Words to adhere to, but how to achieve it? I do not know, but I am starting to look at each part of my life and the changes required. I have started with the tools and environment changes I make for work. I have switched to Antergos for installing Arch-based systems: I download the minimal ISO and have a Gnome-based desktop (which I theme with Evopop) up and running in 15 minutes, saving a vast amount of time over installing vanilla Arch. Both my Windows and Linux environments are being scripted so I can restore them to a new machine quickly. I am cleaning up my dotfiles once and for all. I am limiting myself to one application per task; if I find a new application for a given task, I update my setup and remove the old one. Considering I spend on average 10 hours a day in front of a computer, this is already paying off time-wise.
I am applying a new way of making purchases. I allow myself one initial research session to select the best three options (based on requirements and price). I then make my decision against predefined criteria and actually walk away from the internet. I found this useful recently as we have renovated the whole of our house. Previously I would have lost vast amounts of time to selecting new furniture; this time I had items for four rooms selected and purchased in two days. My wife is astounded, and I have more time for important things.
I am also taking the less-is-more approach with the UIs of applications I work on, as they tend to offer too much choice. Look at Word from Microsoft or Writer from LibreOffice: the options and peripheral functionality on offer overwhelm users. These applications should let the user focus on writing their content. This is why applications such as iA Writer or FocusWriter have such a good reputation; they strip the choices out. You have a screen to type text, and you can insert tables and images. It is all you need to craft your prose. No distractions and no choices.
In rebuilding Open Energy Market's core application I am looking at the options presented to the user, aiming to strip away confusion and reduce the number of choices a user has to make. Do we need to ask this particular question? Can we default this page's values while still letting the user change them if needed? The irony is that in undertaking this work I am presented with many options about how to design it. I am back to making choices.
Package managers are awesome. They blew my mind when I first encountered them and they have only got better over the intervening years.
Package managers also suck! Or rather, it sucks that there is not a single package manager. Without wanting to start a hate campaign, think how awesome a package manager we would have if all of those developers worked together on one.
Anyway, this post is not about Linux package managers; it is about Chocolatey. Chocolatey is a package manager for Windows. Surprised? I was when I first read about it a few years ago. I meant to use it when I last set up a Windows laptop and forgot all about it. With a new laptop on order, I thought now was a good time to take a look.
So what is Chocolatey? According to the website:
Chocolatey is a global PowerShell execution engine using the NuGet packaging infrastructure. Think of it as the ultimate automation tool for Windows. Chocolatey is a package manager that can also embed/wrap native installers and has functions for downloading and checksumming resources from the internet - useful for when you have public packages, but don’t have distribution rights for the underlying software that packages represent (seen all the time with publicly available packages on the community repository).
Installing it is very easy. Open a PowerShell prompt with administrative privileges and enter:
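The command below is the documented install one-liner at the time of writing; always check chocolatey.org for the current version before running it:

```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
```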
For Chocolatey to work once installed, you will need to ensure you have set your PowerShell script execution policy. Full details can be found here. I use RemoteSigned on my machine.
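Setting the policy is a single command in an elevated PowerShell; RemoteSigned is what I use, but pick whatever policy suits your environment:

```powershell
Set-ExecutionPolicy RemoteSigned
```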
Once installed you can install a package by using the install command:
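For example, to install Git (git is a real package on the community repository, used here purely as an illustration):

```powershell
choco install git
```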
Chocolatey has all the commands you would expect from a package manager and they are documented here.
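The commands mirror what you would expect from apt or pacman. A few of the ones I use most (package names here are examples):

```powershell
choco list --local-only   # list installed packages
choco upgrade git         # upgrade a single package
choco upgrade all         # upgrade everything installed via Chocolatey
choco uninstall git       # remove a package
```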
The great thing is that Chocolatey is powered by PowerShell. You can create scripts to initialise your machines, should you use VMs or switch machines often. This is exactly what I have done. Passing the -y argument will progress the installation without asking for your permission.
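A machine-setup script ends up being little more than a list of installs with -y so it runs unattended; the package list below is an example, not my actual script:

```powershell
# setup.ps1 - example unattended machine setup (run as administrator)
$packages = @('git', '7zip', 'nodejs', 'vscode')
foreach ($pkg in $packages) {
    choco install $pkg -y
}
```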
The only drawback I can find right now is the speed at which packages are created. For example, there is no package that I can find for Visual Studio 2017 yet. You can look through the available packages here.
On the whole it seems quite good so far. The speed of installation depends on the application's installation method. The only thing I would like is an update notifier, which I have not found yet. I may create one in the next few weeks, so keep an eye out.
Today is proof that putting all your eggs in one basket is a terrible idea.
Today proved that even if you are Amazon, your cloud services can still let you down. S3 (Amazon's cloud storage service) went down in the East Coast region and a large number of sites went down with it. Some sites didn't go down but lost some functionality. Slack, Quora, Medium and Imgur were all affected. I am certain we will find out the cause in the next few days, although people don't seem to care once a service is back. While it is down, all hell breaks loose and Twitter makes for interesting reading.
What this highlights is the trade-off you accept by embracing cloud services. I am old enough to remember pre-cloud, when running a couple of servers could take 40% of a developer's time and cost hundreds a month to host, not including licence fees if you needed them. DevOps back then was far more involved than it is now: you installed and configured everything on the box, in most cases from the OS up. These days the OS is irrelevant and deployment can be as easy as a git push. Want to set up a virtual machine? Piece of cake. Want to spin up a website or a server for an API? A 5-minute job. Pre-cloud, people had jobs doing nothing but operating servers for small companies. Cloud providers automated those jobs away at many companies.
The cloud is the perfect answer to most developers' desires. Developers by nature do not want to be system administrators. They do not want to deal with servers, licensing and patching security vulnerabilities. They want to code, create, solve problems and then deploy their solution. Click and forget once that initial setup wizard is complete.
So the cloud sounds great for developers, and in many instances it is the perfect solution. But once you gain users, your hosting stability and availability are far more important than your developers' convenience.
Each of the vendors has multi-region solutions. They market the fact that their platform and services are resilient, selling the dream that continuity will prevail and your application will keep running even if an outage occurs. This is not the case without a fair amount of effort up front, and usually more during any moderate incident. It is something that, as developers, we need to give a lot more thought to. It should not be an afterthought; otherwise it comes back to haunt you.
At Open Energy Market we have suffered two major outages on Microsoft's Azure platform. Both happened during trading hours, and both caused our customers to lose access to our platform. From the start, our infrastructure was set up for redundancy: replicated services across different regions and regular fail-over testing. We were doing everything by the book, and then Azure's DNS routing suffered an issue and we were down. Having multiple regions gives you a sense of security until the infrastructure that connects them fails. Then all you can do is sit and wait until Microsoft fixes the issue.
Let’s not hide the fact that their communication at these times is not the best. Most developers can accept that there is an issue; what they want to know is what the issue is, and how and when it will be fixed. Open, transparent communication is key.
Something interesting happens as well. You explain the situation to your team and say things like “this is bad, but anothercompany.com are also down”. Somehow this conveys a sense that we have picked a great platform and are not the only ones suffering. We try to justify our shortcomings by grouping ourselves with stalwarts of the internet that should know better. This brings little comfort to the end users, or to your team who are having to liaise with them.
There is another interesting point: platform lock-in. These platforms each offer their own take on the services and server types we take for granted, and it is very easy to design your codebase around one provider's SDK. Azure Functions and AWS Lambda both offer serverless functionality, but they are incompatible and you cannot switch from one to the other without modifying your code.
Once bitten and all that. By the time it happened the second time, on the 15th of September last year, we were already designing a Disaster Recovery solution. There is an irony in explaining that you are building a DR solution during an outage. We had decided to stick with Azure, as we had made a fair investment in it, but we needed a backup. So we built a replica environment on AWS. We amended our code to refactor out Azure-SDK-specific code, set up bi-directional database replication agents, and amended our document-handling architecture with a new caching mechanism that persists to both Azure and AWS storage. We now replicate on AWS every service we use in Azure. The solution is not perfect, but it does give us a fallback and will see us through our current redevelopment phase.
That brings me to my final point about cloud hosting (for now). It is not ideal to consider its impact only at the end of your development effort. Designing your infrastructure as you build out your application (or service) is key. As we develop our new codebase at Open Energy Market, we are considering not only how we are going to host our services but also whether they are portable across multiple cloud providers.