How many choices do we make on an average day? Depending on the source you believe, the average adult makes around 35,000 decisions a day. These range from the mundane, such as what to wear and what to eat, through to which job to accept and where to live. Who do you want to marry, and do you want children?
Recently I have found myself overwhelmed by choices. I become consumed by the variations on offer and procrastinate for prolonged periods. I am not talking about meal times, or getting out of the shower and having to choose what to wear. I am talking about everything above that, from what to do on a free day off to what to purchase. I am talking about the architecture and code choices I make for the applications I write. Booking a holiday this year took about three months; I settled on destination after destination until something more appealing came along. I have always tended to overthink things, but as I have got older it feels like more choices are available. Talking to my wife, I believe the issue is the internet and the ability it gives you to research your choices. Even when I am in a shop about to buy something, I am on my phone checking for better options.
In writing this piece I am choosing what to add and what to omit. I have been redrafting it for three days.
I am not alone in feeling this way. Many papers and books have been written on the subject. The research shows that people struggle to handle large numbers of options; they are more likely to make a snap decision to avoid the task of wading through them. Depending on the number of options presented, research suggests that a snap decision may even work out best. Tesco (the UK supermarket) is reducing its product lines by around 30,000 items this year. The reason? The realisation that its fastest growing competitor, Aldi, offers very little choice. At the start of the year Tesco offered an unbelievable 28 varieties of tomato sauce. Aldi offered one. It appears the buying public does not need a mixture of packaging and product sizes. They want a fairly priced item. They do not want to stand and think about tomato sauce. They want to pick it up, buy it and move on to more interesting things.
Having used Linux for close to 20 years, I have found the choices it offers both empowering and a distraction. Repositories of packages have allowed me to create and build software with ease, and have no doubt increased my productivity as a developer. They have also led me to waste hours trying distributions and desktops, tweaking configuration files to get settings how I want them, and trying themes to rectify some frankly hideous UIs. And then there is Arch. I love Arch as a distribution and it would take a lot for me to leave it. While I know you can install it from scratch in a couple of hours, I can get lost in the wiki. It is so deep and offers so many installation options that I have lost days to installing it. So many packages and configurations, so much choice.
This is the polar opposite of Apple and their approach to software. One core application per task, with two theme options out of the box. How many people you know who own Apple gear tweak their UI, or need to install many applications to get their work done?
Words to adhere to, but how to achieve it? I do not know, but I am starting to look at each part of my life and the changes required. I have started with the tools and environment I use for work. I have switched to Antergos for installing Arch-based systems: I download the minimal ISO and have a Gnome-based desktop (which I theme with Evopop) up and running in 15 minutes, saving a vast amount of time over installing vanilla Arch. Both my Windows and Linux environments are being scripted so I can restore them to a new machine quickly. I am cleaning up my dotfiles once and for all. I am limiting myself to one application per task; if I find a better application for a given task, I update my setup and remove the old one. Considering I spend on average 10 hours a day in front of a computer, this is already paying off time wise.
I am applying a new way of making purchases. I allow myself one initial research session to select the best three options (based on requirements and price). I then make my decision against predefined criteria and actually walk away from the internet. I found this useful recently as we have renovated the whole of our house. Before, I would have lost vast amounts of time to selecting new furniture; this time, picking new items for four rooms took two days from selection to purchase. My wife is astounded, and I have more time for important things.
I am also taking the less-is-more approach with the UIs of applications I work on, as they tend to offer too much choice. Look at Word from Microsoft or Writer from LibreOffice. The options and peripheral functionality on offer overwhelm users, when these applications should let the user focus on writing their content. This is why applications such as iA Writer and FocusWriter have such good reputations: they strip the choices out. You have a screen to type text, and you can insert tables and images. It is all you need to craft your prose. No distractions and no choices.
In rebuilding Open Energy Market's core application I am looking at the options presented to the user, aiming to strip away confusion and reduce the number of choices a user has to make. Do we need to ask this particular question? Can we default this page's values while still letting the user change them if needed? The irony is that in undertaking this work I am presented with many options about how to design it. I am back to making choices.
Package managers are awesome. They blew my mind when I first encountered them and they have only got better over the intervening years.
Package managers also suck! Or, more to the point, it sucks that there is not a single package manager. Without wanting to start a hate campaign, think how awesome a package manager we would have if all of the developers worked together on one.
Anyway, this post is not about Linux package managers; it is about Chocolatey. Chocolatey is a package manager for Windows. Surprised? I was when I first read about it a few years ago. I meant to use it when I last set up a Windows laptop and forgot all about it. With a new laptop on order, I thought now was a good time to look at it.
So what is Chocolatey? According to the website:
Chocolatey is a global PowerShell execution engine using the NuGet packaging infrastructure. Think of it as the ultimate automation tool for Windows. Chocolatey is a package manager that can also embed/wrap native installers and has functions for downloading and checksumming resources from the internet - useful for when you have public packages, but don’t have distribution rights for the underlying software that packages represent (seen all the time with publicly available packages on the community repository).
Installing it is very easy. Open a PowerShell prompt with administrative privileges and enter:
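The bootstrap one-liner below is the one the Chocolatey site published at the time of writing; check the official install page for the current version before running it:

```shell
# Run from an administrative PowerShell prompt.
# Downloads and executes the official Chocolatey install script.
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
```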
For Chocolatey to work once installed, you will need to ensure you have set your PowerShell script Execution Policy. Full details can be found here. I use RemoteSigned on my machine.
Once installed you can install a package by using the install command:
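For example, to install Google Chrome (assuming the community package is named googlechrome, as it was when I checked):

```shell
choco install googlechrome
```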
Chocolatey has all the commands you would expect from a package manager, and they are documented here.
The great thing is that Chocolatey is powered by PowerShell. You can create scripts to initialise your machines, should you use VMs or switch machines often. This is exactly what I have done. Passing the -y argument will progress the installation without asking for your permission.
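A minimal sketch of such an init script; the package names here are just examples, so substitute the applications you actually use:

```shell
# setup.ps1 - run from an administrative PowerShell prompt.
# -y accepts all prompts so the script runs unattended.
choco install -y git
choco install -y 7zip
choco install -y notepadplusplus
```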
The only drawback I can find right now is the speed at which packages are created. For example, there is no package for Visual Studio 2017 yet. You can look through the available packages here.
On the whole, so far it seems quite good. The speed of installation depends on the application's installation method. The only thing I would like is an update notifier, which I have not found yet. I may create one in the next few weeks, so keep an eye out.
Today is proof that putting all your eggs in one basket is a terrible idea.
Today proved that even if you are Amazon, your cloud services can still let you down. S3 (Amazon's cloud storage service) went down in the US East region and a large number of sites went down with it. Some sites didn't go down but lost functionality. Slack, Quora, Medium and Imgur were all affected. I am certain we will find out the cause in the next few days, although people don't seem to care once a service is back. While it is down, all hell breaks loose and Twitter makes for interesting reading.
What this highlights is the trade-off you accept by embracing cloud services. I am old enough to remember pre-cloud. That was a time when running a couple of servers could take 40% of a developer's time. It could also cost you hundreds a month to host, not including licence fees if you needed them. DevOps back then was far more involved than it is now: you installed and configured everything on the box, in most cases from the OS up. These days the OS is irrelevant and deployment can be as easy as a git push. You want to set up a virtual machine? Piece of cake. You want to spin up a website or server for an API? A 5-minute job. Pre-cloud, people had jobs doing nothing but operating servers for small companies. Cloud providers automated those jobs away at many companies.
The cloud is the perfect answer to most developers' desires. Developers by nature do not want to be system administrators. They do not want to deal with servers, licencing and patching security vulnerabilities. They want to code, create and solve problems, and then deploy their solution. Click and forget once that initial setup wizard is complete.
So the cloud sounds great for developers, and in many instances it is the perfect solution. But once you gain users, hosting stability and availability are far more important than your developers' convenience.
Each of the vendors has multi-region solutions. They market the fact that their platform and services are resilient. They sell the dream that continuity will prevail and your application will keep running even if an outage occurs. This is not the case without a fair amount of effort, both up front and usually during any moderate incident. It is something that, as developers, we need to give a lot more thought to. It should not be an afterthought; otherwise it comes back to haunt you.
At Open Energy Market we have suffered two major outages on Microsoft's Azure platform. Both happened during trading hours, and both caused our customers to lose access to our platform. Even from the start, our infrastructure was set up for redundancy: replicated services across different regions and regular fail-over testing. We were doing everything by the book, and then Azure's DNS routing suffered an issue and we were down. Having many regions gives you a sense of security until the infrastructure that connects them fails. Then all you can do is sit and wait until Microsoft fixes the issue.
Let’s not hide the fact that their communication at these times is not the best. Most developers would rather be told what the issue is and how and when it will be fixed than be left in the dark. Open, transparent communication is key.
Something interesting happens as well. You explain the situation to your team and say things like “this is bad, but anothercompany.com are also down”. Somehow this conveys a sense that we picked a great platform and are not the only ones suffering. We try to justify our shortcomings by grouping ourselves with stalwarts of the internet that should know better. This brings little comfort to the end users, or to your team who have to liaise with them.
There is another interesting point: platform lock-in. These platforms each offer their own take on the services and server types we take for granted, and it is very easy to design your codebase around one provider's SDK. Azure Functions and AWS Lambda both offer serverless functionality, but they are incompatible and you cannot switch from one to the other without modifying your code.
Once bitten and all that. By the time it happened the second time, on the 15th of September last year, we were already designing a Disaster Recovery solution. The irony of explaining that you are building a DR solution during an outage is not lost on me. We had decided to stick with Azure, as we had made a fair investment in it, but we needed a backup. So we built a replica environment on AWS. We amended our code to refactor out Azure SDK specific calls. We set up bi-directional database replication agents. We also amended our document handling architecture, developing a new caching mechanism that persists to both Azure and AWS storage. We now replicate any service we use in Azure on AWS. The solution is not perfect, but it does give us a fallback and will see us through our current redevelopment phase.
That brings me to my final point about cloud hosting (for now). It is not ideal to only consider its impact at the end of your development effort. Designing your infrastructure as you build out your application (or service) is key. As we develop our new codebase at Open Energy Market, we are considering not only how we are going to host our services but also whether they are portable across multiple cloud providers.
As per my opening post on this blog, 2017 is the year I wanted to get back to blogging. When I thought about how to host the blog, I had two criteria:
I wanted a really simple way to publish my thoughts and ramblings. I didn’t want a Content Management System or any complicated software stack.
As this is just a way for me to share thoughts and things I learn as I go, the solution needed to be cheap or zero cost.
A quick Google listed a large number of static site generators based on a wide range of languages. What is a static site generator? It is a program that merges HTML templates with content written in a text format to create a static website; that is, a website that serves content only, with no server-side functionality.
Due to some familiarity, I opted to go with Jekyll. Jekyll is developed in the Ruby programming language. It is a mature static site generator and is best known for powering GitHub Pages.
To install Jekyll you will need the Ruby programming language installed. On Arch it is a simple:
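The pacman invocation (run as root, or via sudo) is:

```shell
sudo pacman -S ruby
```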
For all other operating systems, you should consult the Ruby documentation.
Jekyll is installed as a Gem via RubyGems, Ruby's package manager. The Installation Guide covers the required steps.
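The short version, assuming RubyGems is already on your PATH, is:

```shell
gem install jekyll
```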
Creating your first Jekyll static site
As an example of how easy it is to work with Jekyll, open a terminal and enter:
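A sketch of the commands to scaffold and preview a new site (mysite is a placeholder name):

```shell
# Scaffold a new site, then build and serve it locally.
jekyll new mysite
cd mysite
jekyll serve
# The site is now browsable at http://localhost:4000
```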
Once you have downloaded the theme you want to use, you can unzip it into the directory used to host your site.
This post is more about how to host a static site on S3. So I will end my flash overview of Jekyll here and leave you with the documentation.
Setting up your hosting on AWS
First, you will need to head to Amazon and sign up for an AWS account. As of authoring this post, I am leveraging the benefits of the free tier.
You will also need the AWS command line tool to work with their infrastructure. The tool is Python based and you should have a Python 2.7 environment set up. On Arch I found this out the hard way, as I installed via pip when I needed to use pip2. Anyway, you can install the command line tool using:
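On Arch, where pip points at Python 3, that means:

```shell
sudo pip2 install awscli
```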
Next, we should create an Amazon user account, which Amazon calls an IAM user. This is the account you use within AWS to set up your environment. Go here and click Create New User. On the displayed page, enter your username and leave the default options selected. Clicking the Create button will complete the setup process. You should save the API keys generated in this step, as you will need them later.
Return to the same page, select your user and navigate to the Permissions tab. Click the Attach Policy button. This will display a list of policies, the first of which should be AdministratorAccess. This is the one we want, so select it and click the Attach Policy button.
At this point we can configure our local CLI environment, storing the access keys for the user we have set up so future commands do not need them passed. Type
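The command is simply:

```shell
aws configure
```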
Complete the credential details requested and leave the region and format defaults as displayed.
S3 is Amazon’s cloud storage service. It can be used for storing all kinds of files, but we are going to use it for our static website. S3 works with Buckets, which are essentially directories. To set up a Bucket for our site from the AWS CLI, enter:
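A sketch of the two commands involved; the bucket name and document names are placeholders:

```shell
# Create the bucket.
aws s3 mb s3://mywebsite.com
# Enable static website hosting with a default document and error page.
aws s3 website s3://mywebsite.com --index-document index.html --error-document error.html
```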
Obviously, substitute mywebsite.com with your domain.
This will create the Bucket, configure it to serve a static site, and set the default document and error page.
So, as a reward for sticking with me through the setup and configuration, we can now deploy our static site. The first step is to switch to the directory containing your site source and run
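the Jekyll build command:

```shell
jekyll build
```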
This will ensure your latest changes are compiled to the _site directory. Now we want to push the contents of the _site directory to S3 which we can do as follows:
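The sync invocation looks like this (again, substitute your own bucket name):

```shell
aws s3 sync _site/ s3://mywebsite.com --acl public-read --sse --delete
```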
This command will sync all the files in the _site directory to your S3 Bucket. The flags passed do the following:
--acl Sets the files as publicly readable.
--sse Sets encryption on the files.
--delete Forces S3 to delete any files that are not in your _site directory.
At this point, you will be able to verify that your site is available to view by visiting http://www.mywebsite.com.s3-website-us-east-1.amazonaws.com
If you are only using S3 to host a development release or a site that you do not plan to make public this is where we finish.
If you are deploying a site that will be open to the public then I urge you to investigate the Amazon CloudFront service. This is a CDN service that not only caches copies of your site for faster access but also offers the ability to generate SSL certificates for your site to improve security for your users.
Our lives rely on technology in every sense, yet we have become immune to the fact that it is fragile and badly built. Five years ago Scott Hanselman wrote a blog post entitled “Everything’s broken and nobody’s upset”. It resonated with me then and it still holds true today, perhaps even more so. In the post, Scott lists the many issues he encountered over a week, covering a wide range of both Windows and Apple software and apps. He also pointed out that when you do experience an issue, you end up in one of two situations.
“Here’s the worst part, I didn’t spend any time on the phone with anyone about these issues. I didn’t file bugs, send support tickets or email teams. Instead, I just Googled around and saw one of two possible scenarios for each issue.
1 No one has ever seen this issue. You’re alone and no one cares.
2 Everyone has seen this issue. No one from the company believes everyone. You’re with a crowd and no one cares.
Sadly, both of these scenarios ended in one feeling. Software doesn’t work and no one cares.”
He summed up by asking why the situation was so bad and stating that he knew we could do better.
He is right! We can and should do better.
Here are the immediate things that I have experienced over the last week:
Setting up an Amazon Dot: the setup wizard in the Android app did not work, and I had to Google to find an alternative web-based setup mechanism.
The official Reddit android app has the power to reboot my phone randomly.
After reinstalling Windows and running the Dropbox setup against a data drive, it did not resync the existing directories. Instead it created the Dropbox directories within the existing ones.
The Great Suspender Chrome extension regularly loses the original page I suspended.
My son’s iPhone refuses to keep a particular wifi password. It only affects that one network.
Outlook configured with Google App Sync has stopped searching email. This could be Windows related as it has affected my whole team.
tmux has stopped working entirely on my Arch laptop. It appears to be related to a screwed-up locale, which occurred during an update.
So what has improved over the last 5 years? On the face of it, not a lot. Who is to blame? Everyone! Users need to be much more demanding and less forgiving. Product teams need to do a much better job to meet the increased user expectations.
Why is it that modern software companies feel they can release untested and subpar products? Where is the pride in the companies and teams that build apps and services? Would we accept this quality from a firm of architects? “Yes, I know there is a door in the wall but it doesn’t open. You will have to climb in through the window.” There would be an outcry.
Why do users feel unable to report issues, heading to Stack Overflow or other support sites instead? Do companies distance themselves once their app or service is released? Why do companies not watch support sites like Stack Overflow? Or do they, and disregard everything but the most damaging issues (security exploits etc.)?
The following four reasons are key to the problem:
Pace of development. There is a very real need to develop increasingly complex software fast. Not just quickly, but really, really fast. Competitive advantage, proving an MVP for funding, adding a killer new feature. In the past, I have often been asked to start a new feature while still wrapping up the current one.
Quality Assurance. Testers, Quality Analysts, call them what you will. They are the missing members of too many development teams. Yes, we have automated tests and they are great. They are also written by the developers. True quality assurance is an art form, and it should be in place as early in a development team's existence as possible. Arguing that the business team can test the features they requested is wrong on many levels. Firstly, most non-product team members will only ever test the happy path. Secondly, testing should not be an “also” responsibility for someone; it should be their primary focus, and they should be given the time to focus on it.
Too many targets. Desktop, web or phone? Which OS? Runtime? Web server? Database server? There are many possible options, and teams test against a limited combination of hardware and software stacks. How many times do you hear “it works ok on xyz ....”? I do not really care if I run jkl ....
Conflicting implementations. A standard is a standard. If you are going to implement it, then do so in whole or not at all. Do not bend it to suit your corporate will or advantage. This relates most of all to web browsers and their conflicting implementations. We are in 2017, which makes it all the more laughable.
This is a serious issue that needs addressing. Nothing has improved in the last 5 years. With the wider adoption of AI and AR on the horizon, do we want quality and reliability to still be an issue in 5 years' time?