Package managers are awesome. They blew my mind when I first encountered them and they have only got better over the intervening years.
Package managers also suck! Or, more to the point, it sucks that there is not a single package manager. Without wanting to start a hate campaign, imagine how good a package manager we'd have if all of those developers worked together on one.
Anyway, this post is not about Linux package managers; it is about Chocolatey. Chocolatey is a package manager for Windows. Surprised? I was when I first read about it a few years ago. I meant to use it when I last set up a Windows laptop and forgot all about it. With a new laptop on order, I thought now was a good time to take a look.
So what is Chocolatey? According to the website:
Chocolatey is a global PowerShell execution engine using the NuGet packaging infrastructure. Think of it as the ultimate automation tool for Windows. Chocolatey is a package manager that can also embed/wrap native installers and has functions for downloading and checksumming resources from the internet - useful for when you have public packages, but don’t have distribution rights for the underlying software that packages represent (seen all the time with publicly available packages on the community repository).
Installing it is very easy. Open a PowerShell prompt with administrative privileges and enter the one-liner published on chocolatey.org (current at the time of writing):
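    # Download and run the Chocolatey install script
    iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))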
For Chocolatey to work once installed, you will need to ensure you have set your PowerShell script execution policy. Full details can be found here. I use RemoteSigned on my machine:
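    # Allow locally created scripts to run; downloaded scripts must be signed
    Set-ExecutionPolicy RemoteSigned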
Once installed, you can install a package by using the install command (here with git as an example package):
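    # Install a package from the community feed
    choco install git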
Chocolatey has all the commands you would expect from a package manager and they are documented here.
The great thing is that Chocolatey is powered by PowerShell. You can create scripts to initialise your machines, should you use VMs or switch machines often. This is exactly what I have done. Passing the -y argument will progress the installation without prompting for confirmation. A minimal sketch of such a setup script (the package names below are just examples) might look like this:
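    # setup.ps1 - machine initialisation sketch; swap in the packages you actually use
    choco install git -y
    choco install 7zip -y
    choco install googlechrome -y
    choco install notepadplusplus -y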
The only drawback I can find right now is the speed at which new packages are created. For example, there is currently no package that I can find for Visual Studio 2017. You can look through the available packages here.
On the whole, so far it seems quite good. The speed of installation depends on the application's installation method. The only thing I am missing is an update notifier, which I have not found yet. I may create one in the next few weeks, so keep an eye out.
Today is proof that putting all your eggs in one basket is a terrible idea.
Today proved that even if you are Amazon, your cloud services can still let you down. S3 (Amazon's cloud storage) went down in the East Coast region and a large number of sites went down with it. Some sites didn't go down but lost some functionality. Slack, Quora, Medium and Imgur were all affected. I am certain we will find out the cause in the next few days, although people don't seem to care once a service is back. While it is down, all hell breaks loose and Twitter makes for interesting reading.
What this highlights is the trade-off you accept by embracing cloud services. I am old enough to remember pre-cloud. That was a time when running a couple of servers could take 40% of a developer's time. It could also cost you hundreds a month to host, not including licence fees if you needed them. DevOps back then was far more involved than it is now. You installed and configured everything on the box, in most cases from the OS up. These days the OS is irrelevant and deployment can be as easy as a git push. You want to set up a virtual machine? Piece of cake. You want to spin up a website or a server for an API? A 5-minute job. Pre-cloud, people had jobs doing nothing but operating servers for small companies. Cloud providers automated those jobs away at many companies.
The cloud is the perfect answer to most developers' desires. Developers by nature do not want to deal with servers or be system administrators. They do not want to handle licencing and patch security vulnerabilities. They want to code, create, solve problems and then deploy their solution. Click and forget once the initial setup wizard is complete.
So the cloud sounds great for developers. In many instances, it is the perfect solution. But once you gain users, your hosting stability and availability are far more important than your developers' convenience.
Each of the vendors has multi-region solutions. They market the fact that their platform and services are resilient. They sell the dream that continuity will prevail and your application will keep running even if an outage occurs. This is not the case without a fair amount of effort up front, and usually more during any moderate incident. It is something that, as developers, we need to give a lot more thought to. It should not be an afterthought; otherwise, it comes back to haunt you.
At Open Energy Market we have suffered two major outages on Microsoft's Azure platform. Both of these happened during trading hours. Both caused our customers to lose access to our platform. From the start, our infrastructure was set up for redundancy: replicated services across different regions and regular fail-over testing. We were doing everything by the book, and then Azure's DNS routing suffered an issue and we were down. Having multiple regions gives you a sense of security until the infrastructure that connects them fails. Then all you can do is sit and wait until Microsoft fixes the issue.
Let's not hide the fact that their communication at these times is not the best. Most developers mind less that there is an issue than not knowing what it is and how and when it will be fixed. Open, transparent communication is key.
Something interesting happens as well. You explain the situation to your team and say things like "this is bad, but anothercompany.com are also down". Somehow this conveys a sense that we picked a great platform and are not the only ones suffering. We try to justify our shortcomings by grouping ourselves with stalwarts of the internet who should know better. It brings little comfort to the end users, or to the team who have to liaise with them.
There is another interesting point: platform lock-in. These platforms each offer their own take on the services and server types we take for granted. It is very easy to design your codebase around one provider's SDK. Azure Functions and AWS Lambda both offer serverless functionality, but they are incompatible, and you cannot switch from one to the other without modifying your code.
Once bitten and all that. The second time it happened, on the 15th September last year, we were already designing a disaster recovery solution. The irony of explaining that you are building a DR solution during an outage is not lost on me. We had decided to stick with Azure, as we'd made a fair investment in it, but we needed a backup. So we built a replica environment on AWS. We had to amend our code to refactor out Azure SDK-specific code. We set up bi-directional database replication agents. We also amended our document-handling architecture: a new caching mechanism was developed that persists to both Azure and AWS storage. We now replicate on AWS every service we use in Azure. The solution is not perfect, but it gives us a fallback and will see us through our current redevelopment phase.
That brings me to my final point about cloud hosting (for now). It is not ideal to only consider its impact at the end of your development effort. Designing your infrastructure as you build out your application (or service) is key. As we develop our new codebase at Open Energy Market, we are considering not only how we are going to host our services but also whether they are portable across multiple cloud providers.
As per my opening post on this blog, 2017 is the year I want to get back to blogging. When I thought about how to host the blog, I wanted to meet two criteria:
I wanted a really simple way to publish my thoughts and ramblings. I didn’t want a Content Management System or any complicated software stack.
As this is a way for me to just share thoughts and things I learn as I go I needed the solution to be cheap or zero cost.
A quick Google listed a large number of static site generators based on a wide range of languages. What is a static site generator? It is a program that merges HTML templates with content written in a plain-text format to create a static website: that is, a website that serves content only, with no complicated server-side functionality.
Due to some familiarity, I opted to go with Jekyll. Jekyll is developed in the Ruby programming language. It is a mature static site generator and is best known for powering GitHub project pages.
To install Jekyll you will need the Ruby programming language installed. On Arch it is as simple as:
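    # Install Ruby from the official Arch repositories
    sudo pacman -S ruby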
For all other operating systems, you should consult the Ruby documentation.
Jekyll itself is installed as a Gem via RubyGems, Ruby's package manager. The Installation Guide covers the required steps, but the standard route is:
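    # Install the Jekyll gem
    gem install jekyll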
Creating your first Jekyll static site
As an example of how easy it is to work with Jekyll, open a terminal and enter the following (my-blog is just a placeholder name):
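    # Scaffold a new site, then build and serve it locally
    jekyll new my-blog
    cd my-blog
    jekyll serve
    # The site should now be available at http://localhost:4000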
Once you have downloaded the theme you want to use, you can unzip it into the directory used to host your site.
This post is more about how to host a static site on S3. So I will end my flash overview of Jekyll here and leave you with the documentation.
Setting up your hosting on AWS
First, you will need to head to Amazon and sign up for an AWS account. At the time of writing, I am taking advantage of the free tier.
You will also need to download the AWS command line tool to work with their infrastructure. The tool is Python based and you should have a Python 2.7 environment set up. On Arch I found this out the hard way, as I installed via pip when I needed to use pip2. Anyway, you can install the command line tool using:
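    # Use the Python 2.7 pip explicitly (plain pip targets Python 3 on Arch)
    sudo pip2 install awscli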
Next, we should create a user account in Amazon's Identity and Access Management (IAM) service. This is the account you use within AWS to set up your environment. Go here and click Create New User. On the displayed page, enter your username and leave the default options selected. Clicking the Create button will complete the setup process. You should save the API keys generated in this step, as you will need them later.
Return here, select your user and navigate to the Permissions tab. Click the Attach Policy button. This should display a list of policies, the first of which should be AdministratorAccess. This is what we want, so select it and click the Attach Policy button.
At this point, we can configure our local CLI environment with the access keys for the user we have just set up, so future commands do not need them passed. Type:
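    # Prompts for the access key, secret key, default region and output format
    aws configure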
Complete the credential details requested and leave the region and format defaults as displayed.
S3 is Amazon's cloud storage service. It can be used for storing all kinds of files, but we are going to use it to store our static website. S3 works by defining Buckets, which are essentially directories. To set up a Bucket for our site from the AWS CLI, enter the following (index.html and error.html are my assumed document names; use whatever your site generates):
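    # Create the Bucket
    aws s3 mb s3://mywebsite.com
    # Enable static website hosting and set the default and error documents
    aws s3 website s3://mywebsite.com --index-document index.html --error-document error.html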
Obviously, substitute mywebsite.com with your domain.
This will create the Bucket, configure it to serve a static site, and set the default document and error page.
So, as a reward for sticking with me through the setup and configuration, we can now deploy our static site. The first step is to switch to the directory containing your site source and run:
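    # Compile the site into the _site directory
    jekyll build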
This will ensure your latest changes are compiled to the _site directory. Now we want to push the contents of the _site directory to S3, which we can do as follows:
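    # Sync the compiled site to the Bucket
    aws s3 sync _site/ s3://mywebsite.com --acl public-read --sse --delete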
This command will sync all the files in the _site directory to your S3 Bucket. The flags passed do the following:
--acl Sets the files as publicly readable.
--sse Enables server-side encryption on the files.
--delete Forces S3 to delete any files that are not in your _site directory.
At this point, you will be able to verify that your site is available to view by visiting http://www.mywebsite.com.s3-website-us-east-1.amazonaws.com
If you are only using S3 to host a development release or a site that you do not plan to make public this is where we finish.
If you are deploying a site that will be open to the public then I urge you to investigate the Amazon CloudFront service. This is a CDN service that not only caches copies of your site for faster access but also offers the ability to generate SSL certificates for your site to improve security for your users.
Our lives rely on technology in every sense, yet we have become inured to the fact that it is fragile and badly built. Five years ago, Scott Hanselman wrote a blog post entitled "Everything's broken and nobody's upset". It resonated with me then and it still holds true today, perhaps even more so. In the post, Scott lists many issues he encountered over a single week. The issues cover a wide range of both Windows and Apple software and apps. He also pointed out that when you do experience an issue, you end up in one of two situations.
“Here’s the worst part, I didn’t spend any time on the phone with anyone about these issues. I didn’t file bugs, send support tickets or email teams. Instead, I just Googled around and saw one of two possible scenarios for each issue.
1. No one has ever seen this issue. You're alone and no one cares.
2. Everyone has seen this issue. No one from the company believes everyone. You're with a crowd and no one cares.
Sadly, both of these scenarios ended in one feeling. Software doesn’t work and no one cares.”
He summed up by asking why the situation was so bad and stating that he knew we could do better.
He is right! We can and should do better.
Here are the immediate things that I have experienced over the last week:
Setting up an Amazon Echo Dot. The setup wizard in the Android app did not work and I had to Google to find an alternative web-based setup mechanism.
The official Reddit Android app has the power to reboot my phone at random.
After reinstalling Windows and running the Dropbox setup against a data drive, it did not resync the existing directories; it instead created new Dropbox directories inside the existing ones.
The Great Suspender Chrome extension regularly loses the original page I suspended.
My son's iPhone refuses to keep a particular Wi-Fi password. It only happens on that one network; every other network is fine.
Outlook configured with Google App Sync has stopped searching email. This could be Windows related as it has affected my whole team.
tmux has stopped working altogether on my Arch laptop. It appears to be related to a screwed-up locale which occurred during an update.
So what has improved over the last 5 years? On the face of it, not a lot. Who is to blame? Everyone! Users need to be much more demanding and less forgiving. Product teams need to do a much better job to meet the increased user expectations.
Why is it that modern software companies feel they can release untested and subpar products? Where is the pride in the companies and teams that build apps and services? Would we accept this quality from a firm of architects? "Yes, I know there is a door in the wall but it doesn't open. You will have to climb in through the window." There would be an outcry.
Why do users feel unable to report issues and head to Stack Overflow or other support sites instead? Do companies distance themselves once their app or service is released? Why do companies not watch support sites like Stack Overflow? Or do they, and simply disregard everything but the most damaging reports (security exploits, etc.)?
The following four reasons are key to the problem:
Pace of development. There is a very real pressure to develop increasingly complex software fast. Not just quickly, but really, really fast. Competitive advantage, proving an MVP for funding, adding a killer new feature. In the past, I have often been asked to start a new feature while still wrapping up the current one.
Quality assurance. Testers, quality analysts, call them what you will. They are the missing members of too many development teams. Yes, we have automated tests and they are great. They are also written by the developers. True quality assurance is an art form. QA should be in place as early in your development team's existence as possible. Arguing that the business team can test the features they requested is wrong on many levels. Firstly, most non-product team members will only ever test the happy path. Secondly, testing should not be an "also" responsibility for someone. It should be their primary focus and they should be given the time to focus on it.
Too many targets. Desktop, web or phone? Which OS? Which runtime? Which web server? Which database server? There are many possible combinations, and teams test against a limited set of hardware and software stacks. How many times do you hear "it works OK on xyz…"? I do not really care, because I run jkl….
Conflicting implementations. A standard is a standard. If you are going to implement it, do so in whole or not at all. Do not bend it to suit your corporate will or advantage. This applies above all to web browsers and their conflicting implementations. It is 2017, which makes it all the more laughable.
This is a serious issue that needs addressing. Nothing has improved in the last 5 years. With the wider adoption of AI and AR on the horizon, do we want quality and reliability to still be an issue in 5 years' time?
My main day to day laptop is nearing its end of life and I need to find a replacement. For some reason, this brings dread and fear and months of searching for a replacement.
First off, I am not a Mac fan. No offence; they make nice-looking hardware, but not nice enough to sell a kidney for. Also, they now have no Esc key, which for a Vim user is unthinkable. I have long been in awe of their marketing approach, though!
I am a Lenovo Thinkpad fanboy, though, and have been for many years (long enough that they used to be IBM Thinkpads). They have amazing keyboards and Linux works on them with little fuss. There are two main problems with Thinkpads. The first is cost, and justifying that cost to a CFO-type person. The majority of their line cost less than Macs but, and here comes the real issue for me, their specs are average. For Windows-based development, which we use at Open Energy Market, I need a machine that can power Visual Studio. That sounds like a humorous comment, but our 18-project solution takes about 1 minute 30 seconds to open on my current laptop. Building the solution is getting slower by the day, even though my current machine is not low end. So getting a decent-spec Thinkpad turns out to be a costly affair for my needs.
Ten years ago a friend and colleague recommended a UK-based company called PC Specialist. They are now a fair-sized custom PC and laptop manufacturer. They provide a range of base-spec models and then allow you to configure them as you wish. Way back in the day I was quite dubious, as they used OEM cases and parts, but I was very impressed with the first laptop I bought from them. I used it on average 15 hours a day, 6 days a week, for 4 years, and while the "R" key fell off the keyboard, it was solid. I have used them ever since and recommended them at the past 3 companies I have worked at.
The only issue I have with PC Specialist is the choice; there is too much of it. Their configuration pages allow you to toy with everything and it becomes overwhelming. The trick I have learnt is to set your budget first and then build to it. Even then, it leads to a lack of focus until you finally get your spec right. I have been looking elsewhere to see what other options are about for a similar price and spec, but I am not finding much. The only other real option is the Dell XPS range, which I have read lots of great things about.
So I have narrowed my search to three:
Thinkpad - should I find a great deal on a great spec.
Dell XPS - If I can find a supplier near my budget.
PC Specialist - Likely choice, but I will lose days to their configuration screens.
I have set my budget to £1000 (inc VAT) and I’ll let you know what I buy.