Andy Crouch - Code, Technology & Obfuscation

Professional Courtesy Regarding Knowledge Transfer From Outsource Teams

Photo: Unsplash - Elias Schupmann

I have looked at a lot of projects over the last year for clients of all sizes at Platform Eight. One trend I have seen across many of them is a failure by the outsourced team to provide adequate handover documentation. In some cases, there has been none at all.

This is not acceptable. When a business outsources development work they are entrusting the outsource team to deliver in a professional, timely manner. They are buying the expertise offered by the outsource team and have every right to expect the work to be completed to an acceptable standard. This is why they are happy to pay outsource team rates. Outsource teams do project work as a matter of course. The expectation should be that they will not build an ongoing relationship with the business and, therefore, they should hand over each project as if it were the end of the partnership. A full knowledge transfer should be completed.

On more than one occasion I have taken on a project and found at least one of the following problems:

  • The wrong repositories have been shared with the client.

  • The right repository is shared but there is no README to provide a summarized view of how to get the project running locally.

  • The commit history is full of one-word commit messages that provide no context or reason for a change or addition.

  • There has been no handover of updated project assets, plans or documentation.

  • There is no deployment documentation explaining the “where” and “why” around how one or more services are deployed.

This has to stop. There is already a distrust of technology teams and consultants and getting the simple stuff wrong is just amateur.

README Files

Each code repository should include a README file in the root of the project. The quality of the README can be a good indicator of the quality of the project. The aim of the file is to explain the project, provide an overview of how it is built and how to run it.

Like almost everything else these days, there are a lot of articles and guides on how to write a good README. I would suggest that a README file should allow a new developer on the project to:

  • Understand the aim of the project through a descriptive summary.
  • Understand how to clone the project and install the required dependencies.
  • Understand how to set up the project to run locally.
  • Understand if any related projects are needed and the dependencies between the projects.

For open source projects, developer contact details and licence information are also required. Maintenance of the README file is important to ensure it stays relevant; this should be part of your code review process. Overly long README files that are not maintained, or which are hard to read, provide little value.

Given that most cloud-based Git services will generate a README when you set up a project, there is no excuse not to have one. Or worse, to have a blank one. Plenty of guides and templates exist if you need help or inspiration, and a minimal starting point is sketched below.
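As a rough sketch (the project name, stack and commands below are placeholders, so adjust them to whatever the project actually uses), a minimal README might look like:

# Example Project

A one-paragraph summary of what the application does and who it is for.

## Getting Started

git clone <repository-url>
cd example-project
npm install   # or the equivalent dependency step for your stack

## Running Locally

cp .env.example .env   # document every required setting
npm start

## Related Projects

List any other repositories this project depends on and how they fit together.

Even a skeleton like this answers the first questions a new developer will have and takes minutes to write.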

Commit Messages

When code is being developed, the changes will be committed to Git source control. There is an accepted standard for the messages that should accompany each commit, and again I set out the standard I expect from each outsource team at the start of a project. While developers may argue that it takes time to write meaningful commit messages, I disagree. When the developer writes the message the work is fresh in their mind. They have the opportunity to record the change, the context and the reasons they made it. This builds up into valuable knowledge that may not be captured elsewhere.
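As a purely illustrative example (the feature, customers and behaviour described are invented), a useful commit message pairs a short summary line with a body that explains the why as well as the what:

Add VAT rate lookup to invoice totals

Invoice totals previously ignored VAT for EU customers, which caused a
mismatch with the monthly accounting export. This adds a per-country
rate lookup, applies it when the invoice is finalised and covers the
new behaviour with unit tests.

Six months later, a message like this tells the next developer far more than “fixes” or “changes” ever will.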

Knowledge Transfer

Knowledge is power and, in the case of the technology powering a business, the difference between success and failure. If you opt to outsource technical development then you must have a defined outcome and an expected set of assets that you must be provided with. Be clear on these when you start the project and enforce receipt of them on completion. As a minimum I would expect:

  • Access to all code repositories. You need to ensure you have access to every code repository that was created or used in the project. All code branches should be pushed to whichever cloud service was used as the origin. Each repository should have a README and all code and deployment related assets.
  • A deployment plan. A top-level diagram of the deployment of the application plus a detailed document covering the underlying services used and their configuration. This should also detail how to set up the deployment from scratch, any external services used for monitoring or error tracing and details of the database (schema, security and roles etc).
  • Credentials & Secrets. Ideally, you will set up a centralised password and secret tool to share and keep passwords, API keys and third party service access routes.
  • Design Assets. All design work and generated assets such as image files and videos.
  • Sprint data. You should be using a work planning tool such as Jira or Trello to log the agreed work. Once the project is over you should retain access or have that data exported so no knowledge is lost.
  • Full IP Ownership. Once the project is completed you should have a document confirming that all IP is transferred to you. This is usually included in the contract. However, I have worked on projects where the outsource team has used in-house developed libraries or systems, and you should insist that you retain ownership of this code. If that is not possible then you need to decide how to proceed, but I would recommend at the very least that you agree a maintenance and licencing arrangement to ensure you receive all future updates.

None of the above assets should be chargeable by an outsource team; they are part and parcel of every professional software project. They are core elements that all projects should contain to ensure that knowledge can be transferred between developers and teams, regardless of whether they are in-house or outsourced.

I make a point at the start of each relationship with an outsource team of ensuring that they know what my expectations are regarding a handover at the end of the project. Non-technical founders and startups that entrust work to outsource teams often do not know what to expect upon delivery, as in many cases this is their first technical partnership. Don’t be the team that fails to provide a full handover; do the right thing and raise the standard.

If you have any thoughts or comments about this subject then let me know via twitter or email.

Reap The Benefits Of Including Junior Team Members In Your Code Review Process

Photo: Unsplash - Jud Mackrill

Code reviews are a vital part of running a development team. Once you get past developer number one, it’s essential to have an agreed code review checklist and to factor time into your workflow to review all code. I would argue that even if you work as a lone developer, you should periodically go back and review code written more than two weeks ago. It is amazing what I spot when I do.

A lot of articles exist detailing how to review code and best practices. Ensuring that the requirements have been met, that there are no logic or security issues introduced by the changes and that the code adheres to the in-house style guide are top of most code review checklists. One thing often missed with peer code reviews is that it is an opportunity for learning.

Lots of teams state that only a more senior or experienced member of the team should review pull requests. They do not think about the more junior members of the team reviewing the seniors’ code. On the surface this makes sense. You hope that your seniors are capable of recognizing issues and maintaining a high standard in your codebase.

This approach is flawed for a couple of reasons. The first is that you add an increasingly significant burden on your senior developers as your team grows. You reduce the capacity of your sprints and limit the time within them to focus on larger “big picture” changes to your product. Not only that but I have seen senior developers struggle to maintain enthusiasm for code reviews as time goes on and this leads to a reduction of effectiveness.

By enabling more junior members of your team to undertake code reviews you can shift that burden across the team. This has many benefits. If a change simply adds fields to a form, for example, there is less risk than in a change which integrates with a third-party API. A non-senior team member should be capable of ensuring that the changes work, that they follow your style guide and that they have supporting tests which are sensible and passing. It also allows them to feel part of the whole process, and I have found that has a measurable effect on how they engage in stand-ups and work within the team.

What I would not suggest is that a non-senior member of the team reviews code written by a senior. A large scale architectural change or a brand new feature that has an impact across the whole code base needs to be reviewed by a peer at a similar level. These big picture style changes absolutely have to be checked for security and logic issues and impact on existing code. But, the key thing to consider here is to allow more junior team members to still review the pull request as a learning opportunity. A team is only as strong as the knowledge within it. Each developer can not only learn about the new feature or the code added in the pull request, they should be encouraged to ask questions and learn about the how and why behind the request as well.

By allowing your less senior developers to be a part of the whole development process your team wins on many levels. You spread the burden of reviews across your team which means your senior members can add more value in any given sprint. You decrease the risk of harming the enthusiasm of your senior developers and you enable the less senior members of the team to learn and grow their knowledge. This can only benefit your team as a whole. You reduce the risk of developers leaving, you increase your sprint capacity and you engage and empower the lower two thirds of your team. So if you have a good code review process already which is geared towards top down reviews, think about how you can change it to benefit the whole team.

If you have any questions or thoughts about how to enable your team further through a whole team code review process then let me know via twitter or email.

Configuring Optimus Manager V1.3 On Manjaro Linux

Photo: Unsplash - Samule Sun

My main daily laptop is a Thinkpad P52 which contains an Nvidia/Intel graphics setup. I have written before about how I have the drivers set up for mobile and docked use cases. I use the Optimus Manager program to switch between graphics cards and this works well, especially as I am mainly at a desk these days.

A recent update in Manjaro Linux bumped Optimus Manager to V1.3 and I experienced a loss of my external monitors. I discovered that although I had set Optimus Manager to always start up using the Nvidia card, it was loading the Intel card by default. It turns out that the V1.3 release of Optimus Manager has deprecated the --set-startup command which was previously used to set a default card.

The solution I found was to create a file in /etc/optimus-manager called optimus-manager.conf. Obviously, if you already have one then the change I am about to describe should be added to the existing file. Once you open your optimus-manager.conf file you need to add the following:

startup_mode=nvidia

(Obviously, if you want to load the Intel card by default then set startup_mode=intel instead.)

If you have created the file instead of editing an existing one then you will need to add

[optimus]

to the top of the file.
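Putting the two pieces together, the complete /etc/optimus-manager/optimus-manager.conf would look like this (assuming, as above, that you want the Nvidia card at startup):

[optimus]
startup_mode=nvidia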

Once you have created the file you can reboot and the card you specified in the .conf file will be set to load. Multiple monitors once more.

If you have any questions about configuring Optimus Manager then let me know via twitter or email.

Configuring Sony WF-1000XM3 Bluetooth Earbuds On Linux

Photo: Unsplash - Khoa Nguyen

I spend a lot of time wearing headphones. This is for a couple of reasons. The first is that working remotely means I talk a lot on video calls with my team and clients. The second is that when I am not talking I am devouring almost any type of music while I work.

For the last year, I have been using some Taotronics Sound Surge 60 over-ear headphones. These are noise-cancelling, Bluetooth over-ear headphones that I bought to see if I liked the over-ear style. They cost about £60 on Amazon when I got them and the sound and battery life have been great. But, when we had warmer weather during the summer I found one issue: they get really hot when worn for eight hours straight.

So, I decided to look for some lighter, earbud-style headphones. I did have a pair of Taotronics Sound Liberty 53s, again a cheap pair from Amazon. The trouble with these was that the fit was just horrible and the sound was really weak compared to the Sound Surge 60s. So, after researching what people thought were the best earbuds, I bought some Sony WF-1000XM3s in black. I opted for these as the reviews were glowing and in particular raved about the fit and sound. They are indeed a great set of earbuds and the fact they come with a variety of tips to suit all ear shapes is brilliant. The sound and noise cancelling are just as good as a pair of over-ear headphones.

As always, it is fun to pair new Bluetooth gadgets with Linux. While the situation is a lot better of late, it is not always straightforward. I have Blueberry installed in i3 to handle Bluetooth management and it has worked well with both sets of Taotronics. The Sonys would show up in Blueberry and even look like they were connected, but I would get no sound.

So, slipping back into 2015, I hit the terminal and fired up bluetoothctl:

$ bluetoothctl

[bluetooth]# power on
[bluetooth]# agent on
[bluetooth]# default-agent
[bluetooth]# scan on

At this point, you will see the available devices to pair with scrolling up the screen. Enable pairing on the earbuds and then enter:

[bluetooth]# pair XX:XX:XX:XX:XX:XX

(Replace XX:XX:XX:XX:XX:XX with the ID of the earbuds as shown on the screen. You can type the first few characters and then press Tab and it should complete the right ID.)

Once the paired message is displayed then type:

[bluetooth]# trust XX:XX:XX:XX:XX:XX
[bluetooth]# connect XX:XX:XX:XX:XX:XX

(Again, use the earbuds’ ID.)

At this point, you should hear the “connected” message in the earbuds. You can now exit bluetoothctl.

I have found that once you have done this once, connecting via Blueberry works fine.
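As an aside, recent versions of bluetoothctl also accept commands non-interactively, so if you ever want to reconnect without opening Blueberry or the interactive shell, a one-liner like this should work (substitute your earbuds’ address):

$ bluetoothctl connect XX:XX:XX:XX:XX:XX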

I have been really happy with the WF-1000XM3 earbuds. One of the main issues for me when looking for earbuds was battery life and I have found no problem here. They easily last an average day for me and when I take them out I put them in the case, which holds up to three days’ charge. The sound is brilliant and my ears don’t ache after a day of wearing them. If only Linux Bluetooth was as painless.

If you have any questions on using bluetoothctl or about the WF-1000XM3s then let me know via twitter or email.

Automatic i3 Multiple Monitor Configuration With Autorandr

Photo: Unsplash - Fotis Fotopoulos

Six months ago I wrote about scaling up my desk setup and moving to three screens. In doing so I hit a minor problem: screen management.

I use the i3 window manager and, on the whole, it is pretty good at handling screens once you have set the configuration. To do that I have been using ARandR, which is a UI for XRandR, which is itself a tool for setting the screen configuration. The nice thing with ARandR is that you can save configurations as separate config files. You can then open and apply them as needed when you move between screen setups. The only drawback was having to set a config upon starting my laptop.

My first thought was to set a keybinding for each config I have, which would have been an improvement. I started to search out how to do that and instead found a much better solution: autorandr.

autorandr is a small tool that will auto-detect the connected displays and apply the matching configuration. It works really well and, after playing with it over a weekend, I have settled on it as my preferred way of handling screen configuration going forward.

To install autorandr you can grab the official package from your distribution’s repositories or follow the project’s installation instructions. Once installed you can create different configurations as follows:

$ autorandr --save laptop

“laptop” here is the name of the configuration of the current monitor setup. To create each config just plug in your various screen combinations and save each one.

Now autorandr can detect which hardware setup is active:

$ autorandr
mobile
laptop (detected)

To automatically reload your setup:

$ autorandr --change

To manually load a profile:

$ autorandr --load <profile>

or simply:

$ autorandr <profile>

It really is that simple. The great thing is that once you have your configurations set up then autorandr does the rest. Each time I change the connected screens it just loads the correct config and the screens are adjusted automatically. Such a great utility.
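If your distribution’s package does not ship the udev or systemd hooks that trigger this automatically (worth checking, as availability varies), a fallback I would suggest is asking i3 to re-apply the matching profile on login and on every config reload with a line like this in your i3 config:

exec_always --no-startup-id autorandr --change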

If you have any questions or further tips for using autorandr then let me know via twitter or email.