Throughout my career, everyone has used it. “We just need to upgrade this library to get feature x”. Or “we just need to make this change to fundamentally alter a feature”. Or my personal favourite: “we just need to do this massive task by tomorrow 9 am”. That one is always said at around 5 o’clock in the evening.
This is by no means a moaning post. It is natural to try and reduce any task to its smallest form. Doing so, though, can have a negative effect on the person “just” doing the task. I know that early in my career, when I was told to “just” do x and x took me 3 days, it made me feel stupid. I felt like I was failing in some way. On one occasion, the solution to a task I was told to “just” do took a senior engineer 4 days (and, I suspect, most of the night). It was not a straightforward change. Nor was it really fair to hand it to the team junior.
I notice that business stakeholders use the word a lot. I suspect it is because the technical team have given them a longer time frame than they would have liked. “Two weeks? Oh, I think you are wrong. All you just have to do is create this new feature”. In this instance, the business stakeholder should listen to the technical team. They would be equally offended if their knowledge of finance or sales was questioned. Can you imagine the response if a technical person said: “Oh, all you have to do is just raise £1m by next Friday at 5 pm”? I can, and they wouldn’t appreciate that approach either.
This is one of those age-old problems of domain ignorance. Development makes the matter worse by being largely invisible to the outside world. A new feature appears in the application and sales and the CEO love it. Do they have a realistic appreciation of the effort it took? No. Then again, neither does the technical team appreciate the effort the accounts team took to publish the accounts. A clear case for better communication, for sure.
It has taken me 20 years to recognise this situation, and there is no easy fix. Patience and education between teams are key. Having faith in your teams is key as well. Believe what they say and don’t try to force a quicker solution.
Is this something you have considered or addressed on your team? Please share your thoughts with me via Twitter or email.
I have recently read some articles that propose that you should not follow DRY or SOLID principles when writing unit tests.
I thoroughly disagree.
Unit tests are there to confirm that your code functions as specified. They ensure contracts within your code base are maintained. They evolve with your code and ensure its integrity. They are just as important as the system-under-test code. So why would you want a beautifully crafted application code base and a spaghetti mess of a test suite? This makes no sense to me.
Some of the articles state that your test code should be throwaway code. That you should write it in a way that validates the code at that moment in time, without caring about the future. Experience tells me that when you have a significant code base across many projects and a team of only 3, this approach doesn’t work.
As your test suite grows you find common code that starts in one test class/module and is then needed in another. I tend to move code like that into base class objects. If class A and class B both need to read a file to complete a test, why repeat the file-reading code? If your test suite grew to 100 tests that need to read a file and the mechanism changed, that’s a lot of updates to get your tests passing. I have seen this situation in real life when an in-memory database provider was deprecated. Had the code been abstracted it would have been a small task. Instead, thanks to copy-and-paste coding in the tests, it was a 3-week task.
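As a sketch of what I mean, in C# with xUnit (the class names and file-reading mechanism here are illustrative, not from a real project), the shared code might live in a base class like this:

```csharp
using System.IO;
using Xunit;

// Shared test plumbing lives in one base class, so there is a single
// place to change if the file-reading mechanism is ever replaced.
public abstract class FileBasedTestBase
{
    protected string ReadTestFile(string fileName)
    {
        return File.ReadAllText(Path.Combine("TestData", fileName));
    }
}

public class ClassATests : FileBasedTestBase
{
    [Fact]
    public void ParsesTheInputFile()
    {
        // Uses the inherited helper instead of duplicating the I/O code
        var input = ReadTestFile("input-a.json");
        Assert.False(string.IsNullOrEmpty(input));
    }
}
```

When the in-memory database provider was deprecated, a base class like this would have meant one change instead of a hundred.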
Given the importance that people give to unit testing, I am surprised at this line of thinking. Developers are frowned upon for not writing tests, but are now told that the test structure doesn’t matter. For junior developers especially, this must be confusing. Remember that a good test suite can be used to describe your codebase to developers new and old. If your first view of an application was a badly architected test suite full of repeated code, what would your first impression be?
I’m sure some people will disagree with some of my views here. That is fine, and you should work to the size and requirements of your project. But I would say that a number of small projects I have worked on have ended up growing significantly, and it is harder to retrofit tests in a well-structured way. Please share your thoughts with me via Twitter or email.
I’ve recently seen some Entity Framework entity classes that are doing more than they should. A good example of what I am talking about is something like the following:
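```csharp
// A sketch of the pattern I mean (names are illustrative): the EF
// entity both holds data and knows how to build a view model.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }

    // Behaviour that arguably does not belong on an EF entity
    public CustomerViewModel ToViewModel()
    {
        return new CustomerViewModel
        {
            Name = this.Name,
            Email = this.Email
        };
    }
}

public class CustomerViewModel
{
    public string Name { get; set; }
    public string Email { get; set; }
}
```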
I have put that code together to highlight a point, hence the lack of attributes and EF scaffolding. This is a common pattern where you need to set ViewModel or DTO values from your entity.
The point is that EF entity classes should be anemic.
If ever a statement might earn me some hate, it is that one, and I know some people disagree. But if you are using POCOs to represent your database records, they should not encapsulate behaviour as well. Keeping your logic apart from your data makes sense. Logic and behaviour should work on, not with, data. Very rarely, in my experience, is this not the case.
The most obvious solution to this kind of code is to use an object mapper such as AutoMapper or Mapster. These libraries are designed to remove exactly this kind of mapping code. They are also designed to handle deep object cloning (which is another post in itself).
With most of the mappers, you can generally either set up some configuration or specify the fields to map. In most projects, I set up configuration in the application’s startup or initialisation stage. You would perhaps add a line such as:
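```csharp
// AutoMapper's instance-based configuration, typically run once at
// startup (the Customer/CustomerViewModel names are illustrative).
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Customer, CustomerViewModel>();
});
var mapper = config.CreateMapper();
```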
You can then call the mapping function provided by the library to populate the target object.
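With AutoMapper, for example, that call looks something like this (again with illustrative type names):

```csharp
// Build a new view model from an existing entity instance
var viewModel = mapper.Map<CustomerViewModel>(customer);
```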
This will map the values between the objects where the property names and types match. This approach will keep configuration code to a minimum. Most of the mapping libraries will also provide ways of specifying options when mapping values. These vary between libraries, so reading the documentation is best.
The benefit of this approach is that you can specify your mapping configurations in one place. If a change has to be made then you get to update a single code file. You don’t have to find all occurrences of object mappings to update.
There are other, sometimes more beneficial, options in this area, such as converters. I may revisit this area soon with more ideas.
I’m sure some views here are worthy of discussion, and I’d like to hear about any other mapping libraries worth using. Please share them with me via Twitter or email.
When I decided to start a new blog last year I had a very focused reason to write. I wanted to improve my written English. I also wanted to improve the speed at which I could write without procrastination. As I have moved into more managerial roles over the years I have found the process of writing hard. I find coding easy; it flows from me. But I carry a lot of abbreviations and shortcuts back into reports and updates. So, as I was writing more English and less code, I wanted a way to practise it. This site is that practice, in a way (but also an outlet for thoughts and experience).
Someone was discussing this process with me recently. They asked how I create content and publish my site. So I thought I would share here for reference.
One thing that was clear when I put the site together was that it needed to be very easy to publish content. I didn’t want to maintain the site beyond publishing on an ongoing basis. That meant a static site, and I opted for Jekyll. I picked a simple template and then fleshed out the details. My template is the Lanyon theme and all imagery is sourced from Unsplash. I did have to tweak the theme’s CSS and JS to get the Google PageSpeed results that I wanted. But since finalising the template some time ago, I haven’t needed to maintain the site, as planned.
I mentioned that I source images (as a lot of people do) from Unsplash. One thing I do to maintain page load times is compress the image files, as a good site should. I have to be honest and say that I do this manually at present using GIMP. I have the “Save for Web” plugin installed; I resize the images I use to around 720 pixels wide and then export them at around 80% quality. This results in much smaller files which load quickly.
I write articles in Hemingwayapp.com. It is a very plain and simple editor that highlights hard-to-read English. I then use Grammarly to catch any bad grammar or spelling, and copy the reviewed content into Vim. In there, I add markup.
The code for this site is managed in a Bitbucket Git repo. I have opted to host the site on Netlify. I have mentioned and written about how easy this solution is before. Netlify lets you hook it up to a Git provider so that the site is published automatically when you push changes to your repo.
This method of publishing content is not perfect, but it works for me. At some point, when I have time, I will look at creating a script to automatically shrink and compress images. I have also been researching writing tools for Vim that would give me the same experience as Hemingway. Some come close, so that is an ongoing task. If you have any recommendations, I would be very keen to hear about them.
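The image script might start out as something as simple as this, assuming ImageMagick is installed (paths and folder names here are placeholders):

```shell
# Resize every JPEG to 720px wide and re-encode at ~80% quality,
# mirroring the manual GIMP steps described above.
mkdir -p compressed
for img in images/*.jpg; do
  convert "$img" -resize 720x -quality 80 "compressed/$(basename "$img")"
done
```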
I’d be very keen to hear about the approaches you take and the tools you use in publishing your blogs. Please share them with me via Twitter or email.
Today I installed PostgreSQL on my local machine to work through some tutorials for a new language (more on that soon). It wasn’t the most straightforward install and set-up ever. Some of the steps appear to be missing from, or different to, the Arch wiki, so I thought I would document them here.
First up, install PostgreSQL via pacman with
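```shell
# Install PostgreSQL from the official Arch repositories
sudo pacman -S postgresql
```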
Next, you need a directory on disk to hold your database storage cluster. This will be the directory which stores all the data. There is no default location, but most people stick with the convention of /var/lib/postgres/data. You can create it with
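```shell
# Create the conventional data directory for the cluster
sudo mkdir -p /var/lib/postgres/data
```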
You then need to set the owner of that directory to the postgres user with
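```shell
# Hand ownership of the data directory to the postgres user
sudo chown -R postgres:postgres /var/lib/postgres/data
```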
Next, we need to switch to the postgres user and initialise the database cluster, which we do with
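```shell
# Switch to the postgres user, then initialise the cluster
sudo -iu postgres
initdb -D /var/lib/postgres/data
```

You can also pass locale and encoding flags to initdb (for example --locale and -E UTF8) if the defaults don’t suit you.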
Once that completes, you can log out of the postgres user and start PostgreSQL with
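```shell
# Start the PostgreSQL service for this session
sudo systemctl start postgresql.service
```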
If you want PostgreSQL to start each time you boot your machine, then run
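```shell
# Enable the service so it starts on every boot
sudo systemctl enable postgresql.service
```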
The final thing to do is grant your usual user access, to save you from switching to the postgres user every time you want the PostgreSQL shell. You can do this with
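```shell
# As the postgres user, create a database role; answer the prompts
# with your usual login name (and superuser, if you want full access)
sudo -iu postgres createuser --interactive
```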
At this point, you should have a functioning PostgreSQL install.
I’m now looking for a good GUI for PostgreSQL. Please share your recommendations with me via Twitter or email.