Andy Crouch - Code, Technology & Obfuscation

Keeping Entity Objects Anemic

Map Plotting Next To Macbook

Photo: Unsplash

I’ve recently seen some Entity Framework entity classes that are doing more than they should. A good example of what I am talking about is something like the following:

public class MyEntity
{
    public string Name { get; set; }
    public int Age { get; set; }
    public DateTime DOB { get; set; }

    public void CreateMyEntityFrom(SomeOtherViewModel someOtherViewModel)
    {
        this.Name = someOtherViewModel.Name;
        this.Age = someOtherViewModel.Age;
        this.DOB = someOtherViewModel.DOB;
    }

    public SomeOtherViewModel CreateSomeOtherViewModelFromThis()
    {
        return new SomeOtherViewModel
        {
            Name = this.Name,
            Age = this.Age,
            DOB = this.DOB
        };
    }
}

public class SomeOtherViewModel
{
    public string Name { get; set; }
    public int Age { get; set; }
    public DateTime DOB { get; set; }
    public string AddressOne { get; set; }
    public string AddressTwo { get; set; }
    public string AddressThree { get; set; }
    public string AddressFour { get; set; }
    public string PostCode { get; set; }
}

I have put that code together to highlight a point, hence the lack of attributes and EF scaffolding. This is a common pattern of code where you need to set ViewModel or DTO values from your entity.

The point is that EF entity classes should be anemic.

If ever a statement might earn me some hate, that one will, and I know some people disagree. But if you are using POCOs to represent your database records, they should not encapsulate behaviour as well. Keeping your logic apart from your data makes sense. Logic and behaviour should work on data, not live within it. In my experience, it is very rare for this not to be the case.

The most obvious solution to this kind of code is to use an object mapper such as AutoMapper or Mapster. These libraries are designed to remove exactly this kind of mapping code. They are also designed to handle deep object cloning (which is another post in itself).

With most of the mappers, you can generally either set up some configuration or specify the fields to map. In most projects, I set up the configuration in the application's startup or initialisation stage. You would perhaps add a line such as:

AutoMapper.Mapper.CreateMap<MyEntity, SomeOtherViewModel>();

You can then call the mapping function provided by the library to populate the target object.

var someOtherViewModel = AutoMapper.Mapper.Map<SomeOtherViewModel>(myEntity);

This will map the values between the objects where the property names and types match. This approach keeps configuration code to a minimum. Most of the mapping libraries also provide ways of specifying options when mapping values. These vary between libraries, so reading the documentation is best.
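As a side note, newer versions of AutoMapper have removed the static Mapper API shown above in favour of an instance-based one. The same configuration would look something like the following sketch, where myEntity stands in for whichever entity instance you are mapping:

```csharp
using AutoMapper;

// Build the configuration once at startup; CreateMap registers
// the MyEntity -> SomeOtherViewModel mapping by matching
// property names and types.
var configuration = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<MyEntity, SomeOtherViewModel>();
});

// Create a reusable mapper instance from the configuration.
IMapper mapper = configuration.CreateMapper();

// Populate the target object from the entity.
var someOtherViewModel = mapper.Map<SomeOtherViewModel>(myEntity);
```

The instance-based API also makes the mapper easy to register with a dependency injection container, which keeps the configuration in one place just as before.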

The benefit of this approach is that you can specify your mapping configurations in one place. If a change has to be made, you only have to update a single code file. You don’t have to find and update every occurrence of the object mapping.

There are other, sometimes more beneficial, options in this area, such as converters. I may revisit this area soon with more ideas.

I’m sure some views here are worthy of discussion and I’d like to hear about any other mapping libraries worthy of use. Please share them with me via twitter or email.

How I Publish Blog Posts

Person Typing On Laptop

Photo: Kaitlyn Baker - Unsplash

When I decided to start a new blog last year, I had a very focused reason to write. I wanted to improve my written English. I also wanted to improve the speed at which I could write without procrastination. As I have moved into more managerial roles over the years, I have found the process of writing hard. I find coding easy; it flows from me. But I carry a lot of abbreviations and shortcuts over into reports and updates. So, as I was writing more English and less code, I wanted a way to practise it. This site is it, in a way (but also an outlet for thoughts and experience).

Someone was discussing this process with me recently. They asked how I create content and publish my site. So I thought I would share here for reference.

One thing that was clear when I put the site together was that it needed to be very easy to publish content. I didn’t want to maintain the site past publishing on an ongoing basis. That meant a static site and, for me, that meant Jekyll. I opted for a simple template and then fleshed out the details. My template is the Lanyon theme and all imagery is sourced from Unsplash. I did have to tweak the theme's CSS and JS to get the Google page speed results that I wanted. But, since finalising the template some time ago, I haven’t needed to maintain the site, just as planned.

I mentioned that I source images (as a lot of people do) from Unsplash. One thing I do to maintain the page load times is compress the image files, as a good site should. I have to be honest and say that I do this manually at present using GIMP. I have the “Save for Web” plugin installed, and I resize the images I use to around 720 pixels wide and then compress them by 80%. This results in much smaller files which load quickly.
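For reference, the same resize-and-compress steps could be scripted rather than done by hand in GIMP. A minimal sketch, assuming ImageMagick's convert is installed (I still do this manually, so this is illustrative only):

```shell
# Resize every JPEG in the current directory to 720 pixels wide
# (height scaled to keep the aspect ratio), recompress at
# quality 80, and write the results to ./compressed.
mkdir -p compressed
for image in *.jpg; do
    convert "$image" -resize 720x -quality 80 "compressed/$image"
done
```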

I write articles in Hemingwayapp.com. It is a very plain and simple editor that highlights bad and hard-to-read English. I then use Grammarly to try and catch any bad grammar or spelling, and then copy the reviewed content to Vim. In there, I add markup.

The code for this site is managed in a Bitbucket Git repo. I have opted to host the site on Netlify. I have mentioned and written before about how easy this solution is. Netlify lets you hook it up to a Git provider so that you get automated publishing of the site on pushing changes to your repo.

This method of publishing content is not perfect, but it works for me. At some point, when I have time, I will look at creating a script to automatically shrink and compress images. I have also been researching writing tools for Vim that would give me the same experience as Hemingway. Some come close, so that is an ongoing task. If you have any recommendations, I would be very keen to hear about them.

I’d be very keen to hear about the approaches you take and the tools you use in publishing your blogs. Please share them with me via twitter or email.

Setting Up PostgreSQL On Arch

Man Looking At Monitor

Photo: Austin Distel - Unsplash

Today I installed PostgreSQL on my local machine to work through some tutorials for a new language (more on that soon). It wasn’t the most straightforward install and set up ever. Some of the steps appear to be missing from, or different to, the Arch wiki, so I thought I would document them here.

First up, install PostgreSQL via pacman with

$ sudo pacman -S postgresql

Next, you need to initialise your database storage cluster on disk. This will be the directory which stores all the data. There is no default location, but most people stick with the convention of mapping it to /var/lib/postgres/data. You can create the directory with

$ sudo mkdir /var/lib/postgres/data

You then need to set the owner of that directory to be the PostgreSQL user with

$ sudo chown -c -R postgres:postgres /var/lib/postgres

Next, we need to switch to the PostgreSQL user and initialise a database cluster, which we do with

$ sudo -i -u postgres
$ initdb -D '/var/lib/postgres/data'

Once completed you can log out and start PostgreSQL with

$ logout && sudo systemctl start postgresql

If you want PostgreSQL to start each time you launch your machine then run

$ sudo systemctl enable postgresql

The final thing to do is to grant your usual user access, to save you having to keep switching to the PostgreSQL user to use the PostgreSQL shell. You can do this with

$ createuser -s -U postgres --interactive
  Enter name of role to add: YourUsualLoginUserName

At this point, you should have a functioning PostgreSQL install.
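A quick way to check everything is wired up (assuming the role you created above matches your login user name) is to create a throwaway database and query it from your normal account; mytestdb here is just a placeholder name:

```shell
$ createdb mytestdb
$ psql -d mytestdb -c 'SELECT version();'
```

If that prints the server version, both the service and your role are set up correctly, and you can drop the test database again with dropdb mytestdb.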

I’m now looking for a good GUI for PostgreSQL. Please share your recommendations with me via twitter or email.

Debugging Object Lists

Man Sitting At Desk Looking At Monitor

Photo: Austin Distel - Unsplash

Something that I wish I had written for Visual Studio over the years is an object list debugger. Data Tables in ADO.Net had the DataSet visualiser, which allowed you to see their contents. You were able to copy those contents out to Excel if need be, and it was genuinely useful.

Something that I coded up some time ago was a class that takes a list of Entity objects and creates a Data Table from it. I developed it to use within a Table Gateway implementation that I created to improve bulk database insert times. As a side note, you really cannot beat the speed of the SqlBulkCopy utility class for inserts on large datasets. Perhaps I will write a post on those topics soon.

I was recently tracking down an intermittent bug. The issue was obviously down to some data passed to a processing routine. The idea came to me to use my EntityDataTableFactory class to dump out objects at debug time. I didn’t really have time to create a polished solution. But my theory worked and, by plugging in my factory, I was able to inspect the data in the DataSet visualiser.

See the visualiser option when debugging, and the results, below. The visualiser is available when you hover over a Data Table variable.

Visual Studio Debugging Code


Visual Studio Dataset Visualisation

The EntityDataTableFactory class will work on any strongly typed List<T>. You need to declare a DataTable variable to hold the resulting entity table, and then you can use the DataSet visualiser. Using this to inspect the object list takes a couple of lines to set up. But I find it easier to use than Watch values and the property viewer.

I have condensed the code into a single namespace below in case you find it useful yourself.

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data;
using System.Linq;
using System.Reflection;

namespace DataUtilities
{
    public interface IEntityDataTableFactory
    {
        DataTable CreateDataTableFrom<T>(List<T> entityList) where T : class;
    }

    public class EntityDataTableFactory : IEntityDataTableFactory
    {
        public DataTable CreateDataTableFrom<T>(List<T> entityList) where T : class
        {
            if (entityList.IsNullOrEmpty())
                return new DataTable();

            DataTable entityDataTable = CreateDataTableFor(entityList);

            const int MinimumExpectedColumns = 1;
            if (entityDataTable.Columns.Count.IsLessThan(MinimumExpectedColumns))
                return new DataTable();

            foreach (var entity in entityList)
            {
                GenerateDataRowFrom(entity, ref entityDataTable);
            }

            return entityDataTable;
        }

        private DataTable CreateDataTableFor<T>(List<T> entityList) where T : class
        {
            Type classType = entityList.First().GetType();

            List<PropertyInfo> propertyList 
                = classType
                     .GetProperties()
                     .Where(p => p.GetCustomAttributes(typeof(NotMappedAttribute)).Any() == false && 
                                 p.GetCustomAttributes(typeof(DatabaseGeneratedAttribute)).Any() == false)
                     .ToList();

            const int MinimumPropertyCount = 1;
            if (propertyList.Count < MinimumPropertyCount)
                return new DataTable();

            string entityName = classType.UnderlyingSystemType.Name;
            DataTable entityDataTable = new DataTable(entityName);

            foreach (PropertyInfo property in propertyList)
            {
                DataColumn column = new DataColumn();
                column.ColumnName = property.Name;

                Type dataType = property.PropertyType;

                if (IsNullable(dataType))
                {
                    if (dataType.IsGenericType)
                    {
                        dataType = dataType.GenericTypeArguments.FirstOrDefault();
                    }
                }
                else
                {   
                    column.AllowDBNull = false;
                }

                column.DataType = dataType;

                entityDataTable.Columns.Add(column);
            }

            return entityDataTable;
        }


        private void GenerateDataRowFrom<T>(T entity, ref DataTable entityDataTable) where T : class
        {
            Type classType = entity.GetType();

            DataRow row = entityDataTable.NewRow();
            List<PropertyInfo> entityPropertyInfoList = classType.GetProperties().ToList();

            foreach (PropertyInfo propertyInfo in entityPropertyInfoList)
            {
                if (entityDataTable.Columns.Contains(propertyInfo.Name))
                {
                    if (entityDataTable.Columns[propertyInfo.Name].IsNotNull())
                    {
                        row[propertyInfo.Name] = propertyInfo.GetValue(entity, null) ?? DBNull.Value;
                    }
                }
            }

            entityDataTable.Rows.Add(row);
        }

        private bool IsNullable(Type type)
        {
            if (type.IsValueType.IsFalse()) 
                return true; 

            if (Nullable.GetUnderlyingType(type).IsNotNull()) 
                return true; 

            return false; 
        }
    }

    public static class ExtensionMethods
    {
        public static bool IsNull(this object obj)
        {
            return obj == null;
        }

        public static bool IsNotNull(this object obj)
        {
            return obj != null;
        }

        public static bool IsNullOrEmpty<T>(this IEnumerable<T> enumerable)
        {
            return enumerable.IsNull() || !enumerable.Any();
        }

        public static bool IsLessThan(this int number, int value)
        {
            return number < value;
        }

        public static bool IsFalse(this bool bln)
        {
            return bln == false;
        }
    }

}
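To illustrate how little set up it takes at debug time, usage looks something like the following sketch; myEntities here is a placeholder for whichever List<T> you are inspecting:

```csharp
// Somewhere you can evaluate while debugging, e.g. the Immediate
// Window or a temporary line before the failing call.
IEntityDataTableFactory factory = new EntityDataTableFactory();
DataTable entityDataTable = factory.CreateDataTableFrom(myEntities);

// Hover over entityDataTable while stopped at a breakpoint and
// pick the DataSet visualiser to browse the rows.
```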

I’d be interested to hear your thoughts on better ways to debug within Visual Studio. Please share them with me via twitter or email.

The i-Stay Fineline Laptop Bag

i-Stay Fineline Laptop Bags

Photo: Andy Crouch

I work remotely a lot of the time. But, as my role has developed I travel more and carry my office with me. I am always trying new laptop bags in an effort to find the balance between space and weight.

For many years I have used STM bags. Before that, I used Pakuma, which has unfortunately ceased trading. I have tried all kinds of messenger bags, briefcases and backpacks.

For a long time, especially while I rode a motorbike, I settled on backpacks. They are functional, but at a cost. I find that using them damages clothing, especially wool-based clothing.

I decided to try a messenger-type bag again recently and, after much research, I have opted for an i-Stay Fineline. This is a fairly standard laptop and tablet bag. Despite its slim nature, it has a good number of compartments and pockets. It takes a laptop up to 15.6” and a tablet up to 12”. I can easily get a power brick in along with pens, a Moleskine and various charging cables. The outer flap has a large zipped pocket and the rear has a velcro-fastened flap for magazines or paperwork. All in all, a slim, neat and waterproof bag.

The standout point of the bag, though, is the strap. If you have used a messenger bag with a material strap, you will know they are no good at staying on your shoulder. As you walk, it moves and slips, and you are forever pulling the bag back onto your shoulder. The i-Stay has a rubber shoulder strap which goes nowhere. Even if you run with a full bag, it doesn’t move. It is the bag's USP and it works.

This isn’t meant to be any kind of endorsement for the bag but it is one of the better slim laptop bags I have used. If you don’t need to carry the kitchen sink with you on your daily commute, you should check it out.

I’d love to hear your recommendations on laptop bags and which you use. Please share them with me via twitter or email.