Trigger CI/CD Rebuild Without Trivial Changes

Recently, someone on my team asked how they could trigger a rebuild of a branch on our continuous delivery/integration agent.

The initial suggestion was to introduce a trivial change - such as adding or removing whitespace in the README.
This is a viable option, and one I would have also suggested.
At least, until I saw a method that does not require any changes.

I could not remember the specifics of this method though.
To make matters worse, I was having trouble locating the information again.
This post is to prevent this situation from happening again.

How

Simply put, you tell git to allow empty change-sets.

git commit --allow-empty -m "Trigger Rebuild"
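The empty commit still needs to be pushed; it is the push that the CI/CD agent reacts to:

git push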

What other useful git nuggets exist that people may not be aware of?

References

I’ve used git for years, but I just realized I can make empty commits to trigger CI pipelines:

git commit --allow-empty -m "Updated upstream code"

Usually recording a commit that has the exact same tree as its sole parent commit is a mistake, and the command prevents you from making such a commit.
This option bypasses the safety, and is primarily for use by foreign SCM interface scripts.

2019 Year in Review

It is time to look over 2019 to see if or how I have grown, what or where I put my focus, and determine whether or not I need to re-align myself for the coming year.

Blog

I migrated from BitBucket Pages to GitLab Pages for various reasons.
There were a few hiccups along the way but overall it seems to have been a pretty smooth transition.

Part of my goal after the migration was to consistently produce content.
Perhaps I was too ambitious with my goals/expectations.
I was able to produce consistently throughout October but fell off in November due to numerous issues and frustrations with my Raspberry Pi-Hole project.

Over this next year, I would like to keep the goal of producing content regularly and personalize and/or create a blog theme.
At this time, I want the focus of the content to be more how-tos, notes, and improving my communication.

Books

One of my goals for 2019 was to read 12 books.
I thought a book a month was reasonable and achievable.
And for the first quarter of the year, it seemed like it was going to be.
I was able to make my way through the following:

I am not sure what happened after that to cause me to get off track.
I started reading Getting Things Done in an effort to re-emphasize some of the techniques David Allen provides.
The irony is not lost on me that this was the book to break my trend.

Although important, I am not sure I want to have a reading goal for 2020 - except for finishing Getting Things Done.

PluralSight Courses

PluralSight may be part of the reason I was unable to complete my reading goal for 2019.
Over the course of 2019, I completed the following PluralSight Courses and started several others:

I planned on creating posts reviewing each one containing my notes to keep them in a central location.
Unfortunately, I was not happy with the format/quality for the few courses that I did create reviews for.
While I still want to create those entries, I need to figure out how to communicate the review effectively.

I also took the C# Skill IQ Evaluation and scored in the 95th percentile with a 248.
I plan on improving this score over 2020, but also want to take courses to build other skills.

Programming Languages

I have worked with C# for a decade.
While I love the language, I have started to feel as though the problem space has become stagnant and repetitive.
It seems as though I am not the only one feeling this way either.

This is not to say that I think .NET is dying/dead.
I still very much enjoy it and have much left to learn about it - particularly with the quality of life changes .NET Core is providing.
I simply want to expand my thinking and skillset and learning a new programming language may be better suited to that.

Specifically, I am considering Go and Functional Programming.
Begrudgingly Pragmatically, I am considering JavaScript/TypeScript and React/React Native.

Projects

I have no shortage of projects.
My problem is in finishing them and/or making them public.

One of those projects this year was my Raspberry Pi-Hole, but I have not circled back to it yet.
Once that project is completed, I plan on making a Raspberry Pi Development Server.
The idea is for it to contain a dockerized Jenkins, RedMine, SonarQube, and/or other software used in my development lifecycle.
It is portable enough that it can be brought with me on the go or can be configured with a VPN and accessed remotely.

In December, I started JHache.Enums to experiment with BenchmarkDotNet and writing high-performance C# code.

Software Setup

On a day-to-day basis I use:

  • Visual Studio
    • CodeMaid
    • Editor Guidelines
    • File Icons
    • File Nesting
    • Power Commands
    • Productivity Power Tools
    • Roslynator
    • Shrink Empty Lines
    • SonarLint
    • StyleCop
    • VSColorOutput
  • Visual Studio Code
  • SourceTree
  • LINQPad

I started trying to learn Rider.

With the announcement that Google Chrome would be making things more difficult for ad-blockers, I looked at alternatives.
I tried Brave and Vivaldi, but due to several issues I am switching back to Firefox.

I looked at Fork and GitKraken as SourceTree replacements.
GitKraken is my favorite; if I only had a single account, it would probably be my daily driver.
Overall, Fork looks like a good replacement for SourceTree but I have not spent enough time with it.
I can say its merge tool is one of the best.

For productivity, I settled on TickTick for task management and Dynalist as my work journal.

I gave up on trying to get HyperJS to work the way I wanted.
Initially, I tried Windows Terminal but ended up returning to Cmder.

I am starting to use Docker for Windows but still need more exposure to using it.

Tech Setup

I upgraded my wife’s computer so she could do her design school work.
Given the programs she needs to run and that she does not game too much I opted for an AMD Ryzen 3700X and NVIDIA 2070 Super.
She also got a Secret Lab Omega.

My computer (~10 years old) and desk are due for an upgrade this year.
Additionally, we are looking to sound-proof my office in order to get a streaming setup started.

Work

I accepted a new opportunity working on a WPF Prism application (as evidenced by blog posts and PluralSight history).

Lacking

Going over this year I realize there are three areas that are lacking: family, relaxation, and exercise.
These are areas I will need to make time to focus on in 2020.

AdGuard Home Initial Setup

Introduction

I have been struggling to get ArchARM set up on my Raspberry Pi.
I believe I have identified the root cause of my current issue; however, I could be wrong or still have more issues to troubleshoot and resolve.
Without the Raspberry Pi, I am unable to configure my router to use PiHole or AdGuard Home as a network-wide ad-blocker.

Fortunately, AdGuard Home has a few public DNS servers that I can use to see whether or not it blocks ads until I can get my Raspberry Pi set up.
Unfortunately, there will not be any back-end dashboard to administer block lists or to see blocked traffic.
Since ads are so prevalent these days, it will be obvious whether or not they are being blocked.

Baseline

To gauge the effectiveness, a few sites have to be checked before the settings are applied.

eBay

Forbes

CNN

Ads-Blocker

Setup

The next step is to configure the router’s DNS to use AdGuard Home’s DNS servers.
AdGuard Home’s Knowledge Base provides the steps to accomplish this:

  1. Open the preferences for your router
    Usually you can access it from your browser via a URL (like http://192.168.0.1/ or http://192.168.1.1/).
    You may be asked to enter the password.
    If you don’t remember it, you can often reset the password by pressing a button on the router itself.
    Some routers have a specific application which should be already installed on your computer in that case.

  2. Find the DNS settings
    Look for the ’DNS’ letters next to a field which allows two or three sets of numbers, each broken into four groups of one to three numbers.

  3. Enter our DNS server addresses there
    176.103.130.130
    176.103.130.131

I have a LinkSys WRT32 router and found this under Advanced Settings → Internet Connection Settings.

All I had to do was change this to Custom and provide the DNS server addresses.
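A quick way to confirm the change took effect is to query a known ad-serving domain against one of the AdGuard servers directly and compare the answer with your normal resolver (the domain below is just an example):

nslookup doubleclick.net 176.103.130.130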

Verification

All that is left to see is how it performs using the same sites as before - using a hard refresh to prevent cached ads from being served.

eBay

Forbes

CNN

Ads-Blocker

Conclusion

It seems as though some ads are being blocked but I am surprised at how many are still being served.
To me this looks like the ad-blocking lists need to be updated, but I cannot say for certain without installing it myself.

I will still be loading Pi-Hole or AdGuard Home on my Raspberry Pi, as this keeps my data in-house.
As much as I love what I have seen of AdGuard Home's admin dashboard, this experience is not reassuring me of its effectiveness.
In the end, the more effective product will be the one I use.

GitLab Pages Theme Submodules

Introduction

At the end of the GitLab Pages Setup post, I described an issue I encountered with GitLab CI generating the static site.
The issue is caused by Hexo themes being linked as Git submodules - which means the default theme configuration is used.
At the time, I only had a single idea - but was not sure whether or not it would work.
Since then I have had a few more ideas about possible solutions and this post will describe them.

Overwrite Default Theme Configuration

The most straightforward approach was to add the modified configuration file to the repository.
Using the GitLab build configuration, the configuration file could be copied to the themes folder overwriting the default theme configuration.

The only benefit to this approach is its simplicity.
Adding the file is trivial and the modified GitLab build configuration would look something like this:

image: node:10.15.3

variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  paths:
    - node_modules/

before_script:
  - npm install hexo-cli -g
  - test -e package.json && npm install
  - mv themes/_config.yml themes/icarus/_config.yml
  - hexo generate

pages:
  script:
    - hexo generate
  artifacts:
    paths:
      - public
  only:
    - master

In case it is not clear (since it is only a single line), the change is:

- mv themes/_config.yml themes/icarus/_config.yml

This approach has a few problems though.
The first inconvenience is that if the theme allows configuring branding images or avatars located at a path inside the theme structure, those files must also be copied or moved into the theme after the submodule has been initialized.
This could probably be overcome by mirroring the structure of the theme and modifying the command to move a folder into the theme during the build process.
Changing themes also requires updating the theme configuration file, updating the GitLab build configuration file, and adding the theme as a submodule reference.
Most troubling however is that if the submodule is updated, there is no way to detect conflicts or updates that may break the theme except at runtime.
Overall, this does not seem like a good approach anymore.

Include Theme Contents

The second option could be to include the theme contents directly instead of referencing them as a submodule.
Doing so would eliminate the first two issues described with the previous method above.
However, it still suffers from the final issue, although slightly different.
Updates now are a completely manual process, which will likely entail overwriting the files - potentially leaving orphaned files.
Additionally, it also adds bloat to the Hexo content repository that probably is not necessary.
Overall, this solution is better than the previous one but still less than ideal.

Fork and Update

The final idea I had is to fork the theme being used.
Updates can be made to this new repository that are specific to the Hexo instance.
Updates can be applied by merging the original master repository into this forked copy.
If the update changes something that was unexpected, a conflict will occur that the user will have to resolve in order to finish the merge.
The Hexo content repository would then have the Git submodule reference the forked copy.
Another great part about this is that the submodule can be edited directly and changes committed for it from the parent module.
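In practice, the update workflow might look something like this, assuming the fork has been cloned and the original theme repository is added as an upstream remote (the URL is illustrative):

git remote add upstream https://github.com/ppoffice/hexo-theme-icarus.git
git fetch upstream
git merge upstream/master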

Summary

I think the final solution eliminates most of the major risks associated with the other options and is what I will be using.
I can even make my forked repository private and have the GitLab runner still able to access it thanks to GitLab’s CI build permissions.
The only differences are that my submodule name will need to be the project name and the URL will need to be changed from an absolute URL to a relative one:

[submodule "hexo-theme-icarus"]
path = themes/icarus
url=../hexo-theme-icarus.git
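With that entry in place, the submodule can be (re)initialized against the fork; because the URL is relative, Git resolves it against the parent repository's remote, so the same entry works for both HTTPS and SSH clones:

git submodule add ../hexo-theme-icarus.git themes/icarus
git submodule update --init --recursive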

The other solutions should work fine but they seemed wrong for one reason or another.
Choose whichever option has risks that you can live with.

GitLab Pages Setup

Introduction

Depending on the referrer (Twitter), it may be easy to find out that I used to publish articles on BitBucket pages.
I initially chose BitBucket pages because they were the only repository provider that supported free Private Repositories.
However, I am looking at alternatives because BitBucket does not support domain forwarding (anymore) for BitBucket pages.

This is a problem because if BitBucket were to go out of business (doesn’t seem likely but hypothetically) then all my links die with them.
While I appreciate the ‘free’ server, I am also not keen on my url structure being a sub-domain of theirs - it just seems unprofessional of me.
This is why I never really considered options like Wordpress.com or BlogSpot.
There is nothing wrong with these companies or the services they provide.
On the contrary, it is great that they provide these services because it opens up options for people to choose from.

My biggest issue though, is if I were to switch service providers (which I am doing), I now have to either go edit all previously published links to use the new url or abandon them.
Either option is not really a good option.
This is what prompted me to look into service providers that support domain forwarding, so that the platform I actually publish my articles to is irrelevant.

The two big contenders I know about are GitHub and GitLab.

Why GitLab?

I ended up settling on GitLab because I am more familiar with their organization setup and I wanted to get some exposure to their CI/CD pipeline.
GitHub may be a viable option for CI/CD with the release of GitHub Actions but as of this time it is still in beta.
GitHub’s major appeal for me was how simple it seems to setup a custom domain and their documentation is superb.
I had a few issues with GitLab, some of them seeming to stem from NameCheap, my domain registrar.

GitLab does have static site generator templates that can be forked as a starting point.

While you can create a project from scratch, let’s keep it simple and fork one of your favorite example projects to get a quick start.
GitLab Pages works with any static site generator.

This is also visible when creating a new project:

My pain may have been less had I used one of these.

Setup

The first step is to create a new repository project on GitLab.
This project should follow the naming convention of organization/user.gitlab.io.
As embarrassing as it is, I have to admit that this was my first mistake.
I am not sure why, since BitBucket follows a similar naming convention but for whatever reason I named the project jhache.me.

In case others make this same mistake, it is not the end of the world.
Go into General Settings and rename the project.

This does not change the URL however, so it may be a good idea to update this before progressing as well.
In the General Settings area expand the Advanced section and look for the Change Path section taking note of the warnings:

This should allow the project to be referenced directly with the GitLab url (such as www.gitlab.com/jhache/jhache.gitlab.io).
I am not sure if this is necessary though since I was troubleshooting 404 errors that may have been caused by the next step not being run.

With the repository created, the local Hexo folder should be committed and pushed.
This differs from my previous workflow on BitBucket where I had a repository containing the Hexo folder and another repository containing only the site content that I would hexo deploy to.
Since I want to leverage the GitLab CI/CD this is a necessary change.

CI/CD

GitLab CI/CD uses a YAML configuration file called .gitlab-ci.yml, much like AppVeyor or TravisCI.
The file tells the CI/CD pipeline how to build the repository.
This file can be copied from the Hexo Pages Template:

image: node:10.15.3

cache:
  paths:
    - node_modules/

before_script:
  - npm install hexo-cli -g
  - test -e package.json && npm install
  - hexo generate

pages:
  script:
    - hexo generate
  artifacts:
    paths:
      - public
  only:
    - master

Note: If this is a migration from another service provider (as in my case), be sure to commit the contents of your local repository (if you want to save the history) before doing this, or you will have conflicting heads that you will need to resolve somehow.

Since Hexo themes are added as Git submodules (if done according to the Hexo documentation), using one required a change to the configuration file to check out the submodules.
The final configuration file up to this point looks like this:

image: node:10.15.3

variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  paths:
    - node_modules/

before_script:
  - npm install hexo-cli -g
  - test -e package.json && npm install
  - hexo generate

pages:
  script:
    - hexo generate
  artifacts:
    paths:
      - public
  only:
    - master

Committing this file should trigger a GitLab build.
Once complete, the GitLab page should be able to be accessed from the name template described above (jhache.gitlab.io).
If not, troubleshoot this issue before continuing.

Custom Domains

As explained above, GitLab supports routing (multiple) custom domains to its pages.
This is configured/setup in the Pages Settings of the GitLab repository.

Add a New Domain:

Once done, a screen will appear with details that must be taken to a Domain Registrar and configured:

This must also be done for www domains since www is technically a subdomain.

I use NameCheap as my Domain Registrar so the remaining steps will use their Advanced DNS page for the domain.
Other Domain Registrars may have a similar setup but I cannot guarantee this.
GitLab does provide documentation from some Domain Registrars but NameCheap was not one of them.
Fortunately, I was able to find one article that helped me configure my DNS Host Records:

The A record redirects jhache.me to GitLab pages.
The first CNAME aliases jhache.gitlab.io as jhache.me.
The second CNAME record aliases jhache.gitlab.io as www.jhache.me.
The first TXT record is the verification code provided by the Pages Domain Details for jhache.me.
The second TXT record is the verification code provided by the Pages Domain Details for www.jhache.me.
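Laid out as NameCheap host records, that ends up looking roughly like this (the IP address and verification codes are placeholders; use the exact values shown in your own Pages Domain Details):

Type   | Host | Value
A      | @    | <GitLab Pages IP address>
CNAME  | @    | jhache.gitlab.io.
CNAME  | www  | jhache.gitlab.io.
TXT    | @    | gitlab-pages-verification-code=<code for jhache.me>
TXT    | www  | gitlab-pages-verification-code=<code for www.jhache.me>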

Note that while the Pages Domain Details contain the raw TXT record that should be used, the NameCheap user interface only takes a substring of the value.
The TXT records should be left in place so that certbot can verify ownership of the domain when certificates are regenerated - in case the domain name is transferred.

Once your domain has been verified, leave the verification record in place: your domain will be periodically reverified, and may be disabled if the record is removed.

Depending on the TTL settings for the host record, this could take some time to propagate.
You can use the dig command or Online dig to check the DNS records associated to a domain.
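For example (the output will vary with your registrar and TTL settings):

dig jhache.me A +short
dig www.jhache.me CNAME +short
dig jhache.me TXT +short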

Back in the Pages Domain Details refresh the verification until it is green.
Once verified, wait for a certificate to be generated - the Pages domain settings will look like this when it has been acquired:

Once acquired, the Force HTTPS setting can be set.
With that requests to the configured domain should redirect to GitLab pages over HTTPS.

Outstanding Issues

Because GitLab CI/CD checks out themes as part of the build process, my theme configuration settings have been lost.
I have an idea on how this can be resolved by modifying the .gitlab-ci.yml to copy the desired _config.yml from a directory in the main repository, but I have not verified if this will work.

Prism Module InitializationMode Comparison

Introduction

As part of my self-improvement challenge, I have been watching the Introduction to Prism course from PluralSight.
I chose this course so I am better equipped for my team's Prism application project at work, where I was recently tasked with improving the startup performance.

At this time, the project contains sixty-nine IModule implementation types; however, that number continues to grow.
Not all of these modules will be loaded at once, and some of them may not be used/loaded at all.
Some of them are conditionally loaded at runtime, when certain criteria are met.

While watching the Initializing Modules video, I found myself wondering if anything would change if I were to switch these conditionally loaded modules' InitializationMode from the default WhenAvailable to OnDemand.
My reasoning is that, in the video, Brian Lagunas explains that WhenAvailable initializes modules as soon as possible, while OnDemand initializes them when the application needs them.
Brian recommends using OnDemand if the module is not required to run, is not always used, and/or is rarely used.
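For reference, the change is roughly the following (a minimal sketch; ReportingModule and the moduleManager field are illustrative, not from the actual project):

// In Bootstrapper.ConfigureModuleCatalog: register the module, but defer its initialization.
ModuleCatalog moduleCatalog = (ModuleCatalog)this.ModuleCatalog;
moduleCatalog.AddModule(typeof(ReportingModule), InitializationMode.OnDemand);

// Later, when the feature is actually needed, ask Prism (IModuleManager) to initialize it.
this.moduleManager.LoadModule(nameof(ReportingModule));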

I have a few concerns:

  1. Impacting features because the module is not loaded beforehand or the Module initialization is not done manually.
  2. No performance impact because this project handles module initialization itself in an effort to parallelize it instead of letting Prism manage it.

In the end, only benchmarking each option will provide information to make a decision.
To do this I used JetBrains dotTrace, focusing on the timings for App.OnStartup, Bootstrapper.Run, Bootstrapper.ConfigureModuleCatalog, and Bootstrapper.InitializeModules.
Since we try to load modules in parallel, I ended up adding the timing for this as well - otherwise the timing may have appeared off.

Baseline - InitializationMode.WhenAvailable

The first step was to gather baseline metrics.

| Timing (ms) | Profile #1 | Profile #2 | Profile #3 | Profile #4 | Profile #5 | Min | Average | Median | Max | STD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| App.OnStartup | 5845 | 4687 | 4220 | 4545 | 4973 | 4220 | 4854 | 4687 | 5845 | 551.6462635 |
| Bootstrapper.Run | 5954 | 3986 | 2598 | 3293 | 2779 | 2598 | 3722 | 3293 | 5954 | 1215.581013 |
| Bootstrapper.ConfigureModuleCatalog | 1148 | 767 | 363 | 511 | 1.5 | 1.5 | 558.1 | 511 | 1148 | 385.1511911 |
| Bootstrapper.InitializeModules | 184 | 109 | 117 | 85 | 71 | 71 | 113.2 | 109 | 184 | 39.0404918 |
| Asynchronous Module Initialization | 1821 | 2233 | 2311 | 2571 | 2564 | 1821 | 2300 | 2311 | 2571 | 274.6590614 |

Not terrible, but not ideal.
The application splash screen is displayed for about 4.5 seconds on average on a developer machine with only a few conditional modules enabled.

InitializationMode.OnDemand

With the baseline determined, a comparison can be made when switching the modules to be loaded OnDemand.

| Timing (ms) | Profile #1 | Profile #2 | Profile #3 | Profile #4 | Profile #5 | Min | Average | Median | Max | STD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| App.OnStartup | 5419 | 3969 | 4391 | 5919 | 5490 | 3969 | 5037.6 | 5419 | 5919 | 733.0750575 |
| Bootstrapper.Run | 2770 | 2197 | 2017 | 2086 | 2238 | 2017 | 2261.6 | 2197 | 2770 | 266.0320281 |
| Bootstrapper.ConfigureModuleCatalog | 408 | 374 | 340 | 352 | 388 | 340 | 372.4 | 374 | 408 | 24.40983408 |
| Bootstrapper.InitializeModules | 143 | 67 | 69 | 69 | 66 | 66 | 82.8 | 69 | 143 | 30.1224169 |
| Asynchronous Module Initialization | 1926 | 1639 | 1699 | 1603 | 1632 | 1603 | 1699.8 | 1639 | 1926 | 117.3292802 |

All the Bootstrapper methods seemed to have improved, but overall the App.OnStartup took approximately the same amount of time.

Summary

There was an impact, but not in the overall startup time - which I find a little peculiar.
It seems as though the overhead may have been shifted elsewhere in the startup process.

This may mean that a hybrid approach to Bootstrapper.InitializeModules does have merit, although not as much as I had hoped.
Another option may be to change Bootstrapper.ConfigureModuleCatalog to conditionally determine whether to add modules instead of applying a 'safe' default.
Or perhaps I am diagnosing the wrong problem and should look at other options - such as switching Dependency Injection frameworks.

In any case, I am going to discuss this as an option with my team - and see if additional testing can be done with more conditional modules enabled.

Increasing Productivity by Beating Procrastination Review

Today, I decided I was going to challenge myself to write a blog post and/or watch a PluralSight modules/course every day.
I have been feeling stagnant lately and want to get back into improving myself.
I cannot think of a better way to start off that adventure than by writing a blog post about a PluralSight course on productivity and overcoming procrastination.

Stephen Haunts put together the Increasing Productivity by Beating Procrastination course that released December 11, 2018.

One of the most significant threats to our productivity at work is procrastination and the difficulty in getting focused.
This course will teach you how to understand procrastination and offer practical tips for beating the habit and getting focused.

The course has four main modules:

  1. What Is Procrastination?
  2. Understanding Procrastination
  3. Overcoming Procrastination
  4. Developing an Ability to Focus

What Is Procrastination?

In this module, Stephen defines procrastination as:

The habit of putting off or delaying, especially something requiring immediate attention.

From my personal experience, this seems like an accurate definition.

He continues by outlining why he thinks we procrastinate:

  1. Fear of Failure
  2. Procrastinators are Perfectionists
  3. Low Energy Levels
  4. Lack of Focus

Fear of Failure and Perfectionism seem like the same thing to me, but together they are arguably the biggest reason why I procrastinate personally.
These are also the two that make the least amount of sense.
“Failure” is the best way to learn - this is how all children learn.
Somewhere while growing up this learning paradigm shifts into an avoidance of failure.

“I have not failed. I’ve just found 10,000 ways that won’t work”

Low Energy Levels seem like they could have a contributing impact on productivity, particularly towards the beginning of the week.
The phrase “a case of the Mondays” supports this:

symptoms of a useless or horrible Monday morning after returning from the weekend, used in the movie Office Space

Lack of Focus seems a little too open-ended considering the Attention Deficit Disorder society we live in.
Everything is competing for focus at the same time and we only have a limited supply of it and willpower.
Perhaps this is what the author means.

Understanding Procrastination

With procrastination defined, the author continues by identifying where procrastination occurs, to help train your awareness of it.
With this there are some things we individually will need to accept in order to decrease the chances of procrastination occurring:

  1. Accepting we are not perfect
  2. Understanding failure is not fatal
  3. Aim to do your best and be happy about the output
  4. Try to develop a healthier lifestyle to get more energy
  5. Go to bed earlier
  6. Reduce screen time before bed time

The nature of the first few seem very Zen/Stoic.

Zen:

An approach to an activity, skill, or subject that emphasizes simplicity and intuition rather than conventional thinking or fixation on goals.

Stoic:

of or pertaining to the school of philosophy founded by Zeno, who taught that people should be free from passion, unmoved by joy or grief, and submit without complaint to unavoidable necessity.

The remaining ones seem related to each other but do play an important part in our lifestyles and self-improvement.

The overall message seems to be “do the best possible, reflect, and improve”.

Overcoming Procrastination

Stephen provides the following options to get rid of obstacles that lead to procrastination:

  1. Avoid the distraction (move away from the distraction)
  2. Blocking the distraction (prevent the distraction from occurring)
  3. Satisfy the need (hunger)
  4. Confront the distraction (environmental noise)
  5. Just start the task

Out of all of these, Just Start the Task has been the biggest boon to my productivity.
I find that within five minutes of starting a task I have overcome my procrastination.

It is for this reason I appreciate techniques/frameworks/guidelines such as the Pomodoro Technique, Kanban, and Getting Things Done.
These tools are what I use as the foundation for my habits that the author encourages to build.
For Stephen, building a habit should follow these guidelines:

  • A productive mindset
  • Set goals (measurable and prioritized)
  • Identify tasks that can be turned into habits
  • Put a place and time for the habit (define your habits environment)
  • Remind yourself of the goal

Summary

Overall, the course did not provide me with any new insights or tools to help me overcome procrastination.
But realistically, should there be?
David Allen, the creator of Getting Things Done, admits in his book that he would not be teaching how to do anything new but would be providing the framework that utilizes all we know.

If nothing else, the course was a different perspective and reassurance that others suffer from procrastination.
My biggest takeaway from the course will be “If you fail, forgive yourself, make adjustments and try again.”
Must be my perfectionism wanting to get it right the first time or not at all.

Project Euler #0001: Multiples of 3 and 5

Problem Description

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6, and 9.
The sum of these multiples is 23.

Find the sum of all the multiples of 3 or 5 below 1000.

Simple Solution

The simplest solution is to iterate over all numbers up to the limit.
If any of these numbers is a multiple of one of the factors, then it is included in the summation.

public static ulong SumFactorMultiplesBelowLimit(int limit, params int[] factors)
{
    if (factors == null
        || !factors.Any())
    {
        throw new System.ArgumentException("Invalid factors.", nameof(factors));
    }

    ulong sum = 0;

    for (int i = 0; i < limit; i++)
    {
        foreach (int factor in factors)
        {
            if (i % factor == 0)
            {
                sum += (ulong)i;
                break;
            }
        }
    }

    return sum;
}

Timing the operation yields:

Minimum Elapsed: 00:00:00.0000114
Average Elapsed: 00:00:00.0000545
Maximum Elapsed: 00:00:00.0004331

Pretty quick, but that is most likely because the problem space is small.
If the limit is increased, or if more factors are introduced, the number of operations performed increases.
In big-O notation, this approach is $$ O(n \cdot m) $$, where $$ n $$ is the limit and $$ m $$ is the number of factors.

Asynchronous Simple Solution

Note, this particular approach is not recommended for the problem as it is laid out in the description, but it is included to compare results and as a thought experiment.
This solution could possibly be viable if the number of factors used increased and there was a mechanism to reduce the amount of duplicated iterative work performed.

Another possible way to structure a solution is to give each factor its own thread that calculates that factor's multiples up to the limit.
Once each thread has completed, the resulting multiples are merged into a single set that is used to generate a sum:

public static async Task<int> SimpleSumFactorMultiplesBelowLimitAsync(int limit, params int[] factors)
{
    if (factors == null
        || !factors.Any())
    {
        throw new System.ArgumentException("Invalid factors.", nameof(factors));
    }

    IList<Task<ICollection<int>>> taskCollection =
        new List<Task<ICollection<int>>>();

    foreach (int factor in factors)
    {
        taskCollection.Add(GetFactorMultiplesBelowLimitAsync(limit, factor));
    }

    await Task.WhenAll(taskCollection);

    ICollection<int> factorMultiples =
        new HashSet<int>(await taskCollection.First());

    for (int i = 1; i < taskCollection.Count; i++)
    {
        ICollection<int> factorMultiplesResults = await taskCollection[i];
        foreach (int factorMultiple in factorMultiplesResults)
        {
            factorMultiples.Add(factorMultiple);
        }
    }

    return factorMultiples.Sum();
}

The iterative work for this solution was extracted to a helper method in order to parallelize it:

public static Task<ICollection<int>> GetFactorMultiplesBelowLimitAsync(int limit, int factor)
{
    // Task.Run pushes the loop onto a thread pool thread so each factor
    // can actually be processed in parallel.
    return Task.Run(() =>
    {
        ICollection<int> factorMultiples = new HashSet<int>();

        for (int i = 0; i < limit; i++)
        {
            if (i % factor == 0)
            {
                factorMultiples.Add(i);
            }
        }

        return factorMultiples;
    });
}

This is not the prettiest solution by any stretch and yields slightly worse results than the Simple Solution:

Minimum Elapsed: 00:00:00.0000317
Average Elapsed: 00:00:00.0002025
Maximum Elapsed: 00:00:00.0017206

The results are not surprising because of how the work is performed.
There are two or more loops iterating over all of the numbers - duplicating the operations and comparisons that must be done.

One possible improvement for this solution could be to have a single shared collection of possible numbers that is updated to reduce the number of iterations performed by each thread, instead of joining the results after they have all completed.
This could introduce a race condition, so a thread-safe data structure is recommended if this approach is taken.

Another possible improvement would be to start with the largest factor from the available factors so that the initial set of numbers is the smallest starting point that it could be to iterate over.

As stated before, this is not a recommended solution when the number of factors is small, since the iterative work is duplicated.
The only redeeming quality of this approach is that the work is done on multiple threads, so when two threads are available at the same time the elapsed time can appear comparable to the synchronous Simple Solution.
This can be seen in the Minimum Elapsed Time measurement being comparable to the Average Elapsed Time in the previous results.

Simple LINQ Solution

The Simple Solution can be converted into a more fluent LINQ syntax - at the cost of some performance:

public static ulong SimpleLinqSumFactorMultiplesBelowLimit(int limit, params int[] factors)
{
    if (factors == null
        || !factors.Any())
    {
        throw new System.ArgumentException("Invalid factors.", nameof(factors));
    }

    return (ulong)Enumerable.Range(1, limit - 1)
        .Where(number => factors.Any(factor => number % factor == 0))
        .Sum();
}

The type cast is necessary to convert the Sum() operation to the return type.
This could cause a little performance degradation but was not significant in the measurements.

This solution is a little easier to read, and possibly to understand, since it flows like a sentence.

The results of this solution are:

Minimum Elapsed: 00:00:00.0000574
Average Elapsed: 00:00:00.0001152
Maximum Elapsed: 00:00:00.0006285

As expected, this is slower than the simple solution but still fast.
The tradeoff could be worth it for the improved readability.

Surprisingly, this solution is actually slower than the asynchronous solution in some scenarios - seen by comparing the Minimum Elapsed Time of the two results.

Algorithmic Solution

The simple solution satisfies the criteria to generate an answer, but the performance can be improved by looking for an algorithmic solution instead of a brute-force one.
Conveniently, the problem is asking for the sum of a Finite Arithmetic Progression, specifically an Arithmetic Series, which has a closed-form solution.

… a sequence of numbers such that the difference between the consecutive terms is constant.
… The sum of the members of a finite arithmetic progression is called an arithmetic series.
… [The] sum can be found quickly by taking the number n of terms being added, multiplying by the sum of the first and last number in the progression, and dividing by 2:

The equation for solving this kind of problem is:

$$\begin{equation}
\frac{n(a_1+a_n)}{2}
\end{equation}$$

In this equation:

  • $$ n $$ is the number of terms being added.
  • $$ a_1 $$ is the initial term.
  • $$ a_n $$ is the last number in the progression.

Using the value 3 from the problem description's example yields the following progression:

$$ 3 + 6 + 9 $$

The $$ n $$ in the equation can be solved for by taking the limit, dividing it by the starting term of the progression, and discarding any remainder.

When substituting values into the equation it becomes the following:

$$\begin{eqnarray}
\frac{3(3 + 9)}{2} &=& \frac{3(12)}{2} \\
&=& \frac{36}{2} \\
&=& 18
\end{eqnarray}$$

With this equation, the sum of each specified factor's multiples can be calculated.
Keep in mind that the sum of the multiples shared by all factors must be subtracted, otherwise those numbers are counted twice.
In the problem description this would be $$ 5 \cdot 3 = 15 $$, provided the limit is larger than the multiple (15 in this case).
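Applying this to the actual problem (a limit of 1000 with factors 3 and 5, so $$ n $$ is 333, 199, and 66 respectively) gives:

$$\begin{eqnarray}
S &=& S_3 + S_5 - S_{15} \\
&=& \frac{333(3 + 999)}{2} + \frac{199(5 + 995)}{2} - \frac{66(15 + 990)}{2} \\
&=& 166833 + 99500 - 33165 \\
&=& 233168
\end{eqnarray}$$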

Synchronous Algorithmic Solution

A synchronous solution for this problem with only two factors could look something like this:

public static int AlgorithmicSumFactorMultiplesBelowLimit(int limit, params int[] factors)
{
    if (factors == null
        || !factors.Any())
    {
        throw new System.ArgumentException("Invalid factors.", nameof(factors));
    }

    int sum = 0;
    ICollection<int> factorLookup = new HashSet<int>(factors);
    ICollection<int> multiples = new HashSet<int>();

    for (int i = 0; i < factors.Length; i++)
    {
        int factor = factors[i];
        sum += AlgorithmicSumFactorBelowLimit(limit, factor);

        for (int j = i + 1; j < factors.Length; j++)
        {
            int multiple = factor * factors[j];

            if (!factorLookup.Contains(multiple)
                && limit > multiple
                && !multiples.Contains(multiple))
            {
                multiples.Add(multiple);
                sum -= AlgorithmicSumFactorBelowLimit(limit, multiple);
            }
        }
    }

    return sum;
}

The AlgorithmicSumFactorBelowLimit method looks like this:

private static int AlgorithmicSumFactorBelowLimit(int limit, int factor)
{
    int n = (limit - 1) / factor;
    int a1 = factor;
    int an = n * a1;

    return n * (a1 + an) / 2;
}

One is subtracted from the limit when calculating $$ n $$ so that factors which evenly divide the limit do not generate an off-by-one error - the problem asks for multiples below the limit. For example, with a limit of 1000 and a factor of 5, $$ n = 999 / 5 = 199 $$, which correctly excludes 1000 itself.

The performance of this solution is:

Minimum Elapsed: 00:00:00.0000007
Average Elapsed: 00:00:00.0000011
Maximum Elapsed: 00:00:00.0000045

Initially, I had the algorithmic method asynchronous to share code, but I wanted to ensure there was not any skewing of results that may have occurred from using .GetAwaiter().GetResult().
Spoiler alert: the results were approximately the same in both - meaning there probably would not have been any perceptible difference in the results.

Asynchronous Algorithmic Solution

public static async Task<int> AlgorithmicSumFactorMultiplesBelowLimitAsync(int limit, params int[] factors)
{
    if (factors == null
        || !factors.Any())
    {
        throw new System.ArgumentException("Invalid factors.", nameof(factors));
    }

    int sum = 0;
    ICollection<int> factorLookup = new HashSet<int>(factors);
    ICollection<int> multiples = new HashSet<int>();

    for (int i = 0; i < factors.Length; i++)
    {
        int factor = factors[i];
        sum += await AlgorithmicSumFactorBelowLimitAsync(limit, factor);

        for (int j = i + 1; j < factors.Length; j++)
        {
            int multiple = factor * factors[j];

            if (!factorLookup.Contains(multiple)
                && limit > multiple
                && !multiples.Contains(multiple))
            {
                multiples.Add(multiple);
                sum -= await AlgorithmicSumFactorBelowLimitAsync(limit, multiple);
            }
        }
    }

    return sum;
}

And updating the algorithm to be asynchronous as well:

private static Task<int> AlgorithmicSumFactorBelowLimitAsync(int limit, int factor)
{
    // Task.Run pushes the series calculation onto a thread pool thread.
    return Task.Run(() =>
    {
        int n = (limit - 1) / factor;
        int a1 = factor;
        int an = n * a1;

        return n * (a1 + an) / 2;
    });
}

Measuring the performance of this solution generated:

Minimum Elapsed: 00:00:00.0000006
Average Elapsed: 00:00:00.0000010
Maximum Elapsed: 00:00:00.0000022

In this case the asynchronous solution impacts the performance results positively because each thread is able to contribute to solving the problem without duplicating any of the work.

Because each thread is able to do work in isolation, this solution will scale well even as the number of factors increases - as long as there are threads available to do the processing work.

log4net and Web Configuration Transformations

Introduction

Microsoft puts it best in their documentation:

When you deploy a Web site, you often want some settings in the deployed application’s Web.config file to be different from the development Web.config file.
For example, you might want to disable debug options and change connection strings so that they point to different databases.^1

The project I currently work on has nine environment configurations with a tenth one potentially in the works.

Dev, Test, and Load have two environments each to support two releases.
Initially, we had only a single instance of each environment, but development would halt whenever that environment was going through the deployment process.
Now, while one release is being finalized (bug fixes found in a higher environment, stage-gate reviews, security scanning, etc.), development on the next release can continue without impeding other teams.

User Acceptance Testing (UAT), Train, and Prod only have a single environment because they are focusing on accepting a single release at a time.

As far as I know, Demo is no longer operational and should probably be removed.
Meanwhile, a new environment called Diag has servers set up, but none of the configuration information has been communicated to the development team - yet.

log4net Configuration Transformations

log4net is a logging framework for .NET.
The log4net webpage describes it as:

The Apache log4net library is a tool to help the programmer output log statements to a variety of output targets.
log4net is a port of the excellent Apache log4j™ framework to the Microsoft® .NET runtime.
We have kept the framework similar in spirit to the original log4j while taking advantage of new features in the .NET runtime.
For more information on log4net see the features document.

Honestly, log4net is not my preferred logging library, (mostly) due to performance concerns.
Although the article's publish date is January 12, 2016 - which is a lifetime for a software application - it is possible these concerns have been addressed and resolved.
However, checking log4net's NuGet page reveals that only three versions have been released since the article was published.
This does not inspire confidence that the performance concerns have been addressed and resolved.

Unfortunately, that does not help: log4net v1.2.10 (released January 7th, 2011) is the version required by the dependency assemblies embedded as part of the applications.
There is no choice but to accept log4net as a dependency in this project - for the time being.

These dependency assemblies must have log4net.config files - for each environment configuration.

When we took over the project, all of the log4net.config files were bundled as part of the release and the environment’s web.config file had an appSetting that was used to select the proper log4net.config file for that environment.

The purpose of describing this is to illustrate why I set out to make the log4net.config files in my project behave like web.config files.

Setup

To start, I created a base file called log4net.Template.config (more on ‘why’ later).
This file will hold the initial configuration information that can be transformed for each environment.
It is set up something like this:

<?xml version="1.0"?>
<configuration>
<log4net>
<!-- Appenders Go Here -->

<appender name="ApplicationLogAppender" type="log4net.Appender.AdoNetAppender">
<bufferSize value="1"/>
<connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
<connectionString value="server=(local)\MSSQLSERVER2012;uid=user;pwd=password;database=database"/>
<reconnectonerror value='true' />
<commandText value="INSERT INTO application_log (
server_name,
log_date,
thread_number,
log_message,
exception
)
VALUES
('${COMPUTERNAME}', @log_date, @thread_number, @log_message, @exception)"/>
<parameter>
<parameterName value="@log_date"/>
<dbType value="DateTime"/>
<layout type="log4net.Layout.RawTimeStampLayout">
</layout>
</parameter>
<parameter>
<parameterName value="@thread_number"/>
<dbType value="String"/>
<size value="255"/>
<layout type="log4net.Layout.PatternLayout" value="%thread"/>
</parameter>
<parameter>
<parameterName value="@log_message"/>
<dbType value="AnsiString"/>
<size value="8000"/>
<layout type="log4net.Layout.PatternLayout" value="%message"/>
</parameter>
<parameter>
<parameterName value="@exception"/>
<dbType value="AnsiString"/>
<size value="8000"/>
<layout type="log4net.Layout.ExceptionLayout">
</layout>
</parameter>
</appender>

<!-- Root Element Goes Here -->

<!-- Loggers Go Here -->
</log4net>
</configuration>

Then a separate log4net.config file is created for each environment (such as log4net.Debug.config) and nested under the base configuration.

These configurations can remove pieces of the base template when transformed like so:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
<log4net>
<appender
name="ApplicationLogAppender"
type="log4net.Appender.AdoNetAppender"
xdt:Locator="Match(name)"
xdt:Transform="Remove" />
</log4net>
</configuration>

Or, the configuration can change specific settings - like the appender's type or connection string:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
<log4net>
<appender
name="ApplicationLogAppender"
type="log4net.Appender.AdoNetAppender"
xdt:Locator="Match(name)">
<connectionString
value="server=(local);uid=user;pwd=password;database=database"
xdt:Transform="Replace" />
</appender>
</log4net>
</configuration>

Finally, an empty log4net.config file is created in the project.
The file is included as part of the project so that it can have a Build Action associated with it, be 'built', and be included as part of any artifacts generated from the process.
The contents of this file will be overwritten with the selected environment configuration, which is why a log4net.Template.config file is used as the base instead of log4net.config.
This prevents the base from being lost after the first transform is applied.
It is best to leave the contents blank and exclude any additional changes from source control to preserve a clean history.

The end result looks something like this:

The next step is to get Visual Studio and/or the build server to apply the transforms that were set up.

Transforming

Visual Studio does not apply transformations for any configuration file in the project/solution - except the web.config file (and even then, only when special circumstances have been met, which will be mentioned in the next section).
To get Visual Studio to transform the log4net.config file, a build step must be configured for the project.

There are two ways this can be done.

  1. Project Build Events
  2. Project Build Targets

Project Build Events

Use build events to specify commands that run before the build starts or after the build finishes.
Build events are executed only if the build successfully reaches those points in the build process.^2

Project build events are configured by right-clicking on the StartUp project in the Solution Explorer and selecting Properties (Alt+Enter when the project is highlighted).

In the window that appears, navigate to the Build Events tab.

This method has the advantage of only performing the Post-Build Event when a selected criteria is met.

Project Build Targets

The other method is to create MSBuild Targets.

MSBuild includes several .targets files that contain items, properties, targets, and tasks for common scenarios.
These files are automatically imported into most Visual Studio project files to simplify maintenance and readability.^3

Visual Studio

To do this, the StartUp project in the Solution Explorer must be Unloaded (some versions of Visual Studio may allow the project to be edited directly):

This allows the project to be Edited in Visual Studio (with the benefit of syntax highlighting):

When all changes have been made, the project can be Reloaded:

Text Editor

Alternatively, it can be opened in an external text editor (may have limited syntax highlighting):

The project will re-load every time Visual Studio becomes the active window and detects that changes were made to the project.

Transform Steps

Each method has pros and cons - my project opted for the Project Build Targets method which will be outlined here.
Regardless of the chosen method, the same tasks need to be run:

  1. TransformXml
  2. [Copy]
  3. [Delete]

The first one is obvious, and may seem like it should be the only one that needs to be done.

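For reference, the Project Build Targets version of the TransformXml step might look something like this (a sketch only; the target name is made up, and the path to the task assembly varies by Visual Studio version):

<UsingTask TaskName="TransformXml"
           AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Web\Microsoft.Web.Publishing.Tasks.dll" />

<Target Name="TransformLog4NetConfig" AfterTargets="Build">
  <!-- Apply the environment-specific transform over the template and write the result to log4net.config. -->
  <TransformXml Source="log4net.Template.config"
                Transform="log4net.$(Configuration).config"
                Destination="log4net.config" />
</Target>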

Web Configuration Transformations

DryIoc - Dependency Injection via Reflection

Introduction

My work inherited an ASP.NET WebApi Project from a contracted company.
One of the first things added to the project was a Dependency Injection framework.
DryIoc was selected for its speed in resolving dependencies.

In this post, I show why and how reflection was utilized to improve the Dependency Injection Registration code.

Architecture

First, let me provide a high-level overview of the architecture (using sample class names).
The project is layered in the following way:

Controllers

Controllers are the entry-point for the code (just like all WebApi applications) and are structured something like this:

[RoutePrefix("posts")]
public class BlogPostController : ProjectControllerBase
{
private readonly IBlogPostManager blogPostManager;

public BlogPostController(ILogger logger, IBlogPostManager blogPostManager)
: base(logger)
{
this.blogPostManager = blogPostManager;
}

[HttpGet]
[Route("{blogPostId}")]
[ResponseType(typeof(BlogPost))]
public async Task<IHttpActionResult> GetBlogPost(int blogPostId)
{
// Logging logic to see what was provided to the Controller method.

if (blogPostId <= default(int))
{
return this.BadRequest("Invalid Blog Post Id.");
}

BlogPost blogPost = await this.blogPostManager.GetBlogPost(blogPostId);

if (blogPost == null)
{
return this.NotFound();
}

return this.Ok(blogPost);
}

// ... Additional Service Methods
}

Controllers are responsible for ensuring sane input values are provided before passing the parameters through to the Manager layer.
In doing so, the business logic is pushed downwards allowing for more re-use.

The ProjectControllerBase provides instrumentation logging and acts as a catch-all for any errors that may occur during execution:

public abstract class ProjectControllerBase : ApiController
{
    public ProjectControllerBase(ILogger logger)
    {
        this.Logger = logger;
    }

    protected ILogger Logger { get; }

    public override async Task<HttpResponseMessage> ExecuteAsync(
        HttpControllerContext controllerContext,
        CancellationToken cancellationToken)
    {
        HttpRequestMessage requestMessage = controllerContext.Request;

        // Logging request details

        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();

        try
        {
            return await base.ExecuteAsync(controllerContext, cancellationToken);
        }
        catch (ProjectException projectException)
        {
            // Contracting developers decided to throw exceptions for control-flow - this handles that case.

            return requestMessage.CreateErrorResponse(HttpStatusCode.InternalServerError, projectException.Message);
        }
        catch (Exception exception)
        {
            // Log the exception.

            return requestMessage.CreateErrorResponse(HttpStatusCode.InternalServerError, exception.Message);
        }
        finally
        {
            stopwatch.Stop();

            // Log the time.
        }
    }
}

My goal is to refactor this at some point to remove the ILogger dependency.

Managers

Managers perform more refined validation and contain the business logic for the application.
This allows for the business logic to be referenced by a variety of front-end applications (web site, api, desktop application) easily.

public class BlogPostManager : IBlogPostManager
{
    private readonly IBlogPostRepository blogPostRepository;

    public BlogPostManager(IBlogPostRepository blogPostRepository)
    {
        this.blogPostRepository = blogPostRepository;
    }

    public async Task<BlogPost> GetBlogPost(int blogPostId)
    {
        // Logging logic to see what was provided to the Manager method.

        // Any additional validation logic can be performed here - such as ensuring the blog post exists.

        BlogPostEntity blogPostEntity = await this.blogPostRepository.GetBlogPost(blogPostId);

        // Any additional validation logic can be performed here - such as ensuring the blog post is not in Draft status.

        if (blogPostEntity == null)
        {
            // Throw an exception (NullReference), or return null.
            return null;
        }

        return new BlogPost()
        {
            // Mapping from BlogPostEntity to BlogPost model.
        };
    }

    // ... Additional functionality
}

Each Manager is coded to an interface.

public interface IBlogPostManager
{
    Task<BlogPost> GetBlogPost(int blogPostId);

    // ... Additional functionality
}

By doing this, the Liskov Substitution Principle can be applied, allowing for flexible and isolated unit tests.

Repositories

Repositories act as the data access component for the project.

Initially, Entity Framework was used exclusively for data access.
However, for read operations, Entity Framework is being phased out in favor of Dapper due to performance issues.

Entity Framework applies an ORDER BY clause to ensure results are grouped together.
In some cases, this caused queries to timeout.
Often, this is a sign that the data model needs to be improved and/or that the SQL queries were too large (joining too many tables).

Additionally, our Database Administrators wanted read operations to use WITH (NOLOCK).

To the best of our knowledge, a QueryInterceptor would need to be used.
This seemed counter-intuitive, and our aggressive timeline would not allow any time to tweak and experiment with the Entity Framework code.

For insert operations, Entity Framework is preferred.

public class BlogPostRepository : IBlogPostRepository
{
    private readonly BlogEntities blogEntities;
    private readonly string databaseConnectionString;

    public BlogPostRepository(BlogEntities blogEntities)
    {
        this.blogEntities = blogEntities;
        this.databaseConnectionString = blogEntities.Database.ConnectionString;
    }

    public async Task<BlogPostEntity> GetBlogPost(int blogPostId)
    {
        // Logging logic to see what was provided to the Repository method.

        DynamicParameters sqlParameters = new DynamicParameters();
        sqlParameters.Add(nameof(blogPostId), blogPostId);

        StringBuilder sqlBuilder = new StringBuilder()
            .AppendFormat(
                @"SELECT
                    * -- Wildcard would not be used in actual code.
                FROM blog_posts WITH (NOLOCK)
                WHERE
                    blog_posts.blog_post_id = @{0}", nameof(blogPostId));

        using (SqlConnection sqlConnection = new SqlConnection(this.databaseConnectionString))
        {
            await sqlConnection.OpenAsync();

            // Logging logic to time the query.
            BlogPostEntity blogPostEntity =
                await sqlConnection.QueryFirstOrDefaultAsync<BlogPostEntity>(
                    sqlBuilder.ToString(),
                    sqlParameters);

            return blogPostEntity;
        }
    }
}

Each Repository is coded to an interface.

public interface IBlogPostRepository
{
    Task<BlogPostEntity> GetBlogPost(int blogPostId);

    // ... Additional functionality
}

By doing this, the Liskov Substitution Principle can be applied, allowing for flexible and isolated unit tests.

DryIoc

DryIoc is a fast, small, full-featured IoC Container for .NET.

Registration

The Dependency Injection framework is registered during application start-up with OWIN:

public class StartUp
{
    public void Configuration(IAppBuilder appBuilder)
    {
        HttpConfiguration httpConfiguration = GlobalConfiguration.Configuration;

        // ... Additional Set Up Configuration

        DependencyInjectionConfiguration.Register(httpConfiguration);

        // ... Additional Set Up Configuration

        httpConfiguration.EnsureInitialized();

        // ... Additional Start Up Configuration
    }
}

The DependencyInjectionConfiguration class registers the container for the application to resolve dependencies using the following code:

public static class DependencyInjectionConfiguration
{
    public static void Register(HttpConfiguration httpConfiguration)
    {
        IContainer container = new Container().WithWebApi(httpConfiguration);

        // ... Additional Registrations

        DependencyInjectionConfiguration.RegisterEntityFrameworkContexts(container);

        // ... Additional Registrations

        DependencyInjectionConfiguration.RegisterManagers(container);
        DependencyInjectionConfiguration.RegisterRepositories(container);

        container.VerifyResolutions();
    }

    private static T CreateDbContext<T>()
        where T : DbContext, new()
    {
        T context = new T();

        context.Configuration.LazyLoadingEnabled = false;

        // ... Set Up Database Logging: context.Database.Log = a => <logging mechanism>;

        return context;
    }

    private static void RegisterEntityFrameworkContexts(IContainer container)
    {
        container.Register<BlogEntities>(Reuse.InWebRequest, Made.Of(() => CreateDbContext<BlogEntities>()));
    }

    private static void RegisterManagers(IContainer container)
    {
        // ... Additional Managers

        container.Register<IBlogPostManager, BlogPostManager>(Reuse.InWebRequest);

        // ... Additional Managers
    }

    private static void RegisterRepositories(IContainer container)
    {
        // ... Additional Repositories

        container.Register<IBlogPostRepository, BlogPostRepository>(Reuse.InWebRequest);

        // ... Additional Repositories
    }
}

Problems would occasionally arise when a developer introduced new Manager or Repository classes but forgot to register them with the Dependency Injection container.
When this occurred, compilation and deployment would succeed, but the following runtime error would be thrown when the required dependencies could not be resolved:

An error occurred when trying to create a controller of type ‘BlogPostController’.
Make sure that the controller has a parameterless public constructor.

The generated error message is not helpful in identifying the underlying issue.

To prevent this from occurring, all Manager and Repository classes would need to automatically register themselves during start-up.

Reflection

To automatically register classes, reflection can be utilized to iterate over the assembly types and register all Manager and Repository implementations.
Initially, this was done by loading the assembly containing the types directly from the disk:

public static class DependencyInjectionConfiguration
{
    public static void Register(HttpConfiguration httpConfiguration)
    {
        IContainer container = new Container().WithWebApi(httpConfiguration);

        // ... Additional Registrations

        DependencyInjectionConfiguration.RegisterEntityFrameworkContexts(container);

        // ... Additional Registrations

        DependencyInjectionConfiguration.RegisterManagersAndRepositories(container);

        // ... Additional Registrations

        container.VerifyResolutions();
    }

    // ... Additional Functions

    private static bool IsInterfaceOrAbstractClass(Type exportedType)
    {
        return exportedType.IsInterface
            || exportedType.IsAbstract;
    }

    private static bool IsNotManager(Type exportedType)
    {
        return !exportedType.Name.EndsWith("Manager", StringComparison.InvariantCultureIgnoreCase);
    }

    private static bool IsNotRepository(Type exportedType)
    {
        return !exportedType.Name.EndsWith("Repository", StringComparison.InvariantCultureIgnoreCase);
    }

    // ... Additional Functions

    private static void RegisterManagersAndRepositories(IContainer container)
    {
        string assemblyPath = HttpContext.Current.Server.MapPath("~/bin/Dependencies.dll");
        Assembly dependencyAssembly = Assembly.LoadFrom(assemblyPath);

        foreach (Type exportedType in dependencyAssembly.GetExportedTypes())
        {
            // Skip registering items that are an interface or abstract class since it is
            // not known if there is an implementation defined in this assembly.
            if (DependencyInjectionConfiguration.IsInterfaceOrAbstractClass(exportedType))
            {
                continue;
            }

            // Skip registering items that are not a Manager or Repository.
            if (DependencyInjectionConfiguration.IsNotManager(exportedType)
                && DependencyInjectionConfiguration.IsNotRepository(exportedType))
            {
                continue;
            }

            // Register against the matching I{TypeName} interface when one exists;
            // otherwise, register the concrete type against itself.
            string interfaceName = $"I{exportedType.Name}";
            Type[] interfaceTypes = exportedType.GetInterfaces();

            Type serviceType =
                interfaceTypes.FirstOrDefault(
                    interfaceType =>
                        interfaceType.Name.Equals(interfaceName, StringComparison.InvariantCultureIgnoreCase))
                ?? exportedType;

            container.Register(
                serviceType,
                exportedType,
                Reuse.InWebRequest,
                ifAlreadyRegistered: IfAlreadyRegistered.Keep);
        }
    }

    // ... Additional Functions
}

While this works, it felt wrong to load the assembly from disk using a hard-coded path, especially since the framework will load the assembly automatically.
To account for this, the code was modified in the following manner:

public static class DependencyInjectionConfiguration
{
    public static void Register(HttpConfiguration httpConfiguration)
    {
        IContainer container = new Container().WithWebApi(httpConfiguration);

        // ... Additional Registrations

        DependencyInjectionConfiguration.RegisterEntityFrameworkContexts(container);

        // ... Additional Registrations

        DependencyInjectionConfiguration.RegisterManagersAndRepositories(container);

        // ... Additional Registrations

        container.VerifyResolutions();
    }

    // ... Additional Functions

    private static bool IsInterfaceOrAbstractClass(Type exportedType)
    {
        return exportedType.IsInterface
            || exportedType.IsAbstract;
    }

    private static bool IsNotManager(Type exportedType)
    {
        return !exportedType.Name.EndsWith("Manager", StringComparison.InvariantCultureIgnoreCase);
    }

    private static bool IsNotRepository(Type exportedType)
    {
        return !exportedType.Name.EndsWith("Repository", StringComparison.InvariantCultureIgnoreCase);
    }

    // ... Additional Functions

    private static void RegisterManagersAndRepositories(IContainer container)
    {
        // Resolve the already-referenced assembly instead of loading it from a hard-coded path.
        AssemblyName dependencyAssemblyName = Assembly.GetExecutingAssembly()
            .GetReferencedAssemblies()
            .FirstOrDefault(referencedAssembly => referencedAssembly.Name.Equals("Dependencies"));
        Assembly dependencyAssembly = Assembly.Load(dependencyAssemblyName);

        foreach (Type exportedType in dependencyAssembly.GetExportedTypes())
        {
            // Skip registering items that are an interface or abstract class since it is
            // not known if there is an implementation defined in this assembly.
            if (DependencyInjectionConfiguration.IsInterfaceOrAbstractClass(exportedType))
            {
                continue;
            }

            // Skip registering items that are not a Manager or Repository.
            if (DependencyInjectionConfiguration.IsNotManager(exportedType)
                && DependencyInjectionConfiguration.IsNotRepository(exportedType))
            {
                continue;
            }

            // Register against the matching I{TypeName} interface when one exists;
            // otherwise, register the concrete type against itself.
            string interfaceName = $"I{exportedType.Name}";
            Type[] interfaceTypes = exportedType.GetInterfaces();

            Type serviceType =
                interfaceTypes.FirstOrDefault(
                    interfaceType =>
                        interfaceType.Name.Equals(interfaceName, StringComparison.InvariantCultureIgnoreCase))
                ?? exportedType;

            container.Register(
                serviceType,
                exportedType,
                Reuse.InWebRequest,
                ifAlreadyRegistered: IfAlreadyRegistered.Keep);
        }
    }

    // ... Additional Functions
}

Unfortunately, no timing metrics are available to measure whether either implementation performs better.
With that said, the second implementation seems faster.
This may be because the assembly is already loaded due to other registrations that occur before the reflection registration code executes.
For this reason, results may vary from project to project.

Overall, the solution works well and has limited the runtime error to appearing only when a new Entity Framework context is added to the project.
