# Prism Module InitializationMode Comparison

## Introduction

As part of my self-improvement challenge, I have been watching the Introduction to Prism course from Pluralsight.
I chose this course so I am better equipped for my team's Prism application project at work, where I was recently tasked with improving the startup performance.

At this time, the project contains sixty-nine IModule implementation types, and that number continues to grow.
Not all of these modules are loaded at once, and some may not be used or loaded at all.
Some of them are conditionally loaded at runtime, when certain criteria are met.

While watching the Initializing Modules video, I found myself wondering whether anything would change if I switched these conditionally loaded modules' InitializationMode from the default WhenAvailable to OnDemand.
My reasoning is that, in the video, Brian Lagunas explains that WhenAvailable initializes modules as soon as possible, while OnDemand initializes them only when the application needs them.
Brian recommends using OnDemand if the module is not required for the application to run, is not always used, and/or is rarely used.
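For reference, the mode is chosen when each module is added to the catalog. A minimal sketch assuming a classic Bootstrapper-based setup (the module names here are hypothetical):

```csharp
// Sketch only - assumes a Prism Bootstrapper with ConfigureModuleCatalog overridden.
protected override void ConfigureModuleCatalog()
{
    var catalog = (ModuleCatalog)this.ModuleCatalog;

    // Default behavior: initialize as soon as possible during startup.
    catalog.AddModule(typeof(AlwaysNeededModule), InitializationMode.WhenAvailable);

    // Conditionally used module: defer initialization until something requests it,
    // e.g. via IModuleManager.LoadModule(nameof(RarelyUsedModule)).
    catalog.AddModule(typeof(RarelyUsedModule), InitializationMode.OnDemand);
}
```

An OnDemand module is only initialized when explicitly loaded, which is what raised my concerns below.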

I have a few concerns:

1. Features could be impacted because a module is not loaded beforehand and its initialization is never triggered manually.
2. There may be no performance impact, because this project handles module initialization itself in an effort to parallelize it instead of letting Prism manage it.

In the end, only benchmarking each option will provide the information needed to make a decision.
To do this I used JetBrains dotTrace, focusing on the timings for App.OnStartup, Bootstrapper.Run, Bootstrapper.ConfigureModuleCatalog, and Bootstrapper.InitializeModules.
Since we try to load modules in parallel, I added a timing for that work as well - otherwise the numbers may have appeared off.

## Baseline - InitializationMode.WhenAvailable

The first step was to gather baseline metrics.

| Timing (ms) | Profile #1 | Profile #2 | Profile #3 | Profile #4 | Profile #5 | Min | Average | Median | Max | STD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| App.OnStartup | 5845 | 4687 | 4220 | 4545 | 4973 | 4220 | 4854 | 4687 | 5845 | 551.65 |
| Bootstrapper.Run | 5954 | 3986 | 2598 | 3293 | 2779 | 2598 | 3722 | 3293 | 5954 | 1215.58 |
| Bootstrapper.ConfigureModuleCatalog | 1148 | 767 | 363 | 511 | 1.5 | 1.5 | 558.1 | 511 | 1148 | 385.15 |
| Bootstrapper.InitializeModules | 184 | 109 | 117 | 85 | 71 | 71 | 113.2 | 109 | 184 | 39.04 |
| Asynchronous Module Initialization | 1821 | 2233 | 2311 | 2571 | 2564 | 1821 | 2300 | 2311 | 2571 | 274.66 |

Not terrible, but not ideal.
The application splash screen is displayed for about 4.5 seconds on average on a developer machine with only a few conditional modules enabled.

## InitializationMode.OnDemand

With the baseline determined, a comparison can be made when switching the modules to be loaded OnDemand.

| Timing (ms) | Profile #1 | Profile #2 | Profile #3 | Profile #4 | Profile #5 | Min | Average | Median | Max | STD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| App.OnStartup | 5419 | 3969 | 4391 | 5919 | 5490 | 3969 | 5037.6 | 5419 | 5919 | 733.08 |
| Bootstrapper.Run | 2770 | 2197 | 2017 | 2086 | 2238 | 2017 | 2261.6 | 2197 | 2770 | 266.03 |
| Bootstrapper.ConfigureModuleCatalog | 408 | 374 | 340 | 352 | 388 | 340 | 372.4 | 374 | 408 | 24.41 |
| Bootstrapper.InitializeModules | 143 | 67 | 69 | 69 | 66 | 66 | 82.8 | 69 | 143 | 30.12 |
| Asynchronous Module Initialization | 1926 | 1639 | 1699 | 1603 | 1632 | 1603 | 1699.8 | 1639 | 1926 | 117.33 |

All of the Bootstrapper methods seem to have improved, but overall App.OnStartup took approximately the same amount of time.

## Summary

There was an impact, but not in the overall startup time - which I find a little peculiar.
It seems as though the overhead may have been shifted elsewhere in the startup process.

This may mean that a hybrid approach to Bootstrapper.InitializeModules has merit, although not as much as I had hoped.
Another option may be to change Bootstrapper.ConfigureModuleCatalog to conditionally determine whether to add modules instead of applying a ‘safe’ default.
Or perhaps I am diagnosing the wrong problem and should look at other options - such as switching Dependency Injection frameworks.

In any case, I am going to discuss this as an option with my team - and see if additional testing can be done with more conditional modules enabled.

# Project Euler #0001: Multiples of 3 and 5

## Problem Description

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6, and 9.
The sum of these multiples is 23.

Find the sum of all the multiples of 3 or 5 below 1000.

## Simple Solution

The simplest solution is to iterate over all numbers up to the limit.
If a number is a multiple of any of the factors, it is included in the summation.
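A sketch of this brute-force approach (the method and parameter names are mine, not necessarily those of the original code):

```csharp
using System;

public static class MultiplesSum
{
    // Iterate every number below the limit and add it to the sum when it is
    // divisible by any of the supplied factors.
    public static long SumOfMultiplesBelow(int limit, params int[] factors)
    {
        long sum = 0;
        for (int number = 1; number < limit; number++)
        {
            foreach (int factor in factors)
            {
                if (number % factor == 0)
                {
                    sum += number;
                    break; // count each number once, even if it matches several factors
                }
            }
        }
        return sum;
    }
}
```

For the problem's limit of 1000 and factors 3 and 5, `MultiplesSum.SumOfMultiplesBelow(1000, 3, 5)` produces the expected answer of 233168.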

Timing the operation yields:

Minimum Elapsed: 00:00:00.0000114
Average Elapsed: 00:00:00.0000545
Maximum Elapsed: 00:00:00.0004331

Pretty quick, but that is most likely because the problem space is small.
If the limit is increased, or more factors are introduced, the number of operations performed grows.
In Big-O notation, this approach is $$O(n \cdot m)$$, where $$n$$ is the limit and $$m$$ is the number of factors.

## Asynchronous Simple Solution

Note: this particular approach is not recommended for the problem as it is laid out in the description, but it is included to compare results and as a thought experiment.
This solution could become viable if the number of factors increased and there were a mechanism to reduce the amount of duplicated iterative work performed.

Another possible way to structure a solution is to give each factor its own thread that calculates that factor's multiples up to the limit.
Once every thread has completed, the per-factor results are merged so that shared multiples are counted only once before generating the sum.

The iterative work for this solution was extracted to a helper method in order to parallelize it:
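A sketch under those assumptions - one task per factor, with the per-factor sets merged at the end (names are mine):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class AsyncMultiplesSum
{
    // Helper extracted so each factor can be processed on its own task:
    // collect every multiple of a single factor strictly below the limit.
    private static HashSet<int> MultiplesOfFactorBelow(int factor, int limit)
    {
        var multiples = new HashSet<int>();
        for (int number = factor; number < limit; number += factor)
        {
            multiples.Add(number);
        }
        return multiples;
    }

    // One task per factor; merging through a set ensures shared multiples
    // (e.g. 15, 30, ... for factors 3 and 5) are only counted once.
    public static async Task<long> SumOfMultiplesBelowAsync(int limit, params int[] factors)
    {
        var tasks = factors.Select(factor => Task.Run(() => MultiplesOfFactorBelow(factor, limit)));
        var results = await Task.WhenAll(tasks);

        var union = new HashSet<int>();
        foreach (var set in results)
        {
            union.UnionWith(set);
        }
        return union.Sum(number => (long)number);
    }
}
```

The merge step through a `HashSet` is what prevents shared multiples from being double-counted.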

This is not the prettiest solution by any stretch and yields slightly worse results than the Simple Solution:

Minimum Elapsed: 00:00:00.0000317
Average Elapsed: 00:00:00.0002025
Maximum Elapsed: 00:00:00.0017206

The results are not surprising because of how the work is performed.
There are two or more loops iterating over all of the numbers - duplicating the amount of operations and comparisons that must be done.

One possible improvement would be a single shared collection of candidate numbers that each thread updates, reducing the number of iterations performed per thread instead of joining the results after all threads have completed.
This could introduce a race condition, so a thread-safe data structure is recommended if this approach is taken.

Another possible improvement would be to start with the largest factor from the available factors so that the initial set of numbers is the smallest starting point that it could be to iterate over.

As stated before, this is not a recommended solution when the number of factors is small, since the iterative work is duplicated.
The only redeeming quality of this approach is that the work is done on multiple threads, so when two threads are available at the same time the solution can appear comparable to the synchronous Simple Solution.
This can be seen in the Minimum Elapsed time here being comparable to the Average Elapsed time in the previous results.

## Simple LINQ Solution

The Simple Solution can be converted into a more fluent LINQ syntax - at the cost of some performance:
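A sketch of the LINQ form (names are mine; the cast to the `long` return type is the one discussed next):

```csharp
using System;
using System.Linq;

public static class LinqMultiplesSum
{
    // Filter the range to numbers divisible by any factor, then sum them.
    // The cast to long matches the return type and avoids integer overflow
    // if the limit is raised significantly.
    public static long SumOfMultiplesBelow(int limit, params int[] factors)
    {
        return Enumerable.Range(1, limit - 1)
            .Where(number => factors.Any(factor => number % factor == 0))
            .Sum(number => (long)number);
    }
}
```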

The type cast is necessary to convert the Sum() operation to the return type.
This could cause a little performance degradation but was not significant in the measurements.

This solution is a little easier to read and possibly to understand, since it flows like a sentence.

The results of this solution are:

Minimum Elapsed: 00:00:00.0000574
Average Elapsed: 00:00:00.0001152
Maximum Elapsed: 00:00:00.0006285

As expected, this is slower than the simple solution but still fast.

Surprisingly, this solution is actually slower than the asynchronous solution in some scenarios - seen by comparing the Minimum Elapsed Time of the two results.

## Algorithmic Solution

The simple solution satisfies the criteria to generate an answer, but the performance can be improved by looking for an algorithmic solution instead of a brute-force solution.
Conveniently, the problem is asking for a solution to a Finite Arithmetic Progression, specifically an Arithmetic Series which has an algorithmic solution.

> … a sequence of numbers such that the difference between the consecutive terms is constant.
> … The sum of the members of a finite arithmetic progression is called an arithmetic series.
> … [The] sum can be found quickly by taking the number n of terms being added, multiplying by the sum of the first and last number in the progression, and dividing by 2:

The equation for solving this kind of problem is:

$$\frac{n(a_1+a_n)}{2}$$

In this equation:

• $$n$$ is the number of terms being added.
• $$a_1$$ is the initial term.
• $$a_n$$ is the last number in the progression.

Using the value 3 from the problem description’s example in this manner yields the following progression:

$$3 + 6 + 9$$

The $$n$$ in the algorithm can be solved for by taking the limit and dividing it by the starting term in the progression and discarding any remainder.

When substituting values into the equation it becomes the following:

$$\frac{3(3 + 9)}{2} = \frac{3(12)}{2} = \frac{36}{2} = 18$$

With this algorithm, the sum for each specified multiple can be calculated.
Keep in mind that multiples shared by the factors must be subtracted so they are not counted twice.
For the problem description's factors this means subtracting the series for $$5 \cdot 3 = 15$$ - which only matters when the limit is larger than that shared multiple (15 in this case).

## Synchronous Algorithmic Solution

A synchronous solution for this problem with only two factors adds the series sum for each factor and then subtracts the series sum of their shared multiples.
The per-factor calculation lives in a helper method, AlgorithmicSumFactorBelowLimit:
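A self-contained sketch, keeping the post's AlgorithmicSumFactorBelowLimit name (the composing method name is mine):

```csharp
using System;

public static class AlgorithmicMultiplesSum
{
    // Arithmetic series: n(a1 + an) / 2, with a1 = factor and an = factor * n,
    // where n is the count of multiples of factor strictly below the limit.
    public static long AlgorithmicSumFactorBelowLimit(int factor, int limit)
    {
        // limit - 1 keeps the series strictly below the limit, avoiding an
        // off-by-one error when the factor evenly divides the limit.
        long n = (limit - 1) / factor;
        return factor * n * (n + 1) / 2;
    }

    // Two-factor composition: add both series, then subtract the series of
    // shared multiples so they are not counted twice. For co-prime factors
    // such as 3 and 5, the shared multiple is simply their product.
    public static long SumOfMultiplesBelow(int limit, int firstFactor, int secondFactor)
    {
        return AlgorithmicSumFactorBelowLimit(firstFactor, limit)
             + AlgorithmicSumFactorBelowLimit(secondFactor, limit)
             - AlgorithmicSumFactorBelowLimit(firstFactor * secondFactor, limit);
    }
}
```

For the example in the problem description, `AlgorithmicSumFactorBelowLimit(3, 10)` yields 18, matching the worked equation above.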

The limit is subtracted by one when calculating $$n$$ so that factors that evenly divide the limit do not generate an off-by-one error.

The performance of this solution is:

Minimum Elapsed: 00:00:00.0000007
Average Elapsed: 00:00:00.0000011
Maximum Elapsed: 00:00:00.0000045

Initially, I had the algorithmic method asynchronous to share code, but I wanted to ensure the results were not skewed by using .GetAwaiter().GetResult().
Spoiler alert: the results were approximately the same either way - meaning there probably would not have been any perceptible difference in the results.

## Asynchronous Algorithmic Solution

And updating the algorithm to be asynchronous as well:
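A sketch of the asynchronous variant, computing each series term on its own task (names are mine):

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncAlgorithmicMultiplesSum
{
    // Same arithmetic-series helper as the synchronous solution.
    private static long AlgorithmicSumFactorBelowLimit(int factor, int limit)
    {
        long n = (limit - 1) / factor;
        return factor * n * (n + 1) / 2;
    }

    // Each factor's series is computed on its own task. Unlike the brute-force
    // asynchronous solution, no work is duplicated between tasks.
    public static async Task<long> SumOfMultiplesBelowAsync(int limit, int firstFactor, int secondFactor)
    {
        var firstTask = Task.Run(() => AlgorithmicSumFactorBelowLimit(firstFactor, limit));
        var secondTask = Task.Run(() => AlgorithmicSumFactorBelowLimit(secondFactor, limit));
        var sharedTask = Task.Run(() => AlgorithmicSumFactorBelowLimit(firstFactor * secondFactor, limit));

        var sums = await Task.WhenAll(firstTask, secondTask, sharedTask);
        return sums[0] + sums[1] - sums[2];
    }
}
```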

Measuring the performance of this solution generated:

Minimum Elapsed: 00:00:00.0000006
Average Elapsed: 00:00:00.0000010
Maximum Elapsed: 00:00:00.0000022

In this case the asynchronous solution impacts the performance results positively because each thread is able to contribute to solving the problem without duplicating any of the work.

Because each thread is able to do work in isolation, this solution will scale well even as the number of factors increases - as long as there are threads available to do the processing work.

# DryIoc - Dependency Injection via Reflection

## Introduction

My work inherited an ASP.NET WebApi Project from a contracted company.
One of the first things added to the project was a Dependency Injection framework.
DryIoc was selected for its speed in resolving dependencies.

In this post, I show why and how reflection was utilized to improve the Dependency Injection Registration code.

## Architecture

First, let me provide a high-level overview of the architecture (using sample class names).
The project is layered in the following way:

### Controllers

Controllers are the entry-point for the code (just like all WebApi applications) and are structured something like this:
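A hedged sketch of that shape (the controller, manager, and method names are illustrative, not the project's actual code):

```csharp
using System.Threading.Tasks;
using System.Web.Http;

public class BlogPostController : ProjectControllerBase
{
    private readonly IBlogPostManager blogPostManager;

    // Dependencies are supplied by the container via constructor injection.
    public BlogPostController(ILogger logger, IBlogPostManager blogPostManager)
        : base(logger)
    {
        this.blogPostManager = blogPostManager;
    }

    public async Task<IHttpActionResult> Get(int id)
    {
        // Sanity-check input before handing off to the Manager layer.
        if (id <= 0)
        {
            return this.BadRequest("id must be a positive integer.");
        }

        return this.Ok(await this.blogPostManager.GetBlogPostAsync(id));
    }
}
```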

Controllers are responsible for ensuring sane input values are provided before passing the parameters through to the Manager layer.
In doing so, the business logic is pushed downwards allowing for more re-use.

The ProjectControllerBase provides instrumentation logging and acts as a catch-all for any errors that may occur during execution:
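A hedged sketch of what such a base class can look like; the ILogger members and the response wording are assumptions:

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Controllers;

public abstract class ProjectControllerBase : ApiController
{
    protected ProjectControllerBase(ILogger logger)
    {
        this.Logger = logger;
    }

    protected ILogger Logger { get; }

    // Overriding ExecuteAsync wraps every action with timing and a catch-all.
    public override async Task<HttpResponseMessage> ExecuteAsync(
        HttpControllerContext controllerContext, CancellationToken cancellationToken)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            return await base.ExecuteAsync(controllerContext, cancellationToken);
        }
        catch (Exception exception)
        {
            // Catch-all: log and return a generic 500 instead of leaking details.
            this.Logger.Error(exception);
            return controllerContext.Request.CreateErrorResponse(
                HttpStatusCode.InternalServerError, "An unexpected error occurred.");
        }
        finally
        {
            stopwatch.Stop();
            this.Logger.Info(
                $"{controllerContext.ControllerDescriptor.ControllerName} completed in {stopwatch.ElapsedMilliseconds} ms");
        }
    }
}
```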

My goal is to refactor this at some point to remove the ILogger dependency.

### Managers

Managers perform more refined validation and contain the business logic for the application.
This allows for the business logic to be referenced by a variety of front-end applications (web site, api, desktop application) easily.

Each Manager is coded to an interface.

By doing this, the Liskov Substitution Principle can be applied, allowing for flexible and isolated unit tests.

### Repositories

Repositories act as the data access component for the project.

Initially, Entity Framework was used exclusively for data access.
However, for read operations, Entity Framework is being phased out in favor of Dapper due to performance issues.

Entity Framework applies an ORDER BY clause to ensure results are grouped together.
In some cases, this caused queries to time out.
Often, this is a sign that the data model needs to be improved and/or that the SQL queries were too large (joining too many tables).

Additionally, our Database Administrators wanted read operations to use WITH (NOLOCK).

To the best of our knowledge, a QueryInterceptor would need to be used to accomplish this.
That seemed counter-intuitive, and our aggressive timeline did not allow time to tweak and experiment with the Entity Framework code.

For insert operations, Entity Framework is preferred.

Each Repository is coded to an interface.

By doing this, the Liskov Substitution Principle can be applied, allowing for flexible and isolated unit tests.

## DryIoc

> DryIoc is fast, small, full-featured IoC Container for .NET

### Registration

The Dependency Injection framework is registered during application start-up with OWIN:
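A sketch of the OWIN entry point; `WebApiConfig` is an assumed routing-configuration class, and the ordering (register dependencies before the first controller is created) is the important part:

```csharp
using Owin;
using System.Web.Http;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();

        // Register dependencies before any controller is instantiated.
        DependencyInjectionConfiguration.Register(config);

        WebApiConfig.Register(config);
        app.UseWebApi(config);
    }
}
```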

The DependencyInjectionConfiguration class registers the container for the application to resolve dependencies using the following code:
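A hedged sketch of the original, hand-maintained registration (the Manager/Repository names are illustrative; `WithWebApi` comes from the DryIoc.WebApi adapter package):

```csharp
using DryIoc;
using DryIoc.WebApi;
using System.Web.Http;

public static class DependencyInjectionConfiguration
{
    public static void Register(HttpConfiguration config)
    {
        // WithWebApi wires DryIoc in as the Web API dependency resolver.
        var container = new Container().WithWebApi(config);

        // Originally, every Manager and Repository was registered by hand:
        container.Register<IBlogPostManager, BlogPostManager>(Reuse.Singleton);
        container.Register<IBlogPostRepository, BlogPostRepository>(Reuse.Singleton);
        // ... one line per Manager and Repository in the project
    }
}
```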

Problems with this would occasionally arise when a developer introduced new Manager or Repository classes but did not remember to register instances of those classes with the Dependency Injection container.
When this occurred, the compilation and deployment would succeed; but the following runtime error would be thrown when the required dependencies could not be resolved:

> An error occurred when trying to create a controller of type ‘BlogPostController’.
> Make sure that the controller has a parameterless public constructor.

The generated error message is not helpful in identifying what the underlying issue is.

To prevent this from occurring, all Manager and Repository classes would need to automatically register themselves during start-up.

### Reflection

To automatically register classes, reflection can be utilized to iterate over the assembly types and register all Manager and Repository implementations.
Initially, this was done by loading the assembly containing the types directly from the disk:
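A sketch of that first attempt; the assembly file name and naming conventions (`*Manager`, `*Repository`) are illustrative, not the project's actual values:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using DryIoc;

public static class ReflectionRegistration
{
    // Original approach: load the business assembly straight from disk
    // using a hard-coded path relative to the application directory.
    public static void RegisterFromDisk(IContainer container)
    {
        var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin", "Project.Business.dll");
        var assembly = Assembly.LoadFrom(path);

        var implementations = assembly.GetTypes()
            .Where(type => type.IsClass && !type.IsAbstract)
            .Where(type => type.Name.EndsWith("Manager") || type.Name.EndsWith("Repository"));

        foreach (var implementation in implementations)
        {
            // Register the concrete type against each interface it implements.
            foreach (var service in implementation.GetInterfaces())
            {
                container.Register(service, implementation, Reuse.Singleton);
            }
        }
    }
}
```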

While this works, it felt wrong to load the assembly from disk using a hard-coded path; especially when the assembly will be loaded by the framework automatically.
To account for this, the code was modified in the following manner:
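The revised shape resolves the assembly through a type known to live in it, rather than a hard-coded file path (`BlogPostManager` here is illustrative):

```csharp
using System.Linq;
using DryIoc;

public static class ReflectionRegistrationRevised
{
    // Revised approach: ask the runtime for the already-known assembly via
    // typeof(...).Assembly instead of loading a file from disk.
    public static void RegisterFromLoadedAssembly(IContainer container)
    {
        var assembly = typeof(BlogPostManager).Assembly;

        var implementations = assembly.GetTypes()
            .Where(type => type.IsClass && !type.IsAbstract)
            .Where(type => type.Name.EndsWith("Manager") || type.Name.EndsWith("Repository"));

        foreach (var implementation in implementations)
        {
            foreach (var service in implementation.GetInterfaces())
            {
                container.Register(service, implementation, Reuse.Singleton);
            }
        }
    }
}
```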

Unfortunately, no timing metrics are available for measuring whether either implementation performs better.
With that said, the second implementation seems faster.
This may be because the assembly is already loaded due to other registrations that occur before the reflection registration code is executed.
For this reason, results may vary from project to project.

Overall, the solution works well and has limited the runtime error to appearing only when a new Entity Framework context is added to the project.