Streamline Deploying Your Documentation with DocFx 2, .NET 5, and Docker

At the time of writing, DocFx is not playing well with .NET 5 projects. The error occurs specifically when it reads .csproj and .sln files to build the API documentation from your triple-slash code comments. This presented me with a slight problem, and here is my solution.

I needed to publish documentation for a work project, since we had just upgraded to .NET 5. I first worked on running DocFx with Mono inside a Docker container, which took a few minutes to get going. Once that was working, I ran into the bug between .NET 5 and DocFx, noted here: GitHub. The workaround is to build your code first, let DocFx read the compiled .dll files, and finally deploy the generated site to a web server for viewing.

To achieve this, I built a three-stage Dockerfile to get the documentation built. You can see an example of the Dockerfile below.

# build .dll files for docfx to inspect
FROM mcr.microsoft.com/dotnet/sdk:5.0.102-alpine3.12-amd64 AS code-env

WORKDIR /app

COPY . .

RUN dotnet publish -o ./.build src/FileShare.API

# build docs with docfx using mono
FROM mono AS docs-env

RUN apt update && apt install -y unzip wget

WORKDIR /tools

RUN wget https://github.com/dotnet/docfx/releases/download/v2.56.7/docfx.zip

RUN unzip docfx.zip -d ./docfx

WORKDIR /build

COPY --from=code-env /app/.build .build

COPY . .

RUN mono /tools/docfx/docfx.exe build docfx_project/docfx.json

# deploy using nginx for static file hosting
FROM nginx:latest

WORKDIR /var/www/html

COPY --from=docs-env /build/docfx_project/_site .

WORKDIR /etc/nginx/conf.d

COPY ./docfx_project/default.conf default.conf
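
The default.conf copied in the final stage is not shown in the post; a minimal sketch of what it might contain, assuming the site is served straight out of /var/www/html, looks like this:

server {
    listen 80;
    root /var/www/html;
    index index.html;
}

From there, building and running the image locally is the standard Docker workflow (the image tag is just an example):

docker build -t project-docs .
docker run -p 8080:80 project-docs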

Why I Use Docker for Documentation Deployment

I have used Docker to deploy services for some time now. I like that there is no configuring the service on a server, and that a developer on another team can pull the image and run it locally, so they have access offline. It also integrates well with my build pipeline.

How to Test Your .NET Core Service Registration

One of the most common runtime errors I face using .NET Core is that a service is not registered for dependency injection. This post focuses on the Microsoft.Extensions.DependencyInjection library. Registering dependencies is a required step any time a new service is created, and I often forget it. I have found a way to easily test that services resolve properly from the service provider (also known as the dependency injection container).

Below is a sample extension method used by a class library to register its services in the application's container. This is fairly common practice with class libraries, and I often forget to add to these methods as the number of services increases.

using Microsoft.Extensions.DependencyInjection;

namespace MyApp.Services
{
    public static class RegisterServices
    {
        public static IServiceCollection UseTextServices(this IServiceCollection services)
        {
            services.AddScoped<ITextService, TextService>();
            return services;
        }
    }
}

Let’s take a look at a way to test that we can generate our service from the provider without error. The method below builds a ServiceProvider object using our extension method. After the service provider is built, we can use GetService<T>() to retrieve our service and ensure it returns the correct type. When calling GetService<T>(), request the interface, just as you would when injecting through a constructor.

using Microsoft.Extensions.DependencyInjection;
using MyApp.Services;
using Xunit;

namespace MyApp.Test
{
    public class RegisterServicesTest
    {
        [Fact]
        public void TestServiceRegistration()
        {
            ServiceProvider provider = new ServiceCollection()
                .UseTextServices()
                .BuildServiceProvider();

            Assert.IsType<TextService>(provider.GetService<ITextService>());
        }
    }
}

This not only tests that a service is registered, but also everything that service depends on, and so on down the chain. I recommend adding unit tests like this to ensure your dependency injection provider is set up properly.

Entity Framework Config Builder

I work with a legacy Microsoft SQL Server database with many inconsistencies and large tables. When working on a .NET Core API with Entity Framework Core, I had to map tables from the database, which became a long and tedious process. I knew there had to be a better way than writing the configuration by hand, so I wrote a small console app to create the config for me.

The app started as a simple Ruby script. As our team grew, I converted it to a .NET Core console app so it could be used across the team. The app takes the CSV output of the sp_columns stored procedure from a SQL Server database, reads it, and outputs the config to the console.
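
The full project linked below is more complete, but a minimal sketch of the idea, assuming the CSV column order sp_columns produces (COLUMN_NAME fourth, TYPE_NAME sixth) and naive comma splitting, could look like this:

using System;
using System.IO;
using System.Linq;

namespace ConfigBuilder
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Skip the CSV header; each remaining row describes one table column.
            foreach (string line in File.ReadLines(args[0]).Skip(1))
            {
                string[] fields = line.Split(',');
                string column = fields[3];   // COLUMN_NAME
                string sqlType = fields[5];  // TYPE_NAME

                // Emit one EF Core property configuration line per column.
                Console.WriteLine(
                    $"builder.Property(e => e.{column}).HasColumnName(\"{column}\").HasColumnType(\"{sqlType}\");");
            }
        }
    }
}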

You can check it out here: https://github.com/rebelweb/ConfigBuilder

Your coding skills are your most powerful asset; use them to your advantage. When you face a tedious, repetitive task, see if you can write a few lines of code to save hours of time.

Moving Data From One Database Engine to Another Using Entity Framework Core

Sometimes a project’s requirements change and you need to switch the database engine in which you store your data. In my case, I needed to move from PostgreSQL to Microsoft SQL Server. Changing some configuration is easy, but moving the data can be a challenge.

The main challenge I have faced when moving data between data stores, whether from MySQL to PostgreSQL or from PostgreSQL to Microsoft SQL Server, is the encoding difference between databases, that is, how characters in strings are handled. It can cause issues when moving data using conventional methods such as dump-and-restore. In addition to being easier, I find it more enjoyable to use code rather than spending time manually working with the data.

Now, let’s get down to business and look over a code sample. The sample below is a simple .NET Core console app that connects to one database via Entity Framework Core, stores the data in a List<T>, and then writes it back to the new database.
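
The post's original sample is not reproduced inline here, so below is a minimal sketch of the approach, with hypothetical Article, SourceContext, and TargetContext types standing in for the real ones:

using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace DataConverter
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Pull every row out of the old database into memory.
            List<Article> articles;
            using (var source = new SourceContext())
            {
                articles = source.Articles.AsNoTracking().ToList();
            }

            // Write the rows into the new database.
            // Note: identity/key columns may need special handling in the target.
            using (var target = new TargetContext())
            {
                target.Articles.AddRange(articles);
                target.SaveChanges();
            }
        }
    }
}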

The DbContext used to configure this setup is simple as well. It is just a context with a single DbSet property and the entity configuration.
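
A sketch of such a context, assuming the Npgsql provider package and a hypothetical ArticleConfiguration class holding the mappings:

using Microsoft.EntityFrameworkCore;

namespace DataConverter
{
    // The target context looks the same, but calls UseSqlServer with its own connection string.
    public class SourceContext : DbContext
    {
        public DbSet<Article> Articles { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder options)
        {
            // Connection string is illustrative only.
            options.UseNpgsql("Host=localhost;Database=legacy;Username=app;Password=secret");
        }

        protected override void OnModelCreating(ModelBuilder builder)
        {
            builder.ApplyConfiguration(new ArticleConfiguration());
        }
    }
}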

Overall, if you are facing issues migrating data from one database engine to another, give your ORM (object-relational mapper) a try.

If you want to see the working solution, the full project is available here: https://github.com/rebelweb/DataConverter

Windows File Shares & .NET Core – Part 1

A project for work required me to interact with a Windows file share, and I had other constraints that made this a little more difficult: the solution needed to be cross-platform (run on both Windows and Linux), and I couldn't use the SMB1 protocol due to its security vulnerabilities. Here are my findings and a basic implementation of how I interact with Windows file shares from .NET Core.

What is SMB and What is Wrong with SMB1?

SMB, or Server Message Block, is a protocol dating back to 1983, created by IBM to provide network file shares on DOS systems. Microsoft later got involved and merged it with other products of theirs for better integration. Microsoft has continued to evolve the protocol over the years, introducing a new version, SMB2, in 2006.

Recently there have been growing concerns about security issues with SMB version 1, relating to denial-of-service attacks and remote code execution. This led Microsoft to deprecate the SMB1 protocol in Windows Server 2012, and it is disabled by default in Windows Server 2016.

Cross-Platform, and Why Not Mount the Share on Unix Systems?

The key benefit of using .NET Core is that apps can run on a variety of hosts, not just Windows. This allows users to install and run the application on their preferred system. In my case, we have customers that run our application on-site, and we use Linux for our cloud infrastructure. You should minimize any branching based on the platform, if at all possible.

On Unix systems such as Linux, the typical way to work with Windows file shares is to mount them as drives, after which they behave like any other directory. I find this method hard to work with for a few reasons, the main one being that you need elevated permissions to mount the share, which may not always be possible, especially in the case of Docker containers.

The Code…

I tried a couple of different libraries and finally settled on SMBLibrary, available here: https://github.com/TalAloni/SMBLibrary. It was the only library I could find that offers Windows file share access using the SMB2 protocol. You need to create a connection and then access the file in blocks of 65536 bytes, a known limit of early implementations of the protocol.

The client implements IDisposable, so we can use the C# using statement to set up the connection and authentication. See the example below for a sample client. While this implementation is not perfect, it is the first time I have attempted to implement IDisposable.
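
The original sample is not reproduced inline here; the sketch below captures the idea using SMBLibrary's SMB2Client (the ShareClient name and constructor shape are illustrative):

using System;
using System.Net;
using SMBLibrary;
using SMBLibrary.Client;

namespace FileShareExample
{
    public class ShareClient : IDisposable
    {
        public SMB2Client Client { get; } = new SMB2Client();

        public ShareClient(string host, string domain, string user, string password)
        {
            // Connect over direct TCP (port 445), then authenticate.
            if (!Client.Connect(IPAddress.Parse(host), SMBTransportType.DirectTCPTransport))
            {
                throw new InvalidOperationException($"Could not connect to {host}");
            }

            NTStatus status = Client.Login(domain, user, password);
            if (status != NTStatus.STATUS_SUCCESS)
            {
                throw new InvalidOperationException($"Login failed: {status}");
            }
        }

        public void Dispose()
        {
            // Log off and drop the connection when the using block ends.
            Client.Logoff();
            Client.Disconnect();
        }
    }
}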

Then we have a service with a method that takes a DTO, retrieves a file from the share, and displays the contents. In the example below, we access the file in blocks and append them together before we finally read out the byte array.
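
Again as a sketch rather than the post's exact code, assuming the ShareClient above and a hypothetical FileRequest DTO with Share and Path properties:

using System.IO;
using SMBLibrary;
using SMBLibrary.Client;

namespace FileShareExample
{
    public class FileRequest
    {
        public string Share { get; set; }
        public string Path { get; set; }
    }

    public class FileReadService
    {
        public byte[] ReadFile(SMB2Client client, FileRequest request)
        {
            NTStatus status;
            ISMBFileStore store = client.TreeConnect(request.Share, out status);

            status = store.CreateFile(out object handle, out FileStatus fileStatus, request.Path,
                AccessMask.GENERIC_READ | AccessMask.SYNCHRONIZE, SMBLibrary.FileAttributes.Normal,
                ShareAccess.Read, CreateDisposition.FILE_OPEN,
                CreateOptions.FILE_NON_DIRECTORY_FILE | CreateOptions.FILE_SYNCHRONOUS_IO_ALERT, null);

            var stream = new MemoryStream();
            long offset = 0;
            while (status == NTStatus.STATUS_SUCCESS)
            {
                // Each request reads at most MaxReadSize bytes (65536 on older servers).
                status = store.ReadFile(out byte[] block, handle, offset, (int)client.MaxReadSize);
                if (block == null || block.Length == 0)
                {
                    break;
                }
                stream.Write(block, 0, block.Length);
                offset += block.Length;
            }

            store.CloseFile(handle);
            store.Disconnect();
            return stream.ToArray();
        }
    }
}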

I have a full repository on GitHub implementing everything discussed in this post, available here: https://github.com/rebelweb/DotNetCoreFileShare. In part 2 we will look at writing files to the share.

Resetting Rails Counter Cache with ActiveJob

I have recently tried something a little different when working with Rails counter caches. For those new to Rails, counter cache columns are columns you set up to hold the count of a has_many relationship, making lookups faster than a COUNT SQL query. You can read more about them in the API docs under Rails Associations.

I tried setting up counter cache resets in ActiveJob instead of using a Rake task. The use case for this is when someone updates the data directly in SQL, or when you first implement the counts. A job allows me to trigger the reset from the admin interface of my application, and I can still call the job from a Rake task if I need to. Let's take a closer look.

I started thinking about how to do this efficiently with minimal coding. I dug into how to turn snake case into a class name, so I could take something like category and turn it into Category. I knew the generators that come baked into Rails work this way, so I searched the Rails repository on GitHub to see how it works and used the same approach in this example.
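
In other words, something along these lines, using ActiveSupport's inflector:

# Turn a snake_case key into the constant it names.
"category".camelize                      # => "Category"
"article_category".camelize.constantize  # => ArticleCategory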

The update job uses a JSON file to store each class and the related relationships that need updating. It is structured with the snake-case class name as the key and an array of all the relationships needing updates as the value. See the example below.
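
The original file is not reproduced here; based on the description, it would look something like this (the keys and relationship names are illustrative, matching the models shown at the end of the post):

{
  "category": ["article_categories"],
  "tag": ["article_tags"]
}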

Now for the job code (see below). The job accepts one optional parameter: the key from the JSON file discussed earlier, so you can process all tables or just a single table. This allows for maximum flexibility, and even concurrency, since we are using ActiveJob; to use concurrency, simply spin each class up in its own job. The job loads the JSON file storing the configuration and, depending on whether a key was passed, loops through all keys and columns or just the one specified.
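
A sketch of the job, assuming the JSON file above lives at config/counter_caches.json (the path and job name are illustrative):

class ResetCounterCachesJob < ApplicationJob
  queue_as :default

  def perform(key = nil)
    config = JSON.parse(File.read(Rails.root.join("config", "counter_caches.json")))

    if key
      update_cache_columns(key, config.fetch(key))
    else
      config.each { |klass, columns| update_cache_columns(klass, columns) }
    end
  end

  private

  # Turn the snake_case key into its class and reset each cached column.
  def update_cache_columns(key, columns)
    klass = key.camelize.constantize
    klass.ids.each do |id|
      columns.each { |column| klass.reset_counters(id, column) }
    end
  end
end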

For the actual business end of things, the update_cache_columns method does the brunt of the work. The method takes the key, turns it into a class name, and updates each one of its cache columns.

Testing this is easy. First, we create the related objects and skew the count via raw SQL. Then we run the job and verify that it updates the count successfully.
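
A sketch of such a test in minitest style, assuming the job above and the models shown below (attribute names are illustrative):

require "test_helper"

class ResetCounterCachesJobTest < ActiveJob::TestCase
  test "corrects a stale articles_count" do
    category = Category.create!(name: "News")
    article = Article.create!(title: "Hello")
    ArticleCategory.create!(article: article, category: category)

    # Skew the cached count behind ActiveRecord's back with raw SQL.
    Category.connection.execute(
      "UPDATE categories SET articles_count = 99 WHERE id = #{category.id}"
    )

    ResetCounterCachesJob.perform_now("category")

    assert_equal 1, category.reload.articles_count
  end
end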

I am including my base model code to help anyone see how a counter cache is set up. The category relation on ArticleCategory contains counter_cache: :articles_count, which is what typically updates the column on the category table every time a record is created or destroyed. The job above is for when the counts are wrong due to something going awry.
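
Based on that description, the relevant models look roughly like this:

class Category < ApplicationRecord
  has_many :article_categories
  has_many :articles, through: :article_categories
end

class Article < ApplicationRecord
  has_many :article_categories
  has_many :categories, through: :article_categories
end

class ArticleCategory < ApplicationRecord
  belongs_to :article
  belongs_to :category, counter_cache: :articles_count
end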

This is a different spin on how this is typically handled. I welcome any thoughts on this implementation.