Windows File Shares & .NET Core – Part 1

A project for work required me to interact with a Windows File Share. I had other constraints that made this a little more difficult: the solution needed to be cross-platform (run on both Windows and Linux), and I couldn’t use the SMB1 protocol due to security vulnerabilities. Here are my findings and a basic implementation of how I interact with Windows File Shares from .NET Core.

What is SMB and What is Wrong with SMB1?

SMB, or Server Message Block, is a protocol dating back to 1983, originally created by IBM to provide network file shares on DOS systems. Microsoft got involved and merged it with other products of theirs for better integration. Microsoft has continued to evolve the protocol over the years, introducing a new version, SMB2, in 2006.

Recently there have been growing concerns about security issues with SMB protocol version 1, relating to denial-of-service attacks and remote code execution. This caused Microsoft to put the SMB1 protocol on the deprecated list for Windows Server 2012, and it is disabled by default in Windows Server 2016.

Cross-Platform: Why Not Mount the Share on Unix Systems?

The key benefit of using .NET Core is that apps can run on a variety of hosts, including Windows, Linux, and macOS. This lets users install and run the software on their preferred system. In my case, we have customers that run our application on-site, and we use Linux for our cloud infrastructure. You should minimize any branching based on the platform, if at all possible.

On Unix systems such as Linux, the typical way to work with a Windows file share is to mount it as a drive so it behaves like a local directory. I find this method hard to work with for a few reasons. The main one is that you need elevated permissions to mount the share, which may not always be possible, especially inside Docker containers.

The Code…

I tried a couple of different libraries and finally settled on SMBLibrary, available at https://github.com/TalAloni/SMBLibrary. It was the only library I could find with Windows File Share access over the SMB2 protocol. You need to create a connection, then access the file in blocks of 65,536 bytes, a known limit of early implementations of the protocol.

The client implements IDisposable, so we can use the C# using statement to set up the connection and authentication. See the example below for a sample client. While this implementation is not perfect, it is the first time I have attempted to implement IDisposable.

Then we have a service method that takes a DTO, retrieves a file from the share, and displays its contents. In the example below, we read the file in blocks and concatenate them before finally reading out the byte array.
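The original C# samples are not reproduced here, but the block-read pattern itself is language-neutral. Here is a sketch in Ruby, with a hypothetical read_in_blocks helper standing in for the service logic (this is an illustration of the chunking idea, not SMBLibrary's actual API):

```ruby
require 'stringio'

# Read a stream in fixed-size blocks and concatenate the chunks,
# mirroring the 65,536-byte read limit of early SMB implementations.
BLOCK_SIZE = 65_536

def read_in_blocks(io)
  buffer = +''                      # mutable accumulator
  while (chunk = io.read(BLOCK_SIZE))
    buffer << chunk                 # append each block until EOF
  end
  buffer
end

# Usage with an in-memory stream standing in for the remote file:
payload = 'x' * 100_000
read_in_blocks(StringIO.new(payload)).bytesize # => 100000
```

The loop simply keeps requesting blocks until the stream returns nil at end-of-file; a 100,000-byte payload therefore takes two reads before the concatenated result is handed back.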

I have a full repository on GitHub implementing everything discussed in this post, available here: https://github.com/rebelweb/DotNetCoreFileShare. In part 2 we will look at writing files to the share.

How I Clear My Mind

As a software developer, it is important to take a rest now and again to avoid burnout. What I do to get away and clear my mind is hiking and photography. It allows me to get away from most technology, clear my mind, and return refreshed.

The things I love about hiking are exploring new places and enjoying the scenic views. I live in central Illinois, where we don’t have much nearby, but if I drive a couple of hours south there is a national forest with hundreds of miles of trails and interesting views. Seeing these relaxes me and allows me to clear my mind.

I encourage others to share what they do to unwind from developing.

Just Got Home From DevUp 2019

I have just arrived home from the DevUp Conference in St. Louis. My conference experience was a mixed bag this year; there were things that could be improved, but I want to focus on some of my favorite sessions from this year’s conference.

I also came away with some personal action items from the conference. I have started working on building a better portfolio of what I am working on. Other items include promoting myself more and possibly live streaming some coding sessions, so keep an eye out for that.

Entity Framework Core Debugging using SQL Server: A Detective Story by Chris Woodruff

This session covered tools you can use to debug performance issues. The first topic was tagging your queries so the tags come through as comments on the SQL seen in SQL Server Profiler. Also discussed were checking query plans and other tools included in newer versions of SQL Server.

Document Databases vs Relational Databases: An Honest Comparison and Things To Consider by Keven Grossnicklaus

I only caught part of this session, but still picked up some interesting points. Essentially, if your data doesn’t change much, you may consider using a document database. A hybrid scenario was also discussed: use a relational database as the main data store, but for pages that need to load a lot of data quickly, serve a cached version from a document database.

Going From No Code to App Store in 30 days by Lauren Hilton & Eric Bloomquist

This session discussed taking an existing Angular application and using parts of it to build a mobile application with Ionic. When building an Ionic app you can reuse most of your logic and create new views. Overall an interesting session that sparked some ideas I may pursue in the future.

Updating My Time Rounding Library

It has been a while since I have touched the time_rounder gem. When I left off, I had only implemented the 15-minute schedule. I have recently started work on the gem again in an effort to commit more to open source and build up my portfolio. Here is what I have been up to.

First, I have improved the rounding schedule setup. When I started the gem, I simply used large hashes mapping every minute of an hour to what it rounded to. I have since found a happy medium using Array#min_by and a small amount of math. I may make further improvements in the future, but I feel the code is pretty good at the moment.
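A minimal sketch of the min_by approach, using a hypothetical 15-minute schedule (the gem's actual internals may differ):

```ruby
# Round a minute (0-59) to the nearest point on a 15-minute schedule.
# min_by picks the schedule entry with the smallest absolute distance,
# replacing a 60-entry lookup hash with one line of math.
SCHEDULE = [0, 15, 30, 45, 60].freeze

def round_minute(minute)
  SCHEDULE.min_by { |point| (point - minute).abs }
end

round_minute(7)  # => 0  (7 is closer to 0 than to 15)
round_minute(38) # => 45 (38 is closer to 45 than to 30)
round_minute(53) # => 60 (rolls over into the next hour)
```

Including 60 in the schedule lets late minutes round up into the next hour rather than snapping back to 45.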

Next, I have improved the tests, making them easier to understand and trimming the number of examples by lumping common cases together. I essentially take all the minutes that round to the same number and test them in a loop, instead of repeating a test for each minute of the hour.
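That looping idea can be sketched like this, with an illustrative rounding function standing in for the gem's logic (the gem's real test suite may be structured differently):

```ruby
# Stand-in for the gem's rounding logic.
def round_minute(minute)
  [0, 15, 30, 45, 60].min_by { |point| (point - minute).abs }
end

# Group every minute of the hour by its expected rounded value, then
# assert each group in a loop instead of writing sixty separate examples.
EXPECTED = { 0 => 0..7, 15 => 8..22, 30 => 23..37, 45 => 38..52, 60 => 53..59 }

EXPECTED.each do |rounded, minutes|
  minutes.each do |minute|
    raise "#{minute} should round to #{rounded}" unless round_minute(minute) == rounded
  end
end
```

One entry per expected value documents the boundaries (7/8, 22/23, and so on) far more clearly than sixty near-identical test cases.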

Lastly, I am working on the other rounding schedules. At the moment there is only the 15-minute schedule; the plan is to add 1-hour, 30-minute, 20-minute, 10-minute, and 5-minute schedules. Once all the schedules are complete, the gem will move to a 1.0 release.

Attending GlobalHack VI

A couple of co-workers and I registered for this hackathon to end homelessness in the St. Louis region. We are currently awaiting the true challenge of the hackathon. I will update this post throughout the weekend.

Update (11/2/2016)

So the hackathon is over. We didn’t win any prizes, but we learned several things about our team and pieces of our development process we can improve. We mainly use Ruby on Rails as our development environment, and some of the things we learned pertain to that style of development.

The first thing we learned is that generators are great, but I like my code formatted a certain way, different from what the scaffold generators produce. This means we need to create our own set of generators to quickly build an application. I have started an open source project for them, because I use Rails outside of work as well.

Second, I learned how uncomfortable I get writing code without any unit tests. I have come to believe they are necessary; they save you from manually testing multiple pieces of code over and over. Because we chose not to write tests, parts of our app didn’t work properly. I’m not sure how to feel here: I understand tests take time, and when you have less than 48 hours to develop a functioning app, stuff gets cut. I am looking forward to building some generators, because they should help by pre-generating those tests for us.

Overall it was a fun experience for my first hackathon, and I learned quite a bit about myself, my development skills, and my development style. I look forward to taking what I learned and improving over the next several months.

Resetting Rails Counter Cache with ActiveJob

I have recently tried something a little different when working with Rails counter caches. For those new to Rails, counter cache columns hold a count of a has_many relationship so that lookups are faster than a COUNT SQL query. You can read more about them in the Rails Associations API documentation.

I tried resetting counter caches in an ActiveJob instead of a rake task. The use case is when someone updates the counts directly in SQL, or when you first introduce the columns. This allows me to call the job from the admin interface of my application, and I can still call it from a rake task if needed. Let’s take a closer look.

I started thinking about how to do this efficiently with minimal coding. I dug into how to turn a snake_case string into a class name, so I could take something like category and turn it into Category. I know the generators baked into Rails work this way, so I searched the Rails repository on GitHub to see how it works and used the same approach in this example.
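Here is a plain-Ruby sketch of that conversion; in the Rails app itself, ActiveSupport's String#camelize and #constantize do this work, and the classify_key helper below is just an illustration:

```ruby
# Convert a snake_case key into a constant name, then look the class up.
# Mimics ActiveSupport's String#camelize + #constantize in plain Ruby.
def classify_key(key)
  key.split('_').map(&:capitalize).join
end

classify_key('category')         # => "Category"
classify_key('article_category') # => "ArticleCategory"

# With a matching class defined, the name resolves to the constant:
class Category; end
Object.const_get(classify_key('category')) # => Category
```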

The update script uses a JSON file to store each class and the associations that need updating. It is structured with the snake_case class name as the key and an array of all the associations needing updates as the value. See the example below.
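The post's exact file is not reproduced here; a hypothetical configuration following that structure might look like this (class and association names invented for illustration):

```json
{
  "category": ["articles"],
  "author": ["articles", "comments"]
}
```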

Now for the job code (see below). The job accepts one parameter, the key from the JSON file discussed earlier, so you can update all tables or just a single one. This allows for maximum flexibility, and even concurrency, since we are using ActiveJob: to run updates concurrently, simply spin each class off into its own job. The job loads the JSON configuration file and, depending on whether a key was passed, loops through all keys and columns or just the one specified.
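Outside of Rails, the perform flow can be sketched in plain Ruby; the names below (CONFIG, keys_to_process, the inlined JSON) are assumptions for illustration, not the repository's actual code:

```ruby
require 'json'

# Stand-in for the JSON configuration file discussed above.
CONFIG = JSON.parse('{"category": ["articles"], "author": ["articles", "comments"]}')

# Given an optional key, select either that single key or every key.
def keys_to_process(key = nil)
  key ? [key] : CONFIG.keys
end

# Sketch of the job's perform method: loop over the selected keys.
def perform(key = nil)
  keys_to_process(key).each do |k|
    # In the real job this is where each class's cache columns are
    # reset, e.g. via an update_cache_columns(k, CONFIG[k]) call.
    puts "resetting #{k}: #{CONFIG[k].join(', ')}"
  end
end

keys_to_process             # => ["category", "author"]
keys_to_process('category') # => ["category"]
```

Passing no key processes every configured class; passing one key narrows the run to a single class, which is also what makes the one-job-per-class concurrency approach possible.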

For the actual business end of things, the update_cache_columns method does the brunt of the work. It takes the key, turns it into a class name, and updates each one of its cache columns.

Testing this is easy. First, we create the related object and update the count via raw SQL. Then we run the job and verify it updated the count successfully.

I am including my base model code to help anyone see how a counter cache is set up. The category relation on ArticleCategory contains counter_cache: :articles_count, which is what normally updates the column on the category table every time a record is created or destroyed. The job above is for when the counts are wrong due to something going awry.
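A counter-cached association of that shape looks roughly like the following sketch (a simplified stand-in following the names mentioned above, not the repository's exact models):

```ruby
class Category < ApplicationRecord
  has_many :article_categories
end

class ArticleCategory < ApplicationRecord
  # counter_cache keeps categories.articles_count in sync on
  # create/destroy; the job above repairs it when it drifts.
  belongs_to :category, counter_cache: :articles_count
end
```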

This is a different spin on how this is typically handled. I welcome any thoughts on this implementation.