Monday, December 30, 2013

NuGet: Publishing different packages from the same solution

Let's say we need to generate different NuGet packages from our solution: a core package and some specific adapter packages (e.g. a Windsor container adapter package, an EntityFramework + SQL CE adapter package).

This raises some interesting questions:
  • How do we avoid getting confused with different versions when republishing packages?
  • How do we avoid forgetting to republish some package after updating one of its underlying projects?
  • How do we remember which packages need updating (because we made changes in their underlying projects) and which don't (because we didn't touch those projects)?

So, how the hell do we keep this process simple and straightforward?

My answer is to republish every NuGet package from the solution at once.
I have just one script file to run. When I run this file, the following happens:
  • All projects from my solution are built in Release mode;
  • Tests are run;
  • All NuGet packages are created;
  • All NuGet packages are published with the same specified version;

When I need to update any package from my solution, I just run this script. I don't need to remember which projects I updated or which packages (core or some adapter) need republishing. All I need is to run one script, with no risk of uploading a broken package to the NuGet repository.

Now I am planning to include the republishing step in our Continuous Integration server (TeamCity), using almost the same script.

See an example of the build script in our Antler project:

Saturday, November 30, 2013

Antler Framework: use the same syntax to work with different databases and different ORMs.

I am starting to contribute to the Antler project, which I believe may be useful for many .NET developers as well.

Antler is a simple framework that makes it super easy to work with different databases (SQL CE, SQLite, SQL Express, SQL Server, Oracle, etc.) and different ORMs (NHibernate, Entity Framework Code First) using the same syntax.

The project is at an early stage, but we have a clear understanding of the requirements.

So, what are the requirements?
  • Support for multiple storages at the same time.
  • Use a common syntax to work with them, so you can easily swap one storage for another.
  • Have a strong architectural base, including UnitOfWork/DataSession/Repository notions.
  • Be fully pluggable. For example, it should be damn easy to choose which storage or IoC container to use.
  • Good coverage by unit/integration tests. Most of the integration tests should pass for any combination of database/ORM.

Configuration example

 var configurator = new AntlerConfigurator();  

Usage example

Adding teams to the database:

 UnitOfWork.Do(uow =>
 {
    uow.Repo<Team>().Insert(new Team() { Name = "Super", BusinessGroup = "Great" });
    uow.Repo<Team>().Insert(new Team() { Name = "Good", BusinessGroup = "Great" });
    uow.Repo<Team>().Insert(new Team() { Name = "Bad", BusinessGroup = "BadBg" });
 });

Querying teams from the database:

 var found = UnitOfWork.Do(uow => uow.Repo<Team>().AsQueryable().Where(t => t.BusinessGroup == "Great").OrderBy(t => t.Name).ToArray());  

The project is available on GitHub:
Everyone is welcome!

Saturday, October 12, 2013

Integration testing approaches: Should we use an in-memory database?

Integration testing is a form of testing that verifies that the components of our application properly work together and with external resources (database, disk drive, etc.).

Let's say we have an application that uses a database in some way. How can we cover the components that use the database with integration tests?

The most important principle of testing is Isolation:

Tests should be isolated from other tests in the data they create and query.

So, each good integration test that uses a database should consist of the following steps:
  • Cleaning database 
  • Inserting testing data 
  • Calling component being tested 
  • Checking result / database state 
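
Schematically, such a test might look like the sketch below. The in-memory db object and the findActiveUsers component under test are hypothetical, made up just to illustrate the four steps:

```javascript
// A tiny in-memory stand-in for a real database.
var db = { users: [] };

// The (hypothetical) component being tested: reads from the database.
function findActiveUsers(db) {
    return db.users.filter(function (u) { return u.active; });
}

function testFindActiveUsers() {
    // 1. Clean the database
    db.users = [];

    // 2. Insert test data
    db.users.push({ name: "Alice", active: true });
    db.users.push({ name: "Bob", active: false });

    // 3. Call the component being tested
    var result = findActiveUsers(db);

    // 4. Check the result / database state
    if (result.length !== 1 || result[0].name !== "Alice") {
        throw new Error("findActiveUsers returned a wrong result");
    }
}

testFindActiveUsers();
```

With a real database, steps 1 and 2 become schema cleanup and inserts, but the shape of the test stays the same.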

There are 3 popular ways to write Integration Tests:
  1. Use the same development database for Integration tests 
  2. Generate empty database before each test 
  3. Use some in-memory database 

1. The first choice is to use the same development database for testing. This is the worst choice, because we can't populate the development database with test data without breaking the Isolation principle.

So, it's read-only-mode testing. We can test some read-only requests to the database, but even then we can't make solid assertions, because the data is fragile (we can change our development database at any time and the tests will break).

2. The second choice is to generate an empty database with test data before each test.
It's good practice to have a testing environment that resembles the real production environment as closely as possible. In this case we use the same database engine for testing as in production, which makes it possible to catch database-specific problems early.

The disadvantage of this approach is that the integration tests can be pretty slow.

3. The third choice is to use an in-memory database.
If for some reason we can't use the second choice, we can use an in-memory database for testing. It's the compromise choice: we keep the isolation of our tests plus a high speed of execution, but we sacrifice closeness to the real environment.

SQLite is a great example of such a database.

For example, if we use NHibernate, it's very easy to inject a SQLite database (instead of our real one) in our tests. This database will be built from the same NHibernate mappings and will use the same database logic.

If you are interested in this approach, you can find an example project using NHibernate + SQLite for testing on GitHub.

Sunday, September 1, 2013

JasmineJs integration with ReSharper & TeamCity

JasmineJs is a great framework for testing JavaScript code.

We'll discuss here how to integrate it with two other amazing products: ReSharper 7 and TeamCity.


You can download a standalone example here (with additional files to run tests from ReSharper & TeamCity).

This package contains two folders: SpecRunner and YourWebSite. We’ll need them later.

JasmineJs + ReSharper 7 = Friends

To run client-side tests without depending on your browser, we can use PhantomJs, a headless WebKit browser.

Put the files from the downloaded SpecRunner folder somewhere in your solution.

For example, in my solution these files are located in MyProject/Testing/Client/ folder.

Now we can configure ReSharper to use PhantomJs:

Now we can create JS files with tests and run them directly from Visual Studio (thanks to ReSharper) without launching our browser (thanks to PhantomJs):

You need to include the necessary scripts (the scripts to be tested + their dependencies) using reference syntax.
However, note that we don't need to include the JasmineJs scripts, because ReSharper has its own copy built in.
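
For example, a spec file might look like the sketch below. The reference path and the add function are made up for illustration, and the tiny describe/it/expect stand-ins exist only so the sketch runs on its own - in reality ReSharper's bundled Jasmine provides the real ones:

```javascript
/// <reference path="~/Scripts/calculator.js" />
// The reference comment above is how ReSharper pulls in the scripts
// under test and their dependencies (the path here is hypothetical).

// Minimal stand-ins so this sketch runs outside a Jasmine runner;
// ReSharper's bundled Jasmine provides the real describe/it/expect.
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); }
function expect(actual) {
    return {
        toEqual: function (expected) {
            if (actual !== expected) {
                throw new Error("expected " + expected + " but got " + actual);
            }
        }
    };
}

// Normally this would live in calculator.js; inlined to keep the sketch self-contained.
function add(a, b) { return a + b; }

describe("calculator", function () {
    it("adds two numbers", function () {
        expect(add(2, 3)).toEqual(5);
    });
});
```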

JasmineJs + TeamCity = Friends

Preparing project for running JS-tests from TeamCity 

Put the downloaded YourWebSite/lib/ folder into the Scripts/Jasmine/ folder of your Web project.

Put the downloaded YourWebSite/SpecRunner.htm file in the root of your Web project.

For example:

Then you need to configure SpecRunner.htm to include all necessary scripts:
  • JasmineJs scripts (ReSharper doesn't need them, but TeamCity does);
  • Source files to be tested + their dependencies;
  • Spec files with tests;
  • jasmine.teamcity_reporter.js
Note that we include jasmine.teamcity_reporter.js, which is needed for integration with TeamCity.

Example of SpecRunner.htm:
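
Something like the following sketch (the script paths are hypothetical and should match your project layout, and the reporter class name may differ in your copy of jasmine.teamcity_reporter.js):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Jasmine Spec Runner</title>

  <!-- JasmineJs scripts (ReSharper doesn't need them, but TeamCity does) -->
  <link rel="stylesheet" href="Scripts/Jasmine/lib/jasmine.css" />
  <script src="Scripts/Jasmine/lib/jasmine.js"></script>
  <script src="Scripts/Jasmine/lib/jasmine-html.js"></script>

  <!-- Source files to be tested + their dependencies -->
  <script src="Scripts/calculator.js"></script>

  <!-- Spec files with tests -->
  <script src="Scripts/Specs/calculatorSpec.js"></script>

  <!-- Reporter needed for integration with TeamCity -->
  <script src="Scripts/Jasmine/lib/jasmine.teamcity_reporter.js"></script>

  <script type="text/javascript">
    // Wire up Jasmine with the TeamCity reporter and run the specs on load.
    var env = jasmine.getEnv();
    env.addReporter(new jasmine.TeamcityReporter());
    window.onload = function () { env.execute(); };
  </script>
</head>
<body></body>
</html>
```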

Configuring TeamCity for running JS-tests from our project

We need to create additional Build Step in TeamCity:

Note that we specify the Working directory as the Testing/Client/ folder of our solution (where we put the downloaded files from the SpecRunner folder).

Sunday, August 11, 2013

Creating and Installing Windows Services with TopShelf

TopShelf is a library for .NET that makes it very easy to create and install a Windows service.

For example, all necessary code for creating Windows Service might look like:
 static void Main()  
   HostFactory.Run(x =>  
     x.Service(s =>   
        s.ConstructUsing(name => new MyService());  
        s.WhenStarted(tc => tc.OnStart());  
        s.WhenStopped(tc => tc.OnStop());  
     x.SetDescription("My Service is Great!");  
     x.SetDisplayName("My Service");  

MyService is a class with two public methods (OnStart and OnStop). OnStart contains the code to perform in the service; OnStop may do some cleanup.

The best benefit that comes with TopShelf is that the service runs as a console application during development (running, debugging, etc.).
When you want to install it as a real Windows service, you just need to run your exe file with a special parameter:
 MyService.exe install  

Saturday, July 27, 2013

Using Oracle in a standalone .NET application

Our goal is to set up a standalone .NET application without a dependency on an Oracle client installed in the system. In other words, our application should contain all the necessary DLLs in the bin folder of our project.

We need the following DLLs from the Oracle client:
  • oci.dll
  • OraOps11w.dll
  • oraocci11.dll
  • orannzsbb11.dll
  • oraociei11.dll
  • Oracle.DataAccess.dll
You can get them from an installed Oracle client; in my case it was version 11.2.
You can put these assemblies in some folder in your solution; let's say this folder is named "Components".
Now you can reference Oracle.DataAccess in your solution's projects wherever you need it. So, Oracle.DataAccess will get into the bin folder of your project when you build it.

But we need the other Oracle libs to be in the bin folder too! How do we force this to happen?

You can achieve this using the Pre-Build Events of your project. Go to the Properties of your project, select the Build Events tab, and paste the following lines into the Pre-build event box:

copy $(ProjectDir)\..\..\Components\oci.dll $(TargetDir)
copy $(ProjectDir)\..\..\Components\OraOps11w.dll $(TargetDir)
copy $(ProjectDir)\..\..\Components\oraocci11.dll $(TargetDir)
copy $(ProjectDir)\..\..\Components\orannzsbb11.dll $(TargetDir)
copy $(ProjectDir)\..\..\Components\oraociei11.dll $(TargetDir)

You may need to correct the paths for your case, but you get the idea: these Oracle libraries will be copied into the bin folder of your project before each build.
So, after each build you will get a standalone application (with all the necessary Oracle client libs) that can run on any computer without the need to install Oracle on it.

Wednesday, July 3, 2013

Using Rhino Mocks after FakeItEasy experience

In recent years I have always used FakeItEasy as my favorite mocking framework. And I was very happy with it.

But a few weeks ago I joined a team that uses Rhino Mocks. So, I've got a chance to compare them.

Both frameworks provide just about the same facilities, but I found the FakeItEasy syntax more convenient to use. Plus, there is no difference between a stub and a mock in FakeItEasy: everything is just a fake! You don't need to remember which one to use. In Rhino Mocks I sometimes create a stub, then later decide to set expectations on it and wonder: "Why do my expectations always pass?" Then I realize that I need to refactor my code to use a mock instead of a stub. There are no such problems in FakeItEasy.

Maybe it's a matter of habit, but my advice is to use FakeItEasy; it's much cooler.

Friday, May 31, 2013

Programming in Seychelles

Just kidding... It's impossible to concentrate on serious work in the Seychelles, because it's paradise without any doubt.

I had a long vacation on the Seychelles islands, and I tell you now: I have never seen such a beautiful country before.

Anse Georgette beach(Praslin island):

Giant Tortoise on Curious island:

Petite Anse Kerlan beach(Praslin island):

Small and beautiful Coco island:


Sunset on Anse Lazio beach(Praslin island):

Grand Anse beach(La Digue island):

Friday, April 12, 2013

A good programmer is an eternal student

Programming is not like most other professions. A programmer can't allow himself to relax, because technology is always moving forward.

And if he does not follow these technologies, very soon he will be useless for serious problems in the IT world.

A good programmer is an eternal student. He should always be on the lookout for new and more innovative solutions, even if he knows a good old solution to the problem.

How not to miss the right moment for changes?
  • Always read news in your focus area
  • Read new books by competent authors
  • Follow great projects
  • Try new solutions for old problems

Thursday, April 4, 2013

Git private repositories: GitHub vs Bitbucket

I'm really enjoying using GitHub. It's an incredibly comfortable service for sharing code via public Git repositories. But when it comes to private repositories, the prices are biting ($22/month for 20 private repositories).

That's why I've decided to try Bitbucket, which offers an unlimited number of private repositories for up to 5 users.
Besides, you can easily import your repositories, SSH keys, etc. from GitHub.

So, for me, Bitbucket is the best place for private Git repositories. For public repos, however, it's better to use GitHub, because it's the place where the community is. But who knows, maybe little by little the community will move to Bitbucket for public repositories too.

Friday, March 22, 2013

Classic ASP is Hell

After working for a long time with C#, ASP.NET MVC, unit tests, NuGet, and other pleasures of .NET life, I was assigned to an old Classic ASP project.

I had not worked with Classic ASP and VBScript before, and I tell you now: it is absolute Hell.

Now I realize how far Microsoft has progressed in this direction over the last 10 years:
  • C# - one of the most powerful object-oriented languages. It's even ahead of Java in many ways.
  • ASP.NET MVC - a great framework for building lightweight, highly testable web applications.
So I am very happy that I did not catch the time when most web projects were written in Classic ASP.

Tuesday, March 12, 2013

Preloading assemblies from Application folder in C#

As you may know, AppDomain.CurrentDomain.GetAssemblies() returns the list of assemblies loaded in the current Application Domain. Even if your executing project has a reference to some assembly, there is no guarantee that this assembly has already been loaded: it will be loaded lazily on first use. But what if you need all assemblies right now? For example, you want to search the assemblies for classes that implement some interface.

You can easily force assemblies to load into your Application Domain.

Let's create a helper class that returns the list of all files in our application folder (the Bin folder in the case of a web application):
 // requires: using System; using System.Collections.Generic;
 //           using System.IO; using System.Reflection;
 public static class From
 {
    public static IEnumerable<FileInfo> AllFilesIn(string path, bool recursively = false)
    {
       return new DirectoryInfo(path)
                 .EnumerateFiles("*.*", recursively ? SearchOption.AllDirectories : SearchOption.TopDirectoryOnly);
    }

    public static IEnumerable<FileInfo> AllFilesInApplicationFolder()
    {
       var uri = new Uri(Assembly.GetExecutingAssembly().CodeBase);
       return AllFilesIn(Path.GetDirectoryName(uri.LocalPath));
    }
 }

So, now you can preload the assemblies as follows (ToList() is needed because standard LINQ has no ForEach for IEnumerable):
 From.AllFilesInApplicationFolder().Where(f => f.Extension == ".dll").ToList().ForEach(fi => Assembly.LoadFrom(fi.FullName));

This solution is applicable to both desktop and web applications.

Friday, February 22, 2013

Safe forms without captcha - hybrid approach

Captcha is very annoying for users. How can we make forms safe without a captcha?

Today's popular approaches are:
  • CSS technique. The idea is to add an invisible field (hidden via CSS) to the form. Silly spambots don't know that people can't see this field, and they fill it out. So, on the server side, you should make sure that this field is empty.
  • JavaScript technique. The idea is to generate and fill out some field using JavaScript. Silly spambots can't process JavaScript. So, on the server side, you should make sure that this field is not empty. This approach is good, but it has a big disadvantage: if a real user has JavaScript turned off, he will not get this generated field, so he will not fill it out. A real user with disabled JavaScript = a silly spambot.
The problem with these approaches: what if the spambot is not silly?

The first approach is great in its simplicity, but I think it is easier for spambots to overcome than the second one: JavaScript processing requires more brains in a spambot's head. So I like the second approach more, but I don't like the fact that poor users with disabled JavaScript will suffer.

I don't want to bother the user with a captcha if he has JavaScript enabled. But if the poor user has JavaScript disabled, he will get a captcha (and the spambot will get one too).

So, my hybrid approach is to use a captcha block (captcha image + input field) wrapped in <noscript/> tags, and then use JavaScript to hide the captcha block, remove the <noscript/> tags, and fill out the captcha with a valid value.

As a result, we have a captcha that is shown to users who have JavaScript disabled (as well as to spambots). If the user has JavaScript turned on, the captcha will be hidden and filled out using JavaScript.
On the server side we just check whether the captcha is valid; it doesn't matter to us whether the user has JavaScript enabled!
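
Here is one possible sketch of this idea (not exact production markup: the field names, the /captcha-image URL, and the server-rendered token placeholder are all hypothetical, and a real implementation must render the valid captcha value from the server):

```html
<form method="post" action="/contact">
  <input type="text" name="message" />

  <!-- Shown only when JavaScript is disabled -->
  <noscript>
    <img src="/captcha-image" alt="captcha" />
    <input type="text" name="captcha" />
  </noscript>

  <input type="submit" value="Send" />

  <script type="text/javascript">
    // With JavaScript enabled, the <noscript> content is not rendered,
    // so we add the captcha field ourselves, hidden, and fill it with
    // a valid value issued by the server (hypothetical placeholder below).
    var hidden = document.createElement("input");
    hidden.type = "hidden";
    hidden.name = "captcha";
    hidden.value = "{{ serverIssuedCaptchaToken }}"; // hypothetical server-rendered token
    document.forms[0].appendChild(hidden);
  </script>
</form>
```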

Saturday, February 2, 2013

10gen's MongoDB course - it was great!

Recently, I took a free course organized by the 10gen company (famous for the MongoDB NoSQL data store). It was a great course that covered many interesting aspects of MongoDB. There were video lectures, regular homework, and a final exam. You need a 65% grade to pass the course. I actually got 100% and earned a beautiful "M101: MongoDB for Developers" certificate:

Thanks very much to 10gen. They have not only created a great free product (MongoDB), but also help developers learn it.