Category: cruisecontrol

Tweak config transform to support non-web.config files and non-web projects

[Update] There is a VS plugin called SlowCheetah that extends the native config transform. To make it CI friendly, you need to copy the extension targets file from %LOCALAPPDATA%\Microsoft\MSBuild\SlowCheetah\v1\ to your source control folder, then pass this parameter to msbuild:

<exec program="${tools.msbuild.console}">
  <arg value="/p:Configuration=${env}" />
  <arg value="/p:OutDir=${dir.release}\Client.Admin\${env}\" />
  <arg value="/p:SlowCheetahTargets=${tools.SlowCheetah.Transforms.targets}" />
</exec>

Right now the config transform feature in Visual Studio is only available for web projects. No luck for non-web projects, like WPF/Silverlight projects.

Even in web projects, this transform is limited to web.config only.

Let’s tweak it to support more.

Step 1, grab this msbuild extension file and put it into your path. I saved it into my {solution.dir}/build folder, then checked it into source control.

Edit the project file:

Make your config file transformable, in my case log4net.config:

Add this transformfiles.targets file.
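The original snippets did not survive; here is a minimal sketch of what the project-file edit and the targets file could look like. The `TransformFile` item name, the VS2010 assembly path, and the folder layout are my assumptions, not the original code:

```xml
<!-- In the .csproj: mark the file and import the shared targets -->
<ItemGroup>
  <TransformFile Include="log4net.config" />
</ItemGroup>
<Import Project="..\build\TransformFiles.targets" />

<!-- build\TransformFiles.targets: after each build, apply <name>.<Configuration>.config
     using the TransformXml task that ships with VS2010's web publishing targets -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="TransformXml"
             AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
  <Target Name="TransformConfigFiles" AfterTargets="Build" Condition="'@(TransformFile)' != ''">
    <TransformXml Source="%(TransformFile.Identity)"
                  Transform="%(TransformFile.Filename).$(Configuration)%(TransformFile.Extension)"
                  Destination="$(OutDir)%(TransformFile.Filename)%(TransformFile.Extension)" />
  </Target>
</Project>
```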

Step 2, create a copy of the transform file for each configuration and set the transform rule in it; this one is a simple replace:
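For example, a hypothetical log4net.Release.config that replaces the file appender path (the appender name and log path are invented for illustration):

```xml
<?xml version="1.0"?>
<log4net xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appender name="FileAppender" type="log4net.Appender.FileAppender" xdt:Locator="Match(name)">
    <!-- replace the whole <file> element for the Release configuration -->
    <file value="d:\logs\myapp.log" xdt:Transform="Replace" />
  </appender>
</log4net>
```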

Build and check the output folder. You might want to set the “Copy to Output Directory” property to “Do not copy”, because the config transform takes over this process.

Step 3, make those files look nested in Visual Studio: open the project file again and add this:
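A sketch of the nesting trick, assuming the transforms are named log4net.Debug.config and log4net.Release.config:

```xml
<ItemGroup>
  <None Include="log4net.Debug.config">
    <DependentUpon>log4net.config</DependentUpon>
  </None>
  <None Include="log4net.Release.config">
    <DependentUpon>log4net.config</DependentUpon>
  </None>
</ItemGroup>
```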

WPF/Silverlight projects can use the same tweak.


A fancy transform, remove-all and insert, is really powerful:

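The original example was lost; a sketch of the remove-all-and-insert pattern (the appSettings keys and values are invented for illustration):

```xml
<appSettings>
  <!-- wipe every existing <add> entry... -->
  <add xdt:Transform="RemoveAll" />
  <!-- ...then insert a clean set for this environment -->
  <add key="Environment" value="Production" xdt:Transform="Insert" />
  <add key="ServiceUrl" value="http://prod/services" xdt:Transform="Insert" />
</appSettings>
```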

Unfortunately, this kind of tweak only works for TRANSFORM_ON_BUILD; DEPLOY PACKAGE and PUBLISH won’t trigger the transform in the current version of VS2010.

The evolution of the deploy process, from NAnt tokens to config transform in MSBuild

The deploy process used to be very simple: copy/xcopy plus a few manual modifications for connection strings and other stuff.

But too many manual operations in a deploy are always problematic, due to tired eyes, fingers, people, etc.

Using a NAnt-tokenized config file looked very elegant then; here is the typical layout of our project.


Tokens in app.config.template are replaced by the values defined in each environment’s properties.xml file.
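The layout snippet didn’t survive, but the replacement step could look like this in NAnt. The property names are placeholders; NAnt’s replacetokens filter expects @TOKEN@ markers in the template:

```xml
<copy file="${dir.src}/app.config.template" tofile="${dir.compile}/app.config" overwrite="true">
  <filterchain>
    <replacetokens>
      <!-- @CONNECTIONSTRING@ and @SERVICE_URL@ in the template get replaced -->
      <token key="CONNECTIONSTRING" value="${db.connectionstring}" />
      <token key="SERVICE_URL" value="${service.url}" />
    </replacetokens>
  </filterchain>
</copy>
```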

After looking at Scott Hanselman’s post about MSDeploy and config transform in the deploy process, this built-in config XML transform really impressed me; time to give it a try.

Apply config transform:

For us, the transform for acceptance/production just changes the connection string and sets includeExceptionDetailInFaults to false.

<?xml version="1.0"?>

<!-- For more information on using web.config transformation visit -->

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDB"
      connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
      xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
  <system.serviceModel>
    <behaviors>
      <serviceBehaviors>
        <behavior name="DefaultServicesBehavior">
          <serviceDebug includeExceptionDetailInFaults="false" xdt:Transform="Replace" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Do a publish for this WCF service project, choosing File System as the publish method; the web.config in the output folder is transformed.

Building the package can be done through the GUI, the command line, or NAnt.

  <target name="release.server" depends="">
    <msbuild project="${dir.src}/MyWcfServices/WcfServices.csproj" target="package">
      <property name="Configuration" value="Release" />
      <property name="OutDir" value="${dir.release}/Server/" />
    </msbuild>
  </target>

The generated package:

This package can be deployed on the target server by cmd, as described in readme.txt, or by the IIS admin console.

Note: those deploy options won’t appear until MSDeploy is installed on the IIS server.

The Import Application wizard is very friendly, in the usual MS way.

Here is the most beautiful step: parameters for this deploy are automatically read from the WcfServices.SetParameters.xml file located beside the package: connection string, app name, and so on.
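A SetParameters.xml file looks roughly like this; the parameter names vary per project, and these values are illustrative:

```xml
<parameters>
  <setParameter name="IIS Web Application Name" value="Default Web Site/WcfServices" />
  <setParameter name="MyDB-Web.config Connection String"
                value="Data Source=ProdSQLServer;Initial Catalog=MyDB;Integrated Security=True" />
</parameters>
```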

More transform syntax can be found at

About MSDeploy package,

Our updated NAnt build script:

  <target name="release" depends="">
    <delete dir="${dir.release}" />
    <foreach item="String" in="Release Acceptance Production" delim=" " property="env">
      <echo message="releasing for env: ${env}" />
      <call target="release.client" />
      <call target="release.server" />
    </foreach>
  </target>

  <target name="release.client" depends="rebuild.assemblyinfo">
    <delete dir="${dir.release}/Client" />
    <msbuild project="${dir.src}/MyProject.UI.Wpf/MyProject.UI.Wpf.csproj">
      <property name="Configuration" value="${env}" />
      <property name="OutDir" value="${dir.release}/${env}/Client/" />
    </msbuild>
  </target>

  <target name="release.server" depends="rebuild.assemblyinfo">
    <msbuild project="${dir.src}/MyProject.WcfServices/WcfServices.csproj" target="Package">
      <property name="Configuration" value="${env}" />
      <property name="OutDir" value="${dir.release}/tmp/" />
    </msbuild>
    <copy todir="${dir.release}/${env}/Server/" flatten="true">
      <fileset>
        <include name="${dir.release}/tmp/_PublishedWebsites/WcfServices_Package/*.*" />
      </fileset>
    </copy>
    <delete dir="${dir.release}/tmp/" />
  </target>

I personally think this is really cool; unfortunately config transform is only available for web projects for now. I actually already did some tweaking to a WPF project to make it config transformable; I will put it in another post.

Transform NCover xml output on TeamCity

There is no XML/XSLT transformation in TeamCity, or I still haven’t found how to do it yet. The NCover HTML report is displayed in an iframe of a tab page.

TeamCity recommends using NUnit2 for NAnt users; I tried adding a runtime redirect in the test config file, no go, and I don’t like this messy workaround either.

So I used the exec task for NUnit, passing service messages to TeamCity as shown in the NCover official doc for TeamCity integration.
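A sketch of that setup in NAnt: run the tests under NCover, then emit TeamCity’s importData service message so the coverage file is picked up. The tool paths, properties, and file names here are assumptions:

```xml
<!-- run NUnit under NCover; paths/properties are hypothetical -->
<exec program="${ncover.console}" failonerror="true">
  <arg value="${nunit.console}" />
  <arg value="MyProject.Tests.dll" />
  <arg value="//x" />
  <arg value="${dir.reports}/coverage.xml" />
</exec>
<!-- tell TeamCity where the NCover xml landed -->
<echo message="##teamcity[importData type='dotNetCoverage' tool='ncover' path='${dir.reports}/coverage.xml']" />
```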

One thing that hit me is that the output folder must be set to {}; other folders don’t bring the HTML result to the tab page. Instead TeamCity displays an auto-generated index.html with a one-line warning message:

This is an autogenerated index file (there was no index.html found in the generated report).
The coverage XML output is automatically included in the artifacts; watch out for the hidden/show link.
By default the .teamcity folder is hidden; click the show link and it should appear like this:
The idea of this is to allow users to open the XML in the NCover client, or do the XSLT transformation on their own?
In the XML report processing plugin, there is no report type related to NCover.

Enable svn proxy on TeamCity server

The problem is described in this post. We need to set up CI for an external svn repository on a TeamCity server sitting behind the firewall.

(NOTE: if setting the proxy is too hard for you, the easiest workaround is to create a local git clone, then share this folder, say, \\tcserver\gitrepo. In the TeamCity VCS fetch URL section, set it to this exact same share name, \\tcserver\gitrepo, and the build will be hooked up. Obviously you don’t have a real trigger set up; you need to manually git pull to local before you trigger the build. A little bit clunky, but it works.)


We have the TortoiseSVN client installed, so we can easily control the proxy through TortoiseSVN Settings -> Network, Enable Proxy Server. But TeamCity has its own svn library, SVNKit; how to configure the proxy then?

According to SVNKit,

By default SVNKit uses proxy settings from the servers configuration file that is located in the default SVN run-time configuration area.

And that stackoverflow post did point out the areas to look in:

  • C:\Users\AccountName\AppData\Roaming\Subversion\servers on Vista/7/2008 (domain account)
  • C:\Windows\ServiceProfiles\LocalService\AppData\Roaming\Subversion\servers on Vista/7/2008 (service account)
  • C:\Documents and Settings\AccountName\Application Data\Subversion\servers on XP/2003

It’s very confusing that TeamCity uses C:\Windows\system32\config\systemprofile\AppData\Roaming\Subversion as the default config folder. For some reason this folder is set to a system folder on our server, which caused us a lot of trouble.

Switch to a regular folder, e.g. c:/tmp, or even C:\Users\TCADMIN\AppData\Roaming\Subversion; the first time you click the Test Connection button, it will create three files in that folder: config, servers and README.

To set the proxy, we just need to open the servers file and change these settings:

http-proxy-host =
http-proxy-port = 80
http-proxy-username = blah
http-proxy-password = doubleblah

If VCS checkout mode is set to Automatically on Server, this is all we need. If it is set to Automatically on Agent, watch out! The config folder will be created on the agent machine, with another auto-generated servers file. We have to change that servers file too, and the proxy username is not one for the agent; it should be the one that works on the server!

Install cc.rb as an NT service

People say the easiest way to install cc.rb as an NT service is using Cygwin.

My problem is still the multiple ruby environments. The trick is to set the correct GEM_HOME.

Cygwin is a tool/simulated environment for using Windows like Linux; its package management is very handy.

It seems JRuby has problems getting the correct child status return code; even for a simple echo it reported $? as 256, while in a pure Cygwin env it’s zero.

I had to install another ruby environment in Cygwin, which is very easy there.

During this investigation period, rspec was upgraded from 1.3 to 2.2, then 2.3. I have no time/interest to change my spec code yet; fortunately I found the way to use gem to get the previous version:

gem install rspec -v 1.3

Installing json needs make and gcc; re-run Cygwin setup and get them as needed.

Try 'ruby cruise start' in the Cygwin command line first; I got lots of openpath path-too-long warnings. I don’t know why, or how to solve it. The build passed anyway, so ignore it for now.

Then add the NT service using cygrunsrv. Note the different path styles in the args, otherwise you get a can’t-load-build_start problem when cc.rb tries to start.

cygrunsrv --install CruiseControl.rb --path '/usr/bin/ruby.exe' --args '/cygdrive/d/app/cruisecontrol-1.4.0/cruise start' --chdir 'd:/app/cruisecontrol-1.4.0' --env GEM_HOME="/lib/ruby/gems" -u Mao --passwd youknow

Setting the user/password is to ensure the cruise_data path is not pointing to the /home/SYSTEM folder.

cc.rb, rspec, rake, gem… oh man

I just wanted to install CruiseControl.rb; then I realized I needed to fix rake in my project first. For some reason, rake didn’t pick up the gem path (while running a single test/spec class in NetBeans is OK). I guess the reason is that I have multiple versions of the ruby environment on the same PC.

With only one ruby environment, the only thing I need to do for Windows ruby dev is to add a RUBYOPT=rubygems environment variable.

This problem is complicated with multiple ruby envs. To simplify it, I manually set my preferred/primary ruby env in the PATH var, or move it to the first spot. This makes sure both rake and gem point to the same ruby I am working on.

While updating gems, I noticed there is a new version 2.2 of rspec. So I uninstalled the old 1.3, which was a disaster, because 1.3 to 2.2 is a breaking upgrade. spec is gone (renamed to rspec), but NetBeans still tries to find the old spec… I’m stuck.

Fortunately I still have another ruby env; after redoing the environment variables, rake is back to life.

The logic cc.rb uses to look for a rake task is weird: test -> default -> …. What I really want is spec, so I set task :default => [:spec]; but because the test task is still there, the default spec task couldn’t be picked up by cc.rb.

Good thing this is still configurable:

project.rake_task = 'spec'

Source control database scripts

I’m very jealous of those lucky developers who work in a source-controlled database environment.

It’s shameful to admit that we are still working on the same test/develop database all day, without an automatic nightly refresh. Developers play on the test db and override each other’s test data. When things on the test db really become ugly and out of date, one guy requests a refresh, then the same old story starts over.

The local-database development idea is too new to most of us; after months of effort, management allowed the new project to go with local-db-mode development. Which means the DBAs have to provide scripts, instead of a central shared database, to developers.

This is a huge move for our database team. I understand they don’t want to put/check in their scripts into the developers’ svn repository for now (too much new stuff at once; what, source control?). They feel comfortable just copying their work to the network drive. It can kind of work with continuous integration, but no source control means no history, no branches, no rollback…

I have to find a workaround for this until those scripts can be checked into the svn repository.

Here is our project build routine:

  1. db-init (run script from network drive, always latest)
  2. db-test
  3. code compile (from svn repository)
  4. code unit-test
  5. release (will move to artifact folder)

The first two steps only exist in the new project. My solution is to save every script to the local drive, then zip them into a db-release folder before step 3. In case developers want to rebuild a specific build, they can’t run steps 1 and 2 anymore, because the scripts might be newer than the code; they only need to unzip the zip file, run the saved db-init manually, then start from step 3.

An ugly solution, I know; we wouldn’t have to do this if we could get EVERYTHING, including db scripts, from the source control repository. Someday it might happen for us.

Here are some NAnt tricks I’ve used, including prefixing the scripts with a number so the names sort in order.

  <target name="exec_SQL_as_SA">
    <echo message='executing ${osql.exe} ${sa.osql.ConnectionString} -b -i $(unknown) -v DBDIR="${DB.DIR}"...' />
    <property name="count" value="${int::parse(count)+1}" />
    <property name="padded_count" value="${string::pad-left(count, 3, '0')}" />
    <copy file='$(unknown)' tofile='${sql.release.dir}/sa/[${padded_count}]-${path::get-file-name(filename)}' />
    <exec program="${osql.exe}" failonerror="true">
      <arg line='${sa.osql.ConnectionString} -b -i "$(unknown)" -v DBDIR="${DB.DIR}"' />
    </exec>
  </target>

  <target name="run.db.init">
    <zip zipfile="${dir.release}\">
      <fileset basedir="${dir.sql}\release">
        <include name="**/*" />
      </fileset>
    </zip>
  </target>

For deploy, it’s better to use a batch file instead of NAnt.

cd sa
rem if NOT Exist deploy.log (
@echo > deploy.log
rem )

for /f "delims=" %%a IN ('dir /b *.sql') do (
@echo executing %%~fa
@echo executing %%~fa >>deploy.log
%SQLCMD% -S %SERVER% %AS_SA% -b -i "%%~fa" -v DBDIR=%DBDIR% -o tmp.log
if exist tmp.log copy deploy.log+tmp.log >NUL
if exist tmp.log del tmp.log >NUL
)
move deploy.log .. >NUL

cd ../user

@echo > testresults.txt
for /f "delims=" %%a IN ('dir /b *.sql') do (
@echo testing %%~fa
@echo testing %%~fa >>testresults.txt
%SQLCMD% -S %SERVER% %AS_USER% -b -i "%%~fa" -v DBDIR=%DBDIR% -o tmp.log
copy testresults.txt+tmp.log >NUL
if exist tmp.log del tmp.log >NUL
)
move testresults.txt .. >NUL

cd ..


Embed NhProf into CruiseControl

There is a brief document on the NHProf website describing how to embed NhProf into a CI server; here are the practical problems I encountered and my solutions/workarounds.

The first problem is that once the NhProf command line starts up, it sits there until Ctrl+C. I need to make it run in background mode; in Windows terms, start it in a new command window. This is simple: adding a start right before the batch call fixed the problem.

The second problem is more difficult: NhProf can output to XML format or HTML format, but there is no XSLT file available to let users do the XML transformation.

I could create my own XSLT file, but looking at the HTML output, it’s a lot of work. I decided to simply embed the whole HTML chunk as a CDATA block into the CruiseControl merged result.


  <target name="run.test.with.nhprof" depends="launch.nhprof, run.test, shutdown.nhprof, wrap.nhprof.output" />

  <target name="launch.nhprof">
    <if test="${file::exists('c:\app\nhprof\start_nhprof.bat')}">
      <exec basedir="${}/nhprof/" workingdir="${dir.compile}" program="c:\app\nhprof\start_nhprof.bat" />
      <echo message="NHibernate Profiler started." />
    </if>
  </target>

  <target name="shutdown.nhprof">
    <if test="${file::exists('c:\app\nhprof\shutdown_nhprof.bat')}">
      <exec basedir="${}/nhprof/" workingdir="${dir.compile}" program="c:\app\nhprof\shutdown_nhprof.bat" />
    </if>
  </target>

  <target name="wrap.nhprof.output">
    <echo message="Waiting for NhProf generating output file..." />
    <sleep milliseconds="1000" />
    <!-- the original file attribute was lost; the name here is assumed -->
    <loadfile file="${dir.compile}/nhprof_output.html"
              property="nhprof.output" />
    <echo file="${dir.compile}/nhprof_output.xml" message="&lt;nhprof&gt;&lt;![CDATA[${nhprof.output}]]&gt;&lt;/nhprof&gt;" />
  </target>


  <xsl:template match="/">
    <xsl:value-of select="cruisecontrol//nhprof" disable-output-escaping="yes" />
  </xsl:template>

Here is the result:

Note: IE doesn’t support nested html tags (html/html); this screenshot is taken from Firefox.

Automated doc for dotnet app

It seems Sandcastle is the only option for auto-generating docs for a .NET app.

What needs to be installed:

The Sandcastle Help File Builder supports the command line very well; it can very easily be added to the CI build:

  <target name="build.sandcastle.doc">
    <property name="msbuild.exe" value="c:\windows\Microsoft.NET\Framework\v3.5\msbuild.exe" />
    <property name="sandcastle.proj" value="doc.shfbproj" />
    <exec program="${msbuild.exe}" failonerror="true">
      <arg line="${sandcastle.proj}" />
      <arg line="/p:Configuration=release" />
    </exec>
    <copy file="${dir.base}/help/Documentation.chm" todir="${dir.release}" />
  </target>

Reference: Recommended Tags for Documentation Comments


Namespace comments can be defined in a class, or in the Sandcastle project config file:
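In the .shfbproj, namespace comments live in the NamespaceSummaries property; a sketch, with the namespace name invented:

```xml
<NamespaceSummaries>
  <NamespaceSummaryItem name="MyCompany.MyProduct" isDocumented="True">
    Core application types.
  </NamespaceSummaryItem>
</NamespaceSummaries>
```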

Deploy jars to EAServer

Our CI server right now can deploy the component to the local/dev EAServer, and export it as a jar file to be deployed to acceptance or production later. We don’t want to use NAnt to do production deploys; basic DOS commands should be used instead.

The result we’re trying to achieve: a one-click deploy command on the target server.


  1. Use the “start /wait /min” syntax to call jagtool, so we wait for the deploy process.
  2. Add an exit command in jagtool.bat, to allow the separate window to close after it finishes.
  3. Merge the log results into a single log file; by default jagtool always overwrites the existing file rather than appending.
  4. Loop through the jar folder.

Here is the DOS command I ended up with.

@echo off
@echo #########################################################
@echo   You must logon to the target server to run this command
@echo #########################################################

if NOT Exist deploy.log (
  echo > deploy.log
)

for /f %%a IN ('dir /b *.jar') do (
  echo "Deploying %%a" >>deploy.log
  start /WAIT /MIN jagtool2 -local -logfile tmp.log deploy -type jagjar -jagjartype Package %%a
  copy deploy.log+tmp.log
)

del tmp.log

@echo on