Tweak config transform to support non-web.config files and non-Web projects

[Update] There is a VS plugin called SlowCheetah that extends the native config transform. To make it CI friendly, you need to copy the extension targets file from %LOCALAPPDATA%\Microsoft\MSBuild\SlowCheetah\v1\ to your source control folder, then pass this parameter to MSBuild:

<exec program="${tools.msbuild.console}"
      workingdir="${dir.project.ui}">
  <arg value="/p:Configuration=${env}" />
  <arg value="/p:OutDir=${dir.release}\Client.Admin\${env}\" />
  <arg value="/p:SlowCheetahTargets=${tools.SlowCheetah.Transforms.targets}" />
</exec>

Right now the config transform feature in Visual Studio is only available for Web projects. No luck for non-web projects, like WPF/Silverlight projects.

Even in Web projects, this transform is limited to web.config only.

Let’s tweak it to support more.

Step 1, grab this MSBuild extension file and put it into a path of your choosing; I saved it into my {solution.dir}/build folder, then checked it into source control.

Edit the project file:

Make your config file transformable, in my case log4net.config:
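In the .csproj, that roughly means tagging the item with metadata the custom targets file can key on. A minimal sketch, assuming a TransformOnBuild flag (use whatever metadata name your targets file actually expects):

<ItemGroup>
  <None Include="log4net.config">
    <!-- hypothetical flag consumed by the custom targets file -->
    <TransformOnBuild>true</TransformOnBuild>
  </None>
</ItemGroup>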

Add this transformfiles.targets file.
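Wiring it in is just an Import near the bottom of the .csproj, next to the other Import elements; roughly like this, with the relative path pointing at wherever you checked the file in:

<Import Project="..\..\build\transformfiles.targets" />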

Step 2, create a copy of the transform file for each configuration and set the transform rules in it; this one is a simple replace:
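A sketch of what a log4net.Release.config next to the base file could look like; the appender name and log path are invented, and it assumes the base log4net.config has a <log4net> root:

<?xml version="1.0"?>
<log4net xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!-- made-up appender: replace it wholesale with the Release settings -->
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender"
            xdt:Transform="Replace" xdt:Locator="Match(name)">
    <file value="D:\logs\myapp.log" />
    <appendToFile value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
</log4net>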

Build and check the output folder. You might want to set the "Copy to Output Directory" property to "Do not copy", because the config transform is taking over this process.

Step 3, make those files look nested in Visual Studio: open the project file again and add this:
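The usual trick is DependentUpon metadata on each transform file, something like this (file names here match the Release/Acceptance configurations used elsewhere in this post):

<ItemGroup>
  <None Include="log4net.Release.config">
    <DependentUpon>log4net.config</DependentUpon>
  </None>
  <None Include="log4net.Acceptance.config">
    <DependentUpon>log4net.config</DependentUpon>
  </None>
</ItemGroup>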

WPF/Silverlight projects can use the same tweak.

Extra:

A fancy transform, remove-all and insert, really powerful:
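A sketch in that spirit, with invented appSettings keys: wipe every entry coming from the source file, then insert a fresh set of values.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- remove every existing <add> element, then insert environment-specific values -->
    <add xdt:Transform="RemoveAll" />
    <add key="Category" value="Communications" xdt:Transform="Insert" />
    <add key="Group" value="General" xdt:Transform="Insert" />
  </appSettings>
</configuration>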



Unfortunately, this kind of tweak only works on build (TRANSFORM_ON_BUILD); Deploy Package and Publish won’t trigger the transform in the current version of VS2010.


The evolution of the deploy process, from NAnt tokens to config transform in MSBuild

The deploy process used to be very simple: copy/xcopy + a few manual modifications for connection strings and other stuff.

But too many manual operations in a deploy are always problematic, due to tired eyes, fingers and people, etc.

Using NAnt-tokenized config files looked very elegant back then; here is a typical layout of our project.

cfg/
    app.config.template
    local.properties.xml
    acceptance.properties.xml
    production.properties.xml

Tokens in app.config.template are replaced by the values defined in each environment's properties.xml file.
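A rough sketch of how that replacement could be wired up in NAnt (property and token names are illustrative, not our real ones): include the environment's properties file, then copy the template through a replacetokens filter.

<!-- at project level: pull in the property values for the chosen environment -->
<include buildfile="cfg/${env}.properties.xml" />

<target name="expand.config">
  <copy file="cfg/app.config.template" tofile="${dir.release}/app.config" overwrite="true">
    <filterchain>
      <replacetokens>
        <!-- @DB.CONNECTIONSTRING@ in the template becomes the value from the properties file -->
        <token key="DB.CONNECTIONSTRING" value="${db.connectionstring}" />
      </replacetokens>
    </filterchain>
  </copy>
</target>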

After looking at Scott Hanselman’s post about MSDeploy and config transforms in the deploy process, this built-in config XML transform really impressed me; time to give it a try.


Apply the config transform.

For us, the transform in acceptance/production just changes the connection string and sets includeExceptionDetailInFaults to false:


<?xml version="1.0"?>
<!-- For more information on using web.config transformation visit http://go.microsoft.com/fwlink/?LinkId=125889 -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MyDB"
      connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
      xdt:Transform="Replace"/>
  </connectionStrings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
  <system.serviceModel>
    <behaviors>
      <serviceBehaviors>
        <behavior name="DefaultServicesBehavior">
          <serviceDebug includeExceptionDetailInFaults="false" xdt:Transform="Replace"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Do a publish of this WCF service project, choose File System as the publish method, and the web.config in the output folder is transformed.

Building the package can be done through the GUI, the command line, or NAnt.


  <target name="release.server" depends="">
    <msbuild project="${dir.src}/MyWcfServices/WcfServices.csproj" target="package" >
      <property name="Configuration" value="Release" />
      <property name="OutDir" value="${dir.release}/Server/" />
    </msbuild>
  </target>

The generated package:

This package can be deployed on the target server by cmd, as described in the readme.txt, or through the IIS admin console.

Note, those deploy options won’t appear until MSDeploy is installed on the IIS server, via http://www.iis.net/download/WebDeploy.

The Import Application wizard is very friendly, in the usual MS way.


Here is the most beautiful step: the parameters for this deploy are automatically read from the WcfServices.SetParameters.xml file located beside the package; connection string, app name, blah blah:

More transform syntax can be found at http://msdn.microsoft.com/en-us/library/dd465326(VS.100).aspx

About MSDeploy package, http://msdn.microsoft.com/en-us/library/dd547591.aspx.

Our updated NAnt build script:


 <target name="release" depends="">
    <delete dir="${dir.release}" />
    <foreach item="String" in="Release Acceptance Production" delim=" " property="env">
      <echo message="releasing for env: ${env}" />
      <call target="release.client" />
      <call target="release.server" />
    </foreach>
  </target>

  <target name="release.client" depends="rebuild.assemblyinfo">
    <delete dir="${dir.release}/Client" />
    <msbuild project="${dir.src}/MyProject.UI.Wpf/MyProject.UI.Wpf.csproj"  >
      <property name="Configuration" value="${env}" />
      <property name="OutDir" value="${dir.release}/${env}/Client/" />
    </msbuild>
  </target>

  <target name="release.server" depends="rebuild.assemblyinfo">
    <msbuild project="${dir.src}/MyProject.WcfServices/WcfServices.csproj" target="Package">
      <property name="Configuration" value="${env}" />
      <property name="OutDir" value="${dir.release}/tmp/" />
    </msbuild>
    <copy todir="${dir.release}/${env}/Server/" flatten="true">
      <fileset>
        <include name="${dir.release}/tmp/_PublishedWebsites/WcfServices_Package/*.*" />
      </fileset>
    </copy>
    <delete dir="${dir.release}/tmp/" />
  </target>

I personally think this is really cool; unfortunately config transform is only available for Web projects for now. I have actually already done some tweaking to a WPF project to make it config-transformable, which I will put in another post.

Transform NCover xml output on TeamCity

Compared to CC.NET, there is no XML/XSLT transformation in TeamCity, or I still haven’t found how to do it yet. The NCover HTML report is displayed in an iframe on a tab page.

TeamCity recommends using NUnit2 for NAnt users. I tried adding a runtime redirect in the test config file, no go, and I don’t like that messy workaround either.

So I used the exec task for NUnit instead, passing service messages to TeamCity as shown in the official NCover doc for TeamCity integration.
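Roughly what that looks like in the NAnt script; the ##teamcity importData message is from the TeamCity docs, while the program and property names here are placeholders for my setup.

<!-- run the unit tests under NCover (exact arguments depend on the NCover version) -->
<exec program="${ncover.console}" commandline="${ncover.args}" failonerror="true" />
<!-- tell TeamCity where the coverage xml ended up -->
<echo message="##teamcity[importData type='dotNetCoverage' tool='ncover' path='${teamcity.report.path}/coverage.xml']" />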

One thing that hit me is that the output folder must be set to {teamcity.report.path}; any other folder doesn’t bring the HTML result to the tab page. Instead TeamCity displays an auto-generated index.html with a one-line warning message:

This is an autogenerated index file (there was no index.html found in the generated report).

The coverage XML output is automatically included in the artifacts; watch out for the hide/show link. By default the .teamcity folder is hidden; click the show link and it should appear like this:

The idea, I guess, is to let users open the XML in the NCover client, or do the XSLT transformation on their own? In the XML report processing plugin, there is no report type related to NCover.

Enable svn proxy on TeamCity server

Problem described in this post: we need to set up CI for an external svn repository on a TeamCity server sitting behind the firewall.

(NOTE: if setting up the proxy is too hard for you, the easiest workaround is to create a local git clone, then share that folder, say \\tcserver\gitrepo; in the TeamCity VCS fetch URL section, set it to this exact same share name, \\tcserver\gitrepo, and the build will be hooked up. Obviously you don’t have a real trigger set up; you need to manually git pull to the local clone before you trigger the build. A little bit complicated, but it works.)

On CC.NET we have the TortoiseSVN client installed, so we can easily control the proxy through TortoiseSVN Settings -> Network -> Enable Proxy Server. But TeamCity has its own svn library, SVNKit; how do we configure the proxy then?

According to SVNKit,

By default SVNKit uses proxy settings from the servers configuration file that is located in the default SVN run-time configuration area.

And that Stack Overflow post did point out where to look:

  • C:\Users\AccountName\AppData\Roaming\Subversion\servers on Vista/7/2008 (domain account)
  • C:\Windows\ServiceProfiles\LocalService\AppData\Roaming\Subversion\servers on Vista/7/2008 (service account)
  • C:\Documents and Settings\AccountName\Application Data\Subversion\servers on XP/2003

It’s very confusing that TeamCity uses C:\Windows\system32\config\systemprofile\AppData\Roaming\Subversion as the default config folder. For some reason this folder was flagged as a system folder on our server, which caused us a lot of trouble.

Switch to a regular folder, e.g. c:/tmp, or even C:\Users\TCADMIN\AppData\Roaming\Subversion; the first time you click the Test Connection button, it will create three files in that folder: config, servers and README.

To set the proxy, we just need to open the servers file and change these settings (typically under the [global] section):

http-proxy-host = proxy1.some-domain-name.com
http-proxy-port = 80
http-proxy-username = blah
http-proxy-password = doubleblah

If the VCS checkout mode is set to Automatically on Server, this is all we need. If the VCS checkout mode is set to Automatically on Agent, watch out! The config folder will be created on the agent machine, with another auto-generated servers file. We have to change that servers file again, and the proxy username there is not an agent-specific one; it should be the one that works on the Server!

Install cc.rb as NT Service

People say the easiest way to install cc.rb as an NT Service is to use Cygwin.

My problem is still the multiple Ruby environments. The trick is to set the correct GEM_HOME.

Cygwin is the tool/simulated environment for using Windows like Linux; its package management is very handy.

It seems JRuby has problems getting the correct child status return code: even for a simple echo it reported $? as 256, while in a pure Cygwin environment it is zero.

I had to install another Ruby environment inside Cygwin, which is actually quite joyful with Cygwin.

During this investigation period, RSpec was upgraded from 1.3 to 2.2, then 2.3. I have no time/interest to change my spec code yet; fortunately I found the way to use gem to get the previous version:

gem install rspec -v 1.3

Installing the json gem needs make and gcc; re-run Cygwin setup and grab them as needed.

Try 'ruby cruise start' from the Cygwin command line first. I got lots of 'openpath path too long' warnings; I don't know why, or how to solve it. The build passed anyway, so ignore it for now.

Then add the NT Service using cygrunsrv. Note the different path styles in the args, otherwise you get a "can't load build_start" problem when cc.rb tries to start.

cygrunsrv --install CruiseControl.rb --path '/usr/bin/ruby.exe' --args '/cygdrive/d/app/cruisecontrol-1.4.0/cruise start' --chdir 'd:/app/cruisecontrol-1.4.0' --env GEM_HOME="/lib/ruby/gems" -u Mao --passwd youknow

Setting the user/password is to ensure the cruise_data path is not pointing to the /home/SYSTEM folder.

cc.rb, rspec, rake, gem… oh man

I just wanted to install CruiseControl.rb; then I realized I needed to fix rake in my project first. For some reason, rake didn’t pick up the gem path (while running a single test/spec class in NetBeans is OK). I guess the reason is that I have multiple Ruby environments on the same PC.

With only one Ruby environment, the only thing I need to do for Windows Ruby development is add a "RUBYOPT=rubygems" environment variable.

This problem gets complicated with multiple Ruby environments. To simplify it, I manually set my preferred/primary Ruby environment in the PATH variable, or move it to the first spot. This makes sure both rake and gem point to the same Ruby I am working on.

While updating gems, I noticed there is a new version 2.2 of RSpec. So I uninstalled the old 1.3, which was a disaster, because 1.3 to 2.2 is a breaking upgrade: spec is gone (renamed to rspec), but NetBeans still tries to find the old spec… I’m stuck.

Fortunately I still have another Ruby environment; redo the environment variables and rake is back to life.

The logic cc.rb uses to look for a rake task is weird: test -> default -> …. What I really want is spec, so I set task :default => [:spec], but because the test task is still there, the default spec task couldn’t be picked up by cc.rb.

Good thing this is still configurable:

project.rake_task = 'spec'

Source control database scripts

I’m very jealous of those lucky developers who work in a source-controlled database environment.

It’s shameful to admit we are still working on the same test/develop database all day long, without an automatic nightly refresh. Developers play on the test db and override each other’s test data. When things on the test db really become ugly and out of date, one guy requests a refresh, then the same old story starts over.

The local database development idea is too new to most of us; after months of effort, management allowed the new project to go with local-db-mode development. Which means the DBAs have to provide scripts instead of a central shared database to developers.

This is a huge move for our database team. I understand they don’t want to put/check in their scripts into the developers’ svn repository for now (too much new stuff coming at once... what, source control?). They feel comfortable just copying their work to the network drive. It can kind of work with continuous integration, but no source control means no history, no branches, no rollback…

I have to find a workaround until those scripts can be checked into the svn repository.

Here is our project build routine (a NAnt sketch of the wiring follows the list):

  1. db-init (run script from network drive, always latest)
  2. db-test
  3. code compile (from svn repository)
  4. code unit-test
  5. release (will move to artifact folder)
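As mentioned above, a sketch of how this routine might be chained in NAnt; the target names are illustrative, and each step would be its own target:

<!-- hypothetical top-level target; depends runs the routine steps in order -->
<target name="ci" depends="db-init, db-test, compile, unit-test, release" />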

The first two steps only exist in the new project. My solution is to save every script to the local drive, then zip them into the db-release folder before step 3. In case developers want to rebuild a specific build, they cannot run steps 1 and 2 anymore, because the latest scripts might be newer than that code; they only need to unzip the zip file and run the saved db-init scripts manually, then start from step 3.

An ugly solution, I know. We wouldn’t have to do this if we could get EVERYTHING, including the db scripts, from the source control repository. Someday it might happen for us.

Here are some NAnt tricks I’ve used, including prefixing each script with a number so the file names keep the execution order.

  <target name="exec_SQL_as_SA">
    <echo message='executing ${osql.exe} ${sa.osql.ConnectionString} -b -i ${filename} -v DBDIR="${DB.DIR}"...' />
    <property name="count" value="${int::parse(count)+1}" />
    <property name="padded_count" value="${string::pad-left(count, 3, '0')}" />
    <copy file='${filename}' tofile='${sql.release.dir}/sa/[${padded_count}]-${path::get-file-name(filename)}' />

    <exec program="${osql.exe}" failonerror="true">
      <arg line='${sa.osql.ConnectionString} -b -i "${filename}" -v DBDIR="${DB.DIR}"' />
    </exec>
  </target>

  <target name="run.db.init">
    ....
      <zip zipfile="${dir.release}\database_scripts.zip">
        <fileset basedir="${dir.sql}\release">
          <include name="**/*" />
        </fileset>
      </zip>
    </if>
  </target>

For the deploy itself, it’s better to use a batch file instead of NAnt.

cd sa
rem if NOT Exist deploy.log (
@echo > deploy.log
rem )

for /f "delims=" %%a IN (‘dir /b *.sql’) do (
@echo executing %%~fa
@echo executing %%~fa >tmp.log
%SQLCMD% -S %SERVER% %AS_SA% -b -i "%%~fa" -v DBDIR=%DBDIR% -o tmp.log
if exist tmp.log copy deploy.log+tmp.log >NUL
)
if exist tmp.log del tmp.log >NUL
move deploy.log .. >NUL

cd ../user

@echo > testresults.txt
for /f "delims=" %%a IN (‘dir /b *.sql’) do (
@echo testing %%~fa
@echo testing %%~fa >temp.log
%SQLCMD% -S %SERVER% %AS_USER% -b -i "%%~fa" -v DBDIR=%DBDIR% -o tmp.log
copy testresults.txt+tmp.log >NUL
)
if exist tmp.log del tmp.log >NUL
move testresults.txt .. >NUL

cd ..

ref:http://www.computerhope.com/forhlp.htm