Wednesday, October 13, 2010

Agile Retrospective Example

We have a one-week iteration that starts on Wednesday.  Every Wednesday morning we do our retrospective and our planning meeting back to back.  This is the agenda that we use for our retrospectives, and it will hopefully serve as an idea generator for yours.

The Scrum Master is responsible for making everyone feel safe and for organizing and facilitating the meeting.  Safety and preparation are key and will result in a smoother retrospective.

In your retrospective, make sure everyone feels they are in a safe environment.  Without safety, members have a tendency to not speak up.  One way to foster a safe environment is by having everyone sit in chairs in a circle with no tables.  This represents no barriers between you and your teammates.  If individuals not on your team are attending your retrospective and do not contribute to a safe environment, ask them not to attend and let them know you'll review the retrospective with them later.  This keeps communication open with those individuals while still allowing topics to be discussed openly.

Before the meeting begins, have post-it notes handed out, extra pens available, and any other needed materials ready for the taking, in addition to the circles for "The Soup" drawn on a whiteboard (more explanation to come).  The more prep work that is done beforehand, the smoother the meeting will go.

At a high level, here's the agenda for our retrospectives:

Agenda
1.  Set the stage
2.  Gather Data
3.  Generate Insights (The Soup)
4.  Decide What To Do (Action Items)
5.  Close the Retrospective (Thanks)

Set the Stage

Welcome the team to the retrospective and go over the agenda items.  During this stage, we also review the swim lane board to make sure it is accurate.  You'll undoubtedly have someone say, "Oh, that's done", so a quick look at the board is useful. 

Gather Data

First, we want to gather data on how the iteration went.  We want each team member's feel, from both a technical and a process perspective, on how the iteration went.  What problems arose, how can we address them, and, most importantly, what measurable action items can we take to improve the situation?  We do two activities: the Happy/Sad activity and the Good/Confusing/Painful activity, as described below:

Happy/Sad Activity
Purpose:  Get a general feel for the team's morale during the iteration.  We do this first so that the answers reflect each member's own feelings about how the iteration went, before other team members can influence them.
Material:  Whiteboard, pen
Time:  5 - 10 minutes
Process:  The Scrum Master will ask the team, "Were you happy or sad with how this iteration went?"  Each member will respond and the score will be tallied.  A team member should not defend their answer; the reason(s) should come out in the second activity.
 
Good/Confusing/Painful Activity 
Purpose:  Gather data on what went well, what didn't, and what was confusing
Materials:  Whiteboard, 3 different colored post it notes, Pens
Time:  30-45 minutes

Process: 
Pass out a few pieces of each post-it note color and a pen to each member of the team.  For 3 minutes (use a timer), have the team write down memorable events from the iteration with the following criteria:
Green Color -> Item that went well for this iteration
Blue Color -> Item that was confusing
Red Color -> Item that was painful or didn't go well

At the end of the 3 minutes, collect all of the post-it notes and organize them into categories on a whiteboard.  The number of categories will vary, but keep it to no more than 5.  Once the post-it notes are categorized, give each team member two votes for which categories they would like to discuss.  They can put both votes on the same category or split them.  Once the votes are tallied, discuss the top 2 categories.  Place a 10-15 minute time box on each category for team members to discuss, more in depth, the cards they put on the wall and how to improve the situation.  Don't feel as if you have to draw it out to the maximum time; if a category's discussion has ended, go on to the next category.  If you have time left over, ask the team if they want to discuss the third most popular category, and time-box it to the remaining time so the meeting still ends on schedule.

Generate Insights

Take the cards from the discussed categories and use them for The Soup Experiment.  This will give the team a different perspective on the cards written and will help clarify who's responsible for what.  If there are a lot of "painful" cards within the team's control, then this clarifies items that the team needs to work on.  At this point, come up with action items (assign them to an individual; if you assign them to the team, they'll never get done!) for how the team can improve the situation, or experiments that the team can try for an iteration.

Close the Retrospective

Thank the team for their time and move onto the planning meeting.

Tips/Tricks from my experience:

  • Don't be overwhelmed!  The first few retrospectives will take longer but will become shorter over time.
  • Let this be an avenue where team members can voice their concerns and solutions but refrain from talking exclusively on the negative items.  Talk about what went well and how to maintain that!
  • Action Items:  these are important and determine how effective your retrospectives become.  Come up with measurable action items, assign them to a person, and, most importantly, follow up on their status at the next retrospective.  If items are under the team's control, do everything possible to complete those action items sooner rather than later.
  • Limit your action items to between 3 - 5 items; take the highest-priority ones.  It's easier to remember 3 - 5 items than 20, and they are more likely to get done.
  • Switch up your retrospective activities!  This will keep the meetings fresh and keep people out of a rut.  Check out the book "Agile Retrospectives: Making Good Teams Great".
  • Remember to time box activities as much as possible to respect team members' time.

Tuesday, October 5, 2010

Running Coded UI Tests From CruiseControl.NET

This is the last post of a 4 part series on Coded UI Tests.  We will wrap up our example with how to wire up our Coded UI Tests to CruiseControl.NET (our CI server).  Here's where we've come:

Part 1:  How to Create a Coded UI Test
Part 2:  Code Organization:  Test Lists, Test Categories, Oh my! – Coded UI Test Organization
Part 3:  Running Coded UI Tests From Command Line

This blog will explain how to run Coded UI Tests from CruiseControl.NET. 

Overview Steps

  1. Create a batch file to call the GUI Tests
  2. CruiseControl.NET Configuration
    1. Install “MSTest Reports” package
    2. Modify the dashboard.config file
    3. Modify the ccnet.config file and add a project
    4. Modify the MSTestReport2008.xsl file
  3. Run the project in CCNET
  4. Tips and Tricks
Create a Batch File to Call the Coded UI Tests

First, create a batch file that will run your Coded UI Tests.  I've blogged about how you can do that here <LINK>.  Note that if you choose to run multiple tests from one command line, the order in which they are run is not guaranteed.  I've worked around this by calling 3 commands, in order, with 3 different result files.  If you don't save them to different files, the second command will error out because the file already exists.  Since we have 3 different result files, we'll use CC.NET to merge them all into one report.  Here's our sample bat file:

 

rd /S /Q "C:\build\reports"
mkdir "C:\build\reports"
cd C:\Build\Working\
"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe" /testmetadata:"C:\Build\Working\CalcCodedUITest.vsmdi" /testlist:List_LoadCalc /resultsfile:"C:\Build\Reports\List_LoadCalcResults.trx"
"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe" /testmetadata:"C:\Build\Working\CalcCodedUITest.vsmdi" /testlist:PerfCalcResults /resultsfile:"C:\Build\Reports\PerfCalcResults.trx"
"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe" /testmetadata:"C:\Build\Working\CalcCodedUITest.vsmdi" /testlist:ShutDownCalcResults /resultsfile:"C:\Build\Reports\ShutDownCalcResults.trx"



The interesting things in this script are that we delete the folder the reports are created in (C:\build\reports), since the previous results have already been merged by CCNet, change to the directory of the solution (C:\build\working), and then run 3 separate test lists.

 

CruiseControl.NET Configuration



Install the MSTest Reports Package


Assuming you have CC.NET installed, go into CCNet's Administer Dashboard and install the "MSTest Reports" package.  This package will enable you to view the test results through CC.NET.



Modify the dashboard.config File


In order to view the results, you need to tweak the dashboard.config file (default location: C:\Program Files\CruiseControl.NET\webdashboard) first and change the xsl file from MsTestSummary.xsl to MsTestSummary2008.xsl in the <buildPlugins> tag so it looks like the following:

<buildPlugins>
      <buildReportBuildPlugin>
        <xslFileNames>
          <xslFile>xsl\header.xsl</xslFile>
          <xslFile>xsl\modifications.xsl</xslFile>
          <xslFile>xsl\MsTestSummary2008.xsl</xslFile>
        </xslFileNames>
      </buildReportBuildPlugin>
      <buildLogBuildPlugin />
      <xslReportBuildPlugin description="MSTest Report" actionName="MSTestBuildReport" xslFileName="xsl\MsTestReport2008.xsl"></xslReportBuildPlugin>
</buildPlugins>

Do an IISReset; it is necessary for CC.NET to pick up the changes.

Modify the CCNet.config file


Next, you'll need to add a project to the ccnet.config file (default location:  C:\Program Files\CruiseControl.NET\server\ccnet.config) with an "exec" task that calls your batch file to run the Coded UI tests.  More examples can be found in the CruiseControl.NET documentation.

First, add the queue name like so:



<queue name="alpha" />



Then add the project node with a name, etc, like so:



<project queue="alpha" queuePriority="1">
        <name>Coded UI Command Line</name>
        <tasks>
            <exec executable="C:\CodedUIFiles\CalcCodedUITests.bat" />
        </tasks>
        <publishers>
            <merge>
                <files>
                    <file>c:\Build\Reports\List_LoadCalcResults.trx</file>
                    <file>c:\Build\Reports\PerfCalcResults.trx</file>
                    <file>c:\Build\Reports\ShutDownCalcResults.trx</file>
                </files>
            </merge>
        </publishers>
    </project>




The interesting piece is that we are merging 3 ".trx" files.  We needed to ensure the order in which the test lists are run, so in the bat file we have 3 command lines executing tests, one after the other, writing to different results files.  If we had set them to the same results file, the system would error out stating that the file already exists.  These file names correspond to the files that are created in the bat file when the tests run and are deleted before every run.  Since the files are merged into the CCNet report file, there's no need to keep them twice.



Modify the MSTestReport2008.xsl File


A side effect of running 3 separate command-line test runs is that the report will only show the number of tests in the last group of tests run.  In order to sum up all the tests run, we need to modify MsTestReport2008.xsl (C:\Program Files\CruiseControl.NET\webdashboard\xsl\MSTestReport2008.xsl).  Modify the file and change the count variables to use sum like so:




    <xsl:variable name="pass_count" select="sum(/cruisecontrol/build/*[local-name()='TestRun']/*[local-name()='ResultSummary']/*[local-name()='Counters']/@passed)"/>
    <xsl:variable name="inconclusive_count" select="sum(/cruisecontrol/build/*[local-name()='TestRun']/*[local-name()='ResultSummary']/*[local-name()='Counters']/@inconclusive)"/>
    <xsl:variable name="failed_count" select="sum(/cruisecontrol/build/*[local-name()='TestRun']/*[local-name()='ResultSummary']/*[local-name()='Counters']/@failed)"/>
    <xsl:variable name="total_count" select="sum(/cruisecontrol/build/*[local-name()='TestRun']/*[local-name()='ResultSummary']/*[local-name()='Counters']/@total)"/>
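With those variables defined, the aggregated counts can be displayed with xsl:value-of.  Treat this as a hedged sketch; the actual summary markup in your version of MsTestReport2008.xsl may differ, and the surrounding table cell shown here is an assumption:

<!-- hypothetical display snippet; adapt to the existing summary markup in the stylesheet -->
<td>
  <xsl:value-of select="$total_count"/> total,
  <xsl:value-of select="$pass_count"/> passed,
  <xsl:value-of select="$failed_count"/> failed,
  <xsl:value-of select="$inconclusive_count"/> inconclusive
</td>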



Save the file and do an IISReset for CCNet to pick up the changes.



Run the Project in CCNET


Go to CCNet and click on "Force" to force the Coded UI tests to run.
Once the tests are finished, click on the name of the project -> "Latest Build" on the left hand side -> "MSTest Report"



Tips and Tricks


Here are some tips and tricks in running Coded UI tests from a WinForms application:



  1. Due to security issues, CCNet cannot run as a service and run Coded UI Tests.  Turn the service off and run the CCNet application as an Administrator.

  2. The Coded UI Tests can be kicked off from any machine that can access the CCNet web page.  However, the desktop of the computer that CCNet is installed on will need to be maximized and unlocked (you can operate this from an RDC session) in order to run tests.  If the CCNet computer's desktop is locked, minimized, or set to a different screen resolution than when the tests were recorded, the tests will fail and throw a "can't interact with desktop" error.

  3. Sometimes you can get a false positive: CCNet will report the last build status as "Success" even though some tests failed.  After the Coded UI tests are run, make sure you view the MSTest Report to see which tests passed or failed.

  4. Ideally, set up CCNET to pull down the code from your source control, compile it, then run the Coded UI tests from the bat file.

  5. Structure your bat file to run test lists, not individual tests.  The advantage is that when new tests are added to a list, you won't need to modify the bat file; the new tests will just be included.  Organizing in test lists also allows for horizontal scalability.  For example, say you have a large application that runs on multiple back ends; you could set up another CI server on another VM with a new bat file for a specific back end's tests, and let that VM run tests at the same time.

  6. Coded UI Tests are slow by nature, so over time you'll want to set up a schedule trigger in CCNet to kick them off at a certain time, for example as an overnight process.  I would also recommend a dedicated computer or VM to perform these tests.
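As a rough sketch of the schedule-trigger idea from tip 6, the project node from earlier could gain a triggers block like the following.  The trigger name and time are assumptions; check the CruiseControl.NET documentation for the trigger options available in your version:

<project queue="alpha" queuePriority="1">
        <name>Coded UI Command Line</name>
        <triggers>
            <!-- assumed example: force the Coded UI run nightly at 11:30 PM -->
            <scheduleTrigger time="23:30" buildCondition="ForceBuild" name="Nightly" />
        </triggers>
        ...
    </project>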


Hope this has helped you down the path of automated Coded UI tests.

Friday, October 1, 2010

How to: Run Coded UI Tests from a Command Line

This blog post is the third in a series on Coded UI tests and assumes you've looked at the previous two posts:

Part 1: Create a Simple Visual Studio 2010 Coded UI Test

Part 2: Coded UI Tests: Test Lists, Test Categories, Oh my!

Now that we have a test suite and we have organized our tests, how can we run them from a command line? Once we have tests running from a command line, we can wire them up to a continuous integration (CI) server like CruiseControl.NET (which will be a later post) and have them run automatically.

Running Coded UI Tests from the Command Line Based on a Test List

1. Open a Visual Studio 2010 Command Prompt (Start->Programs->Microsoft Visual Studio 2010->Visual Studio Tools->Visual Studio Command Prompt (2010))

2. Point to the directory of your Solution file. In our case it will be C:\Projects\GUIPOC



3. Enter the following command to run all tests in the list:
Mstest /testmetadata:CalcCodedUITest.vsmdi /testlist:LoadCalcTestList

4. The test list will load and execute like the following:




Other popular MSTest commands to run tests:

These examples assume that steps 1 and 2 above were followed.

Run Test by Test Name
  • Mstest /testcontainer:CalcCodedUITest\bin\debug\CalcCodedUITest.dll /test:LoadCalculator /unique
Run Test by Category
  • Mstest /testcontainer:CalcCodedUITest\bin\debug\CalcCodedUITest.dll /category:CalcCategory
Run Test by Category and Publish Results to a Results File
  • Mstest /testcontainer:CalcCodedUITest\bin\debug\CalcCodedUITest.dll /category:CalcCategory /resultsfile:”C:\LogFile.trx”
Tips and Tricks
  • The “.vsmdi” file stores the links to the tests. It is associated with the solution, so you have the ability to store tests from multiple projects in the same solution under one test list.
  • If you declare multiple tests to run in one command line, the order in which they run is not guaranteed. If you need to ensure the order, do it by having multiple command lines executing tests, one after the other.
  • If a results file already exists, another test run using the same results file name will error out and complain that the file already exists.
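Putting the last two tips together, here is a minimal bat file sketch that guarantees order and avoids the results-file collision.  The test list and results file names are hypothetical; substitute your own:

rem assumed example: run two test lists in a fixed order,
rem each writing to its own results file to avoid the "file already exists" error
mstest /testmetadata:CalcCodedUITest.vsmdi /testlist:FirstTestList /resultsfile:"C:\Reports\First.trx"
mstest /testmetadata:CalcCodedUITest.vsmdi /testlist:SecondTestList /resultsfile:"C:\Reports\Second.trx"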
 
Additional Resources: