Last update: 20 December 09

SvnCrawler




Download

Download the latest SvnCrawler package from SourceForge and extract it somewhere, e.g. into ~/my_svn_crawler/
Alternatively, you can get the latest version through SVN by typing:
mkdir ~/my_svn_crawler
svn co https://svncrawler.svn.sourceforge.net/svnroot/svncrawler/trunk ~/my_svn_crawler/
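
If you downloaded the package instead, unpacking it could look like this (the archive name is hypothetical, use the file you actually downloaded):
mkdir -p ~/my_svn_crawler
tar -xzf svncrawler-x.y.tar.gz -C ~/my_svn_crawler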

Plug-ins setup

Use the command ./svn_crawler.sh list to get information on the currently installed plug-ins. If a plug-in is not relevant to you, delete it to speed up the analysis.
To "uninstall" a plug-in (e.g. codeswarm), remove its whole directory (e.g. rm -rf plugin/codeswarm).

Generate repository reports

Once you have selected your plug-ins, you can start the analysis with:

./svn_crawler.sh gen SVN_URL

One or more options may follow:
-v Enable verbosity
-f FNUM Start the analysis from revision FNUM instead of 1
-t TNUM Stop the analysis at revision TNUM instead of proceeding until the latest
-o DIR Generate results in DIR/ instead of the default (./Results).
NOTE: -o DIR must also be specified when updating those reports!
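
For example, to analyze revisions 100 to 500 of a locally stored repository, with verbose output and a custom output directory (the paths and revision numbers are illustrative):
./svn_crawler.sh gen file:///home/user/repository_copy/my_project -v -f 100 -t 500 -o ~/my_reports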

Accessing the repository through svn+ssh:// is possible but not recommended!
Rather than
./svn_crawler.sh gen svn+ssh://user@server/path/to/my_project
it's a lot faster to do:
mkdir ~/repository_copy
scp -r user@server:/path/to/my_project ~/repository_copy
./svn_crawler.sh gen file:///home/user/repository_copy/my_project

https:// is not currently supported; see below for a workaround.

Update repository reports

If new revisions are added to the repository (or you stopped early with the -t option), it is possible to update the generated reports without redoing the whole analysis from the start. Use the command:

./svn_crawler.sh up SVN_URL

One or more options may follow:
-v Enable verbosity
-t TNUM Stop the analysis at revision TNUM instead of proceeding until the latest
-o DIR Required if and only if you used -o to generate the reports! DIR must match the directory used previously
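
For example, to bring previously generated reports up to date (paths are illustrative; the -o value must match the one used with gen):
./svn_crawler.sh up file:///home/user/repository_copy/my_project -o ~/my_reports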

Downloading a repository

SvnCrawler is much faster if the repository is stored locally. Cloning the repository is very simple:

svnadmin create $HOME/cloned_repository
touch $HOME/cloned_repository/hooks/pre-revprop-change

Edit the file with your favorite editor and add the following line:

#!/usr/bin/env bash
Save and quit
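
Depending on your system, the hook script may also need to be executable before svnsync can copy revision properties:
chmod +x $HOME/cloned_repository/hooks/pre-revprop-change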

svnsync init file://$HOME/cloned_repository https://foo.bar.com/svn/repository
svnsync sync file://$HOME/cloned_repository

Multiple repositories

SvnCrawler does not keep track of multiple repositories: a gen command will delete the current state and any previously produced report.
However, SvnCrawler does not store any file outside its own folder, so it is possible to stay up to date with multiple repositories by keeping a separate copy of the crawler for each one.
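
A minimal sketch of this setup, assuming two hypothetical local repositories and one crawler checkout per repository:
svn co https://svncrawler.svn.sourceforge.net/svnroot/svncrawler/trunk ~/crawler_project_a
svn co https://svncrawler.svn.sourceforge.net/svnroot/svncrawler/trunk ~/crawler_project_b
(cd ~/crawler_project_a && ./svn_crawler.sh gen file:///home/user/repos/project_a)
(cd ~/crawler_project_b && ./svn_crawler.sh gen file:///home/user/repos/project_b)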

Errors and bugs

Is something not working properly? Please report it on the forum.


