
Improved jenkins-github navigation

At work we use git feature branches extensively, and we have a Jenkins job configured to build every branch appearing under origin/feature/*, but it is hard to know which commit and branch a given build corresponds to. So I will show you how we use the Groovy Postbuild Plugin to add a GitHub link and the name of the branch that was built.

Jenkins + GitHub = cheap and useful navigation links

Continuous build

Add a Groovy post-build action:

def matcher = manager.getLogMatcher(".*Commencing build of Revision (.*) (.*)\$")
if (matcher?.matches()) {
    // group(2) looks like "(origin/feature/xyz)": strip the "(origin/" prefix and the trailing ")"
    branch = matcher.group(2).substring(8, matcher.group(2).length() - 1)
    commit = matcher.group(1).substring(0, 6)
    githuburl = manager.build.getParent().getProperty("com.coravy.hudson.plugins.github.GithubProjectProperty").getProjectUrl().commitId(matcher.group(1))
    description = "<a href='${githuburl}'>${commit}</a> - ${branch}"
    manager.build.setDescription(description)
}

It assumes that you have configured the GitHub project URL on the job configuration page, via the GitHub plugin.
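For reference, here is how that kind of regexp plays out on a typical git plugin log line. This is a standalone sketch with a made-up log line and a simplified pattern (explicit parentheses instead of the substring arithmetic above):

```groovy
// hypothetical git plugin log line, parsed the same way as the post-build script
def line = 'Commencing build of Revision 0a1b2c3d4e5f6071 (origin/feature/login)'
def matcher = (line =~ /Commencing build of Revision (.*) \((.*)\)/)
assert matcher.find()
def commit = matcher.group(1).substring(0, 6)   // short sha for the description
def branch = matcher.group(2) - 'origin/'       // drop the remote prefix
assert commit == '0a1b2c'
assert branch == 'feature/login'
```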

Don’t forget to install the Extra columns plugin and configure your main view to display the build description.

github-build-descriptions

Deployment pipeline

For the deployment job, inspired by GitHub, let’s say that your website exposes a URL returning the currently deployed SHA, for example /site/sha. You also have a Jenkins job that tracks commits on origin/develop and triggers a deployment.

Let’s add a shell script step in your job :

DEPLOYED_SHA="`wget --no-check-certificate -qO- https://github.com/site/sha`" 
echo CURRENTLY_DEPLOYED_SHA $DEPLOYED_SHA

Then add a post-build Groovy script that shows the deployed SHA and a GitHub diff link against the previously deployed version:

def matcher = manager.getLogMatcher(".*commit (.*)\$")
if (matcher?.matches()) {
    branch = 'develop'
    commit = matcher.group(1).substring(0, 6)
    projectUrl = manager.build.getParent().getProperty("com.coravy.hudson.plugins.github.GithubProjectProperty").getProjectUrl()
    githuburl = projectUrl.commitId(matcher.group(1))
    def matcher_currently_deployed = manager.getLogMatcher(".*CURRENTLY_DEPLOYED_SHA (.*)\$")
    if (matcher_currently_deployed?.matches()) {
        commit_from = matcher_currently_deployed.group(1).substring(0, 6)
        description = "<a href='${githuburl}'>${commit}</a> - ${branch} - <a href='${projectUrl.baseUrl}compare/${commit_from}...${commit}'>diff</a>"
        manager.build.setDescription(description)
    }
}

The diff link takes you straight to the GitHub compare view between the two deployments.
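To make the compare link concrete, here is a tiny standalone sketch of the URL the script builds (the project URL and both SHAs are fabricated):

```groovy
// hypothetical project url and shas, shortened like in the post-build script
def projectUrl = 'https://github.com/mycompany/myapp/'
def shortSha = { sha -> sha.substring(0, 6) }
def commitFrom = 'a1b2c3d4e5f6a7b8'
def commitTo   = '0f9e8d7c6b5a4f3e'
def diffUrl = "${projectUrl}compare/${shortSha(commitFrom)}...${shortSha(commitTo)}"
assert diffUrl.toString() == 'https://github.com/mycompany/myapp/compare/a1b2c3...0f9e8d'
```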

If you have other hacks around GitHub and Jenkins, keep me posted!


Jenkins as monitoring platform of the poor

The goal

The goal was to monitor the availability of some HTML pages and WSDLs. I don’t have access to the full monitoring infrastructure and wanted to check my development servers, so I was looking for a lightweight way to monitor them. I mixed Jenkins and Groovy and ended up with a pretty, low-cost monitoring solution 😉

Install the necessary plugin and tools

Manage Jenkins > Manage Plugins:
- Groovy plugin: adds the ability to directly execute Groovy code.
- Green Balls: changes Hudson to use green balls instead of blue for successful builds.
- Groovy Postbuild Plugin: executes a Groovy script in the Jenkins JVM. Typically, the script checks some conditions and changes the build result accordingly, puts badges next to the build in the build history and/or displays information on the build summary page.

Manage Jenkins > Configure System: Groovy > Groovy installations, or Install automatically.
For the Groovy plugin you can use the built-in tool installer or just point it to an unzipped Groovy binary:
GROOVY_HOME : /opt/groovy/

So let’s create a free-style Jenkins job with the following settings:

Discard Old Builds : Max # of builds to keep : 100
Build periodically : */10 * * * *
Execute Groovy Script : groovy command :

servers = ['ex1.server.com', 'ex2.server.com', 'ex3.server.com']

wsdls = []
simpleurls = []
servers.each { host ->
    wsdls.add("http://${host}/ws/MyWebService?wsdl")
    simpleurls.add("http://${host}/ui/MyConsole.html")
}

def koCount = 0
def slowCount = 0
def checkUrl = { url, check ->
    def status = 'KO'
    def host = ''
    start = System.currentTimeMillis()
    try {
        myurl = new URL(url)
        host = myurl.getHost()
        def text = myurl.getText(connectTimeout: 10000, readTimeout: 10000)
        def ok = check(url, text)
        status = ok ? 'OK' : 'KO'
        if (!ok) { koCount++ }
    } catch (Throwable t) {
        koCount++
    }
    end = System.currentTimeMillis()
    if ((end - start) > 100)
        slowCount++
    println "$host\t" + status + '\t' + (end - start) + '\t ' + url
}
def checkAllUrl = { urls, check -> urls.each { url -> checkUrl(url, check) } }
def wsdlCheck = { url, content -> content.contains("wsdl:definitions") }
def pingCheck = { url, content -> content.contains("status=NORMAL") }
def contentCheck = { url, content -> content.contains("login") }

checkAllUrl(wsdls, wsdlCheck)
checkAllUrl(simpleurls, contentCheck)

println "ko.count=" + koCount
println "slow.count=" + slowCount

if (koCount > 0 || slowCount > 0) {
    System.exit(-1)
}

The script builds two lists of URLs, wsdls and simpleurls, from a list of servers.
A first closure, checkUrl, fetches the content of a URL and updates the OK/KO counters; it also receives another closure that validates the expected content of the response.
Then, depending on the kind of content, call checkAllUrl with the matching check closure: wsdlCheck, contentCheck, ….
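The closure pattern in miniature, decoupled from Jenkins and the network (the URLs and response bodies below are fabricated):

```groovy
// a check closure receives the url and its content and says whether it looks healthy
def wsdlCheck    = { url, content -> content.contains('wsdl:definitions') }
def contentCheck = { url, content -> content.contains('login') }

// a simplified checkUrl: the real one fetches the url and updates counters
def checkUrl = { url, content, check -> check(url, content) ? 'OK' : 'KO' }

assert checkUrl('http://ex1/ws?wsdl', '<wsdl:definitions/>', wsdlCheck) == 'OK'
assert checkUrl('http://ex1/ui', '<html>oops</html>', contentCheck) == 'KO'
```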

Add a Groovy Postbuild step with the following Groovy script:

def addShortTextSlow = { comp, shortcomp ->
    matcher = manager.getMatcher(manager.build.logFile, comp + ".count=(.*)\$")
    if (matcher?.matches()) {
        manager.addShortText(shortcomp + ' ' + matcher.group(1), "grey", "white", "0px", "white")
    }
}
addShortTextSlow('slow', 'slow')
addShortTextSlow('ko', 'ko')

That’s it !

Subscribe to the Jenkins “RSS for failures” feed, or use your preferred Jenkins notification tool, and benefit from the built-in Jenkins UI!

You have a history of the checks:

jenkins-monitoring-history

And trending
jenkins-monitoring-trend

You can easily embed this graph, or the green/red ball, in Jira or your wiki:


<a href="http://myjenkins.com/job/monitoring/lastBuild/consoleText">
    <img src="http://myjenkins.com/job/monitoring/buildTimeGraph/png" alt="200" title="200" border="0"/>
</a>

<a href="http://myjenkins.com/job/monitoring/lastBuild/consoleText">
    <img src="http://myjenkins.com/job/monitoring/lastBuild/buildStatus" border="0"/>
</a>

The sky is the limit !

Ok, now you get the idea… let’s add some more checks.

— check some open ports :

// host and port come from your server list
try {
    s = new Socket(host, port)
    s.withStreams { input, output -> }
    println "management port ok $host $port"
} catch (Exception e) {
    koCount++
    println "management port KO for $host $port : " + e.getMessage()
}

— access jmx beans

import javax.management.remote.*
def serverUrl = new JMXServiceURL('service:jmx:rmi:///jndi/rmi://ex1.server.com:9999/jmxrmi')
def server = JMXConnectorFactory.connect(serverUrl).MBeanServerConnection;
def memory = new GroovyMBean(server, 'java.lang:type=Memory')
println memory.listAttributeNames() 
println memory.listOperationNames() 

— some jamon statistics :

jamonurls=[]
jamonurlsuffix='/jamonadmin.jsp?sortCol=2&sortOrder=desc&displayTypeValue=RangeColumns&RangeName=ms.&outputTypeValue=xml&formatterValue=%23%2C%23%23%23&TextSize=0&highlight=&ArraySQL=^WS-|^Fault&'

servers.each() {host ->	jamonurls.add("http://${host}"+jamonurlsuffix)}

def fixJamonXml = { xml ->
    if (xml.indexOf("No data was returned") != -1) {
        return '<JAMonXML></JAMonXML>'
    }
    String content = xml.substring(xml.indexOf('<JAMonXML>'))
    rangeLabels = ["0_10ms", "10_20ms", "20_40ms", "40_80ms", "80_160ms", "160_320ms",
                   "320_640ms", "640_1280ms", "1280_2560ms", "2560_5120ms", "5120_10240ms", "10240_20480ms"]
    content = content.replaceAll('<Label>', '<Label><![CDATA[')
    content = content.replaceAll('</Label>', ']]></Label>')
    rangeLabels.each { rangeLabel ->
        content = content.replaceAll(rangeLabel, "range_" + rangeLabel)
    }
    content = content.replaceAll("LessThan_0ms", "range_LessThan_0ms")
    content = content.replaceAll("GreaterThan", "range_GreaterThan")
    return content
}

def jamonCheck= {
	url,content ->
	monitors = []
	def JAMonXML = new XmlSlurper().parseText(fixJamonXml(content))
    def parseLong =  { t ->  if (t.text().equals("")) return null; Long.valueOf(t.text().replaceAll(',', ''))}
    def parseLongString =  { t ->  if (t.equals("")) return null; Long.valueOf(t.replaceAll(',', ''))}
    def parseRange = {
		rangeText ->	// 15/10.2 (0/0/0)
		// http://docs.codehaus.org/display/GROOVY/Tutorial+5+-+Capturing+regex+groups
		rangeFormat = /(.*)\/(.*) \((.*)\/(.*)\/(.*)\)/
		matched = ( rangeText.text() =~ rangeFormat )
		if (matched.matches()) {
			return [	'label':rangeText.name() , hits : parseLongString(matched[0][1]),average:matched[0][2]]
		}
		return [	'label':rangeText.name() , hits : 0,average:0.0]
	}
	println "************************"+ url
	JAMonXML.children().each() { row ->
		monitors.add( [
			'label' : row.Label,
			'units' : row.Units,
			'hits' : parseLong(row.Hits),
			'avg'  : parseLong(row.Avg),
			'total' : parseLong(row.Total),
			'stddev' : parseLong(row.StdDev),
			'lastvalue': parseLong(row.LastValue),
			'min' : parseLong(row.Min),
			'max' : parseLong(row.Max),
			'active' : parseLong(row.Active),
			'avgActive':parseLong(row.AvgActive),
			'maxActive':parseLong(row.MaxActive),
			'firstAccess':row.FirstAccess,
			'lastAccess' : row.LastAccess,
			'ranges' : [
				'range_LessThan_0ms' :parseRange(row.range_LessThan_0ms),
				'range_0_10ms' : parseRange(row.range_0_10ms),
				'range_10_20ms' : parseRange(row.range_10_20ms) ,
				'range_20_40ms' : parseRange(row.range_20_40ms),
				'range_40_80ms':parseRange(row.range_40_80ms),
				'range_80_160ms' : parseRange(row.range_80_160ms) ,
				'range_160_320ms' : parseRange(row.range_160_320ms),
				'range_320_640ms' : parseRange(row.range_320_640ms),
				'range_640_1280ms' : parseRange(row.range_640_1280ms),
				'range_1280_2560ms' : parseRange(row.range_1280_2560ms) ,
				'range_2560_5120ms' : parseRange(row.range_2560_5120ms),
				'range_5120_10240ms' : parseRange(row.range_5120_10240ms),
				'range_10240_20480ms' : parseRange(row.range_10240_20480ms),
				'range_GreaterThan_20480ms': parseRange(row.range_GreaterThan_20480ms)]
		] )
	}
	/**
	 *  1      0      10ms
		2     10      20ms
		3     20      40ms
		4     40      80ms 
		5     80     160ms
		6    160     320ms
		7    320     640ms
		8    640    1280ms
		9   1280    2560ms
		10  2560    5120ms
		11  5120   10240ms
		12 10240   20480ms
		13 >>      20480ms
	 */
	def getPercentiles = {monitor ->
	    def ps = [0.5,0.8,0.9,0.95,0.98,0.99]
		def ranges = [];
		monitor.ranges.eachWithIndex() {it, i -> ranges.add(it.value.hits) }	
		def rangesCumulative  = [];	 
		(0..13).each() {i -> rangesCumulative.add (monitor.hits>0?ranges[i]/monitor.hits:0)}
		def percentages= (0..13).collect() {i -> rangesCumulative[1..i].sum()}
		def percentiles = ps.collect{ percentile->percentages.findIndexOf{it>=percentile}}
	   return percentiles
    }
	percentileserrors = [];
	monitors.each {
		percentiles = getPercentiles(it)
		println percentiles.join('\t') + "\t"+it.label
		if (percentiles[2]>8) {
			percentileserrors.add(it.label)
		}		
	}	

	// the check passes only when no monitor exceeded the percentile threshold
	return percentileserrors.isEmpty()
}
checkAllUrl (jamonurls,jamonCheck)
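The percentile trick above boils down to a cumulative distribution over the latency buckets: sum the per-range hit fractions and find the first bucket where the running total reaches the wanted percentile. A standalone sketch with fabricated hit counts:

```groovy
// hypothetical hit counts per latency range (0-10ms, 10-20ms, 20-40ms, 40-80ms)
def hits = [50, 30, 15, 5]
def total = hits.sum()

// running cumulative fraction of requests answered up to each bucket
def cumulative = []
def running = 0
hits.each { running += it; cumulative << running / total }

// index of the first range that covers the given percentile
def percentileIndex = { p -> cumulative.findIndexOf { it >= p } }

assert percentileIndex(0.5) == 0   // half the requests finish in the first bucket
assert percentileIndex(0.9) == 2   // p90 falls in the 20-40ms bucket
```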


Web’s fear and maven

All is well and good on your Jetty or Tomcat servers; then one of your clients wants to deploy your application on WebSphere Application Server, and the trouble begins:

 - JNDI lookup for datasources
 - Classloading mess
    - Verbose classloading and parent last
    - Jboss tattletale
    - Cleanup undesired dependencies
        - Maven exclusions
        - Correct scope
        - Maven war plugin : packagingExcludes
        - Patched jar
    - Keep it clean
         - maven-enforcer-plugin and friends
         - Combining Groovy-Sonar-Jenkins

JNDI lookup

If your client plans to use WebSphere, maybe they want to use the built-in WebSphere datasource, an implementation collecting various statistics about connections, prepared statements, ….
You probably want to keep your Jetty/Tomcat compatibility and, when running on WebSphere, switch to the specific implementation (JNDI datasource, JTA transaction manager, …).
You can use Spring profiles to look up your datasource via JNDI instead of using DBCP or another datasource implementation.

	<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean" abstract="false"
		scope="singleton">
		<property name="lookupOnStartup" value="false" />
		<property name="cache" value="true" />
		<property name="proxyInterface" value="javax.sql.DataSource" />
		<property name="expectedType" value="javax.sql.DataSource" />
		<property name="jndiName" value="java:comp/env/jdbc/MyDataSource" />
	</bean>
 

If you plan to use JTA transactions and multiple datasources/queues, don’t forget to use XA transactions, or tweak your transactions to avoid mixing access in a single transaction (and design for possible data loss).
Also reduce the isolation level through the datasource property webSphereDefaultIsolationLevel (the default is repeatable read).
If you have long-running transactions, like Quartz jobs, test them extensively.

Classloading mess

We are in 2012… OSGi has been around for a long time, and I’m still struggling with WebSphere and its bundled Xerces.

Verbose classloading and parent last

To diagnose classloading issues (NoClassDefFoundError, constraint violations, …) you can enable verbose classloading.
To minimize the side effects of the jars bundled in WebSphere, set the classloader policy of your application and modules to parent last.

Jboss tattletale

I know it’s ironic, but this tool developed by JBoss will save you hours of trial and error.
To audit your WEB-INF/lib, JBoss Tattletale is THE tool to identify:
– undesired dependencies, like the ones bundling javax.** classes
– duplicate jars (often due to Maven relocation)
– duplicate classes

	 		<plugin>
				<groupId>org.jboss.tattletale</groupId>
				<artifactId>tattletale-maven</artifactId>
				<version>1.1.2.Final</version>
				<executions>
					<execution>
						<goals>
							<goal>report</goal>
						</goals>
					</execution>
				</executions>
				<configuration>
					<source>./target/mywebapp-${project.version}/WEB-INF/lib</source>
					<destination>./target/reports</destination>
				</configuration>
			</plugin> 
 

Launch mvn clean package.
Then take a look at the report; you will perhaps discover duplicate classes, like the ones from commons-logging, and switch to jcl-over-slf4j,

or duplicate quartz jar :

  <groupId>opensymphony</groupId>
  <artifactId>quartz-all</artifactId>
vs
  <groupId>opensymphony</groupId>
  <artifactId>quartz</artifactId>

and many other undesired dependencies.

Cleanup Undesired dependencies

Maven exclusions

Since maven 2.x resolves dependencies transitively, it is possible for unwanted dependencies to be included in your project’s classpath. Projects that you depend on may not have declared their set of dependencies correctly, for example. In order to address this special situation, maven 2.x has incorporated the notion of explicit dependency exclusion. Exclusions are set on a specific dependency in your POM, and are targeted at a specific groupId and artifactId. When you build your project, that artifact will not be added to your project’s classpath by way of the dependency in which the exclusion was declared.

<exclusion>
    <groupId>xstream</groupId>
    <artifactId>xstream</artifactId>
</exclusion>
<exclusion>
    <groupId>com.thoughtworks.xstream</groupId>
    <artifactId>xstream</artifactId>
</exclusion>
 

Correct scope

For example, exclude test artifacts by specifying the correct scope.

	<dependency>
		<groupId>junit</groupId>
		<artifactId>junit</artifactId> 
                <scope>test</scope>
	</dependency>
 

Exclude JDBC drivers by defining them as provided (idem for your datasource implementation):


				<dependency>
					<groupId>com.ibm.data.db2</groupId>
					<artifactId>db2jcc</artifactId>
                                        <scope>provided</scope>
				</dependency>

 

packagingExcludes

In extreme cases, adding exclusions is just too long and boring. Configuring the Maven war plugin to exclude the jars can be a faster way, but remember that if such a dependency breaks something in your application, it is still there in your unit tests.

			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-war-plugin</artifactId>
				<version>2.1.1</version>
				<configuration>
					<packagingExcludes>WEB-INF/lib/commons-logging-*.jar</packagingExcludes>									
					<warSourceDirectory>WebContent</warSourceDirectory>
				</configuration>
			</plugin>
	
 

Patched jars

Some open source jars bundle the same classes multiple times; for example, org.w3c.dom.UserDataHandler is bundled in xom, jaxen and many more.
This interface was also bundled in WebSphere and in two jars in WEB-INF/lib; one of them was sealed, leading to java.lang.LinkageError: loading constraint violation: loader.
So I removed the classes from the jar and uploaded a xom-1.1.patched.jar to the corporate Maven repository. It’s really ugly, but it works.

Keep it clean

maven-enforcer-plugin and friends

Maven provides a default set of enforcer rules; one of them is bannedDependencies.

But there is another set of rules provided by the Pedantic POM Enforcers:

Have you ever experienced symptoms like headaches, unfocused anger or a feeling of total resignation when looking at a Maven project where everybody adds and changes stuff just as they need? Do people call you a “POM-Nazi” if you show them how to setup proper and well organized projects?

If so, the Pedantic POM Enforcers are absolutely the thing you need!

And there is also an extra rule set at Codehaus.

Combining Groovy-Sonar-Jenkins

It’s quite easy to create a small Groovy script that:
– checks the jars in WEB-INF/lib against a baseline list
– fails the build, or, if you are less paranoid…
– sends a mail to your team,
– or just contributes to a Sonar manual measure

Let’s define our baseline: for some jars you want to be notified if a different version is bundled; for your own modules you accept any version.
Use this baseline as a whitelist: if a different version shows up, or there is no match at all, then it’s a new dependency -> it requires testing a WebSphere deployment.

xom-1.0.jar
mymodule-.*.jar
...
 
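The whitelist check itself is just a regex match of each jar name against the baseline patterns. A minimal sketch with fabricated jar names (the full script below also tracks baseline entries that matched nothing):

```groovy
// hypothetical baseline patterns (regexes) and actual WEB-INF/lib content
def baseline = ['xom-1\\.0\\.jar', 'mymodule-.*\\.jar']
def actuals  = ['xom-1.0.jar', 'mymodule-2.3.jar', 'commons-logging-1.1.jar']

// a jar is unmatched when no baseline pattern fully matches its name
def unmatched = actuals.findAll { jar -> !baseline.any { jar ==~ it } }
assert unmatched == ['commons-logging-1.1.jar']
```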

Then create the manual measure in Sonar.

You can define a manual measure.

And now the Groovy script that analyzes the latest war file, posts the manual measure to Sonar and sends you a mail 😉 :


import java.util.zip.ZipFile

//authenticated post
def postSonarMeasure = { resource, metric, val, sonarhost, token ->
    def script = "resource=${resource}&metric=${metric}&val=${val}&text=fromgroovy&description=fromgroovy"
    println script
    URL url = new URL("${sonarhost}/api/manual_measures?" + script)
    // setRequestMethod lives on HttpURLConnection, not URLConnection
    HttpURLConnection conn = url.openConnection()
    conn.setRequestMethod("POST")
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded")
    conn.setRequestProperty("Authorization", "Basic ${token}")
    conn.setDoOutput(true)
    OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream())
    wr.write(script)
    wr.flush()
    result = conn.getInputStream().getText()
    println 'metrics created ' + result
    return result
}

def sonar = 'https://continuousbuild.com/sonar'
def mavencoordinate='com.company:mywebapp'
def token = 'myuser:mypassword'.bytes.encodeBase64().toString()

//http://docs.codehaus.org/display/SONAR/Web+Service+API
//curl http://continuousbuild.com/sonar/api/manual_measures?resource=com.company:mywebapp&metric=unverifiedwebinfjars
//http://jira.codehaus.org/browse/SONAR-2966 <not_supported/>  
// 1. define the metrics
// 2. add a measure manually
// 3. launch an analysis
// 4. check the data through api/manual_measures
def postSonarUnverifiedWebInfJars = { value ->
	postSonarMeasure(mavencoordinate,'unverifiedwebinfjars',value,sonar,token)
}

def getActualContentOfWebInfLibFromLastestWar = {
	// find the latest war file in the target directory (most recently modified first)
	fileWar = new File("./target").listFiles().findAll() { it.getName().endsWith('.war') }.sort() { a, b ->
		b.lastModified().compareTo a.lastModified()
	}.getAt(0);
	println "Checking WEB-INF/lib from " + fileWar.canonicalPath;
	//and create actuals with content
	ZipFile file = new ZipFile(fileWar)
	actuals = file.entries().collect { entry -> if (entry.getName().startsWith('WEB-INF/lib/')) return entry.getName().substring('WEB-INF/lib/'.length()) }
	actuals = actuals.findAll {it!=null && !it.equals('')}
	return actuals
}

def getBaseLine = {
	allowed=[]
	new File("./baseline.txt").eachLine { if (!it.trim().isEmpty())allowed.add(it) }
	return 	allowed
}
	actuals =getActualContentOfWebInfLibFromLastestWar();
	allowed =getBaseLine();

	println "************************************ "
	println "actuals "+actuals.size()
	println "allowed "+allowed.size()
	println "************************************ "

	unallowed = [];
	unmatched = [];

	allowedNonMatching = [];
	allowedNonMatching.addAll(allowed);

	actuals.each { actual ->
		ok = allowed.find() { allow ->
			boolean match= (actual =~ '^'+allow)
			if (match) {
				allowedNonMatching.remove(allow)
				println "matching " +actual +" "+ allow
			}
			return match;
		}
		if (ok==null) {
			unallowed.add("unmatched dependencies ! '${actual}' ")
			println "unmatched dependencies ! '${actual}' "
			unmatched.add(actual)
		}
	}
	if (!unallowed.isEmpty() || !allowedNonMatching.isEmpty()) {
		def msg =  "The ${project} problem dependencies : \n"+unallowed.join('\n')+" \n add exclusions or adapt baseline.txt check if websphere deployment is ok.\nplease.\n"+actuals.join('\n');
		 ant = new AntBuilder()
		 ant.mail(mailhost:'mysmtp.server.com', subject:"${project} : undesired dependencies detected !" ,tolist:'myaddress@mestachs.com'){
		         from(address:'jenkins@mestachs.com')
		         replyto(address:'myaddress@mestachs.com')
		         message(msg.toString())
		 }

		println msg.toString()
	}
	println "************************************ unused constraint from baseline.txt"
	allowedNonMatching.each {println it}
	println "*************************"
	println "************************************ append content to baseline.txt"
	unmatched.each {println it}
	println "*************************"
	postSonarUnverifiedWebInfJars(unallowed.size())
 

Enable the run of this script via the Maven plugin in a dedicated profile:

	
   	            <plugin>
				<groupId>org.codehaus.groovy.maven</groupId>
				<artifactId>gmaven-plugin</artifactId>
				<version>1.0</version>
				<executions>
					<execution>
						<phase>verify</phase>
						<goals>
							<goal>execute</goal>
						</goals>
						<configuration>
							<source>./comparewebinf.groovy</source>
						</configuration>
					</execution>
				</executions>
			</plugin> 	
 

or via a Jenkins Groovy post-build script.


Jenkins : diskspace requirement tips

Jenkins is a great tool, but as I already wrote, the default values don’t help to keep it running for a long time without terabytes of disk. So let’s manage our diskspace requirements for Maven builds using the various options of Jenkins and system Groovy scripts.

Disable maven artifact archiving

This option tells Jenkins to collect the poms, jars, wars and ears produced by Maven. This is rarely useful when you use an enterprise repository, yet it is enabled by default… so if you aren’t using it… disable it!

To do so you need to go into each job definition and check:

Build > Advanced > Disable automatic artifact archiving

As a lazy programmer, you may know that Jenkins offers a script console.
So you can fix artifact archiving in a single batch with the following script:

String format = '%-45s | %-20s | %-10s | %-10s | %-30s'
def readonly = false
activeJobs = hudson.model.Hudson.instance.items.findAll { job ->
    job.isBuildable() && job instanceof hudson.maven.MavenModuleSet
}
def oneline = { str -> if (str == null) return ""; str.replaceAll("[\n\r]", " - ") }
println String.format(format, "job", "scm trigger", "last status", "logrot", "archiving")
println "-------------------------------------------------------------------------------------------------------------------------------"
activeJobs.each { run ->
    println String.format(format, run.name, oneline(run.getTrigger(hudson.triggers.Trigger.class)?.spec), run?.lastBuild?.result, run.logRotator.getDaysToKeep() + " " + run.logRotator.getNumToKeepStr(), "" + run.isArchivingDisabled())
    if (!run.isArchivingDisabled() && !readonly) {
        run.setIsArchivingDisabled(true)
        run.save()
    }
}

The readonly variable controls whether the script actually changes anything: leave it at false to disable archiving automatically, or set it to true for a report-only dry run 😉

job                  | scm trigger          | last status | logrot | archiving
-------------------------------------------------------------------------------
myproject_ci         | 24 * * * *           | SUCCESS     | -1 10  | true

Discard Old Builds

You can easily locate the jobs leaking build logs:

noLogRotation = hudson.model.Hudson.instance.items.findAll { job ->
    job.isBuildable() && job.logRotator == null
}
noLogRotation.each { println it.name }

and fix them too by providing a logRotator:

def jobs = hudson.model.Hudson.instance.items.findAll { !it.logRotator && !it.disabled }
jobs.each { job ->
    // days to keep, num to keep, artifact days to keep, artifact num to keep
    job.logRotator = new hudson.tasks.LogRotator(30, 40, 1, 1)
    println "$job.name fixed"
}


Surviving in a legacy AS/400 world with a taste of Groovy.

IBM System i, iSeries, AS/400,…

You may have heard of IBM System i, iSeries, AS/400, … it was rebranded multiple times, but for most of you it’s a green-screen 5250 terminal. This system is fairly widespread in European industry. For Java development you have access to the iSeries via jt400 (a JDBC driver plus an API for most concepts: jobs, program calls, …).

Groovy + jt400 + system tables = automation for the lazy DBA.

Last month, we did a new release of our application and this one required a new set of indexes.

The good news is that the iSeries, when preparing SQL statements, does an explain plan and logs its advised indexes in a system table. But not all advised indexes are worth creating; maybe you can reuse an existing one by rephrasing your SQL statement. So we had to list the advised and existing indexes for each table, look at the number of times each index was advised, …

Doing this manually in the UI tool was too error-prone and too boring. As a Java/Groovy developer, I had to automate it with a Groovy script.

Existing tables

So first let’s list all existing tables (physical files) in a given schema (library) using the system view SYSTABLES. Our DBAs prefer the system name (short name) over the long name.

import groovy.sql.Sql
import java.util.*

def getTableSystemNames = {library,sql ->
    sql.rows(""" select * from QSYS2/SYSTABLES where table_schema = '${library}'
                 fetch first 500 rows only with ur""".toString()).collect { it.SYSTEM_TABLE_NAME}
}

Existing indexes

First step: let’s get the existing indexes from SYSINDEXES.
One row is one column of one index… so let’s use the Groovy goodness of groupBy, collect and join to get one line per index in the format “column1,column2,column3”.
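Here is that groupBy/collect/join combination on a couple of fabricated SYSKEYS rows:

```groovy
// rows as returned by SYSKEYS: one row per index column (hypothetical data)
def rows = [
    [SYSTEM_INDEX_NAME: 'IDX1', COLUMN_NAME: 'CUSTOMER_ID'],
    [SYSTEM_INDEX_NAME: 'IDX1', COLUMN_NAME: 'ORDER_DATE'],
    [SYSTEM_INDEX_NAME: 'IDX2', COLUMN_NAME: 'STATUS'],
]

// group the column rows by index, then join each group into one comma-separated line
def indexes = rows.groupBy { it.SYSTEM_INDEX_NAME }
                  .collectEntries { name, cols -> [name, cols.collect { it.COLUMN_NAME }.join(',')] }

assert indexes == [IDX1: 'CUSTOMER_ID,ORDER_DATE', IDX2: 'STATUS']
```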

def getExistingIndexes = { library, tableSystemName, sql ->
    def existingIndexSQL = """with INDX as (
        select INDEX_NAME,SYSTEM_INDEX_NAME,COLUMN_NAME,SYSTEM_COLUMN_NAME from qsys2/SYSKEYS
        where
          index_name in (SELECT INDEX_NAME FROM qsys2/sysindexes
                         where SYSTEM_INDEX_SCHEMA = '$library' and system_table_name = '$tableSystemName' )
           and index_SCHEMA='$library'
        )
        select * from INDX
        fetch first 500 rows only with ur
    """;
    rows = sql.rows(existingIndexSQL.toString())
    existingIndexes = [:];
    def existingIndexesColumns = rows.groupBy { it.SYSTEM_INDEX_NAME }
    existingIndexesColumns.each { row -> existingIndexes.put row.key, row.value.collect { it.COLUMN_NAME }.join(',') }
    return existingIndexes
}

Advised indexes

Second step: get the advised indexes.
KEY_COLUMNS_ADVISED is already in “column1, column2, column3” format.

def getAdvisedIndexes = { library, tableSystemName, sql ->
    def advisedIndexesSQL = """
        select * from qsys2/SYSIXADV where
        TABLE_SCHEMA = '${library}' and
        SYSTEM_TABLE_NAME like '${tableSystemName}%'
        and TIMES_ADVISED > 1
        and index_type = 'RADIX'
        order by TIMES_ADVISED desc, MTI_CREATED desc
        fetch first 500 rows only with ur
        """

    rows = sql.rows(advisedIndexesSQL.toString())
    rows.collect { it.KEY_COLUMNS_ADVISED + " " + it.TIMES_ADVISED + " " + it.INDEX_TYPE }
}

It works !

Last step: put everything together with an SQL connection 😉

def dumpAdvisedAndExistingIndexes = { library, sql ->
    tables = getTableSystemNames(library, sql)
    tables.each { tableSystemName ->
        advised = getAdvisedIndexes(library, tableSystemName, sql)
        if (advised.isEmpty())
            return
        println "###### ${library}.${tableSystemName}"
        println "****************** existing indexes ****************"
        getExistingIndexes(library, tableSystemName, sql).each { println it }
        println "****************** advised indexes ****************"
        advised.each { println it }
    }
}
def as400 = "myas400"
def as400User = "myuser"
def as400Pwd = "mypwd"

def sql = Sql.newInstance("jdbc:as400://${as400};naming=system;libraries=*LIBL;date format=iso;prompt=false", as400User,as400Pwd, "com.ibm.as400.access.AS400JDBCDriver");

dumpAdvisedAndExistingIndexes('LIB1',sql )
dumpAdvisedAndExistingIndexes('LIB2',sql )
dumpAdvisedAndExistingIndexes('LIB3',sql )

A little further

Ok, now I have a beautiful script… what can I do with it? You can, for example:

  • reuse these closures to compare two libraries, two different iSeries, …
  • put this kind of Groovy script in a Jenkins job. I wrote a similar script to detect reserved keywords, and each developer can test his own library/schema via a parameterized Groovy Jenkins job.
  • document your database with similar scripts or a tool like SchemaSpy.
  • reuse the same approach for other DBMSes, like DB2 LUW, Oracle, MySQL, …
  • mix this system information with your naming convention checks
