Wednesday, March 5, 2014

Hooking Bitbucket up with Jenkins parameterized jobs

Bitbucket repositories allow us to set up hooks which notify/trigger Jenkins jobs about newly pushed code. The process to create such a hook is documented here. However, it doesn't mention how to integrate with Jenkins parameterized jobs.
After reading the Jenkins documentation and a bit of trial and error, I managed to integrate Bitbucket with Jenkins parameterized jobs.
The hook management form presents 4 fields:

  1. Endpoint: your Jenkins URL, in the following format: http://username:apitoken@your.jenkins.url/job/your.job.name/buildWithParameters 
  2. Module name: optional; can be left empty.
  3. Project name: leave empty (see the gotcha below).
  4. Token: the authentication token you defined in your Jenkins job settings.

The gotcha is leaving the Project name field blank and instead including the job name in the endpoint URL, with buildWithParameters appended.
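For reference, the endpoint URL can be assembled programmatically as well. The sketch below is only an illustration of the URL format described above: the helper name and all credentials/host/job values are placeholders, not anything Bitbucket or Jenkins actually requires.

```java
// Sketch: assembles the Jenkins "buildWithParameters" endpoint URL in the
// exact shape Bitbucket's "Endpoint" field expects. All argument values
// below (user, token, host, job name) are placeholders.
public class JenkinsTriggerUrl {

    static String buildTriggerUrl(String user, String apiToken,
            String jenkinsHost, String jobName) {
        // Credentials are embedded directly in the URL.
        return "http://" + user + ":" + apiToken + "@" + jenkinsHost
                + "/job/" + jobName + "/buildWithParameters";
    }

    public static void main(String[] args) {
        System.out.println(buildTriggerUrl("bob", "abc123",
                "ci.example.com", "my-job"));
    }
}
```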


Saturday, July 27, 2013

Dynamically generating Zip files using Google Cloud Storage Client Library for Appengine

Let's say you have 50 objects (15 MB each) stored in Google Cloud Storage. Now you need to create a zip archive containing all of them and store the resulting file back into GCS. How can we achieve that from within an App Engine Java application?
Well, after some research, I wrote the method below using the Google Cloud Storage Client Library, which does exactly that. Just don't forget to grant the appropriate permissions to your App Engine service account so that it can read and write the objects.


public static void zipFiles(final GcsFilename targetZipFile,
    final GcsFilename... filesToZip) throws IOException {

  Preconditions.checkArgument(targetZipFile != null);
  Preconditions.checkArgument(filesToZip != null);
  Preconditions.checkArgument(filesToZip.length > 0);

  final int fetchSize = 4 * 1024 * 1024;
  final int readSize = 2 * 1024 * 1024;
  final GcsFileOptions options = new GcsFileOptions.Builder()
      .mimeType(MediaType.ZIP.toString()).build();
  final GcsOutputChannel outputChannel =
      GCS_SERVICE.createOrReplace(targetZipFile, options);
  final ZipOutputStream zip =
      new ZipOutputStream(Channels.newOutputStream(outputChannel));
  try {
    for (final GcsFilename file : filesToZip) {
      final GcsFileMetadata meta = GCS_SERVICE.getMetadata(file);
      if (meta == null) {
        LOGGER.warning(file.toString() + " NOT FOUND. Skipping.");
        continue;
      }
      zip.putNextEntry(new ZipEntry(file.getObjectName()));
      final GcsInputChannel readChannel =
          GCS_SERVICE.openPrefetchingReadChannel(file, 0, fetchSize);
      try {
        final ByteBuffer buffer = ByteBuffer.allocate(readSize);
        while (readChannel.read(buffer) >= 0) {
          buffer.flip();
          // After flip(), position is 0 and limit is the number of
          // bytes just read, so this writes exactly the readable window.
          zip.write(buffer.array(), buffer.position(), buffer.limit());
          buffer.clear();
        }
      } finally {
        zip.closeEntry();
        readChannel.close();
      }
    }
  } finally {
    zip.flush();
    zip.close();
    outputChannel.close();
  }
}
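The buffer handling in the read loop is the easiest part to get wrong. The self-contained sketch below reproduces the same read/flip/write cycle using plain java.nio channels over in-memory data, so it runs without App Engine at all; the class and method names are mine, invented for the demo.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipLoopDemo {

    // Same pattern as the GCS method: read into a buffer, flip, write the
    // readable window into the zip entry, clear, repeat until EOF (-1).
    static byte[] zipSingleEntry(String name, byte[] content) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(bytes);
             ReadableByteChannel in = Channels.newChannel(
                     new ByteArrayInputStream(content))) {
            zip.putNextEntry(new ZipEntry(name));
            ByteBuffer buffer = ByteBuffer.allocate(8); // tiny, forces several passes
            while (in.read(buffer) >= 0) {
                buffer.flip();
                zip.write(buffer.array(), buffer.position(), buffer.limit());
                buffer.clear();
            }
            zip.closeEntry();
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] zipped = zipSingleEntry("hello.txt",
                "hello, cloud storage".getBytes("UTF-8"));
        // Read the archive back to prove the round trip works.
        try (ZipInputStream in = new ZipInputStream(
                new ByteArrayInputStream(zipped))) {
            ZipEntry entry = in.getNextEntry();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] chunk = new byte[64];
            int n;
            while ((n = in.read(chunk)) >= 0) {
                out.write(chunk, 0, n);
            }
            System.out.println(entry.getName() + ": "
                    + new String(out.toByteArray(), "UTF-8"));
        }
    }
}
```

The deliberately tiny buffer forces multiple iterations of the loop, exercising the flip/clear bookkeeping that a single oversized read would hide.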

Saturday, July 6, 2013

"Smart" App Engine devserver restarts for a faster development lifecycle

We just started a new App Engine Java project using the appengine-maven-plugin and the Eclipse IDE (Juno). We use the appengine:devserver goal to start the devserver. It basically builds the entire project (compile, test and package) and then launches the devserver pointing it at the generated webapp directory, which by default is ${project.build.directory}/${project.build.finalName}.


The Problem

No edit made in the source directory is recognized until the server is restarted: neither static content nor newly compiled classes. This is expected, since the devserver monitors only the target directory.
It makes for a very "bureaucratic" and unproductive development environment: we need to stop and restart the server even for a single CSS line change.


The Dream

Achieve the same productivity level we have when working with dynamic-language development environments (e.g. Python): just hit F5 in the browser to see changes to static files, and have the server reload automatically every time a Java class or descriptor file is changed and compiled.


The Solution 

Using a little Ant-foo, we were able to create a target which synchronizes both directories.
The snippet below uses the sync Ant task to perform the static content synchronization. Notice that everything inside src.webapp.dir is sync'ed except for the three directories declared in the preserveintarget element; since they exist only in the target directory, they would otherwise be deleted. A second sync then synchronizes the compiled classes.

<sync verbose="true" todir="${target.webapp.dir}" includeEmptyDirs="true">
 <fileset dir="${src.webapp.dir}" />
 <preserveintarget>
     <!-- Ignore the directories below -->
  <include name="WEB-INF/lib/**" />
  <include name="WEB-INF/classes/**" />
  <include name="WEB-INF/appengine-generated/**" />
 </preserveintarget>
</sync>

<sync verbose="true" todir="${target.webapp.dir}/WEB-INF/classes">
 <fileset dir="${basedir}/target/classes" />
</sync>

Then we attached it to an Eclipse builder, which is triggered every time a change is made in the project (with the "Build automatically" flag enabled).
The same behavior can be achieved by creating a dedicated Maven profile and combining the m2e lifecycle mappings with the maven-antrun-plugin.
Something like this:

<profile>
    <id>m2e</id>
    <activation>
        <property>
            <name>m2e.version</name>
        </property>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-antrun-plugin</artifactId>
                <version>1.7</version>
                <executions>
                    <execution>
                        <phase>process-classes</phase>
                        <goals>
                            <goal>run</goal>
                        </goals>
                        <configuration>
                            <target>
                                <property name="target.webapp.dir" value="${project.build.directory}/${project.build.finalName}" />
                                <property name="src.webapp.dir" value="${basedir}/src/main/webapp" />
                                <sync verbose="true" todir="${target.webapp.dir}" includeEmptyDirs="true">
                                    <fileset dir="${src.webapp.dir}" />
                                    <preserveintarget>
                                        <include name="WEB-INF/lib/**" />
                                        <include name="WEB-INF/classes/**" />
                                        <include name="WEB-INF/appengine-generated/**" />
                                    </preserveintarget>
                                </sync>
                                <sync verbose="true" todir="${target.webapp.dir}/WEB-INF/classes">
                                    <fileset dir="${basedir}/target/classes" />
                                </sync>
                            </target>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
        <pluginManagement>
            <plugins>
                <!-- This plugin's configuration is used to store Eclipse m2e settings 
      only. It has no influence on the Maven build itself. -->
                <plugin>
                    <groupId>org.eclipse.m2e</groupId>
                    <artifactId>lifecycle-mapping</artifactId>
                    <version>1.0.0</version>
                    <configuration>
                        <lifecycleMappingMetadata>
                            <pluginExecutions>
                                <pluginExecution>
                                    <pluginExecutionFilter>
                                        <groupId>org.apache.maven.plugins</groupId>
                                        <artifactId>maven-antrun-plugin</artifactId>
                                        <versionRange>[1.6,)</versionRange>
                                        <goals>
                                            <goal>run</goal>
                                        </goals>
                                    </pluginExecutionFilter>
                                    <action>
                                        <execute>
                                            <runOnIncremental>true</runOnIncremental>
                                        </execute>
                                    </action>
                                </pluginExecution>
                            </pluginExecutions>
                        </lifecycleMappingMetadata>
                    </configuration>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>
</profile>

Wednesday, March 6, 2013

How to reference the current JMeter script's base path?

I use lots of JavaScript in my JMeter test plans. I usually keep the code in the "Script" text area, either in BSF or JSR223 assertions and/or processors.
Sometimes, however, I'd rather keep the scripts in a separate (external) file to ease maintenance.
Unlike the "CSV Data Set Config" element, the "JSR223 Assertion" "Script File" property does not use the current running script's directory as the base path; it uses the user.dir system property instead.

The problem is that I normally organize my test assets following this directory layout:

/my_project/my_test.jmx       # JMX files
/my_project/js/script.js      # script files
/my_project/data/my_test.csv  # CSV files

In order to reference scripts with paths relative to the current JMX file, I use the FileServer class in conjunction with the __javaScript function, like this:

${__javaScript(org.apache.jmeter.services.FileServer.getFileServer().getBaseDir())}/js/script.js
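The effect of that expression is just base-directory resolution, which can be sketched in plain Java. The class and method below are invented for the demo; inside JMeter the real base directory comes from FileServer.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ScriptPathDemo {

    // Resolve a script path relative to a base directory, mirroring what
    // the __javaScript/FileServer expression does inside the JMX file.
    static String scriptPath(String baseDir, String relative) {
        Path resolved = Paths.get(baseDir).resolve(relative);
        // Normalize separators so the demo output is stable across OSes.
        return resolved.toString().replace('\\', '/');
    }

    public static void main(String[] args) {
        System.out.println(scriptPath("/my_project", "js/script.js"));
    }
}
```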

Saturday, November 10, 2012

Exposing an API for CEP (Brazilian postal code) lookup with Google Cloud Endpoints

Google Cloud Endpoints was launched at Google I/O 2012 (for now, only for trusted testers).
It is a new GAE service that makes publishing RESTful or JSON-RPC APIs much easier.
Actually, the conveniences go far beyond the server side. The GPE (Google Plugin for Eclipse) now includes a "generator" that, given an API, generates the code needed to access it from clients: Android (Java) and/or iOS (Objective-C). It is also possible to access the services via JavaScript using the Google APIs Client Library for JavaScript (the same library used to consume Google's own APIs).


To try this new service out, I signed up for the trusted testers program and created an application that exposes a REST API for CEP lookup, using a Java CEP-lookup library I wrote a while ago.
The application has a single class: CepEndpoint. The gist below shows how simple the code is and how a few annotations are enough to publish an endpoint composed of a handful of services.
[embedded gist]

To test the published API you can use the Google APIs Explorer or the small web client I created, which invokes the same endpoint through the JavaScript API.
[embedded gist]

If you take a look at the source code, you will see that I also took the opportunity to practice HTML5 application development using AngularJS and Bootstrap.

Use the link below to access the application:
http://busca-cep.appspot.com/

Setting up Jenkins on EC2 using AWS CloudFormation (including nginx as a reverse proxy)

I lost count of how many times I had to set up a CI server. Being a big fan of the concept of "Infrastructure as Code," I promised myself that the next time I'd do it differently (I needed to have it automated!).

Well, the day has come. The stack (using CloudFormation terms) I decided to create is composed of a basic Jenkins installation (standalone + Winstone) running as a daemon on an Amazon Linux based EC2 instance. It also includes Nginx as a reverse proxy and a dedicated volume for storing the JENKINS_HOME files and other artifacts.

Obviously, a much easier, simpler and (I think) even cheaper alternative would be to use some PaaS offering (Jenkins as a Service).

However, I ended up using AWS CloudFormation to automate the provisioning of the AWS resources (IAM user, security group, EBS volumes, EC2 instance) and its cloud-init features to install and configure the necessary packages: external repository additions, yum-based package installations, configuration file adjustments, EBS volume setup, etc. Most of the installation and configuration logic lives in a shell script embedded in the UserData property.

The resulting template is right below. The input parameters are the instance type, the JENKINS_HOME volume size in gigabytes, and the EC2 key name. The output is the Jenkins server URL!

[embedded gist]