Rahul Somasunderam

Programmer, Cyclist, Trivia Junkie.


JUnit Results on GitHub Actions

16 August 2021

GitHub Actions is a very accessible CI solution for OSS projects. One shortcoming of most OSS-accessible CI systems is that accessing test results is difficult. Compare just about any of them to Jenkins' JUnit results view and they come up short. With a little effort, though, you can get something close enough.

Using Build Scans

I tend to use Gradle for most of my Java projects, so this guide uses Gradle. It depends on a feature called Gradle Build Scans.

First, let’s enable Gradle Build Scans on your build by adding this to your settings.gradle

plugins {
  id "com.gradle.enterprise" version "3.6.3"
}

gradleEnterprise {
  buildScan {
    termsOfServiceUrl = "https://gradle.com/terms-of-service"
    termsOfServiceAgree = "yes"
    publishAlwaysIf(System.getenv("CI") != null) (1)
  }
}
1 This will publish the build scan only on CI, i.e. when the CI environment variable is set. You can replace it with publishAlways() if you want to always publish build scans.

If you now run a Gradle build, it will emit the build scan URL to the console

# ./gradlew build

...
...
BUILD SUCCESSFUL in 1m 0s
17 actionable tasks: 17 executed

Publishing build scan...
https://gradle.com/s/j67twx3r5jzqm (1)
1 This is the URL that contains a lot of information on the build, including the JUnit results.

Getting to this URL from a GitHub Pull Request is a bit of a problem. So let’s extract it to a file first.

import com.gradle.scan.plugin.PublishedBuildScan

gradleEnterprise {
  buildScan {
    // ... previous configuration
    buildScanPublished { PublishedBuildScan scan ->
      file("build/gradle-scan.md").text = (1)
          """Gradle Build Scan - [`${scan.buildScanId}`](${scan.buildScanUri})"""
    }
  }
}
1 This will write some markdown to a file which we can then use as a comment on the pull request.

Now when you run the same command, you’ll find a file in the build directory called gradle-scan.md that looks like this

Gradle Build Scan - [`mor6ne7ktkgwy`](https://gradle.com/s/mor6ne7ktkgwy)

Now, let’s get GitHub Actions to do something with it.

I’ll assume that you have a file called .github/workflows/build-pr.yml that looks like this

name: Build PR

on:
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      # Other things

      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          java-version: 11
          distribution: zulu

      - name: Build with Gradle
        run: ./gradlew check build --stacktrace --parallel

Let’s add 3 more steps to it

      - id: get-comment-body (1)
        if: always()
        run: |
          body=$(cat build/gradle-scan.md)
          body="${body//'%'/'%25'}"
          body="${body//$'\n'/'%0A'}"
          body="${body//$'\r'/'%0D'}"
          echo ::set-output name=body::$body

      - name: Find Comment
        uses: peter-evans/find-comment@v1
        if: always()
        id: fc (2)
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-author: 'github-actions[bot]'
          body-includes: Gradle Build Scan

      - name: Create or update comment (3)
        uses: peter-evans/create-or-update-comment@v1
        if: always()
        with:
          comment-id: ${{ steps.fc.outputs.comment-id }}
          issue-number: ${{ github.event.pull_request.number }}
          body: ${{ steps.get-comment-body.outputs.body }}
          edit-mode: replace
1 The first step reads the content of the file and escapes it into something that can be set as a step output and used as the body of an HTTP request.
2 The second step checks whether we’ve already commented on the PR. If we have, it captures the id of that comment so we can update it.
3 This step takes the comment body and the comment id (if available), and upserts a comment on the Pull Request.

Using GitHub Checks

This approach uses GitHub Checks to achieve a similar effect. The nice thing is that it annotates your Pull Request with the results of failed tests.

      - name: Publish Test Report
        uses: mikepenz/action-junit-report@v2 (1)
        if: always()
        with:
          report_paths: '**/build/test-results/test/TEST-*.xml'
1 This action reads the JUnit XML reports produced by the test task and publishes them as a check on the Pull Request, annotating the failed tests.

Comparison

Using GitHub checks is nice - it puts things right in your Pull Request. However, it doesn’t give you access to all the information you expect to see from a JUnit report - STDOUT, STDERR and the stacktrace.

Gradle Build Scans give you access to that information, though you still have to navigate to another site to get it.

The good news is you can use both at the same time.

Ratpack with Non-Blocking CXF Webservices and RxJava

06 September 2016

In this post we’re going to look at what you can do to use a CXF client in a non-blocking manner inside of Ratpack.

By default, CXF offers a blocking client called the HTTP Transport. If you were to use CXF that way, you could wrap CXF calls inside a Blocking promise. While that does the job, it limits your application: every in-flight call ties up a thread on the blocking thread pool.
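
In code, that blocking approach looks roughly like this (a sketch; servicePort and request stand for a generated CXF port and a prepared request, as described later in this post):

import ratpack.exec.Blocking
import ratpack.exec.Promise

// Run the synchronous CXF call on Ratpack's blocking thread pool so the
// event loop threads are not tied up while the SOAP call is in flight.
Promise<MCCIIN000002UV01> response = Blocking.get {
  servicePort.pixConsumerPRPAIN201302UV02(request)
}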

Since 3.0.0, CXF also offers rt-transports-http-netty-client. This uses Netty, and can be used in a non-blocking manner.
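
To use it, you add the corresponding artifact next to your other CXF dependencies. In a Gradle build that would look something like this (the version shown is an assumption; match it to the CXF version you already use):

dependencies {
  // Netty-based HTTP transport for CXF; keep the version in sync with your other CXF modules
  compile 'org.apache.cxf:cxf-rt-transports-http-netty-client:3.1.7'
}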

I’m going to take ihe-iti as an example of a SOAP library. Up until version 0.3, it only supported the synchronous APIs. So if you took a WSDL like PIXConsumer, this is what your generated PortType would look like

PIXConsumerPortType.java
@WebService(name = "PIXConsumer_PortType", targetNamespace = "urn:ihe:iti:pixv3:2007")
@SOAPBinding(parameterStyle = ParameterStyle.BARE)
@XmlSeeAlso({ObjectFactory.class})
public interface PIXConsumerPortType {
    @WebMethod(operationName = "PIXConsumer_PRPA_IN201302UV02",
            action = "urn:hl7-org:v3:PRPA_IN201302UV02")
    @WebResult(name = "MCCI_IN000002UV01", targetNamespace = "urn:hl7-org:v3",
            partName = "Body")
    MCCIIN000002UV01 pixConsumerPRPAIN201302UV02(
            @WebParam(name = "PRPA_IN201302UV02", targetNamespace = "urn:hl7-org:v3",
                    partName = "Body") PRPAIN201302UV02 var1);
}

If you told wsdl2java to generate the async APIs as well, you’ll get something like this

PIXConsumerPortType.java
@WebService(name = "PIXConsumer_PortType", targetNamespace = "urn:ihe:iti:pixv3:2007")
@SOAPBinding(parameterStyle = ParameterStyle.BARE)
@XmlSeeAlso({ObjectFactory.class})
public interface PIXConsumerPortType {
    @WebMethod(operationName = "PIXConsumer_PRPA_IN201302UV02",
            action = "urn:hl7-org:v3:PRPA_IN201302UV02")
    Response<MCCIIN000002UV01> pixConsumerPRPAIN201302UV02Async(
            @WebParam(name = "PRPA_IN201302UV02", targetNamespace = "urn:hl7-org:v3",
                    partName = "Body") PRPAIN201302UV02 var1);

    @WebMethod(operationName = "PIXConsumer_PRPA_IN201302UV02",
            action = "urn:hl7-org:v3:PRPA_IN201302UV02")
    Future<?> pixConsumerPRPAIN201302UV02Async(
            @WebParam(name = "PRPA_IN201302UV02", targetNamespace = "urn:hl7-org:v3",
                    partName = "Body") PRPAIN201302UV02 var1,
            @WebParam(name = "PIXConsumer_PRPA_IN201302UV02Response", targetNamespace = "",
                    partName = "asyncHandler") AsyncHandler<MCCIIN000002UV01> var2);

    @WebMethod(operationName = "PIXConsumer_PRPA_IN201302UV02",
            action = "urn:hl7-org:v3:PRPA_IN201302UV02")
    @WebResult(name = "MCCI_IN000002UV01", targetNamespace = "urn:hl7-org:v3", partName = "Body")
    MCCIIN000002UV01 pixConsumerPRPAIN201302UV02(
            @WebParam(name = "PRPA_IN201302UV02", targetNamespace = "urn:hl7-org:v3",
                    partName = "Body") PRPAIN201302UV02 var1);
}

It is tempting to use RxJava and wrap this method into an Observable.

Response<MCCIIN000002UV01> pixConsumerPRPAIN201302UV02Async(PRPAIN201302UV02 var1);

After all, RxJava has a method with the signature

public static <T> Observable<T> from(Future<? extends T> future)

And javax.xml.ws.Response<T> extends java.util.concurrent.Future<T>
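
So the tempting version looks something like this (a sketch; servicePort and request are assumed to be set up already):

import rx.Observable

// Wrap the JAX-WS Response (which is a Future) directly into an Observable
Observable<MCCIIN000002UV01> tempting =
    Observable.from(servicePort.pixConsumerPRPAIN201302UV02Async(request))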

Unfortunately, that will not work as intended. You will be able to call your webservice and get data back, but Observable.from(Future) blocks a thread while it waits on the Future, so your throughput is limited by the number of threads.

To get it to work correctly, you’ll have to use the other asynchronous method, i.e.

Future<?> pixConsumerPRPAIN201302UV02Async(
        PRPAIN201302UV02 var1, AsyncHandler<MCCIIN000002UV01> var2);

You’ll have to turn the response into a Ratpack Promise first, before turning it into an Observable.

This is how you turn the response into a Promise

import groovy.transform.TupleConstructor
import ratpack.exec.Downstream

import javax.xml.ws.AsyncHandler
import javax.xml.ws.Response

/**
 * Bridges SOAP's AsyncHandler to Ratpack's Promise.
 */
@SuppressWarnings('CatchThrowable')
@TupleConstructor
class PromiseAsyncHandlerBridge<T> implements AsyncHandler<T> {
  Downstream<T> downstream

  @Override
  void handleResponse(Response<T> res) {
    try {
      downstream.success(res.get())
    } catch (Throwable t) {
      downstream.error(t)
    }
  }

}

Then to call your webservice and get an observable, you can do this

// assuming the standard Ratpack helpers for Promise creation and RxJava interop
import static ratpack.exec.Promise.async
import static ratpack.rx.RxRatpack.observe

Observable<MCCIIN000002UV01> pixResponse =
    observe(async { Downstream down ->
        servicePort.pixConsumerPRPAIN201302UV02Async(
                request, new PromiseAsyncHandlerBridge<MCCIIN000002UV01>(down)
        )
    })

Ratpack Assets in Development

02 September 2016

This post details how to stay productive with Ratpack when you’re doing front-end development.

The asset pipeline has a nice integration with Ratpack that enables you to optimize your front-end resources for production mode. However, while you’re developing, this makes debugging harder.

If your front-end code has a lot of scripts and stylesheets, you would want to use the asset-pipeline to optimize delivery in production. For example, you would have an app.css and an app.js that look like this:

app.js
//= require /main
//= require /util
//= require /group1/module1
//= require /group1/module2
//= require /group2/module1
app.css
/*
*= require /bower_components/bootstrap/less/bootstrap
*= require /bower_components/font-awesome/less/font-awesome
*= require /base
*= require /module1
*= require /module2
*/

Now in your html, you can include just these two assets like this (assuming you’re using html templates).

index.html
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="/app.css" />
  </head>
  <body>
    <script src="/app.js"></script>
  </body>
</html>

For completeness, here’s the ratpack.groovy.

ratpack.groovy
import asset.pipeline.ratpack.AssetPipelineHandler
import asset.pipeline.ratpack.AssetPipelineModule
import ratpack.groovy.template.TextTemplateModule
import ratpack.server.ServerConfig

import static ratpack.groovy.Groovy.groovyTemplate
import static ratpack.groovy.Groovy.ratpack
import static ratpack.jackson.Jackson.json

ratpack {

  bindings {
    module(AssetPipelineModule) {
      it.url("/")
      it.sourcePath("../../../src/assets")
    }
    module TextTemplateModule
  }

  handlers {
    all AssetPipelineHandler
    all {
      render groovyTemplate('index.html')
    }
  }
}

This will serve the compiled CSS and JS to the browser as single files. If you want to debug your code, this can pose a bit of a problem. The same asset-pipeline, when used with Grails, does allow you to see individual files served in development mode.

As of writing (Ratpack 1.4.0 and asset-pipeline 2.6.4), there is no way to get this to work out of the box, because Ratpack does not have a standard taglib that works across all template types.

I’m going to assume we’re using html templates, but this can be adapted to any template type.

First off, we need to create a helper class that provides taglib-like rendering support.

src/main/groovy/com/example/AssetTag.groovy
package com.example

import asset.pipeline.AssetPipeline
import com.google.inject.Inject
import ratpack.server.ServerConfig

import java.util.function.Consumer

/**
 * Helps with calling out assets in Groovy Templates
 */
class AssetTag {
  @Inject ServerConfig serverConfig

  String javascript(String uri) {
    if (serverConfig.isDevelopment()) {
      StringBuilder tags = new StringBuilder()
      dependencies(uri, 'js', 'application/javascript') {
        tags.append("<script src=\"${it.path}?compile=false\"></script>")
      }
      tags.toString()
    } else {
      "<script src=\"${uri}\"></script>"
    }
  }

  String stylesheet(String uri) {
    if (serverConfig.isDevelopment()) {
      StringBuilder tags = new StringBuilder()
      dependencies(uri, 'css', 'text/css') {
        tags.append("<link rel=\"stylesheet\" href=\"${it.path}?compile=false\"/>")
      }
      tags.toString()
    } else {
      "<link rel=\"stylesheet\" href=\"${uri}\"/>"
    }
  }

  private void dependencies(String src, String ext, String contentType, Consumer<Map> callback) {

    final int lastDotIndex = src.lastIndexOf('.')
    final def uri
    final def extension
    if (lastDotIndex >= 0) {
      uri = src.substring(0, lastDotIndex)
      extension = src.substring(lastDotIndex + 1)
    } else {
      uri = src
      extension = ext
    }

    AssetPipeline.getDependencyList(uri, contentType, extension).each {
      callback.accept(it as Map)
    }
  }
}

Next up, we bind that into our Ratpack application

ratpack.groovy
import asset.pipeline.ratpack.AssetPipelineHandler
import asset.pipeline.ratpack.AssetPipelineModule
import com.example.AssetTag
import ratpack.groovy.template.TextTemplateModule
import ratpack.server.ServerConfig

import static ratpack.groovy.Groovy.groovyTemplate
import static ratpack.groovy.Groovy.ratpack
import static ratpack.jackson.Jackson.json

ratpack {

  bindings {
    module(AssetPipelineModule) {
      it.url("/")
      it.sourcePath("../../../src/assets")
    }
    module TextTemplateModule
    bind(AssetTag)
  }

  handlers {
    all AssetPipelineHandler
    all { AssetTag asset ->
      render groovyTemplate('index.html', asset: asset)
    }
  }
}

Finally, we use it in the html template.

index.html
<!DOCTYPE html>
<html>
  <head>
    ${model.asset.stylesheet('/app.css')}
  </head>
  <body>
    ${model.asset.javascript('/app.js')}
  </body>
</html>

Now, when you develop, you’ll see individual files in your browser. And when you package your app for deployment, you’ll still have your optimized version.


Thanks to @davydotcom for creating the awesome asset pipeline library and pointing me in this direction when I first ran into the problem.

Gradle Equivalent of Maven Mirror

31 August 2016

Maven has a way to configure mirrors in settings.xml. Gradle doesn’t have a direct analogue, but what it does have is really nice, though under-documented.

Let’s say you’re using gradle at work, and you want to use your company’s maven mirror. You might want to do this for numerous reasons.

  • The connection to the open internet is very slow

  • The connection to the open internet is protected by a proxy that prevents access to Maven Central

  • You want to audit what is being used in projects at work

Let’s say your maven mirror’s url is http://repo.internal.example.com/releases

Just create a file called init.gradle under $USER_HOME/.gradle with this in it

allprojects { (1)
  buildscript { (2)
    repositories {
      mavenLocal() (3)
      maven { url "http://repo.internal.example.com/releases" } (4)
    }
  }
  repositories { (5)
    mavenLocal()
    maven { url "http://repo.internal.example.com/releases" }
  }
}
1 This says we want to apply to all projects on this machine.
2 This says you want to modify the build script dependencies too. This will make gradle plugins and build dependencies also use your mirror.
3 This says to look in maven local first. That’s typically your $USER_HOME/.m2/repository directory.
4 This says to look at your maven mirror next if the artifact could not be found.
5 This says to do the same set of changes for non-build dependencies of your gradle projects.

You can find more information on what else you can do with init scripts here.
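
If you want to verify that the mirror is actually being used, one quick (hypothetical) check is to add a task like this to any build.gradle and run ./gradlew printRepos:

task printRepos {
  doLast {
    // list the repositories this project resolves dependencies from
    project.repositories.each { repo ->
      println "${repo.name} -> ${repo.url}"
    }
  }
}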

NexusMonitor

16 March 2015

 

At @CertifyData, we run a bunch of different build systems including maven, gradle and grails. All of them produce artifacts that can be uploaded to a maven repository. In fact we use a maven repository to deliver content to our end users.

A somewhat boring part of our release process was waiting for Jenkins to finish its build, then copying the link from Nexus and sending it out to our deployment team.

Given how long some projects take to build with all their dependencies, sometimes a developer pushing the release button would forget to send out the email. Also capturing the link and details for the email was a repetitive process prone to occasional errors.

So I came up with NexusMonitor. It is a groovy/gradle app which can monitor a Nexus repository’s RSS feed and send out emails based on a template.

Configuring

Let’s get to a sample usage. This describes what goes into our NexusMonitorConfig.groovy, which is the Groovy equivalent of a properties file.

First you need to configure your email settings. This is self explanatory.

nexusmonitor {
  from {
    address = 'nexus@example.com'
    personal = 'Nexus'
  }
  mail {
    host = 'smtp.example.com'
    port = 587
    username = 'user@example.com'
    password = 'password'
    javaMailProperties = [
        'mail.smtp.auth' : true,
        'mail.smtp.starttls.enable' : false
    ]
  }
}

Next, you need to configure the feeds you’re interested in monitoring.

nexusmonitor.feeds = [
    new Repository(
        name: 'repo1',
        feedUrl: 'http://domain/nexus/service/local/feeds/',
        repoUrl: 'http://domain/nexus/content/repositories/public/',
        recipients: ['a@example.com', 'b@example.com']
    ),
    new Repository(
        name: 'repo2',
        feedUrl: 'http://otherdomain/nexus/service/local/feeds/',
        repoUrl: 'http://otherdomain/nexus/content/repositories/public/',
        recipients: ['c@example.com', 'd@example.com']
    )
]

Running

Now you can run java -jar nexus-monitor.jar after you’ve downloaded the jar file from Central.

Every time this runs, it records the last run of each repo in a JSON file called lastrun.json.

Running this manually can be boring, so I created a cron job on the machine which runs this app.

 * * * * * java -jar /root/nexus-monitor-1.0.jar >> /root/NexusMonitor.log

It runs every minute and scans the RSS feed of your repository.

How does my team use this?

We have an engineering team that is responsible for internally releasing applications, which are then tested by our business analysts. You can think of that as a UAT phase. Once that is done, they want to make the release available to our ops team, which is responsible for installing these apps on customer machines.

When Engineering decides it’s time to do a release, we use git flow to get a new release onto the master branch, and Jenkins starts turning code into artifacts and putting them in the right repository.

Once a new artifact is available in these scanned repositories, the feed is automatically updated by Nexus. Now NexusMonitor kicks in and sees the new artifact and sends out an email to the business analysts and engineers confirming the availability of the new release and instructions to download the build.

We have some scripts prepared for them to run when they are happy with the build; these download the artifacts from one repository and upload them to another. This second repository is also scanned by NexusMonitor, which then sends out an email notifying the ops team what they can do with the build.

How do we send custom instructions?

By default, there is a template embedded in the app called basic.html which gets used for emails. However there is an attribute in our definition of repositories called name. You could create a file called repo1.html and that would get picked up if your repository name in the config was repo1.

This is a good starting point for your email templates - basic.html.


Updates: Removed reference to snapshot in favor of running released code.

Character Encodings

15 January 2014

I wrote this a while back in an office blog, and then realized that a lot of people run into this problem, and would benefit from reading this.

Character encoding is what decides how Strings, which are first-class citizens in most modern programming languages, get converted into byte arrays. Byte arrays are what get sent over the wire, or written to disk. In the reverse direction, the encoding decides how a byte array gets converted back into a String.

This program takes strings in three different languages (plus one containing a ® symbol) and shows how each is affected by different charsets.

import java.nio.charset.Charset

void reportOnText(String text) {
  final encodings = [
      'ASCII', 'ISO-8859-1', 'windows-1251', 'UTF-8', 'UTF-16', 'UTF-32'
  ]
  println ''
  println text
  println text.replaceAll(/./, '=')
  encodings.each { enc ->
    def theBytes = text.getBytes(Charset.forName(enc))
    def reparse = new String(theBytes, enc)
    println "${enc.padRight(12)}: ${theBytes.encodeHex()} --> ${reparse}"
  }
}

reportOnText('Happy New Year!')
reportOnText('¡Feliz Año Nuevo!')
reportOnText('新年あけましておめでとうございます!')
reportOnText('KYPHON® Balloon Kyphoplasty')

Let’s look at the output before we dig into the explanation

Happy New Year!
===============
ASCII       : 4861707079204e6577205965617221 --> Happy New Year!
ISO-8859-1  : 4861707079204e6577205965617221 --> Happy New Year!
windows-1251: 4861707079204e6577205965617221 --> Happy New Year!
UTF-8       : 4861707079204e6577205965617221 --> Happy New Year!
UTF-16      : feff004800610070007000790020004e00650077002000590065006100720021 --> Happy New Year!
UTF-32      : 0000004800000061000000700000007000000079000000200000004e0000006500000077000000200000005900000065000000610000007200000021 --> Happy New Year!

¡Feliz Año Nuevo!
=================
ASCII       : 3f46656c697a20413f6f204e7565766f21 --> ?Feliz A?o Nuevo!
ISO-8859-1  : a146656c697a2041f16f204e7565766f21 --> ¡Feliz Año Nuevo!
windows-1251: 3f46656c697a20413f6f204e7565766f21 --> ?Feliz A?o Nuevo!
UTF-8       : c2a146656c697a2041c3b16f204e7565766f21 --> ¡Feliz Año Nuevo!
UTF-16      : feff00a100460065006c0069007a0020004100f1006f0020004e007500650076006f0021 --> ¡Feliz Año Nuevo!
UTF-32      : 000000a100000046000000650000006c000000690000007a0000002000000041000000f10000006f000000200000004e0000007500000065000000760000006f00000021 --> ¡Feliz Año Nuevo!

新年あけましておめでとうございます!
==================
ASCII       : 3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f --> ??????????????????
ISO-8859-1  : 3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f --> ??????????????????
windows-1251: 3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f --> ??????????????????
UTF-8       : e696b0e5b9b4e38182e38191e381bee38197e381a6e3818ae38281e381a7e381a8e38186e38194e38196e38184e381bee38199efbc81 --> 新年あけましておめでとうございます!
UTF-16      : feff65b05e7430423051307e30573066304a3081306730683046305430563044307e3059ff01 --> 新年あけましておめでとうございます!
UTF-32      : 000065b000005e7400003042000030510000307e00003057000030660000304a000030810000306700003068000030460000305400003056000030440000307e000030590000ff01 --> 新年あけましておめでとうございます!

KYPHON® Balloon Kyphoplasty
===========================
ASCII       : 4b5950484f4e3f2042616c6c6f6f6e204b7970686f706c61737479 --> KYPHON? Balloon Kyphoplasty
ISO-8859-1  : 4b5950484f4eae2042616c6c6f6f6e204b7970686f706c61737479 --> KYPHON® Balloon Kyphoplasty
windows-1251: 4b5950484f4eae2042616c6c6f6f6e204b7970686f706c61737479 --> KYPHON® Balloon Kyphoplasty
UTF-8       : 4b5950484f4ec2ae2042616c6c6f6f6e204b7970686f706c61737479 --> KYPHON® Balloon Kyphoplasty
UTF-16      : feff004b005900500048004f004e00ae002000420061006c006c006f006f006e0020004b007900700068006f0070006c0061007300740079 --> KYPHON® Balloon Kyphoplasty
UTF-32      : 0000004b0000005900000050000000480000004f0000004e000000ae0000002000000042000000610000006c0000006c0000006f0000006f0000006e000000200000004b0000007900000070000000680000006f000000700000006c00000061000000730000007400000079 --> KYPHON® Balloon Kyphoplasty

As you can see, all the encodings we use do a great job with plain English text. That’s because all encodings support the characters in the English alphabet. As we start using more complex alphabets, we start seeing the difference between the universal encodings and the regional encodings.

ASCII and windows-1251 have very limited support for anything other than English.

ISO-8859-1 improves support for Spanish, but Japanese is still broken.

All the UTF encodings are great for all languages.

Among the UTF charsets, UTF-32 takes 32 bits for every character. UTF-16 takes a minimum of 16 bits per character. UTF-8 takes a minimum of 8 bits per character, and uses additional bytes only for characters outside the ASCII range.

What if we change charsets after encoding?

This is precisely what happens when one system encodes a message in one charset and another tries to parse it using a different charset.

import java.nio.charset.Charset
void testWrongEncoding(String text) {
  def theBytes = text.getBytes(Charset.forName('ISO-8859-1'))
  def reparse = new String(theBytes, 'UTF-8')
  println "${text}: ${theBytes.encodeHex()} --> ${reparse}"
}
println "Wrong Encoding"
println "Wrong Encoding".replaceAll(/./,'=')
testWrongEncoding('Happy New Year!')
testWrongEncoding('¡Feliz Año Nuevo!')
testWrongEncoding('新年あけましておめでとうございます!')
testWrongEncoding('KYPHON® Balloon Kyphoplasty')

This is the result

Wrong Encoding
==============
Happy New Year!: 4861707079204e6577205965617221 --> Happy New Year!
¡Feliz Año Nuevo!: a146656c697a2041f16f204e7565766f21 --> �Feliz A�o Nuevo!
新年あけましておめでとうございます!: 3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f --> ??????????????????
KYPHON® Balloon Kyphoplasty: 4b5950484f4eae2042616c6c6f6f6e204b7970686f706c61737479 --> KYPHON� Balloon Kyphoplasty

As you can see, there are problems when bytes written as ISO-8859-1 are parsed as UTF-8. This is what happens when another system uses a different encoding and we parse its output as UTF-8.

Why UTF-8

You might already have guessed why we use UTF. It’s because the UTF encodings can support all the major characters we are likely to run into. But why UTF-8 specifically?

Because UTF-8 is the most compact encoding for the typical, mostly-ASCII inputs we receive.
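
A quick way to see this is to compare how many bytes each UTF encoding needs for a mostly-ASCII string (a small sketch in the spirit of the program above):

import java.nio.charset.Charset

def text = 'Happy New Year!'
['UTF-8', 'UTF-16', 'UTF-32'].each { enc ->
  // ASCII characters take one byte in UTF-8, two in UTF-16 and four in UTF-32
  // (UTF-16 also prepends a byte order mark when encoding)
  println "${enc.padRight(6)}: ${text.getBytes(Charset.forName(enc)).length} bytes"
}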

Other Charsets supported by the Java Runtime

Big5, Big5-HKSCS, CESU-8, EUC-JP, EUC-KR, GB18030, GB2312, GBK, IBM-Thai,
IBM00858, IBM01140, IBM01141, IBM01142, IBM01143, IBM01144, IBM01145, IBM01146,
IBM01147, IBM01148, IBM01149, IBM037, IBM1026, IBM1047, IBM273, IBM277, IBM278,
IBM280, IBM284, IBM285, IBM290, IBM297, IBM420, IBM424, IBM437, IBM500, IBM775,
IBM850, IBM852, IBM855, IBM857, IBM860, IBM861, IBM862, IBM863, IBM864, IBM865,
IBM866, IBM868, IBM869, IBM870, IBM871, IBM918, ISO-2022-CN, ISO-2022-JP,
ISO-2022-JP-2, ISO-2022-KR, ISO-8859-1, ISO-8859-13, ISO-8859-15, ISO-8859-16,
ISO-8859-2, ISO-8859-3, ISO-8859-4, ISO-8859-5, ISO-8859-6, ISO-8859-7,
ISO-8859-8, ISO-8859-9, JIS_X0201, JIS_X0212-1990, KOI8-R, KOI8-U, Shift_JIS,
TIS-620, US-ASCII, UTF-16, UTF-16BE, UTF-16LE, UTF-32, UTF-32BE, UTF-32LE,
UTF-8, windows-1250, windows-1251, windows-1252, windows-1253, windows-1254,
windows-1255, windows-1256, windows-1257, windows-1258, windows-31j,
x-Big5-HKSCS-2001, x-Big5-Solaris, x-euc-jp-linux, x-EUC-TW, x-eucJP-Open,
x-IBM1006, x-IBM1025, x-IBM1046, x-IBM1097, x-IBM1098, x-IBM1112, x-IBM1122,
x-IBM1123, x-IBM1124, x-IBM1129, x-IBM1166, x-IBM1364, x-IBM1381, x-IBM1383,
x-IBM29626C, x-IBM300, x-IBM33722, x-IBM737, x-IBM833, x-IBM834, x-IBM856,
x-IBM874, x-IBM875, x-IBM921, x-IBM922, x-IBM930, x-IBM933, x-IBM935, x-IBM937,
x-IBM939, x-IBM942, x-IBM942C, x-IBM943, x-IBM943C, x-IBM948, x-IBM949,
x-IBM949C, x-IBM950, x-IBM964, x-IBM970, x-ISCII91, x-ISO-2022-CN-CNS,
x-ISO-2022-CN-GB, x-iso-8859-11, x-JIS0208, x-JISAutoDetect, x-Johab,
x-MacArabic, x-MacCentralEurope, x-MacCroatian, x-MacCyrillic, x-MacDingbat,
x-MacGreek, x-MacHebrew, x-MacIceland, x-MacRoman, x-MacRomania, x-MacSymbol,
x-MacThai, x-MacTurkish, x-MacUkraine, x-MS932_0213, x-MS950-HKSCS,
x-MS950-HKSCS-XP, x-mswin-936, x-PCK, x-SJIS_0213, x-UTF-16LE-BOM,
X-UTF-32BE-BOM, X-UTF-32LE-BOM, x-windows-50220, x-windows-50221, x-windows-874,
x-windows-949, x-windows-950, x-windows-iso2022jp

Blue Shirt, White Shirt

17 September 2013


Image from Wikipedia. No! That is not Bob.

I call this the Blue Shirt, White Shirt problem. It goes like this:

Your customer, Bob, comes to your garage and says


My car does not work.

It’s actually quite simple, and I figured out what makes it stop. When I wear a blue shirt, it works just fine. But when I wear a white shirt, it doesn’t work.

If you’re a mechanic, you know there’s some serious shit going on there, but your customer has a hypothesis and you don’t. Let’s look into this problem a little deeper.


I woke up last morning and wore a blue shirt. Then I got to work.

Then I came home. The car was still working. You with me?

I had to go to my buddy’s place. So I quickly changed to a white shirt, and tried starting my car. It just wouldn’t.

So I said "let’s go back to my blue shirt". So I went back in, and changed. I got a call from Mom, and minutes later I start the car, and it works just fine.

But I wanted to make sure I’m right so I also took my white shirt with me to his place, and told him about this problem. He doubted me too. So I put on the white shirt and tried starting it, and it wouldn’t start.

Then we went out for dinner in his car, and I spilled some of that Thai Curry on my white shirt. So I had to change back to my blue shirt, because, well anyways my car wouldn’t start with the white shirt.

Now I started the car, and came back home. But to be sure, I put on another white shirt and tried starting the car. It just wouldn’t.

So this morning, I put on this green shirt, and drove to work. Everything was great. I quickly changed to my white shirt, to show my coworker the problem, and as expected the car wouldn’t start.

I decided to bring my car in around my lunch break.

Let me quickly put on my white shirt and show you that I’m not making things up.


Now Bob has done an amazing analysis of the problem, and it’s a great story. The only problem with it is it’s all wrong.

While having a diploma in car repair would help you trust my judgement of this analysis, you can probably tell that anyway.

A lot of customer problems in Software Development get reported this way. And by the time they get to the developer who needs to fix the problem, this is what it looks like

CAR-1034: Car does not start when customer is wearing white shirts.

Severity: Blocker
Type: Bug

What would have helped move things around is

CAR-1034: Investigate intermittent starting trouble on Bob's car.

Severity: Blocker
Type: Support Task

The amount of time wasted by development and QA in reproducing Bob’s "problem" is huge. Perhaps it might have been simpler to find out why Bob’s car does not start when the engine is already warm.

Jenkins, Email Ext and Gmail Grimace

20 March 2013

If you’re working on any project that involves more than one developer and weeks or possibly months of work, you should be using some sort of Continuous Integration. The most common system to use is Jenkins.

If you start using Jenkins and start committing code often, you’ll realize you need email notifications on build status for each commit. The default emails from Jenkins are, to put it mildly, a bit lacking.

The solution is the Email-ext plugin. This lets you customize your emails. It ships with some decent templates, but you’ll want to customize your emails further. I took the approach of taking their template and making it prettier with Zurb’s responsive email templates. I use Mail.app on the Mac and the Mail app on my iPhone. Things were going great until I committed something to GitHub with my Gmail id.

That’s when I learnt about GMail Grimace.

Rewriting all my templates to not use style tags was not an option. Changing the appearance of an HTML element using CSS classes that are variables in your template is super easy. Doing that using if conditions in your template is a pain in the ass.

Then I ran into this SO post. So I decided to use this information and create a pre-send script for the Email-ext plugin. However, I learnt that there were some problems.

So I decided to hack the plugin to do what I want. It’s now part of v2.28.

All you need to do is create a style tag in your email template with an additional attribute data-inline set to true.

<style type="text/css" data-inline="true">
  div.good {
  	background-color: blue;
  }
  div.bad {
  	background-color: red;
  }
</style>
<style type="text/css">
  ... more styles ...
</style>

The plugin will now take this CSS and apply it inline on the matching elements. This makes Gmail happy.

Here’s my Jelly Template.

Functional Programming in Scala

08 January 2013

A couple of months back I took the Functional Programming Principles in Scala course at Coursera. It’s a beautiful course offered by EPFL, specifically Martin Odersky.

Personally, I’ve been working with Java for several years, and at work I use Groovy and Grails. At one point I used Scala in small side projects at work. All these things have influenced one another: things I learn in one language change the way I write code in another. However, there is the imperative programmer’s burden. As a programmer, you’ve been taught by C, C++ and then Java to think in terms of steps.

Learning Groovy, as part of an offering from SpringSource, did bring in closures that let you write slightly more functional code. You could pass closures to collection methods and get cleaner code, i.e. it was easier to write functional code. However, it was just as easy to write bad functional code or imperative code.
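
For example, something like this (a trivial sketch) reads more functionally than the equivalent loop:

def names = ['alice', 'bob', 'carol']

// pass closures to collection methods instead of mutating an accumulator in a loop
def shortNames = names.findAll { it.size() <= 3 }
                      .collect { it.toUpperCase() }

assert shortNames == ['BOB']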

Scala, for one, makes it incredibly hard to use mutable lists, which means the likelihood of a developer being tempted to write bad functional code goes down significantly. Immutability is the default behavior in Scala, so much so that you have a val declaration in addition to the var declaration, to make sure you think twice before writing mutable code. And this has changed how I think about my code back in Groovy and Java.

If you’re a Java programmer, whether or not you plan to use Scala in one of your projects, I recommend you take this course. If for no other reason, then just because it’s free.

The year in fitness

05 January 2013

2012 was a fairly successful year in some respects. In others, it was terrible.

Cycling

I had set a target of 1500 miles and fell very, very short of it. A lot of things caused this - mostly my laziness. However, the good news was that I was able to get back to biking after my rather long hiatus following an accident. I also never did many 50-mile rides; there was just one. Most rides were shorter, though they included some rides to Palomares.

The best thing, however, was that I got closer to my ideal weight. I got better average speeds. I expanded my territory, i.e. got into mountain biking. I did some rides at Skyline and several at Mission Peak. My climbing speed has also improved a lot.

For 2013, I want to revive my goal from 2012, i.e. 1500 miles on my road bike. I have signed up for an 80 miler in May. Also I want to pack in a few mountain rides. And I want to buy my own mountain bike. I’m looking at the '13 Specialized Camber 29. Any suggestions are welcome.

Running

I managed to start running. It took a lot of effort, but I got comfortable running ~10K. The speed isn’t great, but it’s still something. For 2013, I’m thinking of working up to a half marathon.

Otherwise

I took help from a personal trainer to get my shit straight. And that was helpful. If you need help with your fitness goals, do consider consulting a personal trainer. You might think you don’t want to hulk up, and that’s fine, but having a trainer who knows what the hell they’re doing advise you is better than you trying to figure out how to reach your goals.

For 2013, my goal is to keep this going. As a first hurdle, I seem to have screwed up my wrist, again.

What happens at IHE …​

13 October 2012

At the crux of Healthcare IT is a body called Integrating the Healthcare Enterprise (IHE). They are one of the standards bodies in healthcare. They organize this little thing in Chicago every winter called the Connectathon.

It is no secret that my opinion of healthcare technology is not very high. However, I am not alone. Keep calm and carry on!

If you are into healthcare and want to talk to the nice folks from @CertifyData, we’ll be there. And if I may add, it’ll be easy to find us.

Fitness Trackers

25 June 2012

 

The first thing you’re taught when you start learning physics is

If you can’t measure it, you can’t do anything about it.

When cycling

I applied that approach to cycling - I bought a Schwinn computer and then a Cateye, and this made me understand my biking better. I knew how fast I could go, and what my speed was on flats, climbs and descents. With the Schwinn, I had an idea of how many calories I burnt. With the Cateye, I started looking at my cadence.

When not cycling

However, seasonal allergies brought my biking to a halt, and my fitness started declining again. So I decided to sign up for a gym membership - even though I hate using treadmills and stationary bikes.

If I’m walking or biking for 5 minutes, I expect to be farther away from where I started. If I’m not, I get bored.

However, I’ve come to terms with the fact that the gym is the only way I can get back to being fit if I can’t get my miles in when the pollen count is high.

As part of a goodie bag at a race ride I attended this year, I got a pedometer from the good folks at Walgreens. It’s terrible in terms of accuracy and discreetness. One day, I did nothing more than drive to work and return home, and it said I’d walked 5 miles. And it’s noisy; my colleagues thought I was some tin-foil-hat type carrying a Geiger counter with me. Also, pedometers don’t account for other activity I do.

The choices

I saw my trainer at the gym wearing one of those Bodybuggs. That seemed like a good idea, except that you require a subscription to get the best out of it. So I’ve started looking at other alternatives:

  • Fitbit seems to do away with the subscription, but it’s just a 3D accelerometer; that’s like a fancy pedometer. It’s great in that it has an API, and apparently a thriving ecosystem.

  • Then there’s Nike’s Fuelband, which looks great, but doesn’t give any meaningful metrics other than its own fake currency.

  • The third contender is Basis, which so far seems to be vaporware, in that we don’t know much about it except that it will cost $199.

Here’s what I’m looking for in the device:

  • Has either an API or a means to download data to my computer.

  • Measures my activity and gives meaningful metrics.

  • Lets me look at data in fine-grained durations so I can compare past performance.

What does it mean to have an API?

It means that I do not have to pay money to someone so they can gather up my data, and sell it to someone else. My fitness data is my Protected Health Information. I don’t want to pay someone $10 a month so they know when I’ve been pumping iron and have my information sent to the makers of Muscle Milk and when I’m sleeping on my couch for 3 days in a row, send it to Nutrisystem. I would love something like good old Nike+ which merely synced with your iPod and if you chose to, shared it on their website. However it looks like everyone is taking the cloud seriously and there aren’t any products in this category that work that way. I may be wrong here, but that’s what everyone’s advertising looks like.

Do you have any fitness trackers? What do you like about them? How have they helped you? Tweet me at @rahulsom or get me on Quora


Older posts are available in the archive.

© Rahul Somasunderam 2012

With help from JBake.
Hosted on GitHub Pages.
Built at 2024-12-02 18:16:58