my kingdom for a smile :-) by Russ Jackson<br />
<br />
2017-12-25 : Easy Stripe Checkout using AngularJS<br />
<br />
Here's a simple way to use AngularJS to integrate Stripe Checkout into your web page.<br />
<br />
First, in your HTML add the Stripe script reference inside the head tag:<br />
<br />
<head><br />
[angularJS includes here]<br />
<script type="text/javascript" src="https://checkout.stripe.com/checkout.js"></script><br />
</head><br />
<br />
Next, in the body declare a link or button with an ng-click handler to invoke a method in your controller:<br />
<br />
<a href="" ng-click="onStripe('<%= StripeConstants.PUBLIC_API_KEY %>', '<%= request.getAttribute("email") %>')">Stripe Checkout via angularjs</a><br />
<br />
*Note: My page is a JSP and since my user is already signed in I know the email, so I push it into the request object and pull it into my JSP page. Likewise, I load my Stripe public key (encrypted) from a properties file located on my server. Again, I pull that into my JSP and then pass both the user's email and the Stripe public key into the click handler in my controller.<br />
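For reference, the server-side loading of the key from a properties file might look something like the sketch below. The property name "stripe.public.key" is a made-up example (use whatever your file defines), and this omits the decryption step mentioned above:<br />

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class StripeConfig {

    // read the publishable key from a properties source;
    // the property name "stripe.public.key" is a hypothetical example
    public static String loadPublicKey(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        return props.getProperty("stripe.public.key");
    }

    public static void main(String[] args) throws IOException {
        // in a real app the Reader would wrap the properties file on the server
        Reader sample = new StringReader("stripe.public.key=pk_test_123abc");
        System.out.println(loadPublicKey(sample));
    }
}
```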
<br />
That's it for the HTML page. Now on to the controller.<br />
<br />
I'll need two functions - the click handler to invoke Stripe Checkout and a function to handle the Stripe callback with the token representing the payment details.<br />
<br />
<pre class="brush: js">// stripe will call this once it has successfully created a token for the payment details
$scope.onToken = function(token) {
    console.log(token);
    // now call a service to push the necessary token info to the server to complete the checkout processing
};

$scope.onStripe = function(apiKey, userEmail) {
    var handler = StripeCheckout.configure({
        key: apiKey,
        image: 'https://stripe.com/img/documentation/checkout/marketplace.png',
        locale: 'auto',
        token: $scope.onToken
    });
    handler.open({
        panelLabel : 'Subscribe',
        amount : 4995,
        name : 'My Product Name here',
        description : '$49.95 Monthly Subscription',
        email : userEmail,
        zipCode : true,
        allowRememberMe : false
    });
};
</pre><br />
That's it!<br />
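Well, almost: the onToken callback above still needs a server-side counterpart to complete the checkout. Here's a minimal Java sketch of building the parameter map you might hand to stripe-java's Charge.create; the parameter names follow Stripe's charge API, but treat the details as assumptions to verify against the Stripe docs for your library version:<br />

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CheckoutServer {

    // build parameters for a charge matching the client-side config above;
    // in a real app this map would be passed to stripe-java's Charge.create(...)
    public static Map<String, Object> chargeParams(String tokenId, String email) {
        Map<String, Object> params = new LinkedHashMap<>();
        params.put("amount", 4995);           // $49.95, in cents
        params.put("currency", "usd");
        params.put("source", tokenId);        // the token id Stripe Checkout returned
        params.put("receipt_email", email);
        return params;
    }
}
```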
<br />
Here's what the Stripe Checkout form looks like with the above configurations:<br />
<br />
<a href="https://2.bp.blogspot.com/-kkGL3fzLViM/WkCVseW8b1I/AAAAAAAAAHc/YEErLcFIKhQ-qXszWt5UUwikz0QxzHlwQCLcBGAs/s1600/stripe-checkout.png" imageanchor="1" ><img border="0" src="https://2.bp.blogspot.com/-kkGL3fzLViM/WkCVseW8b1I/AAAAAAAAAHc/YEErLcFIKhQ-qXszWt5UUwikz0QxzHlwQCLcBGAs/s400/stripe-checkout.png" width="293" height="400" data-original-width="324" data-original-height="443" /></a><br />
<br />
2014-08-04 : URL-Safe Compressed and Enhanced UUID/GUID<br />
<br />
Below is a simple method to compress a UUID (128 bits represented by 32 hexadecimal characters with an additional 4 separator characters) into a 22 character string (base64). But, a 22 character base64 string can actually hold 132 bits of data (6 bits per char x 22 chars). As such, this method injects 4 additional random bits of data which increases the potential number of available unique identifiers by a factor of 16.<br />
<br />
In addition, all the selected base64 characters are URL-safe.<br />
<br />
Example:<br />
<br />
This UUID : <b>7e47c34a-eebc-4387-b5a4-c6b558bdc407</b><br />
<br />
is compressed down to this: <b>35Hw0ruvEOHbWkxrVYvcQH</b><br />
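For comparison, the JDK's java.util.Base64 (Java 8+) can produce a plain 22 character URL-safe encoding of a UUID out of the box, though without the 4 extra random bits the method below injects:<br />

```java
import java.nio.ByteBuffer;
import java.util.Base64;
import java.util.UUID;

public class SimpleUuidCompressor {

    // pack the UUID's 128 bits into 16 bytes, then base64url-encode without padding
    public static String compress(UUID uuid) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return Base64.getUrlEncoder().withoutPadding().encodeToString(buf.array());
    }

    public static void main(String[] args) {
        UUID uuid = UUID.fromString("7e47c34a-eebc-4387-b5a4-c6b558bdc407");
        String compressed = compress(uuid);
        System.out.println(compressed + " (" + compressed.length() + " chars)");
    }
}
```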
<br />
<pre class="brush: java; highlight:[8]">public class KeyGen {

    private KeyGen() {
    } // constructor

    // base64url, see: http://tools.ietf.org/html/rfc4648 section 5
    private static String chars
            = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

    /**
     * Generates a UUID and compresses it into a base 64 character string; this
     * results in a 22 character string and since each character represents 6 bits
     * of data that means the result can represent up to 132 bits. However, since
     * a UUID is only 128 bits, 4 additional randomized bits are inserted into the
     * result (if desired); this means that the number of available unique IDs is
     * increased by a factor of 16.
     *
     * @param enhanced specifies whether or not to enhance the result with 4
     *                 additional bits of data, since 22 base64 characters
     *                 can hold 132 bits of data and a UUID is only 128 bits
     * @return a 22 character string where each character is from the file- and
     *         URL-safe base64 character set [A-Za-z0-9-_]
     */
    public static String getCompressedUuid(boolean enhanced) {
        UUID uuid = UUID.randomUUID();
        return compressLong(uuid.getMostSignificantBits(), enhanced)
                + compressLong(uuid.getLeastSignificantBits(), enhanced);
    } // getCompressedUuid()

    // compress a 64 bit number into 11 6-bit characters
    private static String compressLong(long key, boolean enhance) {
        // randomize 2 bits as a prefix for the leftmost character which would
        // otherwise only have 4 bits of data in the 6 bits
        long prefix = enhance ? (long)(Math.random() * 4) << 62 : 0;
        // extract the first 6-bit character from the key
        String result = "" + chars.charAt((int)(key & 0x3f));
        // shifting in 2 extra random bits since we have the room
        key = ((key >>> 2) | prefix) >>> 4;
        // iterate thru the next 10 characters
        for (int i = 1; i < 11; i++) {
            // strip off the last 6 bits from the key, look up the matching character
            // and prepend that character to the result
            result = chars.charAt((int)(key & 0x3f)) + result;
            // logical bit shift right so we can isolate the next 6 bits
            key = key >>> 6;
        }
        return result;
    } // compressLong()
} // class KeyGen
</pre><br />
<br />
2014-04-26 : Recipe for AspectJ 1.x, Jersey 2.x, Spring 3.x, Tomcat 7.x, Maven and AOP with Load Time Weaving<br />
<br />
There's already a lot of information out there on the web about aspect oriented programming (AOP), Spring and AspectJ. And there are other good articles that explain some of the common pitfalls one may encounter when trying to get AOP up and running in an application that uses these technologies. One variant that doesn't seem to have a lot of information, however (at least that I could find), is using AOP with the combination of Spring 3, Tomcat 7 and Jersey 2.<br />
<br />
The Spring documentation with respect to Tomcat (6 and below), Spring and AOP (with and without AspectJ) is excellent. See the <a href="http://docs.spring.io/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-aj-ltw-environment-generic">Spring docs here</a> for more information. Jersey adds a wrinkle to this because the Jersey web services are not Spring managed beans, so it's a little trickier to get AOP working for the service classes/methods.<br />
<br />
So, what I hope to provide here is a simple recipe, if you will, for how to get the combination of technologies listed above working, with the additional requirement to perform load time weaving of the aspects into your code (as opposed to compile time or post compile time weaving). In addition, I will explain what you would see if you miss a step or don't get a step right so you can recognize the symptoms in your own setup and know what might need to be fixed.<br />
<br />
<b>Step 0 </b>: you have a project to which you want to apply AOP<br />
<br />
<b>Step 1 : configure Tomcat</b><br />
<br />
For load time weaving to work in Tomcat we need to supply a different class loader for Tomcat to use. Just include the spring-instrument-tomcat jar in your Tomcat lib folder (I'll show you how to tell Tomcat to use it in step 6 below).<br />
<br />
You can find the correct version for your needs <a href="http://mvnrepository.com/artifact/org.springframework/spring-instrument-tomcat">here</a>. I used spring-instrument-tomcat-3.2.6.RELEASE.jar for my example.<br />
<br />
If you don't include this jar in your Tomcat lib folder (or anywhere else Tomcat is configured to look for library jars) you will see this error (and several others) in the Tomcat logs:<br />
<br />
Apr 26, 2014 11:08:02 AM org.apache.catalina.loader.WebappLoader startInternal<br />
SEVERE: LifecycleException <br />
java.lang.ClassNotFoundException: org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader<br />
<br />
<b>Step 2 : configure Maven</b><br />
<br />
Contrary to many of the examples you will find, you DON'T need this dependency; the runtime classes are already included in the aspectjweaver dependency that follows. Including it will, however, allow your aspects to compile, which may be the source of the confusion.<br />
<br />
<dependency><br />
<groupId>org.aspectj</groupId><br />
<artifactId>aspectjrt</artifactId><br />
<version>${aspectj.version}</version><br />
</dependency><br />
<br />
My aspectj.version property is set to 1.8.0<br />
<br />
You WILL need the below aspectjweaver dependency and if you omit it you will see the following error in the catalina (tomcat) logs:<br />
<br />
java.lang.NoClassDefFoundError: org/aspectj/weaver/loadtime/ClassPreProcessorAgentAdapter<br />
<br />
<dependency><br />
<groupId>org.aspectj</groupId><br />
<artifactId>aspectjweaver</artifactId><br />
<version>${aspectj.version}</version><br />
</dependency><br />
<br />
<br />
Likewise, you do NOT need the spring-aop dependency if you're going to use the load time weaver (which we are in this case) and are not using any spring-aop specific capability:<br />
<br />
<dependency><br />
<groupId>org.springframework</groupId><br />
<artifactId>spring-aop</artifactId><br />
<version>${spring.framework.version}</version><br />
</dependency><br />
<br />
<br />
<b>Step 3 : configure Spring</b><br />
<br />
The only setting you need to add to the Spring application-context.xml file is:<br />
<br />
<context:load-time-weaver aspectj-weaving="on"/><br />
<br />
You can omit the aspectj-weaving attribute which will cause the default to be used, but I include it here to call out that you could replace that value with an external property loaded into your Spring app context to control whether load time weaving was 'on' or 'off'.<br />
<br />
If you do not include context:load-time-weaver in the Spring app context file you won't notice any errors in the Tomcat logs but your aspects won't execute either.<br />
<br />
<b>Step 4 : create your aspects and pointcuts, etc.</b><br />
<br />
<pre class="brush: java; highlight:[8]">@Aspect
public class Observer {

    public Observer() {
    } // constructor

    @Around("execution(public * *(..))")
    public Object logAround(ProceedingJoinPoint joinPoint) throws Throwable {
        System.out.println("log from " + joinPoint.toString()); // @todo
        Object result = joinPoint.proceed();
        // deliberately throw instead of returning, to make the weaving obvious
        throw new IllegalArgumentException("this is only a test");
        // return result;
    }
} // class Observer
</pre><br />
The key thing here is how you define your aspects. In my case above I am using @Around and am intercepting all public methods in all my classes (I only have one web service class with one public method in this example). This was a fairly inclusive pointcut expression, with the intent to make sure it included my Jersey web service class. Consult the wealth of documentation on AOP to learn more about join points, pointcuts, advice, etc. The Spring reference cited above is VERY good, as is <a href="http://www.journaldev.com/2583">this article</a>.<br />
<br />
<b>Step 5 : add META-INF/aop.xml to describe your aspects, pointcuts, etc. to AspectJ</b><br />
<br />
This is the file used by AspectJ (you can have multiple aop files) to find and execute your aspects. <b><i>If you don't include this file or if you put it in a location that won't make it on the classpath you won't see any errors in the Tomcat logs but your aspects won't execute either</i></b>. So, in my example I put META-INF/aop.xml in src/main/resources and it will be added to WEB-INF/classes when Maven builds the war file.<br />
<br />
<!DOCTYPE aspectj PUBLIC "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd"><br />
<aspectj><br />
<weaver><br />
<!-- only weave classes in our application-specific packages --><br />
<include within="org.hawksoft..*"/><br />
</weaver><br />
<br />
<aspects><br />
<!-- weave in just this aspect --><br />
<aspect name="org.hawksoft.aop.aspect.Observer"/><br />
</aspects><br />
<br />
</aspectj><br />
<br />
<br />
<b>Step 6 : add META-INF/context.xml</b><br />
<br />
This is the web context file used by Tomcat and this is where you tell Tomcat to use the instrumentable class loader needed to weave the aspects into your classes at load time. <br />
<br />
<Context path="/hawk-aop"><br />
<Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader" /><br />
</Context><br />
<br />
<b>It is VERY IMPORTANT that you put this folder and file at the same level as WEB-INF in your project</b>. If you don't put the web context.xml in the right location you will get the following error in the Tomcat logs when the web app is initialized:<br />
<br />
2014 8:20:31 AM org.springframework.web.context.ContextLoader initWebApplicationContext<br />
SEVERE: Context initialization failed<br />
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.context.weaving.AspectJWeavingEnabler#0': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'loadTimeWeaver': Initialization of bean failed; nested exception is java.lang.IllegalStateException: ClassLoader [org.apache.catalina.loader.WebappClassLoader] does NOT provide an 'addTransformer(ClassFileTransformer)' method. Specify a custom LoadTimeWeaver or start your Java virtual machine with Spring's agent: -javaagent:org.springframework.instrument.jar<br />
<br />
So, to be clear, <b>you will have TWO META-INF folders </b>- one for the aop.xml that will be pushed into WEB-INF/classes when the war is built and one for the context.xml that is on the same level as WEB-INF. <br />
<br />
Figuring this out was where the majority of my time was spent in trying to get this to work. <a href="http://forum.spring.io/forum/spring-projects/aop/88553-load-time-weaving-in-tomcat6">This Spring forum conversation</a> is what led me to figure out what was going on with context.xml and aop.xml and may be helpful to you as well - particularly the part about what Tomcat does/does not do with the context.xml file you include in your war.<br />
<br />
Note: the 'path' attribute refers to the web app context path and unless you've instructed Tomcat to use a different context it is the name of your war file.<br />
<br />
Here's the folder and file layout for my example Maven project:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-krXg7ddQCRs/U1u2-jSXbkI/AAAAAAAAAEM/fe2AZe5x5lk/s1600/file+structure.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-krXg7ddQCRs/U1u2-jSXbkI/AAAAAAAAAEM/fe2AZe5x5lk/s1600/file+structure.gif" /></a></div><br />
<br />
2013-12-26 : Jersey & JerseyTest migration from 1.x to 2.5 with Spring, JSP, Tomcat 7 and FreeMarker<br />
I looked at upgrading to Jersey 2 a while ago but it didn't include important functionality I needed from the 1.x versions, like support for JSP templates, so I decided to wait (although I would have expected that a 2.0 release would have included all the capability from 1.x).<br />
<br />
I recently went back and discovered that Jersey 2.5 now supports templates so I decided to take the plunge. Just let me say that the experience has been very painful and end my rant there. The high-level documentation is pretty good, and there are some useful working examples, but I had to dig into the Jersey source code to try to figure some things out and the low level documentation is not what I had hoped for.<br />
<br />
Thus I am writing this article in hopes of sparing other poor souls from the pain I experienced in upgrading to Jersey 2 and getting the following combination of technologies integrated and working:<br />
<br />
<div style="text-align: center;"><b>Jersey 2 + Jersey Test Framework + Spring + templates + Tomcat 7</b></div><div style="text-align: center;"><br />
</div>If you're looking for information on Jersey 1.x please see my <a href="http://usna86-techbits.blogspot.com/2013/03/increase-quality-and-productivity-with.html" target="_blank">previous article on the Jersey Test Framework</a>.<br />
<br />
I had intended for JSP to be the template provider but I couldn't get it to work with the Jersey Test Framework (Grizzly2 container), which caused me to look at other options. After much difficulty I was able to get FreeMarker working as the template provider, but without being able to include the Spring macro library (I'll explain an alternative below).<br />
<br />
<b>First, let's look at the JerseyTest class.</b> <u>Notice on line 8 the forward slash '/' in front of the folder name </u>where I am putting the FreeMarker templates. Please don't forget that. <br />
<br />
At first I put my templates folder (call it whatever you want) at 'src/main/webapp/templates', which worked fine when the app was deployed to Tomcat but failed when the unit tests were being run under Grizzly2. I then noticed in the Jersey source code for the FreeMarker examples and tests that they were putting the template files in the resources folder ('src/main/resources'). When I moved my .ftl files to that location FreeMarker could find them under both Tomcat and Grizzly.<br />
<br />
As you can see from this snippet below, I've created my own abstract test class on top of JerseyTest so that I could have a shared configuration for all my web resource tests and include some other helper methods (not depicted) that help simplify my REST service tests.<br />
<br />
<pre class="brush: java; highlight:[8]">public abstract class AbstractSpringEnabledWebServiceTest extends JerseyTest {

    @Override
    protected Application configure() {
        ResourceConfig rc = new ResourceConfig()
                .register(SpringLifecycleListener.class)
                .register(RequestContextFilter.class)
                .property(FreemarkerMvcFeature.TEMPLATES_BASE_PATH, "/templates")
                .register(FreemarkerMvcFeature.class)
                ;
        enable(TestProperties.LOG_TRAFFIC);
        enable(TestProperties.DUMP_ENTITY);
        return configure(rc);
    } // configure()

    protected abstract ResourceConfig configure(ResourceConfig rc);

    protected abstract String getResourcePath();
} // class AbstractSpringEnabledWebServiceTest
</pre><br />
If you've used JerseyTest in Jersey 1.x you will notice some significant changes to how the tests are configured. I'd like to say it's an improvement but I think you will agree it's much less intuitive. In Jersey 1.x it was obvious we were building up a web.xml equivalent. Not so in Jersey 2. You'll have to rely more on the documentation, source code, blogs, and StackOverflow to figure out how to set up your test web app correctly for your scenario.<br />
<br />
Next is the concrete test class where we (a) provide the Jersey resource classes to load, (b) the location of the Spring context file to use, if using Spring, and (c) the root resource path, which should match the filter mapping from web.xml.<br />
<br />
<pre class="brush: java; highlight:[5,8,15]">public class ResourceATest extends AbstractSpringEnabledWebServiceTest {

    @Override
    protected ResourceConfig configure(ResourceConfig rc) {
        rc.register(ResourceA.class)
            .property(
                "contextConfigLocation",
                "classpath:**/my-web-test-context.xml"
            );
        return rc;
    } // configure()

    @Override
    protected String getResourcePath() {
        return "/my/resource";
    } // getResourcePath()
} // class ResourceATest
</pre><br />
<br />
<u>Next, here's my <b>web.xml</b>:</u><br />
<br />
<web-app version="2.4" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" <br />
xmlns="http://java.sun.com/xml/ns/j2ee" <br />
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee<br />
http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"><br />
<br />
<context-param><br />
<param-name>contextConfigLocation</param-name><br />
<param-value>classpath:/META-INF/spring/my-web-context.xml</param-value><br />
</context-param><br />
<br />
<context-param><br />
<param-name>spring.profiles.default</param-name><br />
<param-value>prod</param-value><br />
</context-param><br />
<br />
<!-- Spring --><br />
<listener><br />
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class><br />
</listener><br />
<br />
<listener><br />
<listener-class>org.springframework.web.context.request.RequestContextListener</listener-class><br />
</listener><br />
<br />
<filter><br />
<filter-name>My Jersey Services</filter-name><br />
<filter-class>org.glassfish.jersey.servlet.ServletContainer</filter-class><br />
<br />
<init-param><br />
<param-name>jersey.config.server.provider.packages</param-name><br />
<param-value>com.abc.resources.widget</param-value><br />
</init-param><br />
<br />
<init-param><br />
<param-name>jersey.config.server.mvc.templateBasePath.jsp</param-name><br />
<param-value>/WEB-INF/jsp</param-value><br />
</init-param><br />
<br />
<init-param><br />
<param-name>jersey.config.server.mvc.templateBasePath.freemarker</param-name><br />
<param-value>/templates</param-value><br />
</init-param><br />
<br />
<init-param><br />
<param-name>jersey.config.server.provider.classnames</param-name><br />
<param-value>org.glassfish.jersey.server.mvc.freemarker.FreemarkerMvcFeature</param-value><br />
</init-param><br />
<br />
<init-param><br />
<param-name>jersey.config.server.tracing</param-name><br />
<param-value>ALL</param-value><br />
</init-param><br />
<br />
<init-param><br />
<param-name>jersey.config.servlet.filter.staticContentRegex</param-name><br />
<param-value>(/index.jsp)|(/(content|(WEB-INF/jsp))/.*)</param-value><br />
</init-param><br />
<br />
</filter><br />
<br />
<filter-mapping><br />
<filter-name>My Jersey Services</filter-name><br />
<url-pattern>/my/resource/*</url-pattern><br />
</filter-mapping><br />
<br />
</web-app><br />
<br />
Here's my Jersey resource class. Not much to call out here, except what I mentioned earlier about not being able to load the Spring FreeMarker macros. In my case I wanted to use the spring.url macro as a replacement for c:url in JSP. What I ended up doing in the short term is simply injecting the base url into my data map so I could then use it in my template.<br />
<br />
<pre class="brush: java; highlight:[18]">@Service
@Path("/my/resource")
public class ResourceA {

    @Context
    private UriInfo _uriInfo;

    ...

    @Path("/resourceA")
    @Produces(MediaType.TEXT_HTML)
    @GET
    public Response getResourceA(@Context SecurityContext sc) {
        // fetch data for resource A
        // put the fetched data in a map if it isn't already
        Map<String, Object> data = new HashMap<>();
        data.put("myData", myData); // myData is the resource data fetched above
        data.put("baseUrl", _uriInfo.getBaseUri().toString());
        Viewable view = new Viewable("/myTemplate.ftl", data);
        return Response.ok().entity(view).build();
    } // getResourceA()
} // class ResourceA
</pre><br />
Finally is a snippet from my FreeMarker template file. You can see the usage of 'baseUrl' that I included in the data model above. One thing you might easily overlook is that I'm not using a 'model' prefix nor an 'it' prefix for the data elements. In Jersey 1.x 'it' was required and the documentation for 2.5 states that the model will be passed in to the view as either 'model' or 'it'. However, that didn't work and when I dropped the model prefix it started working. Something to keep in mind as you troubleshoot any issues you may be having referencing your model data elements.<br />
<br />
<head><br />
<title>Web Resource A</title><br />
<link href="${baseUrl}content/font-awesome/4.0.3/css/font-awesome.css" rel="stylesheet"><br />
<link href="${baseUrl}content/bootstrap/2.3.2/css/bootstrap.css" rel="stylesheet"><br />
</head><br />
<br />
Oops - almost forgot the<b> <u>maven dependencies</u></b>: (note: my spring dependencies are declared in my parent pom)<br />
<br />
<properties><br />
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding><br />
<servlet-api.version>2.4</servlet-api.version><br />
<jersey.version>2.5</jersey.version><br />
<jersey.scope>compile</jersey.scope><br />
<jettison.version>1.3.3</jettison.version><br />
<freemarker.version>2.3.20</freemarker.version><br />
</properties><br />
<br />
<dependencies><br />
<br />
<dependency><br />
<groupId>org.freemarker</groupId><br />
<artifactId>freemarker</artifactId><br />
<version>${freemarker.version}</version><br />
</dependency><br />
<br />
<dependency><br />
<groupId>javax.servlet</groupId><br />
<artifactId>servlet-api</artifactId><br />
<version>${servlet-api.version}</version><br />
<scope>provided</scope><br />
</dependency><br />
<br />
<dependency><br />
<groupId>org.codehaus.jettison</groupId><br />
<artifactId>jettison</artifactId><br />
<version>${jettison.version}</version><br />
<scope>provided</scope><br />
</dependency><br />
<br />
<dependency><br />
<groupId>org.glassfish.jersey.test-framework</groupId><br />
<artifactId>jersey-test-framework-core</artifactId><br />
<version>${jersey.version}</version><br />
<scope>test</scope><br />
</dependency><br />
<br />
<dependency><br />
<groupId>org.glassfish.jersey.test-framework.providers</groupId><br />
<artifactId>jersey-test-framework-provider-grizzly2</artifactId><br />
<version>${jersey.version}</version><br />
<scope>test</scope><br />
</dependency><br />
<br />
<!-- Required only when you are using JAX-RS Client --><br />
<dependency><br />
<groupId>org.glassfish.jersey.core</groupId><br />
<artifactId>jersey-client</artifactId><br />
<version>${jersey.version}</version><br />
<scope>${jersey.scope}</scope><br />
</dependency><br />
<br />
<dependency><br />
<groupId>org.glassfish.jersey.ext</groupId><br />
<artifactId>jersey-mvc-freemarker</artifactId><br />
<version>${jersey.version}</version><br />
<scope>${jersey.scope}</scope><br />
</dependency><br />
<br />
<dependency><br />
<groupId>org.glassfish.jersey.ext</groupId><br />
<artifactId>jersey-spring3</artifactId><br />
<version>${jersey.version}</version><br />
<scope>${jersey.scope}</scope><br />
</dependency><br />
<br />
</dependencies><br />
<br />
2013-08-06 : Multiple content representations from a resource oriented RESTful web service<br />
<br />
Here are some thoughts on a few ways you can return multiple different representations of your resources from RESTful web services and still preserve the resource oriented nature of your architecture.<br />
<br />
First, by representational differences I'm not talking about the format (JSON vs. XML, etc.). I'm talking about content.<br />
<br />
Keep in mind that under the ROA style for REST you can use query params for <b>selection</b>, <b>sorting</b> and <b>projection</b>. Selection answers the question of which instances of the resource to return (which rows in database terms). Sorting is self-explanatory. Projection refers to which parts of the resource (which data points, or columns in database terms) to return.<br />
<br />
So, when we're talking about multiple representations with respect to query params we're talking about projection.<br />
<br />
Let's consider an example of a representation of a Customer resource with the following data points:<br />
<ul>
<li>customer_id</li>
<li>first_name</li>
<li>last_name</li>
<li>address1</li>
<li>address2</li>
<li>city</li>
<li>state</li>
<li>zip</li>
<li>zip_plus_4</li>
<li>home_phone</li>
<li>mobile_phone</li>
<li>birth_date</li>
<li>birth_place</li>
<li>email_address</li>
<li>income</li>
</ul>
Now, imagine that the following URL returns a <b>complete representation</b> of the above Customer resource for a customer with customer_id 123:<br />
<br />
http://www.my-company.com/resources/customer/123<br />
<br />
You will notice that we are doing selection, but we aren't using query params but rather putting the customer_id on the URL itself, which is a cleaner approach to REST.<br />
<h4>
Use projection via a query param</h4>
Now, what if a given client didn't want to consume all those data points and endure all the overhead associated with that? Using a <b>query param approach</b> you could do something like this:<br />
<br />
http://www.my-company.com/resources/customer/123?include=customer_id,last_name,zip,email_address<br />
<br />
The web service implementation for this would process the 'include' query param and build up a resource that included only those data points specified. Under this approach you give the client maximum control of the resource representation.<br />
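A plain-Java sketch of that projection step (the 'include' value and field names come from the Customer example above; the helper itself is illustrative, not a real framework API):<br />

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class ProjectionExample {

    // keep only the data points named in the include param, preserving resource order
    public static Map<String, Object> project(Map<String, Object> resource, String include) {
        Set<String> wanted = new LinkedHashSet<>(Arrays.asList(include.split(",")));
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : resource.entrySet()) {
            if (wanted.contains(e.getKey())) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> customer = new LinkedHashMap<>();
        customer.put("customer_id", 123);
        customer.put("first_name", "Jane");
        customer.put("last_name", "Doe");
        customer.put("zip", "21402");
        customer.put("email_address", "jane@example.com");
        customer.put("income", 85000);
        System.out.println(project(customer, "customer_id,last_name,zip,email_address"));
    }
}
```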
<h4>
Extract a sub-resource</h4>
Another way to obtain a subset of the Customer resource would be to <b>extract a sub-resource</b>. For example, imagine we were only interested in the customer contact info consisting of customer_id, first_name, last_name, mobile_phone and email_address. Then, we could use a URL like the following to obtain the contact information for the customer:<br />
<br />
http://www.my-company.com/resources/customer/123/contact_info<br />
<br />
But, we've created a new URL endpoint, which may or may not be what we want. How can we isolate the contact information without using query params and without changing the original customer URL?<br />
<h4>
Define a <b>custom media type</b></h4>
Let's say we had defined a media type for the Customer resource as so:<br />
<br />
application/vnd.my-company.Customer-1.0<br />
<br />
The client would pass this in as the Accept header to fetch the complete representation. To isolate the contact information we could define a new media type like so and pass that in as the Accept header with the original URL:<br />
<br />
application/vnd.my-company.Customer.ContactInfo-1.0<br />
<br />
Now, let's say the client is happy with the original customer representation, but wants to trim the size of it. We could create a 'lite' version with abbreviated attribute names, such as lname for last_name, email for email_address and so on, and use a media type like the following to retrieve it:<br />
<br />
application/vnd.my-company.Customer-1.0-lite<br />
<br />
You should be able to see the flexibility that custom media types provide. You could create many different subsets of customer information and expose those as different flavors of the Customer media type.<br />
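In JAX-RS you would typically express this with one resource method per media type via @Produces on the same @Path. Stripped of the framework, the dispatch idea looks like this sketch (media type strings from the examples above; the representation builders are placeholders):<br />

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class CustomerMediaTypes {

    // one representation builder per custom media type (placeholder bodies)
    private static final Map<String, Supplier<String>> BUILDERS = new LinkedHashMap<>();
    static {
        BUILDERS.put("application/vnd.my-company.Customer-1.0",
                () -> "full customer representation");
        BUILDERS.put("application/vnd.my-company.Customer.ContactInfo-1.0",
                () -> "contact info subset");
        BUILDERS.put("application/vnd.my-company.Customer-1.0-lite",
                () -> "lite representation with abbreviated names");
    }

    // pick the builder matching the Accept header; unknown types get a 406-style error
    public static String represent(String acceptHeader) {
        Supplier<String> builder = BUILDERS.get(acceptHeader);
        if (builder == null) {
            throw new IllegalArgumentException("406 Not Acceptable: " + acceptHeader);
        }
        return builder.get();
    }
}
```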
<br />
Each of the above relies on being able to vary the resource representation independently from any object model supporting it. See <a href="http://usna86-techbits.blogspot.com/2013/08/restful-java-web-service-marshalling.html" target="_blank">this article</a> for more information.<br />
<br />
2013-08-02 : RESTful Java Web Service Marshalling Comparison<br />
<br />
<i>The case against automatic marshalling</i><br />
<br />
I've been meaning to write this post for a very, very long time but I guess it was the look I got yesterday in a meeting when I recommended against automatically marshalling JAXB annotated model objects that pushed me over the edge. It was a look of "why would you even consider doing anything else?".<br />
<br />
The notion that I can add a few annotations to my domain model class, make that the return type from my web service method and, voil&agrave;, JSON or XML is magically returned to the client is very enticing. And you can certainly understand why developers would want to do that.<br />
<br />
But I'd like to offer some food for thought on why that might not be such a good idea, and why architects, designers, those having to maintain the system, and whoever's paying the bills should consider not allowing this approach in all but proof-of-concept or prototyping situations.<br />
<br />
The first two problems, which are also the most significant, are very closely related:<br />
<br />
<u>Problem #1</u> : The <b>inability to produce different representations </b>from the same object model. I'm not talking about JSON vs. XML here (i.e. format). I'm talking about content and structure. You can only have one return type from a method and you can only mark up a given model class with annotations in one way. So, let's say you have client A that wants the full object representation returned - you're fine. But what if you have a client B that needs a different representation of that object? Perhaps fewer fields, abbreviated attribute names, or some other subset of the object. With automatic marshalling you can't do that from the same endpoint without bloating the object model. See <a href="http://usna86-techbits.blogspot.com/2013/08/multiple-content-representations-from.html" target="_blank">this article</a> for some ideas on how to produce multiple different representations from the same object model.<br />
<br />
<u>Problem #2</u> : The <b>inability to support multiple versions</b> of the REST contract from the same object model. This one has the same root cause as above but a different use case for getting there. In this case I'm referring to changes to the object model that cause existing clients to break - breaking changes. You can't simply reuse the same model class to support two incompatible representations of it - you have to create or extend a new model class. But if you decouple the REST response from your object model (i.e. don't use JAXB annotations and automatic marshalling) you can vary them independently and support multiple versions of your REST contract from the same object model - or at least you have the possibility of doing that, depending on the nature and extent of the changes. Or, even simpler, maybe it's the REST contract itself that's changing (different attribute names, different structure, exposing fewer data elements due to a removed business feature, etc.). Auto marshalling can't expose two different contracts from the same object model.<br />
<br />
Either one of those should be enough to discourage folks from using automatic marshalling in most cases, but there are still more reasons to avoid this approach...<br />
<br />
<u>Problem #3</u> : Your REST contract, and therefore your client, is <b>tightly coupled</b> to your domain/object model. You've basically opened up a window into the deep internals of your system and are allowing clients to peer into it. Some folks try to get around this by creating a secondary model object layer - a data transfer object layer, if you will - but they're still tightly coupled to a particular instance of a particular object model, they've bloated the overall object model, and they've greatly increased the object count at runtime.<br />
<br />
<u>Problem #4</u> : You <b>lose control of the HTTP response</b> and you won't have an opportunity to catch or log what just happened if there is a problem marshalling or unmarshalling your object. In this case, the framework generates the exception and resulting response to the client - not your code - which is probably something you don't want to have happen. <br />
<br />
<u>Problem #5</u> : This is a consequence of attribute annotations in general: they <b>couple the classes being annotated to a particular use</b>, albeit perhaps only logically. But the implications of doing this can manifest themselves in very concrete ways. Let's say, for example, that RESTful representations and JMS messages are being created from the same model, and that the structure of the REST representation and the JMS message are different. OK, so you JAXB-annotate the model classes for the REST layer and the messaging team handcrafts the JMS messages from the same model - that works and everything is fine. But what if the messaging team needs to change the model to support new messaging features, and those changes are breaking changes for the REST layer? Oops. This is really a variation of problems 1 and 2 above. Putting aside this contrived example, the key difference here is that we've introduced another developer (the messaging team) who is unaware that the object model they are using in a loosely coupled manner has been tightly coupled by the web services team to their clients (changes to the model classes percolate all the way down to the REST clients).<br />
<br />
<u>Problem #6</u> : <b>Clarity</b>. When you look at the web service class it's unclear precisely what's being returned and in what format. Sure, you can see what object type it is, and you can look that up and examine it, but changes to the model will go unnoticed when looking at the web service. You should be able to look at your web service class and see the entire contract that your service is providing.<br />
<br />
<u>Problem #7</u> : The <b>inability to fully enforce the REST contract</b>. Since changes to the model pass straight through the web service layer you can't enforce the resource representation aspects of the REST contract. However, if you decouple the model from the representation being returned (i.e. hand build the response) you have complete control over the contract.<br />
<br />
<u>Problem #8</u> : <b>Reduced ability to refactor</b> the service and domain layers. Because the client is tightly coupled to the model you lose the ability to independently vary the model and thus are limited in your ability to refactor the system in a way that preserves the REST contract with existing clients. <br />
<br />
<u>Problem #9</u> : <b>Extensibility of the REST contract</b>. This is a variation of #1 and #2, but from a different perspective. If using auto marshalling you can't provide a different REST contract to different clients using the same underlying model. Nor could you extend the contract to another system that makes use of auto marshalling (perhaps you want to use the adapter pattern on an inherited system to make it appear to have the same interface as yours - a consideration for growing and expanding companies and the kind of things architects are tasked with worrying about and considering).<br />
<br />
<u>Problem #10</u> : <b>Lack of flexibility</b>. By using auto marshalling you lose the ability to compose a composite resource representation from multiple top-level objects. In addition, nested hierarchies may or may not behave the way we want with auto marshalling.<br />
<br />
<u>Problem #11</u> : <b>Minimal time savings</b>. Auto marshalling is not a tremendous coding time saver - not enough to justify introducing all the other problems mentioned here, despite what people may think. It takes very little effort to code up a JSONObject or an XML document, and just a little bit more to create a generic abstraction layer on top of that so you can produce JSON or XML or whatever.<br />
<br />
<u>Problem #12</u> : <b>Performance</b>. I decided to take a closer look at the performance of various approaches for sending and receiving JSON representations to/from a RESTful web service. I used the Jersey Test Framework to create a unit test that invoked the handler methods to GET and POST JSON data to/from the same underlying model object. The only difference was the approach used to map the JSON to/from the underlying object. The object itself consisted of a String field and a couple of int fields (see below).<br />
<br />
The test iterated over each approach in a round robin fashion performing a GET and a POST. That cycle was repeated 100,000 times. The metrics were captured in the unit test client, encompassing the entire request/response. Here are the approaches that were evaluated:<br />
<ul>
<li>Manually building the response using org.codehaus.jettison.json.<b>JSONObject </b>(ver 1.1)</li>
<li>Manually building the response using a <b>custom</b> implementation using StringBuilder (Java 1.7)</li>
<li>Automatic marshalling using the <b>Jersey</b> framework (ver 1.17) and underlying JAXB implementation</li>
<li>Instructing a com.google.gson.<b>Gson</b> (ver 2.2.2) instance to map an object to JSON for us</li>
<li>Instructing a org.codehaus.<b>jackson</b>.map.ObjectMapper (ver 1.9.2) instance to map for us</li>
</ul>
As you can see from the chart below, the <b>manual approaches to handling the JSON/object mapping were quite a bit better performing</b>, and that makes sense as they don't have to use reflection to access the object and build up the response. What was interesting was just how much better performing the manual approaches were. That may or may not be an important consideration depending on your situation, but it's information you should be armed with nonetheless, and I encourage you to perform your own testing to see for yourself. The best I can tell, the margin of error here is about 5%, as both manual approaches used the same POST handler yet their results differ by about 5%. So, again, conduct your own tests in your own environment to see how the numbers shake out for you.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-KCNLEcfaoN8/Uf05zaHnWHI/AAAAAAAAADk/hGBWp_IKMac/s1600/REST+Marshalling+Comparison-100v2k.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-KCNLEcfaoN8/Uf05zaHnWHI/AAAAAAAAADk/hGBWp_IKMac/s1600/REST+Marshalling+Comparison-100v2k.jpg" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Marshalling Performance Comparison</td></tr>
</tbody></table>
Here's the interesting code from the web service showing the different approaches used. First is a complete POST handler. Each POST implementation is the same except for the mechanism used to turn the data into an ItemInventory object. I used custom media types to map to the various handlers, reusing the same URL/endpoint and in effect versioning the service.<br />
<br />
<b>Jersey</b> :<br />
<pre class="brush: java; highlight:[2]">@Path("/item")
@Consumes(ITEM_INVENTORY_MEDIA_TYPE_JERSEY_JSON)
@POST
public Response createItemInventory2(ItemInventory inventory) {
Response response = null;
try {
inventory = _inventoryManager.saveItemInventory(inventory);
response = Response.status(201)
.header(
"Location",
String.format(
"%s/%s",
_uriInfo.getAbsolutePath().toString(),
inventory.getItemId()
)
)
.entity(inventory.getItemId())
.build();
} catch (Exception e) {
response = Response.status(500).entity(e.getMessage()).build();
}
return response;
} // createItemInventory()
</pre>
<br />
<b>org.codehaus.jackson.map.ObjectMapper</b>:<br />
<pre class="brush: java; highlight:[2,8]">@Path("/item")
@Consumes(ITEM_INVENTORY_MEDIA_TYPE_JACKSON_JSON)
@POST
public Response createItemInventory3(String data) {
Response response = null;
try {
ItemInventory inventory = new ObjectMapper().readValue(data, ItemInventory.class);
inventory = _inventoryManager.saveItemInventory(inventory);
</pre>
<br />
<b>com.google.gson.Gson</b>:<br />
<pre class="brush: java; highlight:[2,8,9]">@Path("/item")
@Consumes(ITEM_INVENTORY_MEDIA_TYPE_GSON_JSON)
@POST
public Response createItemInventory5(String data) {
Response response = null;
try {
Gson gson = new Gson();
ItemInventory inventory = gson.fromJson(data, ItemInventory.class);
inventory = _inventoryManager.saveItemInventory(inventory);
</pre>
<br />
<b>JSONObject</b> (for the POST I did not write a custom handler, but instead used JSONObject):<br />
<pre class="brush: java; highlight:[2,11]">@Path("/item")
@Consumes({
ITEM_INVENTORY_MEDIA_TYPE_JSONOBJECT_JSON,
ITEM_INVENTORY_MEDIA_TYPE_CUSTOM_JSON
})
@POST
public Response createItemInventory(String data) {
Response response = null;
try {
ItemInventory inventory = jsonObjectToItemInventory(data);
inventory = _inventoryManager.saveItemInventory(inventory);
...
private ItemInventory jsonObjectToItemInventory(String data)
throws JSONException {
JSONObject jo = new JSONObject(data);
ItemInventory inventory = new ItemInventory(
jo.isNull("id") ? null : jo.getString("id"),
jo.getInt("onhand"),
jo.getInt("onOrder")
);
return inventory;
} // jsonObjectToItemInventory()
</pre>
<br />
Now for a complete GET handler, this time for <b>Jersey</b>:<br />
<pre class="brush: java; highlight:[2]">@Path("/item/{itemId}")
@Produces(ITEM_INVENTORY_MEDIA_TYPE_JERSEY_JSON)
@GET
public ItemInventory getItemInventory2(@PathParam("itemId") String itemId) {
ItemInventory inv = null;
try {
inv = _inventoryManager.getItemInventory(itemId);
if (null == inv) {
throw new WebApplicationException(404);
}
} catch (Exception e) {
throw new WebApplicationException(e, 500);
}
return inv;
} // getItemInventory2()
</pre>
<br />
<br />
<b>org.codehaus.jackson.map.ObjectMapper</b>:<br />
<pre class="brush: java; highlight:[2,14]">@Path("/item/{itemId}")
@Produces(ITEM_INVENTORY_MEDIA_TYPE_JACKSON_JSON)
@GET
public Response getItemInventory3(@PathParam("itemId") String itemId) {
Response response = null;
try {
ItemInventory inv = _inventoryManager.getItemInventory(itemId);
if (null == inv) {
// not found
response = Response.status(404).build();
} else {
String json = new ObjectMapper().writeValueAsString(inv);
response = Response.ok().entity(json).build();
}
</pre>
<br />
<b>com.google.gson.Gson</b>:<br />
<pre class="brush: java; highlight:[2,14,15]">@Path("/item/{itemId}")
@Produces(ITEM_INVENTORY_MEDIA_TYPE_GSON_JSON)
@GET
public Response getItemInventory5(@PathParam("itemId") String itemId) {
Response response = null;
try {
ItemInventory inv = _inventoryManager.getItemInventory(itemId);
if (null == inv) {
// not found
response = Response.status(404).build();
} else {
Gson gson = new Gson();
response = Response.ok().entity(gson.toJson(inv)).build();
</pre>
<br />
<b>JSONObject</b>:<br />
<pre class="brush: java; highlight:[2,14,15,16,17,18]">@Path("/item/{itemId}")
@Produces(ITEM_INVENTORY_MEDIA_TYPE_JSONOBJECT_JSON)
@GET
public Response getItemInventory(@PathParam("itemId") String itemId) {
Response response = null;
try {
ItemInventory inv = _inventoryManager.getItemInventory(itemId);
if (null == inv) {
// not found
response = Response.status(404).build();
} else {
JSONObject jo = new JSONObject();
jo.put("id", inv.getItemId());
jo.put("onhand", inv.getOnhand());
jo.put("onOrder", inv.getOnOrder());
response = Response.ok().entity(jo.toString()).build();
}
</pre>
<br />
And finally the <b>custom handler</b>. JsonBuilder is my own helper class that provides JSON formatting and uses a StringBuilder internally:<br />
<pre class="brush: java; highlight:[2,14,15,16,17,18,19]">@Path("/item/{itemId}")
@Produces(ITEM_INVENTORY_MEDIA_TYPE_CUSTOM_JSON)
@GET
public Response getItemInventory4(@PathParam("itemId") String itemId) {
Response response = null;
try {
ItemInventory inv = _inventoryManager.getItemInventory(itemId);
if (null == inv) {
// not found
response = Response.status(404).build();
} else {
JsonBuilder jb = new JsonBuilder();
jb.beginObject();
jb.addAttribute("id", inv.getItemId());
jb.addAttribute("onhand", inv.getOnhand());
jb.addAttribute("onOrder", inv.getOnOrder());
response = Response.ok().entity(jb.toString()).build();
</pre>
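The JsonBuilder source isn't shown in this post. As a minimal sketch of what such a StringBuilder-backed helper could look like (this is illustrative, not the exact class used in the tests above):<br />

```java
// Illustrative sketch of a StringBuilder-backed JSON helper; not the exact
// JsonBuilder class used in the performance tests above.
class JsonBuilder {
    private final StringBuilder _sb = new StringBuilder();
    private boolean _needsComma = false;

    public JsonBuilder beginObject() {
        _sb.append('{');
        _needsComma = false;
        return this;
    }

    public JsonBuilder addAttribute(String name, Object value) {
        if (_needsComma) {
            _sb.append(',');
        }
        _sb.append('"').append(name).append("\":");
        if (value == null) {
            _sb.append("null");
        } else if (value instanceof Number) {
            _sb.append(value);  // numbers are emitted unquoted
        } else {
            _sb.append('"').append(value).append('"');
        }
        _needsComma = true;
        return this;
    }

    @Override
    public String toString() {
        // close the object on output so callers can simply call toString()
        return _sb.toString() + "}";
    }
}
```

Used just as in the handler above: beginObject(), a few addAttribute() calls, then toString() to get the entity body.<br />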
<br />
If you're interested in the custom media types here's what they look like:<br />
<pre class="brush: java">public static final String ITEM_INVENTORY_MEDIA_TYPE
= "application/vnd.my-org.item.inventory";
public static final String ITEM_INVENTORY_MEDIA_TYPE_JSONOBJECT_JSON
= ITEM_INVENTORY_MEDIA_TYPE + ".JSONOBJECT+json";
public static final String ITEM_INVENTORY_MEDIA_TYPE_JERSEY_JSON
= ITEM_INVENTORY_MEDIA_TYPE + ".JERSEY+json";
public static final String ITEM_INVENTORY_MEDIA_TYPE_JACKSON_JSON
= ITEM_INVENTORY_MEDIA_TYPE + ".JACKSON+json";
public static final String ITEM_INVENTORY_MEDIA_TYPE_CUSTOM_JSON
= ITEM_INVENTORY_MEDIA_TYPE + ".CUSTOM+json";
public static final String ITEM_INVENTORY_MEDIA_TYPE_GSON_JSON
= ITEM_INVENTORY_MEDIA_TYPE + ".GSON+json";
</pre>
<br />
Here's the JUnit test method used to exercise the above web service handlers:<br />
<pre class="brush: java">@Test
public void testCreateVersion2() throws InterruptedException {
int onhand = 35;
int onOrder = 3;
int iterations = 100000;
JSONObject resource = new JSONObject();
try {
String[] mediaTypes = {
InventoryWebService.ITEM_INVENTORY_MEDIA_TYPE_CUSTOM_JSON,
InventoryWebService.ITEM_INVENTORY_MEDIA_TYPE_JSONOBJECT_JSON,
InventoryWebService.ITEM_INVENTORY_MEDIA_TYPE_JACKSON_JSON,
InventoryWebService.ITEM_INVENTORY_MEDIA_TYPE_JERSEY_JSON,
InventoryWebService.ITEM_INVENTORY_MEDIA_TYPE_GSON_JSON
};
long[][] timers = { {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0} };
// get the plumbing working 1st time
WrappedClientResponse response = post(
"/inventory/item",
mediaTypes[0],
resource.toString()
);
for (int i = 0; i < iterations; i++) {
for (int j = 0; j < mediaTypes.length; j++) {
resource.remove("id");
resource.remove("itemId");
resource.put("onhand", onhand);
resource.put("onOrder", onOrder);
response = post(
"/inventory/item",
mediaTypes[j],
resource.toString()
);
timers[TIMER_POST][j] += response.getResponseTime();
assertEquals(201, response.getStatus());
assertNotNull(response.getHeaders().get("Location"));
String itemId = response.getEntity(String.class);
// now check to make sure we can fetch the item we just created
response = get(
String.format("/inventory/item/%s", itemId),
mediaTypes[j]
);
timers[TIMER_GET][j] += response.getResponseTime();
assertEquals(200, response.getStatus());
resource = new JSONObject(response.getEntity(String.class));
assertEquals(35, resource.getInt("onhand"));
assertEquals(3, resource.getInt("onOrder"));
} // for
}
showStats(iterations, timers, mediaTypes);
} catch (JSONException e) {
fail(e.getMessage());
}
Thread.sleep(1000);
} // testCreateVersion2()
</pre>
Here are the convenience methods used by the tests to execute the HTTP requests and capture the metrics: <br />
<pre class="brush: java">protected WrappedClientResponse get(String uri, String mediaType) {
if (StringUtils.isEmpty(uri) || StringUtils.isEmpty(mediaType)) {
throw new IllegalArgumentException("Programming error - required param missing");
}
WebResource resource = resource().path(uri);
WebResource.Builder builder = resource.accept(mediaType);
long start = System.currentTimeMillis();
ClientResponse response = builder.get(ClientResponse.class);
long stop = System.currentTimeMillis();
WrappedClientResponse wrappedResponse = new WrappedClientResponse(response, stop - start);
trace(response);
return wrappedResponse;
} // get()
protected WrappedClientResponse post(String uri, String mediaType, String data) {
if (StringUtils.isEmpty(uri) || StringUtils.isEmpty(mediaType) || StringUtils.isEmpty(data)) {
throw new IllegalArgumentException("Programming error - required param missing");
}
WebResource resource = resource().path(uri);
WebResource.Builder builder = resource.header("Content-Type", mediaType);
long start = System.currentTimeMillis();
ClientResponse response = builder.post(ClientResponse.class, data);
long stop = System.currentTimeMillis();
WrappedClientResponse wrappedResponse = new WrappedClientResponse(response, stop - start);
trace(response);
return wrappedResponse;
} // post()</pre>
<br />
Here's the model class used for the testing:<br />
<br />
<pre class="brush: java">@XmlRootElement
public class ItemInventory {
private static final long serialVersionUID = -4142709485529021223L;
// item ID
@SerializedName("itemId")
private String _itemId;
public String getItemId() { return _itemId; }
public void setItemId(String itemId) { _itemId = itemId; }
// onhand
@SerializedName("onhand")
private int _onhand;
public int getOnhand() { return _onhand; }
public void setOnhand(int onhand) { _onhand = onhand; }
// on order
@SerializedName("onOrder")
private int _onOrder;
public int getOnOrder() { return _onOrder; }
public void setOnOrder(int onOrder) { _onOrder = onOrder; }
public ItemInventory() {}
public ItemInventory(String itemId, int onhand, int onOrder) {
_itemId = itemId;
_onhand = onhand;
_onOrder = onOrder;
} // constructor
} // class ItemInventory
</pre>
<br />
See <a href="http://usna86-techbits.blogspot.com/2013/03/increase-quality-and-productivity-with.html" target="_blank">this article</a> to learn more about the Jersey Test Framework.<br />
<br />
<h3>Increase quality and productivity with the Jersey Test Framework</h3> (March 1, 2013)<br /><i>Note: This article applies to Jersey 1.x. If you're looking for information on how to use the Jersey Test Framework in Jersey 2 please see <a href="http://usna86-techbits.blogspot.com/2013/12/jersey-jerseytest-migration-from-1x-to.html" target="_blank">this more recent article.</a></i><br />
<br />
With the Jersey test framework developers can increase the quality of their software as well as their productivity without leaving the comfort of their favorite IDE.<br />
<br />
The framework spins up an embedded servlet container that is configured to load the restful resources specified by the developer. In addition, the SpringServlet can be used to wire in the necessary beans if Spring is being used.<br />
<br />
And, this is really super simple. The key is to extend the JerseyTest class and override the configure() method. In the configure() method you supply the same information that you would normally provide in your web.xml.
<br />
<br />
Line 5 : specify the package that contains the Jersey resource(s) you want to test<br />
Line 6 : provide the name and location of the Spring context file (if using Spring)<br />
Line 7 : turn on the JSON to POJO mapping feature if you want to use that<br />
<br />
<pre class="brush: java; highlight:[1,4,5,6,7]">public class MyResourceWebServiceTest extends JerseyTest {
@Override
protected AppDescriptor configure() {
return new WebAppDescriptor.Builder("com.mycompany.services")
.contextParam("contextConfigLocation", "classpath:**/testContext.xml")
.initParam("com.sun.jersey.api.json.POJOMappingFeature", "true")
.servletClass(SpringServlet.class)
.contextListenerClass(ContextLoaderListener.class)
.requestListenerClass(RequestContextListener.class)
.build();
} // configure()</pre>
Once your test class is configured to spin up the embedded web container with your resources now it's time to write your tests. Again, the Jersey test framework makes it so easy even a caveman can do it.<br />
<br />
On line 3 below we simply access a WebResource object and provide the relative URI to the resource we are interested in. This URI should match the @Path mappings in your Jersey resource definition.<br />
<br />
Once the WebResource is defined simply use it to build and execute the HTTP request for the desired HTTP method, in this case GET, as shown on line 6.
That's it. All that's left to do is the standard JUnit stuff to validate the response.
<br />
<pre class="brush: java; highlight:[3,6]"> @Test
public void someTest() {
WebResource webResource = resource().path("/some/resource/17");
ClientResponse response = webResource
.accept(MediaType.APPLICATION_JSON)
.get(ClientResponse.class);
assertEquals(200, response.getStatus());
try {
JSONObject obj = new JSONObject(response.getEntity(String.class));
assertEquals("widget", obj.get("type"));
} catch (JSONException e) {
fail(e.getMessage());
}
} // someTest()
</pre>
<br />
Now, push a button or hit a key or two to kick off your JUnit test suite and watch your Jersey web services and tests fly. If you need to make a change it only takes a minute or two to modify your code and run the tests again.<br />
<br />
One piece of advice - create a test-specific Spring context file targeting the exact REST resources you want to test. And if your resources eventually access some datasource (most would), consider injecting mock data access objects into your Spring beans so you can easily control the data your resource sees, facilitating your testing (and development) and making your tests repeatable.<br />
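As a sketch of that mocking idea, here's what a canned-data DAO might look like. The WidgetDao interface and Widget class are hypothetical placeholders for your own data access layer:<br />

```java
// Hypothetical sketch: WidgetDao and Widget are placeholders for your own
// data access layer, not real framework types.
interface WidgetDao {
    Widget findById(String id);
}

class Widget {
    final String id;
    final String type;
    Widget(String id, String type) {
        this.id = id;
        this.type = type;
    }
}

// Returns canned data so tests are repeatable and need no database.
class MockWidgetDao implements WidgetDao {
    public Widget findById(String id) {
        return new Widget(id, "widget");
    }
}
```

In your test Spring context, wire MockWidgetDao into the resource's bean in place of the real DAO implementation.<br />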
<br />
See <a href="http://usna86-techbits.blogspot.com/2013/02/how-to-return-location-header-from.html" target="_blank">this post</a> if you want to learn how to easily create hyperlinks in your Jersey REST services.<br />
<br />
See <a href="http://usna86-techbits.blogspot.com/2013/08/restful-java-web-service-marshalling.html" target="_blank">this post</a> if you want to see more examples of Jersey unit testing or comparisons of different ways to marshall your data/objects.<br />
<br />
<h3>How to return a Location header from a Jersey REST service</h3> (February 27, 2013)<br />If you're following the Resource Oriented architectural style (ROA) for REST you're often interested in building and returning hyperlinks to your resources in your web service responses.
<br />
<br />
In the case of a POST that creates a resource, in addition to returning an HTTP 201 (Created) response code, you're also going to want to return a hyperlink to the newly created resource in the Location header of the response.
<br />
<br />
The <a href="http://jersey.java.net/">Jersey JSR 311 implementation</a> makes this a trivial task. The first step, as you can see in line 2 below, is to inject a UriInfo class member using the Jersey @Context annotation. Jersey recognizes a number of different resources that can be injected into your service classes via the @Context annotation. In this case we're interested in information about the URI of our web service.<br />
<br />
Once you've completed the work of creating your new resource (whatever that happens to be) and you're ready to formulate a response it's a simple matter to create the hyperlink and place it in the Location header of the response. The key is to get the absolute URI to this current service as we're doing in line 21 below. And assuming your URI convention is to tack the ID on to the URI for the CREATE service (as it probably should be in REST) simply append the ID to the absolute URI and use the Jersey Response builder to complete your response.<br />
<br />
The hyperlink we just created should look something like http://mydomain/services/resource/1234 and map over to the GET-mapped service shown below starting on line 34.<br />
<br />
<pre class="brush: java; highlight:[2,21,34]"> @Context
private UriInfo _uriInfo;
@Path("/resource")
@POST
public Response createResource(String data) {
Response response = null;
try {
// convert data to model object
Model model = someConversionMethod(data);
// save model object
_businessManager.saveModel(model);
// formulate the response
response = Response.status(201)
.header(
"Location",
String.format(
"%s/%s",
_uriInfo.getAbsolutePath().toString(),
model.getId()
)
)
.entity(model.getId())
.build();
} catch (Exception e) {
response = Response.status(500).entity(e.getMessage()).build();
}
return response;
} // createResource()
@Path("/resource/{id}")
@GET
public Response getResource(@PathParam("id") String id) {
...
</pre>
See <a href="http://usna86-techbits.blogspot.com/2013/03/increase-quality-and-productivity-with.html">this post</a> if you want to learn how to easily test your Jersey REST services.<br />
<br />
<h3>UML Class Diagram Relationships, Aggregation, Composition</h3> (November 18, 2012)<br />There are five key relationships between classes in a UML class diagram: dependency, aggregation, composition, inheritance and realization. These five relationships are depicted in the following diagram: <br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-EjTahSiP7is/UKj8B-wIQeI/AAAAAAAAACE/seDeXyS8pKU/s1600/relationships.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-EjTahSiP7is/UKj8B-wIQeI/AAAAAAAAACE/seDeXyS8pKU/s1600/relationships.jpg" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">UML Class Relationships</td></tr>
</tbody></table>The above relationships are read as follows:<br />
<ul><li>Dependency : class A uses class B</li>
<li>Aggregation : class A has a class B</li>
<li>Composition : class A owns a class B</li>
<li>Inheritance : class B is a Class A (or class A is extended by class B)</li>
<li>Realization : class B realizes Class A (or class A is realized by class B)</li>
</ul>What I hope to show here is how these relationships would manifest themselves in Java so we can better understand what these relationships mean and how/when to use each one.<br />
<b></b><br />
<div><b>Dependency</b> is represented when a reference to one class is passed in as a method parameter to another class. For example, an instance of class B is passed in to a method of class A: </div><pre class="brush: java">public class A {
public void doSomething(B b) {
</pre><br />
Now, if class A stored the reference to class B for later use we would have a different relationship called <b>Aggregation</b>. A more common and more obvious example of Aggregation would be via setter injection: <br />
<pre class="brush: java">public class A {
private B _b;
public void setB(B b) { _b = b; }
</pre><br />
Aggregation is the weaker form of object containment (one object contains other objects). The stronger form is called <b>Composition</b>. In Composition the containing object is responsible for the creation and life cycle of the contained object (either directly or indirectly). Following are a few examples of Composition. First, via member initialization: <br />
<pre class="brush: java">public class A {
private B _b = new B();
</pre><br />
Second, via constructor initialization: <br />
<br />
<pre class="brush: java">public class A {
private B _b;
public A() {
_b = new B();
} // default constructor
</pre><br />
Third, via lazy init (example revised 02 Mar 2014 to completely hide reference to B): <br />
<br />
<pre class="brush: java">public class A {
private B _b;
public void doSomethingUniqueToB() {
if (null == _b) {
_b = new B();
}
_b.doSomething();
} // doSomethingUniqueToB()
</pre><b></b><br />
<div><b>Inheritance</b> is a fairly straightforward relationship to depict in Java:</div><br />
<pre class="brush: java">public class A {
...
} // class A
public class B extends A {
....
} // class B
</pre><br />
<br />
<b>Realization</b> is also straightforward in Java and deals with implementing an interface: <br />
<br />
<pre class="brush: java">public interface A {
...
} // interface A
public class B implements A {
...
} // class B
</pre><br />
Note: (added 3/2/14 in response to comments) Let me point out that in the above composition examples 'new' could be replaced with a factory pattern, as long as the factory does not return the exact same instance to any two different containing/calling objects. Doing so would violate a key tenet of composition: the aggregated objects do not participate in a shared aggregation (two different container objects sharing the same component part object). The builder pattern could also be used, as long as the distinct 'parts' are not injected into more than one containing object.<br />
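That factory variant of composition could be sketched like so (BFactory is a hypothetical name; the point is that every call hands back a distinct B, so no part is ever shared between containers):<br />

```java
// Sketch: composition via a factory. Each container gets its own B,
// so no component part instance is shared between containers.
class B {
}

class BFactory {
    static B newInstance() {
        return new B();  // always a fresh instance, never a cached/shared one
    }
}

class A {
    private final B _b = BFactory.newInstance();
    B getB() {
        return _b;
    }
}
```

Two A instances then own two different B parts, which is exactly the non-shared containment that composition requires.<br />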
<br />
6/29/2014 - here's a good article on class diagrams and answers Ivan's question below in the comments:<br />
<br />
<a href="http://www.ibm.com/developerworks/rational/library/content/RationalEdge/sep04/bell/" target="new">http://www.ibm.com/developerworks/rational/library/content/RationalEdge/sep04/bell/</a><br />
<br />
Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com57tag:blogger.com,1999:blog-7625178889455096573.post-65590488884350901712012-10-09T20:15:00.000-05:002012-10-09T20:29:19.594-05:00ActiveMQ Producer Flow Control Send TimeoutActiveMQ has a feature called Producer Flow Control that throttles back message producers when it detects that broker resources are running low. In fact, it will block threads sending messages until resources become available.<br />
<br />
You can configure the broker to timeout the message send so that it does not block when producer flow control is in effect, but this is a global setting and you cannot configure it per queue.<br />
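For reference, here is a sketch of where that global setting lives in the broker's activemq.xml (the 3000 ms timeout and 64 mb limit are illustrative values, not recommendations):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
    <systemUsage>
        <!-- broker-wide: sends that block on producer flow control fail
             after 3 seconds instead of blocking indefinitely -->
        <systemUsage sendFailIfNoSpaceAfterTimeout="3000">
            <memoryUsage>
                <memoryUsage limit="64 mb"/>
            </memoryUsage>
        </systemUsage>
    </systemUsage>
</broker>
```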
<br />
The ActiveMQConnection class does have a setSendTimeout() method, but it is not exposed via the JMS Connection interface. There are a couple of ways to handle this.<br />
<br />
First, you could simply cast your connection object to an ActiveMQConnection and then call the setSendTimeout method directly. This works fine if you know for sure your implementation is ActiveMQ and you have access to the ActiveMQ libraries at compile time (in other words, you don't mind having this dependency in your messaging client).<br />
<br />
<pre class="brush: java; highlight:[7]">try {
// Create a ConnectionFactory
ActiveMQConnectionFactory connectionFactory
= new ActiveMQConnectionFactory("tcp://localhost:61616");
QueueConnection connection = connectionFactory.createQueueConnection();
((ActiveMQConnection)connection).setSendTimeout(5000);
...
</pre>
<br />A second way to handle this would be to use Java reflection to dynamically invoke the setSendTimeout() method if it is available, like so:<br />
<br />
<pre class="brush: java; highlight: [5,6,7,8,9]">try {
...
QueueConnection connection = connectionFactory.createQueueConnection();
try {
Method setSendTimeout = connection.getClass().getMethod(
"setSendTimeout",
int.class
);
setSendTimeout.invoke(connection, 5000);
} catch (Exception e) {
// getMethod() throws NoSuchMethodException (it never returns null),
// so a missing method and any invocation failure both land here
System.out.println("could not invoke the setSendTimeout method");
}
...</pre>
<br />
With this approach, you can configure send timeouts per connection and you can be somewhat JMS provider agnostic in your client. Keep in mind that if you use a container to provide your JMS connection factory the connections you get back may not be ActiveMQ connections, but rather proxy objects that wrap an ActiveMQ connection.Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-49326430934575711992012-09-28T20:56:00.000-05:002012-09-28T22:26:48.941-05:00Project Management, Agile and the Replacement RefsAn article I read this morning mentioned that now that the regular NFL refs are back on the job the referees have faded into the background and the game itself has taken center stage again. Relative calm and order have been restored and fans, players and coaches can focus on the product (football) and not the administration of it. <br />
<br />
I got to thinking about it and I realized how much that applied to project management on agile projects (I'm referring to organizations, like mine, that are project management centric, rooted in waterfall methodologies, and trying to implement scrum in a move toward agile). <br />
<br />
Traditional project management on agile projects, where the focus is primarily on schedules, timelines, budgets, status meetings, resources, timekeeping, etc. etc. is akin to replacement refs in a professional football game - the management of the project is too visible and steals center stage. <br />
<br />
So, bring back the regular refs. Restore the integrity of agile by making the product itself the center of attention, surrounded by the team collaborations and customer interactions that make up the real game, and let project management quietly fade into the background where it can unobtrusively maintain calm and order. <br />
<br />
Disclaimer: I think the replacement refs were put in a tough position where limited training, high expectations and the speed of the game made their ability to succeed a tough proposition from the very outset. I think the same can be said of traditional project managers on agile projects.Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com2tag:blogger.com,1999:blog-7625178889455096573.post-41158939637465772182012-08-31T17:53:00.001-05:002012-09-01T23:38:17.858-05:00slf4j5 - Java logging even fasterIn my <a href="http://usna86-techbits.blogspot.com/2012/08/introducing-slf4j5-logging-varargs.html">previous posting</a> where I introduced slf4j5 I reported that, to my surprise, performance was better than when using slf4j by itself - probably due to the usage of the Java 5 String formatter.
<p>
Inspired by this I took a closer look at performance and ended up creating a dedicated thread in each logger for doing the actual logging work (i/o). This resulted in more than a 50% improvement over slf4j5 without the dedicated thread. This has the added benefit of providing for non-blocking logging in a multi-threaded environment.
</p>
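The dedicated-thread idea can be sketched roughly like this (a toy illustration, not the actual slf4j5 source -- it "logs" to stdout and the names are mine): callers enqueue a message and return immediately, and only the worker thread ever performs the i/o.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy sketch of a logger with a dedicated i/o thread.
public class AsyncLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    public AsyncLogger() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        System.out.println(queue.take()); // the only blocking i/o
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true); // don't keep the JVM alive just for logging
        worker.start();
    }

    public void log(String msg) {
        queue.offer(msg); // non-blocking for the caller
    }

    public int pending() {
        return queue.size();
    }
}
```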
Get the code here: <a href="http://code.google.com/p/slf4j5/">http://code.google.com/p/slf4j5/</a>Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-3759049436395324712012-08-21T23:36:00.002-05:002012-08-22T22:20:21.562-05:00Introducing slf4j5 - logging, varargs, String.format, fasterI was recently investigating some of the Java logging frameworks out there and really like the flexibility/abstraction layer that <b>slf4j</b> provides and decided to give it a try with the <b>logback</b> logging implementation, which is apparently the successor to log4j.<br />
<br />
I noticed that slf4j has its own formatter built into the logging api calls for assembling parameterized strings. However, what I didn't realize is that it only accepted a maximum of 2 parameters. Bummer. What happened to varargs? Turns out slf4j doesn't support Java 5 yet and it didn't sound like it was going to anytime soon.<br />
<br />
So, I wrote a simple Java 5 wrapper around slf4j and, except for the Java 5 string formatting, you use it just like you would slf4j. You can find it here (I'm hoping the slf4j folks will take it on as a subproject - if so, I'll update the link):<br />
<br />
<a href="http://code.google.com/p/slf4j5/" rel="nofollow" target="_blank">http://code.google.com/p/slf4j5/</a><br />
<br />
I figured since I was adding a small layer on top of slf4j that there would be a performance penalty. It would stand to reason, but I wanted to know how much overhead I was adding to slf4j so I wrote some tests to measure it. I was so surprised by the results that I ran the tests over and over and reviewed the code and tweaked the tests until I convinced myself what I was seeing was actually true - the slf4j5 wrapper was actually faster than using slf4j by itself (with logback, of course).<br />
<br />
But, how could that be? My guess is that it's primarily due to the use of String.format() rather than the custom formatter used in slf4j.<br />
<br />
Here were the results:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-0Fc6AVluDQA/UDReYXbPl_I/AAAAAAAAABw/4I3Wli53ucU/s1600/test-results.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="219" src="http://3.bp.blogspot.com/-0Fc6AVluDQA/UDReYXbPl_I/AAAAAAAAABw/4I3Wli53ucU/s320/test-results.jpg" width="320" /></a></div>
In addition, I also enhanced the context-awareness to automatically log the class, method and line from which the logging call originated.<br />
<br />
On a similar note, using this auto-detection strategy, you don't need to specify a class or a name when obtaining your logger. For example:<br />
<br />
LoggerFactory.getLogger() will obtain a logger for the class wherein this statement is contained.<br />
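One way such auto-detection can work (this is a sketch of the general technique, not the slf4j5 source; names are mine) is by walking the current stack trace -- the frame just above the logging utility identifies the calling class:

```java
// Sketch: derive the calling class from the stack trace, the same general
// technique a context-aware getLogger() can use.
public class CallerSniffer {
    static String callerClassName() {
        StackTraceElement[] stack = Thread.currentThread().getStackTrace();
        // stack[0] = Thread.getStackTrace, stack[1] = this method,
        // stack[2] = whoever called us -- the class we want
        return stack[2].getClassName();
    }

    static String demo() {
        return callerClassName(); // resolves to this class, CallerSniffer
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```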
<br />
So, varargs, advanced formatting, faster performance, auto-context detection - several good reasons to take it out for a test drive.<br />
<br />
I will be working on the wiki, but here's an example:<br />
<br />
<pre class="brush: java">public class MyClass {
private final Logger _log = LoggerFactory.getLogger();
public void doSomething(int param1, String param2, double param3) {
_log.debug("entering, params = %d, %s, %8.2f", param1, param2, param3);
// some useful business logic here
_log.debug("leaving");
} // doSomething()
...
</pre>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">
<br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="font-family: inherit;">The above logging statements would result in something like the following in the log file:</span></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">
<br /></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="font-family: Calibri;">2012-08-22 07:33:22.543 DEBUG [main] [MyClass.doSomething():6] entering, params = 100, hello, 500.00</span></div>
<div class="MsoNormal" style="margin: 0in 0in 0pt;">
<span style="font-family: Calibri;">2012-08-22 07:33:23.618 DEBUG [main] [MyClass.doSomething():10] leaving</span></div>
<br />
Let me know how it goes.<br />
<br />Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com4tag:blogger.com,1999:blog-7625178889455096573.post-32110237961302755182012-01-07T20:44:00.000-06:002012-01-07T20:44:02.883-06:00The Agile Machine<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-nLdJOgfg8a8/TwkBqdi22KI/AAAAAAAAABo/INZswEQ9WGs/s1600/TheAgileMachine.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="472" src="http://4.bp.blogspot.com/-nLdJOgfg8a8/TwkBqdi22KI/AAAAAAAAABo/INZswEQ9WGs/s640/TheAgileMachine.png" width="640" /></a></div>Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-68276624385732766092011-04-05T23:02:00.006-05:002011-04-06T09:11:27.060-05:00Network tune-up for Windows Home Server (WHS) performance<i>It's all about the cables</i><br />
<br />
Although I've been very pleased with my Windows Home Server, one area that has been a little disappointing is performance - until today that is.<br />
<br />
Recently I tried restoring a 120 GB backup onto a new hard drive. Unfortunately, WHS reported that it was going to take upwards of 22 hours - yes HOURS - to complete. I figured there was something wrong with that so I cancelled and proceeded to investigate. I found some interesting things on Google that said do this or that but none of those suggestions seemed to work for me.<br />
<br />
Since my router was only capable of 100 Mbps and since my WHS box has a Gigabit LAN port I figured I would try upgrading my router. After setting up my new <a href="http://www.amazon.com/gp/product/B002HWRJY4">Netgear gigabit router</a> I noticed that my WHS box was only connecting to the network at 10 Mbps. That would certainly explain where the 22 hours to restore a 120 GB backup was coming from - 120 gigabytes at 10 megabits per second would take about that long to transfer across the network. But I had a gigabit router and a gigabit LAN port - why was the WHS box only connecting at 10 Mbps?<br />
<br />
Well, it turns out, it was the cable. My network, which I built several years ago, was wired with CAT5 cable. Apparently cabling has come a long way since then and I was unaware. But, when I swapped out the CAT5 cable from my router to my WHS box with the shielded CAT6 cable that came with the new router my WHS box was now connecting to the network at the 1 Gbps speed. Yeah! And, the restore of that 120 GB backup now took less than 30 minutes to complete. Wow, what a difference.<br />
<br />
So, if you're having trouble with performance from your WHS check your LAN cables.<br />
<br />
I would also like to mention that the Netgear N600 router I bought has an awesome feature that I was unaware of when I bought it as it doesn't seem to be described in the product literature. There is a button on the front where you can turn off the wireless portion of the router - very cool since all of my connections are currently wired connections.<br />
<br />
If you want to see how to restore a backup to new/different hardware see my post on <a href="http://usna86-techbits.blogspot.com/2011/03/windows-home-server-to-rescue.html">'Windows Home Server to the rescue'</a>Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-60557069117191789452011-03-06T21:28:00.007-06:002011-04-06T09:14:38.661-05:00Agile Thoughts : Sprint Length<i>team maturity and work definition are key factors</i><br />
<br />
There are many factors that can/should influence sprint length, such as delivery schedules, resource availability, customer requirements, need for feedback, etc., but two often overlooked and perhaps most important factors are team maturity and how well the requirements/work are defined.<br />
<br />
If I were putting together a new team or implementing scrum/agile processes for the first time with an existing team I would lean towards shorter sprints, perhaps on the order of a week or two. I believe this would allow a team to mature much more quickly as there are more opportunities to exercise the full sprint process and more opportunities to use feedback to more rapidly move toward becoming a high-performing team.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://lh5.googleusercontent.com/-7yhwBC5LxtI/TXRcACZQYyI/AAAAAAAAABE/T5BlLF2G6yQ/s1600/sprint-length.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="250" src="https://lh5.googleusercontent.com/-7yhwBC5LxtI/TXRcACZQYyI/AAAAAAAAABE/T5BlLF2G6yQ/s320/sprint-length.png" width="320" /></a></div>Another key factor affecting sprint length is how well the work to be performed is defined and understood. This includes both the business and technical aspects. If the requirements are vague or unclear, or if the technologies to be used are new or not widely known by the team, then it might be a good idea to shorten the sprints to flesh out more detail and get more rapid feedback from the customer on whether the team is on or off course. Likewise, shorter, more focused sprints might help the team determine whether technology or architecture choices were appropriate and correct, as well as helping to minimize risk and wasted effort.<br />
<br />
As you can see from the above chart, mature, high-performing teams with poorly defined requirements and new, immature teams with outstanding requirements are in virtually the same place - they both need shorter sprints, for different reasons of course, but shorter sprints nonetheless.<br />
<br />
See also: <a href="http://usna86-techbits.blogspot.com/2011/02/agile-thoughts-backlog-preparation.html">agile thoughts : backlog preparation</a>Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-41678127232691673502011-03-05T22:40:00.008-06:002011-04-06T09:13:11.792-05:00Windows Home Server to the rescue<i>restoring a PC to new hardware</i><br />
<br />
Several months ago I built a Windows Home Server box partly to back up the family's PCs - one of which is an aging Windows XP machine that I built seven or eight years ago. As luck would have it the last remaining SCSI hard drive in that old XP box started to fail last week, corrupting the OS and causing the machine to fail to boot.<br />
<br />
Since I had this new WHS box I figured I had nothing to lose so I decided to try my first restore. It was dirt simple and it worked, for a day or two, until the OS was corrupted again. I ended up swapping out my SCSI controller for a SATA controller, added a new SATA hard drive and performed a restore from WHS onto my new hardware. It looked like it was going to work just fine - until the first reboot after the restore. As most of you probably guessed, the backup image did not have the drivers for my new PCI SATA card and thus Windows failed to boot.<br />
<br />
I tried numerous things and finally discovered the recipe that would let me successfully restore the backup for my old hardware onto my new hardware:<br />
<br />
1. Restore the PC from WHS onto the new hardware<br />
<br />
2. Boot from the Windows XP CD, pressing F6 at the right time to install the SATA drivers for the new hardware<br />
<br />
3. Choose to install Windows XP (do not enter the XP recovery console)<br />
<br />
4. When prompted, choose to 'Repair' the current installation<br />
<br />
Windows will appear to be performing a fresh install (and to some extent it is), but all of your programs and data will be left intact. If you goof up along the way and accidentally do a full reinstall instead of a repair don't fret, simply go back to step 1 and start over by restoring the PC from WHS again.<br />
<br />
5. Once the repair is complete reboot into the OS and run Windows Update to recover all the patches and updates that were lost by the repair (in my case Windows was set back to SP2 from SP3 since SP2 is the service pack level of my installation CD)<br />
<br />
6. I would advise performing a manual backup to WHS at this point<br />
<br />
In hindsight it seems like a pretty simple process, and it is, but it did take some trial and error to figure out. Needless to say I am very pleased with Windows Home Server and my decision to add a WHS box to my home network.<br />
<br />
See my post on <a href="http://usna86-techbits.blogspot.com/2011/04/network-tune-up-for-windows-home-server.html">'network tune-up for WHS'</a> to find out how to make the above process much faster and smoother.Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-21596262292831330542011-02-26T11:51:00.005-06:002011-04-06T09:15:40.389-05:00Agile Thoughts : Backlog Preparation<i>Two hours can make a huge difference.</i> <br />
<br />
I've been working in an agile development shop using the Scrum methodology for over four years now and have a few thoughts on what works well, what doesn't work so well and some thoughts on how to improve the process. The first topic I would like to discuss is backlog preparation and where that fits/should fit into the sprint schedule. <br />
<br />
For those of you unfamiliar with Scrum/agile a 'sprint' is a short iteration, somewhere in the 2 to 4 week range (± a week), that consists of work selection, planning, design, implementation/development, testing, presentation to the client and a team retrospective - usually fairly rigid and in that order.<br />
<br />
The team works from the 'backlog' - a list of features or capabilities (called stories) that need to be researched, developed or integrated into the software. This list is created and prioritized by the 'solution owner' in cooperation with the client/customer. But since we are talking about agile, this list can be changed frequently based on customer feedback and changing priorities.<br />
<br />
Usually, these stories start out as nothing more than simple one-line statements or short paragraphs of the form 'as a user I need to be able to do X.' At some point in this agile/scrum process these stories need to be fleshed out in enough detail so that (a) the story can become actionable by the team and (b) the amount and type of effort required to complete the story can be estimated with some degree of accuracy. In my experience this usually occurs at backlog selection (the kickoff meeting for the new sprint where work is selected). This almost without exception leads to meetings that are long, frustrating, and less productive than they need to be.<br />
<br />
Agile/scrum teams usually try to combat this by holding 'backlog grooming' meetings throughout the sprint to flesh out some of the details of these future stories and make some preliminary design decisions. This, however, has several shortcomings that I have seen time and again: (1) it interrupts the flow/focus of the current sprint, (2) team members are distracted by the current sprint's work and don't fully focus/participate in the thought process for developing future stories and (3) the team many times invests time in preparing stories that they will never actually work or that change dramatically by the time they do.<br />
<br />
I used to work in manufacturing and one of the key concepts was 'just in time' - you bring the materials, machinery, and manpower together at just the right time so that inventory isn't building up and so that people and machinery aren't sitting idly by. It's a great concept and aptly applies to software development and agile processes. In this context <b>I believe there is one, and only one, place for backlog preparation and that is sometime between when the team has completed its work on the current sprint and prior to the next backlog selection meeting</b>.<br />
<br />
The purpose of these backlog preparation meetings is for the solution owner to present the team with the stories that are to be worked in the coming sprint, for the team to ask some initial questions, and for the team to then go off and do some initial brainstorming. The result should be stories that have a clearer 'definition of done' with some initial high-level tasking from which reasonable estimates of effort can be made. This meeting should be short, perhaps no more than an hour with the solution owner present and perhaps another hour for the team to brainstorm and come up with an initial tasking, estimates, additional questions for the solution owner and, if need be, alternative implementations/paths forward.<br />
<br />
The benefits to this approach are that the team is constantly focused on the work they are to be performing at any given point in time, resources are more efficiently and effectively utilized, the actual backlog selection meeting is more productive, estimates are more accurate, teams are happier and more engaged, and sprints get started off on the right foot and have a higher probability of success.<br />
<br />
<i>Two hours spent in backlog preparation - <u>at the right time</u> - can make a huge difference.</i><br />
<br />
See also: <a href="http://usna86-techbits.blogspot.com/2011/03/agile-thoughts-sprint-length.html">agile thoughts : sprint length</a><i> </i>Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-74496751574961991232011-02-26T10:44:00.006-06:002011-04-06T09:17:45.609-05:00Standalone ExtJS XTemplate classesExtJS XTemplates are awesome! They provide an easy way to combine custom presentation markup with simple or complex data on the client. Sometimes that markup needs to be more dynamic than simply plugging the data straight into the template. But, the Ext folks already thought of that and allow you to add methods to your XTemplate definition. This is great, but can lead to gangly template definitions with scoping issues.<br />
<br />
In a recent situation at work we had a 400+ line template definition - only about 20 lines of that was the presentation template, the rest being methods to manipulate/interpret the data (beyond the conversions we had already applied to the data). In our situation we needed to interpret the same piece of data in different ways depending on where we were in the template (context) as well as the type of view the user wanted to see. For those of you familiar with XTemplates you will realize that the 400+ lines of template definition are in the constructor call to the XTemplate class - basically a huge constructor parameter. Obviously it was time for some refactoring.<br />
<br />
I have written numerous custom components in javascript, but never one extending the XTemplate, so I decided to try making our template a custom class that extended the ExtJS XTemplate. Turns out it worked beautifully with very little modification to the original template (other than relocating it to its own file and doing some minor restructuring). The template markup became part of the call to the super constructor in my new class' constructor and the methods became first class citizens of my new class (which ext accomplishes behind the scenes anyway in the original implementation).<br />
<br />
As a result the client code using the template only needed a single line to create an instance of the template, the template is now reusable if needed, the code is cleaner all around, and the scope/context inside the template methods is more natural and easier to understand.<br />
<br />
See also: <a href="http://usna86-techbits.blogspot.com/2011/01/injecting-extjs-components-via-html.html">injecting extjs components via html templates</a>Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com2tag:blogger.com,1999:blog-7625178889455096573.post-24628705102845698472011-01-17T18:47:00.003-06:002012-08-22T22:46:56.109-05:00Injecting ExtJS components via an html template<i>Use Ajax to load an html page as a template for ExtJS and then plug ExtJS components into it.</i>
<br />
<br />
Sometimes a web page layout may be too complicated or time-consuming to develop purely in ExtJS or perhaps you want to convert an existing html page to use ExtJS components. In either case there is a simple and straightforward way to inject ExtJS components into a complex html page.
There are only a few simple steps needed to accomplish this: <br />
<ul>
<li>create the html</li>
<li>fetch the html</li>
<li>load the html</li>
<li>plug in the ExtJS components</li>
</ul>
Here is a snippet from myPage.html. Notice the {idBase} included as part of the id. That is a template param that will be replaced when the ExtJS XTemplate is processed. The purpose of {idBase} is to help make sure that each div section has a unique ID and is not really germane to this article.
<br />
<pre class="brush: html">
<div id='buttonPanel_{idBase}'>
<div id='myButton_{idBase}'></div>
...
</div>
</pre>
The following methods are from myScript.js.
This method loads the html using an Ajax request:
<br />
<pre class="brush: javascript"> initStructure : function() {
Ext.Ajax.request({
url : 'myPage.html',
disableCaching : false,
method : 'GET',
success : this.onStructureLoaded.createDelegate(this)
});
} // initStructure()
</pre>
This success handler puts the html text into an ExtJS XTemplate and then loads that into the body of this component (an ExtJS panel or window):
<br />
<pre class="brush: javascript"> onStructureLoaded : function(response, options) {
var template = new Ext.XTemplate(
response.responseText
);
this.body.update(template.apply({
idBase : this.id
}));
this.initMyButton();
...
} // onStructureLoaded()
</pre>
Once the html has been loaded into the DOM we can start plugging our ExtJS components into it:
<br />
<pre class="brush: javascript"> initMyButton : function() {
new Ext.Button({
applyTo : this.getCustomId('myButton'),
text : 'My Button',
handler : this.onMyButtonClick.createDelegate(this)
});
} // initMyButton()
getCustomId : function(name) {
return String.format('{0}_{1}', name, this.id);
} // getCustomId()
</pre>
Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com1tag:blogger.com,1999:blog-7625178889455096573.post-54398946560181990762011-01-12T21:23:00.000-06:002011-01-12T21:24:33.686-06:00Spring-loading and injecting external properties into beansLet's say you have a Spring managed bean that contains some properties that you would like to externalize from your application, say perhaps in a JBoss 'conf' folder properties file. Apparently you can do this via annotations in Spring 3, but it's also fairly straightforward in Spring 2.5:<br />
<br />
From the context.xml file:<br />
<br />
<pre class="brush: xml"><bean id="propertyConfigurer"
    class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:my_app.properties"/>
    <property name="placeholderPrefix" value="$prop{"/>
</bean>

<bean id="someBeanWithProps" class="my.class.with.Props">
    <property name="myPropA" value="$prop{prop.file.entry.prop.A}"/>
    <property name="myPropB" value="$prop{prop.file.entry.prop.B}"/>
</bean>
</pre>Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-50795388173203262432011-01-12T20:51:00.000-06:002011-01-12T21:12:08.142-06:00JBoss, JNDI and java:comp/envOn startup JBoss will process any xyz-service.xml files it finds in the deploy folder before it processes any war or ear files, etc. One thing this could be useful for is to preload configuration values into JNDI, thus making them available to web applications when they start up. It may sound simple but it consists of a non-obvious four step process:<br />
<br />
1. Create a JNDIBindingServiceMgr mbean in the xyz-service.xml file.<br />
<br />
2. In the WEB-INF/jboss-web.xml file map a resource-env-ref entry over to a JNDI value bound in step 1.<br />
<br />
3. In the WEB-INF/web.xml file create a resource-env-ref entry for each JNDI bound value.<br />
<br />
4. Access the JNDI value from somewhere, such as a servlet filter, using 'java:comp/env'<br />
<br />
First, the xyz-service.xml file:<br />
<br />
<pre class="brush: xml"><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE server PUBLIC "-//JBoss//DTD MBean Service 4.0//EN"
    "http://www.jboss.org/j2ee/dtd/jboss-service_4_0.dtd">
<server>

    <mbean code="org.jboss.naming.JNDIBindingServiceMgr"
        name="netcds.cas.client:service=JNDIBindingServiceMgr">

        <attribute name="BindingsConfig" serialDataType="jbxb">

            <jndi:bindings
                xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:jndi="urn:jboss:jndi-binding-service:1.0"
                xs:schemaLocation="urn:jboss:jndi-binding-service:1.0 resource:jndi-binding-service_1_0.xsd">

                <jndi:binding name="my/jndi/property">
                    <jndi:value type="java.lang.Boolean">false</jndi:value>
                </jndi:binding>

            </jndi:bindings>
        </attribute>
        <depends>jboss:service=Naming</depends>
    </mbean>

</server>
</pre>
<br />
Next, the resource-env-ref entry in the jboss-web.xml file:<br />
<br />
<pre class="brush: xml"><resource-env-ref>
    <resource-env-ref-name>my/jndi/property</resource-env-ref-name>
    <jndi-name>my/jndi/property</jndi-name>
</resource-env-ref>
</pre>
<br />
And the associated web.xml entry:<br />
<br />
<pre class="brush: xml"><resource-env-ref>
    <resource-env-ref-name>my/jndi/property</resource-env-ref-name>
    <resource-env-ref-type>java.lang.Boolean</resource-env-ref-type>
</resource-env-ref>
</pre>
<br />
Finally, accessing the JNDI value from a servlet filter:<br />
<br />
boolean result = false;<br />
try {<br />
InitialContext context = new InitialContext();<br />
result = (Boolean)context.lookup("java:comp/env/my/jndi/property");<br />
} catch (final NamingException e) {<br />
// log and/or sys out<br />
}Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com1tag:blogger.com,1999:blog-7625178889455096573.post-20738996180343626802010-12-16T18:04:00.000-06:002012-08-22T18:29:07.521-05:00No dynamic filters in servlet spec 2.4 you say?I had a requirement recently to be able to dynamically control CAS security filters in a web application (default CAS security to off for development and allow it to be turned on by external configuration post-deployment). Unfortunately, servlet spec 2.4 does not allow one to programmatically add new servlet filters (at least that's the prevailing theory). This feature was added in the servlet 3.0 API.<br />
<br />
My friend Google said there were a number of others who wanted to do the same thing, but they were all being pointed to servlet 3.0. Unfortunately, servlet 3.0 and Java EE 6 were not an option for me, so it was looking like a tough nut to crack.<br />
<br />
Then it struck me: what if I created a generic, conditional servlet filter that took the class name of the real filter as an init param? And what if I passed in the condition to be evaluated to determine whether or not to create and invoke the real filter? Then, in the conditional filter, I could examine the condition and, as necessary, dynamically create an instance of the wrapped filter class.<br />
<br />
Turns out it worked like a charm. Here's how. First, the filter definition in web.xml:
<pre class='brush: xml'>
<filter>
<filter-name>CAS Authentication Filter</filter-name>
<filter-class>my.org.security.servlet.ConditionalFilter</filter-class>
<init-param>
<param-name>condition</param-name>
<param-value>cas/enabled</param-value>
</init-param>
<init-param>
<param-name>wrapped-class</param-name>
<param-value>
org.jasig.cas.client.authentication.AuthenticationFilter
</param-value>
</init-param>
</filter>
</pre>
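The filter-mapping itself is just the standard one, pointed at the conditional filter's name (a sketch; the url-pattern here is an assumption — use whatever your CAS filter would normally guard):
<pre class='brush: xml'>
<filter-mapping>
<filter-name>CAS Authentication Filter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
</pre>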
<br />
And the conditional filter itself:
<pre class='brush: java'>
public class ConditionalFilter implements Filter {
// instance of the actual filter being wrapped
private Filter _wrappedFilter;
// are we to ignore the wrapped filter?
private boolean _ignore = true;
public ConditionalFilter() {
} // constructor
public void init(FilterConfig filterConfig) throws ServletException {
// the 'condition' init param tells us whether or not
// the wrapped filter is active
_ignore = !checkCondition(filterConfig.getInitParameter("condition"));
try {
if (!_ignore) {
// the wrapped filter is active so we create an instance
// of it and initialize it
_wrappedFilter = getFilterInstance(
filterConfig.getInitParameter("wrapped-class")
);
_wrappedFilter.init(filterConfig);
}
} catch (Exception e) {
throw new ServletException(e);
}
} // init()
public void doFilter(ServletRequest request,
ServletResponse response,
FilterChain filterChain)
throws IOException, ServletException {
if (!_ignore) {
// the wrapped filter is active so we let it do its work
_wrappedFilter.doFilter(request, response, filterChain);
} else {
// wrapped filter is inactive so simply move on to the next filter
filterChain.doFilter(request, response);
}
} // doFilter()
public void destroy() {
if (!_ignore && _wrappedFilter != null) {
// the wrapped filter was created, so give it a chance to clean up
_wrappedFilter.destroy();
}
} // destroy()
private Filter getFilterInstance(String className)
throws ClassNotFoundException, InvalidClassException,
InvocationTargetException, IllegalAccessException,
InstantiationException, NoSuchMethodException {
// try to create an instance of the wrapped filter
// with the given class name
Class<?> filterClass = Class.forName(className);
java.lang.reflect.Constructor<?> constructor = filterClass.getConstructor();
Object filter = constructor.newInstance();
if (!(filter instanceof Filter)) {
throw new InvalidClassException(
String.format("'%s' is not an instance of Filter", className)
);
}
return (Filter)filter;
} // getFilterInstance()
/*
* looks up the configured 'condition' via JNDI to determine
* whether or not the wrapped filter is active
*/
private boolean checkCondition(String condition) {
boolean result = false;
try {
InitialContext context = new InitialContext();
String path = String.format("java:comp/env/%s", condition);
result = (Boolean) context.lookup(path);
} catch (final NamingException e) {
System.out.println(
String.format("unable to load condition '%s' from JNDI", condition)
);
}
return result;
} // checkCondition()
} // class ConditionalFilter
</pre>
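As an aside, the reflective instantiation inside getFilterInstance() boils down to a small, reusable pattern. Here's a standalone sketch — the Plugin interface and HelloPlugin class are hypothetical stand-ins for Filter and a concrete filter, so it runs without the servlet API on the classpath:

```java
public class ReflectiveFactory {

    // Hypothetical stand-in for javax.servlet.Filter.
    public interface Plugin {
        String name();
    }

    // Hypothetical concrete implementation, loaded by name below.
    public static class HelloPlugin implements Plugin {
        public String name() { return "hello"; }
    }

    // Same pattern as getFilterInstance(): load the class by name, invoke its
    // no-arg constructor, then verify the instance implements the expected type.
    public static Plugin getPluginInstance(String className) {
        try {
            Class<?> clazz = Class.forName(className);
            Object instance = clazz.getConstructor().newInstance();
            if (!(instance instanceof Plugin)) {
                throw new IllegalArgumentException(
                    String.format("'%s' is not an instance of Plugin", className));
            }
            return (Plugin) instance;
        } catch (ReflectiveOperationException e) {
            // wrap checked reflection failures so callers see one exception type
            throw new IllegalArgumentException(
                String.format("cannot instantiate '%s'", className), e);
        }
    }

    public static void main(String[] args) {
        Plugin plugin = getPluginInstance("ReflectiveFactory$HelloPlugin");
        System.out.println(plugin.name()); // prints "hello"
    }
}
```

In the real filter the class name comes from the 'wrapped-class' init param; the instanceof check is what turns a misconfigured param into a clear error instead of a ClassCastException later.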
Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-66710941448484628402010-04-02T15:36:00.000-05:002010-04-03T00:19:03.129-05:00One Thumb Up for Pair ProgrammingPair programming is one of the characteristics of extreme programming and, frankly, something I have not been a particularly strong advocate of. The idea of two developers sitting side-by-side, sharing one keyboard and working the exact same problem seems terribly inefficient to me. However, there are two situations where short stints of pair programming would be beneficial.<br />
<br />
The first is test driven development. For a particular functional area under development, one developer would write the unit/integration tests and the other would write the code. To me, this would be the most efficient use of pair programming.<br />
<br />
The second is to gain insight into the work practices, processes, and procedures of one's teammates. For me personally, I could see how my teammates work and glean some ideas on how I could be more productive and efficient. What tools do they use? How do they use them? Do they have any shortcuts or time-savers? Likewise, it would be an opportunity for me to help my teammates improve their efficiency by offering suggestions based on the things that work for me.Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0tag:blogger.com,1999:blog-7625178889455096573.post-12027552289397404152010-03-09T11:16:00.000-06:002010-04-02T15:39:25.429-05:00jboss plugin for auto-deploying artifact during buildThis is VERY handy for automatically deploying your artifact/war after a maven build completes. Simply add the goal <b>jboss:hard-deploy</b> to your maven command, e.g. <b>mvn clean package jboss:hard-deploy</b>.<br />
<pre><plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jboss-maven-plugin</artifactId>
<version>1.4</version>
<configuration>
<jbossHome>${jboss.home}</jbossHome>
<serverName>default</serverName>
<fileName>target/my-app.war</fileName>
</configuration>
</plugin></pre>Russ Jacksonhttp://www.blogger.com/profile/15521749913146166813noreply@blogger.com0