Tuesday, October 28, 2008

Heap n Stack

To explain the answer to my earlier post - "Simple One But...", the difference between the two memory areas in Java - the heap and the stack - has to be understood first.

Whenever a program runs in the JVM, the way memory is managed is depicted in the figure below.

Now coming back to the earlier question: whenever a method is executed, a new data record (stack frame) is created on the stack. So each method call has an exclusive data record associated with it (Figure 2). And when a method calls other methods, the records of the called methods are stacked up along with the caller's (Figure 3).

This answers the question in the post below, as static methods are no different from non-static methods in this respect. But there is a catch here: the statement is correct only if the arguments are primitives. When they are objects, the objects themselves still reside on the heap and are only referred to from the stack. (Figure 4)
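
As a small sketch (the class and thread names below are mine), take the add method from that earlier post and call it from two threads; because every invocation gets its own stack frame, the primitive parameters and the local sum never interfere and no synchronization is needed.

public class StackFrameDemo {

    // Same shape as the add() method from the earlier post: everything it
    // touches is a primitive parameter or a local primitive.
    public static void add(int a, int b) {
        int sum = a + b; // lives on the calling thread's own stack frame
        System.out.println(Thread.currentThread().getName() + " -- Sum equals -- " + sum);
    }

    public static void main(String[] args) {
        new Thread(new Runnable() {
            public void run() { add(3, 4); }
        }, "thread-1").start();
        new Thread(new Runnable() {
            public void run() { add(4, 5); }
        }, "thread-2").start();
        // Prints 7 and 9 (in either order) every time: the two frames are
        // separate, so no synchronized is needed here.
    }
}
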
Finally, to remember where a particular element lives (heap or stack): a local variable (primitive or reference) belongs to a method and lives with it on the stack, while an instance variable belongs to an object and lives with it on the heap. Also note that a local reference variable on the stack may still point to an object on the heap; that object does not die with the local reference variable or when the method finishes executing.
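
A small illustration of this rule (the Account class and its fields are made up for this post):

public class Account {

    private String owner;  // instance variable: lives on the heap inside the Account object
    private int balance;   // primitive instance variable: also part of the heap object

    public void deposit(int amount) {
        int newBalance = balance + amount; // local primitive: on this call's stack frame
        String note = "deposited";         // local reference: the reference is on the stack,
                                           // the String object it points to is on the heap
        balance = newBalance;
        // When deposit() returns, newBalance and note vanish with the stack frame;
        // the Account object (and the String it referenced) stay on the heap until
        // garbage collected.
    }
}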

Note: the figures displayed are extracted from the book (via Google Book Search) SCJP Exam for J2SE 5 by Paul Sanghera and from lecture slides by Gerald Weber.

Tuesday, October 21, 2008

Chordiant 6.2 - QueueItem Attributes or Properties - Holding a value at runtime

Before we go ahead: a QueueItem is used in dynamic process flows, where the requirement is to push items into and pull them out of queues. A typical example would be any authorisation workflow with hierarchies.

When implementing such Chordiant workflows, all the service flow information is stored in the database (jxwwflow and jxwtask tables). The state of the process and the context variable data are stored inside a BLOB column in the jxwwflow table.

Now, if we need a queue item to display a value captured at runtime, the QueueItem attribute or property should be set with the value available at runtime. Below are the steps.

1. Open the XML file 'bpdconstraints.xml' available in the ChordiantEAR/config folder.
2. Edit the XML by adding the required properties or attributes. Example -

3. Open Window > Preferences inside RAD and choose BPD in the left menu. In the lower panel, select the above file in "Source for QRT queue, user, and property names".

The above procedure defines the attributes or properties to be mapped to runtime values. Now, in the BPD perspective, select the required task and go to its Properties tab.

4. Inside the task properties, select Queue and Route properties and click the button present there.
5. In the popup, under the Properties tab, you should be able to see the properties added in the above XML. Select the required one and accordingly map it to a value, expression or variable (select workflow scope and the context variable).

The runtime value present in the context variable is then mapped to the property and can be used. If an attribute is required instead, use the Attributes tab and do the same.

Simple One but ....

In my past 5+ years of experience as a Java panelist for recruitment drives in the company, the question I ask most regularly is this:

What is static? What does it mean when a method is declared static? Of course, the most obvious answer would be: "Static means only one instance per class. And when a method is static, only one instance of it exists for that class." Immediately my reaction would be a sharp smile, as the candidate has fallen into the trap.

My next question is: then I have a method called add, as below.

public static void add(int a, int b) {
    int sum = 0;
    sum = a + b;
    System.out.println("Sum equals -- " + sum);
}

What is the result when two concurrent threads access the same method with inputs (3, 4) and (4, 5)?

Will the execution give right results or wrong results, and why?
In about 99% of cases candidates get this wrong: they first say "right results", and when I point back to their definition of a single instance and stress the two concurrent threads, the answer changes to declaring the method synchronized. Then, to further test their reasoning, I ask: so do you mean every method declared static must also be declared synchronized? Bowled!!!!!

Any tries are welcome.. will explain later...

Note: this is just to test a simple Java basic, not that I am a sadist with the candidates. Having got bored of asking the typical questions listed on many websites (java interview questions etc.), I try asking unorthodox questions.

Friday, October 17, 2008

"SCORN" - Keyword for analysing UI performance

A good article I have read on analysing the front-end performance of any web application. The key points are:

SCORN stands for the following:
Size -- Caching -- Order -- Response codes -- Number
Below is a brief explanation of each of the above parameters.


Size: When it comes to website performance, smaller is better. Whether it's a graphic, a script or the base HTML page, it has to get from the server to the client at least once, and that trip is bound to take less time when there is less information to transmit. Specifically look for uncompressed graphics and media, object or code duplication, scripts and styles living outside of the base HTML, and code "minification."


Caching: Any page or object that users will view more than once is worth at least considering caching on the client side. The trick is balancing the relative speed of pages and objects being displayed from cache with the fact that objects displayed from cache may not be the most recent version. Check for Expires settings, ETags and other client-side cache controls to see if they strike a sensible balance between object load time and object freshness.
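
For a Java web application, a minimal sketch of such client-side cache controls on the server side could look like the following (the servlet name, content type and one-week lifetime are my own illustrative choices, not from the article):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet serving a rarely-changing object (e.g. a logo):
// it sets an Expires date and a Cache-Control max-age so the browser can
// reuse its cached copy instead of re-fetching, trading freshness for speed.
public class LogoServlet extends HttpServlet {
    private static final long ONE_WEEK_MILLIS = 7L * 24 * 60 * 60 * 1000;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("image/png");
        response.setDateHeader("Expires", System.currentTimeMillis() + ONE_WEEK_MILLIS);
        response.setHeader("Cache-Control", "max-age=" + (ONE_WEEK_MILLIS / 1000));
        // ... write the image bytes to response.getOutputStream() ...
    }
}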

Order: Possibly the most dramatic user-perceived performance gain can be achieved by requesting component objects in the correct sequence. In most cases, the sequence of objects should be as follows:
Styles/style sheets
Critical content (i.e. what the user came to the page to see)
Relevant media (i.e. graphics related to the critical content)
Incidental content (i.e. non-critical graphics, possibly advertisements)
Scripts


Response codes: Checking response codes for each object can help identify requests for objects that don't actually exist, superfluous redirects and errors that aren't apparent from the browser. Each of those can cost more time than getting the object without error/redirect or eliminating the unused request entirely.
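
One quick way to spot these outside the browser is to request each object yourself and inspect its status code. A rough sketch in plain Java (the URL is only a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;

// Checks the HTTP response code of a single object; 404s (missing objects)
// and 301/302s (redirects) each cost a round trip that is worth questioning.
public class ResponseCodeCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/images/logo.png"); // placeholder URL
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("HEAD");          // headers only, no body
        connection.setInstanceFollowRedirects(false); // so redirects stay visible
        int code = connection.getResponseCode();
        System.out.println(code + " for " + url);
        connection.disconnect();
    }
}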

Number: For the most part, fewer is better. But depending on a user's connection speed, several smaller graphics may be faster than one large one. In the vast majority of cases, one external style sheet and one external script file will be fastest. It's worth asking questions if the number of objects impresses you as other than "fewer".

For more details refer to --
http://searchsoftwarequality.techtarget.com/tip/0,289483,sid92_gci1301766,00.html?track=NL-516&ad=659546&asrc=EM_USC_4391444&uid=4931891

New feature - Scope of Page Bean is Process!!

In the last 2.5 years of working on Chordiant projects, I have seen many instances where a very simple and common requirement in a JSP turned into a nightmare when implemented in JSF. Whenever the same page has to be presented to the customer more than once based on his actions (for example, when a dropdown is selected and the screen refreshes to present different details) and has to retain all the values previously captured, we either need to define the page bean scope as session, or keep the bean in request scope and place the objects in the IOMap or session and access them each time. This is pretty complex to maintain and has a performance impact as the load increases.

Recently, an enhancement was made in the Interaction Controller layer of Chordiant 6.3 to handle this. The gist of it is below (derived from Chordiant Mesh).

The main idea is to move the request-scoped bean into the same scope as the client task, so that rather than representing a page, these beans represent the client task itself. The second enhancement is to provide a set of annotations to simplify validation and "next" processing.

Process-scoped client task beans ----> Subclassing ICFacesBackingBean automatically places the bean in process scope, from where it can be accessed by the page. The subclass needs to be defined in faces-config as request scope just as before, or you can provide text in the @Name annotation that matches what the IC framework is looking for.

The bean is placed in process scope under the name "pageBean". The backing bean superclass looks after managing this bean in its scope and, unlike the default Chordiant JSF framework, it instantiates the bean before the page is rendered. This allows a certain amount of preprocessing before the page renders, which is useful.

You can then access the bean from your page by using #{processScope.pageBean}. As the bean is maintained in process scope for the duration of the client task, validation loops and PPR calls are much easier, as temporary state can be stored directly on the bean.

To make things simpler, this framework makes use of annotations. They are:

@Name(value) - This is a type-level annotation and is optional. If you give the bean a name that matches the IC name for it, e.g. @Name("ic$idverify_selectCustomer"), then you don't need the faces-config entry.

@PreProcess - This is marked on a single method and is invoked after the bean is created. The bean is created on the back of a response from start or next. The client task object is available at this point, so bean initialization can occur. The most common use for this is to take what is on the IOMap and turn it into something more useful for the bean. This could be as simple as putting the customer object as an attribute of the bean.

@Pack(choice) - This annotation takes an optional choice argument. Without an argument, it gets executed whenever next is run. If it is supplied with a choice, then it only gets executed for that particular choice. Note that the general (no-choice) @Pack always gets executed.

@Validate(choice) - Similar to the @Pack annotation, this gets executed during the validation phase (Chordiant, not JSF). The general, non-choice @Validate is executed first, and then any specific ones that match the choice. For example, if you had a back button on your page that was bound to next but you didn't want to validate on it, you might only annotate a validation method with @Validate(choice="next").

The following example shows a client task responsible for selecting a customer.
--------------------
@Name("ic$idverify_selectCustomer")
public class SelectCustomerBean extends ICBackingBaseBean {

private List customers; private CoreTable table;
public List getCustomers() { return customers; }
public void setCustomers(List customers) { this.customers = customers; }
public CoreTable getTable() { return table; }
public void setTable(CoreTable table) { this.table = table; }

@PreProcess public void init() {
setCustomers((List) getClientTask().getIoMap().get("customers"));
}

@Validate(choice = "select")
public void validateSelection() {
if (getTable().getSelectedRowData() == null) {
//Message - saying select a value
}
}

@Pack(choice = "select")
public void packSelectedCustomer() {
Object rowData = getTable().getSelectedRowData();
getClientTask().getIoMap().put("selectCustomerReturn", rowData);
}
}
---------------------------
This is definitely a very useful enhancement.

Wednesday, October 15, 2008

ArrayList - Key Factor

Similar to the one below, another common question is which is better: ArrayList, Vector or LinkedList. There are various factors to consider. For example, for retrieving elements by index, ArrayList is faster than LinkedList, but for adding an element at a particular position, say 0, LinkedList is faster. Let's not go into those details now. Overall, in a general scenario, it is always recommended to use ArrayList in place of Vector. But will that really help??

Internally, both ArrayList and Vector hold their contents in an array. When a new element is inserted and the internal array is already full, the object has to expand it. In that scenario, a Vector by default doubles the size of its array, while an ArrayList increases its array size by 50 percent. So, if the ArrayList or Vector is not created with an initial capacity, it starts with a small default capacity (10 elements), and as new elements are added it keeps resizing and copying its array, and you end up taking a large performance hit.

So, it's always best to set the object's initial capacity to the largest capacity that your program will need. By carefully setting the capacity, you can avoid paying the penalty of resizing the internal array later. If you don't know how much data you'll have but you do know the rate at which it grows, Vector has a slight advantage, since you can also set the capacity increment.
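
A minimal sketch of presizing (the capacity figures below are only illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class PresizeDemo {
    public static void main(String[] args) {
        // Starts with the small default capacity; the internal array is
        // re-created and copied every time it fills up.
        List<String> growsRepeatedly = new ArrayList<String>();

        // Allocates room for the expected 10000 elements up front,
        // so no resizing is needed while filling it.
        List<String> presized = new ArrayList<String>(10000);

        // Vector can also be given a capacity increment (here +2000 elements
        // per resize) instead of its default behaviour of doubling.
        Vector<String> vector = new Vector<String>(10000, 2000);

        for (int i = 0; i < 10000; i++) {
            presized.add("element-" + i);
        }
    }
}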

Use StringBuffer instead of += -- Will this really help??

A very commonly suggested optimization technique for Java is to use a StringBuffer instead of a String when concatenating. The typical reason given is that using the + or += operators to concatenate strings causes a new object to be created for each concatenation. The StringBuffer, on the other hand, provides an append() method, which allows the string to grow without a new String object being created each time.

But there is a catch here: this can sometimes turn out to be less effective than expected. Why? This can be explained as below.

Take the example

Snip 1:
String s1 = "Hello";
s1 = s1 + " XX";

Snip 2:
StringBuffer sb = new StringBuffer("Hello");
sb.append("XX");

Now, when both of the above snips are compiled: in Snip 1, the compiler is smart enough to recognize that a concatenation is being executed and automatically creates a StringBuffer; each concatenation operation is converted to append() calls behind the scenes.
So, frankly, manually coding a StringBuffer is unnecessary, unless the following reason is considered.
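
For example, Snip 1 compiles into roughly the equivalent of the following (on Java 5 and later compilers it is actually StringBuilder, the unsynchronized twin of StringBuffer, that gets generated):

// Roughly what the compiler emits for:  s1 = s1 + " XX";
String s1 = "Hello";
s1 = new StringBuilder().append(s1).append(" XX").toString();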

In order for a StringBuffer to truly optimize concatenation, it must be seeded properly. In other words, it needs to be given an appropriate initial size. This is because the StringBuffer keeps the characters of the string it is maintaining in an array. When append() is called, the StringBuffer checks the size of its character array against the required size of the new string. If the required size is larger than the actual array, a new array is created and the old array is copied into it. Thus, whenever its capacity is exceeded, the StringBuffer not only creates a new array object but also incurs the overhead of copying every character of the original array.

This overhead bites when the default constructor is used: in that case the character array defaults to a measly 16 characters. The most common scenario in application coding is building SQL strings. SQL strings will assuredly exceed 16 characters and will constantly cause new character arrays to be created. Even the StringBuffer constructor that takes a String only initializes the character array to 16 characters more than the length of the String argument.

Hence, the best way to be sure that a StringBuffer benefits the performance of your application is to seed it with a large enough initial capacity, based on your requirement, so that it needs to create its character array only once.
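
As a rough sketch for the SQL case (table and column names are made up, and 256 is just a guess at the final statement length):

// Seeding the StringBuffer so its 256-character array is created only once,
// assuming the finished statement stays under that size.
StringBuffer sql = new StringBuffer(256);
sql.append("SELECT cust_id, cust_name, cust_status ")
   .append("FROM customer ")
   .append("WHERE cust_status = ? ")
   .append("ORDER BY cust_name");
String statement = sql.toString();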