Developer Product Briefs

Efficient MIDP for Symbian-Based Devices

Developing Java apps for mobile devices presents very different execution environment challenges from desktop development. Discover best practices for efficient MIDP development on Symbian devices.

Apply best practices to overcome execution environment challenges when developing mobile Java applications.
by Martin de Jode

April 17, 2006

Writing code for mobile phones presents challenges you won't face when targeting the desktop or server: constrained memory, limited processor speed, restricted screen real estate, different user input paradigms, and so on. So when developing mobile applications, concentrate on techniques that cope with the limitations of the execution environment, above all the constrained memory and CPU cycles of mobile phones.

The exact hardware resources obviously vary from phone to phone, with high-end smartphones having more resources than mid- or low-range phones, but ballpark figures would be a processor clocking in at the order of 100 MHz (compared to GHz on the desktop) and RAM measured in terms of a few MB (rather than hundreds of MB for a PC).

The focus here is on efficient MIDP rather than optimized MIDP because many of the tips introduced in this discussion are concerned with efficient coding style rather than genuine optimizations. As such they should be adopted as standard practices, with the additional benefit of improved performance.

Various platforms were used to evaluate the practices recommended here: Sun's Wireless Toolkit 2.2—running Sun's K Virtual Machine (KVM)—on Microsoft Windows 2000; a Sony Ericsson P800, which runs Sun's KVM on Symbian OS; a Nokia 6600, which runs Sun's CLDC HotSpot Implementation on Symbian OS; and a Nokia 6620, a non-Symbian OS Series 40 handset (details not available), representing midtier phones. (See "Evolving the Symbian OS" for more information about key new features in Symbian OS version 9.)

A key part of any optimization is identifying the performance bottlenecks. The Wireless Toolkit provides various profiling aids that allow you to trace class loading, method calls, memory usage, and garbage collection, among other things. The toolkit can be useful for identifying potential bottlenecks, but there is no substitute for on-target profiling, which generally has to be done by hand by bracketing potential trouble spots with methods such as System.currentTimeMillis() and Runtime.freeMemory(). Note that the virtual machines (VMs) running on Symbian OS support a dynamic heap, so Runtime.freeMemory() might give unexpected results if the VM has just made an extra allocation of memory to the heap.
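
Such hand-rolled bracketing can be sketched as follows; the workload shown here is just an invented stand-in for a suspected trouble spot, and the exact numbers printed will of course vary by device:

```java
// Minimal sketch of on-target profiling by hand: bracket a suspected
// hot spot with currentTimeMillis() and freeMemory() calls.
public class TimingBracket {
    // Returns the elapsed milliseconds for the bracketed workload.
    public static long timeWorkload() {
        Runtime rt = Runtime.getRuntime();
        long memBefore = rt.freeMemory();
        long start = System.currentTimeMillis();

        // ... suspected trouble spot (stand-in workload) ...
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 10000; i++) {
            sb.append(i);
        }

        long elapsed = System.currentTimeMillis() - start;
        // On a dynamic-heap VM (as on Symbian OS) this delta can even
        // be negative if the VM grew the heap mid-measurement.
        long memDelta = memBefore - rt.freeMemory();
        System.out.println("elapsed ms: " + elapsed
            + ", approx bytes allocated: " + memDelta);
        return elapsed;
    }

    public static void main(String[] args) {
        timeWorkload();
    }
}
```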

Accessing Attributes
Let's get started by looking at field access. Generally, it is accepted wisdom that accessing an object's field directly is more efficient than using access methods (getters). In other words, you would use this code:

String firstName = person.firstName;

rather than this code:

String firstName = person.getFirstName();

On the Wireless Toolkit, tests showed this preference to be the case, with direct field access being about 10 times quicker. However, running tests on phones showed different results. On the Nokia 6620 times were identical for direct field access compared to using an access method. For the KVM-based Sony Ericsson P800, using an access method was about 20 percent slower than direct field access, whereas for the CLDC HI-based Nokia 6600, using an access method took about double the time compared to direct field access (although in terms of absolute time it was the fastest phone in the sample group).

In all cases, accessing object fields using access methods occurred on sub-microsecond time scales. So the message is: Don't abandon good programming principles by making all object fields public, because with modern optimized VMs you will gain little to nothing.
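
A minimal sketch of the two access styles under comparison; the Person class and its member names are invented for illustration, not part of any MIDP API:

```java
// Compares direct field access with a getter. On modern optimizing
// VMs the getter is typically inlined, so keep fields private and
// measure before abandoning encapsulation.
public class Person {
    public String firstName;          // exposed only for the comparison
    private String encapsulatedName;  // the well-encapsulated version

    public Person(String name) {
        this.firstName = name;
        this.encapsulatedName = name;
    }

    public String getFirstName() {
        return encapsulatedName;
    }
}
```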

Object creation and reuse. Working with desktop Java, we sometimes become rather blasé about object creation, knowing that the garbage collector will spare our blushes. In the MIDP world it pays to proceed with more care. Object creation consumes memory and processor cycles and leads to object destruction through garbage collection, which further impacts performance. Look for ways to reuse existing objects rather than create new ones. Consider this trivial example:

public void commandAction(Command command, Displayable d) {
  if (command.getLabel().equals("Exit")) {
    notifyDestroyed();
  } else if (command.getLabel().equals("Display canvas")) {
    MyCanvas myCanvas = new MyCanvas();
    display.setCurrent(myCanvas);
  }
}

Each time the user selects "Display canvas" a new object of type MyCanvas is created. The object might only have local scope, but in Java all nonprimitive types are created on the heap and the heap memory allocated is reclaimed only by garbage collection. In the previous example, the myCanvas object becomes eligible for garbage collection as soon as it is no longer the current displayable canvas; however, it might be some time before the garbage collector runs and the memory is reclaimed. Furthermore, as the garbage collector runs as a background system thread, its activity will impact the performance of the MIDlet. We can avoid this unnecessary overhead by making myCanvas an instance variable and reusing it:

public void commandAction(Command command, Displayable d) {
  if (command.getLabel().equals("Exit")) {
    notifyDestroyed();
  } else if (command.getLabel().equals("Display canvas")) {
    if (myCanvas == null) {
      myCanvas = new MyCanvas();
    }
    display.setCurrent(myCanvas);
  }
}

Another area worth paying attention to is ensuring objects are made eligible for garbage collection when they are no longer required by the application. Objects are garbage collected when they are no longer reachable (directly or indirectly through other reachable objects). The set of reachable objects includes references on the stack and instance and static references in loaded classes. It's quite easy to hold on to references to objects that are no longer required and prevent their garbage collection, and therefore the freeing up of their associated memory for the duration of the application. For example, consider this code to display a splash screen while the program is initialized:

public void startApp() {
  new Thread() {
    public void run() {
      display.setCurrent(splashCanvas);
      try {
        sleep(5000);
      } catch (InterruptedException e) {}
      display.setCurrent(myForm);
    }
  }.start();
  init();
}

The splashCanvas, which might be quite a large object (containing an image), is only required when the application starts, and therefore its memory should be reclaimed once it has done its job. However, in the previous example the splashCanvas variable is an instance variable and will not be reclaimed by the garbage collector while its owner is reachable, which in the case of a MIDlet object will be the duration of the program.

The solution is simple. Once the splashCanvas has done its job, set the reference to it to null, casting the object adrift and making it eligible for garbage collection. So our code becomes:

public void startApp() {
  new Thread() {
    public void run() {
      display.setCurrent(splashCanvas);
      try {
        sleep(5000);
      } catch (InterruptedException e) {}
      display.setCurrent(myForm);
      splashCanvas = null;
    }
  }.start();
  init();
}

Some care must be taken when using object references of local scope. When the reference goes out of scope, it does not follow that the object it referenced is eligible for garbage collection. Consider this code example:

public void createGirlfriend(...) {
  Girlfriend girlfriend = new Girlfriend(...);
  geek.setGirlfriend(girlfriend);
}

After executing this method, the object referenced by geek now holds a reference to the object referenced by girlfriend (before girlfriend went out of scope). (Recall that in Java all arguments are passed by value, so in the case of reference variables the argument passed is a copy of the reference.) If we want to free up the resources associated with the (object referenced by) girlfriend, we could set it to null:

geek.setGirlfriend(null);

This, of course, leaves the geek without a girlfriend; alternatively, we could replace her with another:

geek.setGirlfriend(new Girlfriend(...));

In either case, the resources associated with the former girlfriend are eligible for garbage collection.

Working With Strings
Now let's look at efficient string handling with string literals, string concatenation, and the StringBuffer memory trap. Starting at the beginning, declaring String literals like this:

String s1 = "Hello";

is preferred to declaring this way:

String s1 = new String("Hello");

The Java Virtual Machine (JVM) maintains a pool of String literals. In the case of String s1 = "Hello", if an identical string literal already exists in the literal pool, s1 is assigned a reference to the existing String object (rather than creating a new String object). In the second case, a new String object will be created regardless. There are obvious advantages to the former approach in terms of preserving resources associated with object creation. However, another plus point is that in the case of unique literals within the pool, it is possible to use the more efficient == operator, rather than the equals() method. The comparison:

String s1 = "Hello";
...
String s3 = "Hello";
...
if (s3 == s1) {
  ...
}

will return true because s1 and s3 reference the same object. However, this comparison:

String s1 = new String("Hello");
...
String s3 = new String("Hello");
...
if (s3 == s1) {
  ...
}

is not guaranteed to be true because s1 and s3 might refer to different, albeit identical, objects. There is, however, a way to force uniqueness within the pool using the String intern() method supported from CLDC 1.1. For example, the code:

String s1 = new String("Hello").intern();
...
String s3 = new String("Hello").intern();
...
if (s3 == s1) {
  ...
}

now allows the strings to be compared with the == operator. If your application makes extensive use of string comparisons, it might be worth interning your strings to take advantage of the performance advantage offered by the == operator.
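
The contrast can be sketched as a pair of small helpers (the class and method names are invented for illustration); intern() is guaranteed to return the same pooled object for equal contents, whereas new String() always creates a fresh object:

```java
// Demonstrates why == works after intern() but not before.
public class InternDemo {
    public static boolean sameObjectAfterIntern() {
        String s1 = new String("Hello").intern();
        String s3 = new String("Hello").intern();
        return s1 == s3;   // both reference the pooled literal
    }

    public static boolean sameObjectWithoutIntern() {
        String s1 = new String("Hello");
        String s3 = new String("Hello");
        return s1 == s3;   // two distinct heap objects
    }
}
```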

String concatenation. Now let's take a look at concatenating strings. In Java, strings are immutable; that is, once it's created, a String object cannot be changed. Concatenation, then, is doing more work than might first appear, and it is important to understand it. In Java you have three ways to concatenate strings: the + operator, using the StringBuffer class, and the concat() method of the String class. This code example illustrates their use:

String s1 = "Hello";
String s2 = "World!";
...
String s3 = s1 + s2;
String s4 = new StringBuffer(s1).append(s2).toString();
String s5 = s1.concat(s2);

In the case of the + operator, the compiler effectively expands the source code to:

String s3 = new StringBuffer().append(s1).append(s2).toString();

In this trivial case, in terms of efficiency, there is nothing to choose between using the + operator and explicitly using a StringBuffer. The concat() method works differently, combining the underlying character arrays of each String into a new character array, and then creating a new String from it. Using the Wireless Toolkit, there is not much to choose between the + operator and the concat() method in terms of performance; but on all the phones under test, the concat() method is about twice as fast as the + operator.
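
The three routes can be placed side by side as small wrapper methods (the wrapper names are invented); all three produce equal results, and only their cost profiles differ:

```java
// The three ways to concatenate two Strings in CLDC/MIDP Java.
public class ConcatDemo {
    public static String viaPlus(String a, String b) {
        return a + b;                 // compiler expands to a StringBuffer
    }
    public static String viaBuffer(String a, String b) {
        return new StringBuffer(a).append(b).toString();
    }
    public static String viaConcat(String a, String b) {
        return a.concat(b);           // merges the char arrays directly
    }
}
```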

There are cases where the use of StringBuffer is much more efficient than concatenation. Take a look at these two code paragraphs:

String text = "";
int value;
while(true){
  value = is.read();
  if(value == -1) break;
  text = text + (char)value;
}

StringBuffer buffer = new StringBuffer(256);
int value;
while(true){
  value = is.read();
  if(value == -1) break;
  buffer.append( (char)value );
}
String output = buffer.toString();

The latter code paragraph is appreciably more efficient, and in a tight loop it can be orders of magnitude faster. In the first example the line:

text = text + (char)value;

is expanded by the compiler to the wasteful:

text = new StringBuffer().append(text).append((char)value).toString();

The rule is, then, don't use an immutable String when you really want a StringBuffer. Note that the StringBuilder class, introduced in the JDK 1.5, is not available on CLDC/MIDP.
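
To make the comparison self-contained, the two loops above can be recast to read from a char array instead of an InputStream; this is a sketch for illustration, not the original code. Both variants yield the same String, but the first is quadratic (each + copies the whole accumulated text) while the second is linear:

```java
// Accumulating characters: String concatenation vs. StringBuffer.
public class BuildDemo {
    public static String viaConcat(char[] input) {
        String text = "";
        for (int i = 0; i < input.length; i++) {
            text = text + input[i];   // allocates a new String each pass
        }
        return text;
    }

    public static String viaBuffer(char[] input) {
        StringBuffer buffer = new StringBuffer(input.length);
        for (int i = 0; i < input.length; i++) {
            buffer.append(input[i]);  // amortized O(1) per character
        }
        return buffer.toString();
    }
}
```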

Wasting Memory
There is one fairly subtle issue that you should be aware of when using StringBuffer. When creating a new String from a StringBuffer, for efficiency reasons the new String shares the underlying storage (a character array char[]) of the StringBuffer. If the StringBuffer is modified subsequently in such a way that its underlying storage becomes inconsistent, it replaces the original, shared storage with a new char array, in effect passing ownership of the original storage to the String object (because it now holds the only reference to that char[]). The memory issue arises if the capacity of the StringBuffer from which the String was created is significantly larger than the length of the String. Consider an example:

String[] array = new String[10];
StringBuffer buffer = 
  new StringBuffer(1024); 
for(int i = 0; i < array.length; 
  i++){
  buffer.insert(0, "iteration " + 
    i);
  array[i] = buffer.toString();
}

On the first iteration, a new String is created that holds a reference to the underlying StringBuffer storage and the first and last indices of the elements that hold the relevant characters of the String (under the covers a flag is set on the StringBuffer object to indicate the storage is shared). On the next iteration, to avoid having the insert statement writing over this shared storage, new storage (a new character array) is created containing the new data and a copy of the previous data. When the new String is created, it holds a reference to the new storage.

By the time the loop is finished, we have nine strings, each having sole ownership of 1K of storage to represent a String fewer than 100 characters long! Note, if we had used the append() method instead of insert(), we would avoid this memory wastage because the new data would be added onto the end of the existing data, hence the storage referenced by the original String remains valid (or more correctly, the indices to the storage remain valid).

Efficient Looping
Now let's turn to some tricks that might help speed up looping. Here are a couple of code samples:

for(int i = 0; i < vector.size(); i++){
  ...
}

for(int i = vector.size()-1; i >= 0; i--){
  ...
}

In the first example, the variable i is initialized before the loop starts, and then prior to each iteration the variable is compared to the size of the vector, which involves a method call. A simple optimization would involve taking the method call out of the loop by creating a local variable:

int vectorSize = vector.size();

and comparing the loop index variable to it. This assignment, of course, involves an extra line of source. Alternatively, we can adopt the second approach of decrementing the loop rather than incrementing it, which involves two optimizations: 1) the size() method call is invoked only once, at the loop initialization (rather than prior to every iteration), and 2) we take advantage of the fact that comparing an integer to zero is generally faster than any other comparison.

The Java virtual machine has dedicated opcodes for integer comparisons against zero. So i >= 0 amounts to popping a single value off the stack and comparing it to zero, whereas i < vectorSize involves popping two values off the stack and comparing them to each other. In a tight loop, decrementing the index can be significantly quicker.
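
A sketch of the two idioms over a Vector (the summing workload is invented for illustration); both compute the same result, and only the comparison cost per iteration differs:

```java
import java.util.Vector;

// Incrementing vs. decrementing loop over a Vector of Integers.
public class LoopDemo {
    public static int sumForward(Vector v) {
        int sum = 0;
        int vectorSize = v.size();            // size() hoisted out of the loop
        for (int i = 0; i < vectorSize; i++) {
            sum += ((Integer) v.elementAt(i)).intValue();
        }
        return sum;
    }

    public static int sumBackward(Vector v) {
        int sum = 0;
        for (int i = v.size() - 1; i >= 0; i--) {  // compare against zero
            sum += ((Integer) v.elementAt(i)).intValue();
        }
        return sum;
    }
}
```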

A more aggressive way of reducing overhead in looping is unrolling the loop. A typical example is where this code:

for(int i = 0; i < 100; i++){
  foo(i);
}

becomes this code:

for(int i = 0; i < 100; i += 5){
  foo(i);
  foo(i+1);
  foo(i+2);
  foo(i+3);
  foo(i+4);
}

Obviously unrolling the loop is worthwhile only for tight loops where the loop overhead (increment and test) is significant compared to the operations performed in the loop body. The downside of unrolling the loop, apart from making your code less transparent, is the increase in code size (and, hence, JAR size).
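
One detail the example glosses over is the remainder: when the trip count is not an exact multiple of the unroll factor, the leftover iterations must be handled separately or the unrolled loop computes the wrong answer. A sketch with an invented summing body:

```java
// Loop unrolling by a factor of 5, with the remainder handled
// explicitly so any trip count gives the same result as the plain loop.
public class UnrollDemo {
    public static int plain(int n) {
        int acc = 0;
        for (int i = 0; i < n; i++) acc += i;
        return acc;
    }

    public static int unrolled(int n) {
        int acc = 0;
        int i = 0;
        int limit = n - (n % 5);
        for (; i < limit; i += 5) {       // unrolled main body
            acc += i;
            acc += i + 1;
            acc += i + 2;
            acc += i + 3;
            acc += i + 4;
        }
        for (; i < n; i++) acc += i;      // leftover iterations
        return acc;
    }
}
```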

Graphics
In many applications, particularly games, the most significant performance bottleneck is rendering graphics. Here, we'll look at a few general tips for improving graphics performance. For a more detailed review, take a look at the many specialist books and articles on J2ME games programming—for example, "J2ME Game Optimization Secrets" by Mike Shivas (see Resources).

Games typically use a dedicated game loop thread that continually responds to user input, updates the game state, and repaints the graphics. Here's an example:

public void run(){
  ...
  while (running) {
    setUpNextFrame();       // update game logic
    repaint();
    try {
      Thread.sleep(SLEEP);  // pause to allow user input
    } catch (InterruptedException e) {}
  }
}

public void paint(Graphics g) {
  g.setFont(font);
  g.drawImage(background, 0, 0, g.TOP|g.LEFT);
  g.drawImage(alien, alienX, getHeight()/2, g.VCENTER|g.HCENTER);
  ...
  g.setColor(255, 255, 0);
  if(isLevelOver) {
    g.drawString(LEVEL_OVER, getWidth()/2, getHeight()/3, g.TOP|g.HCENTER);
  }
}

Normally the bottleneck in the game loop will be the rendering of the graphics through the paint() method. So it's worth looking at optimizing the implementation of the paint() method by applying the lessons already outlined here, particularly looping.

In the previous code example, we could cache the height and width of the canvas in instance variables and take the getHeight() and getWidth() calls out of paint(). Probably the single biggest optimization available is setting appropriate clip regions so that only the portions of the display that have changed are repainted: either set a clip region with the Graphics setClip() method, or specify the region to be redrawn in the Canvas repaint() method.

Using a single drawRect() call is preferable to using four calls to drawLine(), because these methods usually map to underlying native implementations. If your application uses many calls to drawing primitives (drawLine(), drawRect(), and so on) to draw a background, for example, consider instead creating an image to represent the background and rendering it with drawImage(). Although this will increase the JAR size and memory requirements of your application, it may well substantially reduce the time taken to render the background.

Another technique used often to speed up graphics rendering is double buffering, where the graphics are drawn to an off-screen buffer, which at the appropriate time is then rendered to the screen using the paint() method. Symbian's implementation of the MIDP Canvas class uses double buffering anyway, so explicitly coding an off-screen buffer in your Java application is wasteful if you are targeting Symbian OS phones. For non-Symbian OS phones, using double buffering might be advantageous. To find out whether the implementation is double buffered or not, use the isDoubleBuffered() method.

In the prior sample code, the game-loop thread pauses for an interval before the next frame is generated, so that user input can be processed and the MIDlet remains responsive. Phones supporting the MIDP 2.0 API offer an alternative: the callSerially(Runnable r) method of the Display class. The callSerially() method takes a Runnable object and invokes its run() method after the last repaint, serialized with the event stream. Using callSerially() gives a serial sequence of repaint(), event notification, and then run(), ensuring that user input is received and processed before the next repaint. This speeds up the game loop by doing away with the need for an explicit call to Thread.sleep() or Object.wait(). Here is some code using the callSerially() method:

public void run(){
  setUpNextFrame();
  repaint();
  display.callSerially(this);
}

Note the recursive nature of the use of callSerially() in the previous code example, where the run() method is invoked implicitly by callSerially(this).

Finally, having covered various methods to speed up your graphics, it is worth noting that motion appears smooth to the human eye at around 25 frames per second, so don't waste time trying to achieve frame rates much faster than that.

Switch Statements
For nontrivial case discrimination, opt for switch statements over the evaluation of conditional expressions. For example, this statement:

switch(test)
{
  case CASE1:
    ...
    break;
  case CASE2:
    ...
    break;

  ...

  case CASEN:
    ...
    break;
}

is preferable over this expression:

if(test == CASE1){
  ...
}
else if (test == CASE2){
  ...
}

...

else if (test == CASEN){
  ...
}

When using integer case values, try to ensure consecutive values in a series (for example, 1, 2, 3, 4, ...), because doing so allows the compiler to emit the more efficient tableswitch bytecode (a constant-time jump table) rather than lookupswitch (a search through a sorted key table).
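
A minimal illustration (the case labels and result strings are invented); because the case values 1 through 4 are consecutive, the compiler can emit a tableswitch here:

```java
// Consecutive case values allow the compiler to emit tableswitch.
public class SwitchDemo {
    public static String classify(int test) {
        switch (test) {
            case 1:  return "one";
            case 2:  return "two";
            case 3:  return "three";
            case 4:  return "four";
            default: return "other";
        }
    }
}
```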

Copying arrays. If you want to copy the contents of one array to another, it is far more efficient to use System.arraycopy() than to do it by hand in a for loop. For example, use this code:

System.arraycopy(array, 0, destination, 0, array.length);

instead of this code:

for(int i = 0; i < array.length; i++){
  destination[i] = array[i];
}

because the arraycopy() method is implemented natively.
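
The two approaches can be wrapped up for comparison (the helper names are invented); both produce identical copies, but arraycopy() runs natively:

```java
// System.arraycopy() vs. a hand-rolled copy loop.
public class CopyDemo {
    public static int[] viaArraycopy(int[] src) {
        int[] dst = new int[src.length];
        System.arraycopy(src, 0, dst, 0, src.length);
        return dst;
    }

    public static int[] viaLoop(int[] src) {
        int[] dst = new int[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = src[i];
        }
        return dst;
    }
}
```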

Reducing JAR Size
Unlike Symbian OS phones, which place no limits on JAR file size, many low-tier to midtier phones impose a maximum size on JAR files that is typically 64K to 128K. Even if you are developing applications using a Symbian OS phone, portability issues might exert downward pressure on your JAR file size.

The first resort in JAR file size reduction is to obfuscate your application. Part of the obfuscation process renames packages, classes, and variables to cryptic shorthand versions, which can lead to a significant reduction in the JAR file size.
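
ProGuard was a popular obfuscator for MIDP work; a hypothetical configuration might look like the following sketch. The JAR file names and the MIDP library JAR are placeholders to adjust for your own project and SDK; the -keep rule is essential so the obfuscator does not rename your MIDlet entry class, which the application descriptor references by name.

```
# Hypothetical ProGuard configuration for a MIDlet suite (paths and
# library JAR name are placeholders).
-injars      myapp.jar
-outjars     myapp_obf.jar
-libraryjars midpapi20.jar
-overloadaggressively
-keep public class * extends javax.microedition.midlet.MIDlet
```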

Apart from class files, images also bulk out JAR files. Be careful to include only images that the application uses, and ensure that their color depth doesn't exceed the supported color depth of the target phone.

More aggressive reduction in JAR file size involves refactoring code and entails trade-offs between design and code size. For example, using the default package name, avoiding anonymous classes, not overdoing inheritance, and limiting interfaces can all lead to a reduction in JAR size, possibly at the expense of maintainability.

Know when to optimize. Optimization often involves making trade-offs. Typically, the trade-offs tend to be speed vs. runtime memory usage, speed vs. code size (for example, JAR size), or speed or size vs. intelligibility or maintainability. Having said that, many of the tips presented here are really about good coding style, rather than genuine optimizations. Object reuse, correct use of StringBuffers, and using arraycopy(), for example, are essentially free wins in that they shouldn't really compromise comprehension or maintainability of your application source code or bloat runtime memory.

Assuming that you do need to apply more aggressive techniques, you must consider the question: when and how do I optimize my application? The answer is late or not at all! According to Donald Knuth, scientist and professor emeritus at Stanford University and author of the multivolume The Art of Computer Programming, "premature optimization is the root of all evil."

Concentrate on good design and coding practices, rather than trying to optimize every line of code. Optimization often adds complexity and compromises intelligibility and maintainability, and it's time-consuming. Don't optimize if you don't need to.

It is only when your application is more or less complete that you will be in a position to decide whether and where optimization needs to be applied. For most applications it is generally accepted that an 80:20 or even 90:10 rule applies, where 90 percent of the time is spent executing 10 percent of the code.

The skill in optimizing is to identify the bottlenecks and optimize only these. You can use the Wireless Toolkit profiling tools to get a feel for potential bottlenecks, but this procedure must be followed up with testing on representative target phones because as we have seen, the behavior of real phones can differ from PC-based emulators. Where platform- or phone-specific behavior was observed for a particular optimization, we have drawn attention to it here; otherwise, the optimization was generally valid for all the products used.

Optimizing applications is a large topic, and in this discussion we have provided only a short overview of a few key tips and tricks. See Resources to explore the topic of optimization in more depth.

About the Author
Martin de Jode is a developer consultant at Symbian and lead author of Programming Java 2 Micro Edition on Symbian OS (John Wiley & Sons, 2004).
