Sunday, December 23, 2012

Threading, loopers and messaging queues in Android

Android provides an efficient way to allow communication between different threads.
One thread can communicate with another by creating a Message object. Usually a Message is not created with the normal constructor; instead, one of the static factory methods provided by the class is used, for instance obtain(Handler h, int what, Object data).

One way to send a message to another thread is to create a Messenger object and invoke the send method with the Message to send.
But how can we tell the messenger which thread will receive the message? In the constructor we specify the handler bound to the target thread, i.e. the thread that will receive the message. A Handler is an object that, as its name suggests, binds itself at creation time to the thread that is creating it.
Consequently, the receiver thread creates the handler, and the sender thread uses it, through a Messenger object, when it calls the send() method.

This is the basic behavior.
But usually we want at least the receiver thread to keep running indefinitely, or at least as long as we need it.
We can use a Looper object to transform a simple "one-shot" thread into a thread that runs indefinitely, or more precisely until the quit() method is called to exit the loop.
The Looper class "enriches" the behavior of a thread in two ways:

  • transforms a thread into a long-running one
  • creates internally a message queue that provides serialized handling of the messages received. A received message is handled in the handleMessage() method of Handler.Callback.
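The original snippet is not included here; a minimal sketch of such a looper thread, based on the Android Looper/Handler APIs (class and field names are illustrative, not from the demo project), could look like this:

```java
// Illustrative looper thread: Looper.prepare() binds a message queue to this
// thread, the Handler created in between is bound to that queue, and
// Looper.loop() keeps dispatching messages until quit() is called.
class LooperThread extends Thread {
    public Handler handler;

    @Override
    public void run() {
        Looper.prepare();                 // create the queue for this thread

        handler = new Handler() {         // bound to this thread's Looper
            @Override
            public void handleMessage(Message msg) {
                // messages are processed here, one at a time
            }
        };

        Looper.loop();                    // enter the message loop
    }
}
```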
In the script above, the code between Looper.prepare() and Looper.loop() creates a handler that will handle the messages received by the current thread. If a message arrives while the handler is still processing a previous one, it is automatically enqueued and executed as soon as the previous message has been completely processed.

The Handler subclass manages the incoming message. In the example above, it just prints the data received and sleeps for 2 seconds.

I have created a short demo project consisting of a main activity and a looper thread. The looper thread processes in its queue the messages sent by the main thread (UI thread) of MainActivity. A message is sent each time the send button is pressed by the user. I have inserted a sleep of 2 seconds in the handleMessage() method of the handler bound to the receiver thread, to show that if a user quickly clicks the send button many times, the message queue handles the incoming messages sequentially, one after the other.

The demo project can be downloaded from here:

Please note that I had to use the wait() and notify() methods because the code that transforms the thread into a looper does not complete immediately, so before getting the handler we must be sure that the looper has been created.
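The same synchronization idea can be shown in plain Java: the worker thread publishes its "handler" under a lock and notifies, while the caller waits until it is available (the class and method names here are illustrative, not from the demo project):

```java
// Pure-Java sketch of the wait()/notify() pattern described above:
// getHandler() blocks until the worker thread has finished its setup.
public class HandlerSync {
    private final Object lock = new Object();
    private Object handler;                 // stands in for the real Handler

    public void startWorker() {
        new Thread(new Runnable() {
            @Override
            public void run() {
                synchronized (lock) {
                    // in the real code, Looper.prepare() and the Handler
                    // creation would happen here
                    handler = new Object();
                    lock.notify();          // wake up waiting callers
                }
            }
        }).start();
    }

    public Object getHandler() {
        synchronized (lock) {
            while (handler == null) {       // guard against spurious wakeups
                try {
                    lock.wait();
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
            }
            return handler;
        }
    }
}
```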

Saturday, October 13, 2012

Remove views with nice animations

On iOS it is very simple to create effective animations on UI components.

For an app I'm currently working on, I wanted to create an animation on a set of views, vertically aligned. When the user taps a button inside a view, the app should start an animation that translates and rotates the view offscreen while fading it out. At the same time, all the views below it should react by translating up to occupy the space left by the removed view.

This is the initial scenario:

To perform this effect, we need two different animations, one on the view to delete, and the other on the views below it.

This is the first one:

To translate all the views below the one that we want to delete, we first need to find them:

And then we can apply the translation to all of them:

The complete sample project can be found here:

Monday, July 30, 2012

iOS view hierarchy

Sometimes it is useful to find the view hierarchy of a certain graphical component.
On iOS, here's a small code snippet to do this:

Thursday, July 12, 2012

Android fragments for beginners

Fragments were introduced in Android with the Honeycomb milestone (aka Android 3.0). The main reason was to provide a flexible and powerful way to deal with different kinds of screens (i.e. tablet and phone displays).

An activity layout is composed of one or more view groups and, ultimately, of simple views. Every view group is also called a layout. An activity has one root layout, usually set with the setContentView(int resId) call.

If a layout is composed of many sublayouts/views, having all the UI code and the application logic code for the activity inside one class might quickly increase the complexity and lower the maintainability of the component.

A solution I've used in the past was to subclass the layout class and create a custom layout where I put all the UI code related to that layout. This approach has a drawback: a view group is not an activity. You have to put the app logic code inside the activity and consequently provide some sort of communication channel between the view and the activity when, for instance, you want to react to user actions (a tap on a button, the typing of text). Components become tightly coupled and difficult to maintain and change.

With Honeycomb (but also in previous releases of the platform, thanks to the compatibility package), you can use fragments. These are like layouts, but they have their own lifecycle, they were created with the aim of containing both UI and app logic code, and they provide an efficient way to deal with layouts that must be placed in different positions depending on the device the app is running on, or on the orientation of the device itself (portrait, landscape).

I've created a simple test project that shows how a layout can be built from different fragments that place themselves differently based on the orientation of the device.

The activity provides a main layout for portrait and another for landscape. Fragments are placed in the LinearLayout in a different way depending on the orientation of the device:

The app logic that manages the fragment is inside the fragment itself:
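That snippet isn't reproduced here; a minimal sketch of a fragment that keeps its own state across orientation changes (the class and layout names are illustrative) could be:

```java
// Hypothetical fragment: it inflates its own layout and saves/restores a
// counter value across configuration changes.
public class CounterFragment extends Fragment {
    private int counter;

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        if (savedInstanceState != null) {
            counter = savedInstanceState.getInt("counter");   // restore state
        }
        return inflater.inflate(R.layout.counter_fragment, container, false);
    }

    @Override
    public void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putInt("counter", counter);                  // save state
    }
}
```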

As you can see, the fragment has framework methods that allow it to save its current state. In the sample, a simple counter value is saved, and if you change the device orientation its value isn't lost. Note also that all the application logic pertaining to the fragment is inside the fragment itself. The activity can handle other, higher-level tasks, such as repositioning or moving fragments, or orchestrating tasks affecting different portions of the UI.

The sample code can be found here:

Saturday, June 2, 2012

Audio mix and record in Android

iOS offers, among its frameworks, many interesting features that allow you to create audio tracks by simply mixing multiple tracks together. You can use Audio Unit and its methods, as described here:

But what if you need a similar result on Android? Android doesn't offer such a feature in its audio framework. So I've spent a couple of days on Google Groups and Stack Overflow, reading the unanswered questions of Android devs searching for a similar functionality on the Google mobile platform, or one developed and released by third-party contributors and external devs.
It appears there is nothing available.
So I've studied the problem and the tools I had to solve it. First, let's see what possibilities the platform offers to play files.
Android audio framework consists of these main classes for audio playback:

  • MediaPlayer: useful to play compressed sources (m4a, mp3...) and uncompressed but formatted ones (wav). Can't play multiple sounds at the same time. [HIGH LEVEL methods]
  • SoundPool: can be used to play many raw sounds at the same time.
  • AudioTrack: can be used like SoundPool (raw sounds), but you need to use threads to play many sounds at the same time. [LOW LEVEL methods]
I've found that AudioTrack works fine to play uncompressed raw data, and if you want to play multiple sounds at the same time, you can create different threads and start the playback in an asynchronous fashion.
Unfortunately this is not always precise: sometimes you can experience a delay before a certain sound is played, and in such cases the final result is far from acceptable.

Another option is to mix sounds before playing them. This option offers a nice plus: you obtain a mixed sound that is ready to be stored on file. If you mix sounds with SoundPool, for instance, then when you play the result you cannot grab the output and redirect it to a file descriptor instead of to the audio hardware (headphones or speaker).
As mentioned at the beginning, there is no ready solution for such problem. But actually we will see the solution is rather trivial.

Before delving into the details of how 2 sounds can be mixed together, let's see how we can record a sound on Android. The main classes are:

  • MediaRecorder: sister-class of MediaPlayer, can be used to record audio using different codecs (amr, aac). [HIGH LEVEL methods]
  • AudioRecord: sister class of AudioTrack. It records audio in PCM (Pulse Code Modulation) format. This is the uncompressed digital audio format used in CD Audio, and it is very similar to the .wav file format (a .wav file has a 44-byte header before the payload). [LOW LEVEL methods]

AudioRecord offers all the features we want to be able to control: we can specify the frequency, the number of channels (mono or stereo), and the number of bits per sample (8 or 16).
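The original recording function isn't shown here; a sketch of how it could look with AudioRecord (the isRecording flag and the file handling are assumptions of mine) is:

```java
// Illustrative blocking recorder: writes 44.1 kHz mono 16-bit PCM samples as
// little-endian bytes until isRecording becomes false. Run it on a secondary
// thread.
private void recordRawAudio(File outputFile) throws IOException {
    int minBufferSize = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            44100, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, minBufferSize);

    OutputStream out = new BufferedOutputStream(new FileOutputStream(outputFile));
    short[] buffer = new short[minBufferSize / 2];

    recorder.startRecording();
    while (isRecording) {
        int read = recorder.read(buffer, 0, buffer.length);
        for (int i = 0; i < read; i++) {
            out.write(buffer[i] & 0xff);         // low byte first (little-endian)
            out.write((buffer[i] >> 8) & 0xff);  // then the high byte
        }
    }
    recorder.stop();
    recorder.release();
    out.close();
}
```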

In the fragment of code posted above, there is a simple function that can be used to record a 44.1 kHz mono 16-bit PCM file on the external storage. The function is blocking, so it must be run on a secondary thread; it continues to record until the boolean isRecording is set to false (for example when a timeout expires or when the user taps a button).

And now comes the most interesting part: how to mix two sounds together?

Two digital sounds can be mixed easily if the files have the same features (same number of channels, same bits per sample, same frequency). This is the simplest scenario, and it is the only one I'm covering in this post.
In this case every sample is a 16-bit number. In Java a short can be used to represent 16-bit numbers, and in fact AudioRecord and AudioTrack work with arrays of shorts, which simply constitute the samples of our sound.

This is the main function used to mix 3 sounds together:
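Since the function itself isn't included here, below is a sketch of the core mixing loop for three equally sized tracks (the real method also deals with reading and padding the streams; the names are mine):

```java
// Mix three 16-bit PCM tracks of the same length: normalize each sample to a
// float in [-1, 1], sum, attenuate a bit, clip, and convert back to short.
public class Mixer {
    public static short[] mixTracks(short[] a, short[] b, short[] c) {
        short[] output = new short[a.length];
        for (int i = 0; i < a.length; i++) {
            float sample = a[i] / 32768f + b[i] / 32768f + c[i] / 32768f;
            sample *= 0.8f;                      // reduce the volume a bit
            if (sample > 1f) sample = 1f;        // clip to avoid overflow
            if (sample < -1f) sample = -1f;      // ...and underflow
            output[i] = (short) (sample * 32767f);
        }
        return output;
    }
}
```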

There are some complementary methods I'm not posting here because this post is already too long :) but here are some small hints about what they do:

  • createMusicArray reads the stream and returns a list of short objects (the samples)
  • completeStreams normalizes the streams by appending a series of '0' shorts at the end of the shorter files, so that at the end the 3 files all have the same length
  • buildShortArray converts the list into an array of short numbers
  • saveToFile saves to file :)
The key point in the method is that we sum the samples together. We normalize each short to a float in [-1, 1] so we don't have under/overflow issues. At the end we reduce the volume a bit and save the result in the new array. That's it!

Of course this is the simplest scenario; if the samples have different frequencies we would need additional computations. But I think that most of the time we want to mix sounds, we can also control how they are recorded, which reduces the complexity a lot.

Once we have a PCM mixed sound, it can be transformed into a .wav file so that every player can read it. EDIT: as many people have asked me for some more help, below is the code snippet to build a short array from a file containing a raw stream.
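A sketch equivalent to what is described (the original may differ in details; the PCM data is assumed to be little-endian, as produced by the recording code earlier):

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

// Read a raw little-endian 16-bit PCM stream into an array of shorts.
public class PcmReader {
    public static short[] buildShortArray(File rawFile) throws IOException {
        byte[] bytes = new byte[(int) rawFile.length()];
        DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(rawFile)));
        in.readFully(bytes);
        in.close();

        short[] samples = new short[bytes.length / 2];
        for (int i = 0; i < samples.length; i++) {
            int lo = bytes[2 * i] & 0xff;   // low byte first
            int hi = bytes[2 * i + 1];      // high byte carries the sign
            samples[i] = (short) ((hi << 8) | lo);
        }
        return samples;
    }
}
```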

Saturday, May 12, 2012

TDD - Testing AsyncTasks on Android

Testing functionalities of a software system is probably one of the most important things to do if you want to release a reliable application to your clients.

TDD (aka Test Driven Development) is a methodology where you create and develop unit tests before writing a single line of application logic. This strategy allows you to better understand the problem your app is solving, and it gives you a more pragmatic and structured way to build a software component. It forces you to design and think about the solution in detail before coding it. And, most important, when you change something in your code, you can safely run your tests again and get immediate feedback on whether your changes have affected the behavior of the component.

I usually do TDD to test model classes, database classes and networking. I'm not writing functional tests on the UI, but Android offers tools like monkey to test user clicks on your UI as well.

If your code uses AsyncTasks to perform networking operations in a background thread, things become a little trickier.

Normally, in an AsyncTask, you specify the result of the task using onPostExecute.
This call is executed on the main thread, so you can update your UI and notify the user easily.

AsyncTask runs your task in a background thread, but this opens some issues: our tests must be run on the main thread, and we need to wait for the AsyncTask to finish before jumping to the next test or quitting the testing routine.
There is a method called runTestOnUiThread that performs a test case on the main thread. To use such a method, your class must extend InstrumentationTestCase.
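The test snippet isn't included here; its shape, reconstructed from the description that follows (class names mirror the post, while helpers like waitForCompletion are assumptions of mine), would be roughly:

```java
// Sketch of the instrumentation test: the AsyncTask is started on the UI
// thread, then the test blocks until the listener's callbacks have run.
public class NetworkTasksTest extends InstrumentationTestCase {

    public void testGetMedia() throws Throwable {
        final GetMediaListener listener = new GetMediaListener();
        runTestOnUiThread(new Runnable() {
            @Override
            public void run() {
                // the framework expects AsyncTasks to start from the main thread
                new NetworkTasks(listener).getMedia(new Media());
            }
        });
        // hypothetical helper that awaits the listener's CountDownLatch
        listener.waitForCompletion();
    }
}
```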

In the previous code snippet, I'm creating a unit test called testGetMedia whose purpose is to do some networking in an asynchronous fashion, fetching some data from a remote backend.
The tested class is called NetworkTasks and it contains a method called getMedia(Media) similar to this one:
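A sketch of what that method might look like (the constructor taking a listener is an assumption of mine):

```java
// NetworkTasks simply launches the AsyncTask that does the real work.
public class NetworkTasks {
    private final GetMediaListener listener;

    public NetworkTasks(GetMediaListener listener) {
        this.listener = listener;
    }

    public void getMedia(Media media) {
        // doInBackground() performs the HTTP call; onPostExecute() notifies
        // the listener via onSuccess()/onFail()
        new GetAsyncMediaTask(listener).execute(media);
    }
}
```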

GetAsyncMediaTask is a subclass of AsyncTask that implements the doInBackground() and onPostExecute() methods. I'm not showing the code of GetAsyncMediaTask here because it is not really interesting for our purpose. In the doInBackground() method an HTTP call is placed, and it blocks the calling thread until it gets an HttpResponse; inside the onPostExecute() method, I'm using the listener passed to the NetworkTasks object to notify the component that launched it. The component is notified through the onFail and onSuccess callbacks.
In the code snippet the listener is called GetMediaListener.
This covers almost everything except for a few (important!) details: what happens in the GetMediaListener?

The GetMediaListener extends a generic CallbackListener which in turn implements a simple interface with two methods (onFail and onSuccess).
The callbacks are here used to assert that a condition holds or doesn't hold. The listener used in the real code will do some useful stuff on the UI, trigger some notifications and so on.
And here the CountDownLatch object comes into play. It is absolutely mandatory to use this object when you want to test an AsyncTask. In fact, even though we're running the task on the main thread, as soon as the doInBackground method ends, the testGetMedia() method ends too, and there is no way for the listener to be invoked by the system! By using a CountDownLatch, we make the current thread stop and wait until a certain condition is met: here we wait until the callback onPostExecute (which in turn calls onFail or onSuccess) is executed. As soon as it is executed, the latch is decremented with the countDown() method of the superclass and the lock is removed. The lock is also removed automatically after 30 seconds, thanks to the timeout passed to the signal.await() call.
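The latch mechanics can be shown in plain Java, independently of Android (the worker thread below stands in for the onPostExecute callback):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Pure-Java illustration of the pattern: await() blocks the calling thread
// until countDown() is invoked, or until the 30-second timeout expires.
public class LatchExample {
    public static boolean runAndWait() {
        final CountDownLatch signal = new CountDownLatch(1);
        new Thread(new Runnable() {
            @Override
            public void run() {
                // in the real test this happens inside onSuccess()/onFail(),
                // after the assertions have been checked
                signal.countDown();
            }
        }).start();
        try {
            return signal.await(30, TimeUnit.SECONDS);  // true if counted down
        } catch (InterruptedException e) {
            return false;
        }
    }
}
```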

You can get all the code here:

Enjoy async TDD on Android! :)

Wednesday, April 4, 2012

Android: screen densities and sizes. How to calculate them

For me it has always been complex to find out, for each Android device, its density and its screen size. I'll try to explain in this article what I've discovered so far.
It is really very important to find out these 2 parameters, since Android has a way to distinguish which resources to use (both XML layout files and drawables) based on them.

In detail, Android allows you to specify your drawable resources based on the screen density of your phone. The 4 screen densities available are:

  • ldpi - 120 dpi
  • mdpi - 160 dpi
  • hdpi - 240 dpi
  • xhdpi - 320 dpi
Each specifies a different screen density, i.e. a different number of pixels per inch of screen. This value is called dpi, or dots per inch.

For example, an HTC Desire has a density of 240 dpi, i.e. there are 240 pixels per inch of screen.
An LG Optimus One has a density of 160 dpi, i.e. there are 160 pixels per inch of screen. Obviously, if we provide the same icon on these devices, it will appear larger on the LG than on the HTC phone. A way to cope with that is to specify a layout that uses density independent pixels. But that's another story.

Let's focus on the dpi. How can we calculate this value? Let's take the two parameters that we always know about a device: the diagonal in inches and the screen size in pixels.
The HTC Desire has: 800 x 480 pixels and 3.7''.
dpi = sqrt(w^2 + h^2)/d = sqrt(800^2 + 480^2)/3.7 = 252.1

So the screen density of HTC Desire is HDPI.
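The computation is easy to wrap in a small helper, a direct transcription of the formula above:

```java
// dpi = diagonal resolution in pixels / diagonal length in inches
public class Density {
    public static double dpi(int widthPx, int heightPx, double diagonalInches) {
        double diagonalPx = Math.sqrt((double) widthPx * widthPx
                + (double) heightPx * heightPx);
        return diagonalPx / diagonalInches;
    }
}
```

For the HTC Desire, dpi(800, 480, 3.7) gives about 252.1.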

The display size is important because usually we associate different layouts with different screen sizes. A device can have one of 4 screen sizes:

  • small-screen
  • normal-screen
  • large-screen
  • xlarge-screen
On the Android dev website, they tell us that a normal screen is more or less between 3 and 5''. But is there a scientific way to calculate it?
They also tell us that:

  • small-screens must have a screen size of at least 426 x 320 dp = 136320 dp²
  • normal-screens must have a screen size of at least 470 x 320 dp = 150400 dp²
  • large-screens must have a screen size of at least 640 x 480 dp = 307200 dp²
  • xlarge-screens must have a screen size of at least 960 x 720 dp = 691200 dp²
Usually phones are in the small and normal category, while tablets are in the large and xlarge category, although there are some important exceptions to this rule.

How to calculate the display size for a device? Let's take the HTC Desire as example.
The HTC Desire has: 800 x 480 pixels, a 3.7'' diagonal, and is an HDPI device with a scale of 1.57, so each virtual pixel counts as 1.57 physical pixels. This scale can be determined as 252/160. A device with 160 dpi has a scale of 1 (1 physical pixel counts as 1 virtual pixel); this was the density of the first Android phone.

So: 800/1.57 = 510 dp; 480/1.57 = 305 dp
510 * 305 = 155550 dp², which is higher than 150400 but lower than 307200. So the HTC Desire has a normal screen.

A particular case is given by the recent Galaxy Note. This device has a 1280 x 800, 5.3'' display.
So its dpi is 284.8 and its scale ratio is 284.8/160 = 1.78, which is between 1.5 and 2. Aaagh... So in what screen density folder should we put the drawables? A search on Stack Overflow suggests that the Note is considered an XHDPI screen.
But here's the problem. We said that phones have a small or normal size screen. This is WRONG for the Galaxy Note.
In fact: 1280/1.78 = 719 dp; 800/1.78 = 449 dp
719 * 449 = 322831 dp², which is larger than 307200
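The whole check can be condensed into a small helper (a sketch of the reasoning above; the threshold is the minimum area of the large bucket, 640 x 480 dp):

```java
// Convert physical pixels to dp using the scale (dpi / 160), then compare the
// dp area against the bucket thresholds.
public class ScreenSize {
    public static long dpArea(int widthPx, int heightPx, double dpi) {
        double scale = dpi / 160.0;                 // mdpi is the 1.0 baseline
        long widthDp = Math.round(widthPx / scale);
        long heightDp = Math.round(heightPx / scale);
        return widthDp * heightDp;
    }

    public static boolean isLarge(int widthPx, int heightPx, double dpi) {
        return dpArea(widthPx, heightPx, dpi) >= 307200;  // 640 x 480 dp
    }
}
```

With these helpers, the Galaxy Note's dp area ends up above the large threshold, while the Desire's stays in the normal bucket.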

So if you want to filter your app based on the screen size, be careful. Some phones, like the Galaxy Note, belong to the large-screen device family!

Monday, March 12, 2012

Tables and cell selections: iOS UITableView vs Android ListView

One of the first things every dev learns when starting to develop on a mobile platform is how to present sets of data: a list of shops, an array of products and so on.
Both Android and iOS have a component to manage this kind of data: ListView for the former, and UITableView for the latter.

These classes offer a surprisingly similar interface, and have similar ways to provide a datasource.

Both platforms have a component to show the elements on screen and a datasource (or adapter) that manages the presentation of the table cells, cell reuse, and the order in which data are presented.

iOS has a protocol, UITableViewDataSource, that every class that wants to provide some data to a UITableView must implement. The key method used to provide the tableView cells is

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath

On the other side, the Android datasource is a class that implements the ListAdapter interface (typically by extending BaseAdapter).
This class also has a method that must always be implemented:

public View getView (int position, View convertView, ViewGroup parent) 

In both platforms it is very important to reuse the cells: with iOS 5 the cell recycling is managed by the system, provided the cell is declared in the .xib file as the default subview of the UITableView. It is very important, though, to specify a cell id in code that matches the one declared in the .xib file.

Android uses a system quite similar to the one used by iOS 4.x and lower: you fetch a view by inflating it from an XML file only if the view provided by the framework is null (i.e. there isn't currently a reusable cell).

This is a typical iOS 5.0 implementation:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"CellId"];
    cell.textLabel.text = [_list objectAtIndex:indexPath.row];
    cell.selectionStyle = UITableViewCellSelectionStyleGray;
    return cell;
}

While this one is an Android implementation of getView():

public View getView (int position, View convertView, ViewGroup parent){
    LinearLayout layout;
    if(convertView == null){
        LayoutInflater inflater = (LayoutInflater)context.getSystemService(LAYOUT_INFLATER_SERVICE);
        layout = (LinearLayout)inflater.inflate(R.layout.cell, null);
    } else {
        layout = (LinearLayout)convertView;
    }
    return layout;
}

The check on convertView establishes whether a reusable view is available. If not, the view is inflated from the XML layout file.

Another important issue to face is how to customize the list selector.
On iOS it is a very straightforward procedure: just specify the selection style or, if you want a different color, create a background view with the color you want and assign it to the selectedBackgroundView property.

On Android, a premise is necessary. Every component (buttons, text views, and also cell views) has different states that define how the rendering engine should show it on screen. The states are: pressed, focused, selected, enabled.
Concerning the ListView component, the system provides a way to select cells that is specified by a property called listSelector. The listSelector is an XML file with a set of rules, similar to the rules you specify in a CSS stylesheet, that define how an item is shown in each of its possible states.
I've found it very difficult to customize this behavior. Every time I tried to customize the cell selection appearance, I came across many different issues: sometimes the selection changed the appearance of all the ListView items, some other times the cell divider disappeared from the screen.

In the official sample apps there isn't a single example of how to customize such a property, and watching the World of ListView Google I/O session didn't help me a lot either.

So I came up with a simple yet effective solution: get rid of the listSelector property by imposing a transparent view for every possible state, and by modifying the background of the cell items instead.

This is the selector I've used:

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android" >
<!--    <item android:drawable="@color/test_color" android:state_pressed="true" android:state_selected="true"></item> -->
<!--     <item android:drawable="@color/test_color" android:state_pressed="false" android:state_selected="true"></item> -->
    <item android:drawable="@color/test_color" android:state_pressed="true" android:state_selected="false"></item>
    <item android:drawable="@color/black" android:state_pressed="false" android:state_selected="false"></item>
</selector>

You can also uncomment the first two rows if you want to manage the selected state (it is useful if your device has a D-Pad).

Sunday, March 4, 2012

Eclipse: Java compiler compliance

If you want to avoid build errors on the @Override annotation, you must change the Java compiler compliance.

The steps to do so:

Preferences -> Java -> Compiler -> JDK Compliance

and set it to 1.6

Eclipse and Android: projects build order

When you work on a project, at some point you may include other projects in your workspace and want to use them as dependencies.

The build order is very important and you have to specify it correctly, otherwise the Eclipse IDE will generate compile errors almost every time the project is rebuilt (i.e. every time you open Eclipse or you get the latest version from a repo).

To customize the build order in Eclipse:

General -> Workspace -> Build Order

Saturday, February 25, 2012

NFC, Android, accelerometer and Node.JS can transform your mobile in a brush!

Last Saturday I went to Bologna with two colleagues (@robb_casanova as frontend artisan/dev and @emme_giii as mobile UX guru), and together we participated in a contest called HackReality.

Our intent was to mix some native and web-based technologies to transform a mobile phone into a brush, and a wall into a virtual canvas where a user, with his Android phone, could draw multi-colored strokes.

We used some NFC tags stuck on a paper-made palette:

We used my Galaxy Nexus as a "brush": by tapping the phone on an NFC tag on the palette we changed the paint color. Then we used the accelerometer data to detect accelerations on the X and Y axes.
The data were sent to a server running on top of Node.js. We finally implemented a canvas and used the Processing language to perform the actual drawing.

The source code for the client-side part of project is available at github:

If I have time, in future posts I will explain how the NFC part works.