Thursday, January 24, 2013

Android: Trying to load native library results in a process terminated by signal (11)

Symptoms

You are trying to load your native C/C++ library in an Android application, and when your app calls the System.loadLibrary() function at runtime - for example:

static {
        System.loadLibrary("mynativelib");
}

the application dies without a core dump, and the only messages you see in LogCat look something like the following:

01-23 19:58:08.699: D/dalvikvm(4146): Trying to load lib /data/data/com.example.myapp/lib/libmynativelib.so 0x42341a50
01-23 19:58:08.709: I/ActivityManager(340): Process com.example.myapp (pid 4146) has died.
01-23 19:58:08.709: W/ActivityManager(340): Force removing ActivityRecord{211d37d0 com.example.myapp/.MyActivity}: app died, no saved state
01-23 19:58:08.719: D/Zygote(188): Process 4146 terminated by signal (11)

Causes

Your native library is probably trying to dynamically load another library it depends on.

Solution

Use the GCC readelf for your ABI to dump the dynamic section of your native library and find out which libraries it depends on. For example, if you are compiling for the x86 ABI:


$ android-ndk-r8d/toolchains/x86-4.7/prebuilt/linux-x86/bin/i686-linux-android-readelf -d libmynativelib.so | grep NEEDED
 0x00000001 (NEEDED)                     Shared library: [libgnustl_shared.so]
 0x00000001 (NEEDED)                     Shared library: [libiconv.so]
 0x00000001 (NEEDED)                     Shared library: [libdl.so]
 0x00000001 (NEEDED)                     Shared library: [libstdc++.so]
 0x00000001 (NEEDED)                     Shared library: [libm.so]
 0x00000001 (NEEDED)                     Shared library: [libc.so]


In this example you only need to load libgnustl_shared.so and libiconv.so before your libmynativelib.so to resolve the dependencies and make the System.loadLibrary() call happy (the other standard libraries are already preloaded for you by Android). In your static initializer block you will then include the following lines:


static {
        System.loadLibrary("gnustl_shared");
        System.loadLibrary("iconv");
        System.loadLibrary("mynativelib");
}

This should solve the problem.

Thursday, January 10, 2013

How to port Mozilla SpiderMonkey 1.7 to Android

Another third-party package I needed to cross-compile for Android was Mozilla's SpiderMonkey 1.7 JavaScript engine. I found two issues here:

  1. When configuring SpiderMonkey the makefile tries to build two executables (jscpucfg and jskwgen) and then runs them to generate two configuration header files (jsautocfg.h and jsautokw.h, respectively). The problem when using the Android NDK cross-compiler is that these two executables can only be run on the Android target (for example an ARM processor), while I'm cross-compiling my build from a Linux Ubuntu 12.04 machine with an x86_64 processor architecture. So you get an error saying these files cannot be executed on the host machine.
    I solved this problem by copying the two executables to an Android device (Samsung Galaxy S III) using scp and the SSHDroid application, and generating the two header files there:

    $ jscpucfg > jsautocfg.h
    $ jskwgen > jsautokw.h

    then I copied the two files back to my Ubuntu machine and saved them in the source tree under the config sub-directory. Then I changed the makefile to skip generating these two header files when cross-compiling for Android and take them from the config sub-directory instead.
  2. The SpiderMonkey jsnum.c file generates the following error when cross-compiled for Android:

    js-1.7/jsnum.c: In function 'js_InitRuntimeNumberState':
    js-1.7/jsnum.c:578: error: 'struct lconv' has no member named 'thousands_sep'
    js-1.7/jsnum.c:578: error: 'struct lconv' has no member named 'thousands_sep'
    js-1.7/jsnum.c:580: error: 'struct lconv' has no member named 'decimal_point'
    js-1.7/jsnum.c:580: error: 'struct lconv' has no member named 'decimal_point'
    js-1.7/jsnum.c:582: error: 'struct lconv' has no member named 'grouping'
    js-1.7/jsnum.c:582: error: 'struct lconv' has no member named 'grouping'
    gmake[2]: *** [js-1.7.dir/jsnum.c.o] Error 1

    This is caused by the fact that the lconv structure in locale.h shipped with the Android NDK is stubbed with the following comment:

    #if 1 /* MISSING FROM BIONIC - DEFINED TO MAKE libstdc++-v3 happy */
    struct lconv { };
    struct lconv *localeconv(void);
    #endif /* MISSING */

    To solve this problem I applied the following patch to jsnum.c and I was able to successfully cross-compile SpiderMonkey 1.7 for Android.

    --- a/jsnum.c   2013-01-10 10:37:54.413800695 -0500
    +++ b/jsnum.c   2013-01-10 10:06:49.432752061 -0500
    @@ -573,13 +573,28 @@ js_InitRuntimeNumberState(JSContext *cx)
         u.s.lo = 1;
         number_constants[NC_MIN_VALUE].dval = u.d;
     
    -    locale = localeconv();
    -    rt->thousandsSeparator =
    -        JS_strdup(cx, locale->thousands_sep ? locale->thousands_sep : "'");
    -    rt->decimalSeparator =
    -        JS_strdup(cx, locale->decimal_point ? locale->decimal_point : ".");
    -    rt->numGrouping =
    -        JS_strdup(cx, locale->grouping ? locale->grouping : "\3\0");
    +    /* Copy locale-specific separators into the runtime strings. */
    +    const char *thousandsSeparator, *decimalPoint, *grouping;
    +#ifdef HAVE_LOCALECONV
    +    locale = localeconv();
    +    thousandsSeparator = locale->thousands_sep;
    +    decimalPoint = locale->decimal_point;
    +    grouping = locale->grouping;
    +#else
    +    thousandsSeparator = getenv("LOCALE_THOUSANDS_SEP");
    +    decimalPoint = getenv("LOCALE_DECIMAL_POINT");
    +    grouping = getenv("LOCALE_GROUPING");
    +#endif
    +    if (!thousandsSeparator)
    +        thousandsSeparator = "'";
    +    if (!decimalPoint)
    +        decimalPoint = ".";
    +    if (!grouping)
    +        grouping = "\3\0";
    +
    +    rt->thousandsSeparator = JS_strdup(cx, thousandsSeparator);
    +    rt->decimalSeparator = JS_strdup(cx, decimalPoint);
    +    rt->numGrouping = JS_strdup(cx, grouping);
     
         return rt->thousandsSeparator && rt->decimalSeparator && rt->numGrouping;
     }

    Of course you would define HAVE_LOCALECONV only for regular builds and not for Android cross-compilations, so that on Android you could either pass your own values for the locale thousands separator, decimal point and grouping via environment variables, or use the above defaults.
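
    For instance, a regular desktop build and an Android cross-build could be driven along these lines (just a sketch with abbreviated flags; the LOCALE_* environment variable names are the ones introduced by the patch above):

    # regular build: define HAVE_LOCALECONV so the patched code keeps using localeconv()
    $ gcc -DHAVE_LOCALECONV ... -c jsnum.c

    # Android cross-build: leave HAVE_LOCALECONV undefined and, if needed, export the
    # separators in the environment of the process that embeds SpiderMonkey
    # (LOCALE_GROUPING can be set similarly, otherwise the "\3" default is used)
    $ export LOCALE_THOUSANDS_SEP=","
    $ export LOCALE_DECIMAL_POINT="."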

Wednesday, January 9, 2013

How to inspect expanded C macros with gcc/g++

Sometimes you get compilation errors in gcc/g++ and you'd like to inspect the output of the C pre-processor to figure out why certain macros were expanded in such a way as to cause an error.
The solution is simply to remove the -o and -c options from your gcc/g++ command line and replace them with

gcc/g++ -E -dD

This will print to your terminal only the pre-processor output that is eventually fed to the C/C++ compiler.

For example, suppose you are trying to compile Mozilla SpiderMonkey 1.7 and one of the compiler lines is the following:

/usr/bin/gcc  -Djs_1_7_EXPORTS -DXP_UNIX -DSVR4 -DSYSV -D_BSD_SOURCE -DPOSIX_SOURCE -DHAVE_LOCALTIME_R -DX86_LINUX -D_IEEE_LIBM -DJS_EDITLINE -DEDITLINE -DHAVE_VA_COPY -DVA_COPY=va_copy -DPIC -DANSI_ARROWS -DHAVE_TCGETATTR -DHIDE -DUSE_DIRENT -DSYS_UNIX -DHAVE_STDLIB -DUNIQUE_HISTORY -O3 -DNDEBUG -fPIC -I../js-1.7  -fPIC -o CMakeFiles/js-1.7.dir/jsarena.c.o   -c js-1.7/jsarena.c

To see the expanded C macros run the following command:

 /usr/bin/gcc  -E -dD -Djs_1_7_EXPORTS -DXP_UNIX -DSVR4 -DSYSV -D_BSD_SOURCE -DPOSIX_SOURCE -DHAVE_LOCALTIME_R -DX86_LINUX -D_IEEE_LIBM -DJS_EDITLINE -DEDITLINE -DHAVE_VA_COPY -DVA_COPY=va_copy -DPIC -DANSI_ARROWS -DHAVE_TCGETATTR -DHIDE -DUSE_DIRENT -DSYS_UNIX -DHAVE_STDLIB -DUNIQUE_HISTORY -O3 -DNDEBUG -fPIC -I../js-1.7  -fPIC  js-1.7/jsarena.c
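
Since the pre-processed output can be very long, you may prefer to redirect it to a file and then search that file for the macro you are interested in (SOME_MACRO below is just a placeholder for the actual macro name):

 $ /usr/bin/gcc -E -dD <same flags as above> js-1.7/jsarena.c > jsarena.i
 $ grep -n SOME_MACRO jsarena.i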



Sunday, January 6, 2013

Upgrading mid-2007 24in iMac with an SSD and a 2.5in HDD in the optical bay

I own a mid-2007 24in iMac with a 2.4 GHz Intel Core 2 Duo processor and 4GB of DDR2 DRAM, running Mac OS X Mountain Lion. For the past few months the computer had been getting slower and slower, and when I ran Windows XP in VMware Fusion the machine was often swapping, bringing the iMac to a crawl. So I finally decided to upgrade my iMac by installing an SSD. Two years ago I had replaced the Western Digital 500 GB hard drive (which failed on me one day with the gray screen of death at boot time) with a 1 TB WD Caviar Green, which I have almost filled since then (about 750 GB used).
So I wanted a fast SSD but also to keep 1 TB of space available. The solution I adopted (found in several posts on the internet) was to replace the existing SATA HDD with an SSD, and replace the optical drive with a PATA-to-SATA adapter hosting a 2.5in HDD. To avoid the swapping I also replaced one of the 2 GB memory modules with a new OWC 4GB module, bringing the total available memory to 6GB.

Hardware Parts

This is the list of hardware components I ended up buying for this upgrade:


Hardware Installation

Here is a picture of the tools I used to perform the upgrade (grounding wrist band, 6-piece mini Torx screwdriver set, spudgers, vacuum cups, Phillips screwdriver).



I followed the instructions to replace the hard drive and the optical bay from the iFixit web site.
The following picture shows the iMac internal components.



The old 1TB caviar green HDD is shown in the following picture. The idea was to replace it with the SSD mounted on the 2.5'' to 3.5'' bay converter.



This picture instead shows the Apple optical drive (SuperDrive).


I removed the optical drive and replaced it with the 1TB WD Blue HDD mounted on the MCE OptiBay enclosure as shown in the following picture. I also put a piece of foam in the enclosure gap (top of picture) to prevent the HDD from moving.


Then I installed the Crucial M4 SSD inside the SilverStone 2.5" to 3.5" Bay Converter in the bottom position (there is room to install two SSDs or two 2.5'' HDDs), in order to align the SATA and power connectors with the iMac motherboard.




The following picture shows the newly installed SSD on the top left, and the WD Blue HDD inside the OptiBay enclosure on the bottom right. Notice I attached the thermal sensor for the SSD directly on the metal frame of the bay converter, and used tape to make sure it stays in position.



All the hardware parts described above fit perfectly in my iMac, and I didn't have to do anything special to install them except for adding the piece of foam in the OptiBay enclosure to prevent the HDD from moving around.

Software Configuration

At this point my iMac had two new unformatted disk drives and my original HDD was sitting on my desk. What I had in mind was to install Mac OS on the Crucial M4 SSD and put the users on the WD Blue HDD; in fact 64 GB is not enough to store the user directories, where for example pictures and movies take up a lot of disk space. To restore the system I then followed this procedure.
  • I took the original WD Caviar Green HDD and installed it in a Rosewill external SATA enclosure I had lying around. Then I connected the external SATA enclosure to the iMac via a USB cable.
  • I powered up the iMac holding the Option key. This gave me the option to boot into Mountain Lion from the external WD Caviar Green HDD (volume name Macintosh HD).
  • I logged in and opened the Disk Utility application, selected the Crucial M4 disk, clicked on the Erase tab, and formatted the disk by selecting the Mac OS Extended (Journaled) format and naming the volume Macintosh SSD.
  • In a similar way I erased the WD Blue HDD and named the volume Macintosh HD Users. At the end of this procedure, Disk Utility showed the following on the screen:

  • Then I ran Carbon Copy Cloner and copied all directories from the external Macintosh HD except the /Users directory to the Crucial M4 on volume Macintosh SSD (it is simple to do this since Carbon Copy Cloner allows you to select individual directories to copy from the source). I ended up copying about 32 GB to the Macintosh SSD volume, which took about 1 hour. 
  • I ran Carbon Copy Cloner again, but this time I selected only the /Users directory from the external Macintosh HD as the source, and Macintosh HD Users as the destination. Copying the 693 GB took Carbon Copy Cloner about 7 hours and 47 minutes.
  • (If you don't want to use Carbon Copy Cloner, you can copy the files directly using the ditto command. For example, to clone the /Users directory:

    # sudo ditto /Volumes/Macintosh\ HD/Users /Volumes/Macintosh\ HD\ Users/Users/
    )

  • I then opened the Terminal app, logged in as root and created a symbolic link to the new location of the /Users directory:

  • # sudo su -
    # cd /Volumes/Macintosh\ SSD
    # ln -s /Volumes/Macintosh\ HD\ Users/Users/ Users
At this point I rebooted the iMac, and when it came back all my original logins with their settings were magically preserved. Mac OS X now boots in 13 seconds, and opening applications is almost instantaneous. Now I still had to do some tweaks to make the new setup even better. I followed the suggestions from Martin's blog (Optimizing MacOS X Lion for SSD).
  1. I went to System Preferences, clicked on Users & Groups, and clicked the lock icon to unlock the advanced editing. Once unlocked, I right-clicked on each user account and chose Advanced Options from the pop-up menu. Once in the Advanced Options dialog, I changed the Home directory of the user from /Users/user-name to the new location (/Volumes/Macintosh\ HD\ Users/Users/user-name); a command-line alternative is sketched after this list.
  2. I installed and enabled TrimEnabler from this web site.
  3. I set the noatime flag to prevent Mac OS from updating the SSD file system every time a file is accessed (see the sketch after this list).
  4. I used the WD Blue HDD for temporary files.

    sudo ditto /private/tmp /Volumes/Macintosh\ HD\ Users/private/tmp
    sudo rm -rf /private/tmp
    sudo ln -s /Volumes/Macintosh\ HD\ Users/private/tmp /private/tmp
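
As an aside, the home directory change in step 1 can also be done from the command line with dscl. This is only a sketch, under the assumption that the account short name is john (the GUI route described above is what I actually used):

    $ sudo dscl . -change /Users/john NFSHomeDirectory /Users/john "/Volumes/Macintosh HD Users/Users/john"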
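
Regarding the noatime flag in step 3, a quick manual way to apply and verify it on the running system is the following (a sketch of mine, not necessarily how the referenced blog post does it):

    $ sudo mount -vuwo noatime /
    $ mount | grep ' / '

The second command should list noatime among the root file system mount options. Note that a manual remount does not survive a reboot, so a boot-time mechanism (for example a launchd job) is needed to make it permanent.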
I rebooted the iMac one more time and verified that TRIM was working, that the root file system was mounted with the noatime flag, and that temporary files were going to the HDD instead of the SSD.
I've given new life to an old machine. I'm pretty happy now.

Tuesday, December 18, 2012

How to Cross-Compile libiconv for Android

If your legacy C/C++ code includes <iconv.h> to convert the encoding of characters from one coded character set to another, and you need to cross-compile it with the Android NDK, you will get the following error:

   error: iconv.h: No such file or directory

In fact there is currently no iconv.h available in the Android NDK and you will have to port libiconv to Android yourself.
I successfully used the following instructions to cross-compile libiconv.so for Android.

Get the source code for libiconv-1.13.1:
 
   $ wget http://ftp.gnu.org/pub/gnu/libiconv/libiconv-1.13.1.tar.gz
 
Unzip and untar the file:

   $ tar zxvf libiconv-1.13.1.tar.gz
 
Patch localcharset.c using the following patch file (or else you will get another error: langinfo.h: No such file or directory):
 
   $ echo "diff --ignore-file-name-case -wuprN libiconv-1.13.1.orig/libcharset/lib/localcharset.c libiconv-1.13.1/libcharset/lib/localcharset.c
--- libiconv-1.13.1.orig/libcharset/lib/localcharset.c  2009-06-21 07:17:33.000000000 -0400
+++ libiconv-1.13.1/libcharset/lib/localcharset.c       2012-12-18 10:20:27.000000000 -0500
@@ -44,7 +44,7 @@
 # endif
 #endif

-#if !defined WIN32_NATIVE
+#if !defined(WIN32_NATIVE) && !defined(__ANDROID__)
 # if HAVE_LANGINFO_CODESET
 #  include <langinfo.h>
 # else
@@ -328,7 +328,7 @@ locale_charset (void)
   const char *codeset;
   const char *aliases;

-#if !(defined WIN32_NATIVE || defined OS2)
+#if !(defined WIN32_NATIVE || defined OS2 || defined __ANDROID__)

 # if HAVE_LANGINFO_CODESET " > iconv.patch

   $ patch -b -p0 < ./iconv.patch


Run the configure script and generate iconv.h:

   $ cd libiconv-1.13.1
   $ ./configure

Create a jni sub-directory:
 
$ mkdir jni
 
And save the following lines in jni/Android.mk:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
TARGET_ARCH_ABI := armeabi-v7a
LOCAL_MODULE    := iconv
LOCAL_CFLAGS    := \
    -Wno-multichar \
    -D_ANDROID \
    -DLIBDIR="\"c\"" \
    -DBUILDING_LIBICONV \
    -DIN_LIBRARY
LOCAL_C_INCLUDES := \
    ../libiconv-1.13.1 \
    ../libiconv-1.13.1/include \
    ../libiconv-1.13.1/lib \
    ../libiconv-1.13.1/libcharset/include
LOCAL_SRC_FILES := \
    ../libiconv-1.13.1/lib/iconv.c \
    ../libiconv-1.13.1/lib/relocatable.c \
    ../libiconv-1.13.1/libcharset/lib/localcharset.c
include $(BUILD_SHARED_LIBRARY)
 
Finally cross-compile iconv using the ndk-build tool:

   $ cd jni
   $ ndk-build V=1
If everything goes well, you will find iconv.h under libiconv-1.13.1/include and libiconv.so under libs/armeabi.
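
As a quick sanity check you can verify that the freshly built library really is an ARM shared object (run this from the directory that contains the libs folder; the exact output of the file command will vary):

   $ file libs/armeabi/libiconv.so

It should be reported as an ELF 32-bit ARM shared object.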

Tuesday, December 4, 2012

Configuring the Apple Airport Extreme with Verizon FIOS

I have Verizon FIOS triple-play service and I love the TV picture quality and internet speed and reliability, but I don't like Verizon's solution to my Parental Control needs.
So I've decided to buy an Airport Extreme Base Station (AEBS) from Apple, but when I went to set it up, it configured itself for bridged mode, since the Verizon modem is also a router and the Airport Utility decided not to have a double NAT configuration. By doing so, however, you lose the capability of having a guest network with the Airport Extreme, since it dumbs itself down from a full-fledged router to a simple layer-2 switch.
So I managed to manually set it up in DHCP/NAT mode; even though it initially complains about a double NAT configuration (one from the Verizon router and one from the AEBS), at least I gained my guest network back (at the time of this post it is not possible to configure a guest network with the Verizon router).


Hardware Configuration


Apple Router: AirPort Extreme Base Station, Part Number: MD031LL/A 




Verizon Modem/Router: Actiontec MI424WR Rev. I
  • I used a CAT6 patch cable to connect the AEBS Internet WAN port to Ethernet Port 1 of the Actiontec. This setup allows the two routers to communicate at a wire speed of up to 1 Gbit per second.




Software Configuration



Here is the step-by-step procedure I ran from an iMac running Mountain Lion to set up the Airport Extreme with the Airport Utility.




  • Click on the picture of the Airport Extreme base station: a popup window will show an Edit button
  • Click on the Edit button
  • Click on the Network tab and select "DHCP and NAT" from the Router Mode
  • Click on the Network Options button and select "10.0" for the IPv4 DHCP Range, and "172.16" for the Guest IPv4 DHCP Range. Click on the Save button (see picture below)






  • Click on the Update button: this will reset the AEBS and cause a solid yellow light with a status of Double NAT. The AEBS will advise you to switch back to bridge mode
  • Click on the Double NAT Status pull-down menu and select Ignore: after another reboot the AEBS should come up with a solid green color


Regarding the Actiontec configuration, I left it as is, except for disabling the wireless mode to avoid interference with the AEBS.

  • With an internet browser log on to http://192.168.1.1 using the login and password printed on the bottom of the MI424WR modem/router.
  • Click on the Wireless Settings icon:

  • Click on Basic Security Settings from the left panel
  • Select Off from the "1. Turn Wireless ON" form entry
  • Click on the Apply button to disable wireless

Considerations

Having the Verizon DHCP server use the 192.168.1.xxx IP address range and the AEBS the 10.0.1.xxx range will keep you sane and prevent confusion about which sub-network you are connected to.
In any case these two sub-networks are completely separate and invisible from the Internet, and a device connected to one subnet cannot talk to a device on the other subnet (unless you start configuring port forwarding on the AEBS, of course). This is OK since I plan to keep all of my computers and devices on the subnet controlled by the AEBS in order to have access control over each device.
To my disappointment, the Airport Utility version 6.1 used to configure the AEBS is probably one of the worst applications I have ever used to set up a router. It's not intuitive and, worst of all, it doesn't show who's connected to your network (or at least it doesn't show all the connected devices). In fact it only shows wireless clients, and in a weird fashion (by hovering over the base station picture), with no sign of any device connected through the Gigabit Ethernet ports. Moreover, the list of wireless clients is dynamic and you cannot even copy/paste the MAC addresses to add to the access control table later. I ended up switching to the previous version of Airport Utility (version 5.6, as suggested by several people on the Internet), since with that you can still get a list of all devices connected to the AEBS from the Advanced->Logs and Statistics->DHCP Clients tab.

Still, for parental control the AEBS only has a time-based table where you can set up a schedule on a per-MAC-address basis. So I still had to solve the problem of preventing my kids from hitting questionable web sites. I solved that by setting the OpenDNS servers as the primary and secondary DNS servers in my Verizon router. OpenDNS offers a basic parental control filter based on categories, and I found that it is adequate for my needs.

Tuesday, November 20, 2012

Python For Android (Py4A)

A better solution for cross-compiling Python for Android is to use the Py4A project, which is meant to be used together with SL4A (Scripting Layer For Android). If you are only interested in the Python interpreter and the runtime Python library, you can also use it standalone.
Get a local copy of the source code using the following command:

   $ hg clone https://code.google.com/p/python-for-android/

Just focus on the python-build subdirectory and make sure the python-build/python-src subdirectory is not present (remove it if it came with the Mercurial repository, or else the compilation will fail).
Set up your environment so that the python-for-android build script can pick up the ndk-build script from the Android NDK:

  $ export ANDROID_NDK_ROOT=/home/<your-directory>/android-ndk-r8
  $ export PATH=$ANDROID_NDK_ROOT:$PATH

Finally build Python for Android by issuing the following command:

  $ cd python-for-android/python-build
  $ rm -rf python-src
  $ bash build.sh

Note that on my Ubuntu 12.04 machine I initially had the following compilation error:

Traceback (most recent call last):
  File "build.py", line 161, in <module>
    os.path.join(pwd, 'output.temp', 'usr'))
  File "build.py", line 89, in zipup
    zip_file = zipfile.ZipFile(out_path, 'w', compression=zipfile.ZIP_DEFLATED)
  File "/home/danilo/python-for-android/python-build/host/lib/python2.6/zipfile.py", line 660, in __init__
    "Compression requires the (missing) zlib module"
RuntimeError: Compression requires the (missing) zlib module


I traced the problem to the zlib library on my system being installed under /lib/x86_64-linux-gnu/ instead of one of the traditional lib directories covered by the Python setup.py script. Also, my system only had libz.so.1 and not libz.so. To fix both problems I just created a symlink in the standard /usr/lib directory as follows:

  $ cd /usr/lib
  $ sudo ln -s /lib/x86_64-linux-gnu/libz.so.1 libz.so

With this fix the build.sh script was able to successfully build the zlib module for the host environment and create the following zipped files:

  • python_extras_r14.zip
  • python-lib_r16.zip
  • python_r16.zip
  • python_scripts_r13.zip

Of these I only used python_r16.zip, which contains the stripped Python interpreter and the runtime libraries, and python-lib_r16.zip, which contains the include header files, such as Python.h, that can be used to compile Python bindings at development time.