
Archives for linux

Generating perf maps with OpenJDK 17

February 28th, 2021

Linux perf is a fantastically useful tool for all sorts of profiling tasks. It’s a statistical profiler that works by capturing the program counter value when a particular performance event occurs. This event is typically generated by a timer (e.g. 1kHz) but can be any event supported by your processor’s PMU (e.g. cache miss, branch mispredict, etc.). Try perf list to see the events available on your system.

A table of raw program counter addresses isn’t particularly useful to humans so perf needs to associate each address with the symbol (function) that contains it. For ahead-of-time compiled programs and shared libraries perf can look this up in the ELF symbol table on disk, but for JIT-compiled languages like Java this information isn’t available as the code is generated on-the-fly in memory.

Let’s look at what perf top reports while running a CPU-intensive Java program:

Samples: 136K of event 'cycles', 4000 Hz, Event count (approx.): 57070116973 lost: 0/0 drop: 0/0
Overhead  Shared Object                       Symbol
  16.33%  [JIT] tid 41266                     [.] 0x00007fd733e40ec0
  16.15%  [JIT] tid 41266                     [.] 0x00007fd733e40e3b
  16.14%  [JIT] tid 41266                     [.] 0x00007fd733e40df1
  16.14%  [JIT] tid 41266                     [.] 0x00007fd733e40e81
   2.80%  [JIT] tid 41266                     [.] 0x00007fd733e40df5
   2.62%  [JIT] tid 41266                     [.] 0x00007fd733e40e41
   2.45%  [JIT] tid 41266                     [.] 0x00007fd733e40ec4
   2.43%  [JIT] tid 41266                     [.] 0x00007fd733e40e87

Perf marks these locations as [JIT] because the addresses are in a part of the process’s address map not backed by a file. Because the addresses are all very similar we might guess they’re in the same method, but perf has no way to group them and shows each unique address separately. Beyond that guess, the output doesn’t help us figure out which method is consuming all the cycles.

As an aside, it’s worth briefly comparing perf’s approach, which samples the exact hardware instruction being executed when a PMU event occurs, with a traditional Java profiler like VisualVM which samples at the JVM level (i.e. bytecodes). A JVM profiler needs to interrupt the thread, then record the current method, bytecode index, and stack trace, and finally resume the thread. Obviously this has larger overhead but there is a deeper problem: JIT-ed code cannot be interrupted at arbitrary points because the runtime may not be able to accurately reconstruct the VM state at that point. For example one cannot inspect the VM state halfway through executing a bytecode. So at the very least the JIT-ed code needs to continue executing to the end of the current bytecode. But requiring the VM to be in a consistent state at the end of each bytecode places too many restrictions on an optimising JIT. Therefore the optimised code can typically only be interrupted at special “safepoints” inserted by the JIT – in Hotspot this is at method return and loop back-edges. That means that a JVM profiler can only see the thread stopped at one of these safepoints which may deviate from the actual hot parts of the code, sometimes wildly. This problem is known as safepoint bias.

So a hardware profiler can give us better accuracy, but how to translate the JIT-ed code addresses to Java method names? Currently there are at least two tools to do this, both of which are implemented as JVMTI plugins that load into a JVM process and then dump some metadata for perf to use.

The first is the “jitdump” plugin that is part of the Linux perf tree. After being loaded into a JVM, the plugin writes out all the machine code and metadata for each method that is JIT compiled. Later this file can be combined with recorded profile data using perf inject --jit to produce an annotated data file with the correct symbol names, as well as a separate ELF shared object for each method allowing perf to display the assembly code. I use this often at work when I need to do some detailed JIT profiling, but the offline annotation step is cumbersome and the data files can be 100s of MB for large programs. The plugin itself is complex and historically buggy. I’ve fixed several of those issues myself but wouldn’t be surprised if there are more lurking. This tool is mostly overkill for typical Java developers.

The second tool that I’m aware of is perf-map-agent. This generates a perf “map” file: a simple text file listing the start address, length, and symbol name of each JIT-compiled method. Perf will load this file automatically if it finds one named /tmp/perf-&lt;pid&gt;.map. As the map file doesn’t contain the actual instructions it’s much smaller than the jitdump file, and it doesn’t require an extra annotation step so it can be used for live profiling (i.e. perf top). The downsides are that profile data is aggregated by method, so you can’t drill down to individual machine instructions, and the map can become stale in a long-running VM as methods are unloaded or recompiled. You also need to compile the plugin yourself as it’s not packaged in any distro, and many people would rightly be wary of loading untrusted third-party code that has full access to the VM. So it would be much more convenient if the VM could just write this map file itself.

OpenJDK 17, which should be released early next month, has a minor new feature contributed by yours truly to do just this: either send the diagnostic command Compiler.perfmap to a running VM with jcmd, or run java with -XX:+DumpPerfMapAtExit, and the VM will write a perf map file that can be used to symbolise JIT-ed code.

$ jps
40885 Jps
40846 PiCalculator
$ jcmd 40846 Compiler.perfmap
40846:
Command executed successfully
$ head /tmp/perf-40846.map 
0x00007ff6dbe401a0 0x0000000000000238 void java.util.Arrays.fill(int[], int)
0x00007ff6dbe406a0 0x0000000000000338 void PiSpigout.calc()
0x00007ff6dbe40cc0 0x0000000000000468 void PiSpigout.calc()
0x00007ff6dbe41520 0x0000000000000138 int PiSpigout.invalidDigitsControl(int, int)
0x00007ff6d49091a0 0x0000000000000110 void java.lang.Object.<init>()
0x00007ff6d4909560 0x0000000000000350 int java.lang.String.hashCode()
0x00007ff6d4909b20 0x0000000000000130 byte java.lang.String.coder()
0x00007ff6d4909ea0 0x0000000000000170 boolean java.lang.String.isLatin1()

Here we can see for example that the String.hashCode() method starts at address 0x00007ff6d4909560 and is 0x350 bytes long. Let’s run perf top again:

Samples: 206K of event 'cycles', 4000 Hz, Event count (approx.): 94449755711 lost: 0/0 drop: 0/0
Overhead  Shared Object                         Symbol
  78.78%  [JIT] tid 40846                       [.] void PiSpigout.calc()
   0.56%  libicuuc.so.67.1                      [.] icu_67::RuleBasedBreakIterator::handleNext
   0.34%  ld-2.31.so                            [.] do_lookup_x
   0.30%  libglib-2.0.so.0.6600.3               [.] g_source_ref
   0.24%  libglib-2.0.so.0.6600.3               [.] g_hash_table_lookup
   0.24%  libc-2.31.so                          [.] __memmove_avx_unaligned_erms
   0.20%  libicuuc.so.67.1                      [.] umtx_lock_67

This is much better: now we can see that 79% of system-wide cycles are spent in one Java method PiSpigout.calc().

Alternatively we can do offline profiling with perf record:

$ perf record java -XX:+UnlockDiagnosticVMOptions -XX:+DumpPerfMapAtExit PiCalculator 50
3.1415926535897932384626433832795028841971693993750
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.030 MB perf.data (230 samples) ]
$ perf report
Samples: 340  of event 'cycles', Event count (approx.): 71930116
Overhead  Command          Shared Object       Symbol
  11.83%  java             [JIT] tid 44300     [.] Interpreter
   4.19%  java             [JIT] tid 44300     [.] flush_icache_stub
   3.87%  java             [kernel.kallsyms]   [k] mem_cgroup_from_task
...

Here the program didn’t run long enough for any method to be JIT compiled and we see most of the cycles are spent in the interpreter. (By the way, the interpreter still looks like JIT-ed code to perf because its machine code is generated dynamically by Hotspot at startup.)

I’d be interested to hear from anyone who tries this new feature and finds it useful, or otherwise!

YPbPr Mode vs RGB

February 26th, 2021

I was fiddling with my monitor settings today (Dell U2415) and noticed the “Input Color Format” was set to “YPbPr” instead of “RGB”. This is a colour encoding designed for video, in which the chroma channels are typically subsampled to half the resolution of the luma channel. Normally it would be used for TVs or video sources rather than a PC monitor. That said, I’ve been using it this way for two years without noticing…

The problem is Dell monitors advertise this mode along with RGB in their HDMI EDID. The driver for my AMD graphics card sees this and prefers it over RGB, with no way to override the selection. There is one creative solution I found which involves patching a local copy of the EDID and telling the driver to load that from disk rather than reading it from the monitor. I took the simpler option of spending a few quid on a DisplayPort cable, which only supports RGB.

The result? Fonts look a bit sharper… maybe… but it’s hard to tell.

Best Shell Prompt Colour Scheme

December 19th, 2020

It can be agonizing to pick a good colour scheme for your shell prompt. Especially when you have 256 or more colours to pick from. So rather than waste my time I decided to embrace serendipity and have my shell pick a random colour when it starts. The results are rather pleasing, as you can see below, and if I don’t like a particular colour then it will only last as long as that particular shell.

It also helps to visually distinguish different windows that are being used for different tasks, and root shells are coloured an alarming shade of red. Just pop the following in your .bashrc.

PS1=${SSH_CLIENT:+$(hostname -s):}'\w \$ '
case "$TERM" in
  *-256color)
    if [ "$UID" = 0 ]; then
      color=196   # Red
    else
      color=$((16+(36*(1+RANDOM%5))+(6*(1+RANDOM%5))+(1+RANDOM%5)))
    fi
    PS1='\[\033[1m\033[38;5;'$color'm\]'$PS1'\[\033[00m\]'
    ;;
  *-color)
    if [ "$UID" = 0 ]; then
      color=31   # Red
    else
      color=$((31+RANDOM%7))
    fi
    PS1='\[\033[1m\033['$color'm\]'$PS1'\[\033[00m\]'
    ;;
esac
unset color

For *-256color terminals, codes 16 to 231 form a 6x6x6 RGB colour cube: code = 16 + 36r + 6g + b with each channel from 0 to 5 (for example r=1, g=2, b=3 gives 67). The script above picks each channel from 1 to 5 to avoid the darkest colours, but you can tweak it to your liking. Most modern terminals also support a true colour escape sequence giving full 24-bit colour, but 125 different shades is surely enough for anyone.

Filed in linux - Comments closed

SIGPIPE and how to ignore it

September 23rd, 2020

I recently found myself trying to port a program that uses Boost Asio to run on OpenBSD. Everything compiled OK but while running it would occasionally exit with an unhandled SIGPIPE signal. This doesn’t happen on Linux. What’s going on here?

SIGPIPE is a synchronous signal that’s sent to a process (thread in POSIX.1-2004) which attempts to write data to a socket or pipe that has been closed by the reading end. Importantly it’s not an asynchronous signal that notifies you when the reading end has been closed: it’s delivered only when you attempt to write data. In fact it’s generated precisely when the system call (write(2), sendmsg(2), etc.) would fail with EPIPE and doesn’t give any additional information.

So what’s the point then? The default action for SIGPIPE is to terminate the process without a core dump (just like SIGINT or SIGTERM). This simplifies error handling in programs that are meant to run as part of a shell pipeline: reading input, transforming it, and then writing it to another process. SIGPIPE allows the program to skip error handling and blindly write data until it’s killed.

For programs that do handle write errors, SIGPIPE isn’t useful and is best avoided. Unfortunately there are several different ways to avoid it.

Ignore the signal globally

This is the easiest if you are in complete control of the program (i.e. not writing a library). Just set the signal to SIG_IGN and forget about it.

signal(SIGPIPE, SIG_IGN);

Use MSG_NOSIGNAL

If you are writing to a socket, and not an actual pipe, pass the MSG_NOSIGNAL flag to send(2) or sendmsg(2). This has been in Linux for ages and was standardised in POSIX.1-2008 so it’s available almost everywhere.

Set SO_NOSIGPIPE socket option

This is a bit niche as it only exists on FreeBSD and OS X. Use setsockopt(2) to set this option on a socket and all subsequent send(2) calls will behave as if MSG_NOSIGNAL was set.

int on = 1;
setsockopt(s, SOL_SOCKET, SO_NOSIGPIPE, &on, sizeof(on));

This seems to be of limited utility as calling write(2) on the socket will still generate SIGPIPE. The only use I can think of is if you need to pass the socket to a library or some other code you don’t control.

Temporarily mask the signal on the current thread

The most general solution, for when you are not in full control of the program’s signal handling and want to write data to an actual pipe or use write(2) on a socket, is to first mask the signal for the current thread with pthread_sigmask(3), write the data, drain any pending signal with sigtimedwait(2) and a zero timeout, and then finally unmask SIGPIPE. This technique is described in more detail here. Note that some systems such as OpenBSD do not have sigtimedwait(2) in which case you need to use sigpending(2) to check for pending signals and then call the blocking sigwait(2).

Anyway back to the original problem. Asio hides SIGPIPE from the programmer by either setting the SO_NOSIGPIPE socket option on systems that support it, or on Linux by passing MSG_NOSIGNAL to sendmsg(2). Neither applies on OpenBSD, which is why we get the SIGPIPE. I submitted a pull request to pass MSG_NOSIGNAL on OpenBSD as well. But I don’t know when or if that will be merged so I’m also trying to get the same fix added to the ports tree.

UPDATE: a patch is now in the ports tree.

OpenSMTPD: use SSL client certificate when relaying outgoing mail

September 13th, 2020

I recently set up OpenSMTPD as the MTA on my local machine. I want to relay outgoing mail through another mail server on my VPS which is configured to only accept SSL connections with valid client certificates.

It’s not clear from the documentation how to configure this in smtpd.conf. However I eventually found from the source code that the “relay” action accepts a “pki” option to specify a certificate and key file.

action "outbound" relay host smtps://user@mail.example.org \
	auth <secrets> pki host.example.org mail-from "@example.org"

My mail server requires a username and password in addition to the client certificate so a “secrets” table should also be configured:

table secrets file:/etc/mail/secrets

And finally add a “pki” stanza for host.example.org to associate the X.509 certificate and private key:

pki host.example.org cert "/etc/ssl/example.crt"
pki host.example.org key "/etc/ssl/private/example.key"

UPDATE: this is documented in the man page now. :D


My first Linux “kernel” patches

June 7th, 2020

OK well not really kernel patches, but they’re in the Linux tree so I guess it counts?

Was so excited when I got the automatic notification they’d been merged for the 5.8 release. Hopefully someone out there using perf to profile Java finds them useful.


Wlroots and Phosh on Samsung S7

April 19th, 2020

A few weekends ago I left my Samsung S7 running Gnome on software-rendered X11. This kind of works as a demo but it’s slow and clunky so I followed that by attempting to get Phosh running. Phosh is a gnome-shell replacement for Purism’s Librem5. It uses phoc as a Wayland compositor instead of Mutter, which in turn is based on wlroots, the compositor-as-a-library component of Sway.

It should be much easier to add a hwcomposer backend to wlroots than Mutter, and in fact someone already started: NotKit/wlroots. I took this and rebased on the latest upstream wlroots tag, hacked the code around until it compiled with the new interface, ran the example app and… the screen flashed green for a second and then kernel panicked and rebooted. Ouch.

<0>[ 7300.959344] I[0:      swapper/0:    0] Kernel panic - not syncing: Unrecoverable System MMU Fault!!
<0>[ 7300.959382] I[0:      swapper/0:    0] Kernel loaded at: 0x8013c000, offset from compile-time address bc000
<3>[ 7300.959438] I[0:      swapper/0:    0] exynos_check_hardlockup_reason: smc_lockup virt: 0xffffffc879980000 phys: 0x00000008f9980000 size: 4096.
<0>[ 7300.959489] I[0:      swapper/0:    0] exynos_check_hardlockup_reason: SMC_CMD_GET_LOCKUP_REASON returns 0x1. fail to get the information.
<0>[ 7300.959534] I[0:      swapper/0:    0] exynos_ss_prepare_panic: no core got stucks in EL3 monitor.

The panic log is not very helpful, there’s no user stack trace.

After a painful few hours debugging by adding prints and sleeps and comparing against the working test_hwcomposer from libhybris I managed to fix it. I’ve pushed a hwcomposer-0.10.1 branch here.

Then I built phoc and phosh, linking against the modified wlroots and libhybris. To my surprise it Just Worked, with the exception of touch input. Input requires enabling the libinput backend of wlroots, and that in turn requires an active “session”. Session in the systemd world means being associated with a “seat” in systemd-logind. We can do that by starting phoc inside a systemd service and associating it with a TTY. I copied phosh.service from the Librem5 package and edited it for my system.

Unfortunately phoc then hangs at startup inside the wlroots libinput backend polling for sd_seat_can_graphical(..) to return true. Logind seems to make some people very angry, but debugging it with the source code and loginctl wasn’t too bad.

$ loginctl 
SESSION  UID USER SEAT  TTY  
    132 1000 nick            
    162 1000 nick seat0 tty7     <---------------
      4 1000 nick            
      6 1000 nick            
      7 1000 nick            
     c2    0 root       pts/4
 
6 sessions listed.

Here phoc is running on seat0 which is attached to /dev/tty7.

$ loginctl show-seat seat0 
Id=seat0
ActiveSession=162
CanMultiSession=yes
CanTTY=yes
CanGraphical=no    <-----------
Sessions=162
IdleHint=yes
IdleSinceHint=1587287027759986
IdleSinceHintMonotonic=18371557550

From reading the logind source, CanGraphical is true if there is a device attached to the seat that has the udev TAG attribute with value "master-of-seat". Normally this attribute is added to graphics devices by the udev rules systemd ships in /lib/udev/rules.d/71-seat.rules. The S7 has a special “decon” graphics driver so none of the standard rules match. But it’s easy to add a custom rule:

SUBSYSTEM=="graphics", KERNEL=="fb0", DRIVERS=="decon", TAG+="master-of-seat"

After reloading and retriggering the udev rules, the framebuffer device now has this tag:

$ udevadm info /dev/fb0
P: /devices/13960000.decon_f/graphics/fb0
N: fb0
L: 0
S: graphics/fb0
E: DEVPATH=/devices/13960000.decon_f/graphics/fb0
E: DEVNAME=/dev/fb0
E: MAJOR=29
E: MINOR=0
E: SUBSYSTEM=graphics
E: USEC_INITIALIZED=42601393
E: ID_PATH=platform-13960000.decon_f
E: ID_PATH_TAG=platform-13960000_decon_f
E: ID_FOR_SEAT=graphics-platform-13960000_decon_f
E: ID_SEAT=seat0
E: DEVLINKS=/dev/graphics/fb0
E: TAGS=:seat0:seat:master-of-seat:      <-----------

And after restarting phoc/phosh touch is working! :D

From left: Gnome calculator, app drawer, Gnome terminal and squeekboard onscreen keyboard


Installing GNU/Linux on my Samsung S7

April 6th, 2020

I want to do something useful with my old Samsung S7. Previously I’ve tried LineageOS, and while it works OK, it wasn’t as stable as the stock Samsung ROM and you still end up installing a lot of proprietary apps on it by necessity. I’d much rather run a proper GNU/Linux distro on it. This post is documenting my efforts at doing that, with some minor success.

A failed mainlining

My first attempt was to use PostmarketOS. This is a proper free/libre distribution in that it doesn’t rely on any Android blobs for hardware access. Which is great if your SoC/GPU/modem/etc. has mainline Linux support. But I have the European S7 which contains the Samsung Exynos 8890 SoC, for which the only publicly available source code is the Android 3.18 kernel dump. For graphics this means you’re stuck with an unaccelerated framebuffer only which is a non-starter for any modern GUI.

The solution to this would be to update the Samsung provided kernel to a more recent version of Linux, which has free drivers for the Mali GPU, etc. I tried rebasing their code on a slightly more recent 4.4 kernel but once I looked into the SoC’s PCIe and USB drivers I realised how enormous the task was and gave up. Sadly unless Samsung do the work themselves or even just release the 8890 datasheet this device is never going to run anything later than 3.18.

Halium

Halium is an interesting project for devices stuck in this situation. Drivers for Android devices are usually split between a minimal GPL’d kernel driver and a proprietary userspace blob (HAL). Halium allows you to run the userspace blobs inside a LXC container, using a minimal Android system.img without any of the UI level components. Hybris then allows you to link to and call the Android libraries from normal Linux programs linked against glibc. It’s the basis of UBPorts and Plasma Mobile.

There’s already some progress getting Halium to run on the S7. I picked this up and got it to the point where it can run GNOME under X11.

Debugging in the initrd

The Halium boot.img contains a very useful init-script which is great for debugging. You can configure it to enable the USB network device and then start a telnet server inside the initrd and wait for you to connect before it continues booting. This lets you chroot into the target filesystem, poke around, and most importantly read /proc/last_kmsg to debug kernel panics. Without this I would have given up pretty quickly: with no serial port or other debug output there’s little feedback when something goes wrong.

The S7 has an annoying bug where the USB Ethernet MAC address is all zeros. I’ve put a patch for that here.

Systemd-journald hangs the system

The default Halium rootfs image hangs very early in boot. On the GitHub issue someone noticed this was caused by journald and it would boot a bit further if you stop this from running. You can do this with systemctl mask systemd-journald.service.

The problem here is caused by the max77854_fuelgauge battery driver not implementing some properties to read its status from /proc. Journald gets stuck in a loop of polling these files and locks up the system. Why is journald trying to read battery information? No idea, but it’s trivial to fix this behaviour with a patch to the kernel.

Disable Android “paranoid network”

Android has some patches to disable all network access unless the user is a member of some specific group. This is useless for a normal Linux userland so disable it in the defconfig.

Replacing the Halium system image with Debian

After this we can boot up into the default Halium rootfs which is based on Ubuntu 16.04. The LXC Android container boots, loads all the firmware, and WiFi is working but not much else. I thought about trying some other stock Halium rootfs like Plasma Mobile or UBPorts, but really I’d rather set my phone up like a regular computer so I used debootstrap to install a minimal Debian filesystem in the /data partition on the phone. My home directory is on a 64GB microSD card so I can safely blat the OS and reinstall. This requires tweaking the boot script in the initrd because Halium normally loopback mounts a rootfs image inside /data and then does a switch_root into that, but now we have the OS installed directly in /data.

Sadly this just hangs on boot. Debugging with the initrd telnet interface, systemd is stuck unable to start any processes. Turns out this is because systemd 245 depends on the “ambient capabilities” feature which isn’t present in the 3.18 kernel. Google have backported this to the Android 3.18 series so we can just apply that patch on the Samsung kernel.

This gets us a bit further and then the kernel panics while starting udevd.

<0>[  132.035960]  [5:         v4l_id:  787] Call trace:
<0>[  132.035979]  [5:         v4l_id:  787] [<ffffffc0000ec328>] dump_backtrace+0x0/0x144
<0>[  132.035990]  [5:         v4l_id:  787] [<ffffffc0000ec5ec>] die+0x140/0x228
<0>[  132.036009]  [5:         v4l_id:  787] [<ffffffc0000f82b8>] __do_kernel_fault+0xb4/0xd8
<0>[  132.036019]  [5:         v4l_id:  787] [<ffffffc0000f858c>] do_page_fault+0x2b0/0x2fc
<0>[  132.036029]  [5:         v4l_id:  787] [<ffffffc0000f86c0>] do_translation_fault+0xe8/0x150
<0>[  132.036039]  [5:         v4l_id:  787] [<ffffffc0000e5268>] do_mem_abort+0x38/0xa4
<0>[  132.036050]  [5:         v4l_id:  787] [<ffffffc0000e7d70>] el1_da+0x20/0x78
<0>[  132.036065]  [5:         v4l_id:  787] [<ffffffc00067d00c>] v4l_querycap+0x28/0x50
<0>[  132.036076]  [5:         v4l_id:  787] [<ffffffc000680eb4>] __video_do_ioctl+0x164/0x254
<0>[  132.036085]  [5:         v4l_id:  787] [<ffffffc000681250>] video_usercopy+0x2ac/0x504
<0>[  132.036094]  [5:         v4l_id:  787] [<ffffffc0006814b8>] video_ioctl2+0x10/0x1c
<0>[  132.036103]  [5:         v4l_id:  787] [<ffffffc00067bd3c>] v4l2_ioctl+0x78/0x130
<0>[  132.036117]  [5:         v4l_id:  787] [<ffffffc00068c468>] do_video_ioctl+0xfd8/0x1d04
<0>[  132.036128]  [5:         v4l_id:  787] [<ffffffc00068d1ec>] v4l2_compat_ioctl32+0x58/0xb4
<0>[  132.036142]  [5:         v4l_id:  787] [<ffffffc000230bc8>] compat_SyS_ioctl+0x10c/0x1250

This is in some ioctls for the camera sensor driver. I think the problem here is that udev is enumerating the devices before the Android HAL blobs have had a chance to do their initialisation. I sprinkled some random NULL checks and it doesn’t crash anymore, but I’m not really happy with this so I haven’t pushed the patch.

Halium LXC container

Next I copied all of the lxc-android repository onto the device, as well as the Android system.img from the default Halium install. This contains all the scripts to start the LXC Android container. The LXC config file in the repository is from an older version of LXC than current Debian unstable. It’s easy to upgrade in-place with lxc-update-config -c /var/lib/lxc/android/config.

Enable the services that mount the Android filesystems:

systemctl enable android-mount.service
systemctl enable system.mount

Then reboot and check the Android side started up with /system/bin/logcat. This should load all the firmware blobs and enable the WiFi interface.

Hybris tests and udev permission problems

Hybris comes with a bunch of useful test programs for checking the various Android wrappers like GLES, sound, camera, etc. are working. Unfortunately they all failed with some cryptic error messages:

nick@samsung:~$ test_glesv2 
library "libgui.so" wasn't loaded and RTLD_NOLOAD prevented it
ERROR: The DDK is not compatible with any of the Mali GPUs on the system.
The DDK was built for 0x880 r2p0 status range [0..15], but none of the GPUs matched:
test_glesv2: ../../tests/test_glesv2.c:113: main: Assertion `eglGetError() == EGL_SUCCESS' failed.
Aborted

A quick check in Android’s logcat reveals a lot of failures to open devices:

03-21 08:15:14.350     0  1574 D libEGL  : failed to load libgui: dlopen failed: 
03-21 08:15:14.356     0  1574 D libEGL  : loaded /vendor/lib/egl/libGLES_mali.so
03-21 08:15:14.368     0  1574 E ion     : open /dev/ion failed!
03-21 08:15:14.368     0  1574 E gralloc : /dev/graphics/fb0 Open fail
03-21 08:15:14.368     0  1574 E gralloc : Fail to init framebuffer
03-21 08:15:14.368     0  1574 E ion     : open /dev/ion failed!
03-21 08:15:14.368     0  1574 W libEGL  : eglInitialize(0xf786da60) failed (EGL_NOT_INITIALIZED)

This looks like a simple permission problem where the permissions on the device files are not set up the way Android expects. Simply following the Halium documentation and generating the udev rules file for the device then rebooting fixes this. Patch here.

Now we get a little bit further and test_hwcomposer crashes inside some EGL initialisation function. I don’t really understand what’s going on but it seems to be some mismatch between the gralloc version libhybris is using and the gralloc version used by the Samsung blobs. (Gralloc is Android’s graphics memory allocation layer.) I hacked around this by forcing hybris to skip the check for gralloc v1. I’m not very happy about this so I’ll perhaps revisit it later.

UPDATE: better fix here: libhybris/libhybris#446.

And now the graphics, lights, and vibrator tests all work! I couldn’t immediately get camera and sound working, but I didn’t try very hard.

Wayland, X11, GNOME

I’ve been using GNOME3 on my desktop quite happily for a while now and I’d like to run it on my phone too. The launcher is touch friendly-ish and Purism have done a lot of work porting the various GNOME apps to a mobile form-factor using libhandy.

Ideally I’d run the Wayland variant of Mutter because both Wayland and the Android graphics stack are based on GLES and this should get us hardware acceleration for the UI. Unfortunately this is a bit of a non-starter as the Mutter Wayland backend is tied to DRM, which Android kernels don’t support. So we’re stuck with X11 or using a different compositor entirely.

Thankfully there’s the xf86-video-hwcomposer project that provides an X driver using Android’s hwcomposer and libhybris. This provides hardware acceleration on the X server side but any client side drawing is unaccelerated and uses Mesa’s llvmpipe software fallback.

And finally…

There’s really nothing like the triumphant feeling of doing apt install emacs on your phone followed by the despair of discovering there’s no way to press those C- and M- meta keys.


Unlocking Encrypted Home Partition on Login

September 22nd, 2019

I recently did a new Debian install on my laptop after upgrading the NVMe and this time round I set up LUKS disk encryption for my /home partition. I want this to be as hassle-free as possible, which means having the partition automatically unlocked and mounted when I log in, rather than having to type a separate password on boot.

It’s not as straightforward as you might think, I guess because everyone’s setup and requirements are a little different. So I’ll write my notes here in case it’s useful to someone else. I’m doing this on Debian, but I cribbed a lot of it from the excellent Arch wiki.

When first setting up the encrypted partition make sure that the disk password is the same as your login password. This will be important later.

The file /etc/crypttab is read early in boot by systemd (see its crypttab man page). Systemd then calls cryptsetup on each entry in this file to unlock the partition. This is where the boot-time password prompt that we want to get rid of comes from. Simply add noauto to the options list at the end and systemd will skip it:

nvme0n1p3_crypt UUID=XXXX-XXXX none luks,discard,noauto

Also edit /etc/fstab to comment out or remove the entry for /home: pam_mount will be mounting it directly instead.

sudo apt install libpam-mount

This is a PAM plugin that can mount arbitrary filesystems whenever a user logs in and unmount them when they log out. We can also use it to unlock an encrypted partition using the user’s password before mounting. This is why the login password and the disk password must be the same. Open /etc/security/pam_mount.conf.xml and add these lines to it:

<volume user="nick" fstype="crypt" path="/dev/nvme0n1p3"                        
        mountpoint="nvme0n1p3_crypt" />                                         
 
<volume user="nick" fstype="auto" path="/dev/mapper/nvme0n1p3_crypt"            
        mountpoint="/home" options="defaults,relatime,discard" />                       
 
<cryptmount>cryptsetup open --allow-discards %(VOLUME) %(MNTPT)</cryptmount>    
<cryptumount>cryptsetup close %(MNTPT)</cryptumount>

We need to add two <volume> entries. The first with fstype="crypt" unlocks the physical LUKS partition (/dev/nvme0n1p3) and creates a new volume that we can mount as a normal filesystem (/dev/mapper/nvme0n1p3_crypt). Obviously change the user name and physical device path to match your system.

The <cryptmount> and <cryptumount> entries tell pam_mount how to open and close the encrypted partition when fstype="crypt". Note that I’ve added the --allow-discards option here, which enables the SSD TRIM command to reduce wear on the disk but has some security implications you might want to read up on.

Reboot and check everything works. If you have problems try adding:

<debug enable="1" />

to pam_mount.conf.xml and log in on a text console. This will print some diagnostic messages.


Linus Torvalds

July 13th, 2019

I finally saw Linus Torvalds live! I think, however, reading his online rants is considerably more interesting than watching a staged conversation. 🙃