Showing 12 posts
I’ve had a Raspberry Pi 3 in the garage running Raspbian, attached to Ethernet, for a long time. A few weeks ago, however, I wanted to bring the Pi into the house so that my kid, who was showing an interest in robotics, and I could play with it. That meant being able to place the device on the dining table, next to a laptop, which in turn meant connecting it to WiFi. Easy peasy, right? Well… while that should have been trivial, it did not work right away, and the solutions I found online back then were all nonsensical. I gave up in desperation because I did not have enough time to find the root cause, and all interest was lost. Until last weekend, when I gave this ordeal another try. Once again I found the same nonsensical solutions online, got equally frustrated that they even existed, and decided to find the real answer to my problem on my own. Yes, this is mostly a rant about the Internet being littered with misleading answers of the kind “I reinstalled glibc and my problem is gone!”. But this is also the tale of a troubleshooting session—and you know I like to blog about those.
August 16, 2023 · Tags: linux, troubleshooting
Continue reading (about 11 minutes)
While diagnosing a non-determinism issue in Bazel at work, I had to compare the dynamic libraries used by two builds of the same binary. To do so, I used ldd(1), and I had to refer to its manual page to understand details of the output I had never paid attention to before. What I saw may surprise you: ldd can end up running the binary given to it, which makes it unsafe to use against untrusted binaries. Read on for the history I could find around this issue and the alternatives you have.
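If you do need to inspect an untrusted binary, there is a safer route: read the ELF headers directly instead of resolving them through the dynamic loader. A minimal sketch (the binary name is illustrative and the output is abbreviated):

$ objdump -p ./untrusted-binary | grep NEEDED
  NEEDED               libc.so.6

Unlike ldd, objdump only reads the file; it never executes any of its code.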
July 1, 2023 · Tags: linux, security
Continue reading (about 6 minutes)
In the previous post, we saw why waiting for a process group to terminate is important (at least in the context of Bazel), and we also saw why this is a difficult thing to do in a portable manner. So today, let’s dive into how to do this properly on a Linux system. On Linux, we have two routes: using the child subreaper feature or using PID namespaces. We’ll focus on the former because that’s what we’ll use to fix the process wrapper (#10245), and because it is sufficient to fully address our problem.
November 14, 2019 · Tags: bazel, linux, unix
Continue reading (about 4 minutes)
I have an old Aiptek mini PenCam 1.3 MPixels, identified by USB vendor 1276 and product 20554. I want to use this webcam for videoconferencing on the machine I am setting up for this purpose. This machine carries a Fedora 9 x86_64 installation, as already mentioned in the previous post. Whenever I connect the camera to the machine, HAL detects the new device and then GNOME attempts to "mount" it using gphoto2. The result is that I get a new device on the desktop referring to the camera, which is pretty nice, but it does not work at all: accessing it raises an unexpected error, and thus the photos stored in the webcam cannot be seen. Anyway, I do not care about the photo capabilities of this camera, just about its ability to stream video. Hence, I installed the gspca and kmod-gspca packages from the livna repositories and, according to the gspca driver, my camera is supposedly fully supported. Unfortunately, I was not able to get the /dev/video device: it didn't exist, even with the kernel modules loaded. After some manual investigation on the console (so that gphoto2 couldn't get in the way), I found that the video device really does appear, but vanishes as soon as gphoto2 attempts to access the camera. I suspect it is not possible to use the photo and video capabilities of the camera at once with the current drivers. So, how to avoid this problem? I had to tell HAL to omit this device, so that GNOME did not get any notification of its existence and therefore the interface did not attempt to mount the camera using gphoto2. However, there is little documentation on how to do this, so I had to resort to reading the files in /usr/share/hal/fdi/ and guessing what to do. I ended up creating a 10-broken-cameras.fdi file in /etc/hal/fdi/preprobe/ with the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="usb.vendor_id" int="1276">
      <match key="usb.product_id" int="20554">
        <merge key="info.ignore" type="bool">true</merge>
      </match>
    </match>
  </device>
</deviceinfo>

What this snippet does is match the camera device using some of the properties attached to it and, once there is a match, append the info.ignore property to the device description to tell HAL not to use this device any more. To set up the matching of a device, you can see the full list of properties of all device descriptors using the hal-device command.
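To find the right keys to match on, I dumped the properties HAL knows about and searched for the camera. A rough sketch (the grep pattern is just an illustration; hal-device with no arguments prints every device descriptor):

$ hal-device | grep -i 'usb.vendor_id\|usb.product_id'

The usb.vendor_id and usb.product_id values shown there are the ones to plug into the match elements above.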
June 29, 2008 · Tags: fedora, linux
Continue reading (about 2 minutes)
I'm setting up a machine at home to act as a videoconferencing station so that my family can easily talk to me during the summer, while I'm in NYC. This machine is equipped with a 64-bit Athlon processor and an nVidia GeForce 6200 PCI-Express video card. I decided to install Fedora 9 on this computer because it is the distribution I'm currently using everywhere (well, everywhere except on the Mac ;-). Plus it just works (TM), or mostly. The 3D desktop is not something that is really needed for daily work, but I wanted to try it. Unfortunately, I could not get the desktop effects to work the first time I tried. I enabled the livna repositories, installed the nVidia binary drivers and configured the X server to use them. However, telling the system to enable the Desktop Effects failed, and running glxinfo crashed with a "locking assertion failure" message. Googling a bit, I found a page mentioning that one has to run the livna-config-display command to properly configure the X server. I don't think I had done this, so I ran it manually and later restarted X. No luck. Fortunately, that same page also contained a snippet of the xorg.conf configuration file that looked like this:

Section "Files"
    ModulePath "/usr/lib64/xorg/modules/extensions/nvidia"
    ModulePath "/usr/lib64/xorg/modules"
EndSection

Effectively, my configuration file was lacking the path to the nVidia extensions subdirectory. Adding that line fixed the problem: now the server loads the correct GLX plugin, instead of the "generic" one that lives in the modules directory. I guess livna-config-display should have set that up automatically for me, but it didn't... The desktop effects are now working :-) Now to figure out why compiz feels so slow... especially because I have the same problem at work with an Intel 965Q video card.
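An easy way to check that the server picked up the right GLX module after restarting X is to ask for direct rendering; a quick sketch:

$ glxinfo | grep 'direct rendering'
direct rendering: Yes

If this still says No, or glxinfo keeps crashing, the server is probably still loading the generic GLX module from the modules directory.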
June 28, 2008 · Tags: compiz, fedora, linux, nvidia
Continue reading (about 2 minutes)
Linux distributions for the x86_64 platform take different approaches to the installation of 32-bit and 64-bit libraries. On a 64-bit platform, 64-bit libraries are required to run all the standard applications, but 32-bit libraries need to be available to provide compatibility with 32-bit binaries. In this post, I consider 64-bit applications to be the native ones and the 32-bit ones to be foreign. The two major approaches I have seen are:

- lib32 and lib64 directories, leaving lib as just a symbolic link to the directory required by native applications. This is the approach followed by Debian. The advantage of this layout is that the lib directory is the correct one for native applications. However, foreign applications that have built-in paths to lib, if any exist, will fail to work.
- lib and lib64 directories. This is the approach followed by Fedora. In this layout, foreign applications with built-in paths to lib work just fine, but native applications have to be configured explicitly to load libraries and plugins from within lib64.

I have found two instances so far where the Fedora approach fails because native 64-bit applications hardcode the lib name in some places instead of using lib64. One was in the NetworkManager configuration files, which had an incorrect setup for the OpenVPN plugin, so it failed to work. This issue has already been fixed in Fedora 9. The other problem was in gnome-compiz-manager, where the application tries to load plugins from the lib directory but, being a 64-bit binary, fails due to a bitness mismatch. This has been reported but is not yet fixed upstream. I'm sure several other similar problems remain to be discovered. I personally think that the Debian approach is more appropriate, because it seems weird that all the standard system directories, such as bin or libexec, contain 64-bit binaries but just one of them, lib, is 32-bit specific. As a side note, NetBSD follows a slightly different approach: lib contains 64-bit libraries and lib32, if installed at all, contains the 32-bit ones.
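For the record, a bitness mismatch like the gnome-compiz-manager one is easy to spot with file(1); a sketch on a Fedora-style layout (library paths are illustrative, output abbreviated):

$ file /usr/lib/libz.so.1
/usr/lib/libz.so.1: ELF 32-bit LSB shared object, Intel 80386 ...
$ file /usr/lib64/libz.so.1
/usr/lib64/libz.so.1: ELF 64-bit LSB shared object, AMD x86-64 ...

A 64-bit binary that tries to load anything from the first directory will fail.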
June 12, 2008 · Tags: fedora, linux
Continue reading (about 2 minutes)
You can't imagine how happy I was today when I read the interview with KDE 4 developer Sebastian Kuegler. Question 6 asks him: "Are there any misconceptions about KDE 4 you see regularly and would like to address?" And around the middle of the answer, he says: "Frankly, I don’t like the whole concept of the ‘Linux Desktop’. Linux is really just a kernel, and in this case very much a buzzword. Having to mention Linux (which is just a technical implementation detail of a desktop system) suggests that something is wrong. Should it matter to the user if he runs Linux or BSD on his machine? Not at all. It only matters because things just don’t work so well (mostly caused by driver problems, often a matter of ignorance on some vendor’s side)." Thanks, Sebastian. I couldn't have said it better. What virtually all application developers are targeting —or should be targeting— is KDE or GNOME. These are the development platforms; i.e. they are what provide the libraries and services required for easy development and deployment. It doesn't make any sense to "write a graphical application for Linux", because Linux has no standard graphical interface (unless you mean the framebuffer!) and, again, Linux is just a kernel. I think I have already blogged about the problems of software redistribution under Linux... I will look for that post and, if it is not there, it is worth a future entry.
February 2, 2008 · Tags: kde, linux
Continue reading (about 2 minutes)
I've been tracking down a bug in Linux's SPU scheduler for the last three days, and I fixed it just a moment ago! I'm happy and needed to mention this ;-) More specifically, tracking it down was fairly easy using SystemTap and Paraver (getting the two to play well together was another source of headaches), but fixing it was the most complex part due to deadlocks popping up over and over again. Sorry, I can't disclose more information about it yet; I want to think a bit more about how to make this public and whether my fix is really OK or not. But rest assured I will!
December 7, 2007 · Tags: cell, linux
Continue reading (about 1 minute)
I started this week's work with the idea of instrumenting the spufs module found in Linux/Cell to be able to take traces of the execution of Cell applications. At first, I modified that module to emit events at certain key points, which were then recorded in a circular queue. Then, I implemented a file in /proc so that a user-space application could read from it and free space in the queue, to prevent the loss of events when it was full. That first implementation never worked well but, as I liked how it was evolving, I thought it could be a neat idea to make this "framework" more generic so that other parts of the kernel could use it. I rewrote everything with this idea in mind and then also modified the regular scheduler and the process-management system calls to raise events for my trace. And I got it working. But then I was talking to Brainstorm about his new "Sun Campus Ambassador" position at the University, and during the conversation he mentioned DTrace. So I asked... "Hmm, that tool could probably simplify all my work; is there something similar for Linux?" And yes; yes there is! Its name: SystemTap. As the web page says, SystemTap "provides an infrastructure to simplify the gathering of information about the running Linux system". You do this by writing small scripts that hook into specific points of the kernel — at the function level, at specific mark points, etc. — and which get executed when the script is processed and installed into the live kernel as a loadable kernel module. With this tool I can discard my several-hundred-line changes to gather traces and replace them with some very, very simple SystemTap scripts. No need to rebuild the kernel, no need to deal with custom changes to it, no need to rebuild every now and then... neat! Right now I'm having problems using the feature that allows instrumenting kernel markers, and I need it because otherwise some private functions cannot be instrumented due to compiler optimizations (I think). OK, I could expose those functions, but while I'm at it, I think it'd be a good idea to write a decent tapset for spufs that could later be published, and that prevents me from doing such hacks. But anyway, kudos to the SystemTap developers. I now understand why everybody is so excited about DTrace.
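To give a taste of how little code this requires, here is the kind of one-liner that replaces my hand-rolled tracing; a sketch only (the probed function is an example and must exist in your running kernel):

$ stap -e 'probe kernel.function("do_fork") { printf("%s forked\n", execname()) }'

SystemTap compiles this into a kernel module, loads it, and prints a line every time the probed function is hit, until you press Ctrl-C.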
December 2, 2007 · Tags: cell, linux, systemtap
Continue reading (about 2 minutes)
I've decided to improve my knowledge of the Cell platform, and the best way to get started seems to be learning 64-bit PowerPC assembly, given that the PPU uses this instruction set. Learning this will open the door to some more interesting tricks with the architecture's low-level details. There are some excellent articles at IBM developerWorks dealing with this subject and, thanks to the first one in an introductory series on PPC64, I've been able to write the typical hello world program :-) Without further ado, here is the code!

#
# The program's static data
#
        .data
msg:    .string "Hello, world!\n"
length = . - msg

#
# Special section needed by the linker due to the C calling
# conventions in this platform.
#
        .section ".opd", "aw"   # aw = allocatable/writable
        .global _start
_start: .quad ._start, .TOC.@tocbase, 0

#
# The program's code
#
        .text
._start:
        li      0, 4            # write(2)
        li      3, 1            # stdout file descriptor
        lis     4, msg@highest  # load 64-bit buffer address
        ori     4, 4, msg@higher
        rldicr  4, 4, 32, 31
        oris    4, 4, msg@h
        ori     4, 4, msg@l
        li      5, length       # buffer length
        sc
        li      0, 1            # _exit(2)
        li      3, 0            # return success
        sc

You can build it with the following commands:

$ as -a64 -o hello.o hello.s
$ ld -melf64ppc -o hello hello.o

I'm curious about as(1)'s -a option; its purpose is pretty obvious, but it is not documented anywhere in the manual page nor in the info files. Anyway, back to coding! I guess I'll post more about this subject if I find interesting and/or non-obvious things that are not already documented clearly elsewhere. But for beginner's stuff you already have the articles linked above.
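Assuming you build this on (or copy it to) a 64-bit PowerPC machine such as the PS3's PPU, running the binary should print the greeting and exit cleanly:

$ ./hello
Hello, world!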
November 25, 2007 · Tags: cell, linux, powerpc
Continue reading (about 2 minutes)
The deadline for my PFC (the project that will conclude my computer science degree) is approaching. I have to hand in the final report next week and present the project on July 6th. Its title is "Efficient resource management in heterogeneous multiprocessor systems", and its basic goal is to inspect the poor management of such machines in current operating systems and how this situation could be improved in the future. Our specific case study has been the Cell processor, the PlayStation 3 and Linux, as these form a clear example of a heterogeneous multiprocessor system that may become widespread due to its relatively cheap price and the attractive features (gaming, multimedia playback, etc.) it provides to a "home user". Most of the project has been an analysis of the current state of the art and a proposal of ideas at an abstract level. Due to timing constraints and the complexity of the subject (should I also mention bad planning?), I have been unable to implement most of them, even though I wanted to at the very beginning. The code I've written is so crappy that I won't be sharing it anytime soon, but if there is interest I might clean it up (I mean, rewrite it from the ground up) and publish it to a wider audience. Anyway, to the real point of this post: I've published an almost definitive copy of the final report so that you can take a look at it if you want to. I will certainly welcome any comments you have, be it about bugs, typos, wrong explanations or anything else! Feel free to post them as comments here or to send me a mail, but do so before next Monday, as that's the deadline for printing. Many thanks in advance if you take the time to do a quick review! (And yes... this means I'll be completely free from now on to work on my SoC project, which has been delayed too much already...) Edit (Oct 17th): Moved the report on the server; fixed the link here.
June 19, 2007 · Tags: cell, linux, pfc, ps3
Continue reading (about 2 minutes)
The mainstream Linux sources have some support for the PlayStation 3, but it is marked as incomplete. Trying to boot such a kernel results in a stalled machine, as the kernel configuration option itself says:

CONFIG_PPC_PS3: This option enables support for the Sony PS3 game console and other platforms using the PS3 hypervisor. Support for this platform is not yet complete, so enabling this will not result in a bootable kernel on a PS3 system.

To make things easier, I could simply have used the Linux sources provided by YellowDog Linux 5 (YDL5), which correspond to a modified 2.6.16 kernel. However, as I have to do some kernel development on this platform, I objected to using such old sources: when developing for an open source project, it is much better to use the development branch of the code — if available — because custom changes will remain synchronized with mainstream changes. This means that, if those changes are accepted by the maintainers, it will be a lot easier to later merge them with the upstream code. So, after a bit of fiddling, I found the public kernel branch used to develop for the PS3. It is named ps3-linux, is maintained by Geoff Levand, and can be found in the kernel's git repository under the project linux/kernel/git/geoff/ps3-linux.git. Fetching the code was "interesting". I was (and still am) a novice with git, but fortunately my prior experience with CVS, Subversion and especially Monotone helped me understand what was going on. Let's now see how to fetch the code, cross-build a custom kernel and install it on the PS3 under YDL5. To check out the latest code, which at this moment corresponds to patched Linux 2.6.21-rc3 sources, do this:

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/geoff/ps3-linux.git ps3-linux

This will clone the ps3-linux project from the main repository and leave it in a directory with the same name. You can keep it up to date by running git pull within the directory, but I'm not going to talk about git any more today. As I cross-compile the PS3 kernel from an FC6 Intel-based machine with the Cell SDK 2.0, I need to tell it which is the target platform and which is the cross-compiler before being able to build or even configure a kernel. I manually add these lines to the top-level Makefile, but setting them in the environment should work too:

ARCH=powerpc
CROSS_COMPILE=ppu-

Now you can create a sample configuration file by executing the following command inside the tree:

$ make ps3_defconfig

Then proceed to modify the default configuration to your liking. To ease development, I want my kernels to be as small and easy to install as possible; this reduces the test-build-install-reboot cycle to the minimum (well, not exactly; see below). Therefore I disable all the stuff I do not need, which includes module support. Why? Keeping all the code in a single image will make the later installation a lot easier. Once the kernel is configured, it is time to build it. But before doing so you need to install a helper utility used by the PS3 build code: the Device Tree Compiler (or dtc). Fetch its sources from the git repository that appears on that page, run make to build it, and manually install the dtc binary into /usr/local/bin. With the above done, just run make and wait until your kernel is built. Then copy the resulting vmlinux file to your PS3; I put mine in /boot/vmlinux-jmerino to keep its name version-agnostic and specific to my user account.
Note that I do not have to mess with modules, as I disabled them; otherwise I'd have to copy them all to the machine — or alternatively set up an NFS root for simplicity, as described in Geoff Levand's HOWTO. To boot the kernel, you should know that the PS3 uses the kboot boot loader, a minimal Linux system that chainloads another Linux system by means of the kexec functionality. It is very powerful, but the documentation is scarce. Your best bet is to mimic the entries already present in the file. With this in mind, I added the following line to /etc/kboot.conf:

jmerino='/dev/sda1:/vmlinux-jmerino root=/dev/sda2 init=/sbin/init 4'

I'd much rather fetch the kernel from a TFTP server, but I have not got this to work yet. Anyway, note that the above line does not specify an initrd image, although all the other entries in the file do. I did this on purpose: the less magic in the boot, the better. However, bypassing the initrd results in a failed boot with:

Warning: Unable to open an initial console.

This is because the /dev directory on the root partition is unpopulated, as YDL5 uses udev; hence the need for an initrd image. Getting a workaround for this is trivial, though: just create the minimum necessary devices on the disk — "below udev" — as shown below.

# mount --bind / /mnt
# MAKEDEV -d /mnt/dev console zero null
# umount /mnt

And that's it! Your new, fresh and custom kernel is ready to be executed. Reboot the PS3, wait for the kboot prompt and type your configuration name (jmerino in my case). If all goes fine, the kernel should boot and then start userland initialization. Thanks go to the guys on the cbe-oss-dev mailing list for helping me build the kernel and solve the missing console problem. Update (23:01): Added a link to a NFS-root tutorial.
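Once the machine is back up, a quick sanity check confirms that you are running your own build rather than YDL5's stock 2.6.16 kernel; a sketch (the exact version string depends on the sources you checked out, 2.6.21-rc3-based in my case):

$ uname -r
2.6.21-rc3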
March 16, 2007 · Tags: linux, pfc, ps3, yellowdog
Continue reading (about 5 minutes)