Showing 5 posts
If you grew up in the PC scene during the 1980s or early 1990s, you know how painful it was to get hardware to work. And if you did not witness that (lucky you), here is how it went: every piece of hardware in your PC—say a sound card or a network card—had physical switches or jumpers on it. These switches configured the card’s I/O port addresses, interrupt lines (IRQs), and DMA channels, and you had to be careful to select values that did not conflict with those used by other cards.
But that wasn’t all. Once you had configured the physical switches, you had to tell the operating system and/or software which specific cards you had and how you had configured them. Remember SET BLASTER=A220 I5 D1 H5? This DOS environment variable told programs which specific Sound Blaster you had installed and which I/O settings you had selected via its jumpers.
Not really fun. It was common to have hardware conflicts that yielded random lock-ups, and thus ISA “Plug and Play”, or PnP for short, was born in the early 1990s—a protocol for the legacy ISA bus to enumerate its devices and to configure their settings via software. Fast-forward to today’s scene where we just attach devices to external USB connectors and things “magically work”.
But how? How does the kernel know which physical devices exist and how does it know which of the many device drivers it contains can handle each device? Enter the world of hardware discovery.
February 28, 2025
·
Tags:
blogsystem5, hardware, unix
Continue reading (about 16 minutes)
My interest in storage is longstanding—I loved playing with different file systems in my early Unix days and then I worked on Google’s and Microsoft’s distributed storage solutions—and, about four years ago, I started running a home-grown NAS leveraging FreeBSD and its excellent ZFS support. I first hosted the server on a PowerMac G5 and then upgraded it to an overkill 72-core ThinkStation that I snapped up second-hand for a great price.
But as stable and low-maintenance as FreeBSD is, running day-to-day services myself is not my idea of “fun”. This drove me to replace this machine’s routing functionality with a dedicated pfSense box a year ago and, for similar reasons, I have been curious about dedicated NAS solutions.
I was pretty close to buying a second-hand NAS from the work classifieds channel when a Synology marketing person (hi Kyle!) contacted me to offer a partnership: they’d ship me one of their devices for free in exchange for me publishing a few articles about it. Given my interest in test-driving one of these appliances without committing to buying one (they ain’t cheap and I wasn’t convinced I wanted to get rid of my FreeBSD-based solution), I was game.
And you guessed right: this article is one of those I promised to write but, before you stop reading, the answer is no. This post is not sponsored by Synology and has not been reviewed or approved by them. The content here, including any opinions, is purely my own. And what I want to do here is compare how the Synology appliance stacks up against my home-built FreeBSD server.
December 13, 2024
·
Tags:
blogsystem5, hardware
Continue reading (about 17 minutes)
If you read my previous article on DOS memory models, you may have dismissed everything I wrote as “legacy cruft from the 1990s that nobody cares about any longer”. After all, computers have evolved from sporting 8-bit processors to 64-bit processors and, along the way, the amount of memory that these computers can leverage has grown by orders of magnitude: the 8086, a 16-bit machine with a 20-bit address space, could only use 1MB of memory, while today’s 64-bit machines can theoretically access 16EB.
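Those two figures follow directly from the address widths; here is a quick back-of-the-envelope check, assuming you have bc(1) around (2^64 overflows the shell’s own 64-bit arithmetic):

    echo "2^20" | bc   # 1048576 bytes = 1 MiB, the 8086's real-mode ceiling
    echo "2^64" | bc   # 18446744073709551616 bytes = 16 EiB, the 64-bit theoretical limit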
All of this growth has been in service of ever-growing programs. But… even if programs are now more sophisticated than they were before, do they all really require access to a 64-bit address space? Has the growth from 8 to 64 bits been a net positive in performance terms?
Let’s try to answer those questions; the answers turn out to be quite surprising. But first, some theory.
October 7, 2024
·
Tags:
blogsystem5, hardware, unix
Continue reading (about 18 minutes)
Wow, it has already been three years since a friend and I found a couple of old Macintoshes in a trash container. Each of us picked one, and maybe a year ago or so I gave mine to him as I had no space at home to keep it. Given that he did not use them and that I enjoy playing with old hardware, I exchanged those two machines for an old Pentium 3 I had lying around :-) The plan is to install NetBSD-current on at least one of them and some other system (or NetBSD version) on the other one to let me ensure ATF is really portable to bizarre hardware (running sane systems, though).
The machines are these:
July 16, 2007
·
Tags:
atf, hardware, mac
Continue reading (about 3 minutes)
Old hard disks exposed a lot of their internals to the operating system: in order to request a data block from the drive, the system had to specify the exact cylinder, head and sector (CHS) where it was located (as happens with floppy disks). This structure became unsustainable as drives got larger (due to some limits in the BIOS calls) and more intelligent.
Current hard disks are little (and complex) special-purpose machines that work in LBA mode (not CHS). Oversimplifying, when presented with a sector number and an operation, they read or write the corresponding block wherever it physically is — i.e. the operating system needn't care any more about the physical location of that sector on the disk. (They do provide CHS values to the BIOS, but they are fake and do not cover the whole disk size.) This is very interesting because the drive can automatically remap a failing sector to a different position if needed, thus correcting some serious errors in a transparent fashion (more on this below).
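As an aside, the classic CHS-to-LBA translation is just simple arithmetic; a minimal sketch in shell, with made-up geometry values purely for illustration:

    # Hypothetical geometry; real drives report their own (often fake) values.
    HEADS=16      # heads per cylinder
    SECTORS=63    # sectors per track (sector numbers start at 1)
    C=100; H=5; S=20
    echo $(( (C * HEADS + H) * SECTORS + (S - 1) ))   # prints 101134, the LBA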
Furthermore, "new" disks also have a very interesting diagnostic feature known as S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology). This interface keeps track of internal disk status information, which can be queried by the user, and also provides a way to ask the drive to run some self-tests.
If you are wondering how I discovered this, it is because I recently had two hard disks fail (one in my desktop PC and the one in the iBook) reporting physical read errors. I thought I had to replace them but using smartmontools and dd(1) I was able to resolve the problems. Just try a smartctl -a /dev/disk0 on your system and be impressed by the amount of detailed information it prints! (This should be harmless but I take no responsibility if it fails for you in some way.)
I started by running an exhaustive surface test on the drive with smartctl -t long /dev/disk0. It is interesting to note that the test is performed by the drive itself, without interaction with the operating system; if you try it you will see that not even the hard disk LED blinks, which means that the test does not "emit" any data through the ATA bus. Anyway. The test ended prematurely due to the read errors and reported the first failing sector; this can be seen by running smartctl -l selftest /dev/disk0.
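If you want to try the same sequence, it looks roughly like this (the /dev/disk0 device name is what my machine uses; yours will likely differ):

    smartctl -a /dev/disk0            # dump the SMART attributes and overall health
    smartctl -t long /dev/disk0       # start the drive's own extended surface scan
    # ...wait for the estimated duration printed by the previous command...
    smartctl -l selftest /dev/disk0   # review the results; a failing LBA shows up here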
With the failing sector at hand (which was also reported in dmesg when it was first encountered by the operating system), I wrote some data over it with dd(1) hoping that the drive could remap it to a new place. This should have worked according to the instructions on smartmontools' web site, but it didn't. The sector kept failing and the disk kept reporting that it still had some sectors pending to be remapped (the Current_Pending_Sector attribute). (I now think this was because I didn't use a big-enough block size to do the write, so at some point dd(1) tried to read some data and failed.)
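A sketch of such a single-sector rewrite, with a hypothetical sector number (and beware: it destroys whatever data lives in that sector):

    # Overwrite a single 512-byte sector so that the drive gets a chance to remap it.
    # BAD_LBA is the sector number reported by dmesg or by smartctl -l selftest.
    BAD_LBA=1234567
    dd if=/dev/zero of=/dev/disk0 bs=512 seek="$BAD_LBA" count=1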
After a lot of testing, I decided to wipe the whole disk (also using dd(1)), hoping that at some point the writes would force the disk to remap a sector. And it worked! After a full pass, S.M.A.R.T. reported that there were no more sectors pending to be remapped and that several had been reallocated. Let's now hope that no more bad sectors appear... but the desktop disk has been working fine since the "fixes" for over a month and has not developed any more problems.
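A full wipe along these lines is nothing fancier than the following (again, this erases the entire disk):

    # Zero every sector; a sector that was pending remap gets rewritten or reallocated.
    # GNU dd spells the block size bs=1M instead of bs=1m.
    dd if=/dev/zero of=/dev/disk0 bs=1m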
All in all, a very handy tool for checking your computer's health. It is recommended that you read the full smartctl(1) manual page before trying it; it contains important information, especially if you are new to S.M.A.R.T. as I was.
December 4, 2006
·
Tags:
hardware, smart
Continue reading (about 3 minutes)