From the desk of James Fatdog64, FatdogArm, Linux, ARM Linux, and others http://lightofdawn.org/blog/ http://lightofdawn.org/blog/?viewDetailed=00218 Fatdog64 811 is released Fatdog64'Linux Fatdog64 811 was released on 10 September 2020. <br /> <br />Release notes <a href=http://distro.ibiblio.org/fatdog/web/811.html target=_blank>here</a>, announcement <a href=https://forum.puppylinux.com/viewtopic.php?f=60&t=746 target=_blank>here</a>. <br /> <br />Get it from the usual mirrors: <a href=http://distro.ibiblio.org/fatdog/iso/ target=_blank>ibiblio.org</a>, <a href=https://mirror.aarnet.edu.au/pub/fatdog/iso/ target=_blank>aarnet</a>, <a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso/ target=_blank>uoc.gr</a>, or <a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso/ target=_blank>nluug.nl</a>. <br /> http://lightofdawn.org/blog/?viewDetailed=00217 RetroPie Linux'General I've always enjoyed retro gaming. Fatdog has had dosbox (an emulator for DOS games) for as long as I can remember, and scummvm (for Sierra games) came a little later. I've recently added a few more emulators: mednafen (a multi-system emulator for GBA, SNES and others), desmume (NDS), duckstation and pcsxr (PS1), as well as pcsx2 (PS2). I threw in ZX-Tune as well, to play the music from those old games. <br /> <br />But very recently I was introduced to <a href=https://retropie.org.uk target=_blank>RetroPie</a>, a distro for the Raspberry Pi (raspi) dedicated to turning your raspi into a retro gaming machine. Since I have a dusty Pi3 lying around doing nothing (which I was supposed to use for testing Mick's Raspbian Buster but never got my lazy bum off the couch to actually do it - sorry Mick!), I reckoned, why not give it a try. If it didn't work, all I'd have wasted was a couple of minutes downloading the ~800MB image and a couple more "dd"-ing it. <br /> <br />As it turned out, it worked the first time around.
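<br /> <br />For reference, the "dd"-ing step amounts to a one-liner. This is a sketch: the image filename and /dev/sdX device are assumptions - verify the real device with lsblk first, since writing to the wrong device destroys data. The runnable part below exercises the same dd invocation safely against a scratch file instead of a real card:

```shell
# The real thing (ASSUMPTIONS: image name, and that the card is /dev/sdX):
#   sudo dd if=retropie.img of=/dev/sdX bs=4M conv=fsync status=progress

# Safe demonstration of the same invocation against a scratch file:
printf 'pretend this is the ~800MB image' > /tmp/retropie-demo.img
dd if=/tmp/retropie-demo.img of=/tmp/sdcard-demo.bin bs=4M conv=fsync 2>/dev/null
cmp -s /tmp/retropie-demo.img /tmp/sdcard-demo.bin && echo "copy verified"
```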
Once the SD card was prepared, I put it inside the raspi, hooked the raspi to my TV, and turned it on. I was instantly asked to configure my controller (I didn't have one, but no problem, retropie accepts a keyboard too). I went through the optional wifi setup (I didn't have to, but I wanted to try its samba feature). Then all I needed to do was install the games (which can be done via USB, SFTP, or Samba). No stupid questions, no hassle, no config file. It's all ready to use. Most of the popular emulators were already included, and those that weren't are available for installation, just a few clicks away. <br /> <br />With retropie on it, the raspi was magically turned into a retro gaming machine. In the beginning I didn't believe that the raspi had enough muscle to pull off decent emulation, but I was pleasantly proven wrong. Most emulators worked well. Some stuttered every once in a while, but it was nothing to fret about. <br /> <br />The display quality was good, too. Retro games are notorious for displaying low-res, pixelated images on today's high-res TVs, but the emulators included in retropie have a few tricks up their sleeve to make the images sharper. I'm not surprised about that (after all, I compiled some of those for Fatdog64 too), but what surprised me was that the little raspi could pull off the enhanced graphics as well. Again, not all emulators and not all games, but as I said, nothing to fret about. <br /> <br />Now, I have FatdogArm, Fatdog's variant that runs on ARM machines, the raspi included. If I really wanted to, I could, in theory, build and package all of this myself too, producing a custom FatdogArm build that does what retropie does. All the software used in retropie is <a href=https://github.com/RetroPie target=_blank>FOSS software</a>.
The standard release of FatdogArm was built to run as a "desktop-replacement" OS, but at its very core it was designed to be modded to produce builds that meet specific needs. Making a retro gaming machine would be one of those. <br /> <br />But of course, why bother? The folks who make retropie do a very good job, and the result is very polished retro gaming software. Unless you try very, very hard, you won't even know that beneath the pretty face it runs Raspbian Buster Linux. Rather than spending thankless hours building software on FatdogArm so it can become a retropie wannabe, I may as well use retropie as-is and start gaming with my kids <img src=images/smilies/teeth.gif /> <br /> <br />Anyway. In case you want to try retropie and don't have a raspi, there is no need to fret, and no need to fork out extra dollars either. While originally designed for the raspi (all variants, from Pi 0/1/2/3/4), today's retropie supports other platforms: Odroid C1/C2, Odroid XU3/XU4, as well as the standard PC! Yes, the standard PC. You can use your old laptop, old desktop, a NUC if you have one - basically any PC. The details are all on their website; if you're interested, go ahead and check it out. <br /> <br />Meanwhile, I've got a few games I need to catch up on. <br /> <br />Disclaimer: I am not affiliated with retropie or any of its developers. In fact, I wasn't aware that it existed until last week. I'm writing this to share with fellow retro gamers who aren't aware of retropie. <br /> http://lightofdawn.org/blog/?viewDetailed=00216 spcplayer patch for libao Linux I've been into retro gaming lately, and one of the interesting aspects of retro gaming is the music; in particular, the music in its original format. <br /> <br />One of those formats is <a href=http://www.vgmpf.com/Wiki/index.php?title=SPC target=_blank>SPC</a>.
There are plenty of players for Windows (mostly plugins for popular music players), but standalone players for Linux are few and far between. <br /> <br />There is a player called <a href=https://www.bannister.org/software/ao.htm target=_blank>AudioOverload</a>, but it is closed-source and outputs only to /dev/dsp. It does have a nice interface and supports many different video game formats, not only SPC. <br /> <br />The older version of gstreamer (the 0.10 branch) has a decoder in its 'bad plugins' collection called "GstNsfDec" which apparently can be used to decode SPC files (I haven't tested it) - but do I really want to install the entire train of old gstreamer libs just for that? <br /> <br />Then I found <a href=https://github.com/raphnet/vspcplay target=_blank>vspcplay</a>, an SDL-based player which is really nice and full-featured; unfortunately its emulation isn't accurate - it sounds different from what I heard in the game - and it eats 100% of one of my CPU cores while running. <br /> <br />Finally I found this one: <a href=https://github.com/sbmelvin/SPC-PLAYER target=_blank>SPC-PLAYER</a>, a CLI player for SPC, but unfortunately, like AudioOverload, it also outputs to /dev/dsp (and other OSS devices). <br /> <br />The problem with outputting to /dev/dsp is that whatever program is outputting the sound takes exclusive use of the sound card. I cannot play anything if my web browser is open with one of its tabs pointing to youtube (even if nothing is being played), because the browser keeps the sound device open, and this prevents other programs from accessing /dev/dsp. <br /> <br />The only way to make this work nicely is to use ALSA, which comes with a built-in mixer (dmix) allowing multiple programs to use the sound card at the same time.
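<br /> <br />(For reference: on most systems ALSA's "default" device already routes through dmix. On the odd setup where it doesn't, a minimal ~/.asoundrc along these lines enables it - this is a sketch, and the card number is an assumption; check yours with aplay -l:)

```
# ~/.asoundrc -- route the default PCM through the dmix software mixer
pcm.!default {
    type plug
    slave.pcm "dmix"   # dmix lets several programs play at the same time
}
ctl.!default {
    type hw
    card 0             # assumption: your sound card is card 0
}
```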
<br /> <br />So I've written a simple patch for SPC-PLAYER to use <a href=https://xiph.org/ao/ target=_blank>libao</a>, a very nice sound output library that lets us output to a variety of output devices (ALSA, OSS, Pulse, etc.) with exactly the same, simple API. Nice. <br /> <br />Well, if you're interested, you can find the patch on my <a href=/wiki/wiki.cgi/Patches target=_blank>patches</a> page. <br /> <br />Note that SPC-PLAYER compiles in 32-bit only, due to its extensive usage of x86 assembly in its APU emulation. <br /> <br /> <br />__________________________ <br /> <br /> <br />EDIT: and just after I spent the effort to create the patch, I found <a href=https://zxtune.bitbucket.io/ target=_blank>ZXTune</a>, a full-fledged multi-format chiptune player which includes support for SPC and many other formats! You couldn't guess it from the name alone (ZXTune - it plays only ZX Spectrum sounds? Come on ...). It's open source, and binaries are provided, too! It works out of the box and supports the formats I'd love to play. The program comes in GUI (Qt) and CLI flavours, with only minimal dependencies needed (the rest of the libs are linked statically into the binary). <br /> <br />So go ahead and use that instead. <br /> <br /> <br />------------------------ <br /> <br /> <br />EDIT: After finding ZX-Tune, JakeSFR informed me that the <a href=https://deadbeef.sourceforge.io/ target=_blank>deadbeef music player</a> can also play SPC files. <br /> <br />Interesting. I looked into that, and found that deadbeef uses a library called "Game Music Emu", also known as libgme, to do that. And I found out that ZXTune uses the same library too! <br /> <br />The library also has a simple "sample player" which uses SDL for output and hence is not affected by the "OSS problem". <br /> <br />You can get libgme from <a href=https://bitbucket.org/mpyne/game-music-emu/wiki/Home target=_blank>here</a>.
<br /> <br />That probably explains why nobody bothered to patch SPC-PLAYER, but that's okay. At least now I know how to use libao for my future projects <img src=images/smilies/teeth.gif /> http://lightofdawn.org/blog/?viewDetailed=00215 New forum for Fatdog Fatdog64'Linux'PuppyLinux Fatdog has, since its inception, always been piggybacking on the Puppy Linux forum. Firstly because of its obvious roots; also because we simply didn't have enough manpower to run and police an independent forum ourselves. <br /> <br />As the previous blog entry indicates, however, John de Murga, the owner of the Puppy Linux forum, sadly passed away, and the original Puppy Linux forum went down (for a time). <br /> <br />Behind the scenes, people worked frantically to bring it back up, but there were also efforts to set up replacement forums, including one by rockedge. <br /> <br />By now, both efforts have been fruitful: the old forum has been resurrected, and the rockedge forum has flourished, with old and new members (re-)joining it. <br /> <br />The stewards of Puppy Linux have decided that the old forum will continue in read-only mode as an archive, while the rockedge forum takes over as the new Puppy Linux forum. <br /> <br />Rockedge has graciously offered a section of the forum specifically for Fatdog, for which we are very grateful. <br /> <br />After some internal discussion, we have decided to take the offer and continue the tradition of piggybacking on the Puppy Linux forum. All of the Fatdog team members have now re-joined there as well.
<br /> <br /><b>For your reference:</b> <br />--------- <br /> <br />The new forum address: <a href=https://forum.puppylinux.com target=_blank>https://forum.puppylinux.com</a> (Fatdog posts will be in the Fatdog section) <br /> <br />The old (archived) forum (in case you need to review old postings for older versions of Fatdog): <a href=http://oldforum.puppylinux.com target=_blank>http://oldforum.puppylinux.com</a> (Fatdog posts are in the Puppy Projects section) <br /> <br />We'll see you in the new forum! <br /> http://lightofdawn.org/blog/?viewDetailed=00214 Goodbye to a friend - John de Murga Fatdog64'Linux'PuppyLinux'General John de Murga (aka John Murga) was the owner of the official Puppy Linux forum, formerly located at http://murga-linux.com/puppy/index.php <br /> <br />If you use Fatdog64, you will know that we hosted our support threads on that same forum. <br /> <br />But my involvement with the Puppy Linux forum dates back well before then. I started browsing the forum when I was still a Windows user back in 2006, and joined in 2007 when I was converted into a Puppy Linux user - and I had been a regular on the forum ever since. Until the forum went down in early July 2020. <br /> <br />--- <br /> <br />But in all my years on John's forum, I never really knew John Murga. Not personally. Never met him. Never spoke with him. I didn't even know what he looked like - well, not until a few days before I wrote this. Not through any fault of mine, though; I only knew the bits and pieces that he himself chose to share, for he himself claimed to be 'a man of mystery'. <br /> <br />He and his work, however, left an indelible mark on my life. It was such simple work. Setting up an online forum. What was so difficult about it? Get a web-hosting site. Click a button, and you've got a forum up and running. Easy. Everyone can do it, right? <br /> <br />No. Not everyone can do what John did.
<br /> <br />The forum he set up was the forum where I spent a lot of my past life. Where I got help when I started and when I stumbled. Where I, eventually, helped others as I became proficient enough. Where I, unexpectedly, met and became friends with people in the real world. <br /> <br />All of this because he had the tenacity to keep the forum going for 15+ years; and let the forum govern itself instead of imposing the strict policing found in many others. He even had the magnanimity to allow people to post links to competing projects and forums. This was one of the reasons so many people stayed on, and the forum grew to become a melting pot of folks from all walks of life, sharing the same interest - Puppy Linux, and Linux in general. <br /> <br />So it was with great shock and sadness that I learned recently of his passing in May 2020. I was even more stunned to learn that he had left a young family - a daughter, and twins who were born shortly after his passing. I have young children myself; I cannot even begin to contemplate how it must have felt, to be left so soon, and so suddenly. <br /> <br />Nevertheless, I would like to offer a prayer of hope for John's family: that John left a legacy that not many people could ever manage - changing the lives of thousands, if not tens of thousands. John left behind many friends that he himself probably didn't know. I was one of them, one among the thousands. For us, he is a hero. In every sense of the word. <br /> <br />Goodbye, John, and all blessings on your next journey. <br /> <br />You will be missed, but not forgotten.
<br /> <br />--- <br /> <br /><a href=http://blog.puppylinux.com/vale-john-de-murga target=_blank>Eulogy from 01micko, the current steward of Puppy Linux</a> <br /> <br /><a href=https://bkhome.org/news/202007/murga-linux-puppy-forum-is-down.html target=_blank>Eulogy from Barry Kauler, the creator of Puppy Linux</a> <br /> <br /> <br /> http://lightofdawn.org/blog/?viewDetailed=00213 Memories General “And this world will stay with you, forever. You will keep it in your heart. We are ephemeral creatures; we always live in the present. We jump from one moment to another; memories are all that we carry within us. Even if you won’t ever visit this place again, it will still be alive in your dreams, every time you recall them.” <br />--- <br />The Book of the Ten Children, Reunion Part II:Epilogue http://lightofdawn.org/blog/?viewDetailed=00212 Fatdog64 updates - behind the scenes Fatdog64'Linux Fatdog64 was last released on 23 Jan. Although we have been quiet, that doesn't mean things have stopped. Work continues in the background: bug fixing, adding features, etc. <br /> <br />Here are a few major things that have been going on, in no particular order. <br /> <br />1. Puppy Linux forum user ICPUG has reported possible issues with a Fatdog savefile residing on an NTFS partition, especially a partition shared with Windows. This is due to the way Fatdog uses ntfs-3g: it runs with full permission control enabled. This makes NTFS behave like a POSIX filesystem, so we can use it for a save directory (not only a save file), but on the other hand it makes Windows complain about denied access each time the partition is accessed from Windows. <br /> <br />Based on this feedback, we have added the "ntfsnoperm" boot parameter to disable that permission control. When this parameter is used, ntfs-3g runs without permission control: it behaves just like a FAT filesystem (all files and directories are owned by a given uid/gid specified at mount time).
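<br /> <br />In ntfs-3g terms, the two modes map roughly onto mount options like these (a sketch only - the device, mount point, and uid/gid are assumptions; Fatdog's own init scripts are the authority on what is actually run):

```
# Full permission control (Fatdog's default): NTFS acts POSIX-like
ntfs-3g -o permissions /dev/sda2 /mnt/windows

# "ntfsnoperm" mode: no permission control, FAT-like ownership
ntfs-3g -o uid=1000,gid=1000 /dev/sda2 /mnt/windows
```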
It would not be possible to use a save directory on NTFS, but at least the permissions wouldn't be touched and Windows will stop complaining. <br /> <br />2. The same ICPUG also found an old bug related to the above: the user "spot" would be unable to access a Downloads folder that had been relocated to a save partition if the save partition was NTFS. This has been fixed. <br /> <br />3. Fatdog has long supported btrfs. The kernel is built to support btrfs, we ship the complete btrfs-progs, and nominally we support having a save directory on btrfs. But this did not actually work, because "aufs" - the unifying filesystem layer that we use to make the magic happen - does not support btrfs the way it supports other filesystems. For the techies: the problem is that aufs cannot have its xino file inside btrfs. It has to be somewhere else. <br /> <br />Thanks to our team member SFR (his Puppy Forum name), this was brought to our attention, and he shared a fix too. The fix was tested, worked on, and finally merged: now a save directory works seamlessly on a btrfs-formatted partition. <br /> <br />4. The original motivation for doing (3) above was actually to attempt to use a compressed filesystem. This has been a particular interest of mine for years. The last time I explored this (a couple of years ago), btrfs wasn't mature enough and there was no other native writable filesystem with compression support. Sure, there were native filesystems with compression support - but they were all read-only (squashfs being the most popular one). Sure, a couple of FUSE filesystems support compression too (in fact, it's their number one feature), but they were not actively maintained, and, being done in userspace, they were slow. I never got enough motivation to integrate them properly with Fatdog. <br /> <br />Btrfs finally changed this equation. By now it should be considered mature enough (although probably not as hyped as before), and it supports compression as well.
In fact, it supports three different compression algorithms: zlib, lzo and zstd. <br /> <br />So as we fixed btrfs save directory support, we grafted compression support on too. If the boot parameter "btrfscompress" is specified, compression support for btrfs will be enabled, using the algorithm specified in that parameter. This has been tested to work wonderfully; using zstd, the compression ratio is about 30% on average. <br /> <br />5. We have an update on the in-house "screencaster" application (a program that records a video of the display). It can now take repeated screen captures and produces better-quality video, among other things. <br /> <br />6. Step, another member of the Fatdog team, has also revamped the "Samba Shares" application. It has been re-factored and heavily tested to work across different Samba servers (different Windows versions, etc.). <br /> <br />7. We have also fixed a long-standing bug when running Fatdog with RAM layer operation, which could cause inconsistencies if the system was heavily used (a lot of filesystem access) when the "merge down" process happened. SFR found this irritating enough to devise a better solution, and it got incorporated into Fatdog. <br /> <br />An additional feature was also added: the ability to remove whiteouts if they don't hide any files on the lower SFS layers. It is not activated by default due to a possible conflict with multisession operation (we haven't tested the interaction yet). <br /> <br />8. We also include an "updater script" which re-enables VLC's ability to play youtube videos directly, without having to update VLC itself. The script, found by SFR, is called update-vlc-playlist-luac.sh. <br /> <br />And of course, there are many other smaller bugfixes and feature additions, as well as new and updated packages in the repo. <br /> <br />So when will we have a new release? Well, it will be released when it is released.
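<br /> <br />As a footnote to item 4: outside of Fatdog's boot parameter, btrfs compression is exposed by the stock tools as well; roughly like this (a sketch - the device and paths are assumptions):

```
# Mount with transparent zstd compression (applies to newly written data)
mount -o compress=zstd /dev/sda3 /mnt/data

# Or enable it per-directory with btrfs-progs
btrfs property set /mnt/data/save compression zstd

# Recompress existing files in place
btrfs filesystem defragment -r -czstd /mnt/data/save
```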
<br /> http://lightofdawn.org/blog/?viewDetailed=00211 The road to hell is paved with good intentions General Good intentions, good deeds, no matter how good they are, when carried to their logical conclusion, always lead to destruction. <br /> <br />Because when we go north to reach the most extreme northern point, without knowing what the north pole actually is, we will never reach it. <br /> <br />We need directions. We need sign posts. And fortunately there are sign posts and directions to the north pole, if that's where we want to go. <br /> <br />How about the sign posts of life? <br /> <br />There is this someone. He called himself The Way. The Way that will lead you to the Truth. The Truth about Life, and beyond. The Truth that will set you free. <br /> <br />But He, like all sign posts, doesn't demand that He be followed. <br />It is up to us if we want to. <br /> <br /> <br /> <br /> http://lightofdawn.org/blog/?viewDetailed=00210 Nothing to Fear, but Fear Itself General Thus said Franklin D. Roosevelt, the 32nd president of the United States, even though I'm sure many wise men before him had said the same, although, perhaps, not in the exact same words. <br /> <br /><hr> <br />Fear of something is almost always worse than that something itself. http://lightofdawn.org/blog/?viewDetailed=00209 Love at first sight General "Love at first sight often ends at the first quarrel." <br />--- <br />The Book of Ten Children, 1:2 <br /> http://lightofdawn.org/blog/?viewDetailed=00208 Fatdog64 810 Final is released Linux'Fatdog64 A maintenance release, basically a bug-fixed version of 810 Beta. <br /> <br /><a href=http://distro.ibiblio.org/fatdog/web/810.html target=_blank>Release Notes</a> and <a href=http://murga-linux.com/puppy/viewtopic.php?t=117546 target=_blank>Forum announcements</a> <br /> <br />Can't believe that Fatdog is coming to its twelfth year.
<br /> <br /><hr> <br />Download locations: <a href=http://distro.ibiblio.org/fatdog/iso/ target=_blank>ibiblio.org</a>, <a href=https://mirror.aarnet.edu.au/pub/fatdog/iso/ target=_blank>aarnet</a>, <a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso/ target=_blank>uoc.gr</a>, <a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso/ target=_blank>nluug.nl</a>. http://lightofdawn.org/blog/?viewDetailed=00207 CSVfix patches for regex and exec General'Linux <a href=https://neilb.bitbucket.io/csvfix/ target=_blank>CSVfix</a> is a tool for manipulating CSV files. Along with the usual column re-ordering and filtering, CSVfix offers powerful per-cell data transformation using a simple expression language, as well as regular expressions for string matching and editing. And if that is not enough, CSVfix can execute an external process - for every cell that needs to be processed. And oh, it's available for Windows too! <br /> <br />I find this tool very handy for what I need to do, so when I encountered a bug in its regex processing (in its "edit" command), I immediately checked whether there were any updates to the tool. Unfortunately, its development seems to have ceased in 2015, and no one else seems to have picked up development (I did find some forks, but they were all older copies from when it was still hosted on Google Code). <br /> <br />So I set out to figure out the problem and hopefully rectify it. I found that the problem was in its regex library, which was a home-grown library (apparently adapted from an algorithms book). It is 2020 as of this writing, and C++ now comes with its own STL regex library (std::regex). I decided to rip out the custom regex lib and replace it with the STL regex instead, while keeping the rest of the class interface identical, so no other part of the code needed to be changed.
This instantly fixed the problem, and as a bonus, we can now use <a href=http://www.cplusplus.com/reference/regex/ECMAScript/ target=_blank>ECMAScript-compatible regex</a> instead of just basic regex. <br /> <br />Later, I found out that the "exec" command also had a bug (the flag "-ix" did not work properly), so I traced this and fixed it too. <br /> <br />Oh, and during the process, I tried to run its testsuite - and while most of the tests passed, some failed, mainly because of CRLF/LF inconsistencies, so I changed the test data to use LF. It is also a warning that this tool only works with the platform's native newline - CRLF on Windows, and LF on Linux - so if files are to be exchanged between platforms, they must be properly translated before use. <br /> <br />Here are the individual patches: <br />- <a href=/wiki/main/files/csvfix-regex-fix.patch target=_blank>regex patch</a> <br />- <a href=/wiki/main/files/csvfix-exec-fix.patch target=_blank>exec patch</a> <br />- <a href=/wiki/main/files/csvfix-test-cases-fix.patch target=_blank>test-case patch</a> <br /> <br />They apply on top of <a href=https://bitbucket.org/neilb/csvfix/commits/93804d497be6e41a0b71864d15d2a8346cc63364 target=_blank>commit 93804d4 from 2015-02</a>, which was the latest when I wrote this. They are licensed in the same way as the original CSVfix is <a href=https://bitbucket.org/neilb/csvfix/src/default/LICENSE target=_blank>licensed</a>. <br /> <br /><hr> <br /> <br />If CSVfix is not powerful enough for you, there are other similar tools: <br /> <br />1. <a href=http://johnkerl.org/miller/doc/index.html target=_blank>miller</a> is a tool in a very similar spirit to CSVfix, but it is (much) more sophisticated. Its "data transformation language" looks more expressive than the one in CSVfix. If you have a problem you cannot solve with CSVfix, miller will probably help you. As a bonus, it is still in active development - which means bugs will get squashed.
It is written in C; you will need to compile it if it is not in your package repository. (Fatdog, naturally, has it in its repository.) <br /> <br />2. <a href=https://csvkit.readthedocs.io/en/latest/ target=_blank>csvkit</a> is a collection of tools that more or less perform the same functions as CSVfix. It supports direct conversion to/from Excel files, importing/exporting into databases (sqlite and postgresql as documented, perhaps others too), as well as running SQL queries directly on CSV files (and databases too). It is written in Python3, so you can install it using pip3. Fatdog has this in its repository too (so you can install it using the package manager instead of pip3). <br /> <br />3. <a href=https://rbql.org/ target=_blank>rbql</a> basically enables you to run SQL-like queries on CSV files; but its power lies in its ability to run python (or javascript, depending on which backend you choose) code for every cell. Fatdog has it in its repository too, although you can just use pip to install it if you don't run Fatdog. http://lightofdawn.org/blog/?viewDetailed=00206 The Good Book General Everyone keeps two books. The "good book", and the "bad book". People that we know, we like, and generally like to be associated with are in our "good book". People who have hurt us, or people who behave in a manner that we despise, on the other hand, get their names written in our "bad book". <br /> <br />Being listed in either book has consequences, both good and bad. <br /> <br />Now, God carries two books too. Would you rather be in men's good books, or God's good book? How do you live your life - are you striving to be in God's good book, or to be in other men's good books? <br /> http://lightofdawn.org/blog/?viewDetailed=00205 Welcome to the new decade ... or not! General The new year 2020 is a year that is divisible by ten. It changes the second digit from "1" in 20<b>1</b>9 to "2" in 20<b>2</b>0.
<br /> <br />A lot of people are wishing me a happy new year and also welcoming me to the new decade. I am grateful for their well-wishes. <br /> <br />Except for one thing. <br /> <br />2020 is not the start of the new decade. Just like the year 2000 was not the start of the new millennium. The new decade starts in the year 2021. Just like the 21st century started in 2001, not in 2000. The year 2000 belongs to the 20th century. <br /> <br />In case you don't see why: it is because we start counting our calendar at Year 1 AD. So the first decade - all ten years of it - comprises the counting numbers 1 to 10. The second decade starts at year 11 AD. <br /> <br />But of course, I'm being pedantic. Who cares anyway. Calendars have always been subject to various "adjustments", and different cultures use different calendars. Those still using the Julian calendar have not celebrated the new year yet; it is still 9 days away at the time of writing. <br /> <br />PS: We do not have a Year Zero. The year before 1 AD is 1 BC. http://lightofdawn.org/blog/?viewDetailed=00204 Fatdog64 810 Beta is Released Fatdog64 Maintenance release. <br /> <br /><a href=http://distro.ibiblio.org/fatdog/web/810b.html target=_blank>Release Notes</a> and <a href=http://murga-linux.com/puppy/viewtopic.php?t=117546 target=_blank>Forum announcements</a> <br /> <br /><hr> <br />Download locations: <a href=http://distro.ibiblio.org/fatdog/iso/Alphas-Betas/810beta/ target=_blank>ibiblio.org</a>, <a href=https://mirror.aarnet.edu.au/pub/fatdog/iso/Alphas-Betas/810beta/ target=_blank>aarnet</a>, <a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso/Alphas-Betas/810beta/ target=_blank>uoc.gr</a>, <a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso/Alphas-Betas/810beta/ target=_blank>nluug.nl</a>. http://lightofdawn.org/blog/?viewDetailed=00203 The Author of Life General I am an author. I write both fiction and non-fiction works. <br /> <br />When I write fiction, I make and write my own characters.
These characters are people who live in my fictional world, my fictional universe, oblivious to the fact that they are all fictitious. For them, everything that happens in their world is as real as it could be. <br /> <br />Now, of course, those characters I have created are not independent of me. They exist because of me; they exist in me. Ultimately, I get to decide what they see, what they hear, what they feel, what they do. I determine their fate, their destiny. I set how life is going to be for them, and what revolves in their world. <br /> <br />But they are not just characters, or puppets. In my mind, they are alive, and I'm only writing the part of their life that I happen to see. Not only that, I love my characters, and I care about what happens to them. Certain characters are lovable, and some are detestable - but I certainly care about all of them. <br /> <br />I wish - if it were ever possible - to actually meet my characters, in their own universe. See how they live, how they feel. Feel their joy, and suffer their sadness. Be one of them. And tell them how much I care about and love all of them. That all of their life has meaning to me, their author. <br /> <br />--- <br /> <br />If you are an author, or an artist of any kind who loves your own created art, it is easy to see that you, too, are a work of art of an omnipotent Author, that is, God. <br /> <br />When God paints, you see the flowers, the countryside, and the star constellations. <br />When God sculpts, you see the mountains and the Laniakea supercluster. <br />When God builds, you see the quarks self-assembling into atoms and the visible universe. <br />When God engineers, you see protein machines and solar systems and life. <br /> <br />And when God writes, you see yourself. <br /> <br />You exist because of Him, your Author. <br /> <br />He cares about you, more than you care about your own creation.
<br /> <br />He loves you so much that He gives you life, and more: something that you can never give to your own creation: a free and independent mind to decide what you want to do about Him - whether to love Him back, or to reject and deny Him. <br /> <br />He loves you so much that He came into this world, to feel its joy and suffering, and to tell you that He loves you, that your life is meaningful, and that there is more to your life than just this world; He showed the way to Himself. <br /> <br />I thank my Author for giving me life. <br />I thank my Author for loving me more than I love myself. <br />I thank you, Lord Jesus, and I long to see you face-to-face when the time comes. http://lightofdawn.org/blog/?viewDetailed=00202 Life is like a harddisk ... General With every new day, we savour new experiences... keeping them in our memory. As time passes, we bury our moments deeper in the hierarchy of directories. Over time, we forget what we know. <br /> <br />Sometimes, we feel that life is too much for us - that's when it's time to dig into our repositories and delete unnecessary files and folders, so that new experiences can come in. <br /> <br />There are times when we still feel burning anger and hatred, or feel sick to the stomach about somebody or something, even though we thought we had forgotten it... it's because the file is still in you, in the recycle bin or trash folder. You need to really empty it to let go, and life will begin afresh. <br /> <br />The only thing is, we cannot buy a new life the way we replace a damaged harddisk or upgrade its capacity... but then, who needs to upgrade? Unlike a harddisk that comes with a predefined storage size, our capacity to absorb experience and live life to the fullest is unbounded - provided we know how not to keep the bad stuff. <br />--- <br />Originally posted 18 August 2006, before the advent of SSDs. http://lightofdawn.org/blog/?viewDetailed=00201 What is reality General What’s reality, he thinks.
Alone, in this world, he can no longer differentiate between dreams and reality. He has no reference to compare things with, and when this happens, there is no way to weigh any evidence against any standard. He shouts, and echoes come back to him, and he does not know whether someone hears his shout or not. But he cannot be sure whether he just thinks he shouts or he actually does shout. Nothing to check against. It’s the point when and where reality blurs with dreams and nightmares. <br />--- <br />The Book of Ten Children, Reunion:4. Originally posted on 11 November 2006. http://lightofdawn.org/blog/?viewDetailed=00200 Tomorrow is promised to no one General “I am still making peace with them, every single day,” she said softly. “That’s why I don’t want you to go while you are still upset with me. If bad things happen to you – perhaps you lose concentration in your battle against the bad guys, because of me – then really, I’m not sure I can live with it. It’s already difficult with my parents. I can’t let it happen to another person that I really care about. At least, before you go, I want to have peace between us. I can’t wait until the next time you come again, because tomorrow is promised to no-one,” she said with a trembling voice. <br />--- <br />The Book of Ten Children, 5:5 http://lightofdawn.org/blog/?viewDetailed=00199 Fatdog64 801 is Released Fatdog64'Linux It is a roll-up of updates, containing bugfixes and minor feature updates. 
<br /> <br /><a href=http://distro.ibiblio.org/fatdog/web/800.html target=_blank>Release Notes</a> and <a href=http://www.murga-linux.com/puppy/viewtopic.php?t=115537 target=_blank>Forum announcements</a> <br /> <br />Get it from the usual locations: <br /><a href=http://distro.ibiblio.org/fatdog/iso/ target=_blank>Primary site - ibiblio.org (US)</a> <br /><a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso target=_blank>nluug.nl - European mirror</a> <br /><a href=http://mirror.aarnet.edu.au/pub/fatdog/iso target=_blank>aarnet.edu - Australian mirror</a> <br /><a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso target=_blank>uoc.gr - European mirror</a> <br /> <br />ISO Builder: Get it from here: <a href=http://distro.ibiblio.org/fatdog/iso/builder/ target=_blank>http://distro.ibiblio.org/fatdog/iso/builder/</a> and choose the builder dated 2019.05 and the package list for 801. <br /> <br />Enjoy. http://lightofdawn.org/blog/?viewDetailed=00198 Today is Ash Wednesday General Count your blessings! http://lightofdawn.org/blog/?viewDetailed=00197 Fatdog64 800 Final released Fatdog64 Just two weeks ago we released 800RC. Based on the feedback and our own day-to-day usage, we feel that it is now stable enough and can be used to replace the last stable version, Fatdog 721 - hence the final release. <br /> <br />The list of changes from 800RC isn't long, and you can check them out in the <a href=http://distro.ibiblio.org/fatdog/web/800.html target=_blank>Release Notes</a>. The Forum Announcement is <a href=http://www.murga-linux.com/puppy/viewtopic.php?t=115537 target=_blank>here</a>. 
<br /> <br />You can get it from ibiblio and the usual mirrors: <br /><a href=http://distro.ibiblio.org/fatdog/iso/ target=_blank>Primary site - ibiblio.org (US)</a> <br /><a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso target=_blank>nluug.nl - European mirror</a> <br /><a href=http://mirror.aarnet.edu.au/pub/fatdog/iso target=_blank>aarnet.edu - Australian mirror</a> <br /><a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso target=_blank>uoc.gr - European mirror</a> <br /> <br />In this release we also publish the ISO builder suitable for making your own custom versions of Fatdog64 800. Get it from here: <a href=http://distro.ibiblio.org/fatdog/iso/builder/ target=_blank>http://distro.ibiblio.org/fatdog/iso/builder/</a> and choose the builder dated 2019.02 and the package list for 800. <br /> <br />Enjoy. http://lightofdawn.org/blog/?viewDetailed=00196 Fatdog64 800RC Release Fatdog64 About two and a half months after the initial <a href=?viewDetailed=00193 target=_blank>800 Alpha release</a>, we finally release the first Release Candidate (RC). There was one beta release in between, on 20 December, which I didn't get to announce here (Christmas time - busy days). <br /> <br />As usual it's package updates and bug fixes - but mainly bug fixes. Not only fixes for regressions caused by new packages, but also fixes for long-standing bugs from earlier versions. Hence the recommendation to update. In my last test, I could still run this release on a 1GB Intel Atom N450 Acer eMachines netbook from ca. 2012; so if your machine is similar to or more powerful than that, you can run it too. <br /> <br />If things go to plan and there are no embarrassing bugs, this release will become the final. 
<br /> <br />This is the <a href=http://distro.ibiblio.org/fatdog/web/800rc.html target=_blank>Release Notes</a>; and this is the <a href=http://www.murga-linux.com/puppy/viewtopic.php?t=115409 target=_blank>Forum Announcement</a>; but if you're not familiar with the Alpha or Beta releases I would suggest you read a little bit on both. We will probably copy and consolidate all the changes for the Final release, but not until then. <br /> <br />You can get it from ibiblio and the usual mirrors: <br /><a href=http://distro.ibiblio.org/fatdog/pre-release/800RC/ target=_blank>Primary site - ibiblio.org (US)</a> <br /><a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/pre-release/800RC target=_blank>nluug.nl - European mirror</a> <br /><a href=http://mirror.aarnet.edu.au/pub/fatdog/pre-release/800RC target=_blank>aarnet.edu - Australian mirror</a> <br /><a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/pre-release/800RC target=_blank>uoc.gr - European mirror</a> <br /> <br />In this release we also publish the ISO builder suitable for making your own custom versions of Fatdog. Get it from here: <a href=http://distro.ibiblio.org/fatdog/iso/builder/ target=_blank>http://distro.ibiblio.org/fatdog/iso/builder/</a> and choose the builder dated 2019.02 and the package list for 800rc. <br /> <br />Enjoy. http://lightofdawn.org/blog/?viewDetailed=00195 How to destroy FOSS from within - Part 5 General This is the fifth and final installment of this article. <br />In case you missed it, these are <a href=?viewDetailed=00168 target=_blank>part one</a>, <a href=?viewDetailed=00172 target=_blank>part two</a>, <a href=?viewDetailed=00177 target=_blank>part three</a> and <a href=?viewDetailed=00182 target=_blank>part four</a>. <br /> <br />The last time I wrote about this was the beginning of 2018, when the outlook was bleak with Spectre etc. Well, here we are in early 2019. 
The world didn't collapse, so there is still hope; so I too will end this series on a hopeful note. <br /> <br /><hr> <br /> <br /><b>Part V: Can we do anything about it?</b> <br /> <br />The article was originally a four-parter. But I didn't want to end it on a depressing note, so here is the final part, hopefully more uplifting than the previous parts. <br /> <br />Let's start by observing that "only in a democracy can the people vote to elect a dictator". Yet we don't see hordes of democracies tumbling into dictatorships. So there is still hope for democracy. <br /> <br />Which, I hope, also means that there is still hope for FOSS. <br /> <br />One way I can see is to have independent talents who oversee the project, as well as independent talents who actually contribute to the project. (Being an independent leader is meaningless if all the do-ers are against you - remember, this is a do-ocracy, right?) <br /> <br />FOSS flourishes when there is a constant flow of talent coming into the community. People don't become experts overnight, but those with enough motivation and effort can always start at the bottom of the ladder and acquire the skills as they continue to participate over time, with mentoring from the older guys. <br /> <br />Alternatively, when a project becomes too unwieldy, perhaps it is a better idea to start with a new codebase, clear of "legacy" stuff and therefore easier to understand - while still remembering the lessons learnt from that legacy code (or else the new code will be doomed to repeat the same bugs as the legacy code ...). <br /> <br />How can we keep the independent talents coming into FOSS? <br /> <br />I don't have an answer. I can only say that hope springs eternal. Everyone has an itch to scratch: I have seen people take up impossible projects or come up with impossible replacement projects. New FOSS software coming out of research projects or student theses is still happening. 
So things still do happen. But the trend isn't healthy. And perhaps we all should think about what we can do to help. <br /> <br />THE END <br /> <br /><hr> <br /> <br /><u>After-note 1</u> <br />Some FOSS projects were originally closed-up products opened up by the original owning company. Also, some companies open-source their products for the "community" and charge for a "premium" or "enterprise" version, which is not FOSS (the "freemium" business model). I have nothing against this; instead, I applaud those companies who have chosen to open-source their products. <br /> <br />In this situation it is normal and fair to expect that the direction of these projects continues to be dictated by the original owner, especially when most of the development is still done by the company's own employees. <br /> <br />The FOSS projects that I'm concerned with are those originally grass-roots community projects (or once-closed-but-now-opened projects that are no longer controlled by the original authoring entities) that have risen to the top of the mindshare, but are no longer recognisable as such due to these undue influences. <br /> <br /> <br /><u>After-note 2:</u> <br />One must not conclude from these articles that corporate contribution (in terms of money or employee time) to a FOSS project is automatically bad and unwanted. It is not; in fact, many projects wouldn't be as successful, or survive at all, without it. <br /> <br />We welcome contributions from anyone in any form, but what we need to ensure is independence from external influences. <br /> <br /> <br /> http://lightofdawn.org/blog/?viewDetailed=00194 World map stat counter update General It was over five years ago that I wrote this stat counter (<a href=?viewDetailed=00029 target=_blank>here</a> and <a href=?viewDetailed=00030 target=_blank>here</a>). <br /> <br />The world has since moved on, IP addresses have changed hands (and locations), and even the database format has changed. 
If you still have a copy of the old MaxMind GeoLite database, the old program will still work; but if you don't - well, MaxMind has deprecated the old database format as of January 2019 and you cannot get a copy of it anymore. <br /> <br />However, MaxMind still offers a freely downloadable geo-IP database, in a slightly different format (GeoLite2). I have now updated the world map stat counter to work with this format. <br /> <br />You can get the updated sources (along with a 32-bit statically compiled binary) <a href=downloads/worldmapstat-v4.tar.gz target=_blank>here</a>. The GeoLite2 database is <a href=https://geolite.maxmind.com/download/geoip/database/GeoLite2-City-CSV.zip target=_blank>here</a>, and you need a "converter" (to convert the CSV file from network format to integer-range format) from <a href=https://github.com/maxmind/geoip2-csv-converter target=_blank>here</a>. Then read the original articles and you should be good to go. The v4 has "2" appended to all the program names - ipgeocode2, preprocess2, etc. - so they can co-exist with the older version if you so wish. http://lightofdawn.org/blog/?viewDetailed=00193 The Road to Fatdog64 800 Alpha Fatdog64'Linux Today, the first of the Fatdog64 800 series is released into the wild - the Alpha release. Despite being labelled "alpha", this release has been tested for a few months and has been used day-to-day on real production machines (in varying degrees) for about two months by all of us in the team. <br /> <br />It has not been a smooth ride all along. Living on the "bleeding edge" means that you really need to be prepared to bleed (that's why we don't update the base on every release - that would be downright impossible). 
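To see what "network format to integer-range format" conversion means here, this is a minimal shell sketch of the arithmetic involved. It only illustrates the idea behind such a converter - it is NOT the geoip2-csv-converter tool itself - and it handles plain IPv4 CIDR only:

```shell
#!/bin/sh
# Sketch: convert an IPv4 CIDR block (e.g. 1.2.3.0/24) to the
# start,end integer range that an integer-range lookup expects.
# Illustration only; not the actual geoip2-csv-converter.
cidr_to_range() {
    ip=${1%/*}; prefix=${1#*/}
    # Split the dotted quad and compute its 32-bit integer value.
    set -- $(echo "$ip" | tr '.' ' ')
    start=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
    # A /prefix block covers 2^(32 - prefix) addresses.
    size=$(( 1 << (32 - prefix) ))
    end=$(( start + size - 1 ))
    echo "$start,$end"
}

cidr_to_range 1.2.3.0/24    # → 16909056,16909311
```

Once each network is expressed as an integer range like this, a lookup is just a binary search for the range containing the visitor's IP-as-integer.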
Latest packages don't always build, and when they do, they don't always run, and when they do, they don't always run stably, and when they do, they don't always work, and when they do, they don't always work correctly, and when they do, they don't always provide good performance, and so on, and so on - you get the picture. <br /> <br />But we have finally arrived. It may not be perfect, and it never will be, but for us it is good enough for day-to-day usage; hence the decision to release. <br /> <br />This blog post documents a few of the stumbling blocks that we passed on our way to the Alpha release. By sharing the knowledge, I hope that others on the same journey can avoid them. <br /> <br />Warning: the information that comes after this is going to be very technical. Don't worry if you don't understand it - just use whatever you do understand. <br /> <br /><hr> <br /> <br />The particular bug that took us considerable time (weeks) to solve was this: a bug that caused the desktop to unpredictably, involuntarily exit to the console. <br /> <br />And as it turns out, this problem had __multiple__ underlying causes. Each cause ended up with the X server exiting (sometimes gracefully and sometimes not) so that we were dropped back to the console. <br /> <br />Sometimes there are crash messages in Xorg.0.log, sometimes there aren't. Sometimes the exit is immediate (X goes down and we're back in the console), sometimes it's gradual (applications start to fail one by one, before X itself finally gives up the ghost and dies). <br /> <br /><hr> <br /> <br /><b>Bug #1</b>: fontconfig doesn't like its cache to be tampered with. <br /> <br />This bug happens only when running with the RAM layer (savefile=ram:device:xxx; in Puppy's parlance this is pupmode=13). When anything that uses fontconfig starts (e.g. X, gtk2, etc.), fontconfig will be initialised and its first action is to scan all the font directories and make its cache. 
<br /> <br />When we run using the RAM layer, these caches are stored in the RAM layer, and eventually get "merged down" to the actual savelayer (copying the files from the RAM layer to the savelayer, then removing the copy on the RAM layer, and then refreshing the aufs layered filesystem so that the copy on the savelayer gets re-surfaced at the root of the filesystem). <br /> <br />This had worked for the longest time, but we found out that in Fatdog 800 it no longer did. Even the process of copying the files from the RAM layer to the savelayer (without even deleting them) triggered a cascade of failures in fontconfig, which eventually resulted in crashes in all the higher-level libraries and applications that use it. <br /> <br /><b>Fixes #1</b>: We still don't really know what changed - it could be a change in the kernel, in fontconfig itself, in glibc, or elsewhere (remember, in 800, with a new base, **all things were updated**, so we can't easily isolate one component from another) - but once we knew what triggered the collapse, we worked around it by making sure that fontconfig caches are not touched during the merging process. <br /> <br /><hr> <br /> <br /><b>Bug #2</b>: Xorg server crash on radeon-based systems when DRI3 and glamor are enabled (the default settings). <br /> <br />This has been a long-running bug due to Mesa (the open-source OpenGL 3D library) changing its infrastructure to support newer, more powerful radeon cards: changing the acceleration API (DRI2 to DRI3), the acceleration method (EXA to glamor), and the memory management (GEM, GBM, etc.). <br /> <br />But the bug isn't in Mesa alone. Eventually Mesa needs to interface with the video driver, so co-operation with the xf86-video-ati driver is needed. <br /> <br />And then there is the kernel too - the radeon DRM driver in the kernel. <br /> <br />There are multiple components and every component is a potential source of failure (and in this case they all contributed to the problem one way or another). 
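The workaround idea - merging one layer down into another while leaving fontconfig's caches alone - can be sketched roughly like this. The paths, directory layout, and function are all illustrative; Fatdog's actual merge script is more involved:

```shell
#!/bin/sh
# Sketch: merge files from the RAM layer down to the save layer,
# skipping fontconfig's cache directory so fontconfig never sees
# its cache files change underneath it. Paths are illustrative.
RAMLAYER=/tmp/ramlayer
SAVELAYER=/tmp/savelayer

merge_down() {
    # Walk the RAM layer; prune anything under the fontconfig cache dir.
    ( cd "$RAMLAYER" &&
      find . -path './var/cache/fontconfig' -prune -o -type f -print ) |
    while read -r f; do
        mkdir -p "$SAVELAYER/$(dirname "$f")"
        cp -a "$RAMLAYER/$f" "$SAVELAYER/$f"
    done
}

# merge_down would be invoked at merge/shutdown time.
```

The real fix also has to leave the pruned caches on the RAM layer undeleted, so fontconfig's view of its own cache never changes mid-session.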
<br /> <br />To make it worse, the problem was completely unpredictable. The system could run for hours without a glitch before the desktop crashed, or it could crash within an hour of booting. It made for a completely unreliable experience. <br /> <br />There is no good solution for this, because every component keeps changing, so updating one component could very well break another. All we could do was watch bugzillas, forums, and mailing lists, and listen for possible solutions. <br /> <br /><b>Fixes #2</b>: I think in this case we got lucky. Most of the bugs were fixed in mesa 18.2.3, and the final bug was fixed in xf86-video-ati git-master, one commit after 18.1.0. We're going to stick with this combination for a while! <br /> <br /><hr> <br /> <br /><b>Bug #3</b>: After an unpredictable amount of time, the Xorg server will crash (due to a failed assert), giving messages similar to this: <br /> <br /><pre class=code><code class=highlight> <br />[xcb] Unknown sequence number while processing queue <br />[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called <br />[xcb] Aborting, sorry about that. <br />pidgin: xcb_io.c:259: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed. <br /></code></pre> <br /> <br />There is one unanswered report here: <a href=https://bugzilla.redhat.com/show_bug.cgi?id=1582478 target=_blank>https://bugzilla.redhat.com/show_bug.cgi?id=1582478</a> <br /> <br />This initially happened in ROX-Filer when it was being worked heavily, so naturally we thought the problem was with ROX-Filer. But then it started to happen elsewhere (in other GTK applications), so it could have been anywhere in gtk2, glib, glibc, the aufs kernel module, or even the kernel itself. 
<br /> <br />We scoured for solutions to similar problems, and we found some, like this: <a href=https://forum.freecadweb.org/viewtopic.php?t=28054 target=_blank>https://forum.freecadweb.org/viewtopic.php?t=28054</a> <br /> <br />But they didn't work, and they didn't make sense. FreeCAD is a 3D-heavy application, so disabling DRI3 (and, in some other links, disabling 3D hardware acceleration) doesn't make sense for applications like ROX-Filer, which is purely 2D and doesn't make use of any acceleration at all. <br /> <br />This was especially difficult to pinpoint because it happened randomly; and it was one of the things I would have filed away as "unsolved", were it not for SFR. SFR found a way to reproduce the problem reliably (by clicking the "refresh" button in ROX-Filer a few hundred times - and he even provided an automation script so we didn't have to buy a new mouse after every experiment <img src=images/smilies/happy.gif /> ). <br /> <br />Once we could reproduce it, re-building the libraries with debug symbols and running them under "gdb" quickly pointed out that the problem was within libxcb - a library at the bottom of the Xorg stack. <br /> <br />Bisecting libxcb, we found that the problem was caused by a particular commit that tried to "fix" another bug when dealing with Vulkan drivers: <br /> <br /><a href=https://cgit.freedesktop.org/xcb/libxcb/commit/?id=fad81b63422105f9345215ab2716c4b804ec7986 target=_blank>https://cgit.freedesktop.org/xcb/libxcb/commit/?id=fad81b63422105f9345215ab2716c4b804ec7986</a> <br /> <br />But the way it was fixed is, in my opinion, incorrect (it was reading stuff when it shouldn't - this kind of thing should be protected with a mutex or we'll end up with a race). <br /> <br /><b>Fixes #3</b>: So we reverted this commit and poof! - the problem disappeared. I tested by clicking the button about 16,000 times and there were no more crashes. <br /> <br />This was the last bug we squashed. <br /> <br /><hr> <br /> <br />So there. 
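The revert itself is a one-commit operation in git. Here is a sketch of the idea on a throwaway repository; the repository, file name, and commit messages are invented for illustration (against a real libxcb checkout you would name the actual offending commit hash):

```shell
#!/bin/sh
# Sketch: drop a single offending commit from a tree before building.
# Demonstrated on a throwaway repo; everything here is illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
# Helper: supply an identity so commits work anywhere.
git_c() { git -c user.name=demo -c user.email=demo@example.org "$@"; }

echo "good code" > io.c
git_c add io.c
git_c commit -qm "initial"

echo "buggy change" > io.c          # stand-in for the commit bisect fingered
git_c add io.c
git_c commit -qm "fix Vulkan event handling"

# Revert just that commit, keeping everything else intact:
git_c revert --no-edit HEAD
cat io.c                            # back to "good code"
```

Finding which commit to revert is what `git bisect` automates: mark one revision good and one bad, rebuild and re-test at each revision git checks out, and it converges on the first bad commit in O(log n) steps.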
One symptom, three underlying problems that could be triggered at different semi-random times, resulting in totally different error messages and behaviour, confusing all of us. <br /> <br />We falsely declared victory after the first and second were squashed - only to be humiliated when the crash happened again, in a slightly different way. By the time we squashed the last bug, we were wary enough __not__ to declare it fixed until a few days later, when it was finally confirmed that we had made it through. <br /> <br />All in all, we spent more than a month solving all of them. Now that we've passed through them, I hope others can avoid the same mistakes. <br /> <br />Meanwhile, enjoy Fatdog64 800 Alpha! <br /> <br />Forum announcement here: <a href=http://www.murga-linux.com/puppy/viewtopic.php?t=114719 target=_blank>http://www.murga-linux.com/puppy/viewtopic.php?t=114719</a> <br /> <br />Release Notes here: <a href=http://distro.ibiblio.org/fatdog/web/800a.html target=_blank>http://distro.ibiblio.org/fatdog/web/800a.html</a> <br /> http://lightofdawn.org/blog/?viewDetailed=00192 So you want to roll your own distro? Fatdog64'Linux This was supposed to be posted a week ago, so "today" in the text was actually about 10 days ago, give or take. Since then, all of us in the Fatdog team have migrated and are now eating our own dog food. But it doesn't change the essence of the post. <br /> <br /><hr> <br /> <br />Today has been an exciting day for Fatdog64. For me at least, Fatdog64 800 has now entered the "eat your own dogfood" phase: I have migrated my build machine to Fatdog64 800. In other words, Fatdog64 800 is now self-hosting. This is the phase that we have always gone through since the early days of Fatdog - we use it ourselves first, internally, for actual day-to-day purposes, to make sure we weed out the most obvious and most annoying bugs. We suffer these bugs so you don't have to. 
Of course, some bugs will still slip out; that's why more testers means fewer bugs. Fortunately, there are now four of us in the Fatdog team, and we have help from other pre-release testers too. We'll stay in this phase for as long as it is needed to polish things up. <br /> <br />It has been a long and winding road to get here. People who aren't involved probably aren't aware of the effort that goes into a project like this. Some people say: a distro is just a collection of stuff, right? You don't write that stuff - you just package it. What's so hard about that? To that, I could have replied "well, if you think it's so easy, why don't you do it yourself" - but instead, I'll explain what happens behind the scenes. Along the way, I can explain some of the key decisions that make Fatdog what it is today. <br /> <br /><span class=emg>Let's start.</span> <br /> <br />Okay, so a distro is just a collection of "packages". That's correct, in general, but as with everything, the devil is in the details. First and foremost: where do these "packages" come from? You have two choices here: build your own, or use someone else's. If you use someone else's packages, then your distro is effectively a "derivative", whether you call it that or not. <br /> <br />We don't want Fatdog to become a derivative - at least, not Fatdog64 (I have another stagnant project called Fatdog-Like which *is* a Fatdog derived from a parent distro - but it's not going anywhere at the moment due to lack of time and interest) - and the reason is not simply that we don't like being called a "derivative". <br /> <br />The reason is deeper: it's about management and control. 
Using someone else's packages means you don't have full control over the decisions that go into the making of those packages - from simple things like which version is available, to more complex things like which optional libraries are linked in (this determines overall size and functionality), where the configuration data is stored, what the build-time configuration parameters are, etc. <br /> <br /><span class=emg>Okay, so we don't want to be a derivative.</span> <br /> <br />We want to strike out on our own. What's next? <br /> <br />Well, what's next is that we need to build our own packages. Before we build our own packages, there are a few things that we need to sort out. Firstly, the compiler. A compiler is special because a part of the compiler is always attached to the final program (= the run-time). If you use an existing compiler, whoever made that compiler has already made decisions for you (at the very least, the version of the compiler), which will be carried into all of your packages --- <b>even if all those other packages are built by you</b>. So, no, we cannot use an existing compiler, can we? We have to build our own compiler ourselves (more popularly known as the "toolchain", because a compiler is just one of the components that you need to build a program from source - there are also the linker, the libc, and others. I'll gloss over the fact that a correctly functioning toolchain requires a very specific combination of correct versions of its components). <br /> <br />Well, how do we build a compiler then? It's just another package, right? No. The compiler we build is special - the programs it builds are not meant to run on the machine that the compiler runs on; instead, those programs are supposed to run on our brand new distro, which currently exists only in the gleam of our eyes. Building a compiler like this is what is called "toolchain bootstrapping" (aka the chicken-and-egg problem); and the compiler you produce this way is a cross-compiler. 
I'm not going to explain terms like this here - I will assume that if you're interested enough to read this, you have enough motivation to google the terms that you don't understand. <br /> <br />Ok, you've googled it - and as it turns out, there are tons of tools to build a cross-compiler! There is "buildroot" from the busybox team, there is "crosstool-ng", and there are many others! Problem solved, no? Eh, the answer is still no. Most of these tools produce a cross-compiler all right, and they do it very beautifully. Only one problem - most of these tools are <b>NOT</b> capable of building a cross-compiler which can build a native compiler (that is, a compiler that will eventually run on the target system). Which means that we would forever be dependent on them to build packages for us. That is not good. We need a cross-compiler that can eventually build a native compiler, so that we can use the native compiler to build the rest of the packages. <br /> <br />Actually, any good cross-compiler can be used to build a native compiler. You just need to know how. And it's not easy. Even the process of building a cross-compiler itself is almost black magic - that's the reason there are numerous tools that help you do it. Fortunately, there is one project that aims to decipher all this gobbledygook into something that you can understand. That project guides you, step by step, through the process of making a cross-compiler, and then of building a native compiler. That project is the Linux From Scratch project (LFS). It is not a tool - it will not build a compiler for you - but it is a book, a guide, that will instruct you exactly how to do it, step by step, while explaining why certain things must be done in a certain way. <br /> <br /><span class=emg>Fatdog64, since version 700, is based on LFS.</span> <br /> <br />Once we've got the compiler, we need to build the packages using it. 
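In configure terms, the cross-then-native distinction described above boils down to the build/host/target triplets. A rough sketch of the shape of it - the triplet, paths, and flags here are invented for illustration, and the real LFS instructions have many more flags and strict version requirements:

```shell
# Illustrative configure fragment only - not the actual LFS commands.
TARGET=x86_64-fatdog-linux-gnu   # hypothetical target triplet
PREFIX=/opt/cross                # where the cross-toolchain would live

# Pass 1: cross-binutils and a minimal cross-gcc. These tools RUN on
# the build machine, but PRODUCE code for $TARGET.
../binutils-src/configure --prefix="$PREFIX" --target="$TARGET"

# Later: use that cross-gcc to build the NATIVE compiler - one that
# both runs on and produces code for the new system:
../gcc-src/configure --build=$(gcc -dumpmachine) \
                     --host="$TARGET" --target="$TARGET"
```

When `host` equals `target` but differs from `build`, you are cross-building a native compiler - exactly the step most turnkey cross-toolchain tools skip.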
LFS is extremely helpful: it guides you to build the minimal set of packages that will enable you to build a minimal system that can boot to a console, into a bash shell. And that's where LFS ends. You end up with about 50 packages, give or take, including the toolchain. But at least your target distro is now alive, with a native compiler in it that you configured yourself (the LFS instructions are just a "guide", and you're welcome to vary them for your own needs once you know exactly what you're doing - so what you build is effectively your own compiler, not LFS's), which you can use to build the rest of the packages. <br /> <br />Ok. The rest of the packages. Where will they come from? Let's say, umm, you want to build a web browser. Firefox sounds good. Okay. How do you build a Firefox web browser from scratch? Go to the Mozilla website, spend a couple of hours digging in and out ... oh, I need to build a "desktop" first before I can even begin to build Firefox. And even with the desktop, there are these "libraries" that I need to have in order to build it. I also need tools - and certain tools must be a very specific version (e.g. autoconf must be version 2.13 exactly, nothing older or newer). But how do you build a "desktop"? A desktop is a system of many components - quickly broken down into window manager, panel manager, file manager, system settings ... and then the basic graphics subsystem, for which, as of today, you can choose between the X desktop and Wayland. X is more popular, so you decide to explore it - then you have the X libraries, the XCB libraries, input drivers, video drivers, and servers. And all those things need supporting libraries before you can even build them - they need gzip libraries, XML libraries, etc., and down the rabbit hole we go. So how do we even start? <br /> <br />Well, within the umbrella of the LFS project (but run by different people), there is this project called BLFS - Beyond LFS. 
Its purpose is, you guessed it, to provide details about building packages which aren't part of LFS. For every package that it describes, it tells you: (a) where to get the source files, (b) what the dependencies for that package are (= which packages must be built and installed before you can build this one), and (c) the commands to build it properly. BLFS is much larger in scope than LFS, but even it does not cover everything. It will get you going, though, as it says at the top of the book: "This book follows on from the Linux From Scratch book. It introduces and guides the reader through additions to the system including networking, graphical interfaces, sound support, and printer and scanner support." So it does get you going in the right direction. It even tells you how to build Firefox, and exactly what you need to build before you can do it (you're still going down the rabbit hole, but at least somebody is holding a ladder so you can always climb back up). <br /> <br /><span class=emg>Fatdog64, since version 700, uses parts of BLFS as the source of some of its packages.</span> <br /> <br />But there is one major problem here. Both LFS and BLFS show and guide you to build an operating system for yourself. It's like building a one-person distro. You cannot easily copy the resulting system onto distribution media, not without tainting it with your own personal information and machine-specific configuration. (The keyword here is "easily" - with enough effort surely you can do it - obviously <b>WE</b> are doing it for Fatdog64.) No matter, you say. I'm just building a distro for one, for myself. So all is good, right? No. With LFS (and BLFS), it is easy to add new packages to the system, but it is rather difficult to get rid of an installed package. All packages are installed into the target system as they're built, without any record of which files go where; so a package is difficult or even impossible to remove without breaking the system. 
<br /> <br />The ability to track installed packages, and thus remove them (in addition to installing them), is collectively known as "package management". Any decent distro has it. Package management is not included in LFS/BLFS because it "gets in the way" of explaining how things work - which is their main objective. They only go as far as saying that package management <b>IS</b> needed, and that there are many possible ways to do it. Look at LFS chapter 6.3 if you're interested. <br /> <br />So, you need package management. A package management system has two parts - the "creator" that enables you to build a "bundled" package, and the package manager proper that can install/uninstall/view the installation of your "bundled" packages. The "creator" part must be used in conjunction with your build process, because it needs to keep track of the files produced by your build, collect all those files, and bundle them up into a package. <br /> <br />Easy, you say. Just google it, and you will find "porg". Or you can even use an existing package management system. After all, many distros share the same system; it is unnecessary to invent a new one (yes, a proper and correct package management system is <b>HARD</b>). Why don't I just take Debian's package management system (dpkg), or RedHat's (rpm), or Slackware's (tarballs), or one of the myriad other ways that have been suggested? <br /> <br />Well, yes, we can. <br /> <br /><span class=emg>From version 700 onwards, Fatdog64 uses Slackware's package management system ("pkgtools"), fortified with slapt-get from jaos.org, modified for our use.</span> <br /> <br />"pkgtools" was chosen because it is easy to create its packages, easy to host and publish them, there are tools that extend it to support remote package management and dependency tracking (slapt-get), and the packages are basically tarballs that - in the worst case - you can always "untar" to install, without needing special tools. 
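That "worst case, just untar it" property is easy to illustrate. A minimal sketch - not Fatdog's actual packaging scripts, and all names here are invented - of staging a package tree, tarring it up, and "installing" it by extraction:

```shell
#!/bin/sh
# Sketch: a Slackware-style package is essentially a tarball of the
# installed files, rooted at the filesystem root. Illustration only.
set -e
DESTDIR=$(mktemp -d)      # staging area, as a "make install" would fill it
mkdir -p "$DESTDIR/usr/bin"
printf '#!/bin/sh\necho hello\n' > "$DESTDIR/usr/bin/hello"
chmod +x "$DESTDIR/usr/bin/hello"

# The "creator" step in essence: tar up the staged tree.
PKG=$(mktemp -u)-hello-1.0-x86_64-1.tgz
( cd "$DESTDIR" && tar czf "$PKG" . )

# Worst-case "install": untar onto a root (a scratch dir here).
ROOT=$(mktemp -d)
tar xzf "$PKG" -C "$ROOT"
"$ROOT/usr/bin/hello"     # prints "hello"
```

The real pkgtools do more than this, of course: installpkg records the package's file list (under /var/log/packages on Slackware), and that record is precisely what makes removepkg - and thus uninstallation - possible.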
No other package management tool comes even close. <br /> <br />Okay, once you choose a system you then need to hook it up with your build process so that the "creator" part of the package management works, as I said above. I'll gloss over this and assume you can already do it. Let's move on. <br /> <br />Hang on, you say. We've already bootstrapped a system, have enough guides to build everything up to a web browser and a multimedia player (vlc), and can install and uninstall packages - so what's next? <br /> <br />Ok, what's next? How about package updates. Software keeps being updated. What's new yesterday is old today and obsolete tomorrow. You need to keep building packages. These updates are not in the LFS/BLFS books, because the books are published semi-annually. In between, if there are any updates, you must roll up your own sleeves and figure out how to do it yourself (it shouldn't be too hard by now if you can follow the BLFS guides this far). Ok, so updating is easy - *IF* you do this every day. But if you don't - well, do you still remember how you built the previous version of the package? Do you remember the build-time flags you specified 3 months ago? Welcome to the club - you're not the only one. <br /> <br />Your first response to the question is - I will make sure I keep a note of all the configurations, library dependencies, etc etc when I build the packages. Or, perhaps, even simpler, I will just stick to the LFS/BLFS update cycles, so there is no need to write my own notes. I'm not that desperate to live on the bleeding edge anyway. <br /> <br />OK. If you decide to stick to BLFS, I have nothing else to say, but if you said you'd want to keep notes, then allow me to go a bit further. Rather than making a note, which you need to "translate" into actions when you build the package, why don't you write it in a "scripting" language? Then next time you can tell the computer to "read" your script for you and do the build too. Sounds nice and too good to be true? 
It isn't, and it is actually the perfect solution. With a "build script", not only do you remember how to build a package, you also save time and (with exceptions) you have accomplished a "repeatable build" - which means that you can repeat the build many times and get a consistent result. <br /> <br />After a while, not only will you want to record the build process, you may as well keep a record of the location of the source package, perform the download, and verify that the downloaded source package is correct (using a checksum) before actually building it. You may even want to keep information about library dependencies there. <br /> <br />And if you're like me, after a while you will start to notice two things. (1) About half the content of every script is identical. It's always wget (or curl), then md5sum (or sha512sum), then extract, then apply patches, then build, then activate the package-management-creator hooks, and then install and make the package. (2) With the proliferation of build scripts, you need to manage them, and have the computer build them in the correct order (instead of you manually sorting out the build order). <br /> <br />Congratulations. You have recognised the need for a distro build infrastructure (shortened to "build system"). What we have been calling "build scripts" are usually known as "build recipes" once we take the "scripting" out of them and move it into the shared build infrastructure. Some people consider the build infrastructure part of package management (because they can be very closely coupled) but they're actually separate systems. <br /> <br />Most major distros have their own distro build system: for example, Debian has "debuild", RedHat has "rpm", Arch Linux has "PKGBUILD", Slackware has SlackBuilds (for 3rd party packages only), etc. Each of these build systems has its own way to specify the "recipes" - some highly structured like debuild and rpm, some rather loose like Arch PKGBUILD. 
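The recipe/infrastructure split sketched above looks roughly like this (this is not Fatdog's actual recipe format, nor any particular distro's - all values are illustrative): the recipe declares only what is unique to the package, while the shared infrastructure performs the identical half every time.

```shell
# Sketch of the recipe/infrastructure split. The "recipe" part: only the
# package-specific facts (all values here are illustrative).
name=hello
version=1.0
url="https://example.org/src/$name-$version.tar.gz"
md5="0123456789abcdef0123456789abcdef"

# The "infrastructure" part: the half that is identical for every package.
# Real code would wget/curl, md5sum -c, tar xf, patch, make, etc.
build_one() {
    echo "fetch   $url"
    echo "verify  $md5"
    echo "extract $name-$version"
    echo "patch + configure + make"
    echo "package $name-$version.tgz"   # package-manager "creator" hook
}
build_one
```

Once the recipes are this small, sorting out the build order and looping over them is the infrastructure's job, not yours.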
You can use them; you don't have to re-invent the wheel. Of course, you can also come up with your own if you wish. <br /> <br /><span class=emg>Fatdog64 uses its own home-brewed build system conveniently named "Fatdog pkgbuild" (no relation to the Arch Linux build system of the same name).</span> <br /> <br />Simply because we found that none of the existing build systems had the features we need (mainly: simple enough to be understood and written, but flexible enough to build complex packages). Fatdog64's build system is loosely defined (similar in spirit to Arch Linux's) as opposed to highly structured build systems like rpm or debuild. It has proven itself capable of building a customised firefox and libreoffice in a single pass. <br /> <br />Ok. You have now gone a long way from building packages by hand and installing them directly (./configure && make && make install). You are now the proud owner of a build system that can (re-)build the whole system in one command. With some clever scripting you can even install these packages into a chroot, and create an ISO file ready for distribution. Your job is done; you're now officially a distro builder! Congratulations! <br /> <br />But wait. I don't have to go through all these processes. There are already "distro build systems" out there, complete with fully populated recipes for every software under the sun and the moon! There is one called T2-SDE, for example. "buildroot" from busybox can actually build an entire distro's worth of packages by itself, all nicely packaged into tarballs too. Or you can start with debootstrap from Debian and work your way up - Debian publishes every single one of their recipes, from every version ever released. And wait, they're not the only ones. You have openwrt, you have ptx-dist, you have open-embedded, you have gentoo, you have yocto, and many others that I've forgotten. 
(You can even include android in the list - yes, AOSP is a distro build system). Why not just use them? <br /> <br />Well, why not. If you like them, then by all means use them. Just remember this: every single build system released out there is released to serve a purpose. Investigate what that purpose is, and see if it is aligned with what you want to do. Also, all those build systems (and recipes) are maintained by others, and they follow different objectives and schedules than you. This may or may not be important for you. <br /> <br />Fatdog64 500 and Fatdog64 600 were originally built using T2-SDE. T2-SDE was also the build system used to build packages for earlier Puppy Linux builds (version 4.x - the Dingo series). The reason why we dropped T2-SDE is, again, management and control. Having T2-SDE is great because we don't have to bother about recipes and upgrades anymore; someone else is taking care of that for us. But it also means that version updates depend on the maintainer. Adding new packages (new recipes) depends on the maintainer. Etc. We can of course update the recipes on our own (once we understand the format), but then it means we need to start keeping track of these recipes ourselves. The more we have to change, the more we have to maintain ourselves. Some packages' changes only affect themselves, some affect others (e.g. library updates more often than not introduce incompatibilities, which means that all packages that use the updated library will now also have to be updated themselves, causing an avalanche of recipe updates), so before you know it, you have basically "forked" your build system from the original maintainer's. This is even more true if you find "bugs" in the build system which the maintainer won't fix for whatever reason, and you end up fixing them yourself. 
Congratulations, you're now maintaining a distro build system that you don't fully understand and that was created by someone else for objectives that may or may not align with yours. <br /> <br />So, okay, you decide to start a build system of your own. At least this is your system: you know it upside down and you can debug it with your eyes closed. You write it exactly to your own needs - nothing less, nothing more. Then you start to populate this system with recipes. <br /> <br />You can initially source the recipes from BLFS, and then you can start writing your own, or even use recipes from Debian, RedHat, OpenSuse, Arch, or wherever else you can find them. Lots of time is spent experimenting to make sure that the right combination of packages and libraries all work well together and produce the best outcome you want (in terms of features, performance/speed and size). If you use only one source (e.g. BLFS) this is not a problem - the tuning is already done for you. But if you mix recipes from many sources, they may carry assumptions that are no longer true when you apply them to your system (e.g. assuming a library is installed when it is not, etc); you will find out when you get a build failure (or worse: a run-time failure), so you have to tinker and adjust. Imagine, then: at the very near end of the build, you get a compilation failure. You've got to troubleshoot this, figure out why it fails and what the probable fix is, and try to build again. And it fails again. So you investigate again, and try fixing it up again. And then rebuild. This cycle continues until you finally get a working recipe that builds correctly, or until you give up. But if this is a very important library which is used by many other packages, "giving up" is not an option (unless you want to give up building the distro altogether). Now imagine these trial-and-error cycles for a large package like libreoffice, which can easily take a few hours to build. 
A single, big, and yet important recalcitrant package can easily delay progress by days. <br /> <br />Of course you're not always on your own. I don't want to make it sound more difficult than it is. stackoverflow.com is here to help. You can also ask at linuxquestions.org or similar places, and if you're lucky, you're not the first one to encounter the problem, so google is your friend. But once in a while you do get unlucky and the problem you need to solve is truly yours alone. <br /> <br />Anyway, let's move on. After spending countless, sleepless and thankless hours building your distro, you finally come to the period of testing (which is where we are, now, for Fatdog64 800). This is where the proof of the pudding is - seeing whether those recipes get us a good cake. And the only way to know whether the food is good is to eat it, hence the mantra "eat your own dog food". After an eternity of testing, you finally come to the conclusion that you're going to have to publish this distro or it will be obsolete before it is released. <br /> <br />Congratulations! It's release time! (We're not there yet for Fatdog64 800 - but we have been through this many times with earlier versions - you can refer to all our previous releases on the Fatdog History site). To be on the safe side, you don't immediately claim the Final release (aka "Gold" release); you call it a "test" release and name it using the Greek alphabet to make it cooler (alpha release, beta release, etc - but normally it stops at beta because "gamma" sounds very bad for your health - after all, gamma radiation is what turns Bruce Banner into the Hulk, remember?). <br /> <br />And then the silence is deafening. Nobody (in their right mind) will touch a test release with a 10-foot pole (unless they're dedicated testers, and thankfully, over the years we have managed to attract some of them - so the silence is <b>not</b> usually deafening for Fatdog releases. But others don't fare as well). 
Oh well, after a few weeks of test releases, we think we've squashed all the bugs we can find (and are willing to, and able to, fix). It is now finally the time for the Grand release, aka the Final, the Gold. <br /> <br /><span class=emg>Drum rolls please!</span> <br /> <br />As soon as we make the final announcement, in comes the news that the kernel we included in the final release has a severe CVE problem (privilege escalation). Or that the openssl version we use has CVE reports about remote exploits. Or the install script corrupts the user's hard disk. Or there is yet one more bug that we missed earlier in the test release. This is not including people whose sorry life's existence depends on trolling others (again: this paragraph, like the rest of this post, is illustration only. We don't usually get it so bad - in fact we've evaded the worst and fared quite well in our releases so far). So what can you do? Depending on the severity of the problem, you can: (a) ignore it, or (b) fix it and issue a minor version update, or (c) pull the release completely until you can work the problem out. Oh, and just ignore the trolls. <br /> <br />But the life of a distro doesn't end here, unless you're fine being a one-trick pony. Software gets updated, bugs get found, more CVEs get reported, so you've got to update your releases too. Sounds familiar, doesn't it? But you've got your build system! You've got your recipes! So no problem, right? <br /> <br />You update your recipes, (re-)build them in your build system, spend more countless hours fixing breakage caused by the updated recipes, etc. And then you issue an update - this can be a package update, or a new ISO as a minor revision release, etc. Generally it's quite manageable unless you're trying to "live on the edge" and want to always be on the latest version of everything (a note of warning: the bleeding edge is not wise. Newer versions come with fixes but come with new bugs too). 
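In practice (the recipe format, names and version numbers below are invented for illustration), a routine package update is often just a version bump, a rebuild, and a repository re-index:

```shell
# Illustrative only: a "minor update" as a version bump in a toy recipe.
set -e
RECIPE=/tmp/demo.recipe
echo 'version=1.0.2a' > "$RECIPE"

# The update itself is often one line:
sed -i 's/^version=.*/version=1.0.2b/' "$RECIPE"

# Re-source the recipe and hand off to the build system...
. "$RECIPE"
echo "rebuilding demo-$version"

# ...then refresh whatever index the repo format wants (slapt-get-style
# repos, for instance, read package metadata from a PACKAGES.TXT file).
echo "demo-$version" > /tmp/PACKAGES.TXT
```

The "countless hours" go into what happens when that rebuild fails, not into the bump itself.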
<br /> <br />But updates don't last forever. There are components of the system that you can't update. "libc", for example. It is part of the toolchain. It is part of every software in your distro. To update it, firstly you need to build a new toolchain. Secondly it means either (1) you rebuild all the packages with the new toolchain - this is the proper way of doing it, or (2) you just drop the old libc and put in a new one - this is the improper way of doing it and carries great risks of introducing unpredictable crashes. In other words, unreliability. <br /> <br />So, eventually, you have to do it the proper way, (1) above. This is what, in Fatdog64 parlance, we call "updating the base". It is what defines a major release in Fatdog. All minor version updates share the same base - same compiler, same libc. For example, the 600, 601, 602, 610, 611, 620, 621, 630 and 631 releases all use the same toolchain and same toolchain libraries. When we moved to the 700 series, we got a new toolchain - a new "base". Fatdog64 800 is one such major release - we're moving from gcc 4.8.3 and glibc 2.19 in the 710 series to gcc 7.3.0 and glibc 2.27 in Fatdog64 800. <br /> <br />Naturally, when we have to rebuild all of the packages, we may as well update them as we go. So not only are we updating the toolchain, we're updating every single one of the packages that we have built previously. The fact that we've sourced some of our packages from LFS and BLFS doesn't help - we still have to review and update (all of) our recipes; and we still have to build them. 
Again, while we borrow and use recipes from many sources (including writing our own as needed), the environment and the combination of packages that we build and use is completely unique to us, so basically we have to repeat the countless, sleepless, and thankless hours of tuning the updated recipes (=all of them) again and again, waiting for a package to build just to see the compiler spit out cryptic error messages because (a) the new gcc uses new C++ standards which consider previously accepted language constructs as errors unless you know certain magic incantations that bring its understanding of "ancient wisdom" back - but sometimes even such incantations don't work as advertised and you need to patch the package, (b) openssl has a new ABI that makes all other openssl-using packages fail and they must be patched, (c) poppler changes some of its method signatures for no good reason and causes undecipherable error messages in some, but not all, packages that use it (=me spending hours convincing scribus to build), (d) Qt goes on yet another re-factoring spree that breaks all but the latest of the software; the fix is easy - just insert #include <QStyle> - but of course you must know that this is the solution (and in which file you need to insert it), (e) the original website goes dead, and you need to confirm whether this is the latest/final version of the software or whether anyone has picked up the debris and started a fork, (f) the source code hosting changes - we have been in the business long enough to see migrations from sourceforge to googlecode to github and now to gitlab, with pieces left along the way - so we need to confirm which hosting contains the latest version, (g) and many other fine gotchas in a similar vein. <br /> <br />So. Having a build system makes maintenance oh so much easier. But when it comes to major updates like this, there is always serious work involved. It is almost like re-starting everything from scratch again (because it actually is). 
Fatdog64 is relatively small - we have only about 1500 packages (not including the contributed packages) - but even this takes a considerable amount of time to maintain, and especially to perform a major update on. To give you an idea, Fatdog64 800 work was seriously started not long after LFS 8.2 was released in March (our base is based on LFS 8.2 - we started our major update cycle around late May). At the time of this writing, LFS 8.3 has been released, and we're still in the internal testing phase. <br /> <br />Oh, one more thing. Even though we use LFS as the base, it is not purely LFS. LFS is not multilib aware. As early as Fatdog64 700 (which uses LFS 7.5), we tried to build it in a way that is 64-bit clean - that is, all 64-bit libs go into "lib64". This way we can tack on 32-bit compatibility libraries in "lib", using libraries we took from other 32-bit distros (mainly from Puppy Linux variants). In Fatdog64 710 we upped the ante and built a full multilib version - which means that the build system now builds both 64-bit and 32-bit packages directly. One of the key parameters of success is the ability to build wine - which requires both 64-bit and 32-bit infrastructure to work. LFS is not multilib aware, does not support multilib, and does not plan to, because doing so would just obscure the point (the same reason why package management is not included). And I fully agree. <br /> <br />To that end, in 710 our base actually shifted to CLFS 3.0.0. CLFS - Cross-LFS - is an extension of LFS that builds Linux distros for a target platform which is not identical to the original host platform (LFS requires the host and target platforms to be the same - if you start on a 32-bit system, you end up with a 32-bit distro), and in addition, it supports multilib too. 
But in terms of the age of the packages, CLFS 3.0.0 was very close to LFS 7.5 - so close that they're virtually identical (same glibc version, gcc only differing in revision number, etc), so aside from the build infrastructure changes, CLFS-based Fatdog64 710 is identical to LFS-based Fatdog64 700. <br /> <br />The point is, we don't just take and use LFS/BLFS recipes (or any other 3rd party recipes) verbatim. We need to convert them into multilib-compatible recipes, we need to create the 32-bit versions of the recipes, and lastly we need to test that they build, and that they work. There is a serious amount of work going on behind the scenes that people rarely see. <br /> <br />And lastly - all those packages you build, you need to publish them somewhere, right? Debian users use apt-get to install new packages. RedHat/Centos uses "yum". Arch uses pacman. What will your distro use? Where will you publish the packages? How do you maintain and ensure that this "software repository" / "package repository" ("repo" for short) of yours is up-to-date? If you have a non-traditional delivery system like Fatdog64, whose entire set of operating system files is kept in the "initrd", how do you deliver updates? These are important, non-rhetorical questions which I leave as an exercise for the reader. <br /> <br /><span class=emb>So, do you still want to start and maintain your own "personal" distro? <img src=images/smilies/happy.gif /> </span> <br /> <br />Sure, writing and programming software as complex as libreoffice is hard, especially when always playing catch-up to the moving goalpost of "MS compatibility". Sure, writing and programming a web browser as complex as Chromium is hard - especially playing catch-up to the "Living Standard". But building and maintaining a distro with over 1000 independent moving components created by different groups of people who may or may not be aware of each other is not exactly a walk in the park either. <br /> <br />It's fun, though. 
If you just have the right mindset. Which is why I'm still here, and will still be here for the foreseeable future, ceteris paribus. <br /> http://lightofdawn.org/blog/?viewDetailed=00191 Fatdog64 800 development update Fatdog64'Linux We have finally completed the base of the next generation Fatdog64 800. We have it running with the 4.18.5 kernel (this will probably change nearer to release). <br /> <br />We are still in the process of fine-tuning it, and trimming the size down. Size growth is unavoidable because newer versions of software are almost always bigger due to feature creep. <br /> <br />Apart from fine-tuning it, we're going to do the usual "eat your own dog food", which is to run it ourselves for day-to-day use; this way we can iron out the most obvious bugs. <br /> <br />Once this gets stable enough, it will be ready for an alpha release. <br /> http://lightofdawn.org/blog/?viewDetailed=00190 Some random updates General'Fatdog64'Linux OK, let's start with Barry K, the original author of Puppy Linux and the spiritual grandfather of every "dog"-themed Linux out there, Fatdog included <img src=images/smilies/happy.gif /> <br /> <br />For those of you who haven't read <a href=http://bkhome.org/news/ target=_blank>Barry K's blog</a> recently, you should check it out. Barry has come out with new interesting stuff that should spice up your "Puppy" experience (strictly speaking, it's not Puppy Linux anymore - Barry handed over the baton a long time ago. Instead Barry now does Quirky/EasyOS/etc - but as far as we're concerned, it's a "puppy" with a quote <img src=images/smilies/happy.gif />). <br /> <br />Barry is also branching out to 64-bit ARM (aarch64). I'm sure he will soon release a 64-bit ARM EasyOS. If you remember, Barry was also one of the first to venture into 32-bit ARM a while ago (a tad over 6 years ago) and made a Puppy that ran on the original Raspi. 
Quite an achievement, considering that the original Raspi was barely able to run any desktop at all - but that is Puppy at its best, showing the world that even the weakest computer in the world can still run a desktop (as long as it's Puppy <img src=images/smilies/happy.gif />). It was also one of the motivations that made me do FatdogArm. <br /> <br />Speaking of Raspi and FatdogArm, I have also recently pulled my Raspi 3 out of retirement. But this time I'm not doing it for a desktop replacement; I'm mainly interested in using it for what it was originally made for: interfacing with stuff. Direct GPIO access, I2C and SPI are so interesting with the myriad of sensor packages out there. I've been playing with Arduino for a while, and while it's cool, it's even cooler to do it using a more powerful platform. Now <a href=http://www.zoobab.com/ch341-usb-spi-i2c-uart-isp-dongle target=_blank>this article</a> shows you how to do it directly from the comfort of your own PC (yes, your own desktop or laptop), if you're willing to shell out a couple of $$ to get that adapter. I did, and it is quite fun. Basically it brings back memories of trying to interface stuff using the PC's parallel port (and that adapter indeed emulates a parallel port ... nice to see things haven't changed much in 30 years). But its speed is limited by the emulation it has to go through - the GPIO/I2C/SPI has to be emulated by the kernel driver, which is passed through the USB bus, and then emulated by the CH341 on the module. If you want real speed, then you want a real device connected to your system bus - and this is where Raspi shines. I haven't done much with it, but it's refreshing to pull out the old FatdogArm, download wiringPi, and presto - one compilation later you've got that LED blinking. Or just access /sys/class/gpio for that purpose. <br /> <br />Now on to Fatdog64 800. I'm sure you're dying to hear about this too <img src=images/smilies/happy.gif /> OK. 
As far as Fatdog64 800 is concerned - we've done close to 1,100 packages. We're about 200 packages away from the finish line. As usual, the last mile in the marathon is the hardest: it's the bootloaders, the Qt libs and apps, and libreoffice. Here's crossing our fingers for a smooth upgrade of all these packages. <br /> <br />Speaking of updates, I've also decided to go for the newest bluetooth stack (bluez). I had been holding on to bluez 4 for the longest time, simply because bluez 5 does not work with ALSA - you need to use PulseAudio for sound. But all of that has changed: there is now an app called <a href=https://github.com/Arkq/bluez-alsa target=_blank>bluez-alsa</a> that does exactly that. I had been thinking of doing it myself were it not there; but I've been thinking too long <img src=images/smilies/happy.gif /> Anyway, I'm glad that it's there. Bluez 5 did have a nicer API the last time I looked at it (as in, more consistent), though not necessarily clearer or easier to use than Bluez 4. But that's just Bluez. <br /> <br />Well, that's it folks for now. And in case I don't see you... good afternoon, good evening, and good night <img src=images/smilies/happy.gif /> http://lightofdawn.org/blog/?viewDetailed=00189 Github fallout and what we can learn from that General Hahaha. What can I say. <br /> <br />It's the talk of the town: <a href=https://techcrunch.com/2018/06/04/microsoft-has-acquired-github-for-7-5b-in-microsoft-stock/ target=_blank>Microsoft buys Github</a>. <br /> <br />Why are you surprised? It's been a long time coming. See my previous <a href=?viewDetailed=00182 target=_blank>articles</a> about FOSS. Add the fact that Github fails to make a profit. Their investors want out; they would welcome a buyer. <b>**Any**</b> buyer. <br /> <br />But today I don't want to talk about the sell-out; there are already too many others discussing it. Instead I'd like to ponder the impact. 
<br /> <br />Rightly or wrongly, many projects have indicated that they will move away. These projects will not be on github anymore, either in the near future or immediately. What's going to happen? Some of these projects are libraries, which are dependencies used by other projects. <br /> <br />People have treated github as if it were a public service (hint: it never has been). They assume that it will always exist, and always be what it is. Supported by the public APIs, people build things that depend on github's presence and use github's features. One notable "thing" that people build is automated build systems, which can automatically pull dependencies from github. Then people build projects that depend on these automated build tools. <br /> <br />What happens to these projects when the automated build tools fail because they can no longer find the dependencies on github (because the project they depend on has moved elsewhere)? They will fail to build, of course. And I wonder how many projects will fail in the near future because of this. <br /> <br />We got a hint a couple of years ago, <a href=http://arstechnica.co.uk/information-technology/2016/03/javascript-unpublished-broke-the-internet/ target=_blank>here</a> (which I also covered in a blog post, <a href=?viewDetailed=00158 target=_blank>here</a>). Have we learnt anything since then? I certainly hope so, although it doesn't look like it. <br /> <br />It's not the end of the world. Eventually the authors of the automated build tools will "update" their code libraries and attempt to pull the dependencies from elsewhere. You will probably need a newer version of said build tools. But those github projects don't all move in one step; they move at the convenience of the project authors/maintainers. 
So, you will probably need to constantly update your automated build tools to keep up with the new locations the libraries can be pulled from (unless a central authority of sorts is consulted by these build tools to decide where to pull the libraries from - in that case one only needs to update said central authority). It will be an "inconvenience", but it will pass. The only question is how long this "inconvenience" will last. <br /> <br />How many will be affected? I don't know. There are so many automated build tools nowadays (it used to be only "make"). Some, which host local copies of the libraries on their own servers, won't be affected (e.g. maven). But some which pull directly from github will definitely get it (e.g. gradle). Whatever it is, it's perhaps best to do what I said in my earlier blog post - make a local copy of any libraries which are important to you, folks! <br /> <br /><hr> <br /> <br />Github isn't the only one. On a larger scale (than just code repositories and code libraries), there are many "public service" services today which aren't really public services (they are run by for-profit entities). Many applications and tools depend on these; and they work great while it lasts. But people often forget that those who provide the services have other goals and/or constraints. People treat these public services as something that lasts forever, while in actuality these services can go down anytime. And every time a service goes down, it will bring down another house of cards. <br /> <br />So what to do? <br /> <br />It's common sense, really. If you really need to make your applications reliable, then you'd better make sure that whatever your application depends on is not "here today, gone tomorrow". If you depend on certain libraries, make sure you have a local copy. If you depend on certain services, make sure that those services are available for as long as you need them. 
If you cannot make sure of that, then you will have to run your own services to support your application, period. If you cannot run the services in house (too big/too complex/too expensive/etc), then make sure the external services you depend on are easily switchable (this means standards-based protocols/APIs with tools for easy exporting/importing). Among other things. <br /> <br />Hopefully this will avoid another gotcha when another "public service" goes down. <br /> http://lightofdawn.org/blog/?viewDetailed=00188 New Fatdog64 is in the works Fatdog64'Linux It's that time of the year again. The time the bears wake up from hibernation. After being quiet for a few months, the gears start moving in Fatdog64 development. <br /> <br />Fatdog64 721 was released over 4 months ago. It was based on LFS 7.5, which was cutting edge back in 2014 (although some of the packages are younger, as they got updated in every release). <br /> <br />As I indicated earlier (in the 720 beta release, <a href=?viewDetailed=00178 target=_blank>here</a>), the 700 series is showing its age. Compared to previous series, the 700 series is actually the longest-running Fatdog series so far, bar none. <br /> <br />But everything that has a beginning also has an end. It's time to say goodbye to the 700 series and launch a new one. <br /> <br />The new series will be based on LFS 8.2 (the most recent as of today). This gives us glibc 2.27 and gcc 7.3.0. Some packages are picked up from the SVN version of BLFS, which is newer. <br /> <br />How far have we gotten with this new release? Well, as of yesterday, we've got Xorg 1.20.0 running, with the twm, xterm and oclock apps, from its build sandbox. <br /> <br /><a rel=prettyPhoto href=images/xorg-twm-xterm.png ><img rel=prettyPhoto src=thumbs/xorg-twm-xterm.png /></a> <br /> <br />Hardly inspiring yet, but if you knew the challenges we faced to get there, it's a great milestone. 
<br /> <br />As is usual with Fatdog64, however, it will be released when it is ready. So don't hold your breath yet. If 721 is working well for you, hang on to it (I do!). But at least you know that won't be the last you hear of this dog. <br /> <br /> <br /><hr> <br /> <br />On a special note, I'd like to say special thanks to "step" and "Jake", the newest members of the Fatdog64 team (and thus still full of energy - unlike us old timers hehe). While I have been shamelessly away from the forum for many reasons, "step" and "SFR" continue to support Fatdog64 users in the forum. My heartfelt thanks to both of them. <br /> <br />Of course, thanks also to the wonderful Fatdog64 users who continue to support each other. http://lightofdawn.org/blog/?viewDetailed=00187 Measure LED forward voltage using Arduino General Arduino is used for many things, including testing and measuring component values. <br /> <br />Somebody has made a resistance meter: <br /><a href=http://learningaboutelectronics.com/Articles/Arduino-ohmmeter.php target=_blank>http://learningaboutelectronics.com/Articles/Arduino-ohmmeter.php</a> <br /> <br />Another has made a capacitance meter: <br /><a href=https://www.arduino.cc/en/Tutorial/CapacitanceMeter target=_blank>https://www.arduino.cc/en/Tutorial/CapacitanceMeter</a> <br /> <br />Yet another has made an inductance meter: <br /><a href=https://foc-electronics.com/index.php/2017/12/06/how-to-measure-inductance-with-an-arduino/ target=_blank>https://foc-electronics.com/index.php/2017/12/06/how-to-measure-inductance-with-an-arduino/</a> <br /> <br />There is one missing: determining LED forward voltage. <br /> <br />LEDs come in a variety of colours, and these variations come from different materials and different doping densities. As a result, the forward voltage of these LEDs is also not the same - lower-energy-light LEDs (e.g. red) usually require less forward voltage than higher-energy-light LEDs (white or blue).
The only sure way to know is by reading the datasheet. <br /> <br />But what if you don't have the datasheet? Or you don't know which datasheet applies to some particular LEDs (e.g. LEDs you salvage from some old boards)? <br /> <br />The following Arduino circuit should help you. It helps you figure out the forward voltage of an LED. <br /> <br /><hr> <br /> <br /><b>Connections</b> <br /> <br /><img src=images/LED-fwd-voltage_schem.png /> <br /> <br /><a rel=prettyPhoto href=images/LED-fwd-voltage_bb.png ><img rel=prettyPhoto src=thumbs/LED-fwd-voltage_bb.png /></a> <br /> <br /><hr> <br /> <br /><b>Sketch</b> <br />Get the <a href=downloads/FindLedFwdVoltage2.ino target=_blank>sketch</a>. <br /> <br /><hr> <br /> <br /><b>Principle of operation</b> <br /> <br />Initially we have both D3 and D4 high (=5V). This charges the capacitor, and turns off the LED. <br /> <br />Then we drop both D3 and D4 low. The diode prevents the capacitor from bleeding off its charge through D3, so the only way it can discharge now is via the LED. <br /> <br />A0 measures the capacitor voltage. <br />A2 measures the series resistor voltage. <br />A0-A2 gives you the LED voltage. <br /> <br />In the ideal situation, you expect A0 and A2 to keep dropping until conduction suddenly stops, A2 becomes zero (because no more current flows through it), and then A0 will give you the LED forward voltage. <br /> <br />Of course, in the real world this does not happen. If you test the circuit you will find that the LED keeps giving out light even below its official forward voltage, and if you wait until the current is zero, the A0 voltage you get will be far below the nominal forward voltage. <br /> <br />So how do we know when to stop measuring? Well, most LEDs are usually specified to be "conducting" when they pass at least 5mA of current.
So when we detect the current across the resistor to be less than 5mA, we stop measuring and declare the A0-A2 of the last measurement as the forward voltage. <br /> <br />Oh, how do you get the LED current? The LED current is the same current that passes through its series resistor (ignoring current going out to A2). The current in the series resistor is simply its voltage (A2) divided by its resistance (130R). <br /> <br /><hr> <br /> <br /><b>Caveat</b> <br />The voltage-current relation of an LED is the same as any diode's - it's exponential. In other words, the forward voltage depends on the amount of current that flows (or better yet: the current that flows depends on the applied voltage). There is no single fixed "forward voltage"; the LED will actually conduct and shine (with varying brightness) at voltages lower or higher than the official forward voltage. <br /> <br />Ok, that helps. But how about forward current? <br /> <br />Typical LEDs use 20mA forward current. This is regardless of the colour or the forward voltage. So there you have it. Of course, the main exception to this rule is super-bright high-wattage LEDs, which are meant for room illumination or for torches. These can easily pass 100mA, and some can even crank up to 500mA or more. Forward voltages on these kinds of LEDs can vary a lot depending on whether you're passing 5mA or 500mA. The tester above won't work properly with these kinds of LEDs. <br /> <br /><b>FAQs</b> <br />Q1: Why pins D3 and D4? Not D8 or D9? <br />A1: Because I like it that way. You can change it, but be sure to change the code too. <br /> <br />Q2: Why analog pins A0 and A2? <br />A2: Because I like it that way too. Actually, that's because an earlier design used 3 analog pins, but later on I found out that one of them (on A1) isn't necessary; I'd already wired the circuit with A2, though, so it stays there. Of course you can change it, but remember to update the code too. <br /> <br />Q3: Why do you use 130R?
<br />A3: 130R is the series resistor you use for LEDs with a 2.4V forward voltage (green LEDs, usually), which is somewhere in the middle of the range of LED forward voltages. Plus, they're what I have laying around. <br /> <br />Q4: Why 470uF? <br />A4: That's what I have laying around too. You can use other values, but make sure they're not too small. <br /> <br />Q5: The diode - 1N4001 - you also use that because that's what you have laying around? <br />A5: Actually you can use any diode. In my circuit I actually used a 1N4007, because that's what I have laying around :) <br /> <br />And finally: <br />Q6: Why do you have separate D3 and D4? Since they will be brought HIGH and LOW at the same time, why not just use one pin? <br />A6: Yes, you can do it that way (remember to change the code). But using two pins makes it clearer what is happening. <br /> <br /> http://lightofdawn.org/blog/?viewDetailed=00186 Spectre on Javascript? Linux'General The chaos caused by Spectre and Meltdown seems to have quietened down. Not because the danger period is over, but, well, there is other news to report. As far as I know the long tail of the fix is still on-going, and nothing short of a hardware revision can really fix them without the obligatory reduction in performance. <br /> <br />Anyway. <br /> <br />Among those who quickly released a fix were the web browser vendors. And the fix was to "reduce the granularity of performance timers" (in Javascript), because with high-precision timers, it is possible to do a Spectre-like timing attack. <br /> <br />This, I don't understand. How could one perform a Spectre or even Spectre-like timing attack using Javascript? Doesn't a Javascript program run in a VM? How would it be able to access its host memory by linear address, let alone by physical address? I have checked wasm too - while it does have pointers, a wasm program is basically an isolated program that lives in its own virtual memory space, no?
<br /> <br />In other words - the fix is probably harmless, but could one actually perform a Spectre or Spectre-like attack using browser-based Javascript in the first place? <br /> <br />That is still a great mystery to me. Maybe one day I will be enlightened. http://lightofdawn.org/blog/?viewDetailed=00185 Spectre and Meltdown Linux'General Forget about the old blog posts for now. <br /> <br />Today the hot item is Spectre and Meltdown. It's a class of vulnerabilities caused by CPU bugs that allows an adversary to steal sensitive data, even without any software bugs. Nice. <br /> <br />Everyone and his dog is talking about it, offering their opinions and such. Thusly, I feel compelled to offer my own. <br /> <br />Mind you, I'm not a CPU engineer, so don't take this as infallible. In fact, I may be totally wrong about it. So treat it like how you treat any other opinion - verify and cross-check with other sources. That being said, I've done some research on it myself, so I expect I haven't fooled myself too much :) <br /> <br /><hr> <br /><b>Overview</b> <br /> <br />There are 3 kinds of vulnerabilities: Spectre 1, Spectre 2, and Meltdown. <br /> <br />In very simplified terms, this is how they work: <br />1. <span class=itr>Spectre 1</span> - using speculative execution, leak sensitive data via cache timing. <br />2. <span class=itr>Spectre 2</span> - by poisoning the branch prediction cache, make #1 more likely to happen. <br />3. <span class=itr>Meltdown</span> - an application of Spectre 1: read kernel-mode memory from non-privileged programs. <br /> <br /><hr> <br /><b>How they work</b> <br /> <br />So how exactly do they work?
<a href=https://googleprojectzero.blogspot.com.au/2018/01/reading-privileged-memory-with-side.html target=_blank>https://googleprojectzero.blogspot.com.au/2018/01/reading-privileged-memory-with-side.html</a> gives you the super details of how they work, but in a nutshell, here it is: <br /> <br /><span class=itr>Spectre 1</span> - Speculative execution is a phantom CPU operation that supposedly does not leave any trace. And if you view it from the CPU's point of view, it really doesn't leave any trace. <br /> <br />Unfortunately, that's not the case when you view it from outside the CPU. From outside, a speculative execution looks just like normal execution - peripherals can't differentiate between them, and any side effects will stay. This is well known, and CPU designers are very careful not to perform speculative executions when dealing with the external world. <br /> <br />However, there is one peripheral that sits between the CPU and the external world - the RAM cache. There are multiple levels of RAM cache (L1, L2, L3); some of these belong to the CPU (as in, located in the same physical chip), some are external to the CPU. In most designs, however, the physical location doesn't matter: wherever they are, these caches aren't usually aware of the difference between speculative and normal execution. And this is where the trouble is: because the RAM cache is unable to differentiate between the two, <i>any execution</i> (normal or speculative) will leave an imprint in the RAM cache - certain data may be loaded into or removed from the cache. <br /> <br />Although one cannot read the contents of the RAM cache directly (that would be too easy!), one can still infer information by checking whether a certain piece of data is inside the RAM cache or not - by timing accesses to it (if it's in the cache, the data is returned fast; otherwise it's slow).
<br /> <br />And that's how Spectre 1 works - by doing tricks to control speculative execution, one can perform an operation which normally isn't allowed; the operation leaves a RAM cache imprint, which can then be checked to gain some information. <br /> <br /><span class=itr>Spectre 2</span> - Just like the memory cache and speculative execution, branch prediction is a performance-improvement technique used by CPU designers. Most branches will trigger speculative execution; branch prediction (when the prediction is correct) makes that speculation run as short as possible. <br /> <br />In addition, certain memory-based branches ("indirect branches") use a small, in-CPU cache to hold the locations of the previous few jumps; these are the locations from which speculative execution will be started. <br /> <br />Now, if you can fill this branch prediction cache with bad values (="poisoning" it), you can make the CPU perform speculative execution at the wrong location. Also, by making the branch prediction err most of the time, you make that speculative execution longer-lived than it should be. Together, they make it much easier to launch a Spectre 1 attack. <br /> <br /><span class=itr>Meltdown</span> - is an application of Spectre 1 to attempt to read data from privileged and protected kernel memory, by a non-privileged program. Normally this kind of operation will not even be attempted by the CPU, but when running speculative execution, some CPUs "forget" to check for privilege separation and just blindly do what they are asked to do. <br /> <br /><hr> <br /><b>Impact</b> <br /> <br />Anything that allows non-privileged programs to read and leak information from protected memory is bad. <br /> <br /><hr> <br /><b>Mitigation Ideas</b> <br /> <br />Addressing these vulnerabilities - especially Spectre - is hard because the cause of the problem is not a single architecture or CPU bug or anything like it - it is tied to the concept itself.
<br /> <br />Speculative execution, memory cache, and branch prediction are all related. They are time-proven performance-enhancing techniques that have been employed for decades (in the consumer microprocessor world, Intel was first with their Pentium CPU back in 1993 - that's 25 years ago as of this time of writing). <br /> <br /><span class=itr>Spectre 1</span> can be stopped entirely if speculative execution does not impact the cache (or if the actions to the cache can be undone once speculative execution is completed). But that is a very expensive operation in terms of performance. By doing that, you more or less lose the speed gain you get from speculative execution - which means you may as well not bother with speculative execution in the first place. <br /> <br /><span class=itr>Spectre 2</span> can be stopped entirely if you can enlarge the branch prediction cache so poisoning won't work. But there is a physical limit on how large the branch cache can be before it slows down and loses its purpose as a cache. <br /> <br />Alternatively, it can again be stopped in its entirety if you disable speculative execution during branching. But that's what branch prediction is for, so if you do that, you may as well drop the branch prediction too. <br /> <br /><span class=itr>Meltdown</span>, however, is easier to work out. We just need to ensure that speculative execution honours the memory protection too, just like normal execution. Alternatively, we make the kernel memory totally inaccessible from non-privileged programs (not by access control, but by mapping it out altogether). <br /> <br /><hr> <br /><b>Mitigation In Practice</b> <br /> <br /><span class=itr>Spectre 1</span> - There is no fix available yet (no wonder, this is the most difficult one). <br /> <br />There are clues that some special memory barrier instructions (i.e. LFENCE) can be modified (perhaps by microcode update?)
to stop speculative execution, or at least remove the RAM cache imprint by undoing cache loading during speculative execution, on demand (that is, when that LFENCE instruction is executed). <br /> <br />However, even when it is implemented (it isn't yet at the moment), this is a piecemeal fix at best. It requires patches to be applied to compilers, or more importantly to any program capable of generating code or running interpreted code from an untrusted source. It does not stop the attack fully, but only makes it more difficult to carry out. <br /> <br /><span class=itr>Spectre 2</span> - Things are a bit rosier in this department. The fix is basically to disable speculative execution during branching. This can be done in two ways. In software, it can be done by using a technique called "retpoline" (you can google that) - which basically lets speculative execution chase its own tail (=thus effectively disabling it). In hardware, this can be done by the CPU exposing controls (via microcode update) to temporarily disable speculative execution during branching, and the software then making use of those controls. <br /> <br />Retpoline is available today. The microcode update is <i>presumably</i> available today for certain CPUs, and the Linux kernel patches that make use of those branch controls are also available today. However, none of them have been merged into mainline yet. (Certain vendor-specific kernel builds already have these fixes, though). <br /> <br />Remember, the point of Spectre 2 is to make it easier to carry out Spectre 1, so fixing Spectre 2 makes Spectre 1 less likely to happen - hopefully to the point of making it irrelevant. <br /> <br /><span class=itr>Meltdown</span> - This is where the good news finally is. The fix can be done, again, via CPU microcode update, or by software.
Because it may take a while for that microcode update to happen (or it may not happen at all), the kernel developers have come up with a software fix called KPTI - Kernel Page Table Isolation. With this fix, kernel memory is completely hidden from non-privileged programs (that's what "isolation" stands for). This works, but at a very high cost in performance: it is reported to be 5% at minimum, and may go to 30% or more. <br /> <br /><hr> <br /> <br /><b>Affected CPUs</b> <br /> <br />Everyone has a different view on this, but here is my take. <br /> <br /><span class=itr>Spectre 1</span> - All out-of-order superscalar CPUs (no matter what architecture or vendor or make) from the Pentium Pro era (ca 1995) onwards are susceptible. <br /> <br /><span class=itr>Spectre 2</span> - All CPUs with branch prediction that uses a cache (aka "dynamic branch prediction") are affected. The exact techniques to carry out a Spectre 2 attack may differ from one architecture to another, but the attack concept is applicable to all CPUs of this class. <br /> <br /><span class=itr>Meltdown</span> - Certain CPUs get it right and honour memory protection even during speculative execution. These CPUs don't need the above KPTI patches and are not affected by Meltdown. Some say that CPUs from AMD are not affected by this; but with so many models involved it's difficult to be sure. <br /> <br /><hr> <br /> <br />So that's it. It does not sound very uplifting, but at least you get a picture of what you're going to have for the rest of 2018. And the year has just started ... <br /> <br />EDIT: If you don't understand some of the terms used in this article, you may want to check <a href=https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/ target=_blank>this excellent article</a> by Eben Upton. http://lightofdawn.org/blog/?viewDetailed=00184 Old blog posts General Long before time began, I had a blog. It was on a shared blogospace.
I had long forgotten about it, but a few days ago I remembered it and visited the site. To my surprise, it still exists; my old posts are still there. As if time stood still. <br /> <br />I tried to log in to that site, but Google wouldn't let me. I used a Yahoo email for the login id, and I haven't accessed that email account for ages. When I tried to, it wouldn't recognise my password. In the light of Yahoo's massive data breach a couple of years ago, this isn't surprising. I tried to recover the account using my other emails, but that didn't work either. Well, that's too bad, but then I wouldn't have expected an abandoned blog to exist at all. <br /> <br />What I am going to do instead is scrape the text off that blog, and re-post some of the more interesting ones here. There are some unfinished posts there too; for those whose subject I still remember, I will publish the complete version here too. <br /> <br /> http://lightofdawn.org/blog/?viewDetailed=00183 Fatdog64 721 is Released Fatdog64'Linux In the light of the recent Spectre and Meltdown fiasco, the Linux kernel team has released patches to sort of work around the problem. <br /> <br />It's not free; you will get a performance hit anywhere from 5% to 30% depending on the kind of apps that you use (more if you use virtual machines), but at least you're protected. <br /> <br />We have released Fatdog64 721 with an updated kernel (4.14.12) that comes with this workaround. <br /> <br />You can, however, decide to risk it and not use the workaround, by adding the "<b>pti=off</b>" boot parameter. You'd better know what you're doing if you do that, though. <br /> <br />Apart from that, this release also supports microcode updates and hibernation. We've bundled the latest microcode from both Intel (dated 8 Jan 2018) and AMD (latest from linux-firmware as of 10 Jan 2018); however it is unclear whether any of them address the problem.
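For illustration only - the parameter is simply appended to the kernel command line in your bootloader entry. The menu entry and file paths below are hypothetical (a GRUB2-style sketch); adapt them to whatever bootloader and layout you actually use:

```
# Hypothetical GRUB2 menu entry; paths and other parameters are examples only.
menuentry "Fatdog64 721 (KPTI disabled - at your own risk)" {
    linux /vmlinuz pti=off
    initrd /initrd
}
```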
<br /> <br /><hr> <br /><a href=http://distro.ibiblio.org/fatdog/web/721.html target=_blank>Release Notes</a> <br /><a href=http://www.murga-linux.com/puppy/viewtopic.php?t=112372 target=_blank>Announcement (same announcement as 720).</a> <br /> <br />Get it from the usual locations: <br /><a href=http://distro.ibiblio.org/fatdog/iso// target=_blank>Primary site - ibiblio.org (US)</a> <br /><a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso/ target=_blank>nluug.nl - European mirror</a> <br /><a href=http://mirror.aarnet.edu.au/pub/fatdog/iso/ target=_blank>aarnet.edu - Australian mirror</a> <br /><a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso/ target=_blank>uoc.gr - European mirror</a> <br /> <br /> http://lightofdawn.org/blog/?viewDetailed=00182 How to destroy FOSS from within - Part 4 Linux'General This is the fourth installment of the article. <br />In case you missed them, these are <a href=?viewDetailed=00168 target=_blank>part one</a>, <a href=?viewDetailed=00172 target=_blank>part two</a> and <a href=?viewDetailed=00177 target=_blank>part three</a>. <br /> <br />I originally planned to finish this series of articles at the end of last year, so we could start 2018 on a more uplifting note - but I didn't have enough time, so here we are. Anyway, we already started 2018 with the <a href=https://www.theregister.co.uk/2018/01/04/intel_amd_arm_cpu_vulnerability/ target=_blank>biggest security compromise ever</a> (CPU-level memory protection can be broken even without any kernel bugs, and the kernel memory of any OS in the last 20 years can be read by userspace programs) - one more piece of bad news cannot make it worse. <br /> <br />And now, for the conclusion. <br /> <br /><hr> <br /> <br />By now you should already see how easy it is to destroy FOSS if you have money to burn.
<br /> <br />From <a href=?viewDetailed=00172 target=_blank>Part 2</a>, we got the conclusion that <span class=itg>"a larger project has more chance of being co-opted by someone who can throw money to get people to contribute"</span>. This is the way to co-opt the project from the bottom up - by paying people to actively contribute and slowly redirect the project in the direction of the sponsor. <br /> <br />From <a href=?viewDetailed=00177 target=_blank>Part 3</a>, we got the conclusion that <span class=itg>"direction of the project is set by the committers, who are often selected either at the behest of the sponsor, or by virtue of being active contributors"</span>. This is the way to co-opt the project from the top down - you plant people who will slowly rise to the rank of committer. Or you can just become a "premium contributor" by donating money and stuff and instantly get the right to appoint a committer; and when you have them in charge, simply reject contributions that are not part of your plan. Or, if you don't care about being subtle, simply <span class=emr>"buy off"</span> the current committers (= employ them). <br /> <br />In both cases, people can revolt by forking, but if they don't have the numbers, the fork will be futile because: <br />a) it will be short-lived <br />b) it will be stagnant <br />and in either case, people will continue to use the original project. <br /> <br />It's probably not the scenario you'd like to hear, but that's how things unfold in reality. <br /> <br /><hr> <br /> <br />In case you think that this is all bollocks, just look around you. <br /> <br />Look around the most important and influential projects. <br /> <br />Look at their most active contributors. <br /> <br />Ask yourself why they are contributing, and who employs them. <br /> <br />Then look at the direction these people have taken. Look very, very closely.
<br /> <br />Already, a certain influential SCM system used to manage a certain popular OS is now more comfortable to run on a foreign OS than on the OS it was originally developed on (and is used to manage). <br /> <br />Ask yourself how this can be. "Oh, it's because we have millions of downloads for that foreign OS, so that foreign OS is now considered a top-tier platform and we have to support that platform" (to the extent that we treat the original OS platform as 2nd tier and avoid using native features which cannot be used on that foreign OS, because, well, millions of downloads). Guess what? The person who says that works for the company that makes that foreign OS. And not only that, he's got the influence, because, well, there are a lot of "contributors" coming from where he works. <br /> <br />What's next? bash cannot use "fork()" because a foreign OS does not support fork()? <br /> <br />Who pays the people who work on systemd? Who pays the people who work on GNOME? Who pays the people who work on KDE? Who pays the people who work on Debian? Who are the members of the Linux Foundation? You think these people work out of the kindness of their hearts for the betterment of humanity? Some of them certainly do. Some, however, work for the betterment of themselves - FOSS be damned. <br /> http://lightofdawn.org/blog/?viewDetailed=00181 Fatdog64 720 Final is released Fatdog64'Linux Fatdog64 720 Final was released on 20 December 2017, after about three weeks of beta (720 beta was announced <a href=?viewDetailed=00178 target=_blank>here</a>). <br /> <br />It was hectic before Christmas, so I didn't get to announce it here in my blog.
In fact, Barry Kauler (original author of Puppy Linux) <a href=http://bkhome.org/news/201712/fatdog64-720-final-released.html target=_blank>announced it earlier</a> than I did <img src=images/smilies/teeth.gif /> - which is quite a tribute for us <img src=images/smilies/happy.gif /> <br /> <br />There aren't many changes between this and the beta, other than a few bug fixes - as I said earlier, 720 beta was actually quite stable. <br /> <br />One new "feature" made it in: 720 now comes with two initrds (dual-initrds) - the first one is the usual huge initrd, and the second one is a very small initrd (around 3.5MB) with the ability to "load" the larger initrd. This was a suggestion from forum member LateAdopter, which we "adopted" <img src=images/smilies/teeth.gif /> <br /> <br />Why the need for that? Some people have been complaining about the slow booting speed of Fatdog64 due to its huge initrd. There are many reasons for this slowness, but it's mainly because of: <br />a) old BIOSes <br />b) old bootloaders (grub4dos/grub-legacy) <br />c) booting from a modern, large filesystem such as ext4 with a size over 16GB. <br /> <br />This particular combination is especially toxic - bootloaders usually use BIOS calls to get data from the disk, and old bootloaders don't understand new filesystems well, so while they can load from them, they only do it very, very slowly. <br /> <br />Enter the new "nano-initrd" (as I call it), to the rescue. The small initrd will be loaded fast enough by the bootloader, and then the Linux kernel takes over and loads the huge initrd - using modern, optimised code. So booting remains fast. <br /> <br />However, nothing comes for free. It's basically a stripped-down initrd (as explained <a href=/wiki/wiki.cgi/MinimalFatdogBoot target=_blank>here</a>), so along with the cut-down in size, a lot of other stuff had to be sacrificed too. Don't expect the nano-initrd to be able to boot from exotic environments.
<br /> <br /><hr> <br /><a href=http://distro.ibiblio.org/fatdog/web/720.html target=_blank>Release Notes</a> <br /><a href=http://www.murga-linux.com/puppy/viewtopic.php?t=112372 target=_blank>Forum announcement</a> <br /> <br />Get it from the usual locations: <br /><a href=http://distro.ibiblio.org/fatdog/iso// target=_blank>Primary site - ibiblio.org (US)</a> <br /><a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso/ target=_blank>nluug.nl - European mirror</a> <br /><a href=http://mirror.aarnet.edu.au/pub/fatdog/iso/ target=_blank>aarnet.edu - Australian mirror</a> <br /><a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso/ target=_blank>uoc.gr - European mirror</a> <br /> http://lightofdawn.org/blog/?viewDetailed=00180 How to create Nvidia driver SFS for Fatdog and Puppy Fatdog64'PuppyLinux'Linux If you need to use the Nvidia driver (instead of the open-source nouveau driver), I've written up the steps to prepare the driver SFS yourself. <br /> <br />I wrote this article because the Nvidia driver is sensitive to kernel changes; each kernel change requires a rebuild of the driver. And we usually don't provide the nvidia driver for beta releases. <br /> <br />Also, there are variations of the nvidia driver (long term, short term, legacy, etc) supporting different cards. Creating a driver for each variation, and re-creating them every time the kernel changes, takes a lot of time. <br /> <br />So I've published the way for you to do it yourself. The steps enable you to create the SFS yourself, or, if you can't be bothered with the SFS, they will install the driver directly for you. <br /> <br />As a bonus, it should work on recent Puppy Linux too. <br /> <br />The instructions are <a href=/wiki/wiki.cgi/CreateNvidiaDriverSFS target=_blank>here</a>. <br /> <br />Note: this article is an update of the original instructions I wrote <a href=http://www.murga-linux.com/puppy/viewtopic.php?p=975088#975088 target=_blank>here</a> (which were XenialPup64-specific).
http://lightofdawn.org/blog/?viewDetailed=00179 I accidentally removed glibc Fatdog64'Linux I accidentally removed glibc. <br /> <br />I was running the Fatdog build process and I wanted to remove glibc from its chroot. <br /> <br />The correct command to do that was this: <pre class=code><code class=highlight>ROOT=chroot removepkg glibc32 glibc</code></pre> <br />but I typed it the wrong way: <pre class=code><code class=highlight>removepkg ROOT=chroot glibc32 glibc</code></pre> <br /> <br />This has the unintended effect of attempting to remove a <span class=itb>ROOT=chroot</span> package <br />(which didn't exist), and then <span class=itr>glibc32 and glibc</span>. Of course the removal wasn't fully successful, but the dynamic linker <span class=emb>/lib64/ld-linux-x86-64.so.2 </span>was deleted, and that's enough to stop almost anything. <br /> <br />In a normal distro this would probably require an immediate re-install. <br /> <br />In a Puppy-like distro (including Fatdog), all you need to do is boot pristine, disregarding any savefile/savefolder (<span class=itb>pfix=ram</span> for Puppies and <span class=itg>savefile=none</span> for Fatdog), and then clean up the mess created by the accidental deletion. This is usually done by deleting the whiteouts, so glibc can "show up" again in the layered filesystem. <br /> <br />But I was in the middle of something and I really didn't want to reboot and abandon what I was doing. What to do? I still had a few terminals open; was there anything I could do to salvage the situation? <br /> <br />Fortunately, Fatdog has a failover mechanism for situations like this. <br /> <br />Fatdog has a static busybox located in <span class=itg>/aufs/pup_init/bin/busybox</span>. This busybox is linked with a complete set of applets, with its shell (ash) compiled to prefer internal busybox applets over external commands.
<br /> <br />By running its shell <pre class=code><code class=highlight>/aufs/pup_init/bin/busybox ash</code></pre> <br />I was back in a working shell, and I could do "ls" and other things as needed, because the busybox is fully static and doesn't need glibc. <br /> <br />From there, I then ran Fatdog's whiteout clean-up script <pre class=code><code class=highlight>sh fatdog-clean-whiteout.sh</code></pre> <br />which ran nicely because busybox has enough applets to support it. This removes the whiteouts, in effect undoing the deletion. <br /> <br />But trying to do "ls" in another terminal still indicated that glibc wasn't installed yet. This is because aufs, the layered filesystem, isn't aware that we have "updated" its layer behind its back. All we need to do is tell it to re-evaluate its layers. <br /> <br />This can be done by running (from the terminal that runs the static busybox shell) this command <pre class=code><code class=highlight>mount -i -t aufs -o remount,udba=reval aufs /</code></pre> <br />Once this is done, the system is back to life, and the project is saved. <br /> http://lightofdawn.org/blog/?viewDetailed=00178 Fatdog64 720 Beta is Released Fatdog64'Linux The next release of Fatdog64 is finally here! <br /> <br />Well, the beta version at least. I actually think this is the next stable release. We have been running it for weeks ourselves, but because we have made so many changes, it's good to treat it as a beta and test it on a wider audience. <br /> <br />A lot of improvements since the last release; lots of package updates, and lots of fixes too. However, it still uses 710 as the base. <br /> <br />We plan to follow this one up with a Final soon, hopefully before Christmas. <br /> <br />What's next? <br /> <br />Once it goes final, it will probably be sunset for the 700 series. While 720 is running very well, it is showing its age. Some binary packages refuse to run on it, demanding a newer glibc, for example.
<br /> <br />The decision isn't final yet, and the 800 series probably isn't going to be started very soon (we all need to catch our breaths). Meanwhile, enjoy it while you can. <br /> <br /><a href=http://distro.ibiblio.org/fatdog/web/720b.html target=_blank>Release Notes</a> <br /><a href=http://www.murga-linux.com/puppy/viewtopic.php?t=112220 target=_blank>Forum announcement</a> <br /> <br />Get it from the usual locations: <br /><a href=http://distro.ibiblio.org/fatdog/iso/720b/ target=_blank>Primary site - ibiblio.org (US)</a> <br /><a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso/720b target=_blank>nluug.nl - European mirror</a> <br /><a href=http://mirror.aarnet.edu.au/pub/fatdog/iso/720b target=_blank>aarnet.edu - Australian mirror</a> <br /><a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso/720b target=_blank>uoc.gr - European mirror</a> <br /> http://lightofdawn.org/blog/?viewDetailed=00177 How to destroy FOSS from within - Part 3 Linux'General This is the third installment of the article. <br />In case you missed them, these are <a href=?viewDetailed=00168 target=_blank>part one</a> and <a href=?viewDetailed=00172 target=_blank>part two</a>. <br /> <br /><hr> <br /> <br />In the previous post, I stated that the direction of a FOSS project is set by two groups of people: <br />a) People who work on the project, and <br />b) People who are allowed to work on the project. <br /> <br />We have examined (a) in <a href=?viewDetailed=00172 target=_blank>part two</a>, so now let's examine (b). <br /> <br />Who are the people allowed to work on the project? Isn't everyone allowed? The answer is a solid, resounding "NO". <br /> <br />Most FOSS projects, if they have more than one contributor, tend to use some sort of source code management (SCM) system.
In a typical SCM system, there are two classes of users: users with commit rights, who can modify the code in the SCM (committers), and read-only users, who can read and check out code from the SCM but cannot change anything. <br /> <br />In most FOSS projects, the number of committers is much smaller than the number of read-only users (potentially anyone in the world with enough skill is a read-only user if the SCM system is open to the world, e.g. if you put the code in a public SCM repository such as github). <br /> <br />The committers don't necessarily write code themselves. Some of them do; some of them just act as "gatekeepers": they receive contributions from others, vet and review the changes, and "commit" them (=update the code in the SCM) when they think the contribution has met certain standards. <br /> <br />Why does it matter? Because eventually these committers are the ones who decide the direction of the project, by virtue of deciding what kind of changes are accepted. <br /> <br />For example, I may be the smartest person in the world, I may be the most prolific programmer or artist in the world; but if the committers of the project I want to contribute to don't accept my changes (for whatever reason), then for all practical purposes I may as well not exist. <br /> <br /><hr> <br /> <br />Hang on, you say, FOSS doesn't work this way. I can always download the source (or clone the repo) and work on it on my own! No approval from anybody is required! <br /> <br />Yes, you can always do that, but that's you doing the work privately. That's not what I mean. As far as the project is concerned, as far as the people who use that project are concerned, if your patches aren't committed back to the project mainline, then you're not making any changes to the project. <br /> <br />But hey, wait a minute here, you say. That's not the FOSS I know.
The FOSS I know works like this: if they don't let me commit these large gobs of code that I've written, what's stopping me from just publishing my private work and making it public for all to see and use? In fact, some FOSS licenses even require me to do just that! <br /> <br />Oh, I see. You're just about to cast the most powerful mantra of all: "just.fork.it", aren't you? <br /> <br /><hr> <br /> <br />I regret to inform you that you have been misinformed. While the mantra is indeed powerful, it unfortunately does not always work. <br /> <br />Allow me to explain. <br /> <br />A fork usually happens when people disagree with the committers on the direction they take. <br /> <br />Disagreement happens all the time; it's only when it is not reconcilable that a fork happens. <br /> <br />But the important question is: what does the forking accomplish in the end? <br /> <br />Personally, I consider a fork to be successful if it meets one of two criteria: <br /> <br />a) the fork flourishes and develops into a separate project, offering an alternative to the original project. <br /> <br />b) the fork flourishes and the original project dies, proving that the people behind the original project have lost their sight and bet on the wrong direction. <br /> <br />In either case, for these to happen, we must have enough skilled people to back the fork. The larger and more complex the project, the more skilled people must revolt and stand behind the fork. It's a game of numbers; if you don't have the numbers, you lose. Even if you're super smart, you only have 24 hours a day, so chances are you can never single-handedly fork a large-scale project. <br /> <br />In other words, the "just.fork.it" mantra does not always work in the real world; in fact, it mostly doesn't. <br /> <br />Let's examine a few popular forks and see how well they did. <br /> <br />1. LibreOffice (fork of OpenOffice).
This is a successful fork, because most of the original developers of OpenOffice switched sides to LibreOffice. The original project is dying. <br /> <br />2. eglibc (fork of glibc). Same story as above. Eventually, the original "glibc" folds, and the eglibc fork is officially accepted as the new "glibc" by those who own the "glibc" name. <br /> <br />3. DragonflyBSD (fork of FreeBSD). Both the fork and the original survive, and they grow separately to offer different solutions to the same problem. <br /> <br />4. Devuan (fork of Debian). The fork has existed for about two years now; the jury is still out on whether it will be successful. <br /> <br />5. libav (fork of ffmpeg). The fork fails; only Debian supported it and it is now dying. <br /> <br />6. cdrkit (fork of cdrtools). The fork fails; the fork stagnates while the original continues. <br /> <br />7. OEM Linux kernel (fork of Linux kernel). There are a ton of these forks; each ARM CPU maker and ARM board maker effectively has one. They mostly failed; the forks didn't advance beyond the original patching to support the OEM. That's why so many Android devices are stuck on 3.x kernels. Only one or two are successful, and those that are, are merging their changes back into the mainline - and will eventually vanish once the integration is done. <br /> <br />8. KDE Trinity (fork of KDE). It's not a real fork per se, but more of a continued maintenance of KDE 3.x. It fails; the project is dying. <br /> <br />9. MATE desktop (fork of GNOME). Same as Trinity, MATE is not a real fork per se, but a continued maintenance of GNOME 2.x. I'm not sure about the future of this fork. <br /> <br />10. Eudev (fork of systemd-udev). The fork survives, but I'd like to note that the fork is mostly about separating "udev" from "systemd"; it is not about going in a separate direction, implementing new features, etc. Its long-term survivability is questionable too, because only 2 people maintain it.
Plus, it is only used by a few distributions (Gentoo is the primary user, but there are others too - e.g. Fatdog). <br /> <br />11. GraphicsMagick (fork of ImageMagick). The fork survives as an independent project, but I would say it fails to achieve its purpose: it doesn't have much impact - most people only know about ImageMagick and prefer to use it instead. <br /> <br />I think that's enough examples to illustrate that in most cases, your fork will **probably** survive only if you have the numbers. If you don't, then the fork will either die off, or will have no practical impact as people continue to use the original project. <br /> <br />In conclusion: the mantra of "just.fork.it" is not as potent as you thought it would be. <br /> <br />As such, the direction of a project is mostly set by its committers. Different projects have different policies on how committers are chosen, but in many projects the committers are chosen based on: <br />a) a request by the project's (financial) sponsor, and/or <br />b) meritocracy (read: do-ocracy) - how much contribution he/she has made before. <br /> <br />But remember what I said about <a href=?viewDetailed=00172 target=_blank>do-ocracy</a>? http://lightofdawn.org/blog/?viewDetailed=00176 Fatdog Update Fatdog64'Linux Well, I'm still here. I've been busy with life, moving houses, making arrangements, etc. Too many things to do, too little time. I won't bore you with all those mundane things, since most probably you're here for Fatdog. <br /> <br />Anyway. <br /> <br />Fortunately for all of us Fatdog64 lovers, things have not been so quiet for Fatdog64 under the hood. Our two new members, "SFR" and "step", have been busy at work - bug fixes, package updates, package rollbacks when the updates don't work :), package replacements, etc. You will find them in the Forum as well, helping other people.
<br /> <br />I would say that recruiting them was the best decision we have made - the dynamic works well between us, so discussion is always productive. <br /> <br />In fact, we're nearing a release now. To be accurate, however, we have been "nearing a release" for a few months now - there are so many changes we'd like to share with you, but there is always "one more thing we would like to do before release to make it better" - and then it's back to the kitchen <img src=images/smilies/teeth.gif />. So this release may happen soon, or maybe a bit later (or a lot later) - cross your fingers! <br /> <br />But seriously, all in all, things are looking good on the Fatdog64 side. The team has made lots of exciting improvements. As usual, it may not be perfect, but there is always the next release <img src=images/smilies/teeth.gif />. <br /> <br />Things have not been going so well on the ARM front. I'm really the only one who works on FatdogArm, and my lack of time to do anything with it means it gets left behind; and it shows. No new platforms supported, packages not updated ... although, all in all, it still runs pretty well for an aged OS. <br /> <br />Well, that's about it for now. As for my FOSS article, I have published two parts. It's actually a four-parter, so there are two more parts to publish ... I'll get that done very soon. <br /> <br />Cheerios everyone. http://lightofdawn.org/blog/?viewDetailed=00175 Fatdog64 build recipes Fatdog64'Linux I've just uploaded the build recipes for all the official packages of Fatdog64. They are available <a href=http://distro.ibiblio.org/fatdog/source/710/recipes/ target=_blank>here</a>. <br /> <br />They are tarballs, containing the recipe proper and other supporting files such as patches, desktop files, icons, etc.
<br /> <br />They were previously available inside the binary packages (every official Fatdog binary package contains the build recipe tarball); but to make it easier for people to search and re-use them, we have decided to extract them and upload them to a separate place. <br /> <br />The recipe itself is just a shell script, to be used with Fatdog's pkgbuild system. If you want to use it to build as-is, you need that build system, which you can get from <a href=http://distro.ibiblio.org/fatdog/source/710/recipes/fatdog-pkgbuild-710.0-noarch-1.tar.gz target=_blank>here</a>. Warning: it is only tested to work on Fatdog. However, if you just want to examine how the build is done, you can just look at the recipe - it's simple enough to understand. <br /> <br />Note: If you're already on Fatdog64, don't bother getting that; pkgbuild is already included as part of Fatdog's devx. <br /> <br />These build recipes will be updated from time to time, but I can't guarantee the "freshness" of any of them. And oh, they come totally unsupported - feel free to use them as you see fit, but the risk is all yours. And while I'd be glad to hear suggestions and/or receive patches for them, please don't come to me for support. My hands are already full with other things. http://lightofdawn.org/blog/?viewDetailed=00174 Real-time Kernel for Fatdog64 710 Fatdog64'Linux I have built and uploaded a real-time kernel for Fatdog64. <br /> <br />It's based on Linux 4.4.52 - the latest as of today, from the same branch as the 710 kernel (4.4.35), and one of the LTS (long-term support) versions - patched with the 4.4.50-rt63 patches. <br /> <br />I could only manage the "Basic RT" (PREEMPT_RTB) configuration. This is somewhere between the "low-latency" and "fully preemptible" configurations. I tried the "fully preemptible" (PREEMPT_FULL) configuration, but while it gave me a kernel binary, it didn't work satisfactorily --- too many lockups at unpredictable times.
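For reference, these preemption levels (and the two extra tweaks described below) are just kernel .config choices. An illustrative fragment follows - the symbol names are as I recall them from the 4.4-era -rt patchset, so verify them against your own "make menuconfig" before relying on this:

```
# Illustrative only - confirm the exact symbol names in menuconfig.
CONFIG_PREEMPT_RTB=y        # "Basic RT", the level used for this build
# CONFIG_PREEMPT_RT_FULL=y  # "fully preemptible"; built, but locked up here
CONFIG_HZ_1000=y            # 1000 Hz timer, for lower-latency audio work
CONFIG_HZ=1000
CONFIG_SND_HDA_HWDEP=y      # hwdep interface needed by HDA Analyzer
```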
<br /> <br />It has been a very long time since I built an RT kernel that could run in a fully preemptible manner (the last one was probably around the Linux 3.4 days). The RT patches aren't always stable either; depending on the kernel version they can be good, okay, or just bad; so I suppose, for today, this is the best I can get. <br /> <br />Apart from changing the pre-emption level to PREEMPT_RTB, I made two more (unrelated) changes: <br />- I increased the timer frequency to 1000 Hz. <br />- I added HDA_HWDEP support. <br /> <br />The first change was made because I plan to use the RT kernel for some audio work that requires lower latency and higher timer resolution. <br /> <br />The second was made because I hoped that by tweaking the codec's amplifier using <a href=http://www.alsa-project.org/main/index.php/HDA_Analyzer target=_blank>HDA Analyzer</a> (which requires HDA_HWDEP support) I could make my laptop speaker louder; but that turned out to be wishful thinking. <br /> <br />Anyway, enjoy. If you need a guide on how to use the new kernel, look <a href=http://distro.ibiblio.org/fatdog/web/faqs/kernel.html target=_blank>here</a>. There is a newer way to test kernels without having to do all of the above, but its guide hasn't been written yet. I'll write it when I have time (and motivation) - basically, you use the "extrasfs" boot parameter to load kernel-modules.sfs instead of replacing the kernel modules inside your initrd. http://lightofdawn.org/blog/?viewDetailed=00173 Fatdog64 is now listed in Distrowatch Fatdog64'Linux I was notified of this a while ago, but because of my other stuff I forgot to announce it here. <br /> <br /><a href=http://distrowatch.com target=_blank>Distrowatch</a> is basically a site that monitors various Linux distributions and their updates, as well as news about what's new, what's coming up, and other interesting stuff about Linux distributions. If you haven't been there already, you should check it out.
<br /> <br />Fatdog64 had been recommended to Distrowatch for quite a while, languishing in the "submission queue" for years. Apparently this year is the year - we are finally listed there: <a href=http://distrowatch.com/fatdog target=_blank>http://distrowatch.com/fatdog</a>. <br /> <br />Yay! <br /> <br /> http://lightofdawn.org/blog/?viewDetailed=00172 How to destroy FOSS from within - Part 2 Linux'General This is the second installment of the article. In case you missed it, part one is <a href=/blog/?viewDetailed=00168 target=_blank>here</a>. <br /> <br /><hr> <br /> <br />In the past, companies tried to destroy FOSS by discrediting it. This was usually done by hiring an army of paid shills - people who spread hoaxes, misinformation, and self-promotion wherever FOSS people usually hang around (forums, blog comments, etc). This becomes too obvious after a short while, so the (slightly) newer strategy is to employ "unhelpful users" who hang around the same forums and blog comments, pretending to help, when all they do is shoot down every question by embarrassing the inquirer (giving <i>"oh, noob question, RTFM!"</i> or <i>"why would you want to **<u>do that</u>**???"</i> type responses, all the time). <br /> <br />Needless to say, all this doesn't always work (usually it doesn't) as long as the project is still active and its community isn't really filled with assholes. <br /> <br />In order to know how to destroy FOSS, we need to know how FOSS survives in the first place. If we can find the lifeline of FOSS, we can choke it, and FOSS will inevitably die a horrible death. <br /> <br />The main strength of FOSS is its principle of do-ocracy. Things get done when somebody's got the itch to do it; and that somebody will, by virtue of do-ocracy, set the direction of the project. <br /> <br />The main weakness of FOSS is its principle of do-ocracy.
Things get done when somebody's got the itch to do it; and that somebody will, by virtue of do-ocracy, set the direction of the project. <br /> <br />The repeated sentence above is not a mistake; it's not a typo. Do-ocracy is indeed both the strength and the Achilles' heel of FOSS. Let's see why this is the case. <br /> <br />Direction in a FOSS project is set by two groups of people: <br />a) People who work on the project, and <br />b) People who are allowed to work on the project. <br /> <br /><b><u>Let's examine (a).</u></b> <br /> <br />Who are the people who work on the project? They are: <br />1) People who are capable of contributing, and <br />2) People who are motivated to contribute <br /> <br /><hr> <br /> <br /><u>Let's examine (1).</u> <br />Who are the people capable of contributing? Isn't everyone equally capable? The answer, though it may not be obvious due to the popular "all people are equal" movement, is a big, unqualified NO. People who are capable of contributing are people who have the skill to do so. Contributing documentation requires skilled writers; contributing artwork requires skillful artists; contributing code requires masterful programmers. If you have no skill, you can't contribute - however motivated you are. <br /> <br />The larger a project grows, the more complex it becomes. The more complex it becomes, the more experience and skill are needed before somebody can contribute to and improve the project. To gain more skill, somebody needs to invest time and effort to get themselves familiar with the project and/or the relevant technology. A bigger "investment" means fewer people can "afford" it. <br /> <br />And this creates a paradox. The more successful a project becomes, the larger it becomes. The larger it becomes, the more complex it becomes. The more complex it becomes, the smaller the available talent pool.
<br /> <br /><hr> <br /> <br /><u>Let's examine (2).</u> <br />People contribute to FOSS projects for many reasons, some less noble than others. For example: <br />- School projects (including GSoC). <br />- Some do it to "pay back" ("I used FOSS software in the past, now I'm paying it back by contributing"). <br />- Some do it for fame and to show off their skills. <br />- Some do it just to kill time. <br />- Some do it to enhance their resume (oh wow - look at the number of projects in my github account !!! (although most of them are forks of others ...)). <br />- Some do it because they are the only one who needs the feature they want, so they just get it done. <br />- Etc; the reasons are too numerous to list. But there is one **BIG** motivation I haven't listed above, and I'm going to write it down in a separate sentence, because it is worthy of your attention. <br /> <br />👉👉👉 Some do it because it is <span class=emr>their day job</span>; they are being paid to do so 👈👈👈 <br /> <br /><hr> <br /> <br /><u><b>What can we conclude from (1) and (2)?</b></u> <br />A larger, more complex project requires people with more skill. <br />More skill requires more investment. <br />More investment requires more motivation. <br />Motivation can be bought (=jobs). <br /> <br />Thus it leads to the inevitable conclusion: the more complex a project becomes, the greater the chance that the people working on it are paid employees. And paid employees follow the direction of their employer. <br /> <br />In other words: a larger project has a greater chance of being co-opted by someone who can throw money around to get people to contribute. <br /> <br />We will examine (b) in the next installment. http://lightofdawn.org/blog/?viewDetailed=00171 Time flies Fatdog64'Linux Wow, it is now the third month of 2017. I haven't written anything for 3 months!
<br /> <br />Well, things do get quiet during the holiday season, and as usual there are real-life issues that I need to take care of. <br /> <br />In between, things have happened. Fatdog64 is now featured on Distrowatch: <a href=http://distrowatch.com/table.php?distribution=fatdog target=_blank>http://distrowatch.com/table.php?distribution=fatdog</a>, yay! <br /> <br />Also, we recruited a new member, "step", from the Puppy Linux forum. Before joining, step was known as the maintainer of a few programs used in Puppy Linux, such as gtkmenuplus, findnrun, and others. Welcome step! <br /> <br />Though this blog is quiet, Fatdog development is not. It continues nicely in the background at a comfortable pace: bug fixes, minor feature updates, etc. Bug fixes aren't always visible, but package updates are - see <a href=http://distro.ibiblio.org/fatdog/packages/710/ChangeLog.txt target=_blank>here</a>. Also check out the <a href=http://murga-linux.com/puppy/viewtopic.php?p=794748 target=_blank>Fatdog contributed packages thread</a>. <br /> <br />In other news, LFS 8.0 has been released, and while it is tempting to conclude that Fatdog 800 will follow suit soon, that won't happen. <br /> <br />While 710 (which is based on LFS 7.5/CLFS 3.0) is getting older, it has no major problems, as its programs and libraries continue to be updated. Fatdog 700/710 has acquired a large number of third-party contributed software packages, and we plan to keep them usable for the foreseeable future by supporting the 700 series until at least the end of the year. There may be one or two more releases (720? 721? or 730?) but they will use the same base. <br /> http://lightofdawn.org/blog/?viewDetailed=00170 xscreenshot is updated Linux'General <a href=/wiki/wiki.cgi/Xscreenshot target=_blank>xscreenshot</a>, my dead-simple screen capture program for X11, gets a facelift. It can now capture screenshots with the mouse cursor in them, and it can also capture a single window.
Oh, and now the filenames are created based on timestamps rather than just a running number. You can get the latest version from <a href=/wiki/main/files/xannotate-2016-10-21.tar.bz2 target=_blank>here</a>. http://lightofdawn.org/blog/?viewDetailed=00169 Fatdog64 710 Final is released Fatdog64'Linux The final version of Fatdog64 710 has been released. A lot of improvements since the last Beta release in August 2016; you can see the details in the <a href=http://distro.ibiblio.org/fatdog/web/710.html target=_blank>Release Notes</a>. <br /> <br />You can also leave your feedback in the Puppy Linux forum, where we made our <a href=http://www.murga-linux.com/puppy/viewtopic.php?t=109067 target=_blank>Announcement</a>. <br /> <br />Get it from the usual locations: <br /><a href=http://distro.ibiblio.org/fatdog/iso/710beta/ target=_blank>Primary site - ibiblio.org (US)</a> <br /><a href=http://ftp.nluug.nl/ibiblio/distributions/fatdog/iso/ target=_blank>nluug.nl - European mirror</a> <br /><a href=http://mirror.aarnet.edu.au/pub/fatdog/iso/ target=_blank>aarnet.edu - Australian mirror</a> <br /><a href=http://ftp.cc.uoc.gr/mirrors/linux/fatdog/iso/710beta target=_blank>uoc.gr - European mirror</a> <br /> <br />It may take a while for the mirrors to update.