Life is like a harddisk ...

With every new day, we savour new experiences... keeping them in our memory. As time passes, we bury our moments deeper in the hierarchy of directories. Over time, we forget what we know.

Sometimes, we feel that life is too much for us - that's when it's time to dig into our repositories and delete unnecessary files and folders, so that new experiences can come in.

There are times when we still feel burning anger and hatred, or feel sick to the stomach about somebody or something, though we thought we had forgotten it... it's because the file is still in you, in the recycle bin or trash folder. You need to really empty it to let go, and life will begin afresh.

The only thing is, we cannot buy a new life the way we replace a damaged harddisk or upgrade its capacity... but then, who needs to upgrade? Unlike a harddisk that comes with a predefined storage size, our capacity to absorb experience and live life to the fullest is unbounded - provided we know how not to keep the bad stuff.
Originally posted 18 August 2006, before the advent of SSD.

Posted on 24 Jun 2019, 12:28 - Categories: General

What is reality

What’s reality, he thinks. Alone, in this world, he can no longer differentiate between dreams and reality. He has no reference to compare things with, and when this happens, there is no way to weigh any evidence against any standard. He shouts, and echoes come back to him, and he does not know whether someone hears his shout or not. But he cannot be sure whether he just thinks he shouts or he actually does shout. Nothing to check against. It’s the point when and where reality blurs with dreams and nightmares.
The Book of Ten Children, Reunion:4. Originally posted on 11 November 2006.

Posted on 24 Jun 2019, 12:20 - Categories: General

Tomorrow is promised to no one

“I am still making peace with them, every single day,” she said softly. “That’s why I don’t want you to go while you’re still upset with me. If bad things happen to you – perhaps you lose concentration in your battle against the bad guys, because of me – then really, I’m not sure I can live with it. It’s already difficult with my parents. I can’t let it happen to another person that I really care about. At least, before you go, I want to have peace between us. I can’t wait until the next time you come again, because tomorrow is promised to no one,” she said with a trembling voice.
The Book of Ten Children, 5:5

Posted on 24 Jun 2019, 12:02 - Categories: General

Today is Ash Wednesday

Count your blessings!

Posted on 7 Mar 2019, 01:04 - Categories: General

How to destroy FOSS from within - Part 5

This is the fifth and final installment of this article.
In case you missed them, these are part one, part two, part three and part four.

The last time I wrote about this was the beginning of 2018, when the outlook was bleak with Spectre etc. Well, here we are in early 2019. The world didn't collapse, so there is still hope; so I too will end this series on a hopeful note.

Part V: Can we do anything about it?

The article was originally a four-parter. But I didn't want to end it on a depressing note, so here is the final part, hopefully more uplifting than the previous ones.

Let's start by observing that "only in a democracy can the people vote to elect a dictator". Yet we don't see hordes of democracies tumbling into dictatorships. So there is still hope for democracy.

Which, I hope, also means there is still hope for FOSS.

One way I can see is to have independent talents who oversee the project, as well as independent talents who actually contribute to the project. (Being an independent leader is meaningless if all the do-ers are against you - remember, this is a do-ocracy, right?)

FOSS flourishes when there is a constant flow of talent going into the community. People don't become experts overnight, but those with enough motivation and effort can always start at the bottom of the ladder and acquire the skills as they continue to participate over time, with mentoring from the older guys.

Alternatively, when a project becomes too unwieldy, perhaps it is a better idea to start with a new codebase, free of "legacy" stuff and therefore easier to understand - while still remembering the lessons learnt from that legacy code (or else the new code will be doomed to repeat the same bugs as the legacy code...).

How can we keep the independent talents coming into FOSS?

I don't have an answer. I can only say that hope springs eternal. Everyone has an itch to scratch: I have seen people take up impossible projects or come up with impossible replacement projects. New FOSS software still comes out of research projects and student theses. So things still do happen. But the trend isn't healthy. And perhaps we all should think of what we can do to help.


After-note 1
Some FOSS projects were originally closed products opened up by the original company owner. Also, some companies open-source their products for the "community" and charge for a "premium" or "enterprise" version, which is not FOSS (the "freemium" business model). I have nothing against this; in fact, I applaud those companies who have chosen to open source their products.

In this situation it is normal and fair to expect that the direction of these projects continues to be dictated by the original owner, especially when most of the development is still done by the company's own employees.

The FOSS projects that I'm concerned with are the original grass-roots community projects (or once-closed-but-now-opened projects that are no longer controlled by the original authoring entities) that have risen to the top of the mindshare, but are no longer recognisable as such due to these undue influences.

After-note 2:
One must not conclude from these articles that corporate contribution (in terms of money or employee time) to a FOSS project is automatically bad and unwanted. It is not; in fact, many projects wouldn't be as successful, or survive at all, without it.

We welcome contributions from anyone in any form, but what we need to ensure is independence from external influences.

Posted on 28 Feb 2019, 23:56 - Categories: General

World map stat counter update

It was over five years ago that I wrote this stat counter (here and here).

The world has since moved on: IP addresses have changed hands (and locations), and even the database format has changed. If you still have a copy of the old MaxMind GeoLite database, the old program will still work; but if you don't - well, MaxMind deprecated the old database format as of January 2019 and you cannot get a copy of it anymore.

However, MaxMind still offers a freely downloadable geo-IP database, in a slightly different format (GeoLite2). I have now updated the world map stat counter to work with this format.

You can get the updated sources (along with a 32-bit statically compiled binary) here. The GeoLite2 database is here, and you need a "converter" (to convert the CSV file from network format to integer-range format) here. Then read the original articles and you should be good to go. The v4 has "2" appended to all the program names - ipgeocode2, preprocess2, etc. - so they can co-exist with the older version if you wish.
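For the curious, the idea behind that "converter" - turning the CIDR notation in the GeoLite2 CSV's network column into an inclusive integer range - can be illustrated with a small Python sketch. This is only an illustration of the conversion, not the actual converter program linked above:

```python
import ipaddress

def cidr_to_range(cidr):
    """Turn a CIDR string (like GeoLite2's "network" column values)
    into an inclusive (start, end) pair of integers."""
    net = ipaddress.ip_network(cidr)
    return int(net.network_address), int(net.broadcast_address)

# A /24 covers 256 consecutive addresses:
print(cidr_to_range("1.0.0.0/24"))  # (16777216, 16777471)
```

With the ranges in integer form, looking up an IP address becomes a simple binary search over sorted (start, end) pairs, which is presumably why the stat counter wants this format.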

Posted on 8 Feb 2019, 12:32 - Categories: General

Some random updates

OK, let's start with Barry K, the original author of Puppy Linux and the spiritual grandfather of every "dog"-themed Linux out there, Fatdog included.

If you haven't read Barry K's blog recently, you should check it out. Barry has come out with interesting new stuff that should spice up your "Puppy" experience (strictly speaking, it's not Puppy Linux anymore - Barry handed over the baton a long time ago; these days he does Quirky/EasyOS/etc - but as far as we're concerned, it's a "puppy", in quotes).

Barry is also branching out to 64-bit ARM (aarch64). I'm sure he will soon release a 64-bit ARM EasyOS. If you remember, Barry was also one of the first to venture into 32-bit ARM a while ago (a tad over 6 years ago) and made a Puppy that ran on the original Raspi. Quite an achievement, considering that the original Raspi was barely able to run any desktop at all - but that is Puppy at its best, showing the world that even the weakest computer in the world can still run a desktop (as long as it's Puppy). It was also one of the motivations that made me do FatdogArm.

Speaking of Raspi and FatdogArm, I have recently pulled my Raspi 3 out of retirement. But this time I'm not doing it for desktop replacement; I'm mainly interested in using it for what it was originally made for: interfacing with stuff. Direct GPIO access, I2C and SPI are so interesting with the myriad of sensor packages out there. I've been playing with Arduino for a while, and while it's cool, it's even cooler to do it from a more powerful platform.

Now, this article shows you how to do it directly from the comfort of your own PC (yes, your own desktop or laptop), if you're willing to shell out a couple of $$ to get that adapter. I did, and it is quite fun. Basically it brings back memories of interfacing stuff using the PC's parallel port (and that adapter indeed emulates a parallel port... nice to see things haven't changed much in 30 years). But its speed is limited by the emulation it has to go through: the GPIO/I2C/SPI has to be emulated by the kernel driver, which is passed through the USB bus, and then emulated by the CH341 on the module. If you want real speed, you want a real device connected to your system bus - and this is where the Raspi shines. I haven't done much with it yet, but it's refreshing to pull out the old FatdogArm, download wiringPi, and presto - one compilation later you've got that LED blinking. Or just access /sys/class/gpio for that purpose.

Now on to Fatdog64 800. I'm sure you're dying to hear about this too. As far as Fatdog64 800 is concerned, we've done close to 1,100 packages. We're about 200 packages away from the finish line. As usual, the last mile of the marathon is the hardest: it's the bootloaders, the Qt libs and apps, and LibreOffice. Here's crossing our fingers for a smooth upgrade of all these packages.

Speaking of updates, I've also decided to go for the newest Bluetooth stack (BlueZ). I had been holding on to BlueZ 4 for the longest time, simply because BlueZ 5 does not work with ALSA - you need to use PulseAudio for sound. But all of that has changed: there is now an app called bluez-alsa that does exactly that. I had been thinking of doing it myself were it not there; but I've been thinking for too long. Anyway, I'm glad that it's there. BlueZ 5 did have a nicer API the last time I looked at it (as in, more consistent), though not necessarily clearer or easier to use than BlueZ 4. But that's just BlueZ.

Well, that's it for now, folks. And in case I don't see you... good afternoon, good evening, and good night.

Posted on 3 Aug 2018, 17:29 - Categories: General Fatdog64 Linux

Github fallout and what we can learn from that

Hahaha. What can I say?

It's the talk of the town: Microsoft buys Github.

Why are you surprised? It's been a long time coming. See my previous articles about FOSS. Add the fact that Github fails to make a profit. Their investors want out; they would welcome a buyer. **Any** buyer.

But today I don't want to talk about the sell-out; there are already too many others discussing it. Instead I'd like to ponder on the impact.

Rightly or wrongly, many projects have indicated that they will move away. These projects will not be on github anymore, either in the immediate or the near future. What's going to happen? Some of these projects are libraries, which are dependencies used by other projects.

People have treated github as if it were a public service (hint: it never has been). They assume that it will always exist, and always be what it is. Supported by the public APIs, people build things that depend on github's presence and use github's features. One notable thing people build is automated build systems, which can automatically pull dependencies from github. Then people build projects that depend on these automated build tools.

What happens to these projects when the automated build tools fail because they can no longer find the dependencies on github (because the project they depend on has moved elsewhere)? They will fail to build, of course. And I wonder how many projects will fail in the near future because of this.

We got a hint a couple of years ago, here (which I also covered in a blog post, here). Have we learnt anything since then? I certainly hope so, although it doesn't look like it.

It's not the end of the world. Eventually the authors of the automated build tools will "update" their code libraries and attempt to pull the dependencies from elsewhere. You will probably need a newer version of said build tools. But those github projects won't all move in one step; they will move at the convenience of the project authors/maintainers. So you will probably need to constantly update your automated build tools to keep track of the new locations where the libraries can be pulled from (unless a central authority of sorts is consulted by these build tools to decide where to pull the libraries from - in which case one only needs to update said central authority). It will be an "inconvenience", but it will pass. The only question is how long this "inconvenience" will last.

How many will be affected? I don't know. There are so many automated build tools nowadays (it used to be only "make"). Some, which host local copies of the libraries on their own servers, won't be affected (e.g. maven). But some, which pull directly from github, will definitely get it (e.g. gradle). Whatever it is, it's perhaps best to do what I said in my earlier blog post - make a local copy of any libraries which are important to you, folks!

Github isn't the only one. On a larger scale (beyond just code repositories and code libraries), there are many "public service" services today which aren't really public services (they are run by for-profit entities). Many applications and tools depend on these, and they work great while they last. But people often forget that those who provide the services have other goals and/or constraints. People treat these public services as something that lasts forever, while in actuality these services can go down at any time. And every time a service goes down, it brings down another house of cards.

So what to do?

It's common sense, really. If you really need to make your applications reliable, then you'd better make sure that whatever your application depends on is not "here today, gone tomorrow". If you depend on certain libraries, make sure you have a local copy. If you depend on certain services, make sure that those services will be available for as long as you need them. If you cannot make sure of that, then you will have to run your own services to support your application, period. If you cannot run the services in-house (too big/too complex/too expensive/etc), then make sure the external services you depend on are easily switchable (this means standards-based protocols/APIs with tools for easy exporting/importing). Among other things.

Hopefully this will avoid another gotcha when another "public service" goes down.

Posted on 8 Jun 2018, 01:53 - Categories: General

Measure LED forward voltage using Arduino

Arduino is used for many things, including testing and measuring component values.

Somebody has made a resistance meter.

Another has made a capacitance meter.

Yet another has made an inductance meter.

There is one missing: measuring the LED forward voltage.

LEDs come in a variety of colours, and these variations come from different materials and different doping densities. As a result, the forward voltages of these LEDs are also not the same - lower-energy-light LEDs (e.g. red) usually require a lower forward voltage than higher-energy-light LEDs (white or blue). The only sure way to know is by reading the datasheet.

But what if you don't have the datasheet? Or you don't know which datasheet applies to a particular LED (e.g. LEDs you salvaged from some old boards)?

The following Arduino circuit should help. It lets you figure out the forward voltage of an LED.


Get the sketch.

Principle of operation

Initially we have both D3 and D4 as high (=5V). This charges the capacitor, and turns off the LED.

Then drop both D3 and D4 to low. The diode prevents the capacitor from bleeding off its charge through D3, so the only way it can discharge now is via the LED.

A0 measures the capacitor voltage.
A2 measures series resistor voltage.
A2-A0 gives you the LED voltage.

In the ideal situation, you would expect A0 and A2 to keep dropping until conduction suddenly stops: A2 becomes zero (because no more current flows through it), and then A0 gives you the LED forward voltage.

Of course, in the real world this does not happen. If you test the circuit you will find that the LED keeps giving out light even below its official forward voltage, and if you wait until the current is zero, the A0 voltage you get will be very much below the nominal forward voltage.

So how do we know when to stop measuring? Well, most LEDs are usually specified to be "conducting" when they pass at least 5 mA of current. So when we detect that the current through the resistor is less than 5 mA, we stop measuring and declare the A2-A0 of the last measurement to be the forward voltage.

Oh, how do you get the LED current? The LED current is the same current that passes through its series resistor (ignoring the current going out to A2). The current in the series resistor is simply its voltage (A2) divided by its resistance (130R).
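The arithmetic above can be sketched in Python. The real tester is an Arduino sketch; this just illustrates the formulas as the article states them, with the pin readings passed in as raw 10-bit ADC counts (the constants mirror the circuit: 130 Ω series resistor, 5 V Arduino reference):

```python
R_SERIES = 130.0    # ohms, the series resistor
VREF = 5.0          # volts, Arduino analog reference
ADC_MAX = 1023      # 10-bit ADC full-scale count

def adc_to_volts(count):
    """Convert a raw analogRead()-style count (0..1023) to volts."""
    return count * VREF / ADC_MAX

def resistor_current_ma(a2_count):
    """Series-resistor current in mA: its voltage (A2) over its resistance."""
    return adc_to_volts(a2_count) / R_SERIES * 1000.0

def led_forward_voltage(a0_count, a2_count):
    """A2 - A0 gives the LED voltage, per the article."""
    return adc_to_volts(a2_count) - adc_to_volts(a0_count)

def still_conducting(a2_count, threshold_ma=5.0):
    """Keep measuring while the LED passes at least 5 mA."""
    return resistor_current_ma(a2_count) >= threshold_ma
```

For instance, an A2 reading of 133 counts (about 0.65 V across 130 Ω) corresponds to just over 5 mA, so that is roughly where the measurement loop would stop.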

The voltage-current relation of an LED is the same as any diode's - it's exponential. In other words, the forward voltage depends on the amount of current that flows (or better yet: the current that flows depends on the applied voltage). There is no single fixed "forward voltage"; the LED will actually conduct and shine (with varying brightness) at voltages lower or higher than the official forward voltage.

Ok, that helps. But how about forward current?

Typical LEDs use 20 mA forward current, regardless of the colour or the forward voltage. So there you have it. Of course, the main exception to this rule is super-bright, high-wattage LEDs, which are meant for room illumination or for torches. These can easily pass 100 mA, and some can even crank up to 500 mA or more. Forward voltages on these kinds of LEDs can vary a lot depending on whether you're passing 5 mA or 500 mA. The tester above won't work properly with them.

Q1: Why pin D3 and D4? Not D8 or D9?
A1: Because I like it that way. You can change it, but be sure to change the code too.

Q2: Why analog pins A0 and A2?
A2: Because I like it that way too. Actually, an earlier design used 3 analog pins, but later on I found out that one of them (A1) wasn't necessary. I'd already wired the circuit with A2, though, so it stays there. Of course you can change it, but remember to update the code too.

Q3: Why do you use 130R?
A3: 130R is the series resistor you use for LEDs with a 2.4 V forward voltage (usually green LEDs), which is somewhat in the middle of the range of LED forward voltages. Plus, they're what I have laying around.

Q4: Why 470uF?
A4: That's what I have laying around too. You can use other values, but make sure they're not too small.
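For a rough sense of "not too small": the discharge time constant is tau = R * C. With the component values in this circuit (130 Ω, 470 µF) that comes to about 61 ms, which leaves time for plenty of ADC samples (an Arduino analogRead takes on the order of 100 µs) before the capacitor drains. A quick back-of-the-envelope check:

```python
# RC time constant with this circuit's component values.
R = 130.0     # ohms, series resistor
C = 470e-6    # farads (470 uF)
tau = R * C   # seconds; the capacitor falls to ~37% of its voltage per tau
print(round(tau * 1000, 1))  # 61.1 (milliseconds)
```

A much smaller capacitor would discharge too quickly to take a useful series of measurements.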

Q5: The diode - 1N4001 - you also use that because that's what you have laying around?
A5: Actually you can use any diode. In my circuit I actually used a 1N4007, because that's what I have laying around :)

And finally:
Q6: Why do you have separate D3 and D4? Since they will be brought HIGH and LOW at the same time, why not just use one pin?
A6: Yes, you can do it that way (remember to change the code). But using two pins makes it clearer what is happening.

Posted on 28 Feb 2018, 23:47 - Categories: General

Spectre on Javascript?

The chaos caused by Spectre and Meltdown seems to have quietened down. Not because the danger period is over, but, well, there is other news to report. As far as I know the long tail of the fix is still ongoing, and nothing short of a hardware revision can really fix them without the obligatory reduction in performance.


Among those who quickly released a fix were the web browser vendors. And the fix was to "reduce the granularity of performance timers" (in Javascript), because with high-precision timers it is possible to mount a Spectre-like timing attack.

This, I don't understand. How could one perform a Spectre or even Spectre-like timing attack using Javascript? Doesn't a Javascript program run in a VM? How would it be able to access its host's memory by linear address, let alone by physical address? I have checked wasm too - while it does have pointers, a wasm program is basically an isolated program that lives in its own virtual memory space, no?

In other words - the fix is probably harmless, but could one actually perform a Spectre or Spectre-like attack using browser-based Javascript in the first place?

That is still a great mystery to me. May be one day I will be enlightened.

Posted on 21 Feb 2018, 23:52 - Categories: Linux General
