Shady printing in cloudy weather
I don't usually blog about Android applications, but this one is interesting. All I hear about printing on Android is "cloud print" - mainly Google Cloud Print of course, but there are other cloud-printing solutions too.
This "cloud print" is touted as driver-free, hassle-free solution. Never again you need to worry about looking for your printer driver discs. Never again you have to worry whether your printer will work with that new shiny OS that probably don't have driver for that old clunkers of yours. All you need to do is plug your printer to a computer that happens to be running Chrome web browser; or, if printer is already Cloud-ready, just plug it directly to the Internet and configure it with your account.
Then you can print from anywhere, from any device (that supports "cloud print", of course). It's really that easy - amazing, isn't it! Now you can finally print from your phone to the printer under your desk, no matter what brand or model, using cloud-print technology! Wow!! Breakthrough technology!! Save the world - now you can keep that old printer working!! Wahoo!!
But how does this cloud print actually work? How can it work without a driver? Do your computer and/or printer suddenly become so smart that they don't need a driver to talk to each other? Not so.
Cloud print works by sending the print job (that is, your documents: your bank statements, your tax reports, your company-confidential blueprints, for starters) to - aptly enough - the "cloud": a bunch of undesignated servers that have the printer driver software for many printers (hopefully including yours) pre-installed. These servers take your print job, use the printer driver software to convert it into something that your printer can understand, and then send it back to your printer for actual printing. Nice. Apart from the obvious fact that no Internet == no printer, I'll leave the other possible implications to your own pondering.
That's where Lets Print Droid comes into the story. It enables me to print from my Android phone/tablet (when I really, really need to) to the printer under my desk *directly*, without having to send my documents halfway around the world first. It comes with only a limited set of printer drivers (mainly for PostScript printers), but the beauty of it is this: for those printers that it doesn't support, all you need is a CUPS server that *can* drive your printer, and you're good to go. That CUPS server can easily be your Linux box (Fatdog64 and all versions of Puppy Linux come with CUPS, as do many other Linux and *BSD distros) - so if you can print from your Linux box, then you can print from your phone/tablet too.
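If you go the CUPS route, the only extra setup on the Linux box is making the queue visible to other machines on the LAN. A minimal sketch, assuming your printer already works locally under a hypothetical queue name "HomePrinter" (run as root):
# Let other hosts reach this CUPS server and advertise shared queues.
cupsctl --remote-any --share-printers
# Mark the queue itself as shared.
lpadmin -p HomePrinter -o printer-is-shared=true
# Sanity check: the queue should show up as available.
lpstat -p HomePrinter
The app can then be pointed at the queue's CUPS URL (typically http://<linux-box-ip>:631/printers/HomePrinter) - check the app's own printer-setup screen for the exact format it expects.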
Of course, if you really want to cloud print, the app supports that too (of the Google variety), as a last resort.
The app also has a companion app that enables direct PDF rendering on the phone itself (using the popular open-source mupdf renderer) so that you can print PDF files even if your printer doesn't speak PDF.
Now that's what printing should be. Printing is such an embarrassingly local task that the idea of sending it to the "cloud" just so I can have the convenience of printing from anywhere is too "shady" for me (pardon the pun). You can of course always find a scenario where remote printing would be useful, but generally speaking there is not much point in printing to a printer that is not physically reachable from where you are.
As a closing note: I just wish that printer manufacturers continue to supply us with non-cloud versions of their printers (and their drivers). Not all of their customers are happy with cloud printing, for the obvious reasons.
Note: I'm not affiliated with Blackspruce (the author of the app); I'm just a happy user who would like to say thanks.
Connecting machines behind NATs
One of my friends asked me how to use VNC when both the client and the server are behind NAT routers. He knows that he can reconfigure the routers to do port forwarding and such, but that is not good enough because not everyone is comfortable (or competent) enough to do it. So I looked into the matter and found out that yes, it is possible to do so without reconfiguring anything. I've written a wiki article to document the process using commonly available tools; it is located here. While it talks specifically about VNC, the methods discussed are readily extendable to other protocols.
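For the curious, here is a rough sketch of one common way to do it (not necessarily the exact method used in the wiki article): relay the connection through a third machine that both sides can reach, using plain SSH port forwarding. The relay host name, user and ports below are placeholders.
# On the VNC server machine (behind NAT #1): push the local VNC port (5900)
# out to a relay host that both parties can SSH into.
ssh -N -R 5901:localhost:5900 user@relay.example.com
# On the VNC client machine (behind NAT #2): pull that port back in.
ssh -N -L 5900:localhost:5901 user@relay.example.com
# Now point the VNC viewer at localhost:5900 (display :0) on the client side.
Everything rides on top of SSH, so the relay itself needs no VNC-specific configuration - which is also why the same trick works for other protocols.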
I hope it is useful for others who have similar problems.
Food for thought
Interesting blog post from IgnorantGuru, the developer of the SpaceFM file manager: http://igurublog.wordpress.com/2014/02/17/biography-of-a-cypherpunk-and-how-cryptography-affects-your-life/
You may or may not like what he said, and the conclusion he stated may or may not be true, but the facts he stated are indeed true.
Find duplicate files
I happened to see a request on the Puppy forum asking how to find duplicate files. Sure, there are specialised tools to do that (fdupes, doubles, etc.) but what if you don't have access to them? It turns out it's not difficult at all. This will do it:
find / -type f -print0 | xargs -0 md5sum | sort -k1 | awk '{ if ($1==prevmd5) { if (prevfile) print prevfile; print $0; prevfile=""} else { prevmd5=$1; prevfile=$0 }}'
What the code above does is basically find all files under / (which you can change to something else, e.g. /mnt/sda1, the mountpoint of your disk), compute the md5sum of each of these files (you can use sha1sum if you wish, or any other hash program), sort the results by hash, and display those that have identical hashes.
Of course, running through all the files in your filesystem and computing the md5sum of *all* of them is going to take quite some time, grind your hard disks, saturate your I/O, and tax your CPU.
And having identical hashes doesn't always mean that the files are identical (although the chance that they aren't is very small); so if you do this with the intent of deleting duplicate files, you may want to extend the code a little to do a full file comparison when the hashes match - for example with cmp, as sketched below.
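A minimal sketch of that extra check (the file names are placeholders; it is not wired into the one-liner above):
# Byte-for-byte comparison of two suspected duplicates before removing one.
if cmp -s "/path/to/copy1" "/path/to/copy2"; then
    echo "really identical - safe to remove one of them"
else
    echo "hash collision - the files differ, keep both"
fi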
Mobile friendly
This blog is now a little more mobile friendly. Using CSS media queries, if the maximum device width is 600px or less, it hides the usual menubox and displays a simplified menubox instead. Simple stuff.
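The gist of it is something like this (a rough sketch; the actual selector names on this blog differ):
/* On small screens, hide the full menubox and show the simplified one.
   The class names here are illustrative only. */
@media screen and (max-device-width: 600px) {
    .menubox        { display: none; }
    .menubox-simple { display: block; }
}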
Merry Christmas
It's that time of the year again. Time to remember what a blessing this year has been, and time to remember Him who gave His all so that we are where we are today. Time to remember what Christ means in Christmas - not only in this holiday season, but also for the rest of the days in the upcoming new year. Merry Christmas, everyone.
Patches page
I have a small collection of patches of various kinds, so I thought I'd put them together on a page to make it easier for others (and myself) to find them when needed. The patches page is here.
One new patch on that page is for guvcview, a webcam viewer and recorder. The latest version is GTK3-only, but I don't run GTK3, so I created a patch to compile and run guvcview with GTK2.
The beginning of Fatdog ARM
A year ago, most of the people on the Puppy Linux forums were swept up by the hype around ARM-based systems. (Some still are.) It all started when Barry Kauler, the father of Puppy Linux, got hold of a cheap ARM-based "smart media player" (the Mele A1000) which was in fact more powerful than many PCs of yesteryear. Equipped with 512MB RAM and a 1GHz ARMv7 CPU, it came with Android and was advertised as a media player, but its wealth of ports and its flexibility (the ability to boot directly from SD card - practically unheard of in the ARM/embedded world before this) made it ideal as a possible "desktop replacement". By the time we heard about it, somebody had already managed to run Ubuntu ARM on it. Imagine, a standard Linux distribution running on an ARM media player! That was in fact very exciting.
Barry started an ARM port of Puppy Linux for the Mele; his first release was Puppy "Lui" (see http://bkhome.org/blog/?viewDetailed=02823, http://distro.ibiblio.org/quirky/arm/releases/alpha/README-mele-sd-4gb-lui-5.2.90.htm). Everyone was in an ARM frenzy for a (long) while. Not long after that, the Raspberry Pi (Raspi for short) came along, and the folks got even more excited. Barry built another ARM port for the Raspi (the Mele build wouldn't run on it, because the Mele's CPU is ARMv7 while the Raspi's is ARMv6), called "sap6" (short for (Debian) Squeeze Puppy for ARMv6 - see http://bkhome.org/blog/?viewDetailed=02865). He released a few versions of sap6 (the Raspi being more popular than the Mele despite its obvious shortcomings), but that was that - Barry moved on to other things (he is planning to return to it, though; he got a quad-core Odroid board late last year).
I was caught up in the frenzy too, for I could see the possibilities of where this could go, provided that ARM fulfilled its promise of being the low-power, low-cost, ubiquitous computer. For example, it could easily replace traditional desktops for those who can't afford them. We have yet to see the promise fulfilled, but it is still heading in that direction, so I'm happy. The tablet market (where most of these ARM CPUs are going) has been going strong, and despite many misgivings about tablets, if one can add a keyboard and mouse to them, many tablet users can actually become productive. All in all, it is about an alternative computing platform for the masses.
To start with, I experimented with Qemu. I documented the process of running sap6 under Qemu, for those who wanted to play with or test sap6 but hadn't got a Raspi on hand: http://www.murga-linux.com/puppy/viewtopic.php?t=79358.
That little experiment was quickly followed by my attempt to cross-build a minimal system from scratch, still targeting Qemu. There was one system that aimed to do so, called "bootstrap linux", which uses the musl C library (then brand-new). After a few hurdles and much helpful advice from the musl mailing list, I got it up and running (see https://github.com/jamesbond3142/bootstrap-linux/). That experiment taught me about the complications of building compilers (gcc) by hand, and that small, unforeseen interactions between host tools and compiler build scripts can result in hard-to-find, hard-to-debug crashes on the target system (see http://www.murga-linux.com/puppy/viewtopic.php?t=78112&start=30).
Of course, along the way I got to do some cross-debugging and review ARM assembly language under cross-gdb. That brought back good memories of the days I spent writing 386 assembly language for a bare-metal protected-mode 386 debugger myself, for a certain 32-bit DOS extender :)
Qemu was nice, but in the end I knew I needed real hardware: compiling gtkdialog, which took less than 10 seconds on my laptop, took more than 10 minutes under Qemu on the same laptop. To that end, I decided to go for a Mele A1000 too. That was the middle of last year, and apart from booting Puppy Lui on it, that Mele didn't do much for months on end.
Until now.
In the last few days I have built a new kernel and a minimal busybox-based system for it, and I've got it running with a framebuffer console on my TV. I used Fatdog64's initrd (with busybox suitably replaced by its ARM version), and it felt good to finally see "Fatdog" booting on an ARM CPU.
In the next few posts I will write more about all this: the steps, the information I have collected (linux-sunxi has grown from useless to extremely helpful in a year), as well as the future roadmap.
Wiki
Looking back at this blog, I have found that some articles are getting uncomfortably long. Long articles present two problems on this blog:
1. It is not easy to read technical articles in the font of my choosing.
2. It is not easy to write (bbcode, which is used on this blog, isn't exactly the most writer-friendly markup).
I don't want to change the font. When I started this blog, I didn't plan for all of its content to be technical (and it still isn't - it's just that I haven't had the time or inclination to write non-technical posts yet) - hence the choice of a casual font.
Instead, I have started another sub-site to house longer articles. This blog will just carry announcements and links to them.
I have two pieces of software under consideration (ah, the spoils of open source!): yawk (wiki software) and grutatxt (a markup processor). While a wiki obviously isn't the same thing as a markup processor, for my intents and purposes they are identical, since the wiki will be read-only.
Grutatxt has the benefit that its source markup is more readable than yawk's (or other wikis', for that matter), but yawk definitely has more features (automatic navigation, etc.) that I wouldn't want to handle manually. Both are relatively easy to write with, but their markups are largely incompatible. Grutatxt uses Perl, which is available in most places; yawk uses - well, awk, but it has to be a specific version of awk (GNU Awk 3.1.5+). Grutatxt has a companion piece of software called Gruta that provides a full-fledged Content Management System (CMS) (which meets or exceeds what yawk can do), but that will be a consideration for a later date.
For now, I'm using yawk (because that's what I started with before I found grutatxt), but I actually have both systems online and will decide which one I will use later.
Click here to see the wiki.
Sources for World Map stat counter
Okay, as promised, here are the sources for the world map stat counter.
Requirements
To use the stat counter, you must have the following installed on the web host:
1. GD library (usually called "gd" or "libgd")
2. netpbm tools (usually called "netpbm") - this gives you tools like pnmtojpeg, jpegtopnm, etc
3. "Fly" - this is a command line interface to GD, from here: http://martin.gleeson.net/fly
How to use it
1. Extract the archive to a directory.
2. Get maxmind.com's GeoIP Lite City CSV database from here: http://geolite.maxmind.com/download/geoip/database/GeoLiteCity_CSV/GeoLiteCity-latest.zip.
3. Convert those CSV databases to binary databases like this:
./convert.sh /path/to/geoiplite/block-database.csv /path/to/geoiplite/location.csv
You will get ipinfo.dat and locinfo.dat in the current directory.
4. Copy these two 'dat' files to your cgi-bin directory, along with ipgeocode, genimage.sh and map.png.
5. genimage.sh expects its standard input to be fed with IP addresses, one per line. It will produce a JPEG file on standard output, with the locations of all those IP addresses marked (a quick standalone test is shown after the CGI example below).
6. You will need to create a CGI script which calls genimage.sh and feeds it the IP addresses, as well as returning a proper CGI header, etc. For sjpplog, this script would do:
#!/bin/sh
echo -ne "Content-Type: image/jpeg\r\n"
echo -ne "Content-Disposition: inline; filename=\"stat.jpg\"\r\n"
echo -ne "\r\n"
awk -F"|" '{print $1}' /path/to/your/sjpplog/online.ppl.uo 2> /dev/null | sort | uniq | ./genimage.sh
That's all, you're good to go!
Some notes:
1. map.png is a PNG file converted from the original JPEG image here: http://upload.wikimedia.org/wikipedia/commons/thumb/7/74/Mercator-projection.jpg/310px-Mercator-projection.jpg. You can change the image; just make sure it is in PNG format and update the dimensions in genimage.sh to match the new image's.
2. The archive contains 32-bit static programs. These will work on all versions of x86 Linux (both 32- and 64-bit), so you don't need to recompile them (although you can if you want - all the sources are included).
3. Fly unfortunately isn't included in most distributions, so you have to compile it yourself. I would have included a compiled version in the archive, except that in addition to libgd it also depends on other large system libs like libpng, libjpeg, libfreetype, etc. It is used for the final rendering of the "marks" on the map image; if you don't want to use it, you can modify genimage.sh to use other command-line tools such as ImageMagick, GraphicsMagick, gmic, etc.
4. The GeoIP Lite database is courtesy of www.maxmind.com. The conversion to a binary format is purely for performance reasons; one can do exactly the same thing using awk and grep, but it is much slower (more than 100 times slower).
5. As usual, the sources are provided under the terms of the GNU GPL version 3 license.