Patent Wars

Can anyone follow this anymore? Open Source developers are being attacked by the Redmond giant on the grounds that they built conceptual design ideas that are Microsoft's proprietary intellectual property into Open Source software, and that those "borrowed" aspects are key parts of the open software, which as a result competes directly with Microsoft's products. Duh. That's what we're all about, right? Not necessarily. We, the Open Source community, seek to keep the software design process open and freely accessible to the public and to each other, so that we can learn from each others' trials and successes and build efficient, productive software quickly. Openness also lets many contributors work on each product. Take Mozilla, KDE, or OpenOffice.org: these large projects receive contributions from all over the globe. Then think of our core, the Linux kernel. Same design. And our compiler, the GCC package. Same structure.

Then Microsoft tries to make some money off of us and step on the competition. Novell and Linspire signed agreements with Microsoft so that they, and only they, would be immune from patent lawsuits of this kind against the Open Source community. Thankfully, Red Hat and Ubuntu refused to sign any such agreement.

OK, we pay for Microsoft products, big time. You get your new PC or laptop; it comes with Vista Home Premium, but you need a little more, so you upgrade to the Business edition. More money. Then you must have Office 2007. Way more money, possibly half the price of the system all over again.

Now look at Red Hat and Novell. Each of them has moved from the consumer "little people" business model to targeting the enterprise "big people" market. They remain the traditional sponsors of their free community distros, Novell for openSUSE and Red Hat for Fedora, while they make their money from enterprise products like RHEL, sold to corporations that need the reliable, productive solutions Linux offers along with the support and maintenance programs bundled with those products. For the little guys, there are the free versions. What about Microsoft? They'll charge you from the bottom up, any way they can. Consider the OLPC project, One Laptop Per Child: Microsoft bid its software down to $3 per copy to try to get in on the project. OpenOffice.org is free. Now what? It just seems ridiculous how Microsoft owns the market and still manages to beat down competition that is already squirming on the ground.

I would argue that Linux is gaining speed, especially since the advent of Ubuntu Linux. As a Slackware user, I have never gotten any other distro to work quite perfectly; no matter how many I try, I always end up back home with Slackware. openSUSE works quite well, but I have had endless trouble with Fedora and Ubuntu.

Oh well, just don't give up on your tried-and-true penguin rig, and hope that Linus' successor continues the great work with everyone who has contributed so that the Linux kernel can triumph once and for all.

Registered Linux User #370740 (http://counter.li.org)

Distros again

Well, I had a bad experience with two distros today: Arch Linux and Ubuntu.

Arch: I tried the FTP-based install of 2007.05 "Duke" for x86_64. My home ADSL connection must have been a bit choppy or something; I had to kill the downloader and restart it several times to get that 100 MB of base system down. Then it choked when I went through the post-install configuration. They have you edit /etc/rc.conf, which is their one startup configuration script (just the one, not the whole family you get with Slackware), and the most important thing in it is the network configuration, which I had already edited during the install (the comments explain everything, by the way; there is a sketch of that network section at the end of this story).

What's more, vim started acting up whenever the target file did not yet exist; the installer was supposed to drop a default version of each file into the new tree, but that didn't happen quite right, and vim reported "/mnt/etc/rc.conf [new DIRECTORY]" or something equally wrong. Then I decided to skip the rest of the config files and just set a root password so I could reboot the thing. Bad idea. I got a screenful of the same error from chroot: there was no passwd command. I had to Ctrl+C that one, and when I ran the installer again, it kindly formatted my target partition. End of story.
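
Speaking of that rc.conf: the network section from an Arch release of that era looked roughly like the sketch below. The hostname, addresses, and interface name here are made-up examples, so go by the comments in the real file.

# /etc/rc.conf -- network section only (sketch; values are examples)
HOSTNAME="archbox"
eth0="eth0 192.168.0.100 netmask 255.255.255.0 broadcast 192.168.0.255"
INTERFACES=(eth0)
gateway="default gw 192.168.0.1"
ROUTES=(gateway)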

Ubuntu's installer has always gotten on my nerves. This time, on my desktop, I had to change a kernel boot parameter, adding "noapic", because the thing fell apart after three lines of kernel messages. Then I got the graphical progress bar running back and forth for a long time... and then it just stopped moving. With no hard drive activity and no way to tell whether anything was going on, I hit the little reset button and ejected the CD. End of story.

Sorry, folks, Slackware is my OS.

File Transfer

Rsync and FTP, here we come.

Everybody knows FTP, but as a Linux user, why not run a server? If you have more than one computer, or more than one user on your intranet, it makes sense to share files via FTP at times. SCP, the secure copy program that runs everything through ssh, sacrifices a lot of bandwidth to that encryption. With a large file, you could squeeze about 400 KB/s out of a 100 Mbit LAN connection. Over FTP you are virtually unlimited and get about 10-11 MB/s from that same LAN; basically, if your hard drive can handle it and your CPU isn't playing with something else at the time, the link is pretty much saturated (100 Mbit/s divided by 8 bits per byte = 12.5 MB/s).

The reason I qualified that speed with "large file" is that with smaller files you transfer each one in a second or so, then the negotiations for requesting the next file eat up time, so the reported transfer rate drops. For this reason, transferring a large number of small files is best done with some sort of archive format; see the documentation for the tar command for more information. I can tell you right now that among the archive formats, tar compressed with bzip2 comes out at about 80% of the size of gzip compression, which itself is about 10-30% of the original. Zip has horrible compression ratios compared to those two.
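
For example, to bundle a directory of small files before sending it over FTP (the file and directory names here are just placeholders):

# bzip2-compressed tarball: smaller, slower to create
tar cjf documents.tar.bz2 Documents/
# gzip-compressed tarball: a bit larger, much faster
tar czf documents.tar.gz Documents/
# after transferring the archive, unpack it on the other end
tar xjf documents.tar.bz2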

So which FTP server should we run? There is the infamous tftp server called from the inet daemon (though TFTP is really a separate, trivial protocol), and there is proftpd, which can run as a standalone daemon or be called by inetd; I used proftpd for quite some time. Its configuration is somewhat complex: there is the /etc/ftpusers file that must be examined, and there is /etc/proftpd.conf, which holds the access permissions for every directory you want to open to FTP users. Proftpd is slow during the handshaking stages of a connection, though, like the initial login, and after a while it would hang whenever my FTP client requested a LIST of the cwd (current working directory). So I began to look for another server.

Pure-ftpd came up, and I started using it. There is no configuration file. If you have a user called "ftp", that user's home directory is used for anonymous connections. All other users are chroot-ed to their home directories (they cannot browse above them except by symlink), and in my experience it is much more responsive than proftpd. It is recommended to run it as a standalone daemon, and access permissions are given on the command line: how many concurrent connections the server will allow, how many from the same user, and so on. Watch out, though: not all options are available by default. They must be compiled in, so check the output of "./configure --help" when you go to build the thing.
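
As a rough sketch (the exact flags depend on what was compiled in, so check pure-ftpd --help on your build), a standalone launch might look like this:

# run in the background (-B), chroot every user (-A),
# allow at most 10 clients total (-c 10) and 3 per IP address (-C 3)
pure-ftpd -B -A -c 10 -C 3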

That's about it for FTP; now what about rsync? rsync is used to synchronize directories and files over the network (even over the loopback interface). It uses a crafty algorithm to transfer only the differences between files, and it works much better than archiving a large directory into a tarball, sending that, and unpacking it on the other side. rsync is only as encrypted as the underlying connection: talking to a plain rsync daemon, authentication is a simple challenge-response hash and the data itself is not encrypted; run over ssh, everything is. Since rsync cuts the transfer down so much anyway, ssh is a good choice: the encryption is real and the authentication is better. If you run the rsync daemon, you have to set up "shares" (modules) somewhat like in Samba (which is a beast to configure, btw). These are pseudonames for the directories you want rsync to be able to deal with, and in the config file you also specify which users are allowed to connect to each one. Over ssh, none of that is needed; you just give ordinary paths on the remote machine. Here is the rsync command that I use to synchronize my documents between computers (it runs over ssh):

rsync -e ssh -a ~pnguyen/Documents/ lilmax88@192.168.0.101:Documents

On my LAN, the other box, 192.168.0.101, is supposed to keep a copy of all of my documents from this computer. Using ssh (192.168.0.101 is configured so that pnguyen can connect as lilmax88 with only a public/private key pair and no password), everything is brought up to date. It does chew on the CPU for a minute or less while it figures out what needs to be transferred, but then it's all done and ready to go. On the remote side, :Documents with a single colon means the transfer goes over ssh and the path is relative to lilmax88's home directory, so it points to ~lilmax88/Documents on that system; an absolute path after the colon would work too. The "shares" from the rsync config file only come into play when you talk to an rsync daemon, with a double colon (host::Documents).
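
If you do want the daemon-mode "shares", a module in /etc/rsyncd.conf might look roughly like this; the module name, path, and user below are just examples matching the setup above, and the auth user would also need an entry in the secrets file.

# /etc/rsyncd.conf -- one module ("share")
[Documents]
    path = /home/lilmax88/Documents
    read only = no
    auth users = pnguyen
    secrets file = /etc/rsyncd.secrets

A client would then reach it with a double colon: rsync -a ~/Documents/ pnguyen@192.168.0.101::Documents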

Hit me up with any questions! As a Slackware user, I know my stuff!
