Archive for October, 2008

Not open source but free apps

October 14, 2008

This interesting article at OStatic mentions one free, non-open-source app I rely on greatly, the Irfanview image viewer/editor, which I use for most of my photo editing, and one I’m anxious to try, the Zoho Web-based app suite, which some think is better than what Google offers.

Google Chrome browser: still super-fast

October 14, 2008

I’ve been getting deep into Google’s many services, and today is no exception. First I discovered a bunch of features in Gmail (Web version, print version) that are turning out to be really helpful.


I’m using the Google Chrome browser again on my XP box today, since I’m working on our Google fire map and feeding it data from a Google Spreadsheet.

I’m also going to be looking into creating a private Web page for company use at Google Sites, which is targeted as an easy-to-use alternative to corporate Intranets. It’s also a place where you can set up a site just for your family, friends or whoever. If you wish, you can control who gets access to the pages, a feature I will be tapping for this project.

Back to Google Chrome. It’s still incredibly fast, and I can’t wait until it’s ported to OS X and Linux. As I’ve said, it doesn’t have quite the feature set of, say, Firefox, but for the most part I don’t need any of those features and will easily give them up for increased speed on the 99.9 percent of stuff that Chrome does so well.

Linux with no X: INX is a distro meant for console-only desktop use

October 13, 2008


I have to admit, I’m very intrigued by INX, an Ubuntu-based Linux live CD designed for desktop use without the X Window System. I first read about it at Linux Haxor, and after seeing the distro’s screenshots and information page, and given my own wrangling with life at the command line, I’m ready to try it right now.

INX is currently a live-CD-only distro and isn’t meant to be installed, but it might give you some good ideas on how to flesh out your current Linux or BSD system to make life in the console that much better. That’s the theory in my mind, anyway.

If you want to download the ISO (or get it via Torrent), it’s 188 megabytes.

There are other systems that are, in one way or another, “meant” for console use, but none that I know of are aimed in any way at the desktop user, with enough apps on board to keep you happy.

Here’s what looks like a list of the packages in INX.

The biggest impediment to most users when it comes to being productive on the command line is the fact that almost all distributions focus on the X environment and not at all on the console. Any distro that puts the console experience first is something well worth looking into for both its own sake and for the greater cause of making the user more productive on the command line in any Linux/Unix system.

INX is something I will definitely be looking at closely.

Update: The first machine I tried to run INX on was the $15 Laptop, the Compaq Armada 7770dmt with 233 MHz Pentium II MMX and 144 MB RAM. It would start to boot, but at some point during the boot sequence, it got stuck in a loop and wouldn’t get all the way to a prompt.

I’m not surprised because this underpowered laptop has trouble with Xubuntu, the Xfce version of Ubuntu. I thought that a console-only version might do better, but in this case it didn’t. It could have something to do with the kernel being so relatively new and no longer supporting the hardware.

I have yet to try INX on my Dell desktops or the $0 Laptop (Gateway Solo 1450), which is quite Ubuntu-friendly.

Linus Torvalds has a blog

October 12, 2008

Linus Torvalds, the guy who started the Linux project some 17 years ago (and whose autobiography I recommend), now has a blog. He talks about it in this interview.

Until now, the easiest place to find words of wisdom from Linus has been in the quotes between entries at Kernel Trap.

However much you understand about the mechanics of the Linux (or any other) kernel, and I understand very little, Linus is a compelling figure by any stretch.

Update: Wikipedia moves its servers from old Red Hat Linux/Fedora to Ubuntu

October 11, 2008

I found it interesting to read this Steven J. Vaughan-Nichols post about Wikipedia moving its servers from a combination of old versions of pre-RHEL Red Hat Linux and Fedora to Ubuntu 8.04 LTS.

It’s hard to see exactly why they didn’t opt for CentOS, the free rebuild of RHEL, so the decision can’t just be about the distro being free and having long-term support.

Some say it’s the easier upgrade path of Debian-based distros like Ubuntu: the difference in package management between apt-based Debian-like systems and RPM/Yum-based Red Hat-like systems.

Whatever the reason, it’s a big win for Ubuntu and its parent company Canonical. I’ve never really thought of Ubuntu as a server OS because they seem to be all about the desktop experience, and I figure that Debian is a way more popular choice on the server.

But there must be something (or a combination of somethings) from an operational standpoint, whether it be installation and maintenance, long-term support, hardware compatibility, remote/automated management options, reliability or performance that is driving a company/entity like Wikipedia to adopt Ubuntu on the server.

Note: I found out through Matt Asay’s post on this subject, where the comments include a response from Brion Vibber, CTO of the Wikimedia Foundation, in which he sort of clarifies that Wikipedia/Wikimedia never used the paid-for, supported Red Hat Enterprise Linux but instead was using old versions of pre-RHEL Red Hat Linux. (Actually, the commenter before Vibber says that Wikimedia used RHL instead of RHEL, and Vibber only says that his organization was “never, at any time, a customer of Red Hat.”)

In a word: yikes. That’s old code. But it’s good to see that it still works.

And for clarity’s sake, here’s The Register’s article on the subject, which makes somewhat clear the use of RHL, and why Wikimedia is choosing Ubuntu’s LTS distribution:

Wikimedia has 350 servers today supporting its operations and fewer than 20 desktops, with the exception of a couple of servers still running a Red Hat Linux and a Windows desktop machine that is used to run QuickBooks to do the accounting for the foundation.

All remaining servers and many desktops are running Ubuntu 8.04 LTS. All future servers will be setup with Ubuntu 8.04 LTS, and Wikimedia intends to push that LTS-only idea to the limit by not changing Linuxes unless it has to.


An $800 Apple laptop could really cost Microsoft

October 10, 2008

The blogospheric din is rising about Apple’s supposed $800 laptop, which if it ever happens (and I have my doubts) will hit the Windows-based laptop market hard.

With Linux starting to eat away at the very low end of the laptop market via the ASUS EeePC and other netbooks, and Apple dominating the high end (where its share is considerable), the mushy middle is where most of the action is.

An $800 Macintosh laptop would hit the bulk of the market and steer plenty of people away from Windows and toward OS X. And given the iPod and iPhone’s tendency to get their users thinking about going all-Apple with an expensive desktop or laptop machine, a relatively inexpensive laptop is a hell of a game-changer.

Should this actually happen, Apple will have what looks like the right product at the right price — and at the exactly right time.

Let’s see: Windows Vista not doing so well, and certainly not driving PC sales. Economy in the tank. The holiday season upon us.

If anything, it’s a good time to buy some Apple stock.

Debian Lenny: Stable not in Sept. ’08, maybe in June ’09

October 10, 2008

Update: Debian Project leader Steve McIntyre expects Debian Lenny to go Stable at the end of October.

The Debian Project neither sets nor adheres to defined release dates for its GNU/Linux distribution.

Basically, they have the “we’ll release when ready” philosophy, and given that a version of Debian that has been designated as Stable actually does mean something to users, I’m fine to wait.

According to an entry on Linux Pro Magazine’s site, while some are pushing to get Lenny to Stable sometime in 2008, others think it will take until mid-2009.

I’ve been using Debian Lenny — currently Debian’s Testing distribution — for quite a few months now. And while I have my issues with Lenny on my Gateway laptop, just about everything works. As far as features go, Lenny represents quite a leap from Etch, especially on the desktop.

One thing about Testing, as opposed to Stable, is that there are lots of updates to download and install. If you are OK with that, it’s not annoying, but it’s nice for me to boot my current Etch box (a Power Macintosh G4) and know that I won’t have 50 packages that need updating. Usually there are no updates at all.

As a kind of bonus, a longer wait for Lenny to go Stable means an equally longer period of support from the Debian Project for both Etch and Lenny.

Going backward for a minute, it took me quite a while to figure out that when a particular Debian distribution goes Stable, the former Stable distribution goes to Old Stable status, after which it will get an additional year of support in the form of security and bug patches from the project. So if Lenny does in fact go Stable in June ’09, that means Etch will be supported until June 2010. Since Etch was designated Stable in April 2007, that would give it a “Stable/Old Stable” life of three years and two months, which I think is about right, especially for a server OS.
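A quick sanity check of the “three years and two months” figure, using the dates mentioned above (Etch going Stable in April 2007, support ending June 2010 under the mid-2009 scenario):

```shell
# Months between April 2007 and June 2010, from the dates in the post.
months=$(( (2010 - 2007) * 12 + (6 - 4) ))
echo "$(( months / 12 )) years, $(( months % 12 )) months"  # 3 years, 2 months
```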


Best Gateway Solo 1450 page … ever

October 10, 2008

OK, so the bar is fairly low when it comes to pages about how to deal with the Gateway Solo 1450 and Linux. This page is the only place I’ve seen a sane way to deal with the CPU fan (besides my mentions of it). Here at least you can find out how to set the trip points.

The page is fairly old (2004), and I reproduce the fan section here if only because I fear the page disappearing:

Turning off the Fan
If you were wondering why I made this page despite the fact that there are several already out there, it was for this section right here. The general solution to this problem is to make some stupid shell script that checks the temperature and manually activates/deactivates the fan. This is the wrong way to do it. You see, since you already went through the hassle of installing the latest ACPI patch, you can make use of the kernel’s ability to do this for you!

Add this to the earliest boot script you can find, like rc.sysinit under Redhat. The idea is that you want to turn off the fan before you start fsck’ing your disk so that you can squeeze the most out of your battery life.

echo 100:90:85:80:80 > /proc/acpi/thermal_zone/THRM/trip_points
echo 30 > /proc/acpi/thermal_zone/THRM/polling_frequency

# It’s safe to turn the fan off now.
echo -n 3 > /proc/acpi/fan/FAN0/state

WARNING These numbers represent the temperature in Celsius at which each state should be triggered. The numbers you should inject into "/proc/acpi/thermal_zone/THRM/trip_points" may be different for your computer. Before you set these numbers, check what the current values are set to by using cat /proc/acpi/thermal_zone/THRM/trip_points. Note that while some fields may be omitted when you read this file (on my machine the Hot field is not shown), all fields must be set when you write to it. The correct order is Critical:Hot:Passive:Active0:Active1 (note colon separation).

Alternatively, you can use this shell script which I wrote to do the same thing in a more user friendly manner. This script only changes the fields which you mean to change, and not any others. I highly recommend that you use this script (below) because it has the least chance of melting your processor.

function reset_trip_points
{
    if [ -f /proc/acpi/thermal_zone/THRM/trip_points -a -f /proc/acpi/thermal_zone/THRM/polling_frequency ]
    then
        ifs=$IFS
        IFS="
"
        for i in `cat /proc/acpi/thermal_zone/THRM/trip_points`
        do
            j="${i%% C*}"
            j="${j##* }"
            i=${i%%[ :]*}
            i=${i//[][]/}
            case "$i" in
                critical|hot|passive|active0|active1)
                    eval "[ \"x\$$i\" = x0 ] && $i=\$j"
                    ;;
            esac
        done
        IFS=$ifs

        echo $critical:$hot:$passive:$active0:$active1 > /proc/acpi/thermal_zone/THRM/trip_points

        # Activate the kernel's temperature control system
        echo 30 > /proc/acpi/thermal_zone/THRM/polling_frequency

        # It's safe to turn the fan off now.
        echo -n 3 > /proc/acpi/fan/FAN0/state
    fi
}

# Set the new value for the temperature you would like to change, or leave it
# at zero to get the default.
critical=0
hot=0
passive=0
active0=0
active1=80

# Make the Kernel handle CPU temperature management.
# Do this before the filesystem checks so that the fan will turn off and stop
# draining the battery.
reset_trip_points
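For anyone puzzling over the parameter-expansion tricks in the script’s loop, here is how they behave on a sample trip_points line (the sample text is illustrative; the exact format varies by machine):

```shell
# A typical line from /proc/acpi/thermal_zone/THRM/trip_points looks like:
line="critical (S5):           100 C"

temp="${line%% C*}"   # drop the trailing " C" and anything after it
temp="${temp##* }"    # keep only the last space-separated word: the number
name="${line%%[ :]*}" # keep only the text before the first space or colon

echo "$name=$temp"    # critical=100
```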

The hidden power of Gmail, the increasing reach of everything Google and the inevitability of cloud computing

October 8, 2008

I haven’t made a secret of the fact that I’ve never really delved into Google’s Gmail, even though I automatically have an account due to my much heavier use of Google Docs and previous use of Google Groups.

All that changed in recent weeks due to my ISP DSL Extreme’s decision to transfer all of its mail accounts from its own servers to Gmail.

I mainly use my DSL Extreme e-mail address for mailing lists. I have my OpenBSD and Debian mailing list traffic — which can be considerable — on that e-mail address just to keep it separate from the rest of my mail.

I never did like the DSL Extreme Web mail interface, and the fact that it’s going away in a week doesn’t bother me one bit.

But since DSL Extreme allowed users appropriately extreme flexibility in handling their mail, I’ve used it consistently, just not in a Web interface.

Instead I’ve used external mail clients — particularly Thunderbird in Windows — to process the mail, accessing it via IMAP and filtering it into folders that live on the server.

Since the connection to the mail servers can be fully encrypted and of the IMAP or POP variety, I’ve used my account fairly regularly.

My “lifestyle,” whatever that means, makes IMAP work way better for me than POP, which downloads mail to a single computer, and since I’m in front of a half-dozen different computers in different places, POP doesn’t work for me at all.

I was worried that the transition to Gmail for my DSL Extreme account would mean that POP and IMAP access would be gone, and I would be limited to the unfamiliar Gmail Web interface only.

But that is not the case. I can read the mail via POP or IMAP with any mail client software, and now I have a lot more space — about 7 GB, even though I can’t ever see needing that much.

And I’ve discovered a few rudimentary things about the Gmail interface that just might have me using it more and more — and dumping traditional mail clients entirely.

Right now, the reason is organization. I’ve relied on the folders and filters of Thunderbird to bring some semblance of order to the heavy volume of mailing-list traffic I receive.

I’m limited only by the folders themselves. A message can only be in a single folder at a time, and that makes finding things difficult in some instances.

But Gmail uses labels instead of folders, and an individual e-mail message can have as many or as few labels as I wish. So I can, for instance, have a message from the debian-user mailing list begin its life with the labels INBOX and Debian. I can delete it if I don’t need it, and that’s what happens most of the time. But if I want to save that e-mail, I can remove the INBOX and Debian labels and effectively archive the conversation by giving it a Debian Saved label.

The other way Gmail helps me with mailing-list messages in particular, and the rest of my e-mail in general, is by grouping together messages that are replies to one another when I read one of the messages in that group. I think this is what Gmail refers to as “conversations,” but again, I’m so new at this that I’m unsure of the terminology.

What I am sure of is that this labeling and grouping, which at first looks more than a bit forbidding, is in fact quite useful.

Another thing Google does with Gmail is bring together all of the Google services I use (and many I don’t but just might try).

I’m already using the Google Chrome browser to access Gmail, and when I click a link called Sites, I have the option to create secure Web pages, gather information on them and control who has access to them. In short, it’s a great, free tool for collaboration over the Web. In that way, it’s a valuable extension to Google Docs (also easily navigable to from the Gmail interface), which is already performing very well as a collaborative tool used by many of us at the Daily News.

I’m trying to use Google Docs to bring some kind of order to my own documents. I’ll have to get back to you on that one. I finally do have offline access to Docs (via the Google Gears API), and I’m less than impressed with its reliability and speed on my Gateway 1.3 GHz/1 GB RAM laptop. Gears and offline Docs are both still relatively young, so there’s plenty of room for improvement.

One more thing: Chat.

Since I’ve been guesting in the Op-Ed department for the past week and a couple days, I’m not on my own PC, and as a result all my usual apps, from Pidgin to Thunderbird to Notepad++ and Filezilla are not installed.

I did add Google Chrome after Firefox 2 started acting up on me. And on this PC, Internet Explorer 7 has actually been less of a dog than I remember. I did get the installer for FF 3, but I’ve yet to do the install.

I said I was going to get to chat … and I am.

Since I didn’t have Pidgin, which I use to bring my Yahoo!, AOL/AIM and Google chat accounts under one app, I switched from the “Classic” Yahoo Mail Web interface to the “All-New” version of Yahoo Mail, which is designed to look and act like a traditional local mail client, with drag-and-drop capability.

The reason I haven’t been using the “new” interface until now is that its relatively large graphical load doesn’t play well with some of my, ahem, older hardware, and the speed of the “old” Yahoo Mail is very much needed on those creaky laptops and desktops.

To make a long story somewhat shorter, I opted for the “new” Yahoo Mail so I could use the integrated Yahoo Messenger client. When you want to chat with one of your Yahoo contacts, all you do is click on their name, and a chat window opens in your mail interface. That way, you can use Yahoo Messenger without needing to have the application installed on your computer.

Now I’m bringing things around to my point, which is Google. Google’s chat service — Google Talk — has a “gadget” that mimics a standalone IM application but can be used on any PC with a compatible Web browser. That way you can use Google Talk from just about any Web-connected PC without worrying about individual clients or Pidgin.

There’s only one person I use Google Talk to IM with, so I’m probably better off using Pidgin if I can, but it’s nice to see so much innovation in chat from Google and Yahoo. For all I know, AIM has the same capability, but since I’ve probably checked my AOL mail … maybe once or twice … since I first signed up for AIM a few years ago, I know nothing about it. I also remember AOL Mail offering IMAP and POP to its users, and for that reason alone it might be well worth investigating as a mail solution.

Note: I remember hearing that Google was “rolling out” IMAP access to Gmail users and not granting it to all at once. Since my DSL Extreme account is not part of the regular Gmail throng, I appear to have both IMAP and POP as part of the deal between DSL Extreme and Google.

Summing up: A bit long and rambly, don’t you think? I’m just trying to think out loud about how deep I’m getting into the world of Google and its services.

There’s been a loud, long argument in the free, open-source software community (and at LXer in particular) about what cloud computing means for open-source software, users, freedom and all of that. For me, the freedom to have my files live in the cloud and be accessed from anywhere I’m networked is trumping almost everything else.

I’d love for the Google Docs interface to get more sophisticated about things like indented paragraphs and smart quotes — two of my typographical pet peeves. The technology is there, since Docs is based on HTML and CSS and can do anything that those two sophisticated technologies allow (and that is quite a lot).

And as I’ve said more than a few times recently, having the option of working with my cloud-based files either through Web interfaces or via the same kinds of locally based applications we all use today is something I’m very interested in seeing happen. It’s kind of ironic that the company I see buying into this concept (although their plans and offerings are presented in such a cryptic way that I can never really tell just what they’re planning) is Microsoft.

Yes, Microsoft’s dependence on traditional apps like MS Office and the billions they bring in has profoundly affected the company’s strategy for cloud-based data and apps. At the end of the day, a melding of cloud services and local client apps that are not necessarily Web browsers could very well be more efficient than doing everything through the browser. (Or not; it’s too early to tell at this point.)

All the data we have, from text files to images, audio and video, is increasingly hard to get a handle on. We need help storing, backing up, categorizing and utilizing all of it. In my mind, it all points to the cloud.

Depending on how you look at it, it’s a little “Matrix”-y, “HAL 9000”-ish, “Neuromancer”-like.

All I know is that Sun’s “The Network Is the Computer” mantra is becoming more true every day. Some of that will be good, some not. And the good and the bad will differ from person to person, application to application and entity to entity.

We won’t be limited to the huge cloud providers. There will still be traditional servers everywhere, along with clients in more shapes, sizes and guises than you could imagine. And the lone-PC-in-the-wilderness won’t go away, just as paper itself has survived in this most computer-infused of ages.

But the cloud model is real. And it’s growing.

Companies that understand this will prosper; others, not so much.