
Chaos Manor Special Reports

User-contributed essays on diverse topics



Beginning a new occasional feature. Published here by permission. To subscribe by email, contact Moshe Bar himself.

Part One

Part Two: The GUI Of The Masses

Part Three: Web Based Linux Administration

Part Four: A worrying development, and SCRIPTING

Part Five: travels in Europe

Part Six: Travelling with Linux

PART ONE: LINUX MAINTENANCE

MOSHE BAR'S OPINION

 

From: Moshe Bar’s Opinion [mailto:mbar-owner@listbot.com] Sent: Sunday, February 28, 1999 11:54 PM To: List Member Subject: Linux Maintenance

 

Moshe Bar’s Opinion - http://mbar.webjump.com

 

Dear Readers,

So you have a Linux server in your network that shares the dial-up for the whole family and serves files through Samba. Everything works fine, the machine never crashes. Actually, it is so stable that after a while you forget it's there!

But like all systems (computers or not), a Linux server uses resources. Resources are fixed for a finite system like a computer; therefore, sooner or later these resources may be used up or may wear down.

The resources in a Unix server are: CPU, memory, disk space, network bandwidth, communication channels, and kernel table entries.

Even though your Linux server might be running fine right now, it is using all of these resources, and one day there can and will be a shortage of one of them.

You need to maintain and monitor your system to make sure it has sufficient resources at all times to fulfill its mission. How do you do that? Follow me...

CPU
---

Usually CPU speed - or more appropriately, CPU cycles - is not a bottleneck on a UNIX server. Most of the work is I/O oriented. Even with a 100% busy CPU you will probably still get acceptable performance. No maintenance is necessary or possible, other than making sure from time to time that no processes are hanging in the system or stuck in a tight loop using 100% CPU. For those of you who use Netscape Communicator on the same server, you might find that sometimes Netscape looks like it has quit, but it still runs in the background using all available CPU cycles.

type "top" in the root prompt and see which process uses most CPU. If Nescape is the culprit, kill it with "kill -9 PID" where PID is the process ID number of this particular Netscape instance.

If your system looks slow, but top shows no particular I/O taking place and "vmstat 1" doesn't display any significant page-in or page-out activity, then you might have a heavily loaded CPU, and an upgrade to a faster model might resolve your problem. As I said, this occurs extremely seldom, and you will almost certainly run out of other resources first.
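If you just want a quick snapshot of the biggest CPU consumers without the interactive display of top, a one-liner does it too. A minimal sketch, assuming the standard procps ps; the cut-off at five processes is arbitrary:

# List the five busiest processes by CPU usage
ps aux --sort=-%cpu | head -6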

Memory
------

Applications in Unix (and this includes Samba, dial-out, telnet, ftp etc.) do not have any notion of RAM. All they know is virtual memory. Each and every process has a huge virtual memory address space available, and as long as there is swap space available on the disk, the operating system will satisfy every request for memory. Obviously, adding swap space makes it possible to run more and bigger programs. However, the limiting factor is I/O bandwidth. Queuing theory tells us that the more items (I/O requests) are waiting, the more the average servicing time grows - and it grows exponentially.

So, be careful about adding swap space without adding I/O bandwidth (additional SCSI channels, more disks, etc.).

In case "top" suggest a shortage of memory, you can do the following to add a swap space:

dd if=/dev/zero of=/swapfile bs=1024 count=8192
mkswap /swapfile 8192
sync; sync
swapon /swapfile

The above commands prepare a swap file of 8192KB and format it. Then, after synchronizing the file system on disk, you add the swap space. Make sure to distribute swap spaces among disks and SCSI channels. Linux by itself knows to swap in round-robin fashion among all available swap spaces.
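To check that the new swap area really is in use, and to have it activated again at every boot, something like the following will do. A minimal sketch; the /etc/fstab line is an assumption about your layout:

# Show the swap areas currently in use
swapon -s
# or: cat /proc/swaps

# To activate the swap file automatically at boot, add a line like this to /etc/fstab:
# /swapfile   swap   swap   defaults   0 0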

If "vmstat" shows a lot of paging activity (more than 200 pages per second on a Pentium II 200mhz) you should not add more swap space as it would slow down your system too much. Consider instead adding more RAM.

As a rule, a 64MB RAM system can easily serve web, mail, printing, and file sharing for a group of 50 users; 128MB can satisfy up to 300 users.

Disks
-----

One of the greatest shortcomings of Linux compared to commercial UNIXes like Solaris or HP-UX is the lack of volume management on disk. What does that mean? Suppose you have one big partition on your 4.5GB disk with the whole file system in it. One day your /home directory (3.9GB) grows to fill the whole disk. How do you add more disk space?

You can add a 9GB disk and transfer your /home directory to the new disk. You would do that by formatting the disk (say, /dev/sdb2 ) first, then:

mount /dev/sdb2 /mnt
rcp -r /home /mnt
umount /mnt
mount /dev/sdb2 /home

This momentarily mounts the new disk under the /mnt mount point so that you can write to it. Then you recursively copy all of the /home directory to it, unmount the new disk and remount it under the old /home mount point.

What’s wrong with this procedure?

First, you now have 3.9GB of unused disk space on your old disk. Since the operating system itself doesn't grow, and /var usually grows only very slowly, you would waste a lot of space on that disk. Second, you have just built a new directory structure, because the users' home directories are now under /home/home/user1.
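One way around the second problem, by the way, is to copy the contents of /home rather than the directory itself, for example with a tar pipe. A minimal sketch, using the same device names as above:

# Copy the contents of /home so users end up at /home/user1, not /home/home/user1
mount /dev/sdb2 /mnt
(cd /home && tar cf - .) | (cd /mnt && tar xpf -)
umount /mnt
mount /dev/sdb2 /home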

In HP-UX you could just add the new disk to the same logical volume group as /home, then increase the physical space of the logical volume by 2 or 3GB and still have some spare to give to /var or so.

So, what do you do in Linux? The best thing is to back up everything to a networked drive or to tape. Then re-install Linux with a new partition layout, putting /home on the new disk and giving /var some of the new disk as well, and then restore the backup. As you can see, there are still major areas to work on in Linux. Help is on the way, however, as a few hackers are working on a volume manager for Linux for the next release, 2.4.0, to come out in late 2000.

Just be aware, as you use your system, that adding disk space is not a trivial task in Linux (or in Windows, for that matter).

And most importantly, have regular backups. Trust me, your disks will fail. If you have 6 disks in your network and the average mean time between failures is 240,000 hours, then a disk will fail on average within 40,000 hours (about 4.5 years). If you have nine disks, or dozens like at Dr. Pournelle's Chaos Manor, then you will have a failure every few months. I believe Dr. Pournelle can confirm that. I have managed single servers with as many as 90 disks online, and experienced a failure on those systems every 6 to 8 weeks. It is not a question of if your disks will fail, it is a question of when. So, have a backup strategy in place. Instead of buying that new game or the latest MP3 player, buy a reliable DLT tape drive and have RedHat Linux's BRU backup program back up your /home and /var directories regularly. Murphy's law says you will lose the file you needed the night before you intended to back everything up. So, be good to yourself and back up nightly, or at least twice a week. BRU lets you automate the process; you just have to insert the tape into the drive. So, just do it.
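If you do not have BRU set up yet, even plain cron and tar will give you regular backups. A minimal sketch; the 02:30 schedule is arbitrary and /dev/st0 is the usual name of the first SCSI tape drive, so adjust both to your own setup:

# /etc/crontab entry: every night at 02:30, write /home and /var to the tape drive
30 2 * * * root tar cvf /dev/st0 /home /var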

A FEW TIPS
----------

If you frequently have to log in to your Linux server, you might have noticed that you cannot log in as root from a remote telnet session. You instead have to log in as a non-privileged user and then "su -" to become root. This bothers me a lot. To deactivate this behaviour, just rm -f /etc/securetty
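If you would rather keep the option of going back to the default behaviour, moving the file aside instead of deleting it has the same effect and is reversible. A small sketch:

# Same effect as deleting /etc/securetty, but reversible
mv /etc/securetty /etc/securetty.disabled
# To restore the default later (root logins on local consoles only):
# mv /etc/securetty.disabled /etc/securetty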

Have a look at your system log at least once a week. Upcoming problems can be recognized early by looking at the system messages. Often, shaky hardware spits out early warning messages (especially disks, which have a lot of intelligence built in these days).

tail /var/log/messages

will show you the last 10 lines of the log. Look at it regularly. The messages are easily understandable.

Check the root mailbox regularly, as many subsystems send a message there if they experience problems. Do

mail

to read the root mail, and delete the messages if you don't need them. This saves space. On a friend's Linux server, he never looked at the mailbox, and one day it filled up all of the disk space with a stupid error message that he could have stopped with a simple command. As a reminder, UNIX doesn't like full disks. In some cases you cannot even log in as root anymore if the disk is full. So, be careful!
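On RedHat the root mailbox lives in /var/spool/mail/root, so it is easy to keep an eye on its size and to empty it when needed. A small sketch:

# See how big root's mailbox has grown
ls -l /var/spool/mail/root
# Truncate it once you have read everything you need:
# > /var/spool/mail/root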

Never, ever change the root shell or root home directory. Many people like to give root the tcsh or csh shell. However, if you do something wrong in reassigning the root shell or home directory, root will not be able to log in anymore, and you will have to either re-install or otherwise salvage the system. I am going to explain how to salvage a lost Linux system in one of my next mailings (including lost root passwords and full disks).

A friend of mine changed the shell from /bin/sh to /bin/tsch on a big HP system with 5000 users. As you probably saw immediately, /bin/tsch is misspelled, and he could not log in anymore. He also could not simply shut down the machine and try to salvage the system, as 5000 users do not easily tolerate that kind of downtime. He called me in, and I discovered that there was a root telnet session open in a branch office running "top". I could have hijacked that session from there to the current terminal, but this is tricky stuff, and if it goes bad you lose all possibilities. So, we drove the 200 kilometers to the branch office. There I made a copy of /bin/tcsh to /bin/tsch so that the login would succeed even with the wrong name, and then VERY CAREFULLY I changed the shell of root back to /bin/sh. My friend was lucky this time, but he learned the lesson. Never change the root user settings.
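If you really must give root a different shell anyway, check the path first and use chsh rather than editing /etc/passwd by hand, and keep a second root session open until you have verified that a fresh login works. A small sketch:

# Make sure the binary really exists and is an allowed login shell
ls -l /bin/tcsh
grep /bin/tcsh /etc/shells
# Only then change the shell, and test a new login before closing your current session
chsh -s /bin/tcsh root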

That is all for this week. I hope this gives you an idea of the maintenance duties for a small Linux network server.

For questions, comments or your own personal experiences, please write to me at moshe_bar@hotmail.com. Also visit the archive of past mailings at http://mbar.webjump.com/

 

Have a nice week

Moshe Bar
Tel Aviv, Israel


PART TWO: THE GUI OF THE MASSES

 

 

Until very recently there were two different stereotypes in computing. You had the Joe User type of user. He would usually buy a latest-model computer with all the multimedia bells and whistles. He would typically divide his time between cable TV and the Internet, all while making sure there was always an ample supply of beer at home. Then you had the geeks. Geeks also had ample supplies of beer in the fridge, but usually they spent all their time at their computers (notice the plural).

Joe User types by definition used Windows 95 to play games and to run all the latest browser add-ons to view movies off the Internet (why not just watch them on TV?), and other non-relevant stuff.

Geeks began running real operating systems around 1994 or 1995 with the general availability of Linux. Geeks didn't need a GUI to do their work. They were plenty happy with csh (the C shell) or - even better - with an Emacs environment.

These geeks were running on antiquated boxes even by the standards of those times. Still today, geeks run Linux on 486 and early Pentium machines with hardly any multimedia capabilities. Those few who do run their (self-rolled?) Linux with X and a desktop environment normally do so just to run many terminal windows at the same time. Click-and-drop? Nah, give me a bricks-and-mortar telnet session anytime! And, since most efforts of the hacker community were focused on Linux itself and not on the GUI, Linux proved to be remarkably stable and efficient for a non-commercial OS. I have had Solaris crash on $2,000,000 machines. I have had HP-UX hang on big production systems. Hell, even rock-solid Irix from Silicon Graphics I have managed to bring into such a messy condition that rebooting was the only escape.

But never, I repeat, never have I seen a Linux box hang. Maybe one of its subsystems got stuck. Maybe you could not get a login anymore, but the machine still kept on kicking. My Toshiba laptop has a hardware sleep mode. Even with Win98 drivers, the OS regularly locks up on wake-up. Linux doesn't really support sleep mode. But guess what? After wake-up it complains a little and warns me not to do this again, but it regains its composure and immediately goes back to work. Linux is rock solid.

Well, was solid...

...until the masses hopped onto the Unix train and Joe Users got stuck at the shell prompt and said: And now?

Just to gain market share against Microsoft, and to be able to claim to be THE OS for everybody (UNIX never claimed that in the first place), the hackers and geeks began to address the needs of Joe Users, which comes down to giving them an attractive GUI that lets them click, drag and drop. Huge numbers of man-years of very talented programmers were invested in providing the ultimate desktop look and functionality.

Sure, the Linux kernel and subsystems continued to be developed. New functions were added at break-neck speed, partly to make the new wave of GUIs possible. Talented developers shifted to where the hype was: GUI development and new features. Testers had to determine whether errors were due to the GUI or to the underlying system. The result is that the new versions of Linux look great, but are far less tested and, as a consequence, more buggy.

I installed RedHat pre-6.0 with kernel 2.2.1-5 on my laptop, and for the first time in six years of using Linux, my laptop crashed hard the other day. It was certainly a driver problem and not a kernel problem. But I had been running Linux 2.0.34 for 14 months and never once experienced a driver problem.

So, I ask you: what is the benefit of having a great-looking desktop if the OS crashes? What's the difference from Windows, then? Just to be able to say that the Joe Users of this world are now also running Linux? Sure, Joe User doesn't mind the occasional crash; he got used to it running Windows. But people like me, from the old Unix school, run big servers with thousands of users or strategically important applications and databases. We don't care about the GUI. We want stable systems.

It is my firm belief that nobody should be allowed near a Unix system until he has mastered the command line. All these click-and-drag-and-drop tools just don't cut it.

Is Linux doomed to repeat Windows history?

Regards,

Moshe Bar

 

For comments and questions, please send a mail to the mailing list.

This document was written, by the way, on a 486-66 Compaq with 32MB RAM, Linux 2.0.29 and WordPerfect 8. It certainly is fast enough to write this article. Oh, and at the same time it is also calculating encryption keys and playing my favorite CD.


 ===

PART THREE: WEB BASED LINUX ADMINISTRATION

Moshe Bar’s Opinion - http://mbar.webjump.com

 

Dear Readers

OK, I surrender. I got so many e-mails from readers of this mailing list or of Dr. Pournelle’s site that I had to change my opinion.

It seems that the species of system administrators of the old text-prompt school of thought, who cut their teeth on dumb Wyse terminals in the eighties, is almost extinct. This mailing list has opened my eyes to what today's UNIX root-privileged users are: mainly computer-savvy people who started with the PC revolution and grew up first in a DOS and later in a Windows environment. Usability seems to have (hard for me to digest, but I will have to learn) a higher priority than efficiency and reliability. OK, very well then. If this is the case, I will adapt accordingly. Therefore, today we will talk about web-based administration utilities for Linux.

There are a couple of web-based admin utilities: there is the RedHat-included linuxconf and, among others, the excellent Webmin package. Linuxconf has a nice interface (here it comes again!), but badly needs to be fixed in the network set-up part. By the way, you can also run linuxconf without the web by typing "linuxconf" from an XTERM window.

On the good side, it is the only web-based admin tool that also manages run levels. This is mainly important if you chose "Install everything" at set-up and also told the installation utility to start all possible daemons (a kind of TSR program). Many people write to me because they really didn't know what those daemons are good for and so simply chose them all (I would have done the same thing). This may create problems such as long delays at boot time (Dr. Pournelle used to have such a problem on Linette, the first Linux server at Chaos Manor). Linuxconf's run-level editor lets you deselect or add daemons for a particular run level.

A run level is the UNIX term for a particular boot setup. By default RedHat (and most other Linuxes, too) boots into run level 3. All daemons that run when you boot the machine will accordingly appear in the linuxconf run-level editor under run level 3. Dr. Pournelle had in his first install installed the "gated" daemon by mistake. This caused his machine to lock up at boot for about four minutes until the network timed out, because he didn't yet have a permanent Net connection. To remove it, he could simply have fired up a linuxconf session, gone to the run-level editor and removed "gated" from run level 3.

If, by the way, you chose during the install to start X automatically, then your run level is 5. So, if you wish to make changes to the daemon configuration, you would have to make the changes for run level 5.
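On RedHat you can do the same job from the command line with chkconfig, which is worth knowing when neither X nor linuxconf is available. A small sketch, using "gated" as the example from above:

# Show in which run levels gated is started
chkconfig --list gated
# Keep it from starting in run levels 3 and 5
chkconfig --level 35 gated off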

Actually, however, I wanted to talk about Webmin, which I consider far superior. First, it runs not only on Linux but on most other UNIXes as well. Second, it also lets you configure sendmail, disk quotas, NFS, NIS and many, many other things.

A note of warning here: it is easy to mess up your system badly if you don't know what you're doing. I suggest you don't change anything unless you explicitly want to change something, or at least study the man pages or some UNIX literature first.

Webmin can be used from within any browser (version 3.0 or higher is necessary), either locally on the same machine, within your own network, or from the other end of the world. Webmin is itself a web server, meaning that you don't have to set up the Apache web server or anything else. Just download the gzipped tarball (webmin.tar.gz) from the home site at http://www.webmin.com/webmin and do:

gunzip webmin.tar.gz

tar xvf webmin.tar

cd webmin

install webmin

The install procedure will ask you a few questions about your specific situation, but all of them can easily be answered if you managed to install Linux in the first place. Webmin installs itself (strangely enough) under /etc/webmin. It asks for a port number, user ID and password. The port number is where you have to point your browser. Assuming your machine is named Nevercrash and during installation you gave it port number 8888, then you would point your browser to http://Nevercrash:8888

 

Log in with your user ID and password and you will be presented with a nicely designed graphical main menu offering the various administration choices.

Among other things you can edit entries for user accounts, file systems to be mounted, file systems to be exported, sendmail configuration, DNS configuration, security settings, ftp configuration, disk quotas, cache set-up and many other things.

This being UNIX, no re-boots are ever needed after changing settings. All changes are valid immediately.

I use Webmin to maintain some Solaris and HP-UX servers in San Jose and in London, and it works just fine for me.

Webmin is written in Perl, and if you look at the code you will see just how easy it is to write a web server in Perl. You don't need more than 20 lines of code to implement a fully functional web server.

By the way, if you already have a running web server (like Apache), it won't interfere with Webmin as long as you don't use the same TCP/IP port twice. Even then it will probably work, but it is better to make sure not to replicate ports among servers.
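A quick way to see which TCP ports already have a listener before you pick one for Webmin is netstat, which comes with every Linux distribution. A small sketch:

# List listening TCP ports; pick a Webmin port that does not appear here
netstat -tln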

If you want an easy, powerful Linux (or any UNIX) configuration, administration and maintenance tool, Webmin might be for you.

Next week we will learn a little bit of shell programming to automate basic maintenance tasks. Stay tuned.

I much appreciate getting your opinions, questions and experiences. Send them to moshe_bar@hotmail.com.

Moshe Bar

Copyright 1999 by Moshe Bar, Israel

This document is under the GPL Open Source License and you can freely re-print, copy and distribute it as long as the source and author are stated.

Resources:

Webmin - http://www.webmin.com/webmin

Linuxconf - http://www.freshmeat.net/

Moshe Bar Web Site - http://mbar.webjump.com

Chaos Manor Site - http://www.jerrypournelle.com/

 

PART FOUR: A WORRYING NEW DEVELOPMENT

Moshe Bar’s Opinion - http://mbar.webjump.com

 

Dear Readers,

There is a worrying new development within the Linux community. What I am talking about is this new Linux fundamentalism spreading quickly, particularly among the younger Linux users. Users is really the right word here, because among all these loud and arrogant self-proclaimed Linux ambassadors I have never seen or heard of somebody who actually contributed code or anything else of value to Linux or the Open Source movement. Lacking the ability to impress with real contributions, they seem to revert to bigotry instead. A behavior not unlike that of religious fundamentalists.

Eric Raymond (with a string of very valuable contributions, one of which is fetchmail) has already left his post as Linux evangelist, disgusted by the ridiculous noise-to-signal ratio in any Open Source discussion.

I hope that the more the Linux OS gains hold in the business and home markets, the more it will become just another PC operating system - albeit a superior one - and these people might then move on to find another playground. For the moment, all we can do is utterly ignore them.

End-of-rant

SCRIPTING

We actually wanted to talk about scripting today. Scripts are the glue holding together the various building blocks of an internet server. Scripting languages make the likes of amazon.com and yahoo.com possible. Scripts are what back up your records at the IRS or trigger the billing of your credit card.

So, it’s only natural that now that we have a UNIX server in our own little network we should start to learn a few things about scripting, too.

If you have a running Linux box, you already have a variety of scripting languages installed. Among them are sh, csh, ksh, Perl, Python, Tcl/Tk and probably a dozen more. The ones you should know about are sh and Perl.

The sh (it stands for shell) scripting language is built into the sh shell. The default shell on RedHat is bash, which is just a clone of sh with some improvements. Basically, sh allows you to put into a file all the commands that you would otherwise type at the command line and execute them later, either from the prompt by calling the file, or through job scheduling at a certain time.

One thing that I always like to automate first with the use of the sh scripting language is the correction of the clock at every dial-up to the internet. Let’s see how it is done.

Whenever you dial into your ISP the script /etc/ppp/ip-up is automatically executed by ppp. You can call from within ip-up your personal selection of scripts to be executed at dial-up.

Insert (make sure to be root to do that) the following line at the end of ip-up:

/root/clock_corr.sh

Now in your favorite editor create the following file, named /root/clock_corr.sh:

#!/bin/sh

rdate ntp2.usno.navy.mil

 

 

Save the file and make it executable by typing

chmod a+x clock_corr.sh

and start up your connection. Your clock will be adjusted to the official Navy atomic clock (which is also the official time source for all countries participating in the International Time Convention, regulating time zones etc.). That was your first script! Easy, isn't it?

The first line tells the system to use sh to execute this script, and the next line just calls the time-sync utility rdate and syncs to the Navy time source. There are more complicated and sophisticated ways to do it, but for me this is good enough.

Whenever I spend the whole night programming or studying the Holy Books, I want to make sure I can wake up softly but still in time to go to work. Next to my bed there is an old Compaq laptop with a CD-ROM drive and a sound card. It is attached to my home network, and whenever certain mails come through the URGENT filter I hear a beep, so I can read the mail without getting up. Now, with a simple script I can program the UNIX on this laptop (which actually happens to be Sun's Solaris 2.4 for x86, a great OS, by the way) to wake me up in the morning. Here is the script:

#!/bin/sh
# copyright 95 by Moshe Bar <- insert your name if you like
echo "welcome to moshe's alarm-clock with music"
echo "please insert a time in 'at' format to start the music"
read ti
echo " "
echo " "
echo "please insert a track to play at wake-up"
read track
echo "cdplay play $track" > alarm

at -f alarm $ti

 

This uses the UNIX "at" time scheduler to start the music on a particular track at a particular time. It took me 23 seconds to program this back in 1995 and I still use it, unchanged. I know a friend in the air force who uses this script to start the positional sync of our satellites every morning. So, I guess it is solid enough.

The important line here is the last one, telling the UNIX time scheduler to start at time $ti (which the user entered) and to read the exact command to execute from the parameter file "alarm".
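Two companions of "at" are worth knowing while you experiment: atq lists the jobs that are currently queued, and atrm removes one you no longer want. The job number below is just an example:

atq       # list jobs currently queued by at
atrm 3    # remove job number 3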

As I said earlier you can put any command that you would type at the prompt within a script and it will reliably perform the action by itself.

Shell scripts are very portable, and as long as you have a UNIX variant somewhere, it will probably execute your script without modification. Some of the complex scripts I use at work I wrote on UNICOS (the Cray Unix flavor) back in '88 or '89; they still do their job... on Linux servers (whereby the Linux servers are faster by a factor of 2 or 3).

If portability is what you are looking for, then Perl is definitely worth looking into. Perl stands for Practical Extraction and Report Language. It is a very complete interpreter, and to really learn it well (something every aspiring SysAdmin should do) you should buy one of the excellent books on the market. Larry Wall's own book (he is the author of Perl), from O'Reilly, is the best one I know. Perl is available on most operating systems, including WinXX and mainframes.

One small example of a Perl script to integrate into your webpage is this:

#!/usr/bin/perl

print "Content-type: text/plain", "\n\n";
print "Welcome to Moshe Bar's WWW server, running on mosheb.moenet", "\n";

$remote_host = $ENV{'REMOTE_HOST'};
print "You are connecting from ", $remote_host, ". ";

$uptime = `/usr/bin/uptime`;
($load_average) = ($uptime =~ /average: ([^,]*)/);
print "The load average on this machine is: ", $load_average, "\n";

print "\n\n", "Have fun!", "\n";

exit (0);

This Perl script is called from an HTML page and returns various interesting values, like the load factor of the host.

We are obviously not making use of any special Perl features here. We could actually have achieved all of this with a shell script or a normal C program. However, many web hosting services restrict CGI scripts to Perl only, for security reasons.

Perl can connect to databases, too, and it knows TCP/IP.

In the next few days I am going to put a few useful scripts for Linux system administration on my website at http://mbar.webjump.com/ Go there to find out more. Among other things I will put up a script that warns you when disk space is low, and a Perl script that implements a web server in about 20 lines of code. The exact location of the scripts within my website will be announced on the main page.
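To give you a rough idea of what such a disk-space warning script can look like, here is a minimal sketch in plain sh; the 90% threshold and the mail to root are my own assumptions, not necessarily the version that will go on the website:

#!/bin/sh
# Warn root by mail about any local filesystem that is more than 90% full
THRESHOLD=90
df -P | awk 'NR > 1 { gsub("%", "", $5); print $5, $6 }' | \
while read pct mountpoint
do
    if [ "$pct" -gt "$THRESHOLD" ]; then
        echo "Filesystem $mountpoint is ${pct}% full" | mail -s "disk space warning" root
    fi
done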

Even if these scripts look a bit complicated right now, as you write more and more small scripts to accomplish useful tasks, you will quickly learn to love them. As with every piece of software, scripts should be used to solve a well-defined problem. It is always better to write a collection of scripts that accomplish a single task each than to write a large (and buggy) one that tries to solve all problems.

I like scripts because I get immediate, useful results, something that is more difficult to achieve with plain C programs.

 

One of the drawbacks of having the whole network connect to the internet through a central Linux server is that Usenet news can't really be managed well, as every user connects to an upstream news server and has to download the whole list of newsgroups. This consumes a lot of bandwidth and is a very slow process over a dial-up line. Next week we are going to implement a small-footprint news server on the Linux server and let it download only those newsgroups to which users inside the network are subscribed. I have a very elegant solution with low bandwidth requirements for networks of up to 30-40 users. Stay tuned.

Kind regards

Moshe Bar

Resources

Perl - http://www.perl.org/

Shell scripts - Usenet newsgroup comp.shell.programming

Scripts - http://mbar.webjump.com/

 

Copyright 1999 by Moshe Bar, Israel

This document is under the GPL Open Source License


 

 

PART FIVE: TRAVELS IN EUROPE

Moshe Bar’s Opinion - http://mbar.webjump.com

 

Dear Readers

I am sorry for the silence of the last two weeks. Private matters have kept me from writing the next column. However, the next column (on the subject of news servers) will be ready by the first weekend of May.

There are a couple of announcements.

First, I have been made a Byte columnist for Unix, OSes and the Book Reviews section. The first column will be out on May 4th at www.byte.com. Don't miss it.

The mailings on this mailing list and the column at Byte will not interfere with each other, so those of you who read Byte will benefit from up to 5 columns every month instead of the 3 to 4 of this mailing list. This mailing list will deal more with home networking and dial-up connections, while the Byte columns will cover more enterprise-class server issues.

Second, Byte and its parent company CMP Communications, Inc. were bought last week by Miller Freeman. The Chief Editor of Byte assures me that nothing will change. We shall see.

Third, I have moved from Israel to Europe for a period of up to two years.

Expect a report on travelling with Linux in the next two weeks.

As you see, there is plenty going on. Stay tuned.

Kind regards

Moshe Bar

moshe_bar@hotmail.com

========

PART SIX: TRAVELLING WITH LINUX

Moshe Bar’s Opinion - http://mbar.webjump.com

 

My dear readers

First of all, sincere apologies to you all for being silent for so long. I moved to a new continent, and these things take their time, especially if you are not a citizen of the country you move to. I am slowly getting settled again. So here it comes, my first European article.

We all love those articles in the computer magazines telling us about the writer's experiences travelling with a laptop to far-away countries. Usually the only problem is the connection to the internet. I am always stunned at the stupidity of the undertaking. Come on, do they really mean to connect to the internet in Uzbekistan? Those business people accustomed to travel in second- and third-world countries (you would be surprised how many second-world countries there are in Europe alone) know to adjust their lives to the realistic possibilities there.

Accordingly, this article will not try to belittle the capabilities of the countries visited; it will just try to provide a report on the advancement of Linux in Europe (which is, after all, where Linux comes from).

At this point it needs to be said that in Israel, where I come from, IT is at a very advanced stage. Many of the currently hot technologies, like ICQ, portal software, hyper-relational DBs and embedded security software, are developed in Israel. Therefore, it is not uncommon at all to find software companies there investing substantial amounts in Linux-based products.

My first experiences come from Italy and France. Italy is - from an IT point of view - about 4 years behind the US and Israel. Obviously, computers have the same power as anywhere else. However, the applications used by companies and public services are still mainly based on the mainframe paradigm, and PCs are used as terminals speaking the 3270 protocol over SNA networks. Telecommunications are particularly old-fashioned. While ISDN is slowly growing as a medium for home users to connect to the Internet, other technologies like cable, Frame Relay and ATM are totally nonexistent. More than 98% of the population still get their TV programs with antennas on the tops of their buildings. Medium and small companies (which are the economic backbone of Italy's economy) usually use PCs more as a management status symbol and rarely implement networks.

Accordingly, Linux has problems finding real use in Italy. PC enthusiasts do install Linux, and there are actually quite a few contributors to the Linux kernel and some of its subsystems. However, an IT manager at a Milan-based company with about 90 employees best expresses the sentiment about the use of Linux or Unix as server systems: "Nobody knows how to install and maintain Unix. We are shifting to NT only slowly, and peer-to-peer networking is still good enough for our requirements".

The picture is a little more promising among the hundreds of small ISP start-ups in the country, particularly in the north. Some new ISPs are based on 256kbit lines as the main pipe to the internet and run their services on a couple of PCs, usually with around 300 to 500 customers (the ratio of customers to modems in Italy is around 100 to 1!). Here, for cost reasons, Linux does gain a foothold. Most ISPs use either the RedHat or the SuSE distribution. The job market best reflects the state of Unix in Italy: in a recent national jobs journal, of 214 requests for system administrators, only 8 were for Unix specialists; most others were either for mainframe OSes or Microsoft products. This proves that Linux thrives in a connected, networked environment, and it will therefore only grow its presence in Italy as communications improve and businesses switch to the client-server paradigm for their vital applications.

In France the situation is much more positive for Linux. Traditionally, the communications infrastructure has always been advanced, and French companies are very IT-savvy and invest significant parts of their cash flow in modern information technology. The public administration makes concerted efforts to improve the knowledge of information technologies at all levels of school. Consequently, Linux has a very strong following in this country.

Recently, the French government announced that more than 2000 schools around the country would install Linux on their machines for teaching the students. Many big and medium-sized companies are very eager to switch to products other than Microsoft's. Linux already has a sizable chunk of the server market and is growing much faster than WinNT.

There are also some French-made Linux distributions to be found there.

Anti-Microsoft sentiment is very strong in France, and users are generally happy to have an alternative to WindowsXX.

 

In England the picture is similar to France. Linux already has a very firm place in the corporate world, and I would say that most internet servers there are Linux-based. Slackware and RedHat are the favorite Linux flavors, and you can find a Linux distribution in every book shop in London. In the South East of the U.K., where most computer companies are located, there are thousands of heavy-duty Linux servers, and every day you can find newspaper ads from companies looking for Linux specialists.

Now I would like to talk about a very nice piece of software that I have been testing for the last three months. I am talking about VMWare. VMWare is a commercial product that runs either on Windows NT or on Linux 2.0.3x (I managed to make it run under Linux 2.2.3-5 as well, but it was a major hassle). What it does is create one or more virtual computers inside Linux or Windows NT. This virtual computer comes with drivers for an SVGA adapter, diskette, IDE, SCSI, CD-ROM and network card.

Once you have defined the disk space and RAM of such a virtual machine, you can boot any operating system that runs on a modern Pentium-based computer. Note that VMWare runs only on Intel-based computers and only provides an Intel-based virtual machine; it is therefore not an emulator. The advantage of this is that no precious CPU cycles are lost on emulating a different architecture (as with Insignia's SoftWindows and other such emulators). Provided you have a fast CPU (266MHz or faster) and enough RAM (at least 64MB for Linux and 48MB of real RAM for each virtual PC), you will not see more than a 5-10% performance penalty. This is quite remarkable.

Let me give you a real-world example of how I used VMWare. For my column at Byte, I intended to conduct a review of the most widely available UNIXes for the PC architecture. Instead of installing, testing, deleting and re-installing all the OSes, I just created a virtual PC each for FreeBSD, OpenBSD, NetBSD, Solaris 7 and RH 6.0, and one NT machine for comparison. My machine was a dual Pentium II 450MHz with 512MB RAM. I installed a self-rolled Linux distribution with the 2.2.3-5 kernel and installed VMWare 1.0 Beta. Then I proceeded to configure a virtual machine for each of the above operating systems, giving each 48MB of RAM to test under common conditions. It took me about half a day per OS to install them all, but in the end I had them all running - at the same time! - on the same machine. I could switch from the Solaris window under X to the FreeBSD window, and each of the OSes was connected by network to all the others. Therefore, I could FTP some programs from the virtual machine running FreeBSD to the one running NetBSD, and each of them had access to the internet, all through the same real network card in the dual-Pentium computer. Performance was excellent, given the configuration of the host computer. I also tried installing WinNT 4.0 and WinNT 5.0. Same result: they installed flawlessly and ran without a hiccup.

Once I had the dual-Pentium set up this way, I could also run the windows on the screen of my laptop out on the porch, while the computing was done on the big server inside. This is an additional advantage of the X Window System.

Seeing that it ran just fine on this fast machine, I was eager to see how it would fare on a smaller, desktop system. I installed Linux 2.0.34 on a 266MHz Pentium II with 64MB RAM. Then I proceeded to install Windows NT under VMWare. It ran a little slower, mainly because of the smaller RAM footprint and the resulting memory paging to disk, but it ran just fine.

I also do Open Source development for KhaOs (see www.kha0s.org), a paranoically ultra-secure Linux implementation where every byte, and indeed every bit, travelling on the wires or stored on the disks is encrypted, and where life is made hell for hackers. The early kernel snapshots are a nightmare to boot. Instead of going through the whole cycle of compiling a kernel, storing it in a separate partition, shutting down Linux, restarting into the other partition, having the machine crash immediately and rebooting into Linux to debug, I just created a virtual machine for the new kernel, and after a re-compile I only had to press a button on the screen to see whether it booted under VMWare or not. In one typical day of work I could save more than 4 hours of this dumb reboot cycle.

The technology under VMWare is not new. In fact, it is older than personal computing itself. I remember using a very advanced version of VM/370 on a big mainframe in the early 80s. You could boot MVS/370 or VSE under it, and the performance hit would not be bigger than 5%, provided enough core (the old name for RAM) was installed. Which at the time was 8MB. Yes, 8MB was a LOT of RAM in those days.

The concept of VMWare is so successful that Mr. Kevin Lawton of the U.S. decided to start a new Open Source project to make a free VMWare equivalent, conveniently called FreemWare (see www.freemware.org). I was invited to join this project as Chief Scientist and gladly started doing work there about 6 weeks ago.

VMWare will cost about $400 for normal users, and there will be a student version for $99. FreemWare will cost $0 and will also give you the source code. FreemWare will also be faster, smaller and better tested. Hopefully.

Kind regards

Moshe Bar

http://mbar.webjump.com/