Well, I finally got all of this together and there is now a Linux Gazette Mirror Site page available. I'm deeply indebted to everyone who has kindly offered to mirror the LG, particularly those sites outside the continental USA where connections to the US can be very slow. If you're accessing the Linux Gazette from outside the USA you might want to see if a mirror site is available that would provide a faster connection.
Also, if anyone wants to mirror these pages, there is a brief explanation of how to do this.
Thanks!
Help Wanted!
OK, the time has come for the Linux Gazette to expand a bit! It's time for a design change.
I've received a number of letters offering help and ideas for improving the Linux Gazette. Because the LG now reaches a pretty wide audience, I think it's about time to throw the doors open and take up those kind offers for help.
If you think you'd be interested in pitching in and helping, or if you're curious about what I'm planning to do with the LG, then head on down to the Welcome section below...
For those of you who will be within driving/flying/hitchhiking/walking distance of North Carolina State University on Saturday, April 13, 1996, there is a must-go event:
Linux Expo96!
This is being held at NCSU and is being sponsored by Red Hat Software, Inc. Here's an excerpt from the letter I received:
John,
Just wanted to let you know about Linux Expo96... Last year's event had almost 300 attendees, over 200 more than we expected! We hope for 400 or so this year... We have several talks planned.
The event is Saturday, April 13. It will go on most of the day...we will have hardware vendors, software vendors, and tons of talks. You can check out the web page at http://www.linux.ncsu.edu but it's a little behind right now. Players making an entrance are:
- Red Hat (of course)
- Caldera
- Digital Equipment Corp
- Michael Johnson (LJ)
- Linux Hardware Solutions
- Softcraft Solutions
- and more!
Guess who else will be there...?
Yup, there goes the neighborhood... :-) I'll be the short, nerdy looking guy with the glasses. Should be lots of fun! It's a seriously cheap date, too (for all of us who are still on a student's budget).
Be there!
Brian Freeze, et al., have recently fired up a new Linux Message Board (webboard) service. If you've been looking for a place to go to chat a bit about Linux or to ask a question and not get flamed to a crisp...
This is your place...
Here is a very worthy cause that some of you may not have heard about:
Hi John,
I enjoy your Linux Gazette a lot! Here's my contribution, a shameless plug for the Linux CD and Support Giveaway List:
I know that many people buy one Linux CD after another, and the old ones are never used again. Why not simply give them away so that more people can be attracted to Linux? It's certainly a great way to make friends, and it's fun to give something back to the community.
Here's how it works: you register your email address on the Giveaway List, then somebody contacts you, sends in a self-addressed, stamped envelope, you put the CD in, send it off and that's it! Easy enough. And even if you don't have a CD to spare, you can offer help with downloading/installing Linux. There are already many people from all around the world on the list.
Those people who received a CD from the list are encouraged to pass it on or redonate it to the list once their system is up and running and everything has been installed. This way, I hope the Giveaway system will one day become self-sustaining, with a large enough number of circulating CDs.
This service is intended for new Linux users only. Please don't try to use the list in order to get a cheap update of your system.
Finally, the URL:
http://emile.math.ucsb.edu:8000/giveaway.html
Again, thanks for the Gazette. Keep up the good work!
-- Axel Boldt ** [email protected] ** http://emile.math.ucsb.edu/~boldt/
I received a VERY welcome letter from Tom Corner with the details of how to get this very worthy email client to compile after a Slackware upgrade.
Details in the Welcome section below!
I've had several letters from folks recently who've had trouble with the link to this program from the Linux Toys page. If you happen to be one of those still looking for this, here's a correct URL:
ftp.cc.gatech.edu/pub/linux/apps/editors/
You'll find it, together with a patch file for Linux, as xwpe-1.4.2.tar.gz. If anyone can get this to work correctly (i.e., colors and graphics display correctly) in console mode, I'd be delighted to hear from you!
I recently received a short announcement from the folks at INT. For those of you who might be interested...
INT ANNOUNCES FREE CHART AND TABLE WIDGETS FOR LINUX

HOUSTON, TX - February 27 -
(INT) announced that it has made a Linux version of its popular table and charting tools, EditTable99 and ChartObject99, available for free. Linux is a freely distributed UNIX-like operating system for PC platforms that is gaining rapid and widespread acceptance in the development community.
David Millar, Vice President of Marketing for INT, said, "The overwhelming growth in popularity of Linux among developers has created a need for programming tools for this operating system, and INT is pleased to make available these free versions of our products to the Linux community. We believe our tools will prove to be a valuable asset to Linux developers, and in turn, INT will benefit through the increased exposure."
INT's EditTable Widget and ChartObject Library will provide Linux programmers with flexible, reliable tools for creating, displaying and editing tables and charts. EditTable contains resources for interactive control of all aspects of table data visualization and manipulation, and ChartObject includes a comprehensive library of easy-to-use 2D and 3D graphing tools for building presentation quality charts and graphs. These two products can be used together to provide a seamless table/graph interface within interactive applications. In addition, they can be linked dynamically, so that when a data value is changed in one view, the change is automatically reflected in all other linked views.
The freeware versions of EditTable and ChartObject for Linux can be downloaded from INT's Web site http://www.int.com/linux.html. Commercial versions of these products are also available for both UNIX and Linux platforms.
PHEW...!!!
Anybody else been up-to-your-neck busy?
I must admit that after the initial shock of being back at school again and having to deal with exams, quizzes, programming assignments, and the like all over again, I've really enjoyed being back in school. I'm having a GREAT time learning C++ and managing to hold my own as I try to relearn calculus.
It's been pretty busy, folks :-) I really appreciate your patience and your kind messages and continued contributions to the Gazette.
I have spent a little time getting xfmail to compile after upgrading to Linux Slackware 3.0 ELF. The executable compiled under Slackware 2.3 would not run after I upgraded. I got a few tips from both the xforms and the xfmail mailing lists.
Here is what I had to do:
- Download and install ELF XForms. (if my notes are correct, I ftp'd it from laue.phys.uwm.edu.)
- Configure xfmail's Makefile as follows:
FINC=  -I/usr/include    # where the xforms install puts forms.h
XFLIB= -lforms           # the xforms libraries are in /usr/lib
LIBS=  -ldl -lX11 -lXt -lXext -lXpm -lm

According to the ELF-HOWTO you need -ldl when a program makes a call to load a dynamic library.
- In /etc/profile add:
export LD_LIBRARY_PATH=/usr/X11R6/lib

Your system might need additional libraries. This is where xfmail can find libX11.so.6. Anyway, it was a learning experience for a novice. The new Slackware is nice, but not a trivial upgrade.
----------------------------------------------------
Tom Corner <[email protected]> - 01/31/96 - 19:04:05
Vienna, Austria
Don't worry, be happy.
----------------------------------------------------
After the December upgrade to an ELF system I set about the task of recompiling all the old favorites for ELF. XF-Mail kept bailing out at the linking step with a "signal 11" error. I tried Tom's fix and it compiled without a hitch.
Many thanks, Tom!
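As an aside, when an old binary refuses to start after a libc or ELF upgrade like this one, the ldd utility is a quick way to see what's missing. Here's a minimal sketch (the xfmail path below is just an example; substitute wherever your binary lives):

```shell
# ldd lists the shared libraries a binary needs and flags any that
# the dynamic linker can't find ("not found").
ldd /usr/local/bin/xfmail
```

Any line reading "not found" points at a library you still need to install or add to your LD_LIBRARY_PATH.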
And the next thing I need to chat about is...
Yup, it's about time for a paradigm change here at the 'ol Linux Gazette.
I have to admit that I've been humbled and delighted by the absolutely unexpected response that I've gotten to the Linux Gazette over these past several months. It's grown from an initial page with a few tips and tricks that I'd run across to a growing e-zine that is mirrored at two dozen sites literally around the world and which is getting contributions from Linux users from all over.
After getting several letters with suggestions and ideas for improvements for the Linux Gazette, I think that it's time to throw the doors open, as it were, and make this more of a community effort.
Up to this point, I've been trying to keep up with a growing load of reading, tinkering, writing, responding to e-mail, coordinating article submissions, doing each issue's layout, and so forth. It's been a huge amount of fun but it's also getting out of hand in terms of the time involved to do this. Thus, in the spirit of community that has always been a hallmark of the body of Linux users I'd like to put out the Help Wanted sign.
In addition to this, I'd like to try to make the following changes in response to a number of readers' requests:
This is in keeping with one of the basic premises of HTML: that of document modularity. In the past I created a single document so as to ease the task of loading and saving it using one's web browser.
Now that the LG is available via anonymous ftp, this need has been obviated and I'd like to use a more rational structure henceforth.
I'd really like to try to do this so as to end the "When's the next issue coming out...?" suspense.
So, what do you think? I'm open for any further ideas or suggestions. Although I enjoy writing the LG and have gotten a huge amount of pleasure out of it, as far as I'm concerned it's primarily for the benefit of the Linux community and I'd like to allow greater reader/user input into what gets printed here.
Here are just a few ideas for possible monthly columns or contributions that I've been thinking about. This is HARDLY a comprehensive list. Some of the possible ideas that I came up with are:
I'd also be interested in hearing from folks who might be interested in helping with more of the "housekeeping" chores:
I'd be particularly interested in help with this last item - that of translating the LG into other languages. I apologize that the LG is currently an English-only publication as many of you speak and read English as a second (or third!) language. I must admit that I'm humbled by your proficiency in English which far exceeds my meagre abilities in a second language (German, from my mother's side of the family and which I studied briefly in college). If there is anyone who might be willing to translate documents into another language I'd be absolutely delighted to include these for the benefit of others.
One of the things I've discovered about relationships is that it's best to communicate your expectations early, clearly, and often repeatedly. In light of this, here are my "expectations" for anyone who is interested in setting up a regular column:
JOB DESCRIPTION
As far as I'm concerned, I'd be delighted to allow anyone interested in doing this to pretty much do as they please in terms of content, graphics, design layout, size, and so forth.
It's your baby!
I understand the vagaries of schedules: sometimes there just isn't time to sit and tinker or read or write. I would like to try to get the LG out on a monthly basis and will simply include whatever is ready to go to print!
How's that for flexible! :-)
That is to say, you shouldn't feel as though you have to write all of the articles or that you even have to write anything yourself. You may end up merely editing and doing the layout for a series of submitted articles.
Again, it's your choice.
That's pretty much it! I figure that I'll continue to write the LG and will accept whatever help or ideas y'all may have to offer. When time permits, I'll try to write up a more detailed letter to send to anyone who might be willing to help out with any of this, explaining in more detail how I'd like to organize things.
TIA
Enjoy!
John
Saturday, March 30, 1996
Thanks again to everyone that took the time to drop a note with ideas, suggestions, encouragement, and offers to help. They have been very much appreciated! I'm sorry that I've recently fallen behind a bit in responding. My semester continues to the first week in May and things will be pretty busy around here until that time.
I've included a number of interesting letters in the MailBag this issue. As always, if you find something particularly interesting or helpful, then by all means, drop the author of the letter a note and let her/him know! Also, comments, clarifications, and enhancements are welcome :-)
Guess what...?!
Through a very kind offer from Laurie Harper the Linux Gazette now is starting a new Question and Answer column that Laurie has graciously offered to contribute. Here's what she wrote...
What is Q&A Gopher? Well, it's the Linux Gazette's new Questions and Answers section, aimed at giving quick solutions to problems. Spent three weeks trying to figure out how to parallel process a bunch of activities from a script file, but can't keep track of the success of each? Bashed your head raw trying to find where that 'your_machine.your_domain' string keeps popping up from when using Elm? Can't find a way to number the lines in a text file for indexing?
Send out the Gopher!
I have been using UN*X systems of one sort or another for over six years, and administering them for more than three. I have been running Linux for over a year now. I don't claim to know ALL the answers, and I may not be able to answer your question off the top of my head. That's why the column is entitled 'Q&A Gopher' - I have various references and sources to draw on, so if I don't know straight off, I'll go(look)pher the answer for ya :-)
I can't promise to answer everything, and I can't promise to give the canonical solution. I can promise to try my best to apply my experience and contacts to your problems and answer them in the Gazette for all to benefit.
Pretty nice, eh?
Well, if you'd like to get in touch with Laurie and present your questions, ideas, or suggestions you can reach her at:
Many thanks, Laurie!
Those of you who are regular comp.os.linux.announce readers will realize that there are a HUGE number of new programs and products being released for the Linux OS on a very regular basis. There are also regular upgrades and new releases of old favorites as well.
In an attempt to "spread the word" about all these great new offerings, I've decided to start a regular section for New Programs and Products.
The basic idea is a simple one:
I'd like to provide a section of the Gazette in which authors of both freely available and commercial programs and products can, if you will, "advertise" their wares. What I'm conceiving this to involve is a short single-page layout that includes:
This could be in the form of a "flyer" or "brochure" similar to the color glossy stuff that piles up on your coffee table when you're out shopping for a new car, television set, camera, and other stuff that I can no longer afford since we're back on a student budget... :-)
I'm indebted to Charlie Kempson, author of TKNET - a Tcl/Tk based graphic frontend for SLIP and PPP connections - for being willing to make the first contribution to this (hopefully...) regular column. Have a look...
As I said above, I'd welcome commercial and non-commercial program/product announcements. If you're interested in participating, please send an HTML (sorry, non-HTML documents will NOT be included! I honestly don't have the time to tag up and reformat plain ASCII documents) document that includes the above basic information. Also, please do try to keep it to one or two pages. Be as creative or crazy as you'd like! Also, I'd be happy to include any graphics that you'd like to embellish your "ad" with.
Have fun and drop me a note! If there is any interest in this then I'd be glad to make this a regular feature and set up an index of contributions to this.
Well, this one's admittedly a "VOFAQ" -- a Very Old Frequently Asked Question -- but one that's worth repeating since until you discover it, it's a real pain. The way to get a backspace and delete key that works like many DOS-converts are used to is either:
For the current session, run:

xmodmap -e 'keycode 22 = BackSpace'

Or, to make the change stick, put the following line in an ~/.Xmodmap file and have your X startup script (e.g., ~/.xinitrc) run "xmodmap ~/.Xmodmap":

keycode 22 = BackSpace
Either of these two minor additions should help fix up the 'ol BS and Delete key problems. Now, when you're using Netscape or some other Motif app, your backspace key will delete the character before the cursor and your delete key will delete the character under the cursor. Try it! And be happy again :-)
Well, here's a very handy bit of information that I recently came across while skimming through one of the Linux newsgroups.
Remember the 'ol "kscreen" shell function that restored sanity to a screen that had gotten all buggered up? Well, here's a very cogent explanation of what's happening and how to fix it. The author of this was Henry Wong and here's his message:
> Sometimes, when I inadvertently cat a binary file and get junk on my
> screen, that screen goes into some sort of graphics mode and I can't get
> it out of it. I've tried changing fonts and SOMETHING changes, stty
> doesn't do anything, capital letters may still appear. I can switch to
> another virtual console and everything is fine. When I type commands on
> my messed up (must remember this is going out to the world, use
> euphemisms) terminal it seems to understand. I can exit for instance,
> or reboot, though what is echoed to the screen is screwed up.
>
> On sunsite (and mirrors), there's a file called fixvt.sh or something in
> the (I think) system/console directory. It'll clean this up.

I've done this to myself also. The console appears to be using some sort of VT100 (or similar) emulation. This emulation has two modes: G0, which is usually normal text, and G1, which is usually graphics (these can be changed but usually are not). When sending binary text to the screen, any ^N (== '\016', the "shift out" character) will switch the emulator to G1 (graphics) mode, whereas any ^O (== '\017', "shift in") will switch to G0 (normal text) mode.

To force it back you need to have a shell or other program send the ^O to the stricken display. I often switch to another virtual terminal and do an "echo -e '\017' > /dev/tty2" (assuming that tty2 is the stricken terminal) to restore it back to text mode. Of course you can put this into a shell program with the tty as the parameter.

Hope this helps.

Henry Wong
So, want to see that this works? Try this:
In BASH, you can enter literal characters using the vi-type key sequence control-v (that's hitting the control key and the letter "v" simultaneously) followed by the character you want to insert. So, enter the following at the command prompt:
^v^n

That's a control-v followed by a control-n combination. Hit the ENTER key and...
ShaaaZZzzamm!, instant trashed console! Except now, we know that the terminal isn't really "trashed" but merely in graphics mode. All that needs to be done is return it to text mode.
Now, you can "blindly" enter the following:
^v^o

Hit the ENTER key, and your screen is back to normal. You could also use Henry's suggestion for echoing a similar string to the afflicted VT from another VT. That is, presuming that tty2 was the afflicted VT, switch to another VT and then use Henry's command:
echo -e '\017' >/dev/tty2
Thanks, Henry!!
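Henry's one-liner is easy to wrap up in a little shell function for your ~/.bash_profile. Here's a minimal sketch; the name "fixvt" is my own invention, not something from Henry's message:

```shell
# fixvt: send the shift-in character (SI, octal 017) to a terminal,
# switching it from G1 (graphics) back to G0 (normal text) mode.
# With no argument, it fixes the terminal you're typing on.
fixvt() {
    printf '\017' > "${1:-`tty`}"
}

# Example: from a healthy VT, restore the stricken /dev/tty2:
#   fixvt /dev/tty2
```

I used printf rather than "echo -e" here since echo's handling of escape codes varies from shell to shell, while printf's octal escapes behave the same everywhere.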
As most of you DOS converts have probably found out by now, there is no Print Scrn function available as there exists under DOS. However, the undaunted Linux'er will quickly discover that there are a couple very simple little tricks that can do a bit of handy screen capture to file that can then be edited and printed at one's leisure. These involve two handy little utilities: script and setterm.
Using the script program is the essence of simplicity: You merely type in something like the following:
script output.log

and the script program will begin to log all the terminal output to the file "output.log". When you're done capturing whatever output you want, type in "exit" at the command line and the script program will exit, leaving you with the output.log file to view, edit, and print to your heart's content.
There are a couple quick caveats that you should probably keep in mind:
Sounds pretty silly, but it's VERY easy to do, especially if you are trying to capture a bit of output.
DON'T NAME THE SCRIPT OUTPUT FILE FOR A FILE THAT IS CURRENTLY IN YOUR DIRECTORY UNLESS YOU WANT IT OVERWRITTEN.
Naming a script log file for the ONLY copy of your almost completed doctoral thesis is generally considered a "Bad Idea".
You get the point :-)
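One more trick: if all you want is the output of a single command, some versions of script accept a -c option that runs the command and exits for you, with no interactive session at all. This is an assumption about your script binary (check "man script" on your system before relying on it):

```shell
# Run one command under script and log its output, non-interactively.
# The -c option is found in the util-linux version of script; other
# versions may use a different syntax.
script -c 'ls -l /etc' output.log
```

This avoids the "capturing the exit command" silliness entirely, since script exits as soon as the command finishes.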
Another means of capturing the output on the current screen is using the setterm program. Again, this is very simple to use. To dump the current screen contents to a file you'd enter:
setterm -dump

This will create a file called screen.dump in the current directory that is, as the name implies, a dump of your current screen contents. A point to keep in mind is that unlike the script program, setterm with the "-dump" option only captures the current screen full of information. Also, suppose that you wanted to save the output to a different file. Not to worry, there's a command option for that as well (Linux takes good care of you!). If the file you wanted to save output to was output.dump, you'd enter something like:
setterm -dump -file output.dump

and there's your nice, shiny new file waiting for you there.
But that's not all, folks...! :-)
This thing is better than the Ronco VegoMatic...
Suppose that you wanted to capture several screen dumps and append them to a single file. Easy enough. Use setterm's "-append" option in place of the "-dump" option and the screen dump will be appended to the file:
setterm -append -file output.dump

Pretty cool, eh?
This one's compliments of Jesper Pedersen!
If you want to do screen scrollback at a VT simply hold down the Shift button and repeatedly hit the Page Up or Page Down buttons to scroll the screen back or forward by half a screen. This is handy for things like reviewing the boot up messages that go merrily scrolling by when the system boots. It's also handy for long directory listings and things like this.
The amount of scrollback I get on my machine varies a bit, from three to six full screens. There's probably some kernel hack that would allow you to increase this, but I don't happen to know it.
Also, FYI...
If you're interested in the boot messages, you can generally find them in the /var/adm/messages file OR you can type in the command:
dmesg

which will print the kernel boot messages to the screen. These, too, have a tendency to go whizzing by, and so you'll still need to use the 'ol Shift-PageUp two-finger trick to have a peek at all of this.
I have to admit that the first time I came across the colorized version of the GNU ls command, I was seriously impressed. After staring at DOS's austere gray on black mien for several years, this was a pretty nice improvement. And guess what...? This thing is eminently and easily hackable :-) and you can do some serious playing with this thing...
So, let's see how...
Stuff you'll need
Basically, most Linux distributions include the GNU version of the color ls utility. This should include the programs ls, dir, vdir, and dircolors. So, how do you know that you've got these little rascals...? The easiest way is just to do the 'ol "type file_I'm_looking_for" and see where it's stashed away. What you may discover is that by doing this, you'll end up with something like the following:
FiskHaus [VTp1] ~$ type vdir
vdir is aliased to `/bin/ls $LS_OPTIONS --format=long'
If this is the kind of output that you get, you're golden! Slackware and RedHat 2.x both include all of the color GNU ls stuff. You'll find that Slackware enables it by default; RedHat presently does not (but instructions for enabling it are included in the RedHat FAQ and, as you'll see, it's VERY easy to do). If your distribution doesn't seem to include these programs, you'll find a copy of them in the Slackware distribution in the a2 disk set. The file you'll need is bin.tgz which contains, amongst many other things, the ELF binaries, manual pages, and the all-important /etc/DIR_COLORS file which we'll be hacking around with in just a bit.
Getting things rolling
Presuming that you've got the program files installed, the next bit of scouting around you'll need to do is make sure you've got the color configuration file. Under Slackware, this is the "/etc/DIR_COLORS" file. Other distributions may hide this somewhere else, but the /etc directory is a pretty good place to look. Again, if you're missing this file, the Slackware bin.tgz archive has a copy of it that you can pick up and drop in. The permissions on it should be something like:
-rw-r--r--   1 root     root         2882 Feb 16 09:33 /etc/DIR_COLORS

That is, only root should be able to mess with the global config file, but it does need to be world readable. If the permissions are messed up, you can fix 'em pretty easily by doing something like:
chown root /etc/DIR_COLORS
chgrp root /etc/DIR_COLORS
chmod 644 /etc/DIR_COLORS

This should set the USER and GROUP ownership to root, read-write privileges for root, and read-only for all the rest of the mere mortals on the system.
The "Open, sesame!" that will unlock this little program is the incantation:
eval `dircolors /etc/DIR_COLORS`

(Note! You use the back-quote character to enclose the dircolors command and NOT an apostrophe. The back-quote is that little thingy up in the left-hand corner of the keyboard (at least on my keyboard :-) below the squiggly tilde (~) character.)
You'll need this bit of magic somewhere in one of the profile files. Generally, this should be an entry in the /etc/profile file which is sourced for all logins. You can also add this to your personal ~/.profile, ~/.bash_profile, ~/.cshrc, or whatever file in your HOME directory. BTW, this is what is needed to get RedHat into color ls mode.
The only other issue that needs to be addressed at this point has to do with the shell that you use. You see, the way that dircolors works is that it sources the file that you specify (such as /etc/DIR_COLORS) and then sets up the LS_COLORS and LS_OPTIONS environmental variables. We'll take a look at these a bit more closely in a minute, but you should know that dircolors defaults to using aliases to work its magic. If you're using a shell such as sh or ash which do NOT support aliases but DO support functions, then dircolors will set up functions instead.
The way to make sure that the "Right Thing Happens" is to add a command line option that indicates which type of shell you're using. The list includes:
-a   ash
-s   sh
-b   bash
-k   ksh
-z   zsh
-c   csh
-t   tcsh

So, to properly set up colorized ls under ash, you'd include something like the following in your /etc/profile:
eval `dircolors -a /etc/DIR_COLORS`

Don't believe me...? Then try this little experiment:
At the console or in an xterm fire up the ash shell (presuming that you installed it and it's listed in your /etc/shells file) by simply typing in ash. This will start the ash shell session. Now, do a directory listing by typing in ls -l. Pretty dull, eh?
Now, type in the command:
eval `dircolors /etc/DIR_COLORS`

and notice that you DON'T include the "-a" option. Hmmm... just a bunch of error messages. Do another directory listing and it's the same 'ol dull gray on black stuff. Now, type in the "eval" command again and this time, include the "-a" option.
ShaaaZZzzaaMmmm!! Instant Color!
Very cool.
So, enough experimenting for now. Type in exit to get yourself out of there and let's keep going!
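By the way, that shell-to-flag table can be wrapped in a little helper function so a shared profile picks the right option automatically. This is just my own sketch; the "dircolors_flag" name is hypothetical, not part of dircolors itself:

```shell
# dircolors_flag: map a shell name to the matching dircolors option.
# Prints the flag on stdout; returns nonzero for an unknown shell.
dircolors_flag() {
    case "$1" in
        ash)  printf '%s\n' -a ;;
        sh)   printf '%s\n' -s ;;
        bash) printf '%s\n' -b ;;
        ksh)  printf '%s\n' -k ;;
        zsh)  printf '%s\n' -z ;;
        csh)  printf '%s\n' -c ;;
        tcsh) printf '%s\n' -t ;;
        *)    return 1 ;;
    esac
}

# Usage sketch for /etc/profile:
#   flag=`dircolors_flag bash` && eval `dircolors $flag /etc/DIR_COLORS`
```

The function only prints a flag, so an unsupported shell simply falls through and dircolors gets run with its defaults.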
It's HACK time...!
Ok, now for the fun part...
The first thing that you'll probably want to do is the 'ol backup :-) The past few issues of the LG have given you some strategies for this and so I'll leave it up to you to decide whether to wear your seatbelt or not. The next thing to decide upon is whether you want to make changes globally or just for your personal login sessions. If you edit the /etc/DIR_COLORS file, then changes will affect all logins. This is probably not a bad idea for a standalone system or one on a small LAN with few users who might not mind your somewhat out-of-kilter idea of a color scheme. On a larger system, or if you just want to make changes for a single user, you can copy the /etc/DIR_COLORS file to your home account:
cp /etc/DIR_COLORS ~/.dir_colors
If you have a .dir_colors file in your home account then dircolors will use this instead of the global /etc/DIR_COLORS file. Once you've decided which file you're going to deface... er...um, edit, then we're ready to go.
The default file that comes with the Slackware distribution (/etc/DIR_COLORS) is well commented and so using that alone you can get a pretty good idea about how you might customize it. Let's take a look at it by sections.
To begin with, there are several possible entries for COLOR, OPTIONS , TERM, and EIGHTBIT. In my slightly modified file, these appear as:
# FILE: /etc/DIR_COLORS
#
# Configuration file for the color ls utility.
# This file goes in the /etc directory, and must be world readable.
# You can copy this file to .dir_colors in your $HOME directory to override
# the system defaults.

# COLOR needs one of these arguments:
#
# 'tty'           color output to tty's only
# 'all' or 'yes'  color output to tty's and pipes
# 'none' or 'no'  shuts colorization off completely
#
COLOR tty

# OPTIONS allows you to specify additional command line options for
# the ls command. These can be any options (check 'man ls' for details)
#
# -F    show '/' for dirs, '*' for executables, etc.
# -T 0  don't trust tab spacing when formatting ls output.
#
OPTIONS -F -T 0

# TERM specifies which terminal types are to be colorized. There can
# be multiple entries.
#
TERM linux
TERM console
TERM con132x25
TERM con132x30
TERM con132x43
TERM con132x60
TERM con80x25
TERM con80x28
TERM con80x30
TERM con80x43
TERM con80x50
TERM con80x60
TERM xterm
TERM vt100

# EIGHTBIT specifies whether to enable display of eight-bit ISO 8859
# characters. This is set to either:
#
# 'yes' or '1'  displays eight-bit characters
# 'no' or '0'   prevents display of eight-bit characters
#
EIGHTBIT 1

These are pretty much self-explanatory, but let's look at each one briefly nonetheless. The COLOR definition allows you to turn colorization either on or off. If you decide to turn it on, then you can specify that it is used only at a tty, or for both tty's and pipes. To get an idea of the effect of using the all or yes option, try this little experiment.
Set the COLOR option to all, save the file, logout, and then log back in. Notice that any changes that are made can quickly be evaluated merely by logging out and logging back in. You DO NOT need to reboot the system! Now, type in something like:
ls -l | less

What you're doing is piping the output of the ls command to less. This is a handy trick when you're scouring through a directory with a large number of files in it. You'll notice that the output is seriously encumbered with a lot of "ESC[01;33m" type garbage. That's your old friend, Mr. Escape Code. It looks pretty nice at a tty, but is kinda ugly when piped to something like less. By changing the COLOR option to tty these escape codes are used only at the tty. If you change this and do the logout/login thing and then try the listing once again, you'll find that the output is a bit more acceptable - sans ESC codes. It's your call, but COLOR tty might not be a bad place to start.
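Incidentally, if you do end up with escape-laden output sitting in a file or a pipe, you can strip the color codes back out with a little sed. A minimal sketch (the "strip_colors" name is my own; it only handles the "m"-terminated color sequences, not cursor-movement codes):

```shell
# strip_colors: remove ANSI color escape sequences (ESC [ ... m)
# from standard input.
strip_colors() {
    esc=`printf '\033'`
    sed "s/${esc}\[[0-9;]*m//g"
}

# Example:
#   ls -l | strip_colors | less
```

This way you can leave COLOR set to all and still get clean text when you need it.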
The OPTIONS definition allows you to conveniently add any command line options to ls that can be legally used. In this case, we've added the "-F" and "-T 0" options. You'll find a bazillion command line options listed in the ls manual page and any of these can be included. Keep in mind, however, that dircolors DOES NOT check to see if these are legal options -- that's left up to you, my friend :-). So, no typos!
The TERM option specifies which terminal types are to be colorized. You'll need an entry for each terminal type. I'm no terminal definition wizard and so I just accepted the defaults :-).
Finally, the EIGHTBIT option can be either "yes" (AKA, 1) or "no" (AKA, 0). This specifies whether the eight-bit ISO 8859 character set can be displayed. I'm also no character set guru and so I just turned it on -- figured that it was a "Good Thing To Do".
Now for the part you've all been waiting for...
The last half of the DIR_COLORS file allows you to configure how the various files are colorized. This is where the serious fun happens :-)
Again, in my somewhat modified file, it looks like this:
# Color init strings:
#
# These specify how various files are displayed.  A color init string
# consists of one or more of the following numeric codes:
#
# ATTRIBUTE STRINGS:
# ------------------
#
# 00 = none
# 01 = bold
# 04 = underscore
# 05 = blink
# 07 = reverse
# 08 = concealed
#
# COLOR STRINGS:
# --------------
#
#  COLOR           TEXT    BACKGROUND
#
#  black            30         40
#  red              31         41
#  green            32         42
#  yellow/brown     33         43
#  blue             34         44
#  magenta          35         45
#  cyan             36         46
#  white/gray       37         47
#
# Note that the color init strings are a semi-colon delimited series of
# color codes.  For example, to specify a bright yellow text on blue
# background the string 01;33;44 would be used.
#
# The following entries define the color specifications based upon the
# file type.
#
NORMAL 00       # global default, although everything should be something.
FILE 00         # normal file
DIR 01;34       # directory
LINK 01;36      # symbolic link
ORPHAN 01;05;31 # orphaned symbolic link - points to non-existent file
FIFO 40;33      # pipe
SOCK 01;35      # socket
BLK 40;33;01    # block device driver
CHR 40;33;01    # character device driver
EXEC 01;32      # file with executable attribute set

# These entries allow colorization based upon the file extension.  These may
# either be in the form '.ext' (such as '.gz' or '.tar') or '*ext' (such
# as '*~' used with emacs backups).  Note that using the asterisk allows you
# to specify extensions that are not necessarily preceded by a period.
#
.cmd 01;32
.exe 01;32
.com 01;32
.btm 01;32
.bat 01;32
.tar 01;31
.tgz 01;31
.arj 01;31
.taz 01;31
.lzh 01;31
.zip 01;31
.z   01;31
.Z   01;31
.gz  01;31
.jpg 01;35
.gif 01;35
.bmp 01;35
.xbm 01;35
.xpm 01;35
.tif 01;35
.ps  01;35
Basically, the first section allows you to define colorization by the type of file. That is, whether it is a regular file (FILE), a directory (DIR), a symbolic link (LINK), a named pipe (FIFO), and so forth. To set up the color scheme you simply use a semicolon-separated list of color attributes.

So, let's say that you were feeling a bit psychedelic this morning and had an uncontrollable urge to see your directories show up as blinking bright red text on a magenta (that's "purple" for those of us who barely learned their colors) background.

OK, mon... you want blinking Hot Red on a Purple backdrop, it's all yours...
To do this, you'd add something like:
DIR 01;05;31;45
Pretty simple, eh?
The 01 and 05 color codes set bold and blinking attributes respectively; 31 sets the foreground (text) color to red, 45 sets the background color to magenta, and a semicolon separates each entry.
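You can preview a color init string before committing it to DIR_COLORS by writing the escape sequence straight to the terminal. A minimal sketch, assuming an ANSI-capable console (the directory name is just sample text):

```shell
# \033 is the octal code for ESC; the sequence below turns on
# bold (01), blink (05), red text (31) and a magenta background (45),
# and \033[0m restores normal attributes afterwards.
printf '\033[01;05;31;45m/some/directory\033[0m\n'
```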
Now, you can do this for the various file types that are listed.
Wanna do something REALLY cool...?!!
Here's a simple addition that will let you flag a symlink that's gone bad!
Two of the options that can be included in the file type specifications are ORPHAN and MISSING. ORPHAN refers to symbolic links that point to a file which no longer exists; MISSING refers to that file which no longer exists but which still has a symlink pointing to it. So, how does that happen?
Easy.
To demonstrate it, just do the following. First, create a file and then create a symlink to it. You can do this by:
touch test.file
ln -s test.file test.symlink

If you do a directory listing, you'll see something like this:

-rw-r--r--   1 fiskjm   users    0 Feb 16 11:48 test.file
lrwxrwxrwx   1 fiskjm   users    9 Feb 16 11:48 test.symlink -> test.file
You can see that there is now a symbolic link from test.symlink to test.file. Now, go ahead and either rename or delete test.file and then do another directory listing:
-rw-r--r--   1 fiskjm   users    0 Feb 16 11:48 test.FILE
lrwxrwxrwx   1 fiskjm   users    9 Feb 16 11:48 test.symlink -> test.file

Hmmm... the symlink still points to test.file, even though we've renamed it to something else. And here's where the rub occurs. Thing is, it isn't always easy to spot a bad symlink, especially if it is linked to a file in some other directory. So, how can you spot these little buggers...? Here's how:
Create an additional entry for the ORPHAN file type. One possibility would be to add something like:
ORPHAN 01;05;31

This sets the color attributes of a "bad" (ORPHAN'd) symbolic link to flashing bright red. That is, for the directory listing above, the "test.symlink" portion of the entry would be colorized to flashing bright red; the rest of the line would appear as a "normal" entry. Now, what is the MISSING file type? It's that part of the entry after the "->" portion of the listing. That is:

lrwxrwxrwx   1 fiskjm   users    9 Feb 16 11:48 test.symlink -> test.file
                                                ^^^^^^^^^^^^    ^^^^^^^^^
                                                   ORPHAN        MISSING

Now, you can colorize both of these, but it admittedly looks a bit odd. To convince yourself of this, try adding a similar entry for MISSING, logout, login, create a file and a symlink, rename the original file, and do a directory listing. There's your bad symlink in flashing red!
Now, whenever you do a directory listing you should easily be able to spot an errant symlink and be able to fix it.
Finally, the last section of the DIR_COLORS file lets you specify colorization by file extension. This means, for example, that you can colorize all *.gif, *.jpg, *.tiff, *.pbm, *.ppm, etc., graphics files by including an entry such as in the example above. The intuitive will notice that this doesn't really tell you what TYPE a particular file might be -- only that it has a particular suffix. In other words, you could rename our vacuous file test.file to something like testfile.gif and you'd find that it would be colorized like all the other *.gif files. It would obviously NOT be a graphics file. All that this does is allow you to quickly spot files that have similar suffixes.
One final point about defining extensions.
There are actually two forms that can be employed and these can be illustrated as follows:
.ps 01;35
*~  01;33

In the first example, all files that have the ".ps" suffix will be colorized to bright magenta. In the second example, no "." is needed: any file that ends with a tilde will be colorized to bright yellow. So, files such as:

test.file.ps~
test.file~
testfile~

would all be colorized to bright yellow because they match the pattern "*~". This enables you to colorize files which do not have the typical dot-suffix ending.
Pretty cool, eh?
Admittedly, there are a few other tricks up the 'ol dircolors sleeve, but these are the basic ones that will get you going. If you're interested, there is a very good manual page for the dircolors program that includes a good deal of helpful documentation. Don't forget that you can easily print up a copy of any manual page using the old trick:
man dircolors | col -b > dircolors.txt
For those of you who might be interested, I've included a text copy of my slightly modified DIR_COLORS file and a plain text rendition of the dircolors manual page.
Hope you enjoy this as much as I did!
Well, I have to admit that after a spate of playing and tinkering around with X and FVWM, I started messing around with text-mode stuff. X Window is a LOT of fun and offers power and flexibility and there are TONS of great apps to play with. Thing is, though, that it's admittedly pretty resource intensive and there are times when I didn't really feel like starting up X just to get something simple done.
Well, after a bit of playing around with color_ls, I ran across a means of colorizing text at the console using escape sequences. This is actually pretty simple and kinda fun to include in shell scripts and such.
One such use that came to mind was adding a bit of color to the /etc/issue file. Y'all will remember that the contents of /etc/issue are displayed before the login prompt while /etc/motd is displayed after a login. So, here's a bit of quick tinkering that adds a bit of color to the /etc/issue file.
The /etc/DIR_COLORS file that we've just been tinkering around with above is going to come in pretty handy here. You'll use the same color codes as in the DIR_COLORS file, only they'll be entered as escape codes. To demonstrate how easy this is, let's try a little experiment. First, however, a word about entering literal characters.
The key to using ESC sequences is knowing how your shell handles literal character insertion. Since I'm most familiar with BASH and vi, I'll use these as examples. BASH lets you enter a literal key using the "control-v, key" combination (the quoted-insert command, which works in both its default emacs-style editing mode and in vi mode). That is, you first hit the control-v key pair, then hit whatever key you wish to be literally inserted. In this case, hit the escape key. What you'll now see, at least under BASH, is the caret/left-bracket symbol pair "^[", which indicates the ESC character.
Now, try this: enter the following string at a command line -- either at the console or in a color xterm:
echo "Ctrl-vESC[44;33;01mHello World!Ctrl-vESC[m"

The "Ctrl-vESC" means you hit the control key and the letter "v" together, followed by the escape key. Then, type in a left bracket and the numbers 44, 33, and 01 separated by semicolons, and the letter m. Following this, put in your text string and close it using the same control-v, ESC, [, and m sequence. Hit the enter key and
Whamoo!!, instant color "Hello World!"
Cool, eh?
And now wait a minute... those numbers look a bit familiar to you...? They should, we just used them in the previous section when we edited the DIR_COLORS file. You see, you can use the same color codes as with the color ls program. The only difference is that, at the command line, you have to use the echo command and enclose the string in double quotes after you've inserted the ESC character.
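In fact, you don't need literal keypresses at all inside a script: printf will generate the ESC character for you. Here's a small sketch (colorize is a made-up helper name, not part of any package):

```shell
# colorize CODES TEXT -- print TEXT wrapped in the given color codes.
# printf expands \033 (ESC) itself, so no control-v gymnastics are
# needed in the editor.
colorize() {
        printf '\033[%sm%s\033[0m\n' "$1" "$2"
}

colorize "44;33;01" "Hello World!"
```

The same 44;33;01 string from the echo experiment above gives bright yellow text on a blue background.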
Now, this is starting to look a bit like a handy little tool, eh?
Yup, and this is how we'll spruce up the 'ol /etc/issue file. Now, the way to do this is pretty simple. What you do is fire up vi, emacs, or your favorite editor and load up /etc/issue. On the first line, enter the literal escape code (in vi, this is the aforementioned ctrl-v, ESC combination) and the left bracket "[" and the color codes that you want to use followed by the letter "m". Now, enter the message that you want displayed and on the last line, enter the "ESC", "[", and "m" characters. This last character sequence simply restores the normal color attributes.
Save your file and you should be all set! Now, to test drive it you can simply "cat" the file:
cat /etc/issue

and you can admire your handiwork. You may discover that the color looks a bit ragged in that it doesn't always extend to the edge of the screen. What is helpful here is to avoid using tabs for spacing and simply use the space key. Adding spaces to each line will then let you "extend" the color all the way to the edges. You'll need to experiment around a bit with this to get exactly the right number of characters (keeping in mind that most VT's default to 80 columns and so you'll need exactly 80 characters per line).
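Rather than counting spaces by hand, you can let the shell do the padding. A sketch using printf's field-width feature (pad80 is a made-up name):

```shell
# pad80 TEXT -- left-justify TEXT and pad it with spaces to exactly
# 80 columns, so a colored background reaches the screen edge.
pad80() {
        printf '%-80s\n' "$1"
}

pad80 "     Welcome to FiskHaus, running Linux ELF"
```

Note that printf won't truncate lines longer than 80 characters; a cut -b 1-80 pass can trim any overlong lines.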
My admittedly rather whimsical /etc/issue looks something like this:
^[[44;33;01m

     F I S K H A U S  - -  N A S H V I L L E,  T N

     Welcome to FiskHaus, running Linux ELF

     Current Kernel Version:
     Linux 1.2.13 #1 Sat Dec 30 21:40:28 CST 1995

^[[m
I'll admit that the kernel information is a bit out of date since I have to update this by hand at the moment. I'm sure that there is a way to do this automagically at boot time using a few little utils that would produce a line exactly 80 characters long...
but that's for tinkering later :-)
(Later... :-)
Well, I had a little time and set up the following rc.issue script that updates the /etc/issue file at each boot up. Specifically, it updates the kernel version information, making pretty heavy use of the cut program as you can see:
------------------------------- CUT HERE ------------------------------------
#!/bin/sh
#
# File: rc.issue
#
# Description: re-creates the /etc/issue file at bootup. Basically, all
# this really does is update the kernel version name and information.
#
# Author: John M. Fisk

TEMPFILE=/tmp/.issue
ISSUE=/etc/issue

# Let folks know what we're up to...
echo -n "Updating /etc/issue... "

# Note: the "^[" below is a literal ESC character (entered in vi with
# control-v, ESC).
echo "^[[44;33;01m" > $TEMPFILE
echo " " >> $TEMPFILE
echo "     F I S K H A U S  - -  N A S H V I L L E,  T N " >> $TEMPFILE
echo " " >> $TEMPFILE
echo "     Welcome to FiskHaus, running Linux ELF " >> $TEMPFILE
echo " " >> $TEMPFILE
echo "     Current Kernel Version: " >> $TEMPFILE
echo -ne "     " >> $TEMPFILE
echo -ne `uname -svr` >> $TEMPFILE
echo -ne " " >> $TEMPFILE
echo " " >> $TEMPFILE
echo " " >> $TEMPFILE
echo "^[[m" >> $TEMPFILE

cut -b 1-80 $TEMPFILE > $ISSUE

echo "done."

# END rc.issue
------------------------------- CUT HERE ------------------------------------
Have fun!
Well, here's a bit of scripting coolness that I just ran across and thought I'd pass along to all of you PPP users out there. It has to do with setting up the /etc/ppp/ip-up and /etc/ppp/ip-down shell scripts which had completely stymied me until recently. So here's the skinny...
For those of you who've been hanging around here for a while, and who might have tinkered with the PPP script examples that were included a while back, a reader wrote in about using a shell script to update the /etc/hosts file. Those of you who read his letter will recall that his concern was over permissions: in order to update the /etc/hosts file (since the local University does dynamic IP addressing) you either had to have very insecure permissions on the file, or else run the shell script as root. Both of these being "Not A Good Idea".
Well, over December I managed to do a temporary ELF install with Slackware 3.0.0 (but we're heading for a RedHat 2.1 system by the end of Spring Break! :-) and had occasion to reinstall ppp-2.2.0d. In the process, I came across an example shell script that finally pried the lid off this small mystery. Those of you who've done the same will agree that there's all kinds of nifty stuff included with the PPP distribution and it's worth your while to pick up a copy of it even if you don't need to update. It comes with a wealth of VERY helpful documentation and example scripts.
Sincerest kudos to Al Longyear and Michael Callahan!!
Anyway, they included several example shell scripts and one was an ip-down script. This, coupled with the manual page, finally made ip-up and ip-down available! And, it solved the permissions problem involving updating the /etc/hosts file which is done automatically by the ip-up script!
Very cool :-)
So, let's cut to the chase...
Those of you who've set up PPP know that the pppd daemon checks a couple of files on the system as it goes about setting itself up. One of these is the /etc/ppp/options file which contains a listing of the various options that can be passed to it (such as modem, noipdefault, crtscts, -detach, and so forth). This file is strictly optional and you can pass these run time options from the command line instead.

The other optional files are the ip-up and ip-down scripts. Checking out the manual page regarding these little rascals we find:
/etc/ppp/ip-up
       A program or script which is executed when the link is available for
       sending and receiving IP packets (that is, IPCP has come up). It is
       executed with the parameters

            interface-name tty-device speed local-IP-address remote-IP-address

       and with its standard input, output and error streams redirected to
       /dev/null. This program or script is executed with the same real and
       effective user-ID as pppd, that is, at least the effective user-ID and
       possibly the real user-ID will be root. This is so that it can be used
       to manipulate routes, run privileged daemons (e.g. sendmail), etc. Be
       careful that the contents of the /etc/ppp/ip-up and /etc/ppp/ip-down
       scripts do not compromise your system's security.

/etc/ppp/ip-down
       A program or script which is executed when the link is no longer
       available for sending and receiving IP packets. This script can be
       used for undoing the effects of the /etc/ppp/ip-up script. It is
       invoked with the same parameters as the ip-up script, and the same
       security considerations apply, since it is executed with the same
       effective and real user-IDs as pppd.

I must admit that I mused over the meaning of this for some time. For the life of me I couldn't figure out just exactly what these did or how to use them. Nothing like a good example to clear things up :-).
To cut the suspense, here's the Rosetta Stone that unlocked the mysteries of these little gems. This is taken from the ppp-2.2.0d distribution.
#!/bin/sh
#
# This script does the real work of the ip-down processing. It will
# cause the system to terminate just to make sure that everything is
# dead; restart the ppp-on script processing to re-dial the sequence.
#
NETDEVICE=$1
TTYDEVICE=`basename $2`
SPEED=$3
LOCAL_IP=$4
REMOTE_IP=$5
#
# If the process is still running, then try to terminate the process
#
if [ -r /var/run/$NETDEVICE.pid ]; then
        echo '' >>/var/run/$NETDEVICE.pid
        pid=`head -1 /var/run/$NETDEVICE.pid`
        if [ ! "$pid" = "" ]; then
                sleep 5s
                kill -HUP $pid
                if [ "$?" = "0" ]; then
                        sleep 5s
                        kill -TERM $pid
                        if [ "$?" = "0" ]; then
                                sleep 5s
                        fi
                fi
        fi
#
# Ensure that there is no junk left in the system
#
        rm -f /var/run/$NETDEVICE.pid
        rm -f /var/lock/LCK..$TTYDEVICE
fi
#
# Since the defaultroute will not be added if there is an existing default
# route, remove it now. Do not do this if the defaultroute route was not
# added by the ppp script.
#
# route del default
#
# Finally, restart the connection sequence.
#
exec /etc/ppp/ppp-on

What caught my eye was the first part of the file:
NETDEVICE=$1
TTYDEVICE=`basename $2`
SPEED=$3
LOCAL_IP=$4
REMOTE_IP=$5

and suddenly, there was enlightenment!
Going back to the manual page and rereading it, it suddenly became clear that what was happening was that /etc/ppp/ip-up was merely a shell script (ah...duh!) and that it was being passed several parameters once the interface was up and ready. These included the interface name, the tty device, the connection speed, and the local and remote IP addresses.
By setting up a series of variables and assigning them the values of these parameters, you could manipulate them in your shell script! And that is all that ip-up and ip-down really are -- plain 'ol garden variety shell scripts! The same thing that you've been writing now for ages.
So, to work a bit of magic, fire up your favorite editor and create your very own shell script. Because this is a root-y kinda thing to do, I left user space and entered rootdom. Going to the /etc/ppp/ directory I created a couple of shell scripts called, what else...?, ip-up and ip-down. Here are my own rather humble scripts that do a couple of things:
#!/bin/sh
#
# file: /etc/ppp/ip-up
#
# description: this script is automatically run by pppd once the
#              PPP interface has been established. We'll use it to
#
#              [1] update the /etc/hosts file
#              [2] load the bsd_comp.o compression module
#              [3] start the ppp-up script that periodically pings our
#                  host and keeps the line up

# These are the parameters that are passed to the /etc/ppp/ip-up script
# once the PPP interface has been established. By assigning them to these
# variables, we can make use of them in the script below
#
NETDEVICE=$1
TTYDEVICE=`basename $2`
SPEED=$3
LOCAL_IP=$4
REMOTE_IP=$5
DATE=`date`     # set DATE = current time

# update the /etc/hosts file...
#
echo "#
# This file automatically generated by /etc/ppp/ip-up and should
# indicate the correct dynamically allocated IP address below.
#
# This file generated on $DATE
#
127.0.0.1       localhost
$LOCAL_IP       FiskHaus.vanderbilt.edu FiskHaus" > /etc/hosts

# make sure the bsd_comp.o module has been loaded
if [ -e /lib/modules/1.2.13/net/bsd_comp.o ]; then
        /sbin/insmod -f /lib/modules/1.2.13/net/bsd_comp.o
fi

# start the /usr/local/sbin/pppup shell script
. /usr/local/sbin/pppup &

And here's the ip-down script:
#!/bin/sh
#
# kill the pppd process
if [ -e /var/run/ppp0.pid ]; then
        kill -9 `cat /var/run/ppp0.pid`
fi

# rmmod bsd_comp.o
/sbin/rmmod bsd_comp.o
Now these are admittedly pretty far from wizardly, but they work! :-) And here's what's going on.
One of the coolest things that the ip-up script can do is be used to update the /etc/hosts file. Since my IP is dynamically allocated with each dial up this is handled very easily. The script is passed the local-IP-address as the fourth parameter ($4) and its value is assigned to LOCAL_IP. Since this is my dynamically assigned IP address it is used to recreate the /etc/hosts file each time a connection comes up. I added a DATE parameter so as to see when the last connection was established. Now, the Very Nice Thing about this is that because these shell scripts are run, as the manual page states, with root permissions they can update the /etc/hosts file which has write permissions only for root.
Now, for those of us on single user systems, this is a fairly small point but for those on a system with more than one user, you'll need to take the warning about system security to heart. These DO run with root privileges (assuming that pppd is started by root) and their standard input, output, and error streams are all routed to /dev/null and so they silently go about their work. In a multi-user environment, be careful!
That said, you can also see that I use this script to load up the bsd_comp.o module (again, because I start pppd as root it can execute the /sbin/insmod program) and start a small "keepalive" script. One thing that could be added to this, for those of you who've set up sendmail for remote queuing, is to have sendmail start attempting mail delivery in the background once the connection is up with something like:
if [ -x /usr/bin/sendmail ]; then
        sendmail -q &
fi

which would make sure that sendmail is present AND is executable and then invoke it with the "-q" option which causes it to attempt delivery of any mail in the mail queue. Since I generally do this "by hand" I haven't added it to ip-up.
Notice, too, some of the other information that you get with this. The first parameter ($1, NETDEVICE) gives you the interface name - e.g., ppp0, ppp1, ppp2, etc. The second ($2, TTYDEVICE) is the tty device from which pppd was invoked. You'll notice that the "`basename $2`" construct is used. The basename program is a small utility that strips all of the leading directory, and optionally a specified suffix, from its argument. In this case, if you started pppd on your second VT which would be /dev/tty2 if you were at a console, then basename would strip away all of the leading directory information and leave you with "tty2" only.
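A quick illustration of what basename does, run from any shell prompt:

```shell
# Strip the directory part from a path:
basename /dev/tty2              # prints: tty2

# Optionally strip a trailing suffix as well:
basename /tmp/report.txt .txt   # prints: report
```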
The third parameter ($3, SPEED) specifies the connection speed (such as 38400, 57600, etc.). The fourth and fifth ($4, LOCAL_IP; $5, REMOTE_IP) parameters are the local IP address and the remote IP address respectively.
Now, I'm sure that there are all kinds of cool and groovy things that could be done with these handy little scripts. If anyone has any other ideas or suggestions and doesn't mind bringing them to "Show and Tell" then I'd be happy to include them for others' benefit and edutainment (and yes, that's a bona fide yuppie-dom lexicality :-)
Have fun!
No doubt about it...
If you're running Linux then you absolutely need to learn shell scripting.
Period.
Unlike DOS's rather anemic COMMAND.COM and its handful of batch file commands, UN*X shells often come with a very rich set of shell scripting commands that allow you to create complex and powerful shell programs. Linux supports a number of freely available shells including BASH, csh, tcsh, zsh, and pdksh that give you a tremendous set of tools with which to work. There is only one small problem...
You need to learn to use them.
Sorry, man. If you're using Linux, the pricetag of inclusion is reading, studying, tinkering, reading some more, fixing your mistakes, tinkering even more, and then sharing your successes! While much of Linux can be obtained either freely or for a marginal cost, its use depends upon your learning the tools.
That said, let me quickly add that the payback is excellent! :-) And this is particularly true of shell programming.
What I'd like to present here are just a few small examples of programming constructs that I've come across that have been pretty helpful. This is NOT a basic primer on shell scripting. If you're just learning you might want to consider one of these helpful sources of information:
Well, let me quickly state here that this is one of those books that I currently do not have :-(
However, having read the reviews and skimmed through it, it is a GREAT resource for anyone using the BASH shell. It not only covers shell scripting under BASH, but also many other aspects of using this powerful and versatile shell.
It's definitely on my Christmas list... :-)
Here's another excellent, must-have reference for anyone running their own Linux system and having to contend with system administration. Aeleen Frisch is a marvelous author who draws from a wealth of experience. Her writing is clear, concise, and full of wit and candor about this interesting occupation. In addition to covering most of the better known commercial UN*X implementations, she now has extensive coverage of Linux in the 2nd Edition of this work.
The reason to include this here is that she includes an Appendix with a very nice primer on shell scripting. It's probably not enough to make you a scripting guru, but it covers all of the basics and will definitely get you up and going.
Well, this rather wordy title tells it all... :-)
I picked up a copy of this helpful book some time ago and have used it frequently. Now, I also have to admit that I really haven't used this "like it was meant to be used". I haven't worked through all of the two weeks' worth of lessons. However, you really don't need to, as the material is clearly presented and has a wealth of examples, summary tables, and DO's and DON'T's that will get you up and going quickly.
Yup, this old trick will provide you with all 60 pages of the manual page for BASH. Send this to your favorite printer and you've got a VERY handy little document.
Now, admittedly, the manual page can be a bit... shall we say, obtuse... in its descriptions of how things work. Still, if you need just a quick refresher, this can be quite helpful and a printed copy lets you scribble in all those helpful margin notes.
Anyway, that said, let's see what we can do.
Here are four fairly simple and VERY handy BASH shell scripting constructs that may come in useful. The first of these is...
CASE statements
Let's say that you wanted to set up an /etc/profile that performed a particular action depending on which user logged in. For argument's sake, let's say that you wanted to set the umask for root to be 022 and 077 for everyone else. Then you could use a construct such as the following:
USER_WHOAMI=`whoami`
case "$USER_WHOAMI" in
        "root") umask 022;;
        *)      umask 077;;
esac

See what's happening?
A case statement such as the above will let you assign a particular set of commands based upon the value of a variable. The construct itself goes something like this...
First, you assign a variable the value that you're interested in. In this example it was the user's login name. Now obviously, you'll need to have a means of getting this information. This is where all those myriad little UN*X utilities suddenly come in VERY handy :-) and we're using the whoami command for this example.

After the variable USER_WHOAMI is assigned the login name, we use a case statement to indicate which action to take. Notice that using the handy little "*" asterisk lets you assign a default action. The basic syntax of the case statement is:
case "$VARIABLE" in
        value1) command
                ;;
        value2) command
                command
                command
                ;;
        value3) command
                command
                ;;
        value4) command
                ;;
        *)      command
                command
                ;;
esac

The value of $VARIABLE is compared to each of the patterns in value1, value2, value3, etc., until a match is found. If no match is found then the default "*" entry lets you assign a default action. This last entry is entirely optional. Note that you can have more than one command statement and that each entry is terminated by the double semi-colons.
Pretty easy, eh?
Here's another example that we've used in the past here at the 'ol LG...
Suppose that you wanted to change the VT color scheme using the setterm program so that the color scheme depends either on which virtual terminal you've logged in on OR what user logs in (such as root). Here's one possible implementation of this:
# Use the following construct to set up colors and various login commands
# depending upon which virtual terminal is logged into.
#
V_TERMINAL=`tty`
case "$V_TERMINAL" in
        "/dev/tty1") setterm -background blue -foreground yellow -bold -store;;
        "/dev/tty2") setterm -background black -foreground white -store;;
        "/dev/tty3") setterm -background black -foreground white -store;;
        "/dev/tty4") setterm -background black -foreground white -store;;
        "/dev/tty5") setterm -background black -foreground white -store;;
        "/dev/tty6") setterm -background black -foreground white -store;;
esac
In this case, we've used the tty command to give us the information we want -- which terminal we've logged onto. The value is assigned to the V_TERMINAL variable and the value of this is compared against the various patterns listed in the case statement. If we had wanted to base our actions on the user's login identity then we could have used the aforementioned whoami program to tell us who is logging in. Change the value of the various case options and now terminal color is based on the user!
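If you'd rather keep the color choice separate from the setterm calls, you can factor the case statement into a little function. A sketch with made-up scheme labels (scheme_for_user is not a standard utility):

```shell
# scheme_for_user LOGIN -- map a login name to a color scheme label.
scheme_for_user() {
        case "$1" in
                root) echo "blue/yellow bold";;
                *)    echo "black/white";;
        esac
}

scheme_for_user root
```

The label could then drive the appropriate setterm invocation in your /etc/profile.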
But, using the example above, what would happen if you logged onto /dev/tty7 for which there was no entry...?
Nothing.
You see, in this case we didn't set up a default and so if no match is found then no action is performed. Notice, too, that when we assigned the variable a value we used the grave character to enclose the command and NOT an apostrophe.
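The grave/apostrophe distinction is worth a quick demonstration, since the two characters look so similar on screen:

```shell
USER_WHOAMI=`whoami`    # graves (backticks): run whoami, keep its output
NOT_A_NAME='whoami'     # apostrophes: just the literal six-letter string

echo "$NOT_A_NAME"
```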
So, the case statement is a handy little tool when you want to pick an item from a list of possibilities. This next construct will let you do something quite the opposite... it'll let you perform an action on an entire list of files...
Using "find" and "while read"...
This one's a tremendously useful little item that lets you perform an action or series of actions on a set of files using the find command. To demonstrate this, let's suppose that you're interested in generating a list of all of the dynamic libraries that your programs in the /usr/local/bin directory are linked against. You know that you can do this manually by using the command:

ldd /usr/local/bin/program_name

and ldd would print out what dynamic libraries the executable is linked against. Well, this wouldn't be too bad if you only had a few programs in the directory, but what if you have several dozen or more...?
Hmmm... that's a pretty stiff workout for the 'ol fingers...
Fortunately, there's a VERY easy way to accomplish this using the following construct:
find /usr/local/bin/* -print |
while read FILE; do
        echo $FILE
        echo
        ldd $FILE
        echo
done
Wanna know something REALLY cool...? :-)
You can do this in a shell script AND you can do this at a terminal!
Yup! Try typing the above lines in at a terminal. On my machine this is what happens when I do this...
FiskHaus [VT] ~# find /usr/local/bin/* -print |
> while read FILE; do
> echo $FILE
> echo
> ldd $FILE
> echo
> done
/usr/local/bin/Dispatch

        libc.so.5 => /lib/libc.so.5.0.9

/usr/local/bin/GetListOfGroups

        libc.so.5 => /lib/libc.so.5.0.9

/usr/local/bin/GetSelectedArticlesInGroup

        libc.so.5 => /lib/libc.so.5.0.9

[...snip!]

See...? Using this simple trick I get a listing of all of the files in the /usr/local/bin directory and what libraries they are linked against. The basic construct is:
find /directory/* -print | while read FILE; do
    command $FILE
    next command $FILE
    command after next command $FILE
done

The find program is a VERY powerful program that can be used for finding files using all kinds of specifications. I'm not going to go into this now, but for those of you who are interested the Linux Journal recently had a very complete article on using the find program.
Anyway, we use find to generate a list of files that we're interested in and then pipe them on to the next command. The GNU version of find doesn't really need the "-print" option since this is the action that it defaults to. Still, since we're being a bit pedagogic here, it won't hurt... :-)
The next step is where the coolness begins. What we've done here is set up a while-do loop that takes each value piped to it from the find command and then uses the BASH read function to read that value into the variable FILE. While there are still filenames being read into FILE, we perform the do statements, which in this case print the name of the file to stdout, echo a blank line, and then run ldd against the file. When the last filename has been read in, the while loop terminates.
Suppose that you wanted to catch all of this output that goes merrily whizzing by you on your screen...? Well, you could do a couple different things. What I did was simply use the script program to make a logfile of the output. This is one easy means of doing this. Another might be to change the commands a bit and save the output to a file:
find /usr/local/bin/* -print | while read FILE; do
    echo $FILE >> ldd.log
    echo >> ldd.log
    ldd $FILE >> ldd.log
    echo >> ldd.log
done
In this case, we've sent the output to the ldd.log file. The only problem with this is that you don't get to see what's going on until the operation is done. If you wanted to save the output to file AND have it print to stdout you could use the tee command:
find /usr/local/bin/* -print | while read FILE; do
    echo $FILE | tee -a ldd.log
    echo | tee -a ldd.log
    ldd $FILE | tee -a ldd.log
    echo | tee -a ldd.log
done

Here, the tee command saves the output to the file you designate and prints its input to the screen as well. (Note the -a option: without it, each tee would overwrite the log file, and you'd end up with only the very last line of output.)
Very Cool... :-)
Another example that comes from the folks at RedHat involves creating a small shell script that generates unified diffs. This suggestion comes from their Red Hat Linux User's Guide:
#!/bin/sh
#
# program: gendiff
#
# description: generates unified diffs
# usage:
#       gendiff file_directory .suffix
#
# the `file_directory' is the location of the files to patch,
# the `.suffix' is the suffix (plus the `.') that the original files were
# saved with
#
# [original program described in Redhat Commercial Linux Users Guide, p. 74-75]
#
if [ "$1" = "" ]; then
    echo "
    gendiff program: generates unified diffs

    USAGE: gendiff [file_directory] [.suffix]

        file_directory = the directory containing the modified files
        .suffix = the suffix (plus the . ) for original files
    "
else
    find $1 -name "*$2" -print | while read i
    do
        diff -u $i ${i%%$2}
    done
fi
I won't go into a discussion of just all that's going on here... it's pretty straightforward and the comments describe how this program is called. Many thanks to the folks at RedHat for this great suggestion!
Since we're talking about using variables here, let's see a little trick to do a bit of operating on our variables...
Using the BASH "##" and "%%" operators...
This is a little trick that I came across a while back while skimming through some Usenet news. I honestly can't recall the author or even the subject of the message, but what I did come across was the following handy trick.
Suppose that you successfully assign a variable a value but it's not in the form that you can use easily. What kind of operations can you perform on that variable in order to convert it to a form that you can use? I'm obviously being a bit rhetorical here, so let me use a real-life example.
When I first started learning shell scripting I wrote a simple program that allowed me to print files in various formats - PostScript, ASCII, .DVI, TeX, LaTeX, etc. The reason for this was simply that I was getting tired of having to type in half a dozen or so commands in order to convert a file to a format that could be printed AND then actually print the file. I also discovered that by echoing certain escape sequences to my printer that I could get it to do all kinds of groovy stuff, such as control the default resident font that was used if I wasn't using soft fonts.
The problem that I ran up against was trying to print the .DVI files. I was using the dvilj2p program to convert the .dvi file to a .lj file that I could simply cat to /dev/lp1 to print. The snag was that I was able to get the filename with its .dvi suffix but was having trouble changing the suffix to .lj in order to print the file. That is, if the variable for the filename was "document.dvi", then what I needed to do was convert it somehow to "document.lj", since that was the default output from the dvilj2p program.
The answer came by using BASH's "##" and "%%" operators. What these do is let you delete a portion of the variable's value - either at the beginning of the value or at the end.
For example, using the fictitious document.dvi file from above, what I could do was convert the value of the variable (let's call it FILE) from document.dvi -> document, and then append the ".lj" suffix to it. If this sounds confusing, it's only because explanations fall short without a good example. So, let's see what the solution was:
Here's a snippet from my print program that lets me print DVI files:
DIR=`pwd`
cd /tmp
dvilj2p "$DIR"/"$FILE_NAME"
cat ${FILE_NAME%%.dvi}.lj > /dev/lp1
/bin/rm -f /tmp/${FILE_NAME%%.dvi}.lj
See what's happening...? In this case I've assigned the FILE_NAME variable the name of the file that I want to print. After changing to the /tmp directory I run the dvilj2p program against that file and the output is saved to the /tmp directory. At this point, my fictitious document.dvi file has been renamed and is now document.lj. The way that I handled this before was simply to clean out all *.lj files from the /tmp directory, create my document.lj file, and then just "cat *.lj" to the /dev/lp1 device.
Now, using these handy little BASH functions, the variable value was "changed" to have a ".lj" suffix. Now, don't lose me here because once you see what's going on, you'll find this pretty helpful.
The manual page entries for these in BASH are admittedly a bit sketchy:
${parameter#word}
${parameter##word}
       The word is expanded to produce a pattern just as in pathname
       expansion. If the pattern matches the beginning of the value of
       parameter, then the expansion is the value of parameter with the
       shortest matching pattern deleted (the ``#'' case) or the
       longest matching pattern deleted (the ``##'' case).

${parameter%word}
${parameter%%word}
       The word is expanded to produce a pattern just as in pathname
       expansion. If the pattern matches a trailing portion of the
       value of parameter, then the expansion is the value of parameter
       with the shortest matching pattern deleted (the ``%'' case) or
       the longest matching pattern deleted (the ``%%'' case).
So, let's see how this works with my printing program and the 'ol document.dvi file.
If the FILE variable was assigned the value "document.dvi", then the command echo $FILE at this point would print out:
document.dvi
Now, suppose that we changed this a bit and used the construct above: Let's change that document.dvi to the value "document.lj". We could do this using something like:
echo ${FILE%%dvi}lj

which would produce the output:
document.lj
The "%%" operator expands the FILE variable and then deletes that portion of it that matches "dvi". Because this would now leave us with "document." (notice the "." remains in this example) and adding the "lj" to the end produces the output that we wanted - document.lj.
The same thing could be done to delete the first portion of the filename. Suppose for argument's sake that we wanted to convert our now much-used document.dvi file to something like "test.dvi". To do this we could do something like:
echo test${FILE##document}
Again, we could do this at a terminal just to convince ourselves that this actually works and doing so generates the following output:
FiskHaus [VTp1] ~# FILE=document.dvi &&
> echo test${FILE##document}
test.dvi
Very Cool... :-)
One important final note: all along I've said that the value of the variable was "changed". This isn't really true since the value of the variable really isn't being changed: what we're doing is manipulating its output so that it appears to have been changed. That is, the value of FILE may be "document.dvi" but we can output a value of "document.lj" by using the construct ${FILE%%.dvi}.lj.
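You can convince yourself of this at a terminal. A quick sketch, using the same fictitious document.dvi:

```shell
FILE=document.dvi

# Build the "changed" name without ever touching FILE itself:
NEW=${FILE%%.dvi}.lj

echo $NEW      # prints: document.lj
echo $FILE     # prints: document.dvi -- the original value is intact
```

The expansion only affects what gets substituted at that point; FILE itself still holds "document.dvi" afterwards.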
Getting tired of this example...? I am. :-)
Try messing around with this a bit and see what you can do. But before we leave this subject, here's one more small tidbit that's not very functional but is a lot of fun...
Using the echo -n construct...
In tinkering around with the various rc.d files I've learned a bit about the init process. In doing so, I noticed that during boot up there was often quite a bit of lag time while some process was going on... either a daemon was being launched, or some program was being called, or a mail queue was being checked, and so forth. I didn't particularly care to see all of the usual output from these actions but I was interested in knowing what was going on and when a particular action was done. Well, here's a simple trick for setting up a:
"I'm doing something.... done."
kind of output.
The trick is to use the echo command with the "-n" option which causes echo to print the value to the screen without moving the cursor to a new line. This is used in the form:
echo -n "Starting MegaDaemon (this'll take a moment, folks)..."
/usr/sbin/MegaDaemon -t -L -ouextpPxXZ1r -cf /usr/lib/MegaDaemon.cfg
echo " done!!"
Hmmm....
Since I'm admittedly still a bit of a UN*X tenderfoot, I don't particularly care to know all of the gory details about what in the world a MegaDaemon is doing, but it is kind of comforting to see that an attempt is made to start it and that, at some point, it is finally done. Using echo -n helps here.
As you might anticipate, the output is simply:
Starting MegaDaemon (this'll take a moment, folks)... done!!
As long as the program itself doesn't print output to the screen, then all you'll see is the initial message and the ellipsis followed by the "done!!" string when things are completed.
This is admittedly a pretty cutesy thing to do and not quite as functional as the previous constructs.
Still...
You gotta keep reminding yourself...
Linux was meant to be fun!
Enjoy!! :-)
OK... here's a few fun suggestions for customizing the 'ol login prompt.
I started messing around with this a while ago, after tinkering around with the keypad so that I could use it to change VT's. Once I was able to get that done, it seemed that it would be a great idea to somehow come up with a means of "labelling" each VT so that I could tell which one I was on.
The first idea I had was to colorize each individual VT using the setterm program (which is something that those of you who've been hanging around here a while will remember...). At first, this seemed like a pretty good idea, but after trying this, and ending up with more color combos than a pack of LifeSavers, this didn't seem like such a great idea.
So... after a bit more reading and tinkering, here's what I came up with and a few kinda fun things that you can do with the 'ol login prompt.
First, ever wonder just exactly where that little rascal comes from...?
I did... :-)
If you're using one of the Slackware distributions, then the answer will be found in your /etc/profile. FYI, this is the file that is sourced for all logins including root. The section of interest, at least for Slackware, is the following:
# Set up the various login prompts based upon the shell.
#PS1='`hostname`:`pwd`# '
#
SCREEN_NO=`/usr/bin/tty`
if [ "$SHELL" = "/bin/pdksh" -o "$SHELL" = "/bin/ksh" ]; then
    PS1="! $ "
elif [ "$SHELL" = "/bin/bash" ]; then
    if [ `whoami` = "root" ]; then
        if [ ! "$TERM" = xterm ]; then
            PS1='\033[44;01;33m root \033[m [VT${SCREEN_NO##/dev/tty}] \w# '
        else
            PS1='\u [VT${SCREEN_NO##/dev/tty}] \w# '
        fi
    elif ! [ `whoami` = "root" ]; then
        PS1='\u [VT${SCREEN_NO##/dev/tty}] \w$ '
    fi
elif [ "$SHELL" = "/bin/zsh" ]; then
    PS1="%m:%~%# "
elif [ "$SHELL" = "/bin/ash" ]; then
    PS1="$ "
else
    PS1='\h:\w\$ '
fi
PS2='> '
Now, those of you actually using Slackware will immediately recognize that this has been hacked up a bit...
Surprising..., huh? :-)
Even the plain vanilla /etc/profile should have a vague resemblance to this and so you should be able to spot the section without much trouble. Let's see what's going on here...
First, you'll need to know a couple of things about BASH. The little rascal that we'll need to tinker around with is the "PS1" environment variable. This is what actually sets the primary prompt. Let's first see what the 'ol BASH manual page has to say about this:
PS1    The value of this parameter is expanded (see PROMPTING below)
       and used as the primary prompt string. The default value is
       ``bash\$ ''.

So... that's the definition; here's a bit more on the subject:
PROMPTING
       When executing interactively, bash displays the primary prompt
       PS1 when it is ready to read a command, and the secondary
       prompt PS2 when it needs more input to complete a command.
       Bash allows these prompt strings to be customized by inserting
       a number of backslash-escaped special characters that are
       decoded as follows:
              \t     the current time in HH:MM:SS format
              \d     the date in "Weekday Month Date" format (e.g.,
                     "Tue May 26")
              \n     newline
              \s     the name of the shell, the basename of $0 (the
                     portion following the final slash)
              \w     the current working directory
              \W     the basename of the current working directory
              \u     the username of the current user
              \h     the hostname
              \#     the command number of this command
              \!     the history number of this command
              \$     if the effective UID is 0, a #, otherwise a $
              \nnn   the character corresponding to the octal number
                     nnn
              \\     a backslash
              \[     begin a sequence of non-printing characters,
                     which could be used to embed a terminal control
                     sequence into the prompt
              \]     end a sequence of non-printing characters
       The command number and the history number are usually
       different: the history number of a command is its position in
       the history list, which may include commands restored from the
       history file (see HISTORY below), while the command number is
       the position in the sequence of commands executed during the
       current shell session. After the string is decoded, it is
       expanded via parameter expansion, command substitution,
       arithmetic expansion, and word splitting.

Actually, this short description was pretty helpful in doing a bit of "prompt tinkering".
What I wanted to do involved primarily two things: first, have the prompt show which VT I was logged in at; and second, make a root login stand out from a normal user login.
You might think the second of these is a bit odd until you recall that it is too easy to login to several VT's and have at least one of these logins be a root login. Now, you know what kind of mischief you can get yourself into as root and so it pays to know just exactly how you're logged in at each VT. Here's one means of doing it...
The pertinent section of /etc/profile is as follows:
elif [ "$SHELL" = "/bin/bash" ]; then
    if [ `whoami` = "root" ]; then
        if [ ! "$TERM" = xterm ]; then
            PS1='\033[44;01;33m root \033[m [VT${SCREEN_NO##/dev/tty}] \w# '
        else
            PS1='\u [VT${SCREEN_NO##/dev/tty}] \w# '
        fi
    elif ! [ `whoami` = "root" ]; then
        PS1='\u [VT${SCREEN_NO##/dev/tty}] \w$ '
    fi
As you can see by looking at the entire entry above, what we're doing here is going through a series of nested if statements with the first series of tests to determine which shell is being used. In this case, we're assuming that you're using the BASH shell. Once this evaluates to true, the next set of if statements are evaluated.
Next, we use the whoami utility to test for who we log in as. This is how we catch the root login. If this is true, then we start the next bit of magic :-)
The third test is to see where we're logged in. In this case, it's testing to make sure that we're NOT in X and logging in to an xterm... you'll see why in a minute. As long as we're not in X, then the first PS1 is set, if not (and we ARE in X), then the second definition is used.
So, let's take a look at all that mess on the first PS1 definition...
What I'm doing here is a bit more playing around with colors! Those of you who've skimmed through the section above on colorizing /etc/issue will recognize the "44;01;33m" format as the escape sequence for bright yellow text on a blue background (see the info in that section for details). What might not be immediately obvious is what is going on with the "\033[" stuff. The answer is found by going back to our friend, Mr. BASH Manual Page.
Looking again at this, you'll notice that you can use a designation described as:
\nnn the character corresponding to the octal number nnn
Now, we know from our prior mucking about that the escape character can be entered in VIM by using the "ctrl-v, ESC" key combination (that is, hitting the control and v keys simultaneously and then hitting the escape key). Another means of doing this is simply providing the octal value for escape which is...
You guessed it...! 033.
So... the light is beginning to dawn...
What we're doing here is putting in an escape sequence to temporarily change the color of the prompt to bright yellow text on blue background and print the word "root". The final "\033[m" resets the console to its normal color scheme.
Cool, eh?
So, you ask, what's to stop you from doing all kinds of crazy and wild things with colors and stuff... I mean... you could make it bright yellow blinking on a purple background or something unorthodox like that...!!
I mean... think of the raw, unbridled power that this puts in your trembling hands...!!!
Hey, man... welcome to Linux :-)
This is serious coolness...
Anyway, you get the point. Go ahead, get nuts!! But before you do, keep reading to see what little bit of magic we can continue to do.
Our next bit of fun has to do with printing the VT number that we've logged in at. As I mentioned above, I wanted to do this because I've set up the keypad as the VT changer and want to keep track of where I am. This is accomplished using the tty program which we've mentioned here before.
In this case, what we've done is set up a variable called SCREEN_NO and assigned it the value of the tty command. To try this out yourself just enter tty at any console and you'll see that it'll print out which terminal you're at. Now, you'll see that the output is actually in the form:
/dev/tty3

if you happened to be at tty3. What I wanted to do was convert this to something like:
VT3

This is pretty easy to do using one of those nifty BASH builtins that those of you who've skimmed over the shell scripting stuff will already know about. By taking the output of tty, assigning it to the variable SCREEN_NO, and then "reformatting" it a bit using the following construct:
VT${SCREEN_NO##/dev/tty}

we end up with the format that we want! The ${SCREEN_NO##/dev/tty} thingy cuts off all of the stuff at the front (/dev/tty) leaving only the part that we're really interested in. That is, /dev/tty3 is truncated to the number 3. Stick the letters "VT" in front of it and we've got our VT3!
Very cool.
The final bit of explanation has to do with the "\w" and "\u" stuff. Again, we'll find the answer in the BASH manual page. You see, in defining the login prompt, there are several predefined backslash-escaped strings that you can use. They are listed above and let you add all kinds of fun things! In my case, I've added the \u and \w strings that evaluate to the user (\u) and working directory (\w). But there's nothing to stop me from including all kinds of other fun and useful stuff like the time (\t), date (\d), hostname (\h), the history number of the command (\!), and so forth.
One useful thing to keep in mind for those of you who want a more terse prompt is to substitute \W for \w if you include the working directory. This truncates the directory listing to only the basename and not the entire path. Thus, if you were working in the /home/fiskjm/LinuxGazette_work/newstuff/drafts/letters/letters_I_want_to_include/keep/ directory, using \W would print out only the final keep part of the directory path.
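Just as an illustration (this particular combination is my own concoction, not something out of /etc/profile), a terse prompt showing the time, the user, the host, and only the basename of the current directory could be set up with:

```shell
# \t = time, \u = user, \h = hostname, \W = basename of working directory
PS1='\t \u@\h \W\$ '
```

which would give you something along the lines of 14:02:33 fiskjm@FiskHaus keep$.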
kinda handy :-)
Anyway, so what's the Big Picture?
Let's recap...
By using whoami we've found that we can customize the login prompt for a root login vs. the rest of the users. Using escape sequences, we added a bit of color so that a root login has a bright yellow on blue prompt with the word "root" printed, so we know that this is a root login (and, consequently, that we should be careful about mucking around with the system). I didn't mention why I tested for an xterm login -- the reason is that when I use rxvt or a color-capable xterm, I use a gray86 background color. These xterms are capable of displaying colors, but on my system they come out looking pretty washed out. Therefore, I don't colorize the prompt, but instead use the \u sequence to print out who I'm logged in as.
So, for a normal login, my prompt looks like:
fiskjm [VT4] /usr/local/src$
A root login would look similar except that the first portion of the prompt would be colorized if I was not in X.
Keep in mind that if you edit your /etc/profile file (AFTER you've made a backup of it :-) to test drive your changes you only need to log out and log back in at some terminal; you DON'T need to reboot the system.
This has been a LOT of fun to play around with. I've found that, coupled with the keypad VT changer stuff, it's pretty easy to keep track of multiple logins. I use VT1 - VT3 (bottom row of keypad numbers) for all my root logins; VT4 - VT6 (second row of keypad numbers) for user logins; VT7 and VT8 for any additional logins that I need; and VT9 (which I purposely don't have a getty running on) is where I send ALL logging messages, using a suggestion from one of the past LG's. For those of you who might have missed this, it involved adding a stanza similar to this to your /etc/syslog.conf:
*.*        /dev/tty9

This sends EVERYTHING to tty9 so that you can quickly switch to it to get an up-to-the-minute look at what's being logged by syslog.
Anyway, as usual, there's all kinds of great stuff that could be added here and I'm sure that I've touched on just the tip of the proverbial iceberg. If you've got any ideas or suggestions of your own, drop me a note and share the wealth!
Have fun!
So I called the number, and gave my name, number, address, credit card number etc to the lady, and was told that it would be here within 5 days. A few days later, FedEx showed up with a package for me. In it was the CND CD and manual.
I had been to the Caldera WWW site, and they had said it was possible to install the CND on top of a Slackware setup (what I was running), but it was left as an exercise to the hacker :) not much more info than that. So I decided not to screw around with my current setup, and to install the CND "clean". I backed up my ~230 MB of Linux files with the KBACKUP backup program (which uses the archive utility afio, both available on sunsite) onto my Iomega 250 tape backup.
The manual went through how to make the boot and root disks, and what the utilities were that you would have to use to manage the partitions, users, and groups. The CND is based on the RedHat Linux distribution, which differs from the "normal" Slackware setup. I'll get into how it is different in just a minute. The setup requires a boot disk and two root disks.
This is a problem that I know about with my CDROM drive (a NEC 2vi with an IDE interface) and the way the IDE interface interacts with my computer's bus. I originally had to modify the /usr/src/linux/drivers/block/ide.c file and get rid of a couple of lines. The problem was solved in the 1.3.x kernel series. I rebooted and watched the boot messages more carefully... sure enough, the kernel was 1.2.13. I made a call to Caldera and asked for their tech support. After a day or so of telephone tag, their techy Alan told me that there were some additional boot disks hidden at the redhat ftp site that use the 1.3.x kernel. I needed a new bootdisk and a new ramdisk2. I got these, used rawrite to write them to disk, and rebooted again. This time: Success!
I followed the instructions in the CND manual, leaving out only the network information (DNS, IP address, etc), as I was on a standalone machine. I was eventually given some very easy X11R6 setup questions - much easier than what one must normally go through to get it set up - and finally a choice of what to install. One thing I didn't like was that there was not the level of control you have with the Slackware+Pkgtool setup. For example, I chose to install the "X games", but was never given a list of what would be installed, or a choice to select or deselect any of the individual games.
Oh yeah, about the 160 MB free that Caldera suggests - forget it. If you want a decent setup I would suggest at least 200 MB.
The whole slew of X utilities is very nice. There is an FSTAB manager, which makes mounting, unmounting, and setting the options for your filesystems very easy to manage. There is also usercfg. You can make a user's password none or locked with a mouse press... same with changing shells or secondary groups. There is nothing new here, just easier and quicker for people who can't interpret

richie:*:60:100:Dennis Richie:/home/richie:/bin/sh

in their sleep. Caldera also includes a WWW browser called Arena. It is... well... OK. I would suggest getting the latest version of Netscape, though.
The CND uses a Perl program called RPM to manage installing and de-installing packages. It keeps track of what is installed where, so when you want to upgrade you can safely and easily delete an old package and install the new one. I used this program to get rid of the slag that was installed against my wishes with the command: rpm -u [package name]. RPM also supports powerful query options - so you can keep track of what package a file belongs to, what files a package has, or what packages are installed in total! This is a HUGE improvement over pkgtool, and gives Linux users the potential to actually upgrade their systems instead of re-installing the whole thing! Unfortunately, if the package you get is not in an RPMed form, you have to use the traditional method, and you are back at square one :). It is possible to make your own RPM packages, but I didn't investigate this too far.
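To give you the flavor of those query options, here are the kinds of commands involved (the file and package names are just made-up examples -- check the RPM documentation for the exact flags in your version):

```
rpm -qa                  # list all of the packages installed on the system
rpm -qf /usr/bin/emacs   # which package does this file belong to?
rpm -ql fileutils        # list all of the files that a package installed
```

It's this database of file-to-package mappings that makes clean upgrades possible.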
What is lacking from the CND are a lot of the little things the Slackware users are used to. EMACS is a big one - though the X editor CRiSP-Lite is very nice. Other little things like uuencode and uudecode and color-ls are standard in other distributions, and it is a hassle not to have them.
[Ed. - color-ls is included with the CND II/RedHat 2.0 distribution but is not set up for use by default. You'll need to read the RedHat-FAQ to get details of how to do this -- John]
The file system is different too. Because of RPM, Caldera expects that you will have no need for a /usr/local directory. Also, there are a lot of changes in the initialization files and directory structure in /etc/rc.d/* . The RedHat distribution's file system is close to the Linux FSSTND (file system standard), but still falls short. They know this, and in the manual they describe the changes and the reasons for them.
The /etc/httpd directory holds all the config files for the http server daemon, and there is support for being a "full-featured NetWare Client with: access to NetWare 3 and Netware 4 servers; NDS support..." Also, it looked VERY easy to set oneself up as a WWW server, an FTP server, a News server, or an SMTP mail server. If you can plug yourself into the net at school or whatnot, this would be fantastically easy and fun to do (or so I think).
But if you are a first-time Linux user, the CND will give you a very nice, easy introduction to Linux. Or if you have network access and want to be a server, the CND makes it very easy to do.
For more information, please mail me.
by Edward Cameron <[email protected]>
This is the second in a series of reviews of the much talked about Caldera Network Desktop distribution. Ed continues his series of write-ups with a review of the Preview II release. He also continues with WebSurfer - a recounting of his peripateticisms around LinuxSpace...
(Caveat! The CND review page is graphics intense! Those of you with a slow connection be advised that this may take a few minutes to fully load. It's worth it though... :-)
Many thanks to Ed for writing all of this up!
Enjoy.
by Jesper Pedersen at www.imada.ou.dk
Well, I owe a HUGE debt of gratitude to Jesper, author of the program Dotfile Generator, for being willing to take time out of a very hectic schedule to write this article on emacs' enhanced features. Those of you who've been hanging out around here know that I'm still pretty emacs-illiterate and generally revert to VIM for basic editing.
Here's a marvelous article on some of the more fun things that you can do with EMACS (and who knows... maybe it'll make an emacs user out of me yet...!)
For those of you who've been looking for a graphical file manager here's a VERY informative article by Larry Ayers on the moxfm file manager - a descendant of the xfm file manager. If you've never tried out this program, this is a must read article!
Next, here's a program for all of you OS/2 users out there...! Ext2-OS/2 is a freely available program that runs under OS/2. It acts as a filesystem driver, allowing you to access your Linux partitions from OS/2! This is serious coolness for all of us "OS/2 is my next fave OS" kinda folks. I must admit that I was pretty excited when Larry first wrote about this discovery. For us OS/2 users, this is a serious must read article!
Also, no LG issue is complete without putting in a good word for VI (or one of the more capable offspring! :-) Larry pulls a double here and covers two very worthy vi clones - xvile and elvis. If it's been a while since you test drove one of these more modern VI editors, you owe it to yourself to come on down and kick the tires!
Finally, Larry includes a short write up of a great little X utility: unclutter which chases away that cursor that's been hanging out just a bit too long.
Again, a very hearty round of thanks to Larry for all of his hard work!
Enjoy!
Here are two problems many Unix users may have encountered: 1) Linux provides you with virtual consoles, but when you login via telnet, or even using a terminal and modem, they don't work. 2) You want to log off from the computer and still leave some program running on it. In the case of simple command-line-oriented programs, '&' or 'nohup' may be sufficient, but with full-screen programs you are out of luck. Fortunately, both of these problems (and others) have already been addressed, and the solution is called 'screen'.
'Screen' is basically a program that implements virtual consoles. Basic usage is pretty simple. Just type screen at your prompt, hit [Enter], and you'll find yourself at the same shell prompt. Fine, that's familiar. But here the magic begins: now, within the "screen", you can press C-a c (that is, CTRL and a, followed by c), and it adds a second screen. Now you are running two shells - you can do finger to see that you're actually logged in twice. Of course, you can add more virtual screens whenever you wish. Every screen is fully independent of the others and you can run anything you want in it. Switching between screens is quick: C-a n rotates you around the active screens. There are also other means of switching (see the man page).
Now comes an even better thing: while in screen, press C-a d -- this detaches the session and returns you to the login shell. Now you can log off, while all the screens that you've started on the computer, and the processes you started under them, continue running and doing their job. This is invaluable: you can start NcFTP'ing a large file and log off, you can log off without leaving IRC, you can stay in a text editor, and so on. When you want to get back to your stuff, you just log in and type "screen -r" at your prompt and -- wow -- the jobs you've left running are back and well. There's no need to re-attach screens from the same terminal you started them on. So you may leave your work at the university, go home, and continue from there.
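To sketch the whole round trip using screen's default bindings (C-a is the command prefix; your configuration may differ):

```
$ screen          # start a session -- you get a fresh shell
  C-a c           # create a second virtual screen
  C-a n           # rotate to the next screen
  C-a d           # detach -- everything keeps running in the background
$ logout
  (later, from any terminal...)
$ screen -r       # re-attach and pick up right where you left off
```

The detach/re-attach pair is the part that makes long-running full-screen jobs survive your logout.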
Needless to say, these are only two basic features of the 'screen' program. It is well documented, so you can find out the rest yourself. Screen runs on most current Unices; it works in a telnet, terminal, or even xterm session. I personally use it even on the Linux console. It is fairly configurable. It is an official GNU program, so you can download it from any GNU mirror or directly from prep.ai.mit.edu:/pub/gnu. RedHat Linux users might also want to look for something like screen-*.rpm.
[Once again, I'm indebted to Borek for providing information about a program that I honestly haven't had a chance to try out and don't know about :-). This is one of those great little programs that I've heard about ever since I started using Linux. I appreciate Borek's taking the time to write this up! -- John]
Here is an absolutely delightful and informative narrative chronicling Dr. Peter Breuer's work at reducing his Linux system ("Bambam") to a 4.7 MB partition on a 386sx with 3 MB RAM. Now don't touch that dial!! This is an exceptional article and one that teaches you a LOT about the basic requirements of a Linux system. It is also required reading for those of you who are attempting a similar task -- squeezing Linux into a small space. As the abstract confirms: it CAN be done.
READ IT!
You made it...! This is the end, my friend.
Well, as always, I'm deeply indebted to the growing number of folks who have kindly given of their time and talents and have offered ideas, suggestions, and great articles to the Linux Gazette! I've really enjoyed hearing from y'all!
Also, for those of you who have a bit of programming experience under your belt and want to try your hand at something seriously cool, you've GOTTA try Tcl/Tk!
This stuff is way too cool...!
I just started tinkering around with this over Spring Break (did a bit of recreational programming) and this is just too much fun. I picked up a copy of John Ousterhout's excellent book Tcl and the Tk Toolkit (Addison Wesley, (c) 1994) and printed up a copy of the book draft Practical Tcl and Tk Programming by Brent Welch. Using these and the program xskim by Rene Pijlman as an example, I learned a LOT about the basics of Tcl/Tk programming. In the course of a couple days I managed to add a few customizations (menubar, directory browser for a "Save As..." item, and so forth) to the xskim program, which is a program I use on a daily basis and really appreciate.
Caveat! This stuff is addicting! :-)
Those of you who've done shell scripting realize the fairly impressive and powerful set of tools that many modern shells offer - including the ability to create small programs. With Tcl/Tk you get an even richer and more powerful and diverse set of tools that let you create graphical programs that are really impressive. Many of you are familiar with the programs ical, addressbook, exmh, and xskim: these are wonderful examples of what can be done with this powerful scripting language.
In addition, a number of rich extensions are available including TclX, Blt, itcl, Tix, and so forth. If you've got a bit of a programming bent and have a little time on your hands, this is definitely worth the investment in time to learn!
Also, there's a lot of very helpful documentation available, including PostScript drafts of both John Ousterhout's and Brent Welch's books, right on the 'Net! There is no excuse for not learning!
Have fun!
Oh, and see you at Linux Expo96!
John