Now that you have the regular maintenance jobs
running at more reasonable times and emailing you their reports, you would
probably like to know what those jobs do, and what the reports tell you.
To begin, let's have another look at the system crontab,
specifically the command for the daily job:
15 3 * * * root periodic daily
As you might remember, cron uses the periodic
utility to run any scripts found in the /etc/periodic/daily
directory. The real meat of this cron job, then, exists in those scripts,
which are both rather complex and would require several articles to explain
fully. However, the scripts do contain comments describing each general
task, so having a good look at them is informative.
You might also remember that the numbers in the files' names dictate
the order in which periodic runs them. Lower numbers run
first, and we'll take a look at the first script to run,
100.clean-logs. To view it with more, enter:

more /etc/periodic/daily/100.clean-logs
As you look through the script, keep in mind that lines beginning with
the # character are ignored as the shell runs through the
script. Typically, such lines contain descriptive comments about the
script, but they can also be actual command lines that for various reasons
are turned off or "commented out."
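You can verify this behavior for yourself with a tiny sketch (not taken from the maintenance scripts): a commented-out command contributes nothing to a script's output.

```shell
# The first echo is commented out, so the shell never runs it:
# echo "this line is commented out and will not run"
echo "only this line runs"
```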
Also, the lines beginning with
echo provide some of the
output that ends up in the reports. For example, these two lines found in
100.clean-logs will begin the daily report with a blank line
and the quoted line of text:
echo ""
echo "Removing old log files:"
Here's what the top of the report will look like:

Removing old log files:
As you probably have guessed by the name of the script, its function is
to remove old system logs. In fact, this is a standard BSD script that
reads its own configuration information from the main
periodic configuration file,
/etc/defaults/periodic.conf, which tells this script which
directory to clean, and for how long the files can stay there unmodified
before being removed.
In Mac OS X's case, the directory it will maintain is
/Library/Logs/CrashReporter, and the length of time files can
stay there before this script will delete them is 60 days. You can view
these settings yourself inside
/etc/defaults/periodic.conf. To have more
display a file starting from the location of a search string in that file,
use its +/ option. For example, to view the section of
/etc/defaults/periodic.conf that begins with the string
100.clean-logs, use this command:
more +/100.clean-logs /etc/defaults/periodic.conf
You'll see clearly where these settings are defined:
# 100.clean-logs
daily_clean_logs_enable="YES"                        # Delete stuff daily
daily_clean_logs_dirs="/Library/Logs/CrashReporter"  # Delete under here
daily_clean_logs_days="60"                           # If not accessed for
daily_clean_logs_ignore=""                           # Don't delete these
daily_clean_logs_verbose="NO"                        # Mention files deleted
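If you'd rather change these settings, the usual convention (carried over from BSD's periodic system) is to leave the defaults file alone and create /etc/periodic.conf containing just your overrides, which periodic reads after the defaults. A hypothetical example, with made-up values:

```shell
# /etc/periodic.conf -- local overrides, read after
# /etc/defaults/periodic.conf. These hypothetical settings keep crash
# logs for only 30 days and have the report name each file deleted:
daily_clean_logs_days="30"
daily_clean_logs_verbose="YES"
```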
CrashReporter is a process that runs as a background
daemon if you've enabled "crash reporting" from within the Console
application's preferences. (You'll find Console along with Terminal in
Applications → Utilities.) CrashReporter watches
for applications crashing and records their dying gasps, mostly cryptic
debugging data, to log files in
/Library/Logs/CrashReporter. Upon a first-ever crash, an
application gets its own log file in that directory, named with the
application's name followed by .crash.log. Any subsequent crashes of
that application will then get logged to that same file.
If you've often been running an especially troublesome application,
it's possible that the CrashReporter directory
could accumulate some large files. The
100.clean-logs script, then, will look at that directory each
time it runs and summarily delete any file inside that hasn't been written
to in at least 60 days, since such files would by then be of little use.
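You can see the same age test in action with find; this is a hedged sketch of the technique, not the script's exact command, and it's demonstrated on a scratch directory standing in for /Library/Logs/CrashReporter.

```shell
# Files not modified in more than 60 days are candidates for deletion --
# the same test 100.clean-logs applies. Nothing is deleted here.
LOGDIR=$(mktemp -d)                                # stand-in directory
touch -t 202001010000 "$LOGDIR/Stale.crash.log"    # backdated, "old" file
touch "$LOGDIR/Fresh.crash.log"                    # just written
find "$LOGDIR" -type f -mtime +60 -print           # lists only the stale file
```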
The next script to run as part of the daily routine is
500.daily. This is a much longer script; again, I can't fully
detail it here, but what follows are the most pertinent highlights. The
skipped parts of the script mostly relate to processes not applicable to a
"stock" Mac OS X system.
The first section of the script removes "scratch and junk files" from your system. Specifically, some of these items are:
Files existing in
/tmp that haven't been
accessed or changed in at least the last three days. The
/tmp directory contains, among other things, the
Temporary Items directory used by many GUI applications, so
it's often chock-full of good trash fodder.
Files existing in
/var/tmp that haven't been
accessed in at least a week, or changed in at least the last three
days. Some Unix processes leave junk here as well.
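The same kind of find-style tests drive these deletions. A hedged illustration of the /tmp rule, as a preview only:

```shell
# List files in /tmp that haven't been accessed or changed in at least
# three days -- roughly the daily script's criterion. 2>/dev/null skips
# files we lack permission to examine; nothing is deleted here.
find /tmp -type f -atime +3 -mtime +3 -print 2>/dev/null || true
```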
The 500.daily script also performs one of the most
important tasks of any in these scripts, that is, backing up your NetInfo
database. Actually, the backup command in this script doesn't
duplicate the database as you can do manually using NetInfo Manager (also
found in Applications → Utilities), but instead dumps the data into a raw
text file.

Restoring your database from this file requires a bit of work, but the dozen or so steps needed would certainly be worth the time if you really had to restore all of your NetInfo data. You can find a good description of these steps (which I've tested to work with Mac OS X 10.2.4) here.
The heading of the next section of the script is, as you can see in the
cron report, "Checking subsystem status."
Checking subsystem status:

disks:
Filesystem     1K-blocks     Used   Avail Capacity  Mounted on
/dev/disk0s5    40016844 33274772 6742072    83%    /
fdesc                  1        1       0   100%    /dev
/dev/disk2s1s2    354552   354552       0   100%    /Volumes/Q3TA
This output is a result of the
df -k -l command in the
daily script, which reports the used and free space on all local
disks. This example shows a 40-gigabyte system disk (which will always
show as "mounted on"
/, or "root") with about 6.7 gigabytes
of free space.
You can ignore the
fdesc line, which doesn't refer to any
actual disk, but to part of the filesystem plumbing.
Any other local volumes you have mounted will also show up in this
list. This example shows a disk (in fact, a CD-ROM) mounted on
/Volumes/Q3TA. This attribute, known as the disk's "mount
point," shows you the path you would take to reach that disk via the
CLI. For example, to peek inside the Q3TA CD you would enter
cd /Volumes/Q3TA.
You will find all local disks and most network volumes mounted within
the /Volumes directory.
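You don't have to wait for the nightly report to see this information; df's options here are the ones the daily script uses, and mount is a common companion:

```shell
# Show used/free space on local disks, as the daily script does:
df -k -l      # -k: sizes in 1K blocks, -l: local filesystems only
# Show everything mounted and where -- local disks and network volumes:
mount
```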
The next relevant command simply checks the sendmail message queue directory
for any undeliverable messages. If the report doesn't show this directory
as empty (and the procedure in this tutorial is your only use of
sendmail), then it's likely you have some reports that never arrived.
The 500.daily script then runs the netstat -i
command, which outputs the network statistics to the report, a few lines
of which might look something like this:
network:
Name  Mtu   Network       Address            Ipkts  Ierrs  Opkts Oerrs  Coll
lo0   16384 <Link#1>                         15528      0  15528     0     0
lo0   16384 localhost     ::1                15528      -  15528     -     -
lo0   16384 fe80:1::1     fe80:1::1          15528      -  15528     -     -
lo0   16384 127           localhost          15528      -  15528     -     -
gif0* 1280  <Link#2>                             0      0      0     0     0
stf0* 1280  <Link#3>                             0      0      0     0     0
en0   1500  <Link#4>      00:03:93:bd:c9:2c 160141 545668  54718     0     0
en0   1500  fe80:4::203   fe80:4::203:93bb: 160141      -  54718     -     -
en0   1500  172.24        dhcp-172-24-31-   160141      -  54718     -     -
en0   1500  (16)00:00:00:67:24              160141 545668  54718     0     0
en1   1500  <Link#5>      00:30:bd:09:4b:bd   7723      0   2489     0     0
en1   1500  fe80:5::230   fe55:5::230:65ff:   7723      -   2489     -     -
en1   1500  172.18        172.18.1.21         7723      -   2489     -     -
ppp0  1466  <Link#6>                             0      0      0     0     0
ppp0  1466  172.24        dhcp-172-24-40-        0      -      0     -     -
The netstat -i command lists your network interfaces in
rows, showing traffic statistics for each (since activation) in the
columns. This example actually shows just two hardware interfaces:
en1 is an AirPort card, and en0 is the Ethernet
port. The ppp0 interface shown is one that is in use by the PPTP VPN
software.
You will see some other lines for the IP interfaces, including the
loopback interface (lo0), and a couple more related to
IPv6 networking (gif0 and stf0). At this point,
you'll be fine focusing only on the lines showing "<Link#>" in
the Network column. The other pertinent columns are the actual
traffic statistics:
Ipkts -- Incoming packets
Ierrs -- Incoming packet errors
Opkts -- Outgoing packets
Oerrs -- Outgoing packet errors
Coll  -- Packet collisions
What you should be concerned with, of course, are any non-zero entries in the error or collision columns. I won't go into troubleshooting your network here, but this page might be a good place to start if something does turn up: http://www.princeton.edu/~unix/Solaris/troubleshoot/netstat.html.
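If you'd rather not eyeball the table, a small helper (hypothetical, not part of the maintenance scripts) can flag trouble for you. The field numbers assume rows that include an Address column, like the <Link#> lines for en0 and en1 above:

```shell
# Print the name of any interface whose Ierrs ($6), Oerrs ($8), or
# Coll ($9) column is non-zero, skipping the header line.
check_errors() {
  awk 'NR > 1 && ($6 > 0 || $8 > 0 || $9 > 0) { print $1 }'
}
# Usage: netstat -i | check_errors
```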
The next important job of the
500.daily script is the
rotation of the system log. This log file, /var/log/system.log,
records the status and error messages from the large number of processes
that comprise the OS.
In the case where no backups of
system.log yet exist, the
script makes the first backup, compressing it with gzip and appending
the suffix 0. This results in a file called
system.log.0.gz. By "rotating" this log file on subsequent
days, the script will first rename system.log.0.gz to
system.log.1.gz, and then create a new system.log.0.gz.
Each day, the script creates a new system.log.0.gz
after incrementing the other backup filenames by one. Once
system.log.7.gz is created, however, there will be no ninth
backup file. Instead, on the subsequent rotation the
system.log.6.gz file (renamed to
system.log.7.gz) just overwrites the previous
system.log.7.gz.
This procedure, then, ensures that you'll have over a week's worth of logs to refer to in case problems arise, but not so many as to waste disk space, and likely none so large that they're hard to view.
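The rotation scheme reads naturally as a small shell loop. This is an illustration run in a scratch directory, not the script's actual code (which works on /var/log/system.log):

```shell
cd "$(mktemp -d)"          # practice somewhere harmless
LOG=system.log
echo "day one" > "$LOG"

rotate() {
  i=6
  while [ "$i" -ge 0 ]; do
    # .6.gz -> .7.gz (clobbering the oldest), ... .0.gz -> .1.gz
    if [ -f "$LOG.$i.gz" ]; then
      mv "$LOG.$i.gz" "$LOG.$((i + 1)).gz"
    fi
    i=$((i - 1))
  done
  gzip -c "$LOG" > "$LOG.0.gz"   # the current log becomes backup number 0
  : > "$LOG"                     # start a fresh, empty live log
}

rotate    # leaves an empty system.log plus system.log.0.gz
```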
Next, the script "cleans" the web server log files by deleting any rotated files that have been around longer than a week.
As you saw in the system crontab, cron
also calls on periodic each week to look in the
/etc/periodic/weekly directory for scripts to run, and the script
it will find there is named 500.weekly.

The 500.weekly script performs three important tasks, none
of which provides any output to the report except a statement that the
command was performed (unless there are any errors to report).
One of the most useful Unix command-line utilities is
locate, a lightning-fast file finder. locate
does its magic by searching through a database of filenames created by
indexing every pathname on your system. Instead of scanning your disks to
find a file, locate just whips through its pre-indexed
database and returns results almost immediately.
locate results, however, are only as accurate as its
database. Files added after the database has been built will not be
found, so locate is not the tool for every search. But with
weekly database rebuilding, it's great for quickly finding that long-lost
file you know is tucked away somewhere on your drive. The first task of
the weekly script, then, is to rebuild the locate database.
If you are antsy to try
locate for yourself, have a look
at the short
locate tutorial included in the article found here.
The weekly script updates another important database used by the
whatis utility. whatis is a nifty little memory
jogger that quickly shows you the function of a given command, like this:
[localhost:~] chris% whatis netstat
netstat(1) - show network status
What whatis displays is, in fact, the first line of a
command's "man page". If you're not already familiar with man
pages, you should be. These comprise the massive collection of online Unix
documentation included with Mac OS X. Look here for a great
tutorial on using them.
The weekly script, then, creates a fresh
whatis database from the
man pages it finds, allowing whatis to
return an answer faster than you can say, "Duh!"
Last, the weekly script also rotates several other log files.
As you also saw in the system crontab,
cron calls on periodic each month to run any
script found in the /etc/periodic/monthly directory, and the
script it will find there is named 500.monthly.
There's actually very little to the monthly script, but what it does
provide can be pretty interesting, if you like to know where all your time
goes. The monthly script's first task is to run the "connect time
ac. ("Connect" here means "logged
When run from the monthly script,
ac will report the
cumulative time, in hours, each user account has been logged in since the
last time the script ran, as well as the total for all users:
Doing login accounting:
        total      714.22
        chris      548.76
        miho       101.77
        andy        54.39
        jonny        9.18
        test1        0.06
        ftp          0.06
ac calculates these totals by reading the current
wtmp log file, which logs every login and logout. You can
view this list anytime with the last command:
[localhost:/var/log] chris% last
chris     ttyp2                     Thu Feb 21 16:18   still logged in
chris     ttyp1                     Thu Feb 21 16:16   still logged in
chris     console   localhost       Thu Feb 21 16:02   still logged in
reboot    ~                         Thu Feb 21 16:01
So how does
ac know to restart the accounting each month?
Well, right after
ac reports its findings, the monthly script
rotates the wtmp logs, creating a new empty
file to start logging to. The next time the monthly script runs, then,
ac will do its accounting based on this new file.
You should now have a good idea of what the three periodic
jobs do and what to look for in the reports. If you're still having
problems with anything, make sure to look at the TalkBack sections for all
parts of this tutorial, where readers and I have covered most of the
common problems and made some corrections.
Also, if you would like to learn lots more about the Unix side of
Mac OS X, there's plenty of tutorial material out there for you.
Now that your feet (or even your knees) are wet working with Terminal and Unix, you have an entire ocean left to explore. I hope this tutorial has given you the confidence to dive in. There are other articles here on the Mac DevCenter you should now be ready for, as well as plenty more around the Internet.
Chris Stone is a Senior Macintosh Systems Administrator for O'Reilly, coauthor of Mac OS X in a Nutshell and contributing author to Mac OS X: The Missing Manual, which provides over 40 pages about the Mac OS X Terminal.
Read more Learning the Mac OS X Terminal columns.
Return to the Mac DevCenter.
Copyright © 2009 O'Reilly Media, Inc.