I don't have enough diskspace for the entire system - what can I do?

	The IRAF system is distributed in three parts: the source tree
for the core system and the NOAO package (the as.* files), the binaries
for the core system (the ib.* files), and the NOAO package binaries (the
nb.* files).  Because the system is distributed with source, considerable
disk space can be recovered by stripping the source files with the "mkpkg
strip" utility, run as user iraf from the iraf root directory:
	cl> mkpkg strip			# strip the core system
	cl> cd noao			# move to the NOAO directory
	cl> mkpkg -p noao strip		# strip the NOAO package
The source files are not required
for a run-time system.  Software development, including IMFORT programming
and building external packages, can still be accomplished on a stripped system.
	The "mkpkg strip" step is normally done after unpacking the as and
ib/nb files, once IRAF is fully installed.  On systems where space is
extremely tight, you can run "mkpkg strip" immediately after unpacking the
as files and running the install script; this frees sufficient space to
allow the binaries to be unpacked.  We estimate that about half the total
disk space consumed by IRAF is recovered by stripping the source.  If this is still not
sufficient, it is possible to delete individual binaries by hand.  IRAF site
support can advise you as to which binaries are least likely to be useful
for your particular applications.
	It is important to remember that IRAF doesn't necessarily have to
be installed on a disk local to the machine; any available (e.g. NFS mounted)
disk will do.   External packages can similarly be stripped of source.
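To gauge what stripping would recover, you can compare the size of the iraf tree before and after the strip.  A minimal sketch (the iraf root path below is only an example; substitute your own):

```shell
# Report total disk usage of the iraf tree in kilobytes; run this before
# and after "mkpkg strip" and compare.  /usr/local/iraf is an example path.
du -sk /usr/local/iraf 2>/dev/null || echo "no iraf tree at that path"
```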

What does the IRAF install script really do - what files are modified?

	In general terms the install script does the following:
	- edits the iraf root pathname and imdir directory into several
	  iraf system files
	- creates fifo pipes for image display in the /dev directory
	- creates the /usr/include/iraf.h symbolic link defining the iraf root
	- sets root ownership for the tape allocation task 'alloc.e'
	- creates symbolic links for IRAF commands like 'cl' in the site
	  dependent 'local bin directory'.
Because the install script affects files in system directories, root permission
is required to run it successfully.  Workarounds for some things done by the
install script can be found elsewhere in this FAQ.
	The install script must be run on each client machine to create the
fifo pipes for that node.  If nodes in the network will be sharing an
iraf directory tree, the iraf root must appear to be the same on all nodes.
This can be done with a symbolic link and is necessary so the definition of
the iraf root in the shared hlib$cl.csh file is valid for all nodes.
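The per-node work the install script automates can be sketched at the host level.  In this sketch a scratch directory stands in for the real filesystem root so it is harmless to run; a real install creates these entries in the actual /dev and /usr/include as root, and the iraf pathname below is only an example:

```shell
# Scratch directory standing in for the real filesystem root.
root=$(mktemp -d)
mkdir -p "$root/dev" "$root/usr/include"

# Fifo pipes for image display (a real install creates a set of these).
mkfifo "$root/dev/imt1i" "$root/dev/imt1o"
ln -s "$root/dev/imt1o" "$root/dev/imt1"

# Symbolic link defining the iraf root; the target path is an example.
ln -s /usr/local/iraf/unix/hlib/libc/iraf.h "$root/usr/include/iraf.h"

ls -l "$root/dev" "$root/usr/include"
```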

I'm not able to write in /usr - can I still install IRAF?

	The only thing done by the Unix IRAF install script in /usr is the
creation of a symbolic link /usr/include/iraf.h.  This file contains
definitions necessary to rebuild IRAF (which you will not be doing) and
defines the IRAF root and HOST directories used in iraf networking.  The CL
must know the IRAF root or it cannot start.  However, this information is now
part of the $hlib/cl.csh script, so /usr/include/iraf.h is no longer searched
when starting the CL.  /usr/include/iraf.h is searched, however, when the IRAF
root must be known without starting the CL, as in any node! reference invoking
IRAF networking.  For this reason, the /usr/include/iraf.h symbolic link is
a required part of the IRAF installation.
	It is common, although not necessary, to choose /usr/local/bin as
the "local bin directory" when installing IRAF.  This is where the install
script makes links for commands such as 'cl' and 'mkiraf'.  Any directory
(outside of the iraf tree!) can serve this purpose as long as it is in each
user's search path.  It is also possible to make these commands available as
aliases, by putting the following in each user's .login file:
	setenv 	iraf	/path/iraf/		# note trailing '/' !!
	source  $iraf/unix/hlib/irafuser.csh
	alias	cl 	$hlib/cl.csh
Even if you cannot write to the system directories, it is still imperative
that the install script be run (as user 'iraf' at least) so that the files
in the IRAF system are properly edited with the required pathnames.  If this
is not done the system may not run at all, or images will be inaccessible.

Can I have more than one version of IRAF installed at a time?

	Yes, but it's not recommended.  Usually the only time this is needed
is when locally written software fails to run after an IRAF upgrade (e.g.
large scripts using obsoleted tasks or parameters).  In this case it is
better in the long term to upgrade the software.
	Where there is no alternative to having both systems around it is
possible to have separate installations of the system, but only one can be
'installed' in the normal way for doing program development.  What's typically
done is that a host level script such as
	#! /bin/csh
	# IRAFO -- Run the "old" (previous) version of IRAF.
	setenv  iraf    /ursa/iraf/irafo/
	setenv  host    $iraf/unix/
	setenv  hlib    $iraf/unix/hlib/
	setenv  hbin    $iraf/unix/bin/
	# Set value of IRAFARCH to the platform architecture.
	if (`uname -r | cut -c1` == 5) then
	    setenv IRAFARCH ssun
	else
	    setenv IRAFARCH sparc
	endif
	setenv arch .$IRAFARCH
	# Run the desired CL.
	exec $iraf/bin.$IRAFARCH/cl.e
is put in the local bin directory (which should be common to all users).  The
path definitions for $iraf should be changed to point to the iraf root dir-
ectory for the old system (or whichever one is not the default).  Users would
start up IRAF using this script (call it 'irafo' or something) instead of
using the 'cl' command.
	In this way it is possible to have two versions of IRAF available at
the same time, but problems can arise if you log into one version using a or parameter files generated by the other version of iraf.  It is
easiest to create separate login directories for each version to isolate any
version specific files.

Can we make our local software look like an IRAF package?

	Section 8 of the "Introductory User's Guide to IRAF Scripts"
(available from the iraf/docs directory of the ftp archive) deals with
creating a personal package of tasks, including help pages.  Similarly,
Chapter 7 of the "Introductory User's Guide to IRAF SPP Programming" (also
in the iraf/docs directory of the archive) covers the creation of an IRAF
package.  Lastly, Chapter 4 of the "SPP Reference Manual" written by the
STSDAS group (available via ftp from the STSDAS archive) discusses how to
implement a package of SPP tasks.
	Note that any external package available from our archive can be
used as a template for creating a new package.  Indeed, the SPPTOOLS external
package has tasks which create and rename packages.  Contact iraf site
support if you still have questions about how to create a package.
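As a sketch of the basic mechanism (file, package, and task names here are invented for illustration; see the guides above for the full details), a personal package can be declared in your file and defined in a small CL script:

```
# In (hypothetical names):
task	mypkg.pkg = home$mypkg/

# In home$mypkg/
package	mypkg
task	tool1	= mypkg$
task	tool2	= mypkg$
clbye()
```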

What does "ERROR: Cannot open device (node!imtool,,512,512)" mean?

	This message indicates a problem with the communications between the
IRAF DISPLAY task and the image display server, e.g., SAOimage or Imtool.
There are several known causes:
1. First verify that an XImtool or SAOimage process is running.  You must have
   a display window open before attempting to send output to it.  The window
   can be iconified, but it must be running.
2. If you are running SAOimage, make sure the message "Open to accept IRAF
   input" appears in the text window from which you started SAOimage.  If not,
   restart SAOimage so it uses the IRAF fifo pipes.  For the current version
   of SAOimage (v1.07 or greater) this is the default.  It can be
   explicitly specified by adding the command line argument +imtool.  Note that
   the -imtool flag turns OFF IRAF communication for v1.06/1.07 of SAOimage.
   If you're running v1.02 of SAOimage, the -imtool command line argument is
   required for communication with IRAF.
3. If that's not it, you may have an installation problem.  One function of
   the install script is to make entries for the fifo pipes in the /dev
   directory; a long listing there should show something like:
   lrwxrwxrwx  1 root           10 Oct  6  1989 /dev/imt1 -> /dev/imt1o
   prwxrwxrwx  1 root            0 Oct 27  1988 /dev/imt1i
   prwxrwxrwx  1 root            0 Oct 27  1988 /dev/imt1o
   This may have failed for some reason when you ran install.  Make sure you
   ran install as root after defining the IRAF environment variables.
   This problem will generally only affect SAOimage displays since IRAF will
   attempt to use unix sockets to connect to XImtool.
4. Another possibility is that the install script was not run on this
   particular node.  Install must be run on (or for) each node in the network
   you intend to use with IRAF.  For those nodes that have a local /usr
   partition, run the install script on the machine itself.  For those nodes
   that don't have a local /usr partition, run install on the server for the
   diskless node, then run install on the diskless node itself.  More
   information about installing IRAF on a network is found elsewhere in this
   FAQ listing.
If none of these explains your problem, contact iraf site support for
assistance.

Where's my image? I display an image to SAOimage but get no image displayed and no error, only the cl prompt.

	If there's no error, DISPLAY has successfully sent the image to an SAOimage
process, but apparently not the one you intended.  The image could have
been sent to another user's SAOimage window (see information about multiple
SAOimage processes per CPU elsewhere in this FAQ) or to a "zombie" process.
If you interrupt the DISPLAY task with ^C (and in certain other ways), you
can end up with an SAOimage process running without a connected window.  If
you try to display and get no output and no error, this may be the cause.
On Unix systems, you can use "ps -aux" to find the process and then kill it.
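A sketch of hunting down such a zombie at the Unix level (the bracketed first letter in the pattern keeps the grep command itself out of the listing):

```shell
# List any saoimage processes; kill orphans by PID (second column).
ps aux | grep '[s]aoimage' || echo "no saoimage process running"
```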

Why am I told "task `cl' has no param file" when I try to start the CL?

	The error message almost always means there is an error in the iraf
root pathname and the param file simply can't be found.  But since there's
a small chance the parameter file for the CL has been deleted or had read
permission removed, get a long listing of the file permissions:
	tucana% ls -l $iraf/pkg/cl/cl.par
	-rw-r--r--  1 tody         1811 May 29  1992 /usr/iraf//pkg/cl/cl.par
If that checks out, you probably have an incorrect definition of the IRAF
root directory in one of two places.  The iraf root is defined in the
hlib$cl.csh script which gets edited by the install script.  The iraf
root can also be defined in the user's environment, which takes precedence
over the cl.csh definition.
	If only the iraf account shows the error, it may be this definition
that is wrong.  Make sure the login directory for the iraf account
is $iraf/local, that is, the local subdirectory of the iraf root directory.
Otherwise, the .login file won't be read as intended at login time.  If the
error is seen for users other than iraf it may be that something went wrong
when install was run that resulted in an incorrect definition of iraf being
placed in hlib$cl.csh.  Sometimes people have an "old" definition of iraf in
their .login or .cshrc file which can cause the error.  Also check that the
value of IRAF in /usr/include/iraf.h (which is a symbolic link to
hlib$libc/iraf.h) is correct.  To solve the error, you need to determine
the source of the incorrect value for the iraf root directory.  Make sure
any definition of iraf in .login or .cshrc includes a trailing slash.
	A second but less likely cause is that the user's environment has
defined a 'host' environment variable, typically as the machine name.  IRAF
assumes that 'host', if defined, is the path to the iraf$unix (or iraf$vms)
directory.  Removing or resetting this definition will fix the problem.
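A sketch for tracking down where a bad value is coming from (the dot-file names are the usual ones; adapt the list to your shell):

```shell
# A stale iraf definition in a dot-file overrides the value the install
# script edited into hlib$cl.csh, so check the dot-files first.
for f in "$HOME/.login" "$HOME/.cshrc"; do
    [ -f "$f" ] && grep -n 'iraf' "$f" || true
done
echo "checked dot-files"
```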

What does "ERROR: Cannot open connected subprocess (pkg$x_pkg.e)" mean?

	In general, the message indicates the named executable can't be found
or executed for some reason.  It could be a problem with permissions (no read
or execute permission) or, more likely, the executable can't be found.  The
named executable (x_pkg.e) is first looked for in the package bin directory,
e.g., bin$ or noaobin$.  [The last place searched is the package root
directory as reported in the error message.]  You can cd to the package bin
directory and look around:
	cl> cd noaobin
	cl> path
	cl> dir long+
	If none of the non-script tasks in the NOAO package can be executed,
an installation error may have occurred.  Check that the noao bin executables
were placed in the directory pointed to by the noao$bin.`mach' symbolic link.
It may be that they weren't installed at all or that they were placed in the
wrong directory.
	Often, when tasks in an external package can't be executed, it is
because mkpkg failed and the executables weren't created.  Check the spool file
for errors.  Another possibility with external packages is that the "-p pkg"
flag was omitted on the mkpkg command line, in which case the executables end
up in the pkg root directory with names like "pkgbinx_pkg.e".  In this case,
you can simply move them to the appropriate bin directory, i.e., the
architecture-specific subdirectory of the package root directory.  A trivial
reason for
this error with external packages is that the package root is incorrectly
defined (maybe a missing trailing slash (UNIX) or unescaped $ (VMS)).

What do I do about "older than expected" or "can't open file" shared library messages?

	This usually indicates a missing or insufficient definition for
the LD_LIBRARY_PATH host environment variable.  For example, on a Sun
system this may be set simply to '/usr/openwin/lib', but if the application
in question was compiled under the local X11R5 system the missing shared
object may be in /usr/lib/X11.  In rare cases the missing object is in
the compiler directory.
	To find out where the missing file is and reset the LD_LIBRARY_PATH
variable appropriately, try the following:
	% echo $LD_LIBRARY_PATH		# see what the current setting is
	% ldd `which saoimage`		# check dependencies
             -lX11.4 => /usr/openwin/lib/
             -lc.1 => /usr/lib/
             -ldl.1 => /usr/lib/
	% setenv LD_LIBRARY_PATH  /usr/lib/X11:/usr/openwin/lib
This last command resets the variable so that both /usr/lib/X11 and
/usr/openwin/lib are searched for the needed file; the command should run
normally after that.  The directories /lib, /usr/lib, and /usr/local/lib
are searched by default.
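When resetting the variable, it is safest to prepend the new directory rather than overwrite the old value.  A Bourne-shell sketch (the csh equivalent uses setenv; /usr/lib/X11 is just an example directory):

```shell
# Prepend a directory to LD_LIBRARY_PATH, preserving any existing value;
# the ${var:+...} expansion adds the separating colon only when needed.
LD_LIBRARY_PATH="/usr/lib/X11${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```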

Why does VMS/IRAF report "cannot open tmp$uidxxx" when accessing a tape?

	The error "cannot open file (tmp$uidxxxxx)" typically indicates a
problem with the definition of the tmp directory.  Either the directory
doesn't exist or you don't have write permission in it.  In VMS/IRAF,
tmp$ is defined as tempdisk:[iraftmp], where tempdisk is defined in a
command file in hlib$ as part of the IRAF installation.  It is necessary to
create a directory [.iraftmp] as a subdirectory of the tempdisk area.
Some sites choose to have private tmp areas rather than a single area for
all IRAF users.  This is described in the VMS/IRAF IG; often the private
tmp directory is a subdirectory of the user's login directory.
Another less likely explanation is that you may have tmp defined relative to
the current directory, so when you change directories, tmp can't be found;
tmp should be an absolute pathname.  The first step is to see how tmp$ is
defined on your system and whether or not you can edit a junk file in the
directory:
        cl> show tmp
        cl> show tempdisk
        cl> edit tmp$junk

Why does my script tell me "dictionary full"?

	`Dictionary full' means the CL's dictionary (what it uses to catalog
tasks, packages, parameters, etc.) is full.  It can occur legitimately if
you are loading really long scripts with lots of loaded packages, local
variables, etc.  It can also occur illegitimately, e.g. if a script were
repeatedly loading the same things in a loop and not unloading them.  The
first thing to check is that it is not the latter; if you have to, send us
your script.
    	If it turns out that it is not a fault in the script design, then the
size of the dictionary can be increased.  To do so, edit an include file in
the CL package directory and rebuild the CL:
	cl> edit pkg$cl/config.h
	...change DICTSIZE from its current value to something
	    larger (say, 50% larger)
	% cd $iraf/pkg/cl
	% mkpkg update

What does "Warning: Out of space in image header" mean?

	This means that the 'min_lenuserarea' iraf environment variable is
too small for the header size.  This value is set as a system default in
the hlib$zzsetenv.def file but can be reset by the user in their file
by uncommenting the definition (i.e. removing the '#' sign) and increasing
the size.  Certain packages which deal with large headers will set this to
an appropriate value when the package is loaded.  See IRAF Newsletter #10,
October 1990 (available from the iraf/docs directory of the ftp archive), for a related
discussion of "Problems with Long Image Headers".
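For example, the limit can also be raised for the current session from the CL (the value 64000 is only an illustration; pick something larger than your headers require):

```
cl> reset min_lenuserarea = 64000
cl> flpr		# flush the process cache so the new value is seen
```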

Why does PHOT warn "Graphics overlay not available for display device."?

	For all platforms, graphics overlay is not currently available on
the image display in the digital photometry tasks.  These tasks can read from,
but not write to, the image display.  The warning message you report is normal
behavior for this task.  The PHOT and POLYMARK manual pages give examples
of working interactively from a contour plot, by setting the task.display
parameter to stdgraph and the CL environment variable stdimcur to stdgraph.

Why does my task report "ERROR: parameter `foo' not found"?

	This most often happens after an IRAF upgrade when the parameters for
a particular task may have changed, but the parameter file in the user's
uparm directory contains the set for the previous version.  It is usually
cured with an "unlearn <task>" command, or by initializing the uparm directory
with a new MKIRAF (which is recommended anyway after an IRAF upgrade).
        In cases where the problem continues, it means the last IRAF update
wasn't done properly (e.g. the patch files were not applied before installing
the binaries).  Contact site support if you are unsure of the installation.
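For example, if the complaint comes from PHOT (the task name here is just an illustration):

```
cl> unlearn phot	# discard the old uparm copy of phot's parameters
```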

What does "ERROR: MWCS: dimension mismatch (mw_translate)" mean?

	This message means there is some error in the image header dealing
with the description of the world coordinate system (WCS).  In particular
the WCSDIM keyword is incorrect.  The value of this keyword should
either match the dimensionality of the image if there is no WAXMAP01
keyword or half of the number of elements in that keyword if
present (that is if there are 6 numbers then WCSDIM should be 3).
How does this happen?  The most common cause in V2.10-V2.10.1 is a bug in
the APEXTRACT package: when the "extras" parameter is set, producing 3D
images, WCSDIM is incorrectly set to 2.  This can easily be fixed by:
        cl> hedit <images> wcsdim 3
        cl> hedit <images> cd3_3 1. add+
        cl> hedit <images> ltm3_3 1. add+
The last two additions are to avoid a "matrix inversion error" because
if the WCS dimensionality is 3 then there must be nonzero elements
for the step per pixel.  The NPROTO.LINPOL task has a similar error
which may be fixed in the same way.  Another possibility is improper
editing of the image header by the user.

My pixel files were moved to another disk and now the i_pixfile pathname in the image headers is wrong. How can IRAF find the pixels?

	The pixels of an IRAF image are stored separately from the image header.
The pathname to the pixel file is contained in a header parameter referenced as
"pixfile" by the HSELECT and HEDIT tasks.  Users occasionally need to
modify this pixel pathname, most commonly when the disk containing the pixels
has been renamed or if the pixel files have been moved en masse for system
administration reasons to a new location.  The following method enables you to
modify a large number of image headers to contain new pixel pathnames.  The
technique is to first create a temporary file of image names and their current
pixel pathnames using the task HSELECT.  You globally edit this temporary file
to contain the new pixel pathnames and then use the modified file as input to
the HEDIT task.
       cl> hselect *.imh $I,pixfile yes > filin
       .... [do a global edit to filin and edit in the new pixel pathname]
       cl> list = "filin"
       cl> while (fscan (list, s1, s2) != EOF)
       >>>     hedit (s1, "pixfile", s2, add-, verify-, show+, update+)

How do I turn off the system id banner in output hardcopy plots?

	In any task that uses the GTOOLS interface (like SPLOT but not IMPLOT), you
can turn off the sysid banner with a cursor command.  In interactive cursor
mode, try the command :/help for a full help page.  You'll see:
    :/sysid [yes|no]        Include the standard IRAF user/date banner?
    :/title string          Title
The sysid banner includes information from the CL variable "version":
        cl> show version
So you can modify the sysid banner by resetting "version" to be the null
string or anything you want:
        cl> reset version = ""
You can also reset the CL variable "userid" to personalize the banner.  This
will affect IMPLOT as well as SPLOT plot banners.

Why does IRAF kick me out when I type ^Z to exit EPARAM?

	The ^Z sequence has probably been mapped to the suspend character
on your machine.  [You bring a suspended process back into the foreground
with the UNIX 'fg' command.]  The ^Z is being intercepted by the terminal
driver and suspending the CL before the EPARAM task ever sees it.  Mapping
of control characters is typically done with the stty command from a .login
file.
	Whatever keystroke you use to EXIT_UPDATE in eparam, it must be noted
in the <editor>.ed file in dev$, where <editor> is the name of the editor
you are using (e.g., dev$vi.ed).
your iraf home directory ("cl> show home").  The CL looks first in your home
directory before searching dev$ for the ed file.  You can also have multiple
choices for the mapping in the .ed file, such as already exist for the MOVE_UP
and related keystrokes.
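Alternatively, if you would rather keep using ^Z in eparam, you can unmap the suspend character at the host level before starting the CL (note this disables job-control suspend for the whole session):

```
% stty susp undef
```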