Failed to create poller with specified size

It had always bugged me that I got this warning on my OS X MacBook Pro development system. (Well, it’s really an INFO, not a WARN, but it bugged me nonetheless.)

Apr 8, 2009 4:10:55 PM org.apache.coyote.ajp.AjpAprProtocol init
INFO: Initializing Coyote AJP/1.3 on ajp-8009

	<!-- snip -->

Apr 8, 2009 4:10:58 PM org.apache.tomcat.util.net.AprEndpoint allocatePoller
INFO: Failed to create poller with specified size of 8192


	<!-- snip -->

Apr 8, 2009 4:10:58 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 2703 ms

I never saw this issue on my CentOS production servers, so I figured it must be a BSD thing. At any rate, I recently tracked down a solution: add a pollerSize attribute to Tomcat’s connector element in server.xml.

<Connector
  pollerSize="1024"
  port="8009"
  URIEncoding="UTF-8"
  enableLookups="false"
  redirectPort="8443"
  maxPostSize="104857600"
  protocol="AJP/1.3" />

For a full listing of my server.xml, see UTF8 JDBC on Tomcat.

I suppose I could have mucked with OS X’s limits, but somehow this solution seemed less problematic. I’m not serving to the web from my laptop, so I feel comfortable lowering the pollerSize. Besides, I had two years of history in which a problem never arose even though Tomcat failed to create the poller… and I’ve now had a couple of months in which the poller has been created and works fine.
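
For the record, if you’d rather poke at the limits instead, these are the knobs I’d look at first. I haven’t verified which of these the APR poller actually bumps into on OS X, so treat this as a starting point, not a diagnosis.

$ ulimit -n                                    # this shell's per-process open-file limit
$ sysctl kern.maxfiles kern.maxfilesperproc    # system-wide and per-process file caps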

UTF8 JDBC on Tomcat

I’ve had occasion to once again visit the UTF8 chain of failure and thought I’d write about it. If for no other reason, it’s easier for me to find my notes when I shove them into a blog entry.

I previously wrote about UTF8 on Tomcat. I pointed out that I needed to add an attribute to the connector element so that the mod_jk connection would be UTF8-ified. I neglected to point out that I also needed to UTF8-ify the database connection.

jdbc:mysql://localhost/mywebapp?useUnicode=true&amp;characterEncoding=utf8

I’ve included my entire Tomcat server.xml file to illustrate just where this stuff goes. I don’t use port 8080; all my traffic comes through the mod_jk connector. Also, I don’t claim that I know what everything in my config file does. Most of everything: yes. All of everything: no.

I do claim that this server.xml has given me trouble-free service for more than two years.

<Server port="8005" shutdown="SHUTDOWN">

<Listener
  className="org.apache.catalina.core.AprLifecycleListener" />

<Listener
  className="org.apache.catalina.mbeans.ServerLifecycleListener" />

<Listener
  className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

<Listener
  className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>


<GlobalNamingResources>

  <Environment
    name="simpleValue"
    type="java.lang.Integer"
    value="30"/>

  <Resource
    name="UserDatabase"
    auth="Container"
    type="org.apache.catalina.UserDatabase"
    description="User database that can be updated and saved"
    factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
    pathname="conf/tomcat-users.xml" />

</GlobalNamingResources>

<Service name="Catalina">

  <Connector
    pollerSize="1024"
    port="8009" 
    URIEncoding="UTF-8"
    enableLookups="false"
    redirectPort="8443"
    maxPostSize="104857600"
    protocol="AJP/1.3" />

  <Engine name="Catalina" defaultHost="www.mywebapp.com">

    <Host
      name="www.mywebapp.com"
      appBase="webapps"
      debug="4"
      unpackWARs="true">

      <Alias>www.mywebapp.com</Alias>

      <Valve
        className="org.apache.catalina.valves.AccessLogValve"
        directory="logs/mywebapp"
        prefix="access."
        suffix=".log"
        pattern="common"/>

      <Logger
        className="org.apache.catalina.logger.FileLogger"
        directory="logs/mywebapp"
        prefix="host."
        suffix=".log"
        verbosity="debug"
        timestamp="true"/>

      <Context
        path=""
        docBase="mywebapp"
        debug="4"
        reloadable="true">

        <Logger
          className="org.apache.catalina.logger.FileLogger"
          directory="logs/mywebapp"
          prefix="context."
          suffix=".log"
          timestamp="true"/>

        <Resource
          name="jdbc/mywebapp"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          username="somename"
          password="somepass"
          url="jdbc:mysql://localhost/mywebapp?useUnicode=true↩
&amp;characterEncoding=utf8" />

        <Resource
          name="mail/Session"
          auth="Container"
          type="javax.mail.Session"
          mail.smtp.host="localhost" />

      </Context>

    </Host>

  </Engine>

</Service>

</Server>

use curl for api documentation

I’ve been working quite a bit with the REST plugin for Struts2. The really nice thing about this plugin is the way it cleans up Struts URLs, making them more Rails-like. I chuckled when Depressed Programmer suggested that Struts2 is “WebWork on drugs.” I hate Struts2. I really do.

Anyway, I have stripped down an AccountController to show just the POST service. In reality, the create() method is wired to a middle-tier service that authenticates username/password pairs and then updates session attributes with the member id and other bits of persistent session data I need.

// imports omitted

@Results({
  @Result(
    name = "success",
    type = ServletActionRedirectResult.class,
    value = "account")
})
public class AccountController extends ActionSupport
{
  private String username;
  private String password;
  // getters/setters omitted

  public AccountController() { }

  public HttpHeaders index()   { return notImplemented(); }
  public HttpHeaders show()    { return notImplemented(); }
  public HttpHeaders edit()    { return notImplemented(); }
  public HttpHeaders editNew() { return notImplemented(); }
  public HttpHeaders update()  { return notImplemented(); }
  public HttpHeaders destroy() { return notImplemented(); }
  public HttpHeaders create()
  {
    int status = (username.equals("alice")
           && password.equals("restaurant"))
      ? HttpServletResponse.SC_ACCEPTED
      : HttpServletResponse.SC_UNAUTHORIZED;

    return new DefaultHttpHeaders().withStatus(status);
  }

  private DefaultHttpHeaders notImplemented()
  {
    return new DefaultHttpHeaders()
      .withStatus(HttpServletResponse.SC_NOT_IMPLEMENTED);
  }

}

Note that I only return HTTP headers; the body content will always be empty.

I have found curl invaluable for documenting the API. This is a simple case, but consider a much more complicated system with dozens of URLs, each implementing many of the HTTP methods (including PUT and DELETE).

Third-party developers are the bane of the support engineer. Few people read documentation; they skim the material and code furiously. When their software fails, they file a bug claiming the API is broken. Usually, the API isn’t broken; the developer simply did not understand it.

I subscribe to the agile manifesto value of “working software over comprehensive documentation.” In my work, I have found that a few curl examples clear up most of these issues. For example, to exercise the create() method in the AccountController, simply post a form.

curl                                    \
  --request POST                        \
  --include                             \
  --url "http://ws.example.com/account" \
  --form "username=alice"               \
  --form "password=restaurant"          \
  --cookie-jar "cookies"                \
  --cookie "cookies"

I like to add the “--include” flag as it includes the HTTP response headers in the output. When I get a support call, I have the developer trot out the “documentation” curl examples and open a bash shell. This, of course, drives the Windows guys nuts, to which I reply, “buck up.” We work through the exercise of getting the HTTP request working with the curl example. Then a miracle occurs. The developer now has a working example on their machine from which to re-examine their code.

A final note. The “--cookie-jar” and “--cookie” parameters will handle cookies between the web server and your curl commands. In other words, you can log in to a website and these parameters will store your authenticated session id in a file. The file in this example is named “cookies” but it can be any legal filename. You can then make subsequent calls to URLs, passing the cookies (and, therefore, the session id) back up to the server.

For example, to upload your avatar picture to your new social network, first login using the curl command above. This establishes an authenticated session. Then post your picture using the curl command below, making sure you pass the cookies back up.

curl                                   \
  --request POST                       \
  --include                            \
  --url "http://ws.example.com/avatar" \
  --form "avatar=@somepix.jpg"         \
  --cookie-jar "cookies"               \
  --cookie "cookies"
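
The other HTTP methods work the same way. For instance, a hypothetical DELETE against an account resource (the REST plugin maps DELETE to the destroy() method by convention; the id 42 here is made up) would look like the command below. Against the stripped-down controller above it simply comes back 501 Not Implemented, but the shape of the call is the point.

curl                                       \
  --request DELETE                         \
  --include                                \
  --url "http://ws.example.com/account/42" \
  --cookie-jar "cookies"                   \
  --cookie "cookies"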

Finally, if you need to add a description, publish the curl command as part of a bash script. For example,

#!/bin/bash

# 1. you must login before you can upload the avatar
# 2. the web server will reject any avatar exceeding 2MB
# 3. do not forget the '@' symbol, a common mistake
# 4. do not forget to include --cookie and --cookie-jar

curl                                   \
  --request POST                       \
  --include                            \
  --url "http://ws.example.com/avatar" \
  --form "avatar=@somepix.jpg"         \
  --cookie-jar "cookies"               \
  --cookie "cookies"

Good luck!

Software RAID 10

I’ve been putting off building the software RAID10 on marmaduke. Today, I put it off no longer.

The server marmaduke has six storage devices (2 IDE and 4 SATA):

$ ls -1 /dev/hd?
/dev/hde
/dev/hdf

$ ls -1 /dev/sd?
/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd

The CD-ROM is attached as /dev/hde and a 300GB HDD as /dev/hdf, on which I’ve installed CentOS 5.2. The four SATA drives will be used to build a RAID 10. I’ve read through a number of postings on how to build a software RAID; the cleanest, shortest, and clearest of them is on tgharold.com.

mknod

First, create a node for the array.

# mknod /dev/md0 b 9 0

I chose md0 since it was available.

The parameter ‘b’ directs mknod to create a block (buffered) special file.

The parameter ‘9’ is the major device number. (huh?) It seems the correct value can be found in /proc/devices. (See also the CentOS docs.)

$ cat /proc/devices | grep -e "md$"
  9 md

The parameter ‘0’ is the minor device number and corresponds to the last digit in the device name /dev/md0. tgharold.com points out that the digit used in the device name should be the same as the last parameter. Why? Dunno. Something to look up some day.
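
My rough guess at the why (unverified): the kernel only cares about the major/minor pair; the digit in the name is a convention that keeps the two in sync for humans. You can see the pair mknod recorded (the owner, mode, and date below are illustrative and will differ on your system):

$ ls -l /dev/md0
brw-r--r-- 1 root root 9, 0 Jan 10 10:02 /dev/md0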

fdisk

Second, partition the disks, each and every one.

# fdisk /dev/sda
# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd

I don’t know why I set the boot flag. The important point is to set the ID to ‘fd’, which is ‘Linux raid autodetect’.

   Device  Boot  Start    End     Blocks  Id  System
/dev/sda1  *         1  60801  488384001  fd  Linux raid autodetect
/dev/sdb1  *         1  36481  293033601  fd  Linux raid autodetect
/dev/sdc1  *         1  36481  293033601  fd  Linux raid autodetect
/dev/sdd1  *         1  36481  293033601  fd  Linux raid autodetect
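
If you haven’t driven fdisk in a while, the interactive sequence goes roughly like this. It’s a sketch from memory (the exact prompts vary a bit), assuming one primary partition spanning the whole disk:

# fdisk /dev/sda
Command (m for help): n                    <- new partition
Command action: p                          <- primary
Partition number (1-4): 1
First cylinder:  <enter>                   <- accept the defaults
Last cylinder:   <enter>
Command (m for help): t                    <- change the partition ID
Hex code (type L to list codes): fd        <- Linux raid autodetect
Command (m for help): a                    <- (optional) toggle the boot flag
Partition number (1-4): 1
Command (m for help): w                    <- write the table and exit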

One of my drives is larger than the others. Three drives (Seagate ST3300620AS) were previously used in a RAID 5. It’s impossible to find these drives any longer, so I picked up the closest match (Seagate ST3500630AS). It has a larger capacity but otherwise the specs match. In a RAID 10, the larger drive’s extra space (~200GB) will go unused.

I formatted the drives but perhaps it was unnecessary. I did this as a check on each drive before I began building the array. Didn’t seem to hurt anything.

# mkfs.ext3 /dev/sda1
# mkfs.ext3 /dev/sdb1
# mkfs.ext3 /dev/sdc1
# mkfs.ext3 /dev/sdd1

mdadm

Third, time to pull the trigger. Let mdadm do the heavy lifting.

# mdadm             \
  --create /dev/md0 \
  -v                \
  --raid-devices=4  \
  --chunk=32        \
  --level=raid10    \
  /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
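
Before moving on, it’s worth watching the initial sync and sanity-checking the geometry. These two commands are standard kernel/mdadm interfaces; look for the raid10 level, four active devices, and a resync progress bar while the array builds.

# cat /proc/mdstat
# mdadm --detail /dev/md0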

mkfs

Fourth, whether you formatted the drives individually or not, you must format the RAID.

# mkfs.ext3 /dev/md0

mount

Finally, mount the RAID.
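
Assuming the mount point doesn’t already exist, create it first:

# mkdir -p /mnt/xen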

# mount /dev/md0 /mnt/xen

Sweet.

$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
           287G  36G  237G  14% /
/dev/hdf1   99M  19M   76M  20% /boot
tmpfs      3.9G    0  3.9G   0% /dev/shm
/dev/md0   551G 198M  523G   1% /mnt/xen
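
One follow-up worth noting: the mount above won’t survive a reboot on its own. The usual recipe, from memory for CentOS 5 (check your distro’s docs), is to record the array in /etc/mdadm.conf and add an fstab entry:

# mdadm --detail --scan >> /etc/mdadm.conf
# echo "/dev/md0  /mnt/xen  ext3  defaults  1 2" >> /etc/fstab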

Now, where is that Xen tutorial?

RAID 01 vs. RAID 10

I just finished building a four-drive software RAID 10 on marmaduke and wanted to jot down my thoughts on RAID failure. In particular, I read a number of postings on the difference between RAID 01 and RAID 10. None of them satisfactorily described the differences or how those differences change when adding more drives.

Marmaduke only has four drives in its array. Most of the web postings dealt with four drives but I also wanted to see the impact on six drives. Here is a hypothetical set of six drives.

Six Drives:
  /dev/sda  a
  /dev/sdb  b
  /dev/sdc  c
  /dev/sdd  d
  /dev/sde  e
  /dev/sdf  f

For clarity, I will refer to /dev/sda simply as ‘a’, and so on.

Recall that RAID 0 ‘stripes’ two or more drives and RAID 1 ‘mirrors’ two drives.

RAID 01 Composition

Four Drive RAID 01
  STRIPE:   a  b     as  0
  STRIPE:   c  d     as  1
  MIRROR:   0  1     as  R01 (RAID 0+1)

Six Drive RAID 01
  STRIPE:   a  b  c  as  0
  STRIPE:   d  e  f  as  1
  MIRROR:   0  1     as  R01 (RAID 0+1)

In both the four and six disk arrays, RAID 01 mirrors two striped arrays. Each striped array can contain two or more drives but there are always only two striped arrays. (A mirror has only two subarrays.)

RAID 10 Composition

Four Drive RAID 10
  MIRROR:   a  b     as  0
  MIRROR:   c  d     as  1
  STRIPE:   0  1     as  R10 (RAID 1+0)

Six Drive RAID 10
  MIRROR:   a  b     as  0
  MIRROR:   c  d     as  1
  MIRROR:   e  f     as  2
  STRIPE:   0  1  2  as  R10 (RAID 1+0)

In both the four and six disk arrays, RAID 10 stripes two or more mirrored raids. Each mirror has exactly two disks.

01 vs. 10

Which is better? Both cost the same in terms of disk drives. Both yield the same final RAID capacity. Performance is (for my purposes) the same.

I conclude that the difference is primarily in the failure rates of the two configurations. There are two types of failures.

First is a failure that takes out a drive but not the array. Replace the drive and you can rebuild the array.

Second is a failure that takes out a drive and the array. Nothing you can do. The array is lost. Game over.

Which drives or drive sets cause a catastrophic array loss? I’ve created two tables (ah, the beauty of pure 7-bit ASCII) to detail every scenario for a 4-drive and a 6-drive array in both RAID 01 and RAID 10 configurations.

An asterisk denotes a catastrophic failure.

Column 1 : “F”, number of drives that failed in the array.

Column 2 : “DRIVES”, each drive in the array.

Column 3 : “R01”, subarrays for RAID 01.

Column 4 : “R10”, subarrays for RAID 10.

Column 5 : “RAIDS”, the two final arrays.

            Four Drive Arrays

       DRIVES     R01    R10     RAIDS
    ...........   ...   .....   ...  ...
F   a b c d       0 1   0 1     R01  R10
----------------------------------------
0 |             |     |       |        
----------------------------------------
  |       *     |   * |       |        
1 |     *       |   * |       |        
  |   *         | *   |       |        
  | *           | *   |       |        
----------------------------------------
  |     * *     |   * |   *   |       *  
  |   *   *     | * * |       |  *     
2 |   * *       | * * |       |  *     
  | *     *     | * * |       |  *     
  | *   *       | * * |       |  *     
  | * *         | *   | *     |       *
----------------------------------------
  |   * * *     | * * |   *   |  *    * 
3 | *   * *     | * * |   *   |  *    * 
  | * *   *     | * * | *     |  *    * 
  | * * *       | * * | *     |  *    * 
----------------------------------------
4 | * * * *     | * * | * *   |  *    * 
----------------------------------------

Neither RAID configuration can survive a 3- or 4-drive failure.

Both configurations can survive a single-drive failure. One of the subarrays in RAID 01 always fails with a single drive failure, but that doesn’t bring down the array. In RAID 10, the subarray doesn’t fail, because the subarray is a mirror and still has its other drive.

With four drives, there are six possible combinations of two-drive failures. In this case, RAID 10 has twice the survival rate of RAID 01: two failure points versus four.

            Six Drive Arrays

       DRIVES     R01    R10     RAIDS
    ...........   ...   .....   ...  ...
    a b c d e f   0 1   0 1 2   R01  R10
----------------------------------------
0 |             |     |       |        
----------------------------------------
  |           * |   * |       |        
  |         *   |   * |       |        
1 |       *     |   * |       |        
  |     *       | *   |       |        
  |   *         | *   |       |        
  | *           | *   |       |        
----------------------------------------
  |         * * |   * |     * |       *
  |       *   * |   * |       |        
  |       * *   |   * |       |        
  |     *     * | * * |       |  *     
  |     *   *   | * * |       |  *     
  |     * *     | * * |   *   |  *    *
2 |   *       * | * * |       |  *     
  |   *     *   | * * |       |  *     
  |   *   *     | * * |       |  *     
  |   * *       | *   |       |        
  | *         * | * * |       |  *     
  | *       *   | * * |       |  *     
  | *     *     | * * |       |  *     
  | *   *       | *   |       |        
  | * *         | *   | *     |       *
----------------------------------------
  |       * * * |   * |     * |       *
  |     *   * * | * * |     * |  *    *
  |     * *   * | * * |   *   |  *    *
  |     * * *   | * * |   *   |  *    *
  |   *     * * | * * |     * |  *    *
  |   *   *   * | * * |       |  *     
  |   *   * *   | * * |       |  *     
  |   * *     * | * * |       |  *     
  |   * *   *   | * * |       |  *     
  |   * * *     | * * |   *   |  *    *
3 | *       * * | * * |     * |  *    *
  | *     *   * | * * |       |  *     
  | *     * *   | * * |       |  *     
  | *   *     * | * * |       |  *     
  | *   *   *   | * * |       |  *     
  | *   * *     | * * |   *   |  *    *
  | * *       * | * * | *     |  *    *
  | * *     *   | * * | *     |  *    *
  | * *   *     | * * | *     |  *    *
  | * * *       | *   | *     |       *
----------------------------------------
  |     * * * * | * * |   * * |  *    *
  |   *   * * * | * * |     * |  *    *
  |   * *   * * | * * |     * |  *    *
  |   * * *   * | * * |   *   |  *    *
  |   * * * *   | * * |   *   |  *    *
  | *     * * * | * * |     * |  *    *
4 | *   *   * * | * * |     * |  *    *
  | *   * *   * | * * |   *   |  *    *
  | *   * * *   | * * |   *   |  *    *
  | * *     * * | * * | *   * |  *    *
  | * *   *   * | * * | *     |  *    *
  | * *   * *   | * * | *     |  *    *
  | * * *     * | * * | *     |  *    *
  | * * *   *   | * * | *     |  *    *
  | * * * *     | * * | *     |  *    *
----------------------------------------
  |   * * * * * | * * |   * * |  *    *
  | *   * * * * | * * |   * * |  *    *
5 | * *   * * * | * * | *   * |  *    *
  | * * *   * * | * * | *   * |  *    *
  | * * * *   * | * * | * *   |  *    *
  | * * * * *   | * * | * *   |  *    *
----------------------------------------
6 | * * * * * * | * * | * * * |  *    *
----------------------------------------

With a six drive array, RAID 10 has three failure points if two drives fail. However, RAID 01 has nine failure points.

Finally, if three drives fail, RAID 10 has 12 failure points compared to RAID 01, which has 18. In the following table, ‘prm’ is the number of possible drive combinations for that number of failures.

          RAID Failure Points

        4-drive           6-drive
    ...............   ...............
F   R01   R10   prm   R01   R10   prm
----------------------------------------
0                1                 1
1                4                 6
2    4     2     6     9     3    15
3    4     4     4    18    12    20
4    1     1     1    15    15    15
5    -     -     -     6     6     6
6    -     -     -     1     1     1
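
If you don’t trust hand-built ASCII tables (I barely do), a short bash loop can brute-force the same counts for the six-drive case. The stripe and mirror groupings below are the ones from the composition diagrams above; everything else is just counting.

#!/bin/bash
# Brute-force every drive-failure combination for the six-drive layouts:
#   RAID 01: stripes (a,b,c) and (d,e,f), mirrored
#   RAID 10: mirrors (a,b) (c,d) (e,f), striped

for ((m = 0; m < 64; m++)); do
  # count the failed drives in this combination
  f=0
  for ((i = 0; i < 6; i++)); do (( (m >> i) & 1 )) && ((f++)); done

  # RAID 01 is lost when both stripes contain at least one failed drive
  s0=0; s1=0
  for i in 0 1 2; do (( (m >> i) & 1 )) && s0=1; done
  for i in 3 4 5; do (( (m >> i) & 1 )) && s1=1; done
  r01=$(( s0 & s1 ))

  # RAID 10 is lost when any mirror loses both of its drives
  r10=0
  for p in 0 2 4; do
    (( ((m >> p) & 1) && ((m >> (p + 1)) & 1) )) && r10=1
  done

  (( total[f]++ ))
  (( r01 )) && (( lost01[f]++ ))
  (( r10 )) && (( lost10[f]++ ))
done

for ((f = 0; f <= 6; f++)); do
  printf "%d failed: R01 loses %2d, R10 loses %2d of %2d combinations\n" \
    "$f" "${lost01[f]:-0}" "${lost10[f]:-0}" "${total[f]}"
done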

It is my conclusion that the likelihood of a catastrophic array failure is substantially greater for RAID 01 and prudence suggests a preference for RAID 10.

bash progress monitor

I have a remote machine that is used to store and process XML files. Recently, I had need to duplicate a directory of XML files (e.g., cp -r a b). It’s not really germane to the subject here, but this particular server has a whack configuration and I gotta rant before I continue.

The office server (scrappy) has pretty good specs.

[scrappy ~]$ cat /proc/meminfo

MemTotal:      3980800 kB

[scrappy ~]$ cat /proc/cpuinfo

processor   : 0
model name  : Intel(R) Core(TM)2 CPU   6600  @ 2.40GHz
cpu MHz     : 2394.000
cache size  : 4096 KB

processor   : 1
model name  : Intel(R) Core(TM)2 CPU   6600  @ 2.40GHz
cpu MHz     : 2394.000
cache size  : 4096 KB

[scrappy ~]$ cat /proc/scsi/scsi

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: SONY   Model: DVD RW AW-Q170A    Rev: 1.72
  Type:   CD-ROM                           ANSI SCSI revision: 05

[scrappy ~]$ cat /proc/ide/hd?/model

ST3320620AS

Whoa! What’s my SATA drive doing attached to the IDE driver? When I compare to my home CentOS box (marmaduke), I see that its drives are connected differently. Yes, marmaduke has one HDD connected via the IDE driver (ST3320620A), but that drive is a PATA drive. The four SATA drives are connected via the SATA driver. (The SATA drives will be configured as a software RAID 10; stay tuned. There’s a Xen project in the making.)

[marmaduke ~]$ cat /proc/scsi/scsi

Attached devices:
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3500630AS      Rev: 3.AA
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3300620AS      Rev: 3.AA
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3300620AS      Rev: 3.AA
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3300620AS      Rev: 3.AA
  Type:   Direct-Access                    ANSI SCSI revision: 05

[marmaduke ~]$ cat /proc/ide/hd?/model

PIONEER DVD-RW DVR-111D
ST3320620A

scrappy was configured before arriving at the office by a friend of a friend who runs a PC shop. “But it was such a deal!” Yeah, right. Bunch of monkeys. How hard is it to configure the BIOS to use the SATA interface rather than the IDE interface?

Anyway, I don’t have time to rebuild scrappy right now so I live with the dismal disk performance. Here’s the problem at hand. I have numerous XML files—some largish and some smallish. I have several sets and each set has about 4000 files.

[scrappy ~]$ ls src/*xml | wc -w

4323

[scrappy ~]$ ls -l src/*xml | sort -n -r -k5

-rw-r--r-- 1 kelly kelly 315804120 Dec 19 15:46 0001.xml
-rw-r--r-- 1 kelly kelly 275651475 Dec 19 17:34 0002.xml
-rw-r--r-- 1 kelly kelly 260250994 Dec 19 16:15 0003.xml
-rw-r--r-- 1 kelly kelly 222402294 Dec 19 16:25 0004.xml
-rw-r--r-- 1 kelly kelly 204642813 Dec 19 15:52 0005.xml
     .
     .
     .
-rw-r--r-- 1 kelly kelly      1467 Dec 19 19:15 4321.xml
-rw-r--r-- 1 kelly kelly      1467 Dec 19 16:01 4322.xml
-rw-r--r-- 1 kelly kelly      1098 Dec 19 19:19 4323.xml

I wanted to duplicate the set of files as I needed to run some prototype code that I didn’t trust to be non-destructive. Simple.

[scrappy ~]$ cp -r src tgt

However, the disk performance is agonizing. So bad that I leave it while I work on another machine. But I want to know the progress and see it as it changes. With six to ten shells open, I want something that can be resized to use minimal screen real estate. I want a quick command line progress monitor.

bash to the rescue. I didn’t want to create a script file, so I just jack it right into the terminal’s command line. When you open the while loop, bash keeps prompting on the next line (with >) until you close the loop with the done keyword.

[scrappy ~]$ while true; do
>   ts=`date`
>   src=`ls src/*xml 2>/dev/null | wc -w`
>   tgt=`ls tgt/*xml 2>/dev/null | wc -w`
>   echo -ne "  ${ts}  ${src}  ${tgt}        \r"
>   sleep 1
> done

  Fri Jan  9 15:20:17 PST 2009  4323  2304

Recall we’ve previously covered that 2>/dev/null hides the error message generated by ls if no file is found.

The components are stored in shell variables as a matter of convenience and displayed using echo.

echo is passed two switches. The -n switch suppresses the trailing newline so that the cursor remains on the same line as the displayed text. The -e switch causes backslash escapes in the text to be interpreted. This is useful since I want to add a trailing carriage return character, which pushes the cursor back to the beginning of the line while remaining on the same line as the text.

After sleeping for one second, the script generates a new echo output, which overwrites the old text. I suppose I could add a test to the script to break when ${src} equals ${tgt}.
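
If I ever bother, it would look something like this: the same loop, with a test that prints a final newline and breaks once the counts match.

[scrappy ~]$ while true; do
>   ts=`date`
>   src=`ls src/*xml 2>/dev/null | wc -w`
>   tgt=`ls tgt/*xml 2>/dev/null | wc -w`
>   echo -ne "  ${ts}  ${src}  ${tgt}        \r"
>   (( tgt == src )) && echo && break
>   sleep 1
> done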

I don’t know why disk I/O is so slow on scrappy. Perhaps the mode is set to use programmed I/O rather than DMA. Who knows? Who cares? Both scrappy and marmaduke have Intel ICH8 SATA controllers. scrappy has a faster processor with more cache. Yet, marmaduke smokes on disk throughput on either the SATA or IDE drives. Something is wonky.
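
When I do get around to it, the first things I’ll check are whether DMA is enabled on the IDE-attached drive and what the raw read throughput looks like. hdparm does both: -d reports the current DMA setting (look for using_dma = 1) and -t times buffered, non-cached reads. The device name here is a guess; substitute whatever /dev/hd* node scrappy actually exposes.

# hdparm -d /dev/hda
# hdparm -t /dev/hda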

I’d like to say that I can ignore the issue. I have way too much going on right now. But it bugs me.