AMASS Overview
Design Prevents Thrashing
In a storage environment, there are many volumes but only a
few drives. If several requests come in for many different
volumes, the potential exists for AMASS to spend most of its
time moving media in and out of drives and little of its time
actually processing requests.
The following features help AMASS handle many simultaneous
requests, preventing thrashing and optimizing request handling
(a simplified sketch of queue sorting follows the list):
• Request queue sorting
• Read-ahead
• Write-behind
• Prioritizing algorithm
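Request queue sorting can be pictured as grouping pending requests by volume so that each volume is mounted once and all of its requests are serviced together, instead of swapping media for every request. The following C sketch is only a simplified illustration of that idea, not AMASS source code; the request record and volume identifiers are hypothetical.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical request record; the real AMASS queue entries are internal. */
    struct request {
        int  volume;    /* volume that must be mounted to service the request */
        long offset;    /* position on the volume, for ordered access          */
    };

    /* Sort by volume first, then by offset, so all requests for one
     * volume are serviced with a single mount and in media order.   */
    static int by_volume_then_offset(const void *a, const void *b)
    {
        const struct request *ra = a, *rb = b;
        if (ra->volume != rb->volume)
            return ra->volume - rb->volume;
        return (ra->offset > rb->offset) - (ra->offset < rb->offset);
    }

    int main(void)
    {
        struct request queue[] = {
            { 7, 2048 }, { 3, 512 }, { 7, 128 }, { 3, 4096 }, { 1, 0 }
        };
        size_t n = sizeof(queue) / sizeof(queue[0]);

        /* Unsorted, this queue could force a mount for every request;
         * sorted, it needs only three mounts (volumes 1, 3, and 7).   */
        qsort(queue, n, sizeof(queue[0]), by_volume_then_offset);

        for (size_t i = 0; i < n; i++)
            printf("volume %d offset %ld\n", queue[i].volume, queue[i].offset);
        return 0;
    }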
Cache Optimizes Requests
The AMASS cache resides on a hard disk attached to the UNIX
server where AMASS is installed. The cache implementation
follows all UNIX file system conventions for synchronous I/O
and the sync and fsync functions.
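Because the cache honors the standard UNIX synchronous I/O conventions, an application can flush its data with the usual fsync call, just as it would on a local file system. The fragment below is a generic POSIX example rather than AMASS code, and the file path under the AMASS mount point is a hypothetical placeholder.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical path: substitute a file under your AMASS mount point. */
        const char *path = "/archive/reports/daily.dat";
        const char buf[] = "archived record\n";

        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        if (write(fd, buf, strlen(buf)) < 0) {
            perror("write");
            close(fd);
            return 1;
        }

        /* fsync() does not return until the data has been flushed,
         * following the normal UNIX file system semantics.          */
        if (fsync(fd) < 0) {
            perror("fsync");
            close(fd);
            return 1;
        }

        close(fd);
        return 0;
    }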
Data caching provides the following benefits:
• Faster system performance
• Protection against thrashing
In addition, a large cache allows large files to be queued faster
before being moved to a library, thus increasing throughput.
After files are in the cache, multiple writes to the same volume
are grouped into a single operation, which minimizes volume
movement and maximizes throughput. Therefore, a high
aggregate throughput is achieved through the following items:
• Grouping write operations in the cache