Dino Explorer AMI Lite

Welcome to DASD Explorer AMI


This AMI is tailored to collect and present DASD performance-related data based on the SMF 74 records (RMF Device Activity), the SMF 78 records (RMF I/O Queueing Activity) and, optionally, on volume records, such as capacity, allocated space, free space and storage groups, gathered by DXCOLLECT.

Once you complete the loading process you can easily get results such as the ones described below:
  • Cache - Activity inside the storage (RMF 74.5);
  • Device – Front-end response times (RMF 74.1);
  • Channel – channel path activity (RMF 73);
  • LCU – Hyper-PAV and LCU activity (RMF 78.3);
  • DASD occupation – DCOLLECT "V" records;
  • DASD configuration discovery.

DASD Performance Portal

Table of contents


Some things you will have

Cache Activity

Activity inside the storage array:


Config Explorer

Configuration discovery navigation.



By crossing information from the RMF 70–78 records, we build explorers for many views.

IOPS x Response time





AMI Description

This Amazon Machine Image (AMI) is ready to use, i.e. it has been pre-configured with everything required for you to download tools, upload data and run your queries.



This AMI includes:


Limitations:


Other AMIs

You should check the other available AMIs.

Overview


SMF (System Management Facilities) is the central repository for mainframe events (logs), and the DXPLSMF program (the Dino batch collector) reads these dump files and creates a CSV file (comma-separated values) with the relevant information.

These CSV files have to be transferred to this AMI instance, and the events will be loaded into the DinoDB that resides on the SQL Server database.

The process consists of the following phases:
The following picture illustrates the collection process.



Collecting SMF

In order to run the DXPLSMF program, you need to:
  1. Download the DXPL.V400.DXPLLOAD.XMIT load library in XMIT format to a z/OS partition that has access to the SMF dump files

  2. Once you have copied the load library to the mainframe, RECEIVE it:
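    The RECEIVE can be done from any TSO command prompt. A typical dialog, assuming the dataset names used above (adjust them to your installation's naming standards), looks like this; when RECEIVE prompts for restore parameters, reply with the target dataset name:

    RECEIVE INDATASET('DXPL.V400.DXPLLOAD.XMIT')
    INMR906A Enter restore parameters or 'DELETE' or 'END' +
    DATASET('DXPL.V400.DXPLLOAD')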



  3. Execute the following job:
    //DXPLSMF  JOB MSGLEVEL=(1,1),NOTIFY=&SYSUID
    //DXPLSMF  EXEC PGM=DXPLSMF
    //STEPLIB  DD DISP=SHR,DSN=DXPL.V400.DXPLLOAD
    //SMFIN    DD DISP=SHR,DSN=smf.dump.file
    //CSVOUT   DD DSN=dxpl.csvout,UNIT=SYSALLDA,
    //         DISP=(NEW,CATLG),VOL=SER=volser,
    //         SPACE=(CYL,(100,50),RLSE)
    //SYSPRINT DD SYSOUT=*
    //DXPLIN   DD *
    PROD CACHE
    PROD CEC
    PROD CHPID
    PROD DEVICE
    PROD LCU
                
  4. You can check your output by browsing the CSVOUT file. It should look like this:



  5. TERSE the CSVOUT file on the mainframe:

    CSVOUT is an excellent candidate for compression; the TRS file should be about 5% of the original CSV file.
    The following example shows how to TERSE a file on z/OS:
    //DXTERSE  JOB MSGLEVEL=(1,1),NOTIFY=&SYSUID
    //STEPNAME EXEC PGM=TRSMAIN,PARM='PACK'
    //SYSPRINT DD SYSOUT=*
    //INFILE   DD DISP=SHR,DSN=dxpl.csvout
    //OUTFILE  DD DISP=(,CATLG,DELETE),SPACE=(CYL,(100,500),RLSE),
    //         DSN=dxpl.csvout.trs,VOL=SER=volser,UNIT=3390
                
You can get detailed information in the MVS Data Collector for Mainframe Assessment v312.pdf.



Transferring from mainframe to AMI instance

The main concern about file transfer between the mainframe and the open platform is the transfer mode:

Tersed files: transfer in BINARY mode.
Text CSV files: transfer in TEXT mode, in order to convert from EBCDIC to ASCII.

Depending on your network infrastructure, you can transfer your files directly from your mainframe to the AMI instance.
However, most installations have severe restrictions on moving data around.
It may be easier to transfer to your local desktop and then to the AMI instance.



FTP service

You can use the FTP service already available on the AMI instance.

The hostname is the public IP address of your AMI instance:


Host: Your public IP
User: Administrator
Password: your password


An FTP process should look like this:
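Assuming you are uploading from your local desktop with the standard command-line FTP client, a minimal session for the tersed file might look like this (the IP address below is a placeholder; note the binary command, required for .TRS files):

    C:\> ftp 203.0.113.10
    User: Administrator
    Password: ********
    ftp> binary
    ftp> put DXPL.CSVOUT.TRS
    ftp> quit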





Sharing local drives

You can also try to share a local drive or removable device with your remote desktop session.





Upload directory (Z:\)

The Z: drive of your AMI instance is temporary storage, i.e. every time you stop your instance, you lose any data on this drive.



The default location for the FTP server is Z:\.



Expanding the tersed file (.TRS)

Before you load the data you need to expand (unterse) the uploaded file (DXPL.CSVOUT.TRS).
On a command window (cmd.exe), change to the Z:\ directory and run:
terse dxpl.csvout.trs dxpl.csvout.csv


The output file from the terse execution will appear, as shown below.



That's the file you will load into Dino Explorer.



Loading on Dino database

Launch the Data Loader program by clicking:

Start -> All programs -> Dino Explorer 2014 -> Server utilities -> Data Loader, as shown below.



The import process is started through the Import data option on the Data menu.



When the window appears, you will notice that we created a _default configuration to help you with this process: you just select the files that you have transferred.

Taking a look at this screen, we can quickly see that:


Your AMI has a Temporary Storage 1 (Z:); it's the default location for uploads (170 GB of space available).

Through the Add button you select your files.



There's a tab related to each Dino Explorer product.
These tabs configure individual loads for each product, which means that you only need to bother with the DASD views tab.
Click on DASD views to check the pre-configured views.



After that, click the Start button and the loading process will begin.
The last tab is where messages are displayed during the execution.
At the end of the process you will get the message Load executed successfully.


DASD Explorer

There's an interface for each Dino Explorer product.

Here, we are going to fetch data on DASD Explorer.

DASD Explorer is an analytic tool that allows users to track and analyze the usage of DASD volumes on IBM mainframe computers.
Its main function is to submit queries to the Dino database.

There are several relevant tasks that users can perform with this powerful tool:
Start the DASD Explorer by clicking:
Start -> All programs -> Dino Explorer 2014 -> DASD Explorer, as shown below.



The DASD Explorer main window is shown below




DASD Performance Portal

There is a facility to create numerous charts from DASD Explorer information.

Click here to access DASD Performance Portal.