4bears Dino Explorer - AMI Lite

Welcome to CPU Explorer AMI

This AMI is tailored to collect and present CPU-related data based on SMF type 30 records and their subtypes (Accounting) and RMF 70.1 records (CPU Activity).
Once you complete the loading process, you can easily get results such as the one shown below:
CPU Activity information based on the RMF 70.1 record.

Table of contents

CPU Activity

CPU Activity information summarized per LPAR, SYSPLEX, CEC (mainframe box) or the whole environment.


Running Applications

Running applications and resources usage on a specific time period.

This information is based on SMF type 30, subtypes 2 and 3.

AMI Description

This Amazon Machine Image (AMI) is ready to use, i.e. it has been pre-configured with everything required for you to download tools, upload data, and run your queries.


This AMI includes:




Other AMIs

You should check the other available AMIs.


The SMF (System Management Facility) is the central repository for mainframe events (logs). The DXPLSMF program (the Dino batch collector) reads these dump files and creates a CSV (comma-separated values) file with the relevant information.

These CSV files have to be transferred to this AMI instance, and the events are then loaded into the DinoDB database, which resides on SQL Server.
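To illustrate the kind of comma-separated data the loader consumes, the sketch below summarizes CPU time per job from a tiny CSV sample. The column names (system, jobname, cpu_seconds) and the values are illustrative assumptions, not the actual DXPLSMF layout.

```python
import csv
import io

# Hypothetical sample of a DXPLSMF-style CSV extract; the real column
# layout produced by DXPLSMF may differ (names here are assumptions).
SAMPLE = """system,jobname,cpu_seconds
SYSA,PAYROLL,12.5
SYSA,BACKUP,3.0
SYSB,PAYROLL,7.5
"""

def cpu_by_job(text):
    """Sum CPU seconds per job name across all systems."""
    totals = {}
    for row in csv.DictReader(io.StringIO(text)):
        totals[row["jobname"]] = totals.get(row["jobname"], 0.0) \
            + float(row["cpu_seconds"])
    return totals

print(cpu_by_job(SAMPLE))  # -> {'PAYROLL': 20.0, 'BACKUP': 3.0}
```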

The process consists of the following phases.
The following picture illustrates the collection process.


Collecting SMF

In order to run the DXPLSMF program, you need to:
  1. Download the DXPL.V400.DXPLLOAD.XMIT load library in XMIT format to a z/OS partition that has access to the SMF dump files

  2. Once you have copied the load library to the mainframe, RECEIVE it (TSO RECEIVE command):

  3. Execute the following job:
    //DXPLSMF  EXEC PGM=DXPLSMF
    //STEPLIB  DD DISP=SHR,DSN=dxpl.v400.dxplload
    //SMFIN    DD DISP=SHR,DSN=smf.dump.file
    //CSVOUT   DD DSN=dxpl.csvout,UNIT=SYSALLDA,
    //         DISP=(NEW,CATLG),VOL=SER=volser,
    //         SPACE=(CYL,(100,50),RLSE)
    //DXPLIN   DD *
  4. You can check your output by browsing the CSVOUT file. It should look like this:

  5. TERSE the CSVOUT file on the mainframe:

    CSVOUT is an excellent candidate for compression: the TRS file should be about 5% of the size of the original CSV file.
    The following example shows how to TERSE a file on the z/OS:
    //TERSE    EXEC PGM=TRSMAIN,PARM=PACK
    //INFILE   DD DISP=SHR,DSN=dxpl.csvout
    //OUTFILE  DD DISP=(NEW,CATLG),
    //         DSN=dxpl.csvout.trs,VOL=SER=volser,UNIT=3390
You can find detailed information in MVS Data Collector for Mainframe Assessment v312.pdf.


Transferring from mainframe to AMI instance

The main concern when transferring files between the mainframe and the open platform is the transfer mode:

Tersed file (.TRS): transfer in BINARY mode.
Text CSV file: transfer in TEXT mode, in order to convert from EBCDIC to ASCII.
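The reason the mode matters is that mainframe text is encoded in EBCDIC, not ASCII; TEXT-mode FTP performs the translation for you, while the tersed binary file must pass through untouched. The snippet below demonstrates the translation itself. Code page cp037 (EBCDIC US) is an assumption; your site may use a different EBCDIC variant.

```python
# The same word "Hello" in EBCDIC (cp037) and in ASCII: the byte values
# differ completely, which is why a .TRS file corrupted by a TEXT-mode
# transfer cannot be untersed afterwards.
ebcdic_bytes = bytes([0xC8, 0x85, 0x93, 0x93, 0x96])  # "Hello" in cp037
print(ebcdic_bytes.decode("cp037"))   # -> Hello
print(b"Hello".decode("ascii"))       # same text, different byte values
```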

Depending on your network infrastructure, you can transfer your files directly from your mainframe to the AMI instance.
However, most installations have severe restrictions on moving data around.
It may be easier to transfer first to your local desktop and then to the AMI instance.


FTP service

You can use the FTP service already available on the AMI instance.

The hostname is the public IP address of your AMI instance:

Host: Your public IP
User: Administrator
Password: your password
If you forgot your password, you can get it from the AWS instance launcher:


An FTP session should look like this:
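If you prefer to script the transfer instead of using an interactive client, the hedged sketch below uses Python's standard ftplib, choosing BINARY or TEXT mode by file extension as described above. The host, credentials, and file name in the commented call are placeholders for your instance's values; the upload function itself is an illustrative sketch, not part of the Dino tooling.

```python
from ftplib import FTP

def transfer_mode(filename):
    """Pick the FTP mode by file type: .trs is binary, anything else text."""
    return "binary" if filename.lower().endswith(".trs") else "ascii"

def upload(host, user, password, path):
    """Upload one file to the AMI's FTP service.

    host/user/password/path are placeholders -- substitute your
    instance's public IP, the Administrator account, and your file.
    """
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(path, "rb") as f:
            if transfer_mode(path) == "binary":
                ftp.storbinary(f"STOR {path}", f)   # no translation
            else:
                ftp.storlines(f"STOR {path}", f)    # EBCDIC/ASCII handled

# upload("your.public.ip", "Administrator", "your-password", "dxpl.csvout.trs")
```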


Sharing local drives

You can also share a local drive or removable device through your remote desktop session.


Upload directory (Z:\)

The Z: drive of your AMI instance is temporary storage, i.e. every time you stop your instance, you lose any data on this drive.

The default location for the FTP server is Z:\.


Expanding the tersed file (.TRS)

Before you load the data, you need to expand (unterse) the uploaded file (DXPL.CSVOUT.TRS).
In a command window (cmd.exe), change to the Z:\ directory and run:
terse dxpl.csvout.trs dxpl.csvout.csv

The output file from the terse execution will appear, as shown below.

That's the file you will load into Dino Explorer.
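Before loading, it can save a failed run to sanity-check that the expanded file really is ASCII text (a file that still looks binary usually means a TEXT-mode transfer or unterse step went wrong). The sketch below is an illustrative check, not part of the Dino tooling; the expected number of separators per line is a placeholder you would match to your extract's layout.

```python
import os
import tempfile

def looks_like_csv(path, expected_commas=2):
    """Rough sanity check before loading: file is non-empty, decodes as
    ASCII, and every non-empty line has the expected number of commas.
    (expected_commas is a placeholder; match your extract's layout.)"""
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        return False
    try:
        text = data.decode("ascii")
    except UnicodeDecodeError:
        return False  # probably still tersed/binary or still EBCDIC
    return all(line.count(",") == expected_commas
               for line in text.splitlines() if line)

# Demo with a temporary stand-in for Z:\dxpl.csvout.csv
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as tmp:
    tmp.write("SYSA,PAYROLL,12.5\nSYSA,BACKUP,3.0\n")
print(looks_like_csv(tmp.name))  # -> True
os.remove(tmp.name)
```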


Loading on Dino database

Launch the Data Loader program by clicking:

Start -> All programs -> Dino Explorer 2014 -> Server utilities -> Data Loader, as shown below.

The import process is started through the Import data option on the Data menu.

When the window appears, you will notice that we created a _default configuration to help you with this process: you just select the files that you have transferred.

Looking at this screen, note that:

Your AMI has a Temporary Storage 1 (Z:); it is the default location for uploads (170 GB of space available).

Use the Add button to select your files.

There's a tab related to each Dino Explorer product.
These tabs configure individual loads for each product, which means you only need to deal with the CPU views tab.
Click on CPU views to check the pre-configured views.

After that, click the Start button and the loading process will begin.
The last tab is where messages are displayed during execution.
At the end of the process you will get the message Load executed successfully.


CPU Explorer

There is an interface for each Dino Explorer product.

Here, we are going to fetch data with CPU Explorer, since we have loaded information from SMF record types 30 and 70.

The CPU Explorer is an analytic tool that allows users to track and analyze the usage patterns and trends of mainframe resources in an effective and straightforward way.
Its main function is to submit queries to the Dino database about CPU utilization and about all jobs that are running or have already been executed on the mainframe.

There are several relevant tasks that users can perform with this powerful tool:
Start the CPU Explorer by clicking:
Start -> All programs -> Dino Explorer 2014 -> CPU Explorer, as shown below.

The CPU Explorer main window is shown below.
