hatchet.readers package

Submodules

hatchet.readers.caliper_native_reader module

class hatchet.readers.caliper_native_reader.CaliperNativeReader(filename_or_caliperreader, native, string_attributes)[source]

Bases: object

Read in a native .cali file using Caliper’s Python reader.

create_graph(ctx='path')[source]
read()[source]

Read the Caliper records to extract the calling context tree.

read_metrics(ctx='path')[source]
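
A minimal usage sketch. The file name and the values passed for native and string_attributes are placeholders, and read() is assumed to return a hatchet.GraphFrame (as HPCToolkitReader.read() is documented to do below):

from hatchet.readers.caliper_native_reader import CaliperNativeReader

# "profile.cali" is a placeholder for a native Caliper file; native=True and
# the empty string_attributes list are assumed example values.
reader = CaliperNativeReader("profile.cali", True, [])
gf = reader.read()  # assumed to return a hatchet.GraphFrame
print(gf.dataframe.head())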

hatchet.readers.caliper_reader module

class hatchet.readers.caliper_reader.CaliperReader(filename_or_stream, query='')[source]

Bases: object

Read in a Caliper file (cali or split JSON) or file-like object.

create_graph()[source]
read()[source]

Read the Caliper JSON file to extract the calling context tree.

read_json_sections()[source]
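
A minimal usage sketch; "profile.json" stands in for a Caliper split-JSON file, and read() is assumed to return a hatchet.GraphFrame:

from hatchet.readers.caliper_reader import CaliperReader

# The file name is a placeholder; query defaults to the empty string.
reader = CaliperReader("profile.json")
gf = reader.read()  # assumed to return a hatchet.GraphFrame
print(gf.tree())    # render the calling context tree as text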

hatchet.readers.cprofile_reader module

class hatchet.readers.cprofile_reader.CProfileReader(filename)[source]

Bases: object

create_graph()[source]

Create the node graph from the cProfile data.

read()[source]
class hatchet.readers.cprofile_reader.NameData[source]

Bases: object

Faux enum for Python: field indices into cProfile’s (file, line, function name) key tuples.

FILE = 0
FNCNAME = 2
LINE = 1
class hatchet.readers.cprofile_reader.StatData[source]

Bases: object

Faux enum for Python: field indices into cProfile’s per-function statistics tuples.

EXCTIME = 2
INCTIME = 3
NATIVECALLS = 1
NUMCALLS = 0
SRCNODE = 4
hatchet.readers.cprofile_reader.print_incomptable_msg(stats_file)[source]

Helper that keeps the syntax cleaner in Profiler.write_to_file().
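
A minimal sketch of producing a pstats file with the standard-library cProfile module and reading it back; the profiled statement and output file name are arbitrary, and read() is assumed to return a hatchet.GraphFrame:

import cProfile

from hatchet.readers.cprofile_reader import CProfileReader

# Dump profiling data for an arbitrary statement to a pstats file,
# then read it back into a GraphFrame.
cProfile.run("sum(x * x for x in range(10000))", "output.pstats")
gf = CProfileReader("output.pstats").read()  # assumed to return a GraphFrame
print(gf.dataframe.head())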

hatchet.readers.dataframe_reader module

class hatchet.readers.dataframe_reader.DataframeReader(filename)[source]

Bases: ABC

Abstract base class for reading in checkpoint files.

read(**kwargs)[source]
exception hatchet.readers.dataframe_reader.InvalidDataFrameIndex[source]

Bases: Exception

Raised when the DataFrame index is of an invalid type.

hatchet.readers.gprof_dot_reader module

class hatchet.readers.gprof_dot_reader.GprofDotReader(filename)[source]

Bases: object

Read in gprof/callgrind output in dot format generated by gprof2dot.

create_graph()[source]

Read the DOT files to create a graph.

read()[source]

Read the DOT file generated by gprof2dot to create a GraphFrame. The DOT file contains a call graph.
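
A minimal usage sketch; "callgraph.dot" is a placeholder for output produced by gprof2dot, and read() is assumed to return a hatchet.GraphFrame:

from hatchet.readers.gprof_dot_reader import GprofDotReader

# The file name is a placeholder for a gprof2dot-generated DOT call graph.
gf = GprofDotReader("callgraph.dot").read()
print(gf.tree())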

hatchet.readers.hdf5_reader module

class hatchet.readers.hdf5_reader.HDF5Reader(filename)[source]

Bases: DataframeReader
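
A minimal usage sketch; "checkpoint.h5" is a placeholder for an HDF5 checkpoint file previously written by Hatchet, and read() is assumed to return a hatchet.GraphFrame:

from hatchet.readers.hdf5_reader import HDF5Reader

# The file name is a placeholder; read() accepts optional keyword arguments
# (see DataframeReader.read above).
gf = HDF5Reader("checkpoint.h5").read()
print(gf.dataframe.head())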

hatchet.readers.hpctoolkit_reader module

class hatchet.readers.hpctoolkit_reader.HPCToolkitReader(dir_name)[source]

Bases: object

Read in the various sections of an HPCToolkit experiment.xml file and metric-db files.

count_cpu_threads_per_rank()[source]
create_node_dict(nid, hnode, name, node_type, src_file, line, module)[source]

Create a dict with all the node attributes.

fill_tables()[source]

Read certain sections of the experiment.xml file to create dicts of load modules, src_files, procedure_names, and metric_names.

parse_xml_children(xml_node, hnode)[source]

Parses all children of an XML node.

parse_xml_node(xml_node, parent_nid, parent_line, hparent)[source]

Parses an XML node and its children recursively.

read()[source]

Read the experiment.xml file to extract the calling context tree and create a dataframe out of it. Then merge it with the dataframe built from the metric-db files to create the final dataframe.

Returns:

new GraphFrame with HPCToolkit data.

Return type:

(GraphFrame)

read_all_metricdb_files()[source]

Read all the metric-db files and create a dataframe with num_nodes X num_metricdb_files rows and num_metrics columns. Three additional columns store the node id, MPI process rank, and thread id (if applicable).
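
A minimal usage sketch; the directory name is a placeholder for an HPCToolkit database directory containing experiment.xml and the metric-db files:

from hatchet.readers.hpctoolkit_reader import HPCToolkitReader

# The directory name is a placeholder for an hpcprof/hpcprof-mpi output
# database; read() returns a GraphFrame as documented above.
gf = HPCToolkitReader("hpctoolkit-database").read()
print(gf.tree())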

hatchet.readers.hpctoolkit_reader.init_shared_array(buf_)[source]

Initialize shared array.

hatchet.readers.hpctoolkit_reader.read_metricdb_file(args)[source]

Read a single metricdb file into a 1D array.

hatchet.readers.json_reader module

class hatchet.readers.json_reader.JsonReader(json_spec)[source]

Bases: object

Create a GraphFrame from a JSON string specification.

Returns:

GraphFrame containing the data from the JSON specification

Return type:

(GraphFrame)

read()[source]
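
A minimal usage sketch; "profile.json" is a placeholder for a file holding Hatchet’s JSON graph specification, and read() is assumed to return a hatchet.GraphFrame:

from hatchet.readers.json_reader import JsonReader

# The file name is a placeholder; the reader takes the JSON text itself
# (not a file name), so read the file first.
with open("profile.json") as f:
    gf = JsonReader(f.read()).read()
print(gf.dataframe.head())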

hatchet.readers.literal_reader module

class hatchet.readers.literal_reader.LiteralReader(graph_dict)[source]

Bases: object

Create a GraphFrame from a list of dictionaries.

TODO: calculate inclusive metrics automatically.

Example:

dag_ldict = [
    {
        "frame": {"name": "A", "type": "function"},
        "metrics": {"time (inc)": 30.0, "time": 0.0},
        "children": [
            {
                "frame": {"name": "B",  "type": "function"},
                "metrics": {"time (inc)": 11.0, "time": 5.0},
                "children": [
                    {
                        "frame": {"name": "C", "type": "function"},
                        "metrics": {"time (inc)": 6.0, "time": 5.0},
                        "children": [
                            {
                                "frame": {"name": "D", "type": "function"},
                                "metrics": {"time (inc)": 1.0, "time": 1.0},
                            }
                        ],
                    }
                ],
            },
            {
                "frame": {"name": "E", "type": "function"},
                "metrics": {"time (inc)": 19.0, "time": 10.0},
                "children": [
                    {
                        "frame": {"name": "H", "type": "function"},
                        "metrics": {"time (inc)": 9.0, "time": 9.0}
                    }
                ],
            },
        ],
    }
]
Returns:

GraphFrame containing the data from the dictionaries

Return type:

(GraphFrame)

parse_node_literal(frame_to_node_dict, node_dicts, child_dict, hparent, seen_nids)[source]

Create node_dict for one node and then call the function recursively on all children.

read()[source]
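
A minimal usage sketch with a two-node version of the dictionary format shown above; the node names and metric values are arbitrary, and read() is assumed to return a hatchet.GraphFrame:

from hatchet.readers.literal_reader import LiteralReader

# A two-node calling context tree in the literal dictionary format.
graph_dict = [
    {
        "frame": {"name": "A", "type": "function"},
        "metrics": {"time (inc)": 10.0, "time": 4.0},
        "children": [
            {
                "frame": {"name": "B", "type": "function"},
                "metrics": {"time (inc)": 6.0, "time": 6.0},
            }
        ],
    }
]
gf = LiteralReader(graph_dict).read()
print(gf.tree())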

hatchet.readers.pyinstrument_reader module

class hatchet.readers.pyinstrument_reader.PyinstrumentReader(filename)[source]

Bases: object

create_graph()[source]
read()[source]
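
A minimal usage sketch; "pyinstrument.json" is a placeholder for Pyinstrument’s JSON output, and read() is assumed to return a hatchet.GraphFrame:

from hatchet.readers.pyinstrument_reader import PyinstrumentReader

# The file name is a placeholder for a profile exported with Pyinstrument's
# JSON renderer.
gf = PyinstrumentReader("pyinstrument.json").read()
print(gf.tree())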

hatchet.readers.spotdb_reader module

class hatchet.readers.spotdb_reader.SpotDBReader(db_key, list_of_ids=None, default_metric='Total time (inc)')[source]

Bases: object

Import multiple runs as GraphFrames from a SpotDB instance.

read()[source]

Read the given runs from SpotDB.

Returns:

List of GraphFrames, one for each entry that was found.

class hatchet.readers.spotdb_reader.SpotDatasetReader(regionprofile, metadata, attr_info)[source]

Bases: object

Read a single-run dataset from SpotDB.

create_graph()[source]

Create the graph and fill in df_data and metric_columns.

read(default_metric='Total time (inc)')[source]

Create GraphFrame for the given Spot dataset.
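
A minimal usage sketch for SpotDBReader; the database key/path is a placeholder, and leaving list_of_ids at None is assumed to read every run:

from hatchet.readers.spotdb_reader import SpotDBReader

# The database key/path is a placeholder; default_metric keeps its
# documented default of "Total time (inc)".
reader = SpotDBReader("spot-database.sqlite")
graphframes = reader.read()  # list of GraphFrames, one per run found
for gf in graphframes:
    print(gf.dataframe.head())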

hatchet.readers.tau_reader module

class hatchet.readers.tau_reader.TAUReader(dirname)[source]

Bases: object

Read in a profile generated using TAU.

create_graph()[source]
create_node_dict(node, columns, metric_values, name, filename, module, start_line, end_line, rank, thread)[source]
read()[source]

Read the TAU profile file to extract the calling context tree.
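
A minimal usage sketch; the directory name is a placeholder for a directory of TAU profile files, and read() is assumed to return a hatchet.GraphFrame:

from hatchet.readers.tau_reader import TAUReader

# The directory name is a placeholder for a directory containing TAU
# profile.* files.
gf = TAUReader("tau-profile-dir").read()
print(gf.tree())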

hatchet.readers.timemory_reader module

class hatchet.readers.timemory_reader.TimemoryReader(input, select=None, **_kwargs)[source]

Bases: object

Read in timemory JSON output.

create_graph()[source]

Create the graph and dataframe.

read()[source]

Read timemory JSON.
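
A minimal usage sketch; "timemory.json" is a placeholder for timemory’s JSON output, passing a file name for input is an assumption, and read() is assumed to return a hatchet.GraphFrame:

from hatchet.readers.timemory_reader import TimemoryReader

# The file name is a placeholder for timemory JSON output; the optional
# select argument (see signature above) is left at its default.
gf = TimemoryReader("timemory.json").read()
print(gf.dataframe.head())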

Module contents