https://bitbucket.org/NetaRS/sched_analytics
08 February 2023, 11:25:08 UTC
SWHID permalinks for the archived objects:
  content:   swh:1:cnt:a6b8e23f6f6a136d6f120b4dd1ddd4e55021ad2f
  directory: swh:1:dir:ba2af62f8b1e8f483cb493908b711f9de4dbf488
  revision:  swh:1:rev:ed1f2acca39de9eb5f34a6cb5b0c8db1492f74f2
  snapshot:  swh:1:snp:36f6bbe0f26fc27286535954004e9fae1c8c82d7

Tip revision: ed1f2acca39de9eb5f34a6cb5b0c8db1492f74f2 authored by NetaRS on 12 December 2020, 09:53:39 UTC
bounded traffic distributed
make_graphs.py
from os import path
from shutil import copyfile
import json
import matplotlib as mpl
mpl.use('Agg')
import argparse
import csv
from collections import defaultdict
import matplotlib.pyplot as plt
import default_params
from matplotlib.font_manager import FontProperties
import random
from matplotlib import rcParams
from compute_tp import SeriesStats

font_size = 16 # 9

rcParams['axes.labelsize'] = font_size
rcParams['xtick.labelsize'] = font_size
rcParams['ytick.labelsize'] = font_size
rcParams['legend.fontsize'] = font_size
rcParams['font.family'] = 'serif'
rcParams['font.serif'] = ['Computer Modern Roman']
#rcParams['text.usetex'] = True
rcParams['figure.figsize'] = 7.3, 5 #4.3
rcParams['lines.linewidth'] = 3
"""


Graph per epoch delay (0,1,2) (agg interval = compute interval)
Compute MWM epoch (X=1-100) vs. optical throughput ratio (Y)
Centralized only (threshold =0)
Distributed (threshold 0.1-1) (window 1)


Graph per epoch delay (0,1,2) (agg interval = compute interval)
Compute MWM epoch (X=1-100) vs. optical throughput ratio  (Y)
Centralized only (threshold =0)
Distributed (threshold 0.1-1) (window 2)


Graph per epoch delay (0,1,2) (agg interval < compute interval)
Compute MWM epoch (X=1-100) vs. optical throughput ratio (Y)
Centralized only (threshold =0)
Distributed (threshold 0.1-1) (window 1)

"""


def get_dist_args(mode):
    args = {}
    arr = mode.split("_")
    for v in arr:
        if v.startswith("win"):
            args["win"] = int(v[3:])
        elif v.startswith("t"):
            args["t"] = float(v[1:])
        elif v.startswith("i"):
            args["i"] = int(v[1:])
        elif v.startswith("wd"):
            args["wd"] = int(v[2:])
    return args
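

# Example of the parser above on a hypothetical mode string (the exact
# mode-string layout in the CSV is an assumption; tokens are "_"-separated
# key/value pairs, and the leading "dist" token matches no branch):
#   get_dist_args("dist_win2_t0.5_i3_wd1")
#   -> {"win": 2, "t": 0.5, "i": 3, "wd": 1}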


cache = {u'#feffb3': (0.99607843137254903, 1.0, 0.70196078431372544),
         u'#bcbcbc': (0.73725490196078436, 0.73725490196078436, 0.73725490196078436),
         u'#ffed6f': (1.0, 0.92941176470588238, 0.43529411764705883),
         u'#467821': (0.27450980392156865, 0.47058823529411764, 0.12941176470588237),
         u'#555555': (0.33333333333333331, 0.33333333333333331, 0.33333333333333331),
         u'0.60': (0.59999999999999998, 0.59999999999999998, 0.59999999999999998),
         u'#F0E442': (0.94117647058823528, 0.89411764705882357, 0.25882352941176473),
         u'#81b1d2': (0.50588235294117645, 0.69411764705882351, 0.82352941176470584),
         u'#56B4E9': (0.33725490196078434, 0.70588235294117652, 0.9137254901960784),
         u'#E24A33': (0.88627450980392153, 0.29019607843137257, 0.20000000000000001),
         u'#0072B2': (0.0, 0.44705882352941179, 0.69803921568627447),
         u'#f0f0f0': (0.94117647058823528, 0.94117647058823528, 0.94117647058823528),
         u'0.40': (0.40000000000000002, 0.40000000000000002, 0.40000000000000002), u'blue': (0.0, 0.0, 1.0),
         u'#fc4f30': (0.9882352941176471, 0.30980392156862746, 0.18823529411764706), u'0.00': (0.0, 0.0, 0.0),
         u'#bfbbd9': (0.74901960784313726, 0.73333333333333328, 0.85098039215686272),
         u'#ccebc4': (0.80000000000000004, 0.92156862745098034, 0.7686274509803922),
         u'#eeeeee': (0.93333333333333335, 0.93333333333333335, 0.93333333333333335),
         u'#A60628': (0.65098039215686276, 0.023529411764705882, 0.15686274509803921),
         u'#988ED5': (0.59607843137254901, 0.55686274509803924, 0.83529411764705885), u'black': (0.0, 0.0, 0.0),
         u'#777777': (0.46666666666666667, 0.46666666666666667, 0.46666666666666667),
         u'#fdb462': (0.99215686274509807, 0.70588235294117652, 0.3843137254901961),
         u'#FFB5B8': (1.0, 0.70980392156862748, 0.72156862745098038),
         u'#30a2da': (0.18823529411764706, 0.63529411764705879, 0.85490196078431369),
         u'#EEEEEE': (0.93333333333333335, 0.93333333333333335, 0.93333333333333335),
         u'#7A68A6': (0.47843137254901963, 0.40784313725490196, 0.65098039215686276),
         u'#8b8b8b': (0.54509803921568623, 0.54509803921568623, 0.54509803921568623),
         u'#e5ae38': (0.89803921568627454, 0.68235294117647061, 0.2196078431372549),
         u'#8dd3c7': (0.55294117647058827, 0.82745098039215681, 0.7803921568627451),
         u'#348ABD': (0.20392156862745098, 0.54117647058823526, 0.74117647058823533),
         u'#FBC15E': (0.98431372549019602, 0.75686274509803919, 0.36862745098039218),
         u'#bc82bd': (0.73725490196078436, 0.50980392156862742, 0.74117647058823533),
         u'#E5E5E5': (0.89803921568627454, 0.89803921568627454, 0.89803921568627454),
         u'0.70': (0.69999999999999996, 0.69999999999999996, 0.69999999999999996),
         u'#009E73': (0.0, 0.61960784313725492, 0.45098039215686275),
         u'#CC79A7': (0.80000000000000004, 0.47450980392156861, 0.65490196078431373), u'0.75': (0.75, 0.75, 0.75),
         u'0.50': (0.5, 0.5, 0.5), u'gray': (0.50196078431372548, 0.50196078431372548, 0.50196078431372548),
         u'c': (0.0, 0.75, 0.75), u'b': (0.0, 0.0, 1.0), u'g': (0.0, 0.5, 0.0),
         u'#cbcbcb': (0.79607843137254897, 0.79607843137254897, 0.79607843137254897), u'k': (0.0, 0.0, 0.0),
         u'#D55E00': (0.83529411764705885, 0.36862745098039218, 0.0), u'm': (0.75, 0, 0.75),
         u'#8EBA42': (0.55686274509803924, 0.72941176470588232, 0.25882352941176473), u'r': (1.0, 0.0, 0.0),
         u'#6d904f': (0.42745098039215684, 0.56470588235294117, 0.30980392156862746), u'w': (1.0, 1.0, 1.0),
         u'y': (0.75, 0.75, 0), u'#fa8174': (0.98039215686274506, 0.50588235294117645, 0.45490196078431372),
         u'#b3de69': (0.70196078431372544, 0.87058823529411766, 0.41176470588235292)}

#color_values = list(set(cache.values())-{(1.0, 1.0, 1.0)})
#colors = defaultdict(lambda: (random.random(), random.random(), random.random()))
color_values = ["grey", "black", "brown", "pink", "red", "orange", "yellow", "gold", "olive", "green", "teal", "cyan", "steelblue", "blue", "magenta", "purple"]
color_values2 = ["black", "red", "orange", "gold", "olive"]#, "steelblue", "blue", "magenta"]
labels = 0
labels2 = 0
label_colors = {}
label_colors2 = {}


def get_color(label):
    global labels
    if label in label_colors:
        return label_colors[label]
    color = color_values[labels]
    label_colors[label] = color
    labels += 1
    return color


def get_color2(label):
    # Assign each label a stable color from color_values2 on first use.
    global labels2
    if label in label_colors2:
        return label_colors2[label]
    color = color_values2[labels2]
    label_colors2[label] = color
    labels2 += 1
    return color


def plot_epoch_based_graph(graphs, agg_epoch_delay, x,
                           dist_only_res, optimal_res, opt_online_res,
                           out_file_name, ylabel, title, conf=None):
    graph_dict = dict(x=x)
    for label in sorted(graphs[agg_epoch_delay]):
        print("label", label)
        y = [graphs[agg_epoch_delay][label][str(compute_epoch)].get_avg() for compute_epoch in x]
        yerr = [graphs[agg_epoch_delay][label][str(compute_epoch)].get_var()**0.5 for compute_epoch in x]
        plt.errorbar(x, y, yerr=yerr, label=label, color=get_color(label))
        graph_dict[label] = y
    
    y = [dist_only_res.get_avg()] * len(x)
    yerr = [dist_only_res.get_var()**0.5] * len(x)
    label = "dist_only"
    plt.errorbar(x, y, yerr=yerr, label=label, color=get_color(label))
    graph_dict[label] = y
    
    y = [optimal_res.get_avg()] * len(x)
    yerr = [optimal_res.get_var()**0.5] * len(x)
    label = "optimal"
    plt.errorbar(x, y, yerr=yerr, label=label, color=get_color(label), linestyle=":")
    graph_dict[label] = y
    
    if opt_online_res is not None:
        y = [opt_online_res.get_avg()] * len(x)
        yerr = [opt_online_res.get_var()**0.5] * len(x)
        label = "opt_online"
        plt.errorbar(x, y, yerr=yerr, label=label, color=get_color(label), linestyle=":")
        graph_dict[label] = y

    graph_dict["_graph_params"] = dict(agg_epoch_delay=agg_epoch_delay, ylabel=ylabel, title=title, conf=conf)
    json.dump(graph_dict, open(out_file_name + "_graph.json", "w"), indent=True, sort_keys=True)
    plt.title(title)
    lg = plt.legend(bbox_to_anchor=(1.05, 1.0), loc='upper left')  # (bbox_to_anchor=(1.05, 1), loc='upper left')
    # plt.tight_layout()
    plt.xlabel('Centralized Epoch (ms)')
    plt.ylabel(ylabel)
    plt.ylim([0, 1.0])
    plt.savefig(out_file_name + ".pdf", bbox_extra_artists=(lg,), bbox_inches='tight')
    plt.savefig(out_file_name + ".png", bbox_extra_artists=(lg,), bbox_inches='tight')
    plt.show()
    plt.close()


def plot_degree_based_graph(x, dist_only_results, optimal_results, opt_online_results,
                            centralized_results, best_chopin_results,
                            out_file_name, ylabel, title, conf=None):
    graph_dict = dict(x=x)
    
    y = [best_chopin_results[d].get_avg() for d in x]
    yerr = [best_chopin_results[d].get_var()**0.5 for d in x]
    label = "chopin"
    plt.errorbar(x, y, yerr=yerr, label=label, color=get_color2(label))
    graph_dict[label] = y
    
    y = [centralized_results[d].get_avg() for d in x]
    yerr = [centralized_results[d].get_var()**0.5 for d in x]
    label = "centralized"
    plt.errorbar(x, y, yerr=yerr, label=label, color=get_color2(label))
    graph_dict[label] = y

    y = [dist_only_results[d].get_avg() for d in x]
    yerr = [dist_only_results[d].get_var()**0.5 for d in x]
    label = "dist_only"
    plt.errorbar(x, y, yerr=yerr, label=label, color=get_color2(label))
    graph_dict[label] = y

    y = [optimal_results[d].get_avg() for d in x]
    yerr = [optimal_results[d].get_var()**0.5 for d in x]
    label = "optimal"
    plt.errorbar(x, y, yerr=yerr, label=label, color=get_color2(label), linestyle=":")
    graph_dict[label] = y
    
    if opt_online_results is not None:
        y = [opt_online_results[d].get_avg() for d in x]
        yerr = [opt_online_results[d].get_var()**0.5 for d in x]
        label = "opt_online"
        plt.errorbar(x, y, yerr=yerr, label=label, color=get_color2(label), linestyle=":")
        graph_dict[label] = y
    
    graph_dict["_graph_params"] = dict(ylabel=ylabel, title=title, conf=conf)
    json.dump(graph_dict, open(out_file_name + "_graph.json", "w"), indent=True, sort_keys=True)
    plt.title(title)
    lg = plt.legend(bbox_to_anchor=(1.05, 1.0), loc='upper left')  # (bbox_to_anchor=(1.05, 1), loc='upper left')
    # plt.tight_layout()
    plt.xlabel('Degree')
    plt.ylabel(ylabel)
    plt.ylim([0, 1.0])
    plt.xticks(x)
    plt.savefig(out_file_name + ".pdf", bbox_extra_artists=(lg,), bbox_inches='tight')
    plt.savefig(out_file_name + ".png", bbox_extra_artists=(lg,), bbox_inches='tight')
    plt.show()
    plt.close()
    
    
def make_degrees_graph(csv_path, degrees, iters, win, compute_epoch, agg_epoch_delay=2, win_delay=1, epoch_vs_agg_dif=0, use_max_peer_cent_weights=False,
                n_milis=5000, n_tors=80, n_hosts_per_tor=10, n_host_peers=10, top=None, output_dir=".",
                wanted_compute_epoch=3, graphs_dir="results", conf=None, **kwargs):

    wanted_dict = {}
    wanted_dict["win"] = win
    wanted_dict["iterations"] = iters
    wanted_dict["win_delay"] = win_delay
    wanted_dict["agg_epoch_delay"] = agg_epoch_delay
    skipped = set()
    
    def compute_epoch_vs_agg_interval_dif(compute_epoch, agg_interval):
        compute_epoch_index = default_params.COMPUTE_EPOCHES.index(int(compute_epoch))
        agg_interval_index = default_params.COMPUTE_EPOCHES.index(int(agg_interval))
        return compute_epoch_index - agg_interval_index
    
    def add_dist_args(row):
        if not row["mode"].startswith("dist"):
            return
        dist_args = get_dist_args(row["mode"])  # includes window size and threshold
        row["win"] = dist_args["win"]
        row["iterations"] = dist_args["i"]
        row["win_delay"] = dist_args["wd"]
        row["threshold"] = dist_args["t"]
        
    def verify_centralized_params(row):
        if int(row["compute_epoch"]) != compute_epoch:
            skipped.add(("compute_epoch", row["compute_epoch"]))
            return False
        if int(row["agg_epoch_delay"]) != wanted_dict["agg_epoch_delay"]:
            skipped.add(("agg_epoch_delay", row["agg_epoch_delay"]))
            return False
        if compute_epoch_vs_agg_interval_dif(row["compute_epoch"], row["agg_interval"]) != epoch_vs_agg_dif and \
                int(row["compute_epoch"]) != 1:
            skipped.add((row["compute_epoch"], row["agg_interval"]))
            return False
        return True
    
    def verify_distributed_params(row):
        if row["win"] != wanted_dict["win"]:
            skipped.add(("win", row["win"]))
            return False
        
        if row["win_delay"] != wanted_dict["win_delay"]:
            skipped.add(("win_delay", row["win_delay"]))
            return False
        
        if row["iterations"] != wanted_dict["iterations"]:
            skipped.add(("iterations", row["iterations"]))
            return False
        return True
    
    def is_distributed(row):
        return row["mode"].startswith("dist")
    
    def is_dist_only(row):
        return is_distributed(row) and row["threshold"] == -1
        
        
    out_file_names = []
    dist_only_results = defaultdict(SeriesStats)
    optimal_results = defaultdict(SeriesStats)
    opt_online_results = defaultdict(SeriesStats)
    centralized_results = defaultdict(SeriesStats)
    best_chopin_results = defaultdict(SeriesStats)
    best_chopin_thresholds = defaultdict(SeriesStats)
    totals = SeriesStats()
    flow_avgs = SeriesStats()
    cur_run_id = None
    cur_best_chopin_res = {}
    cur_best_chopin_threshold = {}
    with open(csv_path) as csvfile:
        reader = csv.DictReader(csvfile, restkey="more_tps")
        for row in reader:
            add_dist_args(row)
            if degrees is not None and int(row["max_degree"]) not in degrees:
                skipped.add(("max_degree", row["max_degree"]))
                continue
            total = float(row["total_load"])
            max_degree = int(row["max_degree"])
            if row["mode"].startswith("offline") and int(row["compute_epoch"]) == 1 and int(row["agg_interval"]) == 1:
                optimal_results[max_degree].add(int(row["total_tp"]) * 1.0 / total)
                totals.add(total)
                flow_avgs.add(float(row["flow_avg"]))
                #optimal_res_c = float(row["change_avg"]) / (n_tors * max_degree)
                continue
            if row["mode"].startswith("online") and int(row["compute_epoch"]) == 1 and int(
                    row["agg_interval"]) == 1 and int(row["agg_epoch_delay"]) == 1:
                opt_online_results[max_degree].add(int(row["total_tp"]) * 1.0 / total)
                #opt_online_res_c = float(row["change_avg"]) * 1.0 / (n_tors * max_degree)
                continue
            
            if is_distributed(row):
                row["type"] = "dist"
                if not verify_distributed_params(row):
                    continue
                if is_dist_only(row):
                    dist_only_results[max_degree].add(int(row["total_tp"]) * 1.0 / total)
                    #dist_only_res_c = float(row["change_avg"]) * 1.0 / (n_tors * max_degree)
                    continue

                if not verify_centralized_params(row):
                    continue
                #label = row["type"] + "_" + str(row["threshold"])
                res = int(row["total_tp"]) * 1.0 / total
                run_id = row["run_id"]
                if run_id != cur_run_id and cur_run_id is not None:
                    # flush the per-run best results when a new run starts
                    for deg in cur_best_chopin_res:
                        best_chopin_results[deg].add(cur_best_chopin_res[deg])
                        best_chopin_thresholds[deg].add(cur_best_chopin_threshold[deg])
                    cur_best_chopin_res = {}
                    cur_best_chopin_threshold = {}
                cur_run_id = run_id
                if max_degree not in cur_best_chopin_res or res > cur_best_chopin_res[max_degree]:
                    cur_best_chopin_res[max_degree] = res
                    cur_best_chopin_threshold[max_degree] = float(row["threshold"])
            else:
                if not verify_centralized_params(row):
                    continue
                centralized_results[max_degree].add(int(row["total_tp"]) * 1.0 / total)
                #row["type"] = "centralized"
                #label = row["type"]

    for deg in cur_best_chopin_res:
        best_chopin_results[deg].add(cur_best_chopin_res[deg])
        best_chopin_thresholds[deg].add(cur_best_chopin_threshold[deg])
        
    # print("skipped", sorted(skipped))
    print("average flow_avgs", flow_avgs.get_avg(), "std", flow_avgs.get_var() ** 0.5)
    avg_flow_avgs = flow_avgs.get_avg()
    avg_total = totals.get_avg()
    avg_rate = 1.0 * avg_total / n_milis / n_tors / n_hosts_per_tor / 1000
    name = "tors=%d" % n_tors + \
           " peers=%d" % n_host_peers + \
           " mwm-delay=%sT" % str(agg_epoch_delay) + \
           " win-size=%sms" % str(wanted_dict["win"]) + \
           " win-delay=%dms" % (int(wanted_dict["win_delay"]) + 1) + \
           "\niters=" + str(wanted_dict["iterations"]) + \
           " avg-rate~%dMbps" % avg_rate + \
           " avg_flow~%dKB" % (avg_flow_avgs/8/1000) + \
           " " + kwargs.get("flow_dist_name", "HULL")
    if top:
        name += " mwm-top" + str(top)
    if epoch_vs_agg_dif:
        name += " {agg-interval<compute epoch}"
    if use_max_peer_cent_weights:
        name += " {umpcw}"
    title = name.replace(" ", ", ")
    out_file_name = path.join(graphs_dir, name.replace("ms", "").replace("=", "").replace("\n", " "))
    plot_degree_based_graph(degrees,
                            dist_only_results=dist_only_results, optimal_results=optimal_results,
                            opt_online_results=opt_online_results,
                            centralized_results=centralized_results, best_chopin_results=best_chopin_results,
                            out_file_name=out_file_name + "_degs",
                            ylabel='Optical Throughput Ratio', title=title, conf=conf)
    out_file_names.append(out_file_name)
    thres_stats_res = {k: {"avg": best_chopin_thresholds[k].get_avg(), "var": best_chopin_thresholds[k].get_var()} for k in best_chopin_thresholds}
    json.dump(thres_stats_res,open(out_file_name+"_thresh_stats.json", "w"), indent=True, sort_keys=True)
    #copyfile(csv_path, out_file_name + "_res.csv")
    return out_file_names

    
def make_graphs(csv_path, iters, win, agg_epoch_delay=2, win_delay=1, epoch_vs_agg_dif=0, use_max_peer_cent_weights=False,
                n_milis=5000, n_tors=80, n_hosts_per_tor=10, n_host_peers=10, top=None, output_dir=".", max_degree=1,
                wanted_compute_epoch=3, graphs_dir="results", conf=None, **kwargs):
    graphs = defaultdict(lambda: defaultdict(lambda: defaultdict(SeriesStats)))
    graphs_c = defaultdict(lambda: defaultdict(lambda: defaultdict(SeriesStats)))
    wanted_dict = {}
    wanted_dict["win"] = win
    wanted_dict["iterations"] = iters
    wanted_dict["win_delay"] = win_delay
    wanted_dict["agg_epoch_delay"] = agg_epoch_delay
    out_file_names = []
    
    def compute_epoch_vs_agg_interval_dif(compute_epoch, agg_interval):
        compute_epoch_index = default_params.COMPUTE_EPOCHES.index(int(compute_epoch))
        agg_interval_index = default_params.COMPUTE_EPOCHES.index(int(agg_interval))
        return compute_epoch_index - agg_interval_index
    skipped = set()
    dist_only_res = SeriesStats()
    optimal_res = SeriesStats()
    opt_online_res = SeriesStats()
    dist_only_res_c = SeriesStats() # change
    optimal_res_c = SeriesStats()
    opt_online_res_c = SeriesStats()
    totals = SeriesStats()
    flow_avgs = SeriesStats()
    with open(csv_path) as csvfile:
        reader = csv.DictReader(csvfile, restkey="more_tps")
        for row in reader:
            if int(row["max_degree"]) != max_degree:
                skipped.add(("max_degree", row["max_degree"]))
                continue
            total = float(row["total_load"])
            if row["mode"].startswith("offline") and int(row["compute_epoch"]) == 1 and int(row["agg_interval"]) == 1:
                optimal_res.add(int(row["total_tp"]) * 1.0 / total)
                optimal_res_c.add(float(row["change_avg"]) / (n_tors * max_degree))
                totals.add(total)
                flow_avgs.add(float(row["flow_avg"]))
                continue
            if row["mode"].startswith("online") and int(row["compute_epoch"]) == 1 and int(row["agg_interval"]) == 1 and int(row["agg_epoch_delay"]) == 1:
                opt_online_res.add(int(row["total_tp"]) * 1.0 / total)
                opt_online_res_c.add(float(row["change_avg"]) * 1.0 / (n_tors * max_degree))
                continue
            if compute_epoch_vs_agg_interval_dif(row["compute_epoch"], row["agg_interval"]) != epoch_vs_agg_dif and \
                    int(row["compute_epoch"]) != 1:
                skipped.add((row["compute_epoch"], row["agg_interval"]))
                continue
            if row["mode"].startswith("dist"):
                row["type"] = "dist"
                dist_args = get_dist_args(row["mode"])  # includes window size and threshold
                row["win"] = dist_args["win"]
                if row["win"] != wanted_dict["win"]:
                    skipped.add(("win", row["win"]))
                    continue
                row["win_delay"] = dist_args["wd"]
                if row["win_delay"] != wanted_dict["win_delay"]:
                    skipped.add(("win_delay", row["win_delay"]))
                    continue
                row["iterations"] = dist_args["i"]
                if row["iterations"] != wanted_dict["iterations"]:
                    skipped.add(("iterations", row["iterations"]))
                    continue
                row["threshold"] = dist_args["t"]
                if row["threshold"] == -1:
                    dist_only_res.add(int(row["total_tp"]) * 1.0 / total)
                    dist_only_res_c.add(float(row["change_avg"]) * 1.0 / (n_tors * max_degree))
                    continue
                label = row["type"] + "_" + str(row["threshold"])
            else:
                row["type"] = "centralized"
                label = row["type"]
            if int(row["agg_epoch_delay"]) != wanted_dict["agg_epoch_delay"]:
                skipped.add(("agg_epoch_delay", row["agg_epoch_delay"]))
                continue
            graphs[row["agg_epoch_delay"]][label][row["compute_epoch"]].add(int(row["total_tp"]) * 1.0 / total)
            graphs_c[row["agg_epoch_delay"]][label][row["compute_epoch"]].add(float(row["change_avg"]) * 1.0 / (n_tors * max_degree))

    # print("skipped", sorted(skipped))
    print("average flow_avgs", flow_avgs.get_avg(), "std", flow_avgs.get_var() ** 0.5)
    avg_flow_avgs = flow_avgs.get_avg()
    x = default_params.COMPUTE_EPOCHES
    for agg_epoch_delay in graphs:
        #print "agg_epoch_delay", agg_epoch_delay
        avg_total = totals.get_avg()
        avg_rate = 1.0 * avg_total / n_milis / n_tors / n_hosts_per_tor / 1000
        name = "tors=%d" % n_tors + \
               " peers=%d" % n_host_peers + \
               " mwm-delay=%sT" % str(agg_epoch_delay) + \
               " win-size=%sms" % str(wanted_dict["win"]) + \
               " win-delay=%dms" % (int(wanted_dict["win_delay"]) + 1) + \
               "\niters=" + str(wanted_dict["iterations"]) + \
               " max-deg=%d" % (max_degree) + \
               " avg-rate~%dMbps" % avg_rate + \
               " avg_flow~%dKB" % (avg_flow_avgs/8/1000) + \
               " " + kwargs.get("flow_dist_name", "HULL")
        if top:
            name += " mwm-top" + str(top)
        if epoch_vs_agg_dif:
            name += " {agg-interval<compute epoch}"
        if use_max_peer_cent_weights:
            name += " {umpcw}"
        title = name.replace(" ", ", ")
        out_file_name = path.join(graphs_dir, name.replace("ms", "").replace("=", "").replace("\n", " "))
        plot_epoch_based_graph(graphs=graphs, agg_epoch_delay=agg_epoch_delay, x=x,
                               dist_only_res=dist_only_res, optimal_res=optimal_res, opt_online_res=opt_online_res,
                               out_file_name=out_file_name,
                               ylabel='Optical Throughput Ratio', title=title, conf=conf)
        plot_epoch_based_graph(graphs=graphs_c, agg_epoch_delay=agg_epoch_delay, x=x,
                               dist_only_res=dist_only_res_c, optimal_res=optimal_res_c, opt_online_res=opt_online_res_c,
                               out_file_name=out_file_name+"_reconf",
                               ylabel='Reconfiguration Ratio', title=title, conf=conf)
        plot_based_thresholds(name, graphs_dir, graphs, dist_only_res, agg_epoch_delay, wanted_compute_epoch)
        out_file_names.append(out_file_name)
        copyfile(csv_path, out_file_name + "_res.csv")
    return out_file_names


def plot_based_thresholds(name, graphs_dir, graphs, dist_only_res, agg_epoch_delay=2, wanted_compute_epoch=3):
    X = []
    Y = []
    for label in sorted(graphs[str(agg_epoch_delay)]):
        if not label.startswith("dist"):
            continue
        t = label.strip().split("_")[1]
        X.append(float(t))
        Y.append(graphs[str(agg_epoch_delay)][label][str(wanted_compute_epoch)].get_avg() / dist_only_res.get_avg() * 100 - 100)
    print(X)
    print(Y)
    plt.plot(X, Y, color="blue", label="distributed-diff")

    X = []
    Y = []
    centralized_res = graphs[str(agg_epoch_delay)]["centralized"][str(wanted_compute_epoch)].get_avg()
    for label in sorted(graphs[str(agg_epoch_delay)]):
        if not label.startswith("dist"):
            continue
        t = label.strip().split("_")[1]
        X.append(float(t))
        Y.append(graphs[str(agg_epoch_delay)][label][str(wanted_compute_epoch)].get_avg() / centralized_res * 100 - 100)
    print(X)
    print(Y)
    plt.plot(X, Y, color="green", label="centralized-diff")
    
    plt.title(name)
    plt.xlabel('Threshold')
    plt.ylabel('Improvement (%)')
    plt.ylim([0, 20])
    plt.legend()
    plt.savefig(path.join(graphs_dir, name.replace("ms","").replace("=","").replace("\n"," ") + "_thresh_imp.pdf"))
    plt.show()
    plt.close()


def make_graphs_ex(conf, graphs_dir="results", csv_path=None, load_path=None):
    default_params.update_globals(**conf)
    if csv_path is None:
        csv_path = path.join(conf["output_dir"], "res_" + str(conf["n_milis"]) + ".csv")
    if load_path is None:
        load_path = path.join(conf["output_dir"], "total_load_" + str(conf["n_milis"]) + ".json")
    with open(load_path) as load_file:
        total_load = json.load(load_file)["total_load"]

    out_file_names = []
    if "max_degree" in conf:
        print("max_degree=", conf["max_degree"])
        out_file_names += make_graphs(csv_path=csv_path, iters=1, win=1, epoch_vs_agg_dif=0, total=total_load,
                                      graphs_dir=graphs_dir, conf=conf, **conf)
    else:
        print("degrees=", conf["degrees"])
        for max_degree in conf["degrees"]:
            conf["max_degree"] = max_degree
            out_file_names += make_graphs(csv_path=csv_path, iters=1, win=1, epoch_vs_agg_dif=0, total=total_load,
                                          graphs_dir=graphs_dir, conf=conf, **conf)
            #print max_degree, "out_file_names=", out_file_names

        out_file_names += make_degrees_graph(csv_path=csv_path, compute_epoch=3, iters=1, win=1, epoch_vs_agg_dif=0,
                                             total=total_load, graphs_dir=graphs_dir, conf=conf, **conf)
    #print "out_file_names=", out_file_names
    for out_file_name in out_file_names[-1:]:
        #print "out_file_name=", out_file_name
        json.dump(conf, open(out_file_name+"_conf.json", "w"), indent=True, sort_keys=True)
        #copyfile(args.conf.name, out_file_name+"_conf.json")
        copyfile(load_path, out_file_name+"_load.json")
    #make_graphs(iters=1, win=2, epoch_vs_agg_dif=0, total=total_load, **conf)
    #make_graphs(iterations=3, win=1, total=total_load, **conf)
    #make_graphs(iterations=3, win=1, total=total_load, **conf)
    #make_graphs(iterations=3, win=1, total=total_load, **conf)


def main():
    parser = argparse.ArgumentParser(description="Make graphs")
    parser.add_argument('--conf', default="conf.json", type=open,
                        help='configuration file (default: conf.json)')
    parser.add_argument('--csv_path', default=None,
                        help='results CSV file (default: <output_dir>/res_<n_milis>.csv)')
    parser.add_argument('--load_path', default=None,
                        help='total load JSON file (default: <output_dir>/total_load_<n_milis>.json)')
    args = parser.parse_args()
    conf = json.load(args.conf)
    
    make_graphs_ex(conf, load_path=args.load_path, csv_path=args.csv_path)


if __name__ == "__main__":
    main()

Software Heritage — Copyright (C) 2015–2025, The Software Heritage developers. License: GNU AGPLv3+.