{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n\n# Forecasting\n\nIn this example, we present a full forecasting workflow for the Sioux Falls example model.\n\nWe start by creating the skim matrices, running the base-year traffic assignment, and then\ndistributing the trips. Later, we estimate a set of future demand vectors, which are the input\nof a future-year assignment with select link analysis.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        ".. seealso::\n    Several functions, methods, classes and modules are used in this example:\n\n    * :func:`aequilibrae.paths.Graph`\n    * :func:`aequilibrae.paths.TrafficClass`\n    * :func:`aequilibrae.paths.TrafficAssignment`\n    * :func:`aequilibrae.distribution.Ipf`\n    * :func:`aequilibrae.distribution.GravityCalibration`\n    * :func:`aequilibrae.distribution.GravityApplication`\n    * :func:`aequilibrae.distribution.SyntheticGravityModel`\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Imports\nfrom uuid import uuid4\nfrom os.path import join\nfrom tempfile import gettempdir\n\nimport pandas as pd\n\nfrom aequilibrae.utils.create_example import create_example"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# We create the example project inside our temp folder\nfldr = join(gettempdir(), uuid4().hex)\n\nproject = create_example(fldr)\nlogger = project.logger"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Traffic assignment with skimming\nIn this step, we'll set the skims for the variable ``free_flow_time``, and execute the\ntraffic assignment for the base-year.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from aequilibrae.paths import TrafficAssignment, TrafficClass"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# We build all graphs\nproject.network.build_graphs()\n# We get warnings that several fields in the project are filled with NaNs. \n# This is true, but we won't use those fields.\n\n# We grab the graph for cars\ngraph = project.network.graphs[\"c\"]\n\n# Let's say we want to minimize the free_flow_time\ngraph.set_graph(\"free_flow_time\")\n\n# And will skim time and distance while we are at it\ngraph.set_skimming([\"free_flow_time\", \"distance\"])\n\n# And we will allow paths to be computed going through other centroids/centroid connectors\n# required for the Sioux Falls network, as all nodes are centroids\ngraph.set_blocked_centroid_flows(False)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's get the demand matrix directly from the project record, and inspect what matrices we have in the project.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "proj_matrices = project.matrices\nproj_matrices.list()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We get the demand matrix, and prepare it for computation\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "demand = proj_matrices.get_matrix(\"demand_omx\")\ndemand.computational_view([\"matrix\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's perform the traffic assignment\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Create the assignment class\nassigclass = TrafficClass(name=\"car\", graph=graph, matrix=demand)\n\nassig = TrafficAssignment()\n\n# We start by adding the list of traffic classes to be assigned\nassig.add_class(assigclass)\n\n# Then we set the volume delay function, which can only be configured after adding at least one class to the assignment\nassig.set_vdf(\"BPR\")  # This is not case-sensitive\n\n# And its parameters\nassig.set_vdf_parameters({\"alpha\": \"b\", \"beta\": \"power\"})\n\n# The capacity and free flow travel times as they exist in the graph\nassig.set_capacity_field(\"capacity\")\nassig.set_time_field(\"free_flow_time\")\n\n# And the algorithm we want to use to assign\nassig.set_algorithm(\"bfw\")\n\n# Since we haven't checked the parameters file, let's make sure the convergence criteria are good\nassig.max_iter = 1000\nassig.rgap_target = 0.001\n\n# We then execute the assignment\nassig.execute()"
      ]
    },
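    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For reference, the BPR volume-delay function we just configured computes congested travel time as\n``time = free_flow_time * (1 + alpha * (volume / capacity) ** beta)``.\nA minimal sketch of the formula (illustration only, not AequilibraE's internal implementation):\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "def bpr(free_flow_time, volume, capacity, alpha=0.15, beta=4.0):\n    # Congested time grows with the volume/capacity ratio\n    return free_flow_time * (1 + alpha * (volume / capacity) ** beta)\n\n# At capacity, the classic alpha=0.15, beta=4 gives a time 15% above free-flow\nbpr(10.0, volume=100, capacity=100)"
      ]
    },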
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "After finishing the assignment, we can easily see the convergence report.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "convergence_report = assig.report()\nconvergence_report.head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "And we can also see the results of the assignment\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "results = assig.results()\nresults.head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can export our results to CSV or get a Pandas DataFrame, but let's put it directly into the results database\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "assig.save_results(\"base_year_assignment\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "And save the skims\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "assig.save_skims(\"base_year_assignment_skims\", which_ones=\"all\", format=\"omx\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Trip distribution\nFirst, let's have a function to plot the Trip Length Frequency Distribution.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from math import log10, floor\n\nimport matplotlib.pyplot as plt\nimport numpy as np"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "def plot_tlfd(demand, skim, name):\n    plt.clf()\n    b = floor(log10(skim.shape[0]) * 10)\n    n, bins, patches = plt.hist(\n        np.nan_to_num(skim.flatten(), nan=0),\n        bins=b,\n        weights=np.nan_to_num(demand.flatten()),\n        density=False,\n        facecolor=\"g\",\n        alpha=0.75,\n    )\n\n    plt.xlabel(\"Trip length\")\n    plt.ylabel(\"Frequency\")\n    plt.title(f\"Trip-length frequency distribution for {name}\")\n    return plt"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Calibration\nWe will calibrate synthetic gravity models using the skims for ``free_flow_time`` that we just generated\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import numpy as np\nfrom aequilibrae.distribution import GravityCalibration"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We need the demand matrix and to prepare it for computation\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "demand = proj_matrices.get_matrix(\"demand_aem\")\ndemand.computational_view([\"matrix\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We also need the skims we just saved into our project\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "imped = proj_matrices.get_matrix(\"base_year_assignment_skims_car\")\n\n# We can check which matrix cores were created for our skims to decide which one to use\nimped.names"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Here, ``free_flow_time_final`` is actually the congested time for the last iteration.\n\nBut before using the data, let's get some impedance for the intrazonals.\nLet's assume it is 75% of the impedance to the closest zone.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "imped_core = \"free_flow_time_final\"\nimped.computational_view([imped_core])\n\n# If we run the code below more than once, we will overwrite the diagonal values with nonsensical data,\n# so let's zero it first\nnp.fill_diagonal(imped.matrix_view, 0)\n\n# We compute it with a little bit of NumPy magic\nintrazonals = np.amin(imped.matrix_view, where=imped.matrix_view > 0, initial=imped.matrix_view.max(), axis=1)\nintrazonals *= 0.75\n\n# Then we fill in the impedance matrix\nnp.fill_diagonal(imped.matrix_view, intrazonals)"
      ]
    },
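    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The ``where``/``initial`` combination above finds, for each row, the smallest strictly positive\nentry, i.e. the impedance to the closest zone. A tiny illustration on a made-up matrix:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "toy = np.array([[0.0, 2.0, 5.0], [3.0, 0.0, 1.0], [4.0, 6.0, 0.0]])\n\n# Row-wise minimum, ignoring the zero diagonal: [2.0, 1.0, 4.0]\nnp.amin(toy, where=toy > 0, initial=toy.max(), axis=1)"
      ]
    },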
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since we are working with an OMX file, we cannot overwrite a matrix on disk,\nso let's save it under a new name.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "imped.save(names=[\"final_time_with_intrazonals\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This also sets the newly-saved matrix core as the one being used for computation\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "imped.view_names"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's calibrate our Gravity Model\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "for function in [\"power\", \"expo\"]:\n    gc = GravityCalibration(matrix=demand, impedance=imped, function=function, nan_as_zero=True)\n    gc.calibrate()\n    model = gc.model\n    # We save the model\n    model.save(join(fldr, f\"{function}_model.mod\"))\n\n    _ = plot_tlfd(gc.result_matrix.matrix_view, imped.matrix_view, f\"{function} model\")\n\n    # We can save the result of applying the model as well\n    # We can also save the calibration report\n    with open(join(fldr, f\"{function}_convergence.log\"), \"w\") as otp:\n        for r in gc.report:\n            otp.write(r + \"\\n\")"
      ]
    },
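    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The two deterrence functions we just calibrated have simple closed forms: power, ``f(t) = t ** -beta``,\nand exponential, ``f(t) = exp(-beta * t)``. A quick sketch with a hypothetical ``beta`` (in practice,\nthe calibrated value lives in the model object returned by ``gc.model``):\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "beta = 0.1  # hypothetical value, for illustration only\ntimes = np.array([5.0, 10.0, 20.0])\n\n# Both deterrence functions decay as impedance grows\npower_deterrence = times ** -beta\nexpo_deterrence = np.exp(-beta * times)\npower_deterrence, expo_deterrence"
      ]
    },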
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "And let's plot a trip length frequency distribution for the demand itself\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "plt = plot_tlfd(demand.matrix_view, imped.matrix_view, \"demand\")\nplt.show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Forecast\nWe create a set of 'future' vectors using some random growth factors.\nWe apply the model for inverse power, as its trip length frequency distribution\n(TLFD) seems to be a better fit for the actual one.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from aequilibrae.distribution import Ipf, GravityApplication, SyntheticGravityModel"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Compute future vectors\n\nFirst thing to do is to compute the future vectors from our matrix.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "origins = np.sum(demand.matrix_view, axis=1)\ndestinations = np.sum(demand.matrix_view, axis=0)\n\n# Then grow them with some random growth between 0 and 10%, and balance them\norig = origins * (1 + np.random.rand(origins.shape[0]) / 10)\ndest = destinations * (1 + np.random.rand(destinations.shape[0]) / 10)\ndest *= orig.sum() / dest.sum()\n\nvectors = pd.DataFrame({\"origins\": orig, \"destinations\": dest}, index=demand.index[:])"
      ]
    },
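    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since we scaled ``dest`` to the total of ``orig``, productions and attractions now have the same total:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# The origin and destination totals must match for trip distribution to be meaningful\nvectors.origins.sum(), vectors.destinations.sum()"
      ]
    },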
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### IPF for the future vectors\nLet's balance the future vectors. The output of this step is going to be used later\nin the traffic assignment for future year.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "args = {\n    \"matrix\": demand,\n    \"vectors\": vectors,\n    \"column_field\": \"destinations\",\n    \"row_field\": \"origins\",\n    \"nan_as_zero\": True,\n}\n\nipf = Ipf(**args)\nipf.fit()"
      ]
    },
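    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For intuition, IPF alternately scales the rows and columns of the seed matrix until both sets of\nmarginals are matched. A minimal pure-NumPy sketch (not AequilibraE's implementation):\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "def ipf_sketch(seed, row_targets, col_targets, tol=1e-8, max_iter=100):\n    m = seed.astype(float).copy()\n    for _ in range(max_iter):\n        m *= (row_targets / m.sum(axis=1))[:, None]  # scale rows to their targets\n        m *= col_targets / m.sum(axis=0)  # scale columns to their targets\n        if np.abs(m.sum(axis=1) - row_targets).max() < tol:\n            break\n    return m\n\nipf_sketch(np.ones((2, 2)), np.array([10.0, 20.0]), np.array([12.0, 18.0]))"
      ]
    },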
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "When saving our vectors into the project, we'll get a message confirming that they were recorded\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "ipf.save_to_project(name=\"demand_ipfd\", file_name=\"demand_ipfd.aem\")\nipf.save_to_project(name=\"demand_ipfd_omx\", file_name=\"demand_ipfd.omx\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Impedance\n\nLet's get the base-year assignment skim for car we created before and prepare it for computation\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "imped = proj_matrices.get_matrix(\"base_year_assignment_skims_car\")\nimped.computational_view([\"final_time_with_intrazonals\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If we wanted the main diagonal to not be considered...\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# np.fill_diagonal(imped.matrix_view, np.nan)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now we apply the Synthetic Gravity model\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "for function in [\"power\", \"expo\"]:\n    model = SyntheticGravityModel()\n    model.load(join(fldr, f\"{function}_model.mod\"))\n\n    outmatrix = join(proj_matrices.fldr, f\"demand_{function}_model.aem\")\n    args = {\n        \"impedance\": imped,\n        \"vectors\": vectors,\n        \"row_field\": \"origins\",\n        \"model\": model,\n        \"column_field\": \"destinations\",\n        \"nan_as_zero\": True,\n    }\n\n    gravity = GravityApplication(**args)\n    gravity.apply()\n\n    # We get the output matrix and save it to OMX too,\n    gravity.save_to_project(name=f\"demand_{function}_modeled\", file_name=f\"demand_{function}_modeled.omx\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We update the matrices table/records and verify that the new matrices are indeed there\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "proj_matrices.update_database()\nproj_matrices.list()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Traffic assignment with Select Link Analysis\nWe'll perform traffic assignment for the future year.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "logger.info(\"\\n\\n\\n TRAFFIC ASSIGNMENT FOR FUTURE YEAR WITH SELECT LINK ANALYSIS\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's get our future demand matrix, which corresponds to the IPF result we just saved,\nand check which core we ended up with. It should be ``matrix``.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "demand = proj_matrices.get_matrix(\"demand_ipfd\")\ndemand.names"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's prepare our data for computation\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "demand.computational_view(\"matrix\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The future year assignment is quite similar to the one we did for the base-year.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# So, let's create the assignment class\nassigclass = TrafficClass(name=\"car\", graph=graph, matrix=demand)\n\nassig = TrafficAssignment()\n\n# Add the list of traffic classes to be assigned\nassig.add_class(assigclass)\n\n# Set the volume delay function and its parameters\nassig.set_vdf(\"BPR\")\nassig.set_vdf_parameters({\"alpha\": \"b\", \"beta\": \"power\"})\n\n# Set the capacity and free flow travel times as they exist in the graph\nassig.set_capacity_field(\"capacity\")\nassig.set_time_field(\"free_flow_time\")\n\n# And the algorithm we want to use to assign\nassig.set_algorithm(\"bfw\")\n\n# Once again we haven't checked the parameters file, so let's make sure the convergence criteria are good\nassig.max_iter = 500\nassig.rgap_target = 0.00001"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now we select two sets of links to execute select link analysis.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "select_links = {\n    \"Leaving node 1\": [(1, 1), (2, 1)],\n    \"Random nodes\": [(3, 1), (5, 1)],\n}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "<div class=\"alert alert-info\"><h4>Note</h4><p>As we are executing the select link analysis on a particular ``TrafficClass``, we should set the\n   links we want to analyze. The input is a dictionary with strings as keys and lists of tuples as\n   values, so that each entry represents a separate set of selected links to compute.\n\n   ``select_link_dict = {\"set_name\": [(link_id1, direction1), ..., (link_id, direction)]}``\n\n   The string names the set of links, and the list of tuples contains the selected links\n   in the form ``(link_id, direction)``, as they occur in the `Graph <aequilibrae-graphs>`.\n\n   Direction can be one of 0, 1, or -1, where 0 denotes bi-directionality.</p></div>\n\n"
      ]
    },
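    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sketch, a small hypothetical helper to sanity-check a select link dictionary before handing it\nto ``set_select_links``:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "def check_select_links(select_link_dict):\n    # Hypothetical helper: validates the structure expected by set_select_links\n    for name, links in select_link_dict.items():\n        assert isinstance(name, str), \"set names must be strings\"\n        for link_id, direction in links:\n            assert direction in (-1, 0, 1), f\"bad direction for link {link_id}\"\n\ncheck_select_links(select_links)"
      ]
    },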
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# We call this command on the class we are analyzing with our dictionary of values\nassigclass.set_select_links(select_links)\n\n# we then execute the assignment\nassig.execute()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To save our select link results, all we need to do is provide it with a name.\nIn addition to exporting the select link flows, it also exports the Select Link matrices in OMX format.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "assig.save_select_link_results(\"select_link_analysis\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "<div class=\"alert alert-info\"><h4>Note</h4><p>If we just want to save our select link flows, we can call ``assig.save_select_link_flows(\"just_flows\")``.\n\n   Or if we just want the select link matrices: ``assig.save_select_link_matrices(\"just_matrices\")``\n\n   Internally, ``save_select_link_results`` calls both of these methods at once.</p></div>\n\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We can export the results to CSV or AequilibraE Data, but let's put it directly into the results database\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "assig.save_results(\"future_year_assignment\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "And save the skims\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "assig.save_skims(\"future_year_assignment_skims\", which_ones=\"all\", format=\"omx\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Run convergence study\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "df = assig.report()\nx = df.iteration.values\ny = df.rgap.values\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nplt.plot(x, y, \"k--\")\nplt.yscale(\"log\")\nplt.grid(True, which=\"both\")\nplt.xlabel(\"Iterations\")\nplt.ylabel(\"Relative Gap\")\nplt.show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Close the project\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "project.close()"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.18"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}